# The dyadic and the continuous Hilbert transforms with values in Banach
spaces
Komla Domelevo and Stefanie Petermichl (partially supported by ERC project
CHRiSHarMa no. DLV-682402 and the Alexander von Humboldt foundation)
###### Abstract
We show that if the Hilbert transform with values in a Banach space is $L^{p}$
bounded, then so is the so-called dyadic Hilbert transform, a new dyadic shift
operator, with a linear relation between the bounds.
## 1 Introduction
It is a known fact that the Hilbert transform with values in a Banach space is
bounded if and only if the Banach space has the UMD property. The
relationship between the norms, however, remains unclear. If the $L^{p}$ norm of
the Hilbert transform is denoted $h_{p}$ and the UMD constant is denoted $m_{p}$,
then it is known that
$m^{1/2}_{p}\lesssim h_{p}\lesssim m^{2}_{p}.$
The lower estimate on the left hand side is due to Bourgain [1] and the
estimate on the right hand side is due to Burkholder [2]. It is an open
question whether these relations can be improved, ideally if they are linear.
Using the classical Haar shift of Petermichl [6], it was shown by
Petermichl–Pott [7] that if $s^{\operatorname{cl}}_{p}$ is the $L^{p}$ bound
for the classical Haar shift, then $s^{\operatorname{cl}}_{p}\leqslant
m^{2}_{p}$. Their short argument gives an alternative to Burkholder’s
direction. Through averaging, which is the main idea of the classical dyadic
shift, it is clear that $h_{p}\lesssim s^{\operatorname{cl}}_{p}$. While the
classical shift has proven a useful model for the Hilbert transform for some
applications, it lacks a number of defining properties and similarities. In
this paper we consider what we call the dyadic Hilbert transform
$\mathcal{S}_{0}:h_{I_{\pm}}\mapsto\pm h_{I_{\mp}}.$
While this operator averages to a zero multiple of the Hilbert transform via
the ideas in [6], and is therefore not applicable in the same way as the
classical shift, it is in other ways incomparably closer to the Hilbert
transform. Its square is the negative identity, it is antisymmetric and –
apparently importantly for this subject – it has no even component.
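These structural properties can be checked directly on a finite-depth truncation. The sketch below is an illustrative numerical check (not part of the argument): it realizes $\mathcal{S}_{0}$ on Haar coefficients as one rotation block per sibling pair $(h_{I_{-}},h_{I_{+}})$, with a zero row and column for the killed $h_{I_{0}}$, and verifies antisymmetry and that the square is minus the identity off the killed direction.

```python
import numpy as np

# Haar coefficient indices: 0 is h_{I_0}; thereafter sibling pairs
# (h_{I_-}, h_{I_+}) for each dyadic I in the unit interval down to a
# finite depth.  S_0 maps h_{I_+} -> h_{I_-} and h_{I_-} -> -h_{I_+},
# i.e. each sibling pair carries the rotation block [[0, 1], [-1, 0]].
depth = 4
npairs = 2 ** depth - 1          # one pair per interval I whose children we keep
dim = 1 + 2 * npairs             # h_{I_0} plus all sibling pairs

S = np.zeros((dim, dim))
for p in range(npairs):
    i_minus, i_plus = 1 + 2 * p, 2 + 2 * p
    S[i_plus, i_minus] = -1.0    # h_{I_-} -> -h_{I_+}
    S[i_minus, i_plus] = 1.0     # h_{I_+} ->  h_{I_-}

P = np.eye(dim)
P[0, 0] = 0.0                    # projection off the killed h_{I_0}

assert np.allclose(S.T, -S)      # antisymmetric
assert np.allclose(S @ S, -P)    # square is minus the identity (off h_{I_0})
```

In the truncation, every Haar function except $h_{I_{0}}$ belongs to exactly one sibling pair, which is why the two asserted identities hold.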
If $\mathcal{S}_{0}$ has the $L^{p}$ bound $s_{p}$, it is our aim to show in
this paper that Bourgain’s direction holds with a linear relation for this
pair of operators:
$s_{p}\lesssim h_{p}.$
This lower bound for the norm of the Hilbert transform is surprising and new –
prior dyadic shift arguments have all been based on convex hull arguments and
averaging, and as such they are a priori useful for upper estimates only. This
lower bound requires a deeper understanding and a better model. Indeed, the
reader will see, when reading the proof, that this new shift operator mimics
the Cauchy–Riemann equations ‘in the probability space’. We add new elements
to an argument by Bourgain [1] using high oscillations. In his work, he needed
to apply the Hilbert transform twice to control the martingale transforms. His
beautiful argument does not seem as if it can be directly improved to change
the resulting quadratic relation. Our relationship of the Hilbert transform to
this new Haar shift rather than the martingale transform is shown to be much
more direct and we therefore manage to only use the Hilbert transform once.
It is also highly interesting that even operators pose fewer problems. Indeed,
Geiss–Montgomery-Smith–Saksman [4] proved remarkable linear two sided
estimates between $m_{p}$ and the norm of the difference of squares of Riesz
transforms in the plane, i.e. the even operator $R_{1}^{2}-R_{2}^{2}$. Part of
their argument is also based on a use of high oscillations. They also gave an
upper estimate for even convolution type singular integrals. We wish to
highlight a strong result on linear upper estimates on even shift operators
and hence more general even Calderón–Zygmund operators by Pott–Stoica [9].
## 2 Main result
Let us consider the dyadic system $\mathcal{D}$ and its $L^{2}$ normalized
Haar functions $\{h_{I}:I\in\mathcal{D}\}$ on the unit interval $I_{0}$.
Analysts write the orthonormal basis expansion of a function $f$ as follows:
$f(x)=\langle f\rangle_{I_{0}}+\sum_{I\in\mathcal{D}}(f,h_{I})h_{I}(x)=\langle
f\rangle_{I_{0}}+\frac{1}{2}\sum_{I\in\mathcal{D}}(\langle
f\rangle_{I_{+}}-\langle f\rangle_{I_{-}})(\chi_{I_{+}}-\chi_{I_{-}})(x).$
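The expansion above can be illustrated discretely. The following minimal sketch (an illustration with a grid of $2^{n}$ points and the normalization $\|h_{I}\|_{2}=1$, both our assumptions) reconstructs a test vector exactly from its mean and Haar coefficients.

```python
import numpy as np

# Discrete sketch of the Haar expansion: f = <f>_{I0} + sum_I (f, h_I) h_I
# with L^2-normalized Haar functions on a grid of 2^n points.
n = 6
m = 2 ** n
x = (np.arange(m) + 0.5) / m
f = np.sin(2 * np.pi * x) + x ** 2          # arbitrary test function

def haar(level, pos):
    # L^2-normalized Haar function of the dyadic interval of length 2^-level
    h = np.zeros(m)
    width = m // 2 ** level
    a = pos * width
    h[a:a + width // 2] = 1.0
    h[a + width // 2:a + width] = -1.0
    return h * 2.0 ** (level / 2)           # |I|^{-1/2} normalization

recon = np.full(m, f.mean())                # <f>_{I_0}
for level in range(n):
    for pos in range(2 ** level):
        h = haar(level, pos)
        recon += (f @ h / m) * h            # (f, h_I) with dx = 1/m
assert np.allclose(recon, f)                # exact reconstruction
```

The constant function together with the Haar functions down to level $n-1$ form an orthonormal basis of the $2^{n}$-point grid, so the reconstruction is exact.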
Any dyadic interval corresponds to a path of sign tosses, with a sequence of
$k$ sign tosses yielding an interval of length $|I_{k}|=2^{-k}$, so that
$(I_{1},\ldots,I_{k})$ corresponds to a sequence
$(\varepsilon_{1},\ldots,\varepsilon_{k})\in\{-,+\}^{k}$. Assuming $\langle
f\rangle_{I_{0}}=0$, we can relate the martingale difference sequence of $f$, using
the usual notation, for example from [1],
$\sum^{K}_{k=0}\Delta^{f}_{k}(\varepsilon_{1},\ldots,\varepsilon_{k})\varepsilon_{k+1}$
to the Haar series by writing
$\Delta^{f}_{k}(\varepsilon_{1},\ldots,\varepsilon_{k})=\frac{1}{2}(\langle
f\rangle_{I_{k+}}-\langle f\rangle_{I_{k-}}).$
If $\varepsilon_{k+1}=\pm$ then we end up in $I_{k\pm}$. In the above notation
we have the constant $\Delta^{f}_{0}=\frac{1}{2}(\langle
f\rangle_{I_{0+}}-\langle f\rangle_{I_{0-}})$. Here, $f$ has values in a
Banach space $X$ and so $(f,h_{I})\in X$.
The Banach space has the UMD property if there exists a constant $C_{p}$ so
that for any sign tosses $\alpha_{k}=\pm 1$ there holds
$\left\|\sum_{k}\alpha_{k+1}\Delta^{f}_{k}(\varepsilon_{1},\ldots,\varepsilon_{k})\varepsilon_{k+1}\right\|_{L_{X}^{p}}\leqslant
C_{p}\left\|\sum_{k}\Delta^{f}_{k}(\varepsilon_{1},\ldots,\varepsilon_{k})\varepsilon_{k+1}\right\|_{L_{X}^{p}}.$
The best such constant for $L^{p}$ is called the $\operatorname{UMD}$ constant
of $X$ and denoted here $m_{p}$. For basic properties of these spaces see the
books by Pisier [8] or Hytönen–van Neerven–Veraar–Weis [5].
In the Haar basis notation, with $T_{\alpha}:\langle
f\rangle_{I_{0}}\mapsto 0$, $h_{I}\mapsto\alpha_{I}h_{I}$, this becomes the requirement that
$\sup_{\alpha}\|T_{\alpha}\|_{L_{X}^{p}\rightarrow L_{X}^{p}}=m_{p}.$ Recall that
with $\|\mathcal{H}\|_{L_{X}^{p}\rightarrow L_{X}^{p}}=h_{p}$ it is known that
$m_{p}^{1/2}\lesssim h_{p}\lesssim m^{2}_{p}$. As mentioned before, linear
relations on either side are a famous open problem.
Instead of $T_{\alpha}$ we consider the shift operator
$\mathcal{S}_{0}:\langle f\rangle_{I_{0}}\mapsto 0$, $h_{I_{0}}\mapsto 0$ and
$\mathcal{S}_{0}:h_{I_{\pm}}\mapsto\pm h_{I_{\mp}}$ for $I\subset I_{0}$. It is our goal
to prove
###### Theorem 1
Let $X$ be a Banach space, let $\mathcal{S}_{0}$ be the shift operator acting
on $X$-valued functions on $I_{0}$, and let $\mathcal{H}$ be the Hilbert
transform acting on $X$-valued functions on $\mathbb{T}$.
There exists an absolute constant $C$ so that
$\|\mathcal{S}_{0}\|_{L_{X}^{p}\rightarrow L_{X}^{p}}\leqslant
C\|\mathcal{H}\|_{L_{X}^{p}\rightarrow L_{X}^{p}}.$
The remaining part of this text is dedicated to the proof of Theorem 1.
## 3 Proof
### Using sign tosses $\varepsilon$
Write again the identity for functions highlighting contributions from left
and right halves of intervals, in other words $\mathcal{D}^{-}$ and
$\mathcal{D}^{+}$:
$f=\langle
f\rangle_{I_{0}}+(f,h_{I_{0}})h_{I_{0}}+\sum_{k=0}^{\infty}\sum_{I:|I|=2^{-k}|I_{0}|}(f,h_{I_{-}})h_{I_{-}}+(f,h_{I_{+}})h_{I_{+}}.$
Let us now encode the sign tosses in a way that respects a shift operator of
complexity one (as opposed to complexity zero for $T_{\alpha}$). In the first
step, we just choose $\varepsilon_{0}=\pm$ as the first sign toss. Then,
depending on the outcome of $\varepsilon_{0}$, we use generators
$\varepsilon_{1}^{-}$ and $\varepsilon_{1}^{+}$. This means, if via the
previous tosses, we arrived in a $\mathcal{D}^{\pm}$ interval, then
$\varepsilon^{\pm}_{1}$ is the relevant toss. They are independent with
probability 1/2 each. So the above can be rewritten as
$\displaystyle\mathrm{d}f_{-2}+\mathrm{d}f_{-1}\varepsilon_{0}$
$\displaystyle+\sum^{\infty}_{k=0}\mathrm{d}f^{+}_{k}(\varepsilon_{0},\varepsilon^{-}_{1},\varepsilon^{+}_{1},\ldots,\varepsilon^{+}_{k})\varepsilon^{+}_{k+1}+\mathrm{d}f^{-}_{k}(\varepsilon_{0},\varepsilon^{-}_{1},\varepsilon^{+}_{1},\ldots,\varepsilon^{+}_{k})\varepsilon_{k+1}^{-}.$
We have $\mathrm{d}f_{-2}=\langle f\rangle_{I_{0}}$ and
$\mathrm{d}f_{-1}=(f,h_{I_{0}})|I_{0}|^{-1/2}$. The second and third summands
above involve $\mathrm{d}f_{0}^{\pm}$, where
$\mathrm{d}f_{0}^{-}(\varepsilon_{0})=\left\{\begin{array}{ll}(f,h_{I_{0-}})|I_{0-}|^{-1/2}&\text{if }\varepsilon_{0}=-\\ 0&\text{if }\varepsilon_{0}=+\end{array}\right.$
$\mathrm{d}f_{0}^{+}(\varepsilon_{0})=\left\{\begin{array}{ll}0&\text{if }\varepsilon_{0}=-\\ (f,h_{I_{0+}})|I_{0+}|^{-1/2}&\text{if }\varepsilon_{0}=+\end{array}\right.$
Then
$\mathrm{d}f^{\pm}_{1}(\varepsilon_{0},\varepsilon^{+}_{1},\varepsilon^{-}_{1})$
depends on $\varepsilon_{0}$ and depending on the outcome of $\varepsilon_{0}$
on either $\varepsilon^{+}_{1}$ or $\varepsilon^{-}_{1}$. Depending on the
outcome of the relevant $\varepsilon_{1}$ we determine if
$\mathrm{d}f_{1}^{+}$ or $\mathrm{d}f_{1}^{-}$ is active. This is certainly
not the most concise notation, but it more clearly resembles a sum over
$\mathcal{D}^{+}$ and $\mathcal{D}^{-}$.
### Using angles $\theta$
Let us change to a random generator to facilitate the use of the norm of the
Hilbert transform. Let $\varphi^{+}(\theta)=\operatorname{sign}\cos\theta$ and
$\varphi^{-}(\theta)=\operatorname{sign}\sin\theta$. Set, for
$\theta_{j}\in[-\pi,\pi]$ with $j\in\mathbb{N}_{0}$,
$\varepsilon_{0}=\varphi^{+}(\theta_{0})$ as the first sign toss. Then,
$\varepsilon_{j}^{-}=\varphi^{-}(\theta_{j})$ and
$\varepsilon_{j}^{+}=\varphi^{+}(\theta_{j})$ for $j>0$. In order to get a
more concise notation, we write $\theta=(\theta_{0},\ldots)$ and
$\vec{\theta}_{k}=(\theta_{0},\ldots,\theta_{k})$. Together we obtain
$F(\theta)=\mathrm{d}F_{-2}+\mathrm{d}F_{-1}\varphi^{+}(\theta_{0})+\sum^{\infty}_{k=0}\mathrm{d}F^{+}_{k}(\vec{\theta}_{k})\varphi^{+}(\theta_{k+1})+\mathrm{d}F^{-}_{k}(\vec{\theta}_{k})\varphi^{-}(\theta_{k+1})$
with $\mathrm{d}F_{k}=\mathrm{d}f_{k}$ for $k=-1,-2$ and for $k\geqslant 0$
$\mathrm{d}F_{k}^{\pm}(\vec{\theta}_{k})=\mathrm{d}f_{k}^{\pm}(\varphi^{+}(\theta_{0}),\varphi^{-}(\theta_{1}),\varphi^{+}(\theta_{1}),\ldots,\varphi^{-}(\theta_{k}),\varphi^{+}(\theta_{k})).$
### Fourier side, modulation and action of $\mathcal{H}$
Let us assume for the moment that all sums are finite, including all Fourier
series if we expand in the $\theta$. This will be accomplished later by a
standard limiting procedure. With $\sigma=\pm$, observe that the maps
$\theta\mapsto\mathrm{d}F^{\sigma}_{k}(\vec{\theta}_{k})\varphi^{\sigma}(\theta_{k+1})\text{
or
}\theta\mapsto\mathrm{d}F^{\sigma}_{k}(\vec{\theta}_{k})\mathcal{H}\varphi^{\sigma}(\theta_{k+1})$
each have a Fourier series with terms of the form
$e^{il_{0}\theta_{0}}\ldots e^{il_{k+1}\theta_{k+1}}x_{l_{0},\ldots,l_{k+1}},$
where $x_{l_{0},\ldots,l_{k+1}}\in X$ and where these terms are summed in the
$l_{j}\in\mathbb{Z}$. Recall that $\int\varphi^{\sigma}(\theta_{k+1})=0$ and
$\int\mathcal{H}\varphi^{\sigma}(\theta_{k+1})=0$ and thus
$x_{l_{0},\ldots,l_{k},0}=0$, so that we may assume in what follows
$l_{k+1}\neq 0$.
Using that all sums are finite, build a sequence $N=(N_{k})_{k\geqslant 0}$ so
that there holds for the spectra $|\gamma_{0}|+\ldots+|\gamma_{k}|\leqslant
N_{k}$, where $\gamma_{i}$ bounds the frequencies in $\theta_{i}$ (there are
only non-zero contributions if $|l_{i}|\leqslant|\gamma_{i}|$); in particular,
$|l_{0}|+\ldots+|l_{k}|\leqslant N_{k}$. Let us
define a sequence $n=(n_{k})_{k\geqslant 0}$ inductively by
$n_{0}=1,n_{k+1}=2n_{k}N_{k}.$
Write
$\vec{\theta}_{k}+\vec{n}_{k}\psi=(\theta_{0}+n_{0}\psi,\ldots,\theta_{k}+n_{k}\psi)$.
Then the Fourier series of
$\psi\mapsto\mathrm{d}F^{\sigma}_{k}(\vec{\theta}_{k}+\vec{n}_{k}\psi)\varphi^{\sigma}(\theta_{k+1}+n_{k+1}\psi)$
or
$\psi\mapsto\mathrm{d}F^{\sigma}_{k}(\vec{\theta}_{k}+\vec{n}_{k}\psi)\mathcal{H}\varphi^{\sigma}(\theta_{k+1}+n_{k+1}\psi)$
have summands of the form
$e^{il_{0}(\theta_{0}+n_{0}\psi)}\ldots
e^{il_{k+1}(\theta_{k+1}+n_{k+1}\psi)}x_{l_{0},\ldots,l_{k+1}}$
with $x_{l_{0},\ldots,l_{k},0}=0$. To take the Hilbert transform in the
variable $\psi$, we must understand the sign of
$l_{0}n_{0}+\ldots+l_{k}n_{k}+l_{k+1}n_{k+1}$. Notice that, provided
$l_{k+1}\neq 0$,
$|l_{0}n_{0}+\ldots+l_{k}n_{k}|\leqslant(|l_{0}|+\ldots+|l_{k}|)n_{k}\leqslant
N_{k}n_{k}<2N_{k}n_{k}=n_{k+1}\leqslant|l_{k+1}n_{k+1}|$
by the assumption on the spectrum and so the sign of $l_{k+1}$ will determine
the sign of the sum $l_{0}n_{0}+\ldots+l_{k}n_{k}+l_{k+1}n_{k+1}$. For this
reason $\mathcal{H}$ only sees the high frequencies of $\varphi$. Considering
just the signum of $l_{k+1}$ means we take the Hilbert transform of
$\varphi^{\sigma}$ directly:
$\mathcal{H}_{\psi}(\mathrm{d}F^{\sigma}_{k}(\vec{\theta}_{k}+\vec{n}_{k}\psi)\varphi^{\sigma}(\theta_{k+1}+n_{k+1}\psi))=\mathrm{d}F^{\sigma}_{k}(\vec{\theta}_{k}+\vec{n}_{k}\psi)\mathcal{H}(\varphi^{\sigma})(\theta_{k+1}+n_{k+1}\psi).$
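The frequency-separation mechanism behind this identity is easy to test numerically. The sketch below (with an arbitrary test sequence $N$ of our choosing) checks that with $n_{0}=1$ and $n_{k+1}=2n_{k}N_{k}$, the low-frequency part can never overpower the term $l_{k+1}n_{k+1}$, so a non-zero $l_{k+1}$ determines the sign of the whole frequency.

```python
import random

# Sanity check of the frequency separation: with n_0 = 1 and
# n_{k+1} = 2 n_k N_k, any l_0,...,l_k with |l_0| + ... + |l_k| <= N_k give
# |l_0 n_0 + ... + l_k n_k| <= N_k n_k < n_{k+1}, so a non-zero l_{k+1}
# dictates the sign of the full sum.  N is an arbitrary test choice.
random.seed(0)
N = [3, 5, 2, 7]
n = [1]
for N_k in N:
    n.append(2 * n[-1] * N_k)

for k in range(len(N)):
    for _ in range(5000):
        l = [random.randint(-N[k], N[k]) for _ in range(k + 1)]
        if sum(abs(x) for x in l) > N[k]:
            continue  # keep only admissible spectra
        low = sum(li * ni for li, ni in zip(l, n))
        assert abs(low) <= N[k] * n[k] < n[k + 1]
        for l_next in (-3, -1, 1, 3):
            total = low + l_next * n[k + 1]
            assert (total > 0) == (l_next > 0)  # sign set by l_{k+1}
```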
For ease of notation, we write $F^{\mathcal{H}}$ for $F$ with the Hilbert
transform applied in the last variable of each increment (such as in the
equation above) and for the modulated versions, depending upon the sequence
$N=(N_{k})_{k\geqslant 0}$, we write
$\Phi_{\theta,N}(\psi)=F(\vec{\theta}+\vec{n}\psi)\text{ and
}\Phi^{\mathcal{H}}_{\theta,N}(\psi)=F^{\mathcal{H}}(\vec{\theta}+\vec{n}\psi)$
and similarly for $G$ and $\Gamma$.
### Action of $\mathcal{S}_{0}$
$\mathcal{S}_{0}$, understood in the language of sign tosses, maps as follows:
$\mathcal{S}_{0}:\mathrm{d}f^{\pm}_{k}(\varepsilon_{0},\varepsilon^{+}_{1},\varepsilon^{-}_{1},\ldots,\varepsilon^{-}_{k})\varepsilon^{\pm}_{k+1}\mapsto\pm\mathrm{d}f^{\pm}_{k}(\varepsilon_{0},\varepsilon^{+}_{1},\varepsilon^{-}_{1},\ldots,\varepsilon^{-}_{k})\varepsilon_{k+1}^{\mp},$
and note that in the language involving angles we have
$\mathcal{S}_{0}\varphi^{-}=-\varphi^{+}$ and $\mathcal{S}_{0}\varphi^{+}=\varphi^{-}$,
that is
$\mathcal{S}_{0}\varphi^{\sigma}=\sigma\varphi^{\bar{\sigma}}$
for $\sigma=\pm$ and $\bar{\sigma}$ the opposing sign.
We will require the following crucial lemma.
###### Lemma 1
There holds
$\left|\mathbb{E}^{\theta}\langle\sum_{k=0}^{\infty}\mathrm{d}F^{\sigma}_{k}(\vec{\theta}_{k})\mathcal{H}\varphi^{\sigma}(\theta_{k+1}),\sum_{l=0}^{\infty}\mathrm{d}G^{\eta}_{l}(\vec{\theta}_{l})\varphi^{\eta}(\theta_{l+1})\rangle_{X,X^{\ast}}\right|\leqslant h_{p}\|F\|_{L_{X}^{p}}\|G\|_{L_{X^{\ast}}^{q}}.$
Proof. By a standard approximation argument, we may assume the above sums are
finite and all functions of $\theta$ have finite spectrum. We modulate as
described above and pull out the Hilbert transform as follows:
$|\mathbb{E}^{\theta}\langle F^{\mathcal{H}}(\theta),G(\theta)\rangle_{X,X^{\ast}}|=|\mathbb{E}^{\psi}\mathbb{E}^{\theta}\langle F^{\mathcal{H}}(\theta),G(\theta)\rangle_{X,X^{\ast}}|=|\mathbb{E}^{\psi}\mathbb{E}^{\theta}\langle\Phi_{\theta,N}^{\mathcal{H}}(\psi),\Gamma_{\theta,N}(\psi)\rangle_{X,X^{\ast}}|.$
We turn to the second equality above. It follows immediately if we observe
that
$|\mathbb{E}^{\theta}\langle\Phi^{\mathcal{H}}_{\theta,N}(\psi),\Gamma_{\theta,N}(\psi)\rangle_{X,X^{\ast}}|$
does not depend upon $N$ or $\psi$. Indeed, the terms that arise are of the
form
$\langle\mathrm{d}F_{k}^{\sigma}(\vec{\theta}_{k}+\vec{n}_{k}\psi)\mathcal{H}\varphi^{\sigma}(\theta_{k+1}+n_{k+1}\psi),\mathrm{d}G_{l}^{\eta}(\vec{\theta}_{l}+\vec{n}_{l}\psi)\varphi^{\eta}(\theta_{l+1}+n_{l+1}\psi)\rangle_{X,X^{\ast}}.$
A successive integration in the $\theta$ shows by periodicity of the involved
functions an independence from $\psi$ and $N$.
We now use the commutation identity for $\mathcal{H}_{\psi}$ above and continue to estimate
$|\mathbb{E}^{\psi}\mathbb{E}^{\theta}\langle\Phi_{\theta,N}^{\mathcal{H}}(\psi),\Gamma_{\theta,N}(\psi)\rangle_{X,X^{\ast}}|=|\mathbb{E}^{\psi}\mathbb{E}^{\theta}\langle\mathcal{H}_{\psi}\Phi_{\theta,N}(\psi),\Gamma_{\theta,N}(\psi)\rangle_{X,X^{\ast}}|\leqslant h_{p}\|F\|_{L_{X}^{p}}\|G\|_{L_{X^{\ast}}^{q}},$
where we again used the independence of the expectation from the modulations
in $\psi$. $\Box$
### Comparing $\mathcal{H}$ and $\mathcal{S}_{0}$ in a projected form
Let us also define the operator $\pi$ on functions $f$ defined on $\mathbb{T}$
by $\pi(f)(x)=\sum^{1}_{i=-2}\langle f\rangle_{A_{i}}\chi_{A_{i}}(x)$, where the arcs $A_{i}$
correspond to the angles $[i\pi/2,i\pi/2+\pi/2)$.
###### Lemma 2
There exists $c_{0}>0$ such that for both signatures $\sigma$ there holds
$\pi\mathcal{H}\varphi^{\sigma}=c_{0}\mathcal{S}_{0}\varphi^{\sigma}.$
Proof. The function $g(x)=\mathcal{H}\chi_{(-\pi/2,\pi/2)}(x)$ is odd with zeros
at $\pm\pi$ and $0$. There are singularities at $\pm\pi/2$. One can verify (for
example by inspecting the explicit expression of the singular integral) that
$\lim_{x\rightarrow\pi/2}\mathcal{H}\chi_{(-\pi/2,\pi/2)}(x)=+\infty$. By
translation invariance and antisymmetry of $\mathcal{H}$, we know that
$\mathcal{H}\varphi^{+}(x)=g(x)-g(x+\pi)$. By a similar argument, we get
$\mathcal{H}\varphi^{-}(x)=g(x-\pi/2)-g(x+\pi/2)=\mathcal{H}\varphi^{+}(x-\pi/2)$.
We also know that $\mathcal{H}\varphi^{+}(x)$ is odd and that
$\mathcal{H}\varphi^{-}(x)$ is even. From this, we deduce that
$\mathcal{H}\varphi^{+}|_{(0,\pi)\setminus\{\pi/2\}}$ is positive and
symmetric about the axis $x=\pi/2$ and that
$\mathcal{H}\varphi^{+}|_{(-\pi,0)\setminus\{-\pi/2\}}$ is negative and
symmetric about $x=-\pi/2$. Gathering the information, we obtain
$\pi\mathcal{H}\varphi^{+}=c_{0}\varphi^{-}=c_{0}\mathcal{S}_{0}\varphi^{+}\quad\text{and}\quad\pi\mathcal{H}\varphi^{-}=-c_{0}\varphi^{+}=c_{0}\mathcal{S}_{0}\varphi^{-}$
for some $c_{0}>0$. $\Box$
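For orientation, the constant $c_{0}$ can be evaluated numerically; the sketch below assumes the periodic Hilbert transform convention $\cos(nt)\mapsto\sin(nt)$ (the sign of $c_{0}$ depends on the convention; the lemma only uses $c_{0}>0$) and computes the quarter-average of $\mathcal{H}\varphi^{+}$ from the Fourier series of the square wave.

```python
import numpy as np

# Numerical value of c_0, assuming H: cos(n t) -> sin(n t).  From
#   phi^+ = sign cos = (4/pi) * sum_j (-1)^j cos((2j+1)t)/(2j+1),
# averaging H(phi^+) over the quarter (0, pi/2) term by term gives
#   c_0 = (8/pi^2) * sum_j (-1)^j/(2j+1)^2 = 8 G / pi^2,
# with G Catalan's constant.
j = np.arange(200000)
c0 = 8.0 / np.pi ** 2 * np.sum((-1.0) ** j / (2 * j + 1.0) ** 2)
print(c0)  # ~0.7425
```

The alternating series converges quadratically, so the truncation error is negligible at this depth.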
Notice the elementary fact for functions $f,g\in L^{2}$:
$(\pi f,g)_{L^{2}}=(\pi f,\pi g)_{L^{2}}=(f,\pi g)_{L^{2}}.$
### The weak form
For $k,l\geqslant 0$ we consider terms of the form
$\mathbb{E}^{\theta}\langle\mathcal{S}_{0}\mathrm{d}F^{\sigma}_{k}(\vec{\theta}_{k})\varphi^{\sigma}(\theta_{k+1}),\mathrm{d}G_{l}^{\eta}(\vec{\theta}_{l})\varphi^{\eta}(\theta_{l+1})\rangle_{X,X^{\ast}}=\mathbb{E}^{\theta}\langle\mathrm{d}F_{k}^{\sigma}(\vec{\theta}_{k})(\mathcal{S}_{0}\varphi^{\sigma})(\theta_{k+1}),\mathrm{d}G_{l}^{\eta}(\vec{\theta}_{l})\varphi^{\eta}(\theta_{l+1})\rangle_{X,X^{\ast}}=\mathbb{E}^{\theta}\langle\mathrm{d}F_{k}^{\sigma}(\vec{\theta}_{k})(\sigma\varphi^{\bar{\sigma}})(\theta_{k+1}),\mathrm{d}G_{l}^{\eta}(\vec{\theta}_{l})\varphi^{\eta}(\theta_{l+1})\rangle_{X,X^{\ast}}.$
Terms where $k\neq l$ are zero after the integration in
$\theta_{\max(l,k)+1}$. Let us assume $l=k$. If $\sigma=\eta$, then
$\bar{\sigma}\neq\eta$ and
$\mathbb{E}^{\theta_{k+1}}(\sigma\varphi^{\bar{\sigma}})(\theta_{k+1})\varphi^{\eta}(\theta_{k+1})=0$.
So $k=l$ and $\bar{\sigma}=\eta$ are the only arising terms.
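The orthogonality relations used here are elementary and can be confirmed numerically (a discretized check on a midpoint grid, which avoids the sign changes landing exactly on grid points):

```python
import numpy as np

# Over theta uniform on [-pi, pi], sign cos and sign sin each have mean
# zero, are orthogonal to one another, and square to 1.
m = 400000
theta = -np.pi + (np.arange(m) + 0.5) * (2 * np.pi / m)
phi_p = np.sign(np.cos(theta))
phi_m = np.sign(np.sin(theta))
assert abs(phi_p.mean()) < 1e-4              # E[phi^+] = 0
assert abs(phi_m.mean()) < 1e-4              # E[phi^-] = 0
assert abs((phi_p * phi_m).mean()) < 1e-4    # E[phi^+ phi^-] = 0
assert np.all(phi_p ** 2 == 1.0)             # E[(phi^sigma)^2] = 1
```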
Similarly, we consider the action of $\mathcal{H}$:
$\displaystyle\mathbb{E}^{\theta}\langle\mathrm{d}F_{k}^{\sigma}(\vec{\theta}_{k})\mathcal{H}\varphi^{\sigma}(\theta_{k+1}),\mathrm{d}G_{l}^{\eta}(\vec{\theta}_{l})\varphi^{\eta}(\theta_{l+1})\rangle_{X,X^{\ast}}$
Only diagonal terms remain because integration in $\theta_{\max(l,k)+1}$
yields again 0 if $l\neq k$. Let now $l=k$, then
$\varphi^{\eta}(\theta_{k+1})$ is constant on the quarters of $[-\pi,\pi]$.
Thus by Lemma 2
$\mathbb{E}^{\theta_{k+1}}(\mathcal{H}\varphi^{\sigma})(\theta_{k+1})\varphi^{\eta}(\theta_{k+1})=\mathbb{E}^{\theta_{k+1}}(\pi\mathcal{H}\varphi^{\sigma})(\theta_{k+1})\varphi^{\eta}(\theta_{k+1})=c_{0}\mathbb{E}^{\theta_{k+1}}(\mathcal{S}_{0}\varphi^{\sigma})(\theta_{k+1})\varphi^{\eta}(\theta_{k+1})$
and, together, when $\langle f\rangle_{I_{0}}=0$ and $(f,h_{I_{0}})=0$, we have
$\mathbb{E}^{\theta}\langle
F^{\mathcal{H}}(\theta),G(\theta)\rangle_{X,X^{\ast}}=c_{0}\mathbb{E}^{\theta}\langle\mathcal{S}_{0}F(\theta),G(\theta)\rangle_{X,X^{\ast}}.$
Putting things together, we estimate assuming $\langle f\rangle_{I_{0}}=0$ and
$(f,h_{I_{0}})=0$ that
$|\mathbb{E}^{x}\langle\mathcal{S}_{0}f(x),g(x)\rangle_{X,X^{\ast}}|=|\mathbb{E}^{\theta}\langle\mathcal{S}_{0}F(\theta),G(\theta)\rangle_{X,X^{\ast}}|=c^{-1}_{0}|\mathbb{E}^{\theta}\langle F^{\mathcal{H}}(\theta),G(\theta)\rangle_{X,X^{\ast}}|\leqslant h_{p}c^{-1}_{0}\|F\|_{L_{X,\theta}^{p}}\|G\|_{L_{X^{\ast},\theta}^{q}}=h_{p}c^{-1}_{0}\|f\|_{L_{X}^{p}}\|g\|_{L_{X^{\ast}}^{q}}.$
The first and last equalities hold because by construction $f(x)$ and
$F(\theta)$ as well as $g(x)$ and $G(\theta)$ have the same probability
distributions respectively. Notice that the random generators $\varphi^{\pm}$
are independent. We have used Lemma 1 for the last inequality. To finish the
estimate for general $f$, write $\tilde{f}=f-(f,h_{I_{0}})h_{I_{0}}-\langle
f\rangle_{I_{0}}\chi_{I_{0}}$ and get
$\|\mathcal{S}_{0}f\|_{L_{X}^{p}}=\|\mathcal{S}_{0}\tilde{f}\|_{L_{X}^{p}}\leqslant
c^{-1}_{0}h_{p}\|\tilde{f}\|_{L_{X}^{p}}\lesssim
c^{-1}_{0}h_{p}\|f\|_{L_{X}^{p}}.$
We have used that averaging operators are bounded.
Therefore, the bound of $\mathcal{S}_{0}$ is proportional to $h_{p}$, and our
proof of Theorem 1 is complete.
## References
* [1] J. Bourgain. _Some remarks on Banach spaces in which martingale difference sequences are unconditional_ , Ark. Mat. 21(2): 163–168, 1983.
* [2] D. L. Burkholder. _A geometric condition that implies the existence of certain singular integrals of Banach-space-valued functions_ , Conference on harmonic analysis in honor of Antoni Zygmund, Vol. I, II (Chicago, Ill., 1981), Wadsworth Math. Ser., 270–286. Wadsworth, Belmont, CA, 1983.
* [4] S. Geiss, S. Montgomery-Smith and E. Saksman. _On singular integral and martingale transforms_ , Trans. Amer. Math. Soc., 362(2):553–575, 2010.
* [5] T. Hytönen, J. van Neerven, M. Veraar, and L. Weis. _Analysis in Banach spaces_ , Vol. I. Martingales and Littlewood-Paley theory, volume 63 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. Springer, Cham, 2016.
* [6] S. Petermichl. _Dyadic shift and a logarithmic estimate for Hankel operators with matrix symbol_ , C. R. Acad. Sci. Paris Sér. I Math., 330(6):455–460, 2000.
* [7] S. Petermichl and S. Pott. _A version of Burkholder’s theorem for operator-weighted spaces_ , Proc. Amer. Math. Soc., 131(11):3457–3461, 2003.
* [8] G. Pisier. _Martingales in Banach spaces_ , Cambridge Studies in Advanced Mathematics, vol. 155, Cambridge University Press, Cambridge, 2016.
* [9] S. Pott, A. Stoica. _Linear bounds for Calderón-Zygmund operators with even kernel on UMD spaces_ , J. Funct. Anal., 266(5): 3303–3319, 2014.
# On the flavor/mass dichotomy for mixed neutrinos: a phenomenologically
motivated analysis based on lepton charge conservation in neutron decay
Giuseppe Gaetano Luciano, Applied
Physics Section of Environmental Science Department, Universitat de Lleida,
Av. Jaume II, 69, 25001 Lleida, Spain
###### Abstract
_Flavor/mass dichotomy_ for mixed fields is one of the most controversial
issues in Quantum Field Theory. In this work we approach the problem by
considering the mixing of neutrinos and computing the transition amplitude for the
paradigmatic neutron $\beta$-decay
$n\,\rightarrow\,p\,+\,e^{-}\,+\,\bar{\nu}$. Calculations are developed by
utilizing the following different representations of neutrino states: _i_)
Pontecorvo states, _ii_) mass states and _iii_) exact QFT flavor states, which
are defined as eigenstates of the flavor charge. As a guiding principle, we
invoke the conservation of lepton charge in the interaction vertex, which is
built into the Standard Model at tree level. In the short-time
limit, we show that the only fully consistent scenario is that based on QFT
states, whereas the other two frameworks contradict the underlying theory.
This result provides a crucial step towards understanding the rôle of neutrino
mixing in weak decays and contributes to elucidating the flavor/mass controversy
in QFT.
###### pacs:
14.60.Pq, 13.15.+g
## I Introduction
Neutrino mixing and oscillations are among the most fascinating, but least
understood phenomena in particle physics. Although the idea of neutrino
oscillations was first proposed by Pontecorvo in 1957 [Ponte57] by analogy with
kaons, the theoretical framework in its modern form only dates back to the
1970s [Grib; Frit], after the discovery of the second [Maki] and third [Third]
generations of neutrinos and the later extension to include environmental
effects [Wolf; Mikh]. On the experimental side, significant progress was
achieved between 1998 and 2002 with the detection of atmospheric and solar
neutrino oscillations by Super-Kamiokande [SuperK; NP] and SNO [SNO; McD],
shortly thereafter confirmed by KamLand [KamL].
Besides their intrinsic relevance, neutrino oscillations provide to date the
only evidence of physics beyond the Standard Model (SM), thus motivating ever-
new research at both the theoretical and experimental levels. In this direction,
special focus has recently been placed on the study of
gravitational/acceleration effects on flavor oscillations [Stodo; Burgard;
Fornengo; Cardall; Lambiase; GiuntiKim; Mavro; Paglia; Vissani;
Capozziello; Dvo; ETG; BlasSma; Luc; Luc2; Swami; Khalifeh], with non-
trivial results in the framework of extended theories [Capozziello; ETG]. In
parallel, interesting applications have been found in quantum information
[BlasEnt; Alok; Kumar; Roggero; Li; Matrella], quantum optics [Marini] and
geophysics [Geo], where neutrinos are used as a probe to determine the chemical
composition of the inner Earth.
Despite the intensive study in quantum mechanics (QM), comparably less
attention has been devoted to the analysis of neutrino mixing in Quantum Field
Theory (QFT). However, it is the field theoretical investigation that is more
pertinent to high-energy phenomena in particle physics, since the SM is
built upon QFT. The first consistent treatment of flavor mixing based on the
construction of Fock spaces for quantized neutrino fields was developed in
[BV95], highlighting the shortcomings of the original QM theory. Indeed, while
Pontecorvo mixing transformations act as pure rotations between single-
particle states of neutrinos with definite masses (henceforth “mass states”)
and neutrinos with definite flavors (“flavor states”), their QFT counterparts
exhibit the structure of a rotation nested into a non-commuting Bogoliubov
transformation [BV95; Garg]. This leads to orthogonal vacuum states for mass
and flavor fields [BV95], the latter behaving as a condensate of massive
particle/antiparticle pairs with a complex entangled structure [LucPetr;
Cabo]. In turn, the Fock spaces built upon these two vacua are _unitarily_ (i.e.
physically) _inequivalent_ to each other, giving rise to characteristic
effects that lie outside the domain of QM [Bare; CapolDark; MavDark;
Blasone:2017nbf; Jizba; Quaranta; LucianoTs] (see [SmVit] for a recent
review). We emphasize that similar considerations had previously been outlined
in [Berna; Berna2] in relativistic QM. On mathematical grounds, the
theoretical understanding of QFT mixing has been confirmed by the rigorous
analysis of [Hannab].
In recent years, renewed interest in the field theoretical analysis of mixing
has been prompted by the study of weak interactions involving neutrinos. In
particular, a fruitful arena in which to test the QFT formalism was offered by
the weak decay of accelerated protons [Ahluw; ProtBlas; ProtMat], whose
occurrence is allowed by the Unruh effect. In this context, by invoking
consistency with the general covariance of QFT, the scalar decay rate for the
proton was computed both in the laboratory frame and in the comoving rest-
frame, with conflicting results on whether to utilize flavor [Ahluw; ProtBlas]
or mass states [ProtMat] for mixed neutrinos (_flavor/mass dichotomy_, see
also [GiuntiLee; GiuntiUn; Akh; Remarks; SmaldTur; GaetanoLucia] for
further discussion on the topic). It must be emphasized that such a
controversy is peculiar to QFT [QFTCon], whereas in QM the formal equivalence
between the two representations is warranted by the Stone-von Neumann
uniqueness theorem [Stone; VN]. While providing valuable insights on the rôle
of QFT mixing in weak interactions, these studies have posed the problem of
the very nature of asymptotic neutrinos, requiring fresh efforts to sort
things out. In passing, we mention that stimulating analyses of flavor mixing
and weak decays were conducted in [CapW] and [CYL].
Starting from the above premises, in this work we explore in more depth the
inherent features of neutrino mixing in QFT. Specifically, we attempt to
elucidate the aforementioned flavor/mass controversy by focusing on the
computation of the transition amplitude for the neutron $\beta$-decay. We
carry out calculations by resorting to three different representations of
neutrino states: _i_) Pontecorvo states, _ii_) mass states and _iii_) exact
QFT flavor states, which are defined as eigenstates of the flavor charge
operators. As a result, we obtain three different expressions for the neutron
decay amplitude, which only overlap in the limit of ultra-relativistic
neutrinos. The question arises as to which of these descriptions provides the
physically correct scenario. While awaiting experimental hints that could point
us toward the solution, here we follow a phenomenologically motivated
approach: by requiring the conservation of lepton charge in the interaction
vertex, which is empirically observed and built into the SM at tree level [Cli],
we show that the only fully consistent framework is that founded on QFT
states. On the other hand, both the Pontecorvo and mass representations lead to a
violation of lepton charge, contradicting the underlying theory. We remark
that this conclusion is reached simply on the basis of consistency with SM
expectations and does not assume any detector-dependent model for neutrino
oscillations. Such a result should help clarify the true nature of
asymptotic neutrinos against the variety of exotic claims that have recently
appeared in the literature.
The remainder of the work is organized as follows: in the next Section we
review the QFT formalism of neutrino mixing. Section III is devoted to the
calculation of the tree-level $\beta$-decay amplitude in the Pontecorvo and
mass representations. Consistency with the conservation of lepton charge is
checked by considering the “short-time limit” of the above amplitudes, i.e.
the limit of small distances from the production vertex. The same analysis is
then developed with exact QFT flavor states in Sec. IV. To avoid unnecessary
technicalities and to make the physical insight of our analysis as transparent as
possible, we neglect flavor-changing loop-induced processes, which, however,
are not relevant for our discussion. Conclusions and outlook are
summarized in Sec. V. Throughout the whole manuscript, we use natural units
$\hbar=c=1$.
## II Neutrino mixing in QFT: mathematical aspects and physical implications
Let us start by reviewing the QFT formalism of neutrino mixing and the main
differences with the QM Pontecorvo treatment (for more details, see [BV95]).
Toward this end, we recall that the Pontecorvo mixing transformations for single-
particle flavor states read (here we consider a simplified model
involving only two flavors; the validity of our result is not spoilt
by the generalization to the third generation)
$|\nu^{r}_{{\bf{k}},e}\rangle_{P}=\cos\theta\,|\nu^{r}_{{\bf{k}},1}\rangle+\sin\theta\,|\nu^{r}_{{\bf{k}},2}\rangle\,,$ (1a)
$|\nu^{r}_{{\bf{k}},\mu}\rangle_{P}=-\sin\theta\,|\nu^{r}_{{\bf{k}},1}\rangle+\cos\theta\,|\nu^{r}_{{\bf{k}},2}\rangle\,,$ (1b)
where $|\nu^{r}_{{\bf{k}},i}\rangle$, $i=1,2$, are the states of neutrino with
definite masses $m_{i}$, while $|\nu^{r}_{{\bf{k}},\ell}\rangle_{P}$,
$\ell=e,\mu$, are the states of neutrino with definite flavors $\ell$
(electron and muon, respectively). We are assuming equal momentum ${\bf{k}}$
and helicity $r=1,2$ for neutrinos with different masses. The subscript $P$ is
a reminder for Pontecorvo states.
At the level of QFT, the above relations are rewritten in the form
$\displaystyle\nu_{e}(x)$
$\displaystyle=\cos\theta\,\nu_{1}(x)+\sin\theta\,\nu_{2}(x)\,,$ (2a)
$\displaystyle\nu_{\mu}(x)$
$\displaystyle=-\sin\theta\,\nu_{1}(x)+\cos\theta\,\nu_{2}(x)\,,$ (2b)
where $\nu_{\ell}(x)$, $\ell=e,\mu$, are the (interacting) Dirac neutrino
fields with definite flavors, while $\nu_{i}(x)$, $i=1,2$, are the (free)
fields with definite masses. These are expanded according to the usual
Fourier-decomposition BV95
$\nu_{i}(x)\,=\,\frac{1}{\sqrt{V}}\sum_{\textbf{k},r}\left[u_{\textbf{k},i}^{r}\,\alpha^{r}_{\textbf{k},i}(x^{0})\,+\,v_{-\textbf{k},i}^{r}\,\beta^{r\dagger}_{-\textbf{k},i}(x^{0})\right]e^{i\textbf{k}\cdot\textbf{x}}\,,\qquad
i=1,2\,,$ (3)
where
$\alpha^{r}_{\textbf{k},i}(x^{0})=\alpha^{r}_{\textbf{k},i}\,e^{-i\omega_{{\bf{k}},i}x^{0}}$,
$\beta^{r\dagger}_{-\textbf{k},i}(x^{0})=\beta^{r\dagger}_{-\textbf{k},i}\,e^{i\omega_{{\bf{k}},i}x^{0}}$.
The operators $\alpha^{r}_{\textbf{k},i}$ annihilate a field-mode of quantum
numbers ${\bf{k}},r$ and energy
$\omega_{{\bf{k}},i}=\sqrt{m_{i}^{2}+{\bf{k}}^{2}}$. They are defined by
$\alpha^{r}_{\textbf{k},i}|0\rangle_{m}\,=\,\beta^{r}_{\textbf{k},i}|0\rangle_{m}\,=\,0\,,\qquad
i=1,2\,,$ (4)
where $|0\rangle_{m}\equiv|0\rangle_{1}\otimes|0\rangle_{2}$ is the “mass
vacuum”. Similar relations hold for $\beta^{r}_{\textbf{k},i}$.
By imposing the canonical anti-commutation relations for the fields,
$\left\\{\nu_{i}^{\alpha}(x),\nu_{j}^{\beta\dagger}(y)\right\\}_{x^{0}=x^{0^{\prime}}}\,=\,\delta^{3}{\left({\bf{x}}-{\bf{y}}\right)}\,\delta_{\alpha\beta}\,\delta_{ij}\,,\qquad\alpha,\beta=1,\dots,4\,,$
(5)
we have
$\left\\{\alpha^{r}_{\textbf{k},i},\alpha^{s\dagger}_{\textbf{q},j}\right\\}\,=\,\delta_{{\bf{k}}{\bf{q}}}\delta_{rs}\delta_{ij}\,,\qquad
i,j=1,2\,,$ (6)
and similarly for $\beta^{r}_{{\bf{k}},i}$, with all other anti-commutators
vanishing.
For more details on the explicit form used for the spinors
$u_{\textbf{k},i}^{r}$, $v_{-\textbf{k},i}^{r}$ and the Dirac
$\gamma$-matrices, we refer the reader to BV95 . Here, it is enough to write down the
following orthonormality and completeness relations
$\displaystyle
u_{{\bf{k}},i}^{r\dagger}u_{{\bf{k}},i}^{s}\,=\,v_{{\bf{k}},i}^{r\dagger}v_{{\bf{k}},i}^{s}\,=\,\delta_{rs}\,,$
(7) $\displaystyle
u_{{\bf{k}},i}^{r\dagger}v_{-{\bf{k}},i}^{s}\,=\,v_{-{\bf{k}},i}^{r\dagger}u_{{\bf{k}},i}^{s}=0\,,$
(8)
$\displaystyle\sum_{r}\left(u_{{\bf{k}},i}^{r}u_{{\bf{k}},i}^{r\dagger}\,+\,v_{-{\bf{k}},i}^{r}v_{-{\bf{k}},i}^{r\dagger}\right)=\mathds{1}\,.$
(9)
For reasons that will become clear below, in Eq. (3) we have considered the
finite-volume expansion of free fields. Formally, the infinite-volume limit is
obtained by implementing the standard prescription
$\displaystyle\sqrt{V}{\alpha}_{{\bf{k}}}$ $\displaystyle\rightarrow$
$\displaystyle(2\pi)^{{3}/{2}}\,{\alpha}_{\bf{k}}\,,$ (10)
$\displaystyle\frac{1}{V}\sum_{{\bf{k}}}$ $\displaystyle\rightarrow$
$\displaystyle\frac{1}{(2\pi)^{3}}\,\int\hskip 0.28453ptd^{3}\textbf{k}\,,$
(11) $\displaystyle\frac{V\delta_{{\bf{k}}{\bf{q}}}}{(2\pi)^{3}}$
$\displaystyle\rightarrow$ $\displaystyle\delta^{3}({\bf{k}}-{\bf{q}})\,.$
(12)
Let us now turn to the expansion of flavor fields. For this purpose, it is
convenient to express the transformations (2) in terms of the mixing generator
BV95
$G_{\theta}(x^{0})\,=\,\exp\left\\{\theta\int
d^{3}{\bf{x}}\,\left[{\nu}^{\dagger}_{1}(x){\nu}_{2}(x)-{\nu}^{\dagger}_{2}(x){\nu}_{1}(x)\right]\right\\}\,,$
(13)
which gives
${\nu}^{\alpha}_{\ell}(x)\,=\,{G}_{\theta}^{-1}(x^{0})\,{\nu}^{\alpha}_{i}(x)\,{G_{\theta}}(x^{0})\,,\qquad(\ell,i)=(e,1),(\mu,2)\,.$
(14)
At finite volume $V$, it is straightforward to see that this is a unitary
operator, i.e.
$G^{-1}_{\theta}(x^{0})=G_{-\theta}(x^{0})=G^{\dagger}_{\theta}(x^{0})$, which
preserves the canonical anticommutators. Furthermore, it provides us with the
map between the Hilbert space for free fields $\mathcal{H}_{m}$ and that for
mixed fields $\mathcal{H}_{f}$. In particular, the vacuum for flavor fields
(“flavor vacuum”) $|0(x^{0})\rangle_{f}$ is given by
$|0(x^{0})\rangle_{f}=G_{\theta}^{-1}(x^{0})|0\rangle_{m}\,.$ (15)
Following the convention of BV95 , we shall denote by $|0\rangle_{f}$ the
flavor vacuum at $x^{0}=0$.
While being unitary for finite $V$, in the infinite-volume limit (i.e., for
systems with infinite degrees of freedom like fields) $G_{\theta}(x^{0})$
exhibits a non-unitary nature, which results in the orthogonality of flavor
and mass vacua BV95 ; JiMi
${}_{f}\langle 0(x^{0})|0\rangle_{m}=0,\,\,\,\,\forall x^{0}\,.$ (16)
Likewise, flavor vacua at different times are orthogonal to each other Terra ,
i.e. ${}_{f}\langle 0(x^{0^{\prime}})|0(x^{0})\rangle_{f}=0,\forall
x^{0^{\prime}}\neq x^{0}$. We stress that these are distinctive QFT features
that are missing in QM, due to the validity of the Stone-von Neumann theorem
Stone ; VN . To avoid any technical issues, in what follows we shall perform
all computations at finite volume and take the $V\rightarrow\infty$ limit
only at the end.
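The vanishing of the vacuum overlap in the infinite-volume limit can be made plausible by a rough numerical estimate: each mode $({\bf{k}},r)$ contributes to $_{m}\langle 0|0\rangle_{f}$ a factor smaller than one, so the logarithm of the overlap scales like the volume times a negative, cutoff-dependent density. The Python sketch below assumes, for illustration only, a per-mode factor $(1-\sin^{2}\theta\,|V_{\bf{k}}|^{2})$ built from Eq. (27); the masses, mixing angle and UV cutoff are hypothetical values, and the precise per-mode exponent derived in BV95 is inessential to the point:

```python
import math

def V2(k, m1=0.1, m2=0.2):
    # |V_k|^2 from Eq. (27); hypothetical masses in arbitrary natural units
    w1 = math.sqrt(m1 * m1 + k * k)
    w2 = math.sqrt(m2 * m2 + k * k)
    den = 2.0 * math.sqrt(w1 * w2 * (w1 + m1) * (w2 + m2))
    return (((w1 + m1) - (w2 + m2)) * k / den) ** 2

theta = 0.6            # hypothetical mixing angle
n, kmax = 20000, 50.0  # radial grid and UV cutoff (arbitrary)
h = kmax / n
# log-overlap per unit volume: sum_k -> V/(2 pi)^3 * integral d^3k,
# with a factor 2 for the two helicities r = 1, 2
log_density = sum(
    4.0 * math.pi * ((j + 0.5) * h) ** 2
    * 2.0 * math.log(1.0 - math.sin(theta) ** 2 * V2((j + 0.5) * h))
    for j in range(n)
) * h / (2.0 * math.pi) ** 3

overlap = lambda vol: math.exp(vol * log_density)
assert log_density < 0.0
# the overlap decreases monotonically to zero as the volume grows
assert overlap(1e3) > overlap(1e4) > overlap(1e5)
```

The monotone decay of `overlap` with the volume mirrors the orthogonality stated in Eq. (16).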
By letting the generator $G_{\theta}(x^{0})$ act on $\nu_{i}(x)$ in Eq. (14),
the flavor fields take the form
$\nu_{\ell}(x)\,=\,\frac{1}{\sqrt{V}}\sum_{\textbf{k},r}\left[u_{\textbf{k},i}^{r}\,\alpha^{r}_{\textbf{k},\nu_{\ell}}(x^{0})\,+\,v_{-\textbf{k},i}^{r}\,\beta^{r\dagger}_{-\textbf{k},\nu_{\ell}}(x^{0})\right]e^{i\textbf{k}\cdot\textbf{x}}\,,$
(17)
with $(\ell,i)=(e,1),(\mu,2)$. The flavor annihilators are given by
$\displaystyle\alpha^{r}_{{\bf{k}},\nu_{e}}(x^{0})$ $\displaystyle=$
$\displaystyle\cos\theta\,\alpha^{r}_{{\bf{k}},1}(x^{0})\,+\,\sin\theta\sum_{s}\left[u^{r\dagger}_{{\bf{k}},1}u^{s}_{{\bf{k}},2}\,\alpha^{s}_{{\bf{k}},2}(x^{0})\,+\,u^{r\dagger}_{{\bf{k}},1}v^{s}_{-{\bf{k}},2}\,\beta^{s\dagger}_{-{\bf{k}},2}(x^{0})\right],$
(18) $\displaystyle\alpha^{r}_{{\bf{k}},\nu_{\mu}}(x^{0})$ $\displaystyle=$
$\displaystyle\cos\theta\,\alpha^{r}_{{\bf{k}},2}(x^{0})\,-\,\sin\theta\sum_{s}\left[u^{r\dagger}_{{\bf{k}},2}u^{s}_{{\bf{k}},1}\,\alpha^{s}_{{\bf{k}},1}(x^{0})\,+\,u^{r\dagger}_{{\bf{k}},2}v^{s}_{-{\bf{k}},1}\,\beta^{s\dagger}_{-{\bf{k}},1}(x^{0})\right],$
(19) $\displaystyle\beta^{r}_{-{\bf{k}},\nu_{e}}(x^{0})$ $\displaystyle=$
$\displaystyle\cos\theta\,\beta^{r}_{-{\bf{k}},1}(x^{0})\,+\,\sin\theta\sum_{s}\left[v^{s\dagger}_{-{\bf{k}},2}v^{r}_{-{\bf{k}},1}\,\beta^{s}_{-{\bf{k}},2}(x^{0})\,+\,u^{s\dagger}_{{\bf{k}},2}v^{r}_{-{\bf{k}},1}\,\alpha^{s\dagger}_{{\bf{k}},2}(x^{0})\right],$
(20) $\displaystyle\beta^{r}_{-{\bf{k}},\nu_{\mu}}(x^{0})$ $\displaystyle=$
$\displaystyle\cos\theta\,\beta^{r}_{-{\bf{k}},2}(x^{0})\,-\,\sin\theta\sum_{s}\left[v^{s\dagger}_{-{\bf{k}},1}v^{r}_{-{\bf{k}},2}\,\beta^{s}_{-{\bf{k}},1}(x^{0})\,+\,u^{s\dagger}_{{\bf{k}},1}v^{r}_{-{\bf{k}},2}\,\alpha^{s\dagger}_{{\bf{k}},1}(x^{0})\right].$
(21)
Notice that these transformations can be inverted by utilizing the symmetry
$\nu_{\ell}(x)\leftrightarrow\nu_{i}(x)$ when $\theta\rightarrow-\theta$. The
time-dependence of the flavor operators (18)-(21) indicates that flavor fields
are interacting fields. It must be noted that this interacting model can be
solved exactly, without needing perturbation expansions Jizba . Furthermore,
we emphasize that the choice of the reference time $x^{0}=0$ at which the
flavor vacuum (15) and the related particle states are defined is not unique.
In principle, any other choice would be possible and equivalent, provided that
the flavor states which are acted upon by the corresponding flavor operators
(18)-(21) are consistently evaluated at the same time as the operators
themselves and the commutators are all considered at equal times BV95 .
As remarked in Fuji ; BlasDiMa , the field expansion (17) relies on a special
choice of the bases of spinors, namely those referring to the free field
masses $m_{1}$ and $m_{2}$, respectively. However, it is always possible to
perform a Bogoliubov transformation in order to expand the field operators in
a different basis of spinors, referring to an arbitrarily chosen pair of
mass parameters (for instance, the natural bases corresponding to the pair
of masses $m_{e}$, $m_{\mu}$). The relevant point is that this transformation
leaves measurable quantities (such as flavor charges and oscillation
probabilities) invariant, as it should be. Therefore, in this sense flavor
mixing can be reformulated as a gauge theory BlasDiMa .
Equations (18)-(21) clearly show the non-trivial structure of flavor mixing in
QFT: in fact, besides the standard Pontecorvo rotation, flavor annihilators
also contain a Bogoliubov transformation (the terms in the brackets) arising
from the products of (anti-)spinors with different masses. The presence of
such extra terms lies at the heart of the QFT treatment of mixing, as they
imply that the flavor annihilators do not annihilate the mass vacuum, and vice
versa.
Without loss of generality, we can now select the reference frame such that
${\bf{k}}=\left(0,0,|{\bf{k}}|\right)$. In this setting, only the products of
wave functions with $r=s$ are non-vanishing, which allows us to rewrite Eqs.
(18)-(21) in the simpler form
$\displaystyle\alpha^{r}_{{\bf{k}},\nu_{e}}(x^{0})$ $\displaystyle=$
$\displaystyle\cos\theta\,\alpha^{r}_{{\bf{k}},1}(x^{0})\,+\,\sin\theta\left(|U_{\bf{k}}|\,\alpha^{r}_{{\bf{k}},2}(x^{0})\,+\,\epsilon^{r}|V_{\bf{k}}|\,\beta^{r\dagger}_{-{\bf{k}},2}(x^{0})\right),$
(22) $\displaystyle\alpha^{r}_{{\bf{k}},\nu_{\mu}}(x^{0})$ $\displaystyle=$
$\displaystyle\cos\theta\,\alpha^{r}_{{\bf{k}},2}(x^{0})\,-\,\sin\theta\left(|U_{\bf{k}}|\,\alpha^{r}_{{\bf{k}},1}(x^{0})\,-\,\epsilon^{r}|V_{\bf{k}}|\,\beta^{r\dagger}_{-{\bf{k}},1}(x^{0})\right),$
(23) $\displaystyle\beta^{r}_{-{\bf{k}},\nu_{e}}(x^{0})$ $\displaystyle=$
$\displaystyle\cos\theta\,\beta^{r}_{-{\bf{k}},1}(x^{0})\,+\,\sin\theta\left(|U_{\bf{k}}|\,\beta^{r}_{-{\bf{k}},2}(x^{0})\,-\,\epsilon^{r}|V_{\bf{k}}|\,\alpha^{r\dagger}_{{\bf{k}},2}(x^{0})\right),$
(24) $\displaystyle\beta^{r}_{-{\bf{k}},\nu_{\mu}}(x^{0})$ $\displaystyle=$
$\displaystyle\cos\theta\,\beta^{r}_{-{\bf{k}},2}(x^{0})\,-\,\sin\theta\left(|U_{\bf{k}}|\,\beta^{r}_{-{\bf{k}},1}(x^{0})\,+\,\epsilon^{r}|V_{\bf{k}}|\,\alpha^{r\dagger}_{{\bf{k}},1}(x^{0})\right),$
(25)
where $\epsilon^{r}=(-1)^{r}$. The Bogoliubov coefficients $|U_{\bf{k}}|$ and
$|V_{\bf{k}}|$ are given by
$\displaystyle|U_{\bf{k}}|$ $\displaystyle=$ $\displaystyle
u_{{\bf{k}},i}^{r\dagger}\,u_{{\bf{k}},j}^{r}\,=\,v^{r\dagger}_{-{\bf{k}},i}\,v^{r}_{-{\bf{k}},j}$
(26) $\displaystyle=$
$\displaystyle\frac{|{\bf{k}}|^{2}+\left(\omega_{{\bf{k}},1}+m_{1}\right)\left(\omega_{{\bf{k}},2}+m_{2}\right)}{2\sqrt{\omega_{{\bf{k}},1}\omega_{{\bf{k}},2}\left(\omega_{{\bf{k}},1}+m_{1}\right)\left(\omega_{{\bf{k}},2}+m_{2}\right)}}\,,$
$\displaystyle|V_{\bf{k}}|$ $\displaystyle=$
$\displaystyle\epsilon^{r}\,u_{{\bf{k}},1}^{r\dagger}\,v^{r}_{-{\bf{k}},2}\,=\,-\epsilon^{r}\,u^{r\dagger}_{{\bf{k}},2}v_{-{\bf{k}},1}^{r}$
(27) $\displaystyle=$
$\displaystyle\frac{\left(\omega_{{\bf{k}},1}+m_{1}\right)-\left(\omega_{{\bf{k}},2}+m_{2}\right)}{2\sqrt{\omega_{{\bf{k}},1}\omega_{{\bf{k}},2}\left(\omega_{{\bf{k}},1}+m_{1}\right)\left(\omega_{{\bf{k}},2}+m_{2}\right)}}\,|{\bf{k}}|\,,$
with $i,j=1,2$, $i\neq j$ (similar relations hold for
${\bf{k}}\rightarrow-{\bf{k}}$). It is a matter of straightforward calculation to show that
$|U_{{\bf{k}}}|^{2}\,+\,|V_{{\bf{k}}}|^{2}\,=\,1\,,$ (28)
which ensures that flavor ladder operators still satisfy canonical anti-
commutation relations (at equal times).
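The identity (28) can be verified numerically directly from the explicit expressions (26)-(27). A minimal Python sketch, with hypothetical mass and momentum values in arbitrary natural units:

```python
import math

def bogoliubov(m1, m2, k):
    # |U_k| and |V_k| from Eqs. (26)-(27)
    w1 = math.sqrt(m1 * m1 + k * k)
    w2 = math.sqrt(m2 * m2 + k * k)
    den = 2.0 * math.sqrt(w1 * w2 * (w1 + m1) * (w2 + m2))
    U = (k * k + (w1 + m1) * (w2 + m2)) / den
    V = ((w1 + m1) - (w2 + m2)) * k / den
    return U, V

# hypothetical masses m1 = 0.1, m2 = 0.2; scan several momenta
for k in (0.05, 1.0, 50.0):
    U, V = bogoliubov(0.1, 0.2, k)
    assert abs(U ** 2 + V ** 2 - 1.0) < 1e-12  # Eq. (28)
```

The identity holds exactly for any kinematics, which is precisely what guarantees the canonical equal-time anticommutators for the flavor ladder operators.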
For our next purposes, we note that the following relations hold true in the
reference frame where ${\bf{k}}=\left(0,0,|{\bf{k}}|\right)$:
$\displaystyle\bar{u}_{{\bf{k}},1}^{r}|U_{\bf{k}}|\,-\,\epsilon^{r}\,\bar{v}^{r}_{-{\bf{k}},1}|V_{{\bf{k}}}|$
$\displaystyle=$ $\displaystyle\bar{u}_{{\bf{k}},2}^{r}\,,$ (29)
$\displaystyle\bar{u}_{{\bf{k}},1}^{r}|V_{\bf{k}}|\,+\,\epsilon^{r}\,\bar{v}^{r}_{-{\bf{k}},1}|U_{{\bf{k}}}|$
$\displaystyle=$ $\displaystyle\epsilon^{r}\,\bar{v}_{-{\bf{k}},2}^{r}\,,$
(30) $\displaystyle
v^{r}_{{\bf{k}},1}|U_{{\bf{k}}}|\,-\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}|V_{{\bf{k}}}|$
$\displaystyle=$ $\displaystyle v_{{\bf{k}},2}^{r}\,,$ (31) $\displaystyle
v^{r}_{{\bf{k}},1}|V_{{\bf{k}}}|\,+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}|U_{{\bf{k}}}|$
$\displaystyle=$ $\displaystyle\epsilon^{r}\,u_{-{\bf{k}},2}^{r}\,,$ (32)
where $\bar{u}_{{\bf{k}},i}^{r}=u_{{\bf{k}},i}^{r\dagger}\,\gamma_{0}$ and
$\bar{v}_{-{\bf{k}},i}^{r}=v_{-{\bf{k}},i}^{r\dagger}\,\gamma_{0}$, $i=1,2$.
Furthermore, we have
$\displaystyle\bar{u}_{{\bf{k}},2}^{r}\,|U_{\bf{k}}|\,+\,\epsilon^{r}\,\bar{v}^{r}_{-{\bf{k}},2}\,|V_{\bf{k}}|$
$\displaystyle=$ $\displaystyle\bar{u}_{{\bf{k}},1}^{r}\,,$ (33)
$\displaystyle
v_{{\bf{k}},2}^{r}\,|U_{\bf{k}}|\,+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},2}\,|V_{\bf{k}}|$
$\displaystyle=$ $\displaystyle v_{{\bf{k}},1}^{r}\,.$ (34)
QFT neutrino states are now defined by the action of flavor creation operators
on the flavor vacuum $|0\rangle_{f}$ at $x^{0}=0$, i.e. BV95
$|\nu^{r}_{{\bf{k}},\ell}\rangle\,\equiv\,\alpha^{r\dagger}_{{\bf{k}},\nu_{\ell}}(0)|0\rangle_{f}\,,\qquad\ell=e,\mu\,.$
(35)
As usual, the time evolution of these states is ruled by
$|\nu^{r}_{{\bf{k}},\ell}(x^{0})\rangle=e^{iH_{0}x^{0}}\alpha^{r\dagger}_{{\bf{k}},\nu_{\ell}}(0)|0\rangle_{f}$.
It should be noted that the QFT formalism described above reproduces the
standard QM Pontecorvo treatment in the ultra-relativistic limit $\frac{\Delta
m}{|{\bf{k}}|}=\frac{|m_{2}-m_{1}|}{{|{\bf{k}}|}}\ll 1$. To this end, we
observe that the Bogoliubov coefficients $|U_{\bf{k}}|$ and $|V_{{\bf{k}}}|$
at the first order in $\mathcal{O}\left(\frac{\Delta m}{2|{\bf{k}}|}\right)$
take the form
$\displaystyle|U_{\bf{k}}|$ $\displaystyle\,\approx\,1\,,$ (36a)
$\displaystyle|V_{\bf{k}}|$ $\displaystyle\,\approx\,\frac{\Delta
m}{2|{\bf{k}}|}\,,$ (36b)
from which we get $|U_{{\bf{k}}}|\rightarrow 1,|V_{{\bf{k}}}|\rightarrow 0$ as
$\frac{\Delta m}{|{\bf{k}}|}\rightarrow 0$. Inserting these values into the
Bogoliubov transformations (22)-(25), it is straightforward to see that
Pontecorvo rotations for flavor annihilators are trivially recovered, which
entails that flavor and mass vacua are equivalent to each other in this limit.
In turn, this implies that the definition (35) of exact QFT flavor states
reduces to the usual Pontecorvo definition (1).
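The leading-order behavior (36) can likewise be checked numerically; in the sketch below, the masses and momentum are hypothetical values chosen deep in the relativistic regime $|{\bf{k}}|\gg m_{1},m_{2}$:

```python
import math

m1, m2, k = 0.1, 0.2, 100.0   # hypothetical values with |k| >> m1, m2
w1 = math.sqrt(m1 * m1 + k * k)
w2 = math.sqrt(m2 * m2 + k * k)
den = 2.0 * math.sqrt(w1 * w2 * (w1 + m1) * (w2 + m2))
U = (k * k + (w1 + m1) * (w2 + m2)) / den            # Eq. (26)
V = abs(((w1 + m1) - (w2 + m2)) * k / den)           # Eq. (27)

dm = abs(m2 - m1)
# Eqs. (36): |U| -> 1 and |V| -> dm / (2|k|) at leading order
assert abs(V - dm / (2.0 * k)) < 1e-2 * dm / (2.0 * k)
assert 1.0 - U < 1e-6
```

As expected, the Bogoliubov part of the transformation becomes negligible and only the Pontecorvo rotation survives in this limit.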
The above considerations provide us with the basic ingredients to analyze the
effects of neutrino mixing in a self-consistent field theoretical way. In
particular, we shall resort to the field expansions (17) and the Bogoliubov
transformations (22)-(25) to compute the transition amplitude for the neutron
$\beta$-decay.
## III Neutron $\beta$-decay with Pontecorvo and mass states for neutrinos
In this Section we consider the weak decay
$a)\,\,\,n\,\rightarrow\,p\,+\,e^{-}\,+\,\bar{\nu}_{e}\,,$ (37)
as a paradigmatic process to check the consistency of neutrino mixing with the
conservation of lepton charge in the (tree-level) interaction vertex
(although our considerations are specific to the $\beta$-decay process,
the results and conclusions are more general and can be extended to all
weak interactions involving neutrinos). We stress that this symmetry is
familiar to any physics student from the most common muon decay mode
$\mu^{-}\,\rightarrow\,e^{-}\bar{\nu}_{e}\nu_{\mu}$, which, without the
introduction of neutrinos to balance lepton flavor and to account for the
$3$-body electron energy spectrum, would be a direct conversion
$\mu^{-}\,\rightarrow\,e^{-}$.
In parallel to the interaction (37), we also consider the flavor-violating
process
$b)\,\,\,n\,\rightarrow\,p\,+\,e^{-}\,+\,\bar{\nu}_{\mu}\,,$ (38)
which is in principle allowed because of neutrino oscillations. Nevertheless,
for time intervals sufficiently longer than the neutron lifetime $\tau_{n}$,
but much shorter than the characteristic oscillation time $T_{osc}$,
the tree-level amplitude for this process is expected to vanish for
consistency with lepton charge conservation. We shall refer to this time scale
as the “short-time” limit. Of course, given the experimental values of $\tau_{n}$
and $T_{osc}$, such an interval is well defined.
In the scattering theory for finite-range potentials, it is typically assumed
that the interaction Hamiltonian $H_{int}(x)$ can be switched off long before
and after the interaction, i.e.
$\lim_{x^{0}_{in}\rightarrow-\infty}H_{int}(x)=\lim_{x^{0}_{out}\rightarrow+\infty}H_{int}(x)=0$
(adiabatic approximation) Itz , so that initial and final states can be
represented as eigenstates of the free Hamiltonian $H_{0}$. However, in the
presence of mixed neutrinos, such an approximation fails, since neutrino
fields cannot be defined as asymptotic operators acting on the mass vacuum. As
discussed in Sec. II, this follows from the fact that Fock spaces for flavor
and mass fields are unitarily inequivalent to each other. This apparent
difficulty has been successfully addressed in CYL , where it was shown that the
long-time limit of the amplitudes for the processes (37) and (38) is actually
well-defined when working
with exact QFT states. Furthermore, this limit turns out to be consistent with
SM predictions in the relativistic approximation, which corroborates the
physical rigor of the definition of QFT neutrino states.
For our purposes of computing the short-time limit of the decay amplitudes
(37) and (38), we work at first order in perturbation theory. We then
set the integration limits $x^{0}_{in}$, $x^{0}_{out}$ such that $\Delta
x^{0}=x^{0}_{out}-x^{0}_{in}\ll T_{osc}$, as discussed above. In the
interaction picture, denoting the ingoing and outgoing states by
$|\psi_{i}\rangle$ and $|\psi_{f}\rangle$, respectively, the probability
amplitude is given by $\langle
e^{iH_{0}x^{0}}\psi_{f}|U_{I}(x^{0})|\psi_{i}\rangle$, where the time
evolution operator $U_{I}(x^{0})$ to the leading order is
$U_{I}(x^{0})\simeq
1-i\int_{0}^{x^{0}}dx^{0^{\prime}}\,H_{int}(x^{0^{\prime}})\,.$ (39)
Here, $H_{int}(x^{0})=e^{iH_{0}x^{0}}\,H_{int}\,e^{-iH_{0}x^{0}}$ is the
interaction Hamiltonian in the interaction picture.
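The accuracy of the first-order truncation (39) can be illustrated on a toy two-level system with a constant interaction Hamiltonian $H=g\,\sigma_{x}$ (a hypothetical coupling $g$, not the actual $\beta$-decay Hamiltonian), for which the exact evolution operator is known in closed form, $e^{-iHt}=\cos(gt)\,\mathds{1}-i\sin(gt)\,\sigma_{x}$:

```python
import math

g, t = 0.5, 0.01  # hypothetical coupling and short evolution time
# exact evolution for H = g*sigma_x (uses sigma_x^2 = identity)
exact = [[complex(math.cos(g * t), 0.0), complex(0.0, -math.sin(g * t))],
         [complex(0.0, -math.sin(g * t)), complex(math.cos(g * t), 0.0)]]
# first-order Dyson truncation, Eq. (39): U ~ 1 - i H t
approx = [[complex(1.0, 0.0), complex(0.0, -g * t)],
          [complex(0.0, -g * t), complex(1.0, 0.0)]]
err = max(abs(exact[i][j] - approx[i][j]) for i in range(2) for j in range(2))
# the truncation error is O((g t)^2), negligible for g*t << 1
assert err < (g * t) ** 2
```

The quadratic smallness of `err` is what justifies retaining only the first-order term over the short interval $\Delta x^{0}$.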
The effective Lagrangian for the interaction (37) is given by Cli
$\mathscr{L}_{int}(x)\,=\,-\frac{G_{F}}{\sqrt{2}}\left[\bar{e}(x)\,\gamma^{\mu}(\mathds{1}-\gamma^{5})\nu_{e}(x)\right]\left[V_{ud}\,\bar{q}_{u}(x)\gamma_{\mu}\,(f\,-\,\gamma^{5}g)\,q_{d}(x)\right]\,,$ (40)
where $G_{F}$ is the Fermi constant, $V_{ud}$ is the CKM matrix element
describing the coupling between the up and down quarks, while $f,g$ are the
form factors. The electron, neutrino, up and down quark fields have been
denoted by $e(x)$, $\nu_{e}(x)$, $q_{u}(x)$ and $q_{d}(x)$, respectively. As
is well known, the projection operator
$P_{L}=\frac{1}{2}\left(\mathds{1}-\gamma^{5}\right)$ appears because charged-current
weak interactions couple only to left-chiral fermions. Furthermore, we have
omitted the hermitian conjugate in Eq. (40), since it gives a vanishing
contribution to the transition amplitude.
The above expression of $\mathscr{L}_{int}$ is justified within the framework
of (current-current interaction) Fermi theory. Clearly, the actual Lagrangian
is more complicated than Eq. (40) (see Act ). However, such additional
complications will be mostly irrelevant for our following discussion and can
be safely overlooked.
Based on the above considerations, let us compute the amplitude for the decay
channels $a)$ and $b)$. We perform calculations by resorting first to
Pontecorvo states $|\nu_{{\bf{k}},\ell}\rangle_{P}$ in Eqs. (1) and then to
mass states $|\nu_{{\bf{k}},i}\rangle$ as neutrino fundamental representation.
At a later stage, we shall focus on the short-time limit of the resulting
amplitudes, discussing the regime of their validity.
### III.1 Pontecorvo states
As a first step, we consider the process $a)$. The transition amplitude at
tree level takes the form
$A^{a)}_{P}\,=\,_{P}\langle\bar{\nu}^{r}_{{\bf{k}},e},e^{s}_{{\bf{q}}},\hskip
1.13809ptp^{\sigma_{p}}_{{\bf{p}}_{p}}|\left[-i\int_{x^{0}_{in}}^{x^{0}_{out}}d^{4}x\,\mathscr{L}_{int}(x)\right]|n^{\sigma_{n}}_{{\bf{p}}_{n}}\rangle\,,$
(41)
where we have denoted by ${\bf{p}}_{n}$ ($\sigma_{n}$), ${\bf{p}}_{p}$
($\sigma_{p}$) and ${\bf{q}}$ ($s$) the momentum (helicity) of the neutron,
proton and electron, respectively, and utilized the Pontecorvo state for the
outgoing (anti-)neutrino. To simplify the notation, we have omitted the time-
dependence of particle states. Clearly, in the standard scattering theory in
the interaction picture, the ingoing/outgoing states must be evaluated at
$x^{0}=x^{0}_{in}$ and $x^{0}=x^{0}_{out}$, respectively (as observed in CapW
, the amplitude for the case of the exact QFT neutrino states should be
consistently defined as
$\langle\psi_{\ell}(x^{0}_{out})|e^{-iH\left(x_{out}^{0}-x_{in}^{0}\right)}|\psi_{\ell}(x^{0}_{in})\rangle=\langle\psi_{\ell}(x^{0}_{in})|U_{I}(x^{0}_{out},x_{in}^{0})|\psi_{\ell}(x^{0}_{in})\rangle$,
due to the orthogonality of the Hilbert spaces at different times (see Sec.
II); this will be explicitly taken into account in Sec. IV).
Now, by plugging Eq. (40) into (41), the term involving the expectation value
of the quark up and down fields can be easily computed to give
$\displaystyle\langle
p^{\sigma_{p}}_{{\bf{p}}_{p}}|V_{ud}\,\bar{q}_{u}(x)\gamma_{\mu}\hskip
0.56905pt(f\,-\,\gamma^{5}g)\hskip
0.56905ptq_{d}(x)|n^{\sigma_{n}}_{{\bf{p}}_{n}}\rangle$ $\displaystyle\simeq$
$\displaystyle V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}$ (42)
$\displaystyle\times\,e^{-i(\omega_{{\bf{p}}_{p}}x_{out}^{0}-\omega_{{\bf{p}}_{n}}x^{0}_{in})}\,e^{-i\left[\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}\right)\hskip
0.56905ptx^{0}-\left({\bf{p}}_{n}-{\bf{p}}_{p}\right)\cdot{\bf{x}}\right]}\,,$
up to a normalization factor. Here we have denoted by $\omega_{{\bf{p}}_{n}}$
and $\omega_{{\bf{p}}_{p}}$ the energy of the neutron and proton,
respectively.
Concerning the lepton current, it is convenient to compute separately the
terms involving the electron and neutrino fields. For the former we have
$\langle
e^{s}_{\bf{q}}|\bar{e}(x)|0\rangle_{e}\,\simeq\,\bar{u}_{{\bf{q}},e}^{s}\,e^{-i{\bf{q}}\cdot{\bf{x}}}\,e^{-i\omega_{\bf{q}}^{e}(x^{0}_{out}-x^{0})}\,,$
(43)
where $\omega_{\bf{q}}^{e}$ is the electron energy. On the other hand, we get
for the neutrino field
$_{P}\langle\bar{\nu}^{r}_{{\bf{k}},e}|\nu_{e}(x)|0\rangle_{m}\,\,=\,\cos^{2}\theta\,_{P}\langle\bar{\nu}^{r}_{{\bf{k}},1}|\nu_{1}(x)|0\rangle_{m}\,+\,\sin^{2}\theta\,_{P}\langle\bar{\nu}^{r}_{{\bf{k}},2}|\nu_{2}(x)|0\rangle_{m}\,,$
(44)
where $|0\rangle_{m}$ is the vacuum for the massive neutrino fields
$\nu_{i}(x)$, $i=1,2$ (see Eq. (4)). Notice that we have used the mixing
transformations (1a) and (2a) for the neutrino state and field, respectively.
Equation (44) can be further manipulated to give
$_{P}\langle\bar{\nu}^{r}_{{\bf{k}},e}|\nu_{e}(x)|0\rangle_{m}\,\simeq\,e^{-i{\bf{k}}\cdot{\bf{x}}}\left[\cos^{2}\theta\,v^{r}_{{\bf{k}},1}\,e^{-i\omega_{{\bf{k}},1}\left(x^{0}_{out}-x^{0}\right)}\,+\,\sin^{2}\theta\,v^{r}_{{\bf{k}},2}\,e^{-i\omega_{{\bf{k}},2}\left(x^{0}_{out}-x^{0}\right)}\right],$
(45)
where we have again omitted the normalization.
Combining Eqs. (42), (43) and (45), the amplitude (41) becomes
$\displaystyle A^{a)}_{P}$ $\displaystyle=$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)e^{-i\omega_{\bf{q}}^{e}x^{0}_{out}}\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})$
(46)
$\displaystyle\times\int_{x_{in}^{0}}^{x_{out}^{0}}\,dx^{0}\left[\cos^{2}\theta\,v^{r}_{{\bf{k}},1}\,e^{-i\omega_{{\bf{k}},1}x^{0}_{out}}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},1}\right)x^{0}}\right.$
$\displaystyle\left.+\,\sin^{2}\theta\,v^{r}_{{\bf{k}},2}\,e^{-i\omega_{{\bf{k}},2}x^{0}_{out}}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},2}\right)x^{0}}\right]$
$\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,e^{-i(\omega_{{\bf{p}}_{p}}x_{out}^{0}-\omega_{{\bf{p}}_{n}}x^{0}_{in})}\,,$
where we have absorbed all the factors omitted above into the normalization
$\mathcal{N}=\left[{\sqrt{2}\left(2\pi\right)^{3}}\right]^{-1}$.
In a similar fashion, if we consider the decay channel $b)$ in Eq. (38), the
amplitude reads
$A^{b)}_{P}\,=\,_{P}\langle\bar{\nu}^{r}_{{\bf{k}},\mu},e^{s}_{{\bf{q}}},\hskip
1.13809ptp^{\sigma_{p}}_{{\bf{p}}_{p}}|\left[-i\int_{x^{0}_{in}}^{x^{0}_{out}}d^{4}x\,\mathscr{L}_{int}(x)\right]|n^{\sigma_{n}}_{{\bf{p}}_{n}}\rangle\,.$
(47)
In this case, the expectation value involving the neutrino field is
$_{P}\langle\bar{\nu}^{r}_{{\bf{k}},\mu}|\nu_{e}(x)|0\rangle_{m}\,\simeq\,e^{-i{\bf{k}}\cdot{\bf{x}}}\,\sin\theta\cos\theta\left[v^{r}_{{\bf{k}},2}\,e^{-i\omega_{{\bf{k}},2}\left(x^{0}_{out}-x^{0}\right)}\,-\,v^{r}_{{\bf{k}},1}\,e^{-i\omega_{{\bf{k}},1}\left(x^{0}_{out}-x^{0}\right)}\right],$
(48)
where we have resorted to the transformation (1b) for the state
$|\bar{\nu}_{{\bf{k}},\mu}\rangle_{P}$.
From Eq. (48), we get
$\displaystyle A^{b)}_{P}$ $\displaystyle=$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\sin\theta\cos\theta\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)e^{-i\omega_{\bf{q}}^{e}x^{0}_{out}}\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})$
(49)
$\displaystyle\times\int_{x_{in}^{0}}^{x_{out}^{0}}\,dx^{0}\,\left[v^{r}_{{\bf{k}},2}\,e^{-i\omega_{{\bf{k}},2}x^{0}_{out}}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},2}\right)x^{0}}\right.$
$\displaystyle\left.-\,v^{r}_{{\bf{k}},1}\,e^{-i\omega_{{\bf{k}},1}x^{0}_{out}}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},1}\right)x^{0}}\right]$
$\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,e^{-i(\omega_{{\bf{p}}_{p}}x_{out}^{0}-\omega_{{\bf{p}}_{n}}x^{0}_{in})}\,.$
For vanishing mixing (i.e. $\theta\rightarrow 0$), this expression is
identically zero, as expected in the absence of flavor oscillations.
### III.2 Short-Time Limit
Let us now use Eqs. (46) and (49) to analyze lepton charge conservation in the
tree-level interaction vertex for the processes
$n\,\rightarrow\,p\,+\,e^{-}\,+\,\bar{\nu}_{e}$ and
$n\,\rightarrow\,p\,+\,e^{-}\,+\,\bar{\nu}_{\mu}$. For this purpose, we study
the structure of $A_{P}^{a)}$ and $A_{P}^{b)}$ in the short-time
approximation. We set the limits of integration as $x^{0}_{in}=-\Delta t/2$
and $x^{0}_{out}=\Delta t/2$, such that $\Delta t$ is sufficiently longer than
the neutron lifetime $\tau_{n}$ to ensure (on average) the neutron decay, but
shorter than the characteristic neutrino oscillation time $T_{osc}$, in
compliance with the discussion below Eq. (38). Notice that a similar
assumption has been considered in Nishi in the context of the pion decay.
With reference to the process $a)$, the transition amplitude (46) becomes
$\displaystyle A^{a)}_{P}$ $\displaystyle=$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)e^{-i\omega_{\bf{q}}^{e}\Delta
t/2}\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})$ (50)
$\displaystyle\times\int_{-\Delta t/2}^{\Delta
t/2}\,dx^{0}\left[\cos^{2}\theta\,v^{r}_{{\bf{k}},1}\,e^{-i\omega_{{\bf{k}},1}\Delta
t/2}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},1}\right)x^{0}}\right.$
$\displaystyle\left.+\,\sin^{2}\theta\,v^{r}_{{\bf{k}},2}\,e^{-i\omega_{{\bf{k}},2}\Delta
t/2}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},2}\right)x^{0}}\right]$
$\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,e^{-i(\omega_{{\bf{p}}_{p}}+\omega_{{\bf{p}}_{n}})\Delta
t/2}\,.$
By integrating over $x^{0}$, we then get
$\displaystyle A^{a)}_{P}$ $\displaystyle=$ $\displaystyle
2i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)e^{-i\omega_{\bf{q}}^{e}\Delta
t/2}\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})$ (51)
$\displaystyle\times\left\\{\cos^{2}\theta\,v^{r}_{{\bf{k}},1}\,e^{-i\omega_{{\bf{k}},1}\Delta
t/2}\
\frac{\sin\left[\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},1}\right)\Delta
t/2\right]}{\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},1}}\right.$
$\displaystyle\left.\,+\sin^{2}\theta\,v^{r}_{{\bf{k}},2}\,e^{-i\omega_{{\bf{k}},2}\Delta
t/2}\,\frac{\sin\left[\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},2}\right)\Delta
t/2\right]}{\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},2}}\right\\}$
$\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,e^{-i(\omega_{{\bf{p}}_{p}}+\omega_{{\bf{p}}_{n}})\Delta
t/2}\,.$
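The elementary time integral leading from Eq. (50) to Eq. (51), $\int_{-\Delta t/2}^{\Delta t/2}dx^{0}\,e^{-i\tilde{\omega}x^{0}}=2\sin(\tilde{\omega}\Delta t/2)/\tilde{\omega}$, can be checked numerically; the values of $\tilde{\omega}$ and $\Delta t$ below are arbitrary:

```python
import cmath
import math

def time_integral(w, dt, n=20000):
    # midpoint-rule approximation of the integral of exp(-i w x)
    # over the symmetric interval [-dt/2, dt/2]
    h = dt / n
    return sum(cmath.exp(-1j * w * (-dt / 2 + (j + 0.5) * h))
               for j in range(n)) * h

w, dt = 1.7, 3.0  # arbitrary energy mismatch and time window
closed = 2.0 * math.sin(w * dt / 2.0) / w
num = time_integral(w, dt)
assert abs(num - closed) < 1e-6   # matches the closed form in Eq. (51)
assert abs(num.imag) < 1e-6      # symmetric interval kills the odd part
```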
A comment is now in order: introducing the shorthand notation
$\tilde{\omega}_{i}=\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},i}$,
$i=1,2$, in the limit of large $\Delta t$ the
$\sin(\tilde{\omega}_{i}\Delta t/2)/\tilde{\omega}_{i}$ factors in the above
relation become the usual energy-conserving Dirac $\delta$ functions.
On the other hand, the apparent energy fluctuations for finite time
intervals can be understood in terms of (and are constrained by) the time-
energy uncertainty relation for neutrino oscillations, with $\Delta t$ being
the interval defined above. This scenario has been rigorously analyzed in both
perturbative QM BilPhys and QFT SmaldFl , where it has been shown that the
Mandelstam-Tamm version of time-energy uncertainty relations for neutrino
oscillations can be cast in the form of flavor-energy uncertainty relations.
In other terms, the neutrino flavor charges obtained via Noether’s theorem and
the energy turn out to be incompatible observables in the usual quantum-mechanical
sense. Flavor-energy uncertainty relations have also been recently studied in
stationary curved spacetime BlasSma .
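The statement that the kernel $\sin(\tilde{\omega}T)/\tilde{\omega}$, with $T=\Delta t/2$, acts as $\pi\,\delta(\tilde{\omega})$ for large $T$ can be illustrated numerically by smearing it against a smooth test function; the Gaussian test function and all parameter values below are arbitrary choices:

```python
import math

def smeared(T, f, wmax=30.0, n=400000):
    # midpoint approximation of the integral of sin(w T)/w * f(w)
    # over [-wmax, wmax]; f decays fast enough for the cutoff to be safe
    h = 2.0 * wmax / n
    total = 0.0
    for j in range(n):
        w = -wmax + (j + 0.5) * h
        total += math.sin(w * T) / w * f(w) * h
    return total

f = lambda w: math.exp(-w * w)   # smooth test function with f(0) = 1
I_small, I_large = smeared(1.0, f), smeared(10.0, f)
# for large T the kernel acts as pi*delta(w): integral -> pi * f(0)
assert abs(I_large - math.pi) < 1e-3
assert abs(I_large - math.pi) < abs(I_small - math.pi)
```

The slow approach to the delta-function limit for moderate $T$ is the numerical counterpart of the finite-time energy fluctuations discussed above.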
Let us now consider Eq. (51) in the short-time limit. Clearly, the dominant
contributions in this regime are those for which $\tilde{\omega}_{i}\approx
0$. At the leading order, we obtain
$\displaystyle A^{a)}_{P}$ $\displaystyle\simeq$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)\Delta
t\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})\,\left(\cos^{2}\theta\,v^{r}_{{\bf{k}},1}\,+\sin^{2}\theta\,v^{r}_{{\bf{k}},2}\right)$
(52) $\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,.$
Since observed neutrinos are relativistic, it is convenient to study the
result (52) in the relativistic limit. This approximation will also be useful
for a more direct comparison with the analysis of the next Section. In this
regard, we use the identity (31) and rewrite $A^{a)}_{P}$ in the form
$\displaystyle A^{a)}_{P}$ $\displaystyle\simeq$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)\Delta
t\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})\,\left\\{v^{r}_{{\bf{k}},1}\left[1-\sin^{2}\theta\left(1-|U_{\bf{k}}|\right)\right]-\sin^{2}\theta\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}\,|V_{\bf{k}}|\right\\}$
(53) $\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,.$
By means of Eqs. (36), we obtain, to first order in
$\frac{\Delta m}{2|{\bf{k}}|}$,
$\displaystyle A^{a)}_{P}$ $\displaystyle\simeq$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)\Delta
t\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})\,\left(v^{r}_{{\bf{k}},1}\,-\,\sin^{2}\theta\,\frac{\Delta
m}{2|{\bf{k}}|}\epsilon^{r}\,u^{r}_{-{\bf{k}},1}\right)$ (54)
$\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,,$
which becomes, in the ultra-relativistic limit (i.e., for $\frac{\Delta
m}{|{\bf{k}}|}\rightarrow 0$),
$\displaystyle A^{a)}_{P}$ $\displaystyle\simeq$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)\Delta
t\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})\,v^{r}_{{\bf{k}},1}$
(55) $\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,.$
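The chain (53) → (54) → (55) is consistent with the following leading-order behavior of the Bogoliubov coefficients, read off here from the equations themselves (Eqs. (36) are not reproduced in this excerpt):
$|U_{\bf{k}}|\,\simeq\,1\,-\,\mathcal{O}\!\left(\frac{\Delta m}{2|{\bf{k}}|}\right)^{2},\qquad|V_{\bf{k}}|\,\simeq\,\frac{\Delta m}{2|{\bf{k}}|}\,,$
so that the correction to the $v^{r}_{{\bf{k}},1}$ term in Eq. (53) is of second order and drops out of Eq. (54), while the $u^{r}_{-{\bf{k}},1}$ term enters at first order and disappears only in the ultra-relativistic limit (55).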
Notice that this expression exhibits the same structure as the amplitude for
the emission of a free (anti-)neutrino with mass $m_{1}$. Further discussion
on the meaning of this result will be given below.
We now consider the short-time approximation of the amplitude (49). Following
the same steps as above, we are led to
$\displaystyle A^{b)}_{P}$ $\displaystyle\simeq$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\sin\theta\cos\theta\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)\Delta
t\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})\left(v_{{\bf{k}},2}^{r}\,-\,v^{r}_{{\bf{k}},1}\right)$
(56) $\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,,$
which is evidently non-vanishing, since $v_{{\bf{k}},2}^{r}\neq
v^{r}_{{\bf{k}},1}$ for $m_{2}\neq m_{1}$.
Again, it is interesting to consider the relativistic limit. By using Eqs.
(31) and (36), we obtain after some algebra
$\displaystyle A^{b)}_{P}$ $\displaystyle\simeq$
$\displaystyle-i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\sin\theta\cos\theta\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)\Delta
t\,\frac{\Delta
m}{2|{\bf{k}}|}\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}\,,$
(57) $\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,.$
It must be emphasized that Eq. (56) (or, equivalently, Eq. (57)) signals a
clear violation of lepton charge in the tree-level interaction vertex, as it
states that the flavor-violating process
$n\,\rightarrow\,p\,+\,e^{-}\,+\,\bar{\nu}_{\mu}$ has a nonzero probability
even for intervals much shorter than the characteristic neutrino oscillation
time. In the next Section, we will show that the origin of this flaw can be
traced back to the incorrect identification of Pontecorvo states with the
eigenstates of the flavor charge in QFT. As further evidence of this, from Eq.
(57) we see that the correct vanishing amplitude is recovered in the ultra-
relativistic limit $\frac{\Delta m}{|{\bf{k}}|}\rightarrow 0$, where
Pontecorvo states do in fact well approximate the exact QFT states (see the
discussion below Eqs. (35)).
### III.3 Mass states
Proceeding as in the previous subsection, we now compute the neutron
$\beta$-decay amplitude by using the mass eigenstates
$|\nu^{r}_{{\bf{k}},i}\rangle$, $i=1,2$, as the fundamental representation for
neutrinos. This amounts to considering the two processes
$n\,\rightarrow\,p\,+\,e^{-}\,+\,\bar{\nu}_{i}\,,\qquad i=1,2\,.$ (58)
In this context, we observe that Eq. (41) (or, equivalently, Eq. (47)) must be
rewritten as
$A_{i}=\langle\bar{\nu}^{r}_{{\bf{k}},i},e^{s}_{{\bf{q}}},\hskip
1.13809ptp^{\sigma_{p}}_{{\bf{p}}_{p}}|\left[-i\int_{x^{0}_{in}}^{x^{0}_{out}}d^{4}x\,\mathscr{L}_{int}(x)\right]|n^{\sigma_{n}}_{{\bf{p}}_{n}}\rangle\,,\qquad
i=1,2\,,$ (59)
where the subscript $i$ refers to the $i^{th}$ mass state.
The amplitude (59) can be straightforwardly derived from Eq. (46) by using
the transformation (1) to counter-rotate the expectation value (45) into the
mass basis. In doing so, we obtain
$\displaystyle A_{1}$ $\displaystyle=$ $\displaystyle i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)e^{-i\left(\omega_{\bf{q}}^{e}\,+\,\omega_{{\bf{k}},1}\right)x^{0}_{out}}\cos\theta\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})$
(60)
$\displaystyle\times\int_{x_{in}^{0}}^{x_{out}^{0}}\,dx^{0}\,v^{r}_{{\bf{k}},1}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},1}\right)x^{0}}$
$\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,e^{-i(\omega_{{\bf{p}}_{p}}x_{out}^{0}-\omega_{{\bf{p}}_{n}}x^{0}_{in})}\,,$
which becomes in the short-time limit
$\displaystyle A_{1}$ $\displaystyle\simeq$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)\Delta
t\,\cos\theta\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})\,v^{r}_{{\bf{k}},1}$
(61) $\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,.$
This is predictably consistent with Eq. (55) derived in the ultra-relativistic
limit, where the effects of the mass difference can in fact be neglected.
Similarly, we infer the following expression for $A_{2}$
$\displaystyle A_{2}$ $\displaystyle=$ $\displaystyle i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)e^{-i\left(\omega_{\bf{q}}^{e}\,+\,\omega_{{\bf{k}},2}\right)x^{0}_{out}}\sin\theta\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})$
(62)
$\displaystyle\times\int_{x_{in}^{0}}^{x_{out}^{0}}\,dx^{0}\,v^{r}_{{\bf{k}},2}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},2}\right)x^{0}}$
$\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,e^{-i(\omega_{{\bf{p}}_{p}}x_{out}^{0}-\omega_{{\bf{p}}_{n}}x^{0}_{in})}\,.$
In the short-time approximation this yields
$\displaystyle A_{2}$ $\displaystyle\simeq$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)\Delta
t\,\sin\theta\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})\,v^{r}_{{\bf{k}},2}$
(63) $\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,.$
To make contact with the existing literature, it is convenient to consider
for a moment the decay rates for the two channels in Eq. (58), which are
obtained by squaring the amplitudes $A_{i}$, $i=1,2$. The standard approach is
then to compute the total decay rate for the processes (37) and (38) through
an incoherent sum of the contributions from the different mass eigenstates
GiuntiKim ; suminc , each weighted by $\cos^{2}\theta$ or $\sin^{2}\theta$,
depending on whether one refers to the emission of an electron or muon
(anti-)neutrino555Notice that neutrinos can be produced incoherently if their
mass differences are larger than the energy uncertainty in the production
process. This is quite plausible for heavy neutrinos ($m\gtrsim
100\,\mathrm{keV}$ for a typical energy of $10\,\mathrm{MeV}$). However, the
present analysis is presumed to have a more general applicability than this
specific case.. Following this recipe and resorting to Eqs. (61) and (63), it
is immediate to see that in this framework too the decay probability for the
process (38) is non-vanishing in the short-time limit, since it is given by
the sum of two positive contributions. As already discussed for Pontecorvo
states, this outcome contrasts with what is expected at tree level in the
SM, which means that mass states are not eligible to be the fundamental
representation for mixed neutrinos if we want to preserve the internal
consistency of the theory.
## IV Neutron $\beta$-decay with exact QFT flavor states
Let us now repeat the above calculations by using exact QFT neutrino states.
Toward this end, we recall that these states are defined as eigenstates of
the flavor charge operators BV95 . Indeed, by using the definition (35), it is
straightforward to check that BV95
$\displaystyle::Q_{\nu_{e}}(0)::|\nu^{r}_{\;{\bf
k},e}\rangle\,=\,|\nu^{r}_{\;{{\bf{k}}},e}\rangle\,,\qquad::Q_{\nu_{\mu}}(0)::|\nu^{r}_{\;{\bf
k},\mu}\rangle\,=\,|\nu^{r}_{\;{{\bf{k}}},\mu}\rangle\,,$ (64)
$\displaystyle::Q_{\nu_{e}}(0)::|\nu^{r}_{\;{\bf
k},\mu}\rangle\,=\;::Q_{\nu_{\mu}}(0)::|\nu^{r}_{\;{{\bf{k}}},e}\rangle\,=\,0\,,\qquad::Q_{\nu_{\ell}}(0)::|0\rangle_{f}\,=\,0\,,$
(65)
where
$Q_{\nu_{\ell}}(x^{0})\,\equiv\,\int
d^{3}{{\bf{x}}}\,\nu_{\ell}^{\dagger}(x)\hskip
0.85358pt\nu_{\ell}(x)\,,\qquad\ell=e,\mu\,,$ (66)
are the flavor charge operators and $::Q_{\nu_{\ell}}::$ denotes normal
ordering with respect to the flavor vacuum. By contrast, it must be emphasized
that Pontecorvo states (and, a fortiori, mass states) are not eigenstates of
the flavor charges Nucl .
In this framework, the decay amplitude (41) becomes
$A^{a)}\,=\,\langle\bar{\nu}^{r}_{{\bf{k}},e},e^{s}_{{\bf{q}}},\hskip
1.13809ptp^{\sigma_{p}}_{{\bf{p}}_{p}}|\left[-i\int_{x^{0}_{in}}^{x^{0}_{out}}d^{4}x\,\mathscr{L}_{int}(x)\right]|n^{\sigma_{n}}_{{\bf{p}}_{n}}\rangle\,.$
(67)
The term involving the expectation value of the neutrino field given in Eq.
(44) is now replaced by
$\displaystyle\langle\bar{\nu}^{r}_{{\bf{k}},e}(x^{0}_{in})|\nu_{e}(x)|0(x^{0}_{in})\rangle_{f}$
$\displaystyle\simeq$ $\displaystyle
e^{-i{\bf{k}}\cdot{\bf{x}}}\left\\{\cos^{2}\theta\,v_{{\bf{k}},1}^{r}\,e^{i\omega_{{\bf{k}},1}(x^{0}-x^{0}_{in})}\right.$
(68)
$\displaystyle\,+\sin^{2}\theta\left[\left(v^{r}_{{\bf{k}},1}\,|U_{\bf{k}}|\,-\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}\,|V_{\bf{k}}|\right)|U_{\bf{k}}|\,e^{i\omega_{{\bf{k}},2}(x^{0}-x^{0}_{in})}\right.$
$\displaystyle\left.\left.+\left(v^{r}_{{\bf{k}},1}\,|V_{{\bf{k}}}|\,+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}\,|U_{{\bf{k}}}|\right)|V_{{\bf{k}}}|\,e^{-i\omega_{{\bf{k}},2}(x^{0}-x^{0}_{in})}\right]\right\\},$
which is defined consistently with the orthogonality of the Hilbert spaces at
different times, see footnote 3 (notice the presence of the flavor vacuum and
the Bogoliubov coefficients $U_{\bf{k}}$ and $V_{\bf{k}}$). Here we have used
the field expansion (17) and the explicit form of the flavor
annihilation/creation operators given in Eqs. (22)-(25). As above,
normalization will be taken into account only at the end of calculations.
By means of the identities (31) and (32), Eq. (68) can be rewritten as
$\displaystyle\langle\bar{\nu}^{r}_{{\bf{k}},e}(x^{0}_{in})|\nu_{e}(x)|0(x^{0}_{in})\rangle_{f}$
$\displaystyle\simeq$ $\displaystyle
e^{-i{\bf{k}}\cdot{\bf{x}}}\left\\{\cos^{2}\theta\,v_{{\bf{k}},1}^{r}\,e^{i\omega_{{\bf{k}},1}(x^{0}-x^{0}_{in})}\right.$
(69)
$\displaystyle\left.+\sin^{2}\theta\left[v^{r}_{{\bf{k}},2}\,|U_{\bf{k}}|\,e^{i\omega_{{\bf{k}},2}(x^{0}-x^{0}_{in})}\,+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},2}\,|V_{{\bf{k}}}|\,\,e^{-i\omega_{{\bf{k}},2}(x^{0}-x^{0}_{in})}\right]\right\\}.$
Combining Eqs. (42), (43) and (69), the decay amplitude (67) takes the form
$\displaystyle A^{a)}$ $\displaystyle=$ $\displaystyle i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)e^{-i\omega_{\bf{q}}^{e}x^{0}_{out}}\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})$
(70)
$\displaystyle\times\int_{x^{0}_{in}}^{x^{0}_{out}}\,dx^{0}\left\\{\cos^{2}\theta\,v^{r}_{{\bf{k}},1}\,e^{-i\omega_{{\bf{k}},1}x^{0}_{in}}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},1}\right)x^{0}}\right.$
$\displaystyle\,+\sin^{2}\theta\left[v^{r}_{{\bf{k}},2}\,|U_{{\bf{k}}}|\,\,e^{-i\omega_{{\bf{k}},2}x^{0}_{in}}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},2}\right)x^{0}}\right.$
$\displaystyle\left.\left.+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},2}\,|V_{{\bf{k}}}|\,e^{i\omega_{{\bf{k}},2}x^{0}_{in}}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}+\omega_{{\bf{k}},2}\right)x^{0}}\right]\right\\}$
$\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,e^{-i(\omega_{{\bf{p}}_{p}}x_{out}^{0}-\omega_{{\bf{p}}_{n}}x^{0}_{in})}\,.$
The structure of this amplitude is clearly different from that of Eq. (46),
due to the presence of the Bogoliubov coefficients and the extra contribution
of the anti-particle degrees of freedom (the term multiplying the $u$-neutrino
mode)666The negative neutrino energy $\omega_{{\bf{k}},2}$ in the fourth line
of Eq. (70) should not be misleading, since it is associated with the "hole"
contributions in the flavor vacuum condensate. This is, of course, a
distinctive feature of the QFT treatment of mixing, which arises from the
peculiar structure of the flavor vacuum (see the discussion in the
Introduction).. Up to irrelevant phase factors, Eq. (46) is nevertheless
recovered in the ultra-relativistic limit, where $|U_{\bf{k}}|\rightarrow 1$,
$|V_{\bf{k}}|\rightarrow 0$ and exact flavor states reduce to the standard
Pontecorvo states. We show below that the presence of this additional term in
Eq. (70) is essential to restore the conservation of lepton charge in the
short-time limit and to validate the use of exact QFT flavor states as the
correct neutrino representation.
On the other hand, if we consider the decay channel in Eq. (38), the amplitude
(47) becomes
$A^{b)}\,=\,\langle\bar{\nu}^{r}_{{\bf{k}},\mu},e^{s}_{{\bf{q}}},\hskip
1.13809ptp^{\sigma_{p}}_{{\bf{p}}_{p}}|\left[-i\int_{x^{0}_{in}}^{x^{0}_{out}}d^{4}x\,\mathscr{L}_{int}(x)\right]|n^{\sigma_{n}}_{{\bf{p}}_{n}}\rangle\,,$
(71)
where the expectation value involving the neutrino field is now given by
$\displaystyle\langle\bar{\nu}^{r}_{{\bf{k}},\mu}(x^{0}_{in})|\nu_{e}(x)|0(x^{0}_{in})\rangle_{f}$
$\displaystyle\simeq$ $\displaystyle
e^{-i{\bf{k}}\cdot{\bf{x}}}\,\sin\theta\cos\theta\left[\left(v^{r}_{{\bf{k}},1}\,|U_{\bf{k}}|\,-\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}\,|V_{\bf{k}}|\right)\,e^{i\omega_{{\bf{k}},2}(x^{0}-x^{0}_{in})}\right.$
(72)
$\displaystyle\left.-\,v^{r}_{{\bf{k}},1}\,|U_{{\bf{k}}}|\,e^{i\omega_{{\bf{k}},1}(x^{0}-x^{0}_{in})}\,+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}\,|V_{{\bf{k}}}|\,e^{-i\omega_{{\bf{k}},1}(x^{0}-x^{0}_{in})}\right].$
This can be simplified by use of the relation (31) to give
$\displaystyle\langle\bar{\nu}^{r}_{{\bf{k}},\mu}(x^{0}_{in})|\nu_{e}(x)|0(x^{0}_{in})\rangle_{f}$
$\displaystyle\simeq$ $\displaystyle
e^{-i{\bf{k}}\cdot{\bf{x}}}\,\sin\theta\cos\theta\left[v^{r}_{{\bf{k}},2}\,e^{i\omega_{{\bf{k}},2}(x^{0}-x^{0}_{in})}\right.$
(73)
$\displaystyle\left.-\,v^{r}_{{\bf{k}},1}\,|U_{{\bf{k}}}|\,e^{i\omega_{{\bf{k}},1}(x^{0}-x^{0}_{in})}\,+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}\,|V_{{\bf{k}}}|\,e^{-i\omega_{{\bf{k}},1}(x^{0}-x^{0}_{in})}\right].$
The amplitude (71) is then equal to
$\displaystyle A^{b)}$ $\displaystyle=$ $\displaystyle i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\sin\theta\cos\theta\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)e^{-i\omega_{\bf{q}}^{e}x^{0}_{out}}\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})$
(74)
$\displaystyle\times\int_{x^{0}_{in}}^{x^{0}_{out}}\,dx^{0}\,\left[v^{r}_{{\bf{k}},2}\,e^{-i\omega_{{\bf{k}},2}x^{0}_{in}}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},2}\right)x^{0}}\right.$
$\displaystyle-\,v^{r}_{{\bf{k}},1}\,|U_{{\bf{k}}}|\,e^{-i\omega_{{\bf{k}},1}x^{0}_{in}}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},1}\right)x^{0}}$
$\displaystyle\left.+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}\,|V_{{\bf{k}}}|\,e^{i\omega_{{\bf{k}},1}x^{0}_{in}}\,e^{-i\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}+\omega_{{\bf{k}},1}\right)x^{0}}\right]$
$\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,e^{-i(\omega_{{\bf{p}}_{p}}x_{out}^{0}-\omega_{{\bf{p}}_{n}}x^{0}_{in})}\,.$
Again, the ultra-relativistic limit gives back Eq. (49) up to an irrelevant
phase factor.
### IV.1 Short-Time Limit
We now consider the short-time limit of the amplitudes (70) and (74). Toward
this end, we follow the same recipe as in Sec. III.2 and set
$x^{0}_{in}=-\Delta t/2$ and $x^{0}_{out}=\Delta t/2$. By integrating over
$x^{0}$, we get for $A^{a)}$
$\displaystyle A^{a)}$ $\displaystyle=$ $\displaystyle 2i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)e^{-i\omega_{\bf{q}}^{e}\,\Delta
t/2}\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})$ (75)
$\displaystyle\times\left\\{\cos^{2}\theta\,v^{r}_{{\bf{k}},1}\,e^{i\omega_{{\bf{k}},1}\Delta
t/2}\
\frac{\sin\left[\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},1}\right)\Delta
t/2\right]}{\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},1}}\right.$
$\displaystyle+\,\sin^{2}\theta\left[v^{r}_{{\bf{k}},2}\,|U_{\bf{k}}|\,e^{i\omega_{{\bf{k}},2}\Delta
t/2}\
\frac{\sin\left[\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},2}\right)\Delta
t/2\right]}{\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}-\omega_{{\bf{k}},2}}\right.$
$\displaystyle\left.\left.+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},2}\,|V_{{\bf{k}}}|\,e^{-i\omega_{{\bf{k}},2}\Delta
t/2}\
\frac{\sin\left[\left(\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}+\omega_{{\bf{k}},2}\right)\Delta
t/2\right]}{\omega_{{\bf{p}}_{n}}-\omega_{{\bf{p}}_{p}}-\omega_{{\bf{q}}}^{e}+\omega_{{\bf{k}},2}}\right]\right\\}$
$\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,e^{-i(\omega_{{\bf{p}}_{p}}+\omega_{{\bf{p}}_{n}})\Delta
t/2}\,.$
In the short-time approximation, this yields, to first order,
$\displaystyle A^{a)}$ $\displaystyle\simeq$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)\Delta
t\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})\,\left\\{\cos^{2}\theta\,v^{r}_{{\bf{k}},1}\,+\,\sin^{2}\theta\left[v^{r}_{{\bf{k}},2}\,|U_{\bf{k}}|\,+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},2}\,|V_{{\bf{k}}}|\right]\right\\}$
(76) $\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,,$
which can be further manipulated by means of the identity (34) to give
$\displaystyle A^{a)}$ $\displaystyle\simeq$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)\Delta
t\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})\,v^{r}_{{\bf{k}},1}$
(77) $\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,.$
As remarked above, this correctly matches the amplitude (55) calculated with
Pontecorvo states in the ultra-relativistic approximation.
Let us now turn to the process (38). In the short-time limit, the amplitude
(74) takes the form
$\displaystyle A^{b)}$ $\displaystyle\simeq$ $\displaystyle
i\,\mathcal{N}\hskip
0.28453ptG_{F}\,\sin\theta\cos\theta\,\delta^{3}\left({\bf{p}}_{n}-{\bf{p}}_{p}-{\bf{q}}-{\bf{k}}\right)\Delta
t\,\bar{u}^{s}_{{\bf{q}},e}\,\gamma^{\mu}(\mathds{1}-\gamma^{5})\left[v^{r}_{{\bf{k}},2}\,-\,v^{r}_{{\bf{k}},1}\,|U_{{\bf{k}}}|\,+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}\,|V_{{\bf{k}}}|\right]$
(78) $\displaystyle\times\,V_{ud}\,\bar{u}_{{\bf{p}}_{p}}^{\sigma_{p}}\hskip
0.85358pt\gamma_{\mu}(f\,-\,\gamma^{5}g)\hskip
0.85358ptu_{{\bf{p}}_{n}}^{\sigma_{n}}\,.$
By comparison with Eq. (56), we see that the main difference is now the
splitting of the mode with mass $m_{1}$ into two contributions multiplying the
Bogoliubov coefficients $|U_{\bf{k}}|$ and $|V_{\bf{k}}|$, respectively. While
the former survives in the ultra-relativistic limit (returning Eq. (56)), the
latter has no counterpart in the Pontecorvo framework, as it vanishes for
$|V_{\bf{k}}|\rightarrow 0$. However, it is exactly this term which ensures
consistency with tree-level SM expectations. Indeed, due to the identity (31),
one immediately realizes that the quantity in the square brackets is
identically zero, which gives
$A^{b)}=0\,.$ (79)
This proves that the use of exact flavor states leads to the conservation of
lepton charge in the production vertex.
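Explicitly, the step leading to Eq. (53) suggests that identity (31) takes the form $v^{r}_{{\bf{k}},2}\,=\,|U_{\bf{k}}|\,v^{r}_{{\bf{k}},1}\,-\,\epsilon^{r}\,|V_{\bf{k}}|\,u^{r}_{-{\bf{k}},1}$ (an inference, since Eq. (31) is not reproduced in this excerpt). With this, the square bracket in Eq. (78) cancels term by term:
$v^{r}_{{\bf{k}},2}\,-\,v^{r}_{{\bf{k}},1}\,|U_{{\bf{k}}}|\,+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}\,|V_{{\bf{k}}}|\,=\,\left(|U_{\bf{k}}|\,v^{r}_{{\bf{k}},1}\,-\,\epsilon^{r}\,|V_{\bf{k}}|\,u^{r}_{-{\bf{k}},1}\right)\,-\,v^{r}_{{\bf{k}},1}\,|U_{{\bf{k}}}|\,+\,\epsilon^{r}\,u^{r}_{-{\bf{k}},1}\,|V_{{\bf{k}}}|\,=\,0\,.$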
In order to better understand the above results, we observe that the
amplitudes $A^{a)}$ and $A^{b)}$ in the short-time limit provide information
on the decay processes in proximity of the interaction vertex. We can think of
associating a wave function $v^{r}_{{\bf{k}},\nu_{e}}$ to the emitted electron
(anti-)neutrino in Eqs. (52) and (77) (and, similarly, in Eqs. (56) and (78)).
For the case of exact flavor states, Eq. (77), one can identify
$v^{r}_{{\bf{k}},\nu_{e}}=v^{r}_{{\bf{k}},1}$, which is properly normalized,
$v^{r\dagger}_{{\bf{k}},1}\,v^{r}_{{\bf{k}},1}=1$
(see Eq. (7)). By contrast, Eq. (52) suggests that
$v^{r}_{{\bf{k}},\nu_{e}}=\cos^{2}\theta\,v^{r}_{{\bf{k}},1}\,+\,\sin^{2}\theta\,v^{r}_{{\bf{k}},2}$
for Pontecorvo states. However, by using Eqs. (7) and (26), simple
calculations show that
$\displaystyle v^{r\dagger}_{{\bf{k}},\nu_{e}}\,v^{r}_{{\bf{k}},\nu_{e}}$
$\displaystyle=$
$\displaystyle(\cos^{2}\theta\,v^{r\dagger}_{{\bf{k}},1}\,+\,\sin^{2}\theta\,v^{r\dagger}_{{\bf{k}},2})\,(\cos^{2}\theta\,v^{r}_{{\bf{k}},1}\,+\,\sin^{2}\theta\,v^{r}_{{\bf{k}},2})$
(80) $\displaystyle=$
$\displaystyle\cos^{4}\theta\,+\,\sin^{4}\theta\,+\,2\sin^{2}\theta\,\cos^{2}\theta\,|U_{\bf{k}}|\,,$
from which we infer that the above wave function is not normalized, since
$|U_{{\bf{k}}}|<1$ for $m_{1}\neq m_{2}$.
On the other hand, we notice that Eq. (56) involves the wave function
$v^{r}_{{\bf{k}},\Delta_{\nu_{e}}}=\sin\theta\cos\theta(v^{r}_{{\bf{k}},2}-v^{r}_{{\bf{k}},1})$.
This combination is not normalized to unity either, since
$\displaystyle
v^{r\dagger}_{{\bf{k}},\Delta_{\nu_{e}}}v^{r}_{{\bf{k}},\Delta_{\nu_{e}}}$
$\displaystyle=$
$\displaystyle\sin^{2}\theta\cos^{2}\theta(v^{r\dagger}_{{\bf{k}},2}-v^{r\dagger}_{{\bf{k}},1})\,(v^{r}_{{\bf{k}},2}-v^{r}_{{\bf{k}},1})$
(81) $\displaystyle=$ $\displaystyle
2\sin^{2}\theta\cos^{2}\theta\left(1-|U_{{\bf{k}}}|\right)\,.$
However, this is exactly the contribution needed to restore the normalization
of the wave function (80). Indeed, we have
$v^{r\dagger}_{{\bf{k}},\nu_{e}}\,v^{r}_{{\bf{k}},\nu_{e}}\,+\,v^{r\dagger}_{{\bf{k}},\Delta_{\nu_{e}}}v^{r}_{{\bf{k}},\Delta_{\nu_{e}}}\,=\,\cos^{4}\theta\,+\,\sin^{4}\theta\,+\,2\sin^{2}\theta\,\cos^{2}\theta\,|U_{\bf{k}}|\,+\,2\sin^{2}\theta\cos^{2}\theta\left(1-|U_{{\bf{k}}}|\right)\,=\,1.$
(82)
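The normalization bookkeeping in Eqs. (80)-(83) can be verified symbolically; a minimal sketch, assuming sympy is available and treating $|U_{\bf{k}}|$ as a free real parameter (with the cross term $v^{r\dagger}_{{\bf{k}},1}v^{r}_{{\bf{k}},2}=|U_{\bf{k}}|$ read off from Eq. (80)):

```python
# Symbolic check of the normalization sums in Eqs. (82) and (83).
# |U_k| is modeled as a free real symbol U; the spinor norms
# v†_i v_i = 1 (Eq. (7)) and the cross term v†_1 v_2 = |U_k|
# (as read off from Eq. (80)) enter as inputs.
import sympy as sp

theta, U = sp.symbols('theta U', real=True)
s2, c2 = sp.sin(theta)**2, sp.cos(theta)**2

# Eq. (80): squared norm of the Pontecorvo combination cos²θ v1 + sin²θ v2
norm_P = c2**2 + s2**2 + 2*s2*c2*U
# Eq. (81): squared norm of sinθ cosθ (v2 - v1)
norm_D = 2*s2*c2*(1 - U)

# Eq. (82): the two contributions sum to unity for any θ and |U_k|
print(sp.simplify(norm_P + norm_D))   # -> 1
# Eq. (83): the mass-state sum, with each spinor norm equal to 1
print(sp.simplify(c2 + s2))           # -> 1
```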
A similar reasoning can be developed for mass states. In this case
$v^{r}_{{\bf{k}},\nu_{e}}=\cos\theta\,v^{r}_{{\bf{k}},1}$ or
$v^{r}_{{\bf{k}},\nu_{e}}=\sin\theta\,v^{r}_{{\bf{k}},2}$, depending on
whether one considers Eq. (61) or (63). Again, such wave functions are not
normalized if taken separately. Nevertheless, the following condition holds
$\cos^{2}\theta\,v^{r\dagger}_{{\bf{k}},1}v^{r}_{{\bf{k}},1}\,+\,\sin^{2}\theta\,v^{r\dagger}_{{\bf{k}},2}v^{r}_{{\bf{k}},2}\,=\,1\,.$
(83)
The above arguments clearly show that neither Pontecorvo nor mass states
provide a self-consistent description of neutron decay with neutrino
mixing, as they are not exact eigenstates of the flavor charge. However, the
reason why the sum over the two flavors (Eq. (82)) or the two masses (Eq.
(83)) allows one to recover the correct result can be understood by
considering the time-dependent flavor charges (66) and the corresponding
conserved charges for neutrinos with definite masses
$Q_{\nu_{i}}\,\equiv\,\int d^{3}{{\bf{x}}}\,\nu_{i}^{\dagger}(x)\hskip
0.85358pt\nu_{i}(x)\,\,,\qquad i=1,2\,.$ (84)
In BJV it has been proved that
$\sum_{i}Q_{\nu_{i}}\,=\,\sum_{\ell}Q_{\nu_{\ell}}(t)\,=\,Q\,,$ (85)
where $Q$ represents the total charge of the system. The above equality can be
interpreted as the conservation of the _total lepton number_ : when there is
no mixing, this number is given by the sum of two separately conserved family
lepton charges (left-hand side). On the other hand, in the presence of mixing
the same conserved number is obtained by adding the time-dependent flavor
charges, which are indeed associated with neutrino oscillations (central term).
In conclusion, we remark that our analysis has focused on the short-time limit
of the $\beta$-decay amplitude. Clearly, a comprehensive study should also
take into account the opposite limit $\Delta t\rightarrow\infty$. This has
been investigated in CYL , where it was shown that the use of exact QFT states
leads to predictions consistent with the SM for the case of relativistic
neutrinos.
## V Discussion and Conclusions
In this work we have analyzed the flavor/mass dichotomy for mixed neutrino
fields. We have considered the neutron $\beta$-decay
$n\,\rightarrow\,p\,+\,e^{-}\,+\,\bar{\nu}_{e}$ and the corresponding flavor-
violating channel $n\,\rightarrow\,p\,+\,e^{-}\,+\,\bar{\nu}_{\mu}$ as a test
bench. For these processes, we have explicitly computed the transition
amplitude using scattering theory at tree level. Special focus has been
devoted to the study of the short-time limit, i.e., the regime of very small
distances from the interaction vertex. We have carried out the calculations
using the following representations of neutrino states: _i_) Pontecorvo
states, _ii_) mass states and _iii_) exact QFT flavor states, which are
defined as eigenstates of the flavor charge. We have found that the use of the
latter representation leads to results consistent with the conservation of
lepton charge in the vertex, whereas Pontecorvo and mass states fail. This
provides a solid argument in favor of exact QFT states as the fundamental
representation for neutrinos. We stress that this result has been achieved
based on the sole requirement of consistency with the SM at tree level. In
other words, our formalism does not require any detector-dependent model for
the emission and absorption of neutrinos, nor a particular empirical
description of flavor oscillations MatsasDet .
In the above treatment, we have not explicitly calculated the neutron decay
width. This should be done for a more direct comparison of our results with
the existing literature. Nevertheless, the fact that the amplitude
$A_{n\rightarrow p+e^{-}+\bar{\nu}_{\mu}}$ computed with the exact QFT flavor
states identically vanishes in the short-time limit is enough to infer that
the decay width vanishes as well. By contrast, this does not happen when
working with Pontecorvo states or, a fortiori, with mass states, for which the
amplitude is in general nonzero. This outcome reveals unambiguously that both
Pontecorvo and mass representations are at odds with SM predictions.
Further aspects remain to be addressed in order to improve the above analysis.
Strictly speaking, we should have carried out calculations with three neutrino
generations, including the possibility of $CP$ violation. Likewise, a
description based on wave-packets would be more appropriate to take into
account finite spatial localization of neutrino production and detection.
However, we expect that such generalizations do not affect the overall
validity of our results. Furthermore, we have shown that the amplitudes
computed with exact QFT and Pontecorvo states are in good agreement in the
ultra-relativistic limit. Since observed neutrinos are essentially ultra-
relativistic, it is at present quite difficult to find experimental evidence
that discriminates between the two representations. In spite of this,
valuable hints could be provided by the PTOLEMY experiment Betts , which aims
at detecting neutrinos of the cosmic neutrino background (CNB) through capture
on tritium. Given that the temperature of the CNB is estimated to be around
$T\approx 2\,\mathrm{K}$, it is reasonable to expect that these particles are
mostly non-relativistic, thus offering a promising phenomenological window on
the issue.
Finally, relevant advances in our understanding of neutrino mixing and
oscillations in QFT might be achieved via the study of these phenomena in
non-trivial spacetimes. In this context, an emblematic research line is the
analysis of accelerated proton decay in the Rindler metric Ahluw ;
ProtBlas ; ProtMat ; GaetanoLucia . Work along this direction is currently in
progress and will be presented in future publications.
###### Acknowledgements.
The author is grateful to Massimo Blasone for useful discussions. He also
acknowledges the Spanish “Ministerio de Universidades” for the awarded Maria
Zambrano fellowship and funding received from the European Union -
NextGenerationEU. He is grateful for participation in the COST Association
Action CA18108 “Quantum Gravity Phenomenology in the Multimessenger Approach”
and LISA Cosmology Working group.
## References
* (3) B. Pontecorvo, Zh. Eksp. Teor. Fiz. 34, 247 (1957).
* (4) V. N. Gribov and B. Pontecorvo, Phys. Lett. B 28, 493 (1969).
* (5) H. Fritzsch and P. Minkowski, Phys. Lett. B 62, 72 (1976).
* (6) Z. Maki, M. Nakagawa and S. Sakata, Prog. Theor. Phys. 28, 870 (1962).
* (7) M. L. Perl et al., Phys. Rev. Lett. 35, 1489 (1975).
* (8) L. Wolfenstein, Phys. Rev. D 17, 2369 (1978).
* (9) S. P. Mikheyev and A. Y. Smirnov, Sov. J. Nucl. Phys. 42, 913 (1985).
* (10) Y. Fukuda et al. [Super-Kamiokande], Phys. Rev. Lett. 81, 1562 (1998).
* (11) T. Kajita, Rev. Mod. Phys. 88, 030501 (2016).
* (12) Q. R. Ahmad et al. [SNO], Phys. Rev. Lett. 89, 011301 (2002).
* (13) A. B. McDonald, Rev. Mod. Phys. 88, 030502 (2016).
* (14) K. Eguchi et al. [KamLAND], Phys. Rev. Lett. 90, 021802 (2003).
* (15) L. Stodolsky, Gen. Rel. Grav. 11, 391 (1979).
* (16) D. V. Ahluwalia and C. Burgard, Gen. Rel. Grav. 28, 1161 (1996).
* (17) N. Fornengo, C. Giunti, C. W. Kim and J. Song, Phys. Rev. D 56, 1895 (1997).
* (18) C. Y. Cardall and G. M. Fuller, Phys. Rev. D 55, 7960 (1997).
* (19) G. Lambiase, G. Papini, R. Punzi and G. Scarpetta, Phys. Rev. D 71, 073011 (2005).
* (20) C. Giunti and C. W. Kim, _Fundamentals of Neutrino Physics and Astrophysics_ (Oxford Univ. Press, 2007).
* (21) N. E. Mavromatos, A. Meregaglia, A. Rubbia, A. Sakharov and S. Sarkar, Phys. Rev. D 77, 053014 (2008).
* (22) G. Pagliaroli, F. Vissani, E. Coccia and W. Fulgione, Phys. Rev. Lett. 103, 031102 (2009).
* (23) F. Vissani, G. Pagliaroli and F. Rossi-Torres, Int. J. Mod. Phys. D 20, 1873 (2011).
* (24) S. Capozziello and M. De Laurentis, Phys. Rept. 509, 167 (2011).
* (25) M. Dvornikov, Mod. Phys. Lett. A 30, 1530017 (2015).
* (26) L. Buoninfante, G. G. Luciano, L. Petruzziello and L. Smaldone, Phys. Rev. D 101, 024016 (2020).
* (27) M. Blasone, G. Lambiase, G. G. Luciano, L. Petruzziello and L. Smaldone, Class. Quant. Grav. 37, 155004 (2020).
* (28) G. G. Luciano and M. Blasone, Universe 7, 417 (2021).
* (29) G. G. Luciano, Phys. Lett. B 823, 136772 (2021).
* (30) H. Swami, K. Lochan and K. M. Patel, Phys. Rev. D 104, 095007 (2021).
* (31) A. R. Khalifeh and R. Jimenez, Phys. Dark Univ. 34, 100897 (2021).
* (32) M. Blasone, F. Dell’Anno, S. De Siena and F. Illuminati, EPL 85, 50002 (2009).
* (33) S. Banerjee, A. K. Alok, R. Srikanth and B. C. Hiesmayr, Eur. Phys. J. C 75, 487 (2015).
* (34) A. Kumar Jha, S. Mukherjee and B. A. Bambah, Mod. Phys. Lett. A 36, 2150056 (2021).
* (35) A. Roggero, Phys. Rev. D 104, 103016 (2021).
* (36) L. J. Li, F. Ming, X. K. Song, L. Ye and D. Wang, Eur. Phys. J. C 81, 728 (2021).
* (37) M. Blasone, S. De Siena and C. Matrella, Eur. Phys. J. C 81, 660 (2021).
* (38) A. Marini, S. Longhi and F. Biancalana, Phys. Rev. Lett. 113, 150401 (2014).
* (39) C. Rott, A. Taketa and D. Bose, Sci. Rep. 5, 15225 (2015).
* (40) M. Blasone and G. Vitiello, Annals Phys. 244, 283 (1995) [erratum: Annals Phys. 249, 363 (1996)].
* (41) M. Blasone, M. V. Gargiulo and G. Vitiello, Phys. Lett. B 761, 104 (2016).
* (42) M. Blasone, F. Illuminati, G. G. Luciano and L. Petruzziello, Phys. Rev. A 103, 032434 (2021).
* (43) A. Cabo Montes de Oca and N. G. Cabo Bizet, [arXiv:2005.07758 [hep-ph]].
* (44) G. Barenboim and N. E. Mavromatos, Phys. Rev. D 70, 093015 (2004)
* (45) A. Capolupo, S. Capozziello and G. Vitiello, Phys. Lett. A 363, 53 (2007).
* (46) N. Mavromatos and M. Sakellariadou, Phys. Lett. B 652, 97 (2007).
* (47) M. Blasone, G. Lambiase and G. G. Luciano, Phys. Rev. D 96, 025023 (2017); J. Phys. Conf. Ser. 956, 012021 (2018).
* (48) M. Blasone, P. Jizba, N. E. Mavromatos and L. Smaldone, Phys. Rev. D 100, 045027 (2019).
* (49) A. Capolupo, G. Lambiase and A. Quaranta, Phys. Rev. D 101, 095022 (2020).
* (50) G. G. Luciano and M. Blasone, Phys. Rev. D 104, 045004 (2021); Eur. Phys. J. C 81, 995 (2021).
* (51) L. Smaldone and G. Vitiello, Universe 7, 504, (2021).
* (52) A. E. Bernardini and S. De Leo, Eur. Phys. J. C 37, 471 (2004).
* (53) A. E. Bernardini and S. De Leo, Phys. Rev. D 71, 076008 (2005).
* (54) K. C. Hannabuss and D. C. Latimer, J. Phys. A 33, 1369 (2000); K. C. Hannabuss and D. C. Latimer, J. Phys. A 36, L69 (2003).
* (55) D. V. Ahluwalia, L. Labun, and G. Torrieri, Eur. Phys. J. A 52, 189 (2016).
* (56) M. Blasone, G. Lambiase, G. G. Luciano and L. Petruzziello, Phys. Rev. D 97, 105008 (2018); Phys. Lett. B 800, 135083 (2020); Eur. Phys. J. C 80, 130 (2020).
* (57) G. Cozzella, S. A. Fulling, A. G. S. Landulfo, G. E. A. Matsas and D. A. T. Vanzella, Phys. Rev. D 97, 105022 (2018).
* (58) C. Giunti, C. W. Kim, J. A. Lee and U. W. Lee, Phys. Rev. D 48 4310 (1993).
* (59) C. Giunti, Eur. Phys. J. C 39, 377 (2005).
* (60) E. K. Akhmedov and J. Kopp, JHEP 1004, 008 (2010).
* (61) M. Blasone, G. Lambiase, G. G. Luciano and L. Petruzziello, J. Phys. Conf. Ser. 1275, 012063 (2019).
* (62) M. Blasone and L. Smaldone, Mod. Phys. Lett. A 35, 2050313 (2020).
* (63) G. Gaetano Luciano, J. Phys. Conf. Ser. 1956, 012008 (2021).
* (64) M. Blasone, P. Jizba, and G. Vitiello, _Quantum Field Theory and Its Macroscopic Manifestations: Boson Condensation, Ordered Patterns, and Topological Defects_ (Imperial College Press, 2011).
* (65) M. H. Stone, Proceedings of the National Academy of Sciences 16, 172 (1930).
* (66) J. von Neumann, Mathematische Annalen 104, 570 (1931).
* (67) M. Blasone, A. Capolupo, C. R. Ji and G. Vitiello, Int. J. Mod. Phys. A 25, 4179 (2010).
* (68) C. Y. Lee, Mod. Phys. Lett. A 35, 2030015 (2020).
* (69) T. P. Cheng and L. F. Li, _Gauge theory of elementary particle physics_ (Oxford Univ. Press, 1984).
* (70) C. R. Ji and Y. Mishchenko, Phys. Rev. D 64, 076004 (2001); Phys. Rev. D 65, 096015 (2002).
* (71) M. Blasone, A. Capolupo, F. Terranova and G. Vitiello, Phys. Rev. D 72, 013003 (2005).
* (72) K. Fujii and T. Shimomura, [arXiv:hep-ph/0406079 [hep-ph]].
* (73) M. Blasone, M. Di Mauro and G. Vitiello, Phys. Lett. B 697, 238 (2011).
* (74) C. Itzykson and J. B. Zuber, _Quantum Field Theory_ (McGraw-Hill, New York, 1980).
* (75) S. Ando, H. W. Fearing, V. P. Gudkov, K. Kubodera, F. Myhrer, S. Nakamura and T. Sato, Phys. Lett. B 595, 250 (2004).
* (76) C. C. Nishi, Phys. Rev. D 78, 113007 (2008).
* (77) S.M. Bilenky, Phys. Scripta T 127, 8 (2006); S.M Bilenky and M.D. Mateev, Phys. Part. Nucl. 38, 117 (2007).
* (78) M. Blasone, P. Jizba and L. Smaldone, Phys. Rev. D 99, 016014 (2019).
* (79) R. E. Shrock, Phys. Lett. B 96, 159 (1980); B. Kayser, Phys. Rev. D 24, 110 (1981); I. Y. Kobzarev, B. V. Martemyanov, L. B. Okun and M. G. Shchepkin, Sov. J. Nucl. Phys. 35, 708 (1982).
* (80) M. Blasone, A. Capolupo, C. R. Ji and G. Vitiello, Nucl. Phys. B Proc. Suppl. 188, 37 (2009).
* (81) M. Blasone, P. Jizba and G. Vitiello Phys. Lett. B, 517, 471 (2001).
* (82) B. d. L. Torres, T. R. Perche, A. G. S. Landulfo and G. E. A. Matsas, Phys. Rev. D 102, 093003 (2020).
* (83) S. Betts, W. R. Blanchard, R. H. Carnevale, C. Chang, C. Chen, S. Chidzik, L. Ciebiera, P. Cloessner, A. Cocco and A. Cohen, et al. [arXiv:1307.4738 [astro-ph.IM]].
|
# On the power of nonstandard quantum oracles
Roozbeh Bassirian (University of Chicago), Bill Fefferman (University of
Chicago), and Kunal Marwaha (University of Chicago)
###### Abstract
We study how the choices made when designing an oracle affect the complexity
of quantum property testing problems defined relative to this oracle. We
encode a regular graph of even degree as an invertible function $f$, and
present $f$ in different oracle models. We first give a one-query
${\mathsf{QMA}}$ protocol to test if a graph encoded in $f$ has a small
disconnected subset. We then use representation theory to show that no
classical witness can help a quantum verifier efficiently decide this problem
relative to an in-place oracle. Perhaps surprisingly, a simple modification to
the standard oracle prevents a quantum verifier from efficiently deciding this
problem, even with access to an unbounded witness.
## 1 Introduction
Computational complexity is the study of the innate amount of resources
required to complete some task. We assign _complexity classes_ to sets of
tasks that require similar amounts of resources; from here, the goal is to
understand the relationship between complexity classes. There has been some
success proving that two complexity classes are equal, for example
$\text{IP}=\text{PSPACE}$ [Sha92], the PCP theorem [ALM+98], and
$\text{MIP}^{*}=\text{RE}$ [JNV+21]; however, proving that two complexity
classes are _unequal_ has been much more elusive. For example, we cannot prove
$\text{P}\neq\text{PSPACE}$, let alone $\text{P}\neq\text{NP}$.
One response to this difficulty is to equip a computational model with an
_oracle_ , which computes a fixed (but arbitrarily powerful) quantity in a
single timestep. It is often easier to prove that a statement (e.g.
$\text{P}\neq\text{NP}$) is true relative to an oracle; furthermore, this
restricts the kinds of proof techniques that can show the statement is false
without an oracle. In addition to separating complexity classes, oracles and
_query complexity_ naturally arise in cryptography (e.g. [KL20]) and learning
theory (e.g. [KM93]).
Even with respect to an oracle, proving that some complexity classes are
unequal can be surprisingly difficult. Notably, Aharonov and Naveh define
${\mathsf{QCMA}}$, a subset of ${\mathsf{QMA}}$ where the witness is a
classical bitstring [AN02], and ask if
${\mathsf{QCMA}}\subsetneq{\mathsf{QMA}}$. Aaronson and Kuperberg conjecture
that an oracle separates these classes, but only prove a “quantum oracle”
where this occurs [AK07]. Subsequent works [FK18, NN22] remove the
“quantumness” from the oracle model, but still use models with internal
randomness or other nonstandard aspects.
We consider quantum property testing problems defined relative to oracles from
various oracle models: encoding the edges of a graph in an invertible function
$f$, we present $f$ as either a _standard_ oracle or _in-place_ oracle, with
or without internal randomness. With mild restrictions on the workspace of
quantum verifiers, we find:
1. 1.
In several oracle models presenting $f$, a _quantum_ witness can help a
quantum verifier efficiently decide if the graph encoded in $f$ has a small
disconnected subset.
2. 2.
Where $f$ is presented as a randomized in-place oracle, no _classical_ witness
can help a quantum verifier efficiently decide this problem.
3. 3.
Where $f$ is presented as a randomized phase oracle, no witness _of any type
or size_ can help a quantum verifier efficiently decide this problem.
Our results highlight that the quantum complexity of a task defined relative
to an oracle is influenced by the choice of oracle model.
### 1.1 Our techniques
We use a well-known theorem of Petersen to encode the edges of any even-degree
regular graph in an invertible function $f$. We then consider natural ways to
install $f$ within an oracle; we say that $f$ is _presented_ as a particular
kind of oracle. For example, a standard oracle presents $f$ through the map
$\ket{c,x}\mapsto\ket{c\oplus f(x),x}$, while an in-place oracle presents $f$
through the map $\ket{x}\mapsto\ket{f(x)}$. In general, we consider oracles
that give access both to $f$ and $f^{-1}$. An oracle may also have internal
randomness: on every query to a _randomized_ oracle, $f$ is chosen uniformly
at random from a fixed set of functions $F$.
Consider the Laplacian $L_{f}$ of a graph encoded in $f$. We first provide a
test such that for any input state $\ket{\psi}$, the test succeeds with
probability expressible in terms of $\bra{\psi}L_{f}\ket{\psi}$, independently
of how an oracle presents $f$. We use this test to construct a
${\mathsf{QMA}}$ protocol verifying that the graph is not an _expander_ graph.
This problem is primarily motivated by the preimage-testing problem of
Fefferman and Kimmel [FK18], which separates ${\mathsf{QMA}}$ and
${\mathsf{QCMA}}$ relative to a nonstandard oracle. They encode an invertible
function $\pi$ in an oracle _without efficient access to $\pi^{-1}$_, and test
a property of $\pi^{-1}$; by design, this property can be verified but not
easily computed. Crucially, we view a permutation and its inverse as the edges
of an _undirected graph_ ; properties of undirected graphs are not sensitive
to the ordering of $(x,\pi(x))$. We use multiple permutations to study graphs
of higher degree, and notice that detecting if a graph has a non-expanding
region is hard without traversing most of the graph. Some of these ideas are
related to the _component mixer_ concept of Lutomirski [Lut11], and are
simultaneously and independently explored by Natarajan and Nirkhe [NN22].
A randomized oracle presenting a set of functions $F$ can be seen as a quantum
channel, so small changes to $F$ cause statistically indistinguishable changes
to the oracle. We use this flexibility to modify non-expansion testing to a
simple permutation problem: do the functions $f\in F$ stabilize a small set
$V\subseteq[N]$, or is $F$ the set of all permutations on $[N]$? Notice that
$F$ is a group in both cases. When an oracle presenting $F$ preserves the
group structure of $F$, we can use representation theory. For this problem,
this is satisfied by an _in-place_ oracle; the oracle is then an orthogonal
projector onto one of two symmetric subspaces of matrices. After finding an
orthogonal basis for each subspace, we construct a hybrid argument to prove
that only witnesses with knowledge of $V$ can help a quantum verifier
efficiently decide this problem. We also use representation theory to give a
${\mathsf{QCMA}}$ protocol for an analogous permutation problem in randomized
standard oracles.
We finally study the permutation problem in a randomized phase oracle. We
directly analyze the effect of the oracle on any input density matrix; with
high probability, the oracle decreases the magnitude of every off-diagonal
term by a $\frac{1}{2^{{\rm poly}(n)}}$ factor. We then construct a hybrid
argument, bounding our measure of progress using an inequality relating the
sizes of Schatten $p$-norms. When the state space is not too large, we prove
that an exponential number of queries are required to distinguish most YES
instances from the NO instance, _regardless of input state_. As a result, no
witness can help a verifier distinguish YES from NO.
Note that our quantum verifiers are not fully general. Our lower bound
techniques restrict the number of extra workspace qubits in the verifier;
however, our upper bounds also work in this setting. In Section 4, we explain
these restrictions in more detail and discuss prospects for generalizing our
results.
### 1.2 Related work
#### Quantum oracle models
A fundamental constraint of quantum oracle models is that they must be
unitary. We describe several nonstandard oracle models used in quantum
computing:
* •
A _quantum_ oracle is any unitary operation $U$ in the full Hilbert space.
Although the operation is unitary, the verifier doesn’t necessarily have
access to $U^{-1}$. Such oracles are typically not classical, because the
unitary’s action need not be efficiently representable by a classical circuit.
* •
An _in-place_ oracle maps $\ket{x}\to\ket{\pi(x)}$ for some classical
invertible function $\pi$. Again, this computation is not efficiently
reversible since the verifier may not have access to $\pi^{-1}$. When a
standard oracle gives access to $\pi^{-1}$, an in-place oracle query can be
simulated in two queries; otherwise, an exponential number of queries are
required to construct one from the other [KKVB02].
* •
A _phase_ oracle puts the output of a classical function $f$ in the phase of a
basis state. We consider the map $\ket{x}\to e^{f(x)\cdot 2\pi i/N}\ket{x}$.
To contrast, note that the map $\ket{c,x}\to e^{cf(x)\cdot 2\pi i/N}\ket{c,x}$
is unitarily equivalent to the standard oracle.
All of these oracles can optionally have _internal randomness_ , as considered
by Harrow and Rosenbaum [HR11]; we call these _randomized_ oracles. On every
query to a randomized oracle, a unitary is chosen at random from a fixed set.
This can be very powerful; for example, [HR11] gives examples of randomized
oracles where problems _impossible_ to decide with classical queries can be
decided with a single quantum query.
#### ${\mathsf{QMA}}$ and ${\mathsf{QCMA}}$
The _Merlin-Arthur_ style of complexity classes considers a decision problem
and two players. The magician (Merlin) has claimed the answer to the decision
problem is YES, and gives the verifier a token (the _proof_ or _witness_) to
convince them. The verifier (Arthur) must then ensure the answer is actually
YES. Given a problem with size $n$, the verifier must accept a correct witness
(i.e. when the answer is YES) with probability $1/q$ higher than a “lying”
witness (i.e. when the answer is NO) for some $q={\rm poly}(n)$. The set of
problems that can be decided this way in a classical setting is known as
Merlin Arthur (${\mathsf{MA}}$). If the verifier is a quantum computer, this
is ${\mathsf{QCMA}}$; if the witness can be any quantum state, this is
${\mathsf{QMA}}$.
| | verifier is classical | verifier is quantum |
|---|---|---|
| witness is classical | ${\mathsf{MA}}$ | ${\mathsf{QCMA}}$ |
| witness is quantum | - | ${\mathsf{QMA}}$ |
Table 1: Complexity classes in the style of _Merlin-Arthur_. ${\mathsf{QCMA}}$
is a subset of ${\mathsf{QMA}}$ where the witness can be efficiently written
as a classical bitstring.
Since any classical bitstring can be efficiently written as a quantum state,
${\mathsf{QCMA}}$ $\subseteq$ ${\mathsf{QMA}}$. But is the reverse true? Even
the _oracle_ version of this problem is open: at the top of a recent list of
open problems, Aaronson asks for a standard oracle that separates the two
classes [Aar21]. All previous progress [AK07, FK18, NN22] relies on
specifically chosen _nonstandardness_ in the oracle.
Natarajan and Nirkhe [NN22] make progress on a standard oracle separation of
${\mathsf{QMA}}$ and ${\mathsf{QCMA}}$ by constructing an oracle with
randomness. They simultaneously and independently provide a ${\mathsf{QMA}}$
protocol for testing non-expansion of a graph in an oracle. To prove their
lower bound, they combine the adversary method, the polynomial method, and a
reduction to a problem of Ambainis, Childs, and Liu [ACL11]. However, their
notion of randomness is different from ours and other works [HR11, FK18,
AGS22], and acts as follows: when an oracle is first queried, it chooses a
function $f$ from a distribution, but on subsequent queries, it uses the same
function $f$. By contrast, our notion of randomness is memoryless: an oracle
chooses $f$ from a uniform distribution on $F$ for _every_ query. This allows
one to make small changes to $F$ without affecting the success of the
${\mathsf{QMA}}$ protocol; we use this flexibility to study a simpler
permutation problem.
### 1.3 Outline of the paper
In Section 2, we show how to encode the edges of a graph in an invertible
function, and define the oracle models and decision problems we consider. In
Section 3, we explain our ${\mathsf{QMA}}$ protocol for non-expansion and for
a simpler permutation problem of randomized oracles. We apply representation
theory to randomized oracles and prove our classical-witness lower bound in
Section 4; some technical details are deferred to Appendix A and Appendix C.
In Section 5, we consider a phase oracle and prove our witness-agnostic lower
bound. We include relevant facts about norms and inner products in Appendix B,
and contrast our setup with quantum walks in Appendix D.
## 2 Our setup
Consider a $d$-regular graph $G$ on $N:=2^{n}$ vertices, for any $n$ and even
$d$. We show that an invertible function can list the edges adjacent to each
vertex of $G$.
###### Definition 2.1 (Graph-coded function).
Consider a $d$-regular graph $G$ (for even $d$) on $N$ vertices. A $G$-coded
function is a function $f:[N]\times[d/2]\to[N]$, such that $f_{i}(x):=f(x,i)$
is a bijection for each $i\in[d/2]$, and each edge is uniquely represented by
a tuple $(x,f_{i}(x))$.
###### Remark 2.2 (Even-degree regular graphs have graph-coded functions).
Every regular graph $G$ of even degree has a $G$-coded function.
###### Proof.
A $d$-regular graph $G$ of even degree always has a 2-factorization [Pet00].
This means that the edges of $G$ can be partitioned into $d/2$ edge-disjoint
subgraphs $[E_{1},\dots,E_{d/2}]$ where in each $E_{i}$, all vertices have
degree two (i.e. a collection of cycles). Thus, we can represent each $E_{i}$
with a permutation $\pi_{i}$, where the edge $(x,y)\in E_{i}$ if and only if
$\pi_{i}(x)=y$ or $\pi_{i}(y)=x$. Then $f(x,i):=\pi_{i}(x)$ is a $G$-coded
function. ∎
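The construction in the proof above can be sketched in a few lines of Python (not the authors' code; the two sample permutations and the name `f` are our choices). Each permutation $\pi_{i}$ contributes degree two to every vertex, so $d/2$ permutations yield a $d$-regular graph:

```python
# A sketch, assuming two arbitrary edge-disjoint cycle covers of [N]:
# each permutation pi_i is one 2-factor, and f(x, i) := pi_i(x) is the
# G-coded function of Definition 2.1 / Remark 2.2.
N = 8
pi = [
    [(x + 1) % N for x in range(N)],   # one 8-cycle
    [(x + 3) % N for x in range(N)],   # another 8-cycle (gcd(3, 8) = 1)
]

def f(x, i):
    """G-coded function: f(x, i) := pi_i(x)."""
    return pi[i][x]

# Each f_i must be a bijection on [N].
for i in range(len(pi)):
    assert sorted(f(x, i) for x in range(N)) == list(range(N))

# Each tuple (x, f(x, i)) is an undirected edge; every vertex then has
# degree d = 2 * (d/2) = 4.
degree = [0] * N
for i in range(len(pi)):
    for x in range(N):
        degree[x] += 1
        degree[f(x, i)] += 1
assert degree == [4] * N
```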
Graph-coded functions $f$ are bijective, and therefore invertible. We now
_present_ $f$ in various oracle models. Note that we define all oracles with
access both to $f$ and $f^{-1}$.
###### Definition 2.3 (An oracle model _presents_ a function $f$).
For each oracle model below (e.g. standard oracle), we say that this oracle
model _presents_ the function $f$.
###### Remark 2.4.
For notational convenience, we refer to a qubit $z$ that controls the
inversion of a function $f$ as taking on values in $\\{\pm 1\\}$, so that
$f^{z}$ is either $f^{1}=f$ or $f^{-1}$.
###### Definition 2.5 (Standard oracle).
For any $f:[N]\to[N]$, define
$U_{f}:\mathbb{C}^{2N^{2}}\to\mathbb{C}^{2N^{2}}$ as
$\displaystyle U_{f}\sum_{c,x\in[N],z\in\\{\pm
1\\}}\alpha_{c,x,z}\ket{c,x,z}:=\sum_{c,x\in[N],z\in\\{\pm
1\\}}\alpha_{c,x,z}\ket{c\oplus f^{z}(x),x,z}\,.$ (2.1)
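For a small $N$, the standard oracle of Definition 2.5 can be written out as an explicit $2N^{2}\times 2N^{2}$ matrix. The sketch below (our own illustration; the bijection `f` and the index convention in `idx` are arbitrary choices) also checks that $U_{f}$ is both unitary and self-inverse, since XOR-ing $f^{z}(x)$ into the first register twice restores it:

```python
import numpy as np
from itertools import product

# Standard oracle U_f on basis states |c, x, z>, z in {+1, -1}.
N = 4
f = [1, 2, 3, 0]                       # a sample bijection on [N]
finv = [f.index(y) for y in range(N)]

def f_pow(z, x):
    return f[x] if z == +1 else finv[x]

def idx(c, x, z):
    """Flatten (c, x, z) into one index; z = +1 -> 0, z = -1 -> 1."""
    return (c * N + x) * 2 + (0 if z == +1 else 1)

dim = 2 * N * N
U = np.zeros((dim, dim))
for c, x in product(range(N), repeat=2):
    for z in (+1, -1):
        # |c, x, z>  ->  |c XOR f^z(x), x, z>
        U[idx(c ^ f_pow(z, x), x, z), idx(c, x, z)] = 1.0

# U_f permutes basis states, hence is unitary; it is also self-inverse.
assert np.allclose(U @ U.T, np.eye(dim))
assert np.allclose(U @ U, np.eye(dim))
```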
###### Definition 2.6 (In-place oracle [KKVB02]).
For any permutation $\pi:[N]\to[N]$, define
$\widetilde{U}_{\pi}:\mathbb{C}^{2N}\to\mathbb{C}^{2N}$ as
$\displaystyle\widetilde{U}_{\pi}\sum_{x\in[N],z\in\\{\pm
1\\}}\beta_{x,z}\ket{x,z}:=\sum_{x\in[N],z\in\\{\pm
1\\}}\beta_{x,z}\ket{\pi^{z}(x),z}\,.$ (2.2)
###### Remark 2.7 ([KKVB02]).
A standard oracle $U_{f}$ (with access to $f^{-1}$) can simulate an in-place
oracle $\widetilde{U}_{f}$ in two queries:
$\displaystyle(\mathbb{I}\otimes X)\circ U_{f}\circ(SWAP_{n,n}\otimes X)\circ
U_{f}\ket{0}^{\otimes n}\ket{x,z}=\ket{0}^{\otimes n}\ket{f^{z}(x),z}\,.$
(2.3)
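Since every gate in Eq. (2.3) permutes computational basis states, the identity can be checked by tracking a single basis state through the circuit. A minimal sketch (assuming an arbitrary sample bijection `f`; the helper names are ours):

```python
# Verify Remark 2.7 on classical basis states |c, x, z>.
N = 8
f = [3, 5, 0, 7, 1, 6, 2, 4]           # a sample bijection on [N]
finv = [f.index(y) for y in range(N)]

def U_f(c, x, z):
    fx = f[x] if z == +1 else finv[x]
    return (c ^ fx, x, z)

def swap_and_X(c, x, z):               # SWAP_{n,n} (x) X on z
    return (x, c, -z)

def final_X(c, x, z):                  # I (x) X on z
    return (c, x, -z)

for x in range(N):
    for z in (+1, -1):
        state = (0, x, z)
        for gate in (U_f, swap_and_X, U_f, final_X):
            state = gate(*state)
        fx = f[x] if z == +1 else finv[x]
        # (I (x) X) U_f (SWAP (x) X) U_f |0, x, z> = |0, f^z(x), z>
        assert state == (0, fx, z)
```

The middle SWAP moves $f^{z}(x)$ into the query register, and flipping $z$ makes the second $U_{f}$ query XOR $f^{-z}(f^{z}(x))=x$ into the ancilla, uncomputing it.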
###### Definition 2.8 ($N^{\text{th}}$ root of unity).
Define the $N^{\text{th}}$ root of unity as $\omega_{N}:=e^{2\pi i/N}$.
###### Definition 2.9 (Phase oracle).
For any function $f:[N]\to[N]$, define
$\overline{U}_{f}:\mathbb{C}^{2N}\to\mathbb{C}^{2N}$ as
$\displaystyle\overline{U}_{f}\sum_{x\in[N],z\in\\{\pm
1\\}}\alpha_{x,z}\ket{x,z}:=\sum_{x\in[N],z\in\\{\pm
1\\}}\alpha_{x,z}\omega_{N}^{f^{z}(x)}\ket{x,z}\,.$ (2.4)
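The phase oracle is a diagonal unitary. A short sketch (our illustration, restricted to the $z=+1$ branch for brevity; `f` is an arbitrary sample bijection) builds it for small $N$ and checks that its $N$th power is the identity, since $\omega_{N}^{N}=1$:

```python
import numpy as np

# Phase oracle of Definition 2.9 on the z = +1 branch:
# |x> -> omega_N^{f(x)} |x>.
N = 8
f = [3, 5, 0, 7, 1, 6, 2, 4]
omega = np.exp(2j * np.pi / N)
U = np.diag([omega ** f[x] for x in range(N)])

# Diagonal with unit-modulus entries: U is unitary, and U^N = I.
assert np.allclose(U @ U.conj().T, np.eye(N))
assert np.allclose(np.linalg.matrix_power(U, N), np.eye(N))
```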
We describe how an oracle in our setup exhibits internal randomness. On each
query, a _randomized_ oracle chooses a function uniformly from a set $F$. We
say that a randomized oracle _presents_ $F$.
###### Remark 2.10.
Given a unitary $U$, we use the notation $\mathcal{U}$ to denote an operator
on density matrices; that is,
$\displaystyle\mathcal{U}[\rho]:=U\rho U^{\dagger}\,.$ (2.5)
###### Definition 2.11 (Randomized oracle (e.g. [HR11, FK18])).
For any set $F$ of functions $f:[N]\to[N]$ corresponding to oracles $\\{U_{f}\
|\ f\in F\\}$, define the linear operator $\mathcal{O}_{F}$ as
$\displaystyle\mathcal{O}_{F}:=\frac{1}{|F|}\sum_{f\in F}\mathcal{U}_{f}\,.$
(2.6)
We match the notation of randomized oracle $\mathcal{O}_{F}$ with oracle
$U_{f}$; e.g. $\mathcal{\widetilde{O}}_{F}$ is a randomized _in-place_ oracle.
### 2.1 Problem statements
The problems below are not fully specified without the choice of oracle model.
We prepend the names below with the choice of oracle model; for example, we
denote Problem 2.12 in a standard oracle as STANDARD
NON-EXPANSION$(d,\alpha,\epsilon)$.
###### Problem 2.12 (NON-EXPANSION$(d,\alpha,\epsilon)$).
Consider an oracle $U_{f}$ presenting a $G$-coded function $f$.
1. 1.
In a YES instance, we are promised that $G$ is a union of two disconnected
$d$-regular graphs, and that the smaller graph has $N^{\alpha}$ vertices.
2. 2.
In a NO instance, we are promised that $G$ is $d$-regular and has spectral gap
at least $\epsilon$ (for example, an _expander graph_).
The problem is to decide whether $U_{f}$ is a YES instance or NO instance.
We also consider a version of this problem with _randomized_ oracles, where
each randomized YES instance is specified by the set of vertices $V$ of the
smaller graph. On each query, the oracle chooses, uniformly at random, a
graph-coded function $f$ corresponding to a graph in which $V$ and $[N]/V$ are
disconnected.
###### Problem 2.13 (RANDOMIZED NON-EXPANSION$(d,\alpha,\epsilon)$).
Consider a randomized oracle $\mathcal{O}_{F}$ presenting a set of graph-coded
functions $F$.
1. 1.
Each subset $V\subseteq[N]$ of size $|V|=N^{\alpha}$ specifies a YES instance
$\mathcal{O}_{F_{V}}$. Let $F_{V}$ be the set of all $G$-coded functions of
$d$-regular graphs $G$ with no edges between $V$ and $[N]/V$.
2. 2.
There is a single NO instance $\mathcal{O}_{F_{\varnothing}}$. Let
$F_{\varnothing}$ be the set of all $G$-coded functions of $d$-regular graphs
$G$ with spectral gap at least $\epsilon$.
The problem is to decide whether $\mathcal{O}$ is a YES instance or a NO
instance.
In the configuration model of a random graph, $F_{V}$ contains all functions
$f(x,i)$ such that $f_{i}(x):=f(x,i)$ is the union of a permutation on $[N]/V$
and a permutation on $V$. In fact, we can use the oracle’s internal randomness
to adjust the underlying set $F$, and even consider graphs that are not
typically expander graphs.
###### Definition 2.14 (Subset indicator).
For a set $V\subseteq[N]$, define the function $i_{V}:[N]\to\\{V,[N]/V\\}$ as
$\displaystyle i_{V}(x)=\begin{cases}V&x\in V\\\ [N]/V&x\notin
V\,.\end{cases}$ (2.7)
###### Definition 2.15 (Permutations that stabilize a subset).
For a set $V\subseteq[N]$, define the set of permutations
$\displaystyle T_{V}:=\\{\pi:[N]\to[N]\,:\,i_{V}(x)=i_{V}(\pi(x))\,\forall
x\in[N]\\}\,.$ (2.8)
We say that $T_{V}$ _stabilizes_ the subset $V$.
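For small $N$ the set $T_{V}$ can be enumerated directly. A sketch (our illustration; the choice $V=\{0\}$ is arbitrary) checks that $|T_{V}|=|V|!\,(N-|V|)!$ and that $T_{V}$ is closed under composition, so it is a group, as noted after Problem 2.16:

```python
from itertools import permutations

# Enumerate T_V = permutations pi with i_V(x) = i_V(pi(x)) for all x.
N, V = 4, {0}

def stabilizes(pi):
    return all((pi[x] in V) == (x in V) for x in range(N))

T_V = [pi for pi in permutations(range(N)) if stabilizes(pi)]

# |T_V| = |V|! * (N - |V|)!  (independent permutations of V and [N]/V).
assert len(T_V) == 1 * 6

# Closure under composition; the remaining group axioms follow for a
# finite subset of a group that is closed under the group operation.
compose = lambda p, q: tuple(p[q[x]] for x in range(N))
assert all(compose(p, q) in set(T_V) for p in T_V for q in T_V)
```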
###### Problem 2.16 (RANDOMIZED HIDDEN SUBSET$(\alpha)$).
Consider a randomized oracle $\mathcal{O}_{F}$ presenting a set of functions
$F$.
1. 1.
Each subset $V\subseteq[N]$ of size $|V|=N^{\alpha}$ specifies a YES instance
$\mathcal{O}_{T_{V}}$.
2. 2.
There is a single NO instance $\mathcal{O}_{T_{\varnothing}}$, where
$T_{\varnothing}$ is the set of all permutations of $[N]$.
The problem is to decide whether $\mathcal{O}$ is a YES instance or a NO
instance.
Notice that RANDOMIZED HIDDEN SUBSET$(\alpha)$ is exactly RANDOMIZED
NON-EXPANSION$(2,\alpha,0)$.
Notice that $T_{V}$ is a group under function composition. One can generalize
this algebraic structure to a problem distinguishing oracles presenting
subgroups of $T_{\varnothing}$ from an oracle presenting $T_{\varnothing}$:
###### Problem 2.17 (RANDOMIZED HIDDEN SUBGROUP$(\times,\\{H_{i}\\})$).
Consider the set $T_{\varnothing}$ of all permutations on $[N]$ as a group
with operation $\times$, such that each $H_{i}\subsetneq T_{\varnothing}$ is
also a group. Suppose a randomized oracle $\mathcal{O}$ presents either
$T_{\varnothing}$ or any $H_{i}$.
1. 1.
Each subgroup $H_{i}$ specifies a YES instance $\mathcal{O}_{H_{i}}$.
2. 2.
There is a single NO instance $\mathcal{O}_{T_{\varnothing}}$, where
$T_{\varnothing}$ is the set of all permutations of $[N]$.
The problem is to decide whether $\mathcal{O}$ is a YES instance or a NO
instance.
For example, RANDOMIZED HIDDEN SUBSET$(\alpha)$ is a special case of
RANDOMIZED HIDDEN SUBGROUP$(\times,\\{H_{i}\\})$ using the group operation of
function composition.
## 3 Verifying non-expansion with a quantum witness
There is a one-query ${\mathsf{QMA}}$ protocol for
NON-EXPANSION$(d,\alpha,\epsilon)$ in many oracle models presenting a graph-coded
function. Graphs with good _expansion_ are well-connected despite their
sparsity. For any graph $G$, let $A_{G}$ be the adjacency matrix of $G$, and
$L_{G}=d\mathbb{I}-A_{G}$ be the _graph Laplacian_ of $G$. The smallest
eigenvalue of $L_{G}$ is $\lambda_{1}(L_{G})=0$, and the next-smallest
eigenvalue $\lambda_{2}(L_{G})$ measures the expansion of $G$. In this
framework, NON-EXPANSION$(d,\alpha,\epsilon)$ asks if an oracle presenting a
$G$-coded function has $\lambda_{2}(L_{G})=0$ (YES), or if
$\lambda_{2}(L_{G})\geq\epsilon$ (NO).
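The spectral gap criterion can be illustrated numerically. In the sketch below (our example; two $2$-regular graphs on $N=8$ vertices), a YES-style graph built from two disconnected cycles has $\lambda_{2}(L_{G})=0$, while a single $8$-cycle has $\lambda_{2}(L_{G})=2-2\cos(2\pi/8)>0$:

```python
import numpy as np

def laplacian_of_cycles(cycles, N, d=2):
    """L = d*I - A for a graph given as a list of vertex cycles."""
    A = np.zeros((N, N))
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            A[a, b] += 1
            A[b, a] += 1
    return d * np.eye(N) - A

N = 8
# YES-style instance: two disconnected 4-cycles -> lambda_2 = 0.
L_yes = laplacian_of_cycles([[0, 1, 2, 3], [4, 5, 6, 7]], N)
# NO-style instance: one 8-cycle -> lambda_2 = 2 - 2cos(2*pi/8) > 0.
L_no = laplacian_of_cycles([list(range(N))], N)

l2_yes = np.sort(np.linalg.eigvalsh(L_yes))[1]
l2_no = np.sort(np.linalg.eigvalsh(L_no))[1]
assert abs(l2_yes) < 1e-8
assert l2_no > 0.5
```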
At the heart of our protocol is the _spectral test_ , which takes an input
state $\ket{\psi}$ and fails with probability proportional to
$\bra{\psi}L_{G}\ket{\psi}$. We describe the spectral test for both standard
oracles and in-place oracles in Section 3.1. A state that passes the spectral
test is essentially supported on the span of eigenvectors of $L_{G}$ with
eigenvalue $o(\frac{1}{{\rm poly}(n)})$; in a NO instance, this subspace is
one-dimensional, and in a YES instance, it is at least two-dimensional. In fact,
the uniform superposition over all inputs, $\ket{+}^{\otimes n}$, is always in
this subspace. As a result, our protocol (Theorem 3.5) either runs the
spectral test, or checks if the input state is close to $\ket{+}^{\otimes n}$.
Consider the randomized variant of NON-EXPANSION$(d,\alpha,\epsilon)$. The
graph of any graph-coded function presented in a YES instance is guaranteed to
have a small set $V$ (i.e. $|V|=N^{\alpha}$) disconnected from the rest of the
graph. As a result, there is a state, defined only by the vertices of $V$,
that is all-but-negligibly supported in the $\lambda(L_{G})=0$ subspace. This
state is the _subset state_ $\ket{V}$:
###### Definition 3.1 (Subset state).
For any non-empty subset $S\subseteq[N]$, define the subset state of $S$ as
$\displaystyle\ket{S}:=\frac{1}{\sqrt{|S|}}\sum_{x\in S}\ket{x}\,.$ (3.1)
Since $\ket{V}$ is a good witness for _every_ graph encoded in a YES instance,
the ${\mathsf{QMA}}$ protocol works just as well in the randomized setting
(Theorem 3.6).
Randomized oracles that present a set $F$ of graph-coded functions are stable
to small changes in the set $F$. In fact, an oracle presenting $F$ encoding
all $d$-regular expander graphs is indistinguishable from an oracle presenting
$F$ encoding all $d$-regular graphs. The latter oracle can be simulated with
$d/2$ queries to the NO instance of RANDOMIZED HIDDEN SUBSET$(\alpha)$; we
show in Theorem 3.7 that the same ${\mathsf{QMA}}$ protocol can also decide
this problem.
### 3.1 The spectral test
We give a test that takes an input state
$\ket{\psi}=\sum_{x\in[N]}a_{x}\ket{x}$ on $n$ qubits, and fails with
probability proportional to $\bra{\psi}L_{G}\ket{\psi}$. This relies on a
curious fact:
###### Lemma 3.2.
Consider a $d$-regular graph $G$ (for even $d$) on $2^{n}$ vertices and a
$G$-coded function $f$. Suppose we have a normalized quantum state
$\ket{\psi}=\sum_{x\in[N]}a_{x}\ket{x}$ on $n$ qubits. Then
$\displaystyle\sum_{i\in[d/2]}\sum_{x\in[N]}\|a_{x}\pm
a_{f(x,i)}\|^{2}=d\pm\bra{\psi}A_{G}\ket{\psi}\,.$ (3.2)
###### Proof.
$\displaystyle\sum_{i\in[d/2]}\sum_{x\in[N]}\|a_{x}\pm a_{f(x,i)}\|^{2}$
$\displaystyle=\sum_{i\in[d/2]}\sum_{x\in[N]}\|a_{x}\|^{2}+\|a_{f(x,i)}\|^{2}\pm(a_{x}a_{f(x,i)}^{*}+a_{x}^{*}a_{f(x,i)})$
(3.3)
$\displaystyle=d\pm\sum_{i\in[d/2]}\sum_{x\in[N]}(a_{x}a_{f(x,i)}^{*}+a_{x}^{*}a_{f(x,i)})$
(3.4) $\displaystyle=d\pm\bra{\psi}A_{G}\ket{\psi}.$ (3.5)
∎
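The identity of Lemma 3.2 can be sanity-checked numerically. A sketch (our code; the graph is coded by two random permutations, and the seed is arbitrary) verifies both signs for a random complex unit vector:

```python
import numpy as np

# Check Lemma 3.2: sum_i sum_x |a_x +- a_{f(x,i)}|^2 = d +- <psi|A_G|psi>.
rng = np.random.default_rng(0)
N, half_d = 8, 2
pi = [list(rng.permutation(N)) for _ in range(half_d)]
d = 2 * half_d

# Adjacency matrix of the graph coded by f(x, i) := pi_i(x)
# (edges counted with multiplicity).
A = np.zeros((N, N))
for i in range(half_d):
    for x in range(N):
        A[x, pi[i][x]] += 1
        A[pi[i][x], x] += 1

a = rng.normal(size=N) + 1j * rng.normal(size=N)
a /= np.linalg.norm(a)                 # normalized amplitudes

for sign in (+1, -1):
    lhs = sum(abs(a[x] + sign * a[pi[i][x]]) ** 2
              for i in range(half_d) for x in range(N))
    rhs = d + sign * np.real(np.conj(a) @ A @ a)
    assert np.isclose(lhs, rhs)
```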
We construct the spectral test with one query either to a standard oracle or
in-place oracle presenting a graph-coded function $f$. The former (3.3) is a
SWAP test but with an oracle query in the middle. The latter (3.4) relies on
controlled access to the in-place oracle.
###### Procedure 3.3 (Spectral test with a standard oracle).
Consider a $d$-regular graph $G$ on $N=2^{n}$ vertices where $d$ is even, and
normalized state $\ket{\psi}=\sum_{x\in[N]}a_{x}\ket{x}\in\mathbb{C}^{N}$. We
assume access to a standard oracle $U_{f}\in\mathbb{C}^{k\times k}$ for
$k=N^{2}2^{\lceil\log_{2}{d}\rceil}$, which acts on a basis vector as
$\displaystyle U_{f}\ket{c,x,i,z}=\ket{c\oplus f^{z}(x,i),x,i,z}\,,$ (3.6)
for $c,x\in[N]$, $i\in[2^{\lceil\log_{2}{d}\rceil-1}]$, and $z\in\\{\pm 1\\}$.
1. 1.
Pick $i\in[d/2]$ uniformly at random, and prepare the state
$\ket{i}\in\mathbb{C}^{2^{\lceil\log_{2}{d}\rceil-1}}$.
2. 2.
Prepare a qubit in the state
$\ket{+}=\frac{\ket{1}+\ket{-1}}{\sqrt{2}}\in\mathbb{C}^{2}$. (Recall that we
label the values of this register in $\\{\pm 1\\}$.)
3. 3.
Combine $n$ registers $\ket{0}^{\otimes n}$, the input state $\ket{\psi}$, and
$\ket{i}$ and $\ket{+}$ to create $\ket{0}^{\otimes
n}\ket{\psi}\ket{i}\ket{+}$.
4. 4.
Apply the oracle $U_{f}$, which creates the state
$\displaystyle\frac{1}{\sqrt{2}}\sum_{x\in[N]}a_{x}\left(\ket{f(x,i)}\ket{x}\ket{i}\ket{1}+\ket{f^{-1}(x,i)}\ket{x}\ket{i}\ket{-1}\right)\,.$
(3.7)
5. 5.
Swap the first two sets of $n$ qubits, controlled by the last qubit. This
creates the state
$\displaystyle\frac{1}{\sqrt{2}}\sum_{x\in[N]}a_{x}\left(\ket{f(x,i)}\ket{x}\ket{i}\ket{1}+\ket{x}\ket{f^{-1}(x,i)}\ket{i}\ket{-1}\right)$
(3.8) $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\sum_{x\in[N]}a_{x}\ket{f(x,i)}\ket{x}\ket{i}\ket{1}+\frac{1}{\sqrt{2}}\sum_{x\in[N]}a_{f(x,i)}\ket{f(x,i)}\ket{x}\ket{i}\ket{-1}\,.$
(3.9)
6. 6.
Apply a Hadamard on the last qubit, which creates the state
$\displaystyle\frac{1}{2}\sum_{x\in[N]}(a_{x}+a_{f(x,i)})\ket{f(x,i)}\ket{x}\ket{i}\ket{1}+\frac{1}{2}\sum_{x\in[N]}(a_{x}-a_{f(x,i)})\ket{f(x,i)}\ket{x}\ket{i}\ket{-1}\,.$
(3.10)
7. 7.
Measure the last qubit and accept if it is $1$.
Moreover, by Lemma 3.2, this procedure fails with probability
$\displaystyle\frac{1}{d/2}\sum_{i\in[d/2]}\sum_{x\in[N]}\frac{\|a_{x}-a_{f(x,i)}\|^{2}}{4}=\frac{\bra{\psi}L_{G}\ket{\psi}}{2d}\,.$
(3.11)
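The identity in (3.11) can be checked numerically. The following sketch is a toy instance of our own choosing, not one from the text: a $2$-regular cycle on $N=8$ vertices, with hypothetical names `f` and `a`. It reads the failure probability directly off the pre-measurement state (3.10) and compares it with $\bra{\psi}L_{G}\ket{\psi}/(2d)$.

```python
import numpy as np

# Toy instance (our own choice, not from the text): a 2-regular cycle on
# N = 8 vertices, encoded by the single pairing permutation f(x, 0) = x+1 mod N.
N, d = 8, 2
f = lambda x, i: (x + 1) % N          # i ranges over [d/2] = {0}

rng = np.random.default_rng(0)
a = rng.normal(size=N) + 1j * rng.normal(size=N)
a /= np.linalg.norm(a)                # normalized input state |psi>

# Failure probability read off the pre-measurement state (3.10):
# the z = -1 branch carries amplitudes (a_x - a_{f(x,i)}) / 2.
fail = 0.0
for i in range(d // 2):
    fail += sum(abs(a[x] - a[f(x, i)]) ** 2 / 4 for x in range(N)) / (d // 2)

# Compare with <psi| L_G |psi> / (2d), where L_G = d*I - A.
A = np.zeros((N, N))
for i in range(d // 2):
    for x in range(N):
        A[x, f(x, i)] += 1
        A[f(x, i), x] += 1
fail_formula = (a.conj() @ (d * np.eye(N) - A) @ a).real / (2 * d)
assert np.isclose(fail, fail_formula)
```

With the $z=-1$ amplitudes taken directly from (3.10), the agreement with the Laplacian quadratic form is exact, not just statistical.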
###### Procedure 3.4 (Spectral test with an in-place oracle).
Consider a $d$-regular graph $G$ on $N=2^{n}$ vertices where $d$ is even, and
normalized state $\ket{\psi}=\sum_{x\in[N]}a_{x}\ket{x}\in\mathbb{C}^{N}$. We
assume controlled access to an in-place oracle
$\widetilde{U}_{f}\in\mathbb{C}^{k\times k}$ for
$k=N2^{\lceil\log_{2}{d}\rceil+1}$, which acts on a basis vector as
$\displaystyle\widetilde{U}_{f}\ket{a,x,i,z}=\ket{a,f^{a\cdot z}(x,i),i,z}\,,$
(3.12)
for control qubit $a\in\\{0,1\\}$, $x\in[N]$,
$i\in[2^{\lceil\log_{2}{d}\rceil-1}]$, and $z\in\\{\pm 1\\}$. (Note that this
procedure does not actually need access to the inverse of $f$ to conduct the
spectral test.)
1.
Pick $i\in[d/2]$ uniformly at random, and prepare the state
$\ket{i}\in\mathbb{C}^{2^{\lceil\log_{2}{d}\rceil-1}}$.
2.
Prepare a qubit in the state
$\ket{+}=\frac{\ket{0}+\ket{1}}{\sqrt{2}}\in\mathbb{C}^{2}$.
3.
Combine $\ket{+}$, the input state $\ket{\psi}$, $\ket{i}$, and a register
$\ket{1}$ to create $\ket{+}\ket{\psi}\ket{i}\ket{1}$.
4.
Apply the oracle $\widetilde{U}_{f}$, which creates the state
$\displaystyle\frac{1}{\sqrt{2}}\sum_{x\in[N]}a_{x}\left(\ket{0}\ket{x}+\ket{1}\ket{f(x,i)}\right)\ket{i}\ket{1}$
(3.13) $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}\sum_{x\in[N]}(a_{x}\ket{0}+a_{f^{-1}(x,i)}\ket{1})\ket{x}\ket{i}\ket{1}\,.$
(3.14)
5.
Apply a Hadamard on the first qubit, which creates the state
$\displaystyle\frac{1}{2}\sum_{x\in[N]}\left((a_{x}+a_{f^{-1}(x,i)})\ket{0}+(a_{x}-a_{f^{-1}(x,i)})\ket{1}\right)\ket{x}\ket{i}\ket{1}\,.$
(3.15)
6.
Measure the first qubit and accept if it is $0$.
Moreover, by Lemma 3.2, this procedure fails with probability
$\displaystyle\frac{1}{d/2}\sum_{i\in[d/2]}\sum_{x\in[N]}\frac{\|a_{x}-a_{f^{-1}(x,i)}\|^{2}}{4}=\frac{\bra{\psi}L_{G}\ket{\psi}}{2d}\,.$
(3.16)
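The in-place procedure admits an even smaller simulation, since only the control qubit and the vertex register evolve; the $\ket{i}$ and trailing $\ket{1}$ registers are inert. The sketch below simulates steps 2 through 6 on a toy cycle (again our own example, not an instance from the text) and checks (3.16).

```python
import numpy as np

# Simulate steps 2-6 of the in-place spectral test on the control qubit and
# vertex register only; the |i> and trailing |1> registers are inert here.
N, d = 8, 2
f = lambda x: (x + 1) % N             # toy 2-regular cycle, index i suppressed

rng = np.random.default_rng(1)
a = rng.normal(size=N) + 1j * rng.normal(size=N)
a /= np.linalg.norm(a)

psi = np.zeros((2, N), dtype=complex)  # state on C^2 (x) C^N
psi[0] = a / np.sqrt(2)                # |0>|psi> branch
psi[1] = a / np.sqrt(2)                # |1>|psi> branch, before the oracle

# Controlled in-place oracle: |1, x> -> |1, f(x)>.
shifted = np.zeros(N, dtype=complex)
for x in range(N):
    shifted[f(x)] += psi[1, x]
psi[1] = shifted

# Hadamard on the control qubit, then measure it.
psi = np.stack([(psi[0] + psi[1]) / np.sqrt(2),
                (psi[0] - psi[1]) / np.sqrt(2)])
fail = np.sum(abs(psi[1]) ** 2)        # probability of outcome |1>

# Compare with <psi| L_G |psi> / (2d), as claimed in (3.16).
A = np.zeros((N, N))
for x in range(N):
    A[x, f(x)] += 1
    A[f(x), x] += 1
expected = (a.conj() @ (d * np.eye(N) - A) @ a).real / (2 * d)
assert np.isclose(fail, expected)
```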
### 3.2 A one-query protocol
###### Theorem 3.5.
There is a ${\mathsf{QMA}}$ protocol for STANDARD NON-
EXPANSION$(d,\alpha,\epsilon)$ and IN-PLACE NON-EXPANSION$(d,\alpha,\epsilon)$
at every even $d\geq 4$, all $0<\alpha<\frac{1}{2}$, and all constant
$\epsilon>0$.
###### Proof.
Suppose the oracle presents a $G$-coded function. Let
$\lambda_{1}\leq\lambda_{2}\leq\dots\leq\lambda_{N}$ be the eigenvalues of the
graph Laplacian $L_{G}$. Note that the smallest eigenvalue of a regular graph
$G$ is $\lambda_{1}=0$. We always choose the eigenvector associated with
$\lambda_{1}$ as a uniform superposition over vertices of the graph (i.e.
$\ket{\lambda_{1}}:=\ket{[N]}=\ket{+}^{\otimes n}$).
Suppose Arthur receives a state
$\ket{\psi}=\sum_{i\in[N]}\alpha_{i}\ket{\lambda_{i}}$ from Merlin. Consider
the following strategy:
* With probability $\frac{1}{2}$, measure $\ket{\psi}$ in the Hadamard basis.
Fail if the outcome is the basis state $\ket{+}^{\otimes n}$, and pass
otherwise.
* With probability $\frac{1}{2}$, use the spectral test (Procedure 3.3 or 3.4,
respectively).
The probability of failure FAIL is
FAIL $\displaystyle=\frac{1}{2}\|\bra{\psi}\ket{+}^{\otimes
n}\|^{2}+\frac{1}{2}\frac{\bra{\psi}L_{G}\ket{\psi}}{2d}$ (3.17)
$\displaystyle=\frac{1}{2}\Big{(}\|\alpha_{1}\|^{2}+\frac{1}{2d}\sum_{i=1}^{N}\lambda_{i}\|\alpha_{i}\|^{2}\Big{)}\,.$
(3.18)
In a NO instance, $\lambda_{k}=\Omega(1)$ for all $k>1$. So the probability of
failure is always a positive constant:
$\displaystyle\text{FAIL}_{\text{NO}}=\frac{1}{2}\|\alpha_{1}\|^{2}+\frac{1}{2d}\sum_{i=2}^{N}\Omega(\|\alpha_{i}\|^{2})=\Omega(\sum_{i=1}^{N}\|\alpha_{i}\|^{2})=\Omega(1)\,.$
(3.19)
In a YES instance, the spectrum of $L_{G}$ is the combined spectrum of the two
disconnected graphs. This means $\lambda_{1}=\lambda_{2}=0$, and the
associated eigenvectors are linear combinations of $\ket{V}$ and
$\ket{[N]/V}$. Recall that $\ket{\lambda_{1}}:=\ket{+}^{\otimes n}$. We find
the orthogonal eigenvector $\ket{\lambda_{2}}$ in this subspace by inspection:
$\displaystyle\ket{\lambda_{2}}=\sqrt{\frac{N-|V|}{N}}\ket{V}-\sqrt{\frac{|V|}{N}}\ket{[N]/V}\,.$
(3.20)
Note that any vector with $\|\alpha_{2}\|^{2}=1-o(\frac{1}{{\rm poly}(n)})$
has negligible probability of failure:
$\displaystyle\text{FAIL}_{\text{YES}}=\frac{1}{2}\|\alpha_{1}\|^{2}+\frac{1}{2d}\sum_{i=1}^{N}\lambda_{i}\|\alpha_{i}\|^{2}=O(\|\alpha_{1}\|^{2}+\sum_{i=3}^{N}\|\alpha_{i}\|^{2})=O(1-\|\alpha_{2}\|^{2})\,.$
(3.21)
Suppose Merlin sends the subset state $\ket{V}$. Since
$\|\bra{V}\ket{\lambda_{2}}\|^{2}=1-\frac{|V|}{N}=1-O(\frac{1}{2^{{\rm
poly}(n)}})$, the strategy has probability of failure $O(\frac{1}{2^{{\rm
poly}(n)}})$. ∎
In general, the spectral test can be used in a ${\mathsf{QMA}}$ protocol to
test the magnitude of the second-smallest or largest eigenvalue of a graph
Laplacian to inverse polynomial precision. The former is a measure of the
quality of a graph’s expansion, and the latter is related to a measure of a
graph’s bipartiteness named the _bipartiteness ratio_ [Tre08].
Because this ${\mathsf{QMA}}$ protocol requires only one query of either a
standard oracle or an in-place oracle, it works even when these oracles are
randomized.
###### Theorem 3.6.
There is a ${\mathsf{QMA}}$ protocol for STANDARD RANDOMIZED NON-
EXPANSION$(d,\alpha,\epsilon)$ and IN-PLACE RANDOMIZED NON-
EXPANSION$(d,\alpha,\epsilon)$ at every even $d\geq 4$, all
$0<\alpha<\frac{1}{2}$, and all constant $\epsilon>0$.
###### Proof.
The strategy in Theorem 3.5 also works here. Consider any $G$-coded function
presented in a YES instance; the same subset $V$ is exactly the vertex set of
the smaller component of $G$. So the witness $\ket{V}$ is close to the second
eigenvector $\ket{\lambda_{2}}$, and the failure probability is negligible.
Now consider any $G$-coded function presented in a NO instance. By definition,
$G$ is an expander graph, so the failure probability is always a positive
constant. ∎
Because a randomized oracle chooses a function uniformly from a set $F$, it is
statistically indistinguishable from an oracle with exponentially small
changes to $F$. We use this fact to simplify the NO instance in RANDOMIZED
NON-EXPANSION$(d,\alpha,\epsilon)$. Suppose the NO instance instead presents
graph-coded functions of _all_ $d$-regular graphs. Since a $1-O(\frac{1}{{\rm
poly}(N)})=1-O(\frac{1}{2^{{\rm poly}(n)}})$ fraction of these graphs has a
constant spectral gap [Fri04] when $d>2$, the failure probability in the
${\mathsf{QMA}}$ protocol changes by at most $O(\frac{1}{2^{{\rm poly}(n)}})$.
Notice that with this modification, the oracles are exactly $d/2$ copies of
the oracles in RANDOMIZED HIDDEN SUBSET$(\alpha)$. One way to interpret this
is that the randomization offers a substitute for expander graphs. An expander
graph is sparse but well-mixing; a randomized oracle query instantaneously
mixes across a graph’s connected component. As a result, we can distinguish
degree-2 graphs with this ${\mathsf{QMA}}$ protocol, even though they are not
typically expander graphs:
###### Theorem 3.7.
There is a ${\mathsf{QMA}}$ protocol for STANDARD RANDOMIZED HIDDEN
SUBSET$(\alpha)$ and IN-PLACE RANDOMIZED HIDDEN SUBSET$(\alpha)$ for all
$0<\alpha<\frac{1}{2}$.
###### Proof.
Perhaps surprisingly, the strategy in Theorem 3.5 also works here:
*
Consider the graph $G$ of any $G$-coded function presented in a YES instance.
By definition, the vertices $V$ are disconnected from all vertices in $[N]/V$.
So the witness $\ket{V}$ is close to the second eigenvector
$\ket{\lambda_{2}}$, and the failure probability is negligible.
*
Consider the NO instance. Then $f$ is chosen uniformly from the set
$T_{\varnothing}$ of all permutations of $[N]$. Then the spectral test fails
with probability
$\displaystyle\mathop{\bf E\/}_{\pi\in
T_{\varnothing}}\left[\frac{d-\bra{\psi}A_{\pi}\ket{\psi}}{2d}\right]\Bigg{|}_{d=2}$
$\displaystyle=\frac{1}{2}-\frac{1}{4}\mathop{\bf E\/}_{\pi\in
T_{\varnothing}}\left[{\bra{\psi}A_{\pi}\ket{\psi}}\right]$ (3.22)
$\displaystyle=\frac{1}{2}-\frac{1}{4}\left(\frac{1}{N!}\sum_{\pi\in
T_{\varnothing}}\bra{\psi}A_{\pi}\ket{\psi}\right)$ (3.23)
$\displaystyle=\frac{1}{2}-\frac{1}{8}\left(\frac{1}{(N!)^{2}}\sum_{\pi_{1},\pi_{2}\in
T_{\varnothing}}\bra{\psi}(A_{\pi_{1}}+A_{\pi_{2}})\ket{\psi}\right)\,.$
(3.24)
The matrix $A_{\pi_{1}}+A_{\pi_{2}}$ determines the adjacency matrix of a
random $4$-regular graph in the configuration model; as a result,
$\displaystyle\mathop{\bf E\/}_{\pi\in
T_{\varnothing}}\left[\frac{d-\bra{\psi}A_{\pi}\ket{\psi}}{2d}\right]\Bigg{|}_{d=2}=\mathop{\bf
E\/}_{\pi_{1},\pi_{2}\in
T_{\varnothing}}\left[\frac{d-\bra{\psi}A_{\pi_{1},\pi_{2}}\ket{\psi}}{2d}\right]\Bigg{|}_{d=4}\,.$
(3.25)
Since a random $4$-regular graph has constant spectral gap with probability
$1-O(\frac{1}{{\rm poly}(N)})=1-O(\frac{1}{2^{{\rm poly}(n)}})$ [Fri04], the
failure probability is at least $\text{FAIL}_{\text{NO}}$ from Theorem 3.5,
less $O(\frac{1}{2^{{\rm poly}(n)}})$. So the failure probability is
$\Omega(1)$, just as before.
∎
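The equality of the two averages in (3.25) can be verified exactly at small $N$ by enumerating all permutations. The sketch below assumes the conventions $A_{\pi}=P_{\pi}+P_{\pi}^{T}$ and $A_{\pi_{1},\pi_{2}}=A_{\pi_{1}}+A_{\pi_{2}}$, which this excerpt does not spell out.

```python
import itertools
import numpy as np

N = 4                                   # small enough to enumerate all N! perms
perms = list(itertools.permutations(range(N)))

def A(p):
    P = np.eye(N)[list(p)]              # permutation matrix of p
    return P + P.T                      # assumed convention: A_pi = P + P^T

rng = np.random.default_rng(2)
psi = rng.normal(size=N)
psi /= np.linalg.norm(psi)

# Left side of (3.25): average over single permutations at d = 2.
lhs = np.mean([(2 - psi @ A(p) @ psi) / 4 for p in perms])
# Right side: average over pairs at d = 4, with A_{pi1,pi2} = A_pi1 + A_pi2.
rhs = np.mean([(4 - psi @ (A(p1) + A(p2)) @ psi) / 8
               for p1 in perms for p2 in perms])
assert np.isclose(lhs, rhs)
```

The equality is an identity (both sides equal $\frac{1}{2}-\frac{1}{4}\mathop{\bf E\/}[\bra{\psi}A_{\pi}\ket{\psi}]$), so the check holds for every input state, not just on average.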
## 4 Randomized oracles and symmetric subspaces
Our main goal in this section is to show that a general class of verifiers
cannot decide IN-PLACE RANDOMIZED HIDDEN SUBSET$(\alpha)$. Recall that in this
problem, a verifier has access to a quantum channel and a polynomial-sized
classical witness, and must distinguish whether the oracle presents a
uniformly random permutation or a permutation that stabilizes a hidden subset
$V$. Let $\mathcal{Y}$ be the set of all YES instances; note that each
instance is uniquely defined by a subset $V$.
Suppose there exists a ${\mathsf{QCMA}}$ algorithm for this problem. Since
there are at most $O(2^{{\rm poly}(n)})$ different classical witnesses, there
exists a set of YES instances $\mathcal{Y}^{\prime}$ that share the same
witness, such that
$\left|\mathcal{Y}^{\prime}\right|/\left|\mathcal{Y}\right|=\Omega(2^{-{\rm
poly}(n)})$. We can refute the existence of such an algorithm by proving that
the same verification “strategy” cannot distinguish all instances of
$\mathcal{Y}^{\prime}$ from the NO case with non-negligible probability. A
“strategy” is exactly a quantum algorithm: a series of unitaries and oracle
queries, followed by a POVM. Without loss of generality, a $T$-query algorithm
alternates between unitaries and oracle queries on
$\mathcal{H}_{O}\otimes\mathcal{H}_{W}$, followed by a measurement (note that
the last operation does not have to be a unitary: one can simply replace a
unitary followed by a POVM with an equivalent POVM), where $\mathcal{H}_{O}$
is the Hilbert space of the “oracle” qubits and $\mathcal{H}_{W}$ is the extra
workspace:
$\displaystyle\mathcal{E}_{O}[\rho_{0}]=\left(\mathcal{O}\otimes\mathbb{I}\right)\circ\mathcal{U}_{T}\circ\ldots\circ\mathcal{U}_{2}\circ\left(\mathcal{O}\otimes\mathbb{I}\right)\circ\mathcal{U}_{1}[\rho_{0}]\,.$
(4.1)
One may try to use the hybrid argument of Bennett, Bernstein, Brassard, and
Vazirani [BBBV97] and Ambainis [Amb00] to prove that the diamond norm
$\left|\mathcal{E}_{\mathcal{\widetilde{O}}_{T_{V}}}-\mathcal{E}_{\mathcal{\widetilde{O}}_{T_{\varnothing}}}\right|_{\diamond}$
is small in expectation over the choice of
$\mathcal{\widetilde{O}}_{T_{V}}\in\mathcal{Y}^{\prime}$. This would imply
that the verifier cannot distinguish all instances of $\mathcal{Y}^{\prime}$
with the same strategy. We can consider the optimal distinguishing probability
in terms of
$\left|\mathcal{E}_{\mathcal{\widetilde{O}}_{T_{V}}}[\rho_{0}]-\mathcal{E}_{\mathcal{\widetilde{O}}_{T_{\varnothing}}}[\rho_{0}]\right|_{1}$
for some fixed $\rho_{0}\in\mathcal{H}_{O}\otimes\mathcal{H}_{W}$.
However, this statistical argument does not hold for some choices of
$\mathcal{Y}^{\prime}$. Consider the following simple example:
$\mathcal{Y}^{\prime}$ contains all $V$ such that $1\in V$. First,
$\mathcal{Y}^{\prime}$ satisfies the size bound implied by the pigeonhole principle.
Second, for $\rho_{0}=\outerproduct{1}{1}\otimes\mathbb{I}$,
$\left|\mathcal{\widetilde{O}}_{T_{V}}[\rho_{0}]-\mathcal{\widetilde{O}}_{T_{\varnothing}}[\rho_{0}]\right|_{1}$
is large for _all_ instances in $\mathcal{Y}^{\prime}$, since $\ket{1}\bra{1}$
mixes only within a small subset. Note that this only implies the existence of
an _instance-specific_ POVM distinguishing each YES instance in
$\mathcal{Y}^{\prime}$ from the NO instance. By contrast, a verification
strategy has a _fixed_ POVM $\left\\{E,\mathbb{I}-E\right\\}$. This allows us
to prove that the following value is small on average over the choice of $V$:
$\displaystyle\left|\Tr\left[E\mathcal{E}_{\mathcal{\widetilde{O}}_{T_{V}}}[\rho_{0}]\right]-\Tr\left[E\mathcal{E}_{\mathcal{\widetilde{O}}_{T_{\varnothing}}}[\rho_{0}]\right]\right|$
(4.2)
We must bound this value for arbitrary but fixed choices of $E$, $\rho_{0}$,
and $\mathcal{U}_{i}$ in the algorithm. In order to do this, we leverage
tools from representation theory; this allows us to see randomized oracles in
our problem as _orthogonal projectors_ into a subspace of matrices with low
dimension. One caveat of our technique is that the verifier is only allowed to
have $O(\log(n))$ extra workspace qubits. This restriction is necessary to
reduce the subspace dimension to regimes we can handle.
Representation theory has been previously used to study symmetric operators on
variables (in probability) or qubits (in quantum computing) using the language
of de Finetti theorems (e.g. [Har13]); these operators project into subspaces
of permutation-invariant sequences or quantum states. By contrast, we notice
that some randomized oracles are symmetric operators on _density matrices_.
This allows us to explicitly find an orthogonal basis for the associated
symmetric subspaces. We match oracle models with problems with the same group
structure: RANDOMIZED HIDDEN SUBSET$(\alpha)$ for in-place oracles in Section
4.1, and an analogous special case of RANDOMIZED HIDDEN
SUBGROUP$(\times,\\{H_{i}\\})$ for standard oracles in Section 4.2.
We now formalize how randomized oracles are orthogonal projectors. We defer
the proofs to Appendix C.
###### Definition 4.1 (Representation of a group).
Consider a group $G$ and a vector space $\mathsf{V}$. A _representation_ of
$G$ is a map $R$ that sends each $g\in G$ to a linear operator
$R(g):\mathsf{V}\to\mathsf{V}$ such that $R(g_{1}g_{2})=R(g_{1})\circ
R(g_{2})$ for all $g_{1},g_{2}\in G$.
###### Theorem 4.2 (Projecting onto the symmetric subspace [Har13,
Proposition 2]).
Consider a finite group $G$, a vector space $\mathsf{V}$, and a representation
$R:G\to L(\mathsf{V})$. Then the operator
$\displaystyle\Pi_{R}:=\frac{1}{|G|}\sum_{g\in G}R(g)$ (4.3)
is an orthogonal projector onto $\mathsf{V}^{G}\subseteq\mathsf{V}$, where
$\displaystyle\mathsf{V}^{G}:=\\{v\in\mathsf{V}\,:\,R(g)[v]=v\,\forall g\in
G\\}\,.$ (4.4)
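A minimal concrete instance of Theorem 4.2 (our own example: $G=S_{3}$ acting by permutation matrices on $\mathbb{C}^{3}$) shows the group average is idempotent and self-adjoint, hence an orthogonal projector onto the invariant subspace, here the span of the all-ones vector.

```python
import itertools
import numpy as np

# Permutation-matrix representation of S_3 on C^3.
mats = []
for p in itertools.permutations(range(3)):
    P = np.zeros((3, 3))
    for x, y in enumerate(p):
        P[y, x] = 1                      # P |x> = |p(x)>
    mats.append(P)

Pi = sum(mats) / len(mats)               # Pi_R = (1/|G|) sum_g R(g), as in (4.3)
assert np.allclose(Pi @ Pi, Pi)          # idempotent
assert np.allclose(Pi, Pi.T)             # self-adjoint: orthogonal projector
assert np.allclose(Pi, np.full((3, 3), 1 / 3))  # projects onto the uniform vector
```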
###### Theorem 4.3 (Oracles on density matrices form a representation).
Consider a group $G$ of functions $f:[N]\to[N]$ with bitwise $\oplus$ as the
group operation. Then the map $f\mapsto\mathcal{U}_{f}$ is a representation
over the vector space of $2N^{2}\times 2N^{2}$ complex matrices.
Similarly, consider a group $\widetilde{G}$ of permutations $\pi:[N]\to[N]$
with composition as the group operation. Then the map
$\pi\mapsto\mathcal{\widetilde{U}}_{\pi}$ is a representation over the vector
space of $2N\times 2N$ complex matrices.
###### Theorem 4.4 (Some randomized oracles are orthogonal projectors).
Consider a group $G$ of functions $f:[N]\to[N]$ with bitwise $\oplus$ as the
group operation. Then $\mathcal{O}_{G}$ is an orthogonal projector, under the
Frobenius inner product $(x|y)=\Tr[x^{\dagger}y]$ for
$x,y\in\mathbb{C}^{2N^{2}\times 2N^{2}}$, onto
$\displaystyle\mathsf{V}_{G}:=\\{\rho\in\mathbb{C}^{2N^{2}\times
2N^{2}}\,:\mathcal{U}_{f}[\rho]=\rho\,\forall f\in G\\}\,.$ (4.5)
Similarly, consider a group $\widetilde{G}$ of permutations $\pi:[N]\to[N]$
with composition as the group operation. Then
$\mathcal{\widetilde{O}}_{\widetilde{G}}$ is an orthogonal projector, under
the Frobenius inner product $(x|y)=\Tr[x^{\dagger}y]$ for
$x,y\in\mathbb{C}^{2N\times 2N}$, onto
$\displaystyle\mathsf{\widetilde{V}}_{\widetilde{G}}:=\\{\rho\in\mathbb{C}^{2N\times
2N}\,:\mathcal{\widetilde{U}}_{\pi}[\rho]=\rho\,\forall\pi\in\widetilde{G}\\}\,.$
(4.6)
In IN-PLACE RANDOMIZED HIDDEN SUBSET$(\alpha)$, a quantum verifier is either
given $\mathcal{\widetilde{O}}_{T_{\varnothing}}$ (NO) or
$\mathcal{\widetilde{O}}_{T_{V}}$ for some $V\subseteq[N]$ where
$|V|=N^{\alpha}$ (YES). Since $T_{V}\subseteq T_{\varnothing}$, the symmetric
subspace according to $T_{\varnothing}$ is a subspace of that according to
$T_{V}$, i.e.
$\mathsf{\widetilde{V}}_{T_{\varnothing}}\subseteq\mathsf{\widetilde{V}}_{T_{V}}$.
So we can exactly find the basis of the symmetric subspaces
$\mathsf{\widetilde{V}}_{T_{\varnothing}}$ and
$\mathsf{\widetilde{V}}_{T_{V}}$ (see Appendix A for details). This key
property is used throughout Section 4.1.
### 4.1 In-place oracles: when classical witnesses are not enough
We interpret IN-PLACE RANDOMIZED HIDDEN SUBSET$(\alpha)$ as distinguishing the
set of all permutations from a subgroup that stabilizes a small subset
$V\subseteq[N]$. In Theorem 4.9, we prove that classical witnesses designed
for the verifier to choose YES cannot help a quantum verifier efficiently
decide this problem. This requires three main lemmas. First, we show in Lemma
4.6 that input states distinguishing a YES instance or NO instance must have
knowledge of the hidden subset $V$ (either as a subset state $\ket{V}$ or a
mixed state $\mathbb{I}_{V}$). However, no density matrix can be close to too
many subset states $\ket{V}$ (Lemma 4.7), and no POVM can choose the right
answer for too many mixed states $\mathbb{I}_{V}$ (Lemma 4.8). We combine
these facts in a hybrid argument; note that we must fix an algorithm by its
unitaries _and_ its POVM. We formally state the lemmas (deferring the proofs
to Appendix A and Appendix C), and then prove Theorem 4.9.
We use the following measure of “progress” for the hybrid argument:
###### Definition 4.5 (Difference of oracle queries).
For any $\rho$, let $d_{V,\rho}$ be the difference of the two oracle queries
$\displaystyle
d_{V,\rho}:=\mathcal{\widetilde{O}}_{T_{V}}[\rho]-\mathcal{\widetilde{O}}_{T_{\varnothing}}[\rho]\,.$
(4.7)
If the nuclear norm of $d_{V,\rho}$ is non-negligible, we say that $\rho$ is a
good distinguisher of $\mathcal{\widetilde{O}}_{T_{V}}$ and
$\mathcal{\widetilde{O}}_{T_{\varnothing}}$. We show that every good
distinguisher $\rho$ has a certain form; the proof is deferred to Appendix A.
###### Lemma 4.6 (Good distinguishers have a certain form).
Consider a density matrix $\rho$ and up to $O(\log(n))$ extra workspace
qubits. Suppose $\|d_{V,\rho}\|_{1}=\Omega(\frac{1}{{\rm poly}(n)})$. Then
among the quantities
$\displaystyle\bra{V,z}\rho\ket{V,z}\,,$ (4.8)
$\displaystyle\Tr[\rho\left(\mathbb{I}_{V,z}-\frac{|V|}{N}\mathbb{I}_{[N],z}\right)]\,,$
(4.9)
for any $z\in\\{\pm 1\\}$, at least one has magnitude $\Omega(\frac{1}{{\rm
poly}(n)})$.
We now state two lemmas about subsets and subset states. These help us prove
that no quantum state can be a good distinguisher of too many YES instances.
We defer the proofs to Appendix C.
###### Lemma 4.7 (Can’t approximate too many subset states).
Consider a Hermitian $N\times N$ matrix $\rho$ that is positive semidefinite
and has trace at most $1$. Consider the set of all subsets $V\subseteq[N]$,
where $|V|=N^{\alpha}$ for a fixed $0<\alpha<\frac{1}{2}$. Then the fraction
of subsets $V$ such that $\bra{V}\rho\ket{V}=\Omega(\frac{1}{{\rm poly}(n)})$
decreases faster than any exponential in ${\rm poly}(n)$.
###### Lemma 4.8 (Not too many subsets can have elevated mean).
Consider any $N\times N$ POVM $\\{E,\mathbb{I}-E\\}$, and the set of all
subsets $V\subseteq[N]$, where $|V|=N^{\alpha}$ for a fixed
$0<\alpha<\frac{1}{2}$. Then the fraction of subsets $V$ where
$\displaystyle|f(V)|:=\left|\frac{1}{|V|}\Tr[\mathbb{I}_{V}E]-\frac{1}{N}\Tr[E]\right|=\Omega(\frac{1}{{\rm
poly}(n)})\,,$ (4.10)
decreases faster than any exponential in ${\rm poly}(n)$.
Intuitively, Lemma 4.7 and Lemma 4.8 hold because subset states can
approximate _any_ quantum state well. Grilo, Kerenidis, and Sikora [GKS15]
show that for any $n$-qubit quantum state $\ket{\psi}$, there exists a subset
state $\ket{S}$ such that $|\innerproduct{S}{\psi}|\geq\frac{1}{8\sqrt{n+3}}$.
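At small $n$ this approximation guarantee can be checked by exhaustive search; the sketch below verifies the [GKS15] bound for one random $3$-qubit state over all $2^{8}-1$ nonempty subsets.

```python
import itertools
import numpy as np

n = 3
N = 2 ** n
rng = np.random.default_rng(3)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)

# Maximize |<S|psi>| over all nonempty subset states |S>.
best = 0.0
for r in range(1, N + 1):
    for S in itertools.combinations(range(N), r):
        ket_S = np.zeros(N)
        ket_S[list(S)] = 1 / np.sqrt(r)
        best = max(best, abs(ket_S @ psi))

assert best >= 1 / (8 * np.sqrt(n + 3))  # the [GKS15] guarantee
```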
We now prove the main statement:
###### Theorem 4.9.
No quantum verifier that entangles oracle queries with at most $O(\log(n))$
additional qubits can efficiently decide IN-PLACE RANDOMIZED HIDDEN
SUBSET$(\alpha)$ for any $0<\alpha<\frac{1}{2}$, even with a polynomial-length
classical witness designed for the verifier to choose YES.
###### Proof.
Let the set of YES instances be $\mathcal{Y}$; note that each YES instance
corresponds to a set $V\subseteq[N]$ where $|V|=N^{\alpha}$, for some fixed
$0<\alpha<\frac{1}{2}$.
Suppose for contradiction that there is a protocol for this problem at some
$\alpha<\frac{1}{2}$. Then the verifier can distinguish
$\mathcal{\widetilde{O}}_{T_{\varnothing}}$ from any
$\mathcal{\widetilde{O}}_{T_{V}}$ in a polynomial number of queries using a
classical witness of size $O({\rm poly}(n))$. By the pigeonhole principle,
there must exist a set of YES instances $\mathcal{Y}^{\prime}$ such that
$|\mathcal{Y}^{\prime}|/|\mathcal{Y}|=\Omega(\frac{1}{2^{{\rm poly}(n)}})$,
where the verifier can use the _same algorithm_ to distinguish
$\mathcal{\widetilde{O}}_{T_{\varnothing}}$ from _every_ YES instance in
$\mathcal{Y}^{\prime}$.
We then construct a _hybrid argument_ in the style of Bennett, Bernstein,
Brassard, and Vazirani [BBBV97] and Ambainis [Amb00], which interpolates from
queries of one oracle to queries of another oracle. For simplicity we write
the proof without extra workspace qubits; however, we can have up to
$O(\log(n))$ extra workspace qubits to satisfy Lemma 4.6. Any polynomial query
algorithm can be written as a set of unitaries $A=\\{U^{(1)},\dots,U^{(k)}\\}$
for some $k=O({\rm poly}(n))$ (alternating between unitary evolutions and
oracle queries), and a POVM $\\{E,\mathbb{I}-E\\}$. Consider the following
“hybrid” algorithms:
###### Definition 4.10.
Given any set of $k$ unitaries $A=\\{U^{(1)},\dots,U^{(k)}\\}$, define the
hybrid algorithm
$\displaystyle
A_{V,\ell}\left[\rho_{0}\right]=\mathcal{\widetilde{O}}_{T_{V}}^{(k)}\circ\mathcal{U}^{(k)}\circ\dots\circ\mathcal{\widetilde{O}}_{T_{V}}^{(\ell+1)}\circ\mathcal{U}^{(\ell+1)}\circ\mathcal{\widetilde{O}}_{T_{\varnothing}}^{(\ell)}\circ\mathcal{U}^{(\ell)}\circ\dots\circ\mathcal{\widetilde{O}}_{T_{\varnothing}}^{(1)}\circ\mathcal{U}^{(1)}\left[\rho_{0}\right],$
(4.11)
which evolves $\rho_{0}$ under the oracle $\mathcal{\widetilde{O}}_{T_{\varnothing}}$
for the first $\ell$ queries and under $\mathcal{\widetilde{O}}_{T_{V}}$ for the
other $k-\ell$ queries.
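The telescoping bound (4.12) below is a triangle inequality over these hybrids. As a toy check, the sketch uses single-qubit stand-ins of our own choosing for the two oracles (identity versus full dephasing) with random intermediate unitaries.

```python
import numpy as np

rng = np.random.default_rng(4)
k = 4                                    # number of queries

def rand_unitary():
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Q, _ = np.linalg.qr(M)               # QR of a Gaussian matrix gives a unitary
    return Q

Us = [rand_unitary() for _ in range(k)]
O_empty = lambda r: r                    # stand-in for O_{T_empty}: identity
O_V = lambda r: np.diag(np.diag(r))      # stand-in for O_{T_V}: dephasing

def hybrid(ell, rho):
    """A_{V,ell}: O_{T_empty} for the first ell queries, O_{T_V} afterwards."""
    for i in range(k):
        rho = Us[i] @ rho @ Us[i].conj().T
        rho = O_empty(rho) if i < ell else O_V(rho)
    return rho

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
E = np.diag([0.7, 0.2])                  # a POVM element, 0 <= E <= I

vals = [np.trace(E @ hybrid(ell, rho0)).real for ell in range(k + 1)]
lhs = abs(vals[k] - vals[0])
rhs = sum(abs(vals[i + 1] - vals[i]) for i in range(k))
assert lhs <= rhs + 1e-12                # the telescoping bound (4.12)
```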
Then the following is true for each
$\mathcal{\widetilde{O}}_{T_{V}}\in\mathcal{Y}^{\prime}$:
$\displaystyle\Omega(\frac{1}{{\rm poly}(n)})$
$\displaystyle=\left|\Tr[EA_{V,k}[\rho_{0}]]-\Tr[EA_{V,0}[\rho_{0}]]\right|\leq\sum_{i=0}^{k-1}\left|\Tr[EA_{V,i+1}[\rho_{0}]]-\Tr[EA_{V,i}[\rho_{0}]]\right|\,,$
(4.12)
which implies
$\displaystyle\Omega(\frac{1}{{\rm poly}(n)})$
$\displaystyle=\sum_{i=0}^{k-1}\left|\Tr[E\left(\mathcal{\widetilde{O}}_{T_{V}}^{(k)}\circ\dots\mathcal{U}^{(i)}\circ\left(\mathcal{\widetilde{O}}_{T_{V}}^{(i)}-\mathcal{\widetilde{O}}_{T_{\varnothing}}^{(i)}\right)[\rho^{(i)}]\right)]\right|=\sum_{i=0}^{k-1}\left|\Tr[E^{V,(i)}d_{V,\rho^{(i)}}]\right|\,,$
(4.13)
for the operator $E^{V,(i)}$ constructed by
$\displaystyle
E^{V,(i)}=\mathcal{U}^{\dagger(i)}\circ\mathcal{\widetilde{O}}_{T_{V}}^{(i)}\circ\dots\circ\mathcal{U}^{\dagger(k)}\circ\mathcal{\widetilde{O}}_{T_{V}}^{(k)}\left[E\right]\,.$
(4.14)
By B.8, the operators $E^{V,(i)}$ and $\mathbb{I}-E^{V,(i)}$ are also
Hermitian and positive semidefinite, so $\\{E^{V,(i)},\mathbb{I}-E^{V,(i)}\\}$
is a POVM.
Using the pigeonhole principle, there must be a step $\ell$ in the summation
with magnitude $\Omega(\frac{1}{{\rm poly}(n)})$. Each
$\mathcal{\widetilde{O}}_{T_{V}}\in\mathcal{Y}^{\prime}$ has such a step.
Again by the pigeonhole principle, there is an $\ell^{*}$ and a set
$\mathcal{Y}^{*}\subseteq\mathcal{Y}^{\prime}$ where
$\displaystyle\left|\Tr[E^{V,(\ell^{*})}d_{V,\rho^{(\ell^{*})}}]\right|=\Omega(\frac{1}{{\rm
poly}(n)})\,,$ (4.15)
and
$|\mathcal{Y}^{*}|/|\mathcal{Y}^{\prime}|\geq\frac{1}{k}=\Omega(\frac{1}{{\rm
poly}(n)})$. Notice that this implies
$|\mathcal{Y}^{*}|/|\mathcal{Y}|=\Omega(\frac{1}{2^{{\rm poly}(n)}})$.
Since the trace of $M$ with a POVM operator is at most $\|M\|_{1}$ (B.5), we
have for all $\mathcal{\widetilde{O}}_{T_{V}}\in\mathcal{Y}^{*}$,
$\displaystyle\Omega(\frac{1}{{\rm
poly}(n)})=\left|\Tr[E^{V,(\ell^{*})}d_{V,\rho^{(\ell^{*})}}]\right|\leq\left\|d_{V,\rho^{(\ell^{*})}}\right\|_{1}\,.$
(4.16)
When queries are entangled with at most $O(\log(n))$ additional qubits, the
premise of Lemma 4.6 holds; then one of the quantities in the theorem
statement must be large. However, Lemma 4.7 says that a given $\rho$ can only
satisfy either of the first two quantities for a smaller-than-exponential
fraction of $\mathcal{Y}$. So for most choices of
$\mathcal{\widetilde{O}}_{T_{V}}\in\mathcal{Y}^{*}$,
$\displaystyle\Tr[\rho^{(\ell^{*})}\left(\mathbb{I}_{V,z}-\frac{|V|}{N}\mathbb{I}_{[N],z}\right)]=\Omega(\frac{1}{{\rm
poly}(n)})\,,$ (4.17)
for at least one of $z\in\\{\pm 1\\}$.
Inspecting the proof of Lemma 4.6, this implies $d_{V,\rho^{(\ell^{*})}}$ can
only have $\Omega(\frac{1}{{\rm poly}(n)})$ weight on $C_{4,z}$ for some
$z\in\\{\pm 1\\}$ across all matrices in $\mathcal{C}$. In fact, for most
choices of $\mathcal{\widetilde{O}}_{T_{V}}\in\mathcal{Y}^{*}$, we show that
this is also true for
$\displaystyle
d_{V,\ell^{*},j}:=\mathcal{\widetilde{O}}_{T_{V}}^{(\ell^{*}+j)}\circ\mathcal{U}^{(\ell^{*}+j)}\circ\dots\circ\mathcal{\widetilde{O}}_{T_{V}}^{(\ell^{*}+1)}\circ\mathcal{U}^{(\ell^{*}+1)}\circ
d_{V,\rho^{(\ell^{*})}}\,,$ (4.18)
for all $0\leq j\leq k-\ell^{*}$. We show this by induction. Note that by B.5
and the fact that $d_{V,\ell^{*},k-\ell^{*}}$ is the difference of two objects
with nuclear norm $1$, $\|d_{V,\ell^{*},k-\ell^{*}}\|_{1}=\Omega(\frac{1}{{\rm
poly}(n)})=O(1)$.
Consider $d_{V,\ell^{*},i}$ for some $1\leq i\leq k-\ell^{*}$, which can be
represented with the basis $\mathcal{C}$. By B.9, it has Frobenius norm at
most $\|d_{V,\rho^{(\ell^{*})}}\|_{Fr}=O(\frac{1}{\sqrt{|V|}})$. So it must
have $o(\frac{1}{{\rm poly}(n)})$ weight on pure states. Inspecting the basis
$\mathcal{C}$, this means $d_{V,\rho^{(\ell^{*})}}$ can only have
$\Omega(\frac{1}{{\rm poly}(n)})$ weight on $C_{4,z}$ or
$\frac{1}{N}\mathbb{I}_{[N],z}$ for $z\in\\{\pm 1\\}$. By B.9,
$d_{V,\rho^{(\ell^{*})}}$ has nuclear norm at least
$\|d_{V,\ell^{*},k-\ell^{*}}\|_{1}=\Omega(\frac{1}{{\rm poly}(n)})$, so it
must have $\Omega(\frac{1}{{\rm poly}(n)})$ weight on at least one such
matrix. Suppose for contradiction that the matrix is
$\frac{1}{N}\mathbb{I}_{[N],z}$ for $z\in\\{\pm 1\\}$. Then
$\displaystyle\Omega(\frac{1}{{\rm
poly}(n)})=\Tr[\mathbb{I}_{[N],z}\mathcal{\widetilde{O}}_{T_{V}}\left[\mathcal{U}^{(\ell^{*}+i)}\left[d_{V,\ell^{*},i-1}\right]\right]]=\Tr[\left(\mathcal{U}^{(\ell^{*}+i)\dagger}\circ\mathcal{\widetilde{O}}_{T_{V}}^{\dagger}[\mathbb{I}_{[N],z}]\right)d_{V,\ell^{*},i-1}]\,.$
(4.19)
By the inductive hypothesis, $d_{V,\ell^{*},i-1}$ only has
$\Omega(\frac{1}{{\rm poly}(n)})$ weight on some $C_{4,z}$ for $z\in\\{\pm
1\\}$. Then for some $z^{\prime}\in\\{\pm 1\\}$,
$\displaystyle\Omega(\frac{1}{{\rm
poly}(n)})=\Tr[\left(\mathcal{U}^{(\ell^{*}+i)\dagger}\circ\mathcal{\widetilde{O}}_{T_{V}}^{\dagger}[\mathbb{I}_{[N],z}]\right)\left(\frac{1}{|V|}\mathbb{I}_{V,z^{\prime}}-\frac{1}{N}\mathbb{I}_{[N],z^{\prime}}\right)]\,.$
(4.20)
Notice that the pair
$\\{\mathcal{U}^{(\ell^{*}+i)\dagger}\circ\mathcal{\widetilde{O}}_{T_{V}}^{\dagger}[\mathbb{I}_{[N],z=+1}],\mathcal{U}^{(\ell^{*}+i)\dagger}\circ\mathcal{\widetilde{O}}_{T_{V}}^{\dagger}[\mathbb{I}_{[N],z=-1}]\\}$
forms a POVM. By Lemma 4.8, this can only be satisfied at either $z\in\\{\pm
1\\}$ for a smaller-than-exponential fraction of choices of $V$. So for most
choices of $\mathcal{\widetilde{O}}_{T_{V}}\in\mathcal{Y}^{*}$ (i.e. a
$\Omega(\frac{1}{2^{{\rm poly}(n)}})$ fraction of choices of $V$),
$d_{V,\ell^{*},i}$ has $\Omega(\frac{1}{{\rm poly}(n)})$ weight on $C_{4,z}$
for at least one of $z\in\\{\pm 1\\}$, and for no other matrices in
$\mathcal{C}$.
Since $\Omega(\frac{1}{{\rm
poly}(n)})=\left|\Tr[Ed_{V,\ell^{*},k-\ell^{*}}]\right|$, our supposition then
implies that for one of $z\in\\{\pm 1\\}$,
$\displaystyle\Omega(\frac{1}{{\rm
poly}(n)})=\left|\Tr[EC_{4,z}]\right|=\left|\Tr[E\left(\frac{1}{|V|}\mathbb{I}_{V,z}-\frac{1}{N}\mathbb{I}_{[N],z}\right)]\pm
O(\frac{1}{2^{{\rm poly}(n)}})\right|\,.$ (4.21)
But by Lemma 4.8, this can only be satisfied at either $z\in\\{\pm 1\\}$ for a
smaller-than-exponential fraction of $\mathcal{Y}$. This is a contradiction.
So there can be no efficient protocol for this problem. ∎
### 4.2 Standard oracles: when classical witnesses are enough
As shown in Theorem 4.3, randomized standard oracles can also form a
representation. But the preserved group structure is much different than for
randomized in-place oracles. Consider the set $T_{\varnothing}$ of
permutations on $[N]$. For any $f_{1},f_{2}\in T_{\varnothing}$, the element
$f_{1}f_{2}$ in this group structure acts for all $x\in[N]$ and $z\in\\{\pm
1\\}$ as
$\displaystyle(f_{1}f_{2})^{z}(x)=f_{1}^{z}(x)\oplus f_{2}^{z}(x)\,.$ (4.22)
Note that this operation is commutative; that is, $(f_{1}f_{2})=(f_{2}f_{1})$. Any
finite abelian group can always be represented as the direct sum of cyclic
groups. In fact, under this group operation, $T_{\varnothing}$ can be
decomposed by the input $x\in[N]$ and function inverter $z\in\\{\pm 1\\}$:
$\displaystyle T_{\varnothing}=\bigoplus_{x\in[2^{n}],z\in\\{\pm
1\\}}\mathbb{Z}_{2^{n}}\,.$ (4.23)
With this group operation, the only possible subgroups of $T_{\varnothing}$
have the form
$\displaystyle\bigoplus_{x\in[2^{n}],z\in\\{\pm
1\\}}\mathbb{Z}_{2^{k_{x,z}}}\,,$ (4.24)
for $0\leq k_{x,z}\leq n$. As a result, there is a ${\mathsf{QCMA}}$ protocol
to distinguish any strict subgroup of $T_{\varnothing}$ from
$T_{\varnothing}$.
###### Theorem 4.11.
There is a one-query ${\mathsf{QCMA}}$ protocol for STANDARD RANDOMIZED HIDDEN
SUBGROUP$(\times,\\{H_{i}\\})$ when the group operation $\times$ is bitwise
XOR, for any valid $\\{H_{i}\\}$.
###### Proof.
Suppose the classical witness is a bitstring of length at least $n+1$. The
verifier can then:
1.
Use the first $n$ bits to construct $x$ and the next bit to construct $z$.
2.
Prepare the state $\ket{0}^{\otimes n}\ket{x,z}$.
3.
Apply $\mathcal{O}_{H}$, creating the state $\ket{f^{z}(x)}\ket{x,z}$ for some
$f\in H$.
4.
Measure the first $n$ qubits, and accept if the result is even. (Depending on
the encoding, one can simply measure the $n^{\text{th}}$ qubit and accept if
the result is $0$.)
Consider a YES instance associated with a subgroup $H\subsetneq
T_{\varnothing}$. Then $H$ will have some $x\in[N],z\in\\{\pm 1\\}$ such that
$k_{x,z}<n$. A witness can store $x$ and $z$; since $k_{x,z}<n$, $f^{z}(x)$
will be even with probability $1$.
In the NO instance, $H=T_{\varnothing}$. Then $f^{z}(x)$ is even with
probability $0.5$ for every $x\in[N],z\in\\{\pm 1\\}$. ∎
Note that Theorem 4.11 holds even if the randomized standard oracle
$\mathcal{O}_{F}$ does not have access to the function inverse.
## 5 No witness is enough for phase oracles
We show that deciding RANDOMIZED HIDDEN SUBSET$(\alpha)$ in a phase oracle is
much harder than other oracle models we consider. A random phase has _zero_
expectation. We use this fact to show that queries to most YES instances and
the NO instance reduce the magnitude of each off-diagonal of the density
matrix by an exponential factor, regardless of the input state. We bound the
Frobenius norm of the difference of query outputs to show that these instances
are statistically indistinguishable when the state space is not too large. As
a result, no untrustworthy witness can help decide this problem.
###### Theorem 5.1.
No quantum verifier that entangles oracle queries with at most $o(n)$
additional qubits can efficiently decide PHASE RANDOMIZED HIDDEN
SUBSET$(\alpha)$ for any $0<\alpha<\frac{1}{2}$, even with _any_ witness
designed to make the verifier accept. Moreover, these verifiers require an
exponential number of queries to statistically distinguish a YES instance from
the NO instance, for _each_ of asymptotically all YES instances.
###### Proof.
We first explain why the query lower bound implies that a witness cannot help.
In the NO instance, the witness is designed to fool the verifier; in order to
overcome this, the verifier must use the witness in tandem with the oracle.
But this cannot be done efficiently; regardless of input state, distinguishing
the NO instance from nearly any YES instance requires an exponential number of
queries.
We now prove the query lower bound. Let $k$ be the number of queries required
to distinguish $\mathcal{\overline{O}}_{T_{V}}$ from
$\mathcal{\overline{O}}_{T_{\varnothing}}$. Consider any algorithm that
distinguishes the two instances, defined by a starting state $\rho_{0}$, $k$
unitaries, $k$ oracle queries, and a POVM $\\{E,\mathbb{I}-E\\}$. In the
framework of hybrid algorithms (Definition 4.10),
$\displaystyle\Omega(\frac{1}{{\rm poly}(n)})$
$\displaystyle=\left|\Tr[EA_{V,k}[\rho_{0}]]-\Tr[EA_{V,0}[\rho_{0}]]\right|$
(5.1)
$\displaystyle\leq\left\|A_{V,k}[\rho_{0}]-A_{V,0}[\rho_{0}]\right\|_{1}$
(5.2) $\displaystyle\leq
k\max_{i\in\\{0,\dots,k-1\\}}\left\|A_{V,i+1}[\rho_{0}]-A_{V,i}[\rho_{0}]\right\|_{1}$
(5.3) $\displaystyle\leq
k\max_{i\in\\{0,\dots,k-1\\}}\left\|\mathcal{\overline{O}}_{T_{V}}[\rho^{(i)}]-\mathcal{\overline{O}}_{T_{\varnothing}}[\rho^{(i)}]\right\|_{1}\,,$
(5.4)
where the last line follows because randomized oracles do not increase the
nuclear norm (B.9).
We now bound
$\left\|\mathcal{\overline{O}}_{T_{V}}[\rho]-\mathcal{\overline{O}}_{T_{\varnothing}}[\rho]\right\|_{1}$
for _any_ $\rho$. Recall that a phase oracle $\overline{O}_{F}$ acts as
$\displaystyle\overline{O}_{F}\left[\ket{x_{1},z_{1}}\bra{x_{2},z_{2}}\right]=\frac{1}{|F|}\sum_{f\in
F}\omega_{N}^{f^{z_{1}}(x_{1})-f^{z_{2}}(x_{2})}\ket{x_{1},z_{1}}\bra{x_{2},z_{2}}\,,$
(5.5)
for any $x_{1},x_{2}\in[N]$ and $z_{1},z_{2}\in\\{\pm 1\\}$. So every basis
vector $\ket{x_{1},z_{1}}\bra{x_{2},z_{2}}$ acquires a coefficient
$c_{x_{1},z_{1},x_{2},z_{2}}$.
We start with $\overline{O}_{T_{\varnothing}}$ (the NO instance). When
$(x_{1},z_{1})=(x_{2},z_{2})$, the coefficient is $1$. When $z_{1}\neq z_{2}$,
$f^{z}(x_{1})$ and $f^{-z}(x_{2})$ (writing $z=z_{1}=-z_{2}$) are uniformly
likely to be any pair of values, so the coefficient is
$\displaystyle\frac{1}{N^{2}}\sum_{a\in[N],b\in[N]}\omega_{N}^{a-b}=\frac{1}{N^{2}}\left\|\sum_{a\in[N]}\omega_{N}^{a}\right\|^{2}=0\,.$
(5.6)
Similarly, when $z_{1}=z_{2}=z$ and $x_{1}\neq x_{2}$, $f^{z}(x_{1})$ and
$f^{z}(x_{2})$ are uniformly likely to be any unequal values; the coefficient is
$\displaystyle\frac{1}{N(N-1)}\sum_{a\in[N],b\in[N],a\neq b}\omega_{N}^{a-b}$
$\displaystyle=\frac{1}{N(N-1)}\big{[}\sum_{a\in[N],b\in[N]}\omega_{N}^{a-b}-\sum_{a\in[N]}\omega_{N}^{a-a}\big{]}=-\frac{1}{N-1}\,.$
(5.7)
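The two root-of-unity averages (5.6) and (5.7) can be checked numerically; a small Python sketch:

```python
import cmath

N = 64
omega = cmath.exp(2j * cmath.pi / N)

# (5.6): the average of omega^(a-b) over all pairs (a, b) vanishes.
all_pairs = sum(omega ** (a - b) for a in range(N) for b in range(N)) / N**2
# (5.7): the average over unequal pairs is -1/(N-1).
unequal = sum(omega ** (a - b) for a in range(N) for b in range(N) if a != b) / (N * (N - 1))

assert abs(all_pairs) < 1e-9
assert abs(unequal - (-1 / (N - 1))) < 1e-9
```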
We now consider $\overline{O}_{T_{V}}$ (a YES instance). When
$(x_{1},z_{1})=(x_{2},z_{2})$, the coefficient is again $1$. When $z_{1}\neq
z_{2}$, the values of $f^{z}(x_{1})$ and $f^{-z}(x_{2})$ (writing
$z=z_{1}=-z_{2}$) are uniformly likely to be any values in $i_{V}(x_{1})$ and
$i_{V}(x_{2})$, respectively, so the coefficient is
$\displaystyle\frac{1}{|i_{V}(x_{1})|\times|i_{V}(x_{2})|}\sum_{a\in
i_{V}(x_{1}),b\in i_{V}(x_{2})}\omega_{N}^{a-b}\,.$ (5.8)
Similarly, when $z_{1}=z_{2}=z$ and $x_{1}\neq x_{2}$, $f^{z}(x_{1})$ and
$f^{z}(x_{2})$ are uniformly likely to be any unequal values in $i_{V}(x_{1})$
and $i_{V}(x_{2})$, respectively, so the coefficient is
$\displaystyle\frac{1}{|i_{V}(x_{1})|\times\left(|i_{V}(x_{2})|-\delta\right)}\sum_{a\in
i_{V}(x_{1}),b\in i_{V}(x_{2}),a\neq b}\omega_{N}^{a-b}=\frac{\left(\sum_{a\in
i_{V}(x_{1}),b\in
i_{V}(x_{2})}\omega_{N}^{a-b}\right)-\delta|i_{V}(x_{1})|}{|i_{V}(x_{1})|\times\left(|i_{V}(x_{2})|-\delta\right)}\,,$
(5.9)
where $\delta$ is $1$ if $i_{V}(x_{1})=i_{V}(x_{2})$ and $0$ otherwise.
Consider the object
$\mathcal{\overline{O}}_{T_{V}}[\rho]-\mathcal{\overline{O}}_{T_{\varnothing}}[\rho]$
as the sum of two matrices $A_{V}+B_{V}$. Let $A_{V}$ contain the
$(V,z)\times(V,z)$ submatrix for both $z\in\\{\pm 1\\}$, and $B_{V}$ contain
the rest of the entries. When the oracle query is entangled with $o(n)$
additional qubits, $A_{V}$ has rank $O(|V|)$, and $B_{V}$ has rank $O(N)$.
Since the roots of unity sum to zero, $\sum_{a\in
V}\omega_{N}^{a}=-\sum_{a\in[N]/V}\omega_{N}^{a}$ for any $V\subseteq[N]$.
Because of this,
$\displaystyle\left\|\sum_{a,b\in
i_{V}(x)}\omega_{N}^{a-b}\right\|\leq\left\|\sum_{a\in
i_{V}(x)}\omega_{N}^{a}\right\|^{2}\leq\left\|\sum_{a\in
V}\omega_{N}^{a}\right\|^{2}=O(|V|^{2})\,.$ (5.10)
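The cancellation $\sum_{a\in V}\omega_{N}^{a}=-\sum_{a\in[N]/V}\omega_{N}^{a}$ and the trivial bound behind (5.10) can be verified directly for a small example subset (chosen arbitrarily here):

```python
import cmath

N = 64
omega = cmath.exp(2j * cmath.pi / N)
V = {0, 3, 7, 10, 31}                       # an arbitrary small subset of [N]

sum_V = sum(omega**a for a in V)
sum_comp = sum(omega**a for a in range(N) if a not in V)
assert abs(sum_V + sum_comp) < 1e-9         # the two sums cancel exactly
assert abs(sum_V) <= len(V)                 # trivial bound; squaring gives O(|V|^2)
```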
As a result, all coefficients in $B_{V}$ are $O(\frac{1}{N^{1-\alpha}})$.
In Lemma 5.2, we show that for asymptotically all choices of $V$, all
coefficients in $A_{V}$ are $O(\frac{1}{N^{3\alpha/4}})$. This argument uses a
Chernoff bound and a central limit argument on samples without replacement.
We bound the nuclear norm of
$\mathcal{\overline{O}}_{T_{V}}[\rho]-\mathcal{\overline{O}}_{T_{\varnothing}}[\rho]$
with the rank and Frobenius norm of $A_{V}$ and $B_{V}$ (B.3):
$\displaystyle\left\|\mathcal{\overline{O}}_{T_{V}}[\rho]-\mathcal{\overline{O}}_{T_{\varnothing}}[\rho]\right\|_{1}$
$\displaystyle\leq\|A_{V}\|_{1}+\|B_{V}\|_{1}$ (5.11)
$\displaystyle\leq O(\sqrt{|V|})\|A_{V}\|_{Fr}+O(\sqrt{N})\|B_{V}\|_{Fr}$ (5.12)
$\displaystyle\leq\left(O(\sqrt{|V|})O(N^{-3\alpha/4})+O(\sqrt{N})O(N^{\alpha-1})\right)\|\rho\|_{Fr}$
(5.13) $\displaystyle=O(N^{-\alpha/4}+N^{\alpha-1/2})\,.$ (5.14)
Thus, for most choices of $V$, distinguishing $\mathcal{\overline{O}}_{T_{V}}$
and $\mathcal{\overline{O}}_{T_{\varnothing}}$ requires
$k=\Omega(\min(N^{\alpha/4},N^{1/2-\alpha}))$ queries. ∎
We now prove the Chernoff bound:
###### Lemma 5.2.
Fix any $0<\alpha<\frac{1}{2}$, and consider all subsets $V\subseteq[N]$ such
that $|V|=N^{\alpha}$. Then for all but a doubly exponentially small fraction
of choices of $V$,
$\displaystyle\left\|\frac{1}{N^{2\alpha}}\sum_{a,b\in
V}\omega_{N}^{a-b}\right\|=O(\frac{1}{N^{3\alpha/4}})\,.$ (5.15)
###### Proof.
Consider the distribution $X=\\{\omega_{N}^{k}\\}$ where $k$ is chosen
uniformly from $[N]$. Both $Re(X)$ and $Im(X)$ have mean zero and variance at
most $1$.
Take a size-$N^{\alpha}$ sample from the distribution $X$, _without
replacement_. Denote $Y$ as the distribution of the sample mean. Both $Re(Y)$
and $Im(Y)$ have expectation $\mathbb{E}[Re(X)]=\mathbb{E}[Im(X)]=0$, and variance
$\displaystyle\frac{\sigma_{X}^{2}}{N^{\alpha}}(1-\frac{N^{\alpha}-1}{N-1})\leq\frac{1}{N^{\alpha}}\,.$
(5.16)
Even when sampling without replacement, $Y$ is asymptotically normally
distributed [Erd59]. So its moment generating function is
$\displaystyle\text{MGF}_{Y}[t]=e^{t\mu_{Y}+\sigma_{Y}^{2}t^{2}/2}\leq
e^{t^{2}/N^{\alpha}}\,.$ (5.17)
We use a Chernoff bound to estimate when $Y$ has magnitude at least
$N^{-3\alpha/8}$. Notice that
$\displaystyle\Pr[Y\geq a]$ $\displaystyle=\Pr[e^{tY}\geq e^{ta}]\leq
e^{-at}\text{MGF}_{Y}[t]\leq e^{t^{2}/N^{\alpha}}e^{-at}\,,$ (5.18)
so
$\displaystyle\Pr[Y\geq\frac{0.5}{N^{3\alpha/8}}]$
$\displaystyle\leq\inf_{t\geq 0}\
\exp(\frac{t^{2}}{N^{\alpha}}-\frac{0.5t}{N^{3\alpha/8}})\
\underset{t=2N^{\alpha/2}}{\leq}\exp(4-N^{\alpha/8})=\exp(-\exp(\Omega(n)))\,.$
(5.19)
Applying the same bound to $\pm Re(Y)$ and $\pm Im(Y)$ and taking a union
bound, $Y$ has magnitude at most $N^{-3\alpha/8}$ (and $|Y|^{2}$ at most
$N^{-3\alpha/4}$) except in a doubly exponentially small fraction of choices
of $V$. ∎
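A Monte Carlo sketch of this concentration (Python; the parameters $n=20$, $\alpha=0.4$, and the $200$ trials are illustrative choices, not from the text):

```python
import cmath
import random

random.seed(0)
n, alpha = 20, 0.4
N = 2**n
size = round(N**alpha)            # |V| = N^alpha = 256 here

def mean_sq(N, size):
    """|Y|^2 for the mean Y of omega_N^a over a random size-|V| subset V."""
    V = random.sample(range(N), size)
    Y = sum(cmath.exp(2j * cmath.pi * a / N) for a in V) / size
    return abs(Y) ** 2

bound = N ** (-3 * alpha / 4)     # the lemma's O(N^{-3 alpha/4}) scale
vals = [mean_sq(N, size) for _ in range(200)]
assert sum(v <= bound for v in vals) >= 180   # almost every V meets the bound
```

Empirically the typical size of $|Y|^{2}$ is about $1/|V|=N^{-\alpha}$, comfortably inside the lemma's $O(N^{-3\alpha/4})$ bound.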
###### Remark 5.3.
Consider an oracle that sends $\ket{c,x,z}\to\omega_{N}^{c\cdot
f^{z}(x)}\ket{c,x,z}$, where the $c$ register has $k$ qubits. Note that
Theorem 5.1 applies whenever $k=o(n)$. However, there must be a phase
transition, since at $k=n$, this oracle is unitarily equivalent to a standard
oracle, and thus has a ${\mathsf{QMA}}$ protocol for RANDOMIZED HIDDEN
SUBSET$(\alpha)$ in Section 3.
## Acknowledgements
Thanks to Casey Duckering, Juspreet Singh Sandhu, Peter Shor, and Justin Yirka
for collaborating on early stages of this project. KM thanks many others for
engaging discussions, including Adam Bouland, Antares Chen, Aram Harrow, Matt
Hastings, Eric Hester, Neng Huang, Robin Kothari, Brian Lawrence, Yi-Kai Liu,
Patrick Lutz, Tushant Mittal, Abhijit Mudigonda, Chinmay Nirkhe, and Aaron
Potechin. Thanks to Chinmay Nirkhe for feedback on a previous version of this
manuscript.
BF and RB acknowledge support from AFOSR (YIP number FA9550-18-1-0148 and
FA9550-21-1-0008). This material is based upon work partially supported by the
National Science Foundation under Grant CCF-2044923 (CAREER) and by the U.S.
Department of Energy, Office of Science, National Quantum Information Science
Research Centers as well as by DOE QuantISED grant DE-SC0020360. KM
acknowledges support from the National Science Foundation Graduate Research
Fellowship Program under Grant No. DGE-1746045. Any opinions, findings, and
conclusions or recommendations expressed in this material are those of the
author(s) and do not necessarily reflect the views of the National Science
Foundation.
## References
* [Aar21] Scott Aaronson. Open Problems Related to Quantum Query Complexity, 2021. arXiv:2109.06917.
* [ACL11] Andris Ambainis, Andrew M. Childs, and Yi-Kai Liu. Quantum Property Testing for Bounded-Degree Graphs. Lecture Notes in Computer Science, page 365–376, 2011. arXiv:1012.3174.
* [AGS22] Atul Singh Arora, Alexandru Gheorghiu, and Uttam Singh. Oracle separations of hybrid quantum-classical circuits, 2022. arXiv:2201.01904.
* [AK07] Scott Aaronson and Greg Kuperberg. Quantum Versus Classical Proofs and Advice. Theory Comput., 3(1):129–157, 2007. arXiv:quant-ph/0604056.
* [ALM+98] Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy. Proof verification and the hardness of approximation problems. Journal of the ACM (JACM), 45(3):501–555, 1998. URL https://doi.org/10.1145/1236457.1236459.
* [Amb00] Andris Ambainis. Quantum lower bounds by quantum arguments. In Proceedings of the thirty-second annual ACM symposium on Theory of computing - STOC ’00. ACM Press, 2000. arXiv:quant-ph/0002066.
* [AN02] Dorit Aharonov and Tomer Naveh. Quantum NP - A Survey, 2002. arXiv:quant-ph/0210077.
* [BBBV97] Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani. Strengths and Weaknesses of Quantum Computing. SIAM Journal on Computing, 26(5):1510–1523, Oct 1997. arXiv:quant-ph/9701001.
* [Chi22] Andrew Childs. Lecture Notes on Quantum Algorithms, 2022. URL https://www.cs.umd.edu/~amchilds/qa/qa.pdf.
* [Erd59] Paul Erdos. On the central limit theorem for samples from a finite population. Publications of the Mathematical Institute of the Hungarian Academy of Sciences, 4:49–61, 1959. URL https://old.renyi.hu/~p_erdos/1959-13.pdf.
* [FK18] Bill Fefferman and Shelby Kimmel. Quantum vs Classical Proofs and Subset Verification, 2018. arXiv:1510.06750.
* [Fri04] Joel Friedman. A proof of Alon’s second eigenvalue conjecture and related problems. CoRR, 2004. arXiv:cs/0405020.
* [GKS15] Alex Bredariol Grilo, Iordanis Kerenidis, and Jamie Sikora. QMA with Subset State Witnesses. In Mathematical Foundations of Computer Science 2015 - 40th International Symposium. Springer, 2015. arXiv:1410.2882.
* [GZ19] Chris Godsil and Hanmeng Zhan. Discrete-time quantum walks and graph structures. Journal of Combinatorial Theory, Series A, 167:181–212, 2019. arXiv:1701.04474.
* [Har13] Aram W. Harrow. The Church of the Symmetric Subspace, 2013. arXiv:1308.6595.
* [HR11] Aram W. Harrow and David J. Rosenbaum. Uselessness for an Oracle Model with Internal Randomness, 2011. arXiv:1111.1462.
* [JNV+21] Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen. MIP*=RE. Communications of the ACM, 64(11):131–138, 2021. arXiv:2001.04383.
* [KKVB02] Elham Kashefi, Adrian Kent, Vlatko Vedral, and Konrad Banaszek. Comparison of quantum oracles. Physical Review A, 65(5), May 2002. arXiv:quant-ph/0109104.
* [KL20] Jonathan Katz and Yehuda Lindell. Introduction to modern cryptography. CRC press, 2020. URL http://www.cs.umd.edu/~jkatz/imc.html.
* [KM93] Eyal Kushilevitz and Yishay Mansour. Learning Decision Trees Using the Fourier Spectrum. SIAM J. Comput., 22(6):1331–1348, 1993. URL https://doi.org/10.1137/0222080.
* [Lut11] Andrew Lutomirski. Component mixers and a hardness result for counterfeiting quantum money, 2011. arXiv:1107.0321.
* [NN22] Anand Natarajan and Chinmay Nirkhe. A classical oracle separation between QMA and QCMA, 2022. arXiv:2210.15380.
* [Pet00] Julius Petersen. Die Theorie der regulären graphs. Acta Mathematica, 15:193 – 220, 1900. URL https://doi.org/10.1007/BF02392606.
* [Ren05] Jason D. M. Rennie. Relating the Trace and Frobenius Matrix Norms, August 2005. URL http://people.csail.mit.edu/jrennie/writing/traceFrobenius.pdf.
* [Sha92] Adi Shamir. IP=PSPACE. Journal of the ACM (JACM), 39(4):869–877, 1992. URL https://doi.org/10.1145/146585.146609.
* [Tre08] Luca Trevisan. Max Cut and the Smallest Eigenvalue, 2008. arXiv:0806.1978.
* [Wat98] John Watrous. Quantum simulations of classical random walks and undirected graph connectivity, 1998. arXiv:cs/9812012.
## Appendix A Basis elements of symmetric subspaces
In this section we find an orthogonal basis for
$\mathsf{\widetilde{V}}_{T_{\varnothing}}$ and
$\mathsf{\widetilde{V}}_{T_{V}}$, and prove Lemma 4.6.
###### Theorem A.1 (Basis elements of
$\mathsf{\widetilde{V}}_{T_{\varnothing}}$).
Consider $T_{\varnothing}$, the set of all permutations of $[N]$. Then
$\mathsf{\widetilde{V}}_{T_{\varnothing}}$ is spanned by the matrices
$\displaystyle A_{z_{1},z_{2}}$ $\displaystyle=\ket{+}^{\otimes
n}\bra{+}^{\otimes n}\otimes\ket{z_{1}}\bra{z_{2}}\,,$ (A.1) $\displaystyle
B^{\prime}_{z_{1}}$
$\displaystyle=\mathbb{I}_{[N]}\otimes\ket{z_{1}}\bra{z_{1}}\,,$ (A.2)
for $z_{1},z_{2}\in\\{\pm 1\\}$.
###### Proof.
By definition, $\mathsf{\widetilde{V}}_{T_{\varnothing}}$ contains the set of
matrices $M$ such that $U_{\pi}MU_{\pi}^{\dagger}=M$ for all permutations
$\pi:[N]\to[N]$. We can always write $M$ in the following form:
$\displaystyle M$ $\displaystyle:=\sum_{x_{1},x_{2}\in[N],z_{1},z_{2}\in\\{\pm
1\\}}\alpha_{x_{1},z_{1},x_{2},z_{2}}\ket{x_{1},z_{1}}\bra{x_{2},z_{2}}\,.$
(A.3)
Then $U_{\pi}MU_{\pi}^{\dagger}$ has the form
$\displaystyle U_{\pi}MU_{\pi}^{\dagger}$
$\displaystyle=\sum_{x_{1},x_{2}\in[N],z_{1},z_{2}\in\\{\pm
1\\}}\alpha_{x_{1},z_{1},x_{2},z_{2}}\ket{\pi^{z_{1}}(x_{1}),z_{1}}\bra{\pi^{z_{2}}(x_{2}),z_{2}}$
(A.4) $\displaystyle=\sum_{x_{1},x_{2}\in[N],z_{1},z_{2}\in\\{\pm
1\\}}\alpha_{\pi^{-z_{1}}(x_{1}),z_{1},\pi^{-z_{2}}(x_{2}),z_{2}}\ket{x_{1},z_{1}}\bra{x_{2},z_{2}}\,.$
(A.5)
Thus, for all permutations $\pi$, and for all $x_{1},x_{2}\in[N]$ and
$z_{1},z_{2}\in\\{\pm 1\\}$,
$\displaystyle\alpha_{x_{1},z_{1},x_{2},z_{2}}=\alpha_{\pi^{-z_{1}}(x_{1}),z_{1},\pi^{-z_{2}}(x_{2}),z_{2}}\,.$
(A.6)
Given $(x_{1},z_{1},x_{2},z_{2})$, this restriction depends on the possible
values of $(\pi^{-z_{1}}(x_{1}),\pi^{-z_{2}}(x_{2}))$:
* •
When $z_{1}\neq z_{2}$, this can take on any value $(j,k)\in[N]\times[N]$.
* •
When $z_{1}=z_{2}$ and $x_{1}\neq x_{2}$, this can take on any value
$(j,k)\in[N]\times[N]$ such that $j\neq k$.
* •
When $z_{1}=z_{2}$ and $x_{1}=x_{2}$, this can take on any value
$(j,j)\in[N]\times[N]$.
As a result, for a given choice of $z_{1},z_{2}\in\\{\pm 1\\}$, all
coefficients are equal except possibly on the diagonal when $z_{1}=z_{2}$.
There are then six basis elements, one proportional to
$\mathbb{1}_{N}\otimes\ket{z_{1}}\bra{z_{2}}\propto\ket{+}^{\otimes
n}\bra{+}^{\otimes n}\otimes\ket{z_{1}}\bra{z_{2}}$ for each
$z_{1},z_{2}\in\\{\pm 1\\}$, and one proportional to
$\mathbb{I}_{[N]}\otimes\ket{z}\bra{z}$ for each $z\in\\{\pm 1\\}$. ∎
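For a small $N$ one can verify the invariance $U_{\pi}MU_{\pi}^{\dagger}=M$ of these six matrices by brute force. The sketch below (NumPy; encoding the $z$ register as the outermost tensor factor is an implementation choice) checks every permutation of $[4]$:

```python
import itertools
import numpy as np

N = 4                                      # N = 2^n with n = 2
plus = np.full((N, 1), 1 / np.sqrt(N))     # |+>^{(x) n}

def U_pi(pi):
    """Permutation unitary |x,z> -> |pi^z(x),z>, with the z register outermost."""
    P = np.zeros((N, N))
    for x in range(N):
        P[pi[x], x] = 1
    Z = np.zeros((N, N))
    return np.block([[P, Z], [Z, P.T]])    # z=+1 block: pi; z=-1 block: pi^{-1}

def embed(block, z1, z2):
    """Place an N x N spatial block at position (z1, z2) of the z register."""
    E = np.zeros((2, 2))
    E[z1, z2] = 1
    return np.kron(E, block)

basis = [embed(plus @ plus.T, z1, z2) for z1 in range(2) for z2 in range(2)]
basis += [embed(np.eye(N), z, z) for z in range(2)]       # the B'_z elements

invariant = all(
    np.allclose(U_pi(pi) @ M @ U_pi(pi).T, M)
    for pi in itertools.permutations(range(N)) for M in basis
)
assert invariant and len(basis) == 6
```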
###### Theorem A.2 (Basis elements of $\mathsf{\widetilde{V}}_{T_{V}}$).
Consider $T_{V}$ for some non-empty $V\subsetneq[N]$. Then
$\mathsf{\widetilde{V}}_{T_{V}}$ is spanned by the matrices
$\displaystyle C^{\prime}_{S_{1},S_{2},z_{1},z_{2}}$
$\displaystyle=\ket{S_{1}}\bra{S_{2}}\otimes\ket{z_{1}}\bra{z_{2}}\,,$ (A.7)
$\displaystyle D_{S_{1},z_{1}}$
$\displaystyle=\mathbb{I}_{S_{1}}\otimes\ket{z_{1}}\bra{z_{1}}\,,$ (A.8)
for $S_{1},S_{2}\in\\{V,[N]/V\\}$ and $z_{1},z_{2}\in\\{\pm 1\\}$.
###### Proof.
The proof is similar to Theorem A.1. By definition,
$\mathsf{\widetilde{V}}_{T_{V}}$ contains the set of matrices $M$ such that
$U_{\pi}MU_{\pi}^{\dagger}=M$ for all permutations $\pi\in T_{V}$. We can
always write $M$ in the following form:
$\displaystyle M$ $\displaystyle:=\sum_{x_{1},x_{2}\in[N],z_{1},z_{2}\in\\{\pm
1\\}}\alpha_{x_{1},z_{1},x_{2},z_{2}}\ket{x_{1},z_{1}}\bra{x_{2},z_{2}}\,.$
(A.9)
Then for all $\pi\in T_{V}$, $x_{1},x_{2}\in[N]$ and $z_{1},z_{2}\in\\{\pm
1\\}$,
$\displaystyle\alpha_{x_{1},z_{1},x_{2},z_{2}}=\alpha_{\pi^{-z_{1}}(x_{1}),z_{1},\pi^{-z_{2}}(x_{2}),z_{2}}\,.$
(A.10)
Given $(x_{1},z_{1},x_{2},z_{2})$, this restriction depends on the possible
values of $(\pi^{-z_{1}}(x_{1}),\pi^{-z_{2}}(x_{2}))$:
* •
When $z_{1}\neq z_{2}$ or $i_{V}(x_{1})\neq i_{V}(x_{2})$,
$\alpha_{x_{1},z_{1},x_{2},z_{2}}=\alpha_{j,z_{1},k,z_{2}}$ whenever $j\in
i_{V}(x_{1})$ and $k\in i_{V}(x_{2})$.
* •
When $z=z_{1}=z_{2}$ and $i_{V}(x_{1})=i_{V}(x_{2})$,
$\alpha_{x_{1},z,x_{2},z}=\alpha_{j,z,k,z}$ whenever $j,k\in i_{V}(x_{1})$ and
$j\neq k$.
* •
When $z=z_{1}=z_{2}$ and $x=x_{1}=x_{2}$, $\alpha_{x,z,x,z}=\alpha_{j,z,j,z}$
whenever $j\in i_{V}(x)$.
As a result, for a given choice of $z_{1},z_{2}\in\\{\pm 1\\}$ and
$i_{V}(x_{1}),i_{V}(x_{2})\in\\{V,[N]/V\\}$, all coefficients are equal except
possibly on the diagonal when $z_{1}=z_{2}$ and $i_{V}(x_{1})=i_{V}(x_{2})$.
There are then twenty basis elements; one proportional to
$\ket{i_{V}(x_{1})}\bra{i_{V}(x_{2})}\otimes\ket{z_{1}}\bra{z_{2}}$ for each
$i_{V}(x_{1}),i_{V}(x_{2})\in\\{V,[N]/V\\}$ and $z_{1},z_{2}\in\\{\pm 1\\}$,
and one proportional to $\mathbb{I}_{i_{V}(x)}\otimes\ket{z}\bra{z}$ for each
$i_{V}(x)\in\\{V,[N]/V\\}$ and $z\in\\{\pm 1\\}$. ∎
We can now find an orthogonal basis for
$\mathsf{\widetilde{V}}_{T_{\varnothing}}$ and
$\mathsf{\widetilde{V}}_{T_{V}}$:
###### Theorem A.3 (Orthogonal basis elements of
$\mathsf{\widetilde{V}}_{T_{\varnothing}}$ and
$\mathsf{\widetilde{V}}_{T_{V}}$).
The symmetric subspace according to $T_{\varnothing}$ is spanned by the
orthogonal matrices
$\displaystyle A_{z_{1},z_{2}}$ $\displaystyle=\ket{+}^{\otimes
n}\bra{+}^{\otimes n}\otimes\ket{z_{1}}\bra{z_{2}}\,,$ (A.11) $\displaystyle
B_{z_{1}}$ $\displaystyle=\frac{1}{N}\left(\mathbb{I}_{[N]}-\ket{+}^{\otimes
n}\bra{+}^{\otimes n}\right)\otimes\ket{z_{1}}\bra{z_{1}}\,,$ (A.12)
for $z_{1},z_{2}\in\\{\pm 1\\}$.
Let $\delta_{i,j}$ be the Kronecker delta function. The symmetric subspace
according to $T_{V}$ for any nonempty $V\subsetneq[N]$ is spanned by the
matrices above, and the orthogonal matrices in $\mathcal{C}$, which are
$\displaystyle C_{1,z_{1},z_{2}}$
$\displaystyle=\left(\ket{V}\bra{[N]/V}-\ket{[N]/V}\bra{V}\right)\otimes\ket{z_{1}}\bra{z_{2}}\,,$
(A.13) $\displaystyle C_{2,z_{1},z_{2}}$
$\displaystyle=\left(\ket{V}\bra{[N]/V}+\ket{[N]/V}\bra{V}-\frac{2}{N\sqrt{|V|(N-|V|)}}\left(\ket{+}^{\otimes
n}\bra{+}^{\otimes
n}-\delta_{z_{1},z_{2}}\frac{1}{N}\mathbb{I}_{[N]}\right)\right)\otimes\ket{z_{1}}\bra{z_{2}}\,,$
(A.14) $\displaystyle C_{3,z_{1},z_{2}}$
$\displaystyle=\left(\ket{V}\bra{V}-\frac{|V|}{N-|V|}\ket{[N]/V}\bra{[N]/V}-\delta_{z_{1},z_{2}}\frac{1-\frac{|V|}{N-|V|}}{N}\mathbb{I}_{[N]}\right)\otimes\ket{z_{1}}\bra{z_{2}}\,,$
(A.15) $\displaystyle C_{4,z_{1}}$
$\displaystyle=\left(\frac{1}{|V|}(\mathbb{I}_{V}-\ket{V}\bra{V})-\frac{1}{N-|V|}(\mathbb{I}_{[N]/V}-\ket{[N]/V}\bra{[N]/V})\right)\otimes\ket{z_{1}}\bra{z_{1}}\,,$
(A.17)
for $z_{1},z_{2}\in\\{\pm 1\\}$. Moreover, these matrices have Frobenius norm
at most $O(1)$ and nuclear norm $\Theta(1)$.
###### Proof.
All matrices are orthogonal when the last qubit $\ket{z_{1}}\bra{z_{2}}$
doesn’t match. We first consider the basis elements of
$\mathsf{\widetilde{V}}_{T_{\varnothing}}$, as listed in Theorem A.1. When
$z_{1}\neq z_{2}$, $A_{z_{1},z_{2}}$ is the only basis vector; when
$z=z_{1}=z_{2}$, observe that $B_{z}$ is a linear combination of $A_{z,z}$ and
$B^{\prime}_{z}$, and
$\displaystyle\Tr[A_{z,z}^{\dagger}B_{z}]=\frac{1}{N}\left(\bra{+}^{\otimes
n}\mathbb{I}_{[N]}\ket{+}^{\otimes n}-\bra{+}^{\otimes n}\ket{+}^{\otimes
n}\bra{+}^{\otimes n}\ket{+}^{\otimes n}\right)=0\,.$ (A.18)
We now construct the basis of
$\mathsf{\widetilde{V}}_{T_{V}}/\mathsf{\widetilde{V}}_{T_{\varnothing}}$ by
orthogonalizing the vectors in Theorem A.2. When $z_{1}\neq z_{2}$, the only
vectors are linear combinations of $C^{\prime}_{S_{1},S_{2},z_{1},z_{2}}$ for
$S_{1},S_{2}\in\\{V,[N]/V\\}$ and $z_{1},z_{2}\in\\{\pm 1\\}$. Note that
$\bra{+}^{\otimes n}\ket{V}=\frac{1}{\sqrt{|V|N}}$.
* •
First of all, $C_{1,z_{1},z_{2}}$ is antisymmetric; for every state
$\ket{\psi}$, $\bra{\psi}C_{1,z_{1},z_{2}}\ket{\psi}=0$.
* •
$C_{2,z_{1},z_{2}}$ is constructed as the “symmetric” version of
$C_{1,z_{1},z_{2}}$, and then offset by $\ket{+}^{\otimes n}\bra{+}^{\otimes
n}$ to account for overlap with $A_{z_{1},z_{2}}$.
* •
$C_{3,z_{1},z_{2}}$ is immediately orthogonal to $C_{1,z_{1},z_{2}}$ and
$C_{2,z_{1},z_{2}}$, and inspection shows orthogonality with
$A_{z_{1},z_{2}}$.
When $z=z_{1}=z_{2}$, we modify $C_{1,z,z},C_{2,z,z},C_{3,z,z}$ to be
traceless (i.e. orthogonal to $\mathbb{I}_{N}\otimes\ket{z}\bra{z}$). The last
matrix $C_{4,z}$ must include $\mathbb{I}_{V}$; it is immediately orthogonal
to $C_{1,z,z},C_{2,z,z}$, and inspection shows that it is traceless and
orthogonal to any $\ket{S}\bra{S}\otimes\ket{z}\bra{z}$ for
$S\in\\{V,[N]/V\\}$.
We now calculate the norms of each vector. The Frobenius norm and nuclear norm
of the outer product $\ket{\psi}\bra{\psi}$ of a unit vector are both exactly $1$. The nuclear norm of
$\frac{1}{N}\mathbb{I}_{[N]}$ is $1$, and the Frobenius norm is
$\frac{1}{\sqrt{N}}$. Using Cauchy-Schwarz, the Frobenius norm and nuclear
norm of
$A_{z_{1},z_{2}},C_{1,z_{1},z_{2}},C_{2,z_{1},z_{2}},C_{3,z_{1},z_{2}}$ are
all $\Theta(1)$. The nuclear norm of $B_{z}$ and $C_{4,z}$ are both
$\Theta(1)$, and their Frobenius norms are $\Theta(\frac{1}{\sqrt{N}})$ and
$\Theta(\frac{1}{\sqrt{|V|}})$, respectively. ∎
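The orthogonality relation (A.18) and the norm claims for $B_{z}$ can be checked numerically on the spatial factors alone (the $\ket{z_{1}}\bra{z_{2}}$ factors only contribute trivially):

```python
import numpy as np

N = 16
plus = np.full((N, 1), 1 / np.sqrt(N))
A = plus @ plus.T                  # spatial part of A_{z1,z2}, per (A.11)
B = (np.eye(N) - A) / N            # spatial part of B_z, per (A.12)

assert abs(np.trace(A.T @ B)) < 1e-12          # (A.18): A and B are orthogonal
s = np.linalg.svd(B, compute_uv=False)
nuclear, frob = s.sum(), np.sqrt((s**2).sum())
assert abs(nuclear - (N - 1) / N) < 1e-9       # nuclear norm Theta(1)
assert abs(frob - np.sqrt(N - 1) / N) < 1e-9   # Frobenius norm Theta(1/sqrt(N))
```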
We use the orthogonal basis in Theorem A.3 to describe the form of good
distinguishers of $\mathcal{\widetilde{O}}_{T_{V}}$ and
$\mathcal{\widetilde{O}}_{T_{\varnothing}}$.
###### Fact A.4 (Representing the difference of randomized oracles).
Since $\mathcal{\widetilde{O}}_{T_{V}}$ and
$\mathcal{\widetilde{O}}_{T_{\varnothing}}$ are both orthogonal projectors
under the Frobenius inner product, the difference of their outputs can be
represented with the basis elements $\mathcal{C}$ of
$\mathsf{\widetilde{V}}_{T_{V}}/\mathsf{\widetilde{V}}_{T_{\varnothing}}$
(computed in Appendix A). That is,
$\displaystyle
d_{V,\rho}:=\mathcal{\widetilde{O}}_{T_{V}}[\rho]-\mathcal{\widetilde{O}}_{T_{\varnothing}}[\rho]=\sum_{M\in\mathcal{C}}c_{M}M\,,$
(A.19)
where
$\displaystyle c_{M}=\frac{\Tr[M^{\dagger}\rho]}{\|M\|_{Fr}^{2}}\,.$ (A.20)
We refer to $c_{M}$ as the _weight_ (of $\rho$) on the matrix $M$.
###### Lemma A.5 (Lemma 4.6, restated).
Consider a density matrix $\rho$ and up to $O(\log(n))$ extra workspace
qubits. Suppose $\|d_{V,\rho}\|_{1}=\Omega(\frac{1}{{\rm poly}(n)})$. Then
among the quantities
$\displaystyle\bra{V,z}\rho\ket{V,z}\,,$ (A.21)
$\displaystyle\Tr[\rho\left(\mathbb{I}_{V,z}-\frac{|V|}{N}\mathbb{I}_{[N],z}\right)]\,,$
(A.22)
for any $z\in\\{\pm 1\\}$, at least one has magnitude $\Omega(\frac{1}{{\rm
poly}(n)})$.
###### Proof.
By A.4, $d_{V,\rho}$ can be written as a linear combination of basis elements
in $\mathcal{C}$, i.e.
$\displaystyle
d_{V,\rho}=\sum_{M\in\mathcal{C}}c_{M}M=\sum_{M\in\mathcal{C}}\frac{\Tr[M^{\dagger}\rho]M}{\|M\|_{Fr}^{2}}\,.$
(A.23)
By the pigeonhole principle and the triangle inequality, there is an
$M\in\mathcal{C}$ where
$\displaystyle\|c_{M}M\|_{1}\geq\frac{1}{|\mathcal{C}|}\Omega(\frac{1}{{\rm
poly}(n)})=\Omega(\frac{1}{{\rm poly}(n)})\,.$ (A.24)
Note that the last equality holds when $|\mathcal{C}|=O({\rm poly}(n))$; this
is true allowing up to $O(\log(n))$ extra workspace qubits. Inspecting each
condition:
* •
Suppose the matrix $M$ is $C_{1,z_{1},z_{2}}$ or $C_{2,z_{1},z_{2}}$ for some
$z_{1},z_{2}\in\\{\pm 1\\}$. By the proof of Theorem A.3, the Frobenius norm
and nuclear norm are both $\Theta(1)$, so
$\displaystyle\Omega(\frac{1}{{\rm
poly}(n)})=\Tr[C_{i,z_{1},z_{2}}^{\dagger}\rho]=\bra{[N]/V,z_{2}}\rho\ket{V,z_{1}}\pm\bra{V,z_{2}}\rho\ket{[N]/V,z_{1}}+O(\frac{1}{2^{{\rm poly}(n)}})\,,$ (A.25)
for $i\in\\{1,2\\}$, where the sign is $+$ for $i=2$ and $-$ for $i=1$, and
the exponentially small correction appears only for $i=2$.
By pigeonhole, this implies that there is some $z_{1},z_{2}\in\\{\pm 1\\}$
such that
$\left|\bra{[N]/V,z_{2}}\rho\ket{V,z_{1}}\right|=\Omega(\frac{1}{{\rm
poly}(n)})$, and by B.10, $\bra{V,z_{1}}\rho\ket{V,z_{1}}=\Omega(\frac{1}{{\rm
poly}(n)})$.
* •
Suppose the matrix $M$ is $C_{3,z_{1},z_{2}}$ for some $z_{1},z_{2}\in\\{\pm
1\\}$. By the proof of Theorem A.3, the Frobenius norm and nuclear norm are
both $\Theta(1)$, so
$\displaystyle\Omega(\frac{1}{{\rm
poly}(n)})=\Tr[C_{3,z_{1},z_{2}}^{\dagger}\rho]=\bra{V,z_{2}}\rho\ket{V,z_{1}}\pm
O(\frac{1}{2^{{\rm poly}(n)}})\,,$ (A.26)
and by B.10, $\bra{V,z_{1}}\rho\ket{V,z_{1}}=\Omega(\frac{1}{{\rm poly}(n)})$.
* •
Suppose the matrix $M$ is $C_{4,z}$ for some $z\in\\{\pm 1\\}$. By the proof
of Theorem A.3, the Frobenius norm is $\Theta(\frac{1}{\sqrt{|V|}})$ and the
nuclear norm is $\Theta(1)$, so
$\displaystyle\Omega(\frac{1}{{\rm
poly}(n)})=|V|\Tr[C_{4,z}^{\dagger}\rho]=\Tr[\rho\left(\mathbb{I}_{V,z}-\frac{|V|}{N}\mathbb{I}_{[N],z}\right)]-\bra{V,z}\rho\ket{V,z}\pm
O(\frac{1}{2^{{\rm poly}(n)}})\,.$ (A.27)
By pigeonhole, this implies that at least one of the two terms has magnitude
$\Omega(\frac{1}{{\rm poly}(n)})$.
This proof holds even if we allow controlled access to the oracle. Consider
the addition of a control qubit $\ket{a}$, such that if $a=0$ the oracle has
no effect, and if $a=1$ the oracle acts as usual. We can separate $d_{V,\rho}$
into four matrices $d_{V,\rho}^{(a_{1},a_{2})}$ based on the values
$(a_{1},a_{2})$.
If $\|d_{V,\rho}\|_{1}=\Omega(\frac{1}{{\rm poly}(n)})$, one of the four
matrices $d_{V,\rho}^{(a_{1},a_{2})}$ must have $\Omega(\frac{1}{{\rm
poly}(n)})$ nuclear norm. We have already seen what happens if that matrix is
$d_{V,\rho}^{(1,1)}$. Recall that the oracle has no effect when
$a_{1}=a_{2}=0$. Every matrix $M$ with control register $\ket{0}\bra{0}$
satisfies $U_{\pi}MU_{\pi}^{\dagger}=M$ for all permutations $\pi$, and so
$M\in\mathsf{\widetilde{V}}_{T_{\varnothing}}$. As a result, the matrix
$d_{V,\rho}^{(0,0)}\in\mathsf{\widetilde{V}}_{T_{V}}/\mathsf{\widetilde{V}}_{T_{\varnothing}}$
is identically zero.
Suppose the matrix $d_{V,\rho}^{(1,0)}$ has $\Omega(\frac{1}{{\rm poly}(n)})$
nuclear norm. (The proof for $d_{V,\rho}^{(0,1)}$ is nearly identical.)
Despite an exponentially large basis of matrices of this form, we show in
Lemma A.6 that this matrix has small rank. In particular, up to
$O(\frac{1}{2^{{\rm poly}(n)}})$ multiplicative error, $d_{V,\rho}^{(1,0)}$ is
a weighted sum of outer products $P_{z=+1}=\ket{1,V,+1}\bra{\psi_{z=+1}}$ and
$P_{z=-1}=\ket{1,V,-1}\bra{\psi_{z=-1}}$ for some normalized states
$\ket{\psi_{z}}$. These outer products have nuclear norm $1$, so by the
pigeonhole principle, there is a $z\in\\{\pm 1\\}$ such that the weight
$c_{P_{z}}$ on $P_{z}$ has magnitude at least $\Omega(\frac{1}{{\rm poly}(n)})$.
Since the Frobenius norm of each outer product is also $1$, the weight is
$c_{P_{z}}=\Tr[P_{z}^{\dagger}\rho]=\bra{1,V,z}\rho\ket{\psi_{z}}$. Thus, by
B.10, $\bra{1,V,z}\rho\ket{1,V,z}=\Omega(\frac{1}{{\rm poly}(n)})$ for some
$z\in\\{\pm 1\\}$. ∎
We now prove the lemma:
###### Lemma A.6.
Consider any matrix
$A\in\mathsf{\widetilde{V}}_{T_{V}}/\mathsf{\widetilde{V}}_{T_{\varnothing}}$
where the control register is $\ket{a_{1}=1}\bra{a_{2}=0}$ (or
$\ket{a_{1}=0}\bra{a_{2}=1}$). Up to $O(\frac{1}{2^{{\rm poly}(n)}})$
multiplicative error, $A$ is a linear combination of outer products
$\ket{1,V,+1}\bra{\psi_{z=+1}}$ and $\ket{1,V,-1}\bra{\psi_{z=-1}}$ (or their
conjugate transposes) for some states $\ket{\psi_{z=+1}}$ and
$\ket{\psi_{z=-1}}$.
###### Proof.
We set $a_{1}=1$ and $a_{2}=0$; the proof in the reverse case is nearly
identical. Consider any matrix with the form
$\displaystyle M:=\sum_{x_{1},x_{2}\in[N],z_{1},z_{2}\in\\{\pm
1\\}}\alpha_{x_{1},z_{1},x_{2},z_{2}}\ket{1,x_{1},z_{1}}\bra{0,x_{2},z_{2}}\,.$
(A.28)
For any permutation $\pi$, the matrix $U_{\pi}MU_{\pi}^{\dagger}$ is
$\displaystyle U_{\pi}MU_{\pi}^{\dagger}$
$\displaystyle=\sum_{x_{1},x_{2}\in[N],z_{1},z_{2}\in\\{\pm
1\\}}\alpha_{x_{1},z_{1},x_{2},z_{2}}\ket{1,\pi^{z_{1}}(x_{1}),z_{1}}\bra{0,x_{2},z_{2}}$
(A.29) $\displaystyle=\sum_{x_{1},x_{2}\in[N],z_{1},z_{2}\in\\{\pm
1\\}}\alpha_{\pi^{-z_{1}}(x_{1}),z_{1},x_{2},z_{2}}\ket{1,x_{1},z_{1}}\bra{0,x_{2},z_{2}}\,.$
(A.30)
For every matrix $M\in\mathsf{\widetilde{V}}_{T_{\varnothing}}$ and
permutation $\pi$, $M=U_{\pi}MU_{\pi}^{\dagger}$. Thus, the coefficients do
not depend on $x_{1}$: every $M\in\mathsf{\widetilde{V}}_{T_{\varnothing}}$
has the form
$\displaystyle M=\sum_{x_{2}\in[N],z_{1},z_{2}\in\\{\pm
1\\}}\alpha_{z_{1},x_{2},z_{2}}\ket{1,+^{\otimes
n},z_{1}}\bra{0,x_{2},z_{2}}=\sum_{z\in\\{\pm 1\\}}\alpha_{z}\ket{1,+^{\otimes
n},z}\bra{\chi_{z}}\,.$ (A.31)
Similarly, any matrix in $\mathsf{\widetilde{V}}_{T_{V}}$ satisfies
$M=U_{\pi}MU_{\pi}^{\dagger}$ when $\pi$ stabilizes the subset $V$. So if
$M\in\mathsf{\widetilde{V}}_{T_{V}}$,
$\displaystyle M=\sum_{S\in\\{V,[N]/V\\},x_{2}\in[N],z_{1},z_{2}\in\\{\pm
1\\}}\alpha_{S,z_{1},x_{2},z_{2}}\ket{1,S,z_{1}}\bra{0,x_{2},z_{2}}=\sum_{S\in\\{V,[N]/V\\},z\in\\{\pm
1\\}}\alpha_{S,z}\ket{1,S,z}\bra{\chi_{z}}\,.$ (A.32)
Any matrix in
$\mathsf{\widetilde{V}}_{T_{V}}/\mathsf{\widetilde{V}}_{T_{\varnothing}}$ must
be orthogonal to all matrices in $\mathsf{\widetilde{V}}_{T_{\varnothing}}$.
Thus, if
$M\in\mathsf{\widetilde{V}}_{T_{V}}/\mathsf{\widetilde{V}}_{T_{\varnothing}}$,
$\displaystyle M=\sum_{z\in\\{\pm
1\\}}\alpha_{z}\ket{1}\ket{V^{\prime}}\ket{z}\bra{\psi_{z}}\,,$ (A.33)
where
$\displaystyle\ket{V^{\prime}}=\sqrt{\frac{N-|V|}{N}}\ket{V}-\sqrt{\frac{|V|}{N}}\ket{[N]/V}\,.$
(A.34)
Note that $\ket{V^{\prime}}$ is equal to $\ket{V}$ up to
$O(\sqrt{\frac{|V|}{N}})=O(\frac{1}{2^{{\rm poly}(n)}})$ multiplicative error.
∎
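Assuming the standard subset-state normalization $\ket{S}=\frac{1}{\sqrt{|S|}}\sum_{a\in S}\ket{a}$, the state $\ket{V^{\prime}}$ is the component of $\ket{V}$ orthogonal to the uniform superposition; the relative minus sign is what makes the overlap with $\ket{+}^{\otimes n}$ vanish. A quick numerical check with illustrative sizes:

```python
import numpy as np

N, V = 256, range(7)               # illustrative small subset V of [N]
k = len(V)

def subset_state(S, N):
    """Uniform superposition over S (standard normalization 1/sqrt(|S|))."""
    v = np.zeros(N)
    v[list(S)] = 1 / np.sqrt(len(S))
    return v

ketV = subset_state(V, N)
ketVc = subset_state([a for a in range(N) if a not in V], N)
plus = np.full(N, 1 / np.sqrt(N))
# Component of |V> orthogonal to |+>^{(x) n} (note the relative minus sign):
Vprime = np.sqrt((N - k) / N) * ketV - np.sqrt(k / N) * ketVc

assert abs(plus @ Vprime) < 1e-12                    # orthogonal to |+>^{(x) n}
assert abs(np.linalg.norm(Vprime) - 1) < 1e-12       # still a unit vector
assert np.linalg.norm(Vprime - ketV) <= 2 * np.sqrt(k / N)  # close to |V>
```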
## Appendix B Norms and inner products
Note that we work with arbitrary matrices, not just positive semidefinite
ones.
###### Definition B.1 (Nuclear norm of a matrix).
The nuclear norm of a matrix $M$ is the sum of its singular values; that is,
$\displaystyle\|M\|_{1}=\sum_{i}\sigma_{i}(M)=\Tr[\sqrt{M^{\dagger}M}]\,.$
(B.1)
###### Definition B.2 (Frobenius norm and inner product of a matrix).
The Frobenius inner product of $N\times N$ matrices $A,B$ is
$\displaystyle(A|B)=\Tr[A^{\dagger}B]$ (B.2)
This induces a norm, which is the square root of the sum of squares of the
singular values:
$\displaystyle\|A\|_{Fr}=\sqrt{\sum_{i}\sigma_{i}(A)^{2}}=\sqrt{\sum_{ij\in[N]}|A_{ij}|^{2}}\,.$
(B.3)
###### Fact B.3.
The nuclear norm of a matrix is at most the product of its Frobenius norm and
the square root of its rank.
###### Proof.
See Rennie [Ren05] for a proof with explanation. ∎
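A numerical sanity check of Fact B.3 on a random rank-$r$ matrix (NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
m, r = 20, 5
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, m))  # rank-r matrix
s = np.linalg.svd(M, compute_uv=False)
nuclear = s.sum()                       # sum of singular values
frobenius = np.sqrt((s**2).sum())       # root of sum of squared singular values
assert np.linalg.matrix_rank(M) == r
assert nuclear <= np.sqrt(r) * frobenius + 1e-9   # Fact B.3
```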
###### Fact B.4 (Nuclear norm of a positive semidefinite matrix).
The nuclear norm of a positive semidefinite Hermitian matrix is simply its
trace; that is, if $\rho$ is Hermitian and positive semidefinite, then
$\displaystyle\|\rho\|_{1}=\Tr[\rho]\,.$ (B.4)
###### Proof.
For a Hermitian and positive semidefinite matrix, the eigenvalues are all real
and nonnegative, so the singular values are exactly the eigenvalues.
Alternatively, notice that $\rho=\sqrt{\rho^{\dagger}\rho}$ and use Definition
B.1. ∎
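Fact B.4 admits the same kind of spot check (the matrix below is an arbitrary PSD example, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
rho = B.T @ B                  # positive semidefinite by construction

# Fact B.4: for a PSD matrix the singular values equal the eigenvalues,
# so the nuclear norm collapses to the trace.
assert np.isclose(np.linalg.norm(rho, ord="nuc"), np.trace(rho))
```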
###### Fact B.5 (POVM trace is at most the nuclear norm).
Consider any Hermitian matrix $M$ and a POVM $\\{E,\mathbb{I}-E\\}$. Then
$\displaystyle\Tr[EM]\leq\|M\|_{1}\,.$ (B.5)
###### Proof.
Consider the spectral decomposition of a Hermitian $M=UDU^{\dagger}$.
Then
$\displaystyle\Tr[EM]=\Tr[(U^{\dagger}EU)D]=\Tr[E^{\prime}D]$ (B.6)
for $E^{\prime}:=U^{\dagger}EU$. Note that
$\\{E^{\prime},\mathbb{I}-E^{\prime}\\}$ form a POVM; they have the same
eigenvalues as $\\{E,\mathbb{I}-E\\}$, respectively, and so are both positive
semidefinite. Recall that the diagonal elements of a POVM are all nonnegative
and at most $1$. Then
$\displaystyle\Tr[EM]=\Tr[E^{\prime}D]=\sum_{i}E^{\prime}_{ii}D_{i}\leq\sum_{i}|D_{i}|=\|D\|_{1}=\|M\|_{1}\,.$
(B.7)
∎
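A hedged numerical illustration of Fact B.5 (the Hermitian matrix and POVM element below are arbitrary constructions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# A Hermitian (not necessarily PSD) test matrix M.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = (A + A.conj().T) / 2

# A POVM element E: Hermitian with eigenvalues squashed into (0, 1),
# so that both E and I - E are positive semidefinite.
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
w, U = np.linalg.eigh((H + H.conj().T) / 2)
E = U @ np.diag(1 / (1 + np.exp(-w))) @ U.conj().T

# Fact B.5: Tr[E M] <= ||M||_1.
assert np.trace(E @ M).real <= np.linalg.norm(M, ord="nuc") + 1e-12
```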
###### Fact B.6 (Trace of outer product is inner product).
Consider vectors $\ket{x},\ket{y}\in\mathbb{C}^{m}$ and a matrix
$A\in\mathbb{C}^{m\times m}$. Then the inner product of $\ket{y}\bra{x}$ and
$A$ is
$\displaystyle\Tr[(\ket{y}\bra{x})^{\dagger}A]=\Tr[A\ket{x}\bra{y}]=\bra{y}A\ket{x}\,.$
(B.8)
###### Proof.
$\displaystyle\Tr[A\ket{x}\bra{y}]=\sum_{k\in[m]}\left(\sum_{j\in[m]}A_{kj}x_{j}\right)y^{\dagger}_{k}=\sum_{k,j\in[m]}A_{kj}x_{j}y^{\dagger}_{k}=\bra{y}A\ket{x}\,.$
(B.9)
∎
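Fact B.6 can be checked directly (arbitrary illustrative vectors and matrix, not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 5
x = rng.standard_normal(m) + 1j * rng.standard_normal(m)
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))

outer = np.outer(y, x.conj())               # the matrix |y><x|
lhs = np.trace(outer.conj().T @ A)          # (|y><x| | A) = Tr[(|y><x|)^dag A]
rhs = y.conj() @ A @ x                      # <y|A|x>
assert np.isclose(lhs, rhs)
```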
###### Remark B.7 (Orthogonal basis for an input density matrix).
We can decompose $\rho$ into a basis $\mathcal{M}$ that is orthogonal under
the Frobenius inner product $(a|b)=\Tr[a^{\dagger}b]$:
$\displaystyle\rho=\sum_{M\in\mathcal{M}}c_{M}M\,.$ (B.10)
Because the basis is orthogonal, for any $M\in\mathcal{M}$,
$\displaystyle\Tr[M^{\dagger}\rho]=\sum_{M^{\prime}\in\mathcal{M}}c_{M^{\prime}}\Tr[M^{\dagger}M^{\prime}]=c_{M}\|M\|^{2}_{Fr}\,.$
(B.11)
Moreover, by Cauchy-Schwarz, the inner product of $M$ and $\rho$ is at most
the product of the norm of each, so
$\displaystyle\|c_{M}M\|_{Fr}=\frac{\left|\Tr[M^{\dagger}\rho]\right|}{\|M\|_{Fr}}\leq\|\rho\|_{Fr}\,.$
(B.12)
We also state two properties that hold for _any_ randomized oracle:
###### Fact B.8 (Randomized oracles preserve trace, Hermiticity, and positive
semidefiniteness).
Consider any randomized oracle $\mathcal{O}_{F}$ corresponding to a set of
functions $f\in F$. Then $\mathcal{O}_{F}$ preserves the trace of its input.
Moreover, if the input $M$ is Hermitian, so is $\mathcal{O}_{F}[M]$; if $M$ is
also positive semidefinite, so is $\mathcal{O}_{F}[M]$.
###### Proof.
Consider any input matrix $M$. Then
$\displaystyle\Tr[\mathcal{O}_{F}[M]]=\Tr[\frac{1}{|F|}\sum_{f\in
F}\mathcal{U}_{f}[M]]=\frac{1}{|F|}\sum_{f\in
F}\Tr[U_{f}MU_{f}^{\dagger}]=\frac{1}{|F|}\sum_{f\in F}\Tr[M]=\Tr[M]\,.$
(B.13)
Now suppose $M$ is Hermitian; that is, $M^{\dagger}=M$. Then
$\displaystyle\mathcal{O}_{F}[M]^{\dagger}=\big{(}\frac{1}{|F|}\sum_{f\in
F}\mathcal{U}_{f}[M]\big{)}^{\dagger}=\frac{1}{|F|}\sum_{f\in
F}\big{(}U_{f}MU_{f}^{\dagger}\big{)}^{\dagger}=\frac{1}{|F|}\sum_{f\in
F}U_{f}M^{\dagger}U_{f}^{\dagger}=\mathcal{O}_{F}[M]\,.$ (B.14)
Furthermore, suppose $M$ is positive semidefinite; that is, there is a matrix
$B$ such that $M=B^{\dagger}B$. Then
$\displaystyle\mathcal{O}_{F}[M]=\frac{1}{|F|}\sum_{f\in
F}\mathcal{U}_{f}[M]=\frac{1}{|F|}\sum_{f\in
F}U_{f}B^{\dagger}BU_{f}^{\dagger}=\frac{1}{|F|}\sum_{f\in
F}\left(BU_{f}^{\dagger}\right)^{\dagger}\left(BU_{f}^{\dagger}\right)\,,$ (B.15)
which is a sum of positive semidefinite matrices. Thus, $\mathcal{O}_{F}[M]$
is positive semidefinite. ∎
###### Fact B.9 (Randomized oracles do not increase nuclear norm or Frobenius
norm).
Consider any randomized oracle $\mathcal{O}_{F}$ corresponding to a set of
functions $f\in F$. Then $\mathcal{O}_{F}$ does not increase the nuclear norm
nor the Frobenius norm of its input.
###### Proof.
Recall that both the nuclear norm and Frobenius norm are unitarily invariant.
Now consider any input matrix $M$. Then the nuclear norm of
$\mathcal{O}_{F}[M]$ is
$\displaystyle\left\|\mathcal{O}_{F}[M]\right\|_{1}=\left\|\frac{1}{|F|}\sum_{f\in
F}U_{f}MU_{f}^{\dagger}\right\|_{1}\leq\frac{1}{|F|}\sum_{f\in
F}\left\|U_{f}MU_{f}^{\dagger}\right\|_{1}=\frac{1}{|F|}\sum_{f\in
F}\left\|M\right\|_{1}=\left\|M\right\|_{1}\,.$ (B.16)
The Frobenius norm of $\mathcal{O}_{F}[M]$ follows in exactly the same way. ∎
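Facts B.8 and B.9 can be illustrated with a toy randomized oracle: a uniform mixture of conjugations by a few unitaries (the unitaries and state below are arbitrary, not the oracles of the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 6

def random_unitary(n, rng):
    # QR of a random complex matrix yields a unitary factor.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return Q

Us = [random_unitary(N, rng) for _ in range(3)]

def oracle(M):
    # Uniform mixture of unitary conjugations, as in the definition of O_F.
    return sum(U @ M @ U.conj().T for U in Us) / len(Us)

B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
rho = B.conj().T @ B
rho /= np.trace(rho).real                  # a density matrix: PSD, trace 1

out = oracle(rho)
assert np.isclose(np.trace(out), 1)                            # trace preserved (Fact B.8)
assert np.allclose(out, out.conj().T)                          # Hermiticity preserved
assert np.linalg.eigvalsh(out).min() >= -1e-10                 # PSD preserved
assert np.linalg.norm(out, "nuc") <= np.linalg.norm(rho, "nuc") + 1e-9  # Fact B.9
assert np.linalg.norm(out, "fro") <= np.linalg.norm(rho, "fro") + 1e-9
```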
We use one additional property of density matrices in the proof of Lemma 4.6:
###### Fact B.10.
Consider any $N\times N$ density matrix $\rho$ and normalized states
$\ket{v},\ket{w}$. If $|\bra{v}\rho\ket{w}|=\Omega(\frac{1}{{\rm poly}(n)})$,
then both $\bra{v}\rho\ket{v}$ and $\bra{w}\rho\ket{w}$ are
$\Omega(\frac{1}{{\rm poly}(n)})$.
###### Proof.
Recall that a density matrix is Hermitian and positive semidefinite, so it is
diagonalizable and has real and nonnegative eigenvalues. As a result, it has a
decomposition
$\displaystyle\rho=S^{\dagger}\Lambda
S=S^{\dagger}\sqrt{\Lambda}\sqrt{\Lambda}S=(\sqrt{\Lambda}S)^{\dagger}(\sqrt{\Lambda}S)=A^{\dagger}A\,,$
(B.17)
for some diagonal $\Lambda$ and $A:=\sqrt{\Lambda}S$. Then by Cauchy-Schwarz,
$\displaystyle|\bra{v}\rho\ket{w}|^{2}=\left|(A\ket{v})^{\dagger}(A\ket{w})\right|^{2}\leq\left|(A\ket{v})^{\dagger}(A\ket{v})\right|\cdot\left|(A\ket{w})^{\dagger}(A\ket{w})\right|=\bra{v}\rho\ket{v}\cdot\bra{w}\rho\ket{w}\,.$
(B.18)
Since $\Tr[\rho]=1$, $\bra{\psi}\rho\ket{\psi}\leq 1$ for all normalized
states $\ket{\psi}$. Thus, both $\bra{v}\rho\ket{v}$ and $\bra{w}\rho\ket{w}$
are at least $|\bra{v}\rho\ket{w}|^{2}=\Omega(\frac{1}{{\rm poly}(n)})$. ∎
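The Cauchy-Schwarz step (B.18) in the proof of Fact B.10 is easy to verify numerically (arbitrary density matrix and states, not from the text):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 8
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
rho = B.conj().T @ B
rho /= np.trace(rho).real                  # density matrix

v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v /= np.linalg.norm(v)
w /= np.linalg.norm(w)

# Inequality (B.18): |<v|rho|w>|^2 <= <v|rho|v> <w|rho|w>.
cross = abs(v.conj() @ rho @ w) ** 2
assert cross <= (v.conj() @ rho @ v).real * (w.conj() @ rho @ w).real + 1e-12
```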
## Appendix C Deferred proofs
###### Theorem C.1 (Theorem 4.2, restated; [Har13, Proposition 2]).
Consider a finite group $G$, a vector space $\mathsf{V}$, and a representation
$R:G\to L(\mathsf{V})$. Then the operator
$\displaystyle\Pi_{R}:=\frac{1}{|G|}\sum_{g\in G}R(g)$ (C.1)
is an orthogonal projector onto $\mathsf{V}^{G}\subseteq\mathsf{V}$, where
$\displaystyle\mathsf{V}^{G}:=\\{v\in\mathsf{V}\,:\,R(g)[v]=v\,\forall g\in
G\\}\,.$ (C.2)
###### Proof.
We include the proof for completeness.
Note that for any $g\in G$,
$\displaystyle R(g)\Pi_{R}=R(g)\frac{1}{|G|}\sum_{g^{\prime}\in
G}R(g^{\prime})=\frac{1}{|G|}\sum_{g^{\prime}\in
G}R(gg^{\prime})=\frac{1}{|G|}\sum_{g^{-1}g^{\prime}\in
G}R(g^{\prime})=\Pi_{R}\,.$ (C.3)
This implies $\Pi_{R}\Pi_{R}=\Pi_{R}$:
$\displaystyle\Pi_{R}\Pi_{R}=\frac{1}{|G|}\sum_{g\in
G}R(g)\Pi_{R}=\frac{1}{|G|}\sum_{g\in G}\Pi_{R}=\Pi_{R}\,.$ (C.4)
So $\Pi_{R}$ is a projection.
Note that for any $v\in\mathsf{V}$ and $g\in G$,
$\displaystyle R(g)[\Pi_{R}[v]]=(R(g)\circ\Pi_{R})[v]=\Pi_{R}[v]\,,$ (C.5)
so $\Pi_{R}[v]\in\mathsf{V}^{G}$. And similarly, for all $w\in\mathsf{V}^{G}$,
$\displaystyle\Pi_{R}[w]=\frac{1}{|G|}\sum_{g\in
G}R(g)[w]=\frac{1}{|G|}\sum_{g\in G}w=w\in\mathsf{V}^{G}\,.$ (C.6)
So the image of $\Pi_{R}$ is exactly $\mathsf{V}^{G}$.
In order to consider orthogonality, we must define an inner product. Consider
an inner product $(u|v)$ for $u,v\in\mathsf{V}$. If for some $g\in G$,
$(R(g)[u]|R(g)[v])\neq(u|v)$, use the inner product
$\displaystyle\langle u,v\rangle=\frac{1}{|G|}\sum_{g\in
G}(R(g)[u]|R(g)[v])\,.$ (C.7)
Then under this inner product, $R(g)$ is a unitary operator for all $g\in G$:
$\displaystyle\langle R(g)[u],R(g)[v]\rangle=\frac{1}{|G|}\sum_{g^{\prime}g^{-1}\in
G}(R(g^{\prime})[u]|R(g^{\prime})[v])=\langle u,v\rangle\,.$ (C.8)
This implies that $\Pi_{R}$ is an orthogonal projection:
$\displaystyle\langle\Pi_{R}[u],v\rangle$
$\displaystyle=\frac{1}{|G|}\sum_{g,g^{\prime}\in
G}(R(gg^{\prime})[u]|R(g)[v])$
$\displaystyle=\frac{1}{|G|}\sum_{h,h^{\prime}\in
G}(R(h)[u]|R(h^{\prime})[v])$
$\displaystyle=\frac{1}{|G|}\sum_{j,j^{\prime}\in
G}(R(j)[u]|R(jj^{\prime})[v])=\langle u,\Pi_{R}[v]\rangle\,.$ (C.9)
∎
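A concrete instance of Theorem 4.2 (my own toy example, not from the text): the cyclic group $\mathbb{Z}_3$ represented on $\mathbb{C}^3$ by cyclic shifts, whose group average projects onto the uniform vector.

```python
import numpy as np

# R(g) = P^g for the cyclic group Z_3, where P cyclically shifts the basis of C^3.
P = np.roll(np.eye(3), 1, axis=0)
reps = [np.linalg.matrix_power(P, g) for g in range(3)]

Pi = sum(reps) / 3                         # the group average of Theorem C.1

assert np.allclose(Pi @ Pi, Pi)            # Pi is a projector
assert np.allclose(Pi, Pi.T)               # and self-adjoint, hence orthogonal

# Its image is the invariant subspace: the span of the uniform vector.
u = np.ones(3) / np.sqrt(3)
assert np.allclose(Pi, np.outer(u, u))
for R in reps:
    assert np.allclose(R @ Pi, Pi)         # R(g) Pi = Pi, as in (C.3)
```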
###### Theorem C.2 (Theorem 4.3, restated; oracles on density matrices form a
representation).
Consider a group $G$ of functions $f:[N]\to[N]$ with bitwise $\oplus$ as the
group operation. Then the map $f\mapsto\mathcal{U}_{f}$ is a representation
over the vector space of $2N^{2}\times 2N^{2}$ complex matrices.
Similarly, consider a group $\widetilde{G}$ of permutations $\pi:[N]\to[N]$
with composition as the group operation. Then the map
$\pi\mapsto\mathcal{\widetilde{U}}_{\pi}$ is a representation over the vector
space of $2N\times 2N$ complex matrices.
###### Proof.
See that $f_{1}f_{2}$ acts as $(f_{1}f_{2})^{z}(x)=f_{1}^{z}(x)\oplus
f_{2}^{z}(x)$ for any $f_{1},f_{2}\in G$. The associated unitary acts as
$\displaystyle U_{f_{1}f_{2}}\sum_{c,x\in[N],z\in\\{\pm
1\\}}\alpha_{c,x,z}\ket{c,x,z}$ $\displaystyle=\sum_{c,x\in[N],z\in\\{\pm
1\\}}\alpha_{c,x,z}\ket{c\oplus(f_{1}f_{2})^{z}(x),x,z}$ (C.10)
$\displaystyle=\sum_{c,x\in[N],z\in\\{\pm 1\\}}\alpha_{c,x,z}\ket{c\oplus
f_{1}^{z}(x)\oplus f_{2}^{z}(x),x,z}$ (C.11)
$\displaystyle=U_{f_{1}}U_{f_{2}}\sum_{c,x\in[N],z\in\\{\pm
1\\}}\alpha_{c,x,z}\ket{c,x,z}\,.$ (C.12)
Since $U_{f_{1}f_{2}}=U_{f_{1}}U_{f_{2}}$,
$\mathcal{U}_{f_{1}f_{2}}=\mathcal{U}_{f_{1}}\circ\mathcal{U}_{f_{2}}$. So the
map $f\mapsto\mathcal{U}_{f}$ respects the group operation of $G$.
Similarly, $\pi_{1}\pi_{2}$ acts as
$(\pi_{1}\pi_{2})^{z}(x)=\pi_{1}^{z}(\pi_{2}^{z}(x))$ for any
$\pi_{1},\pi_{2}\in\widetilde{G}$. The associated unitary acts as
$\displaystyle\widetilde{U}_{\pi_{1}\pi_{2}}\sum_{x\in[N],z\in\\{\pm
1\\}}\alpha_{x,z}\ket{x,z}$ $\displaystyle=\sum_{x\in[N],z\in\\{\pm
1\\}}\alpha_{x,z}\ket{(\pi_{1}\pi_{2})^{z}(x),z}$ (C.13)
$\displaystyle=\sum_{x\in[N],z\in\\{\pm
1\\}}\alpha_{x,z}\ket{\pi_{1}^{z}(\pi_{2}^{z}(x)),z}=\widetilde{U}_{\pi_{1}}\widetilde{U}_{\pi_{2}}\sum_{x\in[N],z\in\\{\pm
1\\}}\alpha_{x,z}\ket{x,z}\,.$ (C.14)
Since
$\widetilde{U}_{\pi_{1}\pi_{2}}=\widetilde{U}_{\pi_{1}}\widetilde{U}_{\pi_{2}}$,
$\mathcal{\widetilde{U}}_{\pi_{1}\pi_{2}}=\mathcal{\widetilde{U}}_{\pi_{1}}\circ\mathcal{\widetilde{U}}_{\pi_{2}}$.
So the map $\pi\mapsto\mathcal{\widetilde{U}}_{\pi}$ respects the group operation of
$\widetilde{G}$. ∎
###### Theorem C.3 (Theorem 4.4, restated; some randomized oracles are
orthogonal projectors).
Consider a group $G$ of functions $f:[N]\to[N]$ with bitwise $\oplus$ as the
group operation. Then $\mathcal{O}_{G}$ is an orthogonal projector, under the
Frobenius inner product $(x|y)=\Tr[x^{\dagger}y]$ for
$x,y\in\mathbb{C}^{2N^{2}\times 2N^{2}}$, onto
$\displaystyle\mathsf{V}_{G}:=\\{\rho\in\mathbb{C}^{2N^{2}\times
2N^{2}}\,:\mathcal{U}_{f}[\rho]=\rho\,\forall f\in G\\}\,.$ (C.15)
Similarly, consider a group $\widetilde{G}$ of permutations $\pi:[N]\to[N]$
with composition as the group operation. Then
$\mathcal{\widetilde{O}}_{\widetilde{G}}$ is an orthogonal projector, under
the Frobenius inner product $(x|y)=\Tr[x^{\dagger}y]$ for
$x,y\in\mathbb{C}^{2N\times 2N}$, onto
$\displaystyle\mathsf{\widetilde{V}}_{\widetilde{G}}:=\\{\rho\in\mathbb{C}^{2N\times
2N}\,:\mathcal{\widetilde{U}}_{\pi}[\rho]=\rho\,\forall\pi\in\widetilde{G}\\}\,.$
(C.16)
###### Proof.
By Theorem 4.3, the maps $f\mapsto\mathcal{U}_{f}$ for $f\in G$ and
$\pi\mapsto\mathcal{\widetilde{U}}_{\pi}$ for $\pi\in\widetilde{G}$ both are
representations. Note that for any unitary $U$ and square matrices $x,y$ of
the same dimensions,
$\displaystyle(UxU^{\dagger}|UyU^{\dagger})=\Tr[(UxU^{\dagger})^{\dagger}UyU^{\dagger}]=\Tr[Ux^{\dagger}U^{\dagger}UyU^{\dagger}]=\Tr[Ux^{\dagger}yU^{\dagger}]=\Tr[x^{\dagger}y]\,.$
(C.17)
So the representation of each group element is a unitary operator under the
Frobenius inner product. The proof then follows by Theorem 4.2. ∎
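The unitary-invariance computation (C.17) can be spot-checked (arbitrary unitary and matrices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# (U x U^dag | U y U^dag) = (x | y): conjugation by a unitary preserves
# the Frobenius inner product, which is what makes Pi_R orthogonal.
lhs = np.trace((U @ x @ U.conj().T).conj().T @ (U @ y @ U.conj().T))
rhs = np.trace(x.conj().T @ y)
assert np.isclose(lhs, rhs)
```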
###### Lemma C.4 (Lemma 4.7, restated; can’t approximate too many subset
states).
Consider a Hermitian $N\times N$ matrix $\rho$ that is positive semidefinite
and has trace at most $1$. Consider the set of all subsets $V\subseteq[N]$,
where $|V|=N^{\alpha}$ for a fixed $0<\alpha<\frac{1}{2}$. Then the fraction
of subsets $V$ such that $\bra{V}\rho\ket{V}=\Omega(\frac{1}{{\rm poly}(n)})$
decreases faster than any exponential in ${\rm poly}(n)$.
###### Proof.
Let $\mathcal{W}$ be any set of subsets $V$, with size
$|\mathcal{W}|=\omega({\rm poly}(n))=o(\frac{2^{\sqrt{n}}}{{\rm poly}(n)})$,
such that $|\bra{V_{1}}\ket{V_{2}}|=O(2^{-\sqrt{n}})$ for all
$V_{1},V_{2}\in\mathcal{W}$. Consider an approximate basis for $\rho$,
consisting of
* •
$\ket{V}\bra{V}$, for all $V\in\mathcal{W}$, and
* •
a set of matrices $\mathcal{M}^{\prime}$ orthogonal to all other matrices, and
with Frobenius norm $1$.
The representation of $\rho$ in this approximate basis is not unique, but the
coefficient on each term must be close to $\bra{V}\rho\ket{V}$. Recall that
the Frobenius norm of a density matrix $\rho$ is at most $1$; by Remark B.7,
each $|c_{M}|\leq 1$:
$\displaystyle\rho$
$\displaystyle:=\sum_{V\in\mathcal{W}}c_{V}\ket{V}\bra{V}+\sum_{M\in\mathcal{M}^{\prime}}c_{M}M\,,$
(C.18) $\displaystyle\bra{V}\rho\ket{V}$
$\displaystyle=\sum_{V^{\prime}\in\mathcal{W}}c_{V^{\prime}}\left|\bra{V^{\prime}}\ket{V}\right|^{2}=c_{V}+\sum_{V^{\prime}\in\mathcal{W},V^{\prime}\neq
V}c_{V^{\prime}}\left|\bra{V^{\prime}}\ket{V}\right|^{2}=c_{V}\pm
o(\frac{1}{{\rm poly}(n)})\,.$ (C.19)
Let us inspect $\Tr[\rho^{2}]\leq\Tr[\rho]^{2}\leq 1$:
$\displaystyle 1\geq\Tr[\rho^{2}]$ $\displaystyle=\sum_{V_{1}\neq
V_{2}\in\mathcal{W}}\Tr[c_{V_{1}}c_{V_{2}}^{\dagger}\ket{V_{1}}\bra{V_{1}}\ket{V_{2}}\bra{V_{2}}]+\sum_{V\in\mathcal{W}}|c_{V}|^{2}+\sum_{M\in\mathcal{M}^{\prime}}|c_{M}|^{2}\,.$
(C.20)
Notice that the first summation has $o(2^{2\sqrt{n}})$ terms of size
$O(2^{-2\sqrt{n}})$, so it has magnitude $o(1)$. Since the last summation is
nonnegative, the second summation must be at most a constant $O(1)$. Thus, for
at most $O({\rm poly}(n))$ distinct choices of $V\in\mathcal{W}$, $c_{V}$
(and thus $\bra{V}\rho\ket{V}$) can have magnitude $\Omega(\frac{1}{{\rm
poly}(n)})$.
So, after choosing a polynomial number of $\ket{V}\bra{V}$ for
$V\in\mathcal{W}$, all other subsets $V$ that have good overlap with $\rho$
must have $\omega(2^{-\sqrt{n}})$ overlap with some previously chosen
$V\in\mathcal{W}$. The fractional number of subsets $V$ with this property is
at most
$\displaystyle O({\rm poly}(n))\times\frac{{N\choose
N^{\alpha}(1-\omega(2^{-\sqrt{n}}))}}{{N\choose N^{\alpha}}}=\frac{O({\rm
poly}(n))}{N^{N^{\alpha}\omega(2^{-\sqrt{n}})}}=\frac{1}{2^{n2^{\alpha
n(1-o(1))}}}=o(\frac{1}{2^{{\rm poly}(n)}})\,.$ (C.21)
∎
###### Lemma C.5 (Lemma 4.8, restated; not too many subsets can have elevated
mean).
Consider any $N\times N$ POVM $\\{E,\mathbb{I}-E\\}$, and the set of all
subsets $V\subseteq[N]$, where $|V|=N^{\alpha}$ for a fixed
$0<\alpha<\frac{1}{2}$. Then the fraction of subsets $V$ where
$\displaystyle|f(V)|:=\left|\frac{1}{|V|}\Tr[\mathbb{I}_{V}E]-\frac{1}{N}\Tr[E]\right|=\Omega(\frac{1}{{\rm
poly}(n)})\,,$ (C.22)
decreases faster than any exponential in ${\rm poly}(n)$.
###### Proof.
Suppose for contradiction that $|f(V)|=\Omega(\frac{1}{{\rm poly}(n)})$ for at
least a $\Omega(\frac{1}{2^{{\rm poly}(n)}})$ fraction of all subsets
$V\subseteq[N]$ where $|V|=N^{\alpha}$. Because $f(V)$ is real, it can be
either positive or negative; by pigeonhole, a
$\frac{1}{2}\Omega(\frac{1}{2^{{\rm poly}(n)}})=\Omega(\frac{1}{2^{{\rm
poly}(n)}})$ fraction of subsets $V$ must have the same sign of $f(V)$
(without loss of generality, assume it is positive). Let $\mathcal{W}$ contain
the subsets $V$ with this property.
Let $k=N^{1-\alpha}-N^{1-\alpha-(\alpha/3-\epsilon)}$ for any $\epsilon>0$. We
now build a finite sequence
$\mathcal{Q}=\\{V_{1},V_{2},\dots,V_{k}\\}\subseteq\mathcal{W}$ that nearly
cover $[N]$, and where no two elements share more than $N^{\alpha/3}$
elements. First, choose $V_{1}\in\mathcal{W}$. On subsequent draws, choose
another $V_{m}\in\mathcal{W}$ such that
$\displaystyle\left|V_{m}\cap\left(\cup_{1\leq j<m}V_{j}\right)\right|\leq
N^{\alpha/3}\,.$ (C.23)
Suppose we could not draw $V_{m}\in\mathcal{W}$; then all remaining elements
of $\mathcal{W}$ share at least $N^{\alpha/3}$ elements with one of
$\\{V_{1},\dots,V_{m-1}\\}$. The fractional size of $\mathcal{W}$ (compared to
all subsets) would then be at most
$\displaystyle(m-1)+\frac{{(m-1)N^{\alpha}\choose N^{\alpha/3}}{N\choose
N^{\alpha}-N^{\alpha/3}}}{{N\choose
N^{\alpha}}}\leq\frac{{(k-1)N^{\alpha}\choose
N^{\alpha/3}}}{{N-(N^{\alpha}-N^{\alpha/3})\choose
N^{\alpha/3}}}\leq(\frac{k}{N^{1-\alpha}-1})^{N^{\alpha/3}}\leq\left(1-\frac{1}{N^{\alpha/3-\epsilon}}\right)^{N^{\alpha/3}}\leq
o(\frac{1}{2^{{\rm poly}(n)}})\,,$ (C.24)
which is a contradiction. So we can always construct $\mathcal{Q}$. Now
consider
$\displaystyle\sum_{V\in\mathcal{Q}}f(V)=\frac{1}{|V|}\sum_{V\in\mathcal{Q}}\Tr[E\mathbb{I}_{V}]-\frac{|\mathcal{Q}|}{N}\Tr[E]\,.$
(C.25)
By supposition, all $f(V)>0$ and have magnitude $\Omega(\frac{1}{{\rm
poly}(n)})$, so this quantity should be $\Omega(|\mathcal{Q}|\frac{1}{{\rm
poly}(n)})$. We show however that this sum is small because the two sums are
very close. The number of “overcounts” of an element $x\in[N]$ is at most
$|\mathcal{Q}|N^{\alpha/3}$, and the number of elements $x\in[N]$ not in some
$V\in\mathcal{Q}$ is at most
$\displaystyle N-k(N^{\alpha}-N^{\alpha/3})=O(N^{1-(\alpha/3-\epsilon)})\,.$
(C.26)
Recall that the diagonal elements of a POVM are each nonnegative and at most
$1$. So then
$\displaystyle\left|\frac{1}{|V|}\sum_{V\in\mathcal{Q}}\Tr[E\mathbb{I}_{V}]-\frac{|\mathcal{Q}|}{N}\Tr[E]\right|=O(\frac{|\mathcal{Q}|}{|V|}N^{\alpha/3})+O(\frac{|\mathcal{Q}|}{N}N^{1-(\alpha/3-\epsilon)})=o(\frac{|\mathcal{Q}|}{{\rm
poly}(n)})\,.$ (C.27)
This contradicts our supposition. So the fraction of subsets $V$ where
$|f(V)|=\Omega(\frac{1}{{\rm poly}(n)})$ decreases faster than any exponential
in ${\rm poly}(n)$. ∎
## Appendix D Our setup contrasted with a discrete-time quantum walk
The way one stores a graph in an oracle drastically changes the difficulty of
some problems. Consider a _discrete-time quantum walk_ [Wat98], which allows a
vertex access to a superposition of its neighbors ([Chi22, Chapter 17] has a
good introduction to this topic). Given a $d$-regular graph $G(V,E)$, the
operator $W\in\mathbb{C}^{N^{2}\times N^{2}}$ acts as
$\displaystyle W$ $\displaystyle=\left(\sum_{(j,k)\in
E}\ket{j,k}\bra{k,j}\right)C$ (D.1) $\displaystyle C$
$\displaystyle=\sum_{j\in
V}\ket{j}\bra{j}\otimes(2\ket{\partial_{j}}\bra{\partial_{j}}-\mathbb{I})$
(D.2) $\displaystyle\ket{\partial_{j}}$
$\displaystyle=\frac{1}{\sqrt{d}}\sum_{(j,k)\in E}\ket{k}\,.$ (D.3)
Using a discrete-time quantum walk, we can learn about the mixing properties
of the associated graph; these are fundamentally related to the graph’s
spectral gap [GZ19].
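As a minimal sketch (my own construction, not from the text), the operators (D.1)-(D.3) can be assembled for the complete graph $K_4$. The diagonal pairs $\ket{j,j}$ carry no walk amplitude on a loop-free graph, so the shift is extended by the identity there to make it unitary on all of $\mathbb{C}^{N^2}$:

```python
import numpy as np

N, d = 4, 3   # complete graph K_4: 3-regular, every ordered pair j != k is an edge

# Shift: |j,k> -> |k,j> on edge pairs; identity on the (unused) diagonal pairs.
S = np.zeros((N * N, N * N))
for j in range(N):
    for k in range(N):
        S[(k * N + j) if j != k else (j * N + k), j * N + k] = 1

# Coin C = sum_j |j><j| (x) (2|d_j><d_j| - I), with |d_j> uniform over neighbors of j.
C = np.zeros((N * N, N * N))
for j in range(N):
    dj = np.array([0.0 if k == j else 1.0 for k in range(N)]) / np.sqrt(d)
    C[j * N:(j + 1) * N, j * N:(j + 1) * N] = 2 * np.outer(dj, dj) - np.eye(N)

W = S @ C

assert np.allclose(C @ C, np.eye(N * N))    # the coin is a reflection
assert np.allclose(W.T @ W, np.eye(N * N))  # W is unitary (here real orthogonal)
```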
By contrast, we query each neighbor of a vertex $v\in G$ with the value of the
registers encoding $i\in[d/2]$ (defined by a $G$-coded function). For example,
[ACL11] uses a similar oracle to show that deciding whether a graph is a
single expander graph or two equal-sized disconnected expander graphs is
outside of ${\mathsf{BQP}}$. Intuitively, a lack of superposition access to
neighbors of a vertex makes it harder for a quantum computer to “traverse” the
graph.
mCube: Multinomial Micro-level reserving Model
[1] Emmanuel Jordy
[2] Jolien (contributed equally to this work)
[3] Robin Van (contributed equally to this work)
[1,4] Tim (contributed equally to this work)
[1] KU Leuven, Department of Mathematics, Section of Statistics and Data Science, Leuven, Belgium
[2] National Bank of Belgium
[3] DKV Belgium
[4] University of Antwerp, Department of Mathematics, Section of Applied Mathematics, Antwerp, Belgium
This paper presents a multinomial multi-state micro-level reserving model, denoted mCube. We propose a unified framework for modelling the time and the payment process for IBNR and RBNS claims and for modelling IBNR claim counts. We use multinomial distributions for the time process and spliced mixture models for the payment process. We illustrate the excellent performance of the proposed model on a real data set of bodily injury claims from a major insurance company. It is shown that the proposed model produces a best estimate distribution that is centered around the true reserve.
§ INTRODUCTION
A central part of an insurance company is the management of its future cash flows and solvency capital. To this end, insurers have to set aside reserves to cover outstanding claims liabilities. After an insured event has occurred, it always takes some time to settle the final payment of a claim. Taking into account two different sources of delay in general insurance, insurers typically set aside separate reserves for Incurred But Not Reported claims (IBNR) and Reported But Not Settled claims (RBNS). The number of RBNS claims is known, together with specific information relative to each claim. To determine the reserve for RBNS claims, the aim is to predict the amount that still needs to be paid. For IBNR claims, the insurer does not have information about each specific claim or the total number of these claims. The goal when modelling IBNR reserves is firstly to estimate the number of these claims, and secondly, to estimate the cost for each claim.
New regulations such as the Solvency II and IFRS frameworks guide insurance companies towards best practices for the calculation of their reserves. Following these regulations, it is important that models for claims reserving not only predict accurately the ultimate reserve amount but also the distribution of future cash-flows conditional on currently available information. More information on the calculation of insurance reserves can be found in [1].
A first class of models developed for the task of claims reserving are collective or macro-level models that focus on aggregated data organized in a so-called run-off triangle (often on an annual or quarterly basis). Popular macro-level models include the chain-ladder method [2] and the Bornhuetter–Ferguson method [3]. These methods have been successfully applied for decades due to their ease of use and sound theoretical foundations (see for example [4, 5]).
Furthermore, many extensions have been developed to produce more realistic results (e.g. [6, 7, 8, 9]).
However, aggregating data may lead to several problems and yields a loss of information. Therefore, micro-level or individual claims reserving models focus on granular or claim-specific data of the individual claims.
They aim to model two processes: a time process representing the individual states occupied by a claim, and a payment process which represents the amount paid for a claim in a particular state. The earliest articles on micro-level reserving include [10] and [11]. The modelling ideas from the early papers have been extended in a (semi-)parametric form [12, 13], as well as a non-parametric form [14, 15].
In this paper, we adopt the multi-state approach to loss reserving proposed by [16] and further considered by [17], [18], and [19]. In particular, we
* introduce a new model for IBNR claim counts, based on the multinomial distribution;
* make a connection between the time process modelling and the multi-state competing risk framework;
* use a semi-parametric modelling of the payment distribution, through a mixture distribution with a Generalised Pareto Distribution (GPD) for the tails;
* include practical recommendations on how to apply the proposed method to any micro-level data set;
* compare the predictive capabilities of our proposed method with other micro-reserving models.
The remainder of this paper is organised as follows. Section <ref> presents the claim reserving problem and in Section <ref> we construct a model for IBNR claims based on the multinomial distribution. The models for the time and payment processes based on the multinomial distribution, are the subject of Section <ref>. In Section <ref>, the hyper-parameter tuning for the models is presented and a numerical example on real data is investigated in Section <ref>. Finally, the main findings and suggestions for further research are summarized in Section <ref>.
§ THE CLAIMS RESERVING PROBLEM
The development of a non-life insurance claim is presented in Figure <ref>.
Claim development process.
The occurrence date, $T_{oc}$, is the date at which the claim event occurs and the reporting date, $T_{0}$, refers to the date at which the claim is reported to the insurer.
Once the insurer is aware of the claim and accepts it for reimbursement, some payments at different moments (here represented by $T_{1}$ and $T_{2}$) follow to compensate the insured for their loss. Once the insurance company reimburses the complete loss covered by the policy, the claim closes, which is represented by $T_{c}$. Note that we assume that once a claim is closed, it cannot be reopened.
At the moment of evaluation (commonly: end of a quarter, mid year or end of book year), denoted $\tau$, insurance companies have to set reserves aside to cover their future liabilities. These liabilities can come from three sources, which are enumerated below.
* Claims that have occurred before the evaluation period but which are not yet reported to the insurer, i.e. $T_{oc} \leq \tau < T_{0}$. These are called Incurred But Not Reported (IBNR) claims.
* Claims that have occurred and have been reported before the evaluation period, but which are still open at evaluation date, i.e. $T_{0} \leq \tau < T_{c}$. These are called Reported But Not Settled (RBNS) claims.
* Claims that have been closed before the evaluation date, but might get reopened.
Let us assume that we work on a sufficiently rich probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and denote by $C_{k,t}$ the random variable representing the cumulative amount paid for claim $k$ at time $t$. Next, we want to predict the reserve at evaluation time $\tau$, or in other words, the remaining amount to be paid for the claim until it is closed.
This can be estimated by $\hat{\mathbb{E}}[{R}_{k,\tau} \mid \mathcal{F}_{\tau}]$, denoted by $\hat{R}_{k,\tau}$, which is obtained as the difference of $\hat{C}_{k,T_{c}}$ and $C_{k,\tau}$. Here, $\mathcal{F}_{\tau}$ represents the information available at time $\tau$ and $\hat{C}_{k,T_{c}}$ equals the estimated total cost of the claim at closing time, which can be obtained by $\hat{\mathbb{E}}[{C}_{k,T_{c}} \mid \mathcal{F}_{\tau}]$.
Note that for IBNR claims, we have that ${C}_{k,\tau}$ equals zero and hence, $\hat{R}_{k,\tau} = \hat{C}_{k,T_{c}}$. Finally, the estimated reserve for the whole portfolio is calculated as follows:
\begin{equation}
\hat{R}_{\tau} = \sum_{k_{1} =1}^{ \mathbf{n}^{RBNS}} \hat{R}_{k_1,\tau} + \sum_{k_{2} =1}^{ \hat{\mathbf{N}}^{IBNR}} \hat{R}_{k_2,\tau},
\label{eq: best estimate cost}
\end{equation}
with $\mathbf{n}^{RBNS}$ representing the number of RBNS claims and $\hat{\mathbf{N}}^{IBNR}$
the estimated number of IBNR claims. The goal of this paper is to determine $\hat{R}_{\tau}$, the estimated reserve of the whole portfolio, which is also called the best estimate cost of the portfolio.
§ MULTINOMIAL IBNR MODEL
Suppose an insurer has aggregated data on reported claims that occurred in accident year $i = 1,\ldots,I$, with $I$ the year of evaluation, and were reported in development year $j= 0, \ldots,J-1$. This data is typically represented in a table with the accident years as rows and the development years as columns. From the reported claims at time $\tau = I$, we obtain the upper triangle generating the information $\mathcal{N}_{\tau} = \sigma\{ N_{i,j}; 0 \leq i\leq \tau, j \geq 0, i+j \leq \tau\}$. The goal of this section is to develop a model to determine $\hat{\mathbf{N}}^{IBNR}$, the estimated number of IBNR claims based on the information $\mathcal{N}_{\tau}$. In order to define this model, some notation needs to be introduced first.
Denote by $\pi_{i,j}$ the probability that a claim that occurred in year $i$ will be reported $j$ years later. By assuming that no accident is reported beyond the last development year (no tail factor), we have the following multinomial probability vector for each accident year $i$: $\pi_i = (\pi_{i,0}, \ldots, \pi_{i,J-1})$. Next, $N_{i}$ represents the number of claims that occurred in accident year $i$ and $N_{i,j}$ the number of these claims that have been reported after $j$ years. Note that since we assume that no claims will be reported after the last development year, we have $N_{i} = \sum_{j=0}^{J-1}N_{i,j}$. The number of observed claims that occurred in year $i$ is denoted by $N_i^{obs}$, and since a claim can only be observed once reported, we have that $N_{i}^{obs} = \sum_{j=0}^{I- i} N_{i,j}$. Finally, the number of IBNR claims that occurred in year $i$ is given by $N_i^{IBNR} = \sum_{j=I-i+1}^{J-1} N_{i,j}$, such that the number of IBNR claims over all accident years is given by
\begin{equation}
N^{IBNR} = \sum_{i=1}^{I} N_{i}^{IBNR} = \sum_{i=1}^{I} \sum_{j=I-i+1}^{J-1} N_{i,j}
\label{eq: N_IBNR}
\end{equation}
Hence, $N_i^{IBNR}$ equals $N_i - N_i^{obs}$.
We assume that stationarity holds, i.e. $\pi_{1} = \pi_{2} = \ldots = \pi_{I}$. Moreover, we assume that, conditional on $N_{i}$, $(N_{i,0}, \ldots, N_{i,J-1})$ follows a multinomial distribution with event probabilities $\pi_{i}$ and $N_{i}$ trials.
A final assumption is that, conditional on the number of observed claims, the number of IBNR claims follows a negative binomial distribution with parameters $p_{i} = \sum_{j=0}^{I-i} \pi_{i,j}$ and $r_{i} = N_{i}^{obs} = \sum_{j=0}^{I-i} N_{i,j}$. In other words, $N_{i}^{IBNR} \mid N_{i}^{obs} \sim \text{NegBinom} (r_{i},p_{i})$, such that $E[N_{i}^{IBNR}\mid N_{i}^{obs}] = r_{i}(1-p_{i})/p_{i}$ and $V[N_{i}^{IBNR}\mid N_{i}^{obs}] = r_{i}(1-p_{i})/p_{i}^2$. Note that the negative binomial distribution expresses the distribution of the number of failures in a sequence of Bernoulli trials before $r_i$ successes are reached, with $p_i$ being the probability of success. In this case, a success coincides with reporting.
As shown by [20], these specifications are consistent with the chain-ladder [2] since the predicted number of IBNR claims have the same expected value. We can build a predictive distribution for the number of yearly IBNR claims by repeating the following steps a sufficiently large number (e.g., 100) of times:
* Estimate $\pi_{1} = (\pi_{1,0}, \pi_{1,1},\ldots, \pi_{1,J-1})$ by its maximum likelihood estimator, denoted by $\hat{\pi}_1$. By the stationarity assumption, the other probabilities ${\pi}_k$, $k\in \{2,\ldots,I\}$, are also estimated as $\hat{\pi}_k=\hat{\pi}_1$.
* Standardise the empirical multinomial probabilities such that the probabilities for the unobserved development years sum to 1. This implies that if there are $k$ unobserved development years for accident year $i$, their corresponding multinomial probabilities are given by
\begin{equation}
(\tilde{\pi}_{i,J-k},\ldots, \tilde{\pi}_{i,J-1}) = \left(\frac{\hat{\pi}_{i,J-k}}{\sum_{l=1}^{k} \hat{\pi}_{i,J-l}},\ldots, \frac{\hat{\pi}_{i,J-1}}{\sum_{l=1}^{k} \hat{\pi}_{i,J-l}} \right).
\end{equation}
* Sample the estimated yearly number of IBNR claims, ${\hat{N}_{i}^{IBNR}}$, using the accident-year-specific negative binomial distribution.
* To obtain the estimated number of IBNR claims for the unobserved development years of accident year $i$, $ \{\hat{N}_{i,I-i+1}, \ldots, \hat{N}_{i,J-1}\} $, sample from a multinomial distribution with parameters $n=\hat{N}_{i}^{IBNR}$ and $p=(\tilde{\pi}_{i,I-i+1},\ldots, \tilde{\pi}_{i,J-1})$.
Following Assumption <ref>, it holds that $\hat{N}^{IBNR} = \sum_{i=1}^{I} r_{i}(1-\hat{p}_{i})/\hat{p}_{i}$. The simulations are only used to build a predictive distribution or to construct confidence intervals. We note that this IBNR claim count model does not aim to capture aspects beyond the Over-Dispersed Poisson framework underlying the chain-ladder; rather, the aim is a parametrization built on the multinomial distribution, the central theme of the current article.
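The simulation steps above can be sketched as follows (NumPy; the $4\times 4$ triangle is invented for illustration, and $\hat{\pi}_1$ is here estimated from the single fully observed accident year, a crude stand-in for the maximum likelihood estimator):

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented 4x4 run-off triangle of reported counts N_{i,j}
# (rows: accident years, columns: development years; nan = unobserved).
tri = np.array([
    [100., 40., 15., 5.],
    [110., 45., 12., np.nan],
    [ 95., 38., np.nan, np.nan],
    [105., np.nan, np.nan, np.nan],
])
I = 4

# Step 1: estimate the reporting-delay probabilities from the fully observed row.
pi_hat = tri[0] / tri[0].sum()

n_sim, draws = 1000, []
for _ in range(n_sim):
    total = 0
    for i in range(1, I):                     # accident years with IBNR exposure
        r_i = int(tri[i, :I - i].sum())       # observed claims N_i^obs
        p_i = pi_hat[:I - i].sum()            # probability of being reported by now
        # Step 3: N_i^IBNR | N_i^obs ~ NegBinom(r_i, p_i)
        n_ibnr = rng.negative_binomial(r_i, p_i)
        # Steps 2 and 4: split over unobserved years with renormalised tail probabilities
        tail = pi_hat[I - i:] / pi_hat[I - i:].sum()
        split = rng.multinomial(n_ibnr, tail) # per-development-year IBNR counts
        total += split.sum()
    draws.append(total)

# Point estimate: sum_i r_i (1 - p_hat_i) / p_hat_i, as stated in the text.
point = sum(tri[i, :I - i].sum() * (1 - pi_hat[:I - i].sum()) / pi_hat[:I - i].sum()
            for i in range(1, I))
```

The empirical mean of `draws` is close to `point`, while the spread of `draws` gives the predictive distribution of the IBNR claim count.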
§ MODELS FOR INDIVIDUAL CLAIMS
In this section, the details of the proposed multinomial micro-level reserving model are discussed. This model will be used to determine the estimated reserves $\hat{R}_{k,\tau}$ for each claim $k$, such that we are able to obtain the best estimate cost of the portfolio based on Equation (<ref>). We start by a detailed explanation of the multi-state approach, followed by a discussion of the models that are selected for the time and payment process.
§.§ Multi-state approach
This section uses the multi-state approach of [19] represented in Figure <ref>. In this approach, an RBNS claim occurred in state $S_{oc}$ and is reported in state $S_{0}$. Once reported, either a first payment can occur, implying a transition from state $S_{0}$ to state $S_{1}$, or the claim can move to one of two absorbing states, $S_{tn}$ or $S_{tp}$. Here, $S_{tn}$ represents a claim that reached a terminal state without payment, and $S_{tp}$ a claim that reached a terminal state with payment. In this framework, the states $S_{j}$, $j \in \{0, \ldots, n_{pmax}-1\}$, are all strictly transient and moreover, for $j>0$, state $S_j$ implies that $j$ payments were made prior to the current time point. The integer $n_{pmax}$ represents the maximum number of transitions a claim is allowed before being forced to an absorbing state. More specifically, we represent the multi-state model $(\mathscr{S},\mathscr{T})$ with state space $\mathscr{S}=\{S_{0},S_{1},\ldots,S_{n_{pmax}-1},S_{tn},S_{tp}\}$ and set of direct transitions $\mathscr{T}$. An event corresponds to the transition from one state in $\mathscr{S}$ to another. The set of direct transitions $\mathscr{T}$ defines all possible transitions in the multi-state model, indicated by arrows in Figure <ref>. One advantage of individual claims reserving models is that they allow us to take into account covariate information such as the history of incremental payments, line of business, reporting delay, as well as any other type of information available to describe individual claims and their development. We denote by $\mathcal{F}_{k,T}$ the filtration containing all the information concerning claim $k$ at time $T$. In a multi-state framework, $C_{k,T_{c}}$ can be computed by determining the next states the claim will occupy and summing the amount paid in these future states together with the amount already paid.
Discrete time multi-state model, source: [18].
§.§ Multinomial model for the time process
We use the methodology from discrete time survival analysis, and model the time until an event or transition from one state to another. We define an event as the occurrence of a payment or the transition to a terminal state without payment, as in [17] and [19]. Furthermore, we say that a claim is censored or open when it is not in one of the two absorbing states at the moment of evaluation. The discretization of the multi-state model of the previous section is arbitrarily chosen to be monthly, with one month corresponding to 30 days. The proposed model can be adapted to any discretization, but the number of parameters increases with the granularity considered for the data.
Our goal is to model the time $T_{k,j}$ of the transition of claim $k$ from a state $S_{j}$ to a state $S_{j+1}$, $ j \in \{ 0,1,\ldots,n_{pmax}-2\}$ or to a terminal state, using covariate information included in $\mathcal{F}_{k,t}$, through the covariate vector $\mathbf{x_{k,t}}$. If we denote by $\Delta_{k,j}$ the random and discrete censoring time of claim $k$ in state $S_{j}$, we have the following assumption:
We assume that $T_{k,j}$ and $\Delta_{k,j}$ are independent and that the censoring mechanism is non-informative.
Based on discrete-time competing risks literature [21], we represent the event type as a random variable $\epsilon_{k,j}$, taking values $P$, $TP$, $TN$ corresponding respectively to a transition due to a payment, a terminal payment, or a termination without payment. The discrete-time cause-specific hazard functions are then given by:
\begin{align}
\label{timeprocmultinom}
\lambda_{j,j+1}(t \mid \mathbf{x_{k,t}}) &= \mathbb{P}(T_{k,j} = t, \epsilon_{k,j} = P \mid T_{k,j} \geq t,\mathbf{x_{k,t}} ) \\
&= \frac{\exp(\alpha_{j,j+1} + \mathbf{\beta}_{j,j+1}^{T} \mathbf{x_{k,t}})} {1 + \sum_{e} \exp(\alpha_{j,e} +\mathbf{\beta}_{j,e}^{T} \mathbf{x_{k,t}})}, \nonumber\\
\lambda_{j,{tp}}(t \mid \mathbf{x_{k,t}}) &= \mathbb{P}(T_{k,j} = t, \epsilon_{k,j} = TP \mid T_{k,j} \geq t, \mathbf{x_{k,t}}) \nonumber\\
&= \frac{\exp(\alpha_{j,TP} + \mathbf{\beta}_{j,TP}^{T} \mathbf{x_{k,t}})} {1 + \sum_{e} \exp(\alpha_{j,e} +\mathbf{\beta}_{j,e}^{T} \mathbf{x_{k,t}})} ,\nonumber\\
\lambda_{j,{tn}}(t \mid \mathbf{x_{k,t}}) &= \mathbb{P}(T_{k,j} = t, \epsilon_{k,j} = TN \mid T_{k,j} \geq t, \mathbf{x_{k,t}}) \nonumber\\
&= \frac{\exp(\alpha_{j,TN} + \mathbf{\beta}_{j,TN}^{T} \mathbf{x_{k,t}})} {1 + \sum_{e} \exp(\alpha_{j,e} +\mathbf{\beta}_{j,e}^{T} \mathbf{x_{k,t}})} \nonumber,\\
\lambda_{j,{j}}(t \mid \mathbf{x_{k,t}}) &= \mathbb{P}(T_{k,j} > t \mid T_{k,j} \geq t,\mathbf{x_{k,t}} )\nonumber\\
&= 1 - \lambda_{j,j+1}(t \mid \mathbf{x_{k,t}}) - \lambda_{j,{tp}}(t \mid \mathbf{x_{k,t}}) - \lambda_{j,{tn}}(t \mid \mathbf{x_{k,t}}), \nonumber
\end{align}
where $e$ iterates over all event types, and $\alpha_{j,e}$ and $\mathbf{\beta}_{j,e}$ are the parameters in the multinomial regression model relating to event type $e$. Following Assumption <ref>, the parameters are estimated by their maximum likelihood estimator[using the function in the package [22] in ]. For a claim that has occurred but has not yet been reported, only one event in the multi-state process is possible, namely reporting. Similarly to [19], we estimate the monthly probability of reporting using a binomial Generalized Linear Model (GLM):
\begin{align}
\label{eq:repDel}
\lambda_{oc,0}(t \mid \mathbf{x_{k,t}}) &= \mathbb{P}(T_{k,oc} = t \mid T_{k,oc} \geq t, \mathbf{x_{k,t}})\\
&= \frac{1}{ 1 + \exp(\alpha_{oc} +\mathbf{\beta}_{oc}^{T} \mathbf{x_{k,t}})}, \nonumber
\end{align}
with $\alpha_{oc}$ and $\mathbf{\beta}_{oc}^{T}$ the logistic regression parameters estimated by their maximum likelihood estimator.
Note that $T_{k,j}$ denotes the time period at which the claim moves out of state $S_{j}$ and is reset to 0 each time the claim enters a new non-absorbing state. We treat each discrete time unit as a separate observation in the data set. Hence, for a claim $k$ in state $S_{j}$, there are as many lines as the number of time units the claim spends in this state. A representation of the different cause-specific hazards is shown in Figure <ref>.
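As an illustration, the cause-specific hazards of Equation (<ref>) can be evaluated with a single softmax-style computation. The sketch below assumes three competing events (P, TP, TN) and uses illustrative parameter values; it is not the authors' implementation.

```python
import numpy as np

def cause_specific_hazards(x, alpha, beta):
    """Cause-specific hazards of the multinomial time model above.

    alpha : length-3 intercepts for the competing events (P, TP, TN);
    beta  : 3 x d coefficient matrix; x : covariate vector of length d.
    Returns the three event hazards and the probability of staying.
    """
    scores = np.exp(alpha + beta @ x)   # exp(alpha_{j,e} + beta_{j,e}^T x)
    denom = 1.0 + scores.sum()          # shared multinomial denominator
    hazards = scores / denom            # lambda_{j,j+1}, lambda_{j,tp}, lambda_{j,tn}
    stay = 1.0 - hazards.sum()          # lambda_{j,j}: remain in state S_j
    return hazards, stay

hazards, stay = cause_specific_hazards(
    x=np.array([0.5, 1.0]),
    alpha=np.array([-1.0, -2.0, -2.5]),
    beta=np.array([[0.2, -0.1], [0.0, 0.1], [0.1, 0.0]]),
)
```

By construction, the three event hazards and the staying probability sum to one, mirroring the last line of Equation (<ref>).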
Representation of transition probabilities in the multi-state model, source: [18].
§.§ Modelling of the payment distributions
The difficulties in modelling the payment distribution arise from some stylised properties. First of all, negative payments can be present. For example, when the insurance company has to pay a third party and the insured has an insurance policy with a per-loss deductible of $d$, she has to pay $d$ to the insurance company. Moreover, the payment distribution consists of a high number of small incremental payments, and a small number of very large payments in absolute value. Multiple models have been proposed to overcome these issues: [13] use a lognormal distribution to model payments, [19] use a mixture of lognormal distributions for the first payment and a mixture of lognormal or lognormal and Pareto distributions for the link ratios, [23] use a generalised beta distribution of the second kind, and [12] use a multivariate extension of the univariate skew normal distribution. [24] propose to model censored losses using a mixed Erlang distribution for the body of the distribution and a Generalised Pareto Distribution (GPD) for the tail. The authors also propose to use the mean excess plot [25] to assess where to split the body and the tail of the distribution. This estimation procedure has the advantage of taking into account both random censoring and truncation.
We propose to model the payment distribution using a data-driven modification of [26], which allows for the inclusion of covariate information and which can model the skewness of the distribution. Let $Y^{j}$ denote the random variable representing the payment size for a claim in state $S_{j}$. Moreover, we make the following assumption on the conditional distribution of $Y^{j}$ given $\mathbf{x_{t}}$.
We assume that the density of $Y^{j}$ conditional on $\mathbf{x_{t}}$ is an $L$-component mixture, i.e. $f(y \mid x)= \sum_{l=1}^{L} \pi_{l}^{j}(x) f_{l}^{j}(y)$, where $\pi_{l}^{j}(x)$ is the $l$-th element of the covariate-dependent vector of multinomial logistic mixture weights and $f_{l}^{j}$ are the densities of the mixture components. We further assume that $L$ is known, that $f_{1}^{j}$ and $f_{L}^{j}$ are densities of a Generalized Pareto Distribution (GPD), and that $f_{l}^{j}$ for $l \in \{2,\ldots,L-1\}$ are truncated normal distributions on the interval $[b_{l-1}^{j}, b_{l}^{j}[$. Here, $b_{1}^{j},\ldots, b_{L-1}^{j}$ represent the splitting points separating the density into bins $\mathcal{B}_1^{(j)}, \ldots, \mathcal{B}_L^{(j)}$.
A representation of Assumption <ref> is shown in Figure <ref> with $L = 4$ bins.
Spliced payment distribution with three splitting points.
The splitting points $b_{2}^{j},\ldots, b_{L-2}^{j}$ can be chosen freely so that each bin has some interpretation. In practice, with $L=4$ the three splitting points can be chosen as shown in Figure <ref> so that the bins represent small or large negative payments, as well as small or large positive payments. The leftmost and rightmost splitting points, respectively $b_{1}^{j}$ and $b_{L-1}^{j}$, need to be well chosen in order to ensure that the observations in these bins can be considered a sample from a GPD with $b_{1}^{j}$, $\iota_{1}^{j}$ and $\varphi_{1}^{j}$ ($b_{L-1}^{j}$, $\iota_{L}^{j}$ and $\varphi_{L}^{j}$) as the location, scale and shape parameters for $\mathcal{B}_{1}^{(j)}$ ($\mathcal{B}_{L}^{(j)}$). To this end, we can use tools from extreme value theory such as the mean excess plot or the Gerstengarbe plot [27]. Following Assumption <ref>, the expected payment for a claim $k$ in state $S_{j}$ conditional on its covariate vector $\mathbf{x_{k,t}}$ is given by
\begin{align}
\label{eq:splicing}
\mathbb{E}[Y^{j} \mid \mathbf{x_{k,t}}] &= \sum_{l = 1}^{L} \pi_{l}^{j}(\mathbf{x_{k,t}}) \mu_{l}\\
\pi_{l}^{j}(\mathbf{x_{k,t}}) &= \frac{\exp({\gamma}_{0,l}^{(j)} + \mathbf{x_{k,t}}^{T}{\gamma}_{l}^{(j)})}{1 + \sum_{m=1}^{L} \exp({\gamma}_{0,m}^{(j)} + \mathbf{x_{k,t}}^{T}{\gamma}_{m}^{(j)}) }, \nonumber
\end{align}
with $\mu_{l}$ the mean of the $l$-th component in the mixture. The parameter vectors ${\gamma}_{0}^{(j)}$ and ${\gamma}^{(j)}$ are estimated using maximum likelihood[using the function in the package [22] in ]. The component means $\mu_{1}, \ldots, \mu_{L}$ are also obtained using maximum likelihood estimation. Hence, we can express the expected cumulative amount paid for claim $k$ at closure as
\begin{equation}
\label{totalPay}
{C}_{k,T_{c}} = \sum_{j: S_{j} \notin \{ S_{tn}, S_{tp}\} } \mathbb{E}[Y^{j} \mid \mathbf{x_{k,T_{k,j}}}].
\end{equation}
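The expected payment in Equation (<ref>) can be sketched as follows. The weights here are normalised to sum to one; in practice one category is typically fixed as the baseline for identifiability. All names and parameter values are illustrative.

```python
import numpy as np

def expected_payment(x, gamma0, gamma, mu):
    """Sketch of E[Y^j | x] for one state j under the spliced mixture model.

    gamma0 : length-L intercepts, gamma : L x d coefficients of the
    multinomial weight model; mu : length-L component means.
    """
    scores = np.exp(gamma0 + gamma @ x)
    weights = scores / scores.sum()   # mixture weights pi_l^j(x), summing to 1
    return weights @ mu               # weighted average of component means
```

For instance, with all coefficients zero the weights are uniform and the expected payment is the plain average of the component means.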
§.§ Total cost simulation for RBNS claims
Similarly to [28], we choose a one-period-ahead forecast to determine the estimated reserves $\hat{R}_{k, \tau}$ for each open claim $k$ as it allows us to intervene during the settlement process and helps us keep interpretability in the claims development process. In the modelling of the time process (<ref>), the next state to visit is decided by a multinomial probability vector. Instead of assigning an open claim to the state with the highest multinomial probability, we take a sample of size one from the estimated multinomial distribution. Next, the expected payment for a claim in that state and with a given covariate vector is simulated using (<ref>). All this has the advantage of adding variability to the predictive distribution. We note that using the expected value of the payments results in smoothed inputs in the simulation of RBNS total cost as in [28] and [29]. However, this is acceptable in our case since we are mostly interested in expected future payments and the expected total cost of the claim at closure as explained in the introduction. Another strategy could be to take into account the full distribution of future payments, by simulating an observation from a mixture component chosen based on the probabilities in (<ref>).
Repeating these multinomial samplings ${N_{sim}}$ times, we obtain a predictive distribution for $C_{k,T_C}^{r}$, the total payment of claim $k$, and its reserve ${R}_{k,\tau}^{r}= C_{k,T_C}^{r} - C_{k,\tau} $ with $r \in \{1, \ldots, N_{sim}\}$.
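The one-period-ahead simulation described above can be sketched as a simple loop. The helper functions `hazard_fn` and `payment_fn`, the event ordering, and the 24-month cap are illustrative stand-ins for the fitted time and payment models.

```python
import numpy as np

def simulate_rbns_trajectory(hazard_fn, payment_fn, x, rng, max_periods=24):
    """One simulated trajectory of an open claim (illustrative helpers).

    hazard_fn(state, x)  -> probabilities over the events ('N','P','TN','TP')
    payment_fn(state, x) -> expected payment for a claim paying in `state`
    """
    state, total = 0, 0.0
    for _ in range(max_periods):
        event = rng.choice(['N', 'P', 'TN', 'TP'], p=hazard_fn(state, x))
        if event == 'P':
            total += payment_fn(state, x)
            state += 1            # move to the next payment state
        elif event == 'TP':
            total += payment_fn(state, x)
            return total          # terminal state with payment
        elif event == 'TN':
            return total          # terminal state without payment
        # event == 'N': stay; time-varying covariates would be updated here
    return total
```

Running this trajectory $N_{sim}$ times and subtracting the amount already paid gives the simulated reserves ${R}_{k,\tau}^{r}$.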
RBNS total cost simulation strategy.
§.§ Total cost simulation for IBNR claims
For IBNR claims, an extra step is added compared to RBNS claims, since we need to take into account the reporting delay. For each IBNR claim, this extra step consists in taking a sample of size one from a Bernoulli distribution using the estimated reporting probabilities (<ref>). If this Bernoulli sample is 1, the RBNS total cost simulation strategy is applied. If the sample equals 0, then the covariate vector is updated and the claim remains in state $S_{oc}$. The Bernoulli sampling procedure is repeated until the claim leaves the state $S_{oc}$.
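The extra reporting step for IBNR claims can be sketched as a repeated Bernoulli draw; `report_prob_fn` stands in for the fitted reporting GLM of Equation (<ref>), and the 180-period cap is illustrative.

```python
import numpy as np

def simulate_reporting_delay(report_prob_fn, x, rng, max_periods=180):
    """Repeat the Bernoulli reporting draw until the claim leaves S_oc.

    report_prob_fn(x) -> estimated monthly reporting probability.
    Returns the simulated number of periods until reporting.
    """
    for period in range(1, max_periods + 1):
        if rng.random() < report_prob_fn(x):
            return period   # claim reported; RBNS simulation starts here
        # otherwise the covariate vector would be updated and we stay in S_oc
    return max_periods
```

Once the claim is reported, the RBNS total cost simulation strategy of the previous section takes over.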
§ HYPER-PARAMETER OPTIMIZATION
The flexibility of the proposed models leads to multiple hyper-parameters that need to be set prior to fitting the time and payment models. In this section, we explain the role of each hyper-parameter, as well as our tuning strategy. Furthermore, we want to alleviate the linearity assumption on the continuous variables used in the various regression models of mCube. To this end, each continuous variable is binned into categories using the strategy explained in the next section; this binning allows the proposed non-complex model to capture complicated patterns.
§.§ Feature engineering
Individual claims data contain both static information such as the line of business, claim type and injured body part as well as dynamic information such as the cumulative payments and dates, that evolve throughout the claim's life. Table <ref> shows a sample of the dynamic information available in an individual claim.
PolNumb cumPay bookDate accDate repDate Status closedDate
2640440 4,087.61 09-01-2012 01-01-2012 02-01-2012 O 28-08-2012
2640440 4,127.11 10-01-2012 01-01-2012 02-01-2012 O 28-08-2012
2640440 7.12 02-02-2012 01-01-2012 02-01-2012 O 28-08-2012
2640440 297.12 07-02-2012 01-01-2012 02-01-2012 O 28-08-2012
2640440 297.12 28-08-2012 01-01-2012 02-01-2012 C 28-08-2012
Example of the dynamic claim information available from the database.
Since mCube requires that all time-varying variables are transformed into their respective values at the end of the fixed discrete time steps, the raw data set shown in Table <ref> needs to be processed. More specifically, in our case, we have set 30 days as our fixed time step ("perLen") and, for example, for the cumulative payment we record its value at the end of the subsequent time step. If this value differs in absolute value by more than "minPayVal" from the previously recorded payment, we consider that a payment was performed and the claim will experience a transition from its current state. If the claim resides in $S_0$, the previously recorded payment equals 0. As a result, transType will be set to P, TN or TP, depending on the transition type. If no payment has occurred, transType will be equal to N. See Table <ref> for how the information in Table <ref> is transformed under these rules.
Next we work out how all commonly present time-varying variables are transformed to comply with mCube. These variables can be considered the minimal set of variables present in the data set used for any micro-level reserving model, and this set can be augmented by other time-varying variables recorded by the insurance company at hand. As we work in a discrete time setting with a chosen period length ("perLen"), we denote the event times for claim $k$ as $t_{0}^{k} \leq t_{1}^{k} < t_{2}^{k} < \ldots \leq t_{Q}^{k}$, with $t_{0}$ representing the accident date, $t_{1}$ the reporting date, $t_{2}, \ldots, t_{Q-1}$ the payment dates and $t_{Q}$ the closing date. If $i \geq 1$ and $t$ is such that $t_{i}^{k} \leq t < t_{i+1}^{k}$, we create the following variables for claim $k$:
* $ x_{k,t}^{1} = \max\left( 1, \ceil*{\frac{t_{1}^{k} - t_{0}^{k}}{perLen}} \right)$, to represent the reporting delay (deltRep).
* $x_{k,t}^{2} = \mathbbm{1}_{t_{1}^{k} = t_{0}^{k}}$, to represent a fast reporting indicator (fastRep).
* $x_{k,t}^{3} = \max\left( 1, \ceil*{\frac{t - t_{1}^{k}}{perLen}} \right)$, to represent the time since reporting (inProcTime).
* $x_{k,t}^{4} = y_{i-1}^{k}$, the payment at time $t_{i-1}^{k}$, with $i \geq 3$ (delt1Pay).
* $x_{k,t}^{5} = \ceil*{\frac{t- t_{i-1}^{k}}{perLen}}$, with $ i \geq 3$ for the time since the previous payment (delt1PayTime).
* $x_{k,t}^{6} = \sum_{\{s : t_{s}^{k}< t\}} y_{s}^{k}$, for the cumulative payments up to time $t$ (cumDelt1Pay).
* $x_{k,t}^{7} = x_{k,t}^{3} \mathbbm{1}_{\{i = 1\}} + x_{k,t}^{5} \mathbbm{1}_{\{i>1\}}$ for the time spent in the current state (inStateTime).
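The transformations above can be sketched as follows. This is an illustrative Python rendering with a simplified indexing convention (payments are keyed by their event index); it is not the authors' preprocessing code.

```python
import math

def claim_features(t, event_times, payments, per_len=30):
    """Sketch of the time-varying features x^1 .. x^7 listed above.

    event_times : [t0 (accident), t1 (reporting), t2, ... (payment dates)]
    payments    : {event index s: payment amount y_s} for the payment events
    Assumes t_i <= t < t_{i+1} for some i >= 1.
    """
    t0, t1 = event_times[0], event_times[1]
    # index i of the last event at or before t
    i = max(s for s, ts in enumerate(event_times) if ts <= t)
    feats = {
        "deltRep":      max(1, math.ceil((t1 - t0) / per_len)),
        "fastRep":      int(t1 == t0),
        "inProcTime":   max(1, math.ceil((t - t1) / per_len)),
        "delt1Pay":     payments.get(i) if i >= 2 else None,
        "delt1PayTime": math.ceil((t - event_times[i]) / per_len) if i >= 2 else None,
        "cumDelt1Pay":  sum(y for s, y in payments.items() if event_times[s] < t),
    }
    feats["inStateTime"] = feats["inProcTime"] if i == 1 else feats["delt1PayTime"]
    return feats
```

For example, a claim with accident date $t_0=0$, reporting date $t_1=30$ and a payment of 100 at $t_2=90$, evaluated at $t=120$, yields a reporting delay of 1 period, 3 periods in the process, and 1 period since the previous payment.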
Hence, at each time $t$, we have the claim's feature information vector $\mathbf{x_{k,t}} = (x_{k,t}^{1}, x_{k,t}^{2}, x_{k,t}^{3}, x_{k,t}^{4}, x_{k,t}^{5}, x_{k,t}^{6}, x_{k,t}^{7}, \mathbf{x}_{base})$ with $\mathbf{x}_{base}$ representing the remaining static information of the claim. When modelling the payment distribution, we also add as a covariate an indicator of whether the payment is terminal, as this information is always available before estimating a payment. Using the accident, reporting, booking or settlement date, additional features such as the day, month, quarter or financial year during which any of these events occurred can be engineered. These would help capture calendar effects more easily. For the sake of simplicity, none of these seasonal variables were included in our analysis, although there is no technical limitation to doing so. Table <ref> contains the transformed database resulting from the transformations proposed in this section.
PolNumb cumPay bookDate accDate repDate transType closedDate
2640440 4,127.11 10-01-2012 01-01-2012 02-01-2012 P 28-08-2012
2640440 297.12 07-02-2012 01-01-2012 02-01-2012 P 28-08-2012
2640440 297.12 08-03-2012 01-01-2012 02-01-2012 N 28-08-2012
2640440 297.12 07-04-2012 01-01-2012 02-01-2012 N 28-08-2012
2640440 297.12 07-05-2012 01-01-2012 02-01-2012 N 28-08-2012
2640440 297.12 06-06-2012 01-01-2012 02-01-2012 N 28-08-2012
2640440 297.12 06-07-2012 01-01-2012 02-01-2012 N 28-08-2012
2640440 297.12 28-08-2012 01-01-2012 02-01-2012 TN 28-08-2012
deltRep fastRep procTime deltPay cumDeltPay stateTime state
1 0 1 NA NA 1 $S_{0}$
1 0 2 4,127.11 4,127.11 1 $S_{1}$
1 0 3 -3829.99 297.12 1 $S_{2}$
1 0 4 -3829.99 297.12 2 $S_{2}$
1 0 5 -3829.99 297.12 3 $S_{2}$
1 0 6 -3829.99 297.12 4 $S_{2}$
1 0 7 -3829.99 297.12 5 $S_{2}$
1 0 8 -3829.99 297.12 6 $S_{2}$
Example of the transformed dynamic claim information.
§.§ Binning continuous predictors
We use a modified version of the data-driven strategy of [30] to bin continuous variables, where the authors propose to fit a Generalized Additive Model (GAM) in which the covariate effects of the continuous variables are fitted using cubic splines. Next, the spline estimates of the continuous predictors are binned using an evolutionary regression tree [31]. We make the following adaptations to the algorithm of [30]:
* First, a sufficiently large bootstrap sample is taken from the data set. We recommend sampling between 50,000 and 100,000 observations for each bootstrap sample and taking only a limited number of bootstrap samples. In our case study, we used a sample size of 100,000 per bootstrap and took 10 bootstrap samples.
* In each bootstrap repetition, and for each continuous predictor, we split the continuous variable into 40 groups where the split points are the 0.025, 0.05,…, 0.95, 0.975 quantiles. For each group, the median value of the continuous variable of interest is chosen as the group representative or medoid.
* We then fit a multinomial regression similar to (<ref>) in which the variable $x_{k,t}^{7}$ (inStateTime) and the medoids obtained in the previous step are used as the predictors, and the transition type is chosen as the response.
* For each hazard function, the corresponding multinomial parameter estimates for each group representative are used as responses in a local regression (loess). The predictors of the local regression are the medoids. As such, a covariate estimate is obtained for each value of the considered continuous variable, instead of only its medoids. Note that steps (2) to (4) can also be replaced by using a spline estimate for the considered continuous variable in the multinomial model of step (3); however, this requires a much longer fitting time than the current proposal, and the end results of both approaches were found to be relatively similar.
* For each hazard function, a regression tree is then fitted on the obtained parameters to obtain a set of "nGroups" -1 splitting points for the continuous variable.
* In the final step, the splitting points of all 3 hazard functions are merged and ordered, so as to obtain a single set of splitting points that bins the considered continuous variable in the same way for each considered transition function. As such, we do not need to define a separate binned version of the considered continuous variable for each hazard function.
Note that we impose that each bin obtained by this method has at least "nMinLev" observations.
The choice of the hyper-parameters "nGroups", "nGroupsFin" and "nMinLev" is discussed in appendix D.
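Steps (2), (5) and (6) of the binning strategy can be sketched as follows. This is a simplified greedy illustration of the quantile grouping and the regression-tree splits (the loess smoothing and the evolutionary tree of [31] are replaced by a plain SSE-minimising split search); all names mirror the hyper-parameters above but the code is not the authors' implementation.

```python
import numpy as np

def quantile_groups(x, n_groups=40):
    """Step (2): split a continuous variable into quantile groups and return
    the group medoids (median value per group)."""
    qs = np.quantile(x, np.linspace(0, 1, n_groups + 1))
    idx = np.clip(np.searchsorted(qs, x, side="right") - 1, 0, n_groups - 1)
    return np.array([np.median(x[idx == g]) for g in range(n_groups) if (idx == g).any()])

def tree_splits(medoids, estimates, n_groups_fin=4, n_min_lev=2):
    """Steps (5)-(6): greedy regression-tree-style splits on the smoothed
    parameter estimates, giving nGroupsFin - 1 splitting points while keeping
    at least nMinLev observations per bin."""
    splits, segments = [], [(0, len(medoids))]
    for _ in range(n_groups_fin - 1):
        best = None
        for seg_i, (lo, hi) in enumerate(segments):
            for cut in range(lo + n_min_lev, hi - n_min_lev + 1):
                # within-segment sum of squared errors after the cut
                sse = (np.var(estimates[lo:cut]) * (cut - lo)
                       + np.var(estimates[cut:hi]) * (hi - cut))
                if best is None or sse < best[0]:
                    best = (sse, seg_i, cut)
        if best is None:
            break
        _, seg_i, cut = best
        lo, hi = segments.pop(seg_i)
        segments += [(lo, cut), (cut, hi)]
        splits.append(medoids[cut])
    return sorted(splits)
```

Applied per hazard function, the resulting split sets would then be merged as in step (6).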
§.§ Hyper-parameters
Due to the high flexibility of the time and payment models of mCube, multiple hyper-parameters need to be set prior to fitting any of the models. Due to the computational complexity of mCube and the large number of hyper-parameters and possible values for these hyper-parameters, a search in the hyper-parameter space is not feasible. We propose to choose values for the hyper-parameters based on patterns present in the data and business logic. We refer to Appendix C for more details on this matter.
§ CASE STUDY
In this section, the mCube is applied on a real data set of a major insurance company in order to explore its performance.
§.§ Data
The data used in this section is a random sample of the set of claims obtained from a European insurer, resulting in a total of 25,821 bodily injury claims occurring between 2006 and 2012, with the latest evaluation moment being December 31, 2018, some of the claims still being open at that time. To anonymize the payment data, we have multiplied all payments by a non-disclosed constant value. Furthermore, we adjust all payments for inflation. All the information present prior to and including December 31, 2012, constitutes the training set. Our aim is to predict, as of December 31, 2012, the remaining cost of every open claim up to December 31, 2018. The set of all such open claims corresponds to our entire test set. Since we have all information up to December 31, 2018, and hence 6 years of additional information for each claim, the true cost evaluated at December 31, 2012, is known for all claims in our test set.
Distribution of the number of claims per accident and reporting year, together with the distribution of the reporting and settlement delay.
From Figures <ref>, <ref>, <ref> and <ref>, obtained from the entire data set, we observe that the number of accidents is stable over the different accident years and most claims are reported in the month in which they happened. The claim settlement distribution is right skewed with most claims being settled within 2 years.
Table <ref> shows the distribution of the total number of transitions for claims in the data set. We observe that only around 13% of claims have more than 6 transitions.
Furthermore, we observe that most claims have 2 or 3 transitions in the multi-state process and about 64% of all selected claims have 3 transitions or fewer; the remaining claims form a substantial part of the data set with a long and relatively complicated development pattern, which is rather typical for bodily injury claims. Table <ref> shows the distribution of the time spent in each state. We note that observations for states greater than $5$ are lumped together in a state $S_{5+}$ due to the small number of observations in each of the individual data sets and given that only a small percentage of the claims have more than 6 transitions (see also Table <ref>). We observe that claims spend on average between 5 and 7 months (30-day periods) in each state. Furthermore, states $S_{0}$ and $S_{5+}$ have the lowest average time spent and are also the most skewed, because these data sets contain the most diverse types of claims.
7c Total number of transitions
$1$ $2$ $3$ $4$ $5$ $6$ $>6$
abs.freq 2,275 7,429 7,250 3,125 1,481 848 3,413
rel.freq 8.81 % 28.77 % 28.08 % 12.10 % 5.74 % 3.28% 13.22%
Absolute and relative frequencies of the total number of transitions for each claim.
State Min Median Mean Max IQR Skewness Kurtosis
$S_{0}$ 1.00 2.00 5.36 72.00 5.00 3.36 14.46
$S_{1}$ 1.00 3.00 5.89 72.00 5.00 3.07 12.55
$S_{2}$ 1.00 4.00 7.02 72.00 6.00 2.82 10.14
$S_{3}$ 1.00 4.00 6.44 72.00 6.00 2.59 8.50
$S_{4}$ 1.00 3.00 6.16 61.00 6.00 2.68 9.07
$S_{5+}$ 1.00 3.00 4.64 61.00 4.00 3.10 12.91
Summary statistics for the time spent in each state.
Table <ref> shows the payment distribution in each state. From this table, we can see how complicated the payment distributions are, due to the presence of negative payment amounts, small median payment amounts, right-skewed distributions, and very large excess kurtosis. Furthermore, states $S_0$ and $S_{5+}$ have by far the highest values for skewness and kurtosis, which underlines yet again the heterogeneity present in the claims in these states.
State Min Median Mean Max IQR Skewness Kurtosis
$S_{0}$ -18,365.69 1,149.30 2,808.84 343,463.26 2,360.00 22.35 679.28
$S_{1}$ -45,968.06 662.06 1,390.16 198,923.65 1,747.04 10.19 256.32
$S_{2}$ -58,154.22 808.90 2,041.66 165,782.32 1,936.61 8.89 131.32
$S_{3}$ -53,078.64 924.56 2,549.96 273,638.48 2,291.40 8.67 154.79
$S_{4}$ -33,892.65 1,013.95 2,550.49 97,253.20 2,622.99 5.19 57.49
$S_{5+}$ -123,556.32 938.34 4,088.08 586,846.69 2,932.84 16.84 472.04
Summary statistics for the payment distribution in each state.
§.§ Time models evaluation
We now evaluate the performance of the time models on the training data. Table <ref> shows the number of observations (rows) in each data set used to estimate the time models. Since we have added one row for each time period that a claim has spent in the respective state, we observe a high number of observations in Table <ref>. To evaluate the accuracy of the time models, we perform for each considered state a 5-fold cross-validation where, for each claim in a hold-out set, we predict the time it takes to exit the state as well as the state it will exit to. For each claim in a hold-out set, we simulate 100 trajectories and for each trajectory we record the time it took to exit the state and the transition type. For each claim, we define the final transition type as the transition type that was simulated most often in the 100 trajectories. Furthermore, we note that once a claim has stayed more than 24 months in a state, we force it to exit the state, and once it has stayed more than 180 months in the process, we force the claim to an exit state. Table <ref> shows the percentage of correctly predicted transitions and the mean bias time averaged over the 5 folds. We observe that for most states, we predict around 80% of the transitions correctly. State $S_{5+}$ is more complicated as claims from different states are used to build that model, hence we observe a drop in performance. We also observe that for states $S_{0}, \ldots, S_{4}$, the mean bias (predicted minus true) of the exit time is close to 0, while for state $S_{5+}$ it is negative; hence the next states, as well as the time a claim stays in a state, are overall well estimated.
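The majority-vote prediction described above can be sketched in a few lines; the trajectory pairs are illustrative inputs, not output of the fitted models.

```python
from collections import Counter

def predict_transition(simulated):
    """Majority-vote transition type and mean exit time over simulated
    trajectories, each given as a (transition_type, exit_time) pair."""
    types = [t for t, _ in simulated]
    times = [time for _, time in simulated]
    final_type = Counter(types).most_common(1)[0][0]
    return final_type, sum(times) / len(times)
```

For example, three trajectories with transitions P, P and TN yield P as the predicted transition type.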
6c State
$S_{0}$ $S_{1}$ $S_{2}$ $S_{3}$ $S_{4}$ $S_{5+}$
abs.freq 64,253 76,977 58,342 27,476 15,462 53,520
Number of observations in each state to model the time process.
TransType $S_{0}$ $S_{1}$ $S_{2}$ $S_{3}$ $S_{4}$ $S_{5+}$
N 62 % 73 % 77% 75% 72% 65%
P 35 % 19 % 13% 22% 21% 31%
TN 0 % 4 % 6% 5% 3% 1%
TP 3 % 4 % 4% 4% 3% 3%
Percentage of transitions in the training data.
$S_{0}$ $S_{1}$ $S_{2}$ $S_{3}$ $S_{4}$ $S_{5+}$
% correct transitions 92% 83% 82% 84% 86% 68%
mean bias time 0.14 0.09 -0.23 -0.08 0.04 -14.09
Percentage of correct transitions and mean bias time (months) for the time models.
§.§ Interpreting marginal effects of covariates on the time models
In this section, we investigate the marginal effect of the covariates on the transition probabilities using partial dependence plots (PDP) introduced by [32] and implemented using the [33] package in R. These PDPs illustrate the marginal effect of a covariate (predictor) on the predictions made by the models. This effect is marginalized over the other covariates, meaning that to get the PDP for a categorical variable, we assign to each observation that same category for the variable of interest and average the predictions. Hence, as described by [34], the PDP of a category represents the average prediction made by the model if we force each observation to have that category for the given categorical variable.
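The PDP computation for a categorical variable described above can be sketched as follows; `predict` and the row dictionaries are illustrative stand-ins for a fitted model and the data set.

```python
def partial_dependence(predict, rows, var, category):
    """PDP value for `category`: force every observation to that category
    for the variable `var` and average the model's predictions."""
    forced = [{**row, var: category} for row in rows]
    return sum(predict(r) for r in forced) / len(forced)
```

Evaluating this for each category of the variable gives the points of the partial dependence plot.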
From Figure <ref>, we observe that for claims in $S_{0}$, an increase in the time spent in the process, and hence in the time spent in the state, decreases the probability to have a payment. Similarly, high values of the reporting delay decrease the probability of exiting the state.
Partial dependence plots representing the marginal effect on transition probabilities of the time spent in the process and the reporting delay for claims in $S_{0}$.
For claims in $S_{1}$, we have two extra covariates, namely the binned size of the first (hence previous) payment, with bins given in Appendix E, and the time spent in the current state. We observe that, as for the model for claims in $S_{0}$, high values of the time spent in the process, the time spent in the state and the reporting delay decrease the probability of exiting the state. We also observe that a large previous payment increases the probability of having a payment.
Partial dependence plots representing the marginal effect on transition probabilities of the time spent in the process (a), the reporting delay (b), the time spent in the state (c) and the previous payment size (d) for claims in $S_{1}$.
Figure <ref> shows the transition probabilities for claims in $S_{2}$, for which we have one extra covariate compared to $S_{1}$, namely the binned cumulative previous payment size, with bins given in Appendix E. Just like for the $S_1$ time model, we observe that high values of the time spent in the process (inProcTime) and the time spent in the state (inStateTime) decrease the probability of a transition. We also observe that a large cumulative previous payment increases the probability of having a payment, whereas the size of the previous payment only has a small impact on the transition probabilities. Transition probabilities for claims in $S_{3}$, $S_{4}$ and $S_{5+}$ are shown in Figures <ref>, <ref> and <ref> in Appendix F, where we observe similar effects of the covariates as in $S_{2}$.
Partial dependence plots representing the marginal effect on transition probabilities of the time spent in the process (a), the previous payment size (b), the time spent in the state (c) and the cumulative previous payment size (d) for claims in $S_{2}$.
§.§ Payment models evaluation
In this section we evaluate the accuracy of the payment models through 5-fold cross-validation. Table <ref> shows the number of observations in each state, which will be used to estimate the payment distributions. We observe that the number of observations used to fit the payment model of a given transition is lower than the number of observations of the time fit of the same transition (see Table <ref> versus Table <ref>), the reason being that we model a payment distribution conditionally on the occurrence of a payment. Hence, to model the payment distribution, we only use the lines in the data sets where the transition type contains a payment, i.e., lines where transType is equal to 'P' or 'TP' in Table <ref>. During the 5-fold cross-validation, the payment models are trained on the training portion and evaluated on each holdout set, where we compute the Root Mean Square Error (RMSE) and Median Absolute Error (MAE) between the true and predicted payments. For each fold, we set $b_{2} = 0$ and the other splitting points for the payment distribution are estimated using the Gerstengarbe plot, as explained in Section <ref>. In this way, the four bins can be interpreted respectively as small and large negative or positive payments. From Table <ref>, we observe that the center of the payment distribution is well captured, as the MAE is between 911 and 2,323 euro. We observe furthermore that the payment distributions in the data sets for states $S_{0}$ and $S_{5+}$ are the most difficult to model, given the non-homogeneity of claims in those data sets.
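The evaluation loop above can be sketched as follows. Here `fit` and `predict` are hypothetical stand-ins for the payment model of a given state, and the fold assignment is a plain random split rather than the paper's exact procedure:

```python
import numpy as np

def kfold_payment_scores(X, y, fit, predict, k=5, seed=1):
    """5-fold CV returning the average RMSE and Median Absolute Error."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    rmse, med_ae = [], []
    for f in range(k):
        test = folds[f]
        train = np.concatenate([folds[g] for g in range(k) if g != f])
        model = fit(X[train], y[train])
        err = predict(model, X[test]) - y[test]
        rmse.append(np.sqrt(np.mean(err ** 2)))
        med_ae.append(np.median(np.abs(err)))  # note: the paper's MAE is a *median* absolute error
    return float(np.mean(rmse)), float(np.mean(med_ae))
```

Any model exposing a fit/predict pair can be plugged in, e.g. a trivial mean predictor for a sanity check.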
State
$S_{0}$ $S_{1}$ $S_{2}$ $S_{3}$ $S_{4}$ $S_{5+}$
abs.freq 24,480 18,465 10,062 5,849 3,941 19,084
Number of observations in each data set to model the payment process.
$S_{0}$ $S_{1}$ $S_{2}$ $S_{3}$ $S_{4}$ $S_{5+}$
RMSE 9,561 4,946 4,494 5,238 4,391 8,008
MAE 1,668 1,060 911 1,193 1,478 2,323
5-fold CV RMSE and MAE of the payment distributions in each data set.
§.§ Interpreting marginal effects of covariates on the payment models
We now look at the marginal effects of the covariates on the prediction of either a large or small negative payment, represented by the bins $B_{1}$ and $B_{2}$ respectively. Similarly, we also look at the probabilities of obtaining a small or large positive payment, represented by the bins $B_{3}$ and $B_{4}$. Table <ref> shows the splitting points for the payment distribution in each data set and in Table <ref> the predicted mean payment of each bin is shown, estimated as explained in Section <ref>.
State
$S_{0}$ $S_{1}$ $S_{2}$ $S_{3}$ $S_{4}$ $S_{5+}$
$b_{1}$ -1,230 -3,000 -2,310 -1,780 -2,140 -2,690
$b_{2}$ 0 0 0 0 0 0
$b_{3}$ 3,500 3,200 2,970 3,107 2,500 2,530
Splitting points ($b_{l}^{j}$) for the payment distribution in each data set.
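As a sketch, a payment can be mapped to its bin using splitting points like those above; the convention that a value exactly on a split falls into the lower bin is our assumption, not stated in the text:

```python
import bisect

def payment_bin(payment, b1, b2, b3):
    """Map a payment to B1 (large negative), B2 (small negative),
    B3 (small positive) or B4 (large positive) given splits b1 < b2 < b3."""
    return "B%d" % (bisect.bisect_left([b1, b2, b3], payment) + 1)
```

For example, with the $S_{0}$ splits $(b_{1}, b_{2}, b_{3}) = (-1{,}230, 0, 3{,}500)$, a payment of 1,000 lands in $B_{3}$.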
State
$S_{0}$ $S_{1}$ $S_{2}$ $S_{3}$ $S_{4}$ $S_{5+}$
$B_{1}$ -6,043 -11,725 -10,644 -8,024 -10,047 -13,725
$B_{2}$ -475 -1,248 -800 -670 -788 -993
$B_{3}$ 1,404 1,152 1,070 1,094 948 909
$B_{4}$ 7,230 7,510 6,915 7,400 7,004 9,914
Mean payment ($\mu_{l}$) in bins for each data set.
Bin $S_{0}$ $S_{1}$ $S_{2}$ $S_{3}$ $S_{4}$ $S_{5+}$
$B_{1}$ 0.3 % 6.9 % 3.5% 3.5% 1.8% 1%
$B_{2}$ 0.4 % 17.5 % 9.3% 6.1% 4.8% 2.5%
$B_{3}$ 76.2 % 60.9 % 71.9% 72.7% 70.2% 66.3%
$B_{4}$ 23.1 % 14.7 % 15.3% 17.7% 23.2% 30.2%
Percentage of claims in each bin per state in the training data.
From Figure <ref>, we observe that the time spent in the process and the reporting delay have little impact on the probability of having a small or large payment. Note in Table <ref> that almost all payments in $S_0$ are positive, hence pertaining to $B_3$ or $B_4$.
Partial dependence plots representing the marginal effect on probabilities to belong to a payment bin of the time spent in the process and the reporting delay for claims in $S_{0}$.
In $S_1$ we see that about 25% of all payments are negative. From Figure <ref>, we see that the covariates have a small effect, though a larger one than for $S_0$. The most likely payment is a small positive payment. Furthermore, we observe a probability of around 40% that a large positive previous payment leads to a large payment of either sign.
Partial dependence plots representing the marginal effect on probabilities to belong to a payment bin of the time spent in the process (a), the reporting delay (b), the time spent in the state (c) and the previous payment size (d) for claims in $S_{1}$.
From Figure <ref>, we observe that claims that stay longer in the process tend to have a higher probability of a positive payment. This relates to the fact that these are longer tailed claims that tend to cost more. We also observe that claims with a high cumulative previous payment tend to produce higher payments, hence claims that are costly early in development tend to stay costly as development progresses. The marginal effect of covariates for states $S_{3}$, $S_{4}$ and $S_{5+}$ are similar to those of state $S_{2}$ and are shown in Appendix G.
Partial dependence plots representing the marginal effect on probabilities to belong to a payment bin of the time spent in the process (a), the previous payment size (b), the time spent in the state (c) and the cumulative previous payment size (d) for claims in $S_{2}$.
§.§ IBNR count comparison with the chain-ladder
We start by computing the yearly IBNR claim counts based on the methodology presented in Section <ref>. As explained in that section, the four steps to obtain the predicted number of IBNR claims for a given accident and development year are repeated 1000 times. The mean and 95% quantiles of these IBNR predictions are shown in Table <ref>, where the quantiles for the chain-ladder are obtained with the bootstrap strategy proposed by [4] with an over-dispersed Poisson process distribution. For each accident year, the mean of these IBNR counts is used to build the IBNR data set as explained in Appendix B.
2006 2007 2008 2009 2010 2011 2012 Total
Database 0 0 0 0 2 29 283 314
CL mean 0 0 1 1 3 15 173 193
CL 95% 0 0 4 4 8 26 213 233
mCube mean 0 0 1 1 2 15 175 194
mCube 95% 0 0 3 3 5 20 197 228
Predicted yearly IBNR claim counts.
Clearly, most of the IBNR claims come from the last observed accident year, as these are bodily injury claims which are typically reported rather fast. As mentioned in Section <ref>, the average number of IBNR claims predicted by mCube corresponds to the number of IBNR claims predicted by Mack's chain-ladder. However, both the chain-ladder and mCube underestimate the number of IBNR claims in the later accident years.
§.§ Comparison with other micro-reserving models
In this section, we compare the performance of the proposed methodology on an individual level to the multi-state model of [19] and the hierarchical GLM of [35]. The goal is to see how well the predictive distributions of the reserves capture the true reserves for the RBNS claims. Let $\hat{q}_{k}^{\alpha}$ denote the $\alpha$-quantile of the predictive distribution of ${R}_{k,\tau}$, consisting of $N_{sim}$ possible reserve values $\hat{R}_{k,\tau}^{1}, \ldots, \hat{R}_{k,\tau}^{N_{sim}}$. We can then define the following measures:
* The Interval Score, IS := $mean(\hat{q}_{k}^{\alpha} - \hat{q}_{k}^{1-\alpha} )$, measures the width of the prediction intervals.
* The Prediction Interval Coverage Probability, PICP = $mean( \mathbbm{1}_{{R}_{k,\tau} \in [\hat{q}_{k}^{1-\alpha}, \hat{q}_{k}^{\alpha} ] })$, represents which fraction of the true reserves falls in the prediction intervals of the different reserving methods.
* The Continuous Ranked Probability Score (CRPS) of [36] is a strictly proper scoring rule, which assesses the quality of probabilistic forecasts and rewards the forecaster for honest estimation of the predictive distribution. We use the formulation from [37], given by
\begin{equation*}
CRPS ({R}_{k,\tau}) = \frac{1}{N_{sim}} \sum_{i = 1}^{N_{sim}} \mid \hat{R}_{k,\tau}^{i} - {R}_{k,\tau} \mid - \frac{1}{2N_{sim}^{2}} \sum_{i=1}^{N_{sim}} \sum_{r=1}^{N_{sim}} \mid \hat{R}_{k,\tau}^{i} - \hat{R}_{k,\tau}^{r} \mid ,
\end{equation*}
where a lower score for CRPS represents a more accurate probabilistic forecast.
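The sample-based CRPS above can be computed directly from the simulated reserve values; a minimal sketch:

```python
import numpy as np

def crps_sample(sims, obs):
    """Sample-based CRPS for one claim: mean absolute error of the draws
    against the observation, minus half the mean pairwise spread of the
    draws. Lower is better."""
    sims = np.asarray(sims, dtype=float)
    n = sims.size
    term1 = np.mean(np.abs(sims - obs))
    term2 = np.sum(np.abs(sims[:, None] - sims[None, :])) / (2 * n ** 2)
    return term1 - term2
```

The second term penalizes overly wide predictive distributions, which is what makes the score strictly proper.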
mCube Bettonville Crevecoeur
mean_CRPS $\mathbf{11,148.82}$ 14,166.51 15,405.99
median_CRPS 4,077.85 $\mathbf{2,377.34}$ 4,251.77
PICP_99 $\mathbf{0.60}$ 0.37 0.12
IS_99 101,642.41 $\mathbf{385,729.23}$ 71,808.31
PICP_95 $\mathbf{0.57}$ 0.29 0.11
IS_95 66,168.42 $\mathbf{66,284.85}$ 61,955.80
CRPS, IS, PICP for the RBNS claims of the 3 competing micro-reserving methods on the subset of Allianz bodily injury claims.
From Table <ref>, we observe that the predictive distribution produced by mCube provides a better representation of the true observed reserves than the two other micro-reserving models. mCube obtains the lowest mean CRPS, indicating that its predictive distributions for the RBNS claims are the most accurate. However, the method from [19] obtains the lowest median CRPS, reflecting the fact that it is more suited for the claims with a lower reserve amount. Moreover, the prediction interval coverage probabilities are the highest for mCube, although none of the methods attain the required coverage of 95% or 99%.
We also compare the observed reserves with the mean of the predictive distributions of the methods. To this end, we use the following pointwise accuracy measures, where $R_{k,\tau}$ represents the true observed reserve and $\hat{R}_{k,\tau}$ the mean of the predictive distribution obtained from the methods:
* bias := $\sum_{k}({R}_{k,\tau} - \hat{R}_{k,\tau})$
* Mean Absolute Error (MAE) := $mean( \mid \hat{R}_{k,\tau} - R_{k,\tau} \mid )$
* Root Mean Square Error (RMSE) := $\sqrt{mean( \mid \hat{R}_{k,\tau} - R_{k,\tau} \mid ^2)}$
* Symmetric Mean Absolute Percentage Error (sMAPE):= $mean(200 \times \mid {R}_{k,\tau} - \hat{R}_{k,\tau} \mid / ({R}_{k,\tau} + \hat{R}_{k,\tau}))$
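These pointwise measures translate directly into code; `r_true` and `r_pred` below are assumed to be arrays of observed reserves and predictive means:

```python
import numpy as np

def pointwise_accuracy(r_true, r_pred):
    """Bias, MAE, RMSE and sMAPE between observed and predicted reserves."""
    r_true = np.asarray(r_true, dtype=float)
    r_pred = np.asarray(r_pred, dtype=float)
    abs_err = np.abs(r_pred - r_true)
    return {
        "bias": float(np.sum(r_true - r_pred)),
        "MAE": float(np.mean(abs_err)),
        "RMSE": float(np.sqrt(np.mean(abs_err ** 2))),
        "sMAPE": float(np.mean(200.0 * abs_err / (r_true + r_pred))),
    }
```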
mCube Bettonville Crevecoeur
bias -582.34 2,145.68 $\mathbf{-130.53}$
MAE $\mathbf{16,089.18}$ 24,746.39 21,442.60
RMSE $\mathbf{41,119.29}$ 100,108.94 48,911.23
sMAPE $\mathbf{1.30}$ 1.63 1.59
Pointwise accuracy measures for the individual RBNS reserves on the subset of Allianz bodily injury claims.
From Table <ref>, we observe that mCube produces the best pointwise forecast for most of the metrics under consideration. We remark, however, that the method of [35] has the lowest absolute bias. These findings highlight the superiority of mCube with respect to the other methods under consideration for the data set of Allianz bodily injury claims.
§.§ Best estimate comparison with the chain-ladder and other micro-reserving models
In this section, we compare the performance of mCube to the chain-ladder and other micro-reserving models for predicting best estimate reserves on the same Allianz data set.
To obtain the reserve predicted by mCube, we follow Sections <ref> and <ref>. Given that most claims are reported within one month of their occurrence as shown in Figure <ref>, we use $\hat{\lambda}_{oc,0} = 1$ in Equation (<ref>).
From Figure <ref>, we observe the good performance of mCube on the prediction of the reserves, as the true reserve is near the center of the best estimate distribution. In particular, from Table <ref>, we observe that mCube performs very well on the RBNS reserves, but less well on the IBNR reserves, given that we do not model the number of IBNR claims sufficiently well, as explained in the previous section. The chain-ladder, however, underperforms on this data set, showing the added value of our proposed methodology. The reserve predicted by mCube is 73,916,247, giving a percentage error (PE) of 0.37%, whereas the chain-ladder predicts 62,433,801, giving a percentage error of -15%. Given that the method from [35] does not produce IBNR claims, we choose to simulate reserves of IBNR claims for [19] and [35] using the methodology explained in Section <ref>. These methods produce reserves of 58,104,262 and 78,436,112, giving percentage errors of respectively -21% and 7%.
Best estimate distribution for all claims (RBNS and IBNR)
Database mCube Chain-ladder Bettonville Crevecoeur
RBNS reserve 69,837,157 72,689,539 62,433,801 57,628,135 68,725,695
IBNR reserve 3,808,605 1,226,708 - 476,127 9,710,417
Total reserve 73,645,764 73,916,247 62,433,801 58,104,262 78,436,112
PE 0 0.37% -15% -21% 7%
Observed reserves and mean of the predicted reserves for the subset of bodily injury claims.
In this final section, the mCube reserving model is applied to a real data set to illustrate its performance.
§.§ Data
The data used in this section is a random sample of claims obtained from a European insurer and consists of bodily injury claims occurring in 2009 and later, of which the latest evaluation moment was 31st December 2018. We decided to work on the data set consisting of the claims that are closed between 1st January 2009 and 31st December 2018. To anonymize the payment data, we have multiplied all payments by a constant. The data set consists of 37,527 unique policies, whose total numbers of payments are tabulated in Table <ref>. This table reveals that most of the claims are closed after one or two payments. The list of variables included in the analysis, together with the corresponding pre-processing, is available in Appendix A. Table <ref> shows some summary statistics of the distribution of the time spent in each state. We observe that the average time spent in the later states is around twice the average time spent in the first state, which hints at the difficulty for the insurer in dealing with claims that require multiple payments.
Development Year
Accident Year 0 1 2 3 4 5 6 7 8 9
2009 3630 146 7 2 0 2 0 0 0 1
2010 3310 163 10 0 0 0 0 0 0 $\cdot$
2011 3659 229 7 3 1 0 0 0 $\cdot$ $\cdot$
2012 3565 205 8 2 1 1 0 $\cdot$ $\cdot$ $\cdot$
2013 3173 170 5 1 1 2 $\cdot$ $\cdot$ $\cdot$ $\cdot$
2014 3406 188 6 4 1 $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$
2015 3989 245 12 6 $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$
2016 4390 218 9 $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$
2017 4078 285 $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$
2018 3293 $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$
Claim number triangle for bodily injury claims.
Development Year
Accident Year 0 1 2 3 4 5 6 7 8 9
2009 14442 10910 6451 5332 4403 2944 2468 1978 1677 594
2010 12140 9743 4355 3131 2468 2141 1375 1100 339 $\cdot$
2011 13161 12603 7689 5502 4208 2851 1703 1541 $\cdot$ $\cdot$
2012 12962 11600 6752 5038 3250 1688 484 $\cdot$ $\cdot$ $\cdot$
2013 11486 9097 4996 3159 1750 1589 $\cdot$ $\cdot$ $\cdot$ $\cdot$
2014 11637 9641 4964 3148 1578 $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$
2015 13273 11299 5797 3137 $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$
2016 15036 11006 4054 $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$
2017 14065 8602 $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$
2018 9779 $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$ $\cdot$
Claim amount triangle for bodily injury claims (Thousand Euro).
State Min Median Mean Max IQR Skewness Kurtosis
$S_{1}$ 1.00 1.00 1.67 62.00 0 8.99 116.50
$S_{2}$ 1.00 1.00 1.31 117.00 0 29.27 1,609.20
$S_{3}$ 1.00 1.00 1.21 57.00 0 21.55 668.24
$S_{4}$ 1.00 1.00 1.29 70.00 0 22.09 734.76
$S_{5}$ 1.00 1.00 1.21 16.00 0 8.21 96.75
$S_{6}$ 1.00 1.00 1.33 28.00 0 8.79 101.93
Summary statistics for the reporting delay of claims in each state, with a time discretization of 30 days.
State Min Median Mean Max IQR Skewness Kurtosis
$S_{1}$ -35617.00 1700.00 2959.22 429329.07 2794.58 22.34 1759.73
$S_{2}$ -68938.13 991.08 1737.73 206012.94 2398.28 8.08 238.63
$S_{3}$ -131000.00 1202.43 2601.35 190524.47 2842.64 5.97 129.82
$S_{4}$ -82810.00 1421.00 3438.77 433042.60 3547.35 16.08 623.18
$S_{5}$ -41899.79 1500.00 4269.98 143321.14 4500.00 4.95 43.31
$S_{6}$ -155835.41 1498.68 6702.56 1178100.23 4767.45 18.68 596.60
Summary statistics for the payment distribution per state.
Furthermore, the data was split into three parts: a training set, a calibration set, and a testing set, each containing respectively 70%, 20% and 10% of the data.
Given that for the claims in our multi-state model, a transition to a non-absorbing state occurs whenever there is a payment of at least 200 Euro, we choose to stratify the claims according to their total number of payments before closure, as shown in Table <ref>. Concretely, for all the claims that had a total of 1 payment before they were closed, 70% of the claims were assigned to the training set, 20% to the calibration set and 10% to the testing set. This procedure is repeated for the claims with a total of 2,3,4,…,6 payments. As our goal is to use the calibration and testing sets for the assessment of predictions of our model, we cannot simply include all the available information of the claims in the respective sets, since these claims are closed. Therefore, a time point is selected at random for each claim separately, after which all available information is discarded, hereby rendering the closed claim in some way open again. We will refer to these claims as opened closed claims. In this sense, we have created RBNS claims in the calibration and testing sets, and our aim is to see how well our multi-state model will predict the RBNS reserves of these opened closed claims.
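The stratified 70/20/10 split can be sketched as follows; the rounding of fold sizes within a stratum is our assumption:

```python
import numpy as np

def stratified_split(n_payments, fracs=(0.7, 0.2, 0.1), seed=1):
    """Assign each claim to 'train'/'calib'/'test', stratified by its
    total number of payments before closure."""
    n_payments = np.asarray(n_payments)
    rng = np.random.default_rng(seed)
    labels = np.empty(n_payments.size, dtype=object)
    for level in np.unique(n_payments):
        # shuffle the claims within this stratum, then cut at 70% / 90%
        idx = rng.permutation(np.flatnonzero(n_payments == level))
        c1 = int(round(fracs[0] * idx.size))
        c2 = c1 + int(round(fracs[1] * idx.size))
        labels[idx[:c1]] = "train"
        labels[idx[c1:c2]] = "calib"
        labels[idx[c2:]] = "test"
    return labels
```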
§.§ Study on the calibration set
In this section, we look at the properties of our model on the calibration set. The calibration set is a central element of our modelling procedure, as explained in Section <ref>. Using the time and payment models discussed in Section <ref> as well as the simulation procedure for open claims from Section <ref>, we can obtain for each claim of the calibration set a predictive distribution for its reserve. Next comes the calibration methodology, of which the first step is deciding which summary statistic $W$ will be used. We choose the one that has the highest concordance probability <cit.> with respect to the true observed reserves. Note that these observed reserves are available since all the claims in the calibration set are closed and hence, we know their true total cost. As candidate summary statistics, we consider the mean and all the deciles of the predictive distribution. The table with the concordance probabilities of the summary statistics is shown in Appendix <ref>, where we observe that the summary statistic with the highest concordance probability is the third decile, with a concordance probability of 0.624.
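The concordance probability used to pick $W$ can be estimated by pairwise comparison; a sketch, under the assumption that tied pairs are simply excluded:

```python
from itertools import combinations

def concordance_probability(stat, obs):
    """Fraction of claim pairs where the candidate summary statistic
    orders the two claims the same way as their observed reserves."""
    conc = disc = 0
    for i, j in combinations(range(len(obs)), 2):
        s = (stat[i] - stat[j]) * (obs[i] - obs[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return conc / (conc + disc)
```

The candidate with the highest value (here, the third decile of the predictive distribution) is retained as $W$.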
We set up a small simulation study to check whether the calibration procedure discussed in Section <ref> results in unbiased predictions. To this end, we consider 100 runs; in each one, 50% of the claims from the calibration set are used for the calibration methodology and the other 50% to assess the relative and absolute differences between the true and predicted total reserve of all claims together. To obtain this prediction, we sample one value from each calibrated predictive distribution. Summing up these predictions gives an estimate of the total reserve for all claims in the calibration set together. From Table <ref>, we observe that on average, our calibration procedure produces reserves that are centered around the true observed reserves. Hence, we can infer that on a portfolio level, the calibration procedure produces results that are unbiased.
Min $q_{.25}$ Median Mean $q_{.75}$ Max
Relative differences 0.65 0.86 1.00 1.03 1.15 1.54
Absolute differences -1,485,636 -532,913 4,910 20,462 481,347 1,459,625
Summary statistics for the relative and absolute differences between the predicted and observed reserves in the calibration set.
Besides the total reserve of all claims together, we can also focus on the predictions of the reserves on an individual (per claim) level. A first check is done by determining the percentage of true observed reserves that lies between the 2.5% quantile and the 97.5% quantile of the calibrated predictive distribution. Theoretically, this percentage should be equal to 95%. In Table <ref>, we see that the prediction intervals have good coverage, with on average 91% of the true reserves being contained in the 95% prediction interval. This table also reveals that the prediction intervals are wide, with an average width of 6,457 Euro for the 95% prediction interval. Hence, we can conclude that very costly claims are present. The same conclusions can be drawn for theoretical coverages of 50% and 90%, as can be seen in Table <ref>.
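Empirical coverage and width of the prediction intervals can be checked as follows, with `sim_matrix` a hypothetical (claims × simulations) array of calibrated reserve draws:

```python
import numpy as np

def coverage_and_width(sim_matrix, observed, level=0.95):
    """Fraction of observed reserves inside the per-claim prediction
    interval, and the average interval width."""
    a = (1 - level) / 2
    lo = np.quantile(sim_matrix, a, axis=1)
    hi = np.quantile(sim_matrix, 1 - a, axis=1)
    observed = np.asarray(observed)
    coverage = float(np.mean((observed >= lo) & (observed <= hi)))
    width = float(np.mean(hi - lo))
    return coverage, width
```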
Another way to check the performance of the calibration procedure is to record, for each of the 100 simulations, the median absolute differences (MAD) and median relative differences (MRD) between the predicted and observed reserves on an individual claim level. Furthermore, for each simulation separately, we can look at the MAD and MRD for the 5% highest and 5% lowest observed reserves to assess the quality of the model for the extreme claims. Summary statistics for these differences are given in Table <ref>, where we observe that the average MAD is 394 while the average MRD is 8.78. This shows that on average the predicted reserves are slightly higher than the observed reserves. However, the average MAD and MRD for the highest 5% of observed reserves are respectively -6,123 and 0.11, which shows that these claims are underestimated by the calibration. Similarly, the average MAD and MRD for the lowest 5% of observed reserves are respectively 2,568 and -0.37, showing the difficulty for the model to deal with these claims.
Theoretical coverage
50% 90% 95%
Coverage 54% 87% 91%
Width 760 4,319 6,457
Average coverage and average width (Euro) of the prediction intervals obtained from the calibration procedure.
Min $q_{.25}$ Median Mean $q_{.75}$ Max
MRD 5.01 7.29 8.38 8.78 10.00 15.76
MAD 256 328 382 394 452 560
MRD top 5% 0.08 0.10 0.11 0.11 0.11 0.14
MAD top 5% -6,893 -6,341 -6,166 -6,123 -5,928 -5,246
MRD bottom 5% -0.50 -0.41 -0.37 -0.37 -0.34 -0.29
MAD bottom 5% 2,128 2,462 2,555 2,568 2,665 3,108
Summary statistics for the relative and absolute differences of the individual RBNS claims in the calibration set.
§.§ Study on the test set
After studying the properties of the calibration procedure on the calibration set, we can now look at the predictions of the RBNS reserves on the test set. A prediction for the reserve is obtained by sampling one value from the (non-)calibrated predictive distribution. Table <ref> shows some summary statistics of the true observed reserves, the predicted reserves without calibration and the predicted reserves with calibration. We observe that the mean of the calibrated reserves is closer to the mean of the true observed reserves than the mean of the non-calibrated reserves. However, looking at the range and the interquartile range, we observe that the calibration procedure produces reserves that have less variability and a smaller range than the observed reserves. As a result, extreme claims are not well captured by the proposed methodology.
Figure <ref> shows the best estimate (BE) distribution for the total reserve of all RBNS of the test set after calibration, where the true observed reserve is shown as a vertical line in red and the mean of the BE distribution is shown as a vertical dashed line in blue. As expected, the BE distribution is centered around the true observed reserve. This shows that on a portfolio level, the BE distribution is well estimated by the calibration procedure.
Min Median Mean Max IQR Skewness Kurtosis
Obs -29,584.03 0.00 701.55 48,214.60 800.00 4.67 65.30
NoCalib -8,504.19 784.49 1,217.65 16,885.59 1,897.58 1.94 8.40
Calib -1,286.73 679.02 925.84 16,179.56 680.09 7.25 67.83
Summary statistics for observed, non-calibrated and calibrated individual RBNS reserves in the test set.
Best estimate distribution for RBNS claims in the test set. The true observed reserve is shown as a vertical line in red and the mean of the BE distribution is shown as a vertical dashed line in blue.
§.§ IBNR reserves
We start by computing the yearly IBNR claim counts based on Section <ref>. As explained in this section, the four steps to obtain the predicted number of IBNR claims for a given accident and development year are repeated 100 times. The median values of these IBNR predictions are shown in Table <ref> for each accident year and are used to build the IBNR data set as explained in Appendix B.
2010 2011 2012 2013 2014 2015 2016 2017 2018 Total
0 0 0 0 1 2 5 13 158 179
Predicted yearly IBNR claim counts.
Clearly, most of the IBNR claims come from the last observed accident year, as these are bodily injury claims which are typically reported rather fast. Figure <ref> shows the best estimate distribution for the IBNR reserves obtained by treating the IBNR data set similarly to the test set of the previous section. We observe that the mean of the BE distribution for IBNR claims is around 200,000 Euro, which is represented as a vertical dashed line in blue.
Best estimate distribution for IBNR claims, with the mean indicated by a blue vertical dashed line.
§.§ Comparison with the chain-ladder
We start by comparing the predicted yearly IBNR claim counts using the negative binomial assumption and the IBNR counts given by the bootstrap chain-ladder procedure from [4]. As expected, the predicted yearly claim counts are similar given that the negative binomial assumption produces consistent estimates with the chain-ladder method. However, the 95% basic bootstrap confidence interval for the total number of IBNR claims is $[177,184]$ for the negative binomial procedure and $[180, 183]$ for the bootstrap chain-ladder procedure. Hence, the negative binomial assumption produces more conservative confidence intervals than the bootstrap chain-ladder procedure.
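The basic bootstrap interval referred to here reflects the bootstrap quantiles around the point estimate; a minimal sketch:

```python
import numpy as np

def basic_bootstrap_ci(boot_stats, point_est, level=0.95):
    """Basic (reflected) bootstrap interval:
    [2*hat - q_{1-a/2}, 2*hat - q_{a/2}]."""
    a = 1 - level
    q_lo, q_hi = np.quantile(boot_stats, [a / 2, 1 - a / 2])
    return 2 * point_est - q_hi, 2 * point_est - q_lo
```

Here `boot_stats` would hold the bootstrapped total IBNR counts and `point_est` the count estimated on the original data.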
Model 2010 2011 2012 2013 2014 2015 2016 2017 2018 Total
Neg. binom. 0 0 0 0 1 2 5 13 158 179
Chain-ladder 0 0 0 0 1 2 6 13 158 180
Predicted yearly IBNR claim counts.
Regarding the RBNS reserves on the test set, we can compare the predictions without calibration, the predictions with calibration and the predictions obtained by the chain-ladder. Table <ref> shows the observed total RBNS reserve, the predictive mean of the BE distribution with and without calibration, as well as the prediction by the chain-ladder. For ease of comparison, we also divide the different predictions by the true observed reserves.
Obs Reserve NoCalib Calib chain-ladder
Amount 2,698,159 9,361,273 2,723,594 6,103,197
Ratio 1.00 3.47 1.01 2.26
Mean of the BE distribution for the RBNS predictions of the multi-state model and the chain-ladder.
Table <ref> clearly shows the need for and benefits of the calibration methodology. The chain-ladder predictions are about 2.26 times higher than the true observed reserves, whereas the calibrated predictions of the multi-state model are only 1.01 times higher than the true reserves and are thus much closer. Notice that the predicted reserves without calibration are higher than those predicted by the chain-ladder and are about 3.47 times higher than the true reserves. Finally, Figure <ref> shows the best estimate distribution of the IBNR and RBNS reserves together, with a mean of around 2.9 million Euro.
Best estimate distribution for all incurred claims
§ CONCLUSION
In this article we have presented a multinomial multi-state micro-level (mCube) model to estimate the reserves of IBNR and RBNS claims. We present a semi-parametric model of the payment distribution, taking into account claim-specific information. On a portfolio level, the proposed model is unbiased and produces a best estimate distribution that is centered around the true reserve. Moreover, the estimates on an individual level are very accurate in our real data analysis.
Future studies could replace the multinomial models used for the time and payment processes with more flexible machine learning models to obtain a higher predictive power.
§ ACKNOWLEDGEMENTS
The authors gratefully acknowledge the financial support from the Allianz Research Chair Prescriptive business analytics in insurance at KU Leuven.
§.§ A. Variables in the data set
The original data set obtained from the European insurer was further pre-processed by the following steps:
* As discussed in Section <ref>, a transition occurs in the multi-state model when a recorded payment is higher than "minPayVal" (or lower than -"minPayVal" in the case of a reimbursement). Therefore, payments are lumped together when necessary.
* Several variables were created based on the time at which the claim occurred, the time of reporting of the claim, and the times at which payments were made. These variables include "fastRep", an indicator of whether the claim was reported less than 30 days after it occurred, and "finYear", the financial year in which a payment happens. Other variables that are also created are "deltRep", representing the reporting delay, "inStateTime", which is the time a claim spends in a specific state, and "inProcTime", which is the time a claim spends in the entire multi-state process. The variable "delt1PayTimeTrans" is the time since the previous payment. All these time variables are expressed as the equivalent number of 30-day periods. Moreover, a maximum of 15 periods has been set, such that if a claim exceeds 15 periods, it is either forced to move to the next state or out of the multi-state process if it has reached the maximum number of transitions.
* Variables are created from the payment amount: the variable "delt0Pay" is the amount of the payment in the current state, the variable "delt1Pay" is the payment amount of the previous state, and the variable "cumDelt1Pay" is the cumulative payment amount from all the previous states of the claim.
§.§ B. Building the IBNR data set
Once the yearly number of IBNR claims has been estimated and the monthly reporting delay evaluated, an IBNR multi-state data set must be constructed and passed through the multi-state process, starting from the reporting state. The variables from Table <ref> are constructed in the following way: the reporting delay (deltRep) depends on the accident year and the estimated reporting month; fastRep is 0, as the claim is IBNR; procTime and stateTime are both 1; deltPay, deltPayTime and cumDeltPay are NA, as there has been no previous payment.
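An IBNR claim record initialised as described above might look like this; the function name and dictionary layout are illustrative, not the paper's code:

```python
import math

def make_ibnr_record(accident_year, reporting_delay):
    """One simulated IBNR claim entering the multi-state process at the
    reporting state; `reporting_delay` is the sampled delay in 30-day periods."""
    return {
        "accidentYear": accident_year,
        "deltRep": reporting_delay,
        "fastRep": 0,           # IBNR claims are by definition not fast-reported
        "procTime": 1,
        "stateTime": 1,
        "deltPay": math.nan,    # no previous payment yet
        "deltPayTime": math.nan,
        "cumDeltPay": math.nan,
    }
```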
§.§ C. Algorithm for simulating open claim reserves trajectories
This section presents the algorithm for simulating claims reserves, as illustrated and explained in Section <ref>. For this algorithm, we need the following elements:
* timeMods: The fitted time models. This should be a list of length "maxMod".
* payMods: The fitted payment models. This should be a list of length "maxMod".
* testData: The test data on which to simulate reserves.
* splits: Splitting points for the numeric variables that were binned. This should also contain the levels for the time variables that are categorized.
* fixedTimeMax: Maximum amount of time a claim is allowed to stay in a state. This should be of length "maxMod".
* nSims: Number of trajectories to be simulated for each claim.
* npmax: Maximum number of transitions we allow a claim to make. This is used to capture claims with longer developments.
We note that when a claim has stayed for too long in a state, as defined by "fixedTimeMax", we modify the estimated discrete-time hazard functions to
\begin{align}
\label{timeprocmultinommodif}
\tilde{\lambda}_{j,j+1}(t \mid \mathbf{x_{k,t}}) &= \hat{\lambda}_{j,j+1}(t \mid \mathbf{x_{k,t}}) + \frac{ 1 - \hat{\lambda}_{j,j+1}(t \mid \mathbf{x_{k,t}}) - \hat{\lambda}_{j,tp}(t \mid \mathbf{x_{k,t}}) - \hat{\lambda}_{j,tn}(t \mid \mathbf{x_{k,t}})}{3} \nonumber,\\
\tilde{\lambda}_{j,tp}(t \mid \mathbf{x_{k,t}}) &= \hat{\lambda}_{j,tp}(t \mid \mathbf{x_{k,t}}) + \frac{ 1 - \hat{\lambda}_{j,j+1}(t \mid \mathbf{x_{k,t}}) - \hat{\lambda}_{j,tp}(t \mid \mathbf{x_{k,t}}) - \hat{\lambda}_{j,tn}(t \mid \mathbf{x_{k,t}})}{3} \nonumber,\\
\tilde{\lambda}_{j,tn}(t \mid \mathbf{x_{k,t}}) &= \hat{\lambda}_{j,tn}(t \mid \mathbf{x_{k,t}}) + \frac{ 1 - \hat{\lambda}_{j,j+1}(t \mid \mathbf{x_{k,t}}) - \hat{\lambda}_{j,tp}(t \mid \mathbf{x_{k,t}}) - \hat{\lambda}_{j,tn}(t \mid \mathbf{x_{k,t}})}{3} \nonumber,\\
\tilde{\lambda}_{j,j}(t \mid \mathbf{x_{k,t}}) &= 0.
\end{align}
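The modification above moves the self-transition mass, $1 - \hat{\lambda}_{j,j+1} - \hat{\lambda}_{j,tp} - \hat{\lambda}_{j,tn}$, equally onto the three exit transitions and zeroes the self-transition. A minimal sketch, with a hypothetical function name and the hazards passed as plain floats:

```python
def modified_hazards(lam_next, lam_tp, lam_tn):
    """Redistribute the self-transition mass when a claim has stayed too long:
    the self-transition probability 1 - lam_next - lam_tp - lam_tn is split
    equally over the three exits, and the self-transition itself is set to 0."""
    stay = 1.0 - lam_next - lam_tp - lam_tn
    bump = stay / 3.0
    return {"next": lam_next + bump, "tp": lam_tp + bump,
            "tn": lam_tn + bump, "stay": 0.0}
```

Note that the modified probabilities still sum to one, so they remain a valid transition distribution.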
Similarly, when a claim has reached state "npmax"-1, we need to modify the transition probabilities in the following way:
\begin{align}
\label{timeprocmultinommodif2}
\tilde{\tilde{\lambda}}_{npmax-1,npmax}(t \mid \mathbf{x_{k,t}}) &= 0 \nonumber,\\
\tilde{\tilde{\lambda}}_{npmax-1,tp}(t \mid \mathbf{x_{k,t}}) &= \hat{\lambda}_{npmax-1,tp}(t \mid \mathbf{x_{k,t}}) + \hat{\lambda}_{npmax-1,npmax}(t \mid \mathbf{x_{k,t}}) \nonumber,\\
\tilde{\tilde{\lambda}}_{npmax-1,tn}(t \mid \mathbf{x_{k,t}}) &= \hat{\lambda}_{npmax-1,tn}(t \mid \mathbf{x_{k,t}}) \nonumber,\\
\tilde{\tilde{\lambda}}_{npmax-1,npmax-1}(t \mid \mathbf{x_{k,t}}) &= \hat{\lambda}_{npmax-1,npmax-1}(t \mid \mathbf{x_{k,t}}).
\end{align}
If a claim stays too long in state "npmax"-1, we can also modify (<ref>) using (<ref>).
§.§ D. Selecting hyper-parameters
During the pre-processing stage, we need to decide on the values of the hyper-parameters "nMinLev" and "nGroups" that are necessary for binning the continuous predictors. We suggest setting "nMinLev" to at least 30 for statistical significance of the estimated parameters, and setting "nGroups" between 5 and 15. We also have to define "minPayVal", the minimum amount paid for a non-terminal payment to be considered. Intermediate payments that are lower in absolute value than this amount will be aggregated and considered as a single payment. We suggest discussing with the business what a meaningful payment amount is. We also define "perLen", the number of days in one time period. We recommend choosing "perLen" so that it represents monthly, quarterly or yearly information.
For binning the continuous variable "inStateTime", representing the time a claim spends in a specific state, the minimum number of observations in each category should be "nMinTimeLev". Note that this hyper-parameter can differ from "nMinLev"; we choose a value of 30 for statistical significance of the estimated parameters. Other hyper-parameters include "nMaxLevInState", the maximum number of time periods a claim is allowed to stay in the same state, and "nMaxLevInProc", the maximum number of periods a claim is allowed to stay in the whole multi-state process. These two hyper-parameters should be chosen so that only a small percentage, say 1%, of the claims stay in the state or in the process longer than these values. In order to have valid statistical models, we need a sufficient number of observations. Therefore, we define "nMinModT" as the minimum number of observations required to fit a multinomial model with predictors in the time process, and we define "nMinNoModT" similarly for the case of no predictors. Note that if the chosen value for "nMinModT" is smaller than the number of predictors multiplied by "nTimesParamT", it is replaced by this product. However, it is quite likely that the required number of observations is not met for the time model in state $S_{npmax-1}$, since claims with a large number of payments are rare. Therefore, we construct "maxMod" unique time models for states $S_1$, …, $S_{maxMod}$. For the states $S_{maxMod+1}$, …, $S_{npmax-1}$, the model of state $S_{maxMod}$ is reused. This implies that the model of state $S_{maxMod}$ is based on payments that happened from the $maxMod^{th}$ payment onward for each claim.
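As an illustration of the minimum-count constraint on binning, the following sketch starts from equal-frequency (quantile) splits for "nGroups" bins and greedily merges bins until each holds at least "nMinLev" observations. The greedy merging strategy is an assumption for illustration; the paper's actual binning procedure may differ.

```python
def bin_splits(values, n_groups, n_min_lev):
    """Quantile-based splits for a continuous predictor, merged until every
    bin contains at least n_min_lev observations (a hypothetical sketch)."""
    xs = sorted(values)
    n = len(xs)
    # candidate split points at empirical quantiles
    splits = [xs[(i * n) // n_groups] for i in range(1, n_groups)]

    def counts(sp):
        edges = [float("-inf")] + sp + [float("inf")]
        return [sum(lo < x <= hi for x in xs) for lo, hi in zip(edges, edges[1:])]

    # greedily drop a split bordering the smallest bin until all bins are big enough
    while splits and min(counts(splits)) < n_min_lev:
        c = counts(splits)
        i = c.index(min(c))
        splits.pop(min(i, len(splits) - 1))
    return splits
```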
During the modelling of the payment process, we have to choose "nBins", which equals $L+1$ and thus represents the number of bins in the splicing procedure. Besides the number of bins, the splitting points $\mathbf{b}$ themselves need to be determined as well. Finally, we define "nMinModP" as the minimum number of observations required to fit a multinomial model with predictors in the payment process, and we define "nMinNoModP" similarly for the case of no predictors. Once again, when the chosen value for "nMinModP" is smaller than the number of predictors multiplied by "nTimesParamP", it is replaced by this product.
When simulating trajectories for open claims, we need to choose a value for "nSims", "fixedTimeMax", and "npmax". The value for "fixedTimeMax" should make business sense, and should be such that only a small percentage of the claims stay in a state for longer than this, hence we set it to 24. The value for "npmax" should be large enough to capture claims with longer developments, hence we set it to 50.
The final phase of the modelling is the calibration procedure, where we need to choose $N_{grid}$, the number of points in the quantization grid and $N_c$, the number of claims in the calibration set. Since this calibration set only consists of closed claims, we formulate $N_c$ to be a percentage of the total number of closed claims $N_{cl}$. For smoother quantile regression curves in the calibration procedure, a bootstrap estimate can be constructed with the hyper-parameter $N_B$ representing the number of bootstrap samples.
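The bootstrap smoothing mentioned above can be sketched as follows, assuming the statistic of interest is an empirical quantile of the calibration set; the estimator and function names are illustrative only, not the paper's implementation.

```python
import random

def bootstrap_quantile(xs, q, n_b, seed=0):
    """Bootstrap estimate of a quantile: average the empirical q-quantile
    over n_b resamples (N_B in the text).  A sketch only."""
    rng = random.Random(seed)
    n = len(xs)

    def emp_q(sample):
        s = sorted(sample)
        return s[min(int(q * n), n - 1)]

    return sum(emp_q([xs[rng.randrange(n)] for _ in range(n)])
               for _ in range(n_b)) / n_b
```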
Taking into account the time necessary to fit our multi-state process, a cross validation strategy for hyper-parameter tuning proved to be extremely time consuming. We therefore propose to set values for these hyper-parameters based on actuarial experience and observed results.
Phase | Hyper-parameter values
---|---
Pre-processing | nMinLev = 30; nGroups = 5; minPayVal = 200; perLen = 30
Time process | nMinTimeLev = 30; nMaxLevInState = 12; nMaxLevInProc = 24; maxMod = 6; nMinModT = 500; nMinNoModT = 50; nTimesParamsT = 5; $npmax$ = 30
Payment process | nBins = 4; nMinModP = 500; nMinNoModP = 50; nTimesParamsP = 5
Open claims simulations | $N_{sim}=100$; fixedTimeMax = 24; npmax = 50
Hyper-parameters for the multinomial multi-state model.
§.§ E. Splitting points for past payment information
The splitting points obtained after binning the continuous predictors deltPay and cumDeltPay are shown in Tables <ref> and <ref>, respectively.
$S_{1}$ | $S_{2}$ | $S_{3}$ | $S_{4}$ | $S_{5+}$
---|---|---|---|---
586.20 | -4,045.03 | -657.02 | 0.00 | 0.00
1,247.06 | -1,169.81 | 1,179.08 | 965.04 | 611.74
2,584.57 | 1,669.23 | 3,615.08 | 3,576.90 | 2,596.18
7,955.07 | 3,805.08 | 4,775.55 | 5,070.10 | 3,501.23
Splitting points for the previous payment (deltPay) in each data set.
$S_{1}$ | $S_{2}$ | $S_{3}$ | $S_{4}$ | $S_{5+}$
---|---|---|---|---
80.00 | 24.62 | 146.26 | 249.78 | 761.56
380.70 | 258.52 | 1,284.04 | 4,651.18 | 4,630.25
1,416.29 | 634.86 | 5,048.92 | 8,626.82 | 10,292.00
2,631.85 | 7,285.79 | 6,954.32 | 12,356.65 | 18,322.42
Splitting points for the cumulative previous payment (cumDeltPay) in each data set.
§.§ F. Partial dependence plots of the time models
In this section, we present the partial dependence plots of the time models for states $S_{3}$, $S_{4}$ and $S_{5+}$. From Figures <ref>, <ref> and <ref>, we observe marginal effects of the covariates similar to those in state $S_{2}$.
Partial dependence plots representing the marginal effect on transition probabilities of the time spent in the process (a), the previous payment size (b), the time spent in the state (c) and the cumulative previous payment size (d) for claims in $S_{3}$.
Partial dependence plots representing the marginal effect on transition probabilities of the time spent in the process (a), the previous payment size (b), the time spent in the state (c) and the cumulative previous payment size (d) for claims in $S_{4}$.
Partial dependence plots representing the marginal effect on transition probabilities of the time spent in the process (a), the previous payment size (b), the time spent in the state (c) and the cumulative previous payment size (d) for claims in $S_{5+}$.
§.§ G. Partial dependence plots of the payment models
This section shows the marginal effect of predictors in the payment models for states $S_{3}$, $S_{4}$ and $S_{5+}$ using partial dependence plots. From Figures <ref>, <ref> and <ref>, we observe marginal effects of the covariates similar to those in state $S_{2}$.
Partial dependence plots representing the marginal effect on probabilities to belong to a payment bin of the time spent in the process (a), the previous payment size (b), the time spent in the state (c) and the cumulative previous payment size (d) for claims in $S_{3}$.
Partial dependence plots representing the marginal effect on probabilities to belong to a payment bin of the time spent in the process (a), the previous payment size (b), the time spent in the state (c) and the cumulative previous payment size (d) for claims in $S_{4}$.
Partial dependence plots representing the marginal effect on probabilities to belong to a payment bin of the time spent in the process (a), the previous payment size (b), the time spent in the state (c) and the cumulative previous payment size (d) for claims in $S_{5+}$.
§.§ H. Claims adjustment estimation
This section presents the estimated state-specific claims adjustment factors and the estimated robust regression parameters. From Table <ref>, we see that the highest estimated parameter is $0.065$ in $S_{5}$, while the lowest is $-0.017$. Similarly, from Table <ref> we see that some of the adjustment factors are negative, going as low as -1.72% in $S_{1}$, while the highest adjustment factor is 6.74% in $S_{5}$.
 | $S_{1}$ | $S_{2}$ | $S_{3}$ | $S_{4}$ | $S_{5}$ | $S_{6}$
---|---|---|---|---|---|---
$\beta_{1}$ | -0.017 | 0.010 | 0.031 | 0.046 | 0.065 | 0.023
Estimated robust GLM parameter for the finYear variable in each state.
 | $S_{1}$ | $S_{2}$ | $S_{3}$ | $S_{4}$ | $S_{5}$ | $S_{6}$
---|---|---|---|---|---|---
AF | -1.72 | 0.96 | 3.18 | 4.74 | 6.74 | 2.34
Estimated state-specific adjustment factors in percentage for each state.
§.§ I. Concordance probabilities for the calibration set
We present the table of concordance probabilities used to select the best summary statistic for the calibration methodology of Section <ref>. From Table <ref>, the summary statistic with the highest concordance probability (0.624) is clearly the 3rd decile (q30).
Statistic | Mean | q10 | q20 | q30 | q40 | q50 | q60 | q70 | q80 | q90
---|---|---|---|---|---|---|---|---|---|---
ConcProb | 0.573 | 0.576 | 0.609 | 0.624 | 0.607 | 0.596 | 0.578 | 0.565 | 0.552 | 0.545
Concordance probabilities of summary statistics for the calibration set.
# An Empirical Study on the Bugs Found while Reusing Pre-trained Natural
Language Processing Models
Rangeet Pan, Sumon Biswas, Mohna Chakraborty, Breno Dantas Cruz, and Hridesh Rajan
Iowa State University, Ames, Iowa, USA
###### Abstract.
In Natural Language Processing (NLP), reusing pre-trained models instead of
training from scratch has gained a lot of popularity; however, these models
are mostly black boxes, extremely large, and building a model from scratch
often requires significant resources. To ease this burden, models trained on
large corpora are made available, and developers reuse them for different problems.
In contrast, developers mostly build their models from scratch for traditional
deep learning (DL)-related problems. By doing so, they have full control over
the choice of algorithms, data-processing, model structure, tuning
hyperparameters, etc. In NLP, by contrast, due to the reuse of pre-trained
models, developers have very little to no control over such design decisions.
They can either apply tuning or transfer learning on pre-trained models to
meet their requirements. Also, NLP models and their
corresponding datasets are significantly larger than the traditional DL models
and require heavy computation. Such reasons often lead to bugs in the system
while reusing the pre-trained models. While bugs in traditional DL software
have been intensively studied, the extensive reuse and black-box structure of
NLP models motivate us to ask: What types of bugs occur while reusing NLP
models? What are the root causes of those bugs? How do these bugs affect the
system? To answer these questions, we studied the bugs
reported while reusing 11 popular NLP models. We mined 9,214 issues from
their respective GitHub repositories and identified 984 bugs. We created a
taxonomy with the bug types, root causes, and impacts. Our observations led to
several key findings, including limited access to model internals resulting in
lack of robustness, lack of input validation leading to the propagation of the
algorithmic and data bias, and high resource consumption causing more crashes
and memory-out-of-bound errors. Our observations suggest several bug
patterns, which would greatly facilitate further research and development for
reducing bugs in the pre-trained models as well as the code that reuses them.
empirical study, bugs, NLP, deep learning, model reuse
## 1\. Introduction
“Hey, Alexa! How is the weather outside?”, “Hey, Google! Play my favorite
song”, or “Hey, Siri! Set the alarm at 6 pm?” these are a few examples in
which Natural Language Processing (NLP) has been used in our daily lives. In
each one, human speech is transformed into machine-level representations.
Figure 1. An example of NLP reuse-related bug (GitHub Issue, 2022a)
The pre-trained networks analyze the inputs and generate results. Over the
last decade, there has been extensive work on processing natural language and
an increasing uptick in usage of these models in both industries (Devlin et
al., 2019; Radford et al., 2019; Lample and Conneau, 2019) and academia
(Vaswani et al., 2017, 2018; Wolf et al., 2020).
Figure 2. Data collection steps for studying bugs in NLP pre-trained models
However, NLP models require more training data (Bender et al., 2021) and
extensive computation (Strubell et al., 2019), compared to other deep learning
(DL)-based models. The additional computation cost requirements led to the
development of several pre-trained NLP models (Devlin et al., 2019; Radford et
al., 2019; Lample and Conneau, 2019) that can be reused with minimal changes.
The extensive reuse of NLP pre-trained models helped many developers implement
their desired applications efficiently (Lan et al., 2019; Lewis et al., 2020;
Keskar et al., 2019; Liu et al., 2019), but the black-box nature of the model
often leads to software issues and bugs. For example, Figure 1 shows an issue
in which a developer was fine-tuning a multi-lingual Bart (Facebook Research,
2022a) model to translate sentences from Spanish to English (1). However, for
short sentences, the model produces output in the Arabic language (2 and 3).
Since developers often do not know the internal processes of the pre-trained
model, localizing these issues is challenging.
There is a vast body of works on understanding the bugs, their root causes,
and the challenges faced by DL (Islam et al., 2020, 2019; Zhang et al., 2018)
and machine learning (Thung et al., 2012; Humbatova et al., 2020) developers.
These works facilitate bug detection, localization (Wardat et al., 2021a, b;
Nikanjam et al., 2021), and suggest repair strategies (Zhang et al., 2021b) to
practitioners. However, these studies are mostly done to characterize bugs
developers face while building models from scratch, not while reusing pre-
trained models. We found that in the dataset shared by Islam et al. (Islam et
al., 2019), only 16.18% and 33.57% of the DL bugs from Stack Overflow and
GitHub, respectively, are related to reusability. Moreover, the reusability in
the prior work’s dataset is limited to image-related models, e.g., ResNet,
ImageNet, and is not related to NLP. In contrast to the prior works, we study
how the bugs introduced while reusing NLP models differ from traditional DL
bugs and characterize them by type, root cause, and impact to facilitate
further studies. In this context, we consider a bug
to be caused by reusing the pre-trained models if (1) the bugs were present in
the reused model and propagated to the code while reusing, and/or (2) bugs
found while adapting the pre-trained model by altering code.
In this study, we curated the 11 most popular NLP pre-trained models from a
model hub, Huggingface Transformers (HuggingFace, 2022b). Then we mined 9,214
issues from the GitHub repositories of these pre-trained models. We further
filtered for the issues that report bugs. After manual checking, we identified
954 issues that contained 984 bugs introduced while
reusing NLP models. We adapt the classification scheme from a prior work
(Islam et al., 2019) to characterize deep learning bugs and update it based on
the open-coding scheme. We answer the following research questions:
* •
RQ1 (Bug Types): What kind of bugs are prevalent while reusing NLP models, and
how do they differ from DL software?
* •
RQ2 (Root Causes): What are the root causes of the bugs?
* –
How do the updates in the pre-trained models introduce bugs?
* –
Which NL configuration parameters are causing the models to be bug-prone?
* •
RQ3 (Impacts): How do the bugs impact the NLP software?
We found several frequent bug patterns that include the wrong update of the
models, API misuse, architectural issues, or domain-specific confusion. The
key findings of the paper are as follows:
1. (1)
Reused NLP models suffer more initialization and memory-related bugs than
traditional DL models due to their larger size. We identified that the
average number of parameters and dataset size for the largest DL models
(ResNet1100, VGG16) are 80 million and 100GB, respectively, whereas in NLP
(CTRL, T5) they are 7 billion and 650GB, respectively, which is significantly
larger than for the DL models.
2. (2)
Traditional DL models are rarely reused in production, and their bugs focus
more on data. In NLP, by contrast, the focus is on setting correct parameters
(32.3% of the total bugs), as users often do not have access to the training
data of the reused model.
3. (3)
Due to the black-box nature of the reuse, NLP developers often do not clearly
understand the underlying architecture of these models. We found that such
bugs account for 14.9% of the bugs in NLP, compared with 6.3% in DL.
Outline: In §2, we discuss data collection and labeling methodology. §3, §4,
and §5 describe the frequent bug types, the root causes of the bugs, and the
common impacts, respectively. §6 describes the related works, §7 explains the
threats to validity, and §8 concludes.
## 2\. Study Design
Here, we describe the methodology to collect the dataset and build the
classification scheme. First, we discuss the data collection steps. Then, we
discuss our classification scheme and labeling approach.
### 2.1. Data Collection
Figure 2 shows the data collection steps for selecting the pre-trained models,
mining issues from corresponding GitHub repositories, and identifying bugs
through manual investigation.
Model Selection. First, we identify the popular open-source NLP pre-trained
models and their corresponding issues. We start by identifying the list of
available pre-trained models. For that, we refer to Huggingface Transformers
(Wolf et al., 2020; HuggingFace, 2022b), an open-source framework that
maintains an NLP model hub (HuggingFace, 2022a). It contains pre-trained
models for various NLP tasks such as text classification, translation, etc.
Huggingface Transformers is supported by the most popular deep learning
libraries (i.e., Tensorflow, Pytorch). Furthermore, several major
organizations, including Google, Facebook, Microsoft, Allen Institute, and
others, use this framework. However, the issues reported in the Huggingface
Transformers are related to problems faced by developers that correspond to
the bugs in their platform rather than bugs encountered while using pre-
trained models. To focus our study on the bugs that developers face while
reusing the pre-trained model, we study the issues reported in the
repositories corresponding to each such model. As of January 2022, Huggingface
Transformers listed 93 pre-trained models in their documentation (HuggingFace,
2022b). Among these models, we found that 36 pre-trained models are open-
source and have a GitHub repository. Since we aim to identify the bugs
developers face while reusing these models, we focus on the issues logged by
developers in the GitHub. Some of these repositories have very few issues
since they do not maintain or update the repository based on discussions and
reported bugs. Hence, we filter out the repositories having $\leq 50$ issues
(open + closed issues). Based on these criteria, we identified 24
repositories. Finally, to focus on the quality repositories (popularity among
developers) and keep our manual analysis manageable, we focus on the
repositories with at least 2000 stars (Borges et al., 2016). Thus, we found
repositories that correspond to 11 NLP pre-trained models. The number of
issues for these models ranges from 76 to 3,214, with a total of 9,214. We further
filtered them to identify bugs in these models. The details of each such model
are given in Table 1.
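The selection criteria above (open source, more than 50 issues, at least 2,000 stars) can be expressed as a simple filter. The dict layout for a repository is an assumption for illustration, not the authors' tooling.

```python
def select_repositories(repos, min_issues=50, min_stars=2000):
    """Keep open-source repositories with more than min_issues issues
    (open + closed) and at least min_stars stars."""
    return [r for r in repos
            if r["open_source"] and r["issues"] > min_issues
            and r["stars"] >= min_stars]
```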
Model Name | Owner | #Issue | #Mined | #Bugs | Description | Parameters
---|---|---|---|---|---|---
ALBERT (Google Research, 2022a; Lan et al., 2019) | GR | 171 | 64 | 53 | A Lite BERT is trained on multiple datasets, with fewer parameters than Bert. | 223M
Bart (Facebook Research, 2022a; Lewis et al., 2020) | FR | 3214 | 156 | 119 | BART is based on a standard transformer architecture used for neural machine translation. It has both an encoder and a decoder. | 406M
Bert (Google Research, 2022b; Devlin et al., 2019) | GR | 1091 | 299 | 216 | Bidirectional Encoder Representations from Transformers is a transformer architecture-based model. It has been trained over a lot of unlabeled textual data. | 340M
CTRL (Salesforce, 2022; Keskar et al., 2019) | Salesforce | 76 | 33 | 27 | Conditional Transformer Language Model is trained on 140GB of text data. | 1.6B
GPT-2 (Open AI, 2022; Radford et al., 2019) | Open AI | 232 | 89 | 70 | Generative Pre-trained Transformer-2 is trained on millions of webpages. | 1.5B
GPTNeo (EleutherAI, 2022; Gao et al., 2020) | EleutherAI | 138 | 78 | 44 | GPTNeo is a transformer based model trained on 825GiB English text corpus. | 2.7B
RoBERTa (Facebook Research, 2022b; Liu et al., 2019) | Facebook | 3214 | 190 | 155 | Robustly Optimized BERT Approach is an improved version of Bert with more data. | 355M
T5 (Google Research, 2022c; Raffel et al., 2020) | Google AI | 374 | 147 | 150 | Text-to-Text Transfer Transformer model is a large neural network model, trained on a mixture of unlabeled text and labeled data from several downstream tasks. | 11B
Transformer-XL (Kimiyoung, 2022; Dai et al., 2019) | Google CMU | 128 | 35 | 19 | Transformer-XL model, based on Transformer architecture, allows language understanding without disturbing the temporal coherence unlike traditional transformer. | 257M
XLM (Facebook Research, 2022c; Lample and Conneau, 2019) | FR | 320 | 106 | 84 | XLM is an improved version of BERT that achieves better performance in classification and translation-related tasks. | 500M
XLNET (Zihangdai, 2022; Yang et al., 2019) | Google CMU | 256 | 80 | 47 | XLNET is a generalized version of Transformer-XL. It is based on a large bidirectional transformer that uses larger data. It outperformed Bert on 20 language tasks. | 340M
Total | 9214 | 1277 | 984 | |
Table 1. Dataset description – Pre-trained NLP models. GR: Google Research,
FR: Facebook Research.
Issue Mining. Once we fixed the models under study, we mined the issues
related to bugs and bug fixes. To mine such issues, we utilized the techniques applied
by Garcia et al. (Garcia et al., 2020). In this approach, the issues with
these keywords in the title or body are selected: fix, defect, error, bug,
issue, mistake, incorrect, fault, and flaw. In addition, the issues labeled as
a bug by the respective repository are also selected. On top of that, we
remove the issues that are labeled as wontfix, which are primarily not bugs,
and there is no fix needed for those issues. Finally, we identified 1,277
issues from GitHub out of the 9,214 issues present in all eleven repositories.
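The keyword filter described above can be sketched as follows; the issue dict layout is an assumption for illustration.

```python
BUG_KEYWORDS = ("fix", "defect", "error", "bug", "issue", "mistake",
                "incorrect", "fault", "flaw")

def is_candidate(issue):
    """Keyword-based selection following the mining approach above: keep
    issues whose title or body mentions a bug keyword, or that carry a
    'bug' label, and drop issues labeled 'wontfix'."""
    labels = {label.lower() for label in issue.get("labels", [])}
    if "wontfix" in labels:
        return False
    text = (issue.get("title", "") + " " + issue.get("body", "")).lower()
    return "bug" in labels or any(k in text for k in BUG_KEYWORDS)
```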
Bug Identification. Once we mined all the bug-related issues based on keyword
searching, two raters (the first and second authors) manually went through
the issues to mark them as “bug” or “not bug”. Discrepancies while labeling
were resolved through discussion between the raters. Finally, 954 bug-related
issues were identified from the 1,277 issues. Within these 954 issues, 30
contain more than one bug in a single issue; for such scenarios, we counted
each bug separately.
### 2.2. Classification Strategy
Here, we classify the bugs into their types (bug-type), what is causing the
bug (root cause), and how the bug affects the system (impact). We define the
classification scheme based on prior works (Islam et al., 2019; Zhang et al.,
2018; Beizer, 1990) and follow the open coding scheme for each category.
First, we conducted a pilot study to identify the need for new categories;
any new category was approved through discussion among the raters. For
impacts and bug types, we found that the classification scheme
proposed by Islam et al. (Islam et al., 2019) and Zhang et al. (Zhang et al.,
2018) are sufficient to address bugs found in NLP-based systems. For root
cause, we adapted the classification scheme proposed by Islam et al. and added
new root cause kinds based on the open coding approach. We have added five
main kinds and several sub-kinds within that. The entire classification scheme
has been shown in Figure 5.
Labeling. We label the bug-related issues based on the classification scheme
fixed in the previous step. The first and second authors independently labeled
the bugs. To measure the raters’ agreement, we computed Cohen’s kappa
coefficient (Viera et al., 2005) for each 10% of the issues. After two
rounds, the Cohen’s kappa coefficients for all three categories were
$>0.85$, which indicates almost perfect agreement. The first and second authors then labeled
independently and resolved any disagreement based on discussions.
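Cohen's kappa, used above to quantify inter-rater agreement, compares the observed agreement with the agreement expected by chance. A minimal sketch:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' label lists of equal length:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's marginal frequencies."""
    assert len(a) == len(b) and a
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    cats = set(a) | set(b)
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1.0 - p_e)
```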
## 3\. What are the prevalent bug types while reusing NLP models?
Here, we discuss the type of bugs found while reusing NLP models.
API Bug. API-related issues are mostly due to the changes in the top-level
APIs that cause the models to behave aberrantly. These types of bugs can occur
both from misuse by the developer of the client code and from compatibility
mismatches among the APIs.
Data Bug. When bugs are caused while loading the training data for the NLP
models, we refer to them as data bugs.
Structural Bug (SB). This genre of bugs is associated with the incorrect
definition of the model structure. Bugs associated with the structure of the
model are further classified into five categories:
1. (1)
Control and Sequence Bug: These bugs are related to the control sequence or
control flow of the code.
2. (2)
Data Flow Bug: While data bugs are related to the input data, data flow bugs
appear when data passes through the model structure. For instance, when the
output shape of the data after applying a layer operation does not match the
input shape required by the subsequent layer, we classify the bug as a
data-flow bug.
3. (3)
Initialization Bug: Reusing the NLP models involves the initialization of
different hyper-parameters. These parameters help define the vocabulary size,
the sequence length of the input words, the number of sentences in a single
input to the system (batch size), etc. Often wrong initialization can cause
the NLP model to crash, produce unexpected outcomes, etc.
4. (4)
Logic Bug: Bugs can occur due to logical errors in the model and the code
structure.
5. (5)
Processing Bug: These bugs are related to the wrong choice of the algorithm.
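The data-flow bugs described above can be illustrated with a small sketch: a hypothetical checker that walks a chain of `(name, in_dim, out_dim)` layer descriptions and flags any output width that does not match the next layer's expected input. The layer names and dimensions below are invented for illustration:

```python
def check_layer_chain(layers):
    """Flag data-flow bugs: each layer's output width must match the
    next layer's expected input width.
    `layers` is a list of (name, in_dim, out_dim) tuples."""
    problems = []
    for (name_a, _, out_a), (name_b, in_b, _) in zip(layers, layers[1:]):
        if out_a != in_b:
            problems.append(f"{name_a} outputs {out_a} but {name_b} expects {in_b}")
    return problems

# A hypothetical model where the dense layer expects 128 features
# but the embedding layer emits 300 -- a typical data-flow bug.
model = [("embedding", 10000, 300), ("dense", 128, 64), ("output", 64, 2)]
print(check_layer_chain(model))
# → ['embedding outputs 300 but dense expects 128']
```

Frameworks surface such mismatches only at run time, which is why they show up as crashes rather than as definition-time errors.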
Non-Model Structural Bug (NMSB). Any bugs (except data and API bugs) initiated
outside the model are classified as non-model structural bugs. The
sub-categories are the same as for structural bugs; however, we only found
initialization-related bugs in this category.
### 3.1. What are the key differences between DL and NLP bug types?
We found that the bug-type categories proposed by (Islam et al., 2019) are
sufficient to represent the bugs seen while reusing the NLP pre-trained
models. However, the distribution, characteristics, and implications are
significantly different. Here, we discuss each such category and extend the
discussion to highlight the commonalities and variabilities with DL bugs.
Figure 3. Classification of bug types
(a) Distribution of bug types in NLP (labels $<$5.8% are hidden)
(b) Distribution of bug types in DL (labels $<$3.0% are hidden)
Figure 4. Comparison between NLP and DL bug types (Islam et al., 2019) and
their distributions
In Figures 4(a) and 4(b), we show the distribution of bugs in NLP-based
software and in traditional DL-based software, respectively. We found that
API bugs (26.1%), initialization bugs (both structural and non-structural)
(31.7%), and processing bugs (14.9%) together account for more than 70% of
the bugs in our dataset. In traditional DL, by contrast, the most prevalent
bugs are API bugs (32.6%), data bugs (17.0%), and dataflow bugs (15.8%).
This suggests that both NLP and traditional DL software are API intensive. We
found that versioning (16.02%), API incompatibility (23.44%), and execution
environment setup (15.63%) are the most common causes of API bugs. A uniform
API definition could help inform developers when reusing different models. For
example, even though Huggingface provides a platform to reuse curated models,
it does not provide a unified API specification applicable across different
models. Building such unified specifications could help developers avoid
these issues.
We also found that the dependency on the data for NLP-based software and DL-
based software is significantly different. First, a pre-trained NLP model
requires little to no additional data for further usage. Second, retraining
these NLP models is not as data-intensive as training a traditional DL model
from scratch. Here, the majority of the obstacles developers face concern
choosing the correct model for a problem (NLP: 14.9%, DL: 6.2%) and correctly
reusing the model (NLP: 31.7%, DL: 3.5%); such issues rarely occur in
traditional DL-based software. Also, logic bugs (8.3%) are not as common
as in traditional DL software (18.1%). The primary reason is that developers
rarely alter a pre-trained model's logic. Next, we discuss some of the key
differences.
### 3.2. How are Initialization Bugs in NLP Different From Traditional DL
Bugs?
One key part of building a model in both DL and NLP-based software is the
choice of the parameters, setting up the execution environment, etc. We found
that the majority of such problems are due to the massive size of these pre-
trained models. Finding 1: Larger NLP models introduce more initialization
bugs. Initialization bugs are prevalent in all models, with a heavy presence
in GPT-2 and CTRL. Investigating further, we found that 25.47% of the bugs in
this category originated from issues in setting up the execution
environment. In traditional DL models, initialization bugs are rare
(3.29%). Since the average DL model is significantly smaller than NLP models,
a wrong environment setup does not hurt as much as it does in NLP. We found
that for the largest DL models (ResNet1100,
VGG16), there are $\sim$80 million parameters, and the size is $\sim$100GB, on
average. In contrast, for the largest NLP models (CTRL, T5), the average
number of parameters and size are $\sim$7 billion and $\sim$650GB,
respectively, significantly more than the DL models. In Table 1, the models with their
corresponding sizes are shown. To identify the model size, we looked into
various versions of the models and listed them according to the total number
of parameters. Due to the large size of the models, loading them onto a CPU or
GPU requires substantial resources. These bugs cause abrupt termination of the
execution (66.67%) and memory issues (13.65%). Also, Strubell et al. (Strubell et al.,
2019) identified the resource consumption of the models and their impact on
the environment. They found that training a language model emits 6x more
$CO_{2}e$ than a car running on fuel for a year. So, while using the models,
the size should also be taken into account.
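A back-of-envelope sketch of why model size matters: merely holding the weights of a multi-billion-parameter model in 32-bit floats takes tens of gigabytes, before any gradients or optimizer state. The parameter counts below are illustrative only, not exact figures for any particular model:

```python
def model_memory_gb(num_params, bytes_per_param=4):
    """Rough memory needed just to hold the weights (fp32 by default).
    Training needs several times more for gradients and optimizer state."""
    return num_params * bytes_per_param / 1e9

# Illustrative, assumed sizes (not measurements of specific models):
for name, params in [("~80M-param vision model", 80e6),
                     ("~7B-param language model", 7e9)]:
    print(f"{name}: {model_memory_gb(params):.2f} GB of weights")
# → ~80M-param vision model: 0.32 GB of weights
# → ~7B-param language model: 28.00 GB of weights
```

Even this lower bound explains why a setup that loads a mid-sized DL model comfortably can crash outright when pointed at a large pre-trained NLP model.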
Implications. The reuse of pre-trained NLP models is very similar to the
software reuse before the notion of modular programming. With finer reuse
(Parnas, 1976, 1972), parts of the software can be reused and replaced.
Recently, for image-based classification problems, Pan and Rajan (Pan and
Rajan, 2020, 2022) have proposed an approach to decompose a monolithic model
into modules to enable reusability and replaceability of the decomposed
modules. Similarly, a more modular architecture can be proposed for the NLP-
based systems so that instead of reusing the entire model, developers can use
the parts of it.
### 3.3. How are processing bugs in NLP different from traditional DL bugs?
Compared to DL, NLP-based software suffers from bugs related to choosing the
correct algorithm and parameters to retrain the model. Finding 2: 14.9% of
the bugs in reusing models are processing bugs. For instance,
previously, we discussed an issue (Figure 1) where the developers could not
finetune the model for a language translation task (Spanish to English). The
translation works perfectly when the sentences are long. However, when the
sentences are short, the model predicts in languages other than those
specified while finetuning. While the model may have knowledge of a third
language, that knowledge is unnecessary for the developer's need.
Without knowing the underlying architecture, fixing such bugs is not always
possible. In fact, the situation is akin to what has been proposed by Parnas
(Parnas, 1976) regarding the program family. In that work, Parnas argued that
software could be reused either as a complete program (that produces output
given input) or as smaller components (intermediate stages that may or may not
produce output given input). When the complete program is reused, the traits
of the ancestors are passed to the descendant program without the developer's
knowledge. Some of these traits are important, but not all. This is exactly
what happened in the previous example: while reusing, the descendant software
receives knowledge of a third language besides English and Spanish, which is
unnecessary in this context. Such bugs are not that common in traditional DL, as reuse is
not as common as in NLP-based software. For traditional DL models, such bugs
only account for 6.3% of the total bugs. While investigating, we found that
such bugs in traditional DL are due to confusion while choosing the correct
algorithm to train the model. However, for NLP, it is mostly the black-box
nature of the reuse.
Implications. To avoid such scenarios, Parnas (Parnas, 1976) has suggested
that one needs to reuse the intermediate stages of the software and reuse only
the required information. While the notion of complete vs. intermediate
programs has not yet been established for these models, work by Pan and Rajan
(Pan and Rajan, 2020, 2022) has shown how these black-box models can be seen
as a composition of smaller black-boxes. Understanding these smaller black-
boxes might help researchers capture the model’s intermediate representation
and reuse that instead of the complete models.
## 4\. Common Root Causes
Figure 5. Classification of root causes of bugs in NLP pre-trained models
In this section, we discuss the common root causes of the bugs. First, we
discuss all the root causes that we identified, then highlight the most
prevalent causes. In Figure 5, we illustrate all classifications of the root
causes, and the blue boxes represent that these bugs are also present in the
traditional DL models.
Data Faults. If the bugs are caused by the incompatibility of the data with
the model, we categorize them as data faults. Bugs in this category can be
broadly classified into:
1. (1)
Type: These bugs occur due to wrong or mismatched data type.
2. (2)
Shape: Dimension mismatch of input data leads to these bugs.
3. (3)
Size: Often, the model expects a predetermined size of the input, which can
cause these bugs.
4. (4)
Unaligned Tensor: Incompatibility in the data flow of the tensors can cause
bugs.
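As a sketch of how such data faults might be caught early, a hypothetical pre-flight validator could check token types and per-example sizes before a batch reaches the model. The expected length, expected type, and example data below are all invented for illustration:

```python
def validate_batch(batch, expected_len, expected_type=int):
    """Guard against common data faults before feeding a model:
    wrong element type, wrong per-example size."""
    errors = []
    for i, example in enumerate(batch):
        if not all(isinstance(tok, expected_type) for tok in example):
            errors.append(f"example {i}: wrong token type")
        if len(example) != expected_len:
            errors.append(f"example {i}: length {len(example)} != {expected_len}")
    return errors

# Hypothetical token-id batches: the second one has a size fault and
# a type fault (a string where an int id is expected).
good = [[1, 2, 3, 0], [4, 5, 6, 7]]
bad = [[1, 2, 3], [4, "5", 6, 7]]
print(validate_batch(good, 4))  # → []
print(validate_batch(bad, 4))
```

Running such checks at the boundary turns a cryptic crash deep inside the model into an actionable error message.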
Algorithmic Error. Bugs can occur due to logical or conditional errors, such
as missing conditions in weight updates, division by zero, etc. The root
causes are sub-categorized into:
1. (1)
Logic: These bugs occur due to a missing concept or flawed logic.
2. (2)
Computational Error: Such bugs appear when existing computation produces
incorrect results.
3. (3)
Missing Conditions: These bugs occur when the pre- or post-conditions of a
computation need to be added or fixed.
Updating the Pre-trained Model. Reusing the pre-trained models often requires
satisfying model-specific requirements. Violating them can introduce bugs in
the system. For example,
1. (1)
Migration: While migrating pre-trained models to accommodate different tasks
or datasets, the model requirement does not match with other environments and
causes bugs.
2. (2)
Checkpoint: In DL, creating checkpoints is a common practice during training;
it lets one retrieve intermediate results during extensive training runs.
However, we found that problems often occur while saving these checkpoints or
accessing existing checkpoints of the trained model.
3. (3)
Re-train: Bugs introduced while re-training the models on new training data.
4. (4)
Refine Weights: A pre-trained model can offer ways to refine its existing
weights, and doing so can introduce bugs.
5. (5)
Fine Tune: Bugs caused while tuning the parameters.
Architectural Incompatibility. The prebuilt NLP models can have implicit
architectural requirements that pose different types of incompatibility with
the user's system. Moreover, the models are resource-hungry and sometimes have
not gone through exhaustive testing on different combinations of computational
architecture, which causes these bugs. The types of such bugs are:
1. (1)
Memory: The pre-trained models require a large amount of memory, which often causes bugs.
2. (2)
Concurrency: Many models allow multiple training threads for speedup, but
strict concurrency assumptions can cause bugs.
3. (3)
Distributed Learning: These models can sometimes be run on distributed
architectures, e.g., multiple CPU cores or GPUs, which can lead to bugs.
Incorrect Execution Environment. Pre-trained models have been made available
within different DL packages, e.g., Tensorflow, Pytorch. The subtypes in this
category are:
1. (1)
Cross-platform: These bugs can occur because of specific requirements of the
underlying platform, e.g., operating system.
2. (2)
Cross-framework: Incompatible versions of Python or required packages such as
Tensorflow, Pytorch can be the root cause of this type of bug.
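A minimal sketch of guarding against such cross-framework mismatches: comparing an installed framework version against the version range a pre-trained model is assumed to support. The version bounds and version strings here are hypothetical:

```python
def parse_version(v):
    """Parse a 'major.minor.patch' string into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

def check_requirement(installed, minimum, maximum=None):
    """True if `installed` satisfies minimum <= version < maximum."""
    iv = parse_version(installed)
    if iv < parse_version(minimum):
        return False
    if maximum is not None and iv >= parse_version(maximum):
        return False
    return True

# Hypothetical constraint: a model released against framework 2.x
# would break on an old 1.15 install or on a future 3.0.
print(check_requirement("2.4.1", "2.0.0", "3.0.0"))   # → True
print(check_requirement("1.15.0", "2.0.0", "3.0.0"))  # → False
```

Failing fast on such a check, before the model is loaded, replaces an opaque runtime crash with a clear environment error.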
API Misuse. These bugs can be caused by:
1. (1)
Absence of Inter-API Compatibility: NLP programs have many API dependencies,
e.g., Scikit-Learn and Tensorflow. Often these APIs are not compatible when
performing a task jointly.
2. (2)
API Change: These bugs are related to API versioning.
Confusion in NL-specific Specifications. A vast majority of the bugs are due
to missing vocabulary, incorrect sequence lengths, or a wrong choice of
tokenization mechanism while preprocessing the data and re-training the model.
We further divided such root causes into the following categories:
1. (1)
Vocabulary: Bugs can occur because of inappropriate vocabulary or accessing
the vocabulary in specific tasks.
2. (2)
Tokenization: When re-training the models on a new dataset, the tokens should
be preprocessed in accordance with the pre-trained model versions.
3. (3)
Sentence: Often, the semantics and syntax of the sentences differ with the
customization of NLP tasks, which leads to bugs.
4. (4)
Batch Size: Because of heavy resource computations and lengthy training
process, the batch size needs frequent updating, which results in NL-specific
bugs.
5. (5)
Language: Since different human languages have a variety of structures,
specific preprocessing and task customization are needed to reuse the pre-
trained models. In this process, developers can face NL-related bugs.
6. (6)
Maximum Length: These models have a hard threshold of the supported maximum
length of sentences. Developers often need to tune that, which causes bugs.
Programming Error. These bugs occur due to any other programming issues such
as:
1. (1)
Syntax Error: Bugs might occur due to syntax errors such as missing
punctuation, parentheses, etc.
2. (2)
Coding: Other programming errors, such as wrong loop breaking, missing
corner cases, and misused variable names, cause bugs in the NLP models.
(a) Distribution of root causes in NLP (labels $<$7.8% are hidden)
(b) Distribution of root cause types in DL (labels $<$7.1% are hidden)
Figure 6. Comparison between the root causes of NLP and DL bugs (Islam et al.,
2019) and their distributions
We found that the root causes are significantly different and very specific to
the NLP-based software. In Figure 6(a), we show the high-level distribution of
the different root causes. We also show the distribution of root causes that
are prevalent in traditional DL-based software in Figure 6(b) based on bugs
reported by (Islam et al., 2019). Comparing both figures, we can observe that
the types and the distributions of root causes are significantly different.
For instance, DL models are more prone to bugs when APIs change. However,
since NLP models are pre-trained against a specific API version, users tend
not to change that version, resulting in fewer such bugs. Also, most bugs in
NLP systems arise from updating the model by retraining, refining weights,
etc., which is not common in DL-based software. This section focuses on the
main root causes of bugs in NLP-based models, which are not present in the
traditional DL.
### 4.1. How does updating the pre-trained models introduce bugs?
Approximately one-third of the bugs in reusing pre-trained models are related
to updating models. Such circumstances are rare in traditional DL; prior work
does not even define a classification category representing their
root causes. In this context, updating a pre-trained model can be done by
migrating the model to another system, setting up checkpoints to store
intermediate training results, retraining the model with either additional or
new data, refining the weight, fine-tuning the model, etc., to match the
user’s intended functionality with the one provided by the model.
Finding 3: A majority of the bugs (32.3%) are caused by updating the pre-
trained models.
In an issue reported by a developer (GitHub Issue, 2022b), re-training a T5
model with a custom dataset produced unexpected output, generating prefixes
in German or Spanish even though the model was trained for an English
dataset. This was due to insufficient training. Further
investigating, we found several sub-categories of such bugs, e.g., migrating
models and model-related artifacts, fine-tuning, imposing checkpoints,
refining models’ weight and bias, and re-training models with custom datasets
and parameters. As discussed previously, this phenomenon is unique to the NLP
model reusability as it is not very common to reuse traditional DL models.
Most traditional DL models are built from scratch. For instance, only 3.5% of
the models curated (out of 6368 GitHub repositories) by Gonzalez et al.
(Gonzalez et al., 2020) are related to reusing pre-trained models. We verified
reuse by identifying the presence of a pre-trained model in the list of
import statements and/or a compiled model being loaded in the code. We found
that most bugs introduced during re-training are due to migrating models,
re-training the model, and fine-tuning the models; we discuss each cause in
the paragraphs below.
#### 4.1.1. Migration
Finding 4: 11.17% of the bugs are introduced while migrating the pre-trained
models.
A majority of the bugs are generated while loading the pre-trained models and
other associated components. The most common ones are missing files and
incompatibility between the model and the input. For instance, an issue
(Bert, 2022) in Bert reports a conversion error that occurred due to a
mismatch between the input requirements of the pre-trained model and the
available APIs, e.g., Tensorflow.
Implications. These migration issues are very similar to the software
component migration (Plakidas et al., 2018; Kum et al., 2008; Fleurey et al.,
2007; Choi et al., 2004; Phan et al., 2017) problems, where due to ever-
changing platforms, paradigms, and techniques, transferring software from one
infrastructure to another may cause bugs. However, approaches to resolving
such issues (Plakidas et al., 2018; Kum et al., 2008; Fleurey et al., 2007)
are limited to the traditional software and currently are not applicable to
both DL and NLP-based software. Also, in SE, there have been works on program
fragments and linking (Cardelli, 1997). In this particular work (Cardelli,
1997), Cardelli has proposed the notion of the separate compilation of the
program fragments and linking them together to form a complete program. The
components involved in the migration tasks, e.g., the model, input data, API,
can also be compared as program fragments, where each fragment cannot
individually be compiled or type-checked. The type-checking only occurs when
all the components form the complete program. If the type-checking fails while
linking the fragments, then migration-related bugs occur. However, if we
could enable separate compilation of the program fragments, we could identify
whether each fragment can safely be linked to other components or fragments.
Also, we could validate whether certain fragments can be safely replaced with
other fragments while migrating to a different environment. We believe that
both research directions could be interesting avenues for the SE-PL-DL
community to explore.
#### 4.1.2. Retraining
Finding 5: 9.14% of all the bugs are related to retraining.
The pre-trained models can be reused (1) without any modification, (2) by re-
training with changing parameters, or (3) by re-training with an additional
dataset. In this section, we discuss the second and third approaches of
reusing pre-trained models. Re-training without new data can be achieved when
a model $M$ is trained on a dataset $D$ and has been further trained with a
different set of initialization parameters. Prior research (Song and
Raghunathan, 2020; Zanella-Béguelin et al., 2020) has found that retraining
increases the chance of information leakage, as the dataset does not change
during the process. So, if an adversary obtains certain information about the
dataset, the NLP model can easily be attacked by perturbing the words (Zhang
et al., 2021a).
Retraining by addition of dataset is also referred to as incremental training
(Syed et al., 1999) (Wu et al., 2019) or continuous learning (Collobert and
Weston, 2008). Here, a model $M$ is trained with a dataset $D$, and re-trained
with new data $D^{\prime}$ to create a model $M^{\prime}$. For instance, an
issue (BERT, 2022c) logged by developers discussed that, while using a Bert
model on an example dataset, the training accuracy is very low, even though
the prediction accuracy is very high (close to 1) on the testing dataset. To
fix the issue, the model requires more training. Since the pre-trained model
already had a certain amount of knowledge, it was sufficient to predict the
examples in the testing dataset, which is substantially smaller than the
training dataset. However, it was not sufficient for predicting examples from
the training dataset.
Implications. Developers are often not aware of the differences between
different types of re-training. The focus is more on accuracy than on other
aspects, e.g., fairness and data leakage. Updating the APIs with the
consequences of different re-training can help developers make better-informed
decisions.
#### 4.1.3. Fine Tuning
Developers often re-structure the pre-trained models to fit their
requirements. For instance, in an issue reported in Bert (BERT, 2022a), the
developer modified the model through steps such as pruning the model and
changing the parameters. However, it was noticed that fine-tuning can have a
negative effect in the presence of spelling mistakes. A single mistake, from
“could” to “cud”, changed the prediction result as well as the
attention on the words. Such incidents could also be used as backdoor attacks
to the model, where an attacker can knowingly change a single word in such a
way that the final outcome of the model has also been changed (Chen et al.,
2021a; Yang et al., 2021).
### 4.2. What are the most bug-prone NL-specific parameters?
Figure 7. Example of the wrong specification (BERT, 2022a)
A majority of the bugs (21.4%) are caused by using a wrong parameter while
reusing the NL models. Most such cases arise from the black-box nature of
these models: developers tune these parameters without knowing the internals
of the model. Moreover, such practices often go beyond the accuracy of the
model and impact its robustness, fairness, and other properties. Here, we
discuss the NL-specific parameters and other specifications that developers
can alter, e.g., tokenization, language, and vocabulary, and we describe how
different choices impact the model’s behavior. For instance, Figure 7 shows
an example where the developer is
having trouble reusing the Bert model to perform a certain task in the
Devanagari language. There is a concept of compound letters in this particular
language, where two vowels can be combined to form another letter (as
suggested by the developer in the issue). Since such knowledge is not present
in the reused model, the tokenizer only works on the first vowel of the
compound letter. Such bugs are related to the wrong specification of the
upstream and the downstream software. Here, we discuss such types of root
causes in detail.
Finding 6: Wrong NL batch size and sequence length can affect the robustness
of the NL software.
#### 4.2.1. Batch Size
Among all the NL-related parameter settings, setting up the correct Batch Size
is the cause for most of the bugs (21.76% of bugs in this category (4.65%
overall)). Batch size controls the flow of the input to the model while
training. Every model is shipped with a default value for this parameter.
However, to fit the model to a tailored problem, one might need to alter the
value of this parameter. The value can be either increased or decreased based
on the need and the available resources; both options have their pros and
cons.
Decreasing the Batch Size. 36.96% of the bugs caused by the batch size end up
as memory issues, and another 34.78% halt the program abruptly. We
found that all the bugs related to the memory issue have a common fix:
decreasing the batch size to accommodate the program with limited resources.
However, developers are unaware that, while decreasing the batch size will
reduce the memory consumption and make the learning process faster, it might
also decrease the robustness. If the NLP model is related to a safety-critical
system or dataset with sensitive information, then decreasing the batch size
will jeopardize the system’s safety. NLP, as well as most DL architectures,
use a stochastic gradient-based approach for learning. With a fixed batch
size, the gradient computation approximates the loss between two input
batches. If a developer decreases the batch size significantly, the computed
approximation will have high variance, which drives the model to learn
faster. A small batch size often helps skip some local minima and contributes
to moving in the right direction (McCandlish et al., 2018). However, if the batch size is
extremely small, then two things can happen. First, if the dataset has a bias
towards a particular word, then sample bias could be created, and the model
will try to remember the word. If an adversary changes the word to a different
word, then it will decrease the overall robustness of the model (Galloway et
al., 2019). For instance, suppose there is a sample bias toward the word
“dog” in a dataset. If the sentence “Dog is barking” is classified as
negative and the adversary only replaces “barking” with “eating”, then the
new sentence will also be classified as negative, which is incorrect. Here,
due to the small batch size, the model starts to remember the words
associated with the sentiment and predicts based on the presence of such
words, without validating other sections of the sentence. Second, if the batch size is very small, the
model might never converge and will never halt, and thus, it will not reach
the desired accuracy, which will make the NLP system vulnerable to adversarial
attacks.
Increasing the Batch Size. While decreasing the batch size will decrease
robustness, increasing it can also have adverse effects. If the batch size is
too big, the approximation between two input batches becomes too coarse,
which hinders learning and requires more iterations to converge. A too-large
batch size is also the primary reason for out-of-memory errors.
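The batch-size/variance trade-off discussed above can be simulated with a small sketch: treating each per-example "gradient" as a unit-variance random draw, the variance of the batch-mean estimate shrinks roughly as 1/batch_size, so small batches yield much noisier steps. This is an idealized stand-in for a real stochastic gradient, not a measurement of any actual model:

```python
import random

def gradient_estimate_variance(batch_size, trials=2000, seed=0):
    """Variance of a batch-mean estimator of a noisy per-example
    'gradient' (modeled as a unit-variance Gaussian).  Stands in for
    the noise of a stochastic-gradient step at a given batch size."""
    rng = random.Random(seed)
    estimates = [
        sum(rng.gauss(0.0, 1.0) for _ in range(batch_size)) / batch_size
        for _ in range(trials)
    ]
    mean = sum(estimates) / trials
    return sum((e - mean) ** 2 for e in estimates) / trials

small = gradient_estimate_variance(4)    # theory: ~1/4
large = gradient_estimate_variance(64)   # theory: ~1/64
print(f"batch=4 variance: {small:.4f}, batch=64 variance: {large:.4f}")
```

The 16x gap between the two estimates mirrors the 1/batch_size scaling, which is why shrinking the batch to fit memory quietly changes the training dynamics.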
Implications. There is no single optimum batch size that makes the model more
robust. Runtime verification could be performed on the gradient approximation
to identify such issues. This could be done either at the API level, e.g., in
the TensorFlow and PyTorch APIs, or by dynamically analyzing the gradient
approximation. If the value exceeds a certain threshold, developers could be
informed or the program could be halted.
#### 4.2.2. Sequence Length.
Figure 8. Solution to a max length related bug (Jacob Devlin, 2022)
16.20% (3.46% overall) of NL-specific parameter-related bugs are due to
setting an incorrect sequence length, which can affect the robustness of the system. The
model creators commonly set up the sequence or max length. This parameter
limits the length of the sentence that can be processed at a time. For
instance, Bert and RoBERTa have a maximum allowable sequence length of 512.
While this constraint can certainly help the model perform better, it can have
adverse effects. If the average length of the sentences in a dataset is more
than the max length, then the model may not learn the reference, which the
adversary can utilize to break functionalities. For instance, as shown in
Figure 8, the sentence “The man went to the store and bought a gallon of milk”
will be split between two parts, 1) “the man went to the store”, and 2) “and
bought a gallon of milk” based on the restriction imposed by the sequence
length (for illustrative purpose, we take sequence length as 6). Due to the
constraint on the parameter, the meaning of the sentence is lost, and an
adversary can change a single word. For example, if the word “store” is
changed to “mars”, the first part of the sentence can still appear valid. But
without the imposed constraint, the sentence would not be split, and its
semantics would be preserved. In the same figure, the author of the post, a
co-author of Bert, suggested that a data split can help avoid such problems.
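The splitting behavior in this example can be sketched as naive whole-word windowing under a hard maximum sequence length (using 6, as in the illustration above):

```python
def split_by_max_length(sentence, max_len):
    """Naive whole-word windowing under a hard max sequence length,
    mimicking how an over-long input gets chopped before the model."""
    words = sentence.split()
    return [" ".join(words[i:i + max_len]) for i in range(0, len(words), max_len)]

sentence = "The man went to the store and bought a gallon of milk"
print(split_by_max_length(sentence, 6))
# → ['The man went to the store', 'and bought a gallon of milk']
```

Each window is grammatical on its own, which is exactly what lets an adversary perturb one half without the other half contradicting it.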
Implications. There are works to auto-tune hyper-parameters in the DL (Deng et
al., 2013; Ilievski et al., 2017) and SE (Fu and Menzies, 2017; Arcuri and
Fraser, 2011) domains. For instance, AutoML (He et al., 2021) uses neural
architecture search (NAS) for parameter optimization. A similar architecture
could be used for the NL domain. While optimizing the parameters, besides
accuracy, other non-functional properties should also be taken into account.
In SE, works on search-based systems (Arcuri and Fraser, 2013; Agrawal et al.,
2018) have proposed different techniques to identify the near-optimum value
for the hyper-parameters. Such systems could be used to tune the hyper-
parameters in NL-specific software.
#### 4.2.3. Language.
Finding 7: Improper language specification can propagate bias in NLP pre-
trained models.
Figure 9. Natural language specification bug (BERT, 2022b)
16.67% (3.56% overall) of NL-parameter-related bugs are due to the wrong use
of natural-language specifications while reusing the models. For instance,
Figure 9 shows an issue that a developer faced while
reusing the multilingual Bert for the Chinese language. Since Chinese does
not use whitespace to separate words, the multilingual pre-trained model
often gets confused while tokenizing. The pre-trained model is trained on an
English dataset, and the same pipeline has been used to create the embedding
of the Chinese language. However, the issue occurs due to the mismatch between the
specifications of the two natural languages. While these issues can cause
wrong translations by introducing whitespace, they can often invoke gender
bias, too. For example, words in English are not gender-specific, whereas
languages like Spanish, French, Hindi, etc., have gender-specific words. For
instance, in Spanish, the word “doctor” is either “doctor” (male) or
“doctora” (female) based on the actor in the sentence. If a model is built to
translate sentences from English to Spanish, then when gender-specific words
are encountered, the model has to rely on the context of the surrounding
words. For instance, “My friend is a doctor” can be translated either as
1) “Mi amiga es doctora” (female) or 2) “Mi amigo es doctor” (male)
(Johnson, 2020). However, due to the presence of gender bias in the English
dataset, the system predicts the second option, the male version.
Implications. Recently, ML models are accused of propagating bias or
unfairness in the prediction (Galhotra et al., 2017; Aggarwal et al., 2019;
Udeshi et al., 2018). The NLP models also exhibit such bias because of their
nature and reuse scenario. Based on the above finding, we think a formal
specification should be incorporated while building such models. This is akin
to introducing contracts in software development. If we compare the source
language as the subtype of the target language, then a specification could be
built around the multilingual translation that will validate the operation
beforehand. The source language should have all the target language
characteristics and more. For instance, if one of the meta-variable on which
the contract can be invoked is gender, then the value corresponding to the
English language will be gender-neutral, whereas it will be gender-specific
for Spanish. Having such specifications can warn the developers about
information loss.
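As an illustrative sketch (not part of the study), such a gender contract could be prototyped as a pre-translation check. The lexicon and function name below are hypothetical assumptions, not a real resource:

```python
# Hypothetical sketch of a "translation contract" for English -> Spanish.
# Before invoking the translation model, flag source words that are
# gender-neutral in English but gender-specific in the target language.
# The tiny lexicon below is an illustrative assumption, not a real resource.

GENDERED_IN_SPANISH = {
    "doctor": ("doctor", "doctora"),
    "friend": ("amigo", "amiga"),
}

def check_translation_contract(sentence):
    """Return (word, gendered target forms) pairs that may lose information."""
    violations = []
    for word in sentence.lower().split():
        token = word.strip(".,!?")
        if token in GENDERED_IN_SPANISH:
            violations.append((token, GENDERED_IN_SPANISH[token]))
    return violations

# "My friend is a doctor": both "friend" and "doctor" trigger a warning,
# so the developer is alerted before the model silently picks one gender.
for word, forms in check_translation_contract("My friend is a doctor"):
    print(f"'{word}' is gender-neutral in English but gendered in Spanish: {forms}")
```

A real contract would draw on a proper bilingual lexicon and morphological analysis, but even this simple check makes the potential information loss explicit.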
Finding 8: Bugs can occur due to incorrect semantic preservation.
#### 4.2.4. Tokenization.
24.07% (5.15% overall) of parameter-related bugs occur because of wrong
tokenization, mostly ending up crashing the system (68.2%). In NLP-based
systems, tokenization is performed to preserve the semantics of the input
dataset while dismantling it into smaller units, where a unit can be a word, a
sub-word, or a character.
Figure 10. Example of tokenization-related bug (BERT, 2022b)
For instance, we show an example in Figure 10, where a developer builds a
model to translate German sentences into English sentences. A model trained
with the English language has been reused. If an unseen word occurs in the
German language, a wrong token representation can lose the semantics of the
input language. For instance, “Hallo” in German means “Hi”. If the pre-trained
model is reused to translate “Hallo”, then the output of the tokenization step
could be “Hall”, “##o”. Surprisingly, there is a word “Hall” in the English
vocabulary. In this scenario, the semantics of the word “Hallo” is not
preserved.
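This failure mode can be reproduced with a minimal WordPiece-style greedy longest-match tokenizer. The toy English vocabulary below is an assumption for illustration, not the actual BERT vocabulary:

```python
# Minimal sketch of WordPiece-style greedy longest-match tokenization,
# illustrating how an English vocabulary mishandles the unseen German
# word "Hallo". The tiny vocabulary below is an illustrative assumption.

VOCAB = {"hall", "hi", "hello", "##o", "##lo", "[UNK]"}  # toy English vocab

def wordpiece_tokenize(word, vocab=VOCAB):
    """Greedily split `word` into the longest subwords present in `vocab`."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while end > start:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub  # continuation pieces carry the '##' prefix
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]  # no matching subword at all
        tokens.append(piece)
        start = end
    return tokens

print(wordpiece_tokenize("hallo"))  # ['hall', '##o'] — "Hallo" loses its meaning
```

Because "hall" happens to exist in the English vocabulary, the German word is silently split into pieces that carry the wrong semantics instead of being flagged as unknown.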
Implications. In SE and PL, a vast body of work (Ouni et al., 2012; Chlipala,
2007; Zhao et al., 2012) has been devoted to preserving the semantics of code
during code translation. Also, a proof-checking-based approach can be
implemented to validate the semantics of the source and the target language.
Recently, Lagouvardos et al. (Lagouvardos et al., 2020) have proposed a tensor
type to validate tensor-based operations in DL. Such a type system can help
the NLP domain formally prove preservation and progress.
The rest of the bugs in this category (20.5%) are caused by an incorrect
choice of vocabulary length and wrong sentences in the dataset.
## 5\. Frequent Impacts
In this RQ, we study the common effects of the bugs while reusing NLP models.
We found that the categories denoted by the prior work (Islam et al., 2019)
suffice to represent the bugs in NLP. However, the distributions are not the
same. For example, incorrect functionality (DL: 9.18%) and memory-out-of-bound
(DL: 0.82%) errors are more prevalent in NLP (16.0% and 6.4%, respectively)
than in traditional deep learning software. Bugs that abruptly halt the
program (68.3%) or cause bad performance (8.1%) are less frequent in NLP.
First, we discuss each type of impact and highlight the variabilities.
Figure 11. Classification of impacts
Bad Performance. Developers often find that the model is not performing
adequately. Such bugs are classified into this category.
Crash. When a program exits with an error, the effect of the bugs is
classified under this category.
Data Corruption. If the output data has been changed unexpectedly, then data
corruption occurs.
Hang. If a program does not return output for a stipulated time and keeps
running, then it generally enters the hanging state.
Incorrect Functionality. Often, due to a bug in the NLP system, the output of
the program is different from the expected behavior.
Memory Out of Bound. Often, a program in NLP halts due to the unavailability
of memory.
Figure 12. Distribution of impacts (labels $\leq$0.7% are hidden)
Finding 9: Reusing a pre-trained model helps to reduce performance related
bugs.
Compared to the traditional DL software, developers prefer to reuse the pre-
trained model in the NLP domain. One of the prominent reasons is to reuse the
knowledge from the huge corpus utilized to train these models. The culture of
reusing instead of building every solution has helped to reduce the
performance-related issues significantly (in DL: 13.8%, in NLP: 8.1%). Out of
all the root causes, updating the models (47.5%) and changing the NL-specific
parameters (31.25%) are the most prevalent reasons for bad performance.
## 6\. Related Work
There is a vast body of works in DL bug study (Islam et al., 2019, 2020; Thung
et al., 2012; Zhang et al., 2018; Garcia et al., 2020; Humbatova et al., 2020;
Wardat et al., 2021b; Zhang et al., 2021b; Nikanjam et al., 2021; Schoop et
al., 2021; Chakraborty, 2021; Liu et al., 2021). Here, we discuss the closest
works.
Thung et al. (Thung et al., 2012) have studied three machine learning systems,
Apache Mahout, Lucene, and OpenNLP. Bug types and their severity have been
identified for these systems. Though this dataset has a system related to NLP,
they did not focus on NLP model reusability.
Chen et al. (Chen et al., 2021b) have studied the deployment faults of DL-
based mobile applications using 304 deployment faults from Stack Overflow and
GitHub. They have provided a taxonomy with 23 fault categories and
corresponding fix strategies. In contrast, we study the bugs that arise while
reusing pre-trained NLP models.
Zhang et al. (Zhang et al., 2018) have studied the software built using the
Tensorflow library. This work is done on 175 bugs found in Stack Overflow
posts, and GitHub commits. These bugs are classified into symptoms and causes.
Though this work has charted the course in studying DL bugs, the bugs found in
traditional DL libraries are significantly different from those found while
reusing the NLP models.
Islam et al. (Islam et al., 2019) have studied bugs from five different DL
libraries using 2716 Stack Overflow posts and 500 GitHub commits. They found
415 bugs from Stack Overflow posts and 555 from GitHub commits. This work
classified the bugs into the root cause, bug type, and effect. Furthermore,
they studied bugs found in different pipeline stages and the presence of
different anti-patterns. Our classification scheme has been adapted from this
work. However, we found that the NLP-model bugs are significantly different.
Humbatova et al. (Humbatova et al., 2020) have studied 1059 Stack Overflow and
GitHub codes and developed a taxonomy of the faults seen in these posts and
commits. This study also focused on traditional DL and did not consider the
reusability of models in the NLP domain.
Chakraborty (Chakraborty, 2021) has studied 80 Stack Overflow posts to
determine the types of bugs that occur when reusing BERT models in particular.
We study the reuse of the 11 popular NLP models. We mined 9,214 issues from
GitHub and identified 984 bugs. Also, we provide a taxonomy with bug types,
root causes, and impacts.
## 7\. Threats To Validity
Internal Threat. The findings and implications are drawn based on the
classification scheme we developed. To remove the threat regarding the quality
of the classification scheme, we adapted the base scheme from prior works and
added categories based on an open coding approach. The classification scheme
was developed by two researchers through rigorous discussions, in the same way
prior works (Islam et al., 2019, 2020; Zhang et al., 2018) developed their
classification schemes. Also, to remove researcher bias, we computed Cohen’s
Kappa coefficient to measure the agreement. Only after perfect agreement was
achieved did the raters label the posts individually. Moreover, any
discrepancies were resolved through discussion.
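For reference, Cohen's Kappa for two raters can be computed directly from the two label sequences. The impact labels below are illustrative, not data from the study:

```python
# Sketch of the inter-rater agreement check: Cohen's kappa for two raters
# labeling the same set of posts. The example labels are illustrative.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    # chance agreement: probability both raters pick the same label at random
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

a = ["crash", "crash", "hang", "crash", "bad-perf"]
b = ["crash", "crash", "hang", "bad-perf", "bad-perf"]
print(round(cohens_kappa(a, b), 2))  # → 0.69
```

Values near 1.0 indicate near-perfect agreement between the raters, which is the threshold described above before individual labeling proceeded.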
External Threat. The quality of the mined issues can be an external threat. To
mitigate this, we selected the most popular NLP models from a widely used
framework (Huggingface Transformer). Then, instead of selecting issues related
to bugs in Huggingface Transformer itself, we mined the bugs found while using
these NLP models. Next, we removed GitHub repositories that are not well-
maintained (number of issues $\leq$50) and selected popular ones (based on
star count). We mined bug-related issues using keywords proposed by prior work
(Garcia et al., 2020) and labels assigned by the maintainers of the
repositories. After mining such issues, the raters manually verified whether
each issue was related to a bug.
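A minimal sketch of this filtering pipeline is shown below. The $\leq$50-issues cutoff follows the text; the keyword set, field names, and star threshold are hypothetical stand-ins:

```python
# Hedged sketch of the issue-filtering step: keep repositories that are
# well-maintained and popular, then keep issues matching bug keywords.
# The issue cutoff follows the text; the keyword list, record field names,
# and min_stars threshold are illustrative assumptions.

BUG_KEYWORDS = {"bug", "error", "crash", "fail", "defect", "fault"}

def select_repos(repos, min_issues=50, min_stars=100):
    """Drop repos with <= min_issues issues; keep popular ones by star count."""
    return [r for r in repos
            if r["num_issues"] > min_issues and r["stars"] >= min_stars]

def is_bug_related(issue):
    """An issue is bug-related if a keyword appears in its title or labels."""
    title = issue["title"].lower()
    return any(k in title for k in BUG_KEYWORDS) or \
           any(k in l.lower() for l in issue.get("labels", []) for k in BUG_KEYWORDS)

repos = [{"name": "a", "num_issues": 120, "stars": 900},
         {"name": "b", "num_issues": 30, "stars": 2000}]
issues = [{"title": "Model crashes on long input", "labels": []},
          {"title": "Add a new feature", "labels": ["enhancement"]}]

print([r["name"] for r in select_repos(repos)])           # ['a']
print([i["title"] for i in issues if is_bug_related(i)])  # only the crash issue
```

The keyword filter is deliberately a first pass; as described above, raters then manually verified each surviving issue.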
## 8\. Conclusion
With the increase in the popularity of the NLP domain, developers are facing
several bugs while reusing the pre-trained models. In this study, we mined
9214 issues from 11 repositories of well-known pre-trained models and
identified 984 bugs. Then, we manually studied them to understand the common
bug type, root causes, and impacts. We built a classification scheme based on
an open coding scheme. We determined that the root causes of these bugs are
significantly different from those of bugs found in traditional deep learning.
Specifically, we found that large models are bug-prone and cause memory-
related issues. We identified that a parameter tuning and validation-based
approach could be helpful to increase the robustness of such systems. We also
identified bugs related to the propagation of input bias to the output, loss
of semantic preservation in the system, etc. Lastly, we suggest different ways
to prevent such issues (i.e., verify the system to ensure semantic
preservation, etc.). Our findings can help guide both the NL and SE
practitioners and researchers through the most prevalent problems of reusing
these models and help build automated repairing approaches to address the
same.
## References
* Aggarwal et al. (2019) Aniya Aggarwal, Pranay Lohia, Seema Nagar, Kuntal Dey, and Diptikalyan Saha. 2019. Black box fairness testing of machine learning models. In _Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering_. 625–635.
* Agrawal et al. (2018) Amritanshu Agrawal, Wei Fu, and Tim Menzies. 2018. What is wrong with topic modeling? And how to fix it using search-based software engineering. _Information and Software Technology_ 98 (2018), 74–88.
* Arcuri and Fraser (2011) Andrea Arcuri and Gordon Fraser. 2011. On parameter tuning in search based software engineering. In _International Symposium on Search Based Software Engineering_. Springer, 33–47.
* Arcuri and Fraser (2013) Andrea Arcuri and Gordon Fraser. 2013. Parameter tuning or default values? An empirical investigation in search-based software engineering. _Empirical Software Engineering_ 18, 3 (2013), 594–623.
* Beizer (1990) Boris Beizer. 1990\. _Software Testing Techniques_. New York, 1990.
* Bender et al. (2021) Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big. In _Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency; Association for Computing Machinery: New York, NY, USA_.
* BERT (2022a) BERT. 2022a. Can BERT really handle misspelled words? https://github.com/google-research/bert/issues/587.
* BERT (2022b) BERT. 2022b. German Bert tokenizer does not recognize (some) special characters (!,?,…). https://github.com/huggingface/transformers/issues/2685.
* BERT (2022c) BERT. 2022c. Tuned Bert Model on MRPC gives wrong predictions. https://github.com/google-research/bert/issues/663.
* Bert (2022) Bert. 2022. Unhandled Rejection (Error): Unknown layer: BertModelLayer. https://github.com/google-research/bert/issues/1098.
* BERT (2022a) BERT. 2022a. Vowel symbols are removed from Devanagari (Hindi) scripts. https://github.com/google-research/bert/issues/138.
* BERT (2022b) BERT. 2022b. Whitespace around Chinese characters may not keep the original intention of the sentence. https://github.com/google-research/bert/issues/134.
* Borges et al. (2016) Hudson Borges, Andre Hora, and Marco Tulio Valente. 2016\. Understanding the factors that impact the popularity of GitHub repositories. In _2016 IEEE International Conference on Software Maintenance and Evolution (ICSME)_. IEEE, 334–344.
* Cardelli (1997) Luca Cardelli. 1997\. Program fragments, linking, and modularization. In _Proceedings of the 24th ACM SIGPLAN-SIGACT symposium on Principles of programming languages_. 266–277.
* Chakraborty (2021) Mohna Chakraborty. 2021\. Does reusing pre-trained NLP model propagate bugs?. In _Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering_. 1686–1688.
* Chen et al. (2021a) Xiaoyi Chen, Ahmed Salem, Michael Backes, Shiqing Ma, and Yang Zhang. 2021a. Badnl: Backdoor attacks against nlp models. In _ICML 2021 Workshop on Adversarial Machine Learning_.
* Chen et al. (2021b) Zhenpeng Chen, Huihan Yao, Yiling Lou, Yanbin Cao, Yuanqiang Liu, Haoyu Wang, and Xuanzhe Liu. 2021b. An empirical study on deployment faults of deep learning based mobile applications. In _2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE)_. IEEE, 674–685.
* Chlipala (2007) Adam Chlipala. 2007\. A certified type-preserving compiler from lambda calculus to assembly language. _ACM Sigplan Notices_ 42, 6 (2007), 54–65.
* Choi et al. (2004) Byung-Kyu Choi, Sangig Rho, and Riccardo Bettati. 2004\. Fast software component migration for applications survivability in distributed real-time systems. In _Seventh IEEE International Symposium onObject-Oriented Real-Time Distributed Computing, 2004. Proceedings._ IEEE, 269–276.
* Collobert and Weston (2008) Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In _Proceedings of the 25th international conference on Machine learning_. 160–167.
* Dai et al. (2019) Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019\. Transformer-XL: Attentive Language Models beyond a Fixed-Length Context. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_. 2978–2988.
* Deng et al. (2013) Li Deng, Geoffrey Hinton, and Brian Kingsbury. 2013. New types of deep neural network learning for speech recognition and related applications: An overview. In _2013 IEEE international conference on acoustics, speech and signal processing_. IEEE, 8599–8603.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. 4171–4186.
* EleutherAI (2022) EleutherAI. 2022\. GPTNeo. https://github.com/EleutherAI/gpt-neo/.
* Facebook Research (2022a) Facebook Research. 2022a. Bart. https://github.com/pytorch/fairseq.
* Facebook Research (2022b) Facebook Research. 2022b. RoBERTa. https://github.com/pytorch/fairseq.
* Facebook Research (2022c) Facebook Research. 2022c. XLM. https://github.com/facebookresearch/XLM.
* Fleurey et al. (2007) Franck Fleurey, Erwan Breton, Benoit Baudry, Alain Nicolas, and Jean-Marc Jézéquel. 2007\. Model-driven engineering for software migration in a large industrial context. In _International Conference on Model Driven Engineering Languages and Systems_. Springer, 482–497.
* Fu and Menzies (2017) Wei Fu and Tim Menzies. 2017. Easy over hard: A case study on deep learning. In _Proceedings of the 2017 11th joint meeting on foundations of software engineering_. 49–60.
* Galhotra et al. (2017) Sainyam Galhotra, Yuriy Brun, and Alexandra Meliou. 2017\. Fairness testing: testing software for discrimination. In _Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering_. 498–510.
* Galloway et al. (2019) Angus Galloway, Anna Golubeva, Thomas Tanay, Medhat Moussa, and Graham W Taylor. 2019. Batch normalization is a cause of adversarial vulnerability. _arXiv preprint arXiv:1905.02161_ (2019).
* Gao et al. (2020) Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020\. The pile: An 800gb dataset of diverse text for language modeling. _arXiv preprint arXiv:2101.00027_ (2020).
* Garcia et al. (2020) Joshua Garcia, Yang Feng, Junjie Shen, Sumaya Almanee, Yuan Xia, and Qi Alfred Chen. 2020\. A comprehensive study of autonomous vehicle bugs. In _Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering_. 385–396.
* GitHub Issue (2022a) GitHub Issue. 2022a. Finetuned mBART model for Spanish-English produces lengthy incorrect results for short inputs. https://github.com/pytorch/fairseq/issues/2476.
* GitHub Issue (2022b) GitHub Issue. 2022b. Incorrect Output on custom dataset trained model. https://github.com/google-research/text-to-text-transfer-transformer/issues/409.
* Gonzalez et al. (2020) Danielle Gonzalez, Thomas Zimmermann, and Nachiappan Nagappan. 2020\. The State of the ML-universe: 10 Years of Artificial Intelligence & Machine Learning Software Development on GitHub. In _Proceedings of the 17th International Conference on Mining Software Repositories_. 431–442.
* Google Research (2022a) Google Research. 2022a. ALBERT. https://github.com/google-research/albert.
* Google Research (2022b) Google Research. 2022b. Bert. https://github.com/google-research/bert.
* Google Research (2022c) Google Research. 2022c. T5. https://github.com/google-research/text-to-text-transfer-transformer.
* He et al. (2021) Xin He, Kaiyong Zhao, and Xiaowen Chu. 2021. AutoML: A Survey of the State-of-the-Art. _Knowledge-Based Systems_ 212 (2021), 106622.
* HuggingFace (2022a) HuggingFace. 2022a. Huggingface Model Hub. https://huggingface.co/models.
* HuggingFace (2022b) HuggingFace. 2022b. Huggingface Transformer. https://github.com/huggingface/transformers.
* Humbatova et al. (2020) Nargiz Humbatova, Gunel Jahangirova, Gabriele Bavota, Vincenzo Riccio, Andrea Stocco, and Paolo Tonella. 2020. Taxonomy of real faults in deep learning systems. In _Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering_. 1110–1121.
* Ilievski et al. (2017) Ilija Ilievski, Taimoor Akhtar, Jiashi Feng, and Christine Shoemaker. 2017. Efficient hyperparameter optimization for deep learning algorithms using deterministic rbf surrogates. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , Vol. 31.
* Islam et al. (2019) Md Johirul Islam, Giang Nguyen, Rangeet Pan, and Hridesh Rajan. 2019. A comprehensive study on deep learning bug characteristics. In _Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering_. 510–520.
* Islam et al. (2020) Md Johirul Islam, Rangeet Pan, Giang Nguyen, and Hridesh Rajan. 2020\. Repairing deep neural networks: Fix patterns and challenges. In _2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE)_. IEEE, 1135–1146.
* Jacob Devlin (2022) Jacob Devlin. 2022\. Plans to support longer sequences? https://github.com/google-research/bert/issues/27.
* Johnson (2020) Melvin Johnson. 2020\. A scalable approach to reducing gender bias in Google Translate. _Google Blog_ (2020).
* Keskar et al. (2019) Nitish Shirish Keskar, Bryan McCann, Lav Varshney, Caiming Xiong, and Richard Socher. 2019\. CTRL - A Conditional Transformer Language Model for Controllable Generation. _arXiv preprint arXiv:1909.05858_ (2019).
* Kimiyoung (2022) Kimiyoung. 2022\. Transformer-XL. https://github.com/kimiyoung/transformer-xl.
* Kum et al. (2008) Daehyun Kum, Gwang-Min Park, Seonghun Lee, and Wooyoung Jung. 2008. AUTOSAR migration from existing automotive software. In _2008 International Conference on Control, Automation and Systems_. IEEE, 558–562.
* Lagouvardos et al. (2020) Sifis Lagouvardos, Julian Dolby, Neville Grech, Anastasios Antoniadis, and Yannis Smaragdakis. 2020\. Static Analysis of Shape in TensorFlow Programs. In _34th European Conference on Object-Oriented Programming (ECOOP 2020)_. Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
* Lample and Conneau (2019) Guillaume Lample and Alexis Conneau. 2019. Cross-lingual Language Model Pretraining. _Advances in Neural Information Processing Systems (NeurIPS)_ (2019).
* Lan et al. (2019) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019\. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In _International Conference on Learning Representations_.
* Lewis et al. (2020) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_. 7871–7880.
* Liu et al. (2021) Chao Liu, Cuiyun Gao, Xin Xia, David Lo, John Grundy, and Xiaohu Yang. 2021\. On the Reproducibility and Replicability of Deep Learning in Software Engineering. _ACM Transactions on Software Engineering and Methodology (TOSEM)_ 31, 1 (2021), 1–46.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. _arXiv preprint arXiv:1907.11692_ (2019).
* McCandlish et al. (2018) Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. 2018. An empirical model of large-batch training. _arXiv preprint arXiv:1812.06162_ (2018).
* Nikanjam et al. (2021) Amin Nikanjam, Houssem Ben Braiek, Mohammad Mehdi Morovati, and Foutse Khomh. 2021. Automatic Fault Detection for Deep Learning Programs Using Graph Transformations. _ACM Transactions on Software Engineering and Methodology (TOSEM)_ (2021). https://arxiv.org/abs/2105.08095
* Open AI (2022) Open AI. 2022\. GPT-2. https://github.com/openai/gpt-2.
* Ouni et al. (2012) Ali Ouni, Marouane Kessentini, Houari Sahraoui, and Mohamed Salah Hamdi. 2012. Search-based refactoring: Towards semantics preservation. In _2012 28th IEEE International Conference on Software Maintenance (ICSM)_. IEEE, 347–356.
* Pan and Rajan (2020) Rangeet Pan and Hridesh Rajan. 2020. On decomposing a deep neural network into modules. In _Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering_. 889–900.
* Pan and Rajan (2022) Rangeet Pan and Hridesh Rajan. 2022. Decomposing Convolutional Neural Networks into Reusable and Replaceable Modules. In _ICSE’22: The 44th International Conference on Software Engineering_.
* Parnas (1972) David L Parnas. 1972\. On the criteria to be used in decomposing systems into modules. In _Pioneers and their contributions to software engineering_. Springer, 479–498.
* Parnas (1976) David Lorge Parnas. 1976\. On the design and development of program families. _IEEE Transactions on software engineering_ 1 (1976), 1–9.
* Phan et al. (2017) Hung Dang Phan, Anh Tuan Nguyen, Trong Duc Nguyen, and Tien N Nguyen. 2017. Statistical migration of API usages. In _2017 IEEE/ACM 39th International Conference on Software Engineering Companion (ICSE-C)_. IEEE, 47–50.
* Plakidas et al. (2018) Konstantinos Plakidas, Daniel Schall, and Uwe Zdun. 2018\. Software migration and architecture evolution with industrial platforms: A multi-case study. In _European Conference on Software Architecture_. Springer, 336–343.
* Radford et al. (2019) Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019\. Language Models are Unsupervised Multitask Learners. (2019).
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. _Journal of Machine Learning Research_ 21, 140 (2020), 1–67. http://jmlr.org/papers/v21/20-074.html
* Salesforce (2022) Salesforce. 2022\. CTRL. https://github.com/salesforce/ctrl.
* Schoop et al. (2021) Eldon Schoop, Forrest Huang, and Bjoern Hartmann. 2021\. UMLAUT: Debugging Deep Learning Programs using Program Structure and Model Behavior. In _Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems_. 1–16.
* Song and Raghunathan (2020) Congzheng Song and Ananth Raghunathan. 2020. Information leakage in embedding models. In _Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security_. 377–390.
* Strubell et al. (2019) Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019\. Energy and policy considerations for deep learning in NLP. _arXiv preprint arXiv:1906.02243_ (2019).
* Syed et al. (1999) Nadeem Ahmed Syed, Syed Huan, Liu Kah, and Kay Sung. 1999\. Incremental learning with support vector machines. (1999).
* Thung et al. (2012) Ferdian Thung, Shaowei Wang, David Lo, and Lingxiao Jiang. 2012\. An empirical study of bugs in machine learning systems. In _2012 IEEE 23rd International Symposium on Software Reliability Engineering_. IEEE, 271–280.
* Udeshi et al. (2018) Sakshi Udeshi, Pryanshu Arora, and Sudipta Chattopadhyay. 2018\. Automated directed fairness testing. In _Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering_. 98–108.
* Vaswani et al. (2018) Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2Tensor for Neural Machine Translation. _CoRR_ abs/1803.07416 (2018). http://arxiv.org/abs/1803.07416
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017\. Attention is All you Need. In _NIPS_.
* Viera et al. (2005) Anthony J Viera, Joanne M Garrett, et al. 2005\. Understanding interobserver agreement: the kappa statistic. _Fam med_ 37, 5 (2005), 360–363.
* Wardat et al. (2021a) Mohammad Wardat, Breno Dantas Cruz, Wei Le, and Hridesh Rajan. 2021a. DeepDiagnosis: Automatically Diagnosing Faults and Recommending Actionable Fixes in Deep Learning Programs. _arXiv preprint arXiv:2112.04036_ (2021).
* Wardat et al. (2021b) Mohammad Wardat, Wei Le, and Hridesh Rajan. 2021b. DeepLocalize: Fault Localization for Deep Neural Networks. In _ICSE’21: The 43nd International Conference on Software Engineering_.
* Wolf et al. (2020) Thomas Wolf, Julien Chaumond, Lysandre Debut, Victor Sanh, Clement Delangue, Anthony Moi, Pierric Cistac, Morgan Funtowicz, Joe Davison, Sam Shleifer, et al. 2020\. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_. 38–45.
* Wu et al. (2019) Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. 2019. Large scale incremental learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 374–382.
* Yang et al. (2021) Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, and Xu Sun. 2021. Rethinking Stealthiness of Backdoor Attack against NLP Models. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_. Association for Computational Linguistics, Online, 5543–5557. https://doi.org/10.18653/v1/2021.acl-long.431
* Yang et al. (2019) Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019\. XLNet: Generalized Autoregressive Pretraining for Language Understanding. _Advances in Neural Information Processing Systems_ 32 (2019), 5753–5763.
* Zanella-Béguelin et al. (2020) Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko, Boris Köpf, and Marc Brockschmidt. 2020. Analyzing information leakage of updates to natural language models. In _Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security_. 363–375.
* Zhang et al. (2021b) Xiaoyu Zhang, Juan Zhai, Shiqing Ma, and Chao Shen. 2021b. AUTOTRAINER: An Automatic DNN Training Problem Detection and Repair System. In _2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE)_. IEEE, 359–371.
* Zhang et al. (2018) Yuhao Zhang, Yifan Chen, Shing-Chi Cheung, Yingfei Xiong, and Lu Zhang. 2018. An empirical study on TensorFlow program bugs. In _Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis_. 129–140.
* Zhang et al. (2021a) Zhengyan Zhang, Guangxuan Xiao, Yongwei Li, Tian Lv, Fanchao Qi, Yasheng Wang, Xin Jiang, Zhiyuan Liu, and Maosong Sun. 2021a. Red Alarm for Pre-trained Models: Universal Vulnerabilities by Neuron-Level Backdoor Attacks. _arXiv preprint arXiv:2101.06969_ (2021).
* Zhao et al. (2012) Jianzhou Zhao, Santosh Nagarakatte, Milo MK Martin, and Steve Zdancewic. 2012. Formalizing the LLVM intermediate representation for verified program transformations. In _Proceedings of the 39th annual ACM SIGPLAN-SIGACT symposium on Principles of programming languages_. 427–440.
* Zihangdai (2022) Zihangdai. 2022\. XLNET. https://github.com/zihangdai/xlnet.
# Carbon pseudospheres and the BTZ black hole
A. Iorio <EMAIL_ADDRESS> Institute of Particle and Nuclear
Physics, Faculty of Mathematics and Physics, Charles University, V
Holešovičkách 2, 18000 Praha 8, Czech Republic.
###### Abstract
I first recall the uses of Dirac materials as table top realizations of high
energy physics scenarios. Then I point to a specific system that might
reproduce a massless BTZ black hole, where the key role is played by
hyperbolic carbon pseudospheres. Finally some considerations are offered on
the possibility to realize rotating black holes, along with some comments on
the future of the whole analog gravity enterprise.
## I Introduction
The field of analogs has as its noble father Richard Feynman, who ignited it
in a famous lecture titled “Electrostatic Analogs”, available in Feynman (see
also twostories ). There he explains why analog systems describe the same
physics, based on the thrilling hypothesis of more elementary constituents
than the ones we deem to be fundamental. Amazingly, when space itself is
included as an emergent phenomenon, these are also the conclusions of certain
completely independent arguments of contemporary quantum gravity bekenstein ;
scholtz ; carroll .
As for the field of gravity analogs, see Volovik:2003fe , the seminal paper is
that of Unruh of 1981, where he proposes to search for experimental signatures
of his and Hawking’s effects, in a fluid dynamical analog UnruhAnalog . Due to
our deeper understanding and experimental control of condensed matter systems,
it is now becoming increasingly popular to reproduce that and other aspects of
fundamental physics in analog systems. Examples include the Hawking phenomenon
in Bose–Einstein condensates Steinhauer:2015saa , the Weyl symmetry iorio and
the related Hawking/Unruh phenomenon on graphene ioriolambiase1 ,
gravitational and axial anomalies in Weyl semimetals Gooth:2017mbd , and more
Ulf_LeonhardtPRL2019 . Actually, gravity analogs are not limited to condensed-
matter systems, as can be seen, e.g., by interpreting hadronization in heavy-
ion collisions as a consequence of the Unruh effect Castorina:2007eb ;
Castorina:2008gf .
Despite those impressive advances, there are still two milestones to reach.
One is to understand the epistemic role of analogs in fundamental high-energy
physics, as not all theorists would agree that analogs are much more than mere
divertissements. In fact, experimental results obtained in analogs are not
used as feedbacks for the target theories they are analogs of (see, e.g.,
Dardashti2016 ; twostories ). Another milestone would be a reliable definition
of an analog BH entropy, or at least, of a QFT-like entanglement entropy that,
in the presence of horizons, might serve the scope of setting-up some form of
the second principle of BH thermodynamics.
Any progress in this direction would be truly important for the hep-th
research. Having some results there, we could eventually be able to address
the so-called information paradox, i.e., the apparent loss of information
during BH evaporation, a question that, most probably, cannot be entirely
solved via theoretical reasonings. See, e.g., Penrose1996 ; Hawking2004 ;
Almheiri2013 ; Hooft2016 ; Hooft2016_I ; Maldazena for different points of
view.
On the analog side, theoretical work has shown over the years that black hole
physics can find an indirect realization in BEC systems tris , and thrilling
experimental evidence has confirmed this fact Steinhauer:2015saa . The
latter findings are often referred to as the first experimental examples of
the Hawking effect. Here we focus on the proposal of graphene as an analog of
high-energy fundamental physics ioriolambiase1 ; iorio ; pabloStran ;
ioriopaiswitten ; grapheneQFTreview ; reach the unreachable (inspired by those
findings, fundamental constituents of both matter and space have been proposed
in scholtz ; twostories ; smaldone ), based on the fact that its low-energy
excitations CastroNeto2009 are massless Dirac pseudo-relativistic
fermions (the matter fields $\psi$), propagating in a carbon two-dimensional
honeycomb lattice. The emergent (long-wave limit) description of the latter is
a surface (piece of spacetime described by the “emergent” metric
$g_{\mu\nu}$). Such a behavior is shared by a wide range of materials, ranging
from silicene and germanene through d-wave superconductors to topological
insulators wehling . Each of those materials has its own peculiarities, which
allow for further extensions of results obtained with graphene, and hence
permit the exploration of a wider range of high-energy target systems. Let us
now give some details.
## II Analog gravity on graphene
Graphene is a one-atom-thick allotrope of carbon, i.e. the closest in nature
to a 2-dimensional object. It was first theoretically speculated about wallace
, and, decades later, experimentally found geimnovoselovFIRST . Its honeycomb
lattice is made of two intertwined triangular sub-lattices. As is by now well
known, this structure is behind the description of its electronic properties
in terms of massless, (2+1)-dimensional, Dirac quasi-particles. If one
linearizes the tight-binding Hamiltonian around two Fermi points,
$\vec{k}^{D}_{\pm}=\left(\pm\frac{4\pi}{3\sqrt{3}\ell},0\right)$, then the
Hamiltonian becomes $H|_{\vec{k}_{\pm}}\simeq
v_{F}\sum_{\vec{p}}\left(\psi_{+}^{\dagger}\vec{\sigma}\cdot\vec{p}\;\psi_{+}-\psi_{-}^{\dagger}\vec{\sigma}^{*}\cdot\vec{p}\;\psi_{-}\right)$,
where $v_{F}=3\eta\ell/2\sim c/300$ is the Fermi velocity, $\psi_{\pm}$ are
two–component Dirac spinors, and $\vec{\sigma}\equiv(\sigma_{1},\sigma_{2})$,
$\vec{\sigma}^{*}\equiv(\sigma_{1},-\sigma_{2})$, with $\sigma_{i}$ the Pauli
matrices.
If one considers the linear regime only, the first scale is $E_{\ell}\sim
v_{F}/\ell\sim 4.2$eV. Notice that $E_{\ell}\sim 1.5\eta$, and that the
associated wavelength, $\lambda=2\pi/|\vec{p}|\simeq 2\pi v_{F}/E$, is
$2\pi\ell$. The electrons’ wavelength, at energies below $E_{\ell}$, is large
compared to the lattice length, $\lambda>2\pi\ell$. Those electrons see the
graphene sheet as a continuum. One Dirac point is enough, when only strain is
present (see, e.g., pabloStran ), and when certain approximations on the
curvature are valid ioriolambiase1 . The importance and relevance of the two
Dirac points for emergent hep-th descriptions has been discussed at length in
our work ioriopaiswitten , see also our recent tloop , where the focus though
is on torsion.
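As a quick numerical sanity check of the scales just quoted (a sketch only: the hopping $\eta\simeq 2.8$ eV and the lattice spacing $\ell\simeq 1.42$ Å are standard graphene values assumed here, not stated explicitly in the text):

```python
# Sanity check of the graphene scales quoted above. The hopping energy
# eta ~ 2.8 eV and the lattice spacing ell ~ 1.42 Angstrom are assumed
# standard graphene values, not taken from this text.
hbar = 6.582e-16      # eV * s
c = 2.998e8           # m / s
eta = 2.8             # eV, nearest-neighbour hopping (assumed)
ell = 1.42e-10        # m, carbon-carbon distance (assumed)

# Restoring hbar in the text's v_F = 3 * eta * ell / 2:
v_F = 3.0 * eta * ell / (2.0 * hbar)   # ~ 9e5 m/s, i.e. roughly c/300
E_ell = hbar * v_F / ell               # equals 1.5 * eta, i.e. ~ 4.2 eV
```

With these inputs $v_{F}/c\approx 1/330$ and $E_{\ell}=1.5\eta=4.2$ eV, consistent with the figures quoted above.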
When only one Dirac point is necessary, the following Hamiltonian well
captures the physics of undeformed (planar and unstrained) graphene:
$H=-iv_{F}\int d^{2}x\;\psi^{\dagger}\vec{\sigma}\cdot\vec{\partial}\;\psi$,
where the two component spinor is, e.g., $\psi\equiv\psi_{+}$, we moved back
to configuration space, $\vec{p}\to-i\vec{\partial}$, and sums turned into
integrals because of the continuum limit. In various papers, we have exploited
this regime extensively, up to the inclusion of curvature and torsion in
the geometric background. On the other hand, we have also investigated the
regimes beyond the linear one, where granular effects associated with the
lattice structure emerge, see GUP and also the related GUPBTZ . When both
Dirac points are necessary, one needs to consider four component spinors
$\Psi\equiv\left(\begin{array}[]{c}\psi_{+}\\\ \psi_{-}\\\
\end{array}\right)$, and $4\times 4$ Dirac matrices
$\alpha^{i}=\left(\begin{array}[]{cc}\sigma^{i}&0\\\ 0&-{\sigma^{*}}^{i}\\\
\end{array}\right)$, $\beta=\left(\begin{array}[]{cc}\sigma^{3}&0\\\
0&\sigma^{3}\\\ \end{array}\right)$, $i=1,2$. These matrices satisfy all the
standard properties, see, e.g., grapheneQFTreview and ioriopaiswitten . With
these, the Hamiltonian is $H=-iv_{F}\int
d^{2}x\left(\psi_{+}^{\dagger}\vec{\sigma}\cdot\vec{\partial}\;\psi_{+}-\psi_{-}^{\dagger}\vec{\sigma}^{*}\cdot\vec{\partial}\;\psi_{-}\right)=-iv_{F}\int
d^{2}x\;\bar{\Psi}\vec{\gamma}\cdot\vec{\partial}\;\Psi$.
In iorio the goal was to identify the conditions for which graphene might
realize aspects of QFT in curved spacetime. Therefore, key issues had to be
faced, such as the proper inclusion of the time variable in a relativistic-
like description, and the role of the nontrivial vacua and their relation to
different quantization schemes for different observers. All of this finds its
synthesis in the Unruh or the Hawking effects ioriolambiase1 . Let us explain
here the main issues and the approximations made there.
Besides $E_{\ell}$, when we introduce curvature, we also have a second scale.
When this happens, $E_{\ell}$ is our “high energy regime”. This is so because
we ask the curvature to be small compared to a limiting maximal curvature,
$1/\ell^{2}$; otherwise: i) it would make no sense to consider a smooth
metric, and ii) $r<\ell$ (where $1/r^{2}$ measures the intrinsic curvature)
would mean bending the very strong $\sigma$-bonds, an instance that
does not occur. Therefore, our second scale is $E_{r}\sim v_{F}/r$, with
$E_{r}=\ell/r\;E_{\ell}<E_{\ell}$. To have a quantitative handle on these
scales, let us take, e.g., $r\simeq 10\ell$ as a small radius of curvature
(high intrinsic curvature). To this corresponds an energy $E_{r}\sim 0.4$eV,
whereas, to $r\sim 1{\rm mm}\sim 10^{6}\ell$, corresponds $E_{r}\sim
0.6\mu$eV. The “high energy” to compare with is $E_{\ell}\sim 4$eV. When
energies are within $E_{r}$ (wavelengths comparable to $2\pi r$) the electrons
experience the global effects of curvature. That is to say that, at those
wavelengths, they can distinguish between a flat and a curved surface, and
between, e.g., a sphere and a pseudosphere. Therefore, whichever curvature
$r>\ell$ we consider, the effects of curvature are felt until the wavelength
becomes comparable to $2\pi\ell$. The formalism we have used, though, takes
into account all deformations of the geometric kind, with the exception of
torsion. Hence, this includes intrinsic curvature, and elastic strain of the
membrane (on the latter see pabloStran ), but our predicting power stops
before $E_{\ell}$, because there local effects (such as the actual structure
of the defects) play a role that must be taken into account into a QG type of
theory. On the latter the first steps were moved in GUP (see also the related
GUPBTZ ).
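The same back-of-the-envelope check can be done for the curvature scale $E_{r}\sim\hbar v_{F}/r$ (again a sketch, with assumed round values $v_{F}\sim 10^{6}$ m/s and $\ell\simeq 1.42$ Å):

```python
# Check of the curvature energy scale E_r ~ hbar * v_F / r quoted above.
# v_F ~ 1e6 m/s and ell ~ 1.42 Angstrom are assumed round graphene values.
hbar = 6.582e-16    # eV * s
v_F = 1.0e6         # m / s (assumed)
ell = 1.42e-10      # m (assumed)

E_r_high = hbar * v_F / (10.0 * ell)   # r = 10 * ell -> ~0.46 eV ("~0.4 eV" above)
E_r_mm = hbar * v_F / 1.0e-3           # r = 1 mm     -> ~0.66e-6 eV ("~0.6 micro-eV")
```

The rounded inputs reproduce the orders of magnitude quoted in the text.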
The intrinsic curvature is taken here as produced by disclination defects,
that are customarily described in elasticity theory (see, e.g., Kleinert ), by
the (smooth) derivative of the (non-continuous) SO(2)-valued rotational angle
$\partial_{i}{\omega}\equiv{\omega_{i}}$, where $i=1,2$ is a curved spatial
index. The corresponding (spatial) Riemann curvature tensor is easily
obtained,
${R^{ij}}_{kl}=\epsilon^{ij}\epsilon_{kl}\epsilon^{mn}\partial_{m}\omega_{n}=\epsilon^{ij}\epsilon_{lk}2{\cal
K}$, where $\cal K$ is the Gaussian (intrinsic) curvature of the surface. In
our approach we have included time, although the metric we adopted is
$g^{\rm graphene}_{\mu\nu}=\left(\begin{array}{cc}1&0\\\ 0&g_{ij}\\\ \end{array}\right)\;,$ (1)
i.e., the curvature is all in the spatial part, and $\partial_{t}g_{ij}=0$.
Since the time dimension is included, the SO(2)-valued (abelian) disclination
field has to be lifted-up to a SO(1,2)-valued (non-abelian) disclination
field, ${\omega_{\mu}}^{a}$, $a=0,1,2$, with
$\omega_{\mu}^{\;a}=e^{b}_{\mu}\omega_{b}^{\;a}$ and the expression
$\omega_{a}^{\;d}=\frac{1}{2}\epsilon^{bcd}\left(e_{\mu
a}\partial_{b}E_{c}^{\mu}+e_{\mu b}\partial_{a}E_{c}^{\mu}+e_{\mu
c}\partial_{b}E_{a}^{\mu}\right)$, gives the relation between the disclination
field and the metric (dreibein). None of the information about the intrinsic
curvature changes. For instance, the Riemann curvature tensor,
${R^{\lambda}}_{\mu\nu\rho}$, has only one independent component, proportional
to $\cal K$ (see iorio ). When only curvature is important, the long
wavelength/small energy electronic properties of graphene, are well described
by the action ${\cal A}=iv_{F}\int
d^{3}x\sqrt{g}\;\bar{\Psi}\gamma^{\mu}(\partial_{\mu}+\Omega_{\mu})\Psi$, with
$\Omega_{\mu}\equiv{\omega_{\mu}}^{a}J_{a}$, and $J_{a}$ are the generators of
SO(1,2), the local Lorentz transformations in this lower-dimensional setting.
In ioriopaiswitten we have discussed at length this action within the Witten
approach witten3dgravity to Poincaré ($ISO(2,1)$) or (A)dS gravity as gauge
theory, and within the USUSY approach, see also susyZanelli1 , and especially
the recent u-susy-graphene .
Figure 1: The hyperbolic pseudosphere for $a=1$, $C=1$. Here $\rho_{\rm
min}=1$ and $\rho_{\rm max}\simeq 1.4142$.
Within this approach, a nontrivial $g_{tt}$ in (1), hence a clean nontrivial
general relativistic effect (recall that $g_{tt}\sim U_{grav}$), can only
happen if specific symmetries and set-ups map the lab system into the wanted
one, one such symmetry being the local Weyl symmetry. This produced measurable
predictions of a Hawking/Unruh effect for certain specific shapes. It was found
that surfaces of constant Gaussian curvature, and among them, those of negative
curvature (which necessarily have singular boundaries, see grapheneQFTreview
and icrystals ), are key. The above led to the proposal of a variety of set-
ups, especially three key spacetimes with horizon: the Rindler, the de Sitter,
and the BTZ black hole BTZ1992 .
## III Carbon pseudospheres and the BTZ black hole
Let us write the BTZ black hole metric as in GUPBTZ
$ds_{BTZ}^{2}=f(r)^{2}c^{2}dt^{2}-f(r)^{-2}dr^{2}-r^{2}(d\phi+N^{\phi}cdt)^{2}$
(2)
where
$f^{2}(r)=-\frac{8GM}{c^{2}}-\Lambda
r^{2}+\frac{16G^{2}J^{2}}{c^{4}\,r^{2}}\,,\quad\quad\quad
N^{\phi}=-\frac{4GJ}{c^{2}\,r^{2}}\,,$ (3)
with $M$ the mass, $\Lambda\equiv-1/\ell^{2}<0$ the negative cosmological
constant and $J$ the angular momentum. Horizons are located at the positive
zeros of $f(r)$
$r^{2}_{\pm}=\frac{4GM\ell^{2}}{c^{2}}\left[1\pm\left(1-\frac{J^{2}}{\ell^{2}M^{2}}\right)^{1/2}\right]\,.$
(4)
When $M>0$ and $|J|\leq M\ell$, we have a black hole, with $r_{+}$ a genuine
event horizon, and $r_{-}$ a Cauchy horizon.
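The horizon structure of (3)-(4) is easy to verify numerically. A minimal sketch in units where $8G/c^{2}=1$ and $c=1$ (the same simplification the text adopts before (8)); the helper names are ours, for illustration only:

```python
import math

# Horizon radii of the BTZ black hole, eq. (4), in units 8G/c^2 = 1, c = 1,
# plus a check that f^2 vanishes there. Helper names are hypothetical.
def btz_horizons(M, J, ell):
    disc = math.sqrt(1.0 - (J / (M * ell))**2)   # requires |J| <= M * ell
    r_plus = math.sqrt(0.5 * M * ell**2 * (1.0 + disc))
    r_minus = math.sqrt(0.5 * M * ell**2 * (1.0 - disc))
    return r_plus, r_minus

def f_squared(r, M, J, ell):
    # In these units eq. (3) reads f^2(r) = -M + r^2/ell^2 + J^2/(4 r^2).
    return -M + (r / ell)**2 + J**2 / (4.0 * r**2)

r_p, r_m = btz_horizons(M=1.0, J=0.5, ell=1.0)
# f^2 vanishes at both radii, and the extremal case |J| = M*ell gives r_+ = r_-.
```

For instance, $M=1$, $J=0.5$, $\ell=1$ gives $r_{+}\approx 0.97$, $r_{-}\approx 0.26$, with $f^{2}(r_{\pm})=0$ to machine precision.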
### III.1 The massless black hole
In GUPBTZ , see also ioriolambiase1 , the extremal case of $M\to 0$
$ds_{0}^{2}=(r/\ell)^{2}c^{2}dt^{2}-(r/\ell)^{-2}dr^{2}-r^{2}d\phi^{2}\;,$ (5)
was put into (conformal) correspondence with a graphene analog realization of
this scenario, when the membrane is shaped in a very specific manner. There it
is shown that
the most natural choice is to identify $\ell$ with $\ell_{L}$, the lattice
spacing. The shape that is necessary to realize is that of the hyperbolic
pseudosphere $\Sigma_{\rm HYP}$, see the figures, whose line element is
$dl_{\rm HYP}^{2}=du^{2}+C^{2}\cosh^{2}(u/a)d\phi^{2}\,,$ (6)
with $C=\ell_{L}$, $u$ the longitudinal coordinate, $\phi\in[0,2\pi]$ and the
constant negative Gaussian curvature given by $K=-1/a^{2}<0$. In terms of the
radial coordinate
$\rho(u)=C\cosh(u/a)\,,$ (7)
there is a singular boundary at the largest circle of radius $\rho_{\rm
max}=\sqrt{a^{2}+C^{2}}\equiv\rho_{Hh}$, where $C$ is the radius of the
smallest throat, $\rho_{\rm min}=C$, cf. Figs.1, 2 and 3.
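The boundary radii quoted in the figure captions follow from (6)-(7): the singular boundary sits where the embedding condition $(d\rho/du)^{2}=1$ is saturated, which gives $\rho_{\rm max}=\sqrt{a^{2}+C^{2}}$. A quick check (helper name ours):

```python
import math

# rho(u) = C * cosh(u/a); the embedding breaks where (d rho/du)^2 = 1,
# i.e. sinh(u/a) = a/C, giving rho_max = sqrt(a^2 + C^2).
def rho_max(a, C):
    return math.sqrt(a**2 + C**2)

print(rho_max(1.0, 1.0))    # Fig. 1: ~1.4142
print(rho_max(1.0, 0.1))    # Fig. 2: ~1.005
print(rho_max(1.0, 0.01))   # Fig. 3: ~1.00005
```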
Figure 2: The hyperbolic pseudosphere for $a=1$, $C=1/10$. Here $\rho_{\rm
min}=C=1/10$ and $\rho_{\rm max}\simeq 1.005$.
Indeed, setting $J=0$ and, to ease the notation a little, $8G/c^{2}=1$,
$ds^{2}_{BTZ}=\left(r^{2}/C^{2}-M\right)ds^{2}_{\rm HYP}\;,$ (8)
where $ds^{2}_{\rm HYP}$ is the line element of $\Sigma_{\rm
HYP}\times\mathbf{R}$. Therefore, all the relevant quantities of the BTZ black
hole are given in terms of measurable quantities
$\Lambda\equiv-1/\ell_{L}^{2}\quad,\quad M\equiv\ell_{L}^{2}/a^{2}\quad,\quad
r_{+}\equiv\ell_{L}^{2}/a\;.$ (9)
Before moving to the discussion of the $J\neq 0$ case, let us recall how the
event horizon, $r_{+}$, relates to the singular boundary of the hyperbolic
pseudosphere ioriolambiase1 ; GUPBTZ
$r_{Hh}\equiv r(u_{Hh})=r_{+}\coth\left({\rm
arccosh}\left(\sqrt{1+a^{2}/\ell_{L}^{2}}\right)\right)\;.$ (10)
In the limit of small $\ell_{L}/a$, these two horizons coincide. That is also
the limit where $M\to 0$, and, accordingly $r_{+}\to 0$, i.e. the announced
zero mass black hole. In the figures we show three cases of such a
pseudosphere, for increasing closeness of $r_{Hh}$ to $r_{+}$.
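Eq. (10) is easiest to inspect as a ratio, $r_{Hh}/r_{+}=\coth\left({\rm arccosh}\sqrt{1+a^{2}/\ell_{L}^{2}}\right)$. A small numerical sketch of the limit (helper name ours):

```python
import math

# r_Hh / r_+ as a function of ell_L/a, from eq. (10). For small ell_L/a
# the arccosh argument is large, coth -> 1, and the singular boundary
# approaches the event horizon.
def r_Hh_over_r_plus(ell_over_a):
    x = math.acosh(math.sqrt(1.0 + 1.0 / ell_over_a**2))
    return 1.0 / math.tanh(x)   # coth(x)

print(r_Hh_over_r_plus(0.5))    # noticeably above 1
print(r_Hh_over_r_plus(0.01))   # very close to 1
```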
There are three reasons to still call this a black hole.
First of all, it is continuously connected to the spectrum of BTZ black holes,
$M\geq 0$. In fact, unlike the $M<0$ cases, which are states under the black
hole threshold, and represent point-particles in $AdS_{3}$, the $M=0$ case is
not a point-particle. One way of looking at it is that in the flat limit
$\Lambda\to 0$ (i.e. $\ell\to\infty$), where point particles survive, the case
$M=0$ does not exist as a solution. Thus $M=0$ belongs more to the black hole
branch ($M>0$) than to the point-particle branch ($M<0$). On the point
particles see DJtH .
Second, the metric (5) has a horizon, even though it is not an event horizon.
Indeed, the massless BTZ is locally equivalent to $AdS_{3}$ in Poincaré
coordinates. If one decompactifies the angular coordinate, one finds that the
BTZ metric with $M=0$ is the $AdS_{3}$ Poincaré patch, since the “Poincaré
horizon” coincides with $r=0$; that is, the Poincaré horizon is the $r=0$
locus of the $M=0$ BTZ in BTZ coordinates. In the coordinate $z=1/r$,
sometimes used, the Poincaré horizon sits at $z=\infty$, while the boundary of
the space is at $z=0$. This is well known, so it is not a result per se, but
have a
A third argument (related to the first) in favor of calling it a black hole is
that, from the point of view of holography, this solution ($M=0$) corresponds
to a well-defined state of the dual theory (although it is a state at $T=0$).
About the states of the dual theory, one should consider the link between
three-dimensional gravity and two-dimensional Liouville field theory, see,
e.g., the nice review modave , and also wittenM=0 , where the “black hole
threshold” is precisely the gap between $AdS_{3}$ and the $M=0$ BTZ.
### III.2 The rotating black hole?
It is an intriguing and challenging question whether we could reproduce with
graphene a rotating black hole, $J\neq 0$, that is, whether we could find a
specific configuration that can be associated with the spacetime described by
(2). One crucial observation is that such a configuration needs to include a
nontrivial behavior along the time direction, on top of a nontrivial behavior
along the space directions. Of course, we are referring here to the important
role played in that metric by
$g_{t\phi}=2r^{2}N^{\phi}=-J\,.$ (11)
We have, in fact, at least two choices. One choice would be to persist with the
strategy illustrated above, and look for a configuration that is conformal to
the full BTZ metric, rather than conformal only to the $J=0$ case, see (8). In
this case, one should probably give up the possibility of having both the
inner and outer horizons, $r_{\pm}$, in favor of the most important inner
horizon only. Given that now
$r_{\pm}^{2}=\frac{1}{2}\frac{\ell_{L}^{4}}{a^{2}}\left(1\pm\left(1-\frac{a^{4}J^{2}}{\ell_{L}^{6}}\right)^{1/2}\right)\,,$
(12)
we lose spherical symmetry, hence we should expect a deformed hyperbolic
pseudosphere. In that case, the parameter $J$ will have to be related to such
a geometric deformation, which might as well be a static one. Therefore, one
should not expect, in principle, $J$ to be a true angular momentum, like the
one stemming from a spinning pseudosphere; rather, $J$ can be just a
geometric measure of the deviation of the actual boundary from the singular
circle of the previous discussion, of radius $r_{Hh}$. At this point, though,
the road becomes quite steep. In fact, such a deformed surface might serve
well the purpose of visually reproducing the deformed horizons, at least the
inner one, but then it would probably fail to be a surface of constant
Gaussian curvature. In that case, the strategy adopted in the previous
discussion cannot be applied, because we would not be guaranteed to be in a
conformally flat spacetime iorio .
Therefore, why not try a completely different road, that is, actually act
on the time components of (1), and hence construct the wanted metric directly?
For this to work, we could shape the membrane along the radial direction in
such a way as to reproduce $1/f^{2}(r)$. This alone would clearly produce a
flat metric, as the curvature would only be in one direction. At this point it
is crucial to remember that the metric we are discussing is that experienced
by the conductivity electrons of the material; hence, at least in principle,
this could be achieved by letting such electrons interact with a suitably
fine-tuned external electromagnetic field PabloLaser , in such a way that
$U_{em}$ mimics the $U_{grav}$ that is in $g_{tt}$. Therefore, we need to act
with a space-dependent external electromagnetic field in such a way that the
$g_{tt}$ component is equal to $f^{2}(r)$, and we are nearly done.
Figure 3: The hyperbolic pseudosphere for $a=1$, $C=1/100$. Here $\rho_{\rm
min}=C=1/100$ and $\rho_{\rm max}\simeq 1.00005$.
In fact, the question left to answer is: what about the reproduction of the
crucial $g_{t\phi}$? This would probably be the most delicate step, because we
need here a feedback between the spatial action and the temporal action. One
way to go could be to use the electromagnetic field to induce elastic strain
of the membrane. If that happens, such a strain could produce a gauge field
$A_{\mu}^{strain}$, as customary, see, e.g., pabloStran , which can be added
to $A_{\mu}^{em}$, to obtain an overall term that mixes the space and time
components, that is, the spatial feedback on the metric obtained by acting on
the time components, and vice versa. Notice that, while the interaction with
the external field needs to be very finely tuned, in order to reproduce
exactly the wanted $f^{2}(r)$, the feedback on the strain can be quite
generic, as that component actually defines $J$, as seen in (11), and hence
can be arbitrary.
All the above is very fascinating, but needs a thorough investigation of all
details, theoretical and experimental, which is beyond the scope of this
essay and is the focus of ongoing research PabloLaser .
## IV Conclusions
The exciting and rapidly evolving field of analog gravity is facing a new era.
The interest is shifting from the reproduction of the kinematical aspects of
the Hawking/Unruh phenomenon, that has reached a climax of precision and
accuracy, to the realization of some form of dynamics, a very challenging
problem. The most important phenomenon to study is black-hole evaporation, the
most prominent dynamical phenomenon of quantum gravity, with its plethora of
open fundamental issues, such as the possibility that information is not
preserved in this process, etc. Having recalled here how Dirac materials lend
themselves to realizing crucial aspects of black hole physics, we believe that
the search for realizations of such dynamical phenomena will benefit from
Dirac materials, such as graphene, which have already shown themselves to be
powerful and versatile analogs of both quantum fields on curved spaces and
quantum gravity.
## Acknowledgements
The author is indebted to Gaston Giribet and Jorge Zanelli for discussions on
the special status of the massless BTZ black hole. He gladly acknowledges
support from Charles University Research Center (UNCE/SCI/013) and from the
grant SVV No. 260576.
## References
* (1) R. Feynman, et al., The Feynman Lectures on Physics (Pearson/Addison-Wesley, 2006).
* (2) A. Iorio, J. Phys.: Conf. Series 1275 (2019) 012013 [arXiv:1902.07096].
* (3) J. D. Bekenstein, Phys. Rev. D 23 (1981) 287; Phys. Rev. E 89 (2014) 1; Sci. Am. 289 (2003) 58.
* (4) G. Acquaviva, A. Iorio, M. Scholtz, Ann. Phys. 387 (2017) 317.
* (5) N. Bao, S. M. Carroll, A. Singh, Internat. J. Modern Phys. D 26 (12) (2017) 1743013.
* (6) G. E. Volovik, The Universe in a helium droplet, Int. Ser. Monogr. Phys. 117 (2006) 1; C. Barceló, S. Liberati, M. Visser, Liv. Rev. Rel. 14 (2011) 3.
* (7) W.G. Unruh, Phys. Rev. Lett. 46 (1981) 1351.
* (8) J. Steinhauer et al., Nature 569 (2019) 688; J. Steinhauer, Nature Phys. 12 (2016) 959.
* (9) A. Iorio, Ann. Phys. 326 (2011) 1334.
* (10) A. Iorio, G. Lambiase, Phys. Let. B 716 (2012) 334; Phys. Rev. D 90 (2014) 025006.
* (11) J. Gooth, et al, Nature 547 (2017) 324.
* (12) U. Leonhardt et al, Phys. Rev. Lett. 122 (2019) 010404.
* (13) P. Castorina, D. Kharzeev, H. Satz, Eur. Phys. J. C 52 (2007) 187.
* (14) P. Castorina, A. Iorio, H. Satz, Int. J. Mod. Phys. E 24 (2015) 1550056; P. Castorina, D. Grumiller, A. Iorio, Phys. Rev. D 77 (2008) 124034.
* (15) R. Dardashti, K. P. Thebault, E. Winsberg, Brit. J. Phil. Science 68 (2015) 55.
* (16) R. Penrose, Gen. Rel. Grav. 28 (1996) 581.
* (17) S. Hawking, in General relativity and gravitation, Proc. GR17, Dublin (2004) pp. 56.
* (18) J. Polchinski, et al., J. High Energy Phys. 2013 (2013) 62.
* (19) R. B. Mann, Black Holes: Thermodynamics, Information, and Firewalls (Springer, Berlin, 2015).
* (20) G. ’t Hooft, (2016), arXiv:1612.08640v1.
* (21) J. Maldacena, L. Susskind, Fortsch. Phys. 61 (2013) 781.
* (22) R. Balbinot, A. Fabbri, S. Fagnocchi, A. Recati, I. Carusotto, Phys. Rev. A 78 (2008) 021603(R).
* (23) G. Acquaviva, A. Iorio, L. Smaldone, Phys. Rev. D 102 (2020) 106002.
* (24) A. Iorio, P. Pais, Phys. Rev. D 92 (2015) 125005.
* (25) A. Iorio, P. Pais, Ann. Phys. 398 (2018) 265.
* (26) A. Iorio, Int. J. Mod. Phys. D 24 5 (2015) 1530013.
* (27) A. Iorio, Frontiers in Materials 1 (2015) 36.
* (28) A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, A. K. Geim, Rev. Mod. Phys. 81 (2009) 109.
* (29) T. O Wehling, A. M. Black-Schaffer, A. V. Balatsky, Adv. Phys. 76 (2014) 1.
* (30) P. R. Wallace, Phys. Rev. 71 (1947) 622; G. W. Semenoff, Phys. Rev. Lett. 53 (1984) 2449.
* (31) K. S. Novoselov, A. K. Geim, et al, Science 306 (2004) 666.
* (32) M. Ciappina, A. Iorio, P. Pais, A. Zampeli, Phys. Rev. D 101 (2020) 036021.
* (33) A. Iorio, P. Pais, I. A. Elmashad, A. F. Ali, Mir Faizal, L. I. Abou-Salem, Int. J. Mod. Phys. D 27 (2018) 1850080.
* (34) A. Iorio, G. Lambiase, P. Pais, F. Scardigli, Phys. Rev. D 101 (2020) 105002.
* (35) H. Kleinert, Gauge fields in condensed matter, Vol II, World Scientific (Singapore) 1989; M. O. Katanaev, I. V. Volovich, Ann. Phys. 216 (1992) 1.
* (36) E. Witten, Nucl. Phys. B 311 (1988) 46.
* (37) J. Zanelli, et al., J. High Energy Phys. 1204 (2012) 058; J. High Energy Phys. 85 (2016) 201.
* (38) L. Andrianopoli, et al., J. High Energy Phys. 2001 (2020) 084.
* (39) A. Iorio, J. Phys.: Conf. Ser. 442 (2013) 012056; et al., J. Phys.: Cond. Matt. 28 (2016) 13LT01.
* (40) M. Bañados, C. Teitelboim, J. Zanelli, Phys. Rev. Lett. 69 (1992) 1849.
* (41) A. Iorio, G. Lambiase, G. Vitiello, Ann. Phys. 309 (2004) 151.
* (42) M. Cvetic and G. Gibbons, Ann. Phys. 327 (2012) 2617.
* (43) S. Deser, R. Jackiw, G. ’t Hooft, Three-dimensional Einstein gravity: Dynamics of flat space, Annals of Physics, 152 (1984) 220.
* (44) C. A. Bayona and N. R. F. Braga, Gen. Rel. Grav. 39 (2007) 1367.
* (45) L. Donnay, PoS Modave2015 (2016) 001 and references therein.
* (46) J.-M. Schlenker, E. Witten, No Ensemble Averaging Below the Black Hole Threshold, arXiv:2202.01372 [hep-th].
* (47) A. Iorio, P. Pais, Laser-graphene interaction and the improvement of gravity analogs, in preparation.
# Smart Insole: A Gait Analysis Monitoring Platform Targeting Parkinson’s
Disease Patients Based on Insoles
Dimitrios G. Boucharas, Christos Androutsos, George Gkois,
Vassilis D. Tsakanikas, Vasileios C. Pezoulas, Dimitrios Manousos, Vasileios
Skaramagkas,
Chariklia Chatzaki, Stathis Kontogiannis, Christos Spandonidis, Alexandros K.
Pantazis,
Nikolaos S. Tachos, Manolis Tsiknakis, Dimitrios I. Fotiadis, _Fellow, IEEE._
D.G. Boucharas, C. Androutsos, N.S. Tachos, G. Gkois, V.D. Tsakanikas, and
V.C. Pezoulas, are with the Unit of Medical Technology and Intelligent
Information Systems, Department of Materials Science and Engineering,
University of Ioannina, GR-45110, Ioannina, Greece.D. Manousos is with the
Institute of Computer Science, Foundation for Research and Technology Hellas
(FORTH), GR-70013, Heraklion, Crete, Greece.C. Chatzaki and M.Tsiknakis are
with Biomedical Informatics and eHealth Laboratory, Department of Electrical
and Computer Engineering, Hellenic Mediterranean University, Estavromenos,
GR-71004, Heraklion, Crete, Greece and with the Institute of Computer Science,
Foundation for Research and Technology Hellas (FORTH) and the Department of
Electric and Computer Engineering, Hellenic Mediterranean University,
GR-71004, Heraklion, Crete, Greece.V. Skaramagkas is with the Institute of
Computer Science, Foundation for Research and Technology Hellas (FORTH) and
the Department of Electric and Computer Engineering, Hellenic Mediterranean
University, GR-71004, Heraklion, Crete, Greece.S. Kontogiannis is with PD
Neurotechnology Ltd., GR-45110, Ioannina, Greece.C. Spandonidis is with Prisma
Electronics SA, GR-17564, P.Faliro Athens, Greece.A.K. Pantazis is with the
Microelectronics Research Group, Institute of Electronic Structure and Laser
(IESL), Foundation for Research and Technology Hellas (FORTH), GR-70013,
Heraklion, Crete, Greece.Dimitrios I. Fotiadis is with Biomedical Research
Institute, FORTH, GR-45110, Ioannina, Greece and with the Unit of Medical
Technology and Intelligent Information Systems, University of Ioannina,
GR-45110, Ioannina, Greece. (corresponding author e-mail<EMAIL_ADDRESS>
###### Abstract
During the preceding decades, human gait analysis has been the center of
attention for the scientific community, while the association between gait
analysis and overall health monitoring has been extensively reported.
Technological advances further assisted in this alignment, resulting in access
to inexpensive and remote healthcare services. Various assessment tools, such
as software platforms and mobile applications, have been proposed by the
scientific community and the market that employ sensors to monitor human gait
for various purposes ranging from biomechanics to the progression of
functional recovery. The framework presented herein offers a valuable digital
biomarker for diagnosing and monitoring Parkinson’s disease that can help
clinical experts in the decision-making process leading to corrective planning
or patient-specific treatment. More accurate and reliable decisions can be
provided through a wide variety of integrated Artificial Intelligence
algorithms and straightforward visualization techniques, including, but not
limited to, heatmaps and bar plots. The framework consists of three core
components: the insole pair, the mobile application, and the cloud-based
platform. The insole pair deploys 16 plantar pressure sensors, an
accelerometer, and a gyroscope to acquire gait data. The mobile application
formulates the data for the cloud platform, which orchestrates the component
interaction through the web application. Utilizing open communication
protocols enables the straightforward replacement of one of the core
components with a relative one (e.g., a different model of insoles),
transparently from the end user, without affecting the overall architecture,
resulting in a framework with the flexibility to adjust its modularity.
###### Index Terms:
Gait analysis, plantar pressure data, Parkinson’s disease, gait patterns,
computer architecture
## I INTRODUCTION
The interest in human gait analysis has been reignited during the last
decades. Technological advances have yielded a wide diversity of benefits for
analyzing human movement, transforming enormous laboratories equipped with
several video and infrared cameras, high-performance computers, and costly
treadmills, force plates, or walkways into low-cost, lightweight, flexible,
yet precise devices called sensors. Such devices allow individuals with
limited or no access to public healthcare services to obtain a gait
evaluation. Depending on the sensor placement, distinct motifs can be derived.
Foot-based sensors such as pressure and Inertial Measurement Unit (IMU)
sensors have been widely adopted in gait monitoring due to walking and running
instabilities being heavily linked with chronic diseases, and, by extension,
with overall health. Improper gait, in addition, results in more strain on
multiple body parts, which may lead to injuries.
Various assessment tools have been proposed by the scientific community and
the market that exploit data derived from foot sensors. These tools have not
only been utilized for sports science, biomechanics, and monitoring the
progression of functional recovery, but have also expanded to be a valuable
digital biomarker for diagnosing and monitoring several neurological disorders
such as Parkinson’s disease (PD). Furthermore, they can be employed by the
general population for preventing the progression of gait inconsistencies that
might lead to severe issues, if left untreated. The results acquired from the
imminent gait analysis might provide the clinical expert or the physician with
the information needed for corrective planning and patient-specific treatment.
PD is a neurological disorder that affects the ability of patients to control
their movement. The most common symptoms are tremors (e.g., hands, legs, and
jaw), stiffness of muscles, movement slowness, and sudden loss of balance and
coordination. The latter may lead to falls that are the leading cause of death
related to injuries among 65-year-old adults [1, 2]. As the disease
progresses, patients face difficulties with walking and talking. Risk factors
are yet to be discovered; however, it has been shown that the prevalence increases
with age [3]. Currently, there is no cure for the disease, but treatments are
available to relieve the patients from the symptoms and often drastically
maintain, if not improve, their quality of life [4]. Consequently, sensor-
based gait analysis assessment tools can help doctors administer patient-
specific therapy and corrective planning. On the other hand, patients will
save time by avoiding long queues in healthcare facilities, money due to less
frequent doctor visits, and energy as the data collection procedure occurs in
their daily living without requiring additional effort at the clinical sites.
Considering that monitoring can end up being long-term, it comes with
tremendous advantages for the individuals involved.
A variety of platforms has also been proposed to offer a monitoring system
that combines contextual information extracted from the Inertial Measurement
Unit (IMU) and pressure sensors with information derived from gait metrics.
Architectures have been constructed and refined to yield more robust results,
enabling several innovative features, such as cloud computing. Over the
preceding years, the examination of the gait has been aided by cloud
technology. For healthcare purposes, cloud computing plays a vital role in
processing large amounts of data using various decision-supporting methods.
Researchers have recognized that platforms, as clinical tools, can lead to
ideal gait analysis systems to observe abnormalities and identify the induced
severity. Their role is not limited to that: it further extends to
offering a complete view of the patients’ gait parameters by visualizing
specific pressure diagrams and IMU graphical representations, among others.
The approach described above goes beyond traditional healthcare practice,
defining a new paradigm known as Telehealth [5].
Pressure sensors, accelerometers, and gyroscopes are the most widely deployed
gait-based sensors, while their placement varies on different sites of the
lower human body. For more reliable and accurate analysis, the sensors should
be embedded in shoes, insoles, or outsoles, which capture
human motion and, by extension, movement patterns. The exploitation of cloud
services further supports the analysis with efficient resource allocation,
along with quick configuration and implementation. The collected raw data are
transmitted, stored, and analyzed by a pool of Machine Learning (ML) or Deep
Learning (DL) algorithms in the cloud. The effectiveness of this procedure can
potentially create a pathway for remote health monitoring that will allow
healthcare professionals such as clinicians, physicians, and formal caregivers
to continuously supervise and evaluate a person’s health data in real-time,
even from distant locations.
## II Related Work
According to the literature, several gait analysis platforms have been
developed, presented, and evaluated. Ziagkas et al. introduced a portable yet
low-cost tool for gait analysis in daily living by synthesizing a walking
profile [6]. The latter features several gait metrics based on data derived
from a pair of smart insoles where integrated sensors are deployed. A summary
of the gait analysis results is presented in a graphical manner on a platform
utilizing an intermediate Bluetooth connection box. To validate the underlying
architecture, a human motion-capturing ecosystem composed of high-resolution
infrared cameras, a central unit that acts as a camera synchronizer, and a
processing unit responsible for Vicon software execution was constructed. The
results indicate that some divergence can be observed between the two systems,
mainly affected by the low number of participants and the extracted gait
parameters; however, the provided measurements are valid. Loukovitis et al.
extended the previously described work by exploring the repeatability of the
developed system [7]. The findings, extracted from two distinct trials
conducted by 22 participants, support the reliability of the system in
analyzing gait parameters based on the intraclass correlation coefficient.
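The intraclass correlation coefficient used in this reliability study has a simple closed form. The sketch below (illustrative, not the authors' code) computes the one-way random-effects variant, ICC(1,1), from a subjects-by-trials matrix of measurements in plain Python.

```python
# A minimal sketch of the one-way random-effects intraclass correlation
# coefficient, ICC(1,1), often used in test-retest reliability studies
# like the one described above. The function name is illustrative.

def icc_1_1(ratings):
    """ratings: list of per-subject lists, one value per trial."""
    n = len(ratings)            # number of subjects
    k = len(ratings[0])         # number of trials per subject
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    # Between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, subj_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect trial-to-trial agreement yields an ICC of 1.0
print(icc_1_1([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))  # → 1.0
```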
Chen et al. proposed a gait monitoring framework based on piezoelectric
insoles focused on three distinct target groups suffering either from PD,
stroke or diabetes [8]. These chronic diseases, along with their progression,
could possibly result in functional gait inconsistencies. The developed
framework employs an Internet of Things approach to connect and exchange gait
data between heterogeneous devices and systems over the Internet. In essence,
the smart insole connects to the user’s computer via Bluetooth, and the Center
of Pressure (COP) data are uploaded to a cloud server from where a clinical
institution can retrieve them for medical purposes. Consequently, the presented
framework offers great convenience to both patients and doctors by avoiding
frequent visits to the hospital, leading to a complete solution.
Tsiouris et al. synthesized a mobile healthcare platform that acts as a
complete assistant targeting PD patients [9]. The accelerometer, gyroscope,
microphone, and pressure sensors were integrated into the proposed solution to
capture a wide variety of gait parameters to evaluate the severity of symptoms
caused by the disease. The sensors were placed on a commercial wrist-worn
device, an insole pair, and a mobile device. The latter was also used to estimate the
patient’s cognition, mental state, and diet through a series of tests that
produced motivational reviews and corresponding recommendations. Patient data
was uploaded to the cloud encrypted, where Artificial Intelligence (AI)
methods were recruited to estimate the symptoms and progression of the
disease. Another mobile application was designed and developed for direct
access to patient data and real-time monitoring by clinicians. Clinicians
might assess the information derived from the cloud-embedded algorithms to
adjust the duration and dosages of the treatment accordingly. The system also
included a mechanism to not only notify clinicians when new symptoms appeared
or existing ones worsened, but also to suggest which course of medication
modifications to take.
PD is difficult to diagnose because, at present,
1. it is based on a patient’s clinical evaluation [10];
2. although there are some promising leads, there are no definitive biomarkers for PD, and all relevant research findings work supportively toward confirming the clinical diagnosis;
3. the symptoms develop gradually, with a lengthy delay between the actual degeneration of dopaminergic neurons (70-80% loss) and the onset of recognizable clinical symptoms [11];
4. the variety of symptoms and the heterogeneity of their onset and progression contribute to the complexity of PD clinical phenotypes [12];
5. an extra challenge to the diagnosis arises from the fact that several syndromes mimic the symptoms of PD.
As a consequence, the evaluation of the disease severity, which plays a vital
role in tracking its progression, is also hard to address, especially in the
early stages of the disease. To estimate the stages of PD, Balaji et al.
compared several ML models that could help make an accurate diagnosis. Based
on the Unified PD Rating Scale (UPDRS) and the Hoehn and Yahr (HY) scale, a
classification problem was devised [13]. The dataset was based on vertical
ground reaction forces (VGRF) derived from PD patients scoring above 2.0 on
the HY scale.
Figure 1: The ’Smart Insole’ Architecture
The feature vectors were constructed with gait spatio-temporal parameters
(e.g., cadence, step length, and swing time) and fed into four distinct
classifiers. Feature selection techniques were employed to improve the
performance and robustness of the developed models. The classifiers employed
were Decision Trees (DT), Support Vector Machines (SVM), an Ensemble
Classifier (EC), and a Bayes Classifier (BC), with DT producing the most
promising results.
Lazzaro di Biase et al. attempted to summarize the discriminative gait
features that might assist in distinguishing PD patients from healthy
subjects, along with the PD stages [14]. In early stages, gait observations
exhibit increased variability reflecting instability, a few spatio-temporal
parameters demonstrate some degree of reduction, such as a shortened step
length, and dual-task sessions (e.g., walking while counting backwards) are
difficult for PD patients to perform. In mild to moderate stages, body asymmetry
diminishes. Accordingly, the double support time increases. In more advanced
stages of PD, the gait further worsens with increased frequency in freezing of
gait (FOG), loss of balance and lack of coordination.
Targeting the ageing population, García-Villamil et al. proposed a wearable
gait assessment device based on a 6-axis inertial sensor [15]. The sensor
data, collected across different surfaces such as indoor and outdoor flooring,
were utilized to measure several gait parameters. The system includes a mobile
application that visualizes gait-related information for users. Device
reliability was validated employing the intraclass correlation coefficient,
which was found to be 0.69. The relationship between the gait characteristics
obtained by the device and clinical tests was also explored. Subsequently, a
close connection was detected between some gait parameters (e.g., mean speed,
cadence, and mean stride) and the short physical performance battery test,
which combines gait speed, chair stand, and balance tests to produce a frailty
score. The results showed that frailty and falls could be predicted accurately
upon utilizing the score in the ageing population. Similarly, Apsega
et al. examined the ability of gait parameters to approximate the levels of
frailty (for example, frail, prefrail, robust) presented by the elderly
population [16]. Wireless inertial sensors were placed on various parts of the
lower human body to record the corresponding gait data streams. Logistic
models were recruited to explore associations between frailty and the gait
parameters extracted. Receiver operating characteristic curves were employed
to calculate the area under the curve for each parameter, while the Youden
index was considered for the cut-off values. The results indicate that the
gait parameters could distinguish frailty levels, especially between frail and
robust levels.
## III Our Contribution
To address the aforementioned needs, an advanced wearable solution has been
developed. The ’Smart Insole’ [17] is an innovative digital ecosystem,
consisting of a pair of smart insoles (integrating various sensors), a mobile
application and a cloud-based platform. Specifically, 16 distinct plantar
pressure sensors, an accelerometer, a gyroscope, and a magnetometer were
embedded into the insole to acquire raw gait data. A mobile application was
designed and developed to receive the insole data wirelessly (via BLE) and
provide valuable gait-related information to the end user (PD patients and
elders). The ’Smart Insole’ cloud platform encompasses the web services, the
well-defined components, and their interaction. The developed and integrated
web application is responsible for data ingestion, secure data storage, data
curation, and data integration, among other modules. The ’Smart Insole’
platform also incorporates an AI-based reasoning engine to provide decision
support to the clinical experts. The work presented herein describes in detail
the architecture and functionalities provided by the ’Smart Insole’ solution.
The proposed solution serves as supplemental expert support, contributing to a
more reliable, accurate, and patient-specific decision-making process based on
graphical representations, interpretable motifs, and gait characteristics that
are not limited to features derived from the gait cycle phases of older
individuals and patients with PD.
## IV ARCHITECTURE
The proposed architecture, depicted in Fig. 1, contains three core components:
the smart insoles, the cloud-based platform, and the mobile application. The
system targets elderly population and patients with PD who can use smart
insoles in their daily living, together with the corresponding mobile
application to obtain informative gait-related reports. It is worth mentioning
that the proposed architecture was designed so that the system is independent
of the insole hardware characteristics. The system can support third-party
insoles in addition to the developed ones without compromising overall
performance. Specifically, the sensor data handling engine
is responsible for mapping the incoming raw data to the Smart Insole data
model and performing any normalization to meet the specifications for the data
analysis layer. The cloud-based application targets healthcare professionals
as end users, providing them with rich and informative clinical gait analysis
reports. These reports provide gait-related data, metadata
visualizations, and comprehensive visual analytics, interlinked with gait
patterns.
The architecture design encompasses all the vital elements that compose the
web services and their interaction. The architecture can be hierarchically
decomposed into user management-related modules, the sensor data handling
engine, the data layer, and the analytics layer.
The user management-related module includes the security components (i.e., the
SSL/TLS protocols and the OAUTH2 framework) that are invoked during user
authentication and user access management procedures and use any necessary
resources from the hardware infrastructure and software components, as well as
from the private database of user credentials for user authentication.
The sensor data handling engine is designed as a three-stage pipeline
comprising the data ingestion, data curation, and data integration modules. It is
responsible for collecting the sensor data from the insoles and directing the
data to the REST API where the pre-processing (e.g., curation) takes place.
Next, the data layer includes dedicated databases where the data storage
occurs, from where the data access handler engine will enable the AI-powered
analysis module. Data storage relies on various technologies, including
NoSQL, SQL, and a file storage system. The AI-powered analysis module utilizes
advanced AI models and algorithmic pipelines to assess the data, extracting
valuable decision support artifacts such as graphs and metrics. The visual
analytics module then renders these graphs in the user interface through the
REST API once the end user logs in successfully with their credentials. REST APIs provide an
interface to all available system functions and data. All data, either
received or requested through the REST API, are passed to the data scheduler,
which is responsible for forwarding them to the corresponding functional
components of the system.
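The scheduler-and-components pattern described above can be sketched as follows. Component names, payload fields, and the registration API are hypothetical, not the platform's actual interfaces.

```python
# A minimal sketch of the data-scheduler pattern: every payload arriving
# through the REST API carries a type tag, and the scheduler forwards it
# to the matching functional component. All names are illustrative.

from typing import Any, Callable, Dict

class DataScheduler:
    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], Any]] = {}

    def register(self, payload_type: str, handler: Callable[[dict], Any]) -> None:
        self._handlers[payload_type] = handler

    def dispatch(self, payload: dict) -> Any:
        handler = self._handlers.get(payload["type"])
        if handler is None:
            raise ValueError(f"no component registered for {payload['type']!r}")
        return handler(payload)

scheduler = DataScheduler()
scheduler.register("raw_sensor", lambda p: f"stored {len(p['samples'])} samples")
scheduler.register("report_request", lambda p: f"report for session {p['session']}")

print(scheduler.dispatch({"type": "raw_sensor", "samples": [0.1, 0.2, 0.3]}))  # → stored 3 samples
```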
### IV-A THE SMART INSOLES
The insoles, as depicted in Fig. 2, are one of the main components of the
system, providing the plantar pressure and inertial measurement unit (IMU) data to
monitor the gait spatio-temporal characteristics. The pair of insoles
integrates a number of piezoresistive sensors to assess the pressure exerted
by different parts of the foot plantar during the gait cycle. The IMU sensor
includes an accelerometer, gyroscope, and magnetometer to capture the velocity
and orientation of the end-user foot during the various gait phases. The
insoles include an electronic component that acts as a communication gateway.
This subsystem incorporates a microprocessor that collects data from the
sensors and assigns them acquisition timestamps. The system connects
wirelessly with a mobile device via a Bluetooth Low Energy (BLE) protocol to
transfer the gait-related data. The Smart Insole prototype integrates 16
pressure sensors, which are manufactured using 3D printing technology.
Figure 2: The Smart Insoles
### IV-B THE MOBILE APPLICATION
In the ’Smart Insole’ system, a mobile device acts as a gateway to collect
data from smart insole devices via BLE and send them to the cloud server for
storage and analysis. An intermediate preprocessing step formulates the data
into the structure that the cloud platform requires. The mobile application
provides the ability to store gait-related
data from smart insole devices in a local SQLite database. These data are
synchronized with the remote server when the mobile device is connected to the
Internet (via Wi-Fi or a 4G/5G network). The application
is responsible for pairing with the smart insole device, the main component of
the system, and registering a new user by creating a user account and
configuring the insole settings.
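The store-locally-then-sync behaviour described above can be sketched with Python's `sqlite3` module. The table layout, column names, and function names are illustrative assumptions, and an in-memory database stands in for the on-device store.

```python
# An illustrative sketch of the mobile application's local buffering
# strategy: measurements are written to a local SQLite table with a
# `synced` flag, and a sync pass marks rows as uploaded once the cloud
# platform acknowledges them. Schema and names are hypothetical.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE measurements (
    id INTEGER PRIMARY KEY,
    insole_side TEXT,
    payload TEXT,
    synced INTEGER DEFAULT 0)""")

def store_measurement(side: str, payload: str) -> None:
    db.execute("INSERT INTO measurements (insole_side, payload) VALUES (?, ?)",
               (side, payload))

def sync_pending(upload) -> int:
    """Upload unsynced rows; mark each one synced when `upload` succeeds."""
    rows = db.execute("SELECT id, payload FROM measurements WHERE synced = 0").fetchall()
    for row_id, payload in rows:
        if upload(payload):  # e.g. a POST to the cloud REST API
            db.execute("UPDATE measurements SET synced = 1 WHERE id = ?", (row_id,))
    return len(rows)

store_measurement("left", '{"pressure": [12, 40, 7]}')
store_measurement("right", '{"pressure": [10, 38, 9]}')
print(sync_pending(lambda payload: True))  # → 2
```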
Furthermore, the mobile application provides a diverse set of capabilities for
its users, such as graphical representations of postprocessed measurements,
gait characteristics, and pressure distribution per activity period. The
processed gait-related data and metadata are derived from the Smart Insole
cloud-based platform. Data are transferred to the cloud platform in encrypted
form, safeguarded by the application.
Figure 3: The Functionalities of the ’Smart Insole’ mobile application
Figure 3 encapsulates the interactions between user, device, and web server
through the mobile application. The functionalities that reside in the mobile
application are user log-in and log-out, data visualization, data retrieval
from insole devices, search and connect with the insole devices, remote
storage (on server), and remote database synchronization.
The user must have a personal account to be able to use the
application. Upon successful login, a session (user
session) is created that lasts for a specified period of time, during which
the user does not need to go through the login process again.
Figure 4: The Interactions of the Smart Insole Components
Once the data collection procedure initiates, a process is performed to
automatically update the application with each new measurement. Every
measurement taken by the insole devices is stored by the application in the
local database. The synchronization process with the cloud infrastructure
depends on the existence of an Internet connection. The local database is used
to temporarily store raw data retrieved from the insole devices until the
mobile application can connect to the cloud-based platform, where the data are
permanently stored. To visualize the gait data, the GraphView library was
utilized through different types of graphs such as line graphs, bar charts,
scatter plots, and real-time graphs featuring scroll, scale, or zoom
functionality [18].
### IV-C THE WEB APPLICATION
The software of the proposed cloud technology platform is the Web application.
It includes the operating engine (backend) and the services of the working
environment (frontend). The back-end is responsible for all the processes
executed in the background (i.e. listening and responding to requests), while
the front-end is dedicated to user interaction such as user management and
patient monitoring. It is also responsible for API requests, data storage and
retrieval, and user authentication, enabling easy and quick expansion of
the system.
Figure 5: The insole raw sensor data report page.
Specifically, several databases can be connected to this service, and it can
serve many different types of applications. The primary benefit of the
application is the access control, which can be easily adapted to any
application connected to the service. Utilizing the service can reduce the
application workload caused by requests compared to applications employing
their own database connection and management schemes.
Figure 6: The insole raw data visualization.
In essence, the web application provides several analytical summary reports
based on the type of session (e.g., walking and standing balance) to the end
user. Subsequently, based on the raw data acquired from free walking, 10-meter
straight walking, timed up and go (TUG) and standing tests, corresponding
analytical summary reports are produced. The insole raw sensor report enables
the user to choose from the patient/insole pairs by scrolling down the
available list.
Figure 7: Visualizing force and pressure on raw sensor data.
After selecting a pairing, the user can search for previous raw data using the
integrated calendar within the time field or leave it empty to retrieve the
complete history without time-related constraints. Selecting a session allows
the user to choose from the pool of deployed sensors (Fig. 5) on the right and
left foot to visualize the corresponding data. Data visualization plays a
crucial and integral role in the clinician’s decision by offering a way to
observe the variability or divergence from the general population.
Figure 8: Gait parameter graphical representation and pressure heatmaps.
A figure (Fig. 6) illustrates the progression of raw sensor data over time
for the previously selected sensors, which clinical experts can inspect to
provide feedback to end users. On the other hand, for free
walking, straight 10-meter walking or TUG exercises, the system offers
automated detailed walking summary reports consisting of several gait
parameters: single support time, double support time, cadence, stance phase,
pre-swing, cycle time, load response, and terminal stance [19].
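Two of the listed parameters can be illustrated from annotated foot events. The sketch below assumes per-cycle heel-strike and toe-off timestamps in seconds; the function names and data layout are assumptions, not the platform's actual pipeline.

```python
# A hedged sketch of deriving two of the listed gait parameters from
# annotated foot events; timestamps and names are illustrative.

def cadence(heel_strikes):
    """Steps per minute from consecutive heel-strike timestamps of one foot."""
    cycle_times = [b - a for a, b in zip(heel_strikes, heel_strikes[1:])]
    mean_cycle = sum(cycle_times) / len(cycle_times)
    return 60.0 / mean_cycle * 2  # two steps per gait cycle

def stance_percentage(heel_strikes, toe_offs):
    """Stance phase as a percentage of the gait cycle for one foot."""
    stance = [t - h for h, t in zip(heel_strikes, toe_offs)]
    cycles = [b - a for a, b in zip(heel_strikes, heel_strikes[1:])]
    return 100.0 * sum(stance[:-1]) / sum(cycles)

hs = [0.0, 1.0, 2.0, 3.0]          # heel strikes, one per cycle
to = [0.6, 1.6, 2.6, 3.6]          # toe offs 0.6 s later
print(cadence(hs))                 # → 120.0 steps/min
print(stance_percentage(hs, to))   # ~60 % stance
```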
The mean value, standard deviation, and respective p values are also
calculated per gait parameter to synthesize box plots, while line plots are
utilized for graphical representations of force and stance (Fig. 7).
Furthermore, pressure heatmaps and center of pressure plots are recruited to
assess the plantar pressure distribution as depicted in Fig. 8. In conclusion,
standing balance reports consist of sway analysis graphs representing medial-
lateral and anterior-posterior deviation per foot, center of pressure plots,
and butterfly graphs. The motifs drawn by the conducted analysis and the
accompanying diagrams are thoroughly detailed in a prior study [20].
#### IV-C1 SMART INSOLE DECISION SUPPORT TOOLS
The decision support tools consist of two categories. The first category
includes a powerful computational and visualization engine, which extracts
spatiotemporal metrics and renders visual representations like scatter, box
and violin plots. These algorithmic pipelines and graphical representations
are able to reveal even the smallest abnormalities resulting from improper
gait behavior. The second category consists of a set of clinical reports
generated by AI. Due to its modularity, the specific tool can easily host
other AI models, even though the currently deployed ones pertain to PD.
Figure 9: Sway analysis based on Medial-lateral deviation.
The ’Smart Insole’ decision support tools exploited the ’Smart Insole’ dataset
(Protocol approval 279/14-05-2021 from Ethical Committee of General University
Hospital of Patras) to develop the AI models [21]. The dataset, which contains
contextual information on acceleration, orientation and plantar pressures, was
collected while healthy adults, the elderly, and PD patients completed three
to five different exercise protocols. In the first exercise, participants are
asked to walk 10 meters with a 180-degree turn at three speeds (e.g., slow,
normal, high), whereas in the second exercise, also known as the timed up and
go test, they begin seated and repeat the first exercise at a normal pace. In
the third test, subjects’ balance is
evaluated while standing for 10 seconds with their eyes open, followed by 10
seconds with their eyes closed, with their feet approximately 30 cm apart.
With regard to the presented version of the system, two AI models have been
developed and integrated. The first is a binary classifier, built on the
XGBoost algorithm, which classifies an individual as a PD patient or a non-PD
individual. The algorithm exhibits good performance, yielding 0.75, 0.79,
0.65, and 0.71 in accuracy, precision, recall, and F1 score, respectively.
This model can be used for screening, raising flags for individuals with an
”abnormal” gait profile and suggesting a full-scale assessment by the expert.
The second AI model refers to the assessment of UPDRS item 3.10, related to
gait. The model takes as input the gait features extracted from the raw data
of the predefined 10-meter walking exercise and classifies the patient on a
severity scale of 0, 1, 2, or 3. The model is built on a Random Forest
classifier, achieving 0.86, 0.87, 0.73, and 0.81 in accuracy, precision,
recall, and F1 score, respectively. The specific AI model can be utilized by
clinical experts to automatically assess PD patients regularly and provide
information about the degradation of their condition, or even to assess
pharmacological treatment and its influence on the specific scale.
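The reported accuracy, precision, recall, and F1 figures follow the standard confusion-matrix definitions, which the generic sketch below reproduces on toy labels. This is not the paper's evaluation code.

```python
# Standard binary classification metrics computed from a confusion
# matrix; a generic sketch of the definitions behind the reported
# figures, with toy labels (1 = PD patient, 0 = non-PD individual).

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(binary_metrics(y_true, y_pred))  # → (0.75, 0.75, 0.75, 0.75)
```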
As already described, the Smart Insole system collects spatiotemporal data
either from real-world free walking or from three predefined exercises that
are well known in clinical practice. Thus, exploiting the data derived from
the stationary balance sessions, a balance assessment report was designed
which provides rich information, along with visual patterns, about the
balance status of the patients. Specifically, a pipeline was developed to
create specialized heatmaps based on dynamic COP calculation. During a
balance session, most of the COP coordinates tend to gather around a small
area, resulting in an intense heatmap region. At the same time, observations
may exhibit variability, which leads to complementary heatmap regions
described as distinct points.
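The COP heatmap step can be approximated by simple 2D binning: a stable stance produces one intense cell, while outlying samples appear as isolated cells. Grid size, coordinate ranges, and sample values below are illustrative assumptions.

```python
# A minimal sketch of COP heatmap binning: coordinates are accumulated
# into a coarse 2D grid. Grid resolution and ranges are illustrative.

def cop_heatmap(points, bins=4, x_range=(0.0, 1.0), y_range=(0.0, 1.0)):
    grid = [[0] * bins for _ in range(bins)]
    x0, x1 = x_range
    y0, y1 = y_range
    for x, y in points:
        # clamp to the last bin so boundary samples are not dropped
        i = min(int((x - x0) / (x1 - x0) * bins), bins - 1)
        j = min(int((y - y0) / (y1 - y0) * bins), bins - 1)
        grid[j][i] += 1
    return grid

# Most samples cluster near (0.3, 0.3); two outliers land elsewhere.
samples = [(0.3, 0.3)] * 8 + [(0.9, 0.1), (0.1, 0.9)]
heat = cop_heatmap(samples)
print(max(max(row) for row in heat))  # → 8 (the dominant balance region)
```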
The gait-related reports of the system also include butterfly diagrams, which
were utilized to monitor the gait cycle by capturing the COP shifting patterns
during walking. The annotated foot states (e.g., heel strike, heel rise, toe
off, and foot flat) derived from the developed state machine contributed to
the butterfly diagram construction. The height, the passing straight lines,
and the symmetry of the trajectory are unique traits found in the diagrams
that are explored for their contribution to differentiating PD patients from
healthy individuals.
Some gait features may capture walking-session instabilities to a greater
extent than others. The stance, swing, single-support, and double-support
phases can capture gait-related time distributions. For instance, the single
support phase refers to the normalized time span during which only one foot is
in contact with the ground, while the double support phase corresponds to the
time both feet are on the ground. Undoubtedly, an individual with a relatively
high proportion of double support faces issues with walking, possibly due to
fear of falling or the need for a higher level of gait control. Hence, the distributions are
indirect indicators of a faulty or normal gait cycle. As a result, in the
Smart Insole web application and specifically for the gait-related session
report, circular diagrams were developed to visualize the gait-phases
patterns. The latter may contribute to the decision-making from the clinical
expert in the gait analysis process.
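The gait-phase distributions feeding those circular diagrams can be sketched from per-sample foot-contact flags. The data layout and function name are assumptions for illustration, not the system's actual implementation.

```python
# A hedged sketch of computing single- and double-support proportions
# from per-sample foot-contact flags (True = foot on ground).

def support_distribution(left_contact, right_contact):
    n = len(left_contact)
    double = sum(1 for l, r in zip(left_contact, right_contact) if l and r)
    single = sum(1 for l, r in zip(left_contact, right_contact) if l != r)
    return {"single_support": single / n, "double_support": double / n}

left  = [True, True, True, False, False, True, True, True]
right = [False, True, True, True, True, True, False, False]
print(support_distribution(left, right))  # → {'single_support': 0.625, 'double_support': 0.375}
```

A relatively high double-support share in this output would be the kind of indirect indicator of impaired gait discussed above.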
It is evident from the data gathered over the course of the Smart Insole
project that specific motifs can be drawn from sensors embedded into a pair of
insoles that are capable of capturing both gait and balance patterns. If the
work is developed further, advanced patterns with the potential to underpin a
method for detecting PD may emerge.
## V DISCUSSION & CONCLUSION
The framework presented herein acts as a decision support tool for clinical
experts who assess the gait quality in the population suffering from gait
inconsistencies, such as the elderly and PD patients. It is not only the
powerful computational and visualization engine that extracts features and
metrics and visually renders them to assist in the formation of the clinical
expert evaluation, but also the advanced AI algorithms that infer the
patients’ gait status.
The proposed architecture consists of four core components: the smart insoles,
the cloud platform, the web application, and the mobile application. Although
previous studies laid the stepping stones for assembling efficient gait
monitoring systems, the current work technically expands on them by proposing
an architecture design characterized by modularity. Contrary to traditional
schemes where the data acquisition component is the main parameter of the
system, the Smart Insole framework is decentralized and offers a flexible
solution. Hence, the developed architecture and the employed communication
protocols allow the replacement of the insoles (e.g., a core component of the
system) without compromising its overall efficiency, functionality, and
architecture. The proposed design also takes advantage of the power of the
cloud for storing, processing, and authenticating purposes, among others,
resulting in advanced resource management.
Regarding the future of health monitoring platforms, multiple modalities can
be exploited to synthesize an advanced Telehealth monitoring framework which
is not limited to considering gait-related metrics. A framework that takes
into account emotional, physical activity, mental empowerment, psychological
stability, and nutritional well-being may result in complete monitoring of
human health and address equally important problems.
## VI Acknowledgement
This research was funded by the European Regional Development Fund of the
European Union and Greek national funds through the Operational Program
Competitiveness, Entrepreneurship, and Innovation, under the call
RESEARCH–CREATE–INNOVATE (project code: T1E$\Delta$K-01888). The article
reflects only the authors’ views. The European and Greek Commissions are not
responsible for any use that may be made of the information it contains.
## References
* [1] Centers for Disease Control and Prevention Web-based Injury Statistics Query and Reporting System (CDC’s WISQARS), Accessed: August 17, 2022 [Online] Available: https://www.cdc.gov/injury/wisqars/
* [2] Burns, Elizabeth, and Ramakrishna, Kakara “Deaths from falls among persons aged $\geq$ 65 years—United States, 2007–2016.”, Morbidity and Mortality Weekly Report, vol. 67, no. 18 pp 509, 2018.
* [3] Marras, Carlo Efisio, et al. “Prevalence of Parkinson’s disease across North America.”, NPJ Parkinson’s disease, vol. 4, no. 1
* [4] The National Health Service in United Kingdom (NHS), Accessed: August 17, 2022 [Online] Available: https://www.nhs.uk/conditions/parkinsons-disease/treatment/
* [5] Tuckson, Reed V, Edmunds, Margo, and Hodgkins, Michael L “Telehealth.”, New England Journal of Medicine, vol. 377, no. 16, pp 1585–1592, 2017
* [6] Ziagkas, Efthymios, Loukovitis, Andreas, Zekakos, Dimitrios Xypolias, Chau, Thomas Duc-Phu, Petrelis, Alexandros, and Grouios, George “A Novel Tool for Gait Analysis: Validation Study of the Smart Insole PODOSmart®.”, Sensors, vol. 21, no. 17, pp 5972, 2021.
* [7] Loukovitis, Andreas, Ziagkas, Efthymios, Zekakos, Dimitrios Xypolias, Petrelis, Alexandros, and Grouios, George “Test-Retest Reliability of PODOSmart® Gait Analysis Insoles.”, Sensors, vol. 21, no. 22, pp 7532, 2021.
* [8] Chen, Junliang, Zhao, Yifan, Lin, Jingjing, Dai, Yanning, Hu, Boyi, and Gao, Shuo “A flexible insole gait monitoring technique for the internet of health things.”, IEEE Sensors Journal, vol. 21, no. 23, pp 26397–26405, 2021.
# Stretching the limits of multiparticle entanglement
Géza Tóth Department of Theoretical Physics, University of the Basque Country
UPV/EHU,
P. O. Box 644, E-48080 Bilbao, Spain IKERBASQUE, Basque Foundation for
Science, E-48013 Bilbao, Spain Wigner Research Centre for Physics, P. O. Box
49, H-1525 Budapest, Hungary
The classification of entangled mixed states in multiparticle systems is a
difficult task. Even for three-qubit pure states, six classes arise that are
inequivalent under Stochastic Local Operations and Classical Communication
(SLOCC), which can be used to define a classification of mixed states into six
classes [1]. For pure states of more than three particles, there are already infinitely many classes [2]. It is therefore desirable to find coarser classifications.
One possible classification is the following. We look for the largest group of particles that are entangled with each other while being non-entangled with the rest; depending on its size, we call the state $5$-, $10$- or $50$-particle entangled [3, 4]. The notions of entanglement depth and $k$-producibility have been defined this way. In more detail, we call a pure state of $N$ particles $k$-producible if it can be written as
$|\Psi_{1}\rangle\otimes|\Psi_{2}\rangle\otimes|\Psi_{3}\rangle\otimes\dots,$
(1)
where each $|\Psi_{l}\rangle$ is a state of at most $k$ particles. A mixed state is $k$-producible if it is a mixture of pure $k$-producible states. If a quantum state is not $k$-producible, then it is at least $(k+1)$-particle entangled, that is, it has an entanglement depth of at least $k+1$.
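Since the $k$-producibility of a pure product of entangled groups depends only on the sizes of those groups, the definition can be sketched in a few lines of Python; the helper names `is_k_producible` and `entanglement_depth` are ours, introduced for illustration:

```python
def is_k_producible(group_sizes, k):
    """A pure product of entangled groups is k-producible
    iff every group contains at most k particles."""
    return max(group_sizes) <= k

def entanglement_depth(group_sizes):
    """Smallest k for which the state is k-producible:
    the size of the largest entangled group."""
    return max(group_sizes)

# 100 particles split into five fully entangled 20-particle groups
print(entanglement_depth([20] * 5))         # 20
# one entangled 20-particle group plus 80 unentangled particles
print(entanglement_depth([20] + [1] * 80))  # 20
```

Both sample states have entanglement depth 20, even though their product structures are very different.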
There have been many groundbreaking experiments putting a lower bound on the entanglement depth of a quantum system, aiming at larger and larger depths and reaching entanglement depths in the thousands [5, 6, 7, 8, 9, 10]. At this point, an important question arises. If we have 100 particles in a $20$-particle entangled state, this can come about in various ways. For example, it can happen that all twenty-particle groups are fully entangled
$\Bigl{[}\frac{1}{\sqrt{2}}\bigl{(}|0\rangle^{\otimes 20}+|1\rangle^{\otimes
20}\bigr{)}\Bigr{]}^{\otimes 5},$ (2)
or it can also happen that there is a single twenty-particle group that is
genuine multiparticle entangled, while the rest of the particles are in the
trivial $|0\rangle$ state
$\Bigl{[}\frac{1}{\sqrt{2}}\bigl{(}|0\rangle^{\otimes 20}+|1\rangle^{\otimes
20}\bigr{)}\Bigr{]}\otimes|0\rangle^{\otimes 80}.$ (3)
It is natural to ask what further notions in entanglement theory can be used
to distinguish these two cases.
The article of Sz. Szalay from the Wigner Research Centre for Physics in
Budapest published in Quantum [11] is just doing that. The preliminary ideas,
laid down in previous works [12, 13, 14], are as follows. First, on level I,
the author characterizes the total system by the use of its _partitions,_
$\xi=X_{1}|X_{2}|X_{3}|\dots,$ (4)
where a part $X_{l}$ is a subsystem, possibly consisting of several elementary
subsystems (e.g., particles). _$\xi$ -uncorrelated states_ are just product
states of the form
$\varrho_{1}\otimes\varrho_{2}\otimes\varrho_{3}\otimes\dots,$ (5)
where $\varrho_{l}$ lives on subsystem $X_{l}$. _$\xi$ -separable states_ are
those which can be formed as mixtures of $\xi$-uncorrelated states, that is,
states that are separable for the partitioning given by $\xi$. After the level
I, the article defines level II and level III descriptions using fundamental
set theory that can handle in a coherent way a large variety of relevant cases
appearing in multiparticle systems. Level II is needed for handling mixtures
of states uncorrelated with respect to different partitions. For example,
considering three elementary subsystems $\mathrm{A}$, $\mathrm{B}$ and
$\mathrm{C}$, the
$\\{\mathrm{A}\mathrm{B}|\mathrm{C},\mathrm{B}\mathrm{C}|\mathrm{A},\mathrm{A}\mathrm{C}|\mathrm{B}\\}$-separable
states are mixtures of $\mathrm{A}\mathrm{B}|\mathrm{C}$-uncorrelated,
$\mathrm{B}\mathrm{C}|\mathrm{A}$-uncorrelated and
$\mathrm{A}\mathrm{C}|\mathrm{B}$-uncorrelated states. These states are not
considered tripartite entangled [1, 15].
So level II is about the possible states from which the state can be mixed,
then level III is about the possible states from which the state can be mixed
_and_ from which it cannot be mixed. For example, besides the cases known
earlier [1, 15], we mention that there are states that are
$\\{\mathrm{A}\mathrm{B}|\mathrm{C},\mathrm{B}\mathrm{C}|\mathrm{A},\mathrm{A}\mathrm{C}|\mathrm{B}\\}$-separable,
but neither
$\\{\mathrm{A}\mathrm{B}|\mathrm{C},\mathrm{B}\mathrm{C}|\mathrm{A}\\}$-separable,
nor
$\\{\mathrm{A}\mathrm{B}|\mathrm{C},\mathrm{A}\mathrm{C}|\mathrm{B}\\}$-separable,
nor
$\\{\mathrm{B}\mathrm{C}|\mathrm{A},\mathrm{A}\mathrm{C}|\mathrm{B}\\}$-separable;
that is, to mix them, shared bipartite entanglement is needed in all the three
bipartite subsystems [12, 16]. Another example is that of the states which are
$\mathrm{A}\mathrm{B}|\mathrm{C}$-separable (while not being
$\mathrm{A}|\mathrm{B}|\mathrm{C}$-separable) and also
$\\{\mathrm{B}\mathrm{C}|\mathrm{A},\mathrm{A}\mathrm{C}|\mathrm{B}\\}$-separable;
that is, they can be mixed without shared bipartite entanglement in
$\mathrm{A}\mathrm{B}$, if we have shared bipartite entanglement in
$\mathrm{B}\mathrm{C}$ and in $\mathrm{A}\mathrm{C}$. Such “roundabout” states
[12, 16] were constructed only recently [17].
In a nutshell, levels I and II describe the possible multipartite correlation
and entanglement _properties_ , and level III is about the _classification_ in
the strict sense. For the multipartite properties, correlation and
entanglement measures are also constructed, generalizing the _mutual
information_ and the _entanglement of formation_ or _relative entropy of
entanglement_ for the multipartite scenario. Partial orders can be defined for
the different sets on all the three levels, which give the structure of the
notions on the different levels, and can be expressed in diagrams. The interesting properties of this structure are also the subject of recent research [18]. This approach is compatible with the LO paradigm for correlation and the LOCC paradigm for entanglement. The state sets arising on
levels I and II are closed with respect to LO/LOCC, the measures are
correlation/entanglement monotones, and on level III the partial order goes
along the LO/LOCC convertibility among the classes.
The novel results of the manuscript, concerning the case when only
_permutation invariant properties_ are taken into account, fit well into this
general framework. This restriction is well motivated when an ensemble of
particles is described, which cannot be addressed one by one. For this, the
same construction as the above is built up, but now based on _integer
partitions_ ,
$\hat{\xi}=x_{1}|x_{2}|x_{3}|\dots,$ (6)
where $x_{l}$ is a possible subsystem-size (e.g., particle number).
_$\hat{\xi}$ -uncorrelated states_ are just product states of the form
$\varrho_{1}\otimes\varrho_{2}\otimes\varrho_{3}\otimes\dots,$ (7)
where now $\varrho_{l}$ lives on a subsystem of size $x_{l}$, without specifying which one. _$\hat{\xi}$ -separable states_ are those which can be formed as
mixtures of $\hat{\xi}$-uncorrelated states. Again, level II and level III
descriptions can be formulated analogously.
The article then gives a precise definition of $k$-producibility and $k$-partitionability in this picture. A partition of the type (4) is _$k$
-producible_, if all the subsystems contain _at most_ $k$ elementary
subsystems, e.g., particles. A partition is _$k$ -partitionable_, if the
number of subsystems is _at least_ $k$. Then, we can talk about $k$-producibly
uncorrelated and $k$-producibly separable states. We can also define
$k$-partitionably uncorrelated and $k$-partitionably separable states. (The
$k$-producibly separable states are called $k$-producible states in
entanglement theory, while $k$-partitionably separable states are
$k$-separable states. The article uses the more general naming, because it
considers correlation and entanglement in parallel, and the name
“$k$-separably uncorrelated” would not make sense.)
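Since these notions act on partitions, they are easy to make concrete. The sketch below, with the hypothetical helper `partitions`, enumerates the integer partitions of a small system and filters the $k$-producible ones (every part at most $k$) and the $k$-partitionable ones (at least $k$ parts):

```python
def partitions(n, max_part=None):
    """Yield the integer partitions of n as weakly decreasing lists."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

parts = list(partitions(4))
# [[4], [3, 1], [2, 2], [2, 1, 1], [1, 1, 1, 1]]
k_producible = [p for p in parts if max(p) <= 2]     # parts of size <= 2
k_partitionable = [p for p in parts if len(p) >= 2]  # at least 2 parts
print(k_producible)     # [[2, 2], [2, 1, 1], [1, 1, 1, 1]]
print(k_partitionable)  # every partition except [4]
```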
Then, _Young diagrams_ are used to represent the permutationally invariant
case. This is very expressive: what matters is to know how many times the
various subsystem sizes appear, and it is not important, which elementary
subsystems a given subsystem consists of. In a Young diagram, every row of
$x_{l}$ squares indicates a group of $x_{l}$ elementary subsystems forming a
subsystem. A Young diagram of horizontal size $k$ and vertical size
$k^{\prime}$ corresponds to a partition being $k$-producible and
$k^{\prime}$-partitionable. The _conjugation_ of Young diagrams, which is the
flip with respect to the diagonal, interchanges the horizontal and vertical
sizes, establishing an interesting _duality_ , connecting producibility and
partitionability.
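The conjugation duality can be verified directly; `conjugate`, below, transposes a Young diagram given as a weakly decreasing list of row lengths (an illustrative sketch of ours, not code from the article):

```python
def conjugate(partition):
    """Transpose a Young diagram: column lengths of the original
    diagram become the row lengths of the conjugate."""
    if not partition:
        return []
    return [sum(1 for row in partition if row > col)
            for col in range(partition[0])]

p = [3, 1]
q = conjugate(p)   # [2, 1, 1]
# producibility (widest row) and partitionability (number of rows) swap:
assert max(p) == len(q) and len(p) == max(q)
```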
Finally, _stretchability_ , appearing in the title, is the difference of
producibility and partitionability. Hence, the Young diagram mentioned above
would have a stretchability $k-k^{\prime}$. The notions can be clearly
understood based on Figure 6 in [11]. All these can be applied to define the
stretchability for correlation and for entanglement, combining the advantages
of producibility and partitionability in a balanced way. For $N$ particles,
the stretchability of entanglement is $N-1$ if the state is fully $N$-partite entangled, and it is $-(N-1)$ if the state is fully separable. $k$-stretchability combines the advantages of $k$-partitionability and $k$-producibility: it is large if there is a small number of large correlated or entangled subsystems, and it is small if the subsystems are smaller or if there are too many of them. For example, for the states (2) and (3), the stretchability is $+15$ and $-61$, respectively. Also,
the stretchability of correlation/entanglement is non-increasing under LO/LOCC. In short, $k$-stretchability is a new quantity, added to $k$-producibility and $k$-separability, to better characterize the multipartite entanglement of a quantum state.
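The numbers quoted for the states (2) and (3) can be checked with a one-line function: stretchability is the largest part of the partition minus the number of parts (`stretchability` is our illustrative name).

```python
def stretchability(group_sizes):
    """Producibility (largest part) minus partitionability
    (number of parts) of a partition."""
    return max(group_sizes) - len(group_sizes)

print(stretchability([20] * 5))         # state (2): 20 - 5  = +15
print(stretchability([20] + [1] * 80))  # state (3): 20 - 81 = -61
N = 100
print(stretchability([N]))       # fully N-partite entangled: N - 1 = 99
print(stretchability([1] * N))   # fully separable: -(N - 1) = -99
```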
## References
* [1] A. Acín, D. Bruß, M. Lewenstein, and A. Sanpera. “Classification of mixed three-qubit states”. Phys. Rev. Lett. 87, 040401 (2001).
* [2] F. Verstraete, J. Dehaene, B. De Moor, and H. Verschelde. “Four qubits can be entangled in nine different ways”. Phys. Rev. A 65, 052112 (2002).
* [3] Anders S. Sørensen and Klaus Mølmer. “Entanglement and extreme spin squeezing”. Phys. Rev. Lett. 86, 4431–4434 (2001).
* [4] Otfried Gühne, Géza Tóth, and Hans J Briegel. “Multipartite entanglement in spin chains”. New J. Phys. 7, 229 (2005).
* [5] Christian Gross, Tilman Zibold, Eike Nicklas, Jerome Esteve, and Markus K Oberthaler. “Nonlinear atom interferometer surpasses classical precision limit”. Nature (London) 464, 1165–1169 (2010).
* [6] Bernd Lücke, Jan Peise, Giuseppe Vitagliano, Jan Arlt, Luis Santos, Géza Tóth, and Carsten Klempt. “Detecting multiparticle entanglement of Dicke states”. Phys. Rev. Lett. 112, 155304 (2014).
* [7] O. Hosten, N. J. Engelsen, R. Krishnakumar, and M. A. Kasevich. “Measurement noise 100 times lower than the quantum-projection limit using entangled atoms”. Nature (London) 529, 505–508 (2016).
* [8] R. McConnell, H. Zhang, J. Hu, S. Ćuk, and V. Vuletić. “Entanglement with negative Wigner function of almost 3,000 atoms heralded by one photon”. Nature (London) 519, 439–442 (2015).
* [9] Florian Haas, Jürgen Volz, Roger Gehr, Jakob Reichel, and Jerome Esteve. “Entangled states of more than 40 atoms in an optical fiber cavity”. Science 344, 180–183 (2014).
* [10] Yi-Quan Zou, Ling-Na Wu, Qi Liu, Xin-Yu Luo, Shuai-Feng Guo, Jia-Hao Cao, Meng Khoon Tey, and Li You. “Beating the classical precision limit with spin-1 Dicke states of more than 10,000 atoms”. Proc. Natl. Acad. Sci. U.S.A. 115, 6381–6385 (2018).
* [11] Szilárd Szalay. “$k$-stretchability of entanglement, and the duality of $k$-separability and $k$-producibility”. Quantum 3, 204 (2019).
* [12] Szilárd Szalay and Zoltán Kökényesi. “Partial separability revisited: Necessary and sufficient criteria”. Phys. Rev. A 86, 032341 (2012).
* [13] Szilárd Szalay. “Multipartite entanglement measures”. Phys. Rev. A 92, 042329 (2015).
* [14] Szilárd Szalay, Gergely Barcza, Tibor Szilvási, Libor Veis, and Örs Legeza. “The correlation theory of the chemical bond”. Sci. Rep. 7, 2237 (2017).
* [15] Michael Seevinck and Jos Uffink. “Partial separability and entanglement criteria for multiqubit quantum states”. Phys. Rev. A 78, 032101 (2008).
* [16] Szilárd Szalay. “Quantum entanglement in finite-dimensional Hilbert spaces” (2013). arXiv:1302.4654.
* [17] Kyung Hoon Han and Seung-Hyeok Kye. “Construction of three-qubit biseparable states distinguishing kinds of entanglement in a partial separability classification”. Phys. Rev. A 99, 032304 (2019).
* [18] Kyung Hoon Han, Seung-Hyeok Kye, and Szilárd Szalay. “Partial separability/entanglement violates distributive rules” (2019). arXiv:1911.06496.
# Towards True Lossless Sparse Communication in Multi-Agent Systems
Seth Karten, Carnegie Mellon University
Mycal Tucker, Massachusetts Institute of Technology
Siva Kailas, Carnegie Mellon University
Katia Sycara, Carnegie Mellon University
Correspondence: <EMAIL_ADDRESS>
###### Abstract
Communication enables agents to cooperate to achieve their goals. Learning
when to communicate, i.e., sparse (in time) communication, and whom to message
is particularly important when bandwidth is limited. Recent work in learning
sparse individualized communication, however, suffers from high variance
during training, where decreasing communication comes at the cost of decreased
reward, particularly in cooperative tasks. We use the information bottleneck
to reframe sparsity as a representation learning problem, which we show
naturally enables lossless sparse communication at lower budgets than prior
art. In this paper, we propose a method for true lossless sparsity in
communication via Information Maximizing Gated Sparse Multi-Agent
Communication (IMGS-MAC). Our model uses two individualized regularization
objectives, an information maximization autoencoder and sparse communication
loss, to create informative and sparse communication. We evaluate the learned
communication ‘language’ through direct causal analysis of messages in non-
sparse runs to determine the range of lossless sparse budgets, which allow
zero-shot sparsity, and the range of sparse budgets that will incur a reward
loss, which is minimized by our learned gating function with few-shot
sparsity. To demonstrate the efficacy of our results, we experiment in
cooperative multi-agent tasks where communication is essential for success. We
evaluate our model with both continuous and discrete messages. We focus our
analysis on a variety of ablations to show the effect of message
representations, including their properties, and lossless performance of our
model.
## 1 Introduction
In multi-agent teams, communication is necessary to successfully complete
tasks when agents have partial observability of the environment. Multi-agent
reinforcement learning (MARL) has recently seen success in scenarios that
require communication [6, 20, 8, 17, 13]. Sparse multi-agent communication
(wherein agents communicate during only some time-steps) has been shown to be
an effective solution to internet packet routing [18], multi-robot navigation
[7], complex multiplayer online games such as StarCraft [20, 19, 10, 6], and
human-agent teaming [11]. In particular, these successes have been achieved
using neural network architectures in conjunction with a reinforcement
learning framework. Simultaneously, research in individualized multi-agent
communication [20, 19, 3, 1] has solved sparse cooperative-competitive multi-
agent problems where adversaries are listening, and sparsity is built into
their competitive objective. But such research is unable to provide sparse
individualized communication in fully-cooperative settings, where there is no
built-in incentive. This is particularly limiting in real-world settings
where multiple robots may need to adhere to bandwidth/budget restrictions. A
budget (or bandwidth) $b$ defines the maximum percentage of the time an agent
may communicate.
Figure 1: Overview of our multi-agent architecture with gated sparse,
informative communication. At every timestep, each agent receives an occluded
observation $x$. Each agent creates a communication message, which is passed
to the learned gating function $g$ as well as the Decoder. The gating function
determines whether to communicate the message to the other agents. The Decoder
receives all messages and attempts to reconstruct the full state of the
environment.
Emergent communication enables agents to learn a set of communications vectors
apt for solving a particular task; however, learning emergent communication
simultaneously with an action policy is highly unstable. Agents often converge
to undesirable policies in which communication is ignored, unless special
training terms are used [5, 16]. Enforcing sparse communication, i.e.,
limiting the number of messages over time or communicating within a
bandwidth/budget, only worsens this problem due to the additional constraint.
Approaches using the information bottleneck framework [21] may adequately address sparsity constraints [25], but, due to their objective, exhibit a trade-off
between the total bandwidth and task performance. In these scenarios, the
agents fail to send necessary messages and transmit unnecessary messages,
which we dub null communications. In fact, many papers on sparsity suggest
lossless sparsity, but in actuality, have a non-trivial decrease in reward.
In this work, we propose a novel framework, Information Maximizing Gated
Sparse Multi-Agent Communication (IMGS-MAC), which aims to learn a
communication-action policy and then enforce a sparse communication budget
(learning when and whom to send messages) with lossless performance. Our key
insight in IMGS-MAC is reframing the sparse multi-agent communication problem
as a representation learning problem. The use of an information maximizing
autoencoder prevents shortcut solutions in order to structure the latent
communication space to allow for high reward with little communication. After
learning a non-sparse communication policy, we analyze the direct causal
effect of choosing to send each token to any other agent to determine null
messages. Then, IMGS-MAC uses a table of these null messages to prevent them
from being emitted, enabling sparsity with lossless performance without
additional reinforcement learning, which we call zero-shot sparsity. To
further promote sparsity for over-constrained budgets, we finetune our model
using an individualized communication regularization term for a learned
gating/targeting function $g$, which we call few-shot sparsity.
## 2 Related Work
### 2.1 Emergent Communication Vectors
Prior art in emergent communication establishes how agents may learn to
communicate to accomplish their goals with continuous communication vectors
[10, 20, 19]. Motivated by human communication in which people speak only when
necessary and using only a discrete set of words, we wish for agents to learn
sparse (in number of listeners over time) and discrete communication. While
previous work has been successful in learning discrete communication vectors
[12, 15, 6, 2, 7], the learned communication conventions often exhibit
undesirable properties. Learning discrete prototypes has been shown to promote
robustness in noisy communication channels, as well as human interpretability
and zero shot generalization [22]. Similar to word embeddings in natural
language processing, they capture the relationship between vectors. However,
many of these methodologies only try to learn token meanings through rewards.
In our work, we show that grounding messages in reproducing the concatenated
state of all agents with an autoencoder creates desirable representations
regardless of continuous or discrete settings.
### 2.2 Sparsity: Gating Total Messages
In this work, we attempt to reduce communication in MARL problems through
gating total messages. Gating methods learn a function which dictates whether
an agent will communicate to each other agent at any given timestep. Some
methods try to learn a gating probability to decide whether to broadcast a
budget, but these are unable to follow a communication budget [19, 9]. In
reward-based sparse communication [11, 24], by penalizing communication reward
during training, gating/targeting methods have reduced communication. However,
this method is not able to adequately choose a budget (what maximum percentage
of the time to communicate). Overall, gating methods are high variance and
often unstable [2]. Rather than building the objective into the reward, I2C
[4] tries to measure the causal effect of an individualized message through a
learned Q-value. However, I2C only tries to address sparse targeting in the
lossless sparsity case and fails to account for the effect of message
representation. In our work, we measure the actual effect of each token and
mask the emergent vocabulary accordingly.
## 3 Preliminaries
We formulate our setup as a centralized training, decentralized execution
(CTDE) [6], partially observable Markov Decision Process with individualized
communication (POMDP-Comm). Formally, our problem is defined by the tuple,
$(\mathcal{S},\mathcal{A},\mathcal{M},\mathcal{T},\mathcal{R},\mathcal{O},\Omega,\gamma)$.
We define $\mathcal{S}$ as the set of states, $\mathcal{A}_{i}\,,\,i\in[1,N]$
as the set of actions, which includes task specific actions, and
$\mathcal{M}_{i}$ as the set of communications for $N$ agents. $\mathcal{T}$
is the transition between states due to the multi-agent joint action space
$\mathcal{T}:\mathcal{S}\times\mathcal{A}_{1},...,\mathcal{A}_{N}\to\mathcal{S}$.
$\Omega$ defines the set of observations in our partially observable setting.
The partial observability requires communication to complete the tasks
successfully.
$\mathcal{O}_{i}:\mathcal{M}_{1},...,\mathcal{M}_{N}\times\mathcal{S}\to\Omega$
maps the communications and state to a distribution of observations for each
agent. $\mathcal{R}$ defines the reward function and $\gamma$ defines the
discount factor.
### 3.1 The Sparsity Objective
The multi-agent emergent communication problem is phrased as a combination of
a Lewis game [14] and the information bottleneck [21]. We seek to develop a
message representation $M$, which contains sufficient referential and ordinal
information to successfully complete a task. Notably, the information
bottleneck defines a trade-off between referential ($X$) mutual information,
$I(X;M)$, which is observable to an agent, and ordinal ($Y$) mutual
information, $I(M;Y)$, which requires coordination between agents.
The communication graph $G_{t}=(V,E)$ is a set of agents (vertices) and active
communication edges between them, where connectivity changes at each timestep.
Messages flow through the edges from agents to agents, $E:v_{i}\to v_{j}$. We
aim to learn a masking function $g$ to dynamically modify the graph to prevent
messages from flowing along the graph. The total number of bits communicated,
$s(M)$ can be defined in terms of vertices (gating), $v\in V$,
$s(M)=\sum_{m\in\mathcal{M}}v_{m}$ or in terms of edges (targeting), $e\in E$,
$s(M)=\sum_{m\in\mathcal{M}}e_{m}$, over an episode. One can see that gating
is a special form of targeting in which a vertex is disjoint from the graph.
We will use gating and targeting interchangeably, but in terms of sparsity,
limit the total number of message edges during an episode.
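As a concrete illustration of the two counts, the toy sketch below (our own, with hypothetical names) tallies $s(M)$ over an episode under gating, where a vertex is counted once per timestep in which it sends anything, and under targeting, where every active edge is counted:

```python
def s_gating(episode):
    """Vertex count: agents with at least one outgoing edge per timestep."""
    return sum(len({src for src, _ in edges}) for edges in episode)

def s_targeting(episode):
    """Edge count: every (sender, receiver) pair per timestep."""
    return sum(len(edges) for edges in episode)

# three agents over two timesteps; edges are (sender, receiver) pairs
episode = [
    {(0, 1), (0, 2), (1, 2)},  # agent 0 broadcasts; agent 1 targets agent 2
    {(2, 0)},                  # only agent 2 speaks
]
print(s_gating(episode), s_targeting(episode))  # 3 4
```

As the example shows, targeting refines gating: disconnecting a vertex removes all of its edges at once.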
In MARL, the objective of sparse communication is to minimize the total number
of bits communicated while maximizing team task performance,
$\max\limits_{\pi:\mathcal{S}\to\mathcal{A}\times\mathcal{M}}\mathbb{E}\left[\sum_{t\in\mathcal{T}}\sum_{i\in N}\gamma\mathcal{R}(s_{t},a_{t})\right]\quad\text{s.t. }(a_{t},m_{t})\sim\pi,\ s_{t}\sim\mathcal{T}(s_{t-1}),$ (1)
subject to $\min\mathbb{E}_{M\sim\pi}\left[s(M)\right].$
That is, to achieve this objective, first one maximizes task performance; then
one reduces total communication while keeping task performance fixed.
###### Definition 3.1 (Lossless Sparse Communication).
A communication policy $\pi_{m}$ is lossless and sparse iff it satisfies the
objective in equation 1. A lossless sparse communication policy defines the
minimum sparse budget (fraction of total messages) $b^{*}$.
Most sparse communication work rephrases the $\min\max$ problem to a single
objective by introducing a Lagrangian,
$\max\limits_{\pi:\mathcal{S}\to\mathcal{A}\times\mathcal{M}}\mathbb{E}\left[\sum_{t\in\mathcal{T}}\sum_{i\in N}\gamma\mathcal{R}(s_{t},a_{t})-\lambda s(m_{t})\right]\quad\text{s.t. }(a_{t},m_{t})\sim\pi,\ s_{t}\sim\mathcal{T}(s_{t-1}),\ m_{AVG}<b.$ (2)
However, depending on the Lagrange multiplier, the objective in equation 1 is
not always the same as equation 2. Due to the dual-objective, equation 2 also
introduces the possibility of suboptimal sparse communication even when
lossless sparse communication is possible. It also explains the high variance
of lossless sparsity in prior art [2].
###### Definition 3.2 (Sub-Optimal Sparse Communication).
A communication policy $\pi_{M}$ is suboptimal and sparse iff there exists a
trade-off between task performance and messaging constraints as defined in
equation 2.
Thus, in our methodology, we cannot directly optimize equation 2. Recall that
in emergent communication, messages are generated based on their observations.
This implies that, in terms of the information bottleneck, messages represent
a combination of referential, $I(X;M)$, and ordinal, $I(M;Y)$, information.
That is, observations help guide ordinal (task-specific) information. Suppose we have a Lagrangian objective (see section 4.1) that allows our
messages to have independent referential information. Then, given a
communication policy which adequately solves the task, one can determine the
ordinal utility of each token. By removing unnecessary tokens, we can satisfy
the objective in equation 1. Thus, in our methodology, we emphasize learning
emergent communication with properties that enable sparse communication with
lower optimal budgets $b^{*}$ (lossless sparsity).
## 4 Proposed Methodology
Algorithm 1 IMGS-MAC
1: $\theta\leftarrow\text{randomly initialized network parameters}$
2: $\texttt{useDiscreteMessaging}\leftarrow\\{true|false\\}$
3: while not converged do
4: for $i\leftarrow 1\text{ to }N$ {simultaneously} do
5: $x^{i}\sim\mathcal{S}$
6: $h^{i}\leftarrow\texttt{GRU}(x^{i})$
7: if useDiscreteMessaging then
8: $m^{i}\leftarrow\texttt{DiscreteProtoNet}(h^{i})$
9: else
10: $m^{i}\leftarrow h^{i}$
11: end if
12: $\texttt{SendMessages}(m^{i}\odot g(h^{i}))$
13: $\bar{m}^{i}\leftarrow\texttt{AggregateMessages}()$
14: $\tilde{h}^{i}\leftarrow\texttt{GRU}(\\{h^{i},\bar{m}^{i}\\})$
15: $a^{i},v^{i},\tilde{x}^{i}\leftarrow\pi(\tilde{h}^{i}),V(\tilde{h}^{i}),\texttt{DecoderNet}(\tilde{h}^{i})$
16: $L\leftarrow\pi\texttt{Loss}(a^{i},v^{i})+\mathcal{L}_{1}(x,\tilde{x}^{i})+\mathcal{L}_{2}(m^{i}_{AVG})$
17: end for
18: end while
In this section, we introduce the IMGS-MAC architecture as well as two types
of individualized regularization. The first is an autoencoder, which is used
to stabilize the dual training of the communication-action policy. The latter
is an individualized communication penalty to enforce each agent individually
follows a fixed communication budget/bandwidth. Note that it is important to
provide individualized regularization, as otherwise the gradient signal will
not be adequately recognized. Our model builds on related art [19, 2], but our
technique can be easily applied to any individualized MARL communication
module. Below, we introduce our information maximization autoencoder and
individualized communication regularization. Overall, the combined framework
can be observed in Alg. 1.
### 4.1 Sparsity through Information Maximization
The information bottleneck principle [21] is naturally encoded into any
communication module that uses deep learning. By creating a latent
representation, any nontrivial solution enforces the network to provide the
relevant information within the communication vector. Rather than requiring
centralized execution to maintain sparsity through the information bottleneck,
we provide a form of information regularization that allows for individualized
communication. Additionally, we enforce a structured representation for
message tokens, ensuring that tokens represent independent referential and
ordinal information from their observations.
We define the autoencoder as follows: The communication module of our network
serves as the encoder. Each agent produces their own hidden state $h^{i}$ and
receives communication vectors $m^{j}$ such that $i\neq j$. For each agent,
the model feeds $h^{i}+m^{j}$ into the decoder. We then calculate the $l2$
loss $U(s_{t},s_{t}^{i,\texttt{decoded}})$ between the state of all agents
$s_{t}=\\{x^{1}_{t},\dots,x^{N}_{t}\\}$ and the decoded state
$s_{t}^{i,\texttt{decoded}}$, which effectively measures the similarity
between the latent communication and the concatenated state of all agents.
$\mathcal{L}_{1}(\theta)=\lambda_{1}U(s_{t},s_{t}^{i,\texttt{decoded}})$ (3)
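A minimal numpy sketch of this reconstruction loss (Eq. 3); the function name and the toy vectors are ours, assuming each agent's observation is a flat array:

```python
import numpy as np

def reconstruction_loss(states, decoded, lam=1.0):
    """l2 loss between the concatenated true state of all agents
    and one agent's decoded reconstruction."""
    target = np.concatenate(states)
    return lam * float(np.sum((target - decoded) ** 2))

states = [np.array([1.0, 0.0]), np.array([0.5, 0.5])]  # two agents
decoded = np.array([1.0, 0.0, 0.0, 0.5])               # imperfect decode
print(reconstruction_loss(states, decoded))  # 0.25
```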
To enable sparsity, we first train IMGS-MAC with the autoencoder module and
non-sparse communication ($b=1$). Afterwards, we run evaluation episodes while
collecting data regarding each message token to detect null messages.
###### Definition 4.1 (Null Communication Vector).
A null communication vector from agent $i$ provides a lack of information to
another agent $j$. That is, in terms of the information bottleneck,
$I(m^{i};y^{j})=0$.
To determine the mutual information between a message $m$ and the task
specific information $y$, we measure if there is a change in the reward within
a small $\epsilon\approx 10^{-3}$. If there is no significant change, we consider
this token a null message.
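The null-message test reduces to thresholding measured reward changes; a sketch with hypothetical token names and reward deltas:

```python
def null_tokens(reward_deltas, eps=1e-3):
    """Tokens whose removal changes evaluation reward by less
    than eps carry no ordinal information and are masked out."""
    return {tok for tok, delta in reward_deltas.items() if abs(delta) < eps}

# per-token reward changes measured over evaluation episodes (made up)
deltas = {"t0": 0.0002, "t1": -0.8, "t2": 0.00005, "t3": 1.3}
print(sorted(null_tokens(deltas)))  # ['t0', 't2']
```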
While simple, in our experiments, we show that by combining this trick with
strong latent representations, our model can remove larger amounts of
unnecessary communication, or null communication vectors, without impacting
the performance. In fact, our lossless sparsity method requires no additional
reinforcement learning training, which we define as zero-shot sparsity.
Similar to zero-shot learning, which requires no additional data to satisfy an
objective, zero-shot sparsity enables satisfaction of sparse communication
constraints from non-sparse training through careful analysis of the emergent
communication policy. Our methodology exhibits zero-shot sparsity since no
additional reinforcement learning training is required to enforce sparsity
given our non-sparse model with informative communication, which is shown in
section 5.
### 4.2 Sparsity through Individualized Regularization
In the overconstrained bandwidth case, $b<b^{*}$, we will not be able to
maximize task performance, which induces the suboptimal sparsity case.
However, we can use the properties of lossless sparsity to maximize
performance such that $m_{AVG}\leq b$. We combine previous techniques with a
second regularization term, a per-agent communication penalty
$\mathcal{L}_{2}$. The penalty depends on the nature of the communication
budget. At each discrete time-step $t$, each agent has the opportunity to
choose to emit a message. Thus, we define our budget $b$ as a fraction of the
total agents multiplied by the time-steps in which we measure communications.
We let $m_{AVG}$ define the actual fraction of messaging. Finally, we can
define the regularization penalty,
$\mathcal{L}_{2}(\theta)=\lambda_{2}\big\lVert m_{AVG}^{i}-(b+(1-b^{*}))\big\rVert_{2}^{2}$ (4)
where we penalize messages when $b<m_{AVG}<b^{*}$.
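A minimal sketch of the per-agent penalty in Eq. 4; restricting the penalty to the $b<m_{AVG}<b^{*}$ regime follows the text, while variable names are ours:

```python
def budget_penalty(m_avg_i, b, b_star, lam2=1.0):
    """L2(theta) = lambda_2 * || m_avg^i - (b + (1 - b*)) ||^2 (Eq. 4).
    Applied per agent, only in the overconstrained regime b < m_avg^i < b*."""
    target = b + (1.0 - b_star)
    if b < m_avg_i < b_star:
        return lam2 * (m_avg_i - target) ** 2
    return 0.0
```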
Similar to few-shot learning, which uses only a limited amount of additional
data, we define few-shot sparsity as enabling the satisfaction of sparse
communication constraints from non-sparse training through limited additional
MARL training. We quantify
the amount of data in our experiments, notably Fig. 3. We finetune our model
using the regularization penalty in Eq. 4 to observe overconstrained budgets,
thus exhibiting few-shot sparsity.
## 5 Experiments
In this section, we first describe the benchmark environment. Then, we present
ablations showing the efficacy of our sparse model with informative
communication. As stated in section 2, IC3Net and I2C provide close framework
compatibility. We compare IMGS-MAC with IC3Net with non-sparse ($b=1$)
communication to understand the effect of our information maximizing
autoencoder in developing independent referential (based in observations $x$)
representations, $I(m_{j};m_{k})=0$. We evaluate with both continuous and
discrete messages to show the necessity of using our methodology to develop
structured latent tokens (messages $m$). Then we show the few-shot sparsity
benefits of finetuning sparse budgets when $b<b^{*}$ as compared with solving
the tri-objective (1: communicate effectively, 2: act effectively, and 3: obey
communication sparsity constraints), which is akin to trying to satisfy the
objective in Eq. 2 when $b\geq b^{*}$. We analyze our model’s communication
vectors to find zero-shot sparsity $b=b^{*}$. We show that our method can
provide lower optimal budgets $b^{*}$ than I2C. Finally, we verify that IMGS-
MAC has lossless performance at $b=b^{*}$ as compared with its non-sparse
performance $b=1$, and show the optimized trade-off between suboptimal budgets
$b<b^{*}$ and task performance, e.g., reward. We detail our experimental setup
in Appendix A.
### 5.1 Information Maximization Analysis
Figure 2: The left and middle figures compare training of IC3Net (blue) vs.
our IMGS-MAC (orange) with non-sparse communication ($b=1$) in Traffic
Junction; the right figures show the same comparison in Predator-Prey. In both
environments, our method converges to higher success earlier and with less
variance. Top figures use continuous communication vectors, while bottom
figures use discrete ones.
To show the benefits of the autoencoder for information maximization, we first
show comparison with IC3Net with a fixed gate, i.e., non-sparse communication
($b=1$). In Figure 2, our results show that our method has much lower
variance. Note that IC3Net may have a shaded area higher than IMGS-MAC, but it
never actually performs that well; rather, the variance comes from very
low-performing runs. In the simple, easy setting, our method is able to find
solutions of equivalent quality as IC3Net. However, in hard settings, and in
all discrete communication vector settings, our method outperforms IC3Net in
terms of performance and the number of epochs required to find the solution.
Particularly, in the more difficult discrete communication vector scenarios,
the autoencoder drastically outperforms IC3Net. Note that the decreased
variance results in much more stable solutions.
Table 1: Average $\mu\pm\sigma$ for quality and performance of null communication vectors. IMGS-MAC (ours) provides significantly more informative communication, as recognized by its low usage of null communications. Lower is better.

Environment | Method | % Null Comm. Vectors | # Observations per Vector | % Null Comms. Emitted
---|---|---|---|---
TJ Easy Cts. | IC3Net | 0.59 $\pm$ 0.107 | 3.81 $\pm$ 0.304 | 0.529 $\pm$ 0.112
 | IMGS-MAC | 0.0550 $\pm$ 0.198 | 1.785 $\pm$ 0.507 | 0.0565 $\pm$ 0.196
TJ Hard Cts. | IC3Net | 0.404 $\pm$ 0.0753 | 26.892 $\pm$ 6.662 | 0.543 $\pm$ 0.0999
 | IMGS-MAC | 0.0334 $\pm$ 0.107 | 16.928 $\pm$ 10.113 | 0.0310 $\pm$ 0.167
TJ Easy Discrete | IC3Net | 0.589 $\pm$ 0.265 | 3.39 $\pm$ 1.09 | 0.846 $\pm$ 0.263
 | IMGS-MAC | 0.0194 $\pm$ 0.0394 | 1.390 $\pm$ 0.220 | 0.0320 $\pm$ 0.0719
TJ Med. Discrete | IC3Net | 0.724 $\pm$ 0.139 | 15.944 $\pm$ 8.127 | 0.964 $\pm$ 0.0424
 | IMGS-MAC | 0.0857 $\pm$ 0.172 | 5.105 $\pm$ 3.154 | 0.201 $\pm$ 0.322
PP Hard Cts. | IC3Net | 0.784 $\pm$ 0.0445 | 73.148 $\pm$ 12.099 | 0.497 $\pm$ 0.0887
 | IMGS-MAC | 0.284 $\pm$ 0.160 | 17.523 $\pm$ 6.231 | 0.300 $\pm$ 0.173
PP Hard Discrete | IC3Net | 0.482 $\pm$ 0.145 | 104.803 $\pm$ 6.0713 | 0.719 $\pm$ 0.312
 | IMGS-MAC | 0.380 $\pm$ 0.0909 | 82.809 $\pm$ 6.507 | 0.141 $\pm$ 0.114
Our hypothesis is that requiring fewer training epochs to converge to high
task performance implies more informative communication. Our results
show that communication tokens which represent information more independently
allow for lower $b^{*}$. This is found by analyzing the number of states in
which the same message is emitted. Overall, this strengthens our hypothesis
that a structured latent space naturally allows for lower $b^{*}$ for lossless
sparsity. We quantitatively study the performance of the autoencoder in Table 1.
The percent of null communication vectors reports the fraction of null tokens
in the emergent ‘vocabulary’, i.e., all possible messages. The number of observations
per vector reports the independence of a token or mutual information between
any two distinct tokens, $I(m_{j};m_{k})$. We want to minimize
$I(m_{j};m_{k})$ in order to decouple information into independent messages,
so that we can later promote stronger sparsity through the analysis of the
utility of each token in determining optimal actions. The percent of null
communications emitted reports the percentage of null messages that were
communicated to other agents over 500 episodes. We aim to minimize these
unnecessary null messages. We see that the IC3Net method uses more null
vectors on average and has high mutual information between tokens. Further,
using our IMGS-MAC, we effectively remove null messages and decrease mutual
information between tokens, further improving performance. In fact, IMGS-MAC
removes almost all null messages. We will later further see that it does so
without any reduction in performance.
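The emission-side metrics of Table 1 can be computed from logged (observation, token) pairs roughly as follows; this is an illustrative sketch, not the authors' evaluation code:

```python
from collections import defaultdict

def vocab_stats(obs_token_pairs, null_tokens):
    """Illustrative computation of two Table-1 style metrics:
    average observations per token (a proxy for token independence /
    inter-token mutual information) and the fraction of emitted
    messages that are null."""
    obs_per_token = defaultdict(set)
    null_emitted = 0
    for obs, token in obs_token_pairs:
        obs_per_token[token].add(obs)
        if token in null_tokens:
            null_emitted += 1
    avg_obs = sum(len(s) for s in obs_per_token.values()) / len(obs_per_token)
    frac_null = null_emitted / len(obs_token_pairs)
    return avg_obs, frac_null
```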
### 5.2 Sparsity Analysis
#### 5.2.1 Few-shot Sparsity
Figure 3: Average success and 95% confidence interval for Tri-objective (left
bar, orange) vs. Pretraining with non-sparse $b=1$ (blue), then Finetuning
(orange) with $b=0.7$. The Pretraining+Finetuning paradigm takes half the
amount of training as the Tri-objective.
Figure 4: The model follows the budget $b=0.7$ on average over each
episode. Observe that the model (in blue) only needs to run for a few dozen
epochs before adequately following the budget (in red).
In the case where $b<b^{*}$, we require a small amount of additional training
data to enable sparse communication. We introduce an autoencoder to include
independent referential communication in order to ease the dual communication-
action policy learning. When we introduce the sparsity constraint (and the
corresponding individualized communication regularization), our model must
additionally learn a gating function, which further increases the complexity.
In order to avoid requiring more data, we introduce a pretraining and
finetuning paradigm. First, we pretrain the dual communication-action policy with
a fixed open gate (non-sparse $b=1$). Then, we apply finetuning to train the
gating function (with the rest of the network) at any $b<b^{*}$. In Figure 3,
we see that the total number of epochs required for task success convergence
under a budget is about half as many for the pretraining+finetuning paradigm
than for the tri-objective, which aims to solve the objective in Eq. 2
directly. Note that the variance entirely comes from the dual objective
pretraining. The sparsity finetuning requires less than 10% of the total
training epochs. In fact, we can apply finetuning for any budget $b$ rather
than having to train the tri-objective from scratch, further decreasing
training time. In Figure 4, we observe that our model only needs a few dozen
epochs to converge to a communication budget and is able to safely reduce
total communication below the allowed budget. Overall, our objective exhibits
few-shot sparsity ($b<b^{*}$). The performance of few-shot sparsity is
analyzed in Figure 5.
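The pretraining+finetuning paradigm can be sketched as below; `model.train_step` is a hypothetical trainer interface, and the epoch counts are placeholders rather than the paper's settings:

```python
def train_with_few_shot_sparsity(model, b, b_star,
                                 pretrain_epochs=500, finetune_epochs=50):
    """Pretraining + finetuning paradigm (hypothetical trainer interface).
    Stage 1: dual communication-action policy with a fixed open gate (b = 1).
    Stage 2: short finetuning of the gating function, with the rest of the
    network, under the budget penalty of Eq. 4."""
    for _ in range(pretrain_epochs):
        model.train_step(budget=1.0, use_gate=False)  # non-sparse pretraining
    for _ in range(finetune_epochs):                  # a small fraction of total epochs
        model.train_step(budget=b, use_gate=True, b_star=b_star)
    return model
```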
Table 2: Minimum sparse budget $b^{*}$ with lossless performance, $\mu\pm\sigma$. Observe that our model can reduce communication by 20-60% without a loss in task performance.

Environment | IMGS-MAC $b^{*}$ | I2C-Cts. $b^{*}$
---|---|---
TJ Easy Cts. | 0.610 $\pm$ 0.191 | -
TJ Hard Cts. | 0.462 $\pm$ 0.249 | 0.63
TJ Easy Discrete | 0.815 $\pm$ 0.00469 | -
TJ Med Discrete | 0.519 $\pm$ 0.140 | 0.66
PP Hard Cts. | 0.244 $\pm$ 0.0644 | 0.48
PP Hard Discrete | 0.263 $\pm$ 0.00757 | 0.48
#### 5.2.2 Zero-shot Sparsity
We use sparsity through information maximization in section 4.1 to reduce the
number and usage of null prototypes. In Table 1, one can see that through our
analysis, we are able to remove significant usage of null communication
vectors, which allows our model to use only informative communication, enabling
true lossless sparsity. That is, the task performance, or success in our case,
will not decrease at all by decreasing the budget within the true lossless
range. Otherwise, enforcing a budget requires the learned gating function $g$
to determine whether an agent should communicate, which may induce a loss in
task performance. Of course, this is dependent on how well the initial
communication model is learned, i.e., the range is dependent on the learned
model. Each model has its own minimum lossless budget $b^{*}$, which depends
on the emergent communication model. In Table 2, we report the lossless budget
$b^{*}$ for each environment. We are able to reduce communication by 20-75%
with no additional training. Interestingly, we are able to reduce
communication more when we have continuous communication vectors instead of
discrete communication vectors. This implies that our continuous vectors have
more informative communication. However, this most likely follows from the fact
that discrete communication is a harder problem than continuous communication,
confirming results from [23]. Additionally, we are able to find lower optimal
budgets $b^{*}$ than I2C, even without specific reinforcement learning
training to reduce the communication overhead.
Figure 5: Success versus budget for IMGS-MAC at baseline non-sparse $b=1$,
lossless $b=b^{*}$, and suboptimal $b<b^{*}$. Our model provides lossless
performance for $b=b^{*}$ for $b^{*}$ in Table 2 as compared with the baseline
non-sparse $b=1$. Our performance tapers for smaller budgets until it
approaches the no communication performance. Top: continuous communication
vectors; Bottom: discrete; Left, middle: Traffic Junction; Right: Predator-
Prey.
Finally, we analyze the lossless, $b=b^{*}$, and suboptimal, $b<b^{*}$,
performance for sparse budgets for our model in Figure 5, which uses the
lossless budget $b^{*}$ as reported in Table 2. We find that the lossless
budget $b^{*}$ provides true lossless performance. Unsurprisingly, for
overconstrained budgets $b<b^{*}$, there is a small task performance tradeoff
for adherence to the budget.
## 6 Conclusion and Future Work
In this paper, we have proposed a method for multi-agent individualized sparse
communication. We reframed sparsity as a representation learning problem
through the information bottleneck problem. We have shown that through
training a communication-action policy grounded with an autoencoder and
analysis during execution of non-sparse messaging, one can exhibit lossless
zero-shot sparsity. That is, the sparsity objective may be achieved without
any cost of performance with no additional reinforcement learning training.
Additionally, we produce individualized regularization to limit performance
loss with few-shot sparsity. This allows our model to adhere to messaging
constraints in over-constrained bandwidth scenarios. A limitation of our work
is that once the ’vocabulary’ is restricted by removing some null messages,
other removable messages are discovered later, and the mutual
information between tokens remains nonzero. Stronger theoretical bounds on message
content independence will further allow sparser communication. In our future
work, we aim to create an overarching framework that combines gating/targeting
sparsity and communication compression. This will remove the need for tuning
message sizes, but still opt for a decoupled training scenario. That is, first
learn an emergent language. Then adhere to sparsity constraints. Additionally,
further improvements in unsupervised representation learning will allow for
sparser communication.
## References
* [1] A. Agarwal, S. Kumar, K. Sycara, and M. Lewis. Learning transferable cooperative behavior in multi-agent teams. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, pages 1741–1743, 2020.
* [2] S. Agrawal. Learning to imitate, adapt and communicate. Master’s thesis, Carnegie Mellon University, 2021.
* [3] A. Das, T. Gervet, J. Romoff, D. Batra, D. Parikh, M. Rabbat, and J. Pineau. Tarmac: Targeted multi-agent communication. In International Conference on Machine Learning, pages 1538–1546. PMLR, 2019.
* [4] Z. Ding, T. Huang, and Z. Lu. Learning individually inferred communication for multi-agent cooperation. Advances in Neural Information Processing Systems, 33:22069–22079, 2020.
* [5] T. Eccles, Y. Bachrach, G. Lever, A. Lazaridou, and T. Graepel. Biases for emergent communication in multi-agent reinforcement learning. Advances in neural information processing systems, 32, 2019.
* [6] J. N. Foerster, Y. M. Assael, N. de Freitas, and S. Whiteson. Learning to communicate with deep multi-agent reinforcement learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 2145–2153, 2016.
* [7] B. Freed, R. James, G. Sartoretti, and H. Choset. Sparse discrete communication learning for multi-agent cooperation through backpropagation. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 7993–7998, 2020.
* [8] B. Freed, G. Sartoretti, and H. Choset. Simultaneous policy and discrete communication learning for multi-agent cooperation. IEEE Robotics and Automation Letters, 5(2):2498–2505, 2020.
* [9] G. Hu, Y. Zhu, D. Zhao, M. Zhao, and J. Hao. Event-triggered communication network with limited-bandwidth constraint for multi-agent reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, pages 1–13, 2021.
* [10] J. Jiang and Z. Lu. Learning attentional communication for multi-agent cooperation. Advances in Neural Information Processing Systems, 31:7254–7264, 2018.
* [11] S. Karten, M. Tucker, H. Li, S. Kailas, M. Lewis, and K. Sycara. Interpretable learned emergent communication for human-agent teams. preprint, 2022.
* [12] A. Lazaridou and M. Baroni. Emergent multi-agent communication in the deep learning era. arXiv preprint arXiv:2006.02419, 2020.
* [13] A. Lazaridou, A. Peysakhovich, and M. Baroni. Multi-agent cooperation and the emergence of (natural) language. arXiv preprint arXiv:1612.07182, 2016.
* [14] D. Lewis. Convention. Harvard University Press, Cambridge, MA, 1969.
* [15] S. Li, Y. Zhou, R. Allen, and M. J. Kochenderfer. Learning emergent discrete message communication for cooperative reinforcement learning. arXiv preprint arXiv:2102.12550, 2021.
* [16] T. Lin, J. Huh, C. Stauffer, S. N. Lim, and P. Isola. Learning to ground multi-agent communication with autoencoders. Advances in Neural Information Processing Systems, 34, 2021.
* [17] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, and I. Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6382–6393, 2017.
* [18] H. Mao, Z. Zhang, Z. Xiao, Z. Gong, and Y. Ni. Learning agent communication under limited bandwidth by message pruning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 5142–5149, 2020.
* [19] A. Singh, T. Jain, and S. Sukhbaatar. Learning when to communicate at scale in multiagent cooperative and competitive tasks. In International Conference on Learning Representations, 2018.
* [20] S. Sukhbaatar, R. Fergus, et al. Learning multiagent communication with backpropagation. Advances in neural information processing systems, 29:2244–2252, 2016.
* [21] N. Tishby and N. Zaslavsky. Deep learning and the information bottleneck principle. In 2015 ieee information theory workshop (itw), pages 1–5. IEEE, 2015.
* [22] M. Tucker, H. Li, S. Agrawal, D. Hughes, K. Sycara, M. Lewis, and J. A. Shah. Emergent discrete communication in semantic spaces. Advances in Neural Information Processing Systems, 34, 2021.
* [23] M. Tucker, J. Shah, R. Levy, and N. Zaslavsky. Towards human-agent communication via the information bottleneck principle. arXiv preprint arXiv:2207.00088, 2022.
* [24] V. K. Vijay, H. Sheikh, S. Majumdar, and M. Phielipp. Minimizing communication while maximizing performance in multi-agent reinforcement learning. arXiv preprint arXiv:2106.08482, 2021.
* [25] R. Wang, X. He, R. Yu, W. Qiu, B. An, and Z. Rabinovich. Learning efficient multi-agent communication: An information bottleneck approach. In International Conference on Machine Learning, pages 9908–9918. PMLR, 2020.
* [26] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3):229–256, 1992.
## Appendix A Experimental Setup
Figure 6: Above are the easy, medium, and hard traffic junction environments.
Visibility is limited to the cell in which the car is located, so agents are
effectively blind. The bottom shows a zoomed-in view of the $20\times 20$
predator-prey environment. The predators are denoted by green aliens, while
the prey is denoted by a human (in a red square).
We train and evaluate our model in blind traffic junction and predator-prey
environment settings, following prior benchmarks [19, 20, 4]. For each of these
variants, we train on 10 random seeds and one epoch uses 5000 samples. We used
an RMSProp optimizer with a learning rate of 0.003. See Figure 6.
The blind traffic junction scenario involves multiple agents navigating a
discretized narrow intersection with no observability regarding the locations
of the other agents. Clearly, this necessitates informative communication in
order to avoid collisions in the environment. Note that both communication and
action occur in a single time-step. We study three variants of the blind
traffic junction and report results on the easiest and hardest environments
which converge for continuous and discrete communication.
The predator-prey scenario involves multiple agents, where one agent is
denoted as the prey and the remaining agents are denoted as predators. The
predator agents move and search the environment for the prey agent. The
predator agents can only observe their current cells and the adjacent cells
(visibility limited to one cell around themselves). The episode terminates when all
predator agents reach the prey agent or when the maximum episode length is
hit.
Predator-prey does not necessarily require communication to solve the task.
However, in the fully-cooperative predator-prey environment, predators are
rewarded for maximizing the number of predators who reach the discovered prey.
Thus, there is no built-in incentive for fully-cooperative teams to decrease
total communication. In our experiments, we show that our method, IMGS-MAC, is
able to decrease messaging to a minimum sparse budget $b^{*}$ with lossless
performance.
Overall, our proposed method is trained (“pretraining”) using the autoencoder
in Eq. 3. We then analyze the learned communication to determine whether our
model will follow a lossless sparse budget. If not, we finetune our model for a suboptimal sparse budget
(Def. 3.2) using the message penalty in Eq. 4.
We use REINFORCE [26] to train both the gating function and policy network
subject to the previous constraints. In order to calculate the information
similarity, we compute loss, using Eq. 3, between each agent’s decoded state
$s_{t}^{i,\texttt{decoded}}$ and the concatenation of all agents’ states
$s_{t}$.
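A generic sketch of such a REINFORCE update with the auxiliary losses added on top of the policy-gradient term (not the authors' exact training code):

```python
import torch

def reinforce_loss(log_probs, returns, aux_loss):
    """REINFORCE policy-gradient surrogate plus auxiliary losses.
    log_probs: log pi(a_t | s_t) of sampled actions/gate decisions,
    returns:   (discounted) returns R_t,
    aux_loss:  auxiliary terms such as L1 (Eq. 3) and L2 (Eq. 4)."""
    pg = -(log_probs * returns.detach()).mean()  # score-function estimator
    return pg + aux_loss
```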
|
# The lstMCpipe library
Enrique García1,2, Thomas Vuillaume2 and Lukas Nikel3
###### Abstract
The Cherenkov Telescope Array (CTA) is the next generation of ground-based
gamma-ray astronomy observatory that will improve the sensitivity of current
generation instruments by one order of magnitude. The LST-1 is the first
telescope prototype built on-site on the Canary Island of La Palma and has
been taking data for a few years already. Like all imaging atmospheric
Cherenkov telescopes (IACTs), the LST-1 works by capturing the light produced
by the Cherenkov process when high-energy particles enter the atmosphere. The
analysis of the recorded snapshot of the camera allows to discriminate between
gamma photons and hadrons, and to reconstruct the physical parameters of the
selected photons. To build the models for the discrimination and
reconstruction, as well as to estimate the telescope response (by simulating
the atmospheric showers and the telescope optics and electronics), extensive
Monte Carlo simulations have to be performed. These trained models are later
used to analyse data from real observations.
lstMCpipe is an open source python package developed to orchestrate the
different stages of the analysis of the MC files on a computing facility.
Currently, the library is in production status, scheduling the full pipeline
in a SLURM cluster. It greatly simplifies the analysis workflow by adding a
level of abstraction, allowing users to start the entire pipeline using a
simple configuration file. Moreover, members of the LST collaboration can ask
for a new analysis to be produced with their tuned parameters through a pull
request in the project repository, allowing careful review by other
collaborators and central management of the productions, thus reducing human
errors and optimising the usage of the computing resources.
1IT Department, CERN, 1211 Geneva 23, Switzerland
2Univ. Savoie Mont-Blanc, LAPP, CNRS, Annecy, France
3TU Dortmund: Dortmund, Germany
## 1 Introduction
The Cherenkov Telescope Array (CTA) is the next generation observatory of
Imaging Cherenkov Atmospheric Telescopes (IACTs) for ground-based gamma-ray
astronomy (CTA Consortium 2018). It is now in the pre-construction phase but
the first on-site prototype, the first Large-Sized Telescope (LST-1) is in the
commissioning phase and already taking data on the Canary Island of La Palma
(Spain) since 2018.
## 2 LST-1 data processing
To reconstruct the physical properties of the primary high-energy particles
that enter the atmosphere, IACTs rely on Monte Carlo (MC) simulations. MC
data are analysed to train machine learning models and to compute Instrument
Response Functions (IRFs). LST-1 MC data are produced by the atmospheric
shower generator CORSIKA (Heck et al. 1998), an open-source software chosen as
the standard tool for CTA simulations, together with the simtel_array package,
which simulates the telescopes' optics and electronics responses (Bernlöhr 2008).
MC and LST real data then follow the same reduction pipeline, from data level
0 (DL0, full waveforms data from telescopes) to data level 3 (DL3,
reconstructed photons list) and Instrument Response Functions (IRFs). For
LST-1, this data reduction can be done thanks to the lstchain library (López-
Coto et al. 2021)111https://doi.org/10.5281/zenodo.7323874 which is based on
the ctapipe framework (Nöthe et al. 2021). However, the orchestration of the
different steps for MC data and LST data analysis is very different and is handled
by two different libraries, lstMCpipe for the MC data (this work) and LSTOSA
(Ruiz et al. 2021) for the LST data.
## 3 The lstMCpipe python library
The lstMCpipe library (Vuillaume et al. 2022) is a Python package developed to
orchestrate the MC data reduction pipeline steps implemented in lstchain on
the LST collaboration computing center. It takes advantage of the SLURM
workload manager system (Yoo et al. 2003) to divide the analysis steps into
jobs, handling their dependencies and the organization of their inputs and
outputs in the file system.
### 3.1 lstMCpipe workflow
The MC data reduction pipeline is composed of the processing described in
section 2 plus the training of the random forest (RF) models. From a set of MC
data, four RF models are trained: a classifier for background rejection, and
one classifier plus two regressors for the gamma-ray photon energy and
incident direction reconstruction. These RFs will be used for LST real data
processing, as well as to derive IRFs when applied to another MC dataset (see
Figure 1).
lstMCpipe automatically applies the different stage scripts (executables
provided by lstchain) to a full set of MC data, greatly simplifying the
manual execution of a full pipeline. Each of the data reduction analysis steps
implies executing a script on many files. For example, in the first stage of
the MC data reduction (DL0 to DL1), the stage script can be applied to
hundreds of thousands of simulated files. lstMCpipe manages, parallelizes and
orchestrates the execution of this set of scripts (or steps) in an ordered
way, standardizing the intermediate and final outputs. An
entire reduction pipeline can be (re)started from any stage in case of failure
or of modification of an intermediate stage.
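As an illustration, this kind of stage chaining can be expressed with SLURM job dependencies roughly as follows; the stage script names are illustrative, not lstMCpipe's actual entry points:

```python
import subprocess

def submit(cmd, after=None, runner=subprocess.run):
    """Submit a SLURM job with sbatch, optionally starting only after a
    previous job has finished successfully (afterok dependency)."""
    args = ["sbatch", "--parsable"]  # --parsable makes sbatch print only the job id
    if after is not None:
        args.append(f"--dependency=afterok:{after}")
    args.append(cmd)
    return runner(args, capture_output=True, text=True, check=True).stdout.strip()

def chain_stages(runner=subprocess.run):
    """Chain DL0->DL1 reduction, RF training and IRF production so that
    each stage starts only after the previous one succeeds.
    Stage script names are illustrative."""
    dl1 = submit("stage_dl0_to_dl1.sh", runner=runner)
    rf = submit("stage_train_rf.sh", after=dl1, runner=runner)
    irf = submit("stage_dl2_irf.sh", after=rf, runner=runner)
    return dl1, rf, irf
```

Restarting the pipeline from an intermediate stage then amounts to calling `submit` on the remaining stage scripts only.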
Figure 1.: A schematic view of the workflow handled by lstMCpipe. First, the
raw MC data (DL0) are divided into training and testing sets and reduced to
DL1 level, then merged per sub-dataset. The training dataset is used to train
the RF models, which are then applied to the test dataset, which in turn is
used to derive the instrument response functions. RFs and IRFs are necessary
products to analyse data from the LST-1.
A typical lstMCpipe workflow input is composed of two files: the lstMCpipe
configuration file describing the desired workflow, and a lstchain
configuration file that is passed along to the different analysis steps. The
lstMCpipe configuration file describes the stages to be run and their inputs
and outputs, as well as the software environment to use. A typical output is
composed of all the intermediate data level outputs (generated by the
different lstchain executables), the trained RF models and the IRFs.
Because of the increasing complexity of the analysis, the library went through
various updates that allowed managing the stages and the intermediate data
level output in a more modular and flexible way. Currently, the package
contains functionalities to automatically search the MC data in the LST
cluster and generate a lstMCpipe configuration file to be used for their
analysis. This functionality makes the workflow setup transparent and user-
friendly.
### 3.2 Data analysis as a service
In the LST-1 data-processing workflow, analyzers need to tune the MC data to
their specific source in order to train and produce dedicated RFs and IRFs. In
order to centralize these productions and minimize the errors and computing
resources usage, we set up a production request service based on GitHub pull-
requests. LST analyzers can thus open a simple pull-request (PR), including a
README.md describing their request and why it is needed for their analysis, a
lstchain configuration file and a lstmcpipe configuration file. This PR then
goes through unit tests to validate the correctness of the configuration files
and is reviewed by our team, checking for its validity. Once accepted, the
data production is launched manually on the cluster and the analyzer is
notified when finished. This centralization of the processes also allowed us
to produce an online database in the project documentation web-
page222https://cta-observatory.github.io/lstmcpipe/ with the logs of the
different productions and the available trained models and IRFs. All members
of the collaboration can then check if a set of model and IRFs fitting their
need has already been produced.
Centralizing the MC data analysis also allows us to more easily track the
computing resources that LST analyzers require for MC data reduction (see
Figure 2). We notice the increase in the frequency of
productions when we introduced the analysis as a service in April 2022. The
increase in computing resources per job at a similar time is due to a newer
and larger MC data set used for the analysis of LST-1 data.
Figure 2.: Computing resources used for the MC data analysis since July 2020.
On the left the CPU usage per job and accumulated over time. On the right, the
same for RAM usage.
## 4 Conclusion
We presented the lstMCpipe library that is currently in production status and
is used in the LST collaboration for the reduction of MC data, the training of
random forest models and the production of IRFs needed for the data analysis.
The library is open-source, can be installed as a PyPI package and its
documentation can be found online.
#### Acknowledgments
The ASP would like to thank the dedicated researchers who are publishing with
the ASP. ESCAPE - The European Science Cluster of Astronomy & Particle Physics
ESFRI Research Infrastructures has received funding from the European Union’s
Horizon 2020 research and innovation programme under Grant Agreement no.
824064.
## References
* Bernlöhr (2008) Bernlöhr, K. 2008, Astroparticle Physics, 30, 149. 0808.2253
* CTA Consortium (2018) CTA Consortium 2018, Science with the Cherenkov Telescope Array (World Scientific). URL https://doi.org/10.1142%2F10986
* Heck et al. (1998) Heck, D., Knapp, J., Capdevielle, J. N., Schatz, G., & Thouw, T. 1998, CORSIKA: a Monte Carlo code to simulate extensive air showers.
* López-Coto et al. (2021) López-Coto, R., Baquero, A., Bernardos, M. I., Cassol, F., Foffano, L., García, E., Gliwny, P., Iwamura, Y., Jacquemont, M., Jouvin, L., Kobayashi, Y., Moralejo, A., Morcuende, D., Neise, D., Nozaki, S., Nöthe, M., C., P., Renier, Y., Saha, L., Sakurai, S., Sitarek, J., & Takahashi, M. 2021, in Astronomical Society of the Pacific Conference Series, edited by J. E. Ruiz, F. Pierfedereci, & P. Teuben, vol. 532 of Astronomical Society of the Pacific Conference Series, 369
* Nöthe et al. (2021) Nöthe, M., Kosack, K., Nickel, L., & Peresano, M. 2021, in Proceedings, 37th International Cosmic Ray Conference, vol. 395
* Ruiz et al. (2021) Ruiz, J. E., Morcuende, D., Saha, L., Baquero, A., Contreras, J. L., & Aguado, I. 2021, in Astronomical Society of the Pacific Conference Series, edited by J. E. Ruiz, F. Pierfedereci, & P. Teuben, vol. 532 of Astronomical Society of the Pacific Conference Series, 357. 2101.09690
* Vuillaume et al. (2022) Vuillaume, T., Garcia, E., & Nickel, L. 2022, lstmcpipe. If you use this software, please cite it using Zenodo from https://doi.org/10.5281/zenodo.6460727, URL https://doi.org/10.5281/zenodo.7180216
* Yoo et al. (2003) Yoo, A. B., Jette, M. A., & Grondona, M. 2003, in Job Scheduling Strategies for Parallel Processing, edited by D. Feitelson, L. Rudolph, & U. Schwiegelshohn (Berlin, Heidelberg: Springer Berlin Heidelberg), 44
|
# Proton Computed Tomography Based on Richardson – Lucy Algorithm
Ákos Sudár1,2 and Gergely Gábor Barnaföldi1
for the Bergen pCT collaboration 1Wigner Research Centre for Physics,
Institute for Particle and Nuclear Physics, Budapest, Hungary 2Budapest
University of Technology and Economics, Institute of Nuclear Techniques,
Budapest, Hungary
###### Abstract.
Objective: Proton therapy is an emerging method against cancer. One of the main
development directions is to increase the accuracy of the Bragg-peak position
calculation, which requires more precise relative stopping power (RSP)
measurements. An excellent choice is the application of proton computed
tomography (pCT) systems, which take the images under conditions similar to
treatment, as they use the same irradiation device and hadron beam for imaging
and treatment. A key aim is to develop an accurate image reconstruction
algorithm for pCT systems so that they reach their maximal performance.
Approach: An image reconstruction algorithm was developed in this work, which
is suitable for reconstructing pCT images from the energy, position and
direction measurements of individual protons. The flexibility of an iterative
image reconstruction algorithm was utilised to appropriately model the
trajectories of protons. Monte Carlo (MC) simulations of a Derenzo and a
CTP404 phantom were used to test the accuracy of the image reconstruction.
Main results: The Richardson – Lucy algorithm was applied for the first time,
and successfully, for pCT image reconstruction. A probability-density-based
approach was applied for the generation of the interaction (system) matrix,
which is an advanced way to account for the uncertain path of the protons in
the patient.
Significance: Proton tracks are scattered as they travel through material at
hadron therapy energies. This property limits the achievable spatial
resolution, especially for the single-sided pCT setups investigated in this
study. The main motivation of the presented research is to test new approaches
for the image reconstruction, focusing on the achieved spatial and density
resolution and the image noise. A realistic imaging setup was simulated with
reasonably low proton statistics, to achieve results that are likely to be
reproducible in a clinical environment.
## 1\. Introduction
Hadron therapy is an emerging and efficient method against cancer. The
increasing number of hadron therapy centers and of successful treatments
demonstrates its success. Today's accelerator techniques allow the use of
protons or heavier ions as bombarding particles. The application of hadron
beams instead of X-rays results in a more focused dose distribution [6].
Indeed, using beams of higher mass number than protons (He, C and O) can
result in an increased relative biological effectiveness (RBE) in the tumor
volume, while keeping the RBE close to one in healthy tissues [7]. The higher
the dose gradient around the treated volume, the lower the uncertainty
required in the relative stopping power (RSP) distribution during dose
planning, to avoid insufficient dosage of the tumor or an overdose of organs
at risk.
The development of proton computed tomography (pCT) techniques is a promising
solution for the above problems. Applying the same irradiation device, beam,
and hadron for both the medical imaging and the treatment can significantly
reduce the uncertainties of the imaging. To achieve this, two main imaging
strategies exist:
* (i)
The first concept is to measure the average energy loss of the proton beam.
This design is feasible from a technical point of view, but can only achieve
poor spatial resolution with the clinically available proton beams [15].
* (ii)
The second concept is the so-called list mode imaging concept, which measures
the energy loss and in parallel estimates the path of each individual proton.
Monte Carlo (MC) simulations and prototype measurements showed that this
solution can meet the required spatial and density resolutions, so the focus
has moved in this direction [13].
Nowadays, pCT scanner R&D efforts around the world are reaching the
prototyping and clinical/pre-clinical testing phase, which requires the
integration of the prototype scanners into a clinical environment. Following
the list mode strategy, the path estimation of individual protons is usually
based on the measurements of upstream and downstream tracker detector pairs, a
concept called the double-sided scanner design (figure 1). One important
further step can be the abandonment of the upstream tracker detectors and the
application of a single-sided scanner design. The drawback of this latter
concept is the less accurate proton path measurement; however, the study [27]
concluded that the achievable spatial resolution meets the minimum
requirement. Nevertheless, the lower-precision proton path measurement
compromises the spatial resolution of the scanner, which immediately motivates
the development and application of more accurate image reconstruction
algorithms.
Figure 1. Detector design of the list mode imaging concept.
Realistic clinical applicability requires completing the data taking within
minutes, which results in a measurement rate of 1-10 million protons per
second. The ultimate goal would be to finish the image capture within the
minimum gantry rotation time. The LLU/UCSC Phase-II Scanner prototype detector
demonstrated rates of up to 1.2 million protons per second, which can probably
be increased by 50% in the near future [14]. To increase the data taking rate
to 10 million protons per second, two possible directions exist: the first is
to apply a faster readout frequency of at least 10 MHz, the second is to
measure multiple proton tracks within one readout frame. The second solution
fits best with the single-sided scanner design, because it avoids the pairing
problem of the upstream and downstream measurements, which leads to track
confusion in a double-sided scanner even with a low number of protons per
frame.
Multiple proton measurements fit best with silicon pixel trackers and silicon
pixel sensor based range counters as presented by the Bergen pCT Collaboration
[19, 20, 3]. Alternatively, multiple proton measurements can be performed by
applying three silicon strip detectors rotated relative to each other, as
presented by the PRaVDA collaboration [8]. Another layout has been designed by
the iMPACT group: the ProXY detector combines the two acceleration
possibilities with monolithic active pixel detectors. With this layout, a
readout frequency of about 50 MHz can be reached, and it is planned to measure
multiple-proton events [17].
In this paper the authors present, for the first time, a novel approach that
may be applied in the proposed pCT detector concepts: the Richardson – Lucy
algorithm [22, 16] applied to the image reconstruction of a single-sided pCT
scanner. The paper is organized as follows: section 2 begins with the general
approach to the image reconstruction problem itself, followed by the details
of the Richardson – Lucy algorithm and the proton-phantom interaction model.
Section 3 compares detector designs and presents the applied Monte Carlo
simulation. Section 3 also contains the evaluation of the spatial and density
resolution of the phantoms. Results are summarized and discussed in sections 4
and 5, respectively.
## 2\. The Image Reconstruction Algorithm
The role of the image reconstruction is to recover the relative stopping power
(RSP) distribution from the measured data. Two families of image
reconstruction techniques exist: the first contains the filtered
backprojections, while the second includes the iterative reconstructions. The
first family usually uses integrals along straight lines, which is an
inaccurate approximation for the scattered proton trajectory. The so-called
distance-driven backprojection belongs to this family but can take into
account the curvature of the MLPs during the filtered backprojection [23].
This method provides reasonably good spatial resolution; however, it requires
very high statistics, which might not be acceptable for clinical use. The
second family models the imaging as the interaction of proton tracks and
volumetric pixels (voxels) of the reconstruction space. This approach is
suitable for handling curved proton trajectories and reaches reasonably good
spatial and density resolution with acceptable statistics; however, it
requires higher computational power. This method models the imaging as a large
linear system of equations, described by the following general algebraic form:
$\textbf{y}=\textbf{A}\cdot\textbf{x},$ (1)
where y is an $m$-dimensional vector, which typically has $10^{8}$-$10^{9}$
elements. The vector y contains the water equivalent path length (WEPL)
reduction of the protons in the reconstruction area. The variable x is an
$n$-dimensional vector (typically $10^{5}$-$10^{7}$ elements) containing the
relative stopping power (RSP) of the voxels. Finally, A is the so-called
system matrix, which has $n\times m$ elements, typically
$10^{13}$-$10^{16}$. The system matrix contains the interaction coefficients
between protons and voxels; the entry for a given proton and voxel can be
described as the (expected) length of the proton's path in that voxel. In
practice, $m$ is usually larger than $n$, so the linear system is
over-determined. The goal of the image reconstruction, in general, is to
determine the values of the vector x with the knowledge of the vector y and
the system matrix A.
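As a concrete toy instance of equation (1), the following sketch uses four protons crossing three voxels; the numbers are purely illustrative, and in a real reconstruction A is far too large to store densely, so it is kept sparse or generated on the fly.

```python
import numpy as np

# Toy instance of y = A x: m = 4 protons crossing n = 3 voxels.
# A[j, i] holds the (expected) path length of proton j in voxel i (mm);
# a real system matrix has 10^13-10^16 entries and is extremely sparse.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([1.00, 1.04, 0.90])  # RSP of the three voxels
y = A @ x_true                         # WEPL reduction seen by each proton
```

With more protons than voxels, as here, the system is over-determined, matching the situation described above.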
Orthogonal projection based iterative algorithms are widely used for pCT image
reconstruction, as presented by [13, 9, 18, 4, 10]. In this work the authors
applied the Richardson – Lucy deconvolution, which has achieved a considerable
quality improvement in the field of emission tomography [26] and had not been
used before for proton computed tomography.
### 2.1. The Richardson – Lucy Algorithm
The Richardson – Lucy deconvolution iteration cycle [22, 16] originates from
the field of optics and is known as a fixed-point iteration. The iterative
solution is based on the formula
$\textbf{x}_{i}^{k+1}=\textbf{x}_{i}^{k}\frac{1}{\sum\limits_{j}\textbf{A}_{ij}}\sum\limits_{j}\frac{\textbf{y}_{j}}{\sum\limits_{l}\textbf{A}_{lj}\,\textbf{x}_{l}^{k}}\textbf{A}_{ij}~{},$
(2)
for every $i=1,\ ...,\ N$, where $N$ is the length of vector x, which contains
the RSP of the voxels, $k$ is the iteration number, the matrix
$\textbf{A}_{ij}$ contains the interaction coefficients between the proton
trajectories and the voxels, $j=1,\ ...,\ M$ is the index of the trajectories,
where $M$ is the number of trajectories, and $\textbf{y}_{j}$ contains the
integrated RSP along the trajectories, which is equivalent to the WEPL
reduction of the protons travelling along them. The term
$\textbf{y}_{j}/\sum\limits_{l}\textbf{A}_{lj}\,\textbf{x}_{l}^{k}$ is
usually called the Hadamard ratio; it represents the ratio of the integrated
RSP along the proton path to its estimate based on the voxel values of the
previous iteration.
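The update in equation (2) can be sketched with a small dense system matrix as follows; this is a minimal illustration only (the production code computes the system matrix on the fly and runs on GPUs), and the function name and uniform starting guess are assumptions.

```python
import numpy as np

def richardson_lucy(A, y, n_iter=600, eps=1e-12):
    """Richardson-Lucy fixed-point iteration for y = A @ x (eq. 2).

    A : (m, n) system matrix, A[j, i] = expected path length of proton j
        in voxel i
    y : (m,)   WEPL reduction measured for each proton
    Returns x : (n,) reconstructed RSP per voxel.
    """
    m, n = A.shape
    x = np.ones(n)                       # nonnegative starting guess
    col_sum = A.sum(axis=0)              # sum_j A_ij, sensitivity of voxel i
    col_sum = np.maximum(col_sum, eps)   # guard voxels never crossed
    for _ in range(n_iter):
        y_est = A @ x                            # forward projection
        ratio = y / np.maximum(y_est, eps)       # Hadamard ratio
        x *= (A.T @ ratio) / col_sum             # multiplicative update
    return x                             # stays nonnegative by construction
```

The multiplicative form is what makes the iteration attractive here: a nonnegative starting image remains nonnegative, and for consistent data the true solution is a fixed point (the Hadamard ratio becomes 1 everywhere).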
### 2.2. The Proton-Phantom Interaction
Instead of the simplest straight-line approximation, the literature uses the
estimated (most likely) path of the protons, based on the upstream and
downstream measurements of proton track position and angle in the case of a
double-sided scanner design. Formulae are available in [24, 30, 25, 15] to
calculate the most likely path (MLP) of the protons. In the case of a
single-sided scanner design, where upstream measurements are not available,
the beam information is used; this certainly carries much more uncertainty
than a precise measurement. In this work the authors followed the formalism of
[15], as it considers the uncertainty of both the measurements and the beam.
The path of the protons was considered to be straight outside the phantom.
In this article the authors applied an advanced approach suggested by [30]: a
Gaussian probability density distribution of the real proton path around the
MLP, which takes into account the uncertain path of the protons in the
phantom. This approach was used by [29] for pCT image reconstruction. An
average standard deviation was assumed along the proton path in the patient,
which is an approximation compared to the depth-dependent probability density
investigated by [30], a work that did not deal with image reconstruction. The
average standard deviation was chosen based on the experience of the authors
during the development.
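The probability-density idea can be sketched as follows; this is an illustration only, with a hypothetical function name and a per-depth-step normalisation that are not claimed to match the authors' exact implementation. At a given depth, instead of crediting only the voxel the MLP crosses, the proton's path length is spread over neighbouring voxels with a Gaussian of width $\sigma$.

```python
import numpy as np

def lateral_weights(voxel_centers_mm, mlp_lateral_mm, sigma_mm):
    """Gaussian weights of the voxels at one depth step around the MLP.

    The proton's path length at this depth is distributed over the voxel
    row in proportion to a normal density centred on the MLP position,
    with sigma the (average) lateral uncertainty of the proton path.
    """
    d = voxel_centers_mm - mlp_lateral_mm      # signed lateral distance
    w = np.exp(-0.5 * (d / sigma_mm) ** 2)
    return w / w.sum()                         # weights sum to 1 per step
```

Summing such weighted contributions over depth steps yields the system matrix row of one proton.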
## 3\. Simulations with the Algorithm
The ultimate goal of pCT imaging for proton therapy is to provide a stable
basis for accurate dose planning. However, it is a challenge to define a
measure that characterizes how good a reconstructed RSP distribution is for
dose calculation. In the absence of such an ideal measure, general image
properties (spatial and density resolution, image noise) are usually used to
quantify image quality. To study this, the imaging of dedicated spatial and
density resolution phantoms was simulated with Monte Carlo techniques,
reconstructed by the method described above, and evaluated following the
instructions later in this section.
### 3.1. The Proton CT Scanner Model
A single-sided detector design (figure 2) with a 230 MeV/u pencil beam was
investigated. The full width at half maximum (FWHM) of the Gaussian beam was 7
mm (about 3 mm standard deviation), the spot divergence was set to 2.8 mrad
and the spot emittance was 3.0 mrad$\times$mm, following the beam model of
[27]. Three different detector layers were investigated: the first is an
idealized detector with no measurement errors, the second is a silicon pixel
tracker modelled after the design of the Bergen pCT Collaboration [20, 3], and
the third is a silicon strip detector based tracker layer following the
LLU/UCSC Phase-II Scanner design of Loma Linda University (LLU) and the
University of California at Santa Cruz (UCSC) [14]. We note that the results
of this work are valid for an envisioned single-sided scanner built from the
LLU/UCSC Phase-II Scanner tracker layers, in contrast to the existing
LLU/UCSC Phase-II Scanner, which is a state-of-the-art double-sided setup.
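The quoted "about 3 mm standard deviation" follows from the standard Gaussian relation FWHM $= 2\sqrt{2\ln 2}\,\sigma \approx 2.355\,\sigma$; a quick check:

```python
import math

# For a Gaussian profile, FWHM = 2*sqrt(2*ln 2)*sigma, so
# sigma = FWHM / 2.3548...
FWHM_TO_SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))
sigma_mm = 7.0 * FWHM_TO_SIGMA    # 7 mm FWHM beam -> about 2.97 mm sigma
```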
Figure 2. Single-sided list mode detector design.
The properties of the three detector layer setups, the idealized, the silicon
pixel, and the silicon strip, are summarized in table 1. The idealized setup
is a single sensitive layer, but in the two realistic cases each tracker layer
contains two sensitive planes due to the existing technological solutions
(figure 2). If the detection is based on a silicon pixel detector, a double
structure of two equivalent sensitive planes needs to be applied to fully
cover the alternating sensitive and readout electronics panels. When applying
silicon strip detectors, two separate planes are required for the
perpendicular $x$ and $y$ directions. Table 1 contains the joint material
budget of these double layers. The WEPL resolution of both realistic setups
was chosen to be 3 mm (standard deviation of a normal distribution), which was
added to the simulated range straggling in the phantom. This is a realistic
uncertainty; however, it is a very rudimentary model, as the measurement error
is likely to depend on the remaining range of the protons behind the patient.
The distance of the first detector pair to the rotation axis (isocenter) was
chosen to be 400 mm in all cases, which results in 300 mm and 325 mm
detector-phantom distance for the Derenzo and the CTP404 phantoms,
respectively. Similar distances are used for portal detectors in photon
therapy gantries, so they can be considered realistic for future pCT devices
as well [15].
| | Unit | Ideal setup | Silicon pixel | Silicon strip |
|---|---|---|---|---|
| Layer material budget ($x/X_{0}$) | - | 0 | $4.2\times 10^{-3}$ | $8.5\times 10^{-3}$ |
| Distance between layers | mm | - | 50 | 50 |
| Spatial resolution | $\mu$m | 0 | 5 | 66 |
| Angular resolution (130-230 MeV/u) | mrad | 0 | 1.7-2.9 | 3.1-4.6 |
| Correlation (130-230 MeV/u) | mrad$\times$mm | 0 | $-5\times 10^{-4}$ | $-8.7\times 10^{-2}$ |
| Statistical WEPL resolution | mm | 0 | 3.0 | 3.0 |
Table 1. Comparison of tracker detector pair model parameters: the ideal
setup, with no measurement errors; the silicon pixel detector, based on the
design of the Bergen pCT Collaboration [20, 3]; and the silicon strip detector
model, following the structure of the LLU/UCSC Phase-II Scanner [14].
### 3.2. The Applied Phantoms
Testing and validating the application of the Richardson – Lucy algorithm
required standardised evaluation methods. Therefore, we applied two widely
used phantoms in our analysis. In this study the RSP distribution was
reconstructed in one plane of the phantoms, so the phantoms were considered to
be offset invariant in the direction of the rotation axis. To ensure the
offset invariance, 400 mm high phantoms were simulated in the axis direction.
The spatial resolution of the reconstruction was measured with the MC imaging
of the Derenzo phantom: a 200 mm diameter water cylinder, which contains six
sectors of aluminium rods with 1.5-6 mm diameters, specially chosen for the
current analysis. The original idea of this phantom comes from [5].
We also used a CTP404 phantom for our study. The CTP404 is produced by [28]
and designed to measure how accurately a material property is reconstructed in
a homogeneous region of the phantom. The reconstruction accuracy of the RSP
can thus be evaluated for proton CT imaging; this is also referred to as
density resolution in the literature. The CTP404 phantom is a 150 mm diameter
epoxy cylinder, which contains 8 different material inserts with a diameter of
12.2 mm. The average RSP of the inserts was evaluated in an 8 mm diameter
circle in the middle of the inserts and compared to the real RSP values
investigated by [3]. The standard deviation of the RSP was also evaluated in
every insert to characterise the noise of the image.
### 3.3. Steps of the Simulation with the Algorithm
A simulation code was developed to test the Richardson – Lucy algorithm,
divided into the following steps (schematic in figure 3):
Figure 3. Simulation steps.
* (1)
The data taking was simulated with the Monte Carlo method. The beam and the
phantom were modeled appropriately in the simulation. Geant4 (version 11.0.0)
[1, 2] was used with GATE (version 9.2) [12, 11]. In the reference physics
list settings, QGSP_BIC_EMY was activated for the calculations. Data taking of
one slice was simulated from 180 directions in $2^{\circ}$ steps. A field of
view (FOV) of 220 mm was applied for the Derenzo phantom (111 beam positions
with 2 mm steps), and a 170 mm FOV was used in the case of the CTP404 phantom
(86 beam positions with 2 mm steps). Overall, $\sim 2$ million and $\sim 1.5$
million primary protons were simulated, of which $\sim 1.2$ million and $\sim
0.95$ million remained after 3 sigma filtering for the Derenzo and the CTP404
phantoms, respectively. Instead of modeling the detector in the MC simulation,
the exact position, direction and energy of the protons were read out at the
position of the first tracker layer, and the measurement uncertainties were
assigned in the next step.
* (2)
In this step the errors of the position and direction measurements were drawn
from correlated Gaussian distributions and added to the exact positions and
directions of the simulated protons. The measurement uncertainty was
calculated based on the guideline of [15]. The WEPL measurement error was also
randomly assigned from a Gaussian distribution to the WEPL of the protons
calculated from their energy losses, simulated in the previous step. The
parameters from table 1 were used. In the case of the ideal setup this step
was skipped, due to the lack of errors in the idealized case.
* (3)
A 3 sigma filtering was applied to the direction and WEPL of the protons
originating from the same beam spot. The goal of this step was to filter out
protons that underwent nuclear collisions in the patient. This type of
filtering was suggested and used by [25].
* (4)
Calculation of the most probable incoming and outgoing positions of the
protons on a cylinder around the phantom was performed. In this step the
formalism of [15] was applied. The diameter of the cylinder was chosen to be
10 mm wider than that of the phantom in order to avoid artefacts.
* (5)
The Richardson – Lucy algorithm was used to reconstruct the RSP distribution
from the individual proton histories. On-the-fly system matrix calculation was
applied, based on a simplified probability density around a third order spline
approximation of the MLP.
* (6)
In the final step the spatial resolution was evaluated based on the
reconstruction of the Derenzo phantom. The density resolution and the image
noise were calculated from the reconstructed CTP404 phantom.
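The 3 sigma cut of step (3) can be sketched as follows; this is a minimal version acting on a single scalar observable per proton within one beam spot (the real filter acts on direction and WEPL), and the function name is illustrative.

```python
import numpy as np

def three_sigma_mask(values):
    """Boolean mask keeping protons whose observable (e.g. exit angle or
    WEPL) lies within 3 standard deviations of the beam-spot mean.
    Outliers, typically protons that underwent nuclear collisions, are
    rejected."""
    mu = values.mean()
    sigma = values.std()
    return np.abs(values - mu) <= 3.0 * sigma
```

Applied per beam spot, such a cut removed roughly 35-40% of the simulated primaries quoted in step (1).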
Calculations were done on the hardware of the Wigner Scientific Computing
Laboratory. The computationally demanding part of the algorithm was run on
four 1080 Ti GPU cards. The focus of the current work was the proof of concept
of probability-density-based system matrix calculation with the Richardson –
Lucy reconstruction algorithm, so the authors did not focus on the
optimization of the implemented code.
### 3.4. Evaluation of the Derenzo Phantom
The evaluation of the measured phantom is based on the above simulation, and
starts with the comparison of the reconstructed intensity at the positions of
the aluminium rods (peaks) and of the medium between them (valleys), as
demonstrated in figure 4. After subtraction of the background (defined by the
tails of the distribution) the valley-to-peak ratio can be calculated.
Figure 4. The RSP distribution along the sidelines (with three different
colors) of the triangle from the 4 mm rods in the reconstructed Derenzo
phantom. The Bergen pCT setup was applied.
The blurring effect of the reconstruction is modeled as a convolution with the
so-called point-spread function. It is a Gaussian function, and its Fourier
transform is the modulation transfer function (MTF). The frequency at which
the MTF drops to 10% (measured in units of line pairs per cm) can be derived
from the valley-to-peak ratio (figure 5) and quantifies the spatial resolution
of the reconstructed image. If the valley-to-peak ratio is too close to zero
or one, the image noise suppresses the information about the point-spread
function, so one sector of the Derenzo phantom covers only a limited
resolution range. The phantom contains six sectors with different rod
diameters to increase the range of resolution that can be evaluated.
Figure 5. The spatial resolution (in units of line pairs per cm) as a function
of the valley-to-peak ratio.
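Under the Gaussian point-spread-function assumption stated above, the 10% MTF frequency follows directly from the PSF width; a sketch of that conversion (the full valley-to-peak-to-MTF mapping of figure 5 is not reproduced here):

```python
import math

def mtf10_lp_per_cm(psf_sigma_mm):
    """Frequency at which the MTF of a Gaussian PSF falls to 10%.

    For a Gaussian PSF, MTF(f) = exp(-2 * pi^2 * sigma^2 * f^2), so
    f10 = sqrt(ln(10) / 2) / (pi * sigma) in line pairs per mm,
    here converted to line pairs per cm.
    """
    f10_per_mm = math.sqrt(math.log(10.0) / 2.0) / (math.pi * psf_sigma_mm)
    return 10.0 * f10_per_mm
```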
## 4\. Results
As we pointed out earlier, in our study we investigated a simplified model
with a single-sided detector setup. The center line of all beams falls into
one plane perpendicular to the rotation axis. This image slice was
reconstructed with 256$\times$256 pixels of 1 mm2 size. Every proton was
assigned to this layer, without taking into account the deviation of its path
in the direction of the rotation axis. In the reconstruction step, every
reconstructed image was evaluated after 600 iterations, as an optimum between
the spatial resolution and the image noise. The standard deviation ($\sigma$)
around the MLP of the protons was set to about 0.4 mm, 0.5 mm and 1.0 mm,
according to the spatial uncertainty of the proton path, for the ideal,
silicon pixel and silicon strip detector layers, respectively.
The reconstructed images of the Derenzo and CTP404 phantoms are shown in
figure 6, in the left and right columns, respectively.
Figure 6. Richardson – Lucy algorithm based reconstruction of the Derenzo and
CTP404 phantoms is presented on the left and right columns, respectively. From
top to bottom the ideal, the pixel detector and the strip detector layers were
drawn.
The top row of figure 6 presents the idealized case, which is certainly the
clearest reconstruction for both the Derenzo and CTP404 phantoms. The middle
and bottom rows present more realistic imaging models with silicon pixel and
silicon strip detectors, respectively. It is clearly visible that as the
position and direction measurement uncertainties increase, the spatial
resolution becomes worse and worse.
To quantify the quality of the above reconstructed images, the iteration-by-
iteration evolution of the spatial resolution, the image noise and the density
resolution is drawn in figure 7, from left to right. The spatial resolution
(left panel) was found to be 2.4 lp/cm for the ideal setup, and 2.0 lp/cm and
1.5 lp/cm for the silicon pixel and the silicon strip detector based setups,
respectively. The noise (middle panel) of the ideal and silicon pixel setups
appears similar (5% and 4.8%, respectively), while the silicon strip detector
based setup reached significantly lower noise (3.3%). This observation
indicates that if the number of protons is fixed, then the $\sigma$ of the
reconstruction has the most important effect on the noise, rather than the
spatial or WEPL measurement uncertainties. We note that the value of the
$\sigma$ parameter was set significantly higher in the case of the silicon
strip setup, following the higher uncertainty in the proton path measurement
compared with the other two detector layers.
Figure 7. Left: the spatial resolution, middle: the image noise and right: the
average relative RSP error as a function of the iteration number.
The error of the density reconstruction decreases quickly up to 70-90
iterations, thereafter it saturates for the realistic setups, while for the
ideal setup it keeps improving at large iteration numbers. The average
relative RSP difference (except air) was found to be 0.3% for the ideal and
0.5% for the realistic setups after 600 iterations. The RSP of the air,
instead of the real 0.001, was found to be 0.036, 0.051 and 0.061 for the
ideal, the silicon pixel and the silicon strip setups. The density resolution
of the tissues around and above water can thus be reconstructed with better
accuracy than the required 1% [21], but the density resolution is poor for
low density regions. The reconstructed RSP of the air inserts was still
decreasing significantly after 600 iterations, so even more iterations may
resolve this discrepancy. The RSP of all inserts (except air) was
underestimated, most likely caused by the overestimated RSP of the air around
the phantom. If the reconstruction area were limited to the area of the
phantom (instead of the whole $256\times 256$ mm2), the RSP estimation of
these inserts would probably be even more accurate.
## 5\. Discussion
Based on our simulations, we found that the Richardson – Lucy algorithm with
probability-density-based proton-phantom interaction reached 2.4 lp/cm spatial
resolution for the ideal and 2.0 lp/cm for a realistic setup. The density
resolution was found to be significantly better than the required 1% RSP
accuracy, except in the low density region. We observed that increasing the
number of iterations further improves the spatial resolution and, in
parallel, the density resolution at low RSP as well. Meanwhile, the larger
noise (around 5% after 600 iterations) limits the applicable number of
iterations. This limitation could be overcome by imaging with higher
statistics, or perhaps by implementing superiorization, which reduces the
noise.
## 6\. Summary
In this work the application of the Richardson – Lucy algorithm with
probability-density-based proton-phantom interaction calculation for proton CT
image reconstruction has been presented. The authors applied clinically
realistic setups and parameters in the Monte Carlo simulations: 400 mm
detector-isocenter distance, 1.5-2 million primary protons per image slice and
realistic beam and detector characteristics. For testing and for the
evaluation of the resolutions, two widely used phantoms were applied: the
Derenzo and the CTP404.
The authors conclude that the presented reconstruction method meets the
required density resolution, with a density resolution similar to that of the
state-of-the-art prototypes. The spatial resolution of the images is
promising but has not reached the clinical requirements yet; further
development of the algorithm is necessary. It is also important to mention
that although the reconstructed images are noisy due to the low statistics,
they are almost artefact free without the application of any post-processing
method.
The authors are continuing the algorithm development, with a focus on the
spatial resolution and on the reconstruction time, which limited the
investigations of the current work to only one layer. A speedup of the
algorithm is also planned.
## Members of the Bergen pCT Collaboration
Max Aehlea, Johan Almeb, Gergely Gábor Barnaföldic, Tea Bodovab, Vyacheslav
Borshchovd, Anthony van den Brinke, Mamdouh Chaarb, Viljar Eikelande, Gregory
Feofilovf, Christoph Garthg, Nicolas R. Gaugera, Georgi Genovb, Ola Grøttvikb,
Havard Helstruph, Sergey Igolkinf, Ralf Keideli, Chinorat Kobdajj, Tobias
Kortusi, Viktor Leonhardtg, Shruti Mehendaleb, Raju Ningappa Mulawadei, Odd
Harald Odlandk, b, George O’Neillb, Gábor Pappl, Thomas Peitzmanne, Helge Egil
Seime Pettersenk, Pierluigi Piersimonib,m, Maksym Protsenkod, Max Rauchb,
Attiq Ur Rehmanb, Matthias Richtern, Dieter Röhrichb, Joshua Santanai,
Alexander Schillingi, Joao Secoo, p, Arnon Songmoolnakb, j, Jarle Rambo
Sølieq, Ákos Sudárc, r Ganesh Tambaveb, Ihor Tymchukd, Kjetil Ullalandb,
Monika Varga-Kofaragoc, Lennart Volzs, t, Boris Wagnerb, Steffen Wendzeli,
Alexander Wiebeli, RenZheng Xiaob, u, Shiming Yangb, Hiroki Yokoyamae,
Sebastian Zillieni
a) Chair for Scientific Computing, TU Kaiserslautern, 67663 Kaiserslautern,
Germany; b) Department of Physics and Technology, University of Bergen, 5007
Bergen, Norway; c) Wigner Research Centre for Physics, Budapest, Hungary; d)
Research and Production Enterprise “LTU” (RPELTU), Kharkiv, Ukraine; e)
Institute for Subatomic Physics, Utrecht University/Nikhef, Utrecht,
Netherlands; f) St. Petersburg University, St. Petersburg, Russia; g)
Scientific Visualization Lab, TU Kaiserslautern, 67663 Kaiserslautern,
Germany; h) Department of Computer Science, Electrical Engineering and
Mathematical Sciences, Western Norway University of Applied Sciences, 5020
Bergen, Norway; i) Center for Technology and Transfer (ZTT), University of
Applied Sciences Worms, Worms, Germany; j) Institute of Science, Suranaree
University of Technology, Nakhon Ratchasima, Thailand; k) Department of
Oncology and Medical Physics, Haukeland University Hospital, 5021 Bergen,
Norway; l) Institute for Physics, Eötvös Lóránd University, 1/A Pázmány P.
Sétány, H-1117 Budapest, Hungary; m) UniCamillus – Saint Camillus
International University of Health Sciences, Rome, Italy; n) Department of
Physics, University of Oslo, 0371 Oslo, Norway; o) Department of Biomedical
Physics in Radiation Oncology, DKFZ—German Cancer Research Center, Heidelberg,
Germany; p) Department of Physics and Astronomy, Heidelberg University,
Heidelberg, Germany; q) Department of Diagnostic Physics, Division of
Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway; r)
Budapest University of Technology and Economics, Budapest, Hungary; s)
Biophysics, GSI Helmholtz Center for Heavy Ion Research GmbH, Darmstadt,
Germany; t) Department of Medical Physics and Biomedical Engineering,
University College London, London, UK; u) College of Mechanical & Power
Engineering, China Three Gorges University, Yichang, People’s Republic of
China
## Acknowledgement
The authors gratefully acknowledge the support of the Hungarian National
Research, Development and Innovation Office (NKFIH) under contract numbers
OTKA K135515, 2019-2.1.6-NEMZ_KI-2019-00011, and 2020-2.1.1-ED-2021-00179.
This work was also supported by the Research Council of Norway (Norges
forskningsråd) and the University of Bergen, grant number 250858. The authors
acknowledge the support of the Trond Mohn Foundation (BFS2017TMT07).
Computational resources were provided by the Wigner Scientific Computing
Laboratory (WSCLAB).
# An Optimized Privacy-Utility Trade-off Framework for Differentially Private
Data Sharing in Blockchain-based Internet of Things
Muhammad Islam Mubashir Husain Rehmani Munster Technological University,
Rossa Avenue, Bishopstown, Cork, Ireland Jinjun Chen
###### Abstract
Differentially private (DP) query-response mechanisms have been widely
adopted in Internet of Things (IoT) applications to leverage the benefits of
data analysis. Sensitive information is protected by adding noise to the
query response, which hides individual records in a dataset. However, the
added noise degrades accuracy, giving rise to a privacy-utility trade-off.
Moreover, the DP budget or cost $\epsilon$ is often fixed, and it accumulates
under sequential composition, which limits the number of queries.
Therefore, in this paper, we propose an optimized privacy-utility trade-off
framework for data sharing in IoT (OPU-TF-IoT). Firstly,
OPU-TF-IoT uses an adaptive approach to utilize the DP budget $\epsilon$ by
considering a new metric, the population (dataset) size, along with the query.
Secondly, our proposed heuristic search algorithm reduces the DP budget
accordingly while satisfying both the data owner and the data user. Thirdly,
to make the utilization of the DP budget transparent to data owners, a
blockchain-based verification mechanism is also proposed. Finally, the
proposed framework is evaluated using real-world datasets and compared with
the traditional DP model and other related state-of-the-art works. The
results confirm that our proposed framework not only utilizes the DP budget
$\epsilon$ efficiently but also optimizes the number of queries. Furthermore,
data owners can verify through our blockchain-based mechanism that their data
is shared as agreed, which encourages them to share their data in the IoT
system.
###### Index Terms:
IoT, Blockchain, differential privacy, data sharing, privacy-utility trade-off.
## I Introduction
The Internet of Things (IoT) has enabled enhanced, intelligent, and smart
services and applications in various domains such as smart health, smart
cities, smart industry, intelligent transportation, and recommender systems
[1]. The backbone of these applications is large-scale data collection from
IoT devices, which is then mined for beneficial trends and patterns that
support intelligent decision making. For instance, medical data collected in
hospitals can provide useful insights for practitioners and researchers [2].
Similarly, in a smart factory, the data collected from various machines and
devices can be used for predictive maintenance [3]. In intelligent
transportation, the data collected from vehicles can be utilized for traffic
control purposes [4]. Moreover, the data collected from vehicles, smart
factories, and hospitals can be combined to support smart urban living
through smart cities [5]. However, the data owner's sensitive and identifying
information may leak during the analysis of the data. As a result, data
owners are often reluctant to share their data [6, 7, 8].
Figure 1: (a) Differentially private data sharing in IoT; (b) illustration of
relative error vs. population size.
Recently, differential privacy has gained popularity in the context of
private data sharing in IoT [9]. Its main principle is to hide an individual
record within a group of records, or dataset, while calculating aggregated
results [10]. The aggregated results can either be published at once or
shared as responses to queries sent by data users. In this work, we consider
data sharing through queries because it is the more practical of the two
[11]. In this case, the data sharing model consists of a set of data owners,
a data curator, and a data user (or set of data users). A scenario of the
data sharing model in IoT is shown in Fig. 1(a). Here, the curator is
trusted, but the data user can act as an adversary and leak sensitive
information about the data owners. To prevent this leakage, the curator
inserts random noise before sharing the response with the data user. More
random noise added to the query response means higher privacy preservation,
and vice versa. However, the random noise negatively impacts the accuracy of
the query response. This creates a trade-off between privacy and utility
(accuracy): increased privacy results in lower accuracy, and vice versa [12].
It should be noted that our previous work [13] addressed the
transparency-privacy trade-off problem, which is different from the
privacy-utility trade-off problem. The current work focuses on optimizing the
trade-off and increasing the number of queries under a given differential
privacy budget.
Similarly, another drawback of differential privacy is that the privacy
budget or privacy cost, denoted $\epsilon$, accumulates over sequential
queries due to the sequential composition property of differential privacy
[9]. Consequently, a given privacy budget allows only a small number of
queries under the constraint of differential privacy. Because of these
drawbacks, differentially private models cannot be adopted on a large scale
in IoT-based applications. In this context, various studies have suggested
innovative techniques to solve the above-mentioned problems. For instance,
game-theoretic models were adopted to solve the privacy-utility trade-off
problem by selecting suitable values of the privacy budget in [12, 14, 15].
Furthermore, to satisfy both data owners and data users, reinforcement-based
and heuristic-based approaches were adopted to optimize the number of queries
[11, 16, 17]. Moreover, in [18], a mechanism was introduced to efficiently
utilize the privacy budget, and blockchain was adopted to satisfy data owners
regarding the utilization of their data.
However, the above-mentioned works use a fixed privacy budget allocation
approach and fail to consider the relationship between the population
(dataset) size and the accuracy of the query response for query functions
such as count, average, median, and mode. For instance, Fig. 1(b) illustrates
how the population size impacts the accuracy of a count query response [19].
For example, population_1 represents the medical records of a whole country
and population_2 represents the medical records of a state or specific area
code. Also, in real-world scenarios, data users are not always interested in
query evaluation over the whole population (dataset); they often need query
evaluation over only a specific portion of it. It is evident from Fig. 1(b)
that, for the same privacy budget and query type, Relative error_1 for
population_1 with size 1000 is much smaller than Relative error_2 for
population_2 with size 100. Since relative error and accuracy are inversely
related, a high relative error means low accuracy and vice versa. In other
words, to achieve the same accuracy of the query response over the two
populations, the privacy budget for population_1 can be smaller than the
privacy budget for population_2. However, the existing mechanisms treat each
data user uniformly and do not consider the population size. Consequently,
every query is evaluated with the same allocated budget regardless of the
population, which satisfies both data owner and data user but wastes privacy
budget, so the allocated budget is exhausted quickly. As a result, the total
number of queries that can be answered is reduced. Apart from this, to avoid
a centralized authority collecting and processing the data, the local
differential privacy model has been adopted; however, it significantly
reduces the accuracy of the aggregated results. Therefore, to obtain accurate
aggregated results with transparency in the operations of the centralized
curator, more efficient and enhanced privacy-preserving models are needed.
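The population-size effect described above can be checked numerically. The
following sketch (with hypothetical populations and parameter values, not the
paper's experiment) perturbs a count query with Laplace noise of scale
$1/\epsilon$ and compares the mean relative error for two population sizes:

```python
import numpy as np

def mean_relative_error(true_count: int, epsilon: float, trials: int = 5000) -> float:
    """Average |noise| / true_count for a Laplace-perturbed count query (sensitivity 1)."""
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon, size=trials)
    return float(np.mean(np.abs(noise)) / true_count)

np.random.seed(1)
err_large = mean_relative_error(true_count=1000, epsilon=0.1)  # population_1
err_small = mean_relative_error(true_count=100, epsilon=0.1)   # population_2
# For the same epsilon, the smaller population suffers roughly 10x the relative error.
```

This mirrors Fig. 1(b): to reach equal relative error, the larger population
can tolerate a smaller $\epsilon$, which is exactly the saving OPU-TF-IoT
aims to exploit.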
Therefore, this paper proposes an optimized privacy-utility trade-off
framework for differentially private data sharing in IoT (OPU-TF-IoT) to
address the above-mentioned problems. The novelty of the proposed framework
is that it uses an adaptive approach to utilize the privacy budget,
considering a new metric of population size and priority along with the data
user's query, to optimize the number of queries. Similarly, an algorithm is
proposed to avoid wasting privacy budget through the uniform treatment of
data users. Moreover, it reduces the privacy cost while satisfying the needs
of both data owners and data users. Finally, inspired by the work in [18], a
verification mechanism using blockchain technology is proposed so that data
owners can verify that their data is shared as agreed. The main contributions
of our work are as follows.
* •
An optimized privacy-utility trade-off framework for IoT-based applications
(OPU-TF-IoT) is proposed, which uses an adaptive approach considering the new
metric of population size along with the query to optimize the number of
queries while satisfying both data owners and data users.
* •
An algorithm is proposed to reduce the privacy cost by avoiding the waste
caused by the uniform treatment of data users through heuristic search, thus
leaving the saved privacy budget available for more query responses.
* •
A new blockchain-based mechanism is proposed through which data owners can
verify the utilization of the privacy budget, increasing their trust in the
data sharing system.
* •
A comprehensive comparison is presented using real-world datasets to verify
the improvement of the proposed framework (OPU-TF-IoT) over the traditional
differential privacy model and other state-of-the-art mechanisms in terms of
the optimized privacy-utility trade-off.
The rest of the paper is organized as follows. Section II reviews relevant
works from the literature. Section III presents the proposed framework in
detail. Section IV presents the performance evaluation and comparison.
Finally, Section V concludes the work.
## II Literature review
Various techniques have previously been adopted to efficiently utilize the
differential privacy budget in order to optimize the privacy-utility
trade-off. For instance, in [18], a blockchain-based mechanism was proposed
to record each query, its type, and the privacy budget utilized in generating
the perturbed response. New incoming queries are checked against this record,
and on a match the previous response is returned instead of spending new
privacy budget. In this way, the privacy budget is utilized efficiently.
However, the data users are treated uniformly and their priorities are
ignored, which wastes privacy budget. Furthermore, if the number of repeated
queries is small, this technique offers no benefit and behaves like the
traditional differential privacy model.
Similarly, the work in [11] introduced a mechanism of batch queries that acts
similarly to the technique of [18]. Furthermore, a heuristic search algorithm
was used to find a setup in which the data curator can satisfy the maximum
number of data users while maintaining their privacy preservation level. To
reduce the complexity of the heuristic algorithm, reinforcement learning was
adopted, which significantly improves the search time. This approach performs
better than the traditional differential privacy model. However, disjoint
datasets were assumed, even though records in real-world scenarios are
correlated. Furthermore, only single queries were considered while multiple
queries were ignored, which limits its applicability in IoT applications. In
contrast to the previous two techniques, an adaptive approach was used to
optimize the privacy-utility trade-off in [16]. The proposed approach selects
suitable noise generating algorithms based on the distribution of the data,
the query functions, and the privacy settings, which improves the
privacy-utility trade-off. However, it is not practical to find an optimized
threshold for the sampling of tuples that maintains the same utility of the
data. Furthermore, the priorities of data users were ignored. Another similar
work, [17], proposed a general framework for location privacy preservation.
The main idea is to apply a different noise scale to each point in a movement
trajectory, which guarantees utility. However, it also adopts the traditional
differential privacy model for budget utilization, and efficient utilization
of the privacy budget is not its primary focus. Furthermore, the model is
developed specifically for location privacy preservation and thus cannot be
adopted in other scenarios. Similarly, in [12], the problem of correlated
records was discussed. More specifically, the proposed model considers that a
user's privacy is affected not only by its own choice of privacy budget but
also by the choices of its neighbors. It thereby improves the privacy
preservation of individuals in correlated databases. However, the
optimization of the privacy-utility trade-off was ignored.
Apart from this, in [15], a total variation distance was adopted to measure
privacy leakage. The proposed approach showed that the optimal
privacy-utility trade-off problem can be solved by a standard linear program.
However, the proposed model is very general and does not consider the
relationship between population size and data accuracy. Consequently,
although it solves the privacy-utility trade-off problem, a mechanism to
avoid wasting privacy budget is missing. In [20], privacy is modelled as a
good to be sold between data owners and data collectors. A contract-theoretic
approach is proposed in which the data collector deals with the
privacy-utility trade-off. A contract between the parties describes how much
the data owner should receive for a certain level of privacy preservation. In
this way, the data collector can better decide whether a higher utility is
needed or a lower price should be paid in exchange for a higher guarantee of
protecting the data owners' privacy.
TABLE I: Key notations and their descriptions
Notation | Meaning
---|---
$\epsilon$ | Differential privacy budget
$\epsilon_{sut}$ | Suitable privacy budget
$\epsilon_{t}$ | Total privacy budget
$\epsilon_{def}$ | Default privacy budget, $0<\epsilon_{def}<\epsilon_{t}$
$\mu$ | Mean of the Laplace distribution
$\lambda$ | Laplace scale
${\bigtriangleup}f$ | Sensitivity
HSA | Heuristic search algorithm
C | Data curator
O | Set of data owners
U | Set of data users
$\bm{\epsilon}$ | Set of desired $\epsilon$ values from data users
$q_{i}$ | $i^{th}$ query
$q^{{}^{\prime}}_{i}$ | $i^{th}$ query response
$T_{n,m}$ | Data table
$A^{i}_{req}$ | Accuracy required by the $i^{th}$ data user
$A^{i}_{act}$ | Calculated/actual accuracy of the $i^{th}$ query
$F$ | Query function
$N$ | Query type
$r^{i}_{err}$ | Relative error in $q^{{}^{\prime}}_{i}$
$\Upsilon_{i}$ | Numerical value of the query $q_{i}$
$\tau$ | Tolerance coefficient
$\eta$ | Decrement factor
$\rho$ | Minimum number of satisfied data users
Figure 2: Demonstration of the query-response mechanism with the recording of
queries on the blockchain.
In [14], the privacy-accuracy trade-off was discussed in the context of
distributed data mining, where each individual user's selected privacy level
impacts the accuracy of the data seen by the classifier or mediator. A game
model is adopted to represent the interaction among users, in which a user
cannot observe the privacy budgets of others. The existence of a satisfaction
equilibrium (SE) is then proved, in which each user's individual constraints
are satisfied. However, the focus of these works is not the utilization of
the budget or maximizing the number of queries.
Similarly, the work in [21] proposed a generic model for selecting a suitable
privacy preservation mechanism based on the dimensions or type of the
dataset. Fuzzy logic is used to compute a fuzzy index (FI), which decides
which privacy mechanism to select. However, explicit optimization of the
privacy-utility trade-off, increasing the number of queries, and avoiding the
waste of privacy budget are missing. Furthermore, the calculation of the FI
is computationally costly and hence not scalable. Due to the above-discussed
limitations of the existing mechanisms, differentially private data sharing
in IoT still needs further improvement. More specifically, the problems of
optimizing the privacy-utility trade-off, of privacy budget wasted by the
uniform treatment of data users, and of the lack of a verification mechanism
for data owners to track the privacy budget and data sharing activities
remain open to the research community. To this end, in this paper, we present
a solution to the above-mentioned problems through our optimized
privacy-utility trade-off framework (OPU-TF-IoT).
## III Proposed Work: OPU-TF-IoT
In this section, we present our proposed framework in detail. Firstly, to set
up the background, the preliminaries section presents the basics of the
differential privacy model and blockchain. Afterwards, the system model,
adversary model, and the proposed heuristic search algorithm (HSA) are
presented. Throughout this paper, the word dataset is used to represent a
population, and accuracy is used to represent the utility of the data.
Similarly, $\epsilon$ denotes the privacy budget or cost. The other notations
used in the paper are summarized in Table I.
### III-A Preliminaries
#### III-A1 Differential privacy
C. Dwork first introduced differential privacy for statistical databases
[10]. It is based on the principle that the output of an algorithm applied to
a dataset does not change in a significant way when a single record is added
to or removed from the dataset. The formal definition of differential privacy
given in [10] is as follows:
Definition 1. A randomized function $Z$ satisfies $\epsilon$-differential
privacy if, for all datasets $D_{i}$, $D_{j}$ differing in one record, and
for all $S\subseteq Range(Z)$, the following holds [10]:
$\displaystyle P[Z(D_{i})\in S]\leq e^{\epsilon}\times P[Z(D_{j})\in S]$ (by
[10]) (1)
where $Range(Z)$ is the range of all possible outputs of the function $Z$,
$\epsilon$ is the differential privacy budget such that $\epsilon>0$, and
$D_{i}$, $D_{j}$ are two neighboring databases such that $D_{j}$ is generated
by removing or adding a single record of $D_{i}$, and vice versa.
Furthermore, the maximum difference between query answers over $D_{i}$ and
$D_{j}$ is known as the sensitivity, denoted ${\bigtriangleup}f$. The
sensitivity depends on the type of query function. For instance, for count
queries, the maximum difference between query responses calculated over
$D_{i}$ and $D_{j}$ is 1. Mathematically, it can be written as follows:
$\displaystyle{\bigtriangleup}f=\max_{D_{i},D_{j}}{\|f(D_{i})-f(D_{j})\|}_{1}$ (by [10, 22]) (2)
Furthermore, two popular mechanisms are used in the literature to implement
differential privacy: (i) the Laplace mechanism and (ii) the exponential
mechanism [9]. We use the Laplace mechanism because it is suitable for
numerical queries. The Laplace distribution function is given as follows:
$\displaystyle
Lap(x,\mu,\lambda)=\frac{1}{2\lambda}e^{\frac{-|x-\mu|}{\lambda}}$ (by [9])
(3)
where $\lambda$ and $\mu$ are the scale and mean of the Laplace distribution,
respectively. Furthermore,
$\lambda=\frac{{\bigtriangleup}f}{\epsilon}$, and $x\in\mathbb{R}$.
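As a concrete illustration (a minimal sketch with hypothetical data and
function names, not the paper's implementation), the Laplace mechanism for a
count query uses ${\bigtriangleup}f=1$ and hence $\lambda=1/\epsilon$:

```python
import numpy as np

def dp_count(records, predicate, epsilon: float) -> float:
    """epsilon-DP count query via the Laplace mechanism (Eq. 3 with mu = 0)."""
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon  # lambda = sensitivity / epsilon, sensitivity = 1 for counts
    return true_count + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: how many patients are 40 or older?
ages = [23, 35, 41, 29, 52, 63, 37, 48]
noisy_answer = dp_count(ages, lambda a: a >= 40, epsilon=0.5)  # true answer is 4
```

A smaller $\epsilon$ inflates the noise scale, trading accuracy for privacy,
which is precisely the privacy-utility trade-off discussed above.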
Apart from this, two composition theorems are used in the context of
differential privacy, given below [9].
##### Theorem 1 (Parallel composition)
For a set of privacy preserving mechanisms
$\textbf{M}=\\{M_{1},M_{2},M_{3}…M_{m}\\}$, if every mechanism $M_{i}$
satisfies $\epsilon_{i}$-differential privacy on a disjoint subset $D_{i}$ of
the dataset, then M satisfies differential privacy equivalent to
$\max_{i}\epsilon_{i}$ [9].
Figure 3: Overview of the proposed framework (OPU-TF-IoT).
##### Theorem 2 (Sequential composition)
For a set of privacy preserving mechanisms
$\textbf{M}=\\{M_{1},M_{2},M_{3}…M_{m}\\}$, if every mechanism $M_{i}$
satisfies $\epsilon_{i}$-differential privacy on the same dataset $D$, then M
satisfies differential privacy equivalent to
$\sum_{i=1}^{m}\epsilon_{i}$ [9].
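Theorem 2 is what makes careful budget utilization necessary: every query
over the same dataset consumes part of the total budget $\epsilon_{t}$. A
minimal budget-accountant sketch (illustrative class and method names, not
the paper's algorithm) looks like this:

```python
class BudgetAccountant:
    """Tracks cumulative epsilon under sequential composition (Theorem 2)."""

    def __init__(self, total_epsilon: float):
        self.total = total_epsilon  # epsilon_t agreed with the data owners
        self.spent = 0.0

    def can_answer(self, epsilon: float) -> bool:
        # A query is admissible only if it keeps the cumulative sum within epsilon_t.
        return self.spent + epsilon <= self.total

    def charge(self, epsilon: float) -> None:
        if not self.can_answer(epsilon):
            raise RuntimeError("total privacy budget exhausted")
        self.spent += epsilon

acct = BudgetAccountant(total_epsilon=1.0)
acct.charge(0.3)  # first query on the dataset
acct.charge(0.3)  # second query on the same dataset
assert not acct.can_answer(0.5)  # only 0.4 of the budget remains
```

Reducing the per-query $\epsilon$ where the population size allows, as
OPU-TF-IoT does, directly increases how many charges fit under $\epsilon_{t}$.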
#### III-A2 Blockchain
Satoshi Nakamoto first introduced the concept of blockchain in 2008 for
virtual currency [23]. A blockchain is a distributed ledger that keeps a
record of each transaction in the form of cryptographically linked data
blocks. In a distributed environment, it solves the problem of lack of trust
among participants through transparency, traceability, and verification of
each transaction [24, 25]. It has been adopted in various network scenarios
to increase the transparency of operations, avoid fraud, and enable tracking
and provenance. Every change, transaction, and network activity is verified
through its distributed consensus mechanism, which increases trust among the
participants of the network.
We therefore adopt blockchain to address the lack of transparency in data
processing, collection, and sharing by the centralized curator. In this way,
we retain the high accuracy of aggregated results offered by a centralized
data curator while making the processing and sharing of data transparent to
the data owners. In this work, we adopt Hyperledger Fabric because its
transaction processing is fast compared to other types of blockchain, which
results in high throughput [26].
### III-B System Model of OPU-TF-IoT
The proposed system model is presented in Fig. 2. Furthermore, the detailed
description of each component is given as following.
#### III-B1 Centralized Data Curator
A data curator, denoted C, collects data from the data owners. It could be,
for example, a server belonging to Facebook, Google, or a cellular network
operator, and it records the collected data in its own database. The data is
shared with third parties or governmental agencies for analysis. In this
context, the data curator and the data owners agree on a maximum privacy
budget, known as the total privacy budget $\epsilon_{t}$, which is then used
to perturb the data before sharing in order to protect the sensitive
information of individual data owners.
#### III-B2 Data Owner
A data owner is an individual person associated with an IoT device such as a
cell phone, smart car, or body sensor. The device holds the owner's location,
health details, financial transactions, and so on. The data is then collected
by the server (curator). In our system model, we consider a set of data
owners denoted $\textbf{O}=\\{O_{1},O_{2},O_{3}...O_{n}\\}$, where $n$ is the
number of data owners.
#### III-B3 Data User
A data user is a third-party organization, company, or governmental agency
that needs exploratory analysis of the data collected by the curator. We
assume a set of data users denoted
$\textbf{U}=\\{U_{1},U_{2},U_{3}...U_{k}\\}$, where $k$ is the number of data
users.
#### III-B4 Query
A query represents a statistical query such as Count, Average, Maximum, or
Minimum, denoted $q$. The data users in the set U send queries to the data
curator C, which evaluates them over the actual dataset. Afterwards, random
noise is inserted into the query response to perturb it before sharing it
with the data user; the perturbed response is denoted $q^{{}^{\prime}}$.
#### III-B5 Blockchain Network
The blockchain network is based on Hyperledger Fabric, as shown in Fig. 2.
The data curator and data user act as complete organizations with their own
databases, and each of them represents a Hyperledger Fabric node in the
network. Each data sharing event is recorded as a transaction on the
blockchain ledger. The contents of the transaction include the query type and
the differential privacy budget $\epsilon$ utilized in generating the
perturbed query response.
#### III-B6 Query and Verification by Data Owner
To make the data sharing event transparent and counter the low-level threat
of a privacy breach by the data curator, each data owner can send a query to
the Hyperledger Fabric network. For the query to be evaluated on the
blockchain ledger, the query transaction of Hyperledger Fabric is adopted
[26]. Afterwards, the response, containing the query type and the privacy
budget utilized, is returned to the concerned data owner O.
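The record-and-verify flow can be mimicked outside Hyperledger Fabric with a
hash-chained log. The sketch below is a plain-Python stand-in (illustrative
field names, not Fabric's actual transaction format) showing why tampering
with a recorded $\epsilon$ is detectable by a data owner:

```python
import hashlib
import json

def append_block(chain: list, query_type: str, epsilon: float) -> dict:
    """Record one data-sharing event, linking it to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"query_type": query_type, "epsilon": epsilon, "prev": prev_hash}
    payload["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    chain.append(payload)
    return payload

def verify_chain(chain: list) -> bool:
    """A data owner replays the hashes to check that the log was not altered."""
    prev = "0" * 64
    for block in chain:
        body = {k: v for k, v in block.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != block["hash"]:
            return False
        prev = block["hash"]
    return True

ledger = []
append_block(ledger, "count", 0.2)
append_block(ledger, "average", 0.1)
assert verify_chain(ledger)
ledger[0]["epsilon"] = 0.9  # a dishonest curator rewriting the spent budget
assert not verify_chain(ledger)  # tampering is detected
```

In the actual framework this verification is delegated to Fabric's
distributed consensus rather than recomputed by each owner, but the tamper
evidence rests on the same chained-hash principle.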
### III-C Threat Model of OPU-TF-IoT
Two types of adversaries exist in the proposed system model: (i) the
centralized curator, and (ii) third parties (companies, organizations,
advertisement agencies, etc.). The curator can act as an honest-but-curious
adversary. Data owners trust the curator to ensure the privacy protection of
their sensitive information while it shares the aggregated results with third
parties. However, due to the lack of transparency in the operations of the
curator in traditional approaches, it can share the data with loose privacy
preservation, i.e., by using a large value of $\epsilon$ for its own benefit.
Therefore, in this work, the threat from the data curator is regarded as an
average-level threat.
On the other hand, third parties pose serious threats to the privacy of data
owners because of their strong background knowledge. Consequently, despite
using a suitable privacy budget for the perturbation of the query response, it
is likely that an individual can be exposed through a linking or inference
privacy attack. In a linking privacy attack, an adversary links the perturbed
data with background knowledge to obtain the actual data of a data owner.
Similarly, in an inference privacy attack, an adversary tries to predict the
actual data of a data owner using mathematical or statistical techniques such
as the average or median. Furthermore, in real-world scenarios, individual
data records may be correlated, which further increases the risk of a privacy
breach. Therefore, care must be taken to avoid a privacy breach by using a
suitable privacy budget agreed between the curator and the data owner.
Moreover, because data records in real-world scenarios are often correlated,
the curator should use the composition theorem, i.e., Theorem 2 given in
Section III-A1, to keep track of the maximum privacy budget.
In both cases, a privacy breach can result in the exposure of sensitive
information of data owners such as lifestyle, shopping activities, locations
visited, preferences, financial status, social relationships, and political
beliefs.
Input: privacy budget $\epsilon$, data table $T_{n,m}$, set of data users $\textbf{U}=\\{U_{1},U_{2},U_{3}...U_{k}\\}$, required accuracy $A^{i}_{req}$ by data user $U_{i}$, query function $F$, query type $N$, query $q_{i}$
Output: perturbed query response $q^{{}^{\prime}}_{i}$ with suitable privacy budget $\epsilon_{sut}$
Initialization: iteration $i=1$, total privacy budget $\epsilon_{t}$, default privacy budget $0<\epsilon_{def}<\epsilon_{t}$, random variable $x$, noise $=0$, mean $\mu=0$, sensitivity ${\bigtriangleup}f=1$, Laplace scale $\lambda=\frac{{\bigtriangleup}f}{\epsilon}$, $q_{i}=[A^{i}_{req},F,N]$, tolerance factor $\tau$, decrement factor $0<\eta<1$
1 while $i\leq k\land\epsilon_{t}\leq\epsilon_{def}$ do // check the budget availability and remaining data users
2   parse the parameters $A^{i}_{req}$, $F$, $N$ from $q_{i}$ of $U_{i}$
3   classify the $q_{i}$ based on the value of $N$ // Section III-D
4   Call QueryFunction($q_{i},\epsilon_{def}$)
5   get $A^{i}_{act}$ // using equation (5)
6   if $|A^{i}_{act}-A^{i}_{req}|\leq\tau$ then // from inequality (6)
7     $\epsilon_{t}\leftarrow\epsilon_{t}-\epsilon_{def}$ // decrement $\epsilon_{t}$
8     $q^{{}^{\prime}}_{i}\leftarrow q_{i}$
9   end if
10  else if $(|A^{i}_{act}-A^{i}_{req}|\not\leq\tau)\land(A^{i}_{act}<A^{i}_{req})$ then
11    needs an alternative plan
12    skip the query $q_{i}$
13  end if
14  else
15    $\epsilon_{sut}=\epsilon_{def}$
16    while $|A^{i}_{act}-A^{i}_{req}|\not\leq\tau$ do
17      $\epsilon_{sut}=\epsilon_{sut}-\eta$ // decrement by $\eta$
18    end while
19    Call QueryFunction($q_{i},\epsilon_{sut}$)
20    $\epsilon_{t}\leftarrow\epsilon_{t}-\epsilon_{sut}$ // decrement $\epsilon_{t}$
21    return $q^{{}^{\prime}}_{i}$
22  end if
23  $i\leftarrow i+1$
24 end while
25 return $\\{q^{{}^{\prime}}_{1},q^{{}^{\prime}}_{2},q^{{}^{\prime}}_{3}…q^{{}^{\prime}}_{k}\\}$ // set of query responses with adjusted privacy budget
FUNCTION QueryFunction($q_{i},\epsilon$):
  evaluate query $q_{i}$ on the data table $T_{n,m}$
  Call LaplacianFunction($q_{i},\epsilon$)
  $q^{{}^{\prime}}_{i}\leftarrow q_{i}+noise$ // add noise to $q_{i}$
  return $q^{{}^{\prime}}_{i}$
FUNCTION LaplacianFunction($q_{i},\epsilon$):
  generate noise using $f(x;\mu,\frac{{\bigtriangleup}f}{\epsilon})$ // equation (3)
  return $noise$ // Laplacian random noise
Algorithm 1 Heuristic search in OPU-TF-IoT (note: step 1 and 20 of our
proposed algorithm have been taken from [18])
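For illustration, the listing above can be sketched as runnable Python (the language of our implementation). This is a minimal sketch, not the deployed code: the helper names are ours, equation (3) is emulated with a standard-library Laplace sampler, and we re-evaluate the query inside the inner loop, which the pseudocode leaves implicit.

```python
import math
import random

def laplace_noise(scale, mu=0.0):
    # Laplace(mu, scale) as the difference of two i.i.d. exponential draws
    # (a stdlib stand-in for the density f(x; mu, Δf/ε) of equation (3)).
    e1 = -scale * math.log(1.0 - random.random())
    e2 = -scale * math.log(1.0 - random.random())
    return mu + e1 - e2

def query_function(true_value, eps, sensitivity=1.0):
    # QueryFunction: perturb the true answer with Laplace noise of scale Δf/ε.
    return true_value + laplace_noise(sensitivity / eps)

def actual_accuracy(true_value, noisy_value):
    # Equations (4) and (5): accuracy = 1 - relative error, clipped to [0, 1].
    r_err = abs(true_value - noisy_value) / abs(true_value)
    return 1.0 - min(r_err, 1.0)

def heuristic_search(queries, eps_t, eps_def, tau, eta):
    # queries: list of (true_value, required_accuracy) pairs, one per data user.
    responses = []
    for true_value, a_req in queries:
        if eps_t < eps_def:                      # budget availability check
            break
        eps = eps_def
        noisy = query_function(true_value, eps)
        a_act = actual_accuracy(true_value, noisy)
        if abs(a_act - a_req) <= tau:            # case 1: satisfactory as-is
            pass
        elif a_act < a_req:                      # case 2: skip, alternative plan
            continue
        else:                                    # case 3: shrink ε to avoid waste
            while abs(a_act - a_req) > tau and eps - eta > 0:
                eps -= eta
                noisy = query_function(true_value, eps)
                a_act = actual_accuracy(true_value, noisy)
        eps_t -= eps                             # charge the budget actually used
        responses.append(noisy)
    return responses, eps_t
```

The sketch charges $\epsilon_{def}$ in case 1 and the reduced $\epsilon_{sut}$ in case 3, while skipped queries consume no budget.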
### III-D Framework Design
The framework design is shown in Fig. 3. The data curator C collects the data
from the set of data owners O. At the same time, the privacy preservation
level, i.e., the privacy budget of each data owner, is also collected, which
results in a set of privacy budget values denoted as
$\bm{\epsilon}=\\{\epsilon_{1},\epsilon_{2},\epsilon_{3}...\epsilon_{n}\\}$.
Each data owner wants to decrease the privacy leakage by adopting a small
value of $\epsilon$. Afterwards, the data is recorded in the form of a table
with rows and columns, denoted as $T_{n,m}$, where $n$ represents the number
of rows and $m$ the number of columns. More specifically, the $n^{th}$ row
represents the record of the $n^{th}$ data owner in the dataset. Similarly,
the $m^{th}$ column represents the $m^{th}$ attribute of the records. For
instance, in a medical dataset, each row represents the record of a patient
while each column represents a specific disease. For simplicity, we assume
that the data curator selects the minimum privacy budget from the list
$\bm{\epsilon}$ as the total privacy budget $\epsilon_{t}$, which satisfies
the privacy requirements of all the data owners. Furthermore, the rows of
$T_{n,m}$ are considered correlated, which means that if an individual from
the table is isolated by the adversary, the risk of a privacy breach of other
related records increases [12].
The data user sends the query $q_{i}$, which consists of three parameters
denoted as $[A^{i}_{req},F,N]$, where $A^{i}_{req}$ is the required accuracy,
$F$ is the query function, and $N$ is the query type, defined as follows:
##### Accuracy $A^{i}_{req}$
it denotes the required accuracy of the query response set by the data user
$U_{i}$. To quantify accuracy, we first define the relative error of the query
response. According to [27], the relative error $r^{i}_{err}$ of the $i^{th}$
query response is defined as follows:
$\displaystyle
r^{i}_{err}=\frac{|\Upsilon_{i}-\Upsilon^{{}^{\prime}}_{i}|}{\Upsilon_{i}}$ (4)
where $\Upsilon_{i}$ and $\Upsilon^{{}^{\prime}}_{i}$ are the numerical values
of $q_{i}$ and $q^{{}^{\prime}}_{i}$, respectively.
Based on $r^{i}_{err}$ and the required accuracy $A^{i}_{req}$ of data user
$U_{i}$, the curator C calculates the actual accuracy $A^{i}_{act}$ of the
query response as follows:
$\displaystyle A^{i}_{act}=1-r^{i}_{err}$ (5)
where $0\leq r^{i}_{err}\leq 1$ and hence $0\leq A^{i}_{act}\leq 1$, with 0
meaning minimum accuracy and 1 meaning maximum accuracy. Consequently, to
satisfy the data user $U_{i}$, $A^{i}_{act}\geq A^{i}_{req}$ must hold.
Furthermore, if $A^{i}_{act}<A^{i}_{req}$, then an alternative plan is needed,
which should be agreed upon by the curator and the data user. For instance,
the data user can accept the lower accuracy, or the curator can generate a
more accurate query response with the consent of the data owners to satisfy
the data user.
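Equations (4) and (5) translate directly into code; the following is a small sketch with our own variable names:

```python
def relative_error(true_value, perturbed_value):
    # Equation (4): r_err = |Y - Y'| / Y (the true value is assumed nonzero).
    return abs(true_value - perturbed_value) / abs(true_value)

def actual_accuracy(true_value, perturbed_value):
    # Equation (5): A_act = 1 - r_err, clipped to the valid range [0, 1].
    return max(0.0, 1.0 - relative_error(true_value, perturbed_value))
```

For example, for a true count of 1000 and a perturbed response of 980, the relative error is 0.02 and the actual accuracy is 0.98.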
##### Query Function $F$
it denotes the query function, given as
$F\in\\{Count,Average,Maximum,Minimum\\}$. Each element of $F$ represents a
category of statistical query. Therefore, the curator uses the value of $F$ to
evaluate the associated query on $T_{n,m}$.
Figure 4: Overview of the privacy budget verification mechanism in the
proposed framework (OPU-TF-IoT).
##### Query Type $N$
it denotes the query type, defined as $N\in\\{0,1\\}$. Here, $N=0$ indicates
that the data user wants the query to be evaluated on the whole dataset,
whereas $N=1$ indicates that the data user is only interested in a part of the
dataset. For instance, if patients' medical records across the USA are
considered, the two types of queries are given below.
1. 1.
For $N=0$, $q_{i}$ is: how many patients throughout the USA have suffered from
disease $x$?
2. 2.
For $N=1$, $q_{i}$ is: how many patients have suffered from disease $x$ in the
New York region?
Therefore, by considering the parameters $[A^{i}_{req},F,N]$ of $q_{i}$, the
curator uses the proposed HSA to select a suitable $\epsilon$ and generate
$q^{{}^{\prime}}_{i}$, as shown in Fig. 3. The details of the HSA are given in
the next section.
Based on Definition 1 and Theorem 2, we define the guarantee of privacy
preservation against the adversaries discussed in Section III-C as follows:
Definition 2: If $\textbf{M}=\\{M_{1},M_{2},M_{3}…M_{k}\\}$ represents the set
of mechanisms for calculating the responses to the queries sent by the set of
data users U on the data table $T_{n,m}$, with the condition that each
mechanism $M_{i}\in\textbf{M}$ satisfies $\epsilon_{i}$-differential privacy,
then M satisfies $(\sum_{i=1}^{k}\epsilon_{i})$-differential privacy.
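Definition 2 (sequential composition) suggests a simple budget tracker on the curator's side; the following sketch is our own illustration, not part of the proposed system:

```python
class BudgetTracker:
    """Tracks cumulative privacy loss under sequential composition (Definition 2)."""

    def __init__(self, eps_total):
        self.eps_total = eps_total   # total budget epsilon_t
        self.spent = 0.0             # sum of per-query budgets so far

    def charge(self, eps_i):
        # Refuse the query if answering it would exceed the total budget;
        # otherwise add eps_i to the running sum (Theorem 2).
        if self.spent + eps_i > self.eps_total:
            return False
        self.spent += eps_i
        return True
```

Because the per-query losses simply add up, the tracker only needs a running sum and a comparison against $\epsilon_{t}$.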
Apart from this, the utilization of the data is defined in terms of the
accuracy $A_{act}$ of the query responses. Here, we use $A_{act}$ without the
superscript $i$ to denote accuracy in general rather than for a specific
$q_{i}$. Furthermore, due to the random noise addition, it is very difficult
to obtain a smooth value of the actual accuracy $A_{act}$ such that
$A_{act}\geq A_{req}$. Hence, a tolerance factor $\tau$ is introduced with
$0\leq\tau\leq 1$, defined as the fraction of accuracy deviation that a data
user can tolerate. Consequently, we define the utilization of the data as
follows:
Definition 3: For given values of $A_{act}$, $A_{req}$, and $\tau$, the
utilization of the data is satisfactory if the following holds:
$\displaystyle|A_{act}-A_{req}|\leq\tau$ (6)
Furthermore, if inequality (6) does not hold, then an alternative plan is
needed, as discussed earlier.
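Definition 3 amounts to a single comparison; in code (names ours):

```python
def utilization_satisfactory(a_act, a_req, tau):
    # Inequality (6): utilization is satisfactory iff |A_act - A_req| <= tau.
    return abs(a_act - a_req) <= tau
```

For example, with $\tau=0.02$ a response of accuracy 0.96 satisfies a requirement of 0.95, while a response of accuracy 0.90 does not.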
#### III-D1 Heuristic Search Algorithm (HSA)
The data curator executes the proposed algorithm 1 to adopt a suitable value
of $\epsilon$, denoted as $\epsilon_{sut}$, for generating a perturbed query
response $q^{{}^{\prime}}_{i}$. It is to be noted that two steps for managing
the privacy budget, i.e., step 1 ($\epsilon_{t}\leq\epsilon_{def}$) and step
20 ($\epsilon_{t}\leftarrow\epsilon_{t}-\epsilon_{sut}$) of our proposed
algorithm, have been taken from [18], whose algorithm we further improve in
this work. Furthermore, $\epsilon_{sut}$ is the minimum value of the privacy
budget which satisfies the accuracy requirement of the query $q_{i}$ sent by
$U_{i}$. Based on the parameters $[A^{i}_{req},F,N]$, the curator first uses a
default privacy budget $\epsilon_{def}$, selected randomly such that
$0<\epsilon_{def}<\epsilon_{t}$, to generate $q^{{}^{\prime}}_{i}$. Later in
this work, we propose an algorithm for the curator to choose the default
privacy budget $\epsilon_{def}$ in order to further optimize the privacy-
utility trade-off. Afterwards, the curator calculates the accuracy
$A^{i}_{act}$ by using equation (5). To minimize the effect of the randomness
of the Laplacian noise, the curator generates a vector of 1000 noise values
using the same $\epsilon$ and calculates the associated accuracy for each
noise value using equation (5). The average
$A^{i}_{act}=\frac{\sum_{j=1}^{1000}A^{j}_{act}}{1000}$ is then compared with
the $A^{i}_{req}$ value of $q_{i}$. According to inequality (6), the data
curator can take one of three decisions, given below.
1. 1.
If $|A^{i}_{act}-A^{i}_{req}|\leq\tau$ then the utilization is satisfactory
and the $q^{{}^{\prime}}_{i}$ is returned to the data user.
2. 2.
If $|A^{i}_{act}-A^{i}_{req}|\not\leq\tau$, and $A^{i}_{act}<A^{i}_{req}$,
then the data user is not satisfied, and an alternative plan is needed.
3. 3.
If $|A^{i}_{act}-A^{i}_{req}|\not\leq\tau$, and $A^{i}_{act}>A^{i}_{req}$,
then the data curator needs to adjust the $\epsilon$ in order to avoid the
waste of privacy budget.
In case 3 above, we introduce a decrement factor $\eta$ with $0<\eta<1$, which
is used to decrement the default $\epsilon_{def}$ until the condition
$|A^{i}_{act}-A^{i}_{req}|\leq\tau$ is satisfied. The detailed steps of the
three cases are given in lines 6, 10, and 16 of algorithm 1, respectively.
Finally, the perturbed query response set
$\\{q^{{}^{\prime}}_{1},q^{{}^{\prime}}_{2},q^{{}^{\prime}}_{3}…q^{{}^{\prime}}_{k}\\}$
is returned as the output of algorithm 1. The output consists of all those
queries which satisfy the accuracy requirements
$\\{A^{1}_{req},A^{2}_{req},A^{3}_{req}…A^{k}_{req}\\}$ of the data users,
whereas the queries which fail to satisfy the accuracy requirements are
skipped, as shown in lines 10-13 of algorithm 1. Furthermore, to maximize the
number of satisfied users $U_{i}\in\textbf{U}$ by minimizing the number of
skipped queries, the data curator selects suitable values of $\epsilon_{def}$
and $\eta$. The selection of $\epsilon_{def}$ and $\eta$ by the curator is
presented in the following section.
The novelty of algorithm 1 is that it finds a suitable privacy budget
$\epsilon_{sut}$ by reducing the default privacy budget $\epsilon_{def}$, as
shown in lines 16-20 of algorithm 1. As a result, the waste of privacy budget
is avoided while both data owners and data users remain satisfied. In the
following, we discuss the selection of $\epsilon_{def}$ and $\eta$ in detail.
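The noise-averaging step described above can be sketched as follows; this is our own illustration, with `laplace_noise` a standard-library stand-in for equation (3):

```python
import math
import random

def laplace_noise(scale):
    # Laplace(0, scale) as the difference of two i.i.d. exponential draws.
    return (-scale * math.log(1.0 - random.random())
            + scale * math.log(1.0 - random.random()))

def averaged_accuracy(true_value, eps, sensitivity=1.0, draws=1000):
    # Average A_act over `draws` noise samples, as the curator does, to damp
    # the randomness of a single Laplace draw before comparing with A_req.
    scale = sensitivity / eps
    total = 0.0
    for _ in range(draws):
        noisy = true_value + laplace_noise(scale)
        r_err = abs(true_value - noisy) / abs(true_value)
        total += 1.0 - min(r_err, 1.0)   # equations (4) and (5)
    return total / draws
```

As expected, a larger budget $\epsilon$ yields less noise and hence a higher averaged accuracy.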
#### III-D2 Selection of $\epsilon_{def}$ and $\eta$
The values of $\epsilon_{def}$ and $\eta$ impact the number of satisfied data
users. The curator tries to keep the number of satisfied data users above a
threshold $\rho$ with $\rho\leq k$, which defines the minimum number of
satisfied data users from the set U. For instance, if the curator starts with
a small value of $\epsilon_{def}$ in the range
$0<\epsilon_{def}<\epsilon_{t}$ in algorithm 1, then the data users with high
accuracy requirements may not be satisfied due to less accurate query
responses (line 6 of algorithm 1), because a small value of $\epsilon_{def}$
leads to high noise addition in the query response. On the other hand, a
relatively high value of $\epsilon_{def}$ in algorithm 1 will satisfy most of
the data users because of the lower noise addition in the query responses.
However, using a high value of $\epsilon_{def}$ will lead to quick exhaustion
of the total privacy budget $\epsilon_{t}$, as shown in lines 7 and 20 of
algorithm 1, since a relatively high privacy budget is utilized to generate
each individual query response, which accumulates to a high value according to
Theorem 2.
Apart from $\epsilon_{def}$, the curator also selects a suitable value of
$\eta$ to gradually decrease $\epsilon_{def}$, as shown in lines 15-18 of
algorithm 1. A small value of $\eta$ decreases $\epsilon_{def}$ in a more
granular manner to find a best-fit $\epsilon_{sut}$. In contrast, a relatively
high value of $\eta$ may not find an $\epsilon_{sut}$ that satisfies the
condition given in line 16 of algorithm 1. Consequently, the associated
$q_{i}$ will be skipped, which is not desired. Therefore, the curator uses
algorithm 2 to find a suitable $\epsilon_{def}$ at the start and a suitable
$\eta$ for finding a best-fit $\epsilon_{sut}$. In algorithm 2, the curator
first picks the values of $\epsilon_{def}$ and $\eta$ from the current
execution of algorithm 1. Afterwards, it checks the number of satisfied data
users against the threshold $\rho$. Consequently, it enables the curator to
select the best values of $\epsilon_{def}$ and $\eta$, which not only decrease
the privacy budget utilization but also satisfy the accuracy requirements of
all the data users. Moreover, to enable the verification of the utilization of
the privacy budget in OPU-TF-IoT, the following section discusses the proposed
verification mechanism.
Repeat:
1 get the values of $\epsilon_{def}$ and $\eta$ from the current execution of algorithm 1
2 get the number of satisfied data users from the current execution of algorithm 1
3 if $no\\_of\\_satisfied\\_data\\_users<\rho$ then
4   increase the current $\epsilon_{def}$ and decrease the previous $\eta$ for the next execution of algorithm 1
5 else
6   continue
7 end if
Algorithm 2 Selection of $\epsilon_{def}$ and $\eta$ in OPU-TF-IoT
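The feedback rule of algorithm 2 can be sketched as follows; the concrete increment and shrink factors are our assumptions, since their exact values are left to the curator:

```python
def tune_parameters(eps_def, eta, satisfied_users, rho,
                    eps_step=0.05, eta_shrink=0.5):
    # If fewer than rho data users were satisfied in the last run of
    # algorithm 1, raise eps_def (less noise) and shrink eta (finer search);
    # otherwise keep both values unchanged for the next run.
    if satisfied_users < rho:
        return eps_def + eps_step, eta * eta_shrink
    return eps_def, eta
```

Each call consumes the outcome of one execution of algorithm 1 and produces the parameters for the next one, mirroring the "Repeat" structure above.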
#### III-D3 Privacy Budget Verification Mechanism
The blockchain-based verification mechanism is shown in Fig. 4. The
verification mechanism uses the smart contract, write transactions, query
transactions, and client application of Hyperledger fabric, which are defined
as follows [26].
##### Smart contract
the smart contract of Hyperledger fabric is known as chaincode. An instance of
the smart contract is installed on each peer or node of the Hyperledger fabric
network. The smart contract defines the functions which operate on the
blockchain ledger, such as write, read, and query.
##### Write transaction
it invokes the smart contract function which alters the records on the ledger.
Therefore, a write transaction changes the state of the ledger.
##### Query transaction
it invokes the smart contract function which evaluates the result of a query
on the ledger. Furthermore, it does not change the state of the ledger, i.e.,
the query is evaluated and the result returned to the requester.
Figure 5: (a) Working flow of the proposed privacy budget verification
mechanism, (b) illustration of transaction and transaction response.
##### Client application
it is used to access the ledger through query transactions in the Hyperledger
fabric network. In the proposed scenario, the data owners act as client
applications. Moreover, the data owners O are light peers of the blockchain
network, which means they can only access the ledger state but cannot modify
it. On the other hand, the data curator C and the set of data users U are full
peers, which means they have full rights to modify and set the policies for
the rest of the network.
##### Consensus
in the proposed work, the deterministic consensus mechanism of Hyperledger
fabric is adopted, in which specified peers called orderer peers perform the
consensus process [26]. In the proposed scenario, the data curator and data
users are responsible for carrying out the consensus, the validation of
transactions, and the configuration of the smart contract policies of the
network.
The parameters $[F,N,\epsilon_{i},A^{i}_{req}]$ along with the
$q^{{}^{\prime}}_{i}$ are recorded on the Hyperledger fabric ledger as shown
in Fig. 4. Therefore, the record of the privacy budget utilization for each
successful query is maintained. The client applications then send query
transactions which are evaluated on the ledger and returned to the requestors.
The working flow, the associated functions of the smart contract, and a sample
response are shown in Fig. 5. In Fig. 5(a), the client application sends a
transaction with the parameter $N$ using SendQueryTransaction(), which invokes
the associated Evaluate_Query_Function() of the smart contract. Subsequently,
the query is evaluated on the ledger according to the value of $N$: if $N=0$
then, according to Theorem 2, the privacy budget utilized equals
$\sum_{i=1}^{z}\epsilon_{i}$, where $\epsilon_{i}$ is the fraction of the
privacy budget used for generating $q^{{}^{\prime}}_{i}$ and $z$ is the number
of all queries for which $N=0$. Similarly, if $N=1$ then the privacy budget
utilized equals $\sum_{i=1}^{z}\epsilon_{i}$, where $z$ is the number of
queries for which $N=1$. Furthermore, the sample response consists of the
total privacy budget, the utilized privacy budget, and the remaining privacy
budget, as shown in Fig. 5(b). In this way, the data owners can verify and
track the privacy budget utilization for each query. As a result, it satisfies
the data owners regarding the use of their private data.
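The per-type accounting performed by Evaluate_Query_Function() can be sketched as follows; the dictionary record layout is our assumption about how the ledger entries might be represented:

```python
def budget_utilized(ledger_records, query_type):
    # Sum the eps_i of every recorded query whose type N matches the request,
    # following the sequential composition of Theorem 2.
    return sum(rec["eps"] for rec in ledger_records if rec["N"] == query_type)
```

A data owner querying with $N=0$ thus sees only the budget accumulated by whole-dataset queries, and likewise for $N=1$.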
Apart from the verification and tracking of the privacy budget, utilizing the
previous response of a repeated query can also save the accumulated privacy
budget [18]. Therefore, in OPU-TF-IoT, the curator searches the recorded query
responses before utilizing a new privacy budget, using algorithm 3.
Consequently, if the required accuracy $A^{i}_{req}$, query function $F$, and
query type $N$ of the incoming query $q_{i}$ match any record on the
blockchain ledger, then the recorded response is returned to the data user
without utilizing a new privacy budget, as shown in lines 1-2 of algorithm 3.
In this way, the utilization of the privacy budget is further decreased. In
the following sections, we present the time complexity, performance
evaluation, and comparison of the proposed work with state-of-the-art works.
Repeat:
1 if $[A^{i}_{req},F,N]==any\\_record\\_on\\_the\\_ledger$ then
2   $q^{{}^{\prime}}_{i}\leftarrow record\\_on\\_blockchain\\_ledger$
3 else
4   continue with the execution of algorithm 1
5 end if
Algorithm 3 Utilization of the previous privacy budget in OPU-TF-IoT
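Algorithm 3 is essentially a cache lookup keyed on $[A^{i}_{req},F,N]$; in the following sketch a dictionary stands in for the ledger search, and `fresh_response` models falling through to algorithm 1:

```python
def answer_query(a_req, f, n, cache, fresh_response):
    # Return a previously recorded response when the key matches (algorithm 3);
    # otherwise compute a fresh one via algorithm 1 and record it.
    key = (a_req, f, n)
    if key in cache:
        return cache[key], 0.0           # cache hit: no new privacy budget spent
    response, eps_used = fresh_response()
    cache[key] = response
    return response, eps_used
```

A repeated query therefore costs zero additional budget, which is exactly the saving described above.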
#### III-D4 Time Complexity of the Proposed Algorithms
In this section, we discuss the time complexity of the proposed algorithms.
As algorithm 1 performs the main implementation task of the proposed
OPU-TF-IoT, we only evaluate the time complexity of algorithm 1. Algorithm 1
consists of two while loops, an outer and an inner one. The outer while loop
executes according to the size of U, whereas the inner while loop only
executes when $\epsilon_{def}$ needs to be adjusted. In real-world scenarios,
$\epsilon_{def}$ is not necessarily adjusted for all data users.
For typical values of $\epsilon_{def}$ in the range [0.1, 1] and
$\eta=0.0005$, the inner while loop takes around 1000 steps to reduce
$\epsilon_{def}=1$ by 50%, since $0.5/0.0005=1000$. Therefore, the worst-case
running time of algorithm 1 is about $1000\cdot|\textbf{U}|$ steps, where
$|\textbf{U}|$ denotes the size of U, i.e., the time complexity is
O($|\textbf{U}|$). As a result, a cloud server can easily execute the proposed
algorithm 1.
Figure 6: Write transaction initialization in OPU-TF-IoT.
## IV Performance Evaluation
In this section, we present the performance evaluation of the proposed work
and its comparison with state-of-the-art works. The state-of-the-art works
include the standard differential privacy model (standard DP) presented in
[28] and a blockchain-based approach for saving and tracking differential-
privacy cost (BST-DP) [18]. The performance is evaluated on three aspects:
(1) the optimized privacy-utility trade-off, (2) the verification of privacy
budget utilization, and (3) the impact of $\tau$ and $\eta$ on the performance
of OPU-TF-IoT. We first discuss the datasets and simulation setup, and then
present the results and discussion.
(a) Count queries
(b) Average queries
(c) Maximum queries
(d) Minimum queries
Figure 7: Evaluation and comparison of privacy-utility trade-off for OPU-TF-IoT, BST-DP [18], and Standard DP [28] with $\epsilon_{def}=0.5$, $\tau=0.02$, and $\eta=0.0005$.
TABLE II: Comparison of total privacy budget utilization for OPU-TF-IoT, standard DP [28], and BST-DP [18] with $\tau=0.02$ and $\eta=0.0005$ where Count, Avg, Max, and Min represent Count, Average, Maximum, and Minimum queries, respectively.
$\epsilon_{def}$ | Total privacy budget in OPU-TF-IoT | Total privacy budget in BST-DP [18] | Total privacy budget in Standard DP [28]
---|---|---|---
| Count | Avg | Max | Min | Count | Avg | Max | Min | Count | Avg | Max | Min
0.1 | 0.45 | 0.41 | 0.4 | 0.43 | 0.5 | 0.89 | 0.89 | 0.89 | 0.5 | 0.99 | 0.99 | 0.99
0.2 | 1.32 | 0.81 | 0.80 | 0.83 | 1.59 | 1.79 | 1.79 | 1.79 | 1.79 | 1.99 | 1.99 | 1.99
0.3 | 2.17 | 1.21 | 1.20 | 1.23 | 2.69 | 2.69 | 2.69 | 2.69 | 2.99 | 2.99 | 2.99 | 2.99
0.4 | 2.65 | 1.61 | 1.60 | 1.63 | 3.59 | 3.59 | 3.59 | 3.59 | 3.99 | 3.99 | 3.99 | 3.99
0.5 | 3.12 | 2.01 | 2.0 | 2.03 | 4.5 | 4.5 | 4.5 | 4.5 | 5 | 5 | 5 | 5
0.6 | 3.48 | 2.41 | 2.40 | 2.43 | 5.39 | 5.39 | 5.39 | 5.39 | 5.99 | 5.99 | 5.99 | 5.99
0.7 | 3.96 | 2.81 | 2.8 | 2.83 | 6.3 | 6.3 | 6.3 | 6.3 | 7 | 7 | 7 | 7
0.8 | 4.38 | 3.21 | 3.2 | 3.23 | 7.19 | 7.19 | 7.19 | 7.19 | 7.99 | 7.99 | 7.99 | 7.99
0.9 | 4.79 | 3.61 | 3.6 | 3.63 | 8.10 | 8.1 | 8.1 | 8.1 | 9 | 9 | 9 | 9
1 | 5.18 | 4.01 | 4 | 4.03 | 9 | 9 | 9 | 9 | 10 | 10 | 10 | 10
(a) Throughput
(b) Latency
(c) Throughput
(d) Latency
Figure 8: Evaluation of the differential privacy budget verification mechanism
in OPU-TF-IoT. The results are within a 95% confidence interval. We did not
compare against [28] and [18] here because the blockchain implementation of
both works is missing; however, the differential privacy analysis for these
references and the proposed work is given in Fig. 7 and Table II.
### IV-A Experimental Setup
##### Software and Hardware configuration
To simulate the environment for evaluation, we consider a general IoT network
which consists of a single curator C, a set of data owners
$\textbf{O}=\\{O_{1},O_{2},O_{3}...O_{n}\\}$, and a set of $k=10$ data users
$\textbf{U}=\\{U_{1},U_{2},U_{3}...U_{k}\\}$. The curator collects data from
the set of data owners O, which are IoT devices such as cellular phones or
home appliances. We use Hyperledger fabric to establish a blockchain network
which consists of two organizations, namely the curator and one of the data
users. A single data user is considered for simplicity, which can easily be
extended to multiple data users. Furthermore, each organization has one peer
and a CouchDB database, connected through a single channel called mychannel
[26]. Moreover, a single smart contract is installed on each of the peers. For
the data table $T_{n,m}$, we use the freely available adult dataset from [29],
which consists of 32K records ($n=32K$) with 16 attributes ($m=16$). The data
is assumed to be associated with the set of data owners
$\textbf{O}=\\{O_{1},O_{2},O_{3}...O_{n}\\}$ where $n=32K$. Random queries
$\\{q_{1},q_{2},q_{3}…q_{k}\\}$ with $k=10$ are simulated, where each query
$q_{i}$ asks for a numeric value according to
$F\in\\{Count,Average,Maximum,Minimum\\}$.
The required accuracies $A^{i}_{req}$ of the queries are set to
$\\{0.99,0.98,0.96,0.96,0.95,0.93,0.99,0.98,0.95,0.97\\}$. For the query type
$N$, we consider that the first five queries have type $N=1$ whereas the last
five queries have type $N=0$. To differentiate the query types, queries with
type $N=0$ are configured with a smaller number of requested attributes (a
portion of the dataset) in the predicate than the queries with type $N=1$.
Similarly, we take the total privacy budget $\epsilon_{t}=8$, whereas
$\epsilon_{def}$ is varied from 0.1 to 1 in increments of 0.1. Furthermore,
the decrement factor $\eta$ is varied according to $\\{0.0005,0.005,0.05\\}$
and the tolerance factor is taken as $\tau=0.02$. The proposed heuristic
search algorithm is implemented in Python to perform the selection of a
suitable privacy budget, whereas the proposed privacy budget verification
mechanism is implemented through Hyperledger fabric. Moreover, Hyperledger
fabric is used as the target SUT (system under test) with SDK version 1.4.11,
and we use Caliper version 0.4.0 for the evaluation of the target SUT [30].
Similarly, we use the Ubuntu-18 64-bit operating system, installed alongside
Windows 10 using Oracle VM VirtualBox. The hardware configuration of the
system includes an Intel(R) Core(TM) i5-8250U CPU @ 1.6 GHz processor with
8 GB of installed physical memory.
TABLE III: Evaluation of the impact of $\eta$ on the performance of OPU-TF-IoT
with $\tau=0.02$ and $\eta\in\\{0.0005,0.005,0.05\\}$ where Count, Avg, Max,
and Min represent Count, Average, Maximum, and Minimum queries, respectively.
$\epsilon_{def}$ | No of satisfied data users
---|---
Count | Avg | Max | Min
0.1 | 6 | 10 | 10 | 10
0.2 | 8 | 10 | 10 | 10
0.3 | 10 | 10 | 10 | 10
0.4 | 10 | 10 | 10 | 10
0.5 | 10 | 10 | 10 | 10
0.6 | 10 | 10 | 10 | 10
0.7 | 10 | 10 | 10 | 10
0.8 | 10 | 10 | 10 | 10
0.9 | 10 | 10 | 10 | 10
1 | 10 | 10 | 10 | 10
(a) $\eta=0.0005$
$\epsilon_{def}$ | No of satisfied data users
---|---
Count | Avg | Max | Min
0.1 | 5 | 5 | 5 | 8
0.2 | 9 | 5 | 5 | 8
0.3 | 10 | 5 | 5 | 8
0.4 | 10 | 5 | 5 | 8
0.5 | 10 | 5 | 5 | 7
0.6 | 10 | 5 | 5 | 8
0.7 | 10 | 5 | 5 | 7
0.8 | 10 | 5 | 5 | 8
0.9 | 10 | 5 | 5 | 8
1 | 10 | 5 | 5 | 8
(b) $\eta=0.005$
$\epsilon_{def}$ | No of satisfied data users
---|---
Count | Avg | Max | Min
0.1 | 4 | 4 | 5 | 5
0.2 | 7 | 4 | 5 | 5
0.3 | 8 | 4 | 5 | 5
0.4 | 10 | 4 | 5 | 5
0.5 | 9 | 4 | 5 | 5
0.6 | 9 | 4 | 5 | 5
0.7 | 9 | 4 | 5 | 5
0.8 | 10 | 4 | 5 | 5
0.9 | 10 | 4 | 5 | 5
1 | 10 | 4 | 5 | 5
(c) $\eta=0.05$
##### Benchmark configuration
The benchmark configuration of the Caliper tool consists of two rounds:
initialization of the ledger and querying the ledger. In the first round, a
test with five workers is simulated which sends write transactions to the
Hyperledger fabric SUT at transaction rates varying from 10 tran/sec to 50
tran/sec. The initialization of the write transaction with the given
parameters is performed through the SUTAdapter, as shown in Fig. 6. In the
second round, the application sends query transactions (as shown in Fig.
5(b)) which are evaluated by the peers on the ledger to generate query
responses. The simulation results obtained from the experimental setup are
presented in the next section.
### IV-B Results and Discussion
#### IV-B1 Optimized privacy-utility trade-off
The privacy-utility trade-off comparison is presented in Fig. 7. It can be
seen from Fig. 7 that BST-DP [18] and standard DP [28] use a flat allocation
of the privacy budget for incoming queries, due to which the accumulated
privacy budget increases linearly. In contrast, the accumulated privacy budget
for OPU-TF-IoT varies as we go from left to right, because OPU-TF-IoT adjusts
the privacy budget according to the accuracy requirements of the data users.
As a result, it can be seen from Fig. 7 that the accumulated privacy budget in
OPU-TF-IoT is less than that of the other two approaches for all four query
types, i.e., (a) 3.12 vs 4.5 and 5 for count, (b) 2.01 vs 4.5 and 5 for
average, (c) 2 vs 4.5 and 5 for maximum, and (d) 2.03 vs 4.5 and 5 for minimum
queries for OPU-TF-IoT, BST-DP, and standard DP, respectively. Consequently,
OPU-TF-IoT avoids the waste caused by the flat allocation of BST-DP and saves
the privacy budget by 30.6%, 55.3%, 55.5%, and 54.8% for count, average,
maximum, and minimum queries, respectively, as shown in Fig. 7. Similarly,
according to Fig. 7, OPU-TF-IoT saves the privacy budget against standard DP
by 37.6%, 59.8%, 60%, and 59.4% for count, average, maximum, and minimum
queries, respectively.
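The quoted savings follow from the accumulated budgets in Fig. 7 (3.12, 2.01, 2.0, and 2.03 for OPU-TF-IoT versus 4.5 for BST-DP and 5 for standard DP); the percentages appear to be truncated, not rounded, to one decimal, which the following check reproduces:

```python
import math

def saving_percent(ours, baseline):
    # Relative saving of accumulated privacy budget versus a baseline,
    # truncated to one decimal place; the tiny epsilon guards against
    # floating-point round-down at exact boundaries.
    return math.floor(1000.0 * (1.0 - ours / baseline) + 1e-9) / 10.0

opu = {"count": 3.12, "average": 2.01, "maximum": 2.0, "minimum": 2.03}
vs_bst_dp = {q: saving_percent(v, 4.5) for q, v in opu.items()}  # vs BST-DP
vs_std_dp = {q: saving_percent(v, 5.0) for q, v in opu.items()}  # vs standard DP
```

For count queries, for example, $1-3.12/4.5\approx 0.3066$, i.e., 30.6%.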
The proposed OPU-TF-IoT and the state-of-the-art BST-DP both reuse the privacy
budget for repeated queries, which saves the privacy budget as shown in Fig.
7. However, it is evident from Fig. 7 that OPU-TF-IoT outperforms BST-DP. The
reason is that BST-DP uses a flat allocation scheme for non-repeated queries,
whereas OPU-TF-IoT adjusts the privacy budget according to the accuracy
requirements of the data users. Consequently, the utilization of the privacy
budget is further improved. Table II presents a comprehensive comparison of
the total privacy budget utilization for different query types. It is evident
from Table II that for all values of $\epsilon_{def}$, the total privacy
budget utilization in OPU-TF-IoT is less than that of BST-DP and standard DP
for all types of queries, which demonstrates the improvement of the proposed
approach in the utilization and saving of the privacy budget.
Consequently, it can be deduced from the analysis of the results in Fig. 7 and
Table II that OPU-TF-IoT achieves an optimized privacy-utility trade-off by
adjusting the privacy budget according to the accuracy requirements of the
data users. Furthermore, the data users are satisfied, whereas the waste of
privacy budget due to flat allocation is avoided and the saved budget can be
utilized for other queries. In this way, OPU-TF-IoT enables the data curator
to answer more queries than BST-DP and standard DP.
#### IV-B2 Verification of utilization of the privacy budget
In this part, we evaluate the privacy budget verification mechanism of OPU-TF-IoT. To this end, the output parameters of Algorithm 1, i.e., F, N, $\epsilon$, and A, are passed to the submitTransaction function of the client application of Hyperledger fabric as shown in Fig. 6. The submitTransaction
function initializes the write transaction which is then used to write the
contractArguments to the blockchain ledger. Furthermore, the blockchain
network is then evaluated for processing of Init (write) and query
transactions. In the proposed experimental setting, throughput and latency of
transactions are evaluated to study the maximum processing capacity and
latency of transactions of SUT. The results are presented in Fig. 8. It is
evident from the results in Fig. 8, that the throughput increases for both
write and query transactions, respectively. The reason is that the range of
input transaction rate is within the processing capacity of the SUT.
Therefore, according to Fig. 8(a) and 8(c), more input transactions in the
unit time results in higher throughput. The maximum throughput of 50 and 30
tran/sec are obtained for write and query transactions, respectively.
Similarly, the evaluation of latency is shown in Figs. 8(b) and 8(d). According to Fig. 8(b), for write transactions, the maximum processing capacity is reached at an input transaction rate of 40 tran/sec. Therefore, increasing the input transaction rate beyond this point increases the latency of write transactions. In contrast, according to Fig. 8(d), for query transactions, a steep increase in latency beyond 20 tran/sec is observed, which shows that the SUT has reached its maximum transaction-processing capacity. As a result, increasing the input transaction rate beyond this point results in an abrupt increase in latency.
The results in Fig. 8 suggest that the privacy budget verification mechanism
of OPU-TF-IoT is suitable for practical scenarios in IoT. The reason is that
in the current setting, it achieves a maximum throughput of 50 tran/sec.
Similarly, the maximum latency in the current setting is around 12 sec for 50
tran/sec of input transaction rate which is again feasible in practical
scenarios. As a result, the privacy budget verification mechanism of OPU-TF-
IoT enables the data owners to verify the data sharing activities which
increases the transparency of the system.
#### IV-B3 Impact of $\epsilon_{def}$ and $\eta$ on the performance of OPU-
TF-IoT
In this section, we evaluate the impact of $\epsilon_{def}$ and $\eta$ on the
number of satisfied data users in OPU-TF-IoT. Table III presents the number of satisfied data users as a function of $\epsilon_{def}$ and $\eta$. It is evident from the results that a smaller value of $\eta$ increases the number of satisfied data users. For instance, for $\eta=0.0005$, the number of satisfied data users is 100% for all query types except the two cells in the count column, as shown in Table III(a). The reason is that a smaller $\eta$ adjusts $\epsilon_{def}$ by a small fraction, which enables the curator to find a suitable privacy budget $\epsilon_{sut}$. In contrast, both $\eta=0.005$ and $\eta=0.05$ result in a lower number of satisfied data users, as shown in Tables III(b) and III(c), respectively. The reason is that, with these coarser step sizes, OPU-TF-IoT cannot find a suitable adjusted value of $\epsilon_{sut}$ through the gradual adjustment of $\epsilon_{def}$, which is not desired.
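The role of $\eta$ as a step size in the budget adjustment can be illustrated with a toy search loop. This is a hypothetical sketch, not the paper's Algorithm 1: `accuracy_ok` stands in for the data user's accuracy requirement, and the loop adjusts the candidate budget in steps of $\eta$ starting from $\epsilon_{def}$.

```python
def find_suitable_budget(eps_def, eta, accuracy_ok, eps_max=1.0):
    """Adjust the default budget in steps of eta until the data user's
    accuracy requirement is met; return None if no budget <= eps_max works.
    (Hypothetical sketch of a step-size-driven budget search.)"""
    eps = eps_def
    while eps <= eps_max:
        if accuracy_ok(eps):
            return eps
        eps += eta  # a smaller eta explores budgets at a finer granularity
    return None

needs = lambda eps: eps >= 0.26                     # toy accuracy requirement
fine = find_suitable_budget(0.1, 0.0005, needs)     # lands just above 0.26
coarse = find_suitable_budget(0.1, 0.05, needs)     # overshoots to about 0.30
```

A fine step size settles on a budget just large enough to satisfy the user, while a coarse one overshoots and wastes budget, mirroring the behavior reported in Table III.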
Similarly, the number of satisfied data users varies with the selection of $\epsilon_{def}$. For example, the results in Table III(a) indicate that the data curator should select $\epsilon_{def}=0.3$ (row 3 of Table III(a)) instead of 0.1 and 0.2 (rows 1 and 2 of Table III(a), respectively) to avoid a decrease in the number of satisfied data users. The reason is that if the curator selects a smaller $\epsilon_{def}$, then data users with high accuracy requirements will not be satisfied. Therefore, the curator in the proposed work uses Algorithm 2 to keep track of the number of satisfied data users and changes $\epsilon_{def}$ and $\eta$ accordingly. In this way, OPU-TF-IoT increases the number of satisfied data users and avoids the waste of privacy budget at the same time.
Finally, from the evaluation results, it is evident that the proposed OPU-TF-IoT outperforms the state-of-the-art BST-DP of [18] and standard DP of [28] in terms of an optimized privacy-utility trade-off. More specifically, OPU-TF-IoT avoids the waste of privacy budget, increases the number of satisfied data users, and enables data owners to verify their privacy preservation level by making the data sharing activities transparent and accessible.
## V Conclusion
In this work, we proposed an optimized privacy-utility trade-off framework (OPU-TF-IoT) for IoT-based applications. Differential privacy has been adopted to share the data in a privacy-preserving manner. Similarly, to optimize the privacy-utility trade-off, we considered the population or dataset size along with the query. Furthermore, a heuristic search algorithm is proposed to adjust the privacy budget according to the accuracy requirements of the data users. Moreover, to avoid the risk of privacy leakage due to central processing of the data, a verification mechanism is also designed through Hyperledger fabric. It was found that the proposed OPU-TF-IoT outperforms the state-of-the-art standard differential privacy of [28] and BST-DP of [18] in terms of an optimal privacy-utility trade-off. Finally, it was also validated through the results that the proposed work can be implemented using a cloud server and that the transaction processing rate of Hyperledger fabric is feasible. Consequently, the framework enables data to be shared in a more efficient manner by avoiding the waste of privacy budget, increasing the number of satisfied data users, and making the data sharing events transparent to the data owners.
## References
* [1] M. Stoyanova, Y. Nikoloudakis, S. Panagiotakis, E. Pallis, and E. K. Markakis, “A survey on the internet of things IoT forensics: Challenges, approaches, and open issues,” _IEEE Communications Surveys Tutorials_ , vol. 22, no. 2, pp. 1191–1221, 2020.
* [2] S. Raj, “An efficient IoT-based platform for remote real-time cardiac activity monitoring,” _IEEE Transactions on Consumer Electronics_ , vol. 66, no. 2, pp. 106–114, 2020.
* [3] D. A. Chekired, L. Khoukhi, and H. T. Mouftah, “Industrial IoT data scheduling based on hierarchical fog computing: A key for enabling smart factory,” _IEEE Transactions on Industrial Informatics_ , vol. 14, no. 10, pp. 4590–4602, 2018.
* [4] F. Zhu, Y. Lv, Y. Chen, X. Wang, G. Xiong, and F.-Y. Wang, “Parallel transportation systems: Toward IoT-enabled smart urban traffic control and management,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 21, no. 10, pp. 4063–4071, 2020.
* [5] F. Cirillo, D. Gómez, L. Diez, I. Elicegui Maestro, T. B. J. Gilbert, and R. Akhavan, “Smart city IoT services creation through large-scale collaboration,” _IEEE Internet of Things Journal_ , vol. 7, no. 6, pp. 5267–5275, 2020.
* [6] S.-C. Cha, T.-Y. Hsu, Y. Xiang, and K.-H. Yeh, “Privacy enhancing technologies in the internet of things: Perspectives and challenges,” _IEEE Internet of Things Journal_ , vol. 6, no. 2, pp. 2159–2187, 2019.
* [7] M. A. Lisovich, D. K. Mulligan, and S. B. Wicker, “Inferring personal information from demand-response systems,” _IEEE Security Privacy_ , vol. 8, no. 1, pp. 11–20, 2010.
* [8] W. Lin, X. Zhang, L. Qi, W. Li, S. Li, V. S. Sheng, and S. Nepal, “Location-aware service recommendations with privacy-preservation in the internet of things,” _IEEE Transactions on Computational Social Systems_ , vol. 8, no. 1, pp. 227–235, 2021.
* [9] T. Zhu, G. Li, W. Zhou, and S. Y. Philip, “Differentially private data publishing and analysis: A survey,” _IEEE Transactions on Knowledge and Data Engineering_ , vol. 29, no. 8, pp. 1619–1638, 2017.
* [10] C. Dwork, “Differential privacy: A survey of results,” in _Theory and Applications of Models of Computation_ , M. Agrawal, D. Du, Z. Duan, and A. Li, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, pp. 1–19.
* [11] Y. Jiang, K. Zhang, Y. Qian, and L. Zhou, “Reinforcement-learning-based query optimization in differentially private IoT data publishing,” _IEEE Internet of Things Journal_ , vol. 8, no. 14, pp. 11 163–11 176, 2021.
* [12] X. Wu, T. Wu, M. Khan, Q. Ni, and W. Dou, “Game theory based correlated privacy preserving analysis in big data,” _IEEE Transactions on Big Data_ , vol. 7, no. 4, pp. 643–656, 2021.
* [13] M. Islam, M. H. Rehmani, and J. Chen, “Transparency-privacy trade-off in blockchain-based supply chain in industrial internet of things,” in _2021 IEEE 23rd Int Conf on High Performance Computing and Communications; 7th Int Conf on Data Science and Systems; 19th Int Conf on Smart City; 7th Int Conf on Dependability in Sensor, Cloud and Big Data Systems and Application (HPCC/DSS/SmartCity/DependSys)_ , 2021, pp. 1123–1130.
* [14] L. Xu, C. Jiang, Y. Qian, J. Li, Y. Zhao, and Y. Ren, “Privacy-accuracy trade-off in differentially-private distributed classification: A game theoretical approach,” _IEEE Transactions on Big Data_ , pp. 1–1, 2017.
* [15] B. Rassouli and D. Gündüz, “Optimal utility-privacy trade-off with total variation distance as a privacy measure,” _IEEE Transactions on Information Forensics and Security_ , vol. 15, pp. 594–603, 2020.
* [16] B. Niu, Y. Chen, B. Wang, Z. Wang, F. Li, and J. Cao, “Adapdp: Adaptive personalized differential privacy,” in _IEEE INFOCOM - IEEE Conference on Computer Communications_ , 2021, pp. 1–10.
* [17] H. Jiang, M. Wang, P. Zhao, Z. Xiao, and S. Dustdar, “A utility-aware general framework with quantifiable privacy preservation for destination prediction in lbss,” _IEEE/ACM Transactions on Networking_ , vol. 29, no. 5, pp. 2228–2241, 2021.
* [18] Y. Zhao, J. Zhao, J. Kang, Z. Zhang, D. Niyato, S. Shi, and K. Y. Lam, “A blockchain-based approach for saving and tracking differential-privacy cost,” _IEEE Internet of Things Journal_ , pp. 1–1, 2021.
* [19] A. Friedman and A. Schuster, “Data mining with differential privacy,” in _Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , ser. KDD ’10. New York, NY, USA: Association for Computing Machinery, 2010, p. 493–502.
* [20] L. Xu, C. Jiang, Y. Chen, Y. Ren, and K. J. R. Liu, “Privacy or utility in data collection? a contract theoretic approach,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 9, no. 7, pp. 1256–1269, 2015.
* [21] M. Chamikara, P. Bertok, I. Khalil, D. Liu, and S. Camtepe, “Ppaas: Privacy preservation as a service,” _Computer Communications_ , vol. 173, pp. 192–205, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0140366421001420
* [22] C. Dwork, A. Roth _et al._ , “The algorithmic foundations of differential privacy.” _Foundations and Trends in Theoretical Computer Science_ , vol. 9, no. 3-4, pp. 211–407, 2014.
* [23] S. Nakamoto, “Bitcoin: A peer-to-peer electronic cash system,” https://bitcoin.org/bitcoin.pdf, 2008.
* [24] M. U. Hassan, M. H. Rehmani, and J. Chen, “Anomaly detection in blockchain networks: A comprehensive survey,” _IEEE Communications Surveys and Tutorials_ , pp. 1–1, 2022.
* [25] J. Sengupta, S. Ruj, and S. D. Bit, “A comprehensive survey on attacks, security issues and blockchain solutions for IoT and IIoT,” _Journal of Network and Computer Applications_ , vol. 149, p. 102481, 2020.
* [26] “Hyperledger-fabricdocs documentation,” https://hyperledger-fabric.readthedocs.io/en/release-2.2/, accessed: 2021-02-20.
* [27] X. Xiao, G. Bender, M. Hay, and J. Gehrke, “Ireduct: Differential privacy with reduced relative errors,” in _Proceedings of the ACM SIGMOD International Conference on Management of Data_ , ser. SIGMOD ’11. New York, NY, USA: Association for Computing Machinery, 2011, p. 229–240.
* [28] C. Dwork, “Differential privacy,” in _Automata, Languages and Programming_ , M. Bugliesi, B. Preneel, V. Sassone, and I. Wegener, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 1–12.
* [29] “Adult data,” https://archive.ics.uci.edu/ml/datasets/Adult, accessed: 2022-02-10.
* [30] “Hyperledger caliper,” https://www.hyperledger.org/use/caliper, accessed: 2021-02-20.
# DEL-Dock: Molecular Docking-Enabled Modeling of DNA-Encoded Libraries
Kirill Shmilovich, Benson Chen, Theofanis Karaletsos, and Mohammad M. Sultan
###### Abstract
DNA-Encoded Library (DEL) technology has enabled significant advances in hit
identification by enabling efficient testing of combinatorially-generated
molecular libraries. DEL screens measure protein binding affinity though
sequencing reads of molecules tagged with unique DNA-barcodes that survive a
series of selection experiments. Computational models have been deployed to
learn the latent binding affinities that are correlated to the sequenced count
data; however, this correlation is often obfuscated by various sources of
noise introduced in its complicated data-generation process. In order to
denoise DEL count data and screen for molecules with good binding affinity,
computational models require the correct assumptions in their modeling
structure to capture the correct signals underlying the data. Recent advances
in DEL models have focused on probabilistic formulations of count data, but
existing approaches have thus far been limited to only utilizing 2-D molecule-
level representations. We introduce a new paradigm, DEL-Dock, that combines
ligand-based descriptors with 3-D spatial information from docked protein-
ligand complexes. 3-D spatial information allows our model to learn over the
actual binding modality rather than using only structure-based information of
the ligand. We show that our model is capable of effectively denoising DEL
count data to predict molecule enrichment scores that are better correlated
with experimental binding affinity measurements compared to prior works.
Moreover, by learning over a collection of docked poses we demonstrate that
our model, trained only on DEL data, implicitly learns to perform good docking
pose selection without requiring external supervision from expensive-to-source
protein crystal structures.
Pritzker School of Molecular Engineering, University of Chicago, Chicago, Illinois 60637, United States (equal contribution)
Insitro, South San Francisco, California 94080, United States (equal contribution)
Insitro, South San Francisco, California 94080, United States
Insitro, South San Francisco, California 94080, United States
## 1 Introduction
One key component of early drug discovery is hit identification, or finding
molecules with desired activity levels and properties of interest 1. This task
has been commonly approached via high-throughput screening (HTS) techniques,
which involve testing a library of molecules against a biological target of
interest. While traditional screening techniques do not scale well to large
chemical spaces, DNA-encoded library (DEL) technologies provide one avenue
towards the desired scale. For instance, while traditional HTS libraries
typically contain only $\sim$50k-5M compounds, DELs enable screening of
combinatorially large molecular spaces that allow for testing $\sim$1M-5B
compounds in a single tube 2, 3. By capturing a broad chemical diversity
landscape, DEL technology has opened new opportunities in hit identification
4, 5, 6, 7.
DELs are constructed by sequentially assembling molecular building blocks, aka
synthons, into molecules tagged with unique DNA-barcode identifiers (Fig. 1).
Once synthesized, the library is tested for affinity against a protein of
interest through a series of selection experiments. This process, called
panning, typically involves spiking the DEL into a solution of the immobilized
protein, and washing the resulting mixture for multiple rounds. This procedure
leaves members of the library that remain bound to either the target or
matrix, which are subsequently identified using next-generation DNA
sequencing. The resulting data after bioinformatics processing consists of
sparse reads of the DNA and the corresponding molecules; the relative
abundance of the identified molecules is, in theory, a reasonable proxy for
their binding affinities. However, DEL data from panning experiments contain
various sources of noise such as matrix binding, truncation products, unequal
initial loads, and replicate and sequencing biases 8, 9, 10, 11. Models trained without accounting for these significant sources of noise can overfit to spurious correlations. Consequently, while DELs have proven to be powerful tools in
drug discovery, careful consideration of the appropriate techniques to denoise
DEL data is required to discover reliable signals of the underlying small
molecule binding affinities.
Figure 1: Step-by-step diagram of a DNA-Encoded Library (DEL) panning
experiment.
Several approaches have been developed to tackle DEL modeling from a
computational perspective: these existing methods rely on computing an
enrichment score for each molecule that is representative of its binding
affinity to the protein target. One approach for calculating enrichment scores
involves fitting a Poisson distribution to observed on-target and control
count data for each molecule and then formulating enrichment scores as ratios
of these Poisson parameters 12. A deficiency in this enrichment metric is that
it neglects to include any structural information of the actual DEL molecules.
Since molecular structure represents an important aspect for determining
protein-ligand interactions 13, 14, other methods incorporate different types
of molecular representations into their models. McCloskey et al. 15 use a
Graph Neural Network (GNN) to encode molecules, formulating the learning
problem as multi-class binary classification over the counts. Recent methods
consider hybrid approaches that combine molecular representation learning with
probabilistic modeling 8, 9, 16. Central to these approaches is the modeling of observed sequencing counts as originating from latent Poisson or Gamma-Poisson
distributions, which are well suited to describe independently distributed
count events. This allows the models to incorporate the inherent noise in the
data-generating process within the model structure. However, these methods
focus on learning only representations of the molecules themselves and omit
the rich representation of the 3-D protein-ligand interactions which are
critical aspects for describing protein binding.
In order to leverage 3-D spatial data, we turn to molecular docking, which has
a long history of being an important tool in structure-based drug discovery
17. A key component of these docking methods is to define a scoring function
that can characterize the binding interactions between a protein and ligand.
To generate candidate poses, these methods use the aforementioned scoring
function to sample ligand conformations with the goal of ultimately inferring
the most likely binding mode. While there have been many advances to docking
models and software, in practice, there are still many pitfalls of these
methods 18. For instance, the difficulty of the docking problem can be
observed through the low empirical correlation between high scoring docked
poses and actual binding affinity 19. Improvements can be realized through
carefully calibrated scoring functions, but tuning these scoring functions is
frequently problem-specific and involves substantial domain knowledge 20, 21,
22. ML-enabled scoring functions trained on databases of crystal structures
have recently demonstrated improved performance in some settings 23, however,
these approaches are ultimately limited by scarce and expensive-to-generate
crystal structures. Even though docking generates noisy ligand conformations,
these docking techniques can provide a useful distribution of likely binding
poses which we will exploit in our new ideas for modeling DEL count data.
Towards the goal of more holistic DEL models, we propose DEL-Dock, a model
that directly learns a joint protein-ligand representation by synthesizing
multi-modal information within a unified probabilistic framework to learn
enrichment scores. Our approach combines molecule-level descriptors with
spatial information from docked protein-ligand complexes to explain sequencing
counts. We delineate contributions from spurious matrix binding and target
protein binding to better separate the signal from noise in the data.
Additionally, our model can learn to rank a collection of poses without
explicit supervision of pose scores by only training on the count data. When
viewed separately, DEL data and docked poses provide noisy signals of binding
affinity, but our DEL-Dock model effectively combines these two modalities to
better extract the learning signals from the data.
In this work, we engage the commonly studied protein human carbonic anhydrase
(CAIX) by training our model on publicly available DEL data generated by Gerry
et al. 12. On a held-out evaluation data set of 3041 molecules with
experimental affinity measurements extracted from the BindingDB 24 web
database, we demonstrate our approach effectively learns enrichment scores
that are well correlated with binding affinity. Moreover, we show that our
model is capable of extracting insights into the determinants of docked
protein-ligand complexes that are most influential for protein binding.
## 2 Methods
Our model combines two different representation modalities, molecule-level
descriptors and docked protein-ligand complexes, to capture the latent aspects
of protein binding from a probabilistic perspective (Fig. 2). The
combinatorial construction of DELs motivates using expressive molecular
representations to capture statistical correlation between the building block
substructures used in DEL synthesis. We use Morgan fingerprints, calculated with RDKit (version 2020.09.1) 25, as the basis for our molecular representations; they are a standard descriptor for representational problems on small molecules 26 and also provide the added benefit of simple construction and rapid processing. Morgan fingerprints compute a structural bit hash of the
molecule by enumerating $k$-hop substructures about each atom. Since there are
many shared structural features across different molecular compounds, these
fingerprints constitute a simple representation that has demonstrated
remarkable empirical performance throughout cheminformatic domains. We
represent docked protein-ligand poses using a pretrained voxel-based CNN model
from GNINA, which captures spatial relationships by discretizing space into
three-dimensional voxels and leveraging CNNs to learn complex hierarchical representations 27, 28, 29. The CNN model used in this work was originally
trained on the PDBBind 30 database, capitalizing on this supervised data
source to capture the important features that characterize protein-ligand
interactions.
Figure 2: Schematic illustration of our DEL-Dock neural network architecture
and data flow.
Let $\mathcal{X}$ denote the set of molecules in our data, where each molecule
$x\in\mathcal{X}$ has an associated set of $n$ docked poses
$\{p_{1},p_{2},...,p_{n}\}\in\mathcal{P}$ and $c^{\text{matrix}}_{i}\in
C^{\text{matrix}},c^{\text{target}}_{i}\in C^{\text{target}}$ are the $i$th
replicates (repeated experiments) of count data from the beads-only control
and target protein experiments respectively. Additionally, we can define the
following featurization transformations that are used to construct the
molecule and pose embeddings: $\Phi:\mathcal{X}\rightarrow[0,1]^{n_{\phi}}$ is
the function that generates a $n_{\phi}$-bit molecular fingerprint; here, we
use a 2048-bit Morgan fingerprint with radius 3.
$\Psi:\mathcal{X}\times\mathcal{P}\rightarrow\mathbb{R}^{n_{\psi}}$ is the
transformation that outputs an embedding of the molecule and a specific
spatial protein-ligand complex, where we use a pre-trained voxel-based CNN to
perform this transformation.
Let $h^{\text{fps}}=\text{MLP}(\Phi(x))$ be the molecule embedding learned by
our model, which is computed by applying a multilayer perceptron (MLP) to the
fingerprint representation. Individual docked pose embeddings are similarly
computed, with one difference being that we also incorporate the fingerprint
embedding, $h^{\text{pose}}_{i}=\text{MLP}([\Psi(x,p_{i});h^{\text{fps}}])$
into this representation.
To synthesize the set of poses for each molecule we apply a self-attention
layer over the pose embeddings. Following previous work on self-attention and
multiple-instance learning (MIL) 31, we compute attention weights in eq. 1,
where $\big{(}w,W^{U},W^{V}\big{)}$ are learnable weights, $\sigma$ is the
sigmoid activation and $\odot$ is element-wise multiplication. The final
output pose embedding that combines information from the individual input
poses is then computed as an attention-score-weighted embedding vector
$h^{\text{pose}}=\frac{1}{n}\sum_{i}a_{i}h^{\text{pose}}_{i}$.
$a_{i}=\frac{\exp\Big{[}w\cdot\big{(}\text{tanh}(W^{U}h^{\text{pose}}_{i})\odot\sigma(W^{V}h^{\text{pose}}_{i})\big{)}\Big{]}}{\sum_{j}\exp\Big{[}w\cdot\big{(}\text{tanh}(W^{U}h^{\text{pose}}_{j})\odot\sigma(W^{V}h^{\text{pose}}_{j})\big{)}\Big{]}}$
(1)
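Eq. 1 is a gated attention pooling in the style of multiple-instance learning. A minimal NumPy sketch of the pooling step, with randomly initialized matrices standing in for the learned parameters $\big{(}w,W^{U},W^{V}\big{)}$, is:

```python
import numpy as np

def gated_attention_pool(H, w, WU, WV):
    """Pool n pose embeddings (rows of H, shape (n, d)) into one vector
    using the gated attention of eq. 1."""
    gate = np.tanh(H @ WU.T) * (1.0 / (1.0 + np.exp(-(H @ WV.T))))  # tanh(W^U h_i) (.) sigma(W^V h_i)
    scores = gate @ w                                               # w . gate_i, shape (n,)
    a = np.exp(scores - scores.max())                               # numerically stable softmax
    a /= a.sum()
    pooled = (a[:, None] * H).sum(axis=0) / len(H)                  # (1/n) sum_i a_i h_i
    return a, pooled

rng = np.random.default_rng(0)
n, d, k = 20, 16, 8  # 20 docked poses, embedding dim 16, gate dim 8 (illustrative sizes)
H = rng.normal(size=(n, d))
a, pooled = gated_attention_pool(H, rng.normal(size=k),
                                 rng.normal(size=(k, d)), rng.normal(size=(k, d)))
```

The attention weights `a` sum to one over the poses, so the pooled vector emphasizes the poses the model scores as most informative.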
Equipped with these molecule and pose embeddings, our model learns the
contributions of both spurious matrix binding and target protein binding by
predicting latent scores that strive to maximize the likelihood of the
observed data under the model. We make several distinct modeling choices to
mirror our assumptions about the data-generation process that accounts for
various sources of experimental noise.
Matrix Binding is a confounding factor inherent to DEL experiments, since
molecules are prone to binding to the multifarious components comprising the
immobilized matrix in addition to the intended protein target. For each
molecule $x$, we learn a latent matrix binding score
$\lambda^{\text{matrix}}=f(h^{\text{fps}})$. Since matrix binding is not a
function of the protein-ligand pose representation, we enforce that the matrix
binding enrichment remains only a function of the molecule embedding
$h^{\text{fps}}$.
Target Binding is learned through
$\lambda^{\text{target}}=f(h^{\text{pose}})$, jointly utilizing both molecule
and pose representations. This design choice reflects that sequencing counts
from the target protein experiment must be a function of both small molecule
binding to the protein receptor, represented here as featurizations of the
docked protein-ligand complexes, along with promiscuous binding to the
immobilized matrix.
The observed count data for both the control and protein target experiments
can be modeled as originating from underlying Poisson distributions, which
naturally characterize any discrete count data from independently sampled
events. Due to possible sequencing noise we further augment this basic Poisson
model as a zero-inflated probability distribution. This design choice is
motivated by the chance that sparse zero counts in the data could be explained
as an artifact of imperfect sequencing technology, and as a result we directly
incorporate this assumption into the structure of our model. We note that
previous approaches have employed Gamma-Poisson distributions to model DEL
count data in the past 16, but we make the assumption here that replicate data
for the same molecule is sampled from an identical distribution.
$\text{P}(C=c\,|\,\lambda,\pi)=\begin{cases}\pi+(1-\pi)e^{-\lambda}&\text{if }c=0\\(1-\pi)\frac{\lambda^{c}e^{-\lambda}}{c!}&\text{if }c>0\end{cases}$ (2)
$C^{\text{matrix}}\sim\text{ZIP}(\lambda^{\text{matrix}},\pi^{\text{matrix}}),\qquad\lambda^{\text{matrix}}=\exp(\text{MLP}(h^{\text{fps}}))$ (3)
$C^{\text{target}}\sim\text{ZIP}(\lambda^{\text{matrix}}+\lambda^{\text{target}},\pi^{\text{target}}),\qquad\lambda^{\text{target}}=\exp(\text{MLP}(h^{\text{pose}}))$ (4)
Let $C$ be distributed as a zero-inflated Poisson (ZIP), with its probability mass function defined in eq. 2. Here, $\lambda$ is the rate parameter of the underlying Poisson distribution, and $\pi$ denotes the probability of choosing the zero distribution, which is taken to be its empirical average (when $\pi=0$, this reduces to the ordinary Poisson distribution). Empirically, we
estimate these zero-count probabilities from the zero-count frequencies in the
control ($\pi^{\text{matrix}}\approx 0.0075$) and protein target
($\pi^{\text{target}}\approx 0.55$) experiments. Since we model the control
and protein experiments as originating from separate underlying count
distributions, we compute two distinct rate parameters for each ZIP
distribution as shown in eq. 3. The observed target counts are a function
both matrix binding and binding to the protein target, so the rate parameter
for the target distribution is a function of $\lambda^{\text{matrix}}$ and
$\lambda^{\text{target}}$. We assume an additive functional form for the
latent matrix and target enrichments, which we find works well empirically. We
note however that there could be more motivated methods to parameterize the
interactions of matrix and target binding. The final loss function is then a
typical negative log-likelihood (NLL) loss over the observed counts from both
the control and target experiments eq. 5.
$L=-\sum_{i}\log\big{[}P(c^{\text{matrix}}_{i}|\lambda^{\text{matrix}},\pi^{\text{matrix}})\big{]}-\sum_{j}\log[P(c^{\text{target}}_{j}|\lambda^{\text{target}}+\lambda^{\text{matrix}},\pi^{\text{target}})]$
(5)
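Eqs. 2 and 5 can be transcribed directly. The sketch below uses pure Python with scalar rates standing in for the MLP outputs, and the empirical zero-inflation probabilities quoted above as defaults:

```python
import math

def zip_log_pmf(c, lam, pi):
    """Log-probability of count c under a zero-inflated Poisson (eq. 2)."""
    if c == 0:
        return math.log(pi + (1 - pi) * math.exp(-lam))
    # log of the ordinary Poisson pmf, via lgamma for the factorial
    log_pois = -lam + c * math.log(lam) - math.lgamma(c + 1)
    return math.log(1 - pi) + log_pois

def nll(matrix_counts, target_counts, lam_m, lam_t, pi_m=0.0075, pi_t=0.55):
    """Negative log-likelihood of eq. 5: control counts use lam_m alone,
    target counts use the additive rate lam_m + lam_t."""
    loss = -sum(zip_log_pmf(c, lam_m, pi_m) for c in matrix_counts)
    loss -= sum(zip_log_pmf(c, lam_m + lam_t, pi_t) for c in target_counts)
    return loss
```

With `pi = 0`, `zip_log_pmf` reduces to the ordinary Poisson log-pmf, matching the remark after eq. 2.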
### 2.1 DEL Data
To train our model we use publicly available DEL data collected by Gerry et
al. 12. This tri-synthon library consists of $\sim$100k molecules with count
data for panning experiments for the human carbonic anhydrase IX (CAIX)
protein. In addition to on-target counts, the data includes beads-only no-
target controls. Four replicate sets of counts for the protein target
experiments are provided, while two replicates of the control experiments are
provided in this data set. To account for possible noise in different
replicates, we follow previous work and normalize the counts for each target
and control replicate by dividing each count by the sum of counts in that
replicate experiment and then multiplying by 1e6 to re-calibrate the scale of
the counts 16. This data preprocessing provides the interpretation of each
molecule count as a molecular frequency of that molecule within the DEL
library. The processed data set is then used to train our models employing an 80/10/10 train/validation/test split. Complete training details are further
provided in the Supporting Information.
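The per-replicate normalization described above is a counts-per-million rescaling; a minimal sketch:

```python
def to_cpm(counts):
    """Normalize one replicate's raw counts to counts-per-million:
    divide each count by the replicate total, then multiply by 1e6."""
    total = sum(counts)
    return [c / total * 1e6 for c in counts]

replicate = [0, 3, 12, 5]
cpm = to_cpm(replicate)  # values now sum to 1e6 and are comparable across replicates
```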
### 2.2 Evaluation Data
We evaluate the performance of our models on benchmarks using an external set
of affinity measurements of small molecules curated from the BindingDB 24 web
database. We queried binding affinities for the Human Carbonic anhydrase 9
(CAIX) protein target (UniProt: Q16790), and kept only molecules containing
the same atom types as those present in the DEL data set (C, O, N, S, H, I).
This external evaluation data set is composed of 3041 small molecules with
molecular weights ranging from $\sim$25 amu to $\sim$1000 amu and associated
experimental inhibitory constant (Ki) measurements ranging from $\sim$0.15 M
to $\sim$90 pM. We use the median affinity value in the cases where multiple
different affinity measurements were reported for the same molecule. We also
consider a subset of this dataset which consists of the 521 molecules with
molecular weights between 417 amu and 517 amu (Fig. 6 in the Supporting
Information). These molecular weights correspond to the range bounded by the 10th and 90th percentiles of the molecular weights in the training dataset. This restricted subset presents a more challenging test as
differentiation cannot rely on only extensive properties such as molecular
weight, but must also effectively identify chemical motifs that impact
molecular binding within this tightly bound range of molecular weights. We
notice that simple properties such as molecular weight or benzenesulfonamide
presence, which is known to be an important binding motif for carbonic
anhydrase 12, 32, 33, achieve better baseline performance on the full
evaluation data compared to the restricted subset. These metrics suggest that
this subset is more challenging as predictors must learn beyond these simple
molecular properties to achieve good performance.
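The aggregation step above, where molecules with multiple reported affinities are collapsed to a single median value, can be sketched with the standard library (molecule identifiers and values here are placeholders):

```python
from collections import defaultdict
from statistics import median

def aggregate_affinities(measurements):
    """Collapse repeated affinity measurements to a single median Ki
    per molecule, mirroring how duplicated BindingDB entries are
    handled in the text. `measurements` is a list of (molecule_id, Ki)."""
    by_mol = defaultdict(list)
    for mol_id, ki in measurements:
        by_mol[mol_id].append(ki)
    return {mol_id: median(kis) for mol_id, kis in by_mol.items()}
```

The median is preferred over the mean here because reported Ki values for the same molecule can span orders of magnitude, and the median is robust to such outliers.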
### 2.3 Docking
We perform molecular docking to generate a collection of ligand-bound poses to
a target protein of interest for all molecules within our training and
evaluation data sets. Docking is performed using the GNINA docking software
27, 28, 29 employing the Vina 34 scoring function. All molecules are docked
against CAIX (PDB:5FL4) with the location of the binding pocket determined by
the bound crystal structure ligand (9FK), using the default GNINA settings
defining an $8\times 8\times 8$ Å$^{3}$ bounding box around this ligand. Initial
three-dimensional conformers for all docked molecules were generated with
RDKit. For each molecule, we obtain 20 docked poses from GNINA using an
exhaustiveness parameter of 50, with Vina scoring used for end-to-end pose
generation. This approach can similarly be performed with AutoDock Vina 34 or
Smina 35 using the Vina scoring function.
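A docking run with the settings described above could be assembled as below. This is a hedged sketch: the flag names follow the GNINA command-line interface but should be verified against the installed version, and all file names are placeholders.

```python
def gnina_command(receptor, ligand, ref_ligand, out_sdf,
                  num_modes=20, exhaustiveness=50):
    """Assemble a GNINA invocation mirroring the settings in the text:
    Vina scoring, 20 output poses, exhaustiveness 50, and a binding
    box auto-derived from the crystal-structure ligand. Flag names
    should be checked against the installed GNINA build."""
    return [
        "gnina",
        "-r", receptor,                   # receptor, e.g. the 5FL4 structure
        "-l", ligand,                     # RDKit-generated 3-D conformer
        "--autobox_ligand", ref_ligand,   # box around the bound ligand (9FK)
        "--scoring", "vina",              # Vina scoring for pose generation
        "--num_modes", str(num_modes),
        "--exhaustiveness", str(exhaustiveness),
        "-o", out_sdf,
    ]
```

The returned list can be passed to `subprocess.run` to execute the docking job.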
## 3 Results
We demonstrate that our model outperforms previous systems on DEL enrichment
prediction by jointly combining topological features from the molecular graph
and the spatial 3-D protein-ligand information, and additionally illustrate
the capability of our model to better rank ligand poses compared to
traditional docking. Our model learns a latent binding affinity of each
molecule to both the matrix and the target, serving as denoised signals
underlying the observed count data. In this interpretation we should expect higher
enrichment scores predicted by our model to be well-correlated with binding
affinity, and therefore provide a useful metric for predicting anticipated
protein binding in virtual screening campaigns.
### 3.1 DEL-Dock outperforms baselines using only docking or molecule-level
descriptors
To evaluate the performance of our model, especially with respect to out-of-
domain protein binding prediction, we first train our model on DEL data
screened against the human carbonic anhydrase IX (CAIX) protein target 12, and
then predict enrichment scores for molecules with externally measured
experimental binding affinities to CAIX. We evaluate performance in this
setting by measuring Spearman rank-correlation coefficients between predicted
enrichments and the experimental affinity measurements, which is a metric that
is agnostic to the scale of the values. Our model only restricts the
enrichment scores to be positive quantities, with no specific distributional
constraints, so Spearman rank-correlation, which computes a correlation based
only on the ordinal ranking of the predicted enrichments, is well suited for
our test scenario.
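As a concrete reference, the rank-correlation metric used throughout the tables is the Pearson correlation of the ranks; a stdlib-only sketch follows (in practice `scipy.stats.spearmanr` computes this directly):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks,
    using average ranks for tied values."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(order):
            j = i
            # extend j over a block of tied values
            while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank for the tied block
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because lower Ki means stronger binding, a good predictor of enrichment yields a strongly negative Spearman correlation against Ki, which is why the table arrows point downward.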
Our method, DEL-Dock, which combines information from docked complexes with
molecular descriptors outperforms previous techniques which only utilize one
of these two data modalities (see Table 1). We find that traditional docking
scores alone generated from AutoDock Vina 34 result in the worst overall
correlations, commensurate with previous observations that docking scores
alone are typically not reliable predictors of binding affinity 19.
Performance based on docked poses alone is, however, greatly improved by
re-scoring the docked poses using pretrained GNINA 36 CNN models. Another set of
baselines we consider are DEL models that rely only on molecular descriptors.
First, we consider a simple model that involves training a random forest (RF)
on the Morgan fingerprints using the enrichment metrics originally formulated
to facilitate analysis of the DEL data set by Gerry et al. 12.
We then similarly predict the enrichment scores of molecules in the held-out
dataset and the correlation with experimental Ki data. This baseline achieves
reasonable performance on both the full and subset evaluation data, especially
given the simplicity of the model. For a more sophisticated baseline, we train
the Graph Neural Network (GNN) model with the DEL-specific loss function
defined by Lim et al. 9. While this approach achieves good performance on the
full evaluation data, correlations on the restricted subset are largely
unchanged for all docking-based and molecular descriptor-based baselines. Our
model which combines docking pose embeddings with molecular fingerprint
representations outperforms all other baselines, with the largest improvements
of $\sim$2$\times$ better Spearman correlations than other approaches realized
on the more challenging molecular weight restricted subset. We provide further
ablation studies of our model in the Supporting Information.
Model | Spearman Ki (full) ↓ | Spearman Ki (subset) ↓
---|---|---
Molecular weight | -0.121 | 0.074
Benzenesulfonamide presence | -0.199 | -0.063
Top Vina 34 docking score | -0.068 | 0.119
Top GNINA 36 docking score | -0.279 $\pm$ 0.044 | -0.091 $\pm$ 0.061
RF trained on enrichment scores from Gerry et al. 12 | -0.231 $\pm$ 0.007 | -0.091 $\pm$ 0.012
GNN ( Lim et al. 9) | -0.298 $\pm$ 0.005 | -0.075 $\pm$ 0.011
DEL-Dock (ours) | -0.328 $\pm$ 0.01 | -0.186 $\pm$ 0.01
Table 1: Comparison of Spearman rank-correlation coefficients between
predicted affinity scores and experimental inhibition constant ($K_{i}$)
measurements curated from BindingDB 24. Spearman correlations are shown for
the complete 3041-molecule data set (full), and a 521-molecule subset of this
full data set confined to molecular weights between 417-517 amu. This
molecular weight range approximately corresponds to the 10th and 90th
percentiles of the molecular weights spanned by the DEL data set.
Error bars are reported as standard deviations over five independently
initialized models.
### 3.2 DEL-Dock better ranks molecules with known binding chemical motifs
While our approach displays good prediction accuracy with respect to
experimental binding measurements, we also find our model provides insights
into the structural and chemical factors that influence binding. Compounds
containing benzenesulfonamide have been well established in literature as the
primary chemical motif that drives small-molecule binding to carbonic
anhydrase 12, 32, 33. Though we do not explicitly incorporate this as a
learning signal for our model, we observe that our model is able to learn this
association, visibly predicting sulfonamides within our evaluation data set as
more highly enriched compared to molecules that do not contain
benzenesulfonamides (Fig. 3a). Interestingly, we observe a comparatively large
fraction of non-benzenesulfonamides identified as good binders with low
experimental $K_{i}$. The elevated population of highly enriched non-
benzenesulfonamides in this data set could be an artifact of bias in
scientific literature. Our model is ultimately trained on DEL data and
therefore is expected to reflect underlying biases and idiosyncrasies of the
data generation process. The most notable difference lies in that DEL
experiments are only capable of measuring on-DNA binding, while the evaluation
data are measurements of off-DNA binding. Nevertheless, the clear delineation
of benzenesulfonamides in our predicted enrichments provides good post-hoc
evidence that our model correctly identifies this important binding motif for
this protein target.
Figure 3: Analysis of DEL-Dock model predictions on our evaluation data set
composed of experimental affinity (Ki) measurements. (a) Parity plot of our
model predicted enrichments and ground truth Ki measurements delineated by
benzenesulfonamide presence, with 1581 benzenesulfonamide-containing molecules
and 1460 non-benzenesulfonamide. (b) Distribution of zinc-sulfonamide
distances for the top-ranked pose of each benzenesulfonamide in our evaluation
data set identified by the AutoDock Vina scoring function 34, GNINA pose
selection 36, and our model predicted attention scores. (c) Comparison of the
fraction of top-ranked poses with zinc-sulfonamide distance
below a threshold distance between $\sim$2-12Å. This can be interpreted as the
CDF of the distributions in (b) with the appropriate normalization.
An important structural component of benzenesulfonamides binding to carbonic
anhydrase is coordination of the sulfonamide group with the zinc ion buried
within the active site 12, 32, 33. In the vast majority of cases, one would
then expect docking scoring functions to highly score poses that reflect this
anticipated binding mode. As our model performs self-attention over pose
embeddings, which are used to learn molecules’ enrichment scores, we can
interpret the magnitude of the attention probabilities as the importance
weight of that particular pose. Shown in Fig. 3b is the distribution of zinc-
sulfonamide distances for the top-selected docked pose comparing AutoDock
Vina, GNINA, and our method for all 1581 benzenesulfonamides-containing
molecules in our evaluation data set. An alternate view of this data is
presented as the fraction of top-selected poses with zinc-sulfonamide
distances below a distance threshold (Fig. 3c), which can effectively be
interpreted as the cumulative distribution function (CDF) of the appropriately
normalized associated probability distribution function (PDF) in Fig. 3b.
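Each point on the curves in Fig. 3c is the empirical CDF of the distance distribution in Fig. 3b, which reduces to a one-line computation:

```python
def fraction_below(distances, threshold):
    """Empirical CDF at `threshold`: the fraction of top-ranked poses
    whose zinc-sulfonamide distance (in Angstroms) falls below it."""
    return sum(d < threshold for d in distances) / len(distances)
```

Sweeping `threshold` over the $\sim$2-12 Å range traces out one curve per pose-selection method.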
The AutoDock Vina scoring function exhibits the largest spread of zinc-sulfonamide
distances, and as a result identifies a comparatively large fraction of poses
as incorrectly coordinated. GNINA pose selection performs significantly better
in this setting, identifying a larger fraction of well-coordinated poses with
low zinc-sulfonamide distance. We find our method ultimately correctly
coordinates the largest proportion of poses when compared to AutoDock Vina or
GNINA. We note that our approach for binding pose selection is markedly
different from the approach taken by GNINA, which involves a separate pose
scoring head trained to identify poses with low-RMSD to ground truth crystal
structures. Our attention scores on the other hand are effectively latent
variables trained only via the auxiliary task of modeling DEL data. The
benefit of our approach is that we can learn to identify good poses in an
unsupervised manner, without requiring scarce and expensive crystal structures
to serve as the source of supervision for pose selection.
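The pose-importance interpretation above amounts to reading off softmax attention probabilities over per-pose scores. A minimal sketch follows; in the actual model the logits come from self-attention over CNN pose embeddings, so this is an illustration of the ranking step only:

```python
import math

def attention_weights(pose_logits):
    """Numerically stable softmax over per-pose scores; the resulting
    probabilities serve as importance weights for ranking poses."""
    m = max(pose_logits)
    exps = [math.exp(x - m) for x in pose_logits]
    z = sum(exps)
    return [e / z for e in exps]

def top_pose(pose_logits):
    """Index of the pose carrying the highest attention weight."""
    weights = attention_weights(pose_logits)
    return max(range(len(weights)), key=lambda i: weights[i])
```

Because the weights sum to one, comparing them across a molecule's 20 docked poses directly yields the ranking analyzed in Fig. 3b-c.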
### 3.3 DEL-Dock offers interpretability through its attention mechanisms
Lastly, we further demonstrate the interpretability of our model by examining
the distribution of attention scores learned by our model for a specific
molecule (Fig. 4). For this molecule, only 7 out of 20 docked poses correctly
coordinate the sulfonamide group with the zinc ion buried in the protein
active site. Our model appropriately identifies this binding mode and learns
attention scores that more favorably rank these 7 correctly coordinated poses
(Fig. 4). The top three poses ranked by our model (Fig. 4a) have very similar
conformations, each exhibiting zinc-sulfonamide coordination, and differing
only in the orientation of the terminal benzene ring that is distant from the
active site. The other poses that show zinc-sulfonamide coordination (Fig.
4b-d) are also ranked highly by our model, however, these poses exhibit less
favorable conformations in several ways. For instance, the conformation in
Fig. 4b is more exposed, and less protected by the protein. Finally, our model
in general more poorly ranks poses that display incorrect zinc-sulfonamide
coordination (Fig. 4e). These conformations typically have the terminal
benzene ring inserted into the active site. Also, these poses reveal why zinc-
sulfonamide distances alone can be a deceiving metric as some poses are
capable of achieving low zinc-sulfonamide distances ($\sim$3 Å) due to the
molecule “curling in” on itself within the active site. Nevertheless, our
model recognizes this spurious binding mode and poorly ranks these poses with
comparatively low attention scores, even though AutoDock Vina highly ranks
many of these bad poses. Overall, we find the hierarchy of pose rankings by
our model to be commensurate with anticipated binding behavior for this
protein target.
Figure 4: Analysis of pose attention scores for a representative molecule in
our evaluation data set. (left) Our model predicted pose attention scores
plotted against the zinc-sulfonamide distance of the docked pose and colored
according to the ranking determined by the AutoDock Vina scoring function.
(right) Different protein-ligand complexes are visualized to show that our
model highly ranks the conformers with zinc-sulfonamide coordination (a-d),
while the conformers without the correct coordination are ranked lower (e).
## 4 Conclusions
In this work we present an approach for modeling DEL data that combines
docking-based and molecular descriptor-based data modalities. Our approach
involves predicting two interleaved quantities, enrichment scores, that
explain the sequencing counts of the panning experiment measurements for both
the on-target protein and the off-target control beads. We evaluate our method
by first training on DEL data screened against the human carbonic anhydrase
(CAIX) 12 protein target, and then predicting binding for unseen molecules
with external experimental constant of inhibition ($K_{i}$) affinity
measurements curated from the BindingDB 24 web database. For this prediction
task we find our approach outperforms previous docking and DEL modeling
techniques that only use either docked poses or molecular descriptor
information alone. Furthermore, a critical component of our model involves
performing self-attention over pose embeddings, in order to learn over the set
of possible poses. Analyzing these latent attention scores, we find our model
effectively identifies good docked poses. Compared to docking pose selection
using either AutoDock Vina 34 or GNINA 27, 28, 29, 36, our model more reliably
selects poses displaying the appropriate zinc-sulfonamide coordination, which
is known to be the predominant binding mode for carbonic anhydrase 12, 32, 33.
Our model is interestingly capable of learning good pose selection in an
unsupervised manner, training only on the voluminous DEL data rather than
requiring crystal structures to serve as the source of supervision.
There are some limitations to our approach, however. Our approach assumes that
we can obtain reasonably accurate protein-ligand docking poses, which is not
always true. We can only utilize this model for proteins which have 3-D
crystal structures; and even if protein crystal structures are available, they
may not always be accurate. Additionally, we are limited by the quality of the
docking software, which may fail to capture the actual binding modes
or fail to rank the good binding modes highly.
Our work focuses on introducing the concept of utilizing docked poses to
improve DEL models, but there are several avenues for future work. While we
use Morgan fingerprints as our molecule featurizer, deep learning approaches
have the potential to generate more expressive representations, such as Graph
Neural Networks (GNNs) that have demonstrated excellent predictive performance
on many molecular property prediction tasks 37. In our application,
equivariant GNNs could also be used as an alternative featurizer to CNNs for
embedding docked poses, which would provide the added benefit of producing
explicitly roto-translationally symmetric representations 38. We only select
the top-ranking poses from docking, but providing a more diverse set of poses
could prove more useful. Future research directions could also leverage
our approach to unsupervised pose selection for downstream free energy
calculations in larger-scale virtual screening campaigns. Our approach paves
the way for multi-modality modeling of DEL data and unsupervised learning of
bespoke, DEL-conditioned, scoring functions.
## 5 Acknowledgements
We would like to thank Nathaniel Stanley and Sam Mun at Insitro for helpful
discussions and advice related to Docking software and usage. We would also
like to thank Daphne Koller, Robert Hilgraf, Nathaniel Stanley, Fiorella
Ruggiu and Yujia Bao at Insitro for providing general feedback and review of
our work. Lastly, we would like to thank Patrick Conrad at Insitro for helping
us with the public code release.
## 6 Data Availability
We use publicly available data from Gerry et al. 12, and our code is publicly
available at: https://github.com/insitro/insitro-research.
## 7 Supporting Information
### 7.1 Training settings
Featurizations for the docked poses are generated using pre-trained GNINA
models provided in gnina-torch 28, 29. The dense variant of the GNINA models
composed of densely connected 3D residual CNN blocks introduced in Francoeur
et al. 36 are used to generate 224-dimensional embeddings of each docked pose.
Morgan fingerprints 26 for each molecule are calculated using RDKit with a
radius of 3 embedded into a 2048 dimensional bit-vector.
All models are trained end-to-end using mini-batch gradient descent with the
Adam optimizer 39 and coefficients for the running averages of
$\beta_{1}=0.95$ and $\beta_{2}=0.999$. A batch size of 64 is used with an
initial learning rate of $1\times 10^{-4}$ and a linearly decaying learning
rate scheduler where the learning rate is decayed by a factor of
$\gamma^{\frac{1}{n_{steps}}}$ every batch. For our learning rate scheduler we
use $\gamma=0.1$ and $n_{steps}=1250$, which corresponds to a $10\times$
reduction in the learning rate after $1250$ batches. We also apply gradient
clipping, where gradient norms are clipped to a maximum value of 0.1. During
training we maintain an exponential moving average over our model parameters
which are updated each step with a decay rate of 0.999. This exponential
moving average version of the model parameters is then used for evaluation and
throughout all inference tasks. Throughout our model we use LeakyReLU
activation functions with a negative slope constant of $1\times 10^{-2}$,
except for the final activation function applied to the output logits
corresponding to the matrix and target enrichment scores where we apply an
exponential function as our terminal activation. A hidden dimensionality of
256 is used within MLP layers in our network. The residual MLP layers, which
are responsible for processing the Morgan fingerprints along with the CNN
features and embeddings (Fig. 2 in the main text), are composed of 2
residually connected MLP layers using dropout with a probability of 0.5. Our
model in sum is composed of $\sim$1M parameters and is trained for 8 epochs on
a single NVIDIA T4 GPU. A PyTorch implementation of our model that makes use
of PyTorch Lightning 40 and Pyro 41 is publicly available at:
https://github.com/insitro/insitro-research.
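The learning-rate schedule described above multiplies the rate by $\gamma^{1/n_{steps}}$ each batch, so after $t$ batches the rate is $lr_{0}\cdot\gamma^{t/n_{steps}}$. A small sketch of the resulting schedule:

```python
def lr_at_step(step, lr0=1e-4, gamma=0.1, n_steps=1250):
    """Learning rate after `step` batches under per-batch decay by
    gamma**(1/n_steps): the rate shrinks by a factor of gamma
    (10x here) every n_steps batches."""
    return lr0 * gamma ** (step / n_steps)
```

With the paper's settings ($lr_{0}=10^{-4}$, $\gamma=0.1$, $n_{steps}=1250$), the rate reaches $10^{-5}$ after 1250 batches and $10^{-6}$ after 2500.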
### 7.2 Ablations
We present here a number of ablations on our model, exploring some different
architectural components and design choices (Table 2). First, we see that
training models with only fingerprint representations, without incorporating
any information from the docked poses, results in a marked decrease in
performance. On the other hand, models trained using only CNN representations
perform much better and display performance comparable to the pretrained GNINA
models alone (Table 1). This is an intuitive result, as this
training setting is effectively equivalent to fine-tuning GNINA using
multi-instance learning over the pose representations. Interestingly, we find that
training on the CNN features alone already achieves good binding pose
selection based on the latent attention scores, a feature our model is
evidently capable of learning in isolation from the fingerprint representations.
Also shown as a baseline is the MLP network trained using the bespoke loss
function for modelling DEL data presented by Lim et al. 9. This approach
represents the lower performing of the two architectures explored by Lim et
al. 9, the other being the GNN architecture presented as a baseline in the
main text (Table 1).
Model | Spearman Ki (full) ↓ | Spearman Ki (subset) ↓
---|---|---
Only fingerprints | -0.191 $\pm$ 0.005 | -0.083 $\pm$ 0.019
Only CNN | -0.287 $\pm$ 0.005 | -0.124 $\pm$ 0.006
MLP from Lim et al. 9 | -0.244 $\pm$ 0.004 | -0.076 $\pm$ 0.017
Without zero-inflated distribution | -0.26 $\pm$ 0.02 | -0.08 $\pm$ 0.03
End-to-end voxels with frozen CNN | -0.278 $\pm$ 0.022 | -0.16 $\pm$ 0.03
Table 2: Model ablations and other baselines. Error bars are calculated as
standard deviations over five independently initialized models.
Interestingly, using a zero-inflated loss appears to be critical to our
performance, resulting in a $\sim$25% increase in Spearman rank-correlation on
the full evaluation set and greater than a 2$\times$ increase on the subset.
We suspect this performance jump could be related to the disparity in zero-
counts between the control and on-target experiment: the control experiments
have a zero-count frequency of $\sim$0.75% while the protein target
experiments have a zero-count frequency of $\sim$55%. Using a zero-inflated
distribution could provide our model more flexibility to explain zero-counts
as an artifact of the data generation process, rather than an outcome of poor
protein binding.
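The intuition behind the zero-inflated loss can be made concrete with a zero-inflated Poisson log-likelihood. This is illustrative only, assuming a Poisson count model; the paper's actual distributional choices are described in the Methods:

```python
import math

def zip_log_prob(k, rate, pi):
    """Log-likelihood of count k under a zero-inflated Poisson:
    with probability pi the count is forced to zero (e.g. a molecule
    dropping out of the experiment), otherwise k ~ Poisson(rate)."""
    if k == 0:
        # zero can arise from inflation OR from the Poisson itself
        return math.log(pi + (1 - pi) * math.exp(-rate))
    return math.log(1 - pi) + k * math.log(rate) - rate - math.lgamma(k + 1)
```

A nonzero `pi` raises the likelihood of observed zeros without forcing the latent rate (the enrichment signal) toward zero, which matches the interpretation of zero-counts as data-generation artifacts rather than evidence of poor binding.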
Instead of using pre-computed CNN features from GNINA we also explored
training our model directly from the voxel representations using frozen CNN
featurizers. The benefit of this approach is the ability to use data
augmentation via random rotations and translations to implicitly enforce that
the learned CNN embeddings remain roto-translationally equivariant. While we
notice performance on the evaluation subset is comparable with our model trained on
pre-computed CNN features (Table 1), the performance on the full data set is
slightly reduced. We suspect this result could be due to the computational
challenges of using voxelized representations. In particular, when training
over many docked poses (in our case 20 poses per molecule) our batch size is
effectively 20$\times$ larger – which presents a significant memory bottleneck
as the voxel representation requires storing a
48$\times$48$\times$48$\times$28 molecular grid (three dimensions
discretizing space, and one for different atom types). Furthermore, our pre-
computed features are already being generated with pre-trained CNN featurizers
that have been trained using data augmentation, albeit on PDBBind for the
separate task of affinity and pose prediction. Nevertheless, we certainly
expect improved performances could still be achieved for these full
differentiable training approaches given the appropriate compute resources and
further tuning.
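To put the memory bottleneck in numbers: a single float32 grid of this shape already occupies roughly 12 MB, and carrying 20 poses per molecule multiplies that across the batch. The figures below are a back-of-the-envelope sketch, not measured allocations:

```python
def voxel_batch_bytes(batch=64, poses=20, grid=48, channels=28,
                      bytes_per_value=4):
    """Rough float32 memory footprint of a batch of voxelized poses:
    each pose is a grid**3 x channels tensor, and carrying `poses`
    docked poses per molecule inflates the effective batch size."""
    return batch * poses * grid ** 3 * channels * bytes_per_value
```

With the defaults above (batch of 64, 20 poses each), the voxel inputs alone approach 16 GB before any activations, which explains the practical preference for pre-computed CNN features.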
Lastly, presented in Table 3 is a comparison of Spearman rank-correlation
performances training on variable numbers of poses. For each model the top-$k$
poses generated via docking are used for training. Performance tends to
generally improve with increasing number of poses used for training, with the
largest difference in improvements realized on the molecular weight restricted
subset. Beyond $\sim$10 poses, training appears to yield diminishing returns
in comparison to the jump in improvement seen from 2 $\rightarrow$ 10 poses.
Number of training poses | Spearman Ki (full) ↓ | Spearman Ki (subset) ↓
---|---|---
2 poses | -0.278 $\pm$ 0.011 | -0.112 $\pm$ 0.023
5 poses | -0.304 $\pm$ 0.01 | -0.15 $\pm$ 0.02
10 poses | -0.318 $\pm$ 0.007 | -0.175 $\pm$ 0.02
15 poses | -0.324 $\pm$ 0.008 | -0.182 $\pm$ 0.014
20 poses | -0.328 $\pm$ 0.009 | -0.186 $\pm$ 0.013
Table 3: Model ablations training on different numbers of docked poses. Error
bars are calculated as standard deviations over five independently initialized
models.
### 7.3 Supplementary Figures
Shown in Fig. 5 is a TSNE embedding of the DEL data set alongside our
evaluation data. This TSNE embedding is generated by representing each
molecule with a concatenation of three fingerprint representations: a
2048-dimensional Morgan fingerprint with a radius of 3, 167-dimensional MACCS
(Molecular ACCess System) fingerprint, and finally a 2048-dimensional atom
pair fingerprint. All fingerprints are calculated using RDKit. scikit-learn
is then used to generate the TSNE embedding using a Tanimoto similarity metric
with a perplexity of 30 trained on the combined DEL and evaluation data. We
notice the evaluation data is largely isolated from the DEL data in this TSNE
embedding, serving as an indication that our evaluation data is markedly
different, or out of domain, from the DEL data used in training our models.
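The Tanimoto metric used for these fingerprint comparisons can be sketched on sets of on-bit indices (RDKit computes this directly on its fingerprint objects):

```python
def tanimoto(bits_a, bits_b):
    """Tanimoto similarity between two fingerprints represented as
    sets of on-bit indices: |A & B| / |A | B|."""
    a, b = set(bits_a), set(bits_b)
    if not a and not b:
        return 1.0  # two empty fingerprints are conventionally identical
    return len(a & b) / len(a | b)
```

For the embedding itself, 1 minus this similarity serves as the pairwise distance fed to the TSNE algorithm.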
Figure 5: TSNE embedding of our DEL and evaluation data. Molecular
representations for this TSNE embedding are generated as a concatenation of
three fingerprint representations: Morgan fingerprints, MACCS fingerprints,
and atom pair fingerprints. Figure 6: Comparison of the distribution of
molecular weights between the DEL data set and (a) the full evaluation data
set and (b) the 417-517 amu subset of the evaluation data set. Distributions
are generated as a Kernel Density Estimate (KDE) plot as implemented in
seaborn 42.
Shown in Fig. 7 are the distributions of zinc-sulfonamide distances throughout
the top-five ranking poses as identified by our DEL-Dock model attention
scores, GNINA pose selection, and the AutoDock Vina scoring function. We
notice that the highly ranked poses by our model attribute more density in the
closely separated regime under $\sim$4Å than GNINA or Vina, and as a direct
consequence of this we see fewer poses selected by our model showing large
separations between $\sim$4Å - $\sim$13Å.
Figure 7: Distribution of Zinc-Sulfonamide distances throughout the top-five
ranked poses by our DEL-Dock model attention scores, GNINA pose selection
score, and AutoDock Vina scoring function.
## References
* Hughes et al. 2011 Hughes, J. P.; Rees, S.; Kalindjian, S. B.; Philpott, K. L. Principles of early drug discovery. _British journal of pharmacology_ 2011, _162_ , 1239–1249
* Satz et al. 2022 Satz, A. L.; Brunschweiger, A.; Flanagan, M. E.; Gloger, A.; Hansen, N. J.; Kuai, L.; Kunig, V. B.; Lu, X.; Madsen, D.; Marcaurelle, L. A., et al. DNA-encoded chemical libraries. _Nature Reviews Methods Primers_ 2022, _2_ , 1–17
* Sunkari et al. 2021 Sunkari, Y. K.; Siripuram, V. K.; Nguyen, T.-L.; Flajolet, M. High-power screening (HPS) empowered by DNA-encoded libraries. _Trends in Pharmacological Sciences_ 2021,
* Clark et al. 2009 Clark, M. A.; Acharya, R. A.; Arico-Muendel, C. C.; Belyanskaya, S. L.; Benjamin, D. R.; Carlson, N. R.; Centrella, P. A.; Chiu, C. H.; Creaser, S. P.; Cuozzo, J. W., et al. Design, synthesis and selection of DNA-encoded small-molecule libraries. _Nature chemical biology_ 2009, _5_ , 647–654
* Kleiner et al. 2011 Kleiner, R. E.; Dumelin, C. E.; Liu, D. R. Small-molecule discovery from DNA-encoded chemical libraries. _Chemical Society Reviews_ 2011, _40_ , 5707–5717
* Goodnow et al. 2017 Goodnow, R. A.; Dumelin, C. E.; Keefe, A. D. DNA-encoded chemistry: enabling the deeper sampling of chemical space. _Nature Reviews Drug Discovery_ 2017, _16_ , 131–147
* Flood et al. 2020 Flood, D. T.; Kingston, C.; Vantourout, J. C.; Dawson, P. E.; Baran, P. S. DNA encoded libraries: a visitor’s guide. _Israel Journal of Chemistry_ 2020, _60_ , 268–280
* Binder et al. 2022 Binder, P.; Lawler, M.; Grady, L.; Carlson, N.; Leelananda, S.; Belyanskaya, S.; Franklin, J.; Tilmans, N.; Palacci, H. Partial Product Aware Machine Learning on DNA-Encoded Libraries. _arXiv preprint arXiv:2205.08020_ 2022,
* Lim et al. 2022 Lim, K. S.; Reidenbach, A. G.; Hua, B. K.; Mason, J. W.; Gerry, C. J.; Clemons, P. A.; Coley, C. W. Machine learning on DNA-encoded library count data using an uncertainty-aware probabilistic loss function. _Journal of Chemical Information and Modeling_ 2022,
* Zhu et al. 2021 Zhu, H.; Foley, T. L.; Montgomery, J. I.; Stanton, R. V. Understanding Data Noise and Uncertainty through Analysis of Replicate Samples in DNA-Encoded Library Selection. _Journal of Chemical Information and Modeling_ 2021, _62_ , 2239–2247
* Kómár and Kalinic 2020 Kómár, P.; Kalinic, M. Denoising DNA encoded library screens with sparse learning. _ACS Combinatorial Science_ 2020, _22_ , 410–421
* Gerry et al. 2019 Gerry, C. J.; Wawer, M. J.; Clemons, P. A.; Schreiber, S. L. DNA barcoding a complete matrix of stereoisomeric small molecules. _Journal of the American Chemical Society_ 2019, _141_ , 10225–10235
* Jones et al. 2021 Jones, D.; Kim, H.; Zhang, X.; Zemla, A.; Stevenson, G.; Bennett, W. D.; Kirshner, D.; Wong, S. E.; Lightstone, F. C.; Allen, J. E. Improved protein–ligand binding affinity prediction with structure-based deep fusion inference. _Journal of chemical information and modeling_ 2021, _61_ , 1583–1592
* Hwang et al. 2017 Hwang, H.; Dey, F.; Petrey, D.; Honig, B. Structure-based prediction of ligand–protein interactions on a genome-wide scale. _Proceedings of the National Academy of Sciences_ 2017, _114_ , 13685–13690
* McCloskey et al. 2020 McCloskey, K.; Sigel, E. A.; Kearnes, S.; Xue, L.; Tian, X.; Moccia, D.; Gikunju, D.; Bazzaz, S.; Chan, B.; Clark, M. A., et al. Machine learning on DNA-encoded libraries: a new paradigm for hit finding. _Journal of Medicinal Chemistry_ 2020, _63_ , 8857–8866
* Ma et al. 2021 Ma, R.; Dreiman, G. H.; Ruggiu, F.; Riesselman, A. J.; Liu, B.; James, K.; Sultan, M.; Koller, D. Regression modeling on DNA encoded libraries. NeurIPS 2021 AI for Science Workshop. 2021
1 Birla Institute of Technology & Science Pilani, K.K. Birla Goa Campus, India
2 Manipal Institute of Technology, Karnataka, India
# Movie Recommendation System using Composite Ranking
Irish Mehta 0000-0002-7001-258X Aashal Kamdar 0000-0002-7067-425X
###### Abstract
In today’s world, abundant digital content like e-books, movies, videos and
articles is available for consumption. It is daunting to review everything
accessible and decide what to watch next. Digital media providers therefore
want to tackle this overload to increase user engagement, eventually leading
to higher revenues. Content providers often utilise recommendation systems as
an effective approach for combating such information overload. This paper
concentrates on developing a composite approach for recommending movies.
Traditionally, movie recommendation systems use either collaborative
filtering, which utilises user interaction with the media, or content-based
filtering, which makes use of the movie’s available metadata. Technological
advancements have also introduced hybrid techniques that integrate both
systems. Our approach, however, deals solely with content-based
recommendations, further enhancing them with a ranking algorithm based on
content similarity metrics. The three metrics contributing to the ranking are
similarity in metadata, similarity in visual content, and the sentiment of
user reviews of the movies. We use text vectorization followed by cosine
similarity for metadata, feature extraction by a pre-trained VGG19 followed by
K-means clustering for visual content, and a comparison of sentiments for user
reviews. Such a system lets viewers find movies that "feel" the same.
###### Keywords:
Recommendation systems · Content-based filtering · Sentiment analysis · Visual
similarity.
## 1 Introduction
The internet has become widespread, leading to an unlikely problem for its
users: having too many choices. From choosing electronics like mobiles and
laptops to choosing a university for graduation, there is an exorbitant amount
of information at the tip of our fingers. This can get overwhelming, so
recommendation systems were introduced as an effective method of combating
this information overload. Even though recommendation systems are a relatively
young field of research, they have long been an integral component of society,
significantly impacting our lives and those of the people around us
[1].
Researchers have developed various recommender systems up to this point for
many different types of industries. Recommender systems benefit both service
providers and users [2]. They help companies with customer retention, thus
increasing revenues and reducing the time spent by a user looking for the next
best item. There are mainly three different types of recommendation systems
that are utilised, which have been described below:
Content-Based Filtering: Such systems show the user more content that is
similar to what they have liked in the past [1]. The similarity between two
items can be estimated from their related features; the features and
preferences provided by the user are used to curate recommendations the user
might like.
Collaborative Filtering: This method focuses on finding similar users and
recommending what those users like. It can surface items a user might enjoy
based on the ratings or comments of other users [6].
Hybrid Recommender Systems: The idea behind this method is to combine two
recommendation systems so as to overcome the shortcomings of each [1].
In this paper, we focus on content-based recommendation systems and aim to
improve the recommendations based on a composite ranking system involving a
combination of visual aspects and user reviews of the content. We first use a
metadata-based recommendation system to get a set of initial recommendations
for movies. To understand the visual features of the reference and recommended
movies, we utilise key frames of the movie trailers, VGG19 for extraction of
features, and K Means clustering for grouping similar key frames. This is
followed by a novel approach to compare the closeness of both the trailers.
Additionally, we use sentiment analysis to understand how the wider audience
has received it. Ultimately, both of these are combined to create a ranking
algorithm.
## 2 Related Work
Recommender systems developed into a separate field of study in the
mid-1990s. The first recommendation system, called Tapestry, was developed at
the Xerox Palo Alto Research Center [3]. It was created as a solution to the
rising use of electronic mail, which led to a massive influx of documents.
Recommendation systems have been defined as a decision-making method for users
in complicated information settings [4]. They can also be defined as a tool
for assisting and enhancing the social process of using recommendations from
others to make decisions when one lacks sufficient personal knowledge or
expertise of other options [5]. User information overload is addressed by
recommender systems by offering users individualised, exclusive content and
service recommendations [2].
Several methods have been developed for building recommendation systems that
employ collaborative filtering, content-based filtering and hybrid filtering.
The most popular personalised recommendation technique in use is the
collaborative filtering algorithm [6]. Collaborative filtering gained
widespread attention when Amazon.com’s research team published how the company
uses collaborative filtering to improve its user recommendations [7].
According to them, Amazon utilised a memory-based approach by using an item-
to-item matrix for similar items. Content-based filtering provides
recommendations to a user by correlating the information describing an item
with other items in the database. Simon Philip et al. [8] deployed a content-
based recommendation system for recommending research papers for a digital
library. Robin van Meteran et al. [9] use content-based filtering to suggest
small articles on home improvements.
Nevertheless, these methods also have their limitations. Limited content
analysis, overspecialization, and data sparsity are some issues with content-
based filtering strategies [10]. Additionally, cold-start, sparsity, and
scalability issues are present in collaborative techniques. These can be
mitigated by creating a system that combines the features of different
filtering techniques to provide increased accuracy. This method is known as
hybrid filtering [11]. Mohammed Baidad et al. [12] used a hybrid
recommendation system to improve the quality of recommendations in education,
tailoring them to the needs of individual learners, since most people learn
differently. By merging various matrix manipulation techniques with
fundamental recommendation strategies, hybrid filtering attempts to tackle the
data sparsity and cold-start problems. They also strive to make better use of
product attributes, product testimonials, user demographic information, or
other well-known user traits [13]. Y Dang et al. [14] propose a hybrid
collaborative filtering algorithm for the recommendation of news and
interesting information sources. Konstas et al. [15] propose a music
recommendation system that incorporates play counts, tagging data, and social
relationships.
Yashar et al. have developed a recommendation system based on the visual
features of the content itself [16]. Their method focuses on how a content-
based recommendation system can give better recommendations using the visual
similarity of two movies, drawing on the theory of Applied Media
Aesthetics. The team at MediaFutures has developed multiple recommendation
systems incorporating Deep Content Features, i.e., visual features, to solve
the cold start problem and provide a comparatively better recommendation than
metadata [17].
Elham Asani et al. [18] developed a recommendation system to suggest
restaurants to users by eliciting their food preferences from their comments.
Anmol Chauhan et al. [19] use sentiment analysis to recommend movies to a user
based on their view history.
However, almost all the existing recommendation systems utilise visual
similarity for either standalone recommendations or to recommend in the case
of a cold start problem. Regarding our contribution, there is no other
algorithm designed to rank recommendation systems in three dimensions of the
content, i.e., metadata, visual and sentiment analysis of movie reviews.
## 3 Dataset
Table 1: The list of all data points considered for the ranking system
| Details about the type of movie (Title, Overview, Genre, Tagline, Keywords)
---|---
Metadata | | The factual details of the movie (Runtime, Language, Director,
---
Cast, Writers, Production Companies, Release Date, Budget)
| | The Response to the movie from the audience (Popularity, Revenue,
---
Rating, Vote Count)
Visual Similarity | Official Trailer for each movie
Sentiment Analysis | All the user reviews for the given movie from IMDb
One of the most exhaustive sources of metadata for movies released worldwide
is the Internet Movie Database (IMDb) [20]. To create a content-based
recommendation system, we make publicly available a dataset of the top 10,000
English-language movies, sorted in descending order by vote count as of 13th
August 2022. The metadata consists of all the data points mentioned in Table
1. To rank the recommended movies by their visual similarity, we utilise the
official trailers of all the movies as extracted from YouTube [21]. In
addition, for each of the recommended movies, all the user reviews from IMDb
are considered for sentiment analysis.
We experimented with multilingual data by considering movies released in
regional Indian languages. However, using such data proved infeasible because
of its lack of uniformity and the scarcity of credible sources. We therefore
restrict ourselves to English-language movies in the context of this paper.
## 4 Methodology
Fig. 1 shows the flow diagram of our proposed system with its three
components, i.e., metadata similarity, visual similarity and the sentiment
analysis score.
Figure 1: Flow diagram for the proposed recommendation system
### 4.1 Metadata Similarity
Before ranking the recommendations of a movie, we generate a list of
recommendations for a random movie. We use a content-based filtering approach
similar to Rujhan Singla et al. [22] in their paper.
#### 4.1.1 Data Pre-Processing
All of the columns are not required for generating recommendations, so a
combination of the following metadata is vectorized and used -
$Combination=Keywords+Cast+Genres+Director+Overview$ (1)
In cases where the required data is unavailable, we use empty strings to
ensure continuity. Moreover, redundant information was removed by routine
natural language processing techniques such as stop-word removal and
lemmatisation.
#### 4.1.2 Algorithm
The data used for this content-based recommendation system is in the form of
text, but since machines cannot read strings, they require data to be
numerical. To transform this raw textual data into a numerical format, we use
a text vectorization algorithm, namely Term Frequency-Inverse Document
Frequency or TF-IDF, introduced by Karen Sparck Jones [23]. The term frequency
is the number of times a particular term appears in a document. The term
frequency represents each text from the data as a matrix. The quantity of
documents that use a particular term is known as document frequency. It
indicates how common the term is. Inverse Document Frequency or IDF aims to
reduce the weight of a term if it appears numerous times across all the
documents. It can be calculated as follows:
$idf_{i}=\log\left(\frac{n}{df_{i}}\right)$ (2)
Here, $idf_{i}$ is the IDF score for term $i$, $df_{i}$ is the number of
documents in which term $i$ appears, and $n$ is the total number of documents
under consideration. The TF-IDF score is the product of the TF matrix with
the IDF:
$w_{i,j}=tf_{i,j}\times idf_{i}$ (3)
where $w_{i,j}$ is the TF-IDF score for term $i$ in document $j$, $tf_{i,j}$
is the term frequency of term $i$ in document $j$, and $idf_{i}$ is the IDF
score for term $i$. The combination to be vectorized is shown in (1).
We create a single string for each movie consisting of the above features.
These were chosen because they would provide the most value for deciding which
movies are most similar to the given movie. In this scenario, a movie
represents a document and the combination above denotes a term. In this case,
the IDF of a document in the corpus will denote the number of documents or
movies where words in the combination will appear. As explained in [24], this
will be used to assign less weight to terms that are used frequently. Cosine
similarity is used to calculate the distance between the unit vectors of the
movies. The movies having the shortest distance would be most similar to the
initially given movie, as also used by D Gunawan et al. [25] to calculate the
text relevance between two documents. Cosine similarity is the cosine of the
angle between two vectors.
$similarity=\cos(\theta)=\frac{A\cdot B}{|A||B|}=\frac{\sum_{i=1}^{n}A_{i}B_{i}}{\sqrt{\sum_{i=1}^{n}A_{i}^{2}}\sqrt{\sum_{i=1}^{n}B_{i}^{2}}}$
(4)
where A is the vector of the initially given movie and B is the vector of
every other movie in the corpus. We use the above-described content-based
filter to further test our ranking algorithm to recommend five suggestions for
three movies.
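The metadata pipeline above can be sketched with scikit-learn. The movie strings below are hypothetical stand-ins for the Eq. (1) combinations, not entries from our dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical Eq. (1) strings: keywords + cast + genres + director + overview.
movies = {
    "Tenet": "time inversion espionage nolan thriller physics agent",
    "Interstellar": "space time wormhole nolan science fiction astronaut",
    "Cast Away": "island survival plane crash fedex alone ocean",
}

titles = list(movies)
tfidf = TfidfVectorizer(stop_words="english")
matrix = tfidf.fit_transform(movies.values())   # one TF-IDF row per movie
sims = cosine_similarity(matrix)                # pairwise Eq. (4) scores

def recommend(title, k=1):
    # Rank every other movie by cosine similarity to the given one.
    i = titles.index(title)
    order = sims[i].argsort()[::-1]
    return [titles[j] for j in order if j != i][:k]

print(recommend("Tenet"))  # ['Interstellar'] (shares 'time' and 'nolan')
```

In the actual system the corpus holds 10,000 such strings and the top five neighbours become the initial recommendations.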
### 4.2 Visual Similarity
The primary approach to ranking a list of movies by their visual similarity
to the reference movie is analogous to the process of Yashar et al. [16], but
with an unsupervised outlook rather than a supervised classification problem.
Instead of extracting low-level stylistic features like colour, motion, and
lighting, we rely on a clustering methodology to segment the movie’s trailer.
There is sufficient evidence that trailers and full-length movies are highly
correlated, so we take a movie’s trailer as a good indicator of the visual
features of the movie itself.
Figure 2: The flow diagram for the calculation of Video Similarity Algorithm 1
Calculation of Histogram difference for identifying Key Frames
1:Start Key Frame Extraction
2:$REF\leftarrow 1$
3:$KeyFrameList\leftarrow\\{\\}$
4:$\textit{N}\leftarrow\text{Total Frames}$
5:for $i\leftarrow 2,N$ do
6: $H1\leftarrow Histogram(Frame(REF))$
7: $H2\leftarrow Histogram(Frame(i))$
8: if $\displaystyle\left\lvert H1-H2\right\rvert\geq 0.85$ then
9: KeyFrameList.insert(Frame(i))
10: end if
11: $REF\leftarrow REF+1$
12:end for
13:Return KeyFrameList
Algorithm 2 Calculation of Cosine Similarity For removing similar Keyframes
1:Start Key Frame Checking
2:$REF\leftarrow 1$
3:$UniqueKeyFrames\leftarrow\\{\\}$
4:$\textit{N}\leftarrow\text{Total Key Frames}$
5:for $i\leftarrow 2,N$ do
6: $C1\leftarrow Cosine(Frame(REF))$
7: $C2\leftarrow Cosine(Frame(i))$
8: if $\displaystyle\left\lvert CosineSimilarity(C1,C2)\right\rvert<0.9$ then
9: UniqueKeyFrames.insert(Frame(i))
10: $REF\leftarrow REF+1$
11: end if
12:end for
13:Return UniqueKeyFrames
#### 4.2.1 Image Extraction
The extraction of frames is based on a key frame extraction model [33],
except that instead of only histogram matching, we implement a combination of
histogram matching and a cosine similarity metric. This approach checks
whether a frame carries sufficient new information compared to the previous
frame and ensures that similar key frames are not extracted. Moreover, since
more recent movies are shot at higher frame rates, extracting all frames would
create a high correlation between them and be computationally expensive.
Though there are many ways of extracting frames from a video, we extract
frames at a fixed interval, taking every other frame, before proceeding with
the key frame detection described in Algorithms 1 and 2.
Based on a preliminary analysis of 100 randomly selected trailers from the
dataset, the average duration is 120–180 seconds and the average frame rate is
30 frames per second. As a result, the total number of extracted frames
(without additional filters) is between 2000 and 2500.
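Algorithms 1 and 2 can be sketched as follows. This is a NumPy-only illustration: the histogram-difference metric (L1 distance on normalized histograms) and reading the 0.85/0.9 thresholds as absolute cut-offs are our assumptions, since the algorithms leave both implicit, and the actual pipeline computes histograms with OpenCV.

```python
import numpy as np

def norm_hist(frame, bins=32):
    # Normalized grayscale intensity histogram of one frame.
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / frame.size

def key_frames(frames, diff_threshold=0.85):
    # Algorithm 1 sketch: keep frame i when its histogram differs enough
    # from the previous frame's histogram (L1 distance here).
    keys = []
    ref = norm_hist(frames[0])
    for i in range(1, len(frames)):
        h = norm_hist(frames[i])
        if np.abs(ref - h).sum() >= diff_threshold:
            keys.append(i)
        ref = h
    return keys

def unique_key_frames(vectors, sim_threshold=0.9):
    # Algorithm 2 sketch: drop a key frame whose cosine similarity to the
    # current reference key frame is too high; advance the reference only
    # when a sufficiently different frame is kept.
    kept = [0]
    ref = vectors[0]
    for i in range(1, len(vectors)):
        v = vectors[i]
        cos = np.dot(ref, v) / (np.linalg.norm(ref) * np.linalg.norm(v))
        if abs(cos) < sim_threshold:
            kept.append(i)
            ref = v
    return kept

dark = np.zeros((64, 64))
bright = np.full((64, 64), 255.0)
print(key_frames([dark, bright, bright, dark]))  # [1, 3]
```

Only frames 1 and 3 survive because frame 2 repeats frame 1's histogram.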
#### 4.2.2 Data Preprocessing
Good movie trailers are designed to pique the viewer’s curiosity and show the
quality of the movie [34]. For the same reasons, production houses often keep
many cut scenes, transitions, introductory animation, ending animation, and
text-based frame animation during cut scenes. However, the visual cues
provided in these frames are negligible as the relevant information is already
part of the metadata. Additionally, as these frames can interfere with the
algorithm, we implement filtering criteria to eliminate such frames. For
transitions and introductory animation, we utilise PySceneDetect [35]; for
images that are almost purely one colour (cut-scenes, transitions), we
empirically determine thresholds on the pixel intensity and on the number of
such pixels beyond which a frame provides no extra information about the
trailer. We tested this approach manually on 20 trailers to determine the
thresholds. Each black-and-white frame is composed of pixels with intensities
between 0 and 255, with the former being a pitch-dark pixel and the latter a
pure white pixel. The results are shown in Table 2.
Table 2: The absolute pixel intensities for black images and white images
Mostly Black: 16 samples
Pixel Intensity | Number of images with 80% of pixels under the intensity
---|---
1 | 5
2 | 6
3-9 | 8
10-12 | 9
13-16 | 10
17 | 11
18 | 12
19-21 | 13
22-26 | 14
27-30 | 15
31-33 | 16
Mostly White: 11 samples
Pixel Intensity | Number of images with 80% of pixels under the intensity
---|---
215 | 2
216-223 | 5
224-235 | 6
236-241 | 7
242-253 | 8
254 | 9
255 | 11
To detect blurred images, we use a Laplacian operator to determine edges
within the picture [38]. Based on multiple iterations, we chose a threshold of
2, i.e., we filter out all frames whose variance of the Laplacian is less than
2.
Additionally, not all movies are shot in the same camera setting. Due to
different aspect ratios and perceptive cinematography, the presence of
letterboxes and cinematic black bars causes a difference in the frames of
different movies. Our framework takes care of this using image-processing
functions provided by the OpenCV library.
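A minimal sketch of these two filters, assuming the Table 2 extremes (33 for mostly-black, 215 for mostly-white) as the intensity cut-offs and an 80% pixel fraction. The real pipeline uses OpenCV's Laplacian; here the 3x3 kernel is convolved directly in NumPy to keep the example self-contained:

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float64)

def laplacian_variance(gray):
    # 'valid' 2-D convolution of the 3x3 Laplacian kernel; this is the same
    # statistic as cv2.Laplacian(gray, cv2.CV_64F).var() on the interior.
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def is_blurred(gray, threshold=2.0):
    # Frames with little edge response are treated as blurred filler.
    return laplacian_variance(gray) < threshold

def is_mostly_flat(gray, low=33, high=215, frac=0.8):
    # Frames where at least 80% of pixels sit below/above the empirical
    # cut-offs (Table 2) are treated as black/white cut-scene frames.
    n = gray.size
    return (gray <= low).sum() >= frac * n or (gray >= high).sum() >= frac * n

flat = np.full((50, 50), 10.0)                               # dark cut-scene
checker = 255.0 * (np.indices((50, 50)).sum(axis=0) % 2)     # sharp pattern
print(is_blurred(flat), is_blurred(checker))  # True False
```

Frames failing either test are dropped before feature extraction.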
#### 4.2.3 Feature Extraction
Figure 3: Breakdown of the VGG Convolutional Network used for feature
extraction
We use a pre-trained Visual Geometry Group (VGG) network architecture [36] to
extract the most important stylistic features from the database of key frames
of the reference trailer. It is a CNN model, trained on the massive ImageNet
dataset, that performs considerably well at creating feature vectors for a
given image [37]. By default, VGG19 takes input images of dimension 224x224x3
and generates a feature vector of dimension 1x1x4096 for each frame (see Fig.
3). Key frames within movie trailers carry detailed information, and resizing
them to 224x224x3 before extracting features can lose some of it. However,
the tradeoff is that generating features from high-resolution images is
computationally expensive.
#### 4.2.4 Kmeans Clustering
After extracting the feature vectors of multiple key frames, we use a K-means
clustering algorithm to cluster all the feature vectors [39]. This step helps
segment the type of scenes in the trailer (each key frame represents a scene).
We initially tried to select the number of clusters with the Elbow method
[40]. However, due to the fast pace of movie trailers and the significant
differences between most scenes, the within-cluster sum of squares (WCSS)
keeps decreasing until the number of clusters approaches the total number of
extracted key frames. Hence, we fix the number of clusters at 5, where we
manually gauged the clusters to be significantly different.
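The clustering step, sketched with scikit-learn on synthetic stand-in features (the real vectors are 4096-dimensional VGG19 outputs):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for VGG19 key-frame features: 5 well-separated groups of 12
# frames each, in 16 dimensions instead of 4096 to keep the sketch fast.
features = np.vstack(
    [rng.normal(loc=c, scale=0.1, size=(12, 16)) for c in range(5)]
)

# k = 5 clusters, as fixed in the paper.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)

def distribution_vector(feats, model):
    # Fraction of a trailer's key frames assigned to each reference
    # cluster; this is the vector P of Eq. (5).
    labels = model.predict(feats)
    return np.bincount(labels, minlength=model.n_clusters) / len(labels)

p = distribution_vector(features, kmeans)
print(p.sum())  # 1.0
```

For a query trailer, `distribution_vector` is evaluated against the reference movie's fitted model.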
#### 4.2.5 Calculation of Similarity
After the query frame is preprocessed, we find the Euclidean distance between
the feature vector of the query frame and the centroid of all the clusters of
the reference movie. The cluster centroid that is closest to the feature
vector of the query frame exhibits the highest similarity compared to other
centroids. Repeating this approach for all the frames of the query trailer, we
get a value of a cluster centroid of the reference movie for each frame of the
query trailer. Now we compare the percentage of query frames in each cluster
compared to the total frames in the query trailer. By doing this, we get a
distribution vector as shown in Equation 5. With this information, we compare
the distribution vector of the query trailer with that of the reference
trailer by using Equation 6.
$P=(v_{1},w_{1},x_{1},y_{1},z_{1})$ (5)
where $v_{1},w_{1},\dots,z_{1}$ signify the percentage distribution of key
frames over the clusters.
$Euclidean\
Distance=\sqrt{\left({v_{1}-v_{2}}\right)^{2}+\left({w_{1}-w_{2}}\right)^{2}+\dots+\left({z_{1}-z_{2}}\right)^{2}}$
(6)
Based on the Euclidean distance between the two distributions, we quantify how
close the two trailers are. The inverse of this metric is what we call the
visual similarity score, as given in Equation 7. The complete calculation
methodology is visualised in Fig. 4.
$VSS=\frac{1}{1+x}$ (7)
where $x$ is the Euclidean distance.
Figure 4: The calculation of Visual Similarity
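Equations 5 through 7 reduce to a few lines; the distribution vectors below are hypothetical:

```python
import numpy as np

def visual_similarity_score(p_ref, p_query):
    # Eq. (6): Euclidean distance between the two cluster-distribution
    # vectors; Eq. (7): VSS = 1 / (1 + distance).
    x = np.linalg.norm(np.asarray(p_ref) - np.asarray(p_query))
    return 1.0 / (1.0 + x)

# Hypothetical Eq. (5) vectors for a reference and a query trailer.
p_reference = [0.2, 0.2, 0.2, 0.2, 0.2]
p_query = [0.4, 0.3, 0.1, 0.1, 0.1]

print(visual_similarity_score(p_reference, p_reference))  # 1.0 (identical)
print(round(visual_similarity_score(p_reference, p_query), 3))
```

Identical distributions give the maximum score of 1, and the score decays toward 0 as the distributions diverge.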
### 4.3 Sentiment Analysis
A movie review is a piece of writing that expresses the writers’ opinions
about a particular film and offers either support or criticism, allowing a
viewer to decide whether or not they want to see the movie. Such reviews act
as indirect recommendations to the viewer. To better collect, retrieve,
measure, and evaluate viewers’ opinions, it is crucial to be able to classify
movie reviews [26]. Thus, we define sentiment analysis of movie reviews as a
classification problem and attempt to solve it as presented by Mais Yasen et
classification problem and attempt to solve it as presented by Mais Yasen et
al. [32].
#### 4.3.1 Data Pre-Processing
In order to prepare data for training a machine learning model, we need to
process it by removing all HTML tags, punctuations, single characters and
multiple spaces.
Since a machine learning algorithm cannot process direct text, we use Count
Vectorizer to convert the reviews into vectors.
#### 4.3.2 Algorithms Used
After obtaining the text in vector form, we use TF-IDF to get the importance
of each word in the review. The data is split into train and test, and we test
several classification algorithms to see which gives us the best result. The
algorithms used are-
##### Linear Support Vector Classifier (Linear SVC)
This algorithm classifies data using a linear kernel function and performs
well when many samples are involved. The algorithm aims to find a hyperplane
that separates the given samples into two classes in a P-dimensional space.
##### Logistic Regression
Introduced in 1958 [27], it is one of the earliest methods invented to perform
classification. Logistic regression measures the relationship between the
categorical dependent variable and one or more independent variables by
estimating probabilities using a sigmoid curve.
$f(x)=\frac{L}{1+e^{-k(x-x_{0})}}$ (8)
Sigmoid curve equation
##### Decision Trees
This algorithm uses a tree-like graph or model of decisions and their possible
consequences. It has a flow-like structure in which each internal node
represents a ”test” on an attribute [28].
$Entropy(S)=-\sum P(I)\times\log_{2}(P(I))$ (9)
$Information\ Gain(S,A)=Entropy(S)-\sum P(S|A)\times Entropy(S|A)$ (10)
Entropy is the quantity of data required to describe a sample accurately.
Information gain is the amount of information provided by a particular
feature.
##### Random Forest Classifier
This algorithm is an extension of the Decision Tree algorithm [29]. It works
by creating many decision trees, which together form a ‘forest’. Just as more
trees make a forest more robust, a higher number of decision trees in the
forest generally yields higher accuracy.
##### XGBoost
This algorithm is based on the decision tree algorithm that uses the gradient
boosting framework [30]. Decision trees are created in a sequential form. Each
independent variable is weighed before being fed into the decision tree that
forecasts outcomes.
##### Naive Bayes
It is a group of simple and efficient linear classifiers. The probabilistic
model of this algorithm is based on the Bayes theorem [31], and the adjective
naive comes from the assumption that the features in a dataset are mutually
independent.
$P(A|B)=\frac{P(B|A)P(A)}{P(B)}$ (11)
Bayes Theorem
#### 4.3.3 Metrics
We use two evaluation metrics, accuracy and F1 score.
Accuracy : Ratio of correctly predicted observations to the total
observations.
F1 Score : Weighted average of Precision and Recall.
_Note: Here precision is the ratio of correctly predicted positive
observations to the total predicted positive observations, and recall is the
ratio of correctly predicted positive observations to all actual positive
observations._
$Precision=\frac{TP}{TP+FP}$ (12) $Recall=\frac{TP}{TP+FN}$ (13) $F1\
Score=\frac{2\times Precision\times Recall}{Precision+Recall}$ (14)
$Accuracy=\frac{TP+TN}{TP+FN+TN+FP}$ (15)
_where TP, TN, FP, FN are True Positive, True Negative, False Positive and
False Negative respectively._
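Equations 12 through 15 operate on raw confusion-matrix counts; the counts below are made up for illustration:

```python
def classification_metrics(tp, tn, fp, fn):
    # Eqs. (12)-(15) from confusion-matrix counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, f1, accuracy

# Hypothetical counts: 45 true positives, 45 true negatives, 5 each of
# false positives and false negatives.
p, r, f1, acc = classification_metrics(45, 45, 5, 5)
print(round(p, 3), round(r, 3), round(f1, 3), round(acc, 3))  # 0.9 0.9 0.9 0.9
```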
After finding out the best-performing algorithm, we use that algorithm to do
sentiment analysis on the movies that we obtain from the content-based
filtering approach discussed previously. Since the number of reviews available
varies across movies (lower-ranked movies have fewer reviews), we randomly
sample 50 reviews per movie and repeat this process 10 times to find the
percentage of positive and negative reviews. This keeps the evaluation
unbiased with respect to fluctuations in the number of reviews across
recommendations. The positivity score of a movie is the average percentage of
positive reviews over those 10 runs, which determines its overall sentiment.
The sentiment classification algorithm has been trained on a publicly
available IMDb dataset [41].
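A sketch of the review classifier as a scikit-learn pipeline (Count Vectorizer, then TF-IDF weighting, then Linear SVC, as described above). The four toy reviews stand in for the labelled IMDb dataset [41]; they are not taken from it:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.svm import LinearSVC

# Toy stand-in for the IMDb training data; 1 = positive, 0 = negative.
reviews = ["great acting and a brilliant plot",
           "wonderful visuals loved it",
           "terrible pacing and awful dialogue",
           "boring script waste of time"]
labels = [1, 1, 0, 0]

# Count Vectorizer -> TF-IDF weighting -> Linear SVC.
clf = make_pipeline(CountVectorizer(), TfidfTransformer(), LinearSVC())
clf.fit(reviews, labels)

def positivity_score(movie_reviews):
    # Percentage of reviews classified positive; in the paper this is
    # averaged over ten random 50-review samples per movie.
    preds = clf.predict(movie_reviews)
    return 100.0 * preds.mean()

print(positivity_score(["brilliant plot and great visuals",
                        "awful boring script"]))  # 50.0
```

The returned percentage is the sentiment score SC fed into the ranking step.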
### 4.4 Calculation of Movie Similarity Score
To generate a ranking of the movies recommended by our system, we have created
a metric that consists of a weighted sum of the sentiment score and the visual
similarity score. To decide the weights, we assume that the sentiment of the
audience is unrelated to the visual similarity of the content. Hence we set
each weight to 0.5, giving equal importance to visual similarity and to the
sentiment analysis of the movies. For visual similarity, we consider the
similarity between trailers of the reference and recommended movies based on
the methodology proposed above. We have considered the percentage of positive
reviews for a movie as the sentiment analysis score.
$Movie\ Similarity\ Score=(VSS\times 0.5)+(SC\times 0.5)$ (16)
where,
VSS is Visual Similarity Score and SC is the Sentiment Score
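A sketch of the final ranking under Equation 16. The VSS values are taken from Table 4; the sentiment percentages are illustrative, and rescaling the positive-review percentage to [0, 1] is our assumption, since the paper leaves the relative scaling of the two terms implicit:

```python
def movie_similarity_score(vss, sentiment_pct):
    # Eq. (16) with equal 0.5 weights; the positive-review percentage is
    # rescaled to [0, 1] here so both terms share a range (an assumption).
    return 0.5 * vss + 0.5 * (sentiment_pct / 100.0)

# VSS values from Table 4; the sentiment percentages are illustrative.
recommendations = {
    "Mission Impossible: Fallout": (0.933, 66.0),
    "Predestination": (0.892, 73.0),
    "Interstellar": (0.813, 70.0),
}

ranked = sorted(recommendations,
                key=lambda m: movie_similarity_score(*recommendations[m]),
                reverse=True)
print(ranked[0])  # Predestination
```

A strong sentiment score can thus promote a movie past a visually closer one, which is the intended effect of the composite metric.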
## 5 Results
We have taken 3 random movies [42, 43, 44] from the dataset and generated 5
movies as recommendations for each of those movies [45, 46, 47, 48, 49, 50,
51, 52, 53, 54, 55, 56, 57, 58, 59] as listed in Table 3.
Table 3: The list of recommendations for the reference movies based on metadata similarity
Reference Movie | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
Tenet | Interstellar | The 355 | Predestination | The Man from U.N.C.L.E. | Mission Impossible: Fallout
Cast Away | Six Days Seven Nights | Lord of the Flies | Nim’s Island | The Blue Lagoon | The Most Dangerous Game
2001: A Space Odyssey | Dark Star | 2010: The Year We Make Contact | Gravity | The Black Hole | Ad Astra
### 5.1 Visual Similarity Score
For all 18 movie titles, we compared the trailers of the recommended movies
to those of their reference movies. The average length of each trailer is 130
seconds and the average number of key frames is 210. Using the cosine
similarity step to remove matching frames reduces the number of key frames by
20%. The final distribution metric gives the following visual similarity
scores for the recommended movie trailers compared to their reference movie
trailers.
Table 4: The Visual Similarity score of all recommended movies
Visual Similarity of Recommended Movies
---
Tenet | Cast Away | 2001: A Space Odyssey
Mission Impossible: Fallout | 0.933 | Nim’s Island | 0.871 | Gravity | 0.803
Predestination | 0.892 | The Most Dangerous Game | 0.805 | Ad Astra | 0.771
The Man from U.N.C.L.E. | 0.879 | Six Days Seven Nights | 0.782 | Dark Star | 0.739
The 355 | 0.825 | Lord of the Flies | 0.764 | The Black Hole | 0.733
Interstellar | 0.813 | The Blue Lagoon | 0.754 | 2010: The Year We Make Contact | 0.686
### 5.2 Sentiment Score
Table 5 shows a comparative study of multiple sentiment classification
algorithms. Tables 6, 7 and 8 summarize how we computed a sentiment score for
each movie.
Table 5: Comparative study of the performance of different machine learning
models on the IMDb dataset
Sr. No. | Algorithm | Accuracy | Precision | Recall | F1-Score
---|---|---|---|---|---
1 | Linear Support Vector Classification | 90% | 90.79% | 88.92% | 89.85%
2 | Logistic Regression | 89.66% | 83.86% | 87.52% | 85.65%
3 | Decision Tree | 71.52% | 71.40% | 71.33% | 71.36%
4 | Random Forest Classifier | 74.58% | 70.77% | 83.06% | 76.43%
5 | XGBoost | 86.07% | 87.51% | 83.97% | 85.71%
6 | Naive Bayes | 85.41% | 83.86% | 87.52% | 85.65%
Based on Table 5, the Linear Support Vector Classifier gives the best results
on the IMDb data, with an accuracy of 90% and an F1 score of 89.85%.
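A minimal sketch of the winning model follows. The paper does not specify its feature extraction, so the TF-IDF vectorizer, its default settings, and the toy reviews below are all assumptions for illustration:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy stand-in for the IMDb review corpus (1 = positive, 0 = negative).
reviews = [
    "a great wonderful film with superb acting",
    "brilliant and moving, truly excellent",
    "an amazing, beautiful story",
    "terrible plot and awful pacing",
    "boring, a dreadful waste of time",
    "poor script, bad acting throughout",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features fed into a linear SVM, the best model in Table 5.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(reviews, labels)
pred = model.predict(["wonderful acting and an excellent story"])[0]
```

In the paper's pipeline, the trained classifier is applied to scraped user reviews, and the fraction of reviews predicted positive becomes the movie's sentiment score.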
Table 6: The sentiment-level ranking of recommended movies for Tenet
| Interstellar | The 355 | Predestination | The Man From U.N.C.L.E | Mission Impossible: Fallout | |
---|---|---|---|---|---|---
| Positive | Negative | Positive | Negative | Positive | Negative | Positive | Negative | Positive | Negative | | Ranking
Run 1 | 72% | 28% | 56% | 44% | 72% | 28% | 64% | 36% | 70% | 30% | | 1 | Predestination
Run 2 | 72% | 28% | 40% | 60% | 78% | 22% | 68% | 32% | 68% | 32% | | 2 | Interstellar
Run 3 | 74% | 26% | 46% | 54% | 68% | 32% | 60% | 40% | 64% | 36% | | 3 | Mission Impossible: Fallout
Run 4 | 70% | 30% | 46% | 54% | 82% | 18% | 66% | 34% | 54% | 46% | | 4 | The Man From U.N.C.L.E
Run 5 | 62% | 38% | 48% | 52% | 64% | 36% | 60% | 40% | 72% | 28% | | 5 | The 355
Run 6 | 68% | 32% | 42% | 58% | 74% | 26% | 74% | 26% | 68% | 32% | | |
Run 7 | 60% | 40% | 46% | 54% | 72% | 28% | 52% | 48% | 74% | 26% | | |
Run 8 | 54% | 46% | 36% | 64% | 70% | 30% | 58% | 42% | 50% | 50% | | |
Run 9 | 70% | 30% | 40% | 60% | 74% | 26% | 74% | 26% | 60% | 40% | | |
Run 10 | 66% | 34% | 52% | 48% | 72% | 28% | 64% | 36% | 64% | 36% | | |
Average | 67% | 33% | 45% | 55% | 73% | 27% | 64% | 36% | 64% | 36% | | |
Table 7: The sentiment-level ranking of recommended movies for Cast Away
| Six Days Seven Nights | Lord of the Flies | Nim's Island | The Blue Lagoon | The Most Dangerous Game | |
---|---|---|---|---|---|---
| Positive | Negative | Positive | Negative | Positive | Negative | Positive | Negative | Positive | Negative | | Ranking
Run 1 | 46% | 54% | 52% | 48% | 74% | 26% | 70% | 30% | 70% | 30% | | 1 | Nim’s Island
Run 2 | 46% | 54% | 44% | 56% | 74% | 26% | 78% | 22% | 84% | 16% | | 2 | The Most Dangerous Game
Run 3 | 58% | 42% | 48% | 52% | 74% | 26% | 64% | 36% | 76% | 24% | | 3 | The Blue Lagoon
Run 4 | 52% | 48% | 38% | 62% | 70% | 30% | 72% | 28% | 80% | 20% | | 4 | Six Days Seven Nights
Run 5 | 50% | 50% | 46% | 54% | 82% | 18% | 72% | 28% | 72% | 28% | | 5 | Lord of the Flies
Run 6 | 60% | 40% | 56% | 44% | 84% | 16% | 86% | 14% | 72% | 28% | | |
Run 7 | 54% | 46% | 50% | 50% | 78% | 22% | 64% | 36% | 62% | 38% | | |
Run 8 | 52% | 48% | 40% | 60% | 72% | 28% | 68% | 32% | 76% | 24% | | |
Run 9 | 48% | 52% | 52% | 48% | 72% | 28% | 74% | 26% | 76% | 24% | | |
Run 10 | 54% | 46% | 52% | 48% | 78% | 22% | 76% | 24% | 78% | 22% | | |
Average | 52% | 48% | 48% | 52% | 76% | 24% | 72% | 28% | 75% | 25% | | |
Table 8: The sentiment-level ranking of recommended movies for 2001: A Space
Odyssey
| Dark Star | 2010: The Year We Make Contact | Gravity | The Black Hole | Ad Astra | |
---|---|---|---|---|---|---
| Positive | Negative | Positive | Negative | Positive | Negative | Positive | Negative | Positive | Negative | | Order of Ranks
Run 1 | 64% | 36% | 60% | 40% | 58% | 42% | 42% | 58% | 34% | 66% | | 1 | 2010: The Year We Make Contact
Run 2 | 70% | 30% | 60% | 40% | 66% | 34% | 48% | 52% | 40% | 60% | | 2 | Dark Star
Run 3 | 56% | 44% | 74% | 26% | 54% | 46% | 52% | 48% | 22% | 78% | | 3 | Gravity
Run 4 | 60% | 40% | 62% | 38% | 64% | 36% | 46% | 54% | 24% | 76% | | 4 | The Black Hole
Run 5 | 74% | 26% | 68% | 32% | 52% | 48% | 48% | 52% | 30% | 70% | | 5 | Ad Astra
Run 6 | 64% | 36% | 70% | 30% | 46% | 54% | 52% | 48% | 30% | 70% | | |
Run 7 | 56% | 44% | 56% | 44% | 60% | 40% | 46% | 54% | 24% | 76% | | |
Run 8 | 68% | 32% | 64% | 36% | 68% | 32% | 48% | 52% | 30% | 70% | | |
Run 9 | 60% | 40% | 68% | 32% | 60% | 40% | 56% | 44% | 30% | 70% | | |
Run 10 | 56% | 44% | 64% | 36% | 64% | 36% | 44% | 56% | 32% | 68% | | |
Average | 63% | 37% | 65% | 35% | 59% | 41% | 48% | 52% | 30% | 70% | | |
Compiling all the results, we obtain the sentiment scores for the recommended
movies listed in Table 9.
Table 9: The positivity score of all recommended movies based on the sentiment
analysis algorithm
Tenet | Score | Cast Away | Score | 2001: A Space Odyssey | Score
---|---|---|---|---|---
Predestination | 0.726 | Nim's Island | 0.758 | 2010: The Year We Make Contact | 0.646
Interstellar | 0.668 | The Most Dangerous Game | 0.746 | Dark Star | 0.628
Mission Impossible: Fallout | 0.644 | The Blue Lagoon | 0.724 | Gravity | 0.592
The Man From U.N.C.L.E | 0.64 | Six Days Seven Nights | 0.52 | The Black Hole | 0.482
The 355 | 0.452 | Lord of the Flies | 0.478 | Ad Astra | 0.296
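Each entry in Table 9 is simply the mean positive fraction over the ten classifier runs. For instance, Interstellar's ten positive percentages from Table 6 reproduce its 0.668 score:

```python
# Positive percentages for Interstellar across the 10 runs (Table 6).
interstellar_runs = [72, 72, 74, 70, 62, 68, 60, 54, 70, 66]

def sentiment_score(run_percentages):
    """Mean fraction of positive reviews over all runs."""
    return sum(run_percentages) / (100 * len(run_percentages))

score = sentiment_score(interstellar_runs)  # 0.668, matching Table 9
```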
### 5.3 Movie Similarity Score
The final movie similarity scores are listed in Table 10. These scores are
derived from the weighted-sum methodology explained in Section 4.4.
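Combining Tables 4 and 9 and sorting by the resulting score reproduces the proposed ranking for Tenet; the dictionaries below simply restate those two tables:

```python
# Visual Similarity Scores (Table 4) and Sentiment Scores (Table 9) for Tenet.
vss = {"Predestination": 0.892, "Interstellar": 0.813,
       "Mission Impossible: Fallout": 0.933,
       "The Man From U.N.C.L.E": 0.879, "The 355": 0.825}
sc = {"Predestination": 0.726, "Interstellar": 0.668,
      "Mission Impossible: Fallout": 0.644,
      "The Man From U.N.C.L.E": 0.64, "The 355": 0.452}

# Equation (16): equal-weight combination, then sort descending.
mss = {m: 0.5 * vss[m] + 0.5 * sc[m] for m in vss}
ranking = sorted(mss, key=mss.get, reverse=True)
# ranking[0] is "Predestination" with score 0.809 (Table 10)
```

Note that the sorted order matches the proposed ranking reported for Tenet in Table 11.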
Table 10: The Movie Similarity Score based on the weighted combination of
Sentiment Score and Video Similarity Score
Reference Movie | Recommended Movie | Movie Similarity Score
---|---|---
Tenet | Predestination | 0.809
 | Interstellar | 0.7405
 | Mission Impossible: Fallout | 0.7885
 | The Man From U.N.C.L.E | 0.7595
 | The 355 | 0.6385
Cast Away | Nim's Island | 0.8145
 | The Most Dangerous Game | 0.7755
 | The Blue Lagoon | 0.739
 | Six Days Seven Nights | 0.651
 | Lord of the Flies | 0.621
2001: A Space Odyssey | 2010: The Year We Make Contact | 0.666
 | Dark Star | 0.6835
 | Gravity | 0.6975
 | The Black Hole | 0.6075
 | Ad Astra | 0.5335
### 5.4 Comparative Study with Existing System
To compare our rankings, we use the publicly available IMDb Top 250 ranking
algorithm, along with another publicly available popularity metric, to rank
the recommendations and then compare the results. We extend this algorithm to
all the movies, as our dataset has over 10,000 movies. The IMDb ranking
algorithm is as follows:
$w=\frac{(r\times v)+(c\times m)}{v+m}$ (17)
where,
w is the weighted rating
r is the average rating of the movie
v is the number of ratings for the movie
c is the mean of the ratings of all the movies in the corpus
m is the minimum votes required to be listed
We take the inverse of popularity because the lower the popularity value, the
more popular the movie is. The final score is based on
$Final\ Score=(W\times 0.5)+(\frac{1}{P}\times 0.5)$ (18)
where, W is normalized weighted rating and P is normalized popularity
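Equations (17) and (18) transcribe directly; the example numbers below are illustrative, not taken from the dataset:

```python
def weighted_rating(r, v, c, m):
    """IMDb weighted rating (Equation 17): average rating r with v votes,
    shrunk toward the corpus mean c with a vote threshold m."""
    return (r * v + c * m) / (v + m)

def final_score(w_norm, p_norm):
    """Equation (18): normalized weighted rating blended with
    inverse normalized popularity, equal weights."""
    return 0.5 * w_norm + 0.5 * (1.0 / p_norm)

# Illustrative numbers only: a movie rated 8.2 by 50,000 voters,
# corpus mean 6.9, and a 1,000-vote listing threshold.
w = weighted_rating(8.2, 50_000, 6.9, 1_000)  # ≈ 8.175
```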
The results of the comparative study are listed in Table 11. However, due to
the subjective nature of recommendations and the lack of theoretical
validation of what makes one recommendation better than another, we do not
use a performance metric to determine which system is better.
Table 11: The comparison of our proposed methodology with the ranking of IMDBs
algorithm
 | Rank | Proposed Ranking Algorithm | Current Ranking Algorithm based on Ratings and Popularity
---|---|---|---
Tenet | 1 | Predestination | Interstellar
2 | Mission Impossible: Fallout | Mission Impossible: Fallout
3 | The Man From U.N.C.L.E | Predestination
4 | Interstellar | The Man from U.N.C.L.E
5 | The 355 | The 355
CastAway | 1 | Nim’s Island | The Most Dangerous Game
2 | The Most Dangerous Game | Lord of The Flies
3 | The Blue Lagoon | Nim’s Island
4 | Six Days Seven Nights | Blue Lagoon
5 | Lord of the Flies | Six Days Seven Nights
2001: Space Odyssey | 1 | Gravity | Gravity
2 | Dark Star | 2010: The Year We Make Contact
3 | 2010: The Year We Make Contact | Ad Astra
4 | The Black Hole | Dark Star
5 | Ad Astra | The Black Hole
## 6 Conclusion and Future Work
To conclude, our contribution proposes a content-based recommendation system
for movies enhanced with a ranking algorithm that considers the visual
similarity of the content itself as well as measures the sentiment of user
reviews. For visual similarity, we use a pre-trained VGG network for feature
extraction from key frames of the movie trailer. We follow this by clustering
and calculating similarity based on the Euclidean distance of the distribution
of frames of the test and the reference movie. For sentiment analysis, we
utilize a publicly available IMDb dataset and choose the model with the best
combination of accuracy and F1 score, then calculate the percentage of
positive reviews for each movie. Finally, we combine these scores into a
unified Movie Similarity Score. We then compare our results with the ranking
algorithm of IMDb and find that the results are noticeably different.
We believe our contribution can improve the quality of recommendations
compared to the existing content-based systems. Our methodology is not only
limited to ranking content-based recommendations based on visual similarity
and audience sentiment, but also other dimensions such as the auditory
similarity and closeness of the script. Our contribution can also be used to
recommend other types of visual media, such as YouTube videos and animated
content. It can also be used as an add-on to existing recommendation systems
to incorporate a "similar feel" factor in the recommendations. To take this
research further, we plan to design a survey that tries to curate the opinion
of movie watchers regarding the quality of our recommendations. This would act
as a validation metric, allowing us to measure the improvement in
recommendations compared to a traditional content-based system.
## References
* [1] Sharma R, Singh R. Evolution of recommender systems from ancient times to modern era: a survey. Indian Journal of Science and Technology. 2016 May;9(20):1-2.
* [2] Isinkaye FO, Folajimi YO, Ojokoh BA. Recommendation systems: Principles, methods and evaluation. Egyptian informatics journal. 2015 Nov 1;16(3):261-73.
* [3] Goldberg D, Nichols D, Oki BM, Terry D. Using collaborative filtering to weave an information tapestry. Communications of the ACM. 1992 Dec 1;35(12):61-70.
* [4] Rashid AM, Albert I, Cosley D, Lam SK, McNee SM, Konstan JA, Riedl J. Getting to know you: learning new user preferences in recommender systems. InProceedings of the 7th international conference on Intelligent user interfaces 2002 Jan 13 (pp. 127-134).
* [5] Resnick P, Varian HR. Recommender systems. Communications of the ACM. 1997 Mar 1;40(3):56-8.
* [6] Song B, Gao Y, Li XM. Research on collaborative filtering recommendation algorithm based on mahout and user model. InJournal of Physics: Conference Series 2020 (Vol. 1437, No. 1, p. 012095). IOP Publishing.
* [7] Linden G, Smith B, York J. Amazon. com recommendations: Item-to-item collaborative filtering. IEEE Internet computing. 2003 Jan 22;7(1):76-80.
* [8] Philip S, Shola P, Ovye A. Application of content-based approach in research paper recommendation system for a digital library. International Journal of Advanced Computer Science and Applications. 2014 Oct;5(10).
* [9] Van Meteren R, Van Someren M. Using content-based filtering for recommendation. InProceedings of the machine learning in the new information age: MLnet/ECML2000 workshop 2000 May 30 (Vol. 30, pp. 47-56).
* [10] Adomavicius G, Tuzhilin A. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE transactions on knowledge and data engineering. 2005 Apr 25;17(6):734-49.
* [11] Göksedef M, Gündüz-Öğüdücü Ş. Combination of Web page recommender systems. Expert Systems with Applications. 2010 Apr 1;37(4):2911-22.
* [12] Baidada M, Mansouri K, Poirier F. Hybrid Filtering Recommendation System in an Educational Context: Experiment in Higher Education in Morocco. International Journal of Web-Based Learning and Teaching Technologies (IJWLTT). 2022 Jan 1;17(1):1-7.
* [13] Çano E, Morisio M. Hybrid recommender systems: A systematic literature review. Intelligent Data Analysis. 2017 Jan 1;21(6):1487-524.
* [14] Dong Y, Liu S, Chai J. Research of hybrid collaborative filtering algorithm based on news recommendation. In2016 9th international congress on image and signal processing, biomedical engineering and informatics (CISP-BMEI) 2016 Oct 15 (pp. 898-902). IEEE.
* [15] Konstas I, Stathopoulos V, Jose JM. On social networks and collaborative recommendation. InProceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval 2009 Jul 19 (pp. 195-202).
* [16] Deldjoo Y, Elahi M, Cremonesi P, Garzotto F, Piazzolla P, Quadrana M. Content-based video recommendation system based on stylistic visual features. Journal on Data Semantics. 2016 Jun;5(2):99-113.
* [17] Kvifte T, Elahi M, Trattner C. Hybrid Recommendation of Movies Based on Deep Content Features. InInternational Conference on Service-Oriented Computing 2022 (pp. 32-45). Springer, Cham.
* [18] Asani E, Vahdat-Nejad H, Sadri J. Restaurant recommender system based on sentiment analysis. Machine Learning with Applications. 2021 Dec 15;6:100114.
* [19] Chauhan A, Nagar D, Chaudhary P. Movie Recommender system using Sentiment Analysis. In2021 International Conference on Innovative Practices in Technology and Management (ICIPTM) 2021 Feb 17 (pp. 190-193). IEEE.
* [20] IMDb Homepage, https://www.imdb.com/. Last accessed 27 Sept 2022
* [21] YouTube Website, https://www.youtube.com/. Last accessed 27 Nov 2022
* [22] Singla R, Gupta S, Gupta A, Vishwakarma DK. FLEX: a content based movie recommender. In2020 International Conference for Emerging Technology (INCET) 2020 Jun 5 (pp. 1-4). IEEE.
* [23] Jones KS. A statistical interpretation of term specificity and its application in retrieval. Journal of documentation. 1972.
* [24] Robertson S. Understanding inverse document frequency: on theoretical arguments for IDF. Journal of documentation. 2004 Oct 1.
* [25] Gunawan D, Sembiring CA, Budiman MA. The implementation of cosine similarity to calculate text relevance between two documents. InJournal of physics: conference series 2018 Mar 1 (Vol. 978, No. 1, p. 012120). IOP Publishing.
* [26] Mishne G, Glance NS. Predicting movie sales from blogger sentiment. InAAAI spring symposium: computational approaches to analyzing weblogs 2006 Mar 27 (pp. 155-158).
* [27] Cox DR. The regression analysis of binary sequences. Journal of the Royal Statistical Society: Series B (Methodological). 1958 Jul;20(2):215-32.
* [28] Quinlan JR. Induction of decision trees. Machine learning. 1986 Mar;1(1):81-106.
* [29] Ho TK. Random decision forests. InProceedings of 3rd international conference on document analysis and recognition 1995 Aug 14 (Vol. 1, pp. 278-282). IEEE.
* [30] Chen T, Guestrin C. Xgboost: A scalable tree boosting system. InProceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining 2016 Aug 13 (pp. 785-794).
* [31] Webb GI, Keogh E, Miikkulainen R. Naïve Bayes. Encyclopedia of machine learning. 2010;15:713-4.
* [32] Yasen M, Tedmori S. Movies reviews sentiment analysis and classification. In2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT) 2019 Apr 9 (pp. 860-865). IEEE.
* [33] Ouyang S, Zhong L, Luo R. The comparison and analysis of extracting video key frame. InIOP Conference Series: Materials Science and Engineering 2018 May 1 (Vol. 359, No. 1, p. 012010). IOP Publishing.
* [34] Karray S, Debernitz L. The effectiveness of movie trailer advertising. International Journal of Advertising. 2017 Mar 4;36(2):368-92.
* [35] Castellano B. PySceneDetect. http://github.com/Breakthrough/PySceneDetect; 2012.
* [36] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2014 Sep 4.
* [37] Ahmed T, Das P, Ali MF, Mahmud MF. A comparative study on convolutional neural network based face recognition. In2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT) 2020 Jul 1 (pp. 1-5). IEEE.
* [38] Bansal R, Raj G, Choudhury T. Blur image detection using Laplacian operator and Open-CV. In2016 International Conference System Modeling & Advancement in Research Trends (SMART) 2016 Nov 25 (pp. 63-67). IEEE.
* [39] Na S, Xumin L, Yong G. Research on k-means clustering algorithm: An improved k-means clustering algorithm. In2010 Third International Symposium on intelligent information technology and security informatics 2010 Apr 2 (pp. 63-67). Ieee.
* [40] Shi C, Wei B, Wei S, Wang W, Liu H, Liu J. A quantitative discriminant method of elbow point for the optimal number of clusters in clustering algorithm. EURASIP Journal on Wireless Communications and Networking. 2021 Dec;2021(1):1-6.
* [41] Maas A, Daly RE, Pham PT, Huang D, Ng AY, Potts C. Learning word vectors for sentiment analysis. InProceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies 2011 Jun (pp. 142-150).
* [42] Tenet. [Film] Directed by: Christopher Nolan. United States: Warner Bros.; 2020.
* [43] Cast Away. [Film] Directed by: Robert Zemeckis. United States: Twentieth Century Fox; 2000.
* [44] 2001: A Space Odyssey. [Film] Directed by: Stanley Kubrick. United Kingdom: Metro-Goldwyn-Mayer (MGM), Stanley Kubrick Productions; 1968.
* [45] Interstellar. [Film] Directed by: Christopher Nolan. United States: Paramount Pictures; 2014.
* [46] The 355. [Film] Directed by: Simon Kinberg. United States: Universal Pictures; 2022.
* [47] Predestination. [Film] Directed by: Michael Spierig, Peter Spierig. Australia: Screen Australia; 2014.
* [48] The Man from U.N.C.L.E. [Film] Directed by: Guy Ritchie. United States: Warner Bros; 2015.
* [49] Mission: Impossible - Fallout. [Film] Directed by: Christopher McQuarrie. United States: Paramount Pictures; 2018.
* [50] Six Days Seven Nights. [Film] Directed by: Ivan Reitman. United States: Touchstone Pictures; 1998.
* [51] Lord of the Flies. [Film] Directed by: Harry Hook. United States: Castle Rock Entertainment; 1990.
* [52] Nim’s Island. [Film] Directed by: Jennifer Flackett, Mark Levin. United States: Walden Media; 2008.
* [53] The Blue Lagoon. [Film] Directed by: Randal Kleiser. United States: Columbia Pictures; 1980.
* [54] The Most Dangerous Game. [Film] Directed by: Irving Pichel, Ernest B. Schoedsack. United States: Merian C. Cooper and Ernest Schoedsack; 1932.
* [55] Dark Star. [Film] Directed by: John Carpenter. United States: Jack H. Harris Enterprises, University of Southern California (USC); 1974.
* [56] 2010: The Year We Make Contact. [Film] Directed by: Peter Hyams. United States: Metro-Goldwyn-Mayer (MGM); 1984.
* [57] Gravity. [Film] Directed by: Alfonso Cuarón. United Kingdom: Warner Bros. ; 2013.
* [58] The Black Hole. [Film] Directed by: Gary Nelson. United States: Walt Disney Productions; 1979.
* [59] Ad Astra. [Film] Directed by: James Gray. United States: New Regency Productions; 2019.
# A Novel Framework for Decentralized Dynamic Resource Allocation Using
Voronoi Tessellations
Bhagyashri Telsang and Seddik Djouadi
This paper was supported in part by the National Science Foundation under grant NSF-CMMI-2024111. B. Telsang and S. Djouadi are with the Department of Electrical Engineering and Computer Science, University of Tennessee Knoxville, USA.<EMAIL_ADDRESS>
###### Abstract
In this work, we approach the problem of resource allocation in a team of
agents through the framework of Centroidal Voronoi Tessellations (CVTs). CVTs provide
a natural way to embed a desired global trend in the team through probability
distributions, and in one-dimensional spaces, CVTs offer an inherent line
structure allowing for a simple communication graph and scalability. We first
consider the amount of resource to be allocated to be a constant and provide
an analytical solution to this static resource allocation problem by embedding
the allocation constraint within the distribution through a system of
nonlinear equations. Using the solution of such a constrained CVT minimization
problem as an initialization step, we propose a decentralized dynamic resource
allocation solution that employs a one-step update when the desired
distribution is Gaussian. We introduce a “civility model” for negotiations
between the agents to allow for flexibility in local preferences and to
maintain robustness against local disturbances. We demonstrate the
effectiveness of the proposed method by considering the application of demand-
response in smart grids through the problem of power allocation in a group of
building thermal loads.
## I INTRODUCTION
Oftentimes we see conflicting, paradoxical problems around us in the world.
There is too much and yet there is not enough: obesity and hunger coexisting,
overpopulation and population scarcity coexisting, floods and droughts
coexisting a few hundred miles apart, vacant houses with homeless people
outside of them. In each of these scenarios, there is a resource that is
abundant in one sector, be it a location or a group of people, but scarce in
another. It makes one wonder if we can alleviate the problem by allocating the
resources in the “right” manner.
Taking root in the field of Economics through [1] in the 1970s, the resource
allocation problem has broadened to the field of engineering in more recent
decades. Mathematically, the resource allocation problem can be framed as [2]:
$\min_{z_{i}\in\mathbb{R}^{n}}\frac{1}{N}\sum_{i\in I_{N}}f_{i}(z_{i})\quad\text{such that}\quad\sum_{i\in I_{N}}z_{i}=r$ (1)
In the resource allocation problem, an $r$ amount of resource is to be
allocated among $N$ agents while minimizing the sum of their individual cost
functions $\\{f_{i}\\}_{i\in I_{N}}$. Typically, in engineering problems, the
agents are local controllers tasked to maintain local interests while equipped
with capabilities to communicate with other agents.
Simultaneously seeming trivial and complex, the nature of (1) can be broken
down into the following aspects: the information structure in the group of
agents, the separability of the objective function, and the global constraint.
Due to the separability of the objective function, each agent can minimize the
(global) cost function without any dependance on other agents. However,
because of the global constraint imposed on the team, the team information
structure becomes a significant aspect.
Like most of the work on the resource allocation problem, the authors in [2]
assume the individual cost functions to be convex. In the case where the cost
functions are differentiable, they propose a gradient descent consensus
algorithm. And when the cost functions are not necessarily differentiable,
they present a sub-gradient based algorithm. While they let the team
information structure be dynamic, they impose reasonable mild conditions on
the team information structure like connectedness, and start at an initial
feasible condition.
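The gradient-based trading idea can be sketched in a few lines: neighbors exchange resource in proportion to the difference of their gradients, so each transfer is antisymmetric and the total allocation is preserved at every step. The quadratic costs, the line graph, and the step size below are illustrative choices, not taken from [2] or [3]:

```python
def trade_step(z, grads, edges, eta=0.1):
    """One trading round: along each edge (i, j), agent i receives
    eta * (f_j'(z_j) - f_i'(z_i)). Transfers are antisymmetric,
    so sum(z) is invariant."""
    z = list(z)
    for i, j in edges:
        t = eta * (grads[j] - grads[i])
        z[i] += t
        z[j] -= t
    return z

# Quadratic costs f_i(z) = (z - b_i)^2 with gradients 2(z - b_i);
# with sum(b) == r, the constrained optimum is z = b.
b = [1.0, 4.0, 7.0]
z = [4.0, 4.0, 4.0]           # feasible start: sums to r = 12
edges = [(0, 1), (1, 2)]      # line graph, as in the CVT setting
for _ in range(200):
    grads = [2 * (zi - bi) for zi, bi in zip(z, b)]
    z = trade_step(z, grads, edges)
# z approaches [1, 4, 7] while sum(z) stays 12 throughout
```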
While [2] proposes the gradient descent algorithm where the agents trade
resources in proportion to the gradient difference for their individual cost
functions, [3] takes up the allocation problem (1) to focus on choosing the
proportional weights (to the resource trading) to obtain sufficient conditions
for the convergence of the algorithm, and to further improve the rate of
convergence. [4] considers the dual of the resource allocation problem and
derives two methods using the alternating direction method of multipliers
(ADMM) algorithm. Also considering the dual problem, [5] includes
uncertainties in the individual cost functions and solves the problem using
sub-gradient methods on the distributed Lagrangian.
Mixing economics in the team, [6] considers a stochastic system in which
agents allocate shared system resources in response to customer requests that
arrive stochastically over time and introduces the notion of a transfer
contract to specify compensation among agents when resources are traded. Each
agent has a model of how resources are shared by others and it makes
allocation decisions by maximizing its utility function subject to such model.
However, it is worth noting that in most of the work on decentralized resource
allocation, the amount of resource to be allocated is fixed over the
iterations; the agents begin at a feasible solution and move along the
resource allocation constraint through the feasible solutions to only minimize
the cost function in (1).
In this work, we approach the resource allocation problem through Centroidal
Voronoi Tessellations (CVTs). Even though they date back centuries, Voronoi
tessellations (VTs) have been found to be immensely helpful in various
applications ranging from health to computer graphics to natural sciences. The
first documented application of Voronoi tessellations appeared in [7] on the
1854 cholera epidemic in London in which it is demonstrated that proximity to
a particular well was strongly correlated to deaths due to the disease [8]. In
more recent decades, VTs have almost become a common basis tool for path
planning algorithms by multi-robot systems in the field of coverage control
[9] to such an extent that the VT-based coverage control has been generalized
using optimal transport-based control [10]. An adaptive coverage controller is
proposed in [11] where the leader in the leader-follower strategy therein
distributes the followers within its obstacle-free sensing range, and the
optimized distribution is obtained through a CVT. In their study on optimality
of multi-robot coverage control, the authors in [12] draw a relationship
between CVT configurations and the sufficient condition for optimality through
the spatial derivative of the density.
Despite the wide range of applications of CVTs, to the best of our knowledge,
they have not been employed in the resource allocation problem. Our main
motivations for employing them in the context of resource allocation problem
are minimal communication requirement, robustness, flexibility, scalability
and generalizability offered by the CVT framework. To allocate one-dimensional
resources, a line graph for the communication network in the team information
structure is sufficient to obtain the global optima. We will delve deeper into
the advantages and our motivation for using CVT in the resource allocation
problem in Section II.
To demonstrate our solution to the resource allocation problem using the CVT
framework, we consider the application of demand-response in smart grids. We
are in the era of a volatile energy market with a looming energy crisis. While
the underlying occurrences like geopolitical transitions that cause such
crises are beyond local control, the effects are certainly received through
the spectrum. In such cases, what one can do locally, on smaller scales is to
better employ the resources available at hand. The field of demand-response in
smart grids aims to maintain robust operation during such times.
As many parts of the world are gradually moving towards competitive
transactive energy markets as a means to generate and procure electricity
alongside many of the support services required to operate a power system,
many countries are pushing the reform of the electricity power sector very
positively. For example, Chile pioneered in the 1980s the deregulation of the
electric power industry. In today’s U.S. retail electricity market, fourteen
states have already adequate retail competition with Texas, Illinois, and Ohio
respectively having 100$\%$, 60$\%$, and 50$\%$ of their residential customers
receiving service from electricity suppliers [13]. But, even today, most of
the customers have very limited “direct” participation in supporting the grid.
Through the developments in the transactive energy market, there have been
some interesting and innovative proposals. [14] proposes a data-driven method
to forecast the electricity demand for decentralized energy management. On the
consumption end, although mostly work in progress, peer-to-peer (P2P)
electricity trading is gaining momentum with time. Analogous to internet
servers and clients, P2P electricity trading is the platform where the end
consumer becomes a prosumer (functioning as both energy producer and consumer)
and exchanges the remaining electricity with other consumers in the power grid
[15]. A detailed review of existing P2P trading projects is carried out in
[16]. Another major proposal in this direction is load aggregation. As defined
in [17] “An aggregator is a new type of energy service provider which can
increase or moderate the electricity consumption of a group of consumers
according to the total electricity demand on the grid. An aggregator can also
operate on behalf of a group of consumers producing their own electricity by
selling the excess electricity they produce.” A detailed review of the value
of aggregators in the electricity market can be found in [18].
While such a centralized framework is beneficial in certain applications, the
cost and the risk of the associated communication overhead can be too high.
Such a center-heavy approach also makes the framework vulnerable to attacks
due to a single point of failure. The advantages of having communication
capabilities between agents reduces such associated risks. One then needs to
develop a framework to model the flow of information among the agents and
design control laws at global and local levels such that the global and local
objectives are achieved. We consider a certain power, generated or negotiated
between the aggregator and the utilities, as the resource to be allocated
among a team of agents that are building loads, HVACs to be specific. Such
thermal loads, due to their latency, inherently allow for further flexibility
in the CVT framework.
The paper organization is as follows. We begin with review of some definitions
and preliminaries, along with our motivation for employing CVTs to solve this
problem in Section II. Like most work on resource allocation problems, we
consider the static allocation problem where the amount of resource to be
allocated is fixed, and solve it using a system of non-linear equations in
Section III. We then move to varying the allocation amount and solve the
corresponding dynamic resource allocation problem in Section IV. In Section V
we demonstrate the developed decentralized dynamic resource allocation method
on a demand-response problem of power allocation in a group of building loads.
Finally, we draw conclusions in Section VI and present some lines of future
work.
## II Preliminaries
In this Section, we will first review some definitions and background of CVTs
in Section II-A, followed by a brief review on iterative and analytical
methods to compute CVTs in Section II-B. Then in Section II-C, we define some
notations on communication and resource graph employed in this paper.
### II-A Centroidal Voronoi Tessellations
Consider a region $\Omega\in\mathbb{R}^{n}$ with density $\rho(.)$. Denote a
team of $N\in\mathbb{N}$ agents indexed by the set $I_{N}=\\{1,2,\ldots,N\\}$.
* 1
Tessellation: $\\{V_{i}\\}_{i\in I_{N}}$ is a tessellation of $\Omega$ if
$V_{i}\cap V_{j}=\emptyset$ for $i\neq j$, and $\cup_{i\in
I}{V}_{i}={\Omega}$.
* 2
Voronoi region and generators: The Voronoi region ${V}_{z_{i}}$ of the Voronoi
generator $z_{i}$ is ${V}_{z_{i}}=\\{x\in\Omega:||x-z_{i}||<||x-z_{j}||,\
i\neq j\ \text{and}\ i,j\in I_{N}\\}$.
* 3
Voronoi tessellation: The set of Voronoi regions
$\textbf{V}_{\textbf{z}}=\\{V_{z_{i}}\\}_{i\in I_{N}}$ of $\\{z_{i}\\}_{i\in
I_{N}}$ is called a Voronoi tessellation
$\\{\textbf{z},\textbf{V}_{\textbf{z}}\\}$.
The mass centroid of a region $V_{i}\subset\Omega$ under the probability
density function $\rho(.)$ is defined as:
$z_{V_{i},\rho}^{c}=\frac{\int_{V_{i}}x\rho(x)dx}{\int_{V_{i}}\rho(x)dx}$ (2)
A Voronoi tessellation in which the generators are the mass centroids of their
respective Voronoi regions is called a Centroidal Voronoi Tessellation (CVT),
[19]. The CVTs obtained for 3 generators in the region $\Omega=[0,15]$ under
Uniform and Normal distributions – $\mathcal{U}(0,15)$ and
$\mathcal{N}(7.5,1)$ – are shown in Fig. 1. The generators under the Uniform
and Normal distributions over $\Omega$ are marked with star and square
symbols, respectively.
Figure 1: Centroidal Voronoi Tessellations of $[0,15]$ under Uniform and
Normal distributions, denoted in star and square symbols respectively.
Consider the functional $\mathcal{F}$ with any $N$ points $\\{z_{i}\\}_{i\in
I_{N}}\in\Omega$ and any tessellation $\\{V_{i}\\}_{i\in I_{N}}$ of $\Omega$
as its input arguments:
$\mathcal{F}((z_{i},V_{i}),i\in I_{N})=\sum_{i\in I_{N}}\int_{x\in
V_{i}}||x-z_{i}||^{2}\rho(x)dx$ (3)
Proposition $3.1$ in [19] states that a necessary condition for the function
$\mathcal{F}$ to be minimized is that $\\{V_{i}\\}_{i\in I_{N}}$ are the
Voronoi regions corresponding to $\\{z_{i}\\}_{i\in I_{N}}$, and
simultaneously, $\\{z_{i}\\}_{i\in I_{N}}$ are the centroids of their
respective Voronoi regions. In other words, the minimizer of $\mathcal{F}$ is
a Centroidal Voronoi Tessellation.
Additionally, if the tessellation in (3) is fixed to be the Voronoi
tessellation of $\\{z_{i}\\}_{i\in I_{N}}$, then the following functional
$\mathcal{K}$ has the same minimizer as $\mathcal{F}$, [19].
$\mathcal{K}((z_{i}),i\in I_{N})=\sum_{i\in I_{N}}\int_{x\in
V_{z_{i}}}||x-z_{i}||^{2}\rho(x)dx$ (4)
This functional $\mathcal{K}$ is also referred to as the energy of the
tessellation or the quantization energy.
Including the resource allocation constraint from (1) in the functional
$\mathcal{K}$, we obtain the following constrained CVT minimization problem:
$\displaystyle\min_{z_{i}}\sum_{i\in I_{N}}\int_{x\in
V_{i}}||x-z_{i}||^{2}\rho(x)dx$ s.t. $\displaystyle\sum_{i\in
I_{N}}z_{i}=r$ (5)
Comparing with the resource allocation problem (1) we see that the individual
objective functions from (5) are $\\{\int_{x\in
V_{i}}\rho(x)||x-z_{i}||^{2}dx\\}_{i\in I_{N}}$. While the objective function
in (5) is separable, a global distribution $\rho(.)$ governs all the
individual objective functions. This enables embedding of a desired aggregate
behavior in the team through such distributions; the desired aggregate
behavior can arise from modeling individual preferences or from an external
global trendsetting factor depending on the application at hand.
### II-B Computation of CVT
Given $\Omega,N$ and $\rho(.)$, there are various iterative algorithms to
compute a CVT in $\Omega$. For given $\Omega$, $N$ and $\rho(.)$, the CVT need
not be unique, in any dimension, unless certain conditions are imposed on the
density function. In 1-D regions, the CVT is unique for log-concave density
functions with finite second moment [20]. For higher dimensions, finding the
conditions on the uniqueness for the general case, without assumptions on the
region, density or the number of generators $N$, remains an open area of
research. However, it is proved in [21] that for $N=2$, there does not exist a
unique CVT for any density for dimensions greater than one.
Accordingly, the solutions rendered by the various algorithms that compute a
CVT need not be unique global minimizers and may only be local minima.
A popular deterministic algorithm for obtaining a CVT is Lloyd’s algorithm.
Introduced in [22] to find the optimal quantization in pulse-code modulation,
Lloyd’s algorithm has been modified or adapted in various fields. At the core
of it, Lloyd’s algorithm is an iteration between constructing Voronoi
tessellations and their centroids:
Given: $\Omega\subset\mathbb{R}^{n}$, $N$, $\rho(x)$
Initialize: Generators $\textbf{z}=\\{z_{i}\\}_{i\in I}$, where each
$z_{i}\in\Omega$
* 1
Construct the Voronoi tessellation $\textbf{V}_{\textbf{z}}$.
* 2
Compute the mass centroids $z^{c}_{\textbf{V}_{\textbf{z}},\rho}$ of
$\textbf{V}_{\textbf{z}}$.
* 3
If the computed centroids meet certain stopping criteria then terminate. If
not, then set $\textbf{z}=z^{c}_{\textbf{V}_{\textbf{z}},\rho}$, and return to
Step 1.
Even though Lloyd’s algorithm is iterative and approximate, it has certain
desirable convergence properties. Various global convergence properties of the
Lloyd’s algorithm are rigorously proved in [23]. Specifically for one-
dimensional spaces with log-concave density function, the local convergence
using the Lloyd’s algorithm has been proved in [24]. Depending on the
application at hand, various algorithms that have faster convergence than
Lloyd’s have been proposed, [25], [26], [27]. Taking a probabilistic
approach, MacQueen’s method [28] offers a Monte Carlo sampling based
computation of CVTs in any dimension. However, MacQueen’s method only yields
the centroids of the CVT and not their Voronoi partitions.
In one-dimensional spaces, we can obtain the entire tessellation analytically
using a System of Non-linear Equations (SNLE). The core idea is to
parameterize the Voronoi regions in terms of their centroids. In
$\Omega=[a,b]\subset\mathbb{R}$, without loss of generality, let the $N$
generators be $z_{1}<z_{2}<\ldots<z_{N}\in\Omega$. Following Section II-A, by
definition, the Voronoi regions are given as:
$\displaystyle V_{1}$ $\displaystyle=[a,\frac{z_{1}+z_{2}}{2}]$
$\displaystyle\vdots$ $\displaystyle V_{i}$
$\displaystyle=[\frac{z_{i-1}+z_{i}}{2},\frac{z_{i}+z_{i+1}}{2}]$
$\displaystyle\vdots$ $\displaystyle V_{N}$
$\displaystyle=[\frac{z_{N-1}+z_{N}}{2},b]$ (6)
Additionally, by the definition of CVT the Voronoi generators must be the mass
centroids (2). Rewriting the centroids in terms of the parameterized Voronoi
regions from (6), we have $\forall i\in I_{N}$:
$\displaystyle z_{i}^{c}=$
$\displaystyle\frac{\int_{\frac{z_{i-1}^{c}+z_{i}^{c}}{2}}^{\frac{z_{i}^{c}+z_{i+1}^{c}}{2}}x\rho(x)dx}{\int_{\frac{z_{i-1}^{c}+z_{i}^{c}}{2}}^{\frac{z_{i}^{c}+z_{i+1}^{c}}{2}}\rho(x)dx}$
(7)
where $z_{V_{i},\rho}^{c}$ from (2) is denoted as $z_{i}^{c}$ for ease of
notation, with the outermost integration limits fixed at $a$ and $b$ as in
(6). In (7), there are $N$ unknowns, $\\{z_{i}^{c}\\}_{i\in
I_{N}}$, and $N$ equations. Therefore, solving this system of nonlinear
equations will result in the centroids of the CVT with which the Voronoi
regions can be computed as in (6). This is the exact solution of the
functional $\mathcal{K}$ from (4).
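The system (7) can be handed directly to a standard root-finder. A minimal sketch under the boundary conventions of (6) (helper names and the uniform example are ours):

```python
import numpy as np
from scipy import integrate, optimize

def snle_residual(z, pdf, a, b):
    """Residual of system (7): each generator minus the mass centroid of
    its Voronoi region, with boundaries at midpoints of adjacent
    generators and the outermost limits fixed at a and b as in (6)."""
    edges = np.concatenate(([a], (z[:-1] + z[1:]) / 2, [b]))
    res = np.empty_like(z)
    for i in range(len(z)):
        mass, _ = integrate.quad(pdf, edges[i], edges[i + 1])
        moment, _ = integrate.quad(lambda x: x * pdf(x), edges[i], edges[i + 1])
        res[i] = z[i] - moment / mass
    return res

pdf = lambda x: 1 / 15                   # uniform density on [0, 15]
z0 = np.array([1.0, 7.5, 14.0])          # sorted initial guess
z_cvt = optimize.fsolve(snle_residual, z0, args=(pdf, 0.0, 15.0))
# → approximately [2.5, 7.5, 12.5]
```

Unlike Lloyd's iteration, this yields the centroids, and hence via the midpoints the entire tessellation, in a single solve.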
To employ CVTs to solve the resource allocation problem (1), we treat the
Voronoi generators as the agents’ resources. In the next Section, we introduce
the notations for resource and communication graph for the team which
highlights the advantage of employing one-dimensional CVTs in the resource
allocation problem.
### II-C Notations
Let $\textbf{z}=\\{z_{i}\\}_{i\in I_{N}}$ be the agents’ resources with each
$z_{i}\in\Omega\subset\mathbb{R}$. Let $\rho(.)$ denote a measure of
information or the probability density over $\Omega$.
Let $\mathcal{Z}$ denote the undirected resource graph and $\mathcal{C}$
denote the undirected communication network of all the agents $i\in I_{N}$.
Their vertex and edge sets are $\\{\textbf{z},\mathcal{E}_{Z}\\}$ and
$\\{I_{N},\mathcal{E}_{C}\\}$, respectively. Denote the set of neighbors of
agent $i\in I_{N}$ according to resource and communication graphs as
$\mathcal{N}_{\mathcal{Z}_{i}}$ and $\mathcal{N}_{\mathcal{C}_{i}}$,
respectively.
The set of resource neighbors $\mathcal{N}_{\mathcal{Z}_{i}}$ of each agent
$i\in I_{N}$ is given by [29]:
$\displaystyle\mathcal{N}_{\mathcal{Z}_{i}}=\\{j\in I_{N}:z_{j}<z_{i}\
\text{and}\ \nexists k\in I_{N}:z_{j}<z_{k}<z_{i}\\}\ \ \cup$ $\displaystyle\\{j\in
I_{N}:z_{i}<z_{j}\ \text{and}\ \nexists k\in I_{N}:z_{i}<z_{k}<z_{j}\\}$ (8) $\displaystyle\implies\
\ j\in\mathcal{N}_{\mathcal{Z}_{i}}\iff\\{z_{i},z_{j}\\}\in\mathcal{E}_{Z}\iff
i\in\mathcal{N}_{\mathcal{Z}_{j}}$
That is, the resource neighbors of agent $i$ are the agents holding the
resource values immediately below and above $z_{i}$.
Similarly, since $\mathcal{C}$ is undirected,
$\\{i,j\\}\in\mathcal{E}_{C}\iff i\in\mathcal{N}_{\mathcal{C}_{j}}$ and
$j\in\mathcal{N}_{\mathcal{C}_{i}}$, where $\mathcal{N}_{\mathcal{C}_{i}}$ is
the set of communication neighbors of the agent $i$.
Since the resources $z_{i}\in\mathbb{R}$, the resource network is always a
line graph: each agent can have at most $2$ resource neighbors. While the
communication network $\mathcal{C}$ can be as complex as a complete graph, we
set it to be the simplest connected graph in 1-D,
$\mathcal{E}_{C}=\mathcal{E}_{Z}$. That is, the agents are aware of the
resource positions of their neighbors only.
Informing an agent of the resource positions of its non-neighbor agents is
redundant, since that information is not used in iterative CVT computation
methods like Lloyd’s. Therefore, if each agent were to communicate only with its
resource neighbors, then all the agents would converge to the CVT through
Lloyd’s algorithm with minimal communication in a decentralized manner.
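Because the resources are scalars, each agent's resource neighbors can be found with a single sort; a small sketch (function name and example values are hypothetical):

```python
def resource_neighbors(z):
    """Resource graph in 1-D: each agent is adjacent to the agents
    holding the resource values immediately below and above its own,
    so the graph is a line and every agent has at most two neighbors."""
    order = sorted(range(len(z)), key=lambda i: z[i])
    nbrs = {i: set() for i in range(len(z))}
    for a, b in zip(order, order[1:]):
        nbrs[a].add(b)
        nbrs[b].add(a)
    return nbrs

# Agents 0..3 holding 4.0, 1.0, 3.0, 2.0 form the chain 1 - 3 - 2 - 0.
nbrs = resource_neighbors([4.0, 1.0, 3.0, 2.0])
# → {0: {2}, 1: {3}, 2: {0, 3}, 3: {1, 2}}
```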
In this Section, we looked into how CVTs provide a natural way of embedding a
desired distribution in the solution, and how that solution can be obtained in
a straightforward decentralized approach with minimal requirements on the team
information structure. We saw that one may obtain 1-D CVTs in a
decentralized manner using one of the simplest communication graphs: a line
graph that is also the same as the resource graph. In the next Section, we
take up the resource allocation problem (1) which is a constrained CVT
minimization problem, and solve it centrally using the analytical CVT
computation method SNLE.
## III Static resource allocation
The underlying idea in employing CVTs to solve the resource allocation problem
is quite straightforward: the CVT centroids are the resources allocated to the
agents, and accordingly, they must sum up to the available amount of resource
$r$. Comparing the resource allocation constrained CVT minimization problem
(5) with (1), we can observe that the individual agent cost functions are:
$f_{i}(z_{i})=\int_{x\in V_{i}}||x-z_{i}||^{2}\rho(x)dx$ (9)
The main similarity between the constrained CVT minimization problem (5) and
the resource allocation problem (1) is that the objective functions are
separable. That is, the objective functions are decomposed into individual
(agent) objective functions that are convex and differentiable. Additionally,
the integral equation (9) is known as the Fredholm integral equation of the
first kind, [30].
Given the separability of (5) along with the convexity and differentiability
of the agent cost functions, the asymptotic convergence properties developed
in [2] for the resource allocation problem apply to the resource allocation
constrained CVT minimization problem (5). The core idea of our solution to the
problem of resource allocation through CVTs is to embed the resource
allocation constraint within the objective function through the density
$\rho(.)$.
Suppose $\rho(.)$ is defined by $n_{\rho}$ parameters:
$v=(v_{1},v_{2},\ldots,v_{n_{\rho}})$. Let $v_{k}\in v$ for some $k\in
I_{n_{\rho}}$ be an unknown or “free” design parameter, and let all the other
parameters defining the density function be known and fixed. To highlight the
dependence of the density function on the free parameter $v_{k}$, denote the
density function as $\rho(x,v_{k})$, where $x$ is its argument.
The optimal solution of the unconstrained CVT minimization problem (4) is the
set of centroids of the Voronoi regions for every agent. Using the definition
of centroids (2) for $\\{z_{i}\\}_{i\in I_{N}}$ and embedding the resource
allocation constraint transforms the constrained CVT minimization problem into
the following system of nonlinear equations with $N+1$ unknowns –
$(z_{1}^{c},z_{2}^{c},\ldots,z_{N}^{c},v_{k})$:
$\displaystyle z_{i}^{c}=$
$\displaystyle\frac{\int_{\frac{z_{i-1}^{c}+z_{i}^{c}}{2}}^{\frac{z_{i}^{c}+z_{i+1}^{c}}{2}}x\rho(x,v_{k})dx}{\int_{\frac{z_{i-1}^{c}+z_{i}^{c}}{2}}^{\frac{z_{i}^{c}+z_{i+1}^{c}}{2}}\rho(x,v_{k})dx}\
\ \ \forall i\in I_{N}$ $\displaystyle\sum_{i=1}^{N}z_{i}^{c}=r$ (10)
The solution of this system of nonlinear equations,
$(z_{1}^{c},z_{2}^{c},\ldots,z_{N}^{c},v_{k})$, satisfies the following:
* •
$(z_{1}^{c},z_{2}^{c},\ldots,z_{N}^{c})$ are the $N$ centroids of the CVT in
$\Omega=[a,b]$ under the density function $\rho(x,v_{k})$.
* •
The centroids sum up to $r$, satisfying the resource allocation constraint in
(1).
The main solution of interest here is the solved design parameter $v_{k}$,
which is fed to the Lloyd’s algorithm in its initialization step. In that
case, all the agents can still maintain communication only with their resource
neighbors, and since they are all initialized with the same design parameter,
all the agents minimize the cost function (4) under the same specifications
and obtain the CVT.
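A possible numerical setup of the $(N+1)$-equation system (10), for the Gaussian density used in the simulations, is sketched below. The closed-form centroid of a truncated Gaussian avoids quadrature; the instance size and values are illustrative and not taken from the experiments reported here:

```python
import numpy as np
from scipy import optimize
from scipy.stats import norm

def gauss_centroid(l, u, mu, sigma):
    """Closed-form mass centroid of [l, u] under N(mu, sigma^2)."""
    mass = norm.cdf(u, mu, sigma) - norm.cdf(l, mu, sigma)
    return mu - sigma**2 * (norm.pdf(u, mu, sigma) - norm.pdf(l, mu, sigma)) / mass

def residual(unknowns, sigma, a, b, r):
    """System (10): N centroid conditions plus sum(z) = r, with the
    Gaussian mean mu playing the role of the free parameter v_k."""
    z, mu = unknowns[:-1], unknowns[-1]
    edges = np.concatenate(([a], (z[:-1] + z[1:]) / 2, [b]))
    res = [z[i] - gauss_centroid(edges[i], edges[i + 1], mu, sigma)
           for i in range(len(z))]
    return np.append(res, np.sum(z) - r)

# Illustrative instance: N = 5 agents on [0, 100], sigma = 4, r = 250.
N, sigma, r = 5, 4.0, 250.0
guess = np.append(np.linspace(40.0, 60.0, N), 50.0)   # [z_1, ..., z_N, mu]
sol = optimize.fsolve(residual, guess, args=(sigma, 0.0, 100.0, r))
z, mu = sol[:-1], sol[-1]
# The centroids sum to r, and by symmetry mu lands at r / N = 50.
```

The solved mean is then broadcast once, after which every agent can run Lloyd's algorithm with only its two resource neighbors.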
We now demonstrate the method with different simulation cases. In Fig. 2, the
region $\Omega=[0,100],\ N=50$, and the density is Gaussian. Out of the three
examples therein, the top two have the same variance but are required to
allocate different amounts of resources – $2500$ in the first and $1500$ in
the second – among the same number of agents. Accordingly, we can observe the
resources allocated among all the agents are lower in the second case than the
first. Moving from the second example to the third (the bottom graph in Fig.
2), the variance is increased while keeping all other parameters the same. In
all these three cases, the free design parameter $v_{k}$ is $\mu$ – the mean
of the Gaussian distribution. The solution of the free parameter obtained from
solving the $N+1$ equations from (10), is shown in the figures and is used to
initialize the Lloyd’s algorithm. The generators obtained from the Lloyd’s
algorithm and the generators from solving (10) are plotted together. We can
observe that the two solutions are very close to each other. Additionally,
both the solutions sum up to the resource to be allocated – $r$, with an
acceptable error.
Figure 2: Allocation of $r$ amount of resource among 50 agents in
$\Omega=[0,100]$ under Gaussian distribution for specified variances – $4$
(top and middle) and $8$ (bottom). The mean of the distribution $\mu$ is the
solution $v_{k}$ from (10).
Similarly, we present another set of simulations in Fig. 3. In the three cases
therein, $\Omega,N$ and $r$ are the same. The difference in the three cases is
the underlying distributions – Gamma distribution in the top figure,
Exponential in the middle, and Gaussian distribution in the bottom figure.
Like in Fig. 2, the solutions from the two approaches are close to each other
and also sum up to $r$.
Figure 3: Allocation of $r$ amount of resource among 50 agents in
$\Omega=[0,300]$ under three different distributions. Top: Gamma distribution
with the free parameter $v_{k}$ being $k$. Middle: Exponential distribution
with the free parameter $v_{k}$ being $\lambda$. Bottom: Gaussian distribution
with the free parameter $v_{k}$ being $\mu$.
Even though in this approach we obtain the solution of (5), the SNLE method is
centralized and its size grows with $N$. However, it is worth pointing out
that once the Lloyd’s algorithm is initialized with the design parameter
$v_{k}$, the CVT obtained using Lloyd’s algorithm is scalable to any $N$
because, regardless of the total number of agents $N$, each agent can have at
most two neighbors.
Like most solutions to the resource allocation problem, in this Section we
considered a fixed amount of resource to be allocated among all the agents.
Using the developed static allocation method as the initialization step, in
the next Section we consider the dynamic resource allocation problem, where
the amount of resource to be allocated is time-varying and all the agents are
aware of the quantity.
## IV Dynamic Resource Allocation
In the previous section, we solved the static resource allocation problem by
using a centralized system of nonlinear equations. However, extending the
same approach to a time-varying amount of resource to be allocated would
remain centralized. Therefore, in this Section we focus on developing a
decentralized approach to the dynamic resource allocation problem.
Employing the static resource allocation solution as the initialization step,
our solution approach to the dynamic resource allocation problem under a
Normal distribution involves a one-step update that maintains the dynamic
resource allocation constraint while preserving the CVT. We employ the
following Lemma 1 to obtain such a one-step update in Theorem 1. Throughout
the design process, we assume that the amount of resource to be allocated
among all the agents over the considered time duration is known to all the
agents.
Suppose $\rho(.)=\mathcal{N}(\mu,\sigma^{2})$ and the “free” parameter $v_{k}$
is $\mu$. Then we have:
###### Lemma 1
Suppose at time $k$, $\\{z_{i}(k)\\}_{i\in I_{N}}$ are the centroids of the
CVT in $\Omega\subset\mathbb{R}$ with density
$\rho(.)=\mathcal{N}(\mu(k),\sigma^{2})$. Then the following relationship
holds between the time-updated centroids:
$\displaystyle z_{i}(k+1)-z_{i}(k)$ $\displaystyle=z_{j}(k+1)-z_{j}(k)$
$\displaystyle=\mu(k+1)-\mu(k)=-\delta$ (11)
Proof: Let $\mu(k+1)=\mu(k)-\delta$. Because $\\{z_{i}(k)\\}_{i\in I_{N}}$ are
the centroids under the normal density, we have by definition:
$\displaystyle
z_{i}(k)=\frac{\int_{V_{i}(k)}xe^{-\frac{(x-\mu(k))^{2}}{2\sigma^{2}}}dx}{\int_{V_{i}(k)}e^{-\frac{(x-\mu(k))^{2}}{2\sigma^{2}}}dx}$
Similarly, writing out the mass centroid for the next time instant $k+1$ using
$\mu(k+1)=\mu(k)-\delta$, we have:
$z_{i}(k+1)=\frac{\int_{V_{i}(k+1)}xe^{-\frac{(x-(\mu(k)-\delta))^{2}}{2\sigma^{2}}}dx}{\int_{V_{i}(k+1)}e^{-\frac{(x-(\mu(k)-\delta))^{2}}{2\sigma^{2}}}dx}$
(12)
Suppose $V_{i}(k)=[a,b]\subset\Omega$. Since every generator shifts by
$-\delta$ and the 1-D Voronoi boundaries are midpoints of adjacent generators,
the Voronoi regions shift by $-\delta$ as well, so that
$V_{i}(k+1)=[a-\delta,b-\delta]$. Consider the change of variables
$y=x-\delta$. Then the mass centroids transform as:
$\displaystyle z_{i}(k)$
$\displaystyle=\frac{\int_{a}^{b}xe^{-\frac{(x-\mu(k))^{2}}{2\sigma^{2}}}dx}{\int_{a}^{b}e^{-\frac{(x-\mu(k))^{2}}{2\sigma^{2}}}dx}$
$\displaystyle=\frac{\int_{a-\delta}^{b-\delta}(y+\delta)e^{-\frac{(y+\delta-\mu(k))^{2}}{2\sigma^{2}}}dy}{\int_{a-\delta}^{b-\delta}e^{-\frac{(y+\delta-\mu(k))^{2}}{2\sigma^{2}}}dy}$
$\displaystyle=\frac{\int_{a-\delta}^{b-\delta}(y+\delta)e^{-\frac{(y-(\mu(k)-\delta))^{2}}{2\sigma^{2}}}dy}{\int_{a-\delta}^{b-\delta}e^{-\frac{(y-(\mu(k)-\delta))^{2}}{2\sigma^{2}}}dy}$
$\displaystyle=\frac{\int_{a-\delta}^{b-\delta}ye^{-\frac{(y-(\mu(k)-\delta))^{2}}{2\sigma^{2}}}dy+\int_{a-\delta}^{b-\delta}\delta
e^{-\frac{(y-(\mu(k)-\delta))^{2}}{2\sigma^{2}}}dy}{\int_{a-\delta}^{b-\delta}e^{-\frac{(y-(\mu(k)-\delta))^{2}}{2\sigma^{2}}}dy}$
$\displaystyle=\frac{\int_{V_{i}(k+1)}ye^{-\frac{(y-\mu(k+1))^{2}}{2\sigma^{2}}}dy}{\int_{V_{i}(k+1)}e^{-\frac{(y-\mu(k+1))^{2}}{2\sigma^{2}}}dy}+\delta\frac{\int_{V_{i}(k+1)}e^{-\frac{(y-\mu(k+1))^{2}}{2\sigma^{2}}}dy}{\int_{V_{i}(k+1)}e^{-\frac{(y-\mu(k+1))^{2}}{2\sigma^{2}}}dy}$
$\displaystyle=z_{i}(k+1)+\delta$ $\displaystyle\implies$ $\displaystyle
z_{i}(k+1)-z_{i}(k)=-\delta$ (13)
Since (13) holds for all $i\in I_{N}$ and $\mu(k+1)=\mu(k)-\delta$, (11) is
proved.
$\hfill\square$
###### Theorem 1
Suppose we are initialized with the static resource allocation solution at
discrete time $k$, so that the following conditions hold: $\\{z_{i}(k)\\}_{i\in
I_{N}}\ \text{s.t.}\ \sum_{i\in I_{N}}z_{i}(k)=r(k),\ \ \\{z_{i}(k)\\}_{i\in
I_{N}}\sim\mathcal{N}(\mu(k),\sigma^{2})$. Suppose the resource to be
allocated at the next time instant is $r(k+1)$. If agents update their
resources as
$z_{i}(k+1)=z_{i}(k)+\frac{1}{N}(r(k+1)-r(k))$ (14)
then the resulting solution satisfies the following:
* 1
$\sum_{i\in I_{N}}z_{i}(k+1)=r(k+1)$
* 2
$\\{z_{i}(k+1)\\}_{i\in I_{N}}\sim\mathcal{N}(\mu(k+1),\sigma^{2})$
where $\mu(k+1)$ is a solution of the $(N+1)$ SNLE (10).
Proof: Take the time difference of the sum of the resources, using the update
(14):
$\displaystyle\sum_{i\in I_{N}}z_{i}(k+1)-\sum_{i\in I_{N}}z_{i}(k)$
$\displaystyle=\sum_{i\in I_{N}}\left(z_{i}(k+1)-z_{i}(k)\right)$ $\displaystyle=-N\delta$
$\displaystyle=N(\mu(k+1)-\mu(k))$ $\displaystyle=r(k+1)-r(k)$ (15)
which proves the first claim. Moreover, with
$\delta=-\frac{1}{N}(r(k+1)-r(k))$, the update (14) shifts every resource by
$-\delta$, so by Lemma 1 the updated resources are the CVT centroids under
$\mathcal{N}(\mu(k+1),\sigma^{2})$ with $\mu(k+1)=\mu(k)-\delta$, proving the
second claim.
$\hfill\square$
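Lemma 1 and the update (14) can also be verified numerically. The sketch below is our own construction: it takes $\Omega=\mathbb{R}$ so that boundary effects vanish (a good approximation when the Gaussian mass lies well inside $\Omega$, as in the simulations), computes the CVT for two means that differ by $\delta$, and confirms that every centroid, and hence the update (14), shifts by exactly $-\delta$:

```python
import numpy as np
from scipy.stats import norm

def gauss_centroid(l, u, mu, sigma):
    # Closed-form centroid of [l, u] under N(mu, sigma^2); since
    # norm.pdf(+-inf) = 0, infinite endpoints need no special casing.
    mass = norm.cdf(u, mu, sigma) - norm.cdf(l, mu, sigma)
    return mu - sigma**2 * (norm.pdf(u, mu, sigma) - norm.pdf(l, mu, sigma)) / mass

def lloyd_gauss(z, mu, sigma, iters=2000):
    """Fixed number of Lloyd iterations under N(mu, sigma^2) on the line."""
    z = np.sort(np.asarray(z, dtype=float))
    for _ in range(iters):
        edges = np.concatenate(([-np.inf], (z[:-1] + z[1:]) / 2, [np.inf]))
        z = np.array([gauss_centroid(edges[i], edges[i + 1], mu, sigma)
                      for i in range(len(z))])
    return z

N, sigma, delta = 5, 1.0, 0.3
guess = np.linspace(6.0, 9.0, N)
z_k = lloyd_gauss(guess, mu=7.5, sigma=sigma)            # CVT at time k
z_k1 = lloyd_gauss(guess, mu=7.5 - delta, sigma=sigma)   # CVT at time k+1

# Update (14) with r(k+1) - r(k) = -N * delta reproduces the new CVT.
r_k, r_k1 = z_k.sum(), z_k.sum() - N * delta
updated = z_k + (r_k1 - r_k) / N                         # equals z_k1
```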
Following Theorem 1 we obtain the CVT that satisfies the dynamic resource
allocation constraint for the desired Normal distribution in a decentralized
manner. While this fulfills our objective, it can be observed that the
approach is quite rigid. In practical applications where the agents have their
own set of dynamics and are trying to navigate around certain local objectives
as well, this approach can be restrictive. Therefore, to extend its
applicability we introduce flexibility in the design by allowing for (local)
negotiations between neighbors through what we call a “civility model”.
Before detailing the civility model, let us introduce some new notations. For
each agent $i\in I_{N}$, denote its desired resource amount at time $k$ that
meets its local objective as $u_{i}(k)$. For example, if the agent $i$ is
responsible for the control of a certain system modeled as a state-space, such
$u_{i}(k)$ could be the control input from a state-feedback controller, from
an LQR or from any such local controller. Since we are operating in 1-D
spaces, recall from Section II-C that the resource and communication graphs
are the same. Following the same notation therein, denote the communication
graph at time $k$ as $\mathcal{C}^{k}$, and the neighbors of agent $i$ at time
$k$ as $\mathcal{N}_{\mathcal{C}^{k}_{i}}$.
Initialization: All agents are aware of the total resources $r(k),\forall k\in
T$ and the initial communication network $\mathcal{C}^{k-1}$. Solve the static
allocation problem for the resource $r(k-1)$.
Following the initialization, the civility model for local negotiations is
developed as follows.
Civility model for local negotiations
For every agent $i\in I_{N}$, at every time $k\in T$, do:
* 1
Compute the resource update $z_{i}(k)$ from (14). Compute $u_{i}(k)$ based on
the local requirements, possibly from the local controller.
* 2
Compute the neighbor of interest as
$\hat{j}=\arg\min_{j\in\mathcal{N}_{\mathcal{C}^{k}_{i}}\cup\\{i\\}}||u_{i}(k)-z_{j}(k)||$.
* 3
Swap resources with the neighbor of interest $\hat{j}$ from the previous step,
if $\hat{j}$ indicates it has not already been taken. This results in
$z_{i}(k)=z_{\hat{j}}(k)$. If $\hat{j}$ has already negotiated with its other
neighbor and is hence taken, or if $\hat{j}=i$, then implement the resource
update $z_{i}(k)$ from Step 1.
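One round of the civility model can be sketched as follows. The description above leaves the processing order and simultaneity of negotiations unspecified, so this sketch (names and example values are ours) processes agents in index order and marks swapped agents as taken for the remainder of the round:

```python
def civility_round(z, u, neighbors):
    """One negotiation round: each agent may swap resources with the
    neighbor (or itself) closest to its locally desired amount u[i],
    provided that neighbor has not already swapped this round."""
    z = list(z)
    taken = set()
    for i in range(len(z)):
        if i in taken:
            continue
        # Step 2: neighbor of interest, including the agent itself.
        j_hat = min(neighbors[i] | {i}, key=lambda j: abs(u[i] - z[j]))
        # Step 3: swap if the neighbor of interest is free.
        if j_hat != i and j_hat not in taken:
            z[i], z[j_hat] = z[j_hat], z[i]
            taken.update({i, j_hat})
    return z

# Line graph 0 - 1 - 2; agent 0 prefers the amount currently held by 1.
z = [1.0, 2.0, 3.0]
u = [2.0, 1.0, 3.0]                      # hypothetical local desires
nbrs = {0: {1}, 1: {0, 2}, 2: {1}}
z_new = civility_round(z, u, nbrs)
# → [2.0, 1.0, 3.0]; swapping preserves the total resource
```

Because swaps merely permute the resources, the allocation constraint from Theorem 1 is untouched by the negotiations.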
It is worth noting that the communication network is dynamically updated in a
decentralized manner, and that such an update naturally follows from the
resource swap during the local negotiations. We call this approach the
civility model because if a neighbor asks to swap, the agent complies with it
regardless of its own local requirement. And hence, since all the agents
follow the same model, no agent is at a disadvantage in following such an
approach.
To demonstrate the effectiveness of the proposed method for dynamically
allocating resources in a decentralized manner, we consider the application
application of demand-response in smart grids. Specifically, we consider a
group of Heating, Ventilation, and Air Conditioning (HVAC) units that have
their local objectives to maintain their indoor air temperatures according to
certain desired setpoints, but are also required to respond to certain demand
(power) curve by consuming the available power as a team of agents.
## V Application to Demand Response
To demonstrate the developed method, we consider power allocation in a group
of building HVACs. In this application of demand-response, the agents are the
building HVACs. The resources to be allocated to all the agents are the powers
consumed by the HVACs to maintain the local indoor air temperatures. We adapt
the state-space model from [31] to simulate the indoor air temperatures for
each agent $i$ as:
$\displaystyle\dot{x}_{i}(t)=A_{i}x_{i}(t)+B_{i}u_{i}(t)+G_{i}w_{i}(t)$
$\displaystyle y_{i}(t)=C_{i}x_{i}(t)+D_{i}u_{i}(t)$ (16)
The input $u_{i}$ is the power consumption of the HVAC (agent $i$), the output
$y_{i}$ is the indoor air temperature, and $w_{i}$ is the vector of
disturbances – outdoor air temperature and solar radiation. The system
matrices for each agent are given by:
$A_{i}=\begin{bmatrix}\frac{-(K_{1}^{i}+K_{2}^{i}+K_{3}^{i}+K_{5}^{i})}{C_{1}^{i}}&\frac{(K_{1}^{i}+K_{2}^{i})}{C_{1}^{i}}&\frac{K_{5}^{i}}{C_{1}^{i}}\\\
\frac{K_{1}^{i}+K_{2}^{i}}{C_{2}^{i}}&\frac{-(K_{1}^{i}+K_{2}^{i})}{C_{2}^{i}}&0\\\
\frac{K_{1}^{i}}{C_{3}^{i}}&0&\frac{-(K_{4}^{i}+K_{5}^{i})}{C_{3}^{i}}\end{bmatrix}$
$\displaystyle B_{i}=\begin{bmatrix}\frac{1}{C_{1}^{i}}+\frac{1}{C_{2}^{i}}\\\
0\\\
0\end{bmatrix}G_{i}=\begin{bmatrix}\frac{K_{3}^{i}}{C_{1}^{i}}&\frac{1}{C_{1}^{i}}\\\
0&\frac{1}{C_{2}^{i}}\\\
\frac{K_{4}^{i}}{C_{3}^{i}}&0\end{bmatrix}C_{i}=\begin{bmatrix}1&0&0\end{bmatrix}$
with $D_{i}$ being a zero matrix. The system parameters, which are resistances
and capacitances in the thermal dynamics of the building model, for each agent
$i$ are obtained as realizations of the following normal distributions:
$\displaystyle K_{1}\sim\mathcal{N}(16.48,0.1)\qquad
K_{5}\sim\mathcal{N}(23.04,0.1)$ $\displaystyle
K_{2}\sim\mathcal{N}(108.5,0.1)\qquad
C_{1}\sim\mathcal{N}(9.36\times 10^{5},1)$ $\displaystyle
K_{3}\sim\mathcal{N}(5,0.1)\qquad
C_{2}\sim\mathcal{N}(2.97\times 10^{6},1)$ $\displaystyle
K_{4}\sim\mathcal{N}(30.5,0.1)\qquad
C_{3}\sim\mathcal{N}(6.695\times 10^{5},1)$
We implement the agent’s model by discretizing the state-space model (16) with
a sampling time of 10 minutes. In the HVAC model, the input $u_{i}$
corresponds to cooling when negative and to heating when positive. Regardless,
its absolute value is the power consumed, and therefore we use that for local
negotiations and let the individual agent decide whether to use the allocated
power for heating or cooling based on its local control. To maintain the
indoor air temperatures, we employ for every agent a state-feedback
controller designed via pole placement to determine its $u_{i}(k)$. We
consider the same disturbances for all the agents; the outdoor air temperature
and the solar radiation [32] used in our simulations are shown in Fig. 4.
Figure 4: Disturbances in the HVAC model (16)
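A minimal sketch of the discretization and an open-loop step of (16); the parameter draw uses the means of the distributions above, and the constant power input, zero initial state, and omission of the disturbance channel $G_{i}$ are our simplifications:

```python
import numpy as np
from scipy.signal import cont2discrete

# One illustrative parameter draw: the means of the listed distributions.
K1, K2, K3, K4, K5 = 16.48, 108.5, 5.0, 30.5, 23.04
C1, C2, C3 = 9.36e5, 2.97e6, 6.695e5

A = np.array([[-(K1 + K2 + K3 + K5) / C1, (K1 + K2) / C1, K5 / C1],
              [(K1 + K2) / C2, -(K1 + K2) / C2, 0.0],
              [K1 / C3, 0.0, -(K4 + K5) / C3]])
B = np.array([[1 / C1 + 1 / C2], [0.0], [0.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.zeros((1, 1))

# Zero-order-hold discretization at the 10-minute sampling time.
Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), dt=600.0, method='zoh')

# The thermal RC network is stable, so the discrete state matrix is Schur.
rho_Ad = np.max(np.abs(np.linalg.eigvals(Ad)))

# A few open-loop steps under a constant (hypothetical) 1 kW heating input.
x = np.zeros(3)
for _ in range(6):          # one simulated hour
    x = Ad @ x + Bd[:, 0] * 1000.0
y = Cd @ x                  # indoor air temperature state (deviation)
```

In the simulations, the local input $u_{i}(k)$ would come from the pole-placement state feedback rather than a constant.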
To begin the dynamic resource allocation we initialize $\rho(.)$ as
$\mathcal{N}(\mu,\sigma^{2})$, and following Section III, solve the first-time
allocation (initialization) as a static allocation problem. Communicating to
all the agents the resulting mean $\mu$, we begin the decentralized dynamic
allocation as laid out in Section IV.
Even though the performance of the developed approach depends on the total
available resource and the local requirements, the civility model allows for
flexibility, and that could be necessary to compensate for local disturbances
or for improper selection of the (desired) distribution in the tessellation.
To explain the graphical setup of our results, we begin with $N=5$ in Fig. 5.
The top figure shows the power consumption of every agent at each time
instant, and the bottom figure shows their total power consumption versus the
available power. Complementing this, Fig. 6 shows the individual indoor air
temperatures when the agents implement the allocated power from Fig. 5.
Figure 5: Baseline power consumptions: Agents acting based on the resource
allocation constraint without the civility model. Left: Individual power
consumption. Right: Total power consumption. Figure 6: Baseline indoor air
temperatures: Agents acting based on the resource allocation constraint
without the civility model.
Next, we demonstrate the civility model from Section IV by allowing for
swapping through local negotiations. Continuing the previous case we first
consider only $5$ agents in the team in Fig. 7 and then demonstrate for $15$
agents. For every agent, the power consumptions and the indoor air
temperatures are shown in the same color throughout the simulation duration.
For example, agent $2$ is shown in red. Thus one can follow the agents’
negotiations and the resulting swaps and communication network by following
the individual power consumption of the agents through their colors. In the
subsequent cases, we do not show the satisfaction of the resource allocation
constraint in a dedicated figure, since we can express it concisely as the
error between the total power consumption of all the agents and the available
power; we use the $l_{2}$ norm to compute this error.
Figure 7: Civility model with local state feedback controller for $5$ agents.
Figure 8: Civility model with local state feedback controller for $15$ agents.
The temperature setpoints for all the HVACs are at $72\degree F$.
The strengths of the developed method lie in its robustness in maintaining the
resource allocation constraint while accounting for local preferences in a
truly decentralized manner. To demonstrate this, we perturb the setpoints
of certain agents and observe the corresponding resource negotiations and the
air temperatures in Fig. 9. We can observe an increased amount of negotiation:
swaps spread throughout the team to correct for the disturbances affecting
some of the agents. Quantifying the swaps, out of the $144$ time-steps in the
simulation, each agent swapped $126.9$ times on average, and every agent has
been a neighbor of almost every other agent. This suggests a high degree of
variation in the communication network. It further suggests that the
information is fragmented among the agents to such a degree that it suffices
to meet the resource allocation constraint while following the desired
distribution in the tessellation, yet is not enough for any agent to
reconstruct the behavior of any other agent.
Figure 9: Civility model with local state feedback controllers for $15$
agents under disturbed setpoints.
The decentralized dynamic resource allocation solution proposed in this work
follows the idea of “Global trendsetting, local negotiations”. Here, the
global trend is for the agents’ resources to be Gaussian distributed while
summing up to the available power, and the local negotiations happen to
maintain the balance between following such global trend and accounting for
the local requirements simultaneously.
## VI Conclusions
CVTs in one-dimensional spaces are desirable due to their inherent line
structure and ease of computation of the entire tessellations, allowing for
verification of the quality of the solution. Employing them in the resource
allocation problem brings forth additional advantages, such as embedding
desired global trends in the team through probability distributions.
For a fixed amount of resource, the static resource allocation method offers
an analytical, albeit centralized, solution by posing the constrained CVT
minimization problem as a system of non-linear equations. The generalizability
of the developed static allocation framework is worth remarking on. Instead of
the summation constraint, one can have any constraint from
$\mathbb{R}^{N}\to\mathbb{R}$, and one can also have as many constraints as
the number of parameters defining the desired distribution.
The developed decentralized dynamic allocation solution using CVTs provides a
natural way to embed the aggregate team behavior or to set the desired global
trend through the distribution of the tessellations. The developed method
through the civility model allows for flexibility on the local end by
absorbing and distributing disturbances throughout the team. We observed a
hint of inherent privacy in the architecture through the highly dynamic
communication network, which nevertheless remained a simple line graph at all
times, demonstrating the scalability of the method.
Building on this work, we aim to generalize the developed decentralized
resource allocation method to global trends described by distributions other
than the Gaussian. We also aim to verify the robustness of the architecture to
a changing number of agents in the team, possibly due to communication
failures.
# HUPD-2213 Determination of Majorana type-phases from the time evolution of
lepton numbers
Nicholas J. Benoit1, Yuta Kawamura2, Saki Kawano3, Takuya Morozumi4,5, Yusuke
Shimizu4,5, Kei Yamamoto6
1Graduate School of Science, Hiroshima University, Higashi-Hiroshima 739-8526,
Japan 2Yokkaichi city, Mie, Japan 3Tokyo prefecture, Japan 4Physics Program,
Graduate School of Advanced Science and Engineering,
Hiroshima University, Higashi-Hiroshima 739-8526, Japan 5Core of Research for
the Energetic Universe, Hiroshima University,
Higashi-Hiroshima 739-8526, Japan 6Department of Global Environment Studies,
Hiroshima Institute of Technology,
Saeki-ku, Hiroshima 731-5193, Japan
###### Abstract
We have investigated an approach to determine the Majorana type-phases using
the time evolution of lepton family numbers. The Majorana type-phases are
related to the orientation of unitarity triangles for the Pontecorvo-Maki-
Nakagawa-Sakata (PMNS) matrix, and the Majorana phases $\alpha_{21}$ and
$\alpha_{31}$. After taking the second-order time derivative of the lepton
family number expectation values, the dependencies on the summation of
Majorana type-phases can be determined, allowing for the extraction of the
orientation of the unitarity triangles and the Majorana phases. We study
how to extract the Majorana type-phases and the lightest neutrino mass for
three massive neutrinos, and when a neutrino is massless, i.e., $m_{1,3}=0$.
Our result can be complementary to using neutrinoless double-beta decay for
determining the orientation of PMNS unitarity triangles and the Majorana
phases.
## 1 Introduction
Despite phenomenal experimental programs in neutrino physics, open questions
remain. We know the three active neutrinos of the Standard Model come in three
flavors (families): the electron neutrino, the muon neutrino, and the tauon
neutrino[1]. Those flavors are assigned based on each neutrino’s weak
charged-current lepton partner. We also know that neutrinos can oscillate between
flavors over macroscopic distances[2, 3]. To describe the oscillations the
three neutrino flavors are treated as linear combinations of massive
eigenstates, then to explain experimental data at least two of the eigenstates
must have small, but non-zero, masses[4, 5]. This is in contrast to the
Standard Model, in which the three neutrinos are massless particles. Because
neutrino flavor oscillation experiments can only constrain the mass squared
differences of the neutrinos $\Delta m^{2}_{ij}=m^{2}_{i}-m^{2}_{j}$, details
of neutrino masses remain unknown. For example, neutrinos are electrically neutral
particles with non-zero mass, which means they could be their own
antiparticles. Under that setup neutrinos would be called Majorana particles
and have a Majorana mass[6, 7, 8]. A Dirac particle type and mass are also
possible for neutrinos; this would follow the usual Standard Model lepton
content. Neither particle type has been experimentally established for
neutrinos despite years of searching for Majorana neutrinos. Perhaps the most
famous experiments are searching for neutrinoless double-beta decay[9, 10, 11,
12, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], but there are also neutrinoless
quadruple-beta decay[24], neutrinoless double-electron capture[25, 26, 27, 28,
29, 30], proposed processes mediated by the exchange of a pair of virtual
neutrinos akin to a Casimir force[31, 32, 33], coherent scattering of
neutrinos on nuclei with bremsstrahlung[34], and proposed searches of quantum
statistics for final-state decay processes involving neutrinos[35, 36, 37, 38].
We explore the situation that neutrinos are Majorana particles. In section 2,
we summarize how the three neutrino flavors are constructed as linear
combinations of the massive eigenstates via a mixing matrix; and we introduce
how the mixing matrix can be formed with Majorana type-phases and unitarity
triangles. Then, we connect our previous work on the time evolution of lepton
family numbers to the Majorana type-phases in section 3. We explain how our
previous work can be used to determine the Majorana type-phases and the
lightest neutrino mass in section 4. In the last section, section 5, we
consider the lightest neutrino to be massless and illustrate how this enhances
the predictive power of our work. Finally, we provide some concluding
thoughts.
## 2 Majorana type-phases and unitarity triangles
Neutrino masses are a pillar of physics beyond the Standard Model, established
with the discovery of neutrino flavor oscillations [39, 40]. The
results of neutrino oscillation experiments are mathematically described by a
mixing of basis states via the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix
[41, 42, 43]
$\nu_{L\alpha}=U_{\alpha i}\nu_{Li}=\begin{pmatrix}U_{e1}&U_{e2}&U_{e3}\\\
U_{\mu 1}&U_{\mu 2}&U_{\mu 3}\\\ U_{\tau 1}&U_{\tau 2}&U_{\tau
3}\end{pmatrix}\begin{pmatrix}\nu_{L1}\\\ \nu_{L2}\\\ \nu_{L3}\end{pmatrix},$
(1)
where $\alpha=e,\mu,\tau$ and $i=1,2,3$. The PMNS matrix $U_{\alpha i}$ can be
formed for neutrinos with a Majorana mass through diagonalization of the
Majorana mass matrix,
$(U)_{\alpha i}(m_{\nu})_{\alpha\beta}(U)_{\beta j}=(m_{\nu})_{i}\delta_{ij},$
(2)
using a Takagi factorization [44]. Then the leptonic weak charged current
interaction is written as,
$\mathcal{L}^{(CC)}_{I}=\frac{g}{\sqrt{2}}\overline{l_{L\alpha}}\gamma_{\mu}U_{\alpha
i}\nu_{Li}W^{\mu-}+\text{h.c.}\,.$ (3)
The charged leptons are free to be re-phased, because their mass term is
invariant under the phase transformations of $l_{\alpha}\rightarrow
l^{{}^{\prime}}_{\alpha}=\exp(i\phi_{\alpha})l_{\alpha}$. However, the neutrino
Majorana mass term is not invariant under the phase transformations. This
means the standard parametrization of the PMNS matrix [45] must be extended to
include two independent CP violating phases. Thus, the unitary $3\times 3$
PMNS matrix depends on three mixing angles and three CP violating phases.
$U_{\alpha
i}=U^{D}\operatorname{diag}\left(1,\,e^{i\frac{\alpha_{21}}{2}},\,e^{i\frac{\alpha_{31}}{2}}\right),$
(4)
where the Dirac portion $U^{D}$ is of the form,
$U^{D}=\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\\
-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta}&s_{23}c_{13}\\\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta}&c_{23}c_{13}\end{pmatrix}.$
(5)
We use the usual notation of $c_{ij}=\cos\theta_{ij}$ and
$s_{ij}=\sin\theta_{ij}$ from PDG[45]. The CP violating phases in the diagonal
matrix, $\alpha_{21}$ and $\alpha_{31}$, are the Majorana phases, whereas the
CP violating phase of $U^{D}$ is the Dirac phase $\delta$. The CP violating
phases have physical meaning only if none of the mixing angles is $0$ or
$\pi/2$. This can be seen by constructing the invariants [46, 47, 48, 49] of
the PMNS matrix.
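As a sanity check on the parametrization, Eq.(4)-(5) can be coded directly and tested for unitarity. A minimal Python sketch; the angle and phase values are arbitrary test inputs of ours, not fit results:

```python
import cmath
import math

def pmns(th12, th13, th23, delta, a21, a31):
    """PMNS matrix of Eq.(4): U = U^D . diag(1, e^{i a21/2}, e^{i a31/2})."""
    s12, c12 = math.sin(th12), math.cos(th12)
    s13, c13 = math.sin(th13), math.cos(th13)
    s23, c23 = math.sin(th23), math.cos(th23)
    e = cmath.exp(1j * delta)
    # Dirac part U^D of Eq.(5).
    UD = [[c12 * c13, s12 * c13, s13 * e.conjugate()],
          [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
          [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]]
    ph = [1.0, cmath.exp(1j * a21 / 2), cmath.exp(1j * a31 / 2)]
    return [[UD[a][i] * ph[i] for i in range(3)] for a in range(3)]

# Unitarity check: (U U^dagger)_{ab} = delta_{ab}.
U = pmns(0.58, 0.15, 0.86, 3.4, 0.7, 1.9)
for a in range(3):
    for b in range(3):
        s = sum(U[a][i] * U[b][i].conjugate() for i in range(3))
        assert abs(s - (1.0 if a == b else 0.0)) < 1e-12
```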
We consider an interesting aspect of Majorana neutrinos and the rephasing
invariant bilinears [48] that Branco and Rebelo name “Majorana type-
phases”[50]. The Majorana type-phases are defined by Branco and Rebelo to be
the argument of the bilinears $U_{\alpha k}U^{\ast}_{\alpha j}$ with no
summation over the repeated index $\alpha$. Then, Branco and Rebelo choose six
Majorana type-phases and prove the six phases can be used to exactly reproduce
the $3\times 3$ PMNS matrix Eq.(4). We reproduce their six Majorana
type-phases [50] to keep our work self-contained,
$\displaystyle\beta_{1}=\arg{U_{e1}U^{\ast}_{e2}}\,,$
$\displaystyle\beta_{2}=\arg{U_{\mu 1}U^{\ast}_{\mu 2}}\,,$
$\displaystyle\beta_{3}=\arg{U_{\tau 1}U^{\ast}_{\tau 2}}\,,$ (6)
$\displaystyle\gamma_{1}=\arg{U_{e1}U^{\ast}_{e3}}\,,$
$\displaystyle\gamma_{2}=\arg{U_{\mu 1}U^{\ast}_{\mu 3}}\,,$
$\displaystyle\gamma_{3}=\arg{U_{\tau 1}U^{\ast}_{\tau 3}}\,.$ (7)
To reproduce the $3\times 3$ PMNS matrix of Eq.(4) from the Majorana
type-phases of Eq.(6) and Eq.(7), we must consider the construction of unitarity
triangles. Two types of triangles can be created from the PMNS matrix, Dirac
and Majorana triangles. We are interested in Majorana triangles and their
dependence on the Majorana type-phases. The three Majorana triangles are
derived by multiplying the columns of the PMNS matrix,
$\displaystyle U_{e1}U^{\ast}_{e2}+U_{\mu 1}U^{\ast}_{\mu 2}+U_{\tau
1}U^{\ast}_{\tau 2}$ $\displaystyle=0,$ triangle 1, (8) $\displaystyle
U_{e1}U^{\ast}_{e3}+U_{\mu 1}U^{\ast}_{\mu 3}+U_{\tau 1}U^{\ast}_{\tau 3}$
$\displaystyle=0,$ triangle 2, (9) $\displaystyle U_{e2}U^{\ast}_{e3}+U_{\mu
2}U^{\ast}_{\mu 3}+U_{\tau 2}U^{\ast}_{\tau 3}$ $\displaystyle=0,$ triangle 3.
(10)
The equation for triangle 1, Eq.(8), is connected with the $\beta_{k}$ Majorana
type-phases of Eq.(6). We illustrate that connection between the triangle
vectors of Eq.(8) and $\beta_{k}$ in figure 1. From figure 1, we can see
all sides of the triangle are constructed from the rephasing invariants
$U_{\alpha k}U^{\ast}_{\alpha j}$. Thus, the Majorana triangles are not free
to rotate in the complex plane, and their orientation is physically meaningful
[50, 51].
Figure 1: The first Majorana triangle depends on the Majorana type-phases
$\beta_{k}$. The sides of the triangle are constructed from the first two rows
of the PMNS matrix. The orientation is physically meaningful and can only be
determined by knowledge of the Majorana type-phases.
In addition, the internal angles of the triangle $\zeta_{k}$ are,
$\displaystyle\zeta_{1}=\pi-\left(\beta_{3}-\beta_{2}\right),$
$\displaystyle\zeta_{2}=\left(\beta_{3}-\beta_{1}\right)-\pi,$
$\displaystyle\zeta_{3}=\pi-\left(\beta_{2}-\beta_{1}\right).$ (11)
Triangle 2 is connected to the $\gamma_{k}$ of Eq.(7) and behaves similarly to
triangle 1 where the orientation is physically meaningful. The internal angles
of triangle 2 are calculated by replacing $\beta_{k}$ with $\gamma_{k}$ in
Eq.(11). Lastly, the connection to triangle 3 comes from the differences
between the $\beta_{k}$’s and $\gamma_{k}$’s, and its behavior depends on the
other two triangles.
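The definitions in Eq.(6)-(7) and the closure of triangle 1 in Eq.(8) can be verified numerically. A sketch under arbitrary test angles and phases (our own inputs, not fit values):

```python
import cmath
import math

def pmns(th12, th13, th23, delta, a21, a31):
    # PMNS matrix of Eq.(4)-(5); see section 2 for the parametrization.
    s12, c12 = math.sin(th12), math.cos(th12)
    s13, c13 = math.sin(th13), math.cos(th13)
    s23, c23 = math.sin(th23), math.cos(th23)
    e = cmath.exp(1j * delta)
    UD = [[c12 * c13, s12 * c13, s13 * e.conjugate()],
          [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
          [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]]
    ph = [1.0, cmath.exp(1j * a21 / 2), cmath.exp(1j * a31 / 2)]
    return [[UD[a][i] * ph[i] for i in range(3)] for a in range(3)]

U = pmns(0.58, 0.15, 0.86, 3.4, 0.7, 1.9)

# Majorana type-phases of Eq.(6)-(7); no sum over the family index.
beta = [cmath.phase(U[a][0] * U[a][1].conjugate()) for a in range(3)]
gamma = [cmath.phase(U[a][0] * U[a][2].conjugate()) for a in range(3)]

# Triangle 1, Eq.(8): the three bilinears close by column unitarity, and each
# side points along the direction beta_k in the complex plane.
sides = [U[a][0] * U[a][1].conjugate() for a in range(3)]
assert abs(sum(sides)) < 1e-12
for k in range(3):
    assert abs(cmath.phase(sides[k]) - beta[k]) < 1e-12
```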
Branco and Rebelo relate the mixing angles of the parametrized PMNS matrix in
Eq.(5) to the Majorana type-phases using the law of sines on the Majorana
triangles. For example, to find $\theta_{12}$ they take,
$\begin{split}\tan^{2}\theta_{12}&{}=\frac{\absolutevalue{U_{e2}U^{\ast}_{e3}}^{2}}{\absolutevalue{U_{e1}U^{\ast}_{e3}}^{2}}\\\
&{}=\frac{\absolutevalue{\sin(\gamma_{1}-\gamma_{3})}\absolutevalue{\sin(\gamma_{2}-\gamma_{1})}\absolutevalue{\sin(\gamma_{3}-\gamma_{2}-(\beta_{3}-\beta_{2}))}}{\absolutevalue{\sin(\gamma_{1}-\gamma_{3}-(\beta_{1}-\beta_{3}))}\absolutevalue{\sin(\gamma_{2}-\gamma_{1}-(\beta_{2}-\beta_{1}))}\absolutevalue{\sin(\gamma_{3}-\gamma_{2})}}.\end{split}$
(12)
Then they prove how the Dirac phase $\delta$ relates to the Majorana type-
phases via the common area of the triangles,
$\begin{split}A&{}=\frac{1}{16}\absolutevalue{\sin 2\theta_{12}\sin
2\theta_{13}\sin 2\theta_{23}\cos\theta_{13}\sin\delta}\\\
&{}=\frac{1}{2}\absolutevalue{\cos\theta_{12}\cos\theta_{13}\sin\theta_{13}}^{2}\frac{\absolutevalue{\sin(\gamma_{1}-\gamma_{2})}\absolutevalue{\sin(\gamma_{1}-\gamma_{3})}}{\absolutevalue{\sin(\gamma_{2}-\gamma_{3})}}.\end{split}$
(13)
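The first line of Eq.(13) can be cross-checked numerically against the geometric area of triangle 1, computed from the cross product of two of its sides. The angle values below are arbitrary test inputs of ours:

```python
import cmath
import math

th12, th13, th23, delta = 0.58, 0.15, 0.86, 3.4  # arbitrary test angles (radians)
s12, c12 = math.sin(th12), math.cos(th12)
s13, c13 = math.sin(th13), math.cos(th13)
s23, c23 = math.sin(th23), math.cos(th23)
e = cmath.exp(1j * delta)
# Dirac part U^D of Eq.(5); the Majorana phases rotate all three sides of a
# triangle by a common phase, so they drop out of the area.
U = [[c12 * c13, s12 * c13, s13 * e.conjugate()],
     [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
     [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]]

# Area of triangle 1 from two sides: |z_e x z_mu| / 2 = |Im(conj(z_e) z_mu)| / 2.
z = [U[a][0] * U[a][1].conjugate() for a in range(3)]
area = abs((z[0].conjugate() * z[1]).imag) / 2

# First line of Eq.(13).
A = abs(math.sin(2 * th12) * math.sin(2 * th13) * math.sin(2 * th23)
        * math.cos(th13) * math.sin(delta)) / 16

assert abs(area - A) < 1e-12
```

Both expressions equal half the Jarlskog invariant, which is why the three triangles share a common area.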
We use the relations of Branco and Rebelo, such as Eq.(12) and Eq.(13), to update
their analysis [50] of the Majorana triangles based on the recent neutrino
oscillation global fit. Specifically, we use the best fit values of NuFITv5.1
calculated by the Nu-Fit collaboration [52]. Compared to the analysis of
ref.[50], we include additional data from neutrino oscillation experiments.
For simplicity, we consider the Majorana phases to be absent,
$\alpha_{21}=\alpha_{31}=0$; we will include them later. The vectors for
Triangle 1 in fig.2 are,
$\begin{split}U_{e1}U^{\ast}_{e2}&=0.4496-0.0i,\\\ U_{\mu 1}U^{\ast}_{\mu
2}&=-0.2295+0.05711i,\\\ U_{\tau 1}U^{\ast}_{\tau
2}&=-0.2200-0.05711i,\end{split}$ (14)
where we write only the first four significant figures, without rounding.
Figure 2: We build the three Majorana triangles using the best fit values of
NuFITv5.1[52] for normal hierarchy. The orientation of the triangles depends
on the Majorana type-phases; $\beta_{k}$ for triangle 1 that is related to the
Majorana phase $\alpha_{21}$, $\gamma_{k}$ for triangle 2, that is related to
the Majorana phase $\alpha_{31}$, and for triangle 3 the subtraction between
the Majorana type-phases, $\gamma_{k}-\beta_{k}$. For this instance, we set
the Majorana phases $\alpha_{21}=\alpha_{31}=0$, because they are not
experimentally determined. NuFITv5.1 assumes the PMNS matrix is unitary, so all triangles
are completely closed.
We treat the vectors for Triangles 2 and 3 of fig.2 similarly to Triangle 1,
$\displaystyle\left.\begin{aligned} U_{e1}U^{\ast}_{e3}&=-0.07945-0.09469i,\\\
U_{\mu 1}U^{\ast}_{\mu 3}&=-0.2354+0.04261i,\\\ U_{\tau 1}U^{\ast}_{\tau
3}&=0.3149+0.05208i;\end{aligned}\right\\}\text{Triangle 2}$ (15)
$\displaystyle\left.\begin{aligned} U_{e2}U^{\ast}_{e3}&=-0.05251-0.06258i,\\\
U_{\mu 2}U^{\ast}_{\mu 3}&=0.4339+0.02816i,\\\ U_{\tau 2}U^{\ast}_{\tau
3}&=-0.3814+0.03442i.\end{aligned}\right\\}\text{Triangle 3}$ (16)
For all three triangles in fig.2, the vectors close completely. This is
because the data we are using from NuFITv5.1 assume the PMNS
matrix Eq.(5) is unitary. If the unitarity assumption were relaxed, the triangles
could be open. Furthermore, all three triangles are scalene, which is
different from the isosceles triangles of Branco and Rebelo because we do not
consider perturbations around tri-bimaximal mixing.
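The numbers in Eq.(14) can be reproduced with a few lines of code. The numeric inputs below are our assumption, read off as the approximate NuFITv5.1 normal-ordering best fit ($\sin^{2}\theta_{12}\approx 0.304$, $\sin^{2}\theta_{13}\approx 0.0225$, $\sin^{2}\theta_{23}\approx 0.450$, $\delta\approx 230^{\circ}$); the sketch checks that triangle 1 closes and matches Eq.(14) to about three decimal places:

```python
import cmath
import math

# Assumed approximate NuFITv5.1 normal-ordering best-fit inputs (see lead-in).
s12, s13, s23 = math.sqrt(0.304), math.sqrt(0.0225), math.sqrt(0.450)
c12, c13, c23 = math.sqrt(1 - s12**2), math.sqrt(1 - s13**2), math.sqrt(1 - s23**2)
delta = math.radians(230.0)
e = cmath.exp(1j * delta)

# Dirac part of Eq.(5); alpha_21 = alpha_31 = 0 as in fig.2.
U = [[c12 * c13, s12 * c13, s13 * e.conjugate()],
     [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
     [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]]

# Sides of triangle 1, to be compared with Eq.(14).
z = [U[a][0] * U[a][1].conjugate() for a in range(3)]

assert abs(sum(z)) < 1e-12                      # the triangle closes
assert abs(z[0] - 0.4496) < 1e-3                # U_e1 U*_e2
assert abs(z[1] - (-0.2295 + 0.05711j)) < 1e-3  # U_mu1 U*_mu2
assert abs(z[2] - (-0.2200 - 0.05711j)) < 1e-3  # U_tau1 U*_tau2
```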
Lastly, current and near-future neutrino oscillation experiments cannot
determine the orientation of the Majorana triangles. This is because the
oscillation experiments measure the Dirac phase $\delta$, but cannot measure
the Majorana phases $\alpha_{21}$ and $\alpha_{31}$ that determine the
triangle orientation. For example, we can change the value of $\alpha_{21}$ to
rotate triangle 1 in figure 3.
Figure 3: We rotate the first Majorana triangle from the best fit values of
NuFITv5.1[52] for normal hierarchy. The orientation of the triangle depends on
the Majorana type-phases $\beta_{k}$; thus it is rotated by varying
$\alpha_{21}$ between $0$ and $\pi$.
Branco and Rebelo discussed this problem [50], stating that measurements of
the Dirac phase can only provide insight into differences of the Majorana
type-phases, e.g., $\gamma_{1}-\gamma_{3}$, so the triangle orientation cannot
be determined. In contrast, the sums of Majorana type-phases
$\beta_{i}+\beta_{j}$ and $\gamma_{i}+\gamma_{j}$ are connected to the
Majorana phases $\alpha_{21}$ and $\alpha_{31}$, respectively. They then go on
to discuss the possibility of determining the triangle orientation with
neutrinoless double-beta decay and flavor sensitive leptogenesis[53, 54, 55].
In this paper, we will describe an additional method to determine the triangle
orientation based on our work of lepton number oscillations[56, 57].
## 3 Time evolution of lepton number and Majorana type-phases
To determine the triangle orientation, we study the time evolution of lepton
number focusing on the dependence of the Majorana type-phases. In the works
[56, 57], the lepton family numbers $L_{\alpha}^{M}$ for the Majorana
neutrinos are defined. Then, the time evolution of their expectation values
is obtained. We rewrite the Majorana expectation value of $L_{\alpha}^{M}(t)$ as
$\bra*{\sigma}L_{\alpha}^{M}(t)\ket*{\sigma}\\\
=\sum_{i,j}^{3}\left[\real(U^{\ast}_{\alpha i}U_{\sigma i}U_{\alpha
j}U^{\ast}_{\sigma
j})\left(\cos\\{E_{i}(\mathbf{q})t\\}\cos\\{E_{j}(\mathbf{q})t\\}+\frac{\mathbf{q}^{2}}{E_{i}(\mathbf{q})E_{j}(\mathbf{q})}\sin\\{E_{i}(\mathbf{q})t\\}\sin\\{E_{j}(\mathbf{q})t\\}\right)\right.\\\
-\imaginary(U^{\ast}_{\alpha i}U_{\sigma i}U_{\alpha j}U^{\ast}_{\sigma
j})\left(\frac{\absolutevalue{\mathbf{q}}}{E_{i}(\mathbf{q})}\sin\\{E_{i}(\mathbf{q})t\\}\cos\\{E_{j}(\mathbf{q})t\\}-\frac{\absolutevalue{\mathbf{q}}}{E_{j}(\mathbf{q})}\cos\\{E_{i}(\mathbf{q})t\\}\sin\\{E_{j}(\mathbf{q})t\\}\right)\\\
\left.-\real(U^{\ast}_{\alpha i}U^{\ast}_{\sigma i}U_{\alpha j}U_{\sigma
j})\frac{m_{i}}{E_{i}(\mathbf{q})}\frac{m_{j}}{E_{j}(\mathbf{q})}\sin\\{E_{i}(\mathbf{q})t\\}\sin\\{E_{j}(\mathbf{q})t\\}\right].$
(17)
In our notation, the initial state $\sigma$ denotes a neutrino with definite
flavor, i.e., $\sigma=e,\mu,\tau$, and momentum $\mathbf{q}$. We define the
energy of the mass states as
Majorana type-phases we are interested in the third term of Eq.(17) that
depends on the PMNS combination of $\real(U^{\ast}_{\alpha i}U^{\ast}_{\sigma
i}U_{\alpha j}U_{\sigma j})$. Note that the first two terms can be completely
determined from neutrino oscillation and neutrino mass experiments; this means
we can subtract them from the Majorana expectation value to isolate the third
term. We define the quantity
$L^{Q}_{\alpha}(t)$ which denotes the sum of the first and the second term of
Eq.(17),
$\bra*{\sigma}{L^{Q}}_{\alpha}(t)\ket*{\sigma}\\\
=\sum_{i,j}^{3}\left[\real(U^{\ast}_{\alpha i}U_{\sigma i}U_{\alpha
j}U^{\ast}_{\sigma
j})\left(\cos\\{E_{i}(\mathbf{q})t\\}\cos\\{E_{j}(\mathbf{q})t\\}+\frac{\mathbf{q}^{2}}{E_{i}(\mathbf{q})E_{j}(\mathbf{q})}\sin\\{E_{i}(\mathbf{q})t\\}\sin\\{E_{j}(\mathbf{q})t\\}\right)\right.\\\
\left.-\imaginary(U^{\ast}_{\alpha i}U_{\sigma i}U_{\alpha j}U^{\ast}_{\sigma
j})\left(\frac{\absolutevalue{\mathbf{q}}}{E_{i}(\mathbf{q})}\sin\\{E_{i}(\mathbf{q})t\\}\cos\\{E_{j}(\mathbf{q})t\\}-\frac{\absolutevalue{\mathbf{q}}}{E_{j}(\mathbf{q})}\cos\\{E_{i}(\mathbf{q})t\\}\sin\\{E_{j}(\mathbf{q})t\\}\right)\right].$
(18)
We represent that subtraction process with the difference between the Majorana
expectation value Eq.(17) and the quantity Eq.(18),
$\bra*{\sigma}L_{\alpha}^{M-Q}(t)\ket*{\sigma}\equiv\bra*{\sigma}L_{\alpha}^{M}(t)\ket*{\sigma}-\bra*{\sigma}L_{\alpha}^{Q}(t)\ket*{\sigma}.$
(19)
This results in the isolated third term of,
$\begin{split}\bra*{\sigma}L_{\alpha}^{M-Q}(t)\ket*{\sigma}={}&-\sum_{i,j}^{3}\real(U^{\ast}_{\alpha
i}U^{\ast}_{\sigma i}U_{\alpha j}U_{\sigma
j})\frac{m_{i}}{E_{i}(\mathbf{q})}\frac{m_{j}}{E_{j}(\mathbf{q})}\sin\\{E_{i}(\mathbf{q})t\\}\sin\\{E_{j}(\mathbf{q})t\\},\\\
={}&-\sum_{i=1}^{3}\absolutevalue{U_{\alpha i}U_{\sigma
i}}^{2}\left(\frac{m_{i}\sin\\{E_{i}(\mathbf{q})t\\}}{E_{i}(\mathbf{q})}\right)^{2}\\\
&-2\sum_{i<j}\real(U^{\ast}_{\alpha i}U_{\alpha j}U^{\ast}_{\sigma i}U_{\sigma
j})\frac{m_{i}}{E_{i}(\mathbf{q})}\frac{m_{j}}{E_{j}(\mathbf{q})}\sin\\{E_{i}(\mathbf{q})t\\}\sin\\{E_{j}(\mathbf{q})t\\}.\end{split}$
(20)
Let us investigate the difference by focusing on the dependence of the
Majorana type-phases from Eq.(6). We take $\sigma=\alpha=e$ to obtain,
$\begin{split}\bra*{e}L_{e}^{M-Q}(t)\ket*{e}={}&-\sum_{i=1}^{3}\absolutevalue{U_{ei}}^{4}\left(\frac{m_{i}\sin\\{E_{i}(\mathbf{q})t\\}}{E_{i}(\mathbf{q})}\right)^{2}\\\
&-2\real\\{(U^{\ast}_{e1}U_{e2})^{2}\\}\frac{m_{1}\sin\\{E_{1}(\mathbf{q})t\\}}{E_{1}(\mathbf{q})}\frac{m_{2}\sin\\{E_{2}(\mathbf{q})t\\}}{E_{2}(\mathbf{q})}\\\
&-2\real\\{(U^{\ast}_{e2}U_{e3})^{2}\\}\frac{m_{2}\sin\\{E_{2}(\mathbf{q})t\\}}{E_{2}(\mathbf{q})}\frac{m_{3}\sin\\{E_{3}(\mathbf{q})t\\}}{E_{3}(\mathbf{q})}\\\
&-2\real\\{(U^{\ast}_{e1}U_{e3})^{2}\\}\frac{m_{1}\sin\\{E_{1}(\mathbf{q})t\\}}{E_{1}(\mathbf{q})}\frac{m_{3}\sin\\{E_{3}(\mathbf{q})t\\}}{E_{3}(\mathbf{q})}.\end{split}$
(21)
In the last three terms of Eq.(21), the PMNS combinations can be written with
the Majorana type-phases $2\beta_{1}$ and $2\gamma_{1}$ based on Eq.(6) and
Eq.(7). The explicit dependence of Eq.(21) on the Majorana type-phases is
then,
$\begin{split}\bra*{e}L_{e}^{M-Q}(t)\ket*{e}={}&-\sum_{i=1}^{3}\absolutevalue{U_{ei}}^{4}\left(\frac{m_{i}\sin\\{E_{i}(\mathbf{q})t\\}}{E_{i}(\mathbf{q})}\right)^{2}\\\
&-2\absolutevalue{U_{e1}U_{e2}}^{2}\cos(2\beta_{1})\frac{m_{1}\sin\\{E_{1}(\mathbf{q})t\\}}{E_{1}(\mathbf{q})}\frac{m_{2}\sin\\{E_{2}(\mathbf{q})t\\}}{E_{2}(\mathbf{q})}\\\
&-2\absolutevalue{U_{e2}U_{e3}}^{2}\cos(2(\gamma_{1}-\beta_{1}))\frac{m_{2}\sin\\{E_{2}(\mathbf{q})t\\}}{E_{2}(\mathbf{q})}\frac{m_{3}\sin\\{E_{3}(\mathbf{q})t\\}}{E_{3}(\mathbf{q})}\\\
&-2\absolutevalue{U_{e1}U_{e3}}^{2}\cos(2\gamma_{1})\frac{m_{1}\sin\\{E_{1}(\mathbf{q})t\\}}{E_{1}(\mathbf{q})}\frac{m_{3}\sin\\{E_{3}(\mathbf{q})t\\}}{E_{3}(\mathbf{q})}.\end{split}$
(22)
We can obtain similar results as Eq.(22) for the muon and tauon numbers of
$\alpha=\mu,\tau$,
$\begin{split}\bra*{e}L_{\mu}^{M-Q}(t)\ket*{e}=&{}-\sum_{i=1}^{3}\absolutevalue{U_{\mu
i}U_{ei}}^{2}\left(\frac{m_{i}\sin\\{E_{i}(\mathbf{q})t\\}}{E_{i}(\mathbf{q})}\right)^{2}\\\
&-2\absolutevalue{U^{\ast}_{e1}U_{e2}U^{\ast}_{\mu 1}U_{\mu
2}}\cos(\beta_{1}+\beta_{2})\frac{m_{1}\sin\\{E_{1}(\mathbf{q})t\\}}{E_{1}(\mathbf{q})}\frac{m_{2}\sin\\{E_{2}(\mathbf{q})t\\}}{E_{2}(\mathbf{q})}\\\
&-2\absolutevalue{U^{\ast}_{e2}U_{e3}U^{\ast}_{\mu 2}U_{\mu
3}}\cos(\gamma_{1}-\beta_{1}+\gamma_{2}-\beta_{2})\frac{m_{2}\sin\\{E_{2}(\mathbf{q})t\\}}{E_{2}(\mathbf{q})}\frac{m_{3}\sin\\{E_{3}(\mathbf{q})t\\}}{E_{3}(\mathbf{q})}\\\
&-2\absolutevalue{U^{\ast}_{e1}U_{e3}U^{\ast}_{\mu 1}U_{\mu
3}}\cos(\gamma_{1}+\gamma_{2})\frac{m_{1}\sin\\{E_{1}(\mathbf{q})t\\}}{E_{1}(\mathbf{q})}\frac{m_{3}\sin\\{E_{3}(\mathbf{q})t\\}}{E_{3}(\mathbf{q})}.\end{split}$
(23)
In contrast to Eq.(22), the muon number $L_{\mu}^{M-Q}(t)$ of Eq.(23) depends
on the Majorana type-phases $\beta_{1}+\beta_{2}$ and $\gamma_{1}+\gamma_{2}$.
Finally, the tauon number $L_{\tau}^{M-Q}(t)$ is written as,
$\begin{split}\bra*{e}L_{\tau}^{M-Q}(t)\ket*{e}={}&-\sum_{i=1}^{3}\absolutevalue{U_{\tau
i}U_{ei}}^{2}\left(\frac{m_{i}\sin\\{E_{i}(q)t\\}}{E_{i}(q)}\right)^{2}\\\
&-2\absolutevalue{U^{\ast}_{e1}U_{e2}U^{\ast}_{\tau 1}U_{\tau
2}}\cos(\beta_{1}+\beta_{3})\frac{m_{1}\sin\\{E_{1}(\mathbf{q})t\\}}{E_{1}(\mathbf{q})}\frac{m_{2}\sin\\{E_{2}(\mathbf{q})t\\}}{E_{2}(\mathbf{q})}\\\
&-2\absolutevalue{U^{\ast}_{e2}U_{e3}U^{\ast}_{\tau 2}U_{\tau
3}}\cos(\gamma_{1}-\beta_{1}+\gamma_{3}-\beta_{3})\frac{m_{2}\sin\\{E_{2}(\mathbf{q})t\\}}{E_{2}(\mathbf{q})}\frac{m_{3}\sin\\{E_{3}(\mathbf{q})t\\}}{E_{3}(\mathbf{q})}\\\
&-2\absolutevalue{U^{\ast}_{e1}U_{e3}U^{\ast}_{\tau 1}U_{\tau
3}}\cos(\gamma_{1}+\gamma_{3})\frac{m_{1}\sin\\{E_{1}(\mathbf{q})t\\}}{E_{1}(\mathbf{q})}\frac{m_{3}\sin\\{E_{3}(\mathbf{q})t\\}}{E_{3}(\mathbf{q})}.\end{split}$
(24)
The last three terms are written with the Majorana type-phases
$\beta_{1}+\beta_{3}$ and $\gamma_{1}+\gamma_{3}$. All the lepton family
numbers Eqs.(22-24) depend on the sum of Majorana type-phases
$\beta_{1}+\beta_{j}$ and $\gamma_{1}+\gamma_{j}$, where $j=1,2,3$. In contrast
to the differences of Majorana type-phases in a unitarity triangle, such as
$\beta_{i}-\beta_{j}$ and $\gamma_{i}-\gamma_{j}$, the sums of the Majorana
type-phases depend on the Majorana phases $\alpha_{21}$ and $\alpha_{31}$ of
Eq.(4).
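The rewriting of Eq.(20) into the type-phase form of Eq.(22) can be verified numerically: for $\alpha=\sigma=e$, the generic double sum and the $2\beta_{1}$, $2\gamma_{1}$ form must agree at every $t$. A sketch with arbitrary assumed masses, momentum, angles, and phases (not physical values):

```python
import cmath
import math

def pmns(th12, th13, th23, delta, a21, a31):
    # PMNS matrix of Eq.(4)-(5).
    s12, c12 = math.sin(th12), math.cos(th12)
    s13, c13 = math.sin(th13), math.cos(th13)
    s23, c23 = math.sin(th23), math.cos(th23)
    e = cmath.exp(1j * delta)
    UD = [[c12 * c13, s12 * c13, s13 * e.conjugate()],
          [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
          [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]]
    ph = [1.0, cmath.exp(1j * a21 / 2), cmath.exp(1j * a31 / 2)]
    return [[UD[a][i] * ph[i] for i in range(3)] for a in range(3)]

U = pmns(0.58, 0.15, 0.86, 3.4, 0.7, 1.9)  # arbitrary test angles and phases
m = [0.05, 0.07, 0.10]                      # toy masses (arbitrary units)
q = 0.2                                     # toy momentum
E = [math.sqrt(q * q + mi * mi) for mi in m]
t = 13.0
f = [m[i] / E[i] * math.sin(E[i] * t) for i in range(3)]

# Eq.(20) with alpha = sigma = e: the generic double sum.
eq20 = -sum((U[0][i].conjugate() ** 2 * U[0][j] ** 2).real * f[i] * f[j]
            for i in range(3) for j in range(3))

# Eq.(22): the same quantity written with the type-phases 2*beta_1, 2*gamma_1.
beta1 = cmath.phase(U[0][0] * U[0][1].conjugate())
gamma1 = cmath.phase(U[0][0] * U[0][2].conjugate())
eq22 = (-sum(abs(U[0][i]) ** 4 * f[i] ** 2 for i in range(3))
        - 2 * abs(U[0][0] * U[0][1]) ** 2 * math.cos(2 * beta1) * f[0] * f[1]
        - 2 * abs(U[0][1] * U[0][2]) ** 2 * math.cos(2 * (gamma1 - beta1)) * f[1] * f[2]
        - 2 * abs(U[0][0] * U[0][2]) ** 2 * math.cos(2 * gamma1) * f[0] * f[2])

assert abs(eq20 - eq22) < 1e-12
```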
As we have seen in Eqs.(22-24), one can identify the part which depends on the
summation of the Majorana type-phases, i.e., $\beta_{i}+\beta_{j}$ and
$\gamma_{i}+\gamma_{j}$. One may determine the combination of Majorana type-
phases by fitting the curve for Eq.(20) as we vary those unknown parameters.
This method may require measuring the long-time behaviour of the time
evolution. In the next section, we propose other quantities directly related
to them, obtained from time derivatives of the expectation values evaluated at
the initial time.
## 4 Determination of Majorana type-phases and the lightest neutrino masses
with the lepton numbers
In this section, we derive formulas to determine the lightest neutrino mass
and the Majorana phases from the time dependence of the lepton numbers. We consider
two cases,
1. there are three massive neutrinos, where two Majorana phases are allowed;
2. there is a massless neutrino, and only one Majorana phase is allowed.
We take the second-order time derivative of Eq.(17) at the initial time $t=0$,
$\begin{split}\frac{d^{2}}{dt^{2}}\bra*{\sigma}L_{\alpha}^{M}(t)\ket*{\sigma}|_{t=0}&=-\sum_{i,j}^{3}\real(U_{\alpha
i}^{\ast}U_{\sigma i}U_{\alpha j}U_{\sigma
j}^{\ast})\left(m_{i}^{2}+m_{j}^{2}\right)-2\sum_{i,j}^{3}{\rm Re}(U_{\alpha
i}^{\ast}U_{\sigma i}^{\ast}U_{\alpha j}U_{\sigma j})m_{i}m_{j}\\\
&=-2\sum_{i}^{3}\delta_{\alpha\sigma}\absolutevalue{U_{\sigma
i}}^{2}m_{i}^{2}-2\sum_{i,j}^{3}\real(U_{\alpha i}^{\ast}U_{\sigma
i}^{\ast}U_{\alpha j}U_{\sigma j})m_{i}m_{j}.\end{split}$ (25)
The first term is independent of the Majorana phases and the second term
depends on the Majorana phases through the PMNS combination
$\real(U^{\ast}_{\alpha i}U^{\ast}_{\sigma i}U_{\alpha j}U_{\sigma j})$. Then,
the second-order derivative of the total lepton number
$L^{M}(t)=\sum_{\alpha}L_{\alpha}^{M}(t)$ is given by,
$\frac{d^{2}}{dt^{2}}\bra*{\sigma}L^{M}(t)\ket*{\sigma}|_{t=0}=-4\sum_{i}^{3}m_{i}^{2}|U_{\sigma
i}|^{2}.$ (26)
We use the total lepton number of Eq.(26) to rewrite the first term of Eq.(25)
resulting in,
$\frac{d^{2}}{dt^{2}}\bra*{\sigma}L_{\alpha}^{M}(t)\ket*{\sigma}|_{t=0}=\delta_{\alpha\sigma}\frac{1}{2}\frac{d^{2}}{dt^{2}}\bra*{\sigma}L^{M}(t)\ket*{\sigma}|_{t=0}-2\sum_{i,j}^{3}{\rm
Re}(U_{\alpha i}^{*}U_{\sigma i}^{*}U_{\alpha j}U_{\sigma j})m_{i}m_{j}.$ (27)
Using Eq.(26), one can derive the lightest neutrino mass for the normal
hierarchy and the inverted hierarchy cases in terms of the second-order
derivative of the total lepton number.
$\displaystyle
m_{1}^{2}=-\frac{1}{4}\frac{d^{2}}{dt^{2}}\bra*{\sigma}L^{M}(t)\ket*{\sigma}|_{t=0}-\Delta
m^{2}_{21}|U_{\sigma 2}|^{2}-\Delta m^{2}_{31}|U_{\sigma 3}|^{2}$ Normal, (28)
$\displaystyle
m_{3}^{2}=-\frac{1}{4}\frac{d^{2}}{dt^{2}}\bra*{\sigma}L^{M}(t)\ket*{\sigma}|_{t=0}-\Delta
m^{2}_{13}|U_{\sigma 1}|^{2}-\Delta m^{2}_{23}|U_{\sigma 2}|^{2}$ Inverted.
(29)
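The consistency of Eq.(26) with Eq.(28) can be checked by a round trip: generate the second derivative from an assumed lightest mass, then invert Eq.(28); row unitarity guarantees that the input $m_{1}$ is recovered exactly. The splittings below are approximate oscillation values and the angles arbitrary; all numeric inputs are our assumptions:

```python
import cmath
import math

def pmns(th12, th13, th23, delta, a21, a31):
    # PMNS matrix of Eq.(4)-(5).
    s12, c12 = math.sin(th12), math.cos(th12)
    s13, c13 = math.sin(th13), math.cos(th13)
    s23, c23 = math.sin(th23), math.cos(th23)
    e = cmath.exp(1j * delta)
    UD = [[c12 * c13, s12 * c13, s13 * e.conjugate()],
          [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
          [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]]
    ph = [1.0, cmath.exp(1j * a21 / 2), cmath.exp(1j * a31 / 2)]
    return [[UD[a][i] * ph[i] for i in range(3)] for a in range(3)]

U = pmns(0.58, 0.15, 0.86, 3.4, 0.7, 1.9)
dm21, dm31 = 7.4e-5, 2.5e-3   # approximate splittings (eV^2), normal hierarchy
m1 = 0.01                     # the "unknown" lightest mass (eV)
m = [m1, math.sqrt(m1**2 + dm21), math.sqrt(m1**2 + dm31)]

sigma = 0  # initial electron-flavor state
# Eq.(26): second derivative of the total lepton number at t = 0.
d2L = -4 * sum(m[i] ** 2 * abs(U[sigma][i]) ** 2 for i in range(3))

# Eq.(28): recover the lightest mass from d2L and the measured splittings.
m1sq = -d2L / 4 - dm21 * abs(U[sigma][1]) ** 2 - dm31 * abs(U[sigma][2]) ** 2
assert abs(m1sq - m1 ** 2) < 1e-12
```

The cancellation works because $|U_{\sigma 1}|^{2}+|U_{\sigma 2}|^{2}+|U_{\sigma 3}|^{2}=1$ for any row $\sigma$ of a unitary PMNS matrix.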
To investigate the Majorana type-phases, we consider two cases for the lepton
families. First, we set $\alpha=\sigma$ in Eq.(27) and obtain the following
formula,
$\frac{d^{2}}{dt^{2}}\bra*{\sigma}L_{\sigma}^{M}(t)\ket*{\sigma}|_{t=0}=\frac{1}{2}\frac{d^{2}}{dt^{2}}\bra*{\sigma}L^{M}(t)\ket*{\sigma}|_{t=0}-2\sum_{i,j}^{3}(U_{\sigma
i}^{*}U_{\sigma j})^{2}m_{i}m_{j}.$ (30)
Then we set $\alpha\neq\sigma$ in Eq.(27) to obtain,
$\frac{d^{2}}{dt^{2}}\bra*{\sigma}L_{\alpha}^{M}(t)\ket*{\sigma}|_{t=0}=-2\sum_{i,j}^{3}{\rm
Re}(U_{\alpha i}^{*}U_{\alpha j}U_{\sigma i}^{*}U_{\sigma j})m_{i}m_{j}.$ (31)
Next, by specifying the family indices $\sigma$ and $\alpha$ in Eq.(30) and
Eq.(31) we can clarify the dependencies on the Majorana type-phases from
Eqs.(6-7). We take $\sigma=e$ in Eq.(30) to show the dependencies on the
Majorana type-phases $\beta_{1}$ and $\gamma_{1}$,
$\frac{d^{2}}{dt^{2}}\bra*{e}L_{e}^{M}(t)\ket*{e}|_{t=0}=\frac{1}{2}\frac{d^{2}}{dt^{2}}\bra*{e}L^{M}(t)\ket*{e}|_{t=0}-2\sum_{i=1}^{3}m_{i}^{2}\absolutevalue{U_{ei}}^{4}-4m_{1}m_{2}\absolutevalue{U_{e1}U^{\ast}_{e2}}^{2}\cos(2\beta_{1})\\\
-4m_{2}m_{3}\absolutevalue{U_{e2}U^{\ast}_{e3}}^{2}\cos(2\beta_{1}-2\gamma_{1})-4m_{3}m_{1}\absolutevalue{U_{e1}U^{\ast}_{e3}}^{2}\cos(2\gamma_{1}).$
(32)
In contrast, for Eq.(31) we take $\sigma=e$ and $\alpha=\mu$,
$\frac{d^{2}}{dt^{2}}\bra*{e}L_{\mu}^{M}(t)\ket*{e}|_{t=0}=-2\sum_{i}^{3}\absolutevalue{U_{\mu
i}}^{2}\absolutevalue{U_{ei}}^{2}m_{i}^{2}-4\absolutevalue{U_{\mu
1}^{\ast}U_{\mu 2}U_{e1}^{\ast}U_{e2}}\cos(\beta_{1}+\beta_{2})m_{1}m_{2}\\\
\qquad-4\absolutevalue{U_{\mu 2}^{\ast}U_{\mu
3}U_{e2}^{\ast}U_{e3}}\cos(\beta_{1}-\gamma_{1}+\beta_{2}-\gamma_{2})m_{2}m_{3}\\\
-4\absolutevalue{U_{\mu 3}^{\ast}U_{\mu
1}U_{e3}^{\ast}U_{e1}}\cos(\gamma_{1}+\gamma_{2})m_{3}m_{1},$ (33)
which, compared with Eq.(32), depends on the additional Majorana type-phases
$\beta_{2}$ and $\gamma_{2}$. Lastly, we take $\alpha=\tau$,
$\frac{d^{2}}{dt^{2}}\bra*{e}L_{\tau}^{M}(t)\ket*{e}|_{t=0}=-2\sum_{i}^{3}\absolutevalue{U_{\tau
i}}^{2}\absolutevalue{U_{ei}}^{2}m_{i}^{2}-4\absolutevalue{U_{\tau
1}^{\ast}U_{\tau 2}U_{e1}^{\ast}U_{e2}}\cos(\beta_{1}+\beta_{3})m_{1}m_{2}\\\
\qquad-4\absolutevalue{U_{\tau 2}^{\ast}U_{\tau
3}U_{e2}^{\ast}U_{e3}}\cos(\beta_{1}-\gamma_{1}+\beta_{3}-\gamma_{3})m_{2}m_{3}\\\
-4\absolutevalue{U_{\tau 3}^{\ast}U_{\tau
1}U_{e3}^{\ast}U_{e1}}\cos(\gamma_{1}+\gamma_{3})m_{3}m_{1},$ (34)
which results in the dependence on the Majorana type-phases $\beta_{3}$ and
$\gamma_{3}$.
As discussed near the end of section 2, the Majorana type-phases $\beta_{i}$
and $\gamma_{i}$ shift under variations $\Delta\alpha_{21}$ and
$\Delta\alpha_{31}$ of the Majorana phases,
$\displaystyle\beta_{i}+\beta_{j}\to\beta_{i}+\beta_{j}-\Delta\alpha_{21},$
(35)
$\displaystyle\gamma_{i}+\gamma_{j}\to\gamma_{i}+\gamma_{j}-\Delta\alpha_{31}.$
(36)
The time derivatives of all three lepton family numbers in Eqs.(32-34) depend
on sums of the Majorana type-phases, i.e., $\beta_{i}+\beta_{j}$ and
$\gamma_{i}+\gamma_{j}$. From Eqs.(35-36), these sums are sensitive to
variations $\Delta\alpha_{21}$ and $\Delta\alpha_{31}$ of the Majorana phases.
Thus, the two Majorana phases $\alpha_{21}$ and $\alpha_{31}$ can be
determined from the time derivatives of two or more lepton family numbers. In
addition, the lightest neutrino mass can be resolved from the time derivative
of the total lepton number through Eqs.(28-29). This constitutes a method to
determine the triangle orientations and the lightest neutrino mass from
lepton number oscillations.
## 5 Considerations when the lightest neutrino is massless
In this section, we consider the lightest neutrino to be massless. As an
example, the $(3,2)$ Type-I seesaw model can predict such a massless neutrino
with three active neutrinos and two heavy right-handed Majorana neutrinos[58].
In this framework, which neutrino is massless depends on the hierarchy,
$\displaystyle m_{1}=0,$ $\displaystyle 0<m_{2}<m_{3},$ Normal hierarchy, (37)
$\displaystyle m_{3}=0,$ $\displaystyle 0<m_{1}<m_{2},$ Inverted hierarchy.
(38)
Because a massless neutrino is free to be re-phased, some of the six Majorana
type-phases of the three massive Majorana neutrinos from section 4 are no
longer invariants. In the normal hierarchy case, the invariant combinations
of the three $\beta_{i}$ and three $\gamma_{i}$ become,
$\displaystyle\arg(U_{e2}U_{e3}^{\ast})=\gamma_{1}-\beta_{1},$
$\displaystyle\arg(U_{\mu 2}U_{\mu 3}^{\ast})=\gamma_{2}-\beta_{2},$
$\displaystyle\arg(U_{\tau 2}U_{\tau 3}^{\ast})=\gamma_{3}-\beta_{3}.$ (39)
This is because $\beta_{i}$ and $\gamma_{i}$ are no longer re-phasing
invariants by themselves; instead, one can form four invariant quartets,
$\displaystyle\arg(U_{e1}^{\ast}U_{e2}U_{\mu 1}U_{\mu
2}^{\ast})=\beta_{2}-\beta_{1}$ (40)
$\displaystyle\arg(U_{e1}^{\ast}U_{e2}U_{\tau 1}U_{\tau
2}^{\ast})=\beta_{3}-\beta_{1}$ (41)
$\displaystyle\arg(U_{e1}^{\ast}U_{e3}U_{\mu 1}U_{\mu
3}^{\ast})=\gamma_{2}-\gamma_{1}$ (42)
$\displaystyle\arg(U_{e1}^{\ast}U_{e3}U_{\tau 1}U_{\tau
3}^{\ast})=\gamma_{3}-\gamma_{1}.$ (43)
The Majorana type-phases defined in Eq.(39) and the four arguments defined in
Eqs.(40-43) are not independent. In addition to the Majorana type-phase
$\gamma_{1}-\beta_{1}$ in Eq.(39), one can choose the four arguments of the
quartets as independent re-phasing invariant combinations. The other two
Majorana type-phases can then be written in terms of them,
$\begin{split}\gamma_{i}-\beta_{i}&=(\gamma_{1}-\beta_{1})+(\gamma_{i}-\beta_{i})-(\gamma_{1}-\beta_{1})\\\
&=(\gamma_{1}-\beta_{1})+(\gamma_{i}-\gamma_{1})-(\beta_{i}-\beta_{1})\qquad(i=2,3).\end{split}$
(44)
A similar situation occurs in the inverted hierarchy, where the massless
neutrino is $m_{3}=0$. The re-phasing invariant Majorana type-phases become
$\beta_{i}$, for $i=1,2,3$. Then, one can choose one Majorana type-phase and
the four arguments in Eqs.(40-43) as independent re-phasing invariants.
### 5.1 Neutrinoless double beta decay and lepton number
In this subsection, we relate one independent Majorana type-phase to physical
observables, such as $\absolutevalue{m_{\nu ee}}$, which controls the
neutrinoless double beta decay rate, and the time evolution of lepton
numbers. For the normal hierarchy case, $\absolutevalue{m_{\nu ee}}$ is given
by the following formula,
$\absolutevalue{m_{\nu
ee}}^{2}_{\text{norm}}=m_{2}^{2}\absolutevalue{U_{e2}}^{4}+m_{3}^{2}\absolutevalue{U_{e3}}^{4}+2m_{2}m_{3}\absolutevalue{U_{e2}}^{2}\absolutevalue{U_{e3}}^{2}\cos{2(\gamma_{1}-\beta_{1})}.$
(45)
Whereas for the inverted hierarchy case it is,
$\absolutevalue{m_{\nu
ee}}^{2}_{\text{inv}}=m_{1}^{2}\absolutevalue{U_{e1}}^{4}+m_{2}^{2}\absolutevalue{U_{e2}}^{4}+2m_{1}m_{2}\absolutevalue{U_{e1}}^{2}\absolutevalue{U_{e2}}^{2}\cos{2\beta_{1}}.$
(46)
From the expressions in Eqs.(45-46), the determination of
$\absolutevalue{m_{\nu ee}}_{\text{norm},\text{inv}}$ is sufficient to
identify a single Majorana type-phase, provided the other moduli of the PMNS
matrix elements in Eq.(5) and the two non-zero neutrino masses are extracted
from the neutrino oscillation experiments.
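The two expressions can be evaluated directly. The sketch below (with hypothetical masses, PMNS moduli, and phases, not fitted values) computes Eqs.(45) and (46) and checks that each $\absolutevalue{m_{\nu ee}}^{2}$ stays within its phase-independent bounds:

```python
import math

# Illustrative inputs (assumed values, not fits)
Ue1_sq, Ue2_sq, Ue3_sq = 0.68, 0.30, 0.02   # |U_{e i}|^2 (assumed)
beta1, gamma1 = 0.3, 1.1                    # hypothetical type-phases

# Eq.(45), normal hierarchy with m1 = 0: only gamma1 - beta1 enters
m2, m3 = 0.0086, 0.0506                     # eV (assumed)
m_ee_sq_norm = (m2**2*Ue2_sq**2 + m3**2*Ue3_sq**2
                + 2*m2*m3*Ue2_sq*Ue3_sq*math.cos(2*(gamma1 - beta1)))

# Eq.(46), inverted hierarchy with m3 = 0: only beta1 enters
m1i, m2i = 0.0492, 0.0499                   # eV (assumed)
m_ee_sq_inv = (m1i**2*Ue1_sq**2 + m2i**2*Ue2_sq**2
               + 2*m1i*m2i*Ue1_sq*Ue2_sq*math.cos(2*beta1))
```

Each quantity is of the form $|x+y\,e^{i\phi}|^{2}$, so it is bounded between $(x-y)^{2}$ and $(x+y)^{2}$ as the phase varies.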
Next, we investigate the time evolution of lepton number when the lightest
neutrino is massless, as in Eqs.(37-38). From Eq.(28), for the normal
hierarchy case, the second-order time derivative of the total lepton number
is related to a combination of the two non-vanishing masses:
$\frac{1}{4}\frac{d^{2}}{dt^{2}}\bra*{\sigma}L^{M}(t)\ket*{\sigma}|_{t=0}=-m^{2}_{2}\absolutevalue{U_{\sigma
2}}^{2}-m^{2}_{3}\absolutevalue{U_{\sigma 3}}^{2}.$ (47)
Then we can rewrite the second-order time derivatives of the electron, muon,
and tauon lepton numbers of Eqs.(32-34) as,
$\displaystyle\frac{d^{2}}{dt^{2}}\bra*{e}L_{e}^{M}(t)\ket*{e}|_{t=0}=\frac{d^{2}}{dt^{2}}\bra*{e}L^{M}(t)\ket*{e}|_{t=0}-4m_{2}m_{3}\absolutevalue{U_{e2}U^{\ast}_{e3}}^{2}\cos(2\beta_{1}-2\gamma_{1}),$
(48)
$\displaystyle\frac{d^{2}}{dt^{2}}\bra*{e}L_{\mu}^{M}(t)\ket*{e}|_{t=0}=-2\sum_{i=2}^{3}\absolutevalue{U_{\mu
i}}^{2}\absolutevalue{U_{ei}}^{2}m_{i}^{2}-4\absolutevalue{U_{\mu
2}^{\ast}U_{\mu
3}U_{e2}^{\ast}U_{e3}}\cos(\beta_{1}-\gamma_{1}+\beta_{2}-\gamma_{2})m_{2}m_{3},$
(49)
$\displaystyle\frac{d^{2}}{dt^{2}}\bra*{e}L_{\tau}^{M}(t)\ket*{e}|_{t=0}=-2\sum_{i=2}^{3}\absolutevalue{U_{\tau
i}}^{2}\absolutevalue{U_{ei}}^{2}m_{i}^{2}-4\absolutevalue{U_{\tau
2}^{*}U_{\tau
3}U_{e2}^{*}U_{e3}}\cos(\beta_{1}-\gamma_{1}+\beta_{3}-\gamma_{3})m_{2}m_{3}.$
(50)
The second-order time derivative for the normal hierarchy in Eq.(48) can be
used to determine one of the Majorana type-phases, $\gamma_{1}-\beta_{1}$.
The other two Majorana type-phases, $\gamma_{i}-\beta_{i}$ $(i=2,3)$, can
then be determined through Eq.(44) by knowing the arguments of the quartets
in Eqs.(40-43). This allows us to learn the orientation of triangle 3 in
Fig.2. Alternatively, with the three second-order time derivatives of
Eqs.(48-50) one can determine all three Majorana type-phases
$\gamma_{i}-\beta_{i}$ $(i=1,2,3)$.
In the inverted hierarchy case $m_{3}$ is the lightest neutrino mass and the
second-order time derivative of the total lepton number becomes,
$\frac{1}{4}\frac{d^{2}}{dt^{2}}\bra*{\sigma}L^{M}(t)\ket*{\sigma}|_{t=0}=-m^{2}_{2}\absolutevalue{U_{\sigma
2}}^{2}-m^{2}_{1}\absolutevalue{U_{\sigma 1}}^{2}.$ (51)
The major difference between the normal hierarchy of Eq.(47) and the inverted
hierarchy of Eq.(51) is which masses multiply the PMNS matrix elements in the
last terms. The second-order time derivatives become,
$\displaystyle\frac{d^{2}}{dt^{2}}\bra*{e}L_{e}^{M}(t)\ket*{e}|_{t=0}=\frac{d^{2}}{dt^{2}}\bra*{e}L^{M}(t)\ket*{e}|_{t=0}-4m_{1}m_{2}\absolutevalue{U_{e1}U^{\ast}_{e2}}^{2}\cos(2\beta_{1}),$
(52)
$\displaystyle\frac{d^{2}}{dt^{2}}\bra*{e}L_{\mu}^{M}(t)\ket*{e}|_{t=0}=-2\sum_{i}^{2}\absolutevalue{U_{\mu
i}}^{2}\absolutevalue{U_{ei}}^{2}m_{i}^{2}-4\absolutevalue{U_{\mu
1}^{\ast}U_{\mu 2}U_{e1}^{\ast}U_{e2}}\cos(\beta_{1}+\beta_{2})m_{1}m_{2},$
(53)
$\displaystyle\frac{d^{2}}{dt^{2}}\bra*{e}L_{\tau}^{M}(t)\ket*{e}|_{t=0}=-2\sum_{i}^{2}\absolutevalue{U_{\tau
i}}^{2}\absolutevalue{U_{ei}}^{2}m_{i}^{2}-4\absolutevalue{U_{\tau
1}^{\ast}U_{\tau 2}U_{e1}^{\ast}U_{e2}}\cos(\beta_{1}+\beta_{3})m_{1}m_{2}.$
(54)
Similar to the normal hierarchy equations of Eqs.(48-50), the second-order
time derivatives of the inverted hierarchy in Eqs.(52-54) allow for a complete
determination of the Majorana type-phases $\beta_{i}$ and the orientation of
triangle 1 in Fig.2.
### 5.2 Numerical illustration for two neutrino generations
In this subsection, we illustrate the analytical relations among the lightest
neutrino mass, a Majorana type-phase, and the second-order time derivatives
of lepton numbers, using a two generation model. In this model there are only
two Majorana type-phases, $\beta_{1}$ and $\beta_{2}$, defined in Eq.(6).
They are not independent, because the unitarity relation
$U_{e1}U_{e2}^{\ast}=-U_{\mu 1}U_{\mu 2}^{\ast},$ (55)
holds. It leads to the relation between the two Majorana type-phases,
$\beta_{1}=\arg(U_{e1}U_{e2}^{\ast})=\arg(U_{\mu 1}U_{\mu
2}^{\ast})-\pi=\beta_{2}-\pi.$ (56)
Adopting the following parametrization for a 2 by 2 unitary PMNS matrix,
$\begin{pmatrix}\cos\theta_{12}&\sin\theta_{12}e^{i\frac{\alpha_{21}}{2}}\\\
-\sin\theta_{12}&\cos\theta_{12}e^{i\frac{\alpha_{21}}{2}}\\\ \end{pmatrix},$
(57)
the Majorana type-phase $\beta_{1}$ is related to the Majorana phase as
$\beta_{1}=-\frac{\alpha_{21}}{2}$. In contrast to the three generation
model, one cannot write $\tan^{2}\theta_{12}$ in terms of the Majorana
type-phase.
In this model, the lightest neutrino mass $m_{1}$ and the Majorana type-phase
$\beta_{1}$ are unknown parameters to be determined from the time evolution
of lepton numbers. We assume that the mass squared difference $\Delta
m^{2}_{21}$ and the mixing angle $\theta_{12}$ are measured by oscillation
experiments. The time evolution of the electron and muon numbers in the two
generation model is given by,
$\begin{split}\bra{e}L_{e}(t)\ket{e}&=c_{12}^{4}\left(1-\frac{2m_{1}^{2}\sin^{2}(E_{1}t)}{E_{1}^{2}}\right)+s_{12}^{4}\left(1-\frac{2m_{2}^{2}\sin^{2}(E_{2}t)}{E_{2}^{2}}\right)\\\
&\quad+s_{12}^{2}c_{12}^{2}\left\\{\left(1+\frac{q^{2}-m_{1}m_{2}\cos(2\beta_{1})}{E_{1}E_{2}}\right)\cos\\{(E_{1}-E_{2})t\\}\right.\\\
&\quad\left.+\left(1-\frac{q^{2}-m_{1}m_{2}\cos(2\beta_{1})}{E_{1}E_{2}}\right)\cos\\{(E_{1}+E_{2})t\\}\right\\},\end{split}$
(58)
$\displaystyle\begin{split}\bra{e}L_{\mu}(t)\ket{e}=&c_{12}^{2}s_{12}^{2}\left(\left(1-\frac{2m_{1}^{2}\sin^{2}(E_{1}t)}{E_{1}^{2}}\right)+\left(1-\frac{2m_{2}^{2}\sin^{2}(E_{2}t)}{E_{2}^{2}}\right)\right)\\\
&-s_{12}^{2}c_{12}^{2}\left\\{\left(1+\frac{q^{2}-m_{1}m_{2}\cos(2\beta_{1})}{E_{1}E_{2}}\right)\cos\\{(E_{1}-E_{2})t\\}\right.\\\
&\left.+\left(1-\frac{q^{2}-m_{1}m_{2}\cos(2\beta_{1})}{E_{1}E_{2}}\right)\cos\\{(E_{1}+E_{2})t\\}\right\\}.\end{split}$
(59)
One can also compute the expectation values of the total lepton number
$L(t)=L_{e}(t)+L_{\mu}(t)$ and the difference
$L_{e-\mu}(t)=L_{e}(t)-L_{\mu}(t)$. A straightforward calculation leads to
the second-order time derivative of the total lepton number at $t=0$:
$\frac{d^{2}}{dt^{2}}\bra{e}L(t)\ket{e}\Bigr{|}_{t=0}=-4(m_{1}^{2}c_{12}^{2}+m_{2}^{2}s_{12}^{2}).$
(60)
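As a numerical consistency check of Eq.(60) against Eqs.(58-59), a central second difference of $L(t)=L_{e}(t)+L_{\mu}(t)$ at $t=0$ reproduces $-4(m_{1}^{2}c_{12}^{2}+m_{2}^{2}s_{12}^{2})$; the $\beta_{1}$-dependent interference terms cancel in the sum. This is a sketch with assumed parameter values, with $t$ in natural units of $\text{eV}^{-1}$:

```python
import math

# Assumed illustrative parameters (t is in natural units of 1/eV)
q = 0.002                      # momentum in eV, as in Fig. 4
m1 = 0.01                      # lightest mass in eV (assumed)
dm21_sq = 7.42e-5              # Delta m^2_21 in eV^2
m2 = math.sqrt(m1**2 + dm21_sq)
s12 = 0.551
c12 = math.sqrt(1.0 - s12**2)
beta1 = 0.4                    # hypothetical Majorana type-phase
E1 = math.sqrt(q**2 + m1**2)
E2 = math.sqrt(q**2 + m2**2)
K = (q**2 - m1*m2*math.cos(2*beta1)) / (E1*E2)

def osc(t):
    # the interference bracket common to Eqs.(58) and (59)
    return (1+K)*math.cos((E1-E2)*t) + (1-K)*math.cos((E1+E2)*t)

def L_e(t):    # Eq.(58)
    return (c12**4*(1 - 2*m1**2*math.sin(E1*t)**2/E1**2)
            + s12**4*(1 - 2*m2**2*math.sin(E2*t)**2/E2**2)
            + s12**2*c12**2*osc(t))

def L_mu(t):   # Eq.(59)
    return (c12**2*s12**2*((1 - 2*m1**2*math.sin(E1*t)**2/E1**2)
                           + (1 - 2*m2**2*math.sin(E2*t)**2/E2**2))
            - s12**2*c12**2*osc(t))

def L_tot(t):
    return L_e(t) + L_mu(t)

h = 1.0  # step in 1/eV; E_i*h << 1, so the second difference is accurate
d2L_num = (L_tot(h) - 2.0*L_tot(0.0) + L_tot(-h)) / h**2
d2L_eq60 = -4.0*(m1**2*c12**2 + m2**2*s12**2)  # Eq.(60)
```

At $t=0$ the initial conditions $L_{e}(0)=1$, $L_{\mu}(0)=0$ also hold, as they must for an initial electron-flavor state.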
Figure 4 shows that the total lepton number decreases more sharply near
$t=0$ for larger values of the lightest neutrino mass $m_{1}$.
Figure 4: The time dependence of the total lepton number for different
lightest neutrino masses. The momentum is $q=0.002$ eV. The dashed line shows
the case $m_{1}=0.01$ eV and the solid line the case $m_{1}=0.02$ eV. We use
$\Delta m^{2}_{21}=7.42\times 10^{-5}\text{eV}^{2}$ and
$\sin(\theta_{12})=0.551$.
This can also be understood by solving for the lightest neutrino mass in
terms of the second derivative in Eq.(60), the mixing angle, and the mass
squared difference:
$m_{1}^{2}=-\frac{1}{4}\frac{d^{2}}{dt^{2}}\bra{e}L(t)\ket{e}\Bigr{|}_{t=0}-\Delta
m^{2}_{21}s_{12}^{2}.$ (61)
The second-order time derivative of the electron minus muon number
$L_{e-\mu}$ at $t=0$ is given by,
$\frac{d^{2}}{dt^{2}}\bra{e}L_{e-\mu}(t)\ket{e}\Bigr{|}_{t=0}=-4\absolutevalue{m_{\nu
ee}}^{2}_{\text{two gene.}},$ (62)
where $\absolutevalue{m_{\nu ee}}^{2}_{\text{two gene.}}$ takes the same form
as the inverted hierarchy formula of Eq.(46) with $m_{3}=0$. Substituting the
mixing-angle expressions of Eq.(57) for $|U_{ei}|$ ($i=1,2$), it becomes,
$\absolutevalue{m_{\nu ee}}^{2}_{\text{two
gene.}}=m_{1}^{2}{c_{12}}^{4}+m_{2}^{2}{s_{12}}^{4}+2m_{1}m_{2}{c_{12}}^{2}{s_{12}}^{2}\cos{2\beta_{1}}.$
(63)
Eq.(62) tells us that $L_{e-\mu}$ decreases more sharply for larger
$\absolutevalue{m_{\nu ee}}$. Fig.(5) shows the dependence of $L_{e-\mu}$ on
the Majorana type-phase $\beta_{1}$. Because $\absolutevalue{m_{\nu ee}}$ is
largest for $\beta_{1}=0$ and smallest for $2\beta_{1}=\pi$, as shown in
Eq.(63), Fig.(5) numerically confirms the dependence on
$\absolutevalue{m_{\nu ee}}$ in Eq.(62).
Figure 5: The time dependence of $L_{e-\mu}=L_{e}-L_{\mu}$ for different
choices of the Majorana type-phase $\beta_{1}$. We take $q=0.002$ eV,
$m_{1}=0.01$ eV, and $\sin(\theta_{12})=0.551$. The dashed line shows the
case $\beta_{1}=0$, the dot-dashed line the case
$2\beta_{1}=\frac{\pi}{2}$, and the solid line the case $2\beta_{1}=\pi$.
$\Delta m^{2}_{21}$ is the same as in Fig.4.
One can also solve for the Majorana type-phase $\beta_{1}$ using the
second-order derivative of Eq.(62), the mixing angle, and the mass squared
difference:
$\cos(2\beta_{1})=\frac{-\frac{1}{4}\frac{d^{2}}{dt^{2}}\bra{e}L_{e-\mu}(t)\ket{e}\bigr{|}_{t=0}-(m_{1}^{2}c_{12}^{4}+m_{2}^{2}s_{12}^{4})}{2m_{1}m_{2}c_{12}^{2}s_{12}^{2}}.$
(64)
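Eqs.(62)-(64) can be exercised as a round trip: generate a "measured" $\ddot{L}_{e-\mu}|_{t=0}$ from an assumed $\beta_{1}$ via Eqs.(62)-(63), then recover $\cos(2\beta_{1})$ through Eq.(64). A sketch with illustrative inputs:

```python
import math

# Assumed two-generation inputs (illustration only)
m1 = 0.01                       # lightest mass in eV (assumed)
dm21_sq = 7.42e-5               # Delta m^2_21 in eV^2
m2 = math.sqrt(m1**2 + dm21_sq)
s12 = 0.551
c12 = math.sqrt(1.0 - s12**2)
beta1 = 0.4                     # hypothetical phase to be recovered

# Eq.(63): |m_nu_ee|^2 in the two-generation model
m_ee_sq = (m1**2*c12**4 + m2**2*s12**4
           + 2*m1*m2*c12**2*s12**2*math.cos(2*beta1))

# Eq.(62): the "observable" second derivative of L_{e-mu} at t = 0
d2L_emu = -4.0*m_ee_sq

# Eq.(64): invert for cos(2 beta1) from the observable
cos_2beta1 = ((-0.25*d2L_emu - (m1**2*c12**4 + m2**2*s12**4))
              / (2*m1*m2*c12**2*s12**2))
```

The inversion returns $\cos(2\beta_{1})$ of the input phase; the sign ambiguity of $\beta_{1}$ itself is not resolved by this observable alone.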
Eqs.(61) and (64) can be used to determine the lightest neutrino mass and the
Majorana type-phase once the second derivatives $\ddot{L}|_{t=0}$ and
$\ddot{L}_{e-\mu}|_{t=0}$ are measured.
## 6 Concluding Remarks
We have investigated an approach to extract the Majorana type-phases of
Branco and Rebelo using the time evolution of lepton numbers. The specific
combinations of Majorana type-phases belonging to the same triangle are
related to the orientation of the unitarity triangles of the PMNS matrix and
to the Majorana phases $\alpha_{21}$ and $\alpha_{31}$. After taking the
second-order time derivative of the lepton number expectation values, the
dependencies on sums of Majorana type-phases, i.e., $\beta_{i}+\beta_{j}$ and
$\gamma_{i}+\gamma_{j}$, can be determined, allowing the extraction of the
orientation of the unitarity triangles of the PMNS matrix and of the Majorana
phases.
We view our result as complementary to the use of neutrinoless double-beta
decay for determining the orientation of the unitarity triangles of the PMNS
matrix and the Majorana phases. We also show that the time derivative of the
total lepton number is sensitive to the lightest neutrino mass. The above
features are also demonstrated numerically for a two generation toy model.
In the future, we are interested in identifying a possible experimental setup
to measure the quantities discussed.
### Acknowledgement
This work is supported by Japan Society for the Promotion of Science (JSPS)
KAKENHI Grant Number JP17K05418 (T.M) and JP21K13923(K.Y). One author, N.J.B,
would like to express thanks to the Japanese government Ministry of Education,
Culture, Sports, Science, and Technology (MEXT) for the financial support
during the writing of this work.
## References
* [1] S. Schael et al. [ALEPH, DELPHI, L3, OPAL, SLD, LEP Electroweak Working Group, SLD Electroweak Group and SLD Heavy Flavour Group], Phys. Rept. 427, 257-454 (2006)
* [2] Y. Fukuda et al. [Super-Kamiokande], Phys. Rev. Lett. 81, 1562-1567 (1998)
* [3] Q. R. Ahmad et al. [SNO], Phys. Rev. Lett. 89, 011301 (2002)
* [4] B. Pontecorvo, Zh. Eksp. Teor. Fiz. 53, 1717-1725 (1967)
* [5] V. N. Gribov and B. Pontecorvo, Phys. Lett. B 28, 493 (1969)
* [6] E. Majorana, Nuovo Cim. 14, 171-184 (1937)
* [7] G. Racah, Nuovo Cim. 14, 322-328 (1937)
* [8] W. H. Furry, Phys. Rev. 54, 56-67 (1938)
* [9] W. H. Furry, Phys. Rev. 56, 1184-1193 (1939)
* [10] J. Schechter and J. W. F. Valle, Phys. Rev. D 25, 2951 (1982) doi:10.1103/PhysRevD.25.2951
* [11] J. F. Nieves, Phys. Lett. B 147, 375-379 (1984)
* [12] E. Takasugi, Phys. Lett. B 149, 372-376 (1984)
* [13] M. J. Dolinski, A. W. P. Poon and W. Rodejohann, Ann. Rev. Nucl. Part. Sci. 69, 219-251 (2019)
* [14] S. Umehara, T. Kishimoto, I. Ogawa, R. Hazama, H. Miyawaki, S. Yoshida, K. Matsuoka, K. Kishimoto, A. Katsuki and H. Sakai, et al. Phys. Rev. C 78, 058501 (2008)
* [15] M. Agostini et al. [GERDA], Phys. Rev. Lett. 125, no.25, 252502 (2020)
* [16] S. I. Alvis et al. [Majorana], Phys. Rev. C 100, no.2, 025501 (2019)
* [17] A. S. Barabash et al. [NEMO], Phys. Atom. Nucl. 74, 312-317 (2011) doi:10.1134/S1063778811020062 [arXiv:1002.2862 [nucl-ex]].
* [18] R. Arnold et al. [NEMO-3], Phys. Rev. D 92, no.7, 072011 (2015)
* [19] R. Arnold et al. [NEMO-3], Phys. Rev. D 93, no.11, 112008 (2016)
* [20] R. Arnold et al. [NEMO-3], Phys. Rev. D 94, no.7, 072003 (2016)
* [21] R. Arnold et al. [NEMO-3], Phys. Rev. D 95, no.1, 012007 (2017)
* [22] S. Abe et al. [KamLAND-Zen],
* [23] G. Anton et al. [EXO-200], Phys. Rev. Lett. 123, no.16, 161802 (2019)
* [24] R. Arnold et al. [NEMO-3], Phys. Rev. Lett. 119, no.4, 041801 (2017)
* [25] R. G. Winter, Phys. Rev. 100, 142-144 (1955) doi:10.1103/PhysRev.100.142
* [26] J. Bernabeu, A. De Rujula and C. Jarlskog, Nucl. Phys. B 223, 15-28 (1983) doi:10.1016/0550-3213(83)90089-5
* [27] Z. Sujkowski and S. Wycech, Phys. Rev. C 70, 052501 (2004)
* [28] D. Q. Adams et al. [CUORE], Phys. Rev. C 105, 065504 (2022)
* [29] K. Abe et al. [XMASS], PTEP 2018, no.5, 053D03 (2018)
* [30] K. Blaum, S. Eliseev, F. A. Danevich, V. I. Tretyak, S. Kovalenko, M. I. Krivoruchenko, Y. N. Novikov and J. Suhonen, Rev. Mod. Phys. 92, 045007 (2020)
* [31] J. A. Grifols, E. Masso and R. Toldra, Phys. Lett. B 389, 563-565 (1996)
* [32] A. Segarra and J. Bernabéu, Phys. Rev. D 101, no.9, 093004 (2020)
* [33] A. Costantino and S. Fichet, JHEP 09, 122 (2020) doi:10.1007/JHEP09(2020)122 [arXiv:2003.11032 [hep-ph]].
* [34] A. Millar, G. Raffelt, L. Stodolsky and E. Vitagliano, Phys. Rev. D 98, no.12, 123006 (2018)
* [35] J. F. Nieves and P. B. Pal, Phys. Rev. D 32, 1849-1852 (1985)
* [36] T. Chhabra and P. R. Babu, Phys. Rev. D 46, 903-909 (1992)
* [37] B. Kayser and R. E. Shrock, Phys. Lett. B 112, 137-142 (1982) doi:10.1016/0370-2693(82)90314-8
* [38] C. S. Kim, M. V. N. Murthy and D. Sahoo, Phys. Rev. D 105, no.11, 113006 (2022)
* [39] K. Abe et al. (Super-Kamiokande), Phys. Rev. D 83, 052010 (2011)
* [40] B. Aharmim et al. (SNO), Phys. Rev. C 81, 055504 (2010)
* [41] B. Pontecorvo, Sov. Phys. JETP 7, 172 (1958) [Zh. Eksp. Teor. Fiz. 34, 247 (1957)].
* [42] Z. Maki, M. Nakagawa and S. Sakata, Prog. Theor. Phys. 28, 870 (1962).
* [43] C. Giunti and C. W. Kim, “Fundamentals of Neutrino Physics and Astrophysics,”
* [44] T. Hahn, [arXiv:physics/0607103 [physics]].
* [45] P.A. Zyla et al. [Particle Data Group], PTEP 2020, no.8, 083C01 (2020)
* [46] C. Jarlskog, Phys. Rev. Lett. 55, 1039 (1985)
* [47] G. C. Branco, L. Lavoura and M. N. Rebelo, Phys. Lett. B 180, 264-268 (1986)
* [48] J. F. Nieves and P. B. Pal, Phys. Rev. D 36, 315 (1987)
* [49] F. Feruglio, C. Hagedorn and R. Ziegler, JHEP 07, 027 (2013) [arXiv:1211.5560 [hep-ph]].
* [50] G. C. Branco and M. N. Rebelo, Phys. Rev. D 79, 013001 (2009) doi:10.1103/PhysRevD.79.013001 [arXiv:0809.2799 [hep-ph]].
* [51] J. A. Aguilar-Saavedra and G. C. Branco, Phys. Rev. D 62, 096009 (2000) doi:10.1103/PhysRevD.62.096009 [arXiv:hep-ph/0007025 [hep-ph]].
* [52] I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, T. Schwetz and A. Zhou, JHEP 09, 178 (2020) doi:10.1007/JHEP09(2020)178 [arXiv:2007.14792 [hep-ph]], NuFIT 5.1 (2021) www.nu-fit.org.
* [53] M. Fukugita and T. Yanagida, Phys. Lett. B 174, 45-47 (1986) doi:10.1016/0370-2693(86)91126-3
* [54] T. Endoh, T. Morozumi and Z. h. Xiong, Prog. Theor. Phys. 111, 123-149 (2004) doi:10.1143/PTP.111.123 [arXiv:hep-ph/0308276 [hep-ph]].
* [55] T. Fujihara, S. Kaneko, S. K. Kang, D. Kimura, T. Morozumi and M. Tanimoto, Phys. Rev. D 72, 016006 (2005) doi:10.1103/PhysRevD.72.016006 [arXiv:hep-ph/0505076 [hep-ph]].
* [56] A. S. Adam, N. J. Benoit, Y. Kawamura, Y. Matsuo, T. Morozumi, Y. Shimizu, Y. Tokunaga and N. Toyota, PTEP 2021, 5 (2021) doi:10.1093/ptep/ptab025 [arXiv:2101.07751 [hep-ph]].
* [57] A. S. Adam, N. J. Benoit, Y. Kawamura, Y. Matsuo, T. Morozumi, Y. Shimizu and N. Toyota, [arXiv:2106.02783 [hep-ph]].
* [58] E. Ma, D. P. Roy and U. Sarkar, Phys. Lett. B 444, 391-396 (1998) doi:10.1016/S0370-2693(98)01395-1 [arXiv:hep-ph/9810309 [hep-ph]].
# Statistical Chronometry of Meteorites: II. Initial Abundances and
Homogeneity of Short-lived Radionuclides
Steven J. Desch Daniel R. Dunlap Curtis D. Williams Prajkta Mane Emilie T.
Dunham
###### Abstract
Astrophysical models of planet formation require accurate radiometric dating
of meteoritic components by short-lived (Al-Mg, Mn-Cr, Hf-W) and long-lived
(Pb-Pb) chronometers, to develop a timeline of such events in the solar nebula
as formation of Ca-rich, Al-rich Inclusions (CAIs), chondrules, planetesimals,
etc. CAIs formed mostly around a time (“$t\\!\\!=\\!\\!0$”) when the short-
lived radionuclide ${}^{26}{\rm Al}$ ($t_{1/2}=0.72$ Myr) was present and
presumably homogeneously distributed at a known level we define as
$(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{\rm SS}\equiv 5.23\times
10^{-5}$. The time of formation after $t\\!\\!=\\!\\!0$ of another object can
be found by determining its initial $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{0}$ ratio and comparing it to $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{\rm SS}$. Dating of meteoritic objects using the Mn-Cr or Hf-W systems
is hindered because the abundances $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm SS}$ and $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$
at $t\\!\\!=\\!\\!0$ are not known precisely. To constrain these quantities,
we compile literature Al-Mg, Mn-Cr, Hf-W and Pb-Pb data for 14 achondrites and
use novel statistical techniques to minimize the discrepancies between their
times of formation across these systems. We find that for $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{\rm SS}=(8.09\pm 0.65)\times 10^{-6}$,
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}=(10.42\pm 0.25)\times
10^{-5}$, $t_{\rm SS}=4568.36\pm 0.20\,{\rm Myr}$, and a ${}^{53}{\rm Mn}$
half-life of $3.80\pm 0.23$ Myr, these four free parameters make concordant 37
out of 38 formation times recorded by the different systems in 14 achondrites.
These parameters also make concordant the ages derived for chondrules from
CB/CH chondrites, formed simultaneously in an impact, and are apparently
concordant with the I-Xe chronometer as well. Our findings provide very strong
support for homogeneity of ${}^{26}{\rm Al}$, ${}^{53}{\rm Mn}$, and
${}^{182}{\rm Hf}$ in the solar nebula, and our approach offers a framework
for more precise chronometry.
###### keywords:
Solar System formation 1530 , Planet formation 1241 , Meteorites 1038 ,
Achondrites 15 , Chondrites 228
Journal: Icarus
[inst1]organization=School of Earth and Space Exploration, Arizona State
University,addressline=PO Box 871404, city=Tempe, postcode=85287-1404,
state=Arizona, country=USA
[inst2]organization=Oak Ridge National Laboratory, addressline=1 Bethel Valley
Rd, city=Oak Ridge, postcode=37830, state=Tennessee, country=USA
[inst3]organization=Earth and Planetary Sciences Department, University of
California, Davis,addressline=One Shields Ave., city=Davis, postcode=95616,
state=California, country=USA
[inst4]organization=Lunar and Planetary Institute, USRA, addressline=3600 Bay
Area Blvd., city=Houston, postcode=77058, state=Texas, country=USA
[inst6]organization=Department of Earth, Planetary and Space Sciences,
University of California, Los Angeles, addressline=PO Box 951567, city=Los
Angeles, postcode=90095-1567, state=California, country=USA
We present a new method for combining and averaging data from the Al-Mg, Mn-
Cr, Hf-W, and Pb-Pb radiometric dating systems, to attain greater accuracy
and precision in the initial $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm
SS}$ and $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$ ratios and
in the Pb-Pb age $t_{\rm SS}$ of “$t\\!\\!=\\!\\!0$” in the Solar System,
when $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{\rm SS}\equiv 5.23\times
10^{-5}$; and to better assess concordancy.
In meteorites and components where it is expected, we find substantial
concordancy between the times of formation measured by the different isotopic
systems, provided $t_{\rm SS}=4568.36\pm 0.20$ Myr, $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{\rm SS}=(8.09\pm 0.65)\times 10^{-6}$, and
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}=(10.42\pm 0.25)\times
10^{-5}$, and the ${}^{53}{\rm Mn}$ half-life is $\approx 3.80\pm 0.23$ Myr;
this strongly implies homogeneity of ${}^{26}{\rm Al}$, ${}^{53}{\rm Mn}$, and
${}^{182}{\rm Hf}$ in the solar nebula from early times.
## 1 Introduction
### 1.1 Times of Formation from the Al-Mg and Pb-Pb Systems
To learn about the birth of planets and the events of the Solar System’s first
few million years, we study meteorites that bear witness to this era. It is
especially important to constrain the times at which the components in
meteorites formed, the times at which their parent bodies accreted and melted,
and when these bodies collided. Most importantly, it is vital to constrain the
relative order or sequence of events within the solar nebula. The goal is to
find the time $\Delta t$ after $t\\!\\!=\\!\\!0$ that an event occurred, where
$t\\!\\!=\\!\\!0$ is a defined event or time in the Solar System history.
To obtain these times $\Delta t$, radiometric dating systems such as the Al-Mg
system are employed. Ca-rich, Al-rich Inclusions (CAIs) are thought to be the
first solids formed in the Solar System. When they formed, they incorporated
live ${}^{26}{\rm Al}$, a short-lived radionuclide (SLR) that decays to
${}^{26}{\rm Mg}$ with a half-life of 0.717 Myr, or mean-life
$\tau_{26}=1.034$ Myr (Auer et al., 2009; Kondev, 2021). Although extinct now,
its one-time existence can be inferred by taking a linear regression of the
measured values of $y={}^{26}{\rm Mg}/{}^{24}{\rm Mg}$ and $x={}^{27}{\rm
Al}/{}^{24}{\rm Mg}$ isotopic ratios in different minerals within the same
CAI. The slope of this correlation, if it is linear, yields
$(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$ within the CAI at the time it
formed and achieved isotopic closure. A large fraction of CAIs appear to have
formed from a reservoir with ${}^{26}{\rm Al}/{}^{27}{\rm Al}$ near a
canonical ratio that we take as $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{\rm SS}\equiv 5.23\times 10^{-5}$ (Jacobsen et al., 2008). This
strongly suggests that ${}^{26}{\rm Al}$ was homogeneously distributed in the
solar nebula from a very early time. Assuming homogeneity of ${}^{26}{\rm
Al}$, the time of formation of a CAI, as recorded by the Al-Mg system, can be
calculated as
$\Delta t_{26}=\tau_{26}\,\ln\left[\frac{(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{\rm SS}}{(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}}\right].$ (1)
This provides a date of formation, relative to $t\\!\\!=\\!\\!0$, which for
our purposes is defined to be that time in the solar nebula when
$(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})=(\mbox{${}^{26}{\rm
Al}/{}^{27}{\rm Al}$})_{\rm SS}$.
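Eq.(1) is a one-line computation once $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$ is inferred from the isochron slope. The sketch below implements it, using the half-life quoted above; the example ratio passed to the function is hypothetical:

```python
import math

TAU_26 = 0.717 / math.log(2.0)   # 26Al mean-life in Myr (~1.034 Myr)
AL_SS = 5.23e-5                  # canonical (26Al/27Al)_SS at t = 0

def delta_t26(al_ratio_0):
    """Eq.(1): time of formation after t=0, in Myr, from the inferred
    initial (26Al/27Al)_0 of an object."""
    return TAU_26 * math.log(AL_SS / al_ratio_0)

# A hypothetical object with one tenth the canonical ratio formed
# ln(10)*tau ~ 2.4 Myr after t = 0.
```

An object with the canonical ratio returns $\Delta t_{26}=0$ by construction, and smaller initial ratios map logarithmically onto later formation times.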
An alternative method is to use the Pb-Pb system to calculate the absolute age
of a sample. Measurement of the isotopic ratios $x={}^{204}{\rm
Pb}/{}^{206}{\rm Pb}$ and $y={}^{207}{\rm Pb}/{}^{206}{\rm Pb}$ in different
leachates, washes, or residues derived from acid dissolution of a sample can
be linearly regressed; the intercept of this regression, combined with a
measurement of ${}^{238}{\rm U}/{}^{235}{\rm U}$ in the bulk sample, yields a
number that is a function only of the age of the sample, which we denote
$t_{\rm Pb}$. As we discuss in a companion paper (Desch et al. 2023; hereafter
Paper I), this absolute age by itself is not a quantity that astrophysical
models of planet formation can make use of; what matters is the sequence of
events in the first few Myr of the solar nebula, not how long ago that
sequence took place. Moreover, due to uncertainties in the half-lives of
${}^{235}{\rm U}$ and ${}^{238}{\rm U}$, absolute ages are intrinsically
uncertain by $\pm 9(2\sigma)$ Myr (Tissot et al., 2017). However, these
systematic uncertainties largely cancel when taking the difference between two
Pb-Pb ages, and typically the Pb-Pb system can be used as a relative
chronometer with precision of 0.3-0.5 Myr determined solely by measurement
uncertainties (Amelin, 2006; Tissot et al., 2017). The Pb-Pb ages of samples
can be converted into $\Delta t_{\rm Pb}$, the time of formation after
$t\\!\\!=\\!\\!0$:
$\Delta t_{\rm Pb}=t_{\rm SS}-t_{\rm Pb}.$ (2)
Here, $t_{\rm SS}$ is the Pb-Pb age of a sample that would be found if it
achieved isotopic closure at $t\\!\\!=\\!\\!0$ [when ${}^{26}{\rm
Al}/{}^{27}{\rm Al}$ = $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{\rm SS}$],
assuming the same uranium half-lives that are typically assumed (703.81 Myr
for ${}^{235}{\rm U}$ and 4468.3 Myr for ${}^{238}{\rm U}$; Jaffey et al.
1971, Villa et al. 2016).
The most commonly accepted way to determine the value $t_{\rm SS}$ is by
direct measurement of the Pb-Pb ages of CAIs, which are presumed to have
achieved isotopic closure of the Pb-Pb system at the same time
($t\\!\\!=\\!\\!0$) as the Al-Mg system. The most commonly cited value is that
of Connelly et al. (2012), who averaged data from four CAIs to find
$4567.30\pm 0.16$ Myr. Some CAIs appear older. Bouvier and Wadhwa (2010) found
one CAI to have a Pb-Pb age of $4568.2\pm 0.2$ Myr, although this was not
based on a direct measurement of ${}^{238}{\rm U}/{}^{235}{\rm U}$ in the
sample. Bouvier et al. (2011a) reported one with an age of $4568.0\pm 0.3$
Myr, but not in the refereed literature. These suggest that perhaps not all
CAIs achieved isotopic closure of the Pb-Pb system at the same time; perhaps
none of them achieved isotopic closure at $t\\!\\!=\\!\\!0$.
One goal of Paper I was to determine $t_{\rm SS}$, not by appealing to direct
measurements of $t_{\rm Pb}$ in CAIs, but through a statistical approach,
finding the value of $t_{\rm SS}$ that minimized the differences between
$\Delta t_{26}$ and $\Delta t_{\rm Pb}$ across a basket of appropriate samples
that rapidly cooled and were not later disturbed. If the $\Delta t_{26}$ and
$\Delta t_{\rm Pb}$ formation times of such samples cannot be reconciled, then
this falsifies the assumption underlying the use of Al-Mg systematics for
chronometry, that ${}^{26}{\rm Al}$ was homogeneous. If a range of values for
$t_{\rm SS}$ does make the formation times concordant, this strongly supports
SLR homogeneity. Using Al-Mg formation times and Pb-Pb ages for seven rapidly
cooled achondrites (D’Orbigny, SAH 99555, NWA 1670, Asuka 881394, NWA 7325,
NWA 2976 and NWA 6704), Desch et al. (2023) found that a range of values
$t_{\rm SS}=4568.42\pm 0.24$ Myr made the Al-Mg and Pb-Pb ages concordant,
with $\Delta t_{26}$ and $\Delta t_{\rm Pb}$ agreeing within errors for each
achondrite, and the fit was good in a statistical sense
($\chi_{\nu}^{2}=0.98$). Even though chondrules may be reset by transient
heating events, the Pb-Pb and Al-Mg formation times of four chondrules are
also consistent within measurement errors using the same value of $t_{\rm
SS}$; with the chondrules included, the goodness-of-fit parameter was still a
statistically significant $\mbox{$\chi_{\nu}^{2}$}=1.36$ (12% probability).
These results could have falsified the hypothesis of
homogeneous ${}^{26}{\rm Al}$ but did not. These findings strongly support
homogeneity of ${}^{26}{\rm Al}$ and the concordancy of Al-Mg and Pb-Pb
formation times.
The goal of this paper is to determine whether we can extend these results to
other isotopic systems, assessing whether other SLRs were homogeneously
distributed, and whether the formation times derived from them are concordant
with the Al-Mg and Pb-Pb formation times.
### 1.2 Times of Formation from other Isotopic Systems
There are several other isotopic systems that can be used to date meteoritic
samples, as depicted in Figure 1. In general they date different sorts of
events. The Al-Mg system usually achieves isotopic closure, at which point
${}^{26}{\rm Mg}$ ceases to diffuse significantly, after crystallization of
rocky material from a magmatic melt. Bulk excesses of ${}^{26}{\rm Mg}$ in a
sample also can be used to date the time at which the sample became a closed
reservoir, which often dates the time of silicate differentiation. In
practice, the Al-Mg system has been used to date these events as late as about
6 Myr, about 8 times the ${}^{26}{\rm Al}$ half-life.
The SLR ${}^{53}{\rm Mn}$ decays to ${}^{53}{\rm Cr}$ with a half-life of
about 3.7 Myr. The inferred initial ratio $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{0}$ can be used to date the time of magmatic crystallization, or other
processes such as carbonate formation. The SLR ${}^{182}{\rm Hf}$ decays (via
${}^{182}{\rm Ta}$) to ${}^{182}{\rm W}$ with a half-life of 8.9 Myr. The
initial $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ ratio in a sample
can be used to date magmatic crystallization or silicate differentiation, and
excesses of ${}^{182}{\rm W}$ in bulk samples can be used to date metal-
silicate separation, such as core formation. The SLR ${}^{129}{\rm I}$ decays
to ${}^{129}{\rm Xe}$ with a half-life of 16.1 Myr. The initial $({}^{129}{\rm
I}/{}^{127}{\rm I})_{0}$ ratio can be inferred and then used to date secondary
processes such as shocks, as Xe tends to remain in a sample except when
disturbed. Other SLRs that might be used as chronometers include ${}^{107}{\rm
Pd}$, which decays to ${}^{107}{\rm Ag}$ with a half-life of 6.5 Myr; and
${}^{92}{\rm Nb}$, which decays to ${}^{92}{\rm Zr}$ with a half-life of 34.7
Myr. Figure 1 depicts the timescales over which some of these chronometers are
useful, as well as the types of processes that can be dated using them. For
more information, we refer the reader to the review by Davis (2022).
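The useful range of each chronometer scales with its half-life; the rule of thumb behind Figure 1 (usefulness out to roughly eight half-lives) can be tabulated directly. A short sketch using the half-lives quoted above, plus the standard ${}^{26}{\rm Al}$ half-life of about 0.72 Myr implied by the "6 Myr, about 8 half-lives" statement:

```python
# Rough "timescale of effectiveness" (~8 half-lives) for the SLR
# chronometers discussed above; half-lives in Myr as quoted in the text,
# plus the ~0.72 Myr half-life of 26Al.
half_lives_myr = {"26Al": 0.72, "53Mn": 3.7, "182Hf": 8.9,
                  "129I": 16.1, "107Pd": 6.5, "92Nb": 34.7}

for slr, t_half in half_lives_myr.items():
    print(f"{slr:6s} useful to roughly {8 * t_half:5.1f} Myr after t=0")
```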
Figure 1: Timescales of effectiveness (roughly $8\times$ half-life) of various
short-lived chronometers, used to date early Solar System processes. The
processes that affect isotopic closure and which are dated by each system are
denoted by colors. The secondary processes relevant to ${}^{53}{\rm
Mn}-{}^{53}{\rm Cr}$ and ${}^{129}{\rm I}-{}^{129}{\rm Xe}$ systems include
aqueous and thermal alteration. The timescales of pertinent early Solar System
processes are shown in the top panel.
In principle, a determination of an initial abundance when an object formed,
such as $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$, could be used to
determine the time of formation after $t\\!\\!=\\!\\!0$, which again we define
as the time when $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})=(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{\rm SS}\equiv 5.23\times
10^{-5}$ in the solar nebula. This time of formation as determined by Mn-Cr
systematics would be
$\Delta t_{53}=\tau_{53}\,\ln\left[\frac{(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm SS}}{(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}}\right],$ (3)
where $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ would be the
isotopic ratio in the solar system at $t\\!\\!=\\!\\!0$. Unlike the case for
Al-Mg, for which a good estimate of $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{\rm SS}$ is known, the initial abundances like $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ are for practical purposes not known to
sufficient precision. Direct determinations of $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$ and especially $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ from CAI data yield values that are too
uncertain to resolve times of formation at the $<1$ Myr level; other isotopic
ratios have hardly been constrained at all.
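Equation 3 is straightforward to apply once $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ is specified. A minimal sketch follows; the mean-life is derived from the ~3.7 Myr half-life quoted above, and the ratios in the self-checks are purely illustrative, precisely because the true $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ is not well constrained:

```python
import math

TAU_53 = 3.7 / math.log(2)  # Myr; 53Mn mean-life from its ~3.7 Myr half-life

def delta_t_53(ratio_0, ratio_ss):
    """Equation 3: formation time after t=0 from an initial (53Mn/55Mn)_0."""
    return TAU_53 * math.log(ratio_ss / ratio_0)

# Sanity checks with illustrative ratios: a sample closing at t=0 has
# ratio_0 == ratio_ss, and a ratio halved means one half-life has elapsed.
assert abs(delta_t_53(6.8e-6, 6.8e-6)) < 1e-12
assert abs(delta_t_53(3.4e-6, 6.8e-6) - 3.7) < 1e-9
```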
As a result, meteoriticists have instead used these isotopic systems to
measure the difference in times of formation between one object and a single
separate “anchor” such as the achondrite D’Orbigny. It is assumed that both
the sample and anchor formed from isotopic reservoirs with the same abundance
of ${}^{53}{\rm Mn}$, i.e., that ${}^{53}{\rm Mn}$ is homogeneous among those
two objects. After this difference in formation time is determined using Mn-Cr
systematics, the absolute Pb-Pb age of the anchor, $t_{\rm Pb,DOrbigny}$, is
added, and a “model” absolute age for the sample is determined. The implicit
goal of meteorite chronometry has been to obtain these absolute ages, and for
that, use of individual anchors has been standard.
Here we argue that absolute ages are not the goal, and that recognition of
this fact enables a move away from individual anchors, toward a more precise,
statistical approach. First it should be recognized that use of individual
anchors introduces uncertainty to age determinations, especially since all
model ages rely on Pb-Pb ages, which are uncertain, typically by $\pm 0.5$
Myr. In fact, there are two uncertain Pb-Pb ages: that of the anchor, whether
that is the age of $t\\!\\!=\\!\\!0$, usually taken to be the Pb-Pb age of
CAIs (e.g., Connelly et al., 2012), or of an anchor like D’Orbigny; plus that
of the sample. Second, as discussed in Paper I and above, this absolute age
lacks meaning until it is put into a sequence of events in the early solar
nebula, by determining the time of formation after $t\\!\\!=\\!\\!0$. This is
done by subtracting the model age from the absolute age of the Solar System,
$t_{\rm SS}$. After all this, the time of formation after $t\\!\\!=\\!\\!0$ of
the object, using Mn-Cr measurements, is calculated:
$\Delta t_{53}=\tau_{53}\,\ln\left[\frac{(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm DOrbigny}}{(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{0}}\right]+t_{\rm SS}-t_{\rm Pb,DOrbigny}.$ (4)
This is to be compared with the equivalent quantity in Equation 3.
It might seem that use of anchors avoids making assumptions about homogeneity
of SLRs, in that it is not assumed that ${}^{53}{\rm Mn}$ was homogeneous
between the sample/anchor reservoir and the CAI-forming region. However, it is
still assumed that ${}^{53}{\rm Mn}$ was homogeneous between the sample and
the anchor, plus an additional assumption is made: that the Pb-Pb system in
CAIs closed simultaneously with the Al-Mg system (or whatever system is used
to define $t\\!\\!=\\!\\!0$). In the end, deriving needed quantities like
$\Delta t_{53}$ requires at least as many assumptions about homogeneity as
just assuming ${}^{53}{\rm Mn}$ was homogeneous throughout the solar nebula
and simply determining the correct value of $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ to enter into Equation 3.
Effectively, determining $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$
is just what the use of anchors does, as Equation 4 is equivalent to
extrapolating backward in time from the anchor to define
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}=(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{\rm DOrbigny}\,\exp\left[+(t_{\rm SS}-t_{\rm
Pb,DOrbigny})/\tau_{53}\right],$ (5)
then using this value in Equation 3 to find the time of formation of the
sample. Any time an anchor is used to infer when after $t\\!\\!=\\!\\!0$ a
sample formed, it is equivalent to finding at least a model value for the
initial abundance of an SLR in the solar system. The only difference is that
the traditional approach using individual anchors determines
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ using only a single
meteorite at a time.
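This equivalence can be checked numerically. A sketch (not the paper's fitting code) using the Paper I value of $t_{\rm SS}$, the D'Orbigny values adopted in §2.2.1, and the ~3.7 Myr ${}^{53}{\rm Mn}$ half-life:

```python
import math

T_SS = 4568.42              # Myr; Paper I estimate of t_SS
TAU_53 = 3.7 / math.log(2)  # Myr; 53Mn mean-life

def mn_ratio_ss_from_anchor(ratio_anchor, t_pb_anchor):
    """Equation 5: back-extrapolate the anchor's (53Mn/55Mn) to t=0."""
    return ratio_anchor * math.exp((T_SS - t_pb_anchor) / TAU_53)

def delta_t_53(ratio_0, ratio_ss):
    """Equation 3."""
    return TAU_53 * math.log(ratio_ss / ratio_0)

ratio_dorb = 3.233e-6   # D'Orbigny (53Mn/55Mn)_0, adopted in Section 2.2.1
t_pb_dorb = 4563.24     # Myr; D'Orbigny adopted Pb-Pb age

# Feeding Equation 5 into Equation 3 recovers Equation 4: the anchor's own
# formation time comes out as t_SS - t_Pb(anchor), as it must.
ratio_ss = mn_ratio_ss_from_anchor(ratio_dorb, t_pb_dorb)
assert abs(delta_t_53(ratio_dorb, ratio_ss) - (T_SS - t_pb_dorb)) < 1e-9
```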
Recognizing this mathematical equivalence allows a much more precise approach
to chronometry, because in principle many anchors can be used simultaneously,
to produce much less uncertain estimates of quantities such as $t_{\rm SS}$
and $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$. In Paper I we
showed that the Pb-Pb system cannot be demonstrated to have achieved isotopic
closure in CAIs at the same time the Al-Mg system last did, so that Pb-Pb ages
of CAIs are not likely to be a good estimate of $t_{\rm SS}$. Instead, we
found that an age $t_{\rm SS}=4568.42\pm 0.24$ Myr minimized the discrepancies
between times of formation determined by Al-Mg systematics, $\Delta t_{26}$,
and times of formation determined by Pb-Pb ages, $\Delta t_{\rm Pb}$, for
seven achondrites and four chondrules with simultaneous measurements.
Moreover, that analysis found that based on that value of $t_{\rm SS}$,
concordancy was achieved in a statistically significant sense, justifying the
assumption of homogeneity. Adopting a value for $t_{\rm SS}$, one can
extrapolate backward from a sample like D’Orbigny to estimate
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ in Equation 5. The
estimates from several samples can be averaged together, producing a combined
estimate for $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ that is
much more precise than could be achieved using one anchor alone. This
approach, just like the traditional use of anchors, assumes homogeneity of
SLRs; but using a statistical approach also allows this assumption to be
tested rigorously.
### 1.3 Outline
The goal of this paper is to assess the homogeneity of other SLRs and use them
to date meteorites. In particular we aim to determine the initial abundances
of SLRs in the solar system, especially $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm SS}$ and $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$,
but also $({}^{129}{\rm I}/{}^{127}{\rm I})_{\rm SS}$, $({}^{107}{\rm
Pd}/{}^{108}{\rm Pd})_{\rm SS}$, and others. This assumes these SLRs were
homogeneously distributed, an assumption we aim to test. In Paper I we used
statistical averages to find the value of $t_{\rm SS}$ that minimized the
differences between the time of formation of a sample as inferred from Al-Mg
systematics, $\Delta t_{26}$, and times of formation as determined from Pb-Pb
ages, $\Delta t_{\rm Pb}$. Here we will find the values of $t_{\rm SS}$,
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$, $\tau_{53}$, and
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$ that minimize the
discrepancies between the times of formation of a sample as determined by Al-
Mg, Mn-Cr and Hf-W, or Pb-Pb systematics.
In Paper I we restricted our attention to those samples that had both Al-Mg
and Pb-Pb measurements. In §2 we describe the 14 rapidly cooled achondrites we
consider, that have Pb-Pb ages and at least one other age determination (Al-
Mg, Mn-Cr, or Hf-W). We compile literature data to determine our best
estimates of $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$,
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$, $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{0}$ and $t_{\rm Pb}$ for each.
In §3 we present a statistical approach we have developed. We define a
goodness-of-fit metric $\chi_{\nu}^{2}$ for describing the degree to which the
ages determined for various selected samples (achondrites) using the various
isotopic systems (Al-Mg, Mn-Cr, Hf-W, Pb-Pb) are concordant (based on the
assumption of SLR homogeneity), and the threshold value of $\chi_{\nu}^{2}$
for statistical significance. We also show how to optimize the input
parameters $t_{\rm SS}$, $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm
SS}$, $\tau_{53}$, and $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm
SS}$, to minimize $\chi_{\nu}^{2}$.
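As one concrete, deliberately simplified version of such a metric (an illustration, not necessarily the paper's exact definition), the residuals of each sample's formation times about their inverse-variance weighted mean can be accumulated and normalized by the degrees of freedom:

```python
def reduced_chi2(times, sigmas, n_free_params):
    """Illustrative chi^2_nu: for each sample, compare its formation times
    (Myr) from different isotopic systems to their inverse-variance weighted
    mean; nu = (total measurements) - (samples) - (fit parameters)."""
    chi2, n_meas = 0.0, 0
    for t, s in zip(times, sigmas):
        w = [1.0 / si**2 for si in s]
        mean = sum(wi * ti for wi, ti in zip(w, t)) / sum(w)
        chi2 += sum(((ti - mean) / si) ** 2 for ti, si in zip(t, s))
        n_meas += len(t)
    nu = n_meas - len(times) - n_free_params
    return chi2 / nu

# Perfectly concordant formation times give chi^2_nu ~ 0; discordance
# inflates it above ~1.
times = [[5.2, 5.2, 5.2], [9.8, 9.8]]   # two samples, illustrative Delta-t
sigmas = [[0.3, 0.4, 0.5], [0.3, 0.3]]  # 1-sigma errors, Myr
assert reduced_chi2(times, sigmas, n_free_params=1) < 1e-18
```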
In §4 we apply these statistical techniques to our dataset. We find values for
the four parameters $t_{\rm SS}$, $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm SS}$, $\tau_{53}$, and $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm
Hf}$})_{\rm SS}$, that make the 37 available Al-Mg, Mn-Cr, Hf-W, and Pb-Pb
formation times of the 14 achondrites concordant (excluding only the Hf-W
formation time of NWA 4801). The fit is statistically significant, with
$\mbox{$\chi_{\nu}^{2}$}=1.09$ (33% probability).
In §5 we discuss the implications. The fact that a set of parameters makes all
these formation times concordant fails to falsify, and instead strongly
supports, the assumption that the SLRs were homogeneously distributed.
Moreover, the values we derive for $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm SS}$ and $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$
compare favorably to values inferred from measurements of CAIs, and the mean-
life we infer for ${}^{53}{\rm Mn}$ is consistent with measurements, but our
estimates are more precise. These results mean that one can use the assumed
values of $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ and
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$ to infer the time of
formation of a sample without making reference to anchors. The value we infer
for $t_{\rm SS}$ is 1.1 Myr older than Pb-Pb ages of CAIs suggest, which we
attribute (as in Paper I) to late resetting of the Pb-Pb system in CAIs.
In §6, we use our refined chronometry to refine the time of formation of
Shallowater (used for I-Xe dating), and the initial ratios $({}^{60}{\rm
Fe}/{}^{56}{\rm Fe})_{\rm SS}$ and $({}^{107}{\rm Pd}/{}^{108}{\rm Pd})_{\rm
SS}$, and others, to facilitate using these systems for radiometric dating. We
demonstrate that the concordancy extends to other systems, such as chondrules
in CR chondrites, and chondrules in CB/CH chondrites. We make suggestions for
further tests of the model we present.
We summarize our findings and draw conclusions in §7.
## 2 Meteoritic Data
### 2.1 Sample Selection
Our statistical technique of minimizing the discrepancies between the times of
formation as determined by different isotopic systems (e.g., Al-Mg, Mn-Cr,
Hf-W, Pb-Pb) presupposes that the different systems achieved isotopic closure
simultaneously. This drives us to select meteoritic samples that were melted
(homogenizing the isotopes), rapidly cooled, and not obviously heated again or
later metamorphosed (e.g., by shock). Precise internal isochrons are required,
so larger samples (e.g., achondrites) are preferred over smaller meteoritic
components (e.g., chondrules or CAIs).
Chondrules may seem like an excellent candidate for this type of analysis, and
in some ways they are: the different isotopic systems have been measured in
many individual chondrules, and chondrule textures indicate that they cooled
and crystallized in a matter of only hours (Desch et al., 2012). However, over
time spans of perhaps 2 Myr (Villeneuve et al., 2009), chondrules appear to
have experienced multiple transient heating events that raised them to
different temperatures, including those near but not exceeding the solidus
(Ruzicka et al., 2008). As discussed in Paper I, at these temperatures and
cooling rates it is possible to reset the Pb-Pb chronometer without resetting
the Al-Mg chronometer. While the four chondrules considered were broadly
concordant in their Al-Mg and Pb-Pb formation times, this may not necessarily
be the case with chondrules overall. A notable exception may be the chondrules
produced in the impact associated with CB/CH chondrites, which were
immediately swept up after formation. We discuss chondrules in §5.3 and §5.4,
but do not optimize the model to fit them.
Among achondrites, only a subset may be suitable for this analysis. In Paper
I, we discussed the reasons why the isotopic systems should have closed
simultaneously in the rapidly cooled [$\sim 300\,{\rm K}\,{\rm hr}^{-1}$;
(Keil, 2012)] quenched angrites, including D’Orbigny, SAH 99555, and NWA 1670.
The petrologically similar achondrites NWA 7325 and probably Asuka 881394, as
well as NWA 2976 and NWA 6704, probably also cooled rapidly enough to pass
through the closure temperatures of all isotopic systems essentially
simultaneously. This may not be the case for plutonic angrites. Based on
diffusion profiles of Ca in olivine, the plutonic angrite LEW 86010 is
estimated to have cooled at about $300\,{\rm K}\,{\rm yr}^{-1}$ (McKay et al.,
1998), about $10^{4}$ times more slowly than the quenched, or volcanic
angrites. Keil (2012) notes that these cooling rates are more consistent with
near-surface dikes, shallow intrusions, or ponded lava flows, rather than true
plutons. Still, the various isotopic systems with closure temperatures
hundreds of K apart should have closed within only years of each other,
essentially simultaneously. It is also possible that a sample may see its
isotopic systems achieve simultaneous isotopic closure, but then be reset by
shock or metamorphism at a later time. This may manifest itself as a resetting
in some systems but not others, or in just some rocks. The plutonic
angrite NWA 4801 may be one example of a disturbed sample (Irving and Kuehner,
2007; McKibbin et al., 2015).
Going forward, we restrict our attention to achondrites with U-corrected Pb-Pb
ages and at least one other age from a different isotopic system. (We make an
exception for Lewis Cliff 86010, whose Pb-Pb age is not U-corrected, as this
is a much-studied angrite for which a reasonable guess to the ${}^{238}{\rm
U}/{}^{235}{\rm U}$ ratio can be made. We also make an exception for NWA 1670
even though its uranium isotopes were not directly measured.) For practical
purposes, this usually ensures that a sample will have been measured in three
systems, which provides a much more restrictive test of concordance. For
systems formed more than about 5 Myr after $t\\!\\!=\\!\\!0$, after which
${}^{26}{\rm Al}$ is effectively extinct, it is almost the only way to ensure
three ages for the same sample.
Separate from the isotopic abundances associated with the decay of
radionuclides are stable isotopic anomalies in bulk chondrites and
achondrites, which provide important context for understanding their origins.
Isotopic evidence from $\epsilon^{50}{\rm Ti}$, $\epsilon^{54}{\rm Cr}$, and
$\Delta^{17}{\rm O}$ isotopic ratios places the formation of meteorites in one
of two reservoirs: the “NC” reservoir, thought to be in the inner Solar
System; or the “CC” reservoir, thought to be in the outer Solar System
(Trinquier et al., 2009; Warren, 2011; Kruijer et al., 2017). All of
the achondrites we include in our analysis are from the NC reservoir, except
for the two recently dated achondrites NWA 2976 and NWA 6704, which derive
from the CC reservoir (Sanborn et al., 2019).
In the following subsections we describe each achondrite used in our analysis,
and the data used to derive $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$
and $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ for each. The data used
to derive $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$ and Pb-Pb ages are
discussed in Paper I. For all but two samples we apply the correction of 0.19
Myr advocated by Tissot et al. (2017), to account for the empirical finding
that pyroxenes are isotopically lighter than the whole rock. We make
exceptions for the two achondrites from the CC isotopic reservoir, NWA 2976
and NWA 6704, on the basis that they formed from hydrous magmas in which U was
more likely to be taken up entirely into pyroxene grains.
### 2.2 Quenched Angrites
In contrast to plutonic angrites, which have nearly equilibrated minerals with
little zoning, volcanic angrites have highly zoned mineral assemblages far
from equilibrium. (See Tissot et al., 2022). ‘Quenched,’ or ‘volcanic,’
angrites are inferred to have cooled rapidly, at rates $\sim 300\,{\rm
K}\,{\rm hr}^{-1}$, after burial within the top meter or so of the surface
(Keil, 2012). If they escaped later resetting, they are likely to record
simultaneous closure of the isotopic systems.
#### 2.2.1 D’Orbigny
D’Orbigny, described more fully in Paper I, is a quenched angrite that has
long been considered an anchor in which the different isotopic systems likely
closed simultaneously and has not been disturbed.
As discussed in Paper I, we adopt for the initial $(\mbox{${}^{26}{\rm
Al}/{}^{27}{\rm Al}$})_{0}$ value for D’Orbigny the weighted mean of several
values advocated by Sanborn et al. (2019), $(3.93\pm 0.39)\times 10^{-7}$.
For the $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ ratio, we take the
weighted mean of values determined by Nyquist et al. (2003), Glavin et al.
(2004), Sugiura et al. (2005), McKibbin et al. (2015), and Kleine and Wadhwa
(2017), to find $(3.233\pm 0.033)\times 10^{-6}$, the value we adopt. An
additional determination of $(3.20\pm 0.21)\times 10^{-6}$ (consistent with
our adopted value) was made by Yin et al. (2009), but not in the refereed
literature.
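The weighted means quoted here and below are standard inverse-variance averages; a minimal sketch (the three input values are illustrative placeholders, not the five published determinations):

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return mean, 1.0 / math.sqrt(sum(w))

# Illustrative (53Mn/55Mn)_0 determinations: combining measurements always
# tightens the uncertainty below that of the best single one.
m, s = weighted_mean([3.20e-6, 3.25e-6, 3.23e-6], [0.10e-6, 0.08e-6, 0.06e-6])
assert s < 0.06e-6
```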
The ($\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ ratio was determined by
Kleine et al. (2012) to be $(7.15\pm 0.17)\times 10^{-5}$.
As discussed in Paper I, for the Pb-Pb age we take the intercept of the Pb-Pb
isochron derived by Amelin (2008b), and the weighted mean of the ${}^{238}{\rm
U}/{}^{235}{\rm U}$ values from Brennecka and Wadhwa (2012) and Tissot et al.
(2017), to find $4563.24\pm 0.21$ Myr.
#### 2.2.2 SAH 99555
As described in Paper I, SAH 99555 is a quenched angrite similar to D’Orbigny,
with an unshocked, fine-grained texture composed of anorthite, Al-Ti-bearing
hedenbergite, olivine and mm-sized vesicles (Keil, 2012).
As discussed in Paper I, we adopt for the initial $(\mbox{${}^{26}{\rm
Al}/{}^{27}{\rm Al}$})_{0}$ value for SAH 99555 the weighted mean of the
values determined by Spivak-Birndorf et al. (2009) and Schiller et al. (2015),
finding $(3.64\pm 0.18)\times 10^{-7}$.
For the $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ ratio, we take the
weighted mean of values determined by Sugiura et al. (2005) and McKibbin et
al. (2015), to find $(3.279\pm 0.169)\times 10^{-6}$, the value we adopt.
The ($\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ ratio was determined by
Kleine et al. (2012) to be $(6.87\pm 0.15)\times 10^{-5}$.
As discussed in Paper I, for the Pb-Pb age we take the average of the
intercepts of the Pb-Pb isochrons derived by Amelin (2008a) and Connelly et
al. (2008), and the ${}^{238}{\rm U}/{}^{235}{\rm U}$ value determined by
Brennecka and Wadhwa (2012) and Tissot et al. (2017), to find $4563.51\pm
0.24$ Myr.
#### 2.2.3 NWA 1670
NWA 1670, as described in Paper I, is a quenched angrite with a porphyritic
texture including large olivine megacrysts in a fine-grained matrix of
olivine, pyroxene, kirschsteinite and anorthite, as well as other accessory
minerals (Keil, 2012). These indicate rapid cooling at $\sim 300\,{\rm
K}\,{\rm hr}^{-1}$ (Mikouchi et al., 2003).
For NWA 1670, we adopt the $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$
ratio determined by Schiller et al. (2015), $(5.92\pm 0.59)\times 10^{-7}$.
For $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$, we adopt the value
determined by Sugiura et al. (2005), $(2.85\pm 0.92)\times 10^{-6}$.
We are not aware of a determination of $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm
Hf}$})_{0}$ for NWA 1670.
As described fully in Paper I (Desch et al., 2023), we have reanalyzed the Pb-
Pb isochron of Schiller et al. (2015) to determine a Pb-Pb age $4564.02\pm
0.66$ Myr.
#### 2.2.4 NWA 1296
NWA 1296 is a quenched angrite with a bulk composition similar to that of
D’Orbigny and Sahara 99555 (Jambon et al., 2004; Tissot et al., 2022). NWA
1296 has a fine-grained texture consisting primarily of dendritic olivine,
anorthite and Al-Fe diopside-hedenbergite pyroxenes (Jambon et al., 2004). The
texture and grain sizes are consistent with formation through rapid
crystallization. We are not aware of Al-Mg or Mn-Cr measurements for this
achondrite.
We adopt $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}=(7.01\pm
0.28)\times 10^{-5}$ (Kleine et al., 2012).
Its Pb-Pb age was determined by Amelin and Irving (2011) to be $4564.20\pm
0.45$ Myr, based on an assumed $\mbox{${}^{238}{\rm U}/{}^{235}{\rm
U}$}=137.88$. We are not aware of any measurements in the refereed literature,
or indeed of any direct measurements of its uranium isotopes, but we retain
this age as a test of the model. The most severe test comes from assuming the
youngest plausible Pb-Pb age, i.e., by assuming the minimum ${}^{238}{\rm
U}/{}^{235}{\rm U}$ value. We adopt the value $\mbox{${}^{238}{\rm
U}/{}^{235}{\rm U}$}=137.786\pm 0.013$ from NWA 1670 (Schiller et al., 2015),
which implies a correction of -0.99 Myr. As with the other NC achondrites, we
assume the isochron is based on measurements of pyroxene grains, which are
isotopically lighter than the whole rock, and apply an additional correction
of -0.19 Myr. This yields an age of $4563.02\pm
0.45$ Myr.
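The adopted NWA 1296 age is just the published age shifted by the two corrections described above:

```python
# Adopted NWA 1296 Pb-Pb age: published age (computed assuming
# 238U/235U = 137.88), shifted by the uranium correction and the
# pyroxene correction.
age_published = 4564.20      # Myr (Amelin and Irving, 2011)
uranium_correction = -0.99   # Myr, from adopting 238U/235U = 137.786
pyroxene_correction = -0.19  # Myr (Tissot et al., 2017)
age_adopted = age_published + uranium_correction + pyroxene_correction
assert round(age_adopted, 2) == 4563.02  # Myr, the value adopted above
```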
### 2.3 Plutonic Angrites
In contrast to quenched angrites, plutonic angrites appear to have achieved
equilibrium, cooling much more slowly than volcanic angrites, at rates $\sim
300\,{\rm K}\,{\rm yr}^{-1}$, indicating burial at depths of tens of meters
(Keil, 2012). This cooling rate is still rapid enough for their isotopic
systems to have achieved closure simultaneously, but many plutonic angrites
appear to have been later disturbed or metamorphosed, which may affect
different systems differently.
#### 2.3.1 LEW 86010
LEW 86010 is an unshocked plutonic angrite with granular texture, with grains
0.6-1.2 mm across, composed of anorthite, Al-Ti-bearing diopside, and calcic
olivine, with some kirschsteinite. It is thought to have cooled in thousands
of years or less, based on zoning in pyroxene and exsolution lamellae in
olivine (McKay et al., 1998; Keil, 2012; McKibbin et al., 2015).
We are not aware of determinations of ($\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{0}$ for LEW 86010 or any of the plutonic angrites. This is
unsurprising, as the other systems suggest they formed roughly 10 Myr after
$t\\!\\!=\\!\\!0$, when ${}^{26}{\rm Al}$ would have been effectively extinct.
For the $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$, we adopt the
weighted mean of values determined by Lugmair and Shukolyukov (1998) and
Nyquist et al. (1994), to find $(1.345\pm 0.049)\times 10^{-6}$.
The ($\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ ratio was determined by
Kleine et al. (2012) to be $(4.80\pm 0.42)\times 10^{-5}$.
The Pb-Pb age of LEW 86010 was determined by Amelin (2008b) to be $4558.55\pm
0.15$ Myr, but without using a measurement of the ${}^{238}{\rm
U}/{}^{235}{\rm U}$ in the sample, instead assuming 137.88. While LEW 86010 is
important for its previous use as an anchor, its small size (6.9 g) has
precluded precise measurement of its ${}^{238}{\rm U}/{}^{235}{\rm U}$,
although Lugmair and Galer (1992) found the ${}^{238}{\rm U}/{}^{235}{\rm U}$
ratio was about $(1.1\pm 1.7)\mbox{\text{\textperthousand}}$ lighter than
137.88 in its pyroxenes (and the whole rock is isotopically heavier, in line
with other achondrites). Correcting for this would lower the age of LEW 86010
by $1.6\pm 2.5$ Myr. Adopting the value $\mbox{${}^{238}{\rm U}/{}^{235}{\rm
U}$}=137.786$ apparently common to plutonic angrites (Tissot et al., 2017), we
estimate an age correction of -0.99 Myr, but with considerable uncertainty.
For these reasons we cannot determine the Pb-Pb age of LEW 86010 with
certainty, but based on its previous use as an anchor, we include it in our
analysis.
#### 2.3.2 NWA 4590
NWA 4590 is a coarse-grained igneous cumulate rock with Al-Ti-rich
clinopyroxene, anorthite, Ca-rich olivine with kirschsteinite exsolution,
ulvöspinel, plus merrillite and silico-phosphate. As with LEW 86010, it is thought
to have cooled over only thousands of years, based on zoning in pyroxene and
exsolution lamellae in olivine (McKibbin et al., 2015). Pb-Pb dating has been
applied to the silicates and silico-phosphates, and the Pb-Pb ages found to
differ by $0.55\pm 0.29$ Myr; based on the differences in closure
temperatures, a slow cooling rate $540\pm 290\,{\rm K}\,{\rm Myr}^{-1}$ was
inferred (Amelin et al., 2011). However, other petrologic evidence suggests
that instead the Pb-Pb system in the phosphates was reset by a later reheating
event (McKibbin et al., 2015).
For the $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ value, we adopt the
value determined by McKibbin et al. (2015), $(0.85\pm 0.40)\times 10^{-6}$. We
note that Yin et al. (2009), in unrefereed work, found a similar value,
$(1.01\pm 0.12)\times 10^{-6}$.
The ($\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ ratio was determined by
Kleine et al. (2012) to be $(4.63\pm 0.17)\times 10^{-5}$.
The Pb-Pb age was determined to be $4557.81\pm 0.37$ Myr by Brennecka and
Wadhwa (2012), who applied a measured uranium correction to the Pb-Pb
isochrons measured by Amelin and Irving (2007) and Amelin et al. (2011). A
more refined uranium correction was applied by Tissot et al. (2017), who
determined a Pb-Pb age of $4557.76\pm 0.38$ Myr. After applying a 0.19 Myr
correction, we adopt a value $4557.57\pm 0.38$ Myr.
#### 2.3.3 NWA 4801
NWA 4801 has a granular, cumulate texture, with grain sizes 0.1 - 1.2 mm,
described by Irving and Kuehner (2007) as being an annealed breccia formed
originally by disruption of a very coarse-grained plutonic protolith. It
consists primarily of Al-Ti-bearing diopside and anorthite. McKibbin et al.
(2015) explain the lack of chemical variation in pyroxene and olivine, along
with the textural features, as petrologic evidence for slow cooling, but also
mention that a later stage of high-temperature annealing could have occurred, as
described by Irving and Kuehner (2007).
For the $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ value, although it is
the only value in the refereed literature, we consider the value determined by
McKibbin et al. (2015), $(0.13\pm 1.1)\times 10^{-6}$, to be too imprecise. We
consider the value reported by Shukolyukov et al. (2009), $(0.96\pm
0.04)\times 10^{-6}$, to be overly precise. We adopt the value from the
abstract by Yin et al. (2009), $(0.959\pm 0.040)\times 10^{-6}$.
The ($\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ ratio was determined by
Kleine et al. (2012) to be $(4.52\pm 0.16)\times 10^{-5}$.
The Pb-Pb age was determined to be $4557.01\pm 0.27$ Myr by Brennecka and
Wadhwa (2012), who applied a measured uranium correction to the Pb-Pb isochron
measured by Amelin (2008b). A more refined uranium correction was applied by
Tissot et al. (2017), who determined a Pb-Pb age of $4556.82\pm 0.28$ Myr. A
value of $4556.8\pm 0.2$ Myr was reported by Connelly and Bizzarro (2016). We
take the weighted mean of these to find $4556.91\pm 0.21$ Myr. After
correcting this by 0.19 Myr, we adopt $4556.72\pm 0.21$ Myr.
#### 2.3.4 Angra dos Reis
Angra dos Reis (AdoR) is a porphyritic igneous rock composed of pyroxene, Al-
Ti-bearing diopside, calcic olivine, and other minerals. Despite being the
namesake of angrites, AdoR differs from other angrites in its geochemical
composition, as it is nearly mono-mineralic with ${>}$90 vol% pyroxene (Prinz
et al., 1977).
For the $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ value, Lugmair and
Shukolyukov (1998) combined data for AdoR with that for LEW 86010 and found
they were consistent with the same isochron, with slope $(1.25\pm 0.07)\times
10^{-6}$. However, it is unlikely that AdoR and LEW 86010 formed in the same
magmatic system, and therefore unlikely that they should close at the same
time. Instead, we take the two data points and derive a slope
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}=(1.10\pm 0.40)\times 10^{-6}$.
The Pb-Pb age was determined to be $4556.60\pm 0.26$ Myr by Brennecka and
Wadhwa (2012), who applied a measured uranium correction to the Pb-Pb
isochrons measured by Amelin (2008b). A more refined uranium correction was
applied by Tissot et al. (2017), who determined a Pb-Pb age of $4556.45\pm
0.29$ Myr. After correcting this by 0.19 Myr, we adopt $4556.26\pm 0.29$ Myr.
#### 2.3.5 NWA 2999 and Paired Meteorites
Containing both coarse and fine grained lithologies, NWA 2999 (and numerous
pairings, including NWA 4931, NWA 6291, and at least six others) has been
described as a plutonic angrite (Keil, 2012) as well as an annealed breccia
(Irving and Kuehner, 2007). Due to the high abundance of metal and its
siderophile element contents, it has been suggested that an exogenous impactor
mixed in materials of carbonaceous chondrite origin from the CC reservoir
(Gellissen et al., 2007; Humayun et al., 2007; Riches et al., 2012), even
though the W isotope composition would seem to preclude this (Kleine et al.,
2012). NWA 2999 shows
signs of terrestrial alteration in the presence of iron replacement minerals
(goethite and magnetite) as well as light rare earth element (LREE)
enrichments and Ce anomalies in olivine (Sanborn and Wadhwa, 2021). The trace
element data presented by Sanborn and Wadhwa (2021) show that NWA 2999 has a
composition more closely related to the volcanic angrites, implying that
magmatic activity of the volcanic angrite source reservoir continued for
millions of years. Consensus on the petrogenesis of NWA 2999 has not been
reached, and in many ways it is a confusing member of the angrite group.
For the $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ ratio, the only value
we could find was in the unrefereed abstract by Shukolyukov and Lugmair
(2008), $(1.28\pm 0.23)\times 10^{-6}$. We do not consider this a reliable
constraint on the Mn-Cr formation time of NWA 2999.
The $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ ratio was determined by
Kleine et al. (2012) to be $(5.43\pm 0.34)\times 10^{-5}$.
For the Pb-Pb age, we take the value $4560.74\pm 0.47$ Myr determined by
Brennecka and Wadhwa (2012), based on a Pb-Pb isochron by Amelin and Irving
(2007). After correcting this by 0.19 Myr, we adopt $4560.55\pm 0.47$ Myr.
### 2.4 Other NC Achondrites
#### 2.4.1 Asuka 881394
Asuka 881394 is a eucrite-like achondrite with a coarse-grained igneous
texture and near-equal amounts of anorthite and pyroxene. As discussed in
Paper I, we adopt for the initial $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{0}$ value for Asuka 881394 the weighted mean of the values determined
by Nyquist et al. (2003), Wadhwa et al. (2009), and Wimpenny et al. (2019),
finding $(13.071\pm 0.55)\times 10^{-7}$.
For the $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ ratio, we take the
weighted mean of values determined by Nyquist et al. (2003) and Wimpenny et
al. (2019), to find $(3.863\pm 0.228)\times 10^{-6}$, the value we adopt.
We are not aware of a determination of $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm
Hf}$})_{0}$ for Asuka 881394.
The Pb-Pb age of Asuka 881394 was determined by Wadhwa et al. (2009) to be
$4566.5\pm 0.2$ Myr (not U-corrected), and by Wimpenny et al. (2019) to be
$4564.95\pm 0.53$ Myr, the value we take. After correcting this by 0.19 Myr,
we adopt $4564.76\pm 0.53$ Myr.
#### 2.4.2 Ibitira
Ibitira is an unbrecciated basaltic rock with abundant vesicles and a fine-
grained texture. It is compared to eucrites, but is distinct. Its plagioclase
is mostly calcic, due to depletion in alkali elements, and its pyroxenes have
high Fe/Mn ratios in comparison to typical basaltic eucrites. Mittlefehldt
(2005) argued these indicate formation on a distinct parent asteroid, a
conclusion corroborated by the finding that the $\Delta^{17}{\rm O}$ oxygen
isotope composition is $16-21{\sigma}$ above the HED mean values (Scott et
al., 2009). The vesicles in Ibitira suggest a volcanic origin, although it
seems to have formed at the same time as plutonic angrites. It is likely a
volcanic achondrite, but we treat it separately.
We are not aware of determinations of $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{0}$ or $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ for Ibitira.
For the $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ value, we adopt the
value of Lugmair and Shukolyukov (1998), $(1.06\pm 0.50)\times 10^{-6}$. We
note that a similar value was reported in the unrefereed abstract of Yin et
al. (2009).
We take the Pb-Pb age of $4556.75\pm 0.57$ Myr of Iizuka et al. (2014) for
Ibitira. After correcting this by 0.19 Myr, we adopt $4556.56\pm 0.57$ Myr.
#### 2.4.3 NWA 7325
As described in paper I, NWA 7325 is an ungrouped achondrite with a medium-
grained cumulate texture consisting of Mg-rich olivine, Cr-bearing diopside
and Ca-rich plagioclase (Goodrich et al., 2017).
As in Paper I, we adopt the value $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{0}=(3.03\pm 0.14)\times 10^{-7}$ (Koefoed et al., 2016) for NWA 7325.
We are not aware of determinations of $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{0}$ or $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ for NWA
7325.
As in Paper I, we adopt $4563.7\pm 1.7$ Myr (Koefoed et al., 2016) for the Pb-
Pb age of NWA 7325.
### 2.5 CC Achondrites
#### 2.5.1 NWA 2976
As described in Paper I, NWA 2976 (paired with NWA 011) is an unshocked,
unbrecciated ungrouped achondrite with coarse-grained pigeonite surrounded by
fine-grained, recrystallized plagioclase with well-developed $120^{\circ}$
triple junctions (Yamashita et al., 2010a).
As in Paper I, we adopt a weighted mean of the values from Bouvier et al.
(2011b) and Schiller et al. (2010), $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{0}=(4.05\pm 0.15)\times 10^{-7}$.
We are not aware of a determination of $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{0}$ or $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ for NWA
2976.
As in Paper I, we adopt $4563.16\pm 0.57$ Myr for the Pb-Pb age of NWA 2976.
#### 2.5.2 NWA 6704
As described in Paper I, NWA 6704 (paired with NWA 6693) is an unshocked,
ungrouped achondrite with a medium-grained texture composed of low-Ca
pyroxene along with Ni-rich olivine and sodic plagioclase (Hibiya et al.,
2019). As in Paper I, we adopt the value $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{0}=(3.15\pm 0.38)\times 10^{-7}$ for NWA 6704 (Sanborn et al., 2019).
We adopt the value $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}=(2.59\pm
0.34)\times 10^{-6}$ for NWA 6704 (Sanborn et al., 2019).
As in Paper I, we adopt $4562.76\pm 0.26$ Myr for the Pb-Pb age of NWA 6704
(Amelin et al., 2019).
### 2.6 Summary of Achondrite Data
In Table 1 we compile all of the above data, which comprise 38 values of
either $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$, $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{0}$, $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm
Hf}$})_{0}$, or U-corrected Pb-Pb ages $t_{\rm Pb}$, across 14 achondrites. In
general, we only use values from the refereed literature; but in some cases,
if other data were not available, we used data from abstracts (as noted in the
caption). Where multiple measurements are available for a single achondrite,
we have taken a weighted average. With knowledge of key quantities such as the
${}^{53}{\rm Mn}$ half-life and the Pb-Pb age of samples formed at
$t\\!\\!=\\!\\!0$, $t_{\rm SS}$, these quantities can be converted into times
of formation after $t\\!\\!=\\!\\!0$, as described in the next section.
Table 1: $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$,
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$, $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{0}$, and U-corrected Pb-Pb ages of 14 achondrites.
Achondrite | $\left(\frac{{}^{26}{\rm Al}}{{}^{27}{\rm Al}}\right)_{0}$ | $2\sigma$ | $\dagger$ | $\left(\frac{{}^{53}{\rm Mn}}{{}^{55}{\rm Mn}}\right)_{0}$ | $2\sigma$ | $\dagger$ | $\left(\frac{{}^{182}{\rm Hf}}{{}^{180}{\rm Hf}}\right)_{0}$ | $2\sigma$ | $\dagger$ | Pb-Pb | $2\sigma$ | $\dagger$
---|---|---|---|---|---|---|---|---|---|---|---|---
D’Orbigny | 3.93 | 0.39 | * | 3.23 | 0.03 | * | 7.15 | 0.17 | i | 4563.24 | 0.21 | j
| 5.06 | 0.92 | a | 2.83 | 0.25 | e | | | | | |
| 3.98 | 0.15 | b | 3.24 | 0.04 | f | | | | | |
| 3.97 | 0.21 | c | 2.84 | 0.24 | g | | | | | |
| 3.93 | 0.39 | d | 3.54 | 0.18 | h | | | | | |
| | | | 3.23 | 0.07 | c | | | | | |
SAH 99555 | 3.65 | 0.18 | * | 3.28 | 0.17 | * | 6.87 | 0.15 | i | 4563.51 | 0.24 | k
| 5.13 | 1.90 | a | 2.82 | 0.37 | g | | | | | |
| 3.64 | 0.18 | b | 3.40 | 0.19 | h | | | | | |
NWA 1670 | 5.92 | 0.59 | b | 2.85 | 0.92 | g | | | | 4564.02 | 0.66 | l
NWA 1296 | | | | | | | 7.01 | 0.28 | i | 4563.02 | 0.45 | m
LEW 86010 | | | | 1.35 | 0.05 | * | 4.80 | 0.42 | i | | |
| | | | 1.44 | 0.07 | n | | | | | |
| | | | 1.25 | 0.07 | o | | | | | |
NWA 4590 | | | | 0.85 | 0.40 | h | 4.63 | 0.17 | i | 4557.57 | 0.38 | *
| | | | | | | | | | 4557.81 | 0.37 | p
| | | | | | | | | | 4557.76 | 0.38 | q
NWA 4801 | | | | 0.96 | 0.04 | * | 4.52 | 0.16 | i | 4556.63 | 0.28 | *
| | | | 0.919 | 0.295 | r | | | | 4557.01 | 0.27 | t
| | | | 0.96 | 0.04 | s | | | | 4556.82 | 0.28 | q
Angra | | | | 1.10 | 0.40 | n | 4.02 | 0.24 | i | 4556.26 | 0.29 | *
dos Reis | | | | | | | | | | 4556.60 | 0.26 | u
| | | | | | | | | | 4556.45 | 0.29 | q
NWA 2999 | | | | 1.28 | 0.23 | v | 5.43 | 0.34 | i | 4560.55 | 0.47 | t
Asuka 881394 | 13.07 | 0.56 | * | 3.86 | 0.23 | * | | | | 4564.76 | 0.53 | *
| 11.8 | 1.4 | e | 4.6 | 1.7 | e | | | | 4566.5 | 0.2 | w
| 12.8 | 0.7 | w | 3.85 | 0.23 | x | | | | 4564.95 | 0.53 | x
| 14.8 | 1.2 | x | | | | | | | | |
Ibitira | | | | 1.06 | 0.50 | n | | | | 4556.56 | 0.57 | y
NWA 7325 | 3.03 | 0.14 | z | | | | | | | 4563.7 | 1.7 | z
NWA 2976 | 4.05 | 0.15 | * | | | | | | | 4563.16 | 0.57 | aa
| 4.91 | 0.46 | aa | | | | | | | | |
| 3.94 | 0.16 | bb | | | | | | | | |
NWA 6704 | 3.15 | 0.38 | d | 2.59 | 0.34 | d | | | | 4562.76 | 0.26 | cc
$(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$ and $2\sigma$ uncertainties
in units of $10^{-7}$; $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ and
$2\sigma$ uncertainties in units of $10^{-6}$; $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{0}$ and $2\sigma$ uncertainties in units of
$10^{-5}$; Pb-Pb ages and uncertainties in units of Myr. Adopted values in
bold type. * denotes a weighted average of literature data; $\dagger$
References: a. Spivak-Birndorf et al. (2009); b. Schiller et al. (2015); c.
Kleine and Wadhwa (2017); d. Sanborn et al. (2019); e. Nyquist et al. (2003);
f. Glavin et al. (2004); g. Sugiura et al. (2005); h. McKibbin et al. (2015);
i. Kleine et al. (2012); j. Pb-Pb isochron from Amelin (2008b), uranium
correction from Brennecka and Wadhwa (2012) and Tissot et al. (2017) [see
Desch et al. 2023]; k. Pb-Pb isochron from Amelin (2008a) and Connelly et al.
(2008), uranium correction from Tissot et al. (2017) and Connelly et al.
(2012) [see Desch et al. 2023]; l. Schiller et al. (2015), as corrected by
Desch et al. (2023); m. Amelin and Irving (2011), with corrections described
in text; n. Nyquist et al. (1994); o. Lugmair and Shukolyukov (1998); p.
Brennecka and Wadhwa (2012), based on Pb-Pb isochron of Amelin and Irving
(2007) (abstract) and Amelin et al. (2011) (abstract). q. Tissot et al.
(2017); r. Yin et al. (2009) (abstract); s. Shukolyukov et al. (2009)
(abstract); t. Brennecka and Wadhwa (2012), based on Pb-Pb isochron by Amelin
and Irving (2007); u. Brennecka and Wadhwa (2012), based on Pb-Pb isochron
from Amelin (2008b); v. Shukolyukov and Lugmair (2008) (abstract); w. Wadhwa
et al. (2009); x. Wimpenny et al. (2019); y. Iizuka et al. (2014); z. Koefoed
et al. (2016); aa. Schiller et al. (2010); bb. Bouvier et al. (2011b); cc.
Amelin et al. (2019).
## 3 Statistical Chronometry Methods
We advocate analysis of the above samples through a statistical approach. This
builds on similar approaches: for example, that of Nyquist et al. (2009), who
correlated Mn-Cr and Al-Mg ages, and estimated $t_{\rm SS}$, among other
quantities; Tissot et al. (2017), who averaged several samples together to
better estimate $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$; Sanborn
et al. (2019), who correlated several samples together to better constrain the
${}^{53}{\rm Mn}$ half-life; and Piralla et al. (2023), who correlated Al-Mg
and Pb-Pb ages to find the age of the Solar System. Our approach, which we
call “statistical chronometry,” is more comprehensive: it fits all the systems
simultaneously, deriving the values that make the isotopic systems concordant
in the achondrites for which they are most likely to be concordant, and it is
unique in employing rigorous statistical methods to evaluate the goodness of
that fit.
### 3.1 Goodness-of-fit Metric
The goal of our statistical approach is to find the optimal values of: $t_{\rm
SS}$, the Pb-Pb age of samples that closed at $t\\!\\!=\\!\\!0$, defined to be
the time at which $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})=(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{\rm SS}=5.23\times
10^{-5}$; $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ and
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$, the abundances of
${}^{53}{\rm Mn}$ and ${}^{182}{\rm Hf}$ at $t\\!\\!=\\!\\!0$; and the half-
lives of ${}^{53}{\rm Mn}$, ${}^{26}{\rm Al}$, and ${}^{182}{\rm Hf}$. The
optimal values are those that minimize for each achondrite the differences
between the times of formation after $t\\!\\!=\\!\\!0$ as determined by each
isotopic system ($\Delta t_{26}$, $\Delta t_{53}$, $\Delta t_{182}$, and
$\Delta t_{\rm Pb}$), and the “true” or best estimate of its time of
formation, $\Delta t$. More specifically, if we assume the initial solar
system abundance of ${}^{26}{\rm Al}$ is a fixed quantity for the purpose of
the calculation, the times of formation one would infer from the Al-Mg system
would be
$\Delta t_{26}=\tau_{26}\,\ln\left[\frac{(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{\rm SS}}{R_{26}}\right],$ (6)
with ($2\sigma$) uncertainty $\sigma_{\Delta t26}$, where
$\sigma_{\Delta
t26}^{2}=\tau_{26}^{2}\,\left(\frac{\sigma_{R26}}{R_{26}}\right)^{2}+\left(\Delta
t_{26}\right)^{2}\,\left(\frac{\sigma_{\tau 26}}{\tau_{26}}\right)^{2}.$ (7)
Here $R_{26}=(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$ and
$\sigma_{R26}$ is the ($2\sigma$) uncertainty in $(\mbox{${}^{26}{\rm
Al}/{}^{27}{\rm Al}$})_{0}$. For the purposes of calculating the goodness-of-
fit metric we assume that any mean life (e.g., $\tau_{26}$) is a fixed input
parameter, in which case the measurement uncertainty $\sigma_{\tau 26}$ is
irrelevant. Likewise,
$\Delta t_{53}=\tau_{53}\,\ln\left[\frac{(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm SS}}{R_{53}}\right]$ (8)
and
$\sigma_{\Delta t53}=\tau_{53}\,\frac{\sigma_{R53}}{R_{53}},$ (9)
and
$\Delta t_{182}=\tau_{182}\,\ln\left[\frac{(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{\rm SS}}{R_{182}}\right]$ (10)
and
$\sigma_{\Delta t182}=\tau_{182}\,\frac{\sigma_{R182}}{R_{182}}.$ (11)
In a similar vein, assuming $t_{\rm SS}$ is a fixed quantity for the purposes
of the calculation,
$\Delta t_{\rm Pb}=t_{\rm SS}-t_{\rm Pb},$ (12)
where $t_{\rm Pb}$ is the Pb-Pb age of the sample, and the ($2\sigma$)
uncertainty is
$\sigma_{\Delta tPb}=\sigma_{\rm tPb},$ (13)
where $\sigma_{\rm tPb}$ is the ($2\sigma$) uncertainty in the Pb-Pb age.
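As a concrete illustration of Equations 6-9, the following Python sketch (the function name and structure are ours, purely for illustration) computes an Al-Mg formation time from an initial ratio, using the adopted D'Orbigny values from Table 1 and the ${}^{26}{\rm Al}$ half-life of 0.717 Myr:

```python
import math

def formation_time(ratio0, sigma0, ratio_ss, half_life):
    """Time of formation after t=0 from an initial radionuclide ratio.

    Implements Eqs. 6-9: dt = tau * ln(R_SS / R_0), with the half-life
    held fixed so that only the measurement uncertainty in R_0 matters.
    All sigmas here are 2-sigma; the returned uncertainty is 2-sigma.
    """
    tau = half_life / math.log(2.0)          # mean life
    dt = tau * math.log(ratio_ss / ratio0)
    sigma_dt = tau * (sigma0 / ratio0)       # Eq. 7 with fixed half-life
    return dt, sigma_dt

# Al-Mg formation time of D'Orbigny: (26Al/27Al)_0 = (3.93 +/- 0.39) x 10^-7,
# (26Al/27Al)_SS = 5.23e-5, 26Al half-life fixed at 0.717 Myr.
dt26, s26 = formation_time(3.93e-7, 0.39e-7, 5.23e-5, 0.717)
print(f"Delta t_26 = {dt26:.2f} +/- {s26:.2f} Myr")  # Delta t_26 = 5.06 +/- 0.10 Myr
```

The same helper applies to the Mn-Cr and Hf-W systems with the appropriate ratios and half-lives.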
If all of the isotopic systems achieved closure at the same time in an
achondrite, then there is one time of formation, $\Delta t$, and each of these
isotopic systems is providing an estimate or “measurement” of $\Delta t$. The
best estimate of the time of formation is then the weighted mean:
$\Delta t=\left[\frac{\Delta t_{26}}{\sigma_{\Delta t26}^{2}}+\frac{\Delta
t_{53}}{\sigma_{\Delta t53}^{2}}+\frac{\Delta t_{182}}{\sigma_{\Delta
t182}^{2}}+\frac{\Delta t_{\rm Pb}}{\sigma_{\Delta
tPb}^{2}}\right]\div\left[\frac{1}{\sigma_{\Delta
t26}^{2}}+\frac{1}{\sigma_{\Delta t53}^{2}}+\frac{1}{\sigma_{\Delta
t182}^{2}}+\frac{1}{\sigma_{\Delta tPb}^{2}}\right].$ (14)
Here it is understood that for any isotopic system for which no age
information exists for a given sample, the corresponding $\sigma_{\Delta t}$
is effectively infinite.
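The weighted mean of Equation 14 is a standard inverse-variance average; a minimal Python sketch (the input numbers below are hypothetical formation-time estimates, not measured values):

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean (Eq. 14) and its 2-sigma uncertainty.

    Systems without data are simply omitted from the lists, which is
    equivalent to assigning them an infinite uncertainty.
    """
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = 1.0 / sum(weights) ** 0.5
    return mean, sigma

# Hypothetical Delta t estimates (Myr) from three isotopic systems:
dt, s_dt = weighted_mean([5.06, 5.20, 5.13], [0.10, 0.40, 0.21])
```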
Based on this best estimate, we can define a goodness-of-fit parameter
describing how well the isotopic systems combined fit the best-fit $\Delta
t_{i}$ for one achondrite indexed by $i$; when summed over all achondrites
from $i=1$ to $i=A$, we derive a global goodness-of-fit parameter:
$\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\chi_{\nu}^{2}=\frac{1}{N-M}\,\sum_{i=1}^{A}\left[\frac{\left(\Delta
t_{26,i}-\Delta t_{i}\right)^{2}}{\sigma_{\Delta t26,i}^{2}}\right.$
$\left.+\frac{\left(\Delta t_{53,i}-\Delta t_{i}\right)^{2}}{\sigma_{\Delta
t53,i}^{2}}+\frac{\left(\Delta t_{182,i}-\Delta
t_{i}\right)^{2}}{\sigma_{\Delta t182,i}^{2}}+\frac{\left(\Delta t_{{\rm
Pb},i}-\Delta t_{i}\right)^{2}}{\sigma_{\Delta t{\rm Pb},i}^{2}}\right].$ (15)
Here $N$ ($\leq 37$) is the number of ages across all $A$ ($\leq 14$)
achondrites that are included in the sum. The number of input parameters being
optimized is $M$. If we fit only Al-Mg formation times, $M=1$ and $t_{\rm SS}$
is being optimized. If we fit Hf-W formation times as well, $M=2$ and we
optimize for $t_{\rm SS}$ as well as $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm
Hf}$})_{\rm SS}$. If we fit Mn-Cr formation times as well, then $M=4$, because
we must vary both $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ and
$\tau_{53}$, as discussed below. This sum can be restricted to subsets of the
achondrites, to investigate the fit of, e.g., just volcanic angrites. The
optimal values of these quantities are those that minimize $\chi_{\nu}^{2}$.
In principle, $\chi_{\nu}^{2}$ is a multi-dimensional function of these $M$
inputs, and must be minimized by a global search. In practice, we find it is
sufficient to treat the minimization by first fitting the Al-Mg, Hf-W and Pb-
Pb systems, and then adjusting the Mn-Cr system. The uncertainty in the
${}^{53}{\rm Mn}$ half-life is large enough that it and the value of
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ must be optimized
simultaneously. The
uncertainties in the ${}^{26}{\rm Al}$ and ${}^{182}{\rm Hf}$ half-lives, in
contrast, are small enough that they can be considered fixed.
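A sketch of the goodness-of-fit computation of Equation 15: each achondrite contributes one best-fit $\Delta t_{i}$ (Eq. 14) and one squared-residual term per isotopic system with data. The data layout and function name below are our own convention, not a published code:

```python
def chi_nu_squared(ages, n_params):
    """Reduced chi-squared of Eq. 15.

    ages: one entry per achondrite, each a list of (dt, sigma) pairs for
          the isotopic systems with data (sigmas are 2-sigma).
    n_params: number of fitted input parameters, M.
    """
    n_ages = sum(len(systems) for systems in ages)
    total = 0.0
    for systems in ages:
        w = [1.0 / s**2 for _, s in systems]
        # best estimate Delta t_i for this achondrite (Eq. 14)
        dt_best = sum(wi * d for wi, (d, _) in zip(w, systems)) / sum(w)
        total += sum(wi * (d - dt_best)**2 for wi, (d, _) in zip(w, systems))
    return total / (n_ages - n_params)
```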
Our strategy is as follows. First we optimize $t_{\rm SS}$ by comparing the
Pb-Pb ages against the Al-Mg ages (similar to the approach in Paper I). Al-Mg
ages have the lowest uncertainties and $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{\rm SS}$ is already defined, so this will provide the tightest
constraints on $t_{\rm SS}$. Second, we optimize $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$ by comparing Hf-W against Al-Mg and Pb-Pb
formation times. After optimizing these two parameters, we then optimize
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ and $\tau_{53}$ by
comparing Mn-Cr formation times against Al-Mg, Hf-W, and Pb-Pb formation
times. Direct comparisons between Mn-Cr and Al-Mg formation times have been
made by Nyquist et al. (2009), Sanborn et al. (2019), Tissot et al. (2017),
and others, but we compare Mn-Cr against Pb-Pb because there are 10
achondrites in Table 1 with both Mn-Cr and Pb-Pb data, but only five that have
both Mn-Cr and Al-Mg. We then use our updated values to derive new times of
formation from the data in Table 1, and repeat the calculations above until
the global optimization has converged; in practice, this means only a slight
adjustment to quantities after the first iteration. After the global
optimization, we will compare the Mn-Cr and Al-Mg systems, or other pairs of
systems.
### 3.2 Optimal value of $t_{\rm SS}$
We begin by finding the optimal $t_{\rm SS}$, keeping other quantities fixed.
This means finding the value $t_{\rm SS}^{*}$ for which
$\partial\mbox{$\chi_{\nu}^{2}$}/\partial t_{\rm SS}=0$. Recalling the
definition of $\Delta t_{i}$ above and recognizing that $\partial\Delta
t_{{\rm Pb},i}/\partial t_{\rm SS}=1$, we find:
$t_{\rm SS}^{*}=\frac{\sum_{i=1}^{A}\alpha_{i}\left(\frac{t_{{\rm
Pb},i}+\Delta t_{26,i}}{\sigma_{\Delta t26,i}^{2}}+\frac{t_{{\rm Pb},i}+\Delta
t_{53,i}}{\sigma_{\Delta t53,i}^{2}}+\frac{t_{{\rm Pb},i}+\Delta
t_{182,i}}{\sigma_{\Delta
t182,i}^{2}}\right)}{\sum_{i=1}^{A}\alpha_{i}\left(\frac{1}{\sigma_{\Delta
t26,i}^{2}}+\frac{1}{\sigma_{\Delta t53,i}^{2}}+\frac{1}{\sigma_{\Delta
t182,i}^{2}}\right)},$ (16)
where
$\alpha_{i}=\frac{1/\sigma_{\Delta t{\rm Pb},i}^{2}}{1/\sigma_{\Delta t{\rm
Pb},i}^{2}+1/\sigma_{\Delta t26,i}^{2}+1/\sigma_{\Delta
t53,i}^{2}+1/\sigma_{\Delta t182,i}^{2}},$ (17)
and it is again understood that the summation is over only the relevant
samples, and that for isotopic systems without data, effectively $\sigma$ is
infinite. The optimal Pb-Pb age of $t\\!\\!=\\!\\!0$ is thus seen to be a
weighted mean of the ages found by adding the inferred times of formation of
sample $i$ ($\Delta t_{26,i}$ or $\Delta t_{53,i}$ or $\Delta t_{182,i}$) to
the Pb-Pb age of sample $i$ ($t_{{\rm Pb},i}$). We derive Equation 16 in the
Appendix.
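Equations 16 and 17 amount to an $\alpha_{i}$-weighted average over samples with Pb-Pb ages; a minimal sketch (the data layout and key names are ours, for illustration only):

```python
def optimal_t_ss(samples):
    """Optimal Pb-Pb age of t=0 (Eqs. 16-17).

    samples: dicts with Pb-Pb age 't_pb' and 2-sigma uncertainty 's_pb',
    plus optional (dt, sigma) pairs under keys '26', '53', '182' for the
    Al-Mg, Mn-Cr, and Hf-W formation times. Samples lacking any other
    chronometer do not constrain t_SS and are skipped.
    """
    num = den = 0.0
    for s in samples:
        systems = [s[k] for k in ('26', '53', '182') if k in s]
        if not systems:
            continue
        inv = [1.0 / sig**2 for _, sig in systems]
        # Eq. 17: relative weight of the Pb-Pb uncertainty
        alpha = (1.0 / s['s_pb']**2) / (1.0 / s['s_pb']**2 + sum(inv))
        num += alpha * sum(w * (s['t_pb'] + dt)
                           for w, (dt, _) in zip(inv, systems))
        den += alpha * sum(inv)
    return num / den
```

With a single sample and a single isotopic system, this reduces to $t_{{\rm Pb},i}+\Delta t_{26,i}$, as expected.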
In the limit that no Mn-Cr or Hf-W data exist, only Al-Mg data, we find
$t_{\rm
SS}^{*}=\left[\sum_{i=1}^{A}w_{i}\right]^{-1}\times\left[\sum_{i=1}^{A}w_{i}\,\left(t_{{\rm
Pb},i}+\Delta t_{26,i}\right)\right],$ (18)
where $w_{i}=1/\left(\sigma_{\Delta t26,i}^{2}+\sigma_{\Delta t{\rm
Pb},i}^{2}\right)$, as found in Paper I. Equation 16 is thus seen to be a more
general form of Equation 18 from Paper I, and should be used, although in
practice the inclusion of Mn-Cr and Hf-W data changes $t_{\rm SS}^{*}$ only by
$<0.06$ Myr.
In principle, half-lives of ${}^{26}{\rm Al}$, ${}^{235}{\rm U}$, and
${}^{238}{\rm U}$ could be found that optimize the fit. However, the Pb-Pb
ages we have included have already implicitly assumed fixed half-lives for
${}^{235}{\rm U}$ and ${}^{238}{\rm U}$, most commonly $t_{1/2}=703.81\pm
0.96(1\sigma)$ Myr and $t_{1/2}=4468.3\pm 4.8(1\sigma)$ Myr, respectively
(Jaffey et al., 1971; Villa et al., 2016). We investigate the sensitivity of
the results to uncertainties in $\tau_{26}$ in §4.2, but we fix the
${}^{26}{\rm Al}$ half-life at 0.717 Myr during the optimization procedure.
### 3.3 Optimal value of $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm
SS}$
We next find the optimal value of $R_{182,{\rm SS}}=(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$ that minimizes $\chi_{\nu}^{2}$. That is, we
find $R_{182}^{*}$ for which $\partial\mbox{$\chi_{\nu}^{2}$}/\partial
R_{182,{\rm SS}}=0$. Direct differentiation of $\chi_{\nu}^{2}$ yields a
cumbersome result unless it is assumed that $\Delta t_{i}$ is insensitive to
$R_{182,{\rm SS}}$. Fortunately, this is the case, as in almost all instances,
$\Delta t_{i}$ is much more heavily weighted to $\Delta t_{26,i}$ and $\Delta
t_{{\rm Pb},i}$ than to $\Delta t_{182,i}$, the latter having the largest age
uncertainty due to the long half-life of ${}^{182}{\rm Hf}$. Accordingly, we
find:
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}^{*}=$ (19)
$\exp\left\\{\left[\sum_{i=1}^{A}w_{i}\right]^{-1}\times\left[\sum_{i=1}^{A}\,w_{i}\,\left(\ln(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{0,i}+\frac{\Delta
t_{i}}{\tau_{182}}\right)\right]\right\\},$
where $w_{i}=1/\sigma_{\Delta t182,i}^{2}$.
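Equation 19 is a weighted mean taken in log space; a minimal sketch of the same pattern (function name ours; Eq. 20 below is handled identically with the Mn-Cr quantities):

```python
import math

def optimal_r_ss(r0_list, sigma_list, dt_list, tau):
    """Optimal Solar System initial ratio (Eq. 19; Eq. 20 is analogous).

    r0_list    : measured initial ratios R_{0,i}
    sigma_list : their 2-sigma uncertainties
    dt_list    : best-estimate formation times Delta t_i (Myr)
    tau        : mean life of the radionuclide (Myr)
    """
    # weights w_i = 1/sigma_{dt,i}^2, with sigma_{dt} = tau * sigma_R / R
    w = [(r / (tau * s))**2 for r, s in zip(r0_list, sigma_list)]
    log_r = sum(wi * (math.log(r) + dt / tau)
                for wi, r, dt in zip(w, r0_list, dt_list)) / sum(w)
    return math.exp(log_r)
```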
The measured half-life of ${}^{182}{\rm Hf}$ is $8.896\pm 0.089(1\sigma)$ Myr
(Vockenhuber et al., 2004), so the $2\sigma$ uncertainty in the half-life is
only 2%. In §4.2 we investigate the sensitivity of the results to
$\tau_{182}$, but during the optimization we fix the half-life at 8.896 Myr.
### 3.4 Optimal values of $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm
SS}$ and $\tau_{53}$
We next find the optimal values of $R_{53,{\rm SS}}$ and $\tau_{53}$ that
minimize $\chi_{\nu}^{2}$. That is, we find $R_{53,{\rm SS}}^{*}$ and
$\tau_{53}^{*}$ for which $\partial\mbox{$\chi_{\nu}^{2}$}/\partial R_{53,{\rm
SS}}=0$ and $\partial\mbox{$\chi_{\nu}^{2}$}/\partial\tau_{53}=0$
simultaneously.
We again take advantage of the fact that $\Delta t_{i}$ is weighted much more
heavily toward the Al-Mg formation times $\Delta t_{26,i}$, and the Pb-Pb
formation times $\Delta t_{{\rm Pb},i}$ tied to them, than toward the Mn-Cr
formation times. In that
case, assuming fixed $\tau_{53}$,
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}^{*}=$ (20)
$\exp\left\\{\left[\sum_{i=1}^{A}w_{i}\right]^{-1}\times\left[\sum_{i=1}^{A}\,w_{i}\,\left(\ln(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{0,i}+\frac{\Delta
t_{i}}{\tau_{53}}\right)\right]\right\\},$
where $w_{i}=1/\sigma_{\Delta t53,i}^{2}$.
Conversely, assuming fixed $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm
SS}$, we find
$\tau_{53}^{*}=\frac{\sum_{i=1}^{A}w_{i}\,\Delta t_{i}\,\ln\left(R_{53,{\rm
SS}}/R_{53,i}\right)}{\sum_{i=1}^{A}w_{i}\,\left[\ln\left(R_{53,{\rm
SS}}/R_{53,i}\right)\right]^{2}},$ (21)
where again $w_{i}=1/\sigma_{\Delta t53,i}^{2}$.
Because of the coupled nature of the problem, we first assume a value for
$\tau_{53}$ and find $R_{53,{\rm SS}}^{*}$; then, setting $R_{53,{\rm
SS}}=R_{53,{\rm SS}}^{*}$, we find the optimal value $\tau_{53}^{*}$; then,
refining $\tau_{53}$ to be equal to this new $\tau_{53}^{*}$, we again find
$R_{53,{\rm SS}}^{*}$ and iterate this procedure to convergence.
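The alternating scheme can be sketched as follows. This is a simplification: we hold the $\sigma_{\Delta t53,i}$ fixed during the iteration, whereas Eq. 9 makes them depend weakly on $\tau_{53}$; the function name and data layout are ours:

```python
import math

def fit_mn_cr(r53_list, dt_list, sigma_dt_list, tau_init):
    """Alternating optimization of (53Mn/55Mn)_SS and tau_53 (Eqs. 20-21)."""
    tau = tau_init
    w = [1.0 / s**2 for s in sigma_dt_list]
    for _ in range(200):
        # Eq. 20: log-space weighted mean at fixed tau
        r_ss = math.exp(sum(wi * (math.log(r) + dt / tau)
                            for wi, r, dt in zip(w, r53_list, dt_list))
                        / sum(w))
        # Eq. 21: optimal mean life at fixed R_SS
        logs = [math.log(r_ss / r) for r in r53_list]
        tau_new = (sum(wi * dt * l for wi, dt, l in zip(w, dt_list, logs))
                   / sum(wi * l**2 for wi, l in zip(w, logs)))
        converged = abs(tau_new - tau) < 1e-9
        tau = tau_new
        if converged:
            break
    return r_ss, tau
```

A useful sanity check is that self-consistent synthetic data (ratios generated from a known $\tau_{53}$ and $R_{\rm SS}$) are a fixed point of the iteration.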
### 3.5 Searching for an optimal fit
The equations above for $t_{\rm SS}^{*}$, $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{\rm SS}^{*}$, $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm SS}^{*}$ and $\tau_{53}^{*}$ yield a fast way to search the multi-
variable parameter space. In practice, however, we also can vary the four
input parameters across a four-dimensional grid of input parameters and find
the minimum $\chi_{\nu}^{2}$ by brute force. Testing $\sim 10^{8}$
combinations of input parameters takes less than one minute on a typical
laptop computer. We find the two approaches yield almost identical answers,
with differences in $\chi_{\nu}^{2}$ at the $<1\%$ level.
### 3.6 Statistical significance of a fit
As in Paper I, the significance of a fit is first assessed by considering the
$z$ scores of various ages, e.g., $z_{26,i}=2(\Delta t_{26,i}-\Delta
t_{i})/\sigma_{\Delta t26}$. In an acceptable fit, most (95%) but not all $z$
scores should be characterized by $\left|z\right|<2$.
The fit is also assessed in a global sense through $\chi_{\nu}^{2}$. The
procedure above finds the minimum value of $\chi_{\nu}^{2}$ given fixed half-
lives of ${}^{26}{\rm Al}$ and ${}^{182}{\rm Hf}$, allowing variations in four
parameters: $t_{\rm SS}$, $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm
SS}$, $\tau_{53}$, and $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm
SS}$. Here $\nu$ is the number of degrees of freedom, equal to $N-4$ if we
have $N$ times of formation being fit simultaneously, using 4 free parameters.
If the isotopic systems are concordant, then we should see
$\mbox{$\chi_{\nu}^{2}$}\approx 1$. In general, the probability that the data
are concordant but yield a value of $\chi_{\nu}^{2}$ as high as observed due
to measurement errors alone is the cumulative distribution
$P_{\nu}(>\mbox{$\chi_{\nu}^{2}$})$ (Wendt and Carl, 1991), for which
closed-form solutions exist. The usual (“$2\sigma$”) threshold
for acceptance is that this probability exceed 5%, which places an upper limit
to $\chi_{\nu}^{2}$ such that $P_{\nu}(\mbox{$\chi_{\nu}^{2}$}_{\rm
max})=0.05$. For example, if $\nu=16$, then $\mbox{$\chi_{\nu}^{2}$}_{\rm
max}=1.644$. As discussed by Wendt and Carl (1991), in the limit of large $N$,
if the data scatter is only due to (Gaussian) measurement error, then the
probability is 95% that $\mbox{$\chi_{\nu}^{2}$}_{\rm
min}<\mbox{$\chi_{\nu}^{2}$}<\mbox{$\chi_{\nu}^{2}$}_{\rm max}$, where
$\mbox{$\chi_{\nu}^{2}$}_{\rm max}\approx
1+2\,\left(\frac{2}{N-M}\right)^{1/2},$ (22)
where again $N-M$ reflects the fact that there are $N$ data fit with $M$ input
parameters, so there are $N-M$ degrees of freedom. There is only a $<5\%$
probability that $\mbox{$\chi_{\nu}^{2}$}>\mbox{$\chi_{\nu}^{2}$}_{\rm max}$
(or less than a corresponding function $\mbox{$\chi_{\nu}^{2}$}_{\rm min}$).
Equation 22 is only approximate: it gives $\mbox{$\chi_{\nu}^{2}$}_{\rm
max}=1.707$ for $\nu=16$, compared to the exact value of 1.644.
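The approximate threshold of Equation 22 is easily evaluated; a minimal sketch (function names ours; the exact threshold can instead be obtained from the chi-squared quantile function, e.g., `scipy.stats.chi2.ppf(0.95, nu)/nu`):

```python
def chi2_max(n, m):
    """Approximate 95% upper limit on the reduced chi-squared (Eq. 22)."""
    return 1.0 + 2.0 * (2.0 / (n - m)) ** 0.5

def fit_is_acceptable(chi_nu_sq, n, m):
    """True if the fit passes the (approximate) 2-sigma threshold."""
    return chi_nu_sq < chi2_max(n, m)

# nu = N - M = 16 degrees of freedom: threshold ~1.707 (exact value: 1.644)
threshold = chi2_max(20, 4)
```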
If some subset of the achondrites above yields
$\mbox{$\chi_{\nu}^{2}$}<\mbox{$\chi_{\nu}^{2}$}_{\rm max}$, we consider this
a satisfactory fit, and the isotopic systems can be considered concordant in
those achondrites. Such a fit would fail to invalidate the two assumptions
that ${}^{26}{\rm Al}$, ${}^{53}{\rm Mn}$, and ${}^{182}{\rm Hf}$ were
distributed homogeneously and that the isotopic systems achieved closure
simultaneously in each sample. If some subset
of achondrites yields $\mbox{$\chi_{\nu}^{2}$}>\mbox{$\chi_{\nu}^{2}$}_{\rm
max}$, then either the assumption of homogeneity has been violated, or the
closures of some isotopic systems were not simultaneous in one or more of the
achondrites, or both. Petrologic context would be necessary to assess these
possibilities.
## 4 Statistical Chronometry Results
### 4.1 Volcanic achondrites
Because the assumption of simultaneously closed isotopic systems is more
likely to be satisfied in the volcanic achondrites than other achondrites, we
begin our analysis with the three quenched angrites D’Orbigny, SAH 99555, and
NWA 1670, the eucrite-like Asuka 881394, and NWA 7325, and the rapidly cooled
CC achondrites NWA 2976 and NWA 6704. These are the same seven achondrites
considered in Paper I, for which the Al-Mg and Pb-Pb formation times were
shown to be concordant. We now ask whether they remain concordant when
considering their Mn-Cr and Hf-W ages as well.
We first consider only the Al-Mg and Pb-Pb systems. Across these seven
achondrites, there are 14 formation times $\Delta t_{26}$ and $\Delta t_{\rm
Pb}$ which we fit using the single parameter $t_{\rm SS}$. We find an optimal
fit $t_{\rm SS}=4568.377$ Myr, as in Paper I, with
$\mbox{$\chi_{\nu}^{2}$}=0.979$, an excellent fit with
$P(\mbox{$\chi_{\nu}^{2}$})=47\%$.
We next consider the effects of including Hf-W ages, which exist for D’Orbigny
and SAH 99555. Across these seven achondrites there are now 16 formation times
to be fit using only the two parameters $t_{\rm SS}$ and $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$. We find an optimal fit
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}^{*}=10.402\times 10^{-5}$
$t_{\rm SS}^{*}=4568.370\,{\rm Myr}.$
with $\mbox{$\chi_{\nu}^{2}$}=1.24$, also a good fit, with
$P(\mbox{$\chi_{\nu}^{2}$})=24\%$. These seven achondrites can easily be
considered concordant across their Al-Mg, Hf-W, and Pb-Pb ages.
An examination of the $z$ scores of the formation times strengthens the case
for concordance. The Pb-Pb age of SAH 99555 is discordant at the $2.2\sigma$
level, and the Pb-Pb age of NWA 6704 at almost the $2.0\sigma$ level; but this
is exactly as expected. Assuming these ages are scattered only due to their
reported measurement uncertainties, we would expect 68% of the 16 ages, i.e.,
10.9, to have $\left|z\right|<1$; 27%, or 4.3, to have $1<\left|z\right|<2$,
5%, or 0.8, to have $2<\left|z\right|<3$, and $0.3\%$, or 0.05, to have
$\left|z\right|>3$. The actual distributions are 12, 3, 1, and 0.
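The expected counts quoted above follow from the standard normal distribution; a short sketch (assuming scipy) that reproduces them for $N=16$ ages:

```python
# Expected number of ages in each |z| bin, if deviations from the model
# are due only to the reported (Gaussian) measurement uncertainties.
from scipy.stats import norm

def expected_z_counts(n: int) -> list:
    p_lt1 = 2 * norm.cdf(1) - 1                  # |z| < 1
    p_12 = (2 * norm.cdf(2) - 1) - p_lt1         # 1 < |z| < 2
    p_23 = (2 * norm.cdf(3) - 1) - p_lt1 - p_12  # 2 < |z| < 3
    p_gt3 = 2 * norm.sf(3)                       # |z| > 3
    return [n * p for p in (p_lt1, p_12, p_23, p_gt3)]

print(expected_z_counts(16))  # ~[10.9, 4.3, 0.7, 0.04]
```

The exact normal-distribution values for the last two bins are about 0.7 and 0.04; the text quotes the slightly coarser 0.8 and 0.05.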
We now consider the effects of including the Mn-Cr formation times that exist
for D’Orbigny, SAH 99555, NWA 1670, and Asuka 881394. Across these seven
achondrites there are $N\\!\\!=\\!\\!21$ formation times that must be fit
simultaneously, demanding
$\mbox{$\chi_{\nu}^{2}$}<\mbox{$\chi_{\nu}^{2}$}_{\rm max}\approx 1.69$.
Fitting the four parameters simultaneously, we find the following global
minimum in $\chi_{\nu}^{2}$:
$\tau_{53}^{*}=6.70\,{\rm Myr}\quad(t_{1/2}=4.64\,{\rm Myr})$
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}^{*}=6.87\times 10^{-6}$
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}^{*}=10.40\times 10^{-5}$
$t_{\rm SS}^{*}=4568.37\,{\rm Myr}.$
For these values, $\mbox{$\chi_{\nu}^{2}$}=1.59$, which is an acceptable fit
(6% probability). We do not assign significance to this solution, though,
because of the aphysical ${}^{53}{\rm Mn}$ half-life. In fact, a half-life of
3.8 Myr would fit the data almost equally well, with
$\mbox{$\chi_{\nu}^{2}$}=1.69$ and $P(\mbox{$\chi_{\nu}^{2}$})=5\%$. We defer
discussion of Mn-Cr systematics until after considering other achondrites.
### 4.2 All achondrites
We now consider whether the Al-Mg, Hf-W, and Pb-Pb formation times will remain
concordant if we include other achondrites into the statistical test. We now
add the six plutonic angrites LEW 86010, NWA 1296, NWA 2999, NWA 4590, NWA
4801, and Angra dos Reis (AdoR), to the list of seven achondrites above. Among
these 12 achondrites there are 26 formation times to be fit using only the
same two parameters, $t_{\rm SS}$ and $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm
Hf}$})_{\rm SS}$, as above. An acceptable fit requires an even smaller
$\chi_{\nu}^{2}$, less than about 1.58.
Amazingly, we find an acceptable optimal solution:
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}^{*}=10.500\times 10^{-5}$
$t_{\rm SS}^{*}=4568.326\,{\rm Myr}.$
For these values, $\mbox{$\chi_{\nu}^{2}$}=1.31$, which is a good fit (14%
probability). The $z$ scores also appear distributed normally, except for NWA
4801; its Hf-W formation time is discordant at the $2.9\sigma$ level.
If we do not include the Hf-W formation time of NWA 4801, we can fit 11
achondrites with 24 formation times using only the same two parameters,
$t_{\rm SS}$ and $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$, as
above. An acceptable fit requires an even smaller
$\mbox{$\chi_{\nu}^{2}$}<1.60$. Now the optimal fit is
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}^{*}=10.427\times 10^{-5}$
$t_{\rm SS}^{*}=4568.360\,{\rm Myr}.$
For these values, $\mbox{$\chi_{\nu}^{2}$}=0.959$, which is an exceptionally
good fit, with $P(\mbox{$\chi_{\nu}^{2}$})=51.4\%$. The $z$ scores are also
distributed normally: 19 with $\left|z\right|<1$, 4 with $1<\left|z\right|<2$,
1 with $2<\left|z\right|<3$, and 0 with $\left|z\right|>3$. With the exception
of the Hf-W formation time of NWA 4801, the Al-Mg, Hf-W, and Pb-Pb systems
appear concordant across all 11 achondrites, including the CC achondrites and
the quenched and plutonic angrites.
We can vary these two parameters about these optimal values and see which
combinations yield solutions with $>5\%$ probability, which demands
$\mbox{$\chi_{\nu}^{2}$}<\mbox{$\chi_{\nu}^{2}$}_{\rm max}\approx 1.58$
(actually, 1.541). For fixed $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm
Hf}$})_{\rm SS}$, we find the acceptable range is $t_{\rm SS}\approx
4568.36\pm 0.24$ Myr. For fixed $t_{\rm SS}$, we find the acceptable range of
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$ is $\approx(10.42\pm
0.28)\times 10^{-5}$. Across these ranges, the Al-Mg, Hf-W and Pb-Pb systems
remain concordant with each other.
### 4.3 Mn-Cr ages
We now consider the concordancy of the Mn-Cr formation times across all the
achondrites, rejecting only the Hf-W formation time of NWA 4801. We include
all Mn-Cr formation times, yielding 37 formation times across 14 achondrites.
We seek values of $t_{\rm SS}$, $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm
Hf}$})_{\rm SS}$, $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$, and
the ${}^{53}{\rm Mn}$ half-life that optimize the fit. The range of values of
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$ and $t_{\rm SS}$ that
provide an acceptable fit is quite restricted, so we fix them at the values
above. In contrast, there are many combinations of $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ and ${}^{53}{\rm Mn}$ half-life that provide
acceptable fits. We have calculated $\chi_{\nu}^{2}$ across a grid of values
for the ${}^{53}{\rm Mn}$ half-life and the initial $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ ratio in the Solar System. For each value of
$\chi_{\nu}^{2}$ we calculate the probability $P_{\rm
fit}(\mbox{$\chi_{\nu}^{2}$})$ that the fit would have that $\chi_{\nu}^{2}$,
using the formulas of Wendt and Carl (1991) and assuming 33 degrees of
freedom. In Figure 2a we plot $\log_{10}[P_{\rm
fit}(\mbox{$\chi_{\nu}^{2}$})]$. Values $>-1.30$ are statistically significant
at the $2\sigma$ level ($>5\%$ probability). The minimum $\chi_{\nu}^{2}$
surface in this two-dimensional space of $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm SS}$ vs. $\tau_{53}$ describes a long and narrow trough. Most of
the constraints on the Mn-Cr system come from quenched angrites like D’Orbigny
that formed at around $\Delta t\approx 5$ Myr, so empirically
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}\,\exp(-5\,{\rm
Myr}/\tau_{53})\approx 3.24\times 10^{-6}$. For a given half-life, the
uncertainty in $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ is
typically about $\pm(0.20)\times 10^{-6}$. It is apparent that half-lives
anywhere from $<3$ to $>5$ Myr could fit the data. This is aphysical, though,
as experiments constrain the half-life to a range 3.1 to 4.3 Myr.
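The trough degeneracy can be made concrete: given the empirical constraint at $\Delta t\approx 5$ Myr, each trial half-life implies a corresponding $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$. A sketch of that back-extrapolation, using the values quoted in the text:

```python
import math

def mn_ss_ratio(t_half_myr: float, ratio_at_5myr: float = 3.24e-6) -> float:
    """(53Mn/55Mn)_SS implied by the empirical constraint
    (53Mn/55Mn)_SS * exp(-5 Myr / tau_53) ~ 3.24e-6."""
    tau = t_half_myr / math.log(2)  # mean life from half-life
    return ratio_at_5myr * math.exp(5.0 / tau)

print(mn_ss_ratio(3.80))  # ~8.07e-6, close to the fitted 8.09e-6
```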
We next calculate the range of half-lives that are consistent with both the
chronometry and the experimental determinations. To do so we calculate a new
probability $P_{1/2}=0.3989\,\exp\left[-(t_{1/2}-3.70\,{\rm
Myr})^{2}/2(0.31\,{\rm Myr})^{2}\right]$, due to experimental determination of
the half-life. In Figure 2b we plot the joint probability $P_{\rm fit}\times
P_{1/2}$ as a function of $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm
SS}$ and the ${}^{53}{\rm Mn}$ half-life. The most probable value of the
${}^{53}{\rm Mn}$ half-life is about 3.80 Myr and the probable ($>5\%$) range
is restricted to 3.57 to 4.14 Myr, or roughly $3.80\pm 0.23(2\sigma)$ Myr. The
most likely value 3.80 is not very different from the commonly adopted 3.7
Myr, but has half the uncertainty. Half-lives from 3.1 to 3.5 Myr are allowed
by the experiments, but are less probable; in combination with the
improbability of matching the chronometry, they can be ruled out. Likewise,
half-lives from 4.1 to $>5$ Myr make the formation times concordant, but are
not also consistent with the experimental half-life. Under the assumption of
uniform ${}^{53}{\rm Mn}$, the range of half-lives consistent with both the
chronometry and the experiments is about 3.6 to about 4.1 Myr. Had the only
half-lives making the Mn-Cr ages concordant fallen outside the range allowed
by experiments, the assumption of homogeneous ${}^{53}{\rm Mn}$ would have
been invalidated; instead, the assumption is supported.
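The combination of chronometric and laboratory constraints described above amounts to multiplying two probabilities; a minimal sketch using the Gaussian $P_{1/2}$ given in the text:

```python
import math

def p_half_life(t_half_myr: float) -> float:
    """Gaussian weight from the experimental half-life determinations
    (mean 3.70 Myr, sigma 0.31 Myr, as given in the text)."""
    return 0.3989 * math.exp(-(t_half_myr - 3.70) ** 2 / (2 * 0.31 ** 2))

def joint_probability(p_fit: float, t_half_myr: float) -> float:
    """Joint probability of the chronometric fit and the lab half-life."""
    return p_fit * p_half_life(t_half_myr)
```

Maximizing this product along the $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ vs. $\tau_{53}$ trough is what selects the most probable half-life of about 3.80 Myr.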
With the range of likely half-lives constrained, the range of
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ values consistent with
concordant ages is also found. The allowed range of $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ is $(8.09_{-0.59}^{+0.71})\times 10^{-6}$. It
is understood that ${}^{53}{\rm Mn}$ and $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm SS}$ cannot be varied independently and still fit the formation
times.
If we adopt a half-life of 3.80 Myr, we find the following optimal fit:
$\tau_{53}^{*}=5.482\,{\rm Myr}\quad(t_{1/2}=3.80\,{\rm Myr})$
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}^{*}=8.09\times 10^{-6}$
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}^{*}=10.421\times 10^{-5}$
$t_{\rm SS}^{*}=4568.355\,{\rm Myr}.$
The fit is completely concordant. For this combination,
$\mbox{$\chi_{\nu}^{2}$}=1.09$, indicating an excellent fit (33% probability).
The $z$ scores appear to be distributed normally. Among 37 formation times, we
would expect 25.2 to be discordant by $<1\sigma$, 10.0 to be discordant by 1
to $2\sigma$, 1.7 to be discordant by 2 to $3\sigma$, and 0.1 to be discordant
by $>3\sigma$. We find 24, 11, 2, and 0. The two most discordant ages are the
Pb-Pb age of SAH 99555 (discrepant at the $2.3\sigma$ level) and the Mn-Cr age
of NWA 6704 (discrepant at the $2.4\sigma$ level). In summary, all 37 robust
formation times (except the Hf-W formation time of NWA 4801) are concordant.
Figure 2: Left. Logarithm of the probability of the fit as a function of the
input parameters, the assumed half-life of ${}^{53}{\rm Mn}$ and the initial
ratio $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$. Values $>-1.3$ (bold
contour) are statistically significant ($>5\%$ probability). Right. The joint
probability including the experimental constraints on the half-life.
Fixing the parameters above, we can investigate the discrepancies of the ages
we have excluded. For NWA 2999, based on the reported $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{0}=(1.28\pm 0.23)\times 10^{-6}$ (Shukolyukov and
Lugmair, 2008), we would find $\Delta t_{53}=10.11\pm 0.99$ Myr, which differs
from our computed $\Delta t=7.95$ Myr by $>4\sigma$. This value was not
published in the refereed literature. Lugmair and Shukolyukov (1998) had
regressed two Mn-Cr data points for AdoR along with data for LEW 86010 and
reported $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}=(1.25\pm 0.07)\times
10^{-6}$. This value and uncertainty would yield the same Mn-Cr formation time
$\Delta t_{53}=9.87\pm 0.20$ Myr for AdoR as for LEW 86010, which is
discrepant from AdoR’s $\Delta t=12.08$ Myr by $>22\sigma$. Based on Mn-Cr and
Hf-W data, we calculate a time of formation of LEW 86010 of $\Delta t=9.88$
Myr. Its Pb-Pb age is $4558.55\pm 0.15$ Myr, based on a U isotopic ratio
137.88 (Amelin, 2008a), yielding $\Delta t_{\rm Pb}=9.80\pm 0.15$. To be
concordant to within $2\sigma$, it should be younger than 4558.55 Myr, by
about 0.07 Myr, but not more than 0.22 Myr. This is comparable to the age
correction found using the ${}^{238}{\rm U}/{}^{235}{\rm U}$ ratio of the
similar plutonic angrite NWA 4590. These data, previously flagged as
questionable, indeed turn out not to fit with the other data.
This leaves only the case of the Hf-W formation time of NWA 4801. Including it
in the optimization slightly changes the inferred parameters [e.g.,
${}^{53}{\rm Mn}$ half-life $\equiv 3.80$ Myr, $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{\rm SS}=8.093\times 10^{-6}$, $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{\rm SS}=10.496\times 10^{-5}$, $t_{\rm SS}=4568.320$
Myr], but the difference makes the fit for SAH 99555 worse, with $\Delta
t_{182}$ and $\Delta t_{\rm Pb}$ discrepant at the $2.2\sigma$ and $2.6\sigma$
levels, and now $\Delta t_{53}$ for NWA 6704 is discrepant by $2.5\sigma$.
Although the ages of NWA 4801 are not terribly discordant [its inferred
formation time $\Delta t_{182}=10.81$ Myr is discrepant from its inferred
$\Delta t=11.47$ Myr by $2.9\sigma$], including NWA 4801 in the fit makes the
overall fit much worse, with $\mbox{$\chi_{\nu}^{2}$}=1.408$. While this is
below the threshold of $1.50$ for statistical significance, the probability of
the fit is only 6%, much worse than the 33% found without including NWA
4801. McKibbin et al. (2015) considered this achondrite disturbed, noting the
description of it by Irving and Kuehner (2007) as an annealed breccia formed
originally by disruption of a very coarse-grained plutonic protolith, implying
a late thermal annealing event that could have reset some isotopic systems.
Kleine et al. (2012) also noted it had an unusually high abundance of W in its
fines fraction. We choose to exclude NWA 4801 from our optimization when
calculating Solar System values, but note that technically the chronometry
would be concordant even including it.
### 4.4 Sensitivity to Parameters
Around this optimal fit, we can examine what ranges of input parameters yield
acceptable fits. We first fix the parameters at the values above, then vary
them one at a time, finding the values that keep the 37 ages concordant
($\mbox{$\chi_{\nu}^{2}$}<1.44$). We find:
$\tau_{53}^{*}=5.48\pm 0.33\,{\rm Myr}\quad(t_{1/2}=3.80\pm 0.23\,{\rm Myr})$
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}^{*}=(8.09_{-0.58}^{+0.71})\times 10^{-6}$
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}^{*}=(10.42\pm 0.23)\times 10^{-5}$
$t_{\rm SS}^{*}=4568.35\pm 0.19\,{\rm Myr}.$
In our analysis, we formally allowed only four parameters to vary: $t_{\rm
SS}$, $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$, $\tau_{53}$, and
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$. We fixed
$\tau_{26}=1.0344$ Myr ($t_{1/2}=0.717$ Myr) and $\tau_{182}=12.834$ Myr
($t_{1/2}=8.90$ Myr), but equations similar to Equation 22 can be used to
query what the optimal values of $\tau_{26}$ and $\tau_{182}$ should be.
Varying the half-life of ${}^{26}{\rm Al}$ (and optimizing the other
variables), we find that the optimal fit is for values near 0.717 Myr, but
that no meaningful variations in $\chi_{\nu}^{2}$ occur over the
experimentally allowed range. The optimal value of $t_{\rm SS}$ does change,
however, with variations in $\tau_{26}$ across its $\pm 0.017$ Myr range
yielding variations in $t_{\rm SS}$ of $\pm 0.14$ Myr. Varying the half-life
of ${}^{182}{\rm Hf}$ across its experimentally allowed range does not change
the fit almost at all, with variations in $t_{\rm SS}$ less than $\pm 0.01$
Myr.
Finally, we examine the effects of the age correction of 0.19 Myr applied to
all achondrites other than the CC achondrites (NWA 2976 and NWA 6704). As
discussed in Paper I, we take the fraction of U in pyroxenes, $f_{\rm cpx}$,
to be 1 in those achondrites and, following Tissot et al. (2017), 0.5 in
angrites and other NC achondrites. If we do not apply this correction to the
NC achondrites (setting $f_{\rm cpx}=1$), their ages would be 0.19 Myr older
than our optimal solution, and $t_{\rm SS}$ would increase by about 0.14 Myr,
but the fit would worsen: $\chi_{\nu}^{2}$ would increase from 1.09 to 1.32,
and the Pb-Pb age of SAH 99555 would become discordant at the $2.7\sigma$ level. If we
apply the maximum correction of 0.38 Myr ($f_{\rm cpx}=0$) to the NC
achondrites, they would be 0.19 Myr younger than our optimal solution, and
$t_{\rm SS}$ would decrease by about 0.14 Myr, and overall the fit would be
much improved, with $\mbox{$\chi_{\nu}^{2}$}=0.96$, although now the Pb-Pb age
of D’Orbigny would be discordant at the $2.6\sigma$ level. These findings
suggest that the correction for the isotopic fractionation between pyroxenes
and whole-rock U isotopic compositions advocated by (Tissot et al., 2017) is
necessary, i.e., current ages of NC achondrites are overestimated, by at least
0.2 Myr.
### 4.5 Summary
We have compiled and vetted nearly 40 reported formation times measured by
four isotopic systems, across 14 achondrites. We find them remarkably
concordant. Only the Hf-W formation time of NWA 4801 appears not to fit,
possibly because it suffered a late-stage annealing event (Irving and Kuehner,
2007), possibly associated with the unusually high W abundance in its matrix
(Kleine et al., 2012). Two previously reported Mn-Cr ages of plutonic angrites
could not be reconciled, but the one for NWA 2999 was not in the refereed
literature, and the one for Angra dos Reis was actually conflated with the Mn-
Cr isochron for LEW 86010. The Pb-Pb age of LEW 86010 was not U-corrected, but
would be reconciled with the other formation times if its U isotopic ratio
were the same as NWA 4590, another similar plutonic angrite. The other 37
formation times across 14 achondrites are made remarkably concordant by
optimizing only four parameters. Across the following ranges, the solution is
concordant:
$t_{1/2,53}=3.80\pm 0.23\,{\rm Myr}$
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}^{*}=(8.09\pm 0.65)\times 10^{-6}$
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}^{*}=(10.42\pm 0.23)\times 10^{-5}$
$t_{\rm SS}^{*}=\mbox{\boldmath$4568.35\pm 0.19\,{\rm Myr}$}.$
Uncertainties are $2\sigma$. The values of $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ and the ${}^{53}{\rm Mn}$ half-life are not
to be varied independently; if the half-life were known, then the uncertainty
in $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ would be only
$\pm(0.20)\times 10^{-6}$. The center of these ranges is a concordant
solution, with $\mbox{$\chi_{\nu}^{2}$}=1.16$ (24% probability) and
deviations distributed normally.
Using these values, we calculate the following times of formation for all the
achondrites, listed in Table 2 and depicted in Figure 3. For NWA 4801, we
calculated a weighted mean $\Delta t$ using only the Mn-Cr and Pb-Pb ages.
Table 2: Inferred times of formation after $t\\!\\!=\\!\\!0$, in Myr, of 14
achondrites according to the Al-Mg, Mn-Cr, Hf-W, and Pb-Pb systems, and
weighted means of formation times, using the parameters that optimize the fit
to all but NWA 4801. Italics denote formation times differing by $>2\sigma$
from the inferred formation time of the achondrite.
Achondrite | $\Delta t_{26}$ | $2\sigma$ | $\Delta t_{53}$ | $2\sigma$ | $\Delta t_{182}$ | $2\sigma$ | $\Delta t_{\rm Pb}$ | $2\sigma$ | $\Delta t$ | $2\sigma$
---|---|---|---|---|---|---|---|---|---|---
D’Orbigny | 5.06 | 0.10 | 5.03 | 0.06 | 4.84 | 0.31 | 5.12 | 0.21 | 5.03 | 0.05
SAH 99555 | 5.14 | 0.05 | 4.95 | 0.28 | 5.35 | 0.28 | 4.85 | 0.24 | 5.12 | 0.05
NWA 1670 | 4.64 | 0.10 | 5.72 | 1.77 | | | 4.34 | 0.66 | 4.63 | 0.10
NWA 1296 | | | | | 5.09 | 0.51 | 5.34 | 0.45 | 5.23 | 0.34
NWA 2999 | | | | | 8.37 | 0.80 | 7.81 | 0.47 | 7.95 | 0.41
LEW 86010 | | | 9.84 | 0.20 | 9.95 | 1.12 | | | 9.84 | 0.20
NWA 4590 | | | 12.35 | 2.58 | 10.41 | 0.47 | 10.79 | 0.38 | 10.66 | 0.29
NWA 4801 | | | 11.93 | 1.76 | 10.72 | 0.45 | 11.64 | 0.21 | 11.64 | 0.21
Angra dos Reis | | | 10.94 | 1.99 | 12.23 | 0.77 | 12.10 | 0.29 | 12.09 | 0.27
Ibitira | | | 11.14 | 2.59 | | | 11.80 | 0.57 | 11.77 | 0.56
Asuka 881394 | 3.82 | 0.04 | 4.05 | 0.32 | | | 3.60 | 0.53 | 3.82 | 0.04
NWA 7325 | 5.33 | 0.05 | | | | | 4.65 | 1.7 | 5.33 | 0.05
NWA 2976 | 5.03 | 0.04 | | | | | 5.20 | 0.57 | 5.03 | 0.04
NWA 6704 | 5.29 | 0.13 | 6.24 | 0.72 | | | 5.60 | 0.26 | 5.37 | 0.11
Figure 3: Times of formation of 14 achondrites, using the following
parameters: $t_{\rm SS}=4568.36$ Myr, $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm SS}=8.09\times 10^{-6}$, ${}^{53}{\rm Mn}$ half-life 3.80 Myr, and
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}=10.42\times 10^{-5}$.
Using the measurements reported in Table 1, we calculate for each achondrite
the formation times after $t\\!\\!=\\!\\!0$: $\Delta t_{26}$ (red), $\Delta
t_{53}$ (green), $\Delta t_{182}$ (blue), and $\Delta t_{\rm Pb}$ (violet),
and their weighted mean $\Delta t$ (black dashed line), as listed in Table 2.
If concordant, 95% of the ages (i.e., all but two) from all isotopic systems
should match $\Delta t$ to within their $2\sigma$ uncertainties.
Only the Mn-Cr age of NWA 6704 ($2.4\sigma$), the Pb-Pb age of SAH 99555
($2.9\sigma$) and the Hf-W age of NWA 4801 ($4.2\sigma$) are discordant.
Including NWA 4801, $\mbox{$\chi_{\nu}^{2}$}=1.41$, which is still
statistically significant (6% probability), but we consider it disturbed and
exclude it. The 37 formation times across four isotopic systems, in 14
achondrites, are then made concordant in a statistical sense
($\mbox{$\chi_{\nu}^{2}$}=1.09$, 33% probability; deviations distributed
normally) using only 4 input parameters. This supports rather than falsifies
the assumption of homogeneity of radionuclides.
## 5 Discussion
### 5.1 Comparison to CAIs
The parameters we advocate were not derived using any knowledge of CAIs
whatsoever, except that our decision to define $(\mbox{${}^{26}{\rm
Al}/{}^{27}{\rm Al}$})_{\rm SS}=5.23\times 10^{-5}$ at $t\\!\\!=\\!\\!0$ was
informed by CAIs. Therefore, measurements of the ${}^{53}{\rm Mn}$ half-life,
or direct inferences of $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$,
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$ or $t_{\rm SS}$ from
measurements of CAIs, provide a severe test of our model’s predictions.
#### 5.1.1 ${}^{53}{\rm Mn}$ half-life
Our inferred half-life of ${}^{53}{\rm Mn}$, $3.80\pm 0.23(2\sigma)$ Myr, is
much more precise than the oft-quoted value $3.7\pm 0.37(1\sigma)$ Myr (Honda
and Imamura, 1971). [NB: It is common in the experimental physics literature
to report uncertainties in measured values as the standard deviation in the
data, i.e., as $1\sigma$ uncertainties. In contrast, it is common in the
cosmochemistry literature to report uncertainties as 95% confidence intervals,
i.e., $2\sigma$ uncertainties. In citations below where the uncertainty was
not defined, we add a question mark.] In fact, most measurements of the
${}^{53}{\rm Mn}$ half-life are quite uncertain. Besides the commonly cited
value $3.7\pm 0.37(1\sigma)$ Myr, there are $2.9\pm 1.2(1\sigma?)$ Myr
(Matsuda et al., 1971) and $3.9\pm 0.6(1\sigma?)$ Myr (Woelfle et al., 1973).
These three values are all within uncertainty of their weighted mean of
$3.70\pm 0.61(2\sigma)$ Myr.
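The weighted mean quoted here follows from standard inverse-variance weighting; a minimal sketch:

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, total ** -0.5

# The three experimental 53Mn half-lives (1-sigma uncertainties, in Myr)
mean, sigma = weighted_mean([3.7, 2.9, 3.9], [0.37, 1.2, 0.6])
print(round(mean, 2), round(2 * sigma, 2))  # 3.7 0.61, as quoted (2-sigma)
```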
There also is a long history of trying to reconcile Mn-Cr and other ages in
meteorites, leaving the ${}^{53}{\rm Mn}$ half-life as a free parameter. Using
Apollo 14 samples to measure cosmogenic ${}^{53}{\rm Mn}$, Herr et al. (1972)
derived $3.8\pm 0.7(1\sigma?)$ Myr. Reconciling Mn-Cr systematics with Al-Mg
systematics in 18 chondrites, Heimann et al. (1974) found $3.85\pm
0.4(1\sigma?)$ Myr. Likewise, Nyquist et al. (2009) compared Al-Mg and Mn-Cr
ages in achondrites and CAIs and inferred that the best fit to the
${}^{53}{\rm Mn}$ half-life was that it was the ${}^{26}{\rm Al}$ half-life
divided by $(0.23\pm 0.04)$, implying a half-life $\approx 3.1$ Myr. On the
other hand, the regression they performed of Mn-Cr against Pb-Pb ages, using
LEW 86010 and Asuka 881394, suggested a much longer half-life of 4.8 Myr.
Sanborn et al. (2019) regressed Mn-Cr ages against Pb-Pb relative ages for
several achondrites, allowing the half-life of ${}^{53}{\rm Mn}$ to be a free
parameter, and found a decay constant $1.8\pm 0.2(2\sigma)\times 10^{-7}\,{\rm
yr}^{-1}$, equivalent to a $3.85\pm 0.43(2\sigma)$ Myr half-life. Our inferred
value is exactly in line with these estimates, but more precise.
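For reference, the conversion between the decay constant reported by Sanborn et al. (2019) and a half-life is:

```python
import math

def half_life_myr(decay_const_per_yr: float) -> float:
    """Half-life in Myr from a decay constant in yr^-1."""
    return math.log(2) / decay_const_per_yr / 1e6

print(half_life_myr(1.8e-7))  # ~3.85 Myr, matching the quoted half-life
```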
#### 5.1.2 Initial ratio $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm
SS}$
Presumably, a measurement of $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$
in CAIs would directly record $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm
SS}$, if the Mn-Cr system closed at the same time as the Al-Mg system.
However, the initial value $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ in
CAIs is very difficult to infer from measurements, because Mn concentrations
and Mn/Cr ratios are low in CAIs, among other reasons (Davis and McKeegan,
2014). For various CAIs, Birck and Allègre (1985) reported
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ $=(44\pm 10)\times 10^{-6}$,
Papanastassiou et al. (2005) reported $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{0}=(14.33\pm 5.48)\times 10^{-6}$, Nyquist et al. (2009) reported
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}=(9.1\pm 1.7)\times 10^{-6}$,
and Trinquier et al. (2008) reported $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{0}=(6.28\pm 0.66)\times 10^{-6}$. The weighted average of the three
more recent (and more precise) measurements is $(6.74\pm 0.61)\times 10^{-6}$,
and all three are concordant with this value to within roughly $3\sigma$.
Presumably this is the Solar System value that CAIs obtained
when they formed, so $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm
SS}\approx 6.74\times 10^{-6}$, if the Mn-Cr system was not later reset.
Slightly higher values ($>8\times 10^{-6}$) might be inferred if isotopic
closure of the Mn-Cr system in CAIs took place $\sim 1$ Myr after
$t\\!\\!=\\!\\!0$. In analogy with the conclusions we reached in Paper I for
the ability of the Pb-Pb system to be reset by transient heating of CAIs, such
late resetting of the Mn-Cr system in CAIs may be likely.
In an approach similar to ours but more comprehensive, Tissot et al. (2017)
recently reviewed various measurements to derive $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{0}$ in bulk chondrites, plus anchoring to D’Orbigny to
extrapolate backward in time to derive $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm SS}$. Summarizing these, they recommended $(7\pm 1)\times
10^{-6}$, with the Pb-Pb age of CAIs being a major uncertainty. However, if
the Pb-Pb age of CAIs were fixed at $4567.94$ Myr (Bouvier et al., 2011a),
they state that they would then recommend a somewhat higher value,
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}=(7.37\pm 0.60)\times
$10^{-6}$. Our estimate, $(8.09\pm 0.65)\times 10^{-6}$, is in line with the
high end of their estimate.
#### 5.1.3 Initial ratio $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm
SS}$
The initial value $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ in CAIs
has been a challenge to measure, and only recently has it been very well
constrained. Burkhardt et al. (2012) reported $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{0}=(9.85\pm 0.40)\times 10^{-5}$, based on a Hf-W
isochron for mineral separates of a coarse-grained, type B Allende CAI.
Because this CAI was melted after its minerals formed, the true initial value
in the Solar System was likely higher; if melted $\sim 1$ Myr after
$t\\!\\!=\\!\\!0$, $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$
could have been as high as $(10.65\pm 0.43)\times 10^{-5}$. Indeed, Kruijer et
al. (2014) reported $(10.49\pm 0.62)\times 10^{-5}$, based on their
investigation of fine-grained (unmelted) CAIs. The value they recommend,
$(10.18\pm 0.43)\times 10^{-5}$, is based on a weighted average of fine-
grained plus coarse-grained CAIs. Our value inferred from making ages
concordant, $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}=(10.42\pm
0.23)\times 10^{-5}$, is in excellent agreement with the value of
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ inferred from measurements
of fine-grained CAIs (Kruijer et al., 2014).
#### 5.1.4 Pb-Pb age of the solar system, $t_{\rm SS}$
The most surprising result from the above analysis and of Paper I, in which
$t_{\rm SS}=4568.42\pm 0.20$ Myr was estimated, is that it predicts a Pb-Pb
age of CAIs of $4568.36\pm 0.19\,{\rm Myr}$ if the Pb-Pb system closed at the
same time as the Al-Mg system in these CAIs. As discussed already in Paper I,
this is inconsistent with the measured Pb-Pb ages of CAIs, which are
$4567.18\pm 0.50$ for Allende CAI SJ101 (Amelin et al., 2010), and $4567.35\pm
0.28$ Myr, $4567.23\pm 0.29$ Myr, and $4567.38\pm 0.31$ Myr for CAIs 22E, 31E,
and 32E from Efremovka (Connelly et al., 2012). Our inferred age of $t_{\rm
SS}$ is over 1 Myr older than these CAI ages. However, some reports suggest
older Pb-Pb ages of CAIs: $4567.94\pm 0.31$ Myr for CAI B4 from NWA 6991
(Bouvier et al., 2011a), and $4568.22\pm 0.18$ Myr for CAI B1 from NWA 2364
(Bouvier and Wadhwa, 2010). The former was not published in the refereed
literature; the latter was not corrected using a direct measurement of the
${}^{238}{\rm U}/{}^{235}{\rm U}$ ratio. Interestingly, our inferred value of
$t_{\rm SS}$ is consistent with these CAI ages.
In Paper I we demonstrated that transient heating events with the peak
temperatures and cooling rates characteristic of chondrule formation could
have reset the Pb-Pb system in CAIs without resetting the Al-Mg system,
allowing the Pb-Pb ages to look younger than the Al-Mg ages. As CAIs resided
in the protoplanetary disk until incorporated into chondrites, and heating of
chondrules was ongoing for many Myr, CAIs could appear through their Pb-Pb
ages to have formed up to several Myr after $t\\!\\!=\\!\\!0$. This
potentially explains the 1 Myr discrepancy between oft-cited Pb-Pb ages of
CAIs and our inferred $t_{\rm SS}$.
### 5.2 Homogeneity and concordancy in other samples
Throughout this work, based on the arguments presented in Paper I comparing
the Al-Mg and Pb-Pb chronometers, we have assumed homogeneity of the SLRs,
especially ${}^{26}{\rm Al}$. This hypothesis would have been falsified if the
formation times in rapidly cooled achondrites, especially quenched angrites,
were discordant. In Paper I we found that the Al-Mg and Pb-Pb formation times
of volcanic achondrites were concordant. Here we find that the Al-Mg and Pb-Pb
formation times of all achondrites, and indeed the Hf-W and Mn-Cr formation
times, are also concordant. This finding further supports the assumption of
homogeneity. Nevertheless, there are other samples for which simultaneous
closure of multiple isotopic systems might be likely; these could potentially
invalidate our model’s assumption of SLR homogeneity. We examine some here,
including FUN (Fractionations and Unknown Nuclear effects) CAIs, chondrules
and other components of CR chondrites, and CB/CH chondrites.
#### 5.2.1 Homogeneity of SLRs inferred from FUN CAI STP-1
It has been suggested that the FUN CAI STP-1 provides evidence for decoupling
of ${}^{182}{\rm Hf}$ and ${}^{26}{\rm Al}$ in the solar nebula (Holst et al.,
2013; Park et al., 2017). STP-1 exhibits $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{0}=(2.94\pm 0.21)\times 10^{-6}$ and $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{0}=(9.60\pm 1.10)\times 10^{-5}$ if inferred using
${}^{186}{\rm W}/{}^{183}{\rm W}$ to correct for fractionation, or
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}=(9.22\pm 1.10)\times
10^{-5}$ if using ${}^{186}{\rm W}/{}^{184}{\rm W}$ (Holst et al., 2013). The
argument made by Holst et al. (2013) is that the $(\mbox{${}^{26}{\rm
Al}/{}^{27}{\rm Al}$})_{0}$ ratio is far below the canonical value
$(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{\rm SS}=5.23\times 10^{-5}$,
implying it formed a long time after CAIs (if ${}^{26}{\rm Al}$ was
homogeneous), whereas its $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$
value is “identical” within uncertainties to the initial Solar System
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$, which was taken to be
$(9.85\pm 0.40)\times 10^{-5}$ (Burkhardt et al., 2012). This would imply
STP-1 formed at the same time as other CAIs but had only 6% the canonical
amount of ${}^{26}{\rm Al}$. Put another way, Holst et al. (2013) inferred
$\Delta t_{26}=3.02\pm 0.07$ Myr [based on a different $(\mbox{${}^{26}{\rm
Al}/{}^{27}{\rm Al}$})_{\rm SS}$], but $\Delta t_{182}=0.33_{-1.47}^{+1.67}$
Myr, and considered this age gap of 2.69 Myr, discrepant at the $3.4\sigma$
level, to be irreconcilable. In their interpretation, FUN CAIs formed in a
region relatively devoid of ${}^{26}{\rm Al}$.
This conclusion depends very strongly, however, on the assumed value of
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$. We and Kruijer et al.
(2014) both infer much higher values than the previously accepted value of
$(9.85\pm 0.40)\times 10^{-5}$ of Burkhardt et al. (2012). Using
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}=10.42\times 10^{-5}$
and the ${}^{186}{\rm W}/{}^{184}{\rm W}$-normalized value
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}=(9.22\pm 1.10)\times
10^{-5}$ yields $\Delta t_{182}=1.57_{-1.45}^{+1.63}$ Myr. The discrepancy
between this age and $\Delta t_{26}=2.98\pm 0.07$ Myr is only 1.41 Myr
(1.7$\sigma$). The $\Delta t_{26}$ and updated $\Delta t_{182}$ formation
times do not differ by enough to exceed the usual threshold for discrepancy
($2\sigma$). We therefore conclude that the Hf-W and Al-Mg ages of STP-1 do
not provide strong evidence that ${}^{26}{\rm Al}$ and ${}^{182}{\rm Hf}$ were
decoupled in the solar nebula.
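The recomputed $\Delta t_{182}$ above follows the standard two-point decay relation $\Delta t=(t_{1/2}/\ln 2)\,\ln(R_{\rm SS}/R_{0})$; a minimal Python sketch (the function and variable names are ours), using the ${}^{186}{\rm W}/{}^{184}{\rm W}$-normalized initial ratio of STP-1 and the $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$ value adopted here:

```python
import math

def formation_time(r_ss, r_0, half_life):
    """Time after t=0 (Myr) for the initial ratio to decay from r_ss to r_0."""
    return half_life / math.log(2) * math.log(r_ss / r_0)

# STP-1 Hf-W: (182Hf/180Hf)_SS = 10.42e-5, measured (182Hf/180Hf)_0 = 9.22e-5,
# 182Hf half-life 8.90 Myr
dt_182 = formation_time(10.42e-5, 9.22e-5, 8.90)
print(f"Delta t_182 = {dt_182:.2f} Myr")  # ≈ 1.57 Myr
```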
### 5.3 CR Chondrites and Chondrules
With our updates to the predicted Pb-Pb age of CAIs, $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ and $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm
Hf}$})_{\rm SS}$, a valid concern is that other “anchor” systems currently
thought to be concordant no longer will be. This may be particularly true for
the ages of CR chondrites and their chondrules, because these have some of the
largest times of formation after $t\!\!=\!\!0$ of any chondrites
(therefore small discrepancies have time to accumulate). They also have been
dated by Pb-Pb, Al-Mg and Hf-W systems, making them vulnerable to changes in
any one of these systems.
Currently the ages of CR chondrite chondrules derived from different systems
have appeared to be concordant with a formation time of about 3.7 Myr after
CAIs. Budde et al. (2018) measured ${}^{182}{\rm W}$ excesses in CR
chondrites. This dates the time of metal-silicate separation of the metal in
CR chondrites which, they argue, was simultaneous with formation of the
chondrules in CR chondrites. Combining several samples (bulk CR chondrites and
bulk CR chondrules) into a single isochron, they derived $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{0}=(7.68\pm 0.17)\times 10^{-5}$. They combined this
with an assumed $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm
SS}=(10.18\pm 0.43)\times 10^{-5}$ (Kruijer et al., 2014) to derive a time of
metal-silicate separation/chondrule formation in CR chondrites, $\Delta
t_{182}=3.63\pm 0.62$ Myr. They also compared to Pb-Pb ages of six CR
chondrite chondrules, analyzed by Amelin et al. (2002), but corrected for
their U isotopic compositions by Schrader et al. (2017), who found $t_{\rm
Pb}=4563.6\pm 0.6$ Myr. Using the Pb-Pb age of CAIs of $4567.30\pm 0.16$ Myr
of Connelly et al. (2012), they inferred $\Delta t_{\rm Pb}=3.66\pm 0.63$ Myr
for these chondrules. Budde et al. (2018) also compared to a time $\Delta
t_{26}=3.75\pm 0.24$ Myr of these chondrules’ formation they say is inferred
by Schrader et al. (2017) using Al-Mg systematics. They noted these are
concordant, implying that chondrules in CR chondrites formed at about 3.7 Myr
after CAIs. One could calculate a weighted mean $\Delta t=3.73\pm 0.21$ Myr
after CAIs, and all the systems would appear concordant with that time of
formation.
We interpret these results differently. First, it is not clear that $\Delta
t_{26}=3.75\pm 0.24$ Myr is appropriate. Schrader et al. (2017) identified
three groups of chondrules in CR chondrites: group 1 (n=1) had
$(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}=(6.3\pm 0.9)\times 10^{-6}$,
group 2 (n=7) had $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}=(3.2\pm
0.6)\times 10^{-6}$, and group 3 (n=14) had $(\mbox{${}^{26}{\rm
Al}/{}^{27}{\rm Al}$})_{0}=(6.7\pm 0.3)\times 10^{-7}$. Schrader et al. (2017)
took the weighted mean of the group 2 and group 3 chondrules’
$(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$ values, and then converted
that average $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$ to a time of
formation assuming a ${}^{26}{\rm Al}$ half-life of 0.705 Myr, to report
$\Delta t_{26}=3.7_{-0.2}^{+0.3}$ Myr. From there, Budde et al. (2018)
reported this as $\Delta t_{26}=3.75\pm 0.24$ Myr. However, when we take a
weighted mean of the $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$ values,
we find it is $(6.71\pm 0.08)\times 10^{-7}$, essentially the group 3 value,
both because there are more group 3 chondrules, and because their
$(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$ values are known to greater
(absolute) precision. Using this ratio, and a ${}^{26}{\rm Al}$ half-life
0.717 Myr, we then find the time of formation of the CR chondrules to be $\Delta
t_{26}=4.51\pm 0.21$ Myr. Next, using our determination of $t_{\rm
SS}=4568.36\pm 0.20$ Myr, we calculate $\Delta t_{\rm Pb}=4.76\pm 0.64$ Myr.
These two times of formation, determined by internal isochrons, have weighted
mean $\Delta t=4.53\pm 0.20$ Myr, with the Al-Mg and Pb-Pb ages concordant
with each other. Finally, using $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}=(10.42\pm 0.20)\times 10^{-5}$, and a ${}^{182}{\rm Hf}$ half-
life 8.90 Myr, we find $\Delta t_{182}=3.92\pm 0.35$ Myr. This latter age is
slightly discordant ($2.9\sigma)$, but as it is based on an excess and not an
internal isochron, it may reflect an early stage of metal-silicate separation
while the chondrules were in the solar nebula, while the other two
chronometers could record a slightly later heating event. We suggest that the
CR chondrites may have assembled as late as 4.5 Myr after $t\!\!=\!\!0$.
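The revised CR-chondrule ages above combine the decay relation for Al-Mg with a simple subtraction of absolute ages for Pb-Pb; a short sketch using the values quoted in the text (variable names are ours):

```python
import math

HALF_LIFE_AL26 = 0.717    # Myr
AL_RATIO_SS    = 5.23e-5  # canonical (26Al/27Al)_SS
T_SS           = 4568.36  # Myr, our inferred Pb-Pb age of t=0

# Al-Mg age from the weighted-mean initial ratio of the CR chondrules
dt_26 = HALF_LIFE_AL26 / math.log(2) * math.log(AL_RATIO_SS / 6.71e-7)

# Pb-Pb age relative to t=0, from the U-corrected chondrule Pb-Pb age
dt_pb = T_SS - 4563.6

print(f"dt_26 = {dt_26:.2f} Myr, dt_pb = {dt_pb:.2f} Myr")  # ≈ 4.51, 4.76
```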
### 5.4 Chondrules in CB/CH chondrites
Another apparently concordant system involves the ages of chondrules in CB and
CH chondrites. Chondrules and CAIs in the CB and CH chondrites formed or were
reset much later than similar inclusions in other chondrites, a fact
attributed to their formation or resetting in an impact plume following a
collision between two asteroids (Krot et al., 2005). Because of the suddenness
of the event, it has been suggested (Bollard et al., 2019) that this event
could serve as a time anchor, possibly a better time anchor than the angrites.
Various chronometers have been applied to date the timing of the impact.
#### 5.4.1 Al-Mg ages of CB/CH chondrules
So far, the precise Al-Mg chronometer has been used only to provide a lower
limit to the time of formation. Very low $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{0}$ values $\sim 10^{-6}$ or lower in chondrules in CH or CB
chondrites generally argue for a time of formation $\Delta t_{26}>4$ Myr
(Weber et al., 1995; Krot et al., 2005). For example, Olsen et al. (2013)
found $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}=(4.5\pm 8.3)\times
10^{-7}$, implying $\Delta t_{26}>3.8$ Myr, from bulk Hammada al Hamra 237
chondrules.
#### 5.4.2 Pb-Pb ages of CB/CH chondrules
Krot et al. (2005) derived Pb-Pb ages of chondrules from the CB chondrite
Gujba, reporting an average age $4562.68\pm 0.49$ Myr. Bollard et al. (2019)
pointed out that the Pb-Pb ages reported by Krot et al. (2005) were not
U-corrected. Gujba bulk chondrite has been measured at $\mbox{${}^{238}{\rm
U}/{}^{235}{\rm U}$}=137.794\pm 0.014$ (Connelly et al., 2012), which matches
the value advocated by Goldmann et al. (2015) to be used for bulk Solar
System, $137.794\pm 0.027$. Bollard et al. (2015) instead applied a correction
for $\mbox{${}^{238}{\rm U}/{}^{235}{\rm U}$}=137.786\pm 0.013$ that they
asserted should be used for Solar System objects (Connelly et al., 2012), to
derive an absolute age $4561.68\pm 0.51$ Myr. If we apply the value measured
for Gujba, $\mbox{${}^{238}{\rm U}/{}^{235}{\rm U}$}=137.794\pm 0.014$, we
derive a U-corrected Pb-Pb age of $4561.77\pm 0.54$ Myr.
Bollard et al. (2015) went on to measure Pb-Pb ages of 4 chondrules from
Gujba. Pooling these 4 chondrules together, assuming $\mbox{${}^{238}{\rm
U}/{}^{235}{\rm U}$}=137.786$ (not measured), they derived an age $4562.49\pm
0.21$ Myr. [If we apply the value measured for Gujba, $\mbox{${}^{238}{\rm
U}/{}^{235}{\rm U}$}=137.794\pm 0.014$, we derive a U-corrected Pb-Pb age of
$4562.57\pm 0.24$ Myr.] Based on a Pb-Pb age of CAIs of $4567.30\pm 0.16$ Myr,
Bollard et al. (2019) inferred a time of formation of $\Delta t_{\rm
Pb}=4.81\pm 0.33$ Myr.
The U-corrected Pb-Pb ages of chondrules from Gujba are $4561.77\pm 0.54$ Myr
(using data from Krot et al. (2005)) and $4562.57\pm 0.24$ Myr (using data
from Bollard et al. (2019)). New data from Connelly et al. (2021) suggest
$4562.64\pm 0.13$ Myr for a single Gujba chondrule. Assuming these all mark a
simultaneous event, their weighted mean is $4562.60\pm 0.11$ Myr, consistent
with the 2005 data at the $3\sigma$ level, and the 2019 data at the
$0.3\sigma$ level. Based on this age, and our estimate $t_{\rm SS}=4568.36$
Myr, we derive a time of formation $\Delta t_{\rm Pb}=5.76\pm 0.11$ Myr.
#### 5.4.3 Mn-Cr ages of CB/CH chondrules
Yamashita et al. (2010b) reported an initial value $(\mbox{${}^{53}{\rm
Mn}/{}^{55}{\rm Mn}$})_{0}=(3.18\pm 0.52)\times 10^{-6}$ in Gujba chondrules
and a metal grain. They used a value $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{0}=(3.24\pm 0.04)\times 10^{-6}$ (Glavin et al., 2004) and an absolute
age (a Pb-Pb age not U-corrected) of $4564.42\pm 0.12$ Myr (Amelin, 2008b) for
D’Orbigny, plus a half-life of ${}^{53}{\rm Mn}$ of 3.7 Myr, to infer an
absolute age $4564.3\pm 0.9$ Myr. Alternatively, they used a value
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}=(1.25\pm 0.07)\times 10^{-6}$
(Lugmair and Shukolyukov, 1998) and an absolute age (a Pb-Pb age not
U-corrected) of $4558.55\pm 0.15$ Myr (Amelin, 2008b) for LEW 86010 to derive
an absolute age $4563.5\pm 1.1$ Myr. They note that the absolute age
$4564.3\pm 0.9$ Myr was not concordant with the Pb-Pb ages of Gujba chondrules
reported by Krot et al. (2005), $4562.68\pm 0.49$ Myr. It also would not be
concordant with the updated age of Bollard et al. (2015), $4561.68\pm 0.51$
Myr. If these ages were anchored to a presumed Pb-Pb age of CAIs of
$4567.30\pm 0.16$ Myr (Connelly et al., 2012), it would imply a time of
formation $\Delta t_{53}\approx 3.00\pm 0.91$ Myr or $\Delta t_{53}\approx
3.80\pm 1.11$ Myr, depending on which other anchor is used.
Our approach is more straightforward. Assuming an initial Solar System ratio
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}=8.09\times 10^{-6}$ and a
${}^{53}{\rm Mn}$ half-life 3.80 Myr, we use the measured value $(3.18\pm
0.52)\times 10^{-6}$ (Yamashita et al., 2010b) to derive $\Delta
t_{53}=5.08\pm 0.90$ Myr. This uncertainty is driven by the uncertainty in the slope
of the Mn-Cr isochron and would be the same whether anchored to D’Orbigny or
referenced to the Solar System initial value.
#### 5.4.4 Hf-W ages of CB/CH chondrules
Bollard et al. (2015) reported an average value $\epsilon^{182}{\rm
W}=-2.97\pm 0.16$ for CB chondrites. Based on a comparison to the value
$\epsilon^{182}{\rm W}=-3.49\pm 0.07$ pertinent to average CAIs (Kruijer et
al., 2014), they derived a time of formation $\Delta t_{182}=5.09\pm 0.59$
Myr. We argued above (§5.1.3) that it would be more accurate to use the value for
fine-grained CAIs, $\epsilon^{182}{\rm W}=-3.57\pm 0.07$ (Kruijer et al.,
2014), in which case we would derive a time of formation of CB chondrites
$\Delta t_{182}=5.87\pm 0.68$ Myr.
#### 5.4.5 Summary
Assuming the CB chondrites and chondrules formed or were reset together in the
impact plume, and achieved isotopic closure simultaneously, we take a weighted
mean of the ages above [$\Delta t_{\rm Pb}=5.76\pm 0.11$ Myr, $\Delta t_{53}=5.08\pm 0.90$ Myr, and $\Delta t_{182}=5.87\pm 0.68$ Myr] to find a time
of formation $\Delta t=5.75\pm 0.11$ Myr. All three systems are concordant
with an age of 5.75 Myr at the $0.2\sigma$ (Pb-Pb), $1.5\sigma$ (Mn-Cr), and
$0.4\sigma$ (Hf-W) levels. Following the suggestion of Bollard et al. (2015),
the CB chondrites and chondrules do appear to be concordant and would serve as
a good anchor, but with an absolute age $\approx 4562.60$ Myr and a much later
time of formation of $\Delta t\approx 5.75$ Myr after CAIs.
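The combined time of formation quoted above is an inverse-variance weighted mean of the three system ages; a sketch (function name is ours; it assumes, as throughout this work, that all input uncertainties are quoted at the same sigma level):

```python
import math

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its uncertainty
    (returned at the same sigma level as the inputs)."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

# Pb-Pb, Mn-Cr, and Hf-W formation times (Myr, 2-sigma) of CB chondrules
mean, err = weighted_mean([5.76, 5.08, 5.87], [0.11, 0.90, 0.68])
print(f"{mean:.2f} +/- {err:.2f} Myr")  # ≈ 5.75 +/- 0.11
```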
### 5.5 Formation Times of “Anchors”
Our updated estimates of quantities like $t_{\rm SS}$ allow refinements in the
calculation of when particular samples formed. Useful among these are those
objects for which three or more precisely measured isotopic systems appear to
have reached isotopic closure simultaneously and are apparently most
concordant. These include the volcanic angrites D’Orbigny, SAH 99555, and NWA
1670, plus the chondrules formed in the impact plume following the CB/CH
impact. For lack of a better word we call them anchors. In Table 3, we list:
the time of formation after $t\!\!=\!\!0$ of the object, as recorded by
the different isotopic systems, assuming our canonical parameters derived from
volcanic achondrites; the formation time as inferred from combining the
systems; and the model age for the object. We include the latter only to allow
ease of comparison with other works, and caution that they should not be used as anchors. We emphasize again that the absolute age has little use in
models of planet formation or protoplanetary disk evolution; the time of
formation after $t\!\!=\!\!0$ is the much more relevant quantity. These
quantities should be determined by using $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{0}$, or $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ in
conjunction with the $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ and
$\tau_{53}$ derived here, or $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm
Hf}$})_{0}$ and the $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$
derived here, or a Pb-Pb age and the value of $t_{\rm SS}$ derived here. Where
possible, multiple ages should be combined.
Table 3: Our model predictions (in Myr, with uncertainties) for samples in
which all isotopic systems closed simultaneously: times of formation after
$t\!\!=\!\!0$, according to the Al-Mg, Mn-Cr, Hf-W, and Pb-Pb systems; weighted mean times of formation; and absolute ages (using standard half-lives). The absolute ages are provided for the sake of comparison, but should not be used as anchors.
Sample | $\Delta t_{26}$ | $2\sigma$ | $\Delta t_{53}$ | $2\sigma$ | $\Delta t_{182}$ | $2\sigma$ | $\Delta t_{\rm Pb}$ | $2\sigma$ | $\Delta t$ | $2\sigma$ | $t$ | $2\sigma$
---|---|---|---|---|---|---|---|---|---|---|---|---
D’Orbigny | 5.06 | 0.10 | 5.03 | 0.06 | 4.83 | 0.31 | 5.11 | 0.21 | 5.03 | 0.05 | 4563.32 | 0.20
SAH 99555 | 5.14 | 0.05 | 4.95 | 0.29 | 5.35 | 0.28 | 4.84 | 0.24 | 5.12 | 0.05 | 4563.23 | 0.20
NWA 1670 | 4.64 | 0.10 | 5.72 | 1.78 | | | 4.33 | 0.66 | 4.63 | 0.10 | 4563.72 | 0.22
Asuka 881394 | 3.82 | 0.04 | 4.04 | 0.32 | | | 3.59 | 0.53 | 3.82 | 0.04 | 4564.53 | 0.20
CH/CB Chondrules | $>3.8$ | | 5.08 | 0.90 | 5.87 | 0.68 | 5.76 | 0.11 | 5.75 | 0.11 | 4562.10 | 0.23
Measurements to determine data missing from Table 3 would provide a severe
test of our model. For NWA 1670, we predict $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{0}\approx 7.3\times 10^{-5}$. For the CH/CB
chondrules, we predict an initial ratio $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm
Al}$})_{0}\approx 2.0\times 10^{-7}$. For NWA 7325, we predict
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}\approx 3.1\times 10^{-6}$ and
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}\approx 6.9\times 10^{-5}$.
For LEW 86010, we predict a U isotopic ratio leading to a Pb-Pb age $\approx
4558.58$ Myr.
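The predicted initial ratios above follow from decaying the Solar System initial value forward by the sample's time of formation; a sketch (function name is ours) checking two of the predictions against the values in the text:

```python
import math

def predicted_ratio(r_ss, dt, half_life):
    """Ratio expected in a sample that reached isotopic closure dt Myr after t=0."""
    return r_ss * 2.0 ** (-dt / half_life)

# NWA 1670 (dt = 4.63 Myr), Hf-W with (182Hf/180Hf)_SS = 10.42e-5
hf_nwa1670 = predicted_ratio(10.42e-5, 4.63, 8.90)   # ≈ 7.3e-5

# CH/CB chondrules (dt = 5.75 Myr), Al-Mg with canonical (26Al/27Al)_SS
al_chcb = predicted_ratio(5.23e-5, 5.75, 0.717)      # ≈ 2.0e-7
```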
An interesting application of the improved chronometry is to test whether the
similar volcanic angrites D’Orbigny and SAH 99555 formed at the same time.
According to the Al-Mg chronometer alone, SAH 99555 formed $0.076\pm 0.115$
Myr after D’Orbigny. After combining all the isotopic systems, we conclude
that SAH 99555 formed $0.091\pm 0.068$ Myr after D’Orbigny. Our predicted formation times are slightly further apart than inferred from Al-Mg alone ($0.091$ vs. $0.076$ Myr), and also more likely to be distinct ($2.7\sigma$ vs. $1.3\sigma$).
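Because uncertainties in this work are quoted at $2\sigma$, the significance of an age difference, in units of $\sigma$, is twice the ratio of the difference to its quoted uncertainty; a short sketch with the D’Orbigny/SAH 99555 numbers (function name is ours):

```python
# Uncertainties here are 2-sigma, so significance (in sigma) is
# 2 * |difference| / quoted_2sigma_error.
def significance(dt, err_2sigma):
    return 2.0 * abs(dt) / err_2sigma

print(f"{significance(0.091, 0.068):.1f} sigma")  # all systems combined: ≈ 2.7
print(f"{significance(0.076, 0.115):.1f} sigma")  # Al-Mg alone:          ≈ 1.3
```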
## 6 Other Isotopic Systems Used for Chronometry
Our updates to parameters, especially $t_{\rm SS}$, allow refinements in the
quantities required for other isotopic systems to be used for chronometry.
Based on their common or increasing use, we discuss the I-Xe, Fe-Ni, Pd-Ag,
Nb-Zr, and Be-B systems.
### 6.1 The I-Xe system
The decay of ${}^{129}{\rm I}$ to ${}^{129}{\rm Xe}$ is one of the earliest
used chronometers, but the technique to use this isotopic system differs from
the others because of the way the measurements are made. An estimate of the
initial ${}^{129}{\rm I}/{}^{127}{\rm I}$ ratio in a meteoritic sample can be
obtained by irradiating the sample with neutrons, transmuting ${}^{127}{\rm
I}$ to ${}^{128}{\rm I}$, which decays with a half-life of 25 minutes to
${}^{128}{\rm Xe}$. Xe is then driven out of the sample and measured, and an
isochron of ${}^{129}{\rm Xe}/{}^{132}{\rm Xe}$ vs. ${}^{128}{\rm
Xe}/{}^{132}{\rm Xe}$ is constructed. To the extent that the neutron
absorption cross section of ${}^{127}{\rm I}$ is known, the correlation could
provide the ${}^{129}{\rm I}/{}^{127}{\rm I}$ ratio. Because of uncertainty in
the neutron cross section, it is difficult to measure the initial abundance of
${}^{129}{\rm I}/{}^{127}{\rm I}$ to an acceptable accuracy; however, it is
possible to precisely measure the ratio of ${}^{129}{\rm I}/{}^{127}{\rm I}$
in a sample relative to the initial ${}^{129}{\rm I}/{}^{127}{\rm I}$ in an
anchor, usually taken to be the enstatite achondrite Shallowater. In this way,
the relative age between the sample and Shallowater can be obtained. To be
useful for chronometry, we must determine the time after $t\!\!=\!\!0$ at
which Shallowater formed, $\Delta t_{\rm SW}$.
Recently, Pravdivtseva et al. (2017) and Gilmour and Crowther (2017)
summarized the concordancy between I-Xe and U-corrected Pb-Pb data of a
variety of samples, and derived absolute ages of the Shallowater standard of
$4562.4\pm 0.2$ Myr and $4562.7\pm 0.3$ Myr, respectively. We favor the value
of Gilmour and Crowther (2017), as it includes samples such as Ibitira and NWA
7325 not included by Pravdivtseva et al. (2017). Using this age and $t_{\rm
SS}=4568.36\pm 0.10$ Myr, we infer that Shallowater reached closure at $\Delta
t_{\rm SW}=5.65\pm 0.36$ Myr after $t\!\!=\!\!0$.
It is also possible to accept chondrules from the CB/CH impact event as an
anchor, and take a time of formation $\Delta t=5.75\pm 0.11$ Myr, then apply
the correction by Pravdivtseva et al. (2017), that Shallowater formed $0.29\pm
0.16$ Myr before that, to infer $\Delta t_{\rm SW}=5.45\pm 0.19\,{\rm Myr}$.
It is encouraging that these two approaches—one based on I-Xe age difference
anchored to the CB/CH impact event constrained by Mn-Cr, Hf-W and Pb-Pb ages;
the other based on comparing I-Xe formation times with Pb-Pb ages—yield
identical formation times within the uncertainties (at the 0.9$\sigma$ level).
Taking the weighted mean of these then yields our best estimate for when
Shallowater formed:
$\Delta t_{\rm SW}=5.50\pm 0.17\,{\rm Myr}.$
For completeness, we note that the half-life of ${}^{129}{\rm I}$, which has
long been cited as $15.7\pm 0.6(1\sigma)$ Myr (Emery, 1972), has been updated.
Pravdivtseva et al. (2017) reviewed different values in the literature and
compared them to a value derived from regressing I-Xe data, concluding that a
value near 16 Myr appeared correct. More recently, in a concerted effort using
multiple measurement techniques, García-Toraño et al. (2018) determined the
half-life to be $16.14\pm 0.12(1\sigma)$, the value adopted here.
Using a formation time $\Delta t_{\rm SW}=5.50\pm 0.17$ Myr and the half-life
of ${}^{129}{\rm I}$, the initial ${}^{129}{\rm I}/{}^{127}{\rm I}$ ratio can
be estimated. In recent experimental work to effectively derive the neutron
cross sections, Pravdivtseva et al. (2021) estimated $({}^{129}{\rm
I}/{}^{127}{\rm I})_{0}\approx 1.35\times 10^{-4}$ in Shallowater at the time
of its formation, with an apparent uncertainty $<2\%$. Extrapolating backward
in time yields
$({}^{129}{\rm I}/{}^{127}{\rm I})_{\rm SS}\approx(1.71\pm 0.02)\times
10^{-4}$.
This value is somewhat higher than estimates of $1.4\times 10^{-4}$ (e.g.,
Davis, 2022) because those are based on a value $1.07\times 10^{-4}$ in
Bjurböle whole rock (Hohenberg and Kennedy, 1981), which Pravdivtseva et al.
(2021) argue is too low because of self-shielding effects in the KI salts used
in those experiments.
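The back-extrapolation above follows $R_{\rm SS}=R_{0}\,2^{\Delta t/t_{1/2}}$; a minimal sketch (function name is ours) reproducing the ${}^{129}{\rm I}/{}^{127}{\rm I}$ estimate:

```python
def extrapolate_back(r_0, dt, half_life):
    """Initial Solar System ratio, extrapolated back dt Myr from a measured ratio."""
    return r_0 * 2.0 ** (dt / half_life)

# Shallowater: (129I/127I)_0 = 1.35e-4, dt = 5.50 Myr, 129I half-life 16.14 Myr
r_ss = extrapolate_back(1.35e-4, 5.50, 16.14)
print(f"(129I/127I)_SS = {r_ss * 1e4:.2f}e-4")  # ≈ 1.71e-4
```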
### 6.2 The Fe-Ni system
The short-lived radionuclide ${}^{60}{\rm Fe}$ decays to ${}^{60}{\rm Ni}$
(via ${}^{60}{\rm Co}$), with a half-life of $2.62\pm 0.04(1\sigma)\,{\rm
Myr}$ (Rugel et al., 2009). Before 2009, the half-life $1.49\pm 0.27(1\sigma)$
Myr (Kutschera et al., 1984) was commonly cited, but the measurement by
Wallner et al. (2015), $2.50\pm 0.12(1\sigma)$ Myr, supports the newer, more
precise value.
The initial $({}^{60}{\rm Fe}/{}^{56}{\rm Fe})_{\rm SS}$ ratio was at first
determined to be $\sim 10^{-6}$ (Tachibana and Huss, 2003), but subsequent
work has shown that this and similar analyses were overestimates, due to use
of a too-short half-life, errors in how low ion counts obtained by Secondary
Ion Mass Spectrometry (SIMS) analyses were averaged (Ogliore et al., 2011;
Telus et al., 2013), and the possibility that samples have suffered from
redistribution of Fe and/or Ni isotopes after crystallization of samples
(Quitté et al., 2011; Telus et al., 2018). An analysis by sensitive Resonance
Ionization Mass Spectrometery (RIMS) also has shown that SIMS analyses can
introduce isotopic fractionation, leading to overestimates in
$(\mbox{${}^{60}{\rm Fe}/{}^{56}{\rm Fe}$})_{0}$ (Trappitsch et al., 2018). A
higher value $(\mbox{${}^{60}{\rm Fe}/{}^{56}{\rm Fe}$})_{\rm SS}\approx
6\times 10^{-7}$ inferred by Cook et al. (2021), based on measured
$\epsilon^{60}{\rm Ni}$ excesses in iron meteorites, is based on the
assumption that the iron meteorite parent body had $\epsilon^{60}{\rm
Ni}\approx 0.0$ like CI carbonaceous chondrites; assuming it was more CV
chondrite-like (as implied by the $\epsilon^{62}{\rm Ni}$ excesses) yields
$\epsilon^{60}{\rm Ni}\approx-0.13$ and admits $(\mbox{${}^{60}{\rm
Fe}$})_{0}<10^{-8}$. Based on a variety of analyses yielding internal
isochrons for CAIs, chondrules, achondrites and iron meteorites, especially by
inductively coupled plasma mass spectrometry (ICP-MS), a value $({}^{60}{\rm
Fe}/{}^{56}{\rm Fe})_{\rm SS}\sim 10^{-8}$ is now widely accepted (Quitté et
al., 2010; Spivak-Birndorf et al., 2011; Tang and Dauphas, 2012, 2015).
The initial $(\mbox{${}^{60}{\rm Fe}/{}^{56}{\rm Fe}$})_{\rm SS}$ of the solar
system can be found by extrapolating backwards from samples in which the Fe-Ni
system closed at the same time as other systems. Using $\epsilon^{53}{\rm Cr}$
excesses to create a bulk rock isochron for the eucrite parent body (EPB),
Trinquier et al. (2008) determined $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{0}=(4.21\pm 0.42)\times 10^{-6}$, which implies $\Delta t=3.58\pm
0.55$ Myr for our favored parameters. Tang and Dauphas (2012) inferred from
bulk rock isochrons that $(\mbox{${}^{60}{\rm Fe}/{}^{56}{\rm
Fe}$})_{0}=(3.45\pm 0.32)\times 10^{-9}$ in the EPB, so extrapolating
backwards we infer $(\mbox{${}^{60}{\rm Fe}/{}^{56}{\rm Fe}$})_{\rm
SS}=(8.90\pm 1.54)\times 10^{-9}$.
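The EPB calculation above (a Mn-Cr formation time followed by back-extrapolation of the measured ${}^{60}{\rm Fe}/{}^{56}{\rm Fe}$ ratio) can be sketched as follows, with variable names of our choosing:

```python
import math

LN2 = math.log(2)

# Step 1: Mn-Cr formation time of the eucrite parent body, using our
# (53Mn/55Mn)_SS = 8.09e-6 and 53Mn half-life 3.80 Myr
dt = 3.80 / LN2 * math.log(8.09e-6 / 4.21e-6)

# Step 2: extrapolate the measured (60Fe/56Fe)_0 back to t=0
# with the 60Fe half-life 2.62 Myr
fe_ss = 3.45e-9 * 2.0 ** (dt / 2.62)

print(f"dt = {dt:.2f} Myr, (60Fe/56Fe)_SS = {fe_ss * 1e9:.2f}e-9")  # ≈ 3.58, 8.90e-9
```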
Likewise, from whole rock isochrons of angrites, Shukolyukov and Lugmair
(2007) inferred $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}=(3.40\pm
0.14)\times 10^{-6}$ in the angrite parent body (APB), and Zhu et al. (2019)
$(3.16\pm 0.11)\times 10^{-6}$, which together imply $\Delta t=5.00\pm 0.15$
Myr for our favored parameters. Combining several bulk-rock angrites, Quitté
et al. (2010) determined $(\mbox{${}^{60}{\rm Fe}/{}^{56}{\rm
Fe}$})_{0}=(3.12\pm 0.78)\times 10^{-9}$, and Tang and Dauphas (2012) inferred
$(2.20\pm 1.16)\times 10^{-9}$, with weighted mean $(\mbox{${}^{60}{\rm
Fe}/{}^{56}{\rm Fe}$})_{0}=(2.83\pm 0.65)\times 10^{-9}$. Extrapolating
backward, we infer $(\mbox{${}^{60}{\rm Fe}/{}^{56}{\rm Fe}$})_{\rm
SS}=(10.61\pm 2.47)\times 10^{-9}$. The weighted mean of these two analyses
yields $(\mbox{${}^{60}{\rm Fe}/{}^{56}{\rm Fe}$})_{\rm SS}=(9.38\pm
1.31)\times 10^{-9}$.
It is also possible to extrapolate backward from the individual internal
isochrons from D’Orbigny: $(\mbox{${}^{60}{\rm Fe}/{}^{56}{\rm
Fe}$})_{0}=(4.1\pm 2.6)\times 10^{-9}$ (Quitté et al., 2010); $(2.81\pm
0.86)\times 10^{-9}$ (Spivak-Birndorf et al., 2011); and $(3.42\pm 0.58)\times
10^{-9}$ (Tang and Dauphas, 2012); weighted average $(3.26\pm 0.47)\times
10^{-9}$. Using our inferred $\Delta t=5.06\pm 0.04$ Myr, we would infer
$(\mbox{${}^{60}{\rm Fe}/{}^{56}{\rm Fe}$})_{\rm SS}=(12.43\pm 1.81)\times
10^{-9}$. Likewise, we could use data for SAH 99555: $(\mbox{${}^{60}{\rm
Fe}/{}^{56}{\rm Fe}$})_{0}=(1.8\pm 0.5)\times 10^{-9}$ (Quitté et al., 2010);
$(1.97\pm 0.77)\times 10^{-9}$ (Tang and Dauphas, 2012). With our inferred
$\Delta t=5.12\pm 0.05$ Myr, we derive $(\mbox{${}^{60}{\rm Fe}/{}^{56}{\rm
Fe}$})_{\rm SS}=(7.17\pm 1.64)\times 10^{-9}$. The weighted mean of these is
$(9.5\pm 1.2)\times 10^{-9}$, very close to the value inferred from bulk rock
isochrons; but neither the D’Orbigny nor SAH 99555 internal isochrons are
concordant with that value (nor with each other).
We consider the bulk rock isochrons more reliable, as they are less
susceptible to thermal disturbance (Tang and Dauphas, 2012). The weighted mean
of the values inferred from the EPB and APB is
$({}^{60}{\rm Fe}/{}^{56}{\rm Fe})_{\rm SS}=(9.4\pm 1.3)\times 10^{-9}$.
This is to be compared to the value $(11.5\pm 2.6)\times 10^{-9}$ derived by
Tang and Dauphas (2012). These are consistent with each other, and the
difference is attributable mostly to our refinements in the half-life and
abundance of ${}^{53}{\rm Mn}$.
### 6.3 The Pd-Ag system
The SLR ${}^{107}{\rm Pd}$ decays to ${}^{107}{\rm Ag}$ with a half-life of
$6.50\pm 0.3(1\sigma)\,{\rm Myr}$ (Flynn and Glendenin, 1969). The Pd-Ag
system has been used as a chronometer in the metallic phases of iron
meteorites and mesosiderites, as well as in carbonaceous chondrites, and it
would be useful to better constrain the initial ratio $({}^{107}{\rm
Pd}/{}^{108}{\rm Pd})_{\rm SS}$ in the Solar System. Isochrons in a variety of
iron and stony-iron meteorites yield a range of initial $({}^{107}{\rm
Pd}/{}^{108}{\rm Pd})_{0}\approx(1.5-2.5)\times 10^{-5}$ (Chen and Wasserburg,
1990; Chen et al., 2002). Later studies found, after underestimating the ages
of some iron meteorites, that the IAB iron meteorite parent body likely
suffered partial melting of metal and sulfides some 15 Myr after Solar System
formation, resetting the Pd-Ag system (Theis et al., 2013). Carbonaceous
chondrites are therefore a better sample for constraining $({}^{107}{\rm
Pd}/{}^{108}{\rm Pd})_{\rm SS}$. Schönbächler et al. (2008) found
$(\mbox{${}^{107}{\rm Pd}/{}^{108}{\rm Pd}$})_{\rm SS}=(5.9\pm 2.2)\times
10^{-5}$ from whole-rock isochrons of carbonaceous chondrites. This was
refined to a value $(\mbox{${}^{107}{\rm Pd}/{}^{108}{\rm Pd}$})_{\rm
SS}=(6.6\pm 0.4)\times 10^{-5}$ by Matthes et al. (2018) and Brennecka et al.
(2018) using measurements of the type IVa iron meteorite Muonionalusta, and
assuming a Pb-Pb age of the Solar System 4567.3 Myr.
We refine this by taking the reported age for Muonionalusta, $4558.4\pm 0.5$
Myr (Brennecka et al., 2018), to estimate a time of formation $\Delta
t=9.96\pm 0.61$ Myr. We then extrapolate backwards from the measured value
$({}^{107}{\rm Pd}/{}^{108}{\rm Pd})_{0}=(2.57\pm 0.07)\times 10^{-5}$
(Matthes et al., 2018), to derive
$({}^{107}{\rm Pd}/{}^{108}{\rm Pd})_{\rm SS}=(7.43\pm 0.52)\times 10^{-5}$.
Including the $2\sigma$ uncertainty in the half-life would increase the
uncertainty to $\pm(0.90)\times 10^{-5}$.
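The Pd-Ag value and its uncertainty follow from back-extrapolation, with the fractional errors in the measured ratio and in $\Delta t$ combined in quadrature; a sketch (the function name and error treatment are our assumptions, consistent with the numbers quoted):

```python
import math

LN2 = math.log(2)

def back_extrapolate(r0, dr0, dt, ddt, half_life):
    """Extrapolate a measured ratio back to t=0, propagating the uncertainty
    in the measured ratio and in dt in quadrature (same sigma level as inputs)."""
    r_ss = r0 * 2.0 ** (dt / half_life)
    rel_err = math.hypot(dr0 / r0, LN2 / half_life * ddt)
    return r_ss, r_ss * rel_err

# Muonionalusta Pd-Ag: measured (107Pd/108Pd)_0, our Delta t, 107Pd half-life
r_ss, err = back_extrapolate(2.57e-5, 0.07e-5, 9.96, 0.61, 6.50)
print(f"(107Pd/108Pd)_SS = ({r_ss*1e5:.2f} +/- {err*1e5:.2f}) x 10^-5")  # (7.43 +/- 0.52) x 10^-5
```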
### 6.4 The Nb-Zr system
The short-lived radionuclide ${}^{92}{\rm Nb}$ decays to ${}^{92}{\rm Zr}$
with a half-life that is $34.7\pm 2.4\,{\rm Myr}$ (Audi et al., 2003),
although Iizuka et al. (2016) recommended using the older value $37\pm 5$ Myr
(Holden, 1990).
An initial ratio $({}^{92}{\rm Nb}/{}^{93}{\rm Nb})_{0}=(1.4\pm 0.5)\times
10^{-5}$ was recently determined for NWA 4590 (Iizuka et al., 2016). Using a
Pb-Pb age $4557.93\pm 0.36$ Myr and $t_{\rm SS}=4567.3$ Myr (i.e., $\Delta
t=9.37$ Myr), they extrapolated backward in time to infer
$({}^{92}{\rm Nb}/{}^{93}{\rm Nb})_{\rm SS}=(1.7\pm 0.6)\times 10^{-5}$.
Although we infer $\Delta t=10.64\pm 0.30$ Myr for NWA 4590, the difference is
negligible because of the long mean-life of ${}^{92}{\rm Nb}$. The initial
$({}^{92}{\rm Nb}/{}^{93}{\rm Nb})_{\rm SS}$ ratio in the Solar System is
insensitive to these differences, but conversely the ages derived from the Nb-Zr system are very sensitive to the $({}^{92}{\rm Nb}/{}^{93}{\rm Nb})_{\rm SS}$
ratio, so we recommend further determinations of this value to develop the Nb-
Zr system as a chronometer.
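The claimed insensitivity is easy to verify: over the $\sim$1.3 Myr difference in $\Delta t$, the correction factor $2^{\Delta t/t_{1/2}}$ changes by only a few percent, well within the quoted uncertainty. A short check:

```python
r0, half_life = 1.4e-5, 34.7  # NWA 4590 (92Nb/93Nb)_0 and the 92Nb half-life (Myr)

# Back-extrapolate with the published Delta t (9.37 Myr) and ours (10.64 Myr)
dts = (9.37, 10.64)
r_ss_values = [r0 * 2.0 ** (dt / half_life) for dt in dts]
for dt, r_ss in zip(dts, r_ss_values):
    print(f"dt = {dt:5.2f} Myr -> (92Nb/93Nb)_SS = {r_ss:.2e}")
# both values are ~1.7e-5, well within the quoted +/- 0.6e-5 uncertainty
```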
### 6.5 The Be-B System
The short-lived radionuclide ${}^{10}{\rm Be}$ decays to ${}^{10}{\rm B}$ with
a half-life of 1.39 Myr. Chmeleff et al. (2010) reported an experimentally
derived half-life $1.386\pm 0.016(1\sigma)$ Myr. Using a different
experimental technique, Korschinek et al. (2010) found $1.388\pm
0.018(1\sigma)$ Myr. Both papers’ authors recommended combining their results
and using the half-life $1.387\pm 0.012(1\sigma)$ Myr. The Be-B system has
almost exclusively been studied in CAIs containing the minerals melilite,
hibonite, and grossite, because these minerals are among the few to exhibit
the required variable-to-high Be/B ratios (up to a few hundred) to build an
isochron (Dunham et al., 2020), and these minerals are unique to CAIs.
Identification of phases with variable Be/B in other meteoritic components may
allow dating of other samples, and the Be-B system may serve as a good
chronometer of events affecting CAIs, so it is worthwhile to establish
$(\mbox{${}^{10}{\rm Be}/{}^{9}{\rm Be}$})_{\rm SS}$.
Recently, Dunham et al. (2022) determined that the $({}^{10}{\rm
Be}/{}^{9}{\rm Be})_{0}$ ratios recorded by CAIs overwhelmingly cluster, with
few exceptions, around a single value,
$({}^{10}{\rm Be}/{}^{9}{\rm Be})_{\rm SS}=(7.1\pm 0.2)\times 10^{-4}$.
Of the 54 robust ${}^{10}{\rm Be}/{}^{9}{\rm Be}$ CAI regressions, 10 (19%) have $({}^{10}{\rm Be}/{}^{9}{\rm Be})_{0}$ values above (n=3) or below (n=7) this $({}^{10}{\rm Be}/{}^{9}{\rm Be})_{\rm SS}$ value. Overall, though, the fact that 81% of the CAI regressions are consistent with a single $({}^{10}{\rm Be}/{}^{9}{\rm Be})_{0}$ value suggests that ${}^{10}{\rm Be}$ was distributed uniformly in the solar nebula (Dunham et al., 2022).
Most $({}^{10}{\rm Be}/{}^{9}{\rm Be})_{0}$ measurements were conducted on normal CAIs that have nearly canonical $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$ ratios and $({}^{10}{\rm Be}/{}^{9}{\rm Be})_{0}$ values consistent with $({}^{10}{\rm Be}/{}^{9}{\rm Be})_{\rm SS}$. Of the seven
CAIs with low $({}^{10}{\rm Be}/{}^{9}{\rm Be})_{0}\approx(3-5)\times
10^{-4}$, all have isotopic anomalies (i.e., are FUN or PLAty Crystals of
hibonite [PLAC] type CAIs) and five are dominated by hibonite. We consider it
likely that hibonite-dominated particles did not sample the overall solar
nebula reservoir, instead retaining memory of a presolar origin, a possibility
previously suggested by Ireland (1990), Kööp et al. (2016), and Larsen et al.
(2020). The only non-hibonite-dominated objects with measured and non-
canonical $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$ and
$(\mbox{${}^{10}{\rm Be}/{}^{9}{\rm Be}$})_{0}$ ratios are the FUN CAIs CMS-1
(Williams et al., 2017; Dunham et al., 2022) and KT1 (Larsen et al., 2011;
Wielandt et al., 2012). CMS-1 was a forsterite-bearing inclusion before
thermal processing (Mendybaev et al., 2017), and records $({}^{10}{\rm
Be}/{}^{9}{\rm Be})_{0}=(1.8\pm 3.2)\times 10^{-4}$ (Dunham et al., 2022) and
$(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}=(2.85\pm 0.57)\times 10^{-5}$
(Williams et al., 2017). These imply $\Delta t_{10}=2.75_{-2.05}^{+\infty}$
Myr (i.e., $\Delta t_{10}>0.70$ Myr) and $\Delta
t_{26}=0.63_{-0.19}^{+0.23}$ Myr. If the Al-Mg and Be-B systems were
simultaneously reset at $\Delta t=0.8$ Myr, the predicted values for this
inclusion would be $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}=2.4\times
10^{-5}$ and $(\mbox{${}^{10}{\rm Be}/{}^{9}{\rm Be}$})_{0}=4.8\times
10^{-4}$, both within $<2\sigma$ of the measured values. CAI KT1 records
$({}^{10}{\rm Be}/{}^{9}{\rm Be})_{0}=(5.0\pm 0.4)\times 10^{-4}$ (Dunham et
al., 2022) and $(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}=(-2.2\pm
4.7)\times 10^{-5}$ (Larsen et al., 2011). These imply $\Delta t_{10}=(0.75\pm
0.15)$ Myr and $\Delta t_{26}>0.76$ Myr. Interestingly, if the thermal
processing events experienced by both CAIs took place at around $\Delta t=0.8$
Myr, the Al-Mg and Be-B systems might be reconciled. More precise data for
CMS-1, KT1, and more non-hibonite-dominated FUN CAIs are needed to further test
whether the Be-B system can be used as a chronometer of CAI melting events.
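The resetting check described above for CMS-1 and KT1 is simple radioactive decay of the Solar System initial ratios. A minimal sketch, assuming a ${}^{26}{\rm Al}$ half-life of 0.717 Myr (a commonly used value, not stated in this section) and the $({}^{10}{\rm Be}/{}^{9}{\rm Be})_{\rm SS}$ and canonical $({}^{26}{\rm Al}/{}^{27}{\rm Al})$ values quoted here:

```python
# Predicted initial ratios if a system closed Delta-t after t = 0:
# the Solar System ratio decayed over Delta-t.
# Half-lives: 10Be 1.387 Myr (this section); 26Al 0.717 Myr (assumed).

def decayed_ratio(r_ss, half_life_myr, dt_myr):
    """Ratio remaining after dt_myr, given the Solar System initial ratio."""
    return r_ss * 2.0 ** (-dt_myr / half_life_myr)

al = decayed_ratio(5.23e-5, 0.717, 0.8)  # ~2.4e-5, as quoted in the text
be = decayed_ratio(7.1e-4, 1.387, 0.8)   # ~4.8e-4, as quoted in the text
print(f"(26Al/27Al)_0 = {al:.1e}, (10Be/9Be)_0 = {be:.1e}")
```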
## 7 Conclusions
In Paper I we tested the hypothesis that ${}^{26}{\rm Al}$ was distributed
homogeneously in the solar nebula, first finding the value of the Pb-Pb age of
$t\\!\\!=\\!\\!0$ that minimized the discrepancies between the formation times
$\Delta t_{26}$ found using Al-Mg measurements, and formation times $\Delta
t_{\rm Pb}$ found using Pb-Pb dating. We then tested whether this optimal fit
made the ages concordant in a statistical sense. For seven rapidly cooled
achondrites, this was the case. They could have falsified the homogeneity
hypothesis, but did not. This fact, plus the astrophysical theories for the
origins of the short-lived radionuclides, strongly suggests that all
radionuclides were homogeneously distributed.
In this paper we built on that model, creating further tests of homogeneity by
comparing ages derived using the Hf-W and Mn-Cr systems as well. For 11
achondrites (excluding only NWA 4801) with 26 formation times across the Al-
Mg, Hf-W and Pb-Pb systems, we found statistical concordance using only two
free parameters: $t_{\rm SS}=4568.36$ Myr and $(\mbox{${}^{182}{\rm
Hf}/{}^{180}{\rm Hf}$})_{\rm SS}=10.43\times 10^{-5}$. The goodness-of-fit
parameter was 0.88, and the deviations in ages were normally distributed.
These findings strongly support the assumption of homogeneity of ${}^{26}{\rm
Al}$ and ${}^{182}{\rm Hf}$. They further suggest that the Hf-W system in NWA
4801 was lightly disturbed, which is consistent with the late thermal
annealing inferred for this achondrite (Irving and Kuehner, 2007; McKibbin et
al., 2015) and the high abundance of W in its matrix (Kleine et al., 2012).
Extending the results to plutonic angrites and other achondrites, and
including the Mn-Cr system, we found further concordance. For the preferred
values of the ${}^{53}{\rm Mn}$ half-life (3.80 Myr), $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm
Mn}$})_{\rm SS}$ $=8.09\times 10^{-6}$, $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm
Hf}$})_{\rm SS}$ $=10.42\times 10^{-5}$, and $t_{\rm SS}=4568.35$ Myr, we find
37 formation times across 14 achondrites are concordant in a statistical
sense, with normally distributed $z$ scores and a goodness-of-fit parameter
$\mbox{$\chi_{\nu}^{2}$}\approx 1.1$. This is our preferred solution.
Our parameters were not based on any measurements of CAIs, other than to
choose to reference $t\\!\\!=\\!\\!0$ to the time when $(\mbox{${}^{26}{\rm
Al}/{}^{27}{\rm Al}$})=5.23\times 10^{-5}$, because many CAIs have initial
$(\mbox{${}^{26}{\rm Al}/{}^{27}{\rm Al}$})_{0}$ close to this value. Despite
this, our values are in excellent accord with the few and imprecise
measurements of $(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{0}$ and
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{0}$ in CAIs. Although our value
of $t_{\rm SS}$ is about 1 Myr older than the measured value of $t_{\rm Pb}$
in four CAIs (Connelly et al., 2012), it is in good accord with the two values
inferred by Bouvier and Wadhwa (2010) and Bouvier et al. (2011a). We infer
that late transient heating events like those undergone by chondrules can
reset the Pb-Pb chronometer in CAIs without resetting Al-Mg, as discussed in
Paper I.
Our results allow us to test the concordance of other isotopic systems, for
example in the chondrules that formed after the impact event that created the
CB/CH chondrites.
We concur with Bollard et al. (2019) that the isotopic systems were likely to
have closed simultaneously in these objects, making them a good test of our
chronometry. We find that if the impact took place at $5.75\pm 0.11$ Myr, the
Mn-Cr, Hf-W and Pb-Pb systems are indeed concordant, using the same parameters
derived from achondrites.
Our results allow other isotopic systems to be examined. In particular, we
conclude that Shallowater closed at $\Delta t_{\rm SW}=5.50\pm 0.23$ Myr after
$t\\!\\!=\\!\\!0$. This date should make it easier to put I-Xe ages into a
solar nebula chronology. I-Xe ages appear concordant. We do not see an obvious
conflict between the Be-B and Al-Mg ages of the FUN CAIs CMS-1 and KT1; both
systems appear consistent with having formed at 0.8 Myr after
$t\\!\\!=\\!\\!0$. Our results provide context for studying these other
isotopic systems.
To assist with developing a sequence of events in the solar nebula, we
strongly advocate reporting formation or closure times relative to
$t\\!\\!=\\!\\!0$. There are no astrophysical models of planet formation
that would be affected if the solar nebula formed 4500 or 4700 Myr ago
instead of 4568 Myr. Absolute ages are anyway uncertain to within $\pm 9$ Myr
because of the uncertainties in the uranium half-lives; Pb-Pb dating is only
precise when taking the difference between two ages, so that these
uncertainties cancel. In other words, Pb-Pb dating is only really usable in
models, and only practically precise, when it is employed as a relative chronometer.
The use of anchors to report Al-Mg or Mn-Cr ages as modeled absolute ages is
therefore not necessary, and indeed introduces considerable confusion and
additional imprecision. Our hope is that constraining the initial values of
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$ and
$(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$ more precisely will
enable reporting of relative ages.
The framework we have presented here makes clear the need for further
measurements, and points to how to employ them. From objects like D’Orbigny it
is clear that pooling results from different laboratories has allowed dating
that is more accurate and more precise; there is a need to measure all samples
multiple times in different labs. Pooling together data from multiple isotopic
systems also leads to greater precision. For example, our inferred Al-Mg
formation time of D’Orbigny is $\Delta t_{26}=5.059\pm 0.103$ Myr, but after
combining all the data, our inferred time for formation is $\Delta t=5.034\pm
0.048$ Myr. This is twice as precise as the typical $>0.1$ Myr uncertainty
in Al-Mg formation times. As more data are acquired for individual samples,
more precise formation times can be inferred; and as more samples and data are
acquired, our framework will allow more precise estimates of key quantities
like $(\mbox{${}^{182}{\rm Hf}/{}^{180}{\rm Hf}$})_{\rm SS}$ and
$(\mbox{${}^{53}{\rm Mn}/{}^{55}{\rm Mn}$})_{\rm SS}$. Already our
uncertainties in these quantities are far less than those of single direct
measurements of CAIs (for which disturbance cannot be ruled out anyway). Our
approach allows determination of ages and formation times of meteorites and
inclusions, without the use of individual samples as anchors.
Acknowledgments: The authors would like to acknowledge the efforts of
cosmochemists from multiple laboratories around the world whose work makes
possible the data cited in Table 1 and throughout this paper. Statistical
chronometry necessarily distills very difficult and painstaking analytical
work into mere numbers to be crunched, but the efforts to obtain those numbers
are appreciated. We thank Zack Torrano for useful discussions. We thank
Francois Tissot and two anonymous reviewers whose suggestions greatly improved
the quality of our work. The work herein benefitted from collaborations and/or
information exchange within NASA’s Nexus for Exoplanetary System Science
research coordination network sponsored by NASA’s Science Mission Directorate
(grant NNX15AD53G, PI Steve Desch). Emilie Dunham gratefully acknowledges
support from a 51 Pegasi b Fellowship, grant #2020-1829.
The data and scripts used to create Table 1 and Figure 3 are included as
Research Data.
## Appendix A Derivation of $t^{*}_{\rm SS}$
Here we derive Equation 16 for $t^{*}_{\rm SS}$. The global goodness-of-fit
parameter is
$\chi_{\nu}^{2}=\frac{1}{N-M}\,\sum_{i=1}^{A}\left[\frac{\left(\Delta
t_{26,i}-\Delta t_{i}\right)^{2}}{\sigma_{\Delta
t26,i}^{2}}+\frac{\left(\Delta t_{53,i}-\Delta t_{i}\right)^{2}}{\sigma_{\Delta
t53,i}^{2}}+\frac{\left(\Delta t_{182,i}-\Delta
t_{i}\right)^{2}}{\sigma_{\Delta t182,i}^{2}}+\frac{\left(\Delta t_{{\rm
Pb},i}-\Delta t_{i}\right)^{2}}{\sigma_{\Delta t{\rm Pb},i}^{2}}\right],$ (23)
and the optimal value of $t_{\rm SS}$ is that value which makes
$\partial\mbox{$\chi_{\nu}^{2}$}/\partial t_{\rm SS}=0$. It is recognized that
$\partial(\Delta t_{{\rm Pb},i})/\partial t_{\rm SS}=1$ and, from the
definition of $\Delta t_{i}$ (Equation 14), $\partial\Delta t_{i}/\partial
t_{\rm SS}=\alpha_{i}$, where
$\alpha_{i}=\frac{1/\sigma_{\Delta t{\rm Pb},i}^{2}}{1/\sigma_{\Delta t{\rm
Pb},i}^{2}+1/\sigma_{\Delta t26,i}^{2}+1/\sigma_{\Delta
t53,i}^{2}+1/\sigma_{\Delta t182,i}^{2}}$ (24)
Therefore
$2\sum_{i=1}^{A}\frac{\Delta t_{i}-\Delta t_{26,i}}{\sigma_{\Delta
t26,i}^{2}}\,\alpha_{i}+2\sum_{i=1}^{A}\frac{\Delta t_{i}-\Delta
t_{53,i}}{\sigma_{\Delta t53,i}^{2}}\,\alpha_{i}+2\sum_{i=1}^{A}\frac{\Delta
t_{i}-\Delta t_{182,i}}{\sigma_{\Delta t182,i}^{2}}\,\alpha_{i}$
$+2\sum_{i=1}^{A}\frac{\Delta t_{i}-\Delta t_{{\rm Pb},i}}{\sigma_{\Delta
t{\rm Pb},i}^{2}}\,\left(\alpha_{i}-1\right)=0.$ (25)
All the terms involving $\alpha_{i}$ cancel (by construction, since $\Delta
t_{i}$ itself was chosen to minimize $\chi_{\nu}^{2}$), leaving only
$\sum_{i=1}^{A}\frac{\Delta t_{{\rm Pb},i}}{\sigma_{\Delta t{\rm
Pb},i}^{2}}=\sum_{i=1}^{A}\frac{\Delta t_{i}}{\sigma_{\Delta t{\rm
Pb},i}^{2}}.$ (26)
Substituting $\Delta t_{{\rm Pb},i}=t_{\rm SS}-t_{{\rm Pb},i}$ and expanding
$\Delta t_{i}$ via Equation 14, we can write this as
$\left(\sum_{i=1}^{A}\frac{1-\alpha_{i}}{\sigma_{\Delta t{\rm
Pb},i}^{2}}\right)\,t_{\rm SS}=\sum_{i=1}^{A}\,\alpha_{i}\,\left[\frac{t_{{\rm
Pb},i}+\Delta t_{26,i}}{\sigma_{\Delta t26,i}^{2}}+\frac{t_{{\rm Pb},i}+\Delta
t_{53,i}}{\sigma_{\Delta t53,i}^{2}}+\frac{t_{{\rm Pb},i}+\Delta
t_{182,i}}{\sigma_{\Delta t182,i}^{2}}\right]$ (27)
Upon final simplification this yields
$t_{\rm SS}^{*}=\frac{\sum_{i=1}^{A}\alpha_{i}\left(\frac{t_{{\rm
Pb},i}+\Delta t_{26,i}}{\sigma_{\Delta t26,i}^{2}}+\frac{t_{{\rm Pb},i}+\Delta
t_{53,i}}{\sigma_{\Delta t53,i}^{2}}+\frac{t_{{\rm Pb},i}+\Delta
t_{182,i}}{\sigma_{\Delta
t182,i}^{2}}\right)}{\sum_{i=1}^{A}\alpha_{i}\left(\frac{1}{\sigma_{\Delta
t26,i}^{2}}+\frac{1}{\sigma_{\Delta t53,i}^{2}}+\frac{1}{\sigma_{\Delta
t182,i}^{2}}\right)}.$ (28)
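Equation 28 can be checked numerically. The sketch below, using invented illustrative data (not measurements from Table 1), implements Equations 24 and 28 together with the (unnormalized) $\chi^{2}$ of Equation 23, and verifies that the closed form minimizes it:

```python
# Numerical check of Equation 28 with invented illustrative data
# (two samples, three relative chronometers plus Pb-Pb). chi^2 is
# quadratic in t_SS, so the closed form should beat nearby trial values.

def alpha(s26, s53, s182, sPb):
    """Equation 24: weight of the Pb-Pb term in Delta t_i."""
    w = [1/sPb**2, 1/s26**2, 1/s53**2, 1/s182**2]
    return w[0] / sum(w)

def t_ss_star(samples):
    """Equation 28. Each sample: (t_Pb, dt26, dt53, dt182, s26, s53, s182, sPb)."""
    num = den = 0.0
    for tPb, d26, d53, d182, s26, s53, s182, sPb in samples:
        a = alpha(s26, s53, s182, sPb)
        num += a * ((tPb + d26)/s26**2 + (tPb + d53)/s53**2 + (tPb + d182)/s182**2)
        den += a * (1/s26**2 + 1/s53**2 + 1/s182**2)
    return num / den

def chi2(t_ss, samples):
    """Unnormalized Equation 23, with Delta t_i the inverse-variance
    weighted mean of the four formation times (Equation 14)."""
    total = 0.0
    for tPb, d26, d53, d182, s26, s53, s182, sPb in samples:
        dPb = t_ss - tPb
        w = [1/s26**2, 1/s53**2, 1/s182**2, 1/sPb**2]
        dt = (w[0]*d26 + w[1]*d53 + w[2]*d182 + w[3]*dPb) / sum(w)
        total += sum(wi*(di - dt)**2 for wi, di in zip(w, [d26, d53, d182, dPb]))
    return total

samples = [(4563.3, 5.0, 5.1, 4.9, 0.1, 0.3, 0.2, 0.15),
           (4566.9, 1.5, 1.4, 1.6, 0.2, 0.1, 0.3, 0.10)]
t_star = t_ss_star(samples)
print(f"t_SS* = {t_star:.2f} Myr")  # near 4568 Myr for this toy data
```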
## References
* Amelin (2006) Amelin, Y., 2006. The prospect of high-precision Pb isotopic dating of meteorites. Meteoritics and Planetary Science 41, 7–17. doi:10.1111/j.1945-5100.2006.tb00189.x.
* Amelin (2008a) Amelin, Y., 2008a. The U Pb systematics of angrite Sahara 99555. Geochimica et Cosmochimica Acta 72, 4874–4885. doi:10.1016/j.gca.2008.07.008.
* Amelin (2008b) Amelin, Y., 2008b. U Pb ages of angrites. Geochimica et Cosmochimica Acta 72, 221–232. doi:10.1016/j.gca.2007.09.034.
* Amelin and Irving (2007) Amelin, Y., Irving, A.J., 2007\. Seven Million Years of Evolution on the Angrite Parent Body from Pb-Isotopic Data, in: Chronology of Meteorites and the Early Solar System, pp. 20–21.
* Amelin and Irving (2011) Amelin, Y., Irving, A.J., 2011\. Lead Isotopic Age of the Quenched Angrite Northwest Africa 1296. Meteoritics and Planetary Science Supplement 74, 5196.
* Amelin et al. (2010) Amelin, Y., Kaltenbach, A., Iizuka, T., Stirling, C.H., Ireland, T.R., Petaev, M., Jacobsen, S.B., 2010. U-Pb chronology of the Solar System’s oldest solids with variable 238U/ 235U. Earth and Planetary Science Letters 300, 343–350. doi:10.1016/j.epsl.2010.10.015.
* Amelin et al. (2011) Amelin, Y., Kaltenbach, A., Stirling, C.H., 2011. The U-Pb Systematics and Cooling Rate of Plutonic Angrite NWA 4590, in: Lunar and Planetary Science Conference, p. 1682.
* Amelin et al. (2019) Amelin, Y., Koefoed, P., Iizuka, T., Fernandes, V.A., Huyskens, M.H., Yin, Q.Z., Irving, A.J., 2019. U-Pb, Rb-Sr and Ar-Ar systematics of the ungrouped achondrites Northwest Africa 6704 and Northwest Africa 6693. Geochimica et Cosmochimica Acta 245, 628–642. doi:10.1016/j.gca.2018.09.021.
* Amelin et al. (2002) Amelin, Y., Krot, A.N., Hutcheon, I.D., Ulyanov, A.A., 2002\. Lead Isotopic Ages of Chondrules and Calcium-Aluminum-Rich Inclusions. Science 297, 1678–1683. doi:10.1126/science.1073950.
* Audi et al. (2003) Audi, G., Bersillon, O., Blachot, J., Wapstra, A.H., 2003\. The NUBASE evaluation of nuclear and decay properties. Nuclear Physics A 729, 3–128. doi:10.1016/j.nuclphysa.2003.11.001.
* Auer et al. (2009) Auer, M., Wagenbach, D., Wild, E.M., Wallner, A., Priller, A., Miller, H., Schlosser, C., Kutschera, W., 2009\. Cosmogenic 26Al in the atmosphere and the prospect of a 26Al/ 10Be chronometer to date old ice. Earth and Planetary Science Letters 287, 453–462. doi:10.1016/j.epsl.2009.08.030.
* Birck and Allègre (1985) Birck, J.L., Allègre, C.J., 1985\. 53Mn in the Early Solar System. Meteoritics 20, 609\.
* Bollard et al. (2015) Bollard, J., Connelly, J.N., Bizzarro, M., 2015. Pb-Pb dating of individual chondrules from the CBa chondrite Gujba: Assessment of the impact plume formation model. Meteoritics and Planetary Science 50, 1197–1216. doi:10.1111/maps.12461.
* Bollard et al. (2019) Bollard, J., Kawasaki, N., Sakamoto, N., Olsen, M., Itoh, S., Larsen, K., Wielandt, D., Schiller, M., Connelly, J.N., Yurimoto, H., Bizzarro, M., 2019. Combined U-corrected Pb-Pb dating and 26Al-26Mg systematics of individual chondrules - Evidence for a reduced initial abundance of 26Al amongst inner Solar System chondrules. Geochimica et Cosmochimica Acta 260, 62–83. doi:10.1016/j.gca.2019.06.025.
* Bouvier et al. (2011a) Bouvier, A., Brennecka, G.A., Wadhwa, M., 2011a. Absolute Chronology of the First Solids in the Solar System, in: Workshop on Formation of the First Solids in the Solar System, p. 9054.
* Bouvier et al. (2011b) Bouvier, A., Spivak-Birndorf, L.J., Brennecka, G.A., Wadhwa, M., 2011b. New constraints on early Solar System chronology from Al-Mg and U-Pb isotope systematics in the unique basaltic achondrite Northwest Africa 2976. Geochimica et Cosmochimica Acta 75, 5310–5323. doi:10.1016/j.gca.2011.06.033.
* Bouvier and Wadhwa (2010) Bouvier, A., Wadhwa, M., 2010\. The age of the Solar System redefined by the oldest Pb-Pb age of a meteoritic inclusion. Nature Geoscience 3, 637–641. doi:10.1038/ngeo941.
* Brennecka et al. (2018) Brennecka, G.A., Amelin, Y., Kleine, T., 2018. Uranium isotope ratios of Muonionalusta troilite and complications for the absolute age of the IVA iron meteorite core. Earth and Planetary Science Letters 490, 1–10. doi:10.1016/j.epsl.2018.03.010.
* Brennecka and Wadhwa (2012) Brennecka, G.A., Wadhwa, M., 2012\. Uranium isotope compositions of the basaltic angrite meteorites and the chronological implications for the early Solar System. Proceedings of the National Academy of Science 109, 9299–9303. doi:10.1073/pnas.1114043109.
* Budde et al. (2018) Budde, G., Kruijer, T.S., Kleine, T., 2018. Hf-W chronology of CR chondrites: Implications for the timescales of chondrule formation and the distribution of 26Al in the solar nebula. Geochimica et Cosmochimica Acta 222, 284–304. doi:10.1016/j.gca.2017.10.014.
* Burkhardt et al. (2012) Burkhardt, C., Kleine, T., Dauphas, N., Wieler, R., 2012\. Origin of isotopic heterogeneity in the solar nebula by thermal processing and mixing of nebular dust. Earth and Planetary Science Letters 357, 298–307. doi:10.1016/j.epsl.2012.09.048.
* Chen et al. (2002) Chen, J.H., Papanastassiou, D.A., Wasserburg, G.J., 2002. Re-Os and Pd-Ag systematics in Group IIIAB irons and in pallasites. Geochimica et Cosmochimica Acta 66, 3793–3810. doi:10.1016/S0016-7037(02)00952-3.
* Chen and Wasserburg (1990) Chen, J.H., Wasserburg, G.J., 1990\. The isotopic composition of Ag in meteorites and the presence of 107Pd in protoplanets. Geochimica et Cosmochimica Acta 54, 1729–1743. doi:10.1016/0016-7037(90)90404-9.
* Chmeleff et al. (2010) Chmeleff, J., von Blanckenburg, F., Kossert, K., Jakob, D., 2010\. Determination of the 10Be half-life by multicollector ICP-MS and liquid scintillation counting. Nuclear Instruments and Methods in Physics Research B 268, 192–199. doi:10.1016/j.nimb.2009.09.012.
* Connelly and Bizzarro (2016) Connelly, J.N., Bizzarro, M., 2016\. Lead isotope evidence for a young formation age of the Earth-Moon system. Earth and Planetary Science Letters 452, 36–43. doi:10.1016/j.epsl.2016.07.010.
* Connelly et al. (2012) Connelly, J.N., Bizzarro, M., Krot, A.N., Nordlund, Å., Wielandt, D., Ivanova, M.A., 2012\. The Absolute Chronology and Thermal Processing of Solids in the Solar Protoplanetary Disk. Science 338, 651\. doi:10.1126/science.1226919.
* Connelly et al. (2008) Connelly, J.N., Bizzarro, M., Thrane, K., Baker, J.A., 2008\. The Pb Pb age of Angrite SAH99555 revisited. Geochimica et Cosmochimica Acta 72, 4813–4824. doi:10.1016/j.gca.2008.06.007.
* Connelly et al. (2021) Connelly, J.N., Bollard, J., Costa, M.M., Vermeesch, P., Bizzarro, M., 2021. Improved methods for high-precision Pb-Pb dating of extraterrestrial materials. Journal of Analytical Atomic Spectroscopy 36, 2579–2587. doi:10.1039/D1JA00299F.
* Cook et al. (2021) Cook, D.L., Meyer, B.S., Schönbächler, M., 2021. Iron and Nickel Isotopes in II D and IVB Iron Meteorites : Evidence for Admixture of an SN II Component and Implications for the Initial Abundance of 60 Fe. The Astrophysical Journal 917, 59\. URL: http://dx.doi.org/10.3847/1538-4357/ac0add, doi:10.3847/1538-4357/ac0add.
* Davis (2022) Davis, A.M., 2022. Short-Lived Nuclides in the Early Solar System: Abundances, Origins, and Applications. Annual Review of Nuclear and Particle Science 72\. doi:10.1146/annurev-nucl-010722-074615.
* Davis and McKeegan (2014) Davis, A.M., McKeegan, K.D., 2014\. Short-Lived Radionuclides and Early Solar System Chronology, in: Davis, A.M. (Ed.), Meteorites and Cosmochemical Processes. volume 1, pp. 361–395.
* Desch et al. (2023) Desch, S.J., Dunlap, D.R., Dunham, E.T., Williams, C.D., Mane, P., 2023. Statistical Chronometry of Meteorites: I. The Pb-Pb Age of $t\\!\\!=\\!\\!0$ of the Solar System. Icarus, submitted.
* Desch et al. (2012) Desch, S.J., Morris, M.A., Connolly, H.C., Boss, A.P., 2012\. The importance of experiments: Constraints on chondrule formation models. Meteoritics and Planetary Science 47, 1139–1156. doi:10.1111/j.1945-5100.2012.01357.x.
* Dunham et al. (2020) Dunham, E.T., Wadhwa, M., Desch, S.J., Hervig, R.L., 2020. Best Practices for Determination of Initial 10Be/9Be in Early Solar System materials by Secondary Ion Mass Spectrometry. Geostandards and Geoanalytical Research 44, 695–710. doi:10.1111/ggr.12329.
* Dunham et al. (2022) Dunham, E.T., Wadhwa, M., Desch, S.J., Liu, M.C., Fukuda, K., Kita, N., Hertwig, A.T., Hervig, R.L., Defouilloy, C., Simon, S.B., Davidson, J., Schrader, D.L., Fujimoto, Y., 2022. Uniform initial 10Be/9Be inferred from refractory inclusions in CV3, CO3, CR2, and CH/CB chondrites. Geochimica et Cosmochimica Acta 000, 000–000.
* Emery (1972) Emery, G.T., 1972. Perturbation of Nuclear Decay Rates. Annual Review of Nuclear and Particle Science 22, 165–202. doi:10.1146/annurev.ns.22.120172.001121.
* Flynn and Glendenin (1969) Flynn, K.F., Glendenin, L.E., 1969\. Half-Life of 107Pd. Physical Review 185, 1591–1593. doi:10.1103/PhysRev.185.1591.
* García-Toraño et al. (2018) García-Toraño, E., Altzitzoglou, T., Auerbach, P., Bé, M.M., Bobin, C., Cassette, P., Chartier, F., Dersch, R., Fernández, M., Isnard, H., Kossert, K., Lourenco, V., Nähle, O., Nonell, A., Peyrés, V., Pommé, S., Rozkov, A., Sanchez-Cabezudo, A., Sochorová, J., 2018. The half-life of ${}^{129}{\rm I}$. Applied Radiation and Isotopes 140, 157–162. doi:10.1016/j.apradiso.2018.06.007.
* Gellissen et al. (2007) Gellissen, M., Palme, H., Korotev, R.L., Irving, A.J., 2007\. NWA 2999, A Unique Angrite with a Large Chondritic Component, in: Lunar and Planetary Science Conference, p. 1612.
* Gilmour and Crowther (2017) Gilmour, J.D., Crowther, S.A., 2017\. The I-Xe chronometer and its constraints on the accretion and evolution of planetesimals. Geochemical Journal 51, 69–80. doi:10.2343/geochemj.2.0429.
* Glavin et al. (2004) Glavin, D.P., Kubny, A., Jagoutz, E., Lugmair, G.W., 2004\. Mn-Cr isotope systematics of the D’Orbigny angrite. Meteoritics and Planetary Science 39, 693–700. doi:10.1111/j.1945-5100.2004.tb00112.x.
* Goldmann et al. (2015) Goldmann, A., Brennecka, G., Noordmann, J., Weyer, S., Wadhwa, M., 2015. The uranium isotopic composition of the Earth and the Solar System. Geochimica et Cosmochimica Acta 148, 145–158. doi:10.1016/j.gca.2014.09.008.
* Goodrich et al. (2017) Goodrich, C.A., Kita, N.T., Yin, Q.Z., Sanborn, M.E., Williams, C.D., Nakashima, D., Lane, M.D., Boyle, S., 2017\. Petrogenesis and provenance of ungrouped achondrite Northwest Africa 7325 from petrology, trace elements, oxygen, chromium and titanium isotopes, and mid-IR spectroscopy. Geochimica et Cosmochimica Acta 203, 381–403. doi:10.1016/j.gca.2016.12.021.
* Heimann et al. (1974) Heimann, M., Parekh, P.P., Herr, W., 1974. A comparative study on 28Al and 53Mn in eighteen chondrites. Geochimica et Cosmochimica Acta 38, 217–234. doi:10.1016/0016-7037(74)90107-0.
* Herr et al. (1972) Herr, W., Herpers, U., Woelfle, R., 1972. Study on the cosmic ray produced long-lived Mn-53 in Apollo 14 samples. Proc. Third Lunar Science Conference, Geochm. Cosmochim. Acta Suppl. 3, 1763–1769.
* Hibiya et al. (2019) Hibiya, Y., Archer, G.J., Tanaka, R., Sanborn, M.E., Sato, Y., Iizuka, T., Ozawa, K., Walker, R.J., Yamaguchi, A., Yin, Q.Z., Nakamura, T., Irving, A.J., 2019\. The origin of the unique achondrite Northwest Africa 6704: Constraints from petrology, chemistry and Re-Os, O and Ti isotope systematics. Geochimica et Cosmochimica Acta 245, 597–627. doi:10.1016/j.gca.2018.04.031.
* Hohenberg and Kennedy (1981) Hohenberg, C.M., Kennedy, B.M., 1981\. I-Xe dating: intercomparisons of neutron irradiations and reproducibility of the Bjurböle standard. Geochimica et Cosmochimica Acta 45, 251–256. doi:10.1016/0016-7037(81)90170-8.
* Holden (1990) Holden, N.E., 1990. Total half-lives for selected nuclides. Pure Applied Chemistry 62, 941–958. doi:http://dx.doi.org/10.1351/pac199062050941.
* Holst et al. (2013) Holst, J.C., Olsen, M.B., Paton, C., Nagashima, K., Schiller, M., Wielandt, D., Larsen, K.K., Connelly, J.N., Jørgensen, J.K., Krot, A.N., Nordlund, Å., Bizzarro, M., 2013\. 182Hf-182W age dating of a 26Al-poor inclusion and implications for the origin of short-lived radioisotopes in the early Solar System. Proceedings of the National Academy of Science 110, 8819–8823. doi:10.1073/pnas.1300383110.
* Honda and Imamura (1971) Honda, M., Imamura, M., 1971\. Half-Life of Mn53. Physical Review C 4, 1182–1188. doi:10.1103/PhysRevC.4.1182.
* Humayun et al. (2007) Humayun, M., Simon, S.B., Grossman, L., 2007. Tungsten and hafnium distribution in calcium aluminum inclusions (CAIs) from Allende and Efremovka. Geochimica et Cosmochimica Acta 71, 4609–4627. doi:10.1016/j.gca.2007.07.014.
* Iizuka et al. (2014) Iizuka, T., Amelin, Y., Kaltenbach, A., Koefoed, P., Stirling, C.H., 2014. U-Pb systematics of the unique achondrite Ibitira: Precise age determination and petrogenetic implications. Geochimica et Cosmochimica Acta 132, 259–273. doi:10.1016/j.gca.2014.02.017.
* Iizuka et al. (2016) Iizuka, T., Lai, Y.J., Akram, W., Amelin, Y., Schönbächler, M., 2016. The initial abundance and distribution of 92Nb in the Solar System. Earth and Planetary Science Letters 439, 172–181. doi:10.1016/j.epsl.2016.02.005, arXiv:1602.00966.
* Ireland (1990) Ireland, T.R., 1990. Presolar isotopic and chemical signatures in hibonite-bearing refractory inclusions from the Murchison carbonaceous chondrite. Geochimica et Cosmochimica Acta 54, 3219–3237. doi:10.1016/0016-7037(90)90136-9.
* Irving and Kuehner (2007) Irving, A.J., Kuehner, S.M., 2007\. Plutonic Angrite NWA 4801 and a Model for the Angrite Parent Body Consistent with Petrological and Chronological Constraints, in: Chronology of Meteorites and the Early Solar System, pp. 74–75.
* Jacobsen et al. (2008) Jacobsen, B., Yin, Q.z., Moynier, F., Amelin, Y., Krot, A.N., Nagashima, K., Hutcheon, I.D., Palme, H., 2008\. 26Al- 26Mg and 207Pb- 206Pb systematics of Allende CAIs: Canonical solar initial 26Al/ 27Al ratio reinstated. Earth and Planetary Science Letters 272, 353–364. doi:10.1016/j.epsl.2008.05.003.
* Jaffey et al. (1971) Jaffey, A.H., Flynn, K.F., Glendenin, L.E., Bentley, W.C., Essling, A.M., 1971. Precision Measurement of Half-Lives and Specific Activities of 235U and 238U. Physical Review C 4, 1889–1906. doi:10.1103/PhysRevC.4.1889.
* Jambon et al. (2004) Jambon, A., Barrat, J.A., Boudouma, O., Fonteilles, M., Badia, S., Gopel, C., Bohn, M., 2004. Mineralogy and petrology of the angrite Northwest Africa 1296. Meteoritics and Planetary Science 40, 361–375. doi:10.1111/j.1945-5100.2005.tb00388.x.
* Keil (2012) Keil, K., 2012. Angrites, a small but diverse suite of ancient, silica-undersaturated volcanic-plutonic mafic meteorites, and the history of their parent asteroid. Chemie der Erde / Geochemistry 72, 191–218. doi:10.1016/j.chemer.2012.06.002.
* Kleine et al. (2012) Kleine, T., Hans, U., Irving, A.J., Bourdon, B., 2012\. Chronology of the angrite parent body and implications for core formation in protoplanets. Geochimica et Cosmochimica Acta 84, 186–203. doi:10.1016/j.gca.2012.01.032.
* Kleine and Wadhwa (2017) Kleine, T., Wadhwa, M., 2017\. Chronology of Planetesimal Differentiation, in: Elkins-Tanton, L.T., Weiss, B.P. (Eds.), Planetesimals: Early Differentiation and Consequences for Planets, pp. 224–245. doi:10.1017/9781316339794.011.
* Koefoed et al. (2016) Koefoed, P., Amelin, Y., Yin, Q.Z., Wimpenny, J., Sanborn, M.E., Iizuka, T., Irving, A.J., 2016. U-Pb and Al-Mg systematics of the ungrouped achondrite Northwest Africa 7325. Geochimica et Cosmochimica Acta 183, 31–45. doi:10.1016/j.gca.2016.03.028.
* Kondev (2021) Kondev, F.G., 2021. Nuclear Data Sheets for A=201. arXiv e-prints , arXiv:2107.07613arXiv:2107.07613.
* Kööp et al. (2016) Kööp, L., Davis, A.M., Nakashima, D., Park, C., Krot, A.N., Nagashima, K., Tenner, T.J., Heck, P.R., Kita, N.T., 2016. A link between oxygen, calcium and titanium isotopes in 26Al-poor hibonite-rich CAIs from Murchison and implications for the heterogeneity of dust reservoirs in the solar nebula. Geochimica et Cosmochimica Acta 189, 70–95. doi:10.1016/j.gca.2016.05.014.
* Korschinek et al. (2010) Korschinek, G., Bergmaier, A., Faestermann, T., Gerstmann, U.C., Knie, K., Rugel, G., Wallner, A., Dillmann, I., Dollinger, G., von Gostomski, C.L., Kossert, K., Maiti, M., Poutivtsev, M., Remmert, A., 2010\. A new value for the half-life of 10Be by Heavy-Ion Elastic Recoil Detection and liquid scintillation counting. Nuclear Instruments and Methods in Physics Research B 268, 187–191. doi:10.1016/j.nimb.2009.09.020.
* Krot et al. (2005) Krot, A.N., Amelin, Y., Cassen, P., Meibom, A., 2005\. Young chondrules in CB chondrites from a giant impact in the early Solar System. Nature 436, 989–992. doi:10.1038/nature03830.
* Kruijer et al. (2017) Kruijer, T.S., Burkhardt, C., Budde, G., Kleine, T., 2017\. Age of Jupiter inferred from the distinct genetics and formation times of meteorites. Proceedings of the National Academy of Science 114, 6712–6716. doi:10.1073/pnas.1704461114.
* Kruijer et al. (2014) Kruijer, T.S., Kleine, T., Fischer-Gödde, M., Burkhardt, C., Wieler, R., 2014. Nucleosynthetic W isotope anomalies and the Hf-W chronometry of Ca-Al-rich inclusions. Earth and Planetary Science Letters 403, 317–327. doi:10.1016/j.epsl.2014.07.003.
# Deep Learning-Based Vehicle Speed Prediction for Ecological Adaptive Cruise
Control in Urban and Highway Scenarios
###### Abstract
In a typical car-following scenario, target vehicle speed fluctuations act as
an external disturbance to the host vehicle and in turn affect its energy
consumption. To control a host vehicle in an energy-efficient manner using
model predictive control (MPC), and moreover, enhance the performance of an
ecological adaptive cruise control (EACC) strategy, forecasting the future
velocities of a target vehicle is essential. For this purpose, a deep
recurrent neural network-based vehicle speed prediction using long short-term
memory (LSTM) and gated recurrent units (GRU) is studied in this work. Besides
these, the physics-based constant velocity (CV) and constant acceleration (CA)
models are discussed. The sequential time series data for training (e.g. speed
trajectories of the target and its preceding vehicles obtained through
vehicle-to-vehicle (V2V) communication, road speed limits, and current and
future traffic light phases collected using vehicle-to-infrastructure (V2I)
communication) is gathered from both urban and highway networks created in the
microscopic traffic simulator SUMO. The proposed speed prediction models are
evaluated for long-term predictions (up to $10\,\mathrm{s}$) of target vehicle
future velocities. The results reveal that the LSTM-based speed predictor
outperforms the other models, achieving better prediction accuracy on unseen
test datasets and thereby better generalization ability. Furthermore, the
performance of the EACC-equipped host car on the predicted velocities is
evaluated, and its energy-saving benefits for
different prediction horizons are presented.
###### keywords:
Adaptive cruise control, Velocity prediction, Car-following, Recurrent neural
networks, Model predictive control, Intelligent transportation systems, V2V,
V2I
††thanks: This work is funded by the German Ministry for Education and
Research (BMBF) and partially supported by the Center of Commercial Vehicle
Technology (Zentrum für Nutzfahrzeugtechnologie, ZNT) at the University of
Kaiserslautern.
Sai Krishna Chada* Daniel Görges* Achim Ebert** Roman Teutsch***
* Institute of Electromobility ** Human Computer Interaction Group *** Institute for Mechanical and Automotive Design
University of Kaiserslautern, Germany
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>&
<EMAIL_ADDRESS>
## 1 Introduction
Vehicle speed prediction is regarded as a key aspect in intelligent
transportation systems (ITS) that fundamentally aim at improving the road
safety, traffic efficiency and vehicular energy efficiency (Jiang and Fei,
2017). Past studies on speed forecasting in transportation systems focused
mainly on two directions, namely, network-wide traffic speed prediction (Cui
et al., 2018), and host vehicle velocity prediction (Sun et al., 2015; Gaikwad
et al., 2019). Forecasting the future velocities of the host vehicle for the
entire driving route for the purpose of efficient energy management in hybrid
electric vehicles (HEVs) is studied in (Sun et al., 2015). Moreover, the
vehicle speed prediction can largely benefit the newly developed advanced
driver assistance systems (ADAS) as well (Schmied et al., 2015). With a goal
to minimize the energy consumption in on-road vehicles, efforts are being made
towards developing advanced adaptive cruise control (ACC) concepts to control
the host vehicle in an automated fashion. In this regard, ecological adaptive
cruise control (EACC) concepts that use an optimal controller to compute an
energy-optimal speed for the host vehicle while tracking a target (leading)
vehicle are becoming popular (Moser et al., 2015). In prior studies (Weißmann
et al., 2018; Chada et al., 2020), to explore the energy consumption reduction
benefits using EACC in a car-following scenario, a common assumption was made
that the future velocities of the target vehicle are perfectly available.
However, the perfect future velocities of the target vehicle in a real world
setting are not known a priori, but rather must be predicted through either
behavioral models or data-driven approaches.
Several methods for developing vehicle speed predictors for various
applications were proposed in prior works. For instance, a comparative
analysis on the parametric and non-parametric approaches for speed prediction
in highway driving is presented in (Lefèvre et al., 2014). The authors
classified the prediction space into short-term prediction ($<4\,\mathrm{s}$)
and long-term prediction ($4-10\,\mathrm{s}$). Vehicle velocity prediction
until $10\,\mathrm{s}$ for the use case in energy management in HEVs is
studied in (Gaikwad et al., 2019). (Liu et al., 2019) investigated, on the one
hand, stochastic models such as Markov chains and conditional linear Gaussian
(CLG) models for host vehicle velocity prediction and, on the other hand,
deterministic models such as auto-regressive moving average (ARMA), the
nonlinear auto-regressive exogenous model (NARX) and recurrent neural networks
such as long short-term memory (LSTM) units. The authors in (Wegener et
al., 2021) studied longitudinal vehicle speed prediction in urban environments
using CLG and deep neural networks (DNNs). Moreover, (Shin et al., 2019)
proposed a fuzzy Markov chain model with speed constraints to perform host
vehicle speed prediction. In (Lin and Görges, 2018), the authors proposed a
cloud-based seasonal autoregressive integrated moving average (SARIMA)
framework for vehicle speed prediction purpose, for which a highway database
was used. To enhance the accuracy of speed predictors, exploiting the
surrounding information available through vehicle-to-vehicle (V2V) and
vehicle-to-infrastructure (V2I) communication can be essential (Moser et al.,
2015).
Concerning the speed prediction for the ecological cruise control use case,
there exist only a handful of studies (Schmied et al., 2015; Moser et al.,
2015; Jia et al., 2020; Sankar et al., 2022). (Schmied et al., 2015) used a
simplified prediction model with sinusoidal functions to predict the preceding
vehicle behavior, and demonstrated a predictive cruise control approach. In
(Moser et al., 2015), a Bayesian network approach with CLG model was used to
predict the information of a target vehicle for the cooperative adaptive
cruise control (CACC) use case. In (Jia et al., 2020), a long short-term
memory (LSTM)-based energy-optimal ACC with target vehicle speed prediction in
an urban environment is proposed. Furthermore, the authors in (Wegener et al.,
2021) implemented a vector autoregressive (VAR) model to generate simultaneous
predictions of the target vehicle, and used receding horizon control to derive
optimal accelerations for the host vehicle.
In most of the previous works (Lin and Görges, 2018; Shin et al., 2019; Liu et
al., 2019; Jia et al., 2020; Wegener et al., 2021), the speed prediction
models were trained on datasets that were gathered from repeated trials in the
same driving route by considering either limited or no surrounding traffic.
Although the driving patterns are comparatively easy to predict in such an
approach, the limitation remains that the prediction model may not generalize
well to unseen data from a different route. To address this, the
present work proposes a scalable and more generalizable method for preparing
the time series data, and develops a speed predictor that uses historical
observations to predict the target vehicle future velocities in both urban and
highway environments.
The contributions made in this paper are: (1) A novel approach for time series
data preparation for urban and highway networks using the microscopic traffic
simulation tool SUMO is presented. (2) To predict the target vehicle future
velocities, both deep recurrent neural networks (LSTM and GRU) and physics-
based models (CV and CA) are studied. (3) The influence of various input
variables (e.g. preceding vehicle behavior, traffic light signal phase and
road speed limits) on the prediction accuracy is investigated. Moreover, the
impact of using additional V2V and V2I information on the accuracy of the
predicted outputs is explored. (4) Furthermore, the performance of the EACC on
the predicted target vehicle velocities is evaluated, and the energy-saving
potential for different prediction horizons is investigated.
## 2 Ecological Adaptive Cruise Control
### 2.1 System Dynamics
The longitudinal vehicle dynamics of the host car is described by
$\displaystyle\frac{dv_{\text{h}}}{dt}=\frac{1}{m_{\text{eq}}}\big(F_{\text{t}}-F_{\text{b}}-\underbrace{(F_{\text{a}}+F_{\text{roll}}+F_{\text{g}})}_{F_{\text{r}}}\big)$
(1)
where $v_{\text{h}}$ is the host car velocity, $m_{\text{eq}}$ is the
equivalent mass which is the sum of vehicle weight, rotational equivalent
masses, driver and cargo weight, $F_{\text{t}}$ is the traction force,
$F_{\text{b}}$ is the braking force and $F_{\text{r}}$ is the combination of
aerodynamic resistance $F_{\text{a}}=\frac{1}{2}\rho
A_{\text{f}}c_{\text{a}}v_{\text{h}}^{2}$, rolling resistance
$F_{\text{roll}}=c_{\text{r}}m_{\text{v}}g\cos\theta$ and gradient resistance
$F_{\text{g}}=m_{\text{v}}g\sin\theta$. To handle the nonlinearity occurring
due to the term $v_{\text{h}}^{2}$ in $F_{\text{a}}$, an approximation of the
aerodynamic resistance $F_{\text{a}}\approx\frac{1}{2}\rho
A_{\text{f}}c_{\text{a}}(p_{1}v_{\text{h}}+p_{\text{2}})$ is considered in
this work. Here, $c_{\text{a}}$ is the drag coefficient, $A_{\text{f}}$ is the
frontal cross-sectional area of the vehicle, $\rho$ is the density of the air,
$p_{\text{1}}$ and $p_{\text{2}}$ are the coefficients obtained through line
fitting. Furthermore, $m_{\text{v}}$ is the host vehicle weight, $g$ is the
gravitational acceleration, $\theta$ is the gradient angle and $c_{\text{r}}$
is the rolling resistance coefficient.
The chosen host vehicle in this work is a battery electric vehicle (BEV),
whose accurate vehicle and battery models are obtained from (Lin et al.,
2014). A half-map approximation of the BEV power consumption map (Chada et
al., 2020) is used in this study.
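As a rough illustration of the line fitting mentioned above, the coefficients $p_{1}$ and $p_{2}$ can be obtained by a least-squares fit of $v^{2}$ over an assumed operating speed range. All numerical values below (air density, frontal area, drag coefficient, speed range) are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Illustrative parameters (assumptions): air density, frontal area, drag coefficient.
rho, A_f, c_a = 1.2, 2.3, 0.29

# Least-squares line fit v^2 ~ p1*v + p2 over an assumed operating range of 5-35 m/s.
v = np.linspace(5.0, 35.0, 100)
p1, p2 = np.polyfit(v, v**2, deg=1)

def F_a_exact(v_h):
    """Quadratic aerodynamic resistance 0.5*rho*A_f*c_a*v^2."""
    return 0.5 * rho * A_f * c_a * v_h**2

def F_a_approx(v_h):
    """Linear surrogate 0.5*rho*A_f*c_a*(p1*v + p2), affine in v_h."""
    return 0.5 * rho * A_f * c_a * (p1 * v_h + p2)
```

The linear surrogate keeps the drag term affine in $v_{\text{h}}$, which is what makes the MPC constraints below tractable.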
### 2.2 Model Predictive Control Problem Formulation
A typical car-following scenario is illustrated in Fig. 1, in which a host car
is tracking a target vehicle. The EACC optimization problem, based on a model
predictive control (MPC) framework, is formulated in the time domain with the
goal of minimizing the objective function (2). Here, $N$ is the length of the prediction
horizon and $k$ denotes the discrete time.
$\displaystyle\min_{F_{\text{t},k},F_{\text{b},k},\zeta_{1,k},\zeta_{2,k}}\ \sum_{k=0}^{N-1}P(v_{\text{h},k},F_{\text{t},k})+\varepsilon_{1}F_{\text{b},k}^{2}+\varepsilon_{2}\zeta_{1,k}^{2}+\varepsilon_{3}\zeta_{2,k}^{2}$
(2a)
$\displaystyle\text{s.t.}\ \ v_{\text{h},k+1}=v_{\text{h},k}+\frac{\Delta T}{m_{\text{eq}}}\big(F_{\text{t},k}-F_{\text{b},k}-c_{\text{r}}m_{\text{v}}g\cos\theta_{k}-m_{\text{v}}g\sin\theta_{k}-\tfrac{1}{2}\rho A_{\text{f}}c_{\text{a}}(p_{1}v_{\text{h},k}+p_{2})\big)$
(2b)
$\displaystyle d_{\text{rel},k+1}=d_{\text{rel},k}+\Delta T(v_{\text{t},k}-v_{\text{h},k})$
(2c)
$\displaystyle 0\leq v_{\text{h},k}\leq v_{\text{max}}$
(2d)
$\displaystyle 0\leq F_{\text{t},k}\leq F_{\text{t,max}}$
(2e)
$\displaystyle 0\leq F_{\text{b},k}\leq F_{\text{b,max}}$
(2f)
$\displaystyle F_{\text{t},k+1}-F_{\text{t},k}\leq\Delta F_{\text{t,max}}+\zeta_{2,k}$
(2g)
$\displaystyle F_{\text{t},k}-F_{\text{t},k+1}\leq\Delta F_{\text{t,max}}+\zeta_{2,k}$
(2h)
$\displaystyle d_{\text{rel},k}\geq\underbrace{d_{\text{min}}+h_{\text{m}}v_{\text{h},k}}_{d_{\text{s},k}}$
(2i)
$\displaystyle d_{\text{rel},k}\leq\underbrace{d_{\text{min}}+h_{\text{c}}v_{\text{h},k}}_{d_{\text{c},k}}+\zeta_{1,k}$
(2j)
The first term in the cost function minimizes the BEV power consumption that
can be approximated as a function of $v_{h,k}$ and $F_{t,k}$ (Chada et al.,
2020). Using the second term in (2), an excessive braking force
$F_{\text{b},k}$ is penalized. In the third term, to motivate the host car to
stay within the desired region $d_{\text{c}}$ to a target vehicle, a slack
variable $\zeta_{1}$ is penalized. Furthermore, in the final term the
variation in the traction force at successive time steps is penalized using a
slack variable $\zeta_{2}$ if it exceeds a constant value $\Delta
F_{\text{t,max}}$, with the aim to minimize jerks and improve the driving
comfort, as given in (2g) and (2h). $\varepsilon_{1}$, $\varepsilon_{2}$ and
$\varepsilon_{3}$ are the corresponding weighting factors for the
aforementioned terms in the cost function (2). The host vehicle velocity and
relative distance to the target vehicle in discretized form are given in (2b)
and (2c), where $v_{\text{t}}$ is the velocity of the target vehicle.
Furthermore, the limits for the traction force $F_{\text{t}}$ are set using
(2e), in which $F_{\text{t,max}}$ is the maximum traction force. The physical
limitations of the vehicle with respect to velocity and braking force are
addressed in (2d) and (2f). In order to maintain a safe distance to the target
vehicle, a hard constraint is introduced in (2i). Here, the state variables
are $x_{k}=[v_{\text{h},k},d_{\text{rel},k}]^{\top}$ and the control variables
are
$u^{*}_{k}=[F_{\text{t},k},F_{\text{b},k},\zeta_{\text{1},k},\zeta_{\text{2},k}]^{\top}$.
Figure 1: Schematic of a typical car-following scenario
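The discretized prediction model inside the MPC can be sketched as a single simulation step. The host velocity update follows (2b); the relative-distance update uses the standard car-following discretization suggested by the text around (2c). All parameter values (masses, coefficients, sample time, and the line-fit coefficients $p_{1}$, $p_{2}$) are illustrative assumptions:

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's values).
m_eq, m_v, g = 1600.0, 1500.0, 9.81    # equivalent mass, vehicle mass, gravity
c_r, theta = 0.01, 0.0                 # rolling coefficient, road gradient
rho, A_f, c_a = 1.2, 2.3, 0.29         # drag parameters
p1, p2 = 40.0, -323.5                  # assumed line-fit coefficients for v^2
dT = 0.5                               # sample time in seconds

def step(v_h, d_rel, v_t, F_t, F_b):
    """One step of the discretized car-following dynamics.

    v_h: host speed, d_rel: distance to the target, v_t: target speed,
    F_t / F_b: traction and braking forces. Returns (v_h_next, d_rel_next).
    """
    F_roll = c_r * m_v * g * np.cos(theta)
    F_g = m_v * g * np.sin(theta)
    F_a = 0.5 * rho * A_f * c_a * (p1 * v_h + p2)   # linearized drag
    v_next = v_h + (dT / m_eq) * (F_t - F_b - F_a - F_roll - F_g)
    d_next = d_rel + dT * (v_t - v_h)
    return v_next, d_next
```

Rolling `step` forward $N$ times with the candidate force sequence is exactly what the MPC does when evaluating the cost and constraints over the horizon.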
## 3 Data Preparation for Speed Prediction
In contrast to other approaches, which considered a fixed driving route for
data preparation, this work focuses on simulation-based, network-wide data
collection, since it is laborious and time-consuming to extract V2V and V2I
information in the real world.
The goal here is to gather rich driving datasets from the urban and highway
networks that are route-independent under varied traffic conditions. Moreover,
the data must enable developing scalable prediction models and promote
generalizability. Therefore, to extract time series information in this work,
the open-source microscopic traffic simulation tool SUMO (Simulation of Urban
MObility) (Lopez et al., 2018) was used. The road networks in SUMO are
generated from the open-source map database (OpenStreetMap contributors,
2017). To reproduce the real world traffic flow in the simulation, a random
traffic ($3000$ vehicles/network) is generated with each traffic object in the
network having random start and destination points (Chada et al., 2021).
The training datasets are chosen from the Landstuhl highway (Fig. 3a) and
Kaiserslautern city (Fig. 3b). To improve generalizability, five test datasets
are chosen from a different city and connecting highway (Nieder-Olm), as shown
in Fig. 3c.
In a highway driving scenario, the speed of the target vehicle is primarily
influenced by the regulatory road speed limits and the velocities of a
preceding vehicle. Besides these factors, the presence of traffic light
signals in urban environments influences the target vehicle speed as well.
By leveraging the V2V and V2I information, additional inputs from multiple
vehicles ahead of the target vehicle can be obtained. Moreover, the future
signal phase and timing (SPaT) information can be used to improve the
predictions for the target vehicle.
In this work, two feature groups FG1 and FG2 are investigated. As listed in
Table 1 and illustrated in Fig. 2, the feature group FG1 consists of six input
features, namely, velocity of the target vehicle $v_{\text{t}}$, velocity of
the first preceding vehicle $v_{\text{p}_{1}}$, relative distance between the
target vehicle and the first preceding vehicle $d_{\text{rel}_{1}}$, traffic
light signal current state $s_{\text{TL}}$, relative distance between the
target vehicle and the traffic light signal $d_{\text{TL}}$, and the maximum
road speed limit $v_{\text{max}}$. In addition to the input features described
in FG1, the feature group FG2 considers the influence of the second preceding
vehicle with velocity $v_{\text{p}_{2}}$, and the relative distance between
the target vehicle and the second preceding vehicle $d_{\text{rel}_{2}}$ that
is calculated using,
$d_{\text{rel}_{2}}=\ d_{\text{rel}_{1}}+\ l_{\text{p}_{1}}+\
d_{\text{p}_{12}}$ (3)
where $l_{\text{p}_{1}}$ is the length of the first preceding vehicle and
$d_{\text{p}_{12}}$ is the relative distance between the first and second
preceding vehicles. Besides, FG2 also considers the future traffic light
signal state information $s_{\text{TL},k+1},..., s_{\text{TL},k+H}$ until the
prediction horizon $H$ as input features. During the data collection process,
each traffic object in the network is considered to be a target vehicle and
the input variables as given in Table 1 are gathered. To access the above
mentioned variables from the simulation, a traffic control interface known as
TraCI4Matlab is used (Acosta et al., 2015). Furthermore, preprocessing
techniques such as data cleaning, normalization, and data splitting into
training, validation and testing are performed to handle the data in an
efficient manner.
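As a concrete illustration of the two feature groups, the sketch below assembles the FG1 and FG2 input vectors. Only Eq. (3) and the feature lists of Table 1 come from the text; the function names and numeric values are hypothetical.

```python
import numpy as np

def build_fg1(v_t, v_p1, d_rel1, s_tl, d_tl, v_max):
    """Six FG1 features at time step k (Table 1)."""
    return np.array([v_t, v_p1, d_rel1, s_tl, d_tl, v_max], dtype=float)

def build_fg2(v_t, v_p1, d_rel1, s_tl, d_tl, v_max,
              v_p2, l_p1, d_p12, s_tl_future):
    """FG2 extends FG1 with the second preceding vehicle and future SPaT states.
    d_rel2 follows Eq. (3): d_rel2 = d_rel1 + l_p1 + d_p12."""
    d_rel2 = d_rel1 + l_p1 + d_p12
    return np.concatenate([build_fg1(v_t, v_p1, d_rel1, s_tl, d_tl, v_max),
                           [v_p2, d_rel2],
                           np.asarray(s_tl_future, dtype=float)])

# Illustrative example with H = 5 future signal states:
# 6 FG1 features + 2 extra + 5 SPaT states = 13 inputs, matching the
# "Input features" row for FG2 in Table 2.
x = build_fg2(12.0, 11.5, 18.0, 1, 60.0, 13.9,
              10.8, 12.0, 25.0, [1, 1, 0, 0, 0])
```

With these illustrative values, `x[7]` (the derived $d_{\text{rel}_2}$) equals $18 + 12 + 25 = 55$ m.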
Figure 2: Schematic of V2V and V2I enabled traffic scenario
(a) Landstuhl highway
(b) Kaiserslautern city
(c) Nieder-Olm
Figure 3: SUMO networks to generate training (a & b) and testing (c) datasets
## 4 Speed Prediction Methods
In this work, to develop a target vehicle speed predictor, both the physics-
based prediction methods and deep recurrent neural networks are studied.
### 4.1 Physics-Based Prediction Models
#### 4.1.1 Constant Velocity:
The constant velocity (CV) model assumes that the future velocities of the
target vehicle remain constant and can be determined using
$v_{\text{t}}(t+\Delta T)=\overline{v}_{\text{t}}(t)$ (4)
where
$\overline{v}_{\text{t}}(t)=\frac{v_{\text{t}}(t)+v_{\text{t}}(t-1)}{2}$ and
$\Delta T$ is the sample time.
#### 4.1.2 Constant Acceleration:
The constant acceleration (CA) model assumes that the future velocities of the
target vehicle are incremented by a constant acceleration, i.e.
$v_{\text{t}}(t+\Delta T)=v_{\text{t}}(t)+\Delta T\,a(t)$ (5)
where $a(t)=\frac{v_{\text{t}}(t)-v_{\text{t}}(t-1)}{\Delta T}$.
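Both baselines can be sketched in a few lines. The two-sample average of Eq. (4) and the acceleration of Eq. (5) come from the text; the stepwise rollout over the horizon and the sample values are assumptions.

```python
import numpy as np

def predict_cv(v_hist, n_steps):
    """Constant velocity: repeat the two-sample average of Eq. (4)."""
    v_bar = (v_hist[-1] + v_hist[-2]) / 2.0
    return np.full(n_steps, v_bar)

def predict_ca(v_hist, n_steps, dt=0.2):
    """Constant acceleration: extrapolate with a(t) from Eq. (5)."""
    a = (v_hist[-1] - v_hist[-2]) / dt
    return v_hist[-1] + a * dt * np.arange(1, n_steps + 1)

v = np.array([10.0, 10.5])   # last two velocity samples [m/s], illustrative
cv = predict_cv(v, 3)        # 10.25 m/s at every step
ca = predict_ca(v, 3)        # 11.0, 11.5, 12.0 m/s (a = 2.5 m/s^2)
```

The contrast is visible even in this toy case: CV freezes the speed, while CA extrapolates the last observed acceleration linearly, which is why CA degrades faster under abrupt speed changes (see Table 3).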
Table 1: Feature groups and their corresponding input variables

Feature group | Input Features
---|---
FG1 | $v_{\text{t},k},v_{\text{p}_{1},k},d_{\text{rel}_{1},k},s_{\text{TL},k},d_{\text{TL},k},v_{\text{max},k}$
FG2 | $v_{\text{t},k},v_{\text{p}_{1},k},d_{\text{rel}_{1},k},s_{\text{TL},k},d_{\text{TL},k},v_{\text{max},k},v_{\text{p}_{2},k},d_{\text{rel}_{2},k},s_{\text{TL},k+1},\ldots,s_{\text{TL},k+H}$
### 4.2 Recurrent Neural Networks
Recurrent neural networks (RNNs) are a class of deep neural networks capable
of using their internal state memory to process sequential or time-series
data. Two variants of the RNN architecture, namely the gated recurrent unit
(GRU) and the long short-term memory (LSTM) unit, are investigated in this
work. The internal cell structures of the LSTM and the GRU are illustrated in
Fig. 4(a) and Fig. 4(b), respectively.
The internal mechanism of an LSTM (Fig. 4(a)) consists of a cell state
$C_{\text{t}}$ and various gates: a forget gate $f_{\text{t}}$, an input gate
($i_{\text{t}}$, $\widetilde{C}_{\text{t}}$) and an output gate
$o_{\text{t}}$. The cell state acts as a transport highway that carries
information from previous intervals all the way down the sequence chain. The
gates regulate the flow of information by either adding information to or
removing it from the cell state. The inputs to the LSTM are the previous
hidden state $h_{\text{t-1}}$ and the current input $x_{\text{t}}$. The
forget gate uses a sigmoid activation function to decide which information to
discard from the cell state, as described by (6a).
$\displaystyle f_{\text{t}}=\sigma\left(W_{f}\left[h^{T}_{t-1},x^{T}_{t}\right]+b_{f}\right)$ (6a)
$\displaystyle i_{\text{t}}=\sigma\left(W_{i}\left[h^{T}_{t-1},x^{T}_{t}\right]+b_{i}\right)$ (6b)
$\displaystyle\widetilde{C}_{\text{t}}=\tanh\left(W_{C}\left[h^{T}_{t-1},x^{T}_{t}\right]+b_{C}\right)$ (6c)
$\displaystyle C_{\text{t}}=f_{t}*C_{t-1}+i_{t}*\widetilde{C}_{\text{t}}$ (6d)
$\displaystyle o_{\text{t}}=\sigma\left(W_{o}\left[h^{T}_{t-1},x^{T}_{t}\right]+b_{o}\right)$ (6e)
$\displaystyle h_{\text{t}}=o_{t}*\tanh(C_{t})$ (6f)
Figure 4: Internal cell structure of LSTM and GRU
The input gate is a combination of two layers. The first layer, referred to
as the input gate layer $i_{\text{t}}$, decides which new information to
store in the cell state, and the second layer is a tanh layer that produces
new candidate values $\widetilde{C}_{\text{t}}$ for the cell state, as given
in (6b) and (6c), respectively. The old cell state $C_{\text{t-1}}$ is then
updated to a new cell state $C_{\text{t}}$ using the forget gate and input
gate information according to (6d). Finally, the cell state passes through
the tanh activation function and the result is filtered by a sigmoid layer,
yielding the next hidden state $h_{\text{t}}$ as described in (6e) and (6f).
In equation (6), $W_{\text{f}}$, $W_{\text{i}}$, $W_{\text{C}}$,
$W_{\text{o}}$ are the weights and $b_{\text{f}}$, $b_{\text{i}}$,
$b_{\text{C}}$, $b_{\text{o}}$ are the biases of the forget, input and output
gates, respectively.
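The update equations (6a)-(6f) can be sketched directly in NumPy as a single LSTM step. The gate logic follows the equations above; the weight shapes, random initialization, and dictionary-based parameter layout are illustrative, not the trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, C_prev, W, b):
    """One LSTM step implementing Eqs. (6a)-(6f).
    Each W[k] maps the concatenated [h_prev, x] to the hidden dimension."""
    z = np.concatenate([h_prev, x])
    f = sigmoid(W["f"] @ z + b["f"])         # (6a) forget gate
    i = sigmoid(W["i"] @ z + b["i"])         # (6b) input gate
    C_tilde = np.tanh(W["C"] @ z + b["C"])   # (6c) candidate cell state
    C = f * C_prev + i * C_tilde             # (6d) cell-state update
    o = sigmoid(W["o"] @ z + b["o"])         # (6e) output gate
    h = o * np.tanh(C)                       # (6f) hidden state
    return h, C

rng = np.random.default_rng(0)
n_in, n_hid = 6, 4                           # e.g. the six FG1 features
W = {k: rng.standard_normal((n_hid, n_hid + n_in)) for k in "fiCo"}
b = {k: np.zeros(n_hid) for k in "fiCo"}
h, C = lstm_step(rng.standard_normal(n_in), np.zeros(n_hid),
                 np.zeros(n_hid), W, b)
```

Note that the hidden state is always bounded: since $o_t\in(0,1)$ and $|\tanh(C_t)|<1$, every component of $h_t$ lies strictly inside $(-1,1)$.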
In contrast to the LSTM, the GRU has a simpler cell structure with only two
gates, namely the reset and update gates, as illustrated in Fig. 4(b). The
reset gate determines how much historical information to forget, and the
update gate decides which new information is passed along to the future.
The equations for the GRU are given by
$\displaystyle y_{t}=\sigma(U_{y}x_{t}+W_{y}q_{t-1}+a_{y})$ (7a)
$\displaystyle s_{t}=\sigma(U_{s}x_{t}+W_{s}q_{t-1}+a_{s})$ (7b)
$\displaystyle\tilde{q}_{t}=\tanh(U_{q}x_{t}+W_{q}(s_{t}\odot q_{t-1})+a_{q})$ (7c)
$\displaystyle q_{t}=(1-y_{t})\odot q_{t-1}+y_{t}\odot\tilde{q}_{t}$ (7d)
where $U_{*}$ and $W_{*}$ are the weights, $a_{*}$ are the biases, $\odot$
denotes the element-wise product of vectors, and $\sigma$ refers to the
sigmoid activation function.
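A single GRU step following Eqs. (7a)-(7d) can be sketched the same way; here $y_t$ is the update gate, $s_t$ the reset gate, and $q_t$ the hidden state. Weight shapes and initialization are again illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, q_prev, U, W, a):
    """One GRU step implementing Eqs. (7a)-(7d)."""
    y = sigmoid(U["y"] @ x + W["y"] @ q_prev + a["y"])              # (7a) update gate
    s = sigmoid(U["s"] @ x + W["s"] @ q_prev + a["s"])              # (7b) reset gate
    q_tilde = np.tanh(U["q"] @ x + W["q"] @ (s * q_prev) + a["q"])  # (7c) candidate
    return (1.0 - y) * q_prev + y * q_tilde                         # (7d) new state

rng = np.random.default_rng(1)
n_in, n_hid = 6, 4
U = {k: rng.standard_normal((n_hid, n_in)) for k in "ysq"}
W = {k: rng.standard_normal((n_hid, n_hid)) for k in "ysq"}
a = {k: np.zeros(n_hid) for k in "ysq"}
q = gru_step(rng.standard_normal(n_in), np.zeros(n_hid), U, W, a)
```

Compared to the LSTM step, the GRU keeps a single state vector and three weight sets instead of four, which is the structural simplification the text refers to.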
Figure 5: Architecture of the stacked RNN
Figure 6: Prediction results of LSTM-FG2 on one of the test datasets for a prediction horizon of $5\,\mathrm{s}$
Figure 7: Comparison of target vehicle predictions for the LSTM-FG1 and LSTM-FG2 models for a prediction horizon of $5\,\mathrm{s}$
#### 4.2.1 Stacked RNN:
To obtain a greater level of abstraction, the hidden layers are
arranged in a stacked fashion, as illustrated in Fig. 5. The
architecture consists of three segments, described from bottom to top: (i) an
input layer, (ii) stacked LSTM or GRU layers, and (iii) an output layer. The input
layer consists of multiple input features (marked in green) as described in
Table 1. In Fig. 5 from left to right, the feature values of past, current and
future timesteps are provided as inputs to the stacked layers. On the right,
along with the current traffic light state, future state information up to a
prediction horizon $H$ is additionally used. The combined input sequence is
fed into the stacked hidden layers. Each layer has memory cells associated
with it, otherwise known as neurons. The lower layers are used to capture the
low level representations and the deeper layers can learn higher levels of
abstraction. The layer in yellow outputs multi-step sequence predictions up to
a prediction horizon $H$, in this case the future velocities of the target
vehicle.
To model the RNNs in this work, the Keras deep learning library in Python was
used (Brownlee, 2018), and training of the deep learning prediction models
was performed on a GPU cluster. The hyperparameters for both the LSTM and the
GRU are tuned using the Keras Bayesian optimization function. The tuned
hyperparameters for the $5\,\mathrm{s}$ speed predictors are listed in
Table 2. Four stacked layers were implemented to model the non-linear mapping
from input data representations to outputs. To reduce overfitting during
training, a regularization technique called dropout is used. A rectified
linear activation function (ReLU) is used to train the network faster, and
the mean squared error (MSE) is used as the loss function to estimate the
model loss during training. Additionally, the ADAM optimizer is used to
efficiently update the network weights during training. The length of the
past input sequence is kept equal to the prediction horizon $H$, and the
learning rate is chosen as $10^{-3}$. The number of epochs was chosen as 25,
and early stopping was implemented to halt the training process once the
monitored metric has stopped improving.
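The early-stopping rule mentioned above can be sketched as a simple patience check on the validation loss; the patience value and the loss sequence below are illustrative, not the settings used in this work (Keras provides this as the `EarlyStopping` callback).

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the epoch at which training would stop, or None to continue:
    stop once the validation loss has not improved for `patience` epochs."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return None

# Illustrative run: best loss occurs at epoch 2, then stagnates,
# so with patience=3 training stops at epoch 5.
losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74]
stop = early_stop_epoch(losses)  # 5
```

This check is what keeps the 25-epoch budget from being spent on epochs that no longer improve the validation metric.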
Hyperparameters | FG1 (LSTM / GRU) | FG2 (LSTM / GRU)
---|---|---
Batch size | 32 | 512
Input features | 6 | 13
Stacked layer 1 | 90 / 450 | 600 / 300
Dropout 1 | 0 / 0.3 | 0 / 0.2
Stacked layer 2 | 60 / 600 | 420 / 600
Dropout 2 | 0 / 0.3 | 0.25 / 0
Stacked layer 3 | 600 / 60 | 450 / 600
Dropout 3 | 0.3 / 0 | 0.3 / 0
Stacked layer 4 | 600 / 60 | 480 / 180
Dropout 4 | 0 / 0.3 | 0.3 / 0.1
Dense layer | 30 / 570 | 60 / 30

Table 2: Hyperparameters for the LSTM and GRU models for the prediction horizon of $5\,\mathrm{s}$
## 5 Results and discussion
Figure 8: Speed forecasting results of $\hat{v}_{\text{GRU}\text{,FG2}}$,
$\hat{v}_{\text{LSTM}\text{,FG2}}$, $\hat{v}_{\text{CA}}$ and
$\hat{v}_{\text{CV}}$ for a prediction horizon of $5\,\mathrm{s}$
### 5.1 Evaluation Metrics
To evaluate the performance of the prediction models on the test dataset, two
error metrics are used in this work. Firstly, the mean absolute error (MAE)
measures the average magnitude of the errors in a set of predictions and is
given by
$\text{MAE}=\sum\limits^{n}_{i=1}\frac{|\hat{v}_{t(i)}-v_{t(i)}|}{n}$ (8)
where $\hat{v}_{t(i)}$ is the predicted target vehicle speed at the $i^{th}$
prediction step, $v_{t(i)}$ is the original target vehicle observation and $n$
is the total number of observations. Secondly, the root mean squared error
(RMSE) is the square root of the average of squared differences between
predictions and actual observation and is described as
$\text{RMSE}=\sqrt{\sum\limits^{n}_{i=1}\frac{(\hat{v}_{t(i)}-v_{t(i)})^{2}}{n}}$
(9)
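Equations (8) and (9) translate directly into NumPy; the sample values below are illustrative.

```python
import numpy as np

def mae(v_pred, v_true):
    """Mean absolute error, Eq. (8)."""
    return np.mean(np.abs(v_pred - v_true))

def rmse(v_pred, v_true):
    """Root mean squared error, Eq. (9)."""
    return np.sqrt(np.mean((v_pred - v_true) ** 2))

v_true = np.array([10.0, 11.0, 12.0])
v_pred = np.array([10.5, 10.0, 12.5])
err_mae = mae(v_pred, v_true)    # (0.5 + 1.0 + 0.5) / 3 = 0.667 m/s
err_rmse = rmse(v_pred, v_true)  # sqrt(0.5) = 0.707 m/s
```

Because RMSE squares the residuals before averaging, it penalizes the single 1.0 m/s error more heavily than MAE does, which is why the RMSE columns in Table 3 exceed the MAE columns.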
### 5.2 Performance Evaluation of the Prediction Methods
Prediction Models | MAE [m/s], 5 s (FG1 / FG2) | MAE [m/s], 10 s (FG1 / FG2) | RMSE [m/s], 5 s (FG1 / FG2) | RMSE [m/s], 10 s (FG1 / FG2)
---|---|---|---|---
CV | 2.12 | 3.16 | 3.79 | 5.43
CA | 2.75 | 5.26 | 4.77 | 8.68
LSTM | 2.06 / 1.75 | 4.26 / 2.92 | 3.19 / 2.84 | 5.63 / 4.61
GRU | 2.02 / 2.51 | 3.19 / 3.3 | 3.29 / 3.68 | 4.58 / 4.7

Table 3: Comparison of MAE and RMSE for the various prediction methods
The performance of the prediction methods discussed in Section 4 is evaluated
on the $5$ test datasets. The average MAE and RMSE for each prediction model
across different prediction horizons ($5\,\mathrm{s}$ and $10\,\mathrm{s}$)
with respect to the feature groups (FG1 and FG2) are summarized in Table 3. It
can be noticed that the prediction error increases for all the models as the
prediction horizon increases due to the increasing uncertainties. Moreover,
the LSTM model with input features FG2 outperformed the remaining methods and
showed a lower average prediction error compared to feature group FG1.
The prediction results for the LSTM-FG2 model with a prediction horizon of
$5\,\mathrm{s}$ after evaluating it on one of the test datasets that consists
of various traffic scenarios are shown in Fig. 6. Furthermore, the prediction
result of $\hat{v}_{\text{LSTM-FG1}}$ is compared against
$\hat{v}_{\text{LSTM-FG2}}$ in Fig. 7 and demonstrated for two scenarios. In
Fig. 7(a), the target vehicle is under the influence of two preceding vehicles
ahead (exemplary scenario in Fig. 2). As presented in Table 1, the FG1 uses
the inputs from the first preceding vehicle alone and the FG2 uses both the
preceding vehicle information as inputs with an assumption that this
information can be obtained from V2V communication. The results illustrate
that the $\hat{v}_{\text{LSTM-FG2}}$ has better tracking ability of the target
vehicle ${v}_{\text{t}}$ as compared to $\hat{v}_{\text{LSTM-FG1}}$. In the
second scenario, illustrated in Fig. 7(b), the target vehicle is stopped at a
red phase of a traffic light signal. The predictions of
$\hat{v}_{\text{LSTM-FG2}}$ match the expected behavior of the target vehicle
${v}_{\text{t}}$; however, since feature group FG1 does not take the future
traffic light phase into account, $\hat{v}_{\text{LSTM-FG1}}$ inaccurately
predicts that the target vehicle will move forward from standstill. Such
behavior may in turn lead to traffic signal violations, which is undesirable.
Furthermore, a comparison of the prediction results for the methods discussed
in Table 3 when evaluated on a few scenarios can be found in Fig. 8. It can be
observed that the LSTM-FG2 prediction model has demonstrated better prediction
accuracy in all the scenarios as compared to GRU-FG2, CV and CA models.
Although the prediction results of $\hat{v}_{\text{GRU,FG2}}$ match the
predictions of $\hat{v}_{\text{LSTM,FG2}}$ at a few timestamps, its accuracy
needs to be improved in some scenarios (e.g., roundabouts). The predictions
based on CV and CA models were not able to accurately predict the target
vehicle behavior due to the presence of abrupt speed variations in the target
vehicle. It can be noticed from Fig. 8, that a prediction error exists with
the $\hat{v}_{\text{LSTM,FG2}}$ as well. For instance, while maneuvering a
right turn (Fig. 8(a)), at $89\,\mathrm{s}$ the model did not predict the slow
down at the curvature accurately. Similarly, in Fig. 8(d) the model is able to
predict the future speeds accurately in the round-about only until
$3\,\mathrm{s}$ into the future.
### 5.3 Performance Evaluation of EACC using Predicted Speeds
Figure 9: Speed profiles of EACC-equipped host car while tracking three target
vehicle velocity criteria Figure 10: Energy savings for EACC-equipped host car
while tracking three target vehicle velocity criteria
To analyze the performance of the EACC in a typical car-following scenario
while tracking a target vehicle, an evaluation is conducted based on three
criteria in which the target vehicle future velocities are (I) assumed to be
constant due to the lack of a robust speed predictor and V2V communications,
(II) predicted using the LSTM-based speed prediction model proposed in this
work, and (III) assumed to be perfectly available. The speed profiles of the
host vehicle for the three criteria $v_{\text{h,I}}$, $v_{\text{h,II}}$ and
$v_{\text{h,III}}$ are illustrated in Fig. 9 while tracking a target vehicle
velocity profile $v_{\text{t}}$. In the distance plot in the upper part of
Fig. 9, the host vehicle under criterion II is seen performing robust
car-following and maintains a good inter-vehicle distance
$d_{\text{rel,II}}$ to the target vehicle. It can be noticed from the two
enlarged sections in the lower part of Fig. 9 that $v_{\text{h,II}}$ is able
to track $v_{\text{h,III}}$ better than $v_{\text{h,I}}$. Moreover, the
constant speed (CS) model in criterion I resulted in abrupt decelerations
and sharp accelerations compared to criteria II and III.
In Fig. 10, a comparison of the energy savings of the EACC-equipped host
vehicle evaluated on the three above-mentioned criteria is shown. The
results demonstrate that the energy savings increase with the prediction
horizon, i.e. the more information about the target vehicle that is available
into the future, the better the host vehicle is able to plan its optimal
trajectories. Moreover, the proposed LSTM-based EACC (criterion II) achieves
better energy savings than criterion I and comes close to criterion III.
Concerning the execution time of the proposed LSTM-based EACC, an evaluation
is performed for the prediction horizon of
$10\,\mathrm{s}~{}(50\,\mathrm{steps})$ using Matlab® R2021b profiler on a
Windows 10 PC equipped with an Intel® Core™ i7-7500U CPU processor with 2.70
GHz clock frequency and 12 GB RAM. No GPU for parallel computing was used. The
mean execution time for the proposed concept is found to be $60\,\mathrm{ms}$.
The sample time in this work is chosen as $\Delta T=200\,\mathrm{ms}$, which
includes the time for inference of the neural networks. With the current
system configuration, the LSTM-based EACC can be solved at each step within
the sample time, thus demonstrating the real-time capability of the proposed
controller.
## 6 Conclusion
In this work, to enhance the efficiency of an ecological adaptive cruise
control (EACC) strategy, an LSTM-based target vehicle speed prediction model
for both urban and highway scenarios is proposed. In the speed prediction
task, the LSTM model outperformed GRU, CV and CA models, and was able to
capture the historical dependencies from several input features and perform
long-term predictions up to $10\,\mathrm{s}$. Moreover, considering
additional input features, such as information about multiple preceding
vehicles in the driving route obtained through V2V and future traffic light
signal phases gathered through V2I, enhanced the prediction accuracy.
Furthermore, energy savings of up to $26\%$ can be realized for the
EACC-equipped host car while tracking the predicted target vehicle
velocities. Compared to the constant speed (CS) model, the proposed
LSTM-based EACC achieves additional average energy savings of up to
$2.5\%$. For further improvements in the prediction
accuracy, additional input features constituting traffic rules at non-priority
intersections, road topology and curvature must be further investigated.
## References
* Acosta et al. (2015) Acosta, A.F., Espinosa, J.E., and Espinosa, J. (2015). Traci4matlab: Enabling the integration of the SUMO road traffic simulator and Matlab® through a software re-engineering process. _Lecture Notes in Control and Information Sciences_ , 13, 155–170.
* Brownlee (2018) Brownlee, J. (2018). Deep learning for time series forecasting. _Machine Learning Mastery_.
* Chada et al. (2021) Chada, S.K., Görges, D., Ebert, A., and Teutsch, R. (2021). A driver-in-the-loop co-simulation framework for testing predictive EDAS for commercial vehicles in urban environments. _In Proceedings of the 6th Commercial Vehicle Technology Symposium 2020/2021_ , 107–118.
* Chada et al. (2020) Chada, S.K., Purbai, A., Görges, D., Ebert, A., and Teutsch, R. (2020). Ecological adaptive cruise control for urban environments using SPaT information. _In Proceedings of the IEEE Vehicle Power and Propulsion Conference_ , 2–7.
* Cui et al. (2018) Cui, Z., Ke, R., Pu, Z., and Wang, Y. (2018). Deep bidirectional and unidirectional LSTM recurrent neural network for network-wide traffic speed prediction. URL http://arxiv.org/abs/1801.02143.
* Gaikwad et al. (2019) Gaikwad, T.D., Asher, Z.D., Liu, K., Huang, M., and Kolmanovsky, I. (2019). Vehicle velocity prediction and energy management strategy part 2: Integration of machine learning vehicle velocity prediction with optimal energy management to improve fuel economy. _SAE Technical Paper_ , 2019-01-1212.
* Jia et al. (2020) Jia, Y., Cai, C., and Görges, D. (2020). An LSTM-based speed predictor based on traffic simulation data for improving the performance of energy-optimal adaptive cruise control. _In Proceedings of the 23rd IEEE International Conference on Intelligent Transportation Systems_ , 1–7.
* Jiang and Fei (2017) Jiang, B. and Fei, Y. (2017). Vehicle speed prediction by two-level data driven models in vehicular networks. _IEEE Transactions on Intelligent Transportation Systems_ , 18, 1793–1801.
* Lefèvre et al. (2014) Lefèvre, S., Sun, C., Bajcsy, R., and Laugier, C. (2014). Comparison of parametric and non-parametric approaches for vehicle speed prediction. _In Proceedings of the American Control Conference_ , 3494–3499.
* Lin and Görges (2018) Lin, X. and Görges, D. (2018). Cloud-based vehicle velocity prediction based on seasonal autoregressive integrated moving average processes. _SAE Technical Paper_ , 2018-01-1178, 1–9.
* Lin et al. (2014) Lin, X., Görges, D., and Liu, S. (2014). Eco-driving assistance system for electric vehicles based on speed profile optimization. _In Proceedings of the IEEE Conference on Control Applications_ , 629–634.
* Liu et al. (2019) Liu, K., Asher, Z., Gong, X., Huang, M., and Kolmanovsky, I. (2019). Vehicle velocity prediction and energy management strategy part 1: Deterministic and stochastic vehicle velocity prediction using machine learning. _SAE Technical Paper_ , 2019-01-1051.
* Lopez et al. (2018) Lopez, P.A., Behrisch, M., Bieker-walz, L., Erdmann, J., Fl, Y.p., Hilbrich, R., Leonhard, L., Rummel, J., Wagner, P., and Wießner, E. (2018). Microscopic traffic simulation using SUMO. _In Proceedings of the 21st IEEE International Conference on Intelligent Transportation Systems_ , 2575–2582.
* Moser et al. (2015) Moser, D., Waschl, H., Schmied, R., Efendic, H., and del Re, L. (2015). Short term prediction of a vehicle’s velocity trajectory using ITS. _SAE International Journal of Passenger Cars - Electronic and Electrical Systems_ , 8(2), 364–370.
* OpenStreetMap contributors (2017) OpenStreetMap contributors (2017). Planet dump retrieved from https://planet.osm.org . https://www.openstreetmap.org .
* Sankar et al. (2022) Sankar, G.S., Kim, M., and Han, K. (2022). Data-driven leading vehicle speed forecast and its application to ecological predictive cruise control. _IEEE Transactions on Vehicular Technology_ , 1–12. 10.1109/TVT.2022.3193091.
* Schmied et al. (2015) Schmied, R., Waschl, H., and Del Re, L. (2015). A simplified fuel efficient predictive cruise control approach. _SAE Technical Paper_ , 2015-01-0296.
* Shin et al. (2019) Shin, J., Kim, S., Sunwoo, M., and Han, M. (2019). Ego-vehicle speed prediction using fuzzy markov chain with speed constraints. _In Proceedings of the IEEE Intelligent Vehicles Symposium_ , 2106–2112.
* Sun et al. (2015) Sun, C., Hu, X., Moura, S.J., and Sun, F. (2015). Velocity predictors for predictive energy management in hybrid electric vehicles. _IEEE Transactions on Control Systems Technology_ , 23(3), 1197–1204.
* Wegener et al. (2021) Wegener, M., Herrmann, F., Koch, L., Savelsberg, R., and Andert, J. (2021). Longitudinal vehicle motion prediction in urban settings with traffic light interaction. _IEEE Transactions on Intelligent Vehicles_. 10.1109/TIV.2021.3114156.
* Weißmann et al. (2018) Weißmann, A., Görges, D., and Lin, X. (2018). Energy-optimal adaptive cruise control combining model predictive control and dynamic programming. _Control Engineering Practice_ , 72, 125–137.
# Cold Mode Gas Accretion on Two Galaxy Groups at z$\sim$2
Andrey Vayner Department of Physics and Astronomy, Johns Hopkins University,
Bloomberg Center, 3400 N. Charles St., Baltimore, MD 21218, USA Nadia L.
Zakamska Department of Physics and Astronomy, Johns Hopkins University,
Bloomberg Center, 3400 N. Charles St., Baltimore, MD 21218, USA Institute for
Advanced Study, Einstein Dr., Princeton NJ 08540 Sanchit Sabhlok Department
of Physics, University of California San Diego, 9500 Gilman Drive La Jolla, CA
92093 USA Center for Astrophysics & Space Sciences, University of California
San Diego, 9500 Gilman Drive, La Jolla, CA 92093 USA Shelley A. Wright
Department of Physics, University of California San Diego, 9500 Gilman Drive,
La Jolla, CA 92093 USA Center for Astrophysics & Space Sciences, University
of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093 USA Lee Armus
IPAC, California Institute of Technology, 1200 E. California Blvd., Pasadena,
CA 91125 Norman Murray Canadian Institute for Theoretical Astrophysics,
University of Toronto, 60 St. George Street, Toronto, ON M5S 3H8, Canada
Canada Research Chair in Theoretical Astrophysics Gregory Walth IPAC,
California Institute of Technology, 1200 E. California Blvd., Pasadena, CA
91125 Yuzo Ishikawa Department of Physics and Astronomy, Johns Hopkins
University, Bloomberg Center, 3400 N. Charles St., Baltimore, MD 21218, USA
(Received 2022 June 7; Accepted 2022 November 29)
###### Abstract
We present Keck Cosmic Web Imager (KCWI) integral field spectroscopy (IFS)
observations of rest-frame UV emission lines $\rm Ly\alpha$,
CIV$\lambda\lambda$ 1548 Å, 1550Å and $\rm HeII$ 1640 Å observed in the
circumgalactic medium (CGM) of two $z=2$ radio-loud quasar host galaxies. We
detect extended emission on 80-90 kpc scale in $\rm Ly\alpha$ in both systems
with CIV and $\rm HeII$ emission also detected out to 30-50 kpc. All emission
lines show kinematics with a blue and redshifted gradient pattern consistent
with velocities seen in massive dark matter halos and similar to kinematic
patterns of inflowing gas seen in hydrodynamical simulations. Using the
kinematics of both resolved $\rm Ly\alpha$ emission and absorption, we can
confirm that both kinematic structures are associated with accretion.
Combining the KCWI data with molecular gas observations with Atacama Large
Millimeter/submillimeter Array (ALMA) and high spatial resolution of ionized
gas with Keck OSIRIS, we find that both quasar host galaxies reside in proto-
group environments at $z=2$. We estimate $1-6\times 10^{10}$ M$_{\odot}$ of
warm ionized gas within 30-50 kpc from the quasar that is likely accreting
onto the galaxy group. We estimate inflow rates of 60-200 M$_{\odot}$
yr$^{-1}$, within an order of magnitude of the outflow rates in
these systems. In the 4C 09.17 system, we detect narrow gas streams associated
with satellite galaxies, potentially reminiscent of ram-pressure stripping
seen in local galaxy groups and clusters. We find that the quasar host
galaxies reside in dynamically complex environments, with ongoing mergers, gas
accretion, ISM stripping, and outflows likely playing an important role in
shaping the assembly and evolution of massive galaxies at cosmic noon.
galaxies: evolution, galaxies: ISM, galaxies: kinematics and dynamics,
(galaxies:) quasars: supermassive black holes, (galaxies:) intergalactic
medium, galaxies: high-redshift
††journal: MNRAS
## 1 Introduction
How massive galaxies form is one of the most puzzling questions in modern-day
astrophysics. Distant ($z\sim 2$) quasar host galaxies with supermassive black
hole masses of $10^{9}$ M$_{\odot}$ are likely the progenitors of
the most massive systems seen in the local Universe. These quasars reside in
dark matter halos $\gtrsim 6\times 10^{12}$ M$_{\odot}$ (Myers et al., 2006;
Tumlinson et al., 2017; Geach et al., 2019). Such halos
today have at most 10$\%$ of baryonic matter locked up in stars (Kormendy &
Ho, 2013). The majority of the baryonic matter resides in the circumgalactic
medium (CGM) of these galaxies that extends to scales ten to a hundred times
larger than the radius of the quasar host galaxies’ stellar component
(Tumlinson et al., 2017; Silverman et al., 2019; Zakamska et al., 2019).
Furthermore, the majority of the metals that are the byproduct of stellar
evolution reside in the CGM (Tumlinson et al., 2017).
One major challenge in modern extragalactic astrophysics and cosmology is
understanding the complex feedback loop between the supermassive black hole
(especially in its actively accreting – quasar – phase), its host galaxy, and
its CGM. For example, what is the role of gas in the CGM in fueling star
formation and supermassive black hole growth? How do feedback processes inside
a galaxy – whether due to star formation or the supermassive black hole
activity or both – drive the metals and the gas back into the CGM?
Simulations have indicated that accretion through cold streams ($10^{4}$K) is
likely responsible for providing the majority of baryonic matter supply in
distant ($z>2$) galaxies (Kereš et al., 2009; van de Voort et al., 2011).
However, even though this matter is concentrated in filaments that trace the
dark matter distribution, it has been observationally challenging to directly
detect and study cold accretion streams in distant galaxies due to their
highly diffuse nature. Historically, there have been two dominant ways of
studying the CGM in distant galaxies. The first method is through transverse
absorption line surveys (e.g., Zhu & Ménard 2013; Prochaska et al. 2014) using
background quasars or galaxies as continuum sources to probe absorbing CGM and
direct “down-the-barrel” spectroscopy of individual galaxies (e.g., Steidel et
al. 2010; Rubin et al. 2012). The second method to study the CGM is using
direct narrow-band imaging of $\rm Ly\alpha$ emission of high-redshift
galaxies (Cantalupo, 2017).
One of the best means of studying the resolved CGM around high redshift
quasars is through $\rm Ly\alpha$ emission, as it is thought to be the
brightest observable emission line in the CGM of active galaxies. The exact
balance of energy powering $\rm Ly\alpha$ emission in CGM remains uncertain
(Trebitsch et al., 2016). The different energy sources include shock-heating
of CGM by outflows (e.g., Taniguchi & Shioya 2000), the release of
gravitational binding energy during gravitational collapse of the gas onto the
halo (e.g., Haiman et al. 2000; Fardal et al. 2001), and photo-ionization by
the quasar itself (e.g., Cantalupo et al. 2005) and by the cosmic ultraviolet
background (e.g., Gould & Weinberg 1996). The expected $\rm Ly\alpha$ emission
from the CGM surrounding a luminous quasar is on the order of $(5-100)\times
10^{-19}$erg s-1 cm-2$\rm arcsec^{-2}$, an order of magnitude greater than the
photo-ionization by the cosmic ultraviolet background (Cantalupo et al.,
2005), making quasars ideal targets for current resolved CGM studies with
modern-day integral-field-unit spectroscopy. However, other sources of
ionization such as embedded star formation in satellite galaxies can also be
contributing (Mas-Ribas et al., 2017).
The recent advent of a large field of view optical integral field
spectrographs like MUSE on VLT (Bacon, 2010), and KCWI (Keck Cosmic Web
Imager; Martin et al. 2010) have opened a new window for studying the CGM in
emission. These instruments allow mapping of the 2D distribution of CGM gas
surrounding distant galaxies and quasar systems and measurements of the gas
kinematics. Due to wavelength coverage, MUSE is primarily focused on studying
the CGM of powerful AGN at redshifts $>3$. The blue sensitivity of KCWI
allows for the first time to study the CGM of $z=2-3$ quasars through $\rm
Ly\alpha$ emission using an 8-10 m class telescope. Around a typical quasar at
$z=2-4$, both MUSE and KCWI have been able to detect extended $\rm Ly\alpha$
with 20 minutes to one hour on source exposure times (Borisova et al., 2016;
Cai et al., 2019; Arrigoni Battaia et al., 2019; O’Sullivan et al., 2020;
Travascio et al., 2020). Deeper observations can yield detections of fainter
UV lines such as $\rm HeII$ and CIV, but there appears to be a dependence on
the source population (Cantalupo, 2017). Certain very deep observations
sometimes yield only tentative detections of additional UV emission lines around
luminous type-1 radio-quiet quasars (Arrigoni Battaia et al., 2018; Cantalupo
et al., 2019) while others place very stringent limits and only detect
additional UV lines by stacking multiple data sets (Fossati et al., 2021).
Observations of luminous radio-loud quasars (Heckman et al., 1991; Roche et
al., 2014; Shukla et al., 2022) and high redshift radio-galaxies (Villar-
Martín et al., 2003, 2007; Humphrey et al., 2007; Vernet et al., 2017) show a
much higher occurrence of extended $\rm HeII$ and CIV emission within their
$\rm Ly\alpha$ halos even in relatively shallow observations. The dichotomy
between the CGM of the radio-loud and quiet population is an ongoing area of
research. Differences in environments, gas phase conditions and gas heating
mechanisms may play a key role.
In this paper, we present KCWI observations of the warm ionized gas
($10^{4}$K) in the CGM of two quasar host galaxies. We target the rest-frame
UV emission lines $\rm Ly\alpha$, CIV$\lambda\lambda$ 1548 Å, 1550Å and $\rm
HeII$ 1640 Å. We present sample selection, summary of observations, data
reduction, and emission line analysis in Section 2. We discuss
individual objects in Section 3. We discuss the implication of the observed
kinematics and dynamics of the CGM in Section 4. We summarize our conclusions
in Section 5. We use an $\rm H_{0}=67.8$ km s-1 Mpc-1, $\Omega_{\rm m}=0.308$,
$\Omega_{\Lambda}=0.692$ cosmology throughout this paper (Planck Collaboration
et al., 2014).
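Under this cosmology, the angular scale at the redshifts of our targets follows from a numerical integration of the comoving distance. A self-contained sketch (plain Python, no cosmology package; function names are illustrative):

```python
import math

C_KMS = 299792.458  # speed of light, km/s

def kpc_per_arcsec(z, h0=67.8, om=0.308, ol=0.692, steps=10000):
    """Proper kpc subtended by 1 arcsec at redshift z (flat LCDM)."""
    # comoving distance: D_C = (c/H0) * integral_0^z dz' / E(z')
    dz = z / steps
    integral = 0.0
    for i in range(steps):
        zp = (i + 0.5) * dz  # midpoint rule
        integral += dz / math.sqrt(om * (1.0 + zp) ** 3 + ol)
    d_c = C_KMS / h0 * integral           # Mpc
    d_a = d_c / (1.0 + z)                 # angular-diameter distance, Mpc
    arcsec_in_rad = math.pi / (180.0 * 3600.0)
    return d_a * arcsec_in_rad * 1000.0   # kpc per arcsec

scale = kpc_per_arcsec(2.1083)  # ~8.5 kpc/arcsec at the redshift of 4C 09.17
```

This reproduces the scale used in the figure captions below, where a 10′′ bar corresponds to roughly 86 kpc at $z\simeq 2.1$.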
## 2 Observations, Data Reduction & Analysis
In this section we outline the observations conducted as part of this study,
consisting of KCWI, Keck/OSIRIS LGS, and ALMA observations.
### 2.1 Sample selection
We present details of the initial sample selection in Vayner et al. (2021b).
In short, we selected targets to be observable with the Keck I adaptive optics
system with a redshift where the primary optical emission lines (such as
H$\beta$, [Oiii] and H$\alpha$ ) are redshifted into good atmospheric windows
in the near-infrared. The parent sample was further constrained to be radio-
loud type-1 quasars with jets on galactic scales. All quasars were selected to
have a quasar bolometric luminosity $>10^{46}$ erg s$^{-1}$ and a radio luminosity
at 178 MHz $>10^{44}$ erg s$^{-1}$. The general properties of our targets are
presented in Table 1. For details on the redshift measurements, see Vayner et
al. (2021b), where they were measured from the spatially unresolved quasar
narrow-line region.
We conducted follow-up KCWI observations of a subset of objects at $z=2-2.4$ where the
$\rm Ly\alpha$ and $\rm HeII$ emission lines fall within a single grating
configuration of KCWI and do not overlap with any strong atmospheric emission
lines. Object 4C 09.17 and object 7C 1354+2552 were selected for this article
based on a wealth of multi-wavelength observations that can bridge the scales
from the nuclear region of the quasar host galaxy to CGM scales. Both sources
show evidence of outflows and are interesting laboratories to study the
intricate balance between feeding and feedback at high redshift. In addition,
sensitive ALMA observations are also available to trace the molecular gas of
the quasar host galaxy and nearby satellite galaxies on spatial scales similar
to the KCWI observations. Details on the ALMA observations and data reduction
can be found in Vayner et al. (2021a). In short, we followed up targets from
the parent sample with strong indication of ionized outflows on galaxy scales.
We targeted the CO (3-2) or CO (4-3) molecular emission line to resolve and
map the molecular ISM and detect evidence of feedback on the molecular
reservoir or through detection of molecular outflows. The matching field of
view of ALMA and KCWI also allowed us to search for companion galaxies through
detection of the CO emission line within $\pm$2000 km s$^{-1}$ from the redshift of
the quasar.
Name | R.A. | Dec. | z | L$_{\rm bol}$ [erg s$^{-1}$]
---|---|---|---|---
4C 09.17 | 04:48:21.74 | +09:50:51.46 | 2.1083 | $2.88\times 10^{46}$
7C 1354+2552 | 13:57:06.53 | +25:37:24.46 | 2.0320 | $2.75\times 10^{46}$
Table 1: General properties of the two targets presented in this article.
### 2.2 KCWI
KCWI observations were obtained on the nights of November 22 and 23, 2017,
October 2 and 3, 2018 and March 28, 2019. The KCWI observations were conducted
with the medium slicer using the BL grating with the central wavelength of
4499 Å with the KBlue filter. Our observations covered a wavelength range from
3500 Å to 5500 Å with a spectral resolving power R$\sim$1800 and a field of
view of 16.5′′$\times$20.4′′. The observations consisted of acquiring the
quasar with the MAGIQ guider and centering the quasar in the KCWI field of
view. We set the exposure time to 1200 seconds for quasar observations and
600s for dedicated empty sky observation for ideal sky subtraction. Sky
observations were taken by offsetting the integral field unit (IFU) field of
view 30-60′′ away from the extended $\rm Ly\alpha$ emission onto a previously
selected empty sky region with no objects brighter than 24th magnitude in
$R$-band selected from the Dark Energy Survey (DES). We followed the ABAAB
observing sequence where A is the quasar observations, and B is the empty sky
observation. We dithered each quasar observation using multiples of half-
slicer steps perpendicular to the slices and random offsets of 1-3′′ in the
parallel direction. A guider image was saved after each dither and sky offset
movement to facilitate better absolute astrometry if necessary. We observed 4C
09.17 on source for a total of 3.0 hours (9$\times$1200s) and 7C 1354+2552 for
3.3 hours (10$\times$1200s).
### 2.3 OSIRIS
OSIRIS observations were taken in the laser-guide star (LGS) mode. Initial
observations were taken with the Hn2 and Kn1 filters for 4C 09.17 and in the
Hn1 and Kn1 filters for 7C 1354+2552 using the 50 milli-arcsecond lenslet
plate scale aimed to resolve and study the host galaxy of the quasar (Vayner
et al., 2021c, b). Subsequent observations on October 15, 2019, were taken
with the Hn2 and Kn1 filter for 4C 09.17 but using the 100 milli-arcsecond
lenslet plate scale to achieve a larger field of view to study the more
diffuse emission discovered with the initial observation and to help bridge
the spatial scale gap between the initial OSIRIS observations and KCWI. Each
OSIRIS observation began by acquiring the tip/tilt star and centering it in
the respective science frame. We then offset to the quasar using a known
offset from Gaia astrometry. Each OSIRIS observation consisted of 600s
exposures, with a dedicated sky frame taken once an hour using small dithers
between each science observation.
### 2.4 KCWI Data Reduction & Analysis
The data were reduced with the KCWI data reduction pipeline (Morrissey et al.,
2018), which performs bias subtraction, cosmic ray removal, scattered light
subtraction, flat fielding, wavelength calibration, and spectra extraction
into a three-dimensional data cube. For each 3D data cube, the pipeline also
performs differential atmospheric refraction correction and flux calibration
using observations of standard stars recommended by the KCWI instrument team.
Sky subtraction was done by scaling the flux as a function of wavelength in
the sky data cube to match the data. The flat fielding and wavelength
calibration were done using observations of a white light source and ThAr/FeAr
lamps at the start of each night. The spectra were extracted into a data cube
with a native pixel size of the slicer IFU of 0.69′′$\times$0.29′′. We construct
a white light image for each data cube by taking an average along the spectral
axis. Offsets between each cube due to dithering are calculated by cross-
correlating the quasar’s position in each data cube. Science observations
taken at different sky position angles are rotated such that north is up based
on the recorded World Coordinate System (WCS) in the FITS header. Finally, we
combine the cubes using the CWITools package (O’Sullivan & Chen, 2020), re-
sampling the cubes onto a common grid with a pixel scale of
0.3′′$\times$0.3′′. Using the arc-line data we measure the line spread function
(LSF) by isolating a single emission line and fitting a Gaussian model in each
spaxel. We measure a median LSF of 1.04 Å based on the Gaussian model
dispersion value, with a standard deviation of 0.043 Å across the slicer field
of view. These values correspond to an LSF of 83.93 $\pm$ 3.48 km s$^{-1}$ across
the field of view. We achieved a final sensitivity (2$\sigma$) at 4000 Å over a
2′′$\times$2′′ area in a 10 Å window of 2.5$\times 10^{-19}$ erg s$^{-1}$ cm$^{-2}$
arcsec$^{-2}$ and 3.7$\times 10^{-19}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ for 7C 1354+2552 and
4C 09.17, respectively.
### 2.5 OSIRIS Data Reduction & Analysis
Details of the OSIRIS data reduction can be found in Vayner et al. (2021b). In
short, we used the OSIRIS data reduction pipeline version 4.1.0 (Larkin et
al., 2013; Lyke et al., 2017; Lockhart et al., 2019) that does standard near-
infrared detector reduction steps, constructs three-dimensional data cubes,
does scaled sky subtraction to remove the bright OH glow from the sky and
mosaics observations at different dither positions into a single science data
cube.
### 2.6 Point-spread-function subtraction
For PSF subtraction, we followed the procedure outlined in Vayner et al.
(2016, 2021d, 2021c). In short, the PSF for each data cube is constructed by
using the wings of the broad emission lines (e.g., $\rm Ly\alpha$, CIV) and
the quasar continuum, isolating and averaging those data channels together.
The constructed image is then normalized to the maximum flux and subtracted
from the rest of the data cube while re-scaling to the maximum spatial flux at
each data channel. We construct a white light image excluding emission line
channels following the PSF subtraction. Spaxels that show flux with a standard
deviation greater than three times the background value measured near the edge
of IFU FOV have their spectra flagged and are excluded from further data
analysis. These spaxels generally reside near the core of the PSF, within the
full width at half maximum (FWHM) of the PSF, and are dominated by residuals
from PSF subtraction. Other studies (Inskip et al., 2011; Borisova et al.,
2016) remove continuum emission before constructing the PSF. For our sources,
the channel range used to construct the PSF image is selected such that the
PSF dominates the resulting image.
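The channel-by-channel rescaling described above can be sketched in a few lines. This is an illustrative simplification (variable names and the mean-combine of PSF channels are assumptions, not our exact pipeline):

```python
import numpy as np

def psf_subtract(cube, psf_channels):
    """cube: (n_wave, ny, nx). Subtract a peak-normalized PSF image,
    rescaled to each channel's spatial peak flux."""
    psf = cube[psf_channels].mean(axis=0)           # broad-wing/continuum channels
    psf = psf / psf.max()                           # normalize to maximum flux
    peak = np.unravel_index(np.argmax(psf), psf.shape)
    out = cube.copy()
    for k in range(cube.shape[0]):
        scale = cube[k][peak]                       # spatial peak in this channel
        out[k] -= scale * psf
    return out
```

For a purely unresolved source (cube = spectrum × PSF image) this removes the quasar exactly; extended emission with a different spatial profile survives, apart from residuals near the PSF core that we flag as described above.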
### 2.7 Astrometry alignment
All of the multi-wavelength data sets used in this study have their astrometry
aligned to the same world coordinate system using the quasar as the common
source across all data sets. All coordinates are measured relative to the
quasar position in each given data set. For optical and near-infrared data we
use the centroid of the bright unresolved quasar continuum, while for radio
data we use the spatially unresolved emission with the flattest radio spectrum
(Vayner et al., 2021a). For KCWI, the astrometric accuracy depends on how well
we can find the centroid of the quasar in our white-light image. To compute
the centroid, we fit a 2D Gaussian to the white-light image. We obtain an
uncertainty on the centroid of 0.03′′(RMS) for both data cubes, and we take
this to be the relative astrometric error between the KCWI and the other
multi-wavelength data sets. The centroid of the quasar in the OSIRIS, HST and
ALMA data sets can be found to an accuracy better than 0.005′′. Hence their
relative astrometric errors are smaller compared to KCWI.
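A minimal sketch of the centroiding step: the paper fits a 2D Gaussian to the white-light image, whereas the snippet below iterates a flux-weighted centroid inside a small window, which converges to the same position for a symmetric PSF on a flat background (a simplifying assumption).

```python
import numpy as np

def centroid(img, x0, y0, half=5, iters=3):
    """Iterative flux-weighted centroid inside a (2*half+1)^2 window."""
    x, y = float(x0), float(y0)
    for _ in range(iters):
        yy, xx = np.mgrid[int(y) - half:int(y) + half + 1,
                          int(x) - half:int(x) + half + 1]
        w = img[yy, xx]
        x = np.sum(xx * w) / np.sum(w)
        y = np.sum(yy * w) / np.sum(w)
    return x, y
```

The RMS scatter of such centroids across exposures gives the relative astrometric uncertainty quoted above.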
### 2.8 Optimal emission line extraction & moment maps
We optimally extract the flux of each emission line using an “object” data
cube created through the use of segmentation maps. We select a minimum
threshold of 2 $\sigma$ and use the cwi segment routine within CWITools
(O’Sullivan & Chen, 2020), which loops through a range of wavelengths
surrounding each emission line and creates a segmentation map based on the
signal-to-noise ratio (SNR) criteria at each wavelength channel. The result is
a data cube consisting of a mask at each wavelength location above the
threshold. A moment-zero map in surface brightness units is created by summing
the flux in each spaxel above the threshold criteria for each emission line.
Moment 1 and 2 maps are created in a similar manner by applying the associated
moment equation to flux in each spaxel above the threshold. We present moment
maps for 4C 09.17 and 7C 1354+2552 in Figures 1 and 2. Moment 1 maps are created
using the redshift derived from the $\rm HeII$ line. We construct radial
surface brightness profiles from the moment 0 maps for each detected line out
to the edge of the KCWI field of view to showcase the maximum detected spatial
extent in each emission line (Figure 3).
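The moment-map construction from the thresholded "object" cube can be sketched with numpy as follows (a hedged illustration: the paper uses the CWITools segmentation routine, and the simple per-voxel SNR cut below stands in for it):

```python
import numpy as np

def moment_maps(cube, noise, velocity, snr=2.0):
    """cube, noise: (n_wave, ny, nx); velocity: (n_wave,) in km/s
    relative to the systemic redshift. Returns moments 0, 1, 2."""
    mask = cube / noise > snr                        # 3D segmentation mask
    flux = np.where(mask, cube, 0.0)
    m0 = flux.sum(axis=0)                            # integrated (summed) flux
    with np.errstate(invalid="ignore", divide="ignore"):
        m1 = (velocity[:, None, None] * flux).sum(axis=0) / m0
        m2 = np.sqrt(((velocity[:, None, None] - m1) ** 2 * flux).sum(axis=0) / m0)
    return m0, m1, m2
```

Note that the threshold truncates the line wings, so moment 2 somewhat underestimates the intrinsic dispersion of faint lines; moment 1 is unbiased for symmetric profiles.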
Figure 1: Moment maps for extended $\rm Ly\alpha$ and He II 1640 Å emission in
the 4C09.17 system. On the left, we present the optimally extracted extended
line flux, the middle panel shows the radial velocity from the moment 1 map,
and on the right, we show the velocity dispersion extracted from the moment 2
map. The bar in each right panel shows 10′′ or 86 kpc at the redshift of the
source. The ellipse in the lower-left corner shows the FWHM of the PSF. North
is up and east is to the left.
Figure 2: Moment maps for extended $\rm Ly\alpha$ and He II 1640 Å emission in
the 7C1354+2552 system. We present the optimally extracted extended line flux
on the left, and the middle panel shows the radial velocity from the moment 1
map. We show the velocity dispersion extracted from the moment 2 map on the
right. The bar in each right panel shows 10′′ or 86 kpc at the redshift of the
source. The ellipse in the lower-left corner shows the FWHM of the PSF. North
is up and east is to the left.
Figure 3: Average surface brightness profiles for the extended rest-frame UV
emission line nebulae in the 4C 09.17 and 7C 1354+2552 systems.
## 3 Results: individual sources
In this section, we discuss the results of each source by combining the multi-
wavelength observations and the KCWI data to interpret the kinematics,
dynamics, and photo-ionization condition of the gas on CGM scales.
### 3.1 4C09.17
4C 09.17 is a radio-loud quasar at z=2.1083, measured from the narrow-line
region of the quasar, with a bolometric luminosity of $2.88\times 10^{46}$ erg
s$^{-1}$. The quasar host galaxy consists of clumpy star-forming regions (Vayner et
al., 2021c) with a star formation rate of 9$\pm$1 M⊙ yr$^{-1}$ and a molecular gas
reservoir of $(3\pm 0.3)\times 10^{9}$ M${}_{\hbox{$\odot$}}$ (Vayner et al.,
2021a). We detect a multi-phase outflow in the quasar host galaxy, extending in
the eastern direction, with a total outflow rate of 400$\pm$50 M⊙ yr$^{-1}$, likely
driven by quasar activity (Vayner et al., 2021a).
Lehnert et al. (1992) detect potentially extended emission in the vicinity of
the quasar. Higher resolution and more sensitive optical and near-infrared
imaging reveal three galaxy candidates within 20 kpc from the quasar (Armus et
al., 1997). We confirm 4C 09.17 B (Vayner et al., 2021c) to be another galaxy
merging with the quasar host galaxy through OSIRIS integral field spectroscopy
by detecting emission from the [Oiii] , H$\alpha$ , and [Nii] emission lines.
The 4C 09.17 B galaxy consists of 4 star-forming clumps with a star formation
rate of 96$\pm$8 M⊙ yr$^{-1}$. We detect 4C 09.17 C through the CO (4-3) emission
line in our ALMA program (Vayner et al., 2021a). We also detect a molecular
outflow in 4C 09.17 C with an outflow rate of 2500 M⊙ yr$^{-1}$ in the southwest
direction. We marginally detect a narrow [Oiii] emission line in the OSIRIS
integral field spectroscopy of 4C 09.17 C; however, we do not detect an
ionized outflow. We detect 4C 09.17 D in the [Oiii] line as part of the OSIRIS
observations obtained for this study, which utilize the larger plate scale to
detect more diffuse emission. The detection of three galaxies in the vicinity of
the quasar host galaxy indicates that 4C 09.17 is a group system. While the
definition of a galaxy group is ambiguous, the general consensus is that a
galaxy group consists of several galaxies of similar mass with a combined
total mass of $10^{13-14}$M${}_{\hbox{$\odot$}}$ . Given that the 4C 09.17
system consists of several galaxies with similar dynamical masses of $\sim
10^{10-11}$M${}_{\hbox{$\odot$}}$ , their total mass including the dark matter
halo can potentially surpass $10^{13}$ M${}_{\hbox{$\odot$}}$; hence, we
believe this system to be a group. We present the positions of the galaxies
relative to the quasar that we detect both in $J-$ band continuum imaging and
in ionized emission in Figure 5. We show the satellite galaxy properties from
the multi-wavelength observations in Table 2. The discovery spectrum of each
companion galaxy can be found in Appendix A.
Satellite galaxy | $\Delta$ RA | $\Delta$ DEC | Line | $\Delta$ V
---|---|---|---|---
4C 09.17 B | 0.6′′ | 0′′ | H$\alpha$, [Oiii], [Nii] | $-267\pm 1$ km s$^{-1}$
4C 09.17 C | -1.13′′ | 2.29′′ | CO (4-3), [Oiii] | $201\pm 18$ km s$^{-1}$
4C 09.17 D | 2.26′′ | -1.86′′ | [Oiii] | $411\pm 20$ km s$^{-1}$
7C 1354 B | -0.82′′ | -0.11′′ | [Oiii], H$\alpha$ | $619\pm 20$ km s$^{-1}$
7C 1354 C | -0.5′′ | 0.95′′ | CO (4-3) | $-101\pm 21$ km s$^{-1}$
7C 1354 D | -3.75′′ | 3.5′′ | CO (4-3) | $-254\pm 25$ km s$^{-1}$
7C 1354 E | 4.25′′ | 5.5′′ | CO (4-3) | $-601\pm 22$ km s$^{-1}$
Table 2: Properties of satellite galaxies in the group systems. $\Delta$ RA/DEC
is the spatial offset relative to the quasar for each satellite galaxy. Line
is the emission line used for spectroscopic redshift. $\Delta$ V is the
relative offset of the satellite galaxy relative to the redshift of the CGM.
With KCWI, we detect extended $\rm Ly\alpha$ emission with a maximum extent of
85.6 kpc, based on the azimuthally averaged surface brightness profile (Figure 3),
extending towards the southeast. In addition, we detect CIV and $\rm HeII$
emission with a maximum extent of 31.5-40 kpc. The moment 1 map of each
emission line features a velocity gradient whose direction of maximal change
(the major kinematic axis) runs from northwest to southeast, with a maximum
velocity offset of $\pm$500 km s$^{-1}$. To correct for
beam smearing in the central part of the CGM, we measure the velocity
dispersion in the outskirts of the $\rm HeII$ and $\rm Ly\alpha$ nebulae and
find a value of 230-307 km s$^{-1}$, away from the cusp of the velocity gradient.
The measured velocity dispersion of $\rm HeII$ reflects bulk motions of the
gas more accurately than that of $\rm Ly\alpha$.
To constrain the systemic velocity of the CGM surrounding the galaxy group, we
integrate over the extended $\rm HeII$ emission. The spectrum shows a double-
peaked emission-line profile in $\rm HeII$, suggesting the presence of two
kinematic components. We fit the spectrum with a sum of two Gaussians and
measure a flux-weighted redshift of 2.10879 and take this to be the systemic
redshift of the CGM in the 4C 09.17 group. We measure the velocity relative to
He II 1640 Å because this is a recombination line; hence it does not suffer
from resonant scattering like $\rm Ly\alpha$, and can provide the true
kinematics of the gas in the CGM. Following this, we also integrate over the
entire individual blue/redshifted extended $\rm Ly\alpha$ emission and present
them in Figure 4. The velocities are measured relative to the frame of the CGM
determined from the flux-weighted redshift described above (Figure 4).
Figure 4: Spectra of distinct regions and kinematic components extracted from
the PSF subtracted KCWI data cube of 4C 09.17. The top panel shows the $\rm HeII$
emission extracted over the $\rm HeII$ nebula, which is used to derive the redshift
of the CGM and to identify the distinct kinematic components. The middle and bottom
left panels show spectra of $\rm Ly\alpha$ extracted over polygon regions
containing the lowest surface brightness contours shown on the right
integrated-intensity $\rm Ly\alpha$ maps, covering the two distinct blue- and
redshifted kinematic components over their entire, respective $\rm Ly\alpha$ emission
map. Each emission line is fit with a single or multiple Gaussian components
shown with a dashed line, while the total best fit consisting of the
individual components is shown in black. The dashed vertical line shows
absorption in the blueshifted $\rm Ly\alpha$ profile due to the foreground gas
in the redshifted kinematic component. North is up and east is to the left.
Figure 5: Detection of emission from stellar and gas components in 4C 09.17 on
scales ranging from 1 to 90 kpc. The left panel shows the innermost emission
detected with HST WFPC2 imaging of rest-frame UV emission from massive young
stars in the quasar host galaxy and in the nearby merging galaxy 4C 09.17 B.
The white contours represent H$\alpha$ emission in the quasar host galaxy and
the nearby merger system originating from clumpy star-forming regions detected
with OSIRIS-LGS observations. Teal contours show the location of the ionized
outflow traced through [Oiii] emission. The middle panel shows near-infrared
HAWKI $J$-band imaging of rest-frame $B$ band emission from stars in the host
galaxies of the satellite galaxies C and D. Red contours in the middle panel
show diffuse emission associated with the satellite galaxy B on scales of
$\sim$ 16-20 kpc obtained with OSIRIS-LGS with a larger plate scale. Teal
contours show the emission from the outflow, seen on larger more diffuse
scales with the larger plate scale OSIRIS observations. The right panel shows
extended $\rm Ly\alpha$ halo associated with the group system of galaxies. The
orange stars represent the optical position of the galaxies, and the white
star represents the location of the quasar host galaxy. We highlight the
streams substructure in the $\rm Ly\alpha$ halo obtained by applying an
unsharp-mask filter to the optimally extracted $\rm Ly\alpha$ emission map and
present them as white contours on the right panel. North is up and east is to
the left.
The high optical depth of $\rm Ly\alpha$ can allow us to probe the 3D
structure of the CGM for certain geometric configurations (Wang et al., 2021).
For example, if two or more gas filaments line up and overlap along the line of
sight, then the background filament can have a portion of its emission
absorbed by the foreground filament imprinting the 3D structure onto the $\rm
Ly\alpha$ line profile. We detect an absorption in the $\rm Ly\alpha$ line in
the blueshifted kinematic component at the velocity offset of the redshifted
kinematic component as measured with the $\rm HeII$ emission line (Figure 4,
middle-left panel). This absorption indicates that the redshifted kinematic
component is in the foreground since it absorbs a portion of the blueshifted
kinematic component. Based on this geometric setup, the redshifted gas moves
inwards towards the galaxy group. Since the light emitted from the blueshifted
component is getting absorbed, this further means that the blueshifted gas is
in the background. Therefore, the blueshifted component must also be moving
towards the galaxy group. We discuss our fitting procedure of both the
emission and absorption lines in Appendix B. The absorption likely exists over
the entire extended $\rm Ly\alpha$ halo. Even after removing spaxels near the
brightest portion of the $\rm Ly\alpha$ nebula near galaxy B, we still see an
absorption component. The absorber is also detected in the CIV line, however
the spectral resolving power and SNR does not allow the same level of fitting
as for the $\rm Ly\alpha$ line.
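The emission-plus-absorption decomposition can be illustrated with a simple parametric model (a hedged sketch, not our exact Appendix B model: a Gaussian emission component attenuated by a foreground absorber with a Gaussian optical-depth profile; all parameter values below are illustrative):

```python
import numpy as np

def lya_profile(v, amp, v_em, sigma_em, tau0, v_abs, b_abs):
    """Gaussian emission attenuated by a Gaussian optical-depth absorber.
    v: velocity grid [km/s]; b_abs: Doppler parameter of the absorber."""
    emission = amp * np.exp(-0.5 * ((v - v_em) / sigma_em) ** 2)
    tau = tau0 * np.exp(-((v - v_abs) / b_abs) ** 2)
    return emission * np.exp(-tau)

v = np.linspace(-2000.0, 2000.0, 401)
# blueshifted emission absorbed near the velocity of the redshifted
# component, as in 4C 09.17 (illustrative values)
profile = lya_profile(v, 1.0, -300.0, 450.0, 2.0, 400.0, 150.0)
```

Fitting such a model to the integrated spectra constrains the velocity, strength, and Doppler parameter of each absorber independently of the emission kinematics.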
We interpret the blue and redshifted kinematic components (Figure 1) likely to
be a part of two separate filaments in the CGM of the 4C 09.17 group system
similar to what has been found in hydrodynamical simulations of the CGM in
massive dark matter halos (Stewart et al., 2017) on similar spatial scales to
our observations. We find the galaxy group to reside near the
boundary/intersection of the two kinematic components. The blueshifted
velocity of 4C 09.17 B indicates that the galaxy is likely associated with the
blueshifted kinematic components of the CGM emission, while 4C 09.17 C and D
are likely associated with the redshifted component. Interestingly, $\rm HeII$
and CIV emission encompass galaxies B and D along with the quasar host galaxy.
Since the strength of recombination radiation is proportional to the electron
and hydrogen densities, the strong detection of $\rm HeII$ could be an indication
of a larger concentration of gas in the CGM near the densest portion of the
galaxy group where there is a larger accumulation of gas from accretion and
stripping processes.
The morphology of the extended $\rm Ly\alpha$ emission is very interesting,
showcasing a substructure consisting of narrow gas streams extending radially
outwards from the galaxy group. We apply an unsharp mask with a radius of 7
pixels (2′′) to the optimally extracted $\rm Ly\alpha$ emission map to
highlight the gas streams, which we present as white contours in Figure 5.
After applying the unsharp mask we detect the streams more clearly against the
more diffuse $\rm Ly\alpha$ emission. The narrow streams are present in both
kinematic components associated with the satellite galaxies 4C 09.17 B and D.
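The unsharp-mask filtering amounts to subtracting a smoothed version of the map from itself; a minimal numpy sketch (using a separable boxcar in place of whichever smoothing kernel a given tool applies) is:

```python
import numpy as np

def unsharp_mask(img, radius=7):
    """Subtract a boxcar-smoothed copy (kernel of 2*radius+1 pixels)
    to suppress the diffuse halo and enhance narrow substructure."""
    size = 2 * radius + 1
    kernel = np.ones(size) / size
    smooth = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 0, img)
    smooth = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, smooth)
    return img - smooth
```

Structure smooth on scales much larger than the kernel (the diffuse halo) cancels, while narrow streams a few pixels wide survive nearly at full amplitude.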
For 4C 09.17 B, we also detect more diffuse emission in both the 50 mas plate
scale OSIRIS observation (Vayner et al., 2021c) and with the larger, 100 mas
plate scale observation part of this study (Figure 5, red contours). The
diffuse emission seen in [Oiii] on the 20 kpc scale appears to be offset
generally in the western direction. It shows stream-like structure similar to
that seen in the $\rm Ly\alpha$ halo substructure. The velocity measured in
[Oiii] is similar in both offset and dispersion to the extended $\rm Ly\alpha$
and $\rm HeII$ emission. The similarity of the westward extent of [Oiii] and
$\rm Ly\alpha$ in the blueshifted kinematic component, together with similar
velocity offset and elongated clumpy substructure, leads us to believe that
they are part of the same warm-ionized gas structure of the CGM. Emission line
ratios using the emission lines [Oiii] , H$\alpha$ and [Nii] in Vayner et al.
(2021c) revealed that the more diffuse emission in 4C 09.17 B is consistent
with quasar photoionization, while the clumpier emission is consistent with
photoionization from young stars. We interpret this as likely being due to
differences in the gas column densities of the star-forming clumps vs. the
more diffuse gas where ionizing photons from the quasar can more easily
penetrate.
### 3.2 7C 1354+2552
7C 1354+2552 is a luminous quasar with a bolometric luminosity of $(2.75\pm
0.11)\times 10^{46}$ erg s$^{-1}$ at z = 2.032, measured based on the quasar
narrow-line region. The host galaxy of the quasar consists of a star-forming
disk galaxy with a star formation rate of 29 $\pm$ 3 M⊙ yr$^{-1}$, based on the
H$\alpha$ emission-line luminosity (Vayner et al., 2021c). Through modeling of
the rotating galactic disk, we measure the position angle of the semi-major
axis to be 75.68 $\pm$ 0.47$^{\circ}$ east of north, with a maximum line-of-sight
velocity along the major kinematic axis of 309.84 $\pm$ 20.47 km s$^{-1}$ and a
velocity dispersion of 61.3 $\pm$ 7.9 km s$^{-1}$. The blueshifted component of the
galactic disk is towards the southeastern direction, while the redshifted component is
found towards the northwest.
In Vayner et al. (2021c) we discovered a companion galaxy (7C1354+2552 B)
towards the southeast direction. This galaxy is detected in the [Oiii] ,
H$\alpha$, and [Nii] emission lines. The line ratios are consistent with
star formation as the primary source of gas photoionization, with a star formation
rate of 30 M⊙ yr$^{-1}$. In our recent ALMA band 4 observations of this system, we detected CO
(4-3) emission within $\pm$500 km s$^{-1}$ of the systemic redshift of the
quasar, associated with galaxies in the KCWI field of view.
With KCWI, we detect extended $\rm Ly\alpha$ emission to a maximum extent of
90 kpc while we also detect $\rm HeII$ and CIV with a maximum extent of 50-60
kpc. The true extent of the $\rm Ly\alpha$ emission is likely larger, as we are limited
by the KCWI field of view. In addition, we detect fainter emission lines such
as Si IV 1393.755 Å and OIII] 1660.809, 1666.150 Å out to a 30 kpc extent. Surface
brightness maps of the additional fainter emission lines are shown in the
Appendix Figure 11. Similar to 4C 09.17, we detect a gradient-like feature in
the moment 1 map of all detected UV emission lines with a velocity range of
-500 to 500 km s-1 and a major kinematic axis along the northwest to southeast
direction. Minor differences in the velocity structure of the moment 1 map of
$\rm Ly\alpha$ and $\rm HeII$ in the inner 1-2′′ likely arise due to some PSF
subtraction noise or differences in the mechanisms of how the lines are
produced. We extract the spectra in the kinematically distinct regions seen in
the moment 1 map (Figure 2) and fit them with a combination of Gaussian emission
lines and absorption components (Figure 6). There is
evidence for absorption in $\rm Ly\alpha$ that is detected over the entire
extended emission region highlighted in Figure 6 associated with three
different absorbers. Similar to 4C 09.17, we find that the blueshifted kinematic
component shows absorption in $\rm Ly\alpha$ at the velocity of the redshifted
kinematic component. This indicates that we are indeed detecting inflowing gas,
as both kinematic components are moving inwards towards the quasar host galaxy
and galaxy group. We were able to spatially map
the extent of each absorber. In Figure 7 we present a map of the resolved
equivalent width and radial velocity offset of the absorbers relative to the
systemic redshift of the CGM.
Figure 6: Spectra of distinct regions and kinematic components extracted from
the PSF subtracted KCWI data cube of 7C 1354+2552. The top panel shows the $\rm
HeII$ emission extracted over the $\rm HeII$ nebula, which is used to derive the
redshift of the CGM and to identify the distinct kinematic components. The middle
and bottom left panels show spectra of $\rm Ly\alpha$ extracted over polygon
regions containing the lowest surface brightness contours of the right-column
maps; the two rows show spectra of $\rm Ly\alpha$ extracted over the two
distinct blue- and redshifted kinematic components over their entire,
respective $\rm Ly\alpha$ emission maps. Each emission line is fit with a
single or multiple Gaussian components shown with a dashed line, while the
total best fit consisting of the individual components is shown in black. The
dashed vertical lines show absorptions found in the extended $\rm Ly\alpha$ maps of
both kinematic components. North is up and east is to the left.
We present the location of the galaxies relative to the quasar host galaxy and
to their position within the $\rm Ly\alpha$ halo in Figure 8. The two closest
galaxies to the quasar are found to reside within the $\rm HeII$ halo. The
velocity of galaxy 7C1354+2552 B indicates that it is likely associated with
the redshifted kinematic component detected in $\rm Ly\alpha$ and $\rm HeII$
towards the south-west. 7C1354+2552 C and D appear to be linked by a “bridge”
structure towards the northwest, and their velocity offsets are in general
agreement with the velocity of $\rm HeII$ and $\rm Ly\alpha$ found in the
moment 1 map towards the northwest. Similarly a “bridge” structure towards the
north-east links the three galaxies near the centroid of the nebula with the
satellite galaxy 7C1354+2552 E.
Figure 7: Equivalent width (W) and radial velocity offset maps of the $\rm
Ly\alpha$ absorbers “A1-A3” detected across the 7C 1354 +2552 $\rm Ly\alpha$
halo. Teal contours represent the $\rm Ly\alpha$ surface brightness map.
Orange stars represent the location of nearby companion galaxies. North is up
and east is to the left.
The presence of galaxies within the rest-frame UV emission-line nebulae with
velocities consistent with the extended gas leads us to believe that the gas
is likely associated with gas accreting onto the central galaxy group.
Galaxies 7C1354 D and E are likely on their path to merge with the central
galaxy group. We interpret the blue and redshifted components as filaments
within the CGM along which the galaxies are moving towards the central galaxy
group.
Figure 8: Detection of emission from gas components in the 7C 1354+2552 system on scales ranging from 1 to 100 kpc. The left panel shows the innermost emission detected with Keck/OSIRIS IFS observations mapping the [Oiii] emission line on kpc scales. The right panel shows the extended $\rm Ly\alpha$ halo associated with the group of galaxies. The orange stars represent the positions of satellite galaxies detected with either ALMA or OSIRIS observations; the white star represents the location of the quasar host galaxy. The box on the right represents the OSIRIS field of view shown on the left. North is up and east is to the left.
Line component | Integrated intensity | Line center | $\Delta$ V | Velocity dispersion
---|---|---|---|---
| $\times 10^{-16}$ erg s$^{-1}$ | Å | km s$^{-1}$ | km s$^{-1}$
7C1354 $\rm Ly\alpha$ C1 $\Delta V>0$ | $5.056_{-0.008}^{+0.024}$ | $3663.964_{-0.069}^{+0.013}$ | $533_{-6}^{+1}$ | $539_{-1}^{+2}$
7C1354 $\rm Ly\alpha$ C2 $\Delta V>0$ | $1.488_{-0.209}^{+0.013}$ | $3649.122_{-0.037}^{+0.048}$ | $-684_{-3}^{+4}$ | $213_{-1}^{+15}$
7C1354 $\rm Ly\alpha$ C1 $\Delta V<0$ | $3.476_{-0.007}^{+0.082}$ | $3656.784_{-0.024}^{+0.095}$ | $-56_{-2}^{+8}$ | $407_{-1}^{+5}$
7C1354 $\rm Ly\alpha$ C2 $\Delta V<0$ | $2.710_{-0.051}^{+0.010}$ | $3655.032_{-0.397}^{+0.074}$ | $-199_{-33}^{+6}$ | $1010_{-3}^{+14}$
7C1354 $\rm HeII$ C1 $\Delta V>0$ | 0.11$\pm$0.01 | 4944.88$\pm$0.46 | 578$\pm$33 | 217$\pm$26
7C1354 $\rm HeII$ C1 $\Delta V<0$ | 0.265$\pm$0.03 | 4931.17$\pm$0.39 | -255$\pm$30 | 342$\pm$25
4C09.17 $\rm Ly\alpha$ C1 $\Delta V<0$ | $0.892_{-0.010}^{+0.016}$ | $3780.728_{-0.249}^{+0.072}$ | $116_{-20}^{+6}$ | $463_{-5}^{+17}$
4C09.17 $\rm Ly\alpha$ C1 $\Delta V>0$ | 0.990 $\pm$ 0.01 | 3785.38 $\pm$ 0.17 | 465$\pm$22 | 478$\pm$14
4C09.17 $\rm HeII$ C1 $\Delta V>0$ | 0.062$\pm$0.006 | 5106.30 $\pm$ 0.9 | 393$\pm$53 | 307$\pm$46
4C09.17 $\rm HeII$ C1 $\Delta V<0$ | 0.05$\pm$0.01 | 5093.98 $\pm$ 0.65 | $-329\pm 42$ | 231$\pm$31
Table 3: Best fit emission line properties from spectra integrated over individual kinematic components
Line component | Column density | Line center | $\Delta$V | Doppler parameter [b]
---|---|---|---|---
| log10(cm-2) | Å | km s-1 | km s-1
4C 09.17 $\rm Ly\alpha$ $\Delta V<0$ A1 | $13.482_{-0.055}^{+0.063}$ | $3783.907_{-0.151}^{+0.321}$ | $368_{-12}^{+25}$ | $166_{-19}^{+61}$
7C 1354+2552 $\rm Ly\alpha$ $\Delta V>0$ A1 | $14.238_{-0.085}^{+0.006}$ | $3650.189_{-0.047}^{+0.466}$ | $-597_{-4}^{+38}$ | $349_{-6}^{+2}$
7C 1354+2552 $\rm Ly\alpha$ $\Delta V>0$ A2 | $13.795_{-0.003}^{+0.004}$ | $3663.871_{-0.014}^{+0.007}$ | $525_{-1}^{+1}$ | $123_{-2}^{+1}$
7C 1354+2552 $\rm Ly\alpha$ $\Delta V<0$ A1 | $13.983_{-0.003}^{+0.006}$ | $3654.352_{-0.007}^{+0.010}$ | $-255_{-1}^{+1}$ | $97_{-3}^{+1}$
7C 1354+2552 $\rm Ly\alpha$ $\Delta V<0$ A2 | $13.735_{-0.004}^{+0.010}$ | $3662.954_{-0.026}^{+0.020}$ | $450_{-2}^{+2}$ | $109_{-1}^{+12}$
7C 1354+2552 $\rm Ly\alpha$ $\Delta V<0$ A3 | $15.821_{-0.101}^{+0.053}$ | $3639.144_{-0.331}^{+0.052}$ | $-1503_{-27}^{+4}$ | $83_{-1}^{+15}$
Table 4: Best fit absorption line properties from spectra integrated over
individual kinematic components
## 4 Discussion
### 4.1 Evidence for gravitational motion and gas accretion in the CGM
Both sources in this study show a well-defined gradient structure in the
radial velocity maps of $\rm Ly\alpha$, $\rm HeII$, and CIV. The fact that a
similar velocity structure is observed in the $\rm HeII$ recombination line
leads us to believe that these kinematic structures are not caused by
radiative transfer effects of $\rm Ly\alpha$ but instead trace real gas motion
in the CGM of these two systems. In both objects the blueshifted and redshifted
kinematic components are associated with nearby satellite galaxies. The
gradient patterns do not appear to be associated with large scale structure of
the quasar host galaxies in either of the systems. In fact for 7C 1354+2552
the gradient pattern seen in the moment 1 map is counter to the rotational
the gradient pattern seen in the moment 1 map is counter to the rotational
pattern seen in the quasar host galaxy. We further discuss the significance of
the angular momentum misalignment later in the discussion section.
For both systems, we have defined a systemic redshift for the gas in the CGM
based on the luminosity weighted centroid of the blue and redshifted kinematic
components measured over the entire $\rm HeII$ halo. In both systems, we
detect a concentration of 2-3 galaxies of similar mass within a 20 kpc radius
from the quasar, indicating that the quasar host galaxies of 4C 09.17 and
7C1354+2552 reside in a group environment. The $\rm HeII$ emission appears to
be concentrated in the galaxy group, potentially near the node of the dark
matter halo. Extended $\rm HeII$ emission has recently been found around other
high redshift quasars, and these systems also show similar results where $\rm
HeII$ is detected near a concentration of galaxies (Cai et al., 2017;
Cantalupo et al., 2019; Herenz et al., 2020; Husemann et al., 2021). In both
4C 09.17 and 7C1354+2552, the blueshifted and redshifted kinematic components
appear to move towards the central galaxy group. For both sources, the
presence of absorption in the $\rm Ly\alpha$ line gives us clues to the three-
dimensional structure of the gas in the CGM, where the redshifted component is
located in front of the blueshifted kinematic component, meaning that both
kinematic components are moving towards the central concentration of galaxies.
While the $\rm Ly\alpha$ emission line can suffer from several radiative
transfer effects that cause arbitrary broadening and velocity shifts, the
optical depth helps with the interpretation of relative gas motion in the CGM
in high signal-to-noise ratio data where we can both resolve the gas in the CGM
and measure the
emission and absorption profile. While we detect signs of inflows based on
$\rm Ly\alpha$ absorption kinematics, similar analysis of a radio-loud galaxy
found evidence for outflowing material on CGM scales based on resolved $\rm
Ly\alpha$ absorption (Wang et al., 2021).
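The luminosity-weighted centroid used above to define the systemic redshift can be sketched in a few lines. This is a generic illustration with made-up velocities and fluxes, not the actual reduction pipeline used for these data:

```python
# Sketch: the systemic velocity zero-point is the flux-weighted mean of
# the line centroids of the kinematic components (hypothetical values).
def luminosity_weighted_centroid(velocities, fluxes):
    """Flux-weighted mean velocity, in the same units as `velocities`."""
    total_flux = sum(fluxes)
    return sum(v * f for v, f in zip(velocities, fluxes)) / total_flux

# e.g. a blueshifted and a redshifted component of equal flux
v_sys = luminosity_weighted_centroid([-250.0, 250.0], [1.0, 1.0])
print(v_sys)  # 0.0 -> the zero-point of the radial velocity maps
```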
Based on statistical analysis, we know that quasars reside in dark matter
halos of $10^{12-13.7}$M${}_{\hbox{$\odot$}}$ (White et al., 2012; Hall et
al., 2018) at z$\sim 2$. For an NFW dark-matter profile (Navarro et al.,
1996), such halos are expected to have virial velocities of 200-400 km s-1
(White et al., 2012; Buckley et al., 2014; Shull, 2014); the observed motion in
both kinematic components is consistent with gravitational motion in a massive
dark matter halo. The observed gradients are similar to
those found in other high redshift CGM around luminous quasars at $z=2-3$.
Other observational works have interpreted these kinematic structures as
inflowing material from the CGM (Arrigoni Battaia et al., 2018; Martin et al.,
2019; Arrigoni Battaia et al., 2021) based on the similarity between velocity
offsets and similar gradient-like structures found in hydrodynamical
simulations of massive dark matter halos (Stewart et al., 2017).
### 4.2 Deriving the warm-ionized gas mass using $\rm Ly\alpha$ and inflow
rates
It is interesting to ask at what rate the cold gas in the CGM is accreting
onto the galaxy groups in both systems. First, however, we need to measure the
amount of warm-ionized gas in the CGM. Over the last ten years, several works
have estimated the amount of warm ionized gas in the CGM around quasars using
$\rm Ly\alpha$ as a tracer of the gas mass. The first method assumes that $\rm
Ly\alpha$ emission arises in regions that are optically thin to Lyman
continuum photons. Using a spectral model where the quasar is the primary
source of ionization with the assumption that the majority of the gas is in
the ionized state, Hennawi & Prochaska (2013) find the following relationship
between the hydrogen column density and the average surface brightness of $\rm
Ly\alpha$:
$\frac{N_{H}}{10^{20}\,{\rm cm^{-2}}}=\frac{SB}{7.7\times 10^{-19}}\left(\frac{1+z}{3.0}\right)^{4}\left(\frac{f_{c}}{1.0}\right)^{-1}\left(\frac{n_{H}}{0.1\,{\rm cm}^{-3}}\right)^{-1}$ (1)
where $f_{c}$ is the covering factor for the clouds in the CGM and $n_{H}$ is
the hydrogen number density, which equals the electron density $n_{e}$ under
the assumption that the majority of the gas is ionized. The above equation
holds true for neutral hydrogen column densities of $N_{HI}\ll 10^{17}$cm-2.
The electron or hydrogen density is always a major uncertainty when
calculating ionized gas masses (Harrison et al., 2018). Based on statistical
observations from the line of sight absorption measurement in the CGM of high
redshift luminous quasars, the median total hydrogen column density is
$\log_{10}(N_{H})\sim 20.5\pm 1$ within a projected radius of 100-200 kpc (Lau
et al., 2016) and the average covering factor is 0.5 (Prochaska et al., 2013).
The average $\rm Ly\alpha$ surface brightness within 100 kpc, corrected for
redshift surface brightness dimming is 1$\times 10^{-17}$erg s-1arcsec-2,
based on statistical observations of the CGM around luminous quasars (Cai et
al., 2019). Plugging these two values into equation 1 we can estimate that the
expected average hydrogen number density is 1 cm-3. Multiplying equation 1 by
the surface area of the $\rm Ly\alpha$ emitting region can provide us with a
hydrogen gas mass. The electron density is likely a lower limit since some
annuli used to compute the average surface brightness maps do not have full
emission across the entire annulus (Heckman et al., 1991; Arrigoni Battaia et
al., 2015; Hennawi et al., 2015).
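As a sanity check, equation 1 can be inverted for $n_{H}$. A minimal sketch in Python, assuming the fiducial values quoted above ($SB=1\times 10^{-17}$ erg s-1 cm-2 arcsec-2, $\log_{10}(N_{H})=20.5$, $f_{c}=0.5$, $z\approx 2$):

```python
# Rearranging equation 1 (Hennawi & Prochaska 2013) for the hydrogen
# number density n_H, using the fiducial values quoted in the text.
def hydrogen_density(sb, z, f_c, n_h_col):
    """Implied n_H in cm^-3 from the Lya surface brightness relation."""
    return 0.1 * (sb / 7.7e-19) * ((1 + z) / 3.0) ** 4 / f_c / (n_h_col / 1e20)

n_h = hydrogen_density(sb=1e-17, z=2.0, f_c=0.5, n_h_col=10**20.5)
print(f"n_H ~ {n_h:.1f} cm^-3")  # ~0.8 cm^-3, i.e. of order 1 cm^-3
```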
Another method is to assume that the CGM consists of individual clouds that are
all at the same density. The $\rm Ly\alpha$ line is also assumed to be
optically thin in the case A recombination regime (Osterbrock & Ferland, 2006).
The warm-ionized gas mass is then
$M_{HII}=(n_{p}m_{p}+n_{He}m_{He})Vf.$ (2)
$V$ is the volume of the gas emitting region in the CGM, $f$ is the volume
filling factor (the ratio of the volume of emitting clumps to the total volume
of the region), and $n_{p}$ is the proton number density. $n_{He}$ and
$m_{He}$ are the number density of helium and the mass of a helium atom. We
assume a solar abundance for helium in the CGM gas. We further assume the gas
to be fully ionized where helium is an equal mix of HeII and HeIII. Under
these assumptions, we get the following relationships:
$n_{He}=0.1\,n_{p},\qquad n_{e}=n_{p}+\frac{3}{2}n_{He}=1.15\,n_{p}$ (3)
Following from Osterbrock & Ferland (2006), we can write the line luminosity
due to recombination:
$L(Ly\alpha)=n_{e}n_{p}j_{Ly\alpha}Vf$ (4)
where $n_{e}$ is the electron density, $n_{p}$ is the proton density and
$j_{Ly\alpha}$ is the emissivity. We use an emissivity value of 1.53$\times
10^{-24}$ erg cm3 s-1, calculated using an electron density of 1 cm-3 and gas
temperature of 20,000 K using the PyNeb package (Luridiana et al., 2015). By
combining equations 2, 3, and 4 we can derive the following relation between
the $\rm Ly\alpha$ luminosity, the electron density, and the ionized hydrogen
mass:
$M_{HII}=7.7\times 10^{9}M_{\odot}\frac{L_{Ly\alpha}}{1\times 10^{43}\mbox{
erg s}^{-1}}\left(\frac{n_{e}}{1\mbox{ cm}^{-3}}\right)^{-1}.$ (5)
Both $\rm Ly\alpha$ mass derivation methods provide similar results. In Table
5 we present the amount of warm-ionized gas mass derived using equation 5 for
the total $\rm Ly\alpha$ nebula and only for the portion where we detect $\rm
HeII$.
### 4.3 Deriving the warm-ionized gas mass using HeII 1640 and inflow rates
For $\rm Ly\alpha$, the combination of resonant scattering, photoionization due
to the quasar and star formation, and collisional excitation due to
gravitational cooling makes it difficult to estimate the amount of warm-ionized
gas in the CGM. There is evidence within our two nebulae that the $\rm
Ly\alpha$ emission may have a strong contribution from resonant scattering, due
to the relatively large equivalent widths observed in $\rm Ly\alpha$ absorption
over the spatial extent of the $\rm Ly\alpha$ halo (Hennawi & Prochaska, 2013).
Furthermore,
the difference in the velocity dispersion between $\rm Ly\alpha$ and $\rm
HeII$ after correcting for beam-smearing further indicates that resonant
scattering plays a role in the $\rm Ly\alpha$ line, since the $\rm Ly\alpha$
photons scattered by the higher velocity gas can escape the nebulae more
easily. It is not easy to decipher the amount of line luminosity caused by
each ionization source.
Because $\rm HeII$ is optically thin, we can assume that most $\rm HeII$
emission comes from recombination and that the quasar is the primary source of
gas ionization, with a minor contribution from collisional excitation. We
assume each cloud has the same density and the density is constant across the
$\rm HeII$ nebula, similar to our assumption for the $\rm Ly\alpha$ derived
mass in equation 5.
Using the formulation of Osterbrock & Ferland (2006), the $\rm HeII$
luminosity due to recombination is given by:
$L(HeII)=n_{e}n_{HeIII}j_{HeII1640}Vf$ (6)
where $n_{e}$ is the electron density and $n_{HeIII}$ is the number density of
doubly ionized helium. $j_{HeII1640}$ is the emissivity of the 1640 Å $\rm HeII$
emission line, under the assumption of case B recombination at the lower
density. We use an emissivity value of 5.36$\times 10^{-24}$ erg cm3 s-1,
calculated using an electron density of 1 cm-3 and gas temperature of 20,000 K
using the PyNeb package (Luridiana et al., 2015). Combining equations 2, 3, and
6, we obtain the following relation between the total H II ionized gas mass and
$L_{HeII1640}$:
$M_{HII}=4.4\times 10^{10}M_{\odot}\frac{L_{HeII}}{1\times
10^{43}\rm~{}ergs^{-1}}\left(\frac{n_{e}}{1\mbox{ cm}^{-3}}\right)^{-1}$ (7)
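The numerical prefactors in equations 5 and 7 can be re-derived from equations 2-4 and 6. A minimal Python check under the stated assumptions (solar helium abundance, fully ionized gas, equal HeII/HeIII mix, quoted emissivities):

```python
# Re-deriving the prefactors of equations 5 and 7.
# Mass per proton: n_p*m_p + n_He*m_He with n_He = 0.1 n_p and m_He ~ 4 m_p,
# giving 1.4 m_p. Emissivities as quoted in the text (n_e = 1 cm^-3, T = 2e4 K).
M_P = 1.6726e-24      # proton mass [g]
M_SUN = 1.989e33      # solar mass [g]
L_REF = 1e43          # reference line luminosity [erg/s]
mass_per_proton = 1.4 * M_P

# Eq 4/5: L_Lya = n_e * n_p * j * V * f  ->  M_HII = 1.4 m_p L / (n_e * j)
m_lya = mass_per_proton * L_REF / 1.53e-24 / M_SUN
print(f"Lya prefactor:  {m_lya:.2e} Msun")   # ~7.7e9 Msun

# Eq 6/7: L_HeII = n_e * n_HeIII * j * V * f with n_HeIII = 0.05 n_p
m_heii = mass_per_proton / 0.05 * L_REF / 5.36e-24 / M_SUN
print(f"HeII prefactor: {m_heii:.2e} Msun")  # ~4.4e10 Msun
```

Both computed prefactors agree with the coefficients quoted in equations 5 and 7 to within rounding.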
In Table 5 we derive the ionized gas mass within the maximum extent of $\rm
HeII$ using only spaxels where $\rm HeII$ is detected. In all cases, we assume
an electron density of 1 cm-3.
Object | SB $\rm Ly\alpha$ [R max] | MHII $\rm Ly\alpha$ [R max] | SB $\rm Ly\alpha$ [R $\rm HeII$] | MHII $\rm Ly\alpha$ [R $\rm HeII$] | SB $\rm HeII$ [R $\rm HeII$] | MHII $\rm HeII$ [R $\rm HeII$]
---|---|---|---|---|---|---
7C1354 | 6$\pm 0.7$ | 27$\pm$3 | 18 $\pm$ 2 | 23 $\pm$2 | 0.8$\pm$0.1 | 6$\pm$0.5
4C 09.17 | 2.4$\pm 0.2$ | 5$\pm$0.5 | 9.3 $\pm$ 0.9 | 3 $\pm$0.3 | 0.5$\pm$0.1 | 1.0$\pm$0.1
Table 5: SB $\rm Ly\alpha$ [R max] is the average surface brightness over the
entire nebula, out to the maximum detected extent in units of 1$\times
10^{-17}$ erg s-1arcsec-2. SB $\rm Ly\alpha$ [R $\rm HeII$] is the average
surface brightness over the spaxels where $\rm HeII$ is detected in units of
1$\times 10^{-17}$ erg s-1arcsec-2. SB $\rm HeII$ [R $\rm HeII$] is the
$\rm HeII$ surface brightness in units of 1$\times 10^{-17}$ erg s-1arcsec-2.
MHII is the total H II mass derived from equation 5 and 7 for $\rm Ly\alpha$
and $\rm HeII$, respectively, in units of 1$\times
10^{10}$M${}_{\hbox{$\odot$}}$ .
To estimate the ionized gas inflow rate, we divide the gas mass by the
dynamical time scale ($R/v_{r}$) of the inflowing material. For the radius
($R$), we use the maximum extent of the $\rm HeII$ surface brightness profile
measured down to 2$\sigma$, and for the velocity ($v_{r}$) we use the
luminosity weighted velocity difference between the red and the blueshifted
kinematic components. We estimate inflow velocities of 181 km s-1 and 180 km
s-1 for 4C 09.17 and 7C1354+2552, respectively. These inflow velocities are
consistent with motion expected in a massive dark matter halo (White et al.,
2012) and similar to the expected inflow velocity based on cosmological cold-
gas inflows found in Goerdt & Ceverino (2015); Beckmann et al. (2017). For 4C
09.17, we obtain an estimated inflow rate of 60 M⊙ yr-1, while for 7C
1354+2552, we obtain a value of 200 M⊙ yr-1. These inflow rates are also
consistent with the expected value from hydrodynamical simulations (Goerdt &
Ceverino, 2015; Beckmann et al., 2017). These values are order-of-magnitude
estimates. Several factors are unknown, such as the geometry of the inflowing
matter, the electron density of the gas in the CGM (and the assumption of a
constant electron density across the nebulae), the powering mechanism of the
$\rm Ly\alpha$ emission, the temperature of the gas producing the $\rm
Ly\alpha$ and $\rm HeII$ emission, and the fraction of HeIII. Measuring the
electron density of the gas in the CGM is challenging with current instruments,
especially at the low densities expected there.
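The inflow-rate arithmetic above can be sketched as follows, assuming the rounded HeII-derived masses, radii, and inflow velocities quoted in the text; given the rounded inputs, the results match the quoted 60 and 200 M⊙ yr-1 only to leading order:

```python
# Order-of-magnitude inflow rates: mass divided by the dynamical time R / v_r.
KPC_KM = 3.086e16   # kilometres per kiloparsec
YR_S = 3.156e7      # seconds per year

def inflow_rate(mass_msun, r_kpc, v_kms):
    """Inflow rate in Msun/yr from M / (R / v_r)."""
    t_dyn_yr = r_kpc * KPC_KM / v_kms / YR_S
    return mass_msun / t_dyn_yr

rate_4c = inflow_rate(1.0e10, r_kpc=30.0, v_kms=181.0)   # 4C 09.17
rate_7c = inflow_rate(6.0e10, r_kpc=50.0, v_kms=180.0)   # 7C 1354+2552
print(f"4C 09.17: ~{rate_4c:.0f} Msun/yr, 7C 1354+2552: ~{rate_7c:.0f} Msun/yr")
```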
We notice a significant difference between the ionized gas mass derived from
$\rm Ly\alpha$ and $\rm HeII$, over the same aperture. In both cases, the mass
derived from $\rm Ly\alpha$ is a factor of 3-4 greater than what we estimate
in $\rm HeII$. A likely scenario, as discussed earlier, is that a considerable
fraction of the $\rm Ly\alpha$ emission comes from resonant scattering. As
noted in the detailed photoionization simulation in Hennawi & Prochaska (2013)
the surface brightness in $\rm Ly\alpha$ due to resonant scattering can be
very similar to recombination radiation from either optically thin or thick
gas conditions. A significantly smaller gas reservoir can produce scattering
emission with the same surface brightness as recombination from a much larger
gas mass. Using the known equivalent widths that we measure in the extended
$\rm Ly\alpha$ halo for the three detected absorbers in 7C1354+2552 we can
roughly estimate the expected surface brightness value in the $\rm Ly\alpha$
line from scattering. Using equation 20 from Hennawi & Prochaska (2013), for
equivalent width of 6-8 Å within 40 kpc from the quasar we expect a surface
brightness in $\rm Ly\alpha$ due to scattering on the order of 3-4$\times
10^{-17}$erg s-1 cm-2$\rm arcsec^{-2}$, which is close to the observed surface
brightness within 40-80 kpc. Hence a large fraction of the $\rm Ly\alpha$
emission can be due to resonant scattering, and if it is not properly taken
into account, the amount of warm ionized gas in the CGM can be drastically
overestimated.
The $\rm HeII$ line comes almost entirely from recombination and hence likely
traces the larger gas reservoir. This showcases the importance of using a
recombination line when measuring the warm ionized gas mass in the resolved
CGM. Observations through hydrogen Balmer lines can also help, in addition to
the $\rm HeII$ data. Likely our assumption of a single constant electron
density for the gas in the CGM does not hold. It may be partially responsible
for the differences in the gas masses derived from $\rm HeII$ and $\rm
Ly\alpha$; however, it is difficult to quantify this uncertainty with our
current data set. Most likely we are still seeing small dense clouds within
presumably lower density, hotter, volume filling gas.
Nevertheless, the gas masses derived from both lines are likely lower limits
on the total gas mass in the CGM as we are still missing tracers of the
neutral atomic, molecular (Ginolfi et al., 2017), and extremely hot ionized
medium gas (Gobat et al., 2019). The estimated inflow rates are for each
entire galaxy group, assuming they reside in one coalesced dark matter halo.
The inflow rates on the individual galaxies are likely lower; however, if the
galaxies merge with the central quasar host galaxy, then the current inflow
rate can be thought of as the total baryon matter supply for the central
cluster/group galaxy.
### 4.4 Evidence for gas stripping in 4C 09.17 system.
Galaxy interactions in proto-groups and clusters create a hot virialized halo
with temperatures up to $10^{7}$ K. In addition, feedback from supermassive black-
holes through energy-conserving shocks can help provide the halo with hot
diffuse gas that does not radiate energy efficiently (Faucher-Giguère et al.,
2012; Zubovas & King, 2012, 2014). Since it does not radiate efficiently,
detecting the hot gas associated with quasar-driven winds is difficult.
However, it can be indirectly constrained by studying the dynamics of the
outflows. One consequence of energy-conserving shocks is that they produce a
multi-phase outflow whose momentum flux exceeds the radiation momentum flux of
the accretion disk by a factor $>$ 2 on kpc scales.
In the case of 4C 09.17, we have recently detected an outflow that is likely
driven by an energy-conserving shock (Vayner et al., 2021a), indicating the
presence of a hot gas medium. Ram pressure due to the hot gas can cause
stripping, a common phenomenon in local clusters that can be seen as large
extended gas streams trailing satellite galaxies. These galaxies
are often referred to as “Jellyfish” galaxies (Ebeling et al., 2014). The
morphology of 4C 09.17 B combined with the extended streams structure likely
indicates that we have detected stripping of gas from the galaxy as it merges
with the quasar host galaxy. The gas in 4C 09.17 B along with the extended gas
streams are similar morphologically to a galaxy in a recent FOGGIE simulation
(Cyclone halo) that is being stripped through ram pressure in a dark matter
halo with a mass of $10^{12}$ M${}_{\hbox{$\odot$}}$ (Simons et al., 2020). The
redshifted kinematic component in 4C 09.17 associated with galaxy D shows a
similar stream morphology; likely both galaxies are experiencing some level of
gas stripping as they move along with the accretion flow.
As discussed in Anglés-Alcázar et al. (2017) stripping of gas from satellite
galaxies can provide a large reservoir of material that can re-accrete onto
the central galaxy or the group/cluster node. The stripped material from the
4C 09.17 B galaxy is unlikely to escape the potential of the galaxy group and
will likely re-accrete onto the galaxy group along with the rest of the CGM
gas that is part of the blue and redshifted kinematic components. Multiple
processes can provide the gas into the CGM of massive dark matter halos
(Anglés-Alcázar et al., 2017). Freshly accreted and recycled gas play
important roles in supplying gas into massive galaxies at high redshift. In 4C
09.17, merger activity, gas accretion, and stripping occur at similar times.
Both accretion of cold gas from the CGM and transfer of gas from the ISM of the
satellite galaxies play important roles in feeding the quasar host galaxy and
the central galaxy group. Based on our unsharp-
mask analysis, we identify that about 24% of the $\rm Ly\alpha$ flux is
associated with the elongated stream-like structure in the blueshifted
kinematic component. Under the assumption that the gas conditions and electron
densities are similar between the diffuse and elongated $\rm Ly\alpha$
emission, this flux fraction translates directly into the fraction of the
warm-ionized gas mass that was stripped and is re-accreting onto the galaxy
group, as opposed to direct accretion from the CGM, indicating that a
substantial fraction of the accreting gas can come from stripping of material
from satellite galaxies.
Recent observations by Chen et al. (2021) have also identified evidence for
gas stripping in a dusty star-forming galaxy in the well-studied $\rm
Ly\alpha$ halo around UM287.
### 4.5 Dynamics of inflows and outflows.
Hydrodynamical simulations predict the axis along which outflows and inflows
occur. The general picture is that the two are close to 90 degrees apart
(Tumlinson et al., 2017). Typically outflows occur at high galactic latitudes,
near the poles of the galaxy disk, while inflows occur perpendicular to them,
close to the plane of the galaxy. This scenario is often referred to as the
“galactic fountain” model (Tumlinson et al., 2017).
However, most of the existing evidence in support of this picture is
statistical in nature and is based on absorption-line observations of galaxies
and their halos against multiple background quasars. Isolating outflowing and
inflowing gas in the same system is complex. It requires observations that
span large spatial scales, combining high spatial resolution on galactic
scales with the sensitivity needed to detect diffuse emission on CGM scales.
In the case of 4C 09.17, we have evidence for both outflowing and inflowing
gas. While we do not have a full three-dimensional picture of the inflowing
and outflowing gas vectors, we can see if the projected directions appear to
match the “galactic fountain” model. We discovered a multi-phase molecular and
ionized outflow extending towards the west, blueshifted relative to the
quasar. The outflow appears to be moving away from the redshifted kinematic
component of the CGM gas. The direction of the outflow is indeed close to 90
degrees from the major kinematic axis of the CGM gas. Interestingly, the
extent and direction of the molecular outflow in 4C 09.17 C is close to the
minor axis of the radial velocity map. This evidence hints at a picture where
the outflows and inflows occur along different axes in the 4C 09.17 system.
Based on our order of magnitude estimates, the total outflow rate in 4C 09.17
A (the quasar host galaxy) is 450 M⊙ yr-1, only accounting for the molecular
and ionized gas phase. The inflow rate of ionized gas within 30 kpc from the
quasar is 60 M⊙ yr-1, based on the $\rm HeII$ mass and an inflow velocity of
361 km s-1. Interestingly both numbers are comparable; however, when including
the molecular outflow associated with 4C 09.17 C of 2500 M⊙ yr-1, then the
outflow dominates, and there may be a net loss of gas from the galaxy group at
the present time. For 7C 1354+2552, the inflow rate measured within 50 kpc
based on the $\rm HeII$ mass is 200 M⊙ yr-1. We present a schematic diagram of
the complex inflowing and outflowing structures in the 4C 09.17 system in
Figure 9. In 7C 1354+2552, we have only detected outflowing gas in the ionized
gas phase and near the nuclear region of the host galaxy, within 1 kpc with a
rate of 52 M⊙ yr-1. The inflow and outflow rates in 7C1354+2552 are
comparable, and there is likely more of a balance between the accretion and
outflow, at least for the ionized gas phase. Measurement of the neutral gas
and molecular gas (Ginolfi et al., 2017; Emonts et al., 2018; Vidal-García et
al., 2021) in the CGM and detection of outflows in more gas phases are
critical in understanding the total inflow and outflow rates and the overall
balance between feeding and feedback in these group systems.
### 4.6 Angular momentum axis misalignment
In 7C 1354+2552 the major kinematic axis of the $\rm Ly\alpha$ and $\rm HeII$
gradient pattern is counter to the major kinematic axis of the galaxy disc
(Vayner et al., 2021c); hence we do not think the gradient is associated with a
larger-scale galaxy disc. Misalignment between the angular momentum
distribution of the disk and the accretion flows is common in massive dark
matter halos on large ($>40$ kpc) scales in hydrodynamical simulations (Hafen
et al., 2022).
Typically the momentum distribution becomes aligned on smaller ($<20$ kpc)
scales, a necessary process for creating thin star-forming disks in massive
galaxies (Hafen et al., 2022). The location where the angular momentum aligns
is likely unresolved in our observations, since our angular resolution is 10
kpc. However, the boundary of 0 km s-1 in the 7C 1354+2552 radial
velocity map hints at an in-spiral pattern, consistent with the expected
angular momentum vector alignment predicted in hydro simulations on these
angular scales. Higher angular resolution observation near the intersection of
multiple kinematic components may help us observe the turn-over in the
kinematic pattern as the gas accretes and falls onto galaxy discs in the early
Universe, providing fuel for future star formation. Interestingly, the size of
the $\rm HeII$ emitting region roughly matches the location where cooling
flows from the CGM change direction, cool down to $10^{4.5}$ K, and “in-spiral”
as the gas accretes and accumulates onto the galactic disk in massive
($10^{12}$ M${}_{\hbox{$\odot$}}$ ) dark matter halos in hydrodynamical
simulations (Hafen et al., 2022). We speculate that the $\rm HeII$ emission may be
associated with denser regions near the node where multiple CGM filaments
intersect and cause an increase in the gas density, allowing for $\rm HeII$ to
be more easily detected.
Figure 9: Schematic diagram showcasing the motion of gas in the 4C 09.17
system. Accretion from the redshifted component is shown as a circled “x”
moving into the image, while the blueshifted component is shown as a circled
dot moving out of the image. The dashed line indicates that the
blueshifted component is located behind the redshifted component. The minor
kinematic axis of the $\rm Ly\alpha$ radial velocity map, shown as a white
line passing through the three galaxies that are part of the quasar host
galaxy group, bisects the two kinematic components. Outflows from 4C 09.17 A and C are
shown with blue lines and dashed arrows. North is up and east is to the left.
## 5 Conclusions
We conducted KCWI observations of two radio-loud quasars, 4C 09.17 (z=2.1083)
and 7C 1354+2552 (z=2.032), targeting the UV emission lines $\rm Ly\alpha$,
CIV and $\rm HeII$ redshifted into the optical bands to resolve and map the
CGM. We found the following results:
1. 1.
We detect extended $\rm Ly\alpha$ emission with a maximum extent of $\sim$ 90
kpc around both quasars.
2. 2.
We detect extended $\rm HeII$ and CIV emission with maximum extents of 30-50
kpc.
3. 3.
In 7C 1354+2552 we additionally detect extended emission in the Si IV and
OIII] UV emission lines.
4. 4.
The radial velocity maps of the UV emission lines show a gradient feature with
velocity ranges of -500 to +500 km s-1. Combining the kinematics of the
emission lines together with spatially resolved $\rm Ly\alpha$ absorption, we
find that the kinematic maps are consistent with inflowing gas.
5. 5.
By combining the data with multi-wavelength observations from Keck OSIRIS and
ALMA, we find that the extended $\rm Ly\alpha$ emission is associated with a
group of galaxies. The $\rm HeII$ nebulae in both sources are associated with
the over-density of galaxies.
6. 6.
In the 7C 1354+2552 system, we find that the extended $\rm Ly\alpha$ emission
forms a bridge between the quasar host galaxy and three galaxies detected
with ALMA. This likely indicates that the gas associated with the two
kinematic components in this system is also associated with filamentary gas
accretion from the CGM.
7. 7.
We use the $\rm HeII$ to estimate the amount of warm-ionized gas in the CGM
within the $\rm HeII$ halo. We measure 1-6 $\times 10^{10}$
M${}_{\hbox{$\odot$}}$ within 30-50 kpc from the quasar. Using the gas’s
dynamical time scale, we estimate an inflow rate of 60-200 M⊙ yr-1, within an
order of magnitude of the multi-gas phase outflow rates detected in both
quasar host galaxies.
8. 8.
We find that the inflow and outflow directions are close to 90∘ apart in 4C
09.17, consistent with the hydrodynamical models of the gas kinematics and
dynamics in the CGM of massive galaxies.
9. 9.
We detect narrow gas streams associated with companion galaxies in the 4C
09.17 system that point radially outwards from the quasar and the galaxy
group. We interpret these streams as gas stripped from the satellite
galaxies, likely due to ram-pressure stripping of material through the
interaction of the galaxies’ ISM with the hot gas produced by quasar-driven
outflows.
Data availability
The Keck OSIRIS data of this work are publicly available from the Keck
Observatory Archive (https://www2.keck.hawaii.edu/koa/public/koa.php). Source
information is provided with this paper. Other data underlying this article
will be shared on a reasonable request to the corresponding author.
Acknowledgments The authors wish to thank Jim Lyke, Randy Campbell, and other
SAs for their assistance at the telescope in acquiring the Keck OSIRIS data
sets. We would like to thank Erica Keller, Melissa Hoffman, and Loreto Barcos
Munoz for assistance with ALMA data reduction and imaging at NRAO. This paper
makes use of the following ALMA data: ADS/JAO.ALMA 2013.1.01359.S,
ADS/JAO.ALMA 2017.1.01527.S. ALMA is a partnership of ESO (representing its
member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST
and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the
Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and
NAOJ. The National Radio Astronomy Observatory is a facility of the National
Science Foundation operated under cooperative agreement by Associated
Universities, Inc. The data presented herein were obtained at the W.M. Keck
Observatory, which is operated as a scientific partnership among the
California Institute of Technology, the University of California and the
National Aeronautics and Space Administration. The Observatory was made
possible by the generous financial support of the W.M. Keck Foundation. The
authors wish to recognize and acknowledge the very significant cultural role
and reverence that the summit of Maunakea has always had within the indigenous
Hawaiian community. We are most fortunate to have the opportunity to conduct
observations from this mountain. This research has made use of the NASA/IPAC
Extragalactic Database (NED) which is operated by the Jet Propulsion
Laboratory, California Institute of Technology, under contract with the
National Aeronautics and Space Administration.
A.V., N.L.Z. and Y.I. acknowledge support from NASA ADAP grant 80NSSC21K1569.
N.L.Z. was supported at the IAS by the J. Robert Oppenheimer Visiting
Professorship and the Bershadsky Fund. We want to thank the anonymous referee
for their constructive comments that helped improve the manuscript.
A.V. and N.L.Z. would like to thank Wuji Wang and Dominika Wylezalek for
excellent discussions about CGM science with resolved $\rm Ly\alpha$
absorption in halos of luminous quasars.
## References
* Anglés-Alcázar et al. (2017) Anglés-Alcázar, D., Faucher-Giguère, C.-A., Kereš, D., et al. 2017, MNRAS, 470, 4698, doi: 10.1093/mnras/stx1517
* Armus et al. (1997) Armus, L., Neugebauer, G., Lehnert, M. D., & Matthews, K. 1997, MNRAS, 289, 621, doi: 10.1093/mnras/289.3.621
* Arrigoni Battaia et al. (2015) Arrigoni Battaia, F., Hennawi, J. F., Prochaska, J. X., & Cantalupo, S. 2015, ApJ, 809, 163, doi: 10.1088/0004-637X/809/2/163
* Arrigoni Battaia et al. (2019) Arrigoni Battaia, F., Hennawi, J. F., Prochaska, J. X., et al. 2019, MNRAS, 482, 3162, doi: 10.1093/mnras/sty2827
* Arrigoni Battaia et al. (2018) Arrigoni Battaia, F., Prochaska, J. X., Hennawi, J. F., et al. 2018, MNRAS, 473, 3907, doi: 10.1093/mnras/stx2465
* Arrigoni Battaia et al. (2021) Arrigoni Battaia, F., Chen, C.-C., Liu, H.-Y. B., et al. 2021, arXiv e-prints, arXiv:2111.15392. https://arxiv.org/abs/2111.15392
* Bacon (2010) Bacon, R. 2010, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7735, Ground-based and Airborne Instrumentation for Astronomy III, ed. I. S. McLean, S. K. Ramsay, & H. Takami, 773508, doi: 10.1117/12.856027
* Beckmann et al. (2017) Beckmann, R. S., Devriendt, J., Slyz, A., et al. 2017, MNRAS, 472, 949, doi: 10.1093/mnras/stx1831
* Borisova et al. (2016) Borisova, E., Cantalupo, S., Lilly, S. J., et al. 2016, ApJ, 831, 39, doi: 10.3847/0004-637X/831/1/39
* Buckley et al. (2014) Buckley, M. R., Zavala, J., Cyr-Racine, F.-Y., Sigurdson, K., & Vogelsberger, M. 2014, Phys. Rev. D, 90, 043524, doi: 10.1103/PhysRevD.90.043524
* Cai et al. (2017) Cai, Z., Fan, X., Yang, Y., et al. 2017, ApJ, 837, 71, doi: 10.3847/1538-4357/aa5d14
* Cai et al. (2019) Cai, Z., Cantalupo, S., Prochaska, J. X., et al. 2019, ApJS, 245, 23, doi: 10.3847/1538-4365/ab4796
* Cantalupo (2017) Cantalupo, S. 2017, in Astrophysics and Space Science Library, Vol. 430, Astrophysics and Space Science Library, ed. A. Fox & R. Davé, 195, doi: 10.1007/978-3-319-52512-9_9
* Cantalupo et al. (2005) Cantalupo, S., Porciani, C., Lilly, S. J., & Miniati, F. 2005, ApJ, 628, 61, doi: 10.1086/430758
* Cantalupo et al. (2019) Cantalupo, S., Pezzulli, G., Lilly, S. J., et al. 2019, MNRAS, 483, 5188, doi: 10.1093/mnras/sty3481
* Cappellari (2009) Cappellari, M. 2009, arXiv e-prints, arXiv:0912.1303. https://arxiv.org/abs/0912.1303
* Chen et al. (2021) Chen, C.-C., Arrigoni Battaia, F., Emonts, B. H. C., Lehnert, M. D., & Prochaska, J. X. 2021, ApJ, 923, 200, doi: 10.3847/1538-4357/ac2b9d
* Ebeling et al. (2014) Ebeling, H., Stephenson, L. N., & Edge, A. C. 2014, ApJL, 781, L40, doi: 10.1088/2041-8205/781/2/L40
* Emonts et al. (2018) Emonts, B. H. C., Lehnert, M. D., Dannerbauer, H., et al. 2018, MNRAS, 477, L60, doi: 10.1093/mnrasl/sly034
* Fardal et al. (2001) Fardal, M. A., Katz, N., Gardner, J. P., et al. 2001, ApJ, 562, 605, doi: 10.1086/323519
* Faucher-Giguère et al. (2012) Faucher-Giguère, C.-A., Quataert, E., & Murray, N. 2012, MNRAS, 420, 1347, doi: 10.1111/j.1365-2966.2011.20120.x
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Conley, A., Meierjurgen Farr, W., et al. 2013, emcee: The MCMC Hammer. http://ascl.net/1303.002
* Fossati et al. (2021) Fossati, M., Fumagalli, M., Lofthouse, E. K., et al. 2021, MNRAS, 503, 3044, doi: 10.1093/mnras/stab660
* Geach et al. (2019) Geach, J. E., Peacock, J. A., Myers, A. D., et al. 2019, ApJ, 874, 85, doi: 10.3847/1538-4357/ab0894
* Ginolfi et al. (2017) Ginolfi, M., Maiolino, R., Nagao, T., et al. 2017, MNRAS, 468, 3468, doi: 10.1093/mnras/stx712
* Gobat et al. (2019) Gobat, R., Daddi, E., Coogan, R. T., et al. 2019, AAP, 629, A104, doi: 10.1051/0004-6361/201935862
* Goerdt & Ceverino (2015) Goerdt, T., & Ceverino, D. 2015, MNRAS, 450, 3359, doi: 10.1093/mnras/stv786
* Gould & Weinberg (1996) Gould, A., & Weinberg, D. H. 1996, ApJ, 468, 462, doi: 10.1086/177707
* Hafen et al. (2022) Hafen, Z., Stern, J., Bullock, J., et al. 2022, arXiv e-prints, arXiv:2201.07235. https://arxiv.org/abs/2201.07235
* Haiman et al. (2000) Haiman, Z., Spaans, M., & Quataert, E. 2000, ApJL, 537, L5, doi: 10.1086/312754
* Hall et al. (2018) Hall, K. R., Crichton, D., Marriage, T., Zakamska, N. L., & Mandelbaum, R. 2018, MNRAS, 480, 149, doi: 10.1093/mnras/sty1843
* Harrison et al. (2018) Harrison, C. M., Costa, T., Tadhunter, C. N., et al. 2018, Nature Astronomy, 2, 198, doi: 10.1038/s41550-018-0403-6
* Heckman et al. (1991) Heckman, T. M., Lehnert, M. D., Miley, G. K., & van Breugel, W. 1991, ApJ, 381, 373, doi: 10.1086/170660
* Hennawi & Prochaska (2013) Hennawi, J. F., & Prochaska, J. X. 2013, ApJ, 766, 58, doi: 10.1088/0004-637X/766/1/58
* Hennawi et al. (2015) Hennawi, J. F., Prochaska, J. X., Cantalupo, S., & Arrigoni-Battaia, F. 2015, Science, 348, 779, doi: 10.1126/science.aaa5397
* Herenz et al. (2020) Herenz, E. C., Hayes, M., & Scarlata, C. 2020, AAP, 642, A55, doi: 10.1051/0004-6361/202037464
* Humphrey et al. (2007) Humphrey, A., Villar-Martín, M., Fosbury, R., et al. 2007, MNRAS, 375, 705, doi: 10.1111/j.1365-2966.2006.11344.x
* Husemann et al. (2021) Husemann, B., Worseck, G., Arrigoni Battaia, F., Sander, A. A. C., & Shanks, T. 2021, arXiv e-prints, arXiv:2107.10773. https://arxiv.org/abs/2107.10773
* Inskip et al. (2011) Inskip, K. J., Jahnke, K., Rix, H.-W., & van de Ven, G. 2011, Astrophys. J., 739, 90, doi: 10.1088/0004-637X/739/2/90
* Kereš et al. (2009) Kereš, D., Katz, N., Fardal, M., Davé, R., & Weinberg, D. H. 2009, MNRAS, 395, 160, doi: 10.1111/j.1365-2966.2009.14541.x
* Kormendy & Ho (2013) Kormendy, J., & Ho, L. C. 2013, ARAA, 51, 511, doi: 10.1146/annurev-astro-082708-101811
* Larkin et al. (2013) Larkin, J., Wright, S., Weiss, J., et al. 2013, Keck OSIRIS Data Reduction Pipeline, https://github.com/Keck-DataReductionPipelines/OsirisDRP/tree/master, GitHub
* Lau et al. (2016) Lau, M. W., Prochaska, J. X., & Hennawi, J. F. 2016, ApJS, 226, 25, doi: 10.3847/0067-0049/226/2/25
* Lehnert et al. (1992) Lehnert, M. D., Heckman, T. M., Chambers, K. C., & Miley, G. K. 1992, ApJ, 393, 68, doi: 10.1086/171485
* Lockhart et al. (2019) Lockhart, K. E., Do, T., Larkin, J. E., et al. 2019, AJ, 157, 75, doi: 10.3847/1538-3881/aaf64e
* Luridiana et al. (2015) Luridiana, V., Morisset, C., & Shaw, R. A. 2015, Astron. & Astrop., 573, A42, doi: 10.1051/0004-6361/201323152
* Lyke et al. (2017) Lyke, J., Do, T., Boehle, A., et al. 2017, OSIRIS Toolbox: OH-Suppressing InfraRed Imaging Spectrograph pipeline, Astrophysics Source Code Library, record ascl:1710.021. http://ascl.net/1710.021
* Martin et al. (2010) Martin, C., Moore, A., Morrissey, P., et al. 2010, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7735, Ground-based and Airborne Instrumentation for Astronomy III, ed. I. S. McLean, S. K. Ramsay, & H. Takami, 77350M, doi: 10.1117/12.858227
* Martin et al. (2019) Martin, D. C., O’Sullivan, D., Matuszewski, M., et al. 2019, Nature Astronomy, 3, 822, doi: 10.1038/s41550-019-0791-2
* Mas-Ribas et al. (2017) Mas-Ribas, L., Dijkstra, M., Hennawi, J. F., et al. 2017, ApJ, 841, 19, doi: 10.3847/1538-4357/aa704e
* Morrissey et al. (2018) Morrissey, P., Matuszewski, M., Martin, D. C., et al. 2018, ApJ, 864, 93, doi: 10.3847/1538-4357/aad597
* Myers et al. (2006) Myers, A. D., Brunner, R. J., Richards, G. T., et al. 2006, ApJ, 638, 622, doi: 10.1086/499093
* Navarro et al. (1996) Navarro, J. F., Frenk, C. S., & White, S. D. M. 1996, ApJ, 462, 563, doi: 10.1086/177173
* Osterbrock & Ferland (2006) Osterbrock, D. E., & Ferland, G. J. 2006, Astrophysics of gaseous nebulae and active galactic nuclei
* O’Sullivan & Chen (2020) O’Sullivan, D., & Chen, Y. 2020, arXiv e-prints, arXiv:2011.05444. https://arxiv.org/abs/2011.05444
* O’Sullivan et al. (2020) O’Sullivan, D. B., Martin, C., Matuszewski, M., et al. 2020, ApJ, 894, 3, doi: 10.3847/1538-4357/ab838c
* Planck Collaboration et al. (2014) Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2014, Astron. & Astrop., 571, A16, doi: 10.1051/0004-6361/201321591
* Prochaska et al. (2013) Prochaska, J. X., Hennawi, J. F., & Simcoe, R. A. 2013, ApJL, 762, L19, doi: 10.1088/2041-8205/762/2/L19
* Prochaska et al. (2014) Prochaska, J. X., Lau, M. W., & Hennawi, J. F. 2014, ApJ, 796, 140, doi: 10.1088/0004-637X/796/2/140
* Roche et al. (2014) Roche, N., Humphrey, A., & Binette, L. 2014, MNRAS, 443, 3795, doi: 10.1093/mnras/stu1430
* Rubin et al. (2012) Rubin, K. H. R., Prochaska, J. X., Koo, D. C., & Phillips, A. C. 2012, ApJL, 747, L26, doi: 10.1088/2041-8205/747/2/L26
* Shukla et al. (2022) Shukla, G., Srianand, R., Gupta, N., et al. 2022, MNRAS, 510, 786, doi: 10.1093/mnras/stab3467
* Shull (2014) Shull, J. M. 2014, ApJ, 784, 142, doi: 10.1088/0004-637X/784/2/142
* Silverman et al. (2019) Silverman, J. D., Treu, T., Ding, X., et al. 2019, ApJL, 887, L5, doi: 10.3847/2041-8213/ab5851
* Simons et al. (2020) Simons, R. C., Peeples, M. S., Tumlinson, J., et al. 2020, ApJ, 905, 167, doi: 10.3847/1538-4357/abc5b8
* Steidel et al. (2010) Steidel, C. C., Erb, D. K., Shapley, A. E., et al. 2010, ApJ, 717, 289, doi: 10.1088/0004-637X/717/1/289
* Stewart et al. (2017) Stewart, K. R., Maller, A. H., Oñorbe, J., et al. 2017, ApJ, 843, 47, doi: 10.3847/1538-4357/aa6dff
* Taniguchi & Shioya (2000) Taniguchi, Y., & Shioya, Y. 2000, ApJL, 532, L13, doi: 10.1086/312557
* Tepper-García (2006) Tepper-García, T. 2006, MNRAS, 369, 2025, doi: 10.1111/j.1365-2966.2006.10450.x
* Tepper-García (2007) —. 2007, MNRAS, 382, 1375, doi: 10.1111/j.1365-2966.2007.12186.x
* Travascio et al. (2020) Travascio, A., Zappacosta, L., Cantalupo, S., et al. 2020, AAP, 635, A157, doi: 10.1051/0004-6361/201936197
* Trebitsch et al. (2016) Trebitsch, M., Verhamme, A., Blaizot, J., & Rosdahl, J. 2016, AAP, 593, A122, doi: 10.1051/0004-6361/201527024
* Tumlinson et al. (2017) Tumlinson, J., Peeples, M. S., & Werk, J. K. 2017, ARAA, 55, 389, doi: 10.1146/annurev-astro-091916-055240
* van de Voort et al. (2011) van de Voort, F., Schaye, J., Booth, C. M., Haas, M. R., & Dalla Vecchia, C. 2011, MNRAS, 414, 2458, doi: 10.1111/j.1365-2966.2011.18565.x
* Vayner et al. (2016) Vayner, A., Wright, S. A., Do, T., et al. 2016, Astrophys. J., 821, 64, doi: 10.3847/0004-637X/821/1/64
* Vayner et al. (2021a) Vayner, A., Zakamska, N., Wright, S. A., et al. 2021a, ApJ, 923, 59, doi: 10.3847/1538-4357/ac2b9e
* Vayner et al. (2021b) Vayner, A., Wright, S. A., Murray, N., et al. 2021b, ApJ, 919, 122, doi: 10.3847/1538-4357/ac0f56
* Vayner et al. (2021c) —. 2021c, ApJ, 910, 44, doi: 10.3847/1538-4357/abddc1
* Vayner et al. (2021d) Vayner, A., Zakamska, N. L., Riffel, R. A., et al. 2021d, MNRAS, 504, 4445, doi: 10.1093/mnras/stab1176
* Vernet et al. (2017) Vernet, J., Lehnert, M. D., De Breuck, C., et al. 2017, AAP, 602, L6, doi: 10.1051/0004-6361/201730865
* Vidal-García et al. (2021) Vidal-García, A., Falgarone, E., Arrigoni Battaia, F., et al. 2021, MNRAS, 506, 2551, doi: 10.1093/mnras/stab1503
* Villar-Martín et al. (2007) Villar-Martín, M., Sánchez, S. F., Humphrey, A., et al. 2007, MNRAS, 378, 416, doi: 10.1111/j.1365-2966.2007.11811.x
* Villar-Martín et al. (2003) Villar-Martín, M., Vernet, J., di Serego Alighieri, S., et al. 2003, MNRAS, 346, 273, doi: 10.1046/j.1365-2966.2003.07090.x
* Wang et al. (2021) Wang, W., Wylezalek, D., De Breuck, C., et al. 2021, arXiv e-prints, arXiv:2107.09066. https://arxiv.org/abs/2107.09066
* White et al. (2012) White, M., Myers, A. D., Ross, N. P., et al. 2012, MNRAS, 424, 933, doi: 10.1111/j.1365-2966.2012.21251.x
* Zakamska et al. (2019) Zakamska, N. L., Sun, A.-L., Strauss, M. A., et al. 2019, MNRAS, 489, 497, doi: 10.1093/mnras/stz2071
* Zhu & Ménard (2013) Zhu, G., & Ménard, B. 2013, ApJ, 773, 16, doi: 10.1088/0004-637X/773/1/16
* Zubovas & King (2012) Zubovas, K., & King, A. 2012, ApJL, 745, L34, doi: 10.1088/2041-8205/745/2/L34
* Zubovas & King (2014) Zubovas, K., & King, A. R. 2014, MNRAS, 439, 400, doi: 10.1093/mnras/stt2472
## Appendix A Galaxy group spectra
Figure 10: Spectra of individual galaxies in the group surrounding the 7C
1354+2552 quasar (left) and those surrounding the 4C 09.17 quasar (right).
These spectra were used to identify and measure the redshift and velocity
offset of each companion galaxy. We present the Gaussian model fits to each
emission line.
## Appendix B Emission and absorption line fitting
In this section, we describe how we perform the emission and absorption line
fitting for larger distinct regions in the individual sources. The $\rm
Ly\alpha$, CIV, and $\rm HeII$ emission lines are all fit with a combination
of Gaussian profiles. The free parameters are the amplitude of the Gaussian
profile, wavelength offset ($\lambda_{0}$) in the observed frame, and the
velocity dispersion ($\sigma_{\lambda}$). For absorption we use an exponential
profile: $\exp(-\tau(\lambda))$, where $\tau(\lambda)$ is the optical depth as
a function of observed wavelength of the following form:
$\tau(\lambda)=\frac{\sqrt{\pi}e^{2}f_{i}\lambda_{0}^{2}}{\Delta\lambda_{D}m_{e}c^{2}}\times
N_{i}\times H(a,x(\lambda))$ (B1)
Here, $f_{i}$ is the oscillator strength, $e$ is the electron charge,
$\Delta\lambda_{D}$ is defined as $b/c\times\lambda_{0}$ where $b$ is the
thermal or Doppler broadening parameter, $m_{e}$ is the electron mass, $c$ is
the speed of light, $N_{i}$ is the column density and $H(a,x(\lambda))$ is the
Voigt-Hjerting function used to describe the shape of the absorption profile.
The Voigt-Hjerting function is defined as:
$H(a,x(\lambda))=\frac{a}{\pi}\int_{-\infty}^{+\infty}\frac{{e}^{-y^{2}}}{(x-y)^{2}+a^{2}}dy$
(B2)
where $x=(\lambda-\lambda_{0})/\Delta\lambda_{D}$ and $y=v/b$. The parameter
$a$ is defined as follows:
$a=\frac{\lambda_{0}A_{i}}{4\pi c\Delta\lambda_{D}}$
where $A_{i}$ is the Einstein A-coefficient. For $H(a,x(\lambda))$ we use the
analytic approximation from Tepper-García (2006, 2007):
$H(a,x(\lambda))=H_{0}-\frac{a}{\sqrt{\pi}x^{2}}\times\left(H_{0}^{2}(4x^{4}+7x^{2}+4+Q)-Q-1\right)$ (B3)
where $H_{0}=\exp(-x^{2})$ and $Q=1.5x^{-2}$. All atomic data are taken from
physics.nist.gov. The final function that we fit to the data is of the
following form:
$F(\lambda)=\sum_{n=1}^{k}F_{G,\lambda}^{n}\times\exp\left({-\sum_{j=1}^{l}\tau_{j,\lambda}}\right)$
This model is convolved with the line-spread function of KCWI before the
fitting process begins. We first fit the data using a Least-Squares algorithm
from SciPy. We
then follow up with an MCMC routine using the emcee (Foreman-Mackey et al.,
2013) package. We use the best-fit parameters from the Least-Squares fit as
the starting point for each walker, with a minor perturbation. First, we
initialize 1000 walkers for each free parameter. We then run MCMC for 500
steps starting from the perturbed initial value. The priors on the free
parameters are listed in Table 6. Before extracting the best fit parameters we
discard 50 steps from the final chain.
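As a concrete illustration, the Tepper-García (2006) analytic approximation used here can be written in a few lines of Python. This is a minimal sketch, not the paper's actual pipeline: the function name is ours, and near the line centre ($x\to 0$) the $x^{-2}$ terms require a fallback to the pure Gaussian core.

```python
import numpy as np

def voigt_hjerting(a, x):
    """Tepper-Garcia (2006) analytic approximation to the Voigt-Hjerting
    function H(a, x); accurate for the small damping parameters (a << 1)
    typical of H I Lya absorbers."""
    x2 = x * x
    h0 = np.exp(-x2)          # Gaussian core, H_0 = exp(-x^2)
    q = 1.5 / x2              # Q = 1.5 x^-2
    return h0 - a / (np.sqrt(np.pi) * x2) * (
        h0 * h0 * (4.0 * x2 * x2 + 7.0 * x2 + 4.0 + q) - q - 1.0
    )
```

For $a=0$ the function reduces to the pure Gaussian $\exp(-x^{2})$, while a nonzero damping parameter raises $H$ above the Gaussian in the wings, as expected for a Voigt profile.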
Table 6: Ranges of the priors on the free parameters for the Least-Squares and MCMC fitting algorithms.
Parameter | Prior range
---|---
Gaussian |
Amplitude | 0 - max(SNR)
Offset ($\lambda_{0}$) | Location (max(SNR)) $\pm$ 4 Å
$\sigma_{v}$ | 40-4,000 km s-1
Exponential absorption |
Offset ($\lambda_{0}$) | Location (min(SNR)) $\pm$ 2 Å
Doppler b | 40-400 km s-1
$N_{HI}$ | $10^{13}-10^{20}$ cm-2
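The Least-Squares-then-MCMC workflow (1000 walkers started from a perturbed best fit, 500 steps, 50 discarded as burn-in) can be sketched with plain NumPy. This is an illustrative sketch under our own naming (`init_walkers`, the relative perturbation `scale`); the returned array is the kind of initial state one would hand to emcee's `EnsembleSampler`.

```python
import numpy as np

def init_walkers(best_fit, nwalkers=1000, scale=1e-3, seed=0):
    """Start each MCMC walker at the Least-Squares best fit plus a
    minor relative random perturbation, as described in the text."""
    rng = np.random.default_rng(seed)
    p0 = np.atleast_1d(np.asarray(best_fit, dtype=float))
    jitter = scale * np.abs(p0) * rng.standard_normal((nwalkers, p0.size))
    return p0 + jitter

# After 500 sampler steps, the first 50 are discarded as burn-in, e.g.
# flat_samples = chain[50:].reshape(-1, ndim)
```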
### B.1 Spatial binning and fitting spatially resolved absorption lines.
To fit the absorption lines in $\rm Ly\alpha$ across the spatial extent of the
$\rm Ly\alpha$ halo, we needed to bin up the data spatially to increase the
SNR. We chose the Voronoi binning method to spatially bin the data with the
smallest loss of spatial resolution and information. We use the Python-based
Voronoi binning code by Cappellari (2009). We perform the binning on the
$\rm Ly\alpha$ moment 0 map of both kinematic components such that each
hexagonal tessellation achieves an integrated SNR over the $\rm Ly\alpha$ line
of at least 50. We found this SNR to be optimal for fitting multiple
absorption lines across the $\rm Ly\alpha$ line. We loop over the bins created by the
Voronoi binning and extract the spectrum of each bin by averaging the data
cube spatially at each spectral location. We then perform the exact same
absorption and emission line fitting described above for the integrated
spectrum of each individual kinematic component. From the best fit parameters
we construct resolved radial velocity and equivalent width maps for each
absorber.
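The per-bin extraction step can be sketched as follows. This is a minimal stand-in with our own function name: the integer bin-label map would come from the vorbin code of Cappellari (2009), and the resulting spectra feed the fitting procedure described above.

```python
import numpy as np

def extract_bin_spectra(cube, bin_map):
    """Spatially average an (ny, nx, nspec) data cube within each
    Voronoi bin. `bin_map` is an (ny, nx) integer map of bin labels;
    returns {label: mean spectrum over the bin's spaxels}."""
    return {
        int(label): cube[bin_map == label].mean(axis=0)
        for label in np.unique(bin_map)
    }
```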
## Appendix C Morphology and extent of additional lines detected in the halos
around 4C 09.17 and 7C 1354+2552
Figure 11: Surface brightness maps of additional fainter emission lines
detected in both systems; top: 7C 1354+2552, bottom: 4C 09.17. The star
represents the location of the quasar and the bar to the left represents 10′′
or approximately 86 kpc at the redshift of our sources.
11institutetext: Pembroke College, Cambridge, CB2 1RF 22institutetext: Apoha
Ltd, London, Acklam Rd, W10 5JJ 33institutetext: Department of Chemical
Engineering, University of Cambridge, CB 30 AS, UK
# A complexity perspective on fluid mechanics
Saksham Sharma 13 <EMAIL_ADDRESS> <EMAIL_ADDRESS> Giulia Marcucci 2
Adnan Mahmud 3
###### Abstract
This article attempts to use the ideas from the field of complexity sciences
to revisit the classical field of fluid mechanics. For almost a century, the
mathematical self-consistency of the Navier-Stokes equations has remained
elusive to the community of functional analysts, and the Navier-Stokes problem
was posed as one of the seven Millennium Prize Problems at the dawn of the
21st century. This article attempts to trace the historical development of
fluid mechanics as a discipline and to explain the consequences of leaving one
of its commonly agreed-upon tenets - the continuum hypothesis - unexamined in
the community. The article argues that ‘fluids’ can be treated as ‘emergent’
in nature, in that
the atoms and molecules in the nanometre length scale can likely be correlated
with the continuum physics at the microscale. If this is the case, then one
might start trying to find a theoretical framework that models the emergence
of fluids from atoms, effectively solving the multi-scale problem using a
single abstract framework. A Cantor set with $N$ layers ($N$ can span up to
two orders of magnitude) is presented as a potential contender (an analytical
framework) for connecting the energy at the molecular level, $C_{1}$, at
length scale $l_{cut}$, to the energy at the continuum level, $C_{N}$, at
length scale $L$. Apart from fluid mechanics, the Cantor set is shown to
represent the conceptual understanding of VLSI hardware design ($N=5$). Apart
from the Cantor
set, an experimental technique of architecting metafluids is also shown to
solve emergence experimentally (i.e. connect physics at specific lower scales
to higher scales).
###### keywords:
fluid mechanics; Navier-Stokes equations; complexity
## 1 Introduction
Matter, as we see it, with naked eyes or with external aids (to name a few,
optical lenses, image reconstructions from electromagnetic radiations) can be
modeled as existing in three-dimensional space ($\mathcal{R}^{3}$) and
amenable to differential calculus treatment. This article focuses on fluid
particularly as the matter and reflects on the analytical treatment of its
dynamics.
The primary question of interest is: if one is given a continuum of fluid
in $\mathcal{R}^{3}$, which is external to the observer, does there exist an
abstract framework onto which the physics of fluids can be encoded? Is the
mathematical framework, built off orthogonal coordinate systems in
$\mathcal{R}^{3}$, self-consistent enough to model the dynamics of fluids? A
more fundamental question could be: what is “fluid” fundamentally composed of?
As philosophical and fundamental as these questions might sound, implicit
answers to them, commonly accepted in the scientific community, have dictated
the course of scientific research for almost the past two centuries. Starting
with Euler, it
was conceptualised that the ‘fluid’ is composed of tiny parcels of continuum
(of length scale significantly higher - 2 or more orders of magnitude - than
molecular length scale) such that Newton’s second law of motion can be
applied in a point-wise fashion inside the parcel [17, 51]. Such a treatment led
to the development of ‘hydraulic engineering’, ‘fluid mechanics’, ‘continuum
mechanics’ as separate disciplines (that we see now) and the use of calculus
in the continua of fluid became more and more accepted [27].
What is seen as modern ‘fluid mechanics’ discipline is equally the result of
the ideas which were not accepted by the academic community over generations.
A prime example is the ‘Laplace’s demon’ which was an optimistic hope of
Pierre-Simon Laplace to predict the future of a physical system given its past
and the classical laws that it obeys [36]. This hope died many deaths with the
discovery of laws of thermodynamics, quantum mechanics, and with the
construction of Turing machine. To give a sample argument against Laplace’s
demon, Josef Rukavicka [38] and David Wolpert [52] noted that the construction
of Turing machine and inference devices which perform observation and
prediction of the natural world would lead to a fundamental mathematical
structure in its logic, called “halting problem”. This problem prevents the
existence of a master rule or law that tells when the output of the device
would be fixed and determinate, and hence the Laplace’s demon is bound to
fail. Even after the failure of Laplace’s demon, the quest shifted towards
connecting the continuum (macroscopic) laws of physics to atomic/molecular
contributions. French physicists and mathematicians in the 18th century hugely
debated the molecular origins of continuum forces, as discussed by Darrigol
[14]. Navier incorporated the molecular forces that are relevant only during
the deformation of a fluid, Poisson, on the other hand, summed up the
molecular forces by the surrounding molecules acting on a given molecule.
Cauchy’s formulation was quite similar to Poisson, which is sometimes referred
to as the ‘Cauchy-Poisson’ theory. Later on, Poisson offered a fluid theory of
motion by treating a fluid like a solid, which experiences stresses during its
motion so that the fluid stress is related to its rate of deformation; this
formulation went on to become the Navier-Stokes equation with an additional
pressure gradient term.
The formulation of Navier-Stokes equations as the descriptor of motion of
fluids established a boundary between the disciplines: one discipline is
focused on analysing fluids at a continuum level and the other at an
atomic/molecular level. However, it does not mean that the two fields are not
connected with one another. To give an example, for an ideal gas $G$ at a
thermodynamic equilibrium, if $G$ has temperature $T$ and the molecules that
combine to form $G$ have average kinetic energy $E$, then
$E=\frac{1}{2}N_{DOF}k_{B}T$ by the equipartition theorem.
Another example: for a fluid $F$ with surface tension $\gamma$ composed of
molecules in two layers separated by an intermolecular distance $d_{0}$, the
surface energy can be written as $\gamma=A/d_{0}^{6}$. This means that it is
possible, in
some cases, to find laws that relate the dynamics at the atomic level to the
dynamics at the continuum level. However, there still is a strong boundary in the
level of treatment between continuum mechanics and atomic/molecular physics:
in the modern era of science, this led to the distinction between fluid
mechanics and statistical physics as separate disciplines [25, 26].
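As a quick numerical check of the ideal-gas example above (using the standard equipartition form, $E=\tfrac{1}{2}N_{DOF}\,k_{B}T$, where each quadratic degree of freedom contributes $\tfrac{1}{2}k_{B}T$):

```python
K_B = 1.380649e-23  # Boltzmann constant in J/K (exact, SI 2019)

def mean_kinetic_energy(temperature, n_dof=3):
    """Average kinetic energy per molecule, E = (n_dof / 2) * k_B * T,
    with n_dof = 3 for a monatomic ideal gas."""
    return 0.5 * n_dof * K_B * temperature

# At room temperature (300 K) this gives roughly 6.2e-21 J per molecule.
```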
Though the development of distinct disciplines has led to many advances on
both fronts, it has also left many unresolved problems in each discipline that
have captured the attention of physicists for a century or more. For continuum
level fluid mechanics, the biggest unresolved problem is the Navier-Stokes
regularity problem [19]. In a loose sense, the problem is that of finding a
proof that the solutions to Navier-Stokes equations either blow up or remain
finite after a certain finite time, given a set of initial data. On the other
hand, for statistical physics, one of the unresolved problems is finding an
analytical treatment of the dynamics of a large N-particle system, as
described by the BBGKY hierarchy of equations [21]. It is interesting to note that both
problems resist solution partly because each problem is restricted in the
first place to a certain scale of analysis: ‘continuum’ for the Navier-Stokes
problem and ‘atomic/molecular’ for the BBGKY problem. It would be ideal to
have an abstract framework which connects any two different physical scales,
bringing a more unified picture of the physical problem concerned.
Even in the absence of such a framework, much of the research in mathematical
fluid dynamics over the past several decades has been influenced by the idea
of complexity/emergence. Towards the end of the 20th century, Philip W. Anderson, in
his much celebrated essay “More is Different” [2], pointed to the twin
difficulties of scale and complexity that arise when attempting to explain a
natural phenomenon in terms of known fundamental laws. In his own words, “the ability to reduce
everything to simple fundamental laws does not imply the ability to start from
those laws and reconstruct the universe”, meaning that at every new scale a
new set of laws emerge. This scientific philosophy of “complexity science” has
pervaded numerous disciplines over the past 50 years, such as origin of life,
nonlinear synchronisation, self-assembly in active systems, ants and termites
self-organising colonies, and network science, as summarised in a recent
article [47] by eight eminent complexity scientists. On fluids,
Anderson wrote in his essay that, “We have already excluded the apparently
unsymmetric cases of liquids, gases, and glasses. (In any real sense they are
more symmetric.)”. The contention here is that, in a practical sense, fluids
are quite predictable (upon the application of the continuum hypothesis);
hence, they can be assumed to be symmetric. However, this symmetry holds only
as long as the laws of continuum mechanics are applied at a single length
scale. A fluid-related phenomenon breaks this symmetry and becomes difficult
to model when it is seen as emergent in nature and one attempts to find a
model that encompasses multiple scales, beginning from atomic/molecular
contributions.
Apart from emergence, a few more perspectives posed in the past few decades
challenge our classical treatment of ‘fluids’. The primary notion being
challenged is that mathematical physics (the framework used to model fluids)
is fundamentally continuum in nature; this is not the case in mathematics,
because of the invention of Turing machines [18], nor in the physics of
fluids, because the basic units of fluids are a discrete set of interacting
atoms. There is an even more compelling argument that physics is
fundamentally information-theoretic in origin. The primary argument for this
is the choice of doing an experiment that yields an observation which
eventually lets one record what was not measured by the equipment before. In
a nutshell, the decision to perform an experiment yields information about
the ‘fluid’ which then yields the governing equations or algorithm describing
the physics of the experiment. Such an observer-participatory process to yield
information and then the physics can offer another way of looking at ‘fluid
mechanics’, where instead of treating ‘fluid’ parcels as fundamentally
continuum of numbers, one treats them as fundamentally ‘bits’ and discrete
chunks of information in nature. If one accepts these arguments, then one is
liable to argue ‘fluid mechanics’ as somewhat digital in nature. For past few
decades, numerous works are pointing towards the creation of ‘Digital Fluid
Mechanics (DFM)’ as a potential discipline which has a potential to thrive
amongst existing scientific communities with variety of theoretical,
experimental, empirical, and numerical approaches. It should be noted that
while machine learning methods can be seen as contenders for empirical and
numerical approaches within DFM context, experimental approaches remain the
same as conceived before in traditional fluid mechanics research, and theory
of neural networks can be considered as the theoretical framework for DFM in
this regard [3].
Keeping aside the potential creation of DFM as a discipline and unsolved
problems that fluid mechanics and statistical physics are plagued with, there
are many experimental observations using fluids that can be discussed as
classroom examples of emergence. For example, Couette flow, occurring when a
fluid is placed between two rotating cylinders at different velocities, gives
rise to rolls of vortices when the velocity gradient is above a threshold.
Another example is the Bénard cells formed when a layer of fluid between two
horizontal plates is heated from below, leading to the emergence of convection
rolls in the form of multiple attractors; the end result depends upon chance
fluctuations and is often unpredictable [5]. Over the past decade, there has
been an upsurge in the discovery of nonequilibrium phenomena centred around
the idea of emergence [40]. To give an example, Palacci et al.
(2013) reported that synthetic photoactivated colloidal particles have a
dynamic assembly in the form of periodic crystals that results from the
competition between the self-propulsion of particles and attractive
interactions caused by the osmotic and phoretic effects [35].
The main result presented in this article is an abstract framework - a Cantor
set - that can be used to connect the atomic/molecular scale to the continuum
scale and present a more unified approach to understanding the dynamics of
fluids. The ideas from the field of complexity sciences are introduced in
Section 2, alongside a justification of the need for a framework that can be
used to model emergent phenomena. Narrowing the focus to fluid mechanics, the
unsolved Navier-Stokes regularity problem (discussed in Section 3) is used as
a case example, which the functional analysis community has attempted in the
past to solve using traditional mathematical methods (assuming a fluid is
composed of a continuum at its core), discussed in Section 4. New ideas on
potentially solving the Navier-Stokes problem are discussed in Section 6.
Section 7 is on the Cantor set, which can be used to model ‘fluids’ - composed
of atoms/molecules - as discrete, information-theoretic, and computational in
nature. It is shown through an example calculation that it is possible to
model the emergence of ‘fluids’ from ‘atoms’ by modelling the interactions
between atoms in different discrete Cantor sets as ‘rotary logic gates’.
Section 8 is on the potential usage of Cantor sets in the VLSI hardware design
problem (to show their appeal outside the field of physics). Section 9 is on
existing engineering techniques for solving emergent problems without an
abstract framework, using physical learning [45] and architecting
metamaterials [6].
## 2 Revisiting complexity sciences
A system is a collection of interacting or interdependent components that
operate in accordance with a set of rules to produce a holistic entity. In
contrast to this conventional definition, which characterises a system with
reference to its individual components, a complex system defies
characterisation in terms of its components, since no component is independent
of the behaviour of the other components. These components establish networks
of interactions (often with only a few components involved in numerous
interactions) and are capable of generating novel information. Therefore, it
is not feasible to strictly derive the collection’s properties from knowledge
of its components alone. Hence, the investigation of complexity science
requires, in general, a new mathematical framework and, in particular, new
sets of measurement metrics. This section of the paper analyses the types of
measurement metrics available in the literature for complex systems.
Complex science derives its multidisciplinary nature from the premise that
shared properties connect systems across fields and thus justifying the search
for universal sets of measuring metrics (modelling techniques in general)
applicable to complex systems stemming from all scientific and professional
domains covering physics, biology, medicine, engineering, ecology, social
sciences, finance, business, management, politics, psychology, anthropology,
and more. Due to the multidisciplinary nature of complex science and its
ubiquitous characteristics - such as emergence (characteristics of a system
that are not evident from its components in isolation, but arise from their
interactions, dependencies, or linkages when brought together in a system),
nonlinearity (a change in the input size does not result in a proportional
change in the output size), and adaptation (the ability to adapt and gain
knowledge from experience) - defining measurement metrics for complex science
discipline by discipline is not the optimal approach.
Moreover, relatively recent work on the analytical framework of complex
networks has resulted in a considerable reaffirmation of similarities between
complex systems in several scientific fields [1].
If the measurement metrics are sectioned from the perspective of the
definition of a complex system, then the majority of existing complexity
metrics fall into two categories [43]:
1\. Complexity as Randomness: metrics in this category try to capture the
randomness, information content, or description length of a system or process,
with random processes being the most complex since they defy compression the
most. This is because compression relies on exploiting a skewed distribution
in the data, and the more random a source is, the less skew there is to
exploit.
2\. Complexity as Structure and Information: metrics in this category
conceptualise complexity as distinct from randomness. In this context, complex
systems include a great deal of structure or information, frequently spanning
numerous temporal and spatial scales. Within this set of metrics, very complex
systems fall halfway between highly ordered (regular) and highly disordered
systems (random).
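The compression intuition behind the first category can be checked directly. The snippet below is a minimal illustration (the source distributions and the sample size are arbitrary choices, not from the cited literature): a byte stream drawn from a heavily skewed distribution compresses far better than a uniformly random one, which barely compresses at all.

```python
import random
import zlib

random.seed(0)
n = 100_000

# Heavily skewed source: byte 0 appears roughly 80% of the time.
skewed = bytes(random.choices(range(256), weights=[1000] + [1] * 255, k=n))
# Uniformly random source: every byte value is equally likely.
uniform = bytes(random.choices(range(256), k=n))

def ratio(data):
    """Compressed size divided by original size (smaller = more compressible)."""
    return len(zlib.compress(data, 9)) / len(data)

print(f"skewed : {ratio(skewed):.3f}")   # well below 1: exploitable structure
print(f"uniform: {ratio(uniform):.3f}")  # about 1: essentially incompressible
```

In the "complexity as randomness" view, the uniform stream is the more complex object precisely because it resists this kind of description-length reduction.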
According to Seth Lloyd [30], contemporary researchers across fields were
asking the same questions regarding the complexity of their respective research
subjects, and the answers they proposed for how to assess complexity are quite
comparable. Therefore, if the measurement metrics are sectioned from the
perspective of the questions researchers frequently ask to quantify
complexity, the metrics fall into four main categories:
1. 1.
Metrics aiming to measure the difficulty of describing a complex system.
2. 2.
Metrics aiming to measure the difficulty of creating a complex system.
3. 3.
Metrics aiming to measure the degree of organisation of a complex system. This
criterion may further be divided into two quantities:
1. (a)
Effective Complexity: metrics aiming to measure the difficulty of describing
the organisational structure of a complex system.
2. (b)
Mutual Information: metrics aiming to evaluate the information shared between
the parts of a complex system as the result of this organisational structure.
4. 4.
Non-quantitative metrics.
A non-exhaustive list of metrics, sectioned with the latter method is
visualised in Figure 1.
Figure 1: A non-exhaustive list of measuring metrics for complex system [30].
## 3 Navier-Stokes regularity problem
The Navier-Stokes equations (NSE) model the dynamics of a given fluid and are
given by
$\partial_{t}\vec{u}(t,x)=\nu\Delta\vec{u}-\sum_{i=1}^{3}u_{i}\partial_{i}\vec{u}-\vec{\nabla}p+\vec{f}$
(1) $\nabla.\vec{u}=\sum^{3}_{i=1}\partial_{i}u_{i}=0$ (2)
where $\vec{u}(t,x)$ is the velocity vector field in time $t$ and space $x$,
$\nu$ is the kinematic viscosity, and $\vec{f}$ is the external body force. The
initial condition on the velocity vector field is given as
$\vec{u}(0,x)=\vec{u}_{0}(x)$ (3)
where Eq. 1 and Eq. 2 are defined for $t\geq 0$ and $x\in\mathcal{R}^{n}$.
The problem statement posed by the Clay Mathematics Institute [19] is to
prove either that the solutions to Eqs. 1-2 remain smooth, or that they become
singular after a certain finite time $T^{*}$, for a given set of initial data
$\vec{u}_{0}(x)$.
## 4 Efforts to solve the problem
Attempts to find solutions to the Navier-Stokes equations (NSE) can be traced
back to the first half of the 20th century. In 1911, Oseen [34] used an
explicit tensor $\mathcal{O}_{\nu}$, called the Oseen tensor, to convert the
NSE into an integro-differential equation and find the solution
$\begin{split}\vec{u}(t,x)=&\int_{\mathcal{R}^{3}}W_{\nu}(t,x-y)\vec{u}_{0}(y)dy+\int_{0}^{t}\int_{\mathcal{R}^{3}}\mathcal{O}_{\nu}(t-s,x-y)\\\
&\left(\vec{f}(s,y)-\sum_{i=1}^{3}u_{i}(s,y)\partial_{i}\vec{u}(s,y)\right)dyds\end{split}$
(4)
where $W_{\nu}$ is the heat kernel associated with the heat equation
$\partial_{t}T=\nu\Delta T$, where $T$ is temperature. While Oseen derived the
solution (4) on the time interval $[0,T]$, the question of possible blowup of
such solutions at some $z_{0}$ (a function $f(z)$ is said to blow up when
$f(z)\to\infty$ at $z=z_{0}$) was left unanswered. In 1934, Leray [28] used an
estimate of the energy $\int_{\mathcal{R}^{3}}|\vec{u}(t,x)|^{2}dx$ to find
conditions that prevent the blowup. After realising that the $L^{2}$ norm (the
Euclidean norm, which gives the length of a vector
$\mathbf{x}=(x_{1},x_{2},...,x_{n})$ as
$||\mathbf{x}||_{2}=\sqrt{x_{1}^{2}+x_{2}^{2}+...+x_{n}^{2}}$) on a manifold
$\mathcal{R}^{n}$ is not controllable enough to prevent blowup, Leray
introduced a new concept of solution, the weak solution. Weak solutions have
generalised derivatives defined in the Sobolev space $H^{1}$ (a Sobolev space
is a vector space of functions equipped with a norm combining the $L^{p}$
norms of a function and of its derivatives up to a given order). Effectively,
Leray proved that for an initial datum $u_{0}\in L^{2}$, a global weak solution
$\vec{u}(t,x)\in L_{t}^{\infty}L_{x}^{2}$ exists with the additional regularity
$L_{t}^{2}H_{x}^{1}$. Ever since this formulation, the fundamental problems of
proving uniqueness of weak solutions and global existence of strong solutions
have remained unsolved.
Another parameter of paramount importance in NSE analysis is vorticity,
$\omega$, given by $\nabla\times u$. In terms of $\omega$, NSE can be written
as
$\partial_{t}\omega+u\cdot\nabla\omega-\nu\Delta\omega=\omega\cdot\nabla u$
(5)
where the RHS term is called the vortex-stretching term. For 2D flow, the RHS
is zero because the vorticity vector is at a right angle to the plane of the
flow field. As a result, vorticity is controlled in magnitude: if $\nu=0$, the
vorticity distribution is conserved and its $L^{p}$ norms do not change in
time; if $\nu\neq 0$, the norms are non-increasing functions of time. This
yields regular solutions of the NSE and Euler equations, in the 2D case,
without any formation of singularities. In 3D NSE, the direction of the
vorticity vector influences the possibility of singularity formation.
Constantin and Fefferman in 1993 [12] proved that if the angle between the unit
vorticity vectors at positions $x$ and $y$ is $\varphi(x,y,t)$, then for given
constants $\Omega$ and $\rho$, if the magnitude of the vorticity at both
locations is above $\Omega$, then
$\left|\sin\varphi(x,y,t)\right|\leq\frac{|x-y|}{\rho}$ (6)
that is, at high values of vorticity the direction of vorticity is coherent,
so singularities cannot form. The physical reason behind this result is local
alignment (vortex tubes being parallel or antiparallel), which regularises the
nonlinearity in the NSE.
Equation (1) can be collapsed into a semilinear heat equation
$\partial_{t}u+B(u,u)=\nu\Delta u$ (7)
where $B(u,v)$ is the quadratic nonlinearity, a bilinear operator with the
general form
$\displaystyle B(u,v):=\frac{1}{2}\mathcal{P}((u.\nabla)v+(v.\nabla)u)$ (8a)
$\displaystyle\langle B(u,u),u\rangle=0$ (8b)
and $\mathcal{P}$ is the orthogonal projection onto divergence-free vector
fields, also called the Leray projection. Dimensional-analysis heuristics
suggest that the NSE might not have much in the way of global regularity
properties, and are rather supercritical (supercritical differential equations
have the property that they blow up in finite time; in the hydrodynamics
literature, the term is used in the sense of a supercritical pitchfork or Hopf
bifurcation). Supposing $\nu=1$, with $u(x,t)$ concentrated at a frequency
scale $N(t)$ and amplitude $A(t)$, gives
$\displaystyle\nu\Delta u\approx N(t)^{2}A(t)$ (9a) $\displaystyle
B(u,u)=\mathcal{P}(u.\nabla u)\approx N(t)A(t)^{2}$ (9b)
such that the viscosity dominates the nonlinear effects when $A(t)\ll N(t)$,
and vice versa. A power-law model of the nonlinear dynamics, $A(t)\approx
N(t)^{\theta}$ for $\theta>1$, gives
$\displaystyle\partial_{t}A(t)\approx O(N(t)A(t)^{2})\approx
O(N(t)^{1+2\theta})$ (10a) $\displaystyle\partial_{t}N(t)\approx
O(N(t)^{1+\theta})$ (10b)
which suggests that if the frequency $N(t)$ is increasing, then blowup at a
finite time could happen. Now, if the solution $u$ is assumed to exist on a
spatial set of intermittency dimension $0\leq\alpha\leq 3$ (in turbulent fluid
flows, intermittency refers to the stop-and-start character of the velocity
signal; recorded intermittent signals of the vorticity field $\omega$ are
given in [31, see fig. 2(a)]), it should be supported in a ball of volume
$N(t)^{-3+\alpha}$. As a result, the energy
$\int_{\mathcal{R}^{3}}|u(t,x)|^{2}dx$ is of order $\approx
A(t)^{2}N(t)^{-3+\alpha}\approx N(t)^{-3+\alpha+2\theta}$. The energy identity
requires the exponent of $N(t)$ to be nonpositive as $N(t)\to\infty$ (blowup),
giving the following constraint
$\theta\leq\frac{3}{2}-\frac{\alpha}{2}$ (11)
alongside the constraint $\theta>1$ for nonlinear effects to dominate. While
high intermittency $\alpha\geq 1$ makes it impossible for nonlinear effects to
dominate [8], if $\alpha<1$, then blowup is potentially possible. An extreme
blowup is possible at $\theta=3/2$ and $\alpha=0$, which means that the
solution $u$ is concentrated in a single ball of size $1/N(t)$. Interestingly,
no proof is available that finite-time blowup in such a case, as $t\to
T_{*}<\infty$, is impossible.
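The finite-time blowup implicit in Eq. 10b can be made concrete. Dropping the $O(\cdot)$ constants, the model ODE $\partial_{t}N=N^{1+\theta}$ integrates exactly to $N(t)=N_{0}(1-\theta N_{0}^{\theta}t)^{-1/\theta}$, which diverges at the finite time $T_{*}=1/(\theta N_{0}^{\theta})$. A short sketch, with illustrative values of $\theta$ and $N_{0}$:

```python
# Closed-form solution of the heuristic ODE dN/dt = N^(1+theta):
# N(t) = N0 * (1 - theta * N0**theta * t)**(-1/theta),
# which blows up at the finite time T* = 1 / (theta * N0**theta).
theta, N0 = 1.5, 1.0            # illustrative values (theta = 3/2 scenario)
T_star = 1.0 / (theta * N0 ** theta)

def N_exact(t):
    return N0 * (1.0 - theta * N0 ** theta * t) ** (-1.0 / theta)

# Approach T* and watch the frequency scale grow without bound.
for frac in (0.5, 0.9, 0.99, 0.999):
    t = frac * T_star
    print(f"t/T* = {frac:.3f}  N(t) = {N_exact(t):.3e}")
```

With $\theta=3/2$ and $N_{0}=1$ the blowup time is $T_{*}=2/3$, and $N(t)$ already reaches $10^{2}$ at $t=0.999\,T_{*}$.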
A dyadic shell model is often used to model the case: $\theta=3/2,\alpha=0$.
Introduced in Katz, Pavlovic [23], the model is given by
$\partial_{t}X_{n}=\lambda^{n-1}X_{n-1}^{2}-\lambda^{n}X_{n}X_{n+1}-\lambda^{4n/5}X_{n}$
(12)
for a constant $\lambda>0$; $X_{n}$ is the amplitude of the fluid at scale
$n$, energy diffuses from scale $n-1$ to scale $n$, and the last term on the
RHS is the dissipation term. The total energy of the fluid is given by
$\sum_{n}X_{n}^{2}$. The analogue of blowup in this model is energy being moved
to ever higher values of $n$, ideally $n\to\infty$. For models corresponding
to five and higher dimensions, Cheskidov in 2018 [11] showed that when the 4/5
exponent is replaced with an exponent below 2/3, the equations can blow up.
However, in 2011, Barbato, Morandin, and Romito [4] proved for a large class of
initial data that the dyadic shell model admits global regular solutions, with
$X_{n}$ exhibiting exponential decay as $n\to\infty$, i.e., the solutions are
smooth. This is because the energy disperses too quickly into a broad
spectrum of higher frequency scales, where each frequency mode is activated
with an amplitude such that the effects of viscosity remain stronger than the
nonlinear effects, preventing blowup and not letting the solutions cascade to
infinitely large frequency scales.
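The cascade-versus-dissipation competition in Eq. 12 is easy to observe numerically. The sketch below integrates the shell system with forward Euler; the values of $\lambda$, the number of shells, the initial datum, and the time step are illustrative choices, not parameters from [23] or [4]:

```python
# Forward-Euler integration of the dyadic shell model (Eq. 12):
# dX_n/dt = lam^(n-1) X_{n-1}^2 - lam^n X_n X_{n+1} - lam^(4n/5) X_n
lam = 2.0
n_shells = 12
dt, steps = 1e-4, 20_000          # integrate up to t = 2

X = [0.0] * n_shells
X[0] = 1.0                        # all energy initially in the coarsest shell

def rhs(X):
    dX = []
    for n in range(n_shells):
        cascade_in  = lam ** (n - 1) * X[n - 1] ** 2 if n > 0 else 0.0
        cascade_out = lam ** n * X[n] * X[n + 1] if n < n_shells - 1 else 0.0
        dissipation = lam ** (4 * n / 5) * X[n]
        dX.append(cascade_in - cascade_out - dissipation)
    return dX

for _ in range(steps):
    X = [x + dt * dx for x, dx in zip(X, rhs(X))]

energies = [x * x for x in X]
print("total energy:", f"{sum(energies):.4f}")       # strictly below the initial 1.0
print("shell energies:", ["%.1e" % e for e in energies])
```

The two cascade terms alone conserve $\sum_{n}X_{n}^{2}$ (their contributions telescope across shells), so the decay of the total energy comes entirely from the dissipation term, while the cascade populates the higher shells from the initially excited one.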
One type of scaling of NSE is given as
$v(x,t)\mapsto v_{\lambda}(x,t)=\lambda v(\lambda x,\lambda^{2}t)$ (13)
and there are various partial regularity results known relative to it. One of
the most prominent is the set of Ladyzhenskaya-Prodi-Serrin conditions ([24],
[37], [39]), which state that if a Leray weak solution lies in
$L^{p}_{t}L^{q}_{x}$ with $2/p+3/q\leq 1$, then the solution is unique and
smooth in positive time. The endpoint of this inequality for $p=\infty$ and
$q=3$, that is $L^{\infty}_{t}L^{3}_{x}$, was proved by
Escauriaza-Seregin-Šverák [16]. It should be noted that while the natural
scaling of weak solutions corresponds to the condition $2/p+3/q=3/2$, the
energy equality holds in the NSE with the additional regularity condition
$2/p+3/q=5/4$ [41]. This means that there is an open gap between the natural
scaling of the equations and the kinetic energy, which can lead to
non-uniqueness of weak solutions. In fact, this nonuniqueness was hinted at by
Jia-Šverák [22], where the authors proved that in the class
$L_{t}^{\infty}L_{x}^{3,\infty}$, Leray solutions are nonunique if a certain
spectral assumption holds for a linearised Navier-Stokes operator. In 2019,
Buckmaster and Vicol [7] proved a stronger result: weak solutions to the 3D
NSE, $v\in C_{t}^{0}H_{x}^{\beta}$ (where $H^{\beta}$ is the $L^{2}$-based
space with regularity index $\beta$), are nonunique. The nonuniqueness refers
to ill-posedness and the existence of infinitely many solutions, so that a
clear statement about the nature of the solution cannot be made. The idea used
in the proof of this work is the convex integration method [15], and the
authors expect that these tools might in the future establish nonuniqueness of
Leray weak solutions.
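The exponent conditions above can be packaged into a single scaling index. The helper below (a sketch; the example exponent pairs are arbitrary) evaluates $2/p+3/q$ and compares it against the thresholds quoted above: $1$ (Ladyzhenskaya-Prodi-Serrin regularity), $3/2$ (natural scaling of weak solutions), and $5/4$ (energy equality):

```python
import math

def scaling_index(p, q):
    """Return 2/p + 3/q for the mixed-norm space L^p_t L^q_x."""
    return 2.0 / p + 3.0 / q

# Endpoint of the Ladyzhenskaya-Prodi-Serrin condition (Escauriaza-Seregin-Sverak):
print(scaling_index(math.inf, 3))   # 1.0 -> unique and smooth in positive time
# A strictly subcritical exponent pair:
print(scaling_index(4, 12))         # 0.75 <= 1 -> regularity criterion satisfied
# Natural scaling of Leray weak solutions:
print(scaling_index(math.inf, 2))   # 1.5 -> only the natural-scaling regularity
```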
## 5 Nature of the problem
Interestingly, it is not the first time that such a question - on whether an
equation yields physically reasonable answers for an arbitrary set of initial
values - has been asked. Many problems from other fields of physics and
mathematics resemble this one: the ray-tracing problem in 3D optical systems,
recurrent neural networks, nonabelian topological quantum field theory, the
stability of $n$-body systems, and the quantum spectral gap problem all ask a
similar, generalised question. For example, the quantum spectral gap problem
asks: given a quantum many-body Hamiltonian, is the system that it describes
gapped or gapless? In fact, as
recently as 2015, this question was proved to be undecidable (in other words,
not answerable) [13]. The notion of the undecidability of a system goes back
to the times of Russell’s paradox - a self-referential paradox that Bertrand
Russell illustrated with statements such as “this statement is false”. He
showed that such a statement leads to a self-contradiction because it refers
to itself. Later on, an abstract computing machine, the Turing machine,
devised by Alan Turing, was marred by the same self-referential paradox, in
the form of the Halting problem. Turing showed that the question of whether,
for an arbitrary pair of program and input, the machine will halt or run
forever is unanswerable; meaning that no single program exists that decides
this for all possible pairs of programs and inputs. Turing replaced the notion
of a ‘question’ with a ‘program’ to arrive at the result. It has been more
than 80 years since this abstract machine was designed, and even now, the
machine’s lack of self-consistency has enormous consequences for the
self-consistency of various other topics in physics. The quantum spectral gap
problem was shown to be unanswerable by formulating it in Turing-machine
terms, thus importing the pathological nature of the machine. In a loose
sense, one way to visualise this pathology is the ancient symbol of the
ouroboros, a serpent eating its own tail. To ask whether a problem is solvable
for arbitrary cases is akin to the snake evading or attacking its own tail.
This notion of accurate solvability for the Navier-Stokes equations (NSE) has
been the subject of enquiry for almost a century, long before the
Navier-Stokes problem became a Clay problem.
## 6 A new scheme to look at the problem
In Section 4, numerous efforts to either prove uniqueness or nonuniqueness of
the solutions to NSE were outlined. This section, specifically, highlights a
new scheme to solve the problem by using ideas from theoretical computer
science.
In 2015, Terence Tao [48] introduced an improved variant of the dyadic shell
model by discarding most of the nonlinear terms and incorporating only one
pair of adjacent modes $X_{n},X_{n+1}$, which experience nonlinear energy
interactions at any instant. The improved version of Eq. 12 is given as
$\begin{gathered}\partial_{t}X_{n}=-\lambda^{2n\alpha}X_{n}+1_{n-1=n(t)}\lambda^{n-1}X^{2}_{n-1}\\\
-1_{n=n(t)}\lambda^{n}X_{n}X_{n+1}\end{gathered}$ (14)
where $n:[0,T_{*})\to\mathcal{Z}$ is a piecewise constant function that
selects the mode pair ($X_{n(t)},X_{n(t)+1}$) allowed to interact at a given
time $t$. A system of ODEs was constructed to simulate Eq. 14 using a sequence
of “quadratic circuits” connected in series. Each circuit is composed of
“quadratic logic gates”, which simulate certain basic forms of quadratic
nonlinear interaction. As shown in Fig. 3(c), the gates transfer energy from
one mode to another with “amplifier” and “rotor” gates. Combinations of these
gates with certain coupling constants aid in energy transfer from frequency
scale $n$ to $n+1$. A bootstrap argument is used to construct a blowup
solution to the truncated system of ODEs, interpreted as “averaged
Navier-Stokes equations”, concisely given by
$\partial_{t}u+\tilde{B}(u,u)=\nu\Delta u$ (15)
which exhibits blowup for values close to the $\theta=3/2$ and $\alpha=0$
scenario discussed previously. It is also important to note that the dynamics
of Eq. 15 are approximately and discretely self-similar in time. Also
noteworthy is that perfectly continuous self-similar solutions to the NSE are
not possible, as proved by Nečas-Růžička-Šverák [33] and
Escauriaza-Seregin-Šverák [16]. When the viscosity effects are of lower order,
such as in the case $\theta=3/2$ and $\alpha=0$, viscosity can be treated as a
perturbative term. As a result, instead of the NSE, the spotlight falls on the
Euler equations ($\nu=0$), specifically in the cases where blowup solutions
were found. For these equations, Kelvin’s circulation theorem is given by
$\Gamma(t)=\oint_{C}\vec{u}\cdot d\vec{l}=\int_{S}\vec{\omega}\cdot d\vec{S}$ (16)
which means that the line integral of $\vec{u}$ around a curve $C$ that loops
through the fluid is equal to the surface integral of the curl of the velocity
field (the vorticity). The Euler equations written in terms of vorticity are
given as
$\displaystyle\partial_{t}\omega+\mathcal{L}_{u}\omega=0$ (17a) $\displaystyle
u=T\omega$ (17b)
where $\mathcal{L}_{u}$ is the Lie derivative with respect to $u$ (the Lie
derivative represents the change of a tensor field, given by the directional
derivatives of each component, along the flow defined by another vector field)
and $T\omega=-\nabla\times\Delta^{-1}(*\omega)$ is the Biot-Savart operator
(in fluid mechanics, for the equations $\nabla\times u=\omega$,
$\nabla.u=0$, the Biot-Savart operator $T$ maps $\omega$ to $\vec{u}$). This
formulation is a concise encoding of the conservation laws that the Euler
equations exhibit. Tao in 2016 [48] constructed a blowup for an artificial
version of the Euler equations where the Biot-Savart operator is replaced by a
truncated linear operator $\tilde{T}$. The blowup scenario is called a “vortex
neck pinch”, in which the streamlines of the vortices are pinched into a ring
of radius $O((T_{*}-t)^{1/2})$ (Fig. 2) as the blowup time $T_{*}$ is
approached, with $u\sim(T_{*}-t)^{-1/2}$ and $\omega\sim(T_{*}-t)^{-1}$. The
blowup is similar to the discretely self-similar NSE blowup scenario with
$\theta=1,\alpha=0$. The pinchoff shown in this artificial construction of the
Euler equations is similar to those observed in numerical studies [32].
Given that there is some possibility that the Euler equations can exhibit
blowup in finite time, Tao postulated a conjecture stating that
###### Conjecture 1
There exists some smooth manifold $M=(M,g)$ (of unspecified dimension) and a
smooth solution $\vec{u}(x,t),p$ to the Euler equations (with suitable decay
at infinity) that blows up in finite time.
The potential route to prove Conjecture 1 is to carefully design the metric
$g$ and the dimension of $M$ so as to “program” various types of behaviour
that enable blowup. It should be carefully noted that no corners or
boundaries, which might trivially yield a blowup, should exist in the domain;
ideally, the fluid lives in an infinite and smooth domain. The rationale
behind designing the manifold is akin to constructing a computer program that
operates on the fluid and causes it to blow up.
There are two important features required of solutions to the NSE: the first
is universality and the second is scale invariance. In 2017, Tao proved the
following theorem [49]
###### Theorem 1
Let $\partial_{t}u=B(u,u)$ be an ODE such that $B:V\times V\to V$ is a
bilinear form and $V$ is a finite-dimensional inner product space, with the
conservation law $\langle B(u,u),u\rangle=0$. Then there exists a compact
Riemannian manifold $(M,g)$ and a linear isometry $T:V\to C^{\infty}(M)$ that
maps solutions to $\partial_{t}u=B(u,u)$ to solutions to the Euler equation on
$M$.
where a symmetry reduction from the finite-dimensional ODE to the Euler
equations is performed. For a specific $V$, there exists a manifold such that
$V$ is embeddable into its smooth vector fields. Once this is possible, any
ODE of the above type can be translated into a smooth vector field, and the
manifold can be constructed so that it exhibits finite-time blowup, albeit for
a fixed number of frequency scales. In 2019, Tao proved another theorem [50]
which establishes further that the Euler equations (or flows) can be seen as
universal. The theorem states that
###### Theorem 2
If $d\geq 2$, then there is a somewhere dense set (in the smooth topology) of
incompressible flows $X:[0,T]\to\Gamma(T(R/Z)^{d})$ which can be lifted to an
Eulerisable flow on a warped product of $(R/Z)^{d}$ and another torus.
where a vector field $X:[0,T]\to\Gamma(T(R/Z)^{d})$ is Eulerisable if it
solves the Euler equations on a manifold $M$ with some metric $g$. The
theorem, in summary, proved that although not all vector fields on a torus are
Eulerisable, a somewhere dense set of them can be lifted and mapped to a
warped product of the torus (a warped product is a Riemannian manifold whose
geometry can be decomposed into a Cartesian product of the $y$ and $x$
geometries, where the term containing $x$ is warped, i.e. rescaled by a
function of the other coordinate $y$) such that the flows become Eulerisable.
The outcome of these two theorems led to the first of many works by Cardona,
Miranda, Peralta-Salas, and Presas [9], in which the authors proved that
theorems is the first of many works by Cardona, Miranda, Peralta-Salas, Presas
[9] where the authors proved that
###### Theorem 3
Every geodesible vector field $X$ on a compact manifold $M$ can be obtained as
the pullback $X=\phi^{*}Y$ of a stationary Eulerisable flow $Y$ on another
manifold $N$ with respect to some embedding $\phi:M\to N$.
where the stationary vector fields (geodesible fields; geodesic fields on a
Riemannian manifold are the vector fields whose integral curves are geodesics,
that is, curves that connect two points on a surface by the shortest path) can
be mapped to stationary solutions of the Euler equations by embedding the
manifold $M$ into the bigger manifold $N$. Such flows are also called Beltrami
flows, and they are related to Reeb vector fields using ideas from contact
geometry. In 2021, Torres de Lizaur [29] proved that
###### Theorem 4
Given any vector field $X$ on a compact manifold $N$, there exists a
Riemannian manifold (M,g) and an invariant manifold $\tilde{N}$ diffeomorphic
to $N$ in the phase space $\Gamma(TM)$ of the Euler equations on $(M,g)$, such
that the Euler flow on $\tilde{N}$ is arbitrarily close to $X$ in the smooth
topology after identifying $\tilde{N}$ with $N$.
that is, any vector field $X$ on a compact manifold $N$ can be realised, up to
an arbitrarily small error in the smooth topology, as the Euler flow on an
invariant manifold $\tilde{N}$, diffeomorphic to $N$, inside the phase space
of the Euler equations on a higher-dimensional manifold $(M,g)$; moreover,
this realisation is stable under perturbation. One corollary of this work is
that there are manifolds $(M,g)$ on which the Euler dynamics are chaotic, in
that they exhibit horseshoe maps, homoclinic orbits, and similar features of a
chaotic system, and these features are stable upon perturbations. In 2021,
Cardona, Miranda, Peralta-Salas, and Presas [10] proved that
###### Theorem 5
There exists a Riemannian manifold $(M,g)$ whose Euler flow is Turing-
complete: the halting problem for any given Turing machine is equivalent to
the Euler flow for a certain initial condition associated to that machine
entering a certain region.
where the authors constructed solutions for the steady Euler flow (without
viscosity) on a Riemannian 3-sphere $\mathcal{S}^{3}$ that are
Turing-complete. Here, Turing completeness means that for given points on the
sphere, the problem of bounding the dynamical trajectories of those points, as
the equation evolves, is an undecidable problem. Following this work, the
authors extended the analysis to the standard Euclidean 3-dimensional space.
The tools used to prove this result are contact topology, symplectic geometry,
the h-principle, generalised shift maps, and Cantor sets.
Figure 2: Cantor set framework consisting of $N$ layers, with length scale $L$
for the top-most layer and $l_{cut}$ for the bottom-most one. There are $n$
discrete elements in the bottom-most layer, and the number of elements reduces
by a factor of 2 as one goes bottom-up, layer by layer. Each layer is
represented by $C_{i}$, where $0\leq i\leq N$.
## 7 Cantor set and rotor gate analysis
We are interested in analysing two length scales, $l_{cut}$ and $L$, such that
$l_{cut}\ll L$ (lower by a few orders of magnitude). For example, for a liquid
drop of radius 1 mm sitting on a hydrophobic surface, $L=1$ mm and
$l_{cut}=10^{-9}$ m; $l_{cut}$ is at a molecular scale and $L$ is at a
continuum scale. Consider that all the atoms at the length scale $l_{cut}$ can
be categorised and put into a countable collection of sets - called Cantor
sets - denoted by $\mathcal{C}_{k}\,(1\leq k\leq N)$. $N$ is the number of
layers between $l_{cut}$ and $L$, so that at the length scale $L$ all the
atoms can be collected and put in a single Cantor set denoted by
$\mathcal{C}_{N}$. From Fig. 2, it is apparent, geometrically, that the sets
at layer $k$ merge pairwise to form the sets at layer $k+1$. Thus, if the
number of subsets in $C_{1}$ is $n$, the number of subsets in $C_{2}$ is
$n/2$, and so on, until $C_{N}$. Since $C_{N}$ is a single set with just one
subset (itself), $n/2^{N}=1$. Rearranging,
$N=\log_{2}n$ (18)
At a general length scale, say $l$ ($l_{cut}\leq l\leq L$), with the atoms
categorised in the set $C_{k}$, let us call each subset in $C_{k}$ as a
“mode”. Consider any two adjacent modes in $C_{k}$, say $x$ and $y$ with
energies $e_{x}$ and $e_{y}$. At an initial time step $t_{0}$, the modes are
represented by $x(t_{0})$ and $y(t_{0})$. Consider them rotating around the
origin at a constant angular rate $\alpha z(t_{0})$ such that
$x(t)=x(t_{0})\cos(\alpha z(t_{0})(t-t_{0}))-y(t_{0})\sin(\alpha
z(t_{0})(t-t_{0}))$ (19) $y(t)=y(t_{0})\cos(\alpha
z(t_{0})(t-t_{0}))+x(t_{0})\sin(\alpha z(t_{0})(t-t_{0}))$ (20)
$z(t)=z(t_{0})$ (21)
where the mode $z\in C_{k+1}$ is fixed while $x,y\in C_{k}$ are not. In fact,
from $t_{0}$ until time $T_{k}$, the two modes $x$ and $y$ exchange energy
with each other and thus mix to $emerge$ and form the mode $z$ at layer
$C_{k+1}$. This process can be viewed as the $z$ mode driving the exchange of
energy between the $x$ and $y$ modes. It is to be noted that though it is the
reader’s choice to view the Cantor set in either top-down or bottom-up
fashion, throughout this article we discuss emergence as a bottom-up process,
starting from $l_{cut}$ and climbing layer by layer until $L$.
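Because Eqs. 19-21 describe a pure rotation of the pair $(x,y)$ at the fixed angular rate $\alpha z(t_{0})$, the combined energy $x^{2}+y^{2}$ is conserved while being exchanged between the two modes. A quick numerical check (the values of $\alpha$, $z(t_{0})$, and the initial modes are illustrative):

```python
import math

# Rotor-gate dynamics of Eqs. 19-21: (x, y) rotate at the constant
# angular rate alpha * z(t0), so x^2 + y^2 stays constant.
alpha, z0 = 0.7, 2.0          # illustrative coupling and driving mode
x0, y0, t0 = 1.0, 0.5, 0.0    # illustrative initial modes

def rotor(t):
    phase = alpha * z0 * (t - t0)
    x = x0 * math.cos(phase) - y0 * math.sin(phase)
    y = y0 * math.cos(phase) + x0 * math.sin(phase)
    return x, y

E = x0 ** 2 + y0 ** 2          # total energy of the mode pair
for t in (0.0, 1.0, 5.0, 25.0):
    x, y = rotor(t)
    print(f"t = {t:5.1f}  x = {x:+.4f}  y = {y:+.4f}  x^2 + y^2 = {x*x + y*y:.6f}")
```

Each printed line shows a different partition of the same total $x^{2}+y^{2}=1.25$ (up to rounding), which is exactly the mixing that lets the pair emerge as the single mode $z$ one layer up.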
Differentiating the product $xy$ along the dynamics of Eqs. 19-21 gives
$\alpha z(x^{2}-y^{2})=\partial_{t}(xy)$ (22)
which can be written as
$\alpha\int^{T_{k}}_{t_{0}}z(t)(x^{2}-y^{2})dt=x(T_{k})y(T_{k})-x(t_{0})y(t_{0})$
(23)
such that $x^{2}+y^{2}=E$ and constant $z$ give
$\frac{1}{T_{k}-t_{0}}\int^{T_{k}}_{t_{0}}x^{2}(t)dt=\frac{E}{2}+O\left(\frac{E}{\alpha|z(t_{0})|(T_{k}-t_{0})}\right)$
(24)
where $E=e_{x}+e_{y}$ and, by equipartition, $e_{x}=e_{y}$. As a result,
$e_{x}=e_{y}=e[C_{k}]\approx\frac{e[C_{k+1}]}{2}$ (25)
Using Eq.25 and Eq.18, one gets the general formula
$e[C_{N}]\approx 2^{N}e[C_{1}]$ (26)
which relates the energy of a discrete block in $C_{1}$ (molecular scale) to
the energy in $C_{N}$ (continuum scale).
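Eqs. 18, 25, and 26 can be combined into a small worked example (the number of bottom-layer modes $n$ and the mode energy $e_{1}$ are illustrative values, not physical data):

```python
import math

# Cantor-set energy bookkeeping: n modes in the bottom layer C_1 merge
# pairwise layer by layer, giving N = log2(n) layers (Eq. 18); each merge
# roughly doubles the mode energy (Eq. 25), so e[C_N] ~ 2^N e[C_1] (Eq. 26).
n = 1024          # modes in the bottom layer C_1 (illustrative)
e1 = 3.0e-21      # energy of a single C_1 mode, in joules (illustrative)

N = int(math.log2(n))   # number of layers: 10
e = e1
for _ in range(N):      # climb the hierarchy one layer at a time
    e *= 2              # Eq. 25: pairwise merging doubles the mode energy
print(f"N = {N}, e[C_N] = {e:.3e} J")   # equals 2**N * e1
```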
Figure 3: A rotor gate between two layers $C_{k}$ and $C_{k+1}$, in which the
$x$ and $y$ modes operate to start dynamics in the $z$ mode; we term this the
emergence of physics at layer $k+1$ from layer $k$.
## 8 Different abstraction levels in VLSI hardware design as cantor set
Cantor set as a framework can be used to model and understand phenomenon
exhibiting emergence in disciplines other than physics. This section is on the
VLSI hardware design problem and how it can be modelled using Cantor set
framework.
In 1964, an American semiconductor company, General Microelectronics (GMe),
manufactured the first-ever MOS (metal-oxide-semiconductor) integrated chips,
which comprised more than 10,000 transistors on a single chip. Later on, VLSI
(very large scale integration) chips with millions and billions of MOS
transistors on them became the core element of the semiconductor industry.
One of the big challenges in the industry is the so-called top-down design
problem, which is to efficiently divide the task of making VLSI chips into
different categories. The Gajski-Kuhn chart [20] is one visual description
that identifies three different design domains - behaviour, structure, layout
- each refined top-down into more detailed abstraction levels. It is apparent
from Fig. 4 that the structural domain can be expressed conceptually in a
Cantor set framework with N = 5. The interpretation of this representation is
that the knowledge and workforce focused on the transistor level cannot
individually contribute anything of value to the knowledge and workforce one
level above (gates, flip-flops). The same argument can be applied to the
layers above. It is the collective knowledge at one level that is of use to
the level above. This is emergence: the impossibility of using information
from a single unit to say anything meaningful about the layer above. However,
it is to be noted that the Cantor set framework for VLSI hardware design is
only a conceptual representation, not a quantitative one.
Figure 4: VLSI hardware design top-down hierarchy represented in the form of a
Cantor set. There are 5 layers in the structural hierarchy, and each layer
operates in such a fashion that the knowledge of an individual element at a
certain layer cannot help deduce the knowledge of an arbitrary element at the
layer above.
## 9 If not the Cantor set framework, then what?
While the notion of correlating arbitrary lower scales to a higher scale might
seem theoretical in nature, it can also be looked at through an engineering
and design lens. Recently, Song et al. (2022) [42] showed that the linear
viscoelasticity of arrested soft materials (gels, glasses) at a macroscopic
level is correlated with the microscopic dynamics of the material. At the
macroscopic level, the stress relaxation of the system is studied using
rheometry techniques. At the microscopic level, the superdiffusive dynamics of
microscopic clusters are studied by perturbing the material using X-ray photon
correlation spectroscopy. The nonlinear phenomenon at the smaller scale is
physically correlated with the linear phenomenon at the macroscopic scale. An
experimental understanding of this correlation can inform design experts to
interfere with and perturb the lower-scale phenomenon in such a way that the
desired output at the macroscopic level is reached. The techniques of physical
learning (supervised or unsupervised) [44, 46] could be useful here to train
and adapt a physical system to reach a desired target property by
experimentally perturbing its internal nodes and adjusting the weights of the
network in accordance with the physical law governing the emergent process.
## Dedication
This article is dedicated to Sir George Gabriel Stokes, a physicist who made
seminal contributions in the field of fluid mechanics. He served as the
Lucasian Professor of Mathematics from 1849 until 1903, and as the Master of
Pembroke College from 1902 to 1903, at the University of Cambridge. The
article written above is the result of the never-ending source of inspiration
that S.S. received by glancing fondly at the magnanimous portrait of Stokes
in the dining hall of Pembroke.
## References
* [1] Luis AN Amaral and Julio M Ottino. (2004). Complex networks. The European physical journal B, 38(2):147–162.
* [2] Philip W Anderson. (1972). More is different: broken symmetry and the nature of the hierarchical structure of science. Science, 177(4047):393–396.
* [3] Yasaman Bahri, Jonathan Kadmon, Jeffrey Pennington, Sam S Schoenholz, Jascha Sohl-Dickstein, and Surya Ganguli. (2020). Statistical mechanics of deep learning. Annual Review of Condensed Matter Physics, 11(1).
* [4] David Barbato, Francesco Morandin, and Marco Romito. (2011). Smooth solutions for the dyadic model. Nonlinearity, 24(11):3083.
* [5] Robert C Bishop. (2008). Downward causation in fluid convection. Synthese, 160(2):229–248.
* [6] Pierre-Thomas Brun. (2022). Fluid-mediated fabrication of complex assemblies. JACS Au.
* [7] Tristan Buckmaster and Vlad Vicol. (2019). Nonuniqueness of weak solutions to the navier-stokes equation. Annals of Mathematics, 189(1):101–144.
* [8] Luis Caffarelli, Robert Kohn, and Louis Nirenberg. (1982). Partial regularity of suitable weak solutions of the navier-stokes equations. Communications on pure and applied mathematics, 35(6):771–831.
* [9] Robert Cardona, Eva Miranda, Daniel Peralta-Salas, and Francisco Presas. (2019). Universality of euler flows and flexibility of reeb embeddings. arXiv preprint arXiv:1911.01963.
* [10] Robert Cardona, Eva Miranda, Daniel Peralta-Salas, and Francisco Presas. (2021). Constructing turing complete euler flows in dimension 3. Proceedings of the National Academy of Sciences, 118(19):e2026818118.
* [11] Alexey Cheskidov. (2008). Blow-up in finite time for the dyadic model of the navier-stokes equations. Transactions of the American Mathematical Society, 360(10):5101–5120.
* [12] Peter Constantin and Charles Fefferman. (1993). Direction of vorticity and the problem of global regularity for the navier-stokes equations. Indiana University Mathematics Journal, 42(3):775–789.
* [13] Toby Cubitt, David Perez-Garcia, and Michael M Wolf. (2022). Undecidability of the spectral gap. In Forum of Mathematics, Pi, volume 10. Cambridge University Press.
* [14] Olivier Darrigol. (2002). Between hydrodynamics and elasticity theory: the first five births of the navier-stokes equation. Archive for History of Exact Sciences, 56(2):95–150.
* [15] Camillo De Lellis and László Székelyhidi. (2015). On h-principle and onsager’s conjecture. Eur. Math. Soc. Newsl, 95:19–24.
* [16] Luis Escauriaza, Gregory A Seregin, and Vladimir Sverak. (2003). L3 infinity solutions of the navier-stokes equations and backward uniqueness. Russian Mathematical Surveys, 58(2):211–250.
* [17] Leonhard Euler. (1757). Principes généraux du mouvement des fluides. Mémoires de l’académie des sciences de Berlin, pages 274–315.
* [18] Solomon Feferman. (1988). Turing in the land of o (z). In A half-century survey on The Universal Turing Machine, pages 113–147.
* [19] Charles L Fefferman. (2000). Existence and smoothness of the navier-stokes equation. The millennium prize problems, 57:67.
* [20] Daniel D Gajski and Robert H Kuhn. (1983). New vlsi tools. Computer, 16(12):11–14.
* [21] AJS Hamilton. (1988). On hierarchical solutions to the bbgky hierarchy. The Astrophysical Journal, 332:67–74.
* [22] Hao Jia and Vladimír Šverák. (2014). Local-in-space estimates near initial time for weak solutions of the navier-stokes equations and forward self-similar solutions. Inventiones mathematicae, 196(1):233–265.
* [23] Nets Katz and Nataša Pavlović. (2005). Finite time blow-up for a dyadic model of the euler equations. Transactions of the American Mathematical Society, 357(2):695–708.
* [24] OA Ladyzhenskaya and AA Kiselev. (1957). On the existence and uniqueness of the solution of the nonstationary problem for a viscous incompressible fluid. Izv. Akad. Nauk SSSR Ser. Mat, 21(19579):665–680.
* [25] Lev Davidovich Landau and Evgenii Mikhailovich Lifshitz. (2013). Fluid Mechanics: Landau and Lifshitz: Course of Theoretical Physics, Volume 6, volume 6. Elsevier.
* [26] Lev Davidovich Landau and Evgenii Mikhailovich Lifshitz. (2013). Statistical Physics: Volume 5, volume 5. Elsevier.
* [27] Pierre Gilles Lemarié-Rieusset. (2018). The Navier Stokes Problem in the 21st Century. Chapman and Hall/CRC.
* [28] Jean Leray. (1934). Sur le mouvement d’un liquide visqueux emplissant l’espace. Acta mathematica, 63(1):193–248.
* [29] Francisco Torres de Lizaur. (2022). Chaos in the incompressible euler equation on manifolds of high dimension. Inventiones mathematicae, 228(2):687–715.
* [30] Seth Lloyd. (2001). Measures of complexity: a nonexhaustive list. IEEE Control Systems Magazine, 21(4):7–8.
* [31] Detlef Lohse and Siegfried Grossmann. (1993). Intermittency in turbulence. Physica A: Statistical Mechanics and its Applications, 194(1-4):519–531.
* [32] Ryan McKeown, Rodolfo Ostilla-Mónico, Alain Pumir, Michael P Brenner, and Shmuel M Rubinstein. (2020). Turbulence generation through an iterative cascade of the elliptical instability. Science advances, 6(9):eaaz2717.
* [33] Jindvrich Nevcas, Michael Rvzicka, and Vladimir Sverak. (1996). On leray’s self-similar solutions of the navier-stokes equations. Acta Mathematica, 176(2):283–294.
* [34] CW Oseen. (1911). Sur les formules de green généralisées qui se pré-sentent dans l’hydrodynamique et sur quelques. Acta mathematica, 34:205.
* [35] Jeremie Palacci, Stefano Sacanna, Asher Preska Steinberg, David J Pine, and Paul M Chaikin. (2013). Living crystals of light-activated colloidal surfers. Science, 339(6122):936–940.
* [36] Marquis De Laplace Pierre-Simon. (2007). A Philosophical Essay on Probabilities. Cosimo, Inc.
* [37] Giovanni Prodi. (1959). Un teorema di unicita per le equazioni di navier-stokes. Annali di Matematica pura ed applicata, 48(1):173–182.
* [38] Josef Rukavicka. (2014). Rejection of laplace’s demon. The American Mathematical Monthly, 121(6):498–498.
* [39] James Serrin. (1961). On the interior regularity of weak solutions of the Navier-Stokes equations. Mathematics Division, Air Force Office of Scientific Research.
* [40] Suraj Shankar, Anton Souslov, Mark J Bowick, M Cristina Marchetti, and Vincenzo Vitelli. (2022). Topological active matter. Nature Reviews Physics, 4(6):380–398.
* [41] Marvin Shinbrot. (1974). The energy equation for the navier–stokes system. SIAM Journal on Mathematical Analysis, 5(6):948–954.
* [42] Jake Song, Qingteng Zhang, Felipe de Quesada, Mehedi H Rizvi, Joseph B Tracy, Jan Ilavsky, Suresh Narayanan, Emanuela Del Gado, Robert L Leheny, Niels Holten-Andersen, et al. (2022). Microscopic dynamics underlying the stress relaxation of arrested soft materials. Proceedings of the National Academy of Sciences, 119(30):e2201566119.
* [43] Olaf Sporns. (2007). Complexity. Scholarpedia, 2(10):1623.
* [44] Menachem Stern, Chukwunonso Arinze, Leron Perez, Stephanie E Palmer, and Arvind Murugan. (2020). Supervised learning through physical changes in a mechanical system. Proceedings of the National Academy of Sciences, 117(26):14843–14850.
* [45] Menachem Stern and Arvind Murugan. (2022). Learning without neurons in physical systems. arXiv preprint arXiv:2206.05831.
* [46] Menachem Stern, Matthew B Pinson, and Arvind Murugan. (2020). Continual learning of multiple memories in mechanical networks. Physical Review X, 10(3):031044.
* [47] Steven Strogatz, Sara Walker, Julia M Yeomans, Corina Tarnita, Elsa Arcaute, Manlio De Domenico, Oriol Artime, and Kwang-Il Goh. (2022). Fifty years of ‘more is different’. Nature Reviews Physics, pages 1–3.
* [48] Terence Tao. (2016). Finite time blowup for an averaged three-dimensional navier-stokes equation. Journal of the American Mathematical Society, 29(3):601–674.
* [49] Terence Tao. (2017). On the universality of the incompressible euler equation on compact manifolds. arXiv preprint arXiv:1707.07807.
* [50] Terence Tao. (2019). On the universality of the incompressible euler equation on compact manifolds, ii. non-rigidity of euler flows. arXiv preprint arXiv:1902.06313.
* [51] Clifford Truesdell. (1960). A program toward rediscovering the rational mechanics of the age of reason. Archive for history of exact sciences, 1(1):3–36.
* [52] David H Wolpert. (2008). Physical limits of inference. Physica D: Nonlinear Phenomena, 237(9):1257–1281.
# Topological defect coarsening in quenched smectic-C films
analyzed using artificial neural networks
Ravin A. Chowdhury, Adam A. S. Green, Cheol S. Park, Joseph E. Maclennan, and
Noel A. Clark
Department of Physics and Soft Materials Research Center, University of
Colorado, Boulder, Colorado, 80309, USA
###### Abstract
Mechanically quenching a thin film of smectic-C liquid crystal results in the
formation of a dense array of thousands of topological defects in the director
field. The subsequent rapid coarsening of the film texture by the mutual
annihilation of defects of opposite sign has been captured using high-speed,
polarized light video microscopy. The temporal evolution of the texture has
been characterized using an object-detection convolutional neural network to
determine the defect locations, and a binary classification network customized
to evaluate the brush orientation dynamics around the defects in order to
determine their topological signs. At early times following the quench,
inherent limits on the spatial resolution result in undercounting of the
defects and deviations from expected behavior. At intermediate to late times,
the observed annihilation dynamics scale in agreement with theoretical
predictions and simulations of the $2$D XY model.
## I Introduction
Topological defects, which are stable disclinations or dislocations in ordered
physical systems, are typically formed as a result of spontaneous symmetry-
breaking during phase transitions [1, 2]. The formation and evolution of such
defects, which have been predicted, and in some cases observed, in such
diverse contexts as cosmology [3] and condensed matter [4], is a classical
phenomenon that has been studied in many physical systems, including thin
magnetic films [5] and superfluids [6]. Liquid crystals (LCs) are a
particularly convenient medium in which to study the behavior of such defects
experimentally, with disclinations easily visualized in both the nematic and
tilted smectic phases [1]. The structure and dynamics of topological defects
in quasi-two-dimensional liquid crystals is broadly reviewed in [7].
Fluid smectics are fundamentally interesting because they can be drawn into
extremely thin, freely-suspended films of the order of a few molecular layers
thick, allowing the study of physics in two dimensions ($2$D) [8, 9, 10]. In
the smectic-A (SmA) liquid crystal phase, the long axes of the molecules are
oriented, on average, along the layer normal, while in the smectic-C (SmC)
phase they are tilted from the layer normal, breaking the axial symmetry of
the SmA phase and introducing topological complexity. The topology of freely-
suspended SmC films may be described by projecting the average molecular long
axis (the director) onto the plane of the layers, defining a vectorial
orientation field called the $c$-director. When these films are viewed in
reflection under crossed polarizers, this orientation field typically creates
a schlieren texture, with characteristic, cross-like extinction brushes
centered on any topological defects [11].
The visual appearance of defects in SmC films depends, in general, on their
topological strength, the illumination conditions, and the relative locations
and orientations of the other defects [12]. Several experimental studies of
SmC defect dynamics have considered films with only a small number of defects
[13, 14, 15, 16]. When there are only a few defects in the field of view, and
they are well separated, they can either be identified manually or tracked
automatically by cross-correlating the images with synthetically generated
templates of model defect textures [14].
However, in dense arrays of topological defects, such as those generated in
the quenching experiments described here, the orientation fields around the
defects produce irregular and complex schlieren textures in polarized light,
making detecting and tracking the defects using the previously implemented
techniques impractical. Machine learning has been shown to be a useful tool
enabling object detection in images obtained in such diverse areas as solid-
state physics[17], cellular biology [18, 19], and in protein folding
experiments [20]. We shall demonstrate here that deep learning can be used to
solve the seemingly intractable problem of detecting topological defects in
dense, two-dimensional arrays in LC films.
The analysis of coarsening dynamics in LC systems with large numbers of
densely-packed topological defects has been found historically to be
challenging in both experimental and numerical studies because of the
practical difficulty of detecting the defects. The coarsening dynamics of
model $2$D SmC films with high defect densities have been studied extensively
using simulations [21, 22, 23, 24, 25]. Experimental studies of defect
dynamics in thin, quenched SmC films were reported by Muzny [26], who
described the basic phenomenology of quenching, proposed a mechanism for
defect generation, and measured the approach dynamics of defect pairs and the
decay of defect number with time following the quench.
In more recent experiments [27], high-speed video microscopy was used to
capture the textures of mechanically quenched smectic-C films with much better
temporal resolution than in Muzny’s experiments. A preliminary analysis of the
evolution of the observed arrays of topological defects using a convolutional
neural network (CNN), a type of artificial neural network that is ideal for
image analysis, demonstrated the utility of machine learning but also revealed
the limitations imposed by training the network using simulated images,
discussed further below.
In the present study, we analyzed the same experimental images but using a
newer CNN trained on experimental rather than simulated images to determine
the locations of the defects. The network was able to predict the defect
locations starting at earlier times and with a much higher degree of accuracy
than before, measurements verified by comparison with manually determined
(human-annotated) coordinates. In addition, a binary classification network
was trained to distinguish between defects of opposite topological sign,
allowing a comprehensive analysis of the coarsening dynamics in these films.
The observed decay of defect density with time was compared with the XY model,
with the predictions of a model proposed by Yurke and co-workers [21], and
with the results of numerical simulations [22]. At early times ($t<0.4$ s),
when the defect density is high, many of the defects have a separation that is
smaller than the imaging and machine learning spatial resolution limits, and
the number of defects counted is lower than theoretically predicted. At later
times, the defect density exhibits power law decay with an exponent of $0.9$,
in agreement with theory.
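The power-law fit referred to above can be sketched in a few lines: the decay exponent is recovered by linear regression in log-log space. The data below are synthetic, constructed only to illustrate the procedure, not the experimental densities.

```python
import numpy as np

def fit_power_law_exponent(t, density):
    """Fit density ~ t**(-alpha) by linear regression in log-log space
    and return the decay exponent alpha (positive for decaying density)."""
    slope, _ = np.polyfit(np.log(t), np.log(density), 1)
    return -slope

# Synthetic densities decaying as t^-0.9 should recover alpha = 0.9.
t = np.linspace(0.5, 6.0, 50)       # seconds after the quench
rho = 100.0 * t ** -0.9             # defect density (arbitrary units)
alpha = fit_power_law_exponent(t, rho)
print(round(alpha, 2))              # -> 0.9
```

In practice the fit would be restricted to intermediate and late times, since the undercounting at early times biases the apparent exponent.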
## II Experiment
Figure 1: Schematic of the film quenching experiment. A thin smectic-C film
drawn across a small, circular opening in a sealed chamber is temporarily
deformed to a dome by increasing the chamber pressure. A sudden release of the
excess pressure then causes the film to return rapidly to its original planar
geometry, a mechanical quench that results in the spontaneous formation of
topological defects in the director field (inset). The defects are visualized
using polarized reflected light microscopy and the coarsening of the defect
texture recorded using a high-speed video camera.
In the quenching experiments, smectic-C films are drawn across a circular
aperture in a glass cover slip set in the opening of an otherwise airtight
chamber. Increasing the air pressure in the chamber causes the originally flat
film to be distorted into a dome. When the pressure is suddenly released, the
film collapses rapidly to being planar again, a mechanical quench that
increases the hydrostatic pressure in the film, causing a short-lived
transition to the smectic-A phase. The subsequent return to the smectic-C
phase results in the spontaneous appearance of thousands of $2\pi$
disclinations, topological defects of unit strength (i.e., with winding
numbers $\pm 1$), in the film. This initially dense array of defects then
coarsens by the mutual annihilation of defects of opposite sign [26]. The
experimental setup is shown in Fig. 1 and described in further detail by Green
[28]. The evolution of the defect texture was captured using a high-speed
video camera (Phantom V$12.2$) with a spatial resolution of $1104\times{}800$
pixels and a bit depth of $16$, operating at $500\,$fps.
The liquid crystal material used in these experiments was PM$2$, a $50$:$50$
mixture by weight of SYNTHON ST$00552$
($2$-($4$-$n$-hexyloxyphenyl)-$5$-$n$-octylpyrimidine) and SYNTHON ST$00557$
($5$-$n$-decyl-$2$-($4$-$n$-octyloxyphenyl)pyrimidine) [29], with the phase
sequence SmC $52\,^{\circ}$C SmA $68\,^{\circ}$C N $72\,^{\circ}$C Iso [30]. Films $5\,$mm
in diameter and $20$–$30$ molecular layers ($60$–$90\,$nm) thick were drawn in
the SmC phase at room temperature and the quenching experiments conducted at
$35\,^{\circ}$C.
The $c$-director field in a ${\sim}0.6\,$mm$^{2}$ region of the film was visualized
using polarized reflection microscopy. Under crossed polarizers, the film
displays a schlieren texture, with each defect core surrounded by four
alternating dark and light brushes. In order to be able to visualize the
coarsening dynamics at high frame rates, the average intensity of the image
was increased by decrossing the polarizers. This halves the number of brushes
around each defect, transforming the cross-like brush textures to bow ties
[31]. At the earliest times that defects can be observed following a typical
quench, the average separation between defects is typically around $20\,\mu$m.
Defects of opposite sign exert long-range $1/r$ attractive elastic forces on
one another, while those of the same sign repel, interacting like infinite
lines of electrical charge [26]. The defects also exhibit Brownian motion,
diffusing laterally, in the plane of the film [13]. Over time, neighboring
$+1$ and $-1$ defects approach each other and mutually annihilate, with most
of the defects disappearing within a few seconds of the quench. A typical
coarsening sequence is shown in Fig. 2.
Figure 2: Defect coarsening in a mechanically quenched SmC film. The
topological defects are located at the points from which the pairs of light
and dark brushes (resembling bow ties) emanate. Thousands of $+1$ and $-1$
defects are generated during a typical quench and these mutually annihilate
over time. The polarizer and analyzer are decrossed by $60\unit{\degree}$. The
intensities of these images, which show only a small part of the film, have
been normalized as described in the text.
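The pairwise interaction described above, with defects attracting or repelling like infinite lines of electric charge and a force magnitude falling off as $1/r$, can be sketched as follows. The elastic constant `k` is a hypothetical placeholder, not a value from the paper.

```python
import numpy as np

def pair_force(r_i, r_j, q_i, q_j, k=1.0):
    """Force on defect i due to defect j, modelled as a 2D Coulomb
    (infinite-line-charge) interaction of magnitude k*|q_i*q_j|/r:
    repulsive for like signs, attractive for opposite signs.
    k is a hypothetical elastic constant."""
    d = np.asarray(r_i, float) - np.asarray(r_j, float)
    r = np.linalg.norm(d)
    return k * q_i * q_j * d / r**2   # (q_i*q_j/r) along the unit vector d/r

# A +1/-1 pair attracts: the force on the +1 defect at (1, 0) points
# in the -x direction, toward the -1 defect at the origin.
f = pair_force([1.0, 0.0], [0.0, 0.0], +1, -1)
print(f)
```

This sketch ignores the Brownian motion and the overdamped mobility that set the actual approach dynamics.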
## III Topological Defect Detection
A rigorous analysis of the dynamics of topological defects in smectic-C films
requires identifying their topological signs and tracking their locations as a
function of time. Conventional feature-detection algorithms that use intensity
thresholding or edge-detection of object boundaries have been successfully
used in many investigations of soft materials to detect features with regular
shapes and well defined boundaries, such as colloidal particles in
suspension[32] and islands and droplets on smectic films [33, 34, 35, 36].
Detecting topological defects in liquid crystals is, however, a much more
challenging task because the defects are identified principally by the diffuse
schlieren textures surrounding them, which are typically irregular, have no
well-defined boundaries, and vary in appearance. Quenching a film generates
thousands of defects, resulting in a complex schlieren texture with defect
core separations of as little as a few microns in our experiment. Accurate
analysis of such textures is not feasible using our previous methods, in which
we detected and tracked well-separated defects in equilibrium films by cross-
correlating the experimental images with synthetically generated defect
templates to determine their locations and brush orientations [14].
We recently demonstrated the utility of modern deep learning networks for
defect detection, using the YOLOv2 network [37] trained on a large set of
computed images of defect textures generated by Monte-Carlo simulations of the
$2$D XY model [27]. Although these training images were modified to emulate
the experimental images more closely, nevertheless, several features present
only in the experimental images regularly caused the trained model to predict
false positives during analysis. For example, because dark speckles of the
kind seen in many of the experimental images were not present in the training
images, the network was not able to recognize these as being artifacts rather
than defects. In addition, the black mask of the microscope field stop was not
considered in the computed training images, resulting in false positives along
the edges of the field of view.
In the present analysis, we used YOLOv$5$, a deep learning network designed
for fast object detection [38], that was trained on real experimental images,
allowing us to achieve a much higher detection accuracy than before. YOLOv$5$
is (at this time) the newest iteration of YOLO, a lineage of neural networks
that perform both bounding box predictions (object localization) and
classification tasks for every object in the image, in a single instantiation
of the network. Technical details of the neural network are given in Appendix
A.
### III.1 Image processing
Images captured during nine different quenching experiments were analyzed. To
compensate for variations in the brightness and contrast of these data sets,
all of the images were normalized to have the same average intensity and
dynamic range. This ensured that all of the images input to the neural network
had similar statistical properties, making it easier for the correct weights
to be developed in training. This reduced the number of false positives and
significantly reduced the number of training epochs required. A typical
example of image normalization is shown in Fig. 3.
Figure 3: Normalization of experimental images. (a) Raw experimental image
($t=0.4\,$s). (b) Normalized image. The raw images extracted from the
quenching videos were normalized to have the same mean image intensity and
dynamic range. This procedure ensured that the statistics of all of the
experimental images from different quenching events would match those of the
training set.
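A minimal sketch of such a normalization, stretching each frame to a common dynamic range and shifting it to a common mean intensity, is given below. The target values are placeholders, not the settings actually used in the paper.

```python
import numpy as np

def normalize_frame(img, target_mean=0.5, target_range=1.0):
    """Rescale a frame so that all videos share the same mean intensity
    and dynamic range before being fed to the network (a minimal sketch;
    the target values are placeholders)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    stretched = (img - lo) / (hi - lo)            # dynamic range -> [0, 1]
    shifted = stretched - stretched.mean() + target_mean
    return np.clip(shifted * target_range, 0.0, 1.0)

frame = np.random.default_rng(0).integers(20, 200, (8, 8)).astype(float)
out = normalize_frame(frame)
print(out.mean())   # close to the target mean of 0.5
```

Matching the image statistics across quenching events in this way is what allows a single set of network weights to generalize to all nine data sets.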
### III.2 Model Training
The neural network was trained on experimental data, using a set of $141$
images chosen at random from different film quenching experiments and divided
into training and testing sets in the ratio $80$:$20$. Gradient-descent
calculations were carried out using the training set and the model performance
was logged every cycle using the testing set. The locations of the defects in
the training data (the ‘ground truth’ locations) were determined manually.
Training was carried out using Google’s cloud research computing service, the
Google Colaboratory, on an NVIDIA Tesla T$4$ GPU with $16\,$GB of memory [39].
Four YOLOv$5$ models of different sizes (listed in Appendix A) were trained
for $500$ epochs each on the same training set, using a batch size of $16$.
Model checkpoints (weights) were stored every epoch and the checkpoint
yielding the highest mean average precision (mAP) on the testing set was
selected. Choosing the optimal checkpoint of the neural network in this manner
helped to prevent over-fitting, which is generally a concern when the training
data sets have fewer than $500$ images, as in our case.
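The 80:20 split and the best-mAP checkpoint selection described above can be sketched as follows. The helper names and the random seed are ours, for illustration only, and the mAP history is invented.

```python
import random

def split_80_20(items, seed=0):
    """Shuffle and split a dataset into training and testing sets in the
    ratio 80:20 (the seed is a placeholder)."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(0.8 * len(items))
    return items[:cut], items[cut:]

def best_checkpoint(map_per_epoch):
    """Return (epoch, mAP) of the checkpoint with the highest mAP on the
    held-out testing set, the selection rule described above."""
    epoch = max(range(len(map_per_epoch)), key=lambda e: map_per_epoch[e])
    return epoch, map_per_epoch[epoch]

train, test = split_80_20(range(141))
print(len(train), len(test))        # -> 112 29

# Illustrative mAP history: rises, then degrades as the model over-fits.
history = [0.62, 0.81, 0.93, 0.97, 0.95, 0.94]
print(best_checkpoint(history))     # -> (3, 0.97)
```

Selecting the checkpoint at the mAP peak, rather than the final epoch, is what guards against the over-fitting risk of a small training set.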
YOLOv$5$ uses several data augmentation techniques that further reduce the
possibility of over-fitting by changing the appearance of the training images
in order to increase artificially the amount of training data. These
modifications include carrying out vertical and horizontal flips, cropping,
rotation, and a new method introduced with YOLOv$5$ called mosaic augmentation
that meshes sections of multiple images together. In addition, the image
quality may be altered intentionally using randomized exposure, saturation, or
blurring [38].
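One of the simplest of these augmentations, a horizontal flip together with the corresponding bounding-box update, can be sketched as below. YOLO-style normalized box coordinates are assumed; YOLOv5's own augmentation pipeline is considerably more elaborate.

```python
import numpy as np

def hflip_with_boxes(img, boxes):
    """Horizontally flip an image and update YOLO-style boxes given as
    (x_center, y_center, w, h) in fractions of the image size. A minimal
    sketch of one augmentation; only the x-center is mirrored."""
    flipped = img[:, ::-1]
    out = [(1.0 - xc, yc, w, h) for (xc, yc, w, h) in boxes]
    return flipped, out

img = np.arange(12).reshape(3, 4)
boxes = [(0.25, 0.5, 0.1, 0.1)]
_, new_boxes = hflip_with_boxes(img, boxes)
print(new_boxes)   # -> [(0.75, 0.5, 0.1, 0.1)]
```

Because defect textures have no preferred orientation in the film plane, such flips enlarge the effective training set without changing the physics the network must learn.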
Square bounding boxes were used to define the defect core locations. While
keeping the bounding boxes small has the benefit of increasing the precision
of the detections and enabling defect detection even when the defects are
close together, this also reduces the number of pixels associated with each
defect, making training more difficult. The smallest bounding box size that
our network could train on reliably was found to be $11\times{11}$ pixels.
### III.3 Neural Network Performance
The performance of the four trained neural network models was evaluated using
a control set of $48$ normalized test images which were not included in the
training data and hence had never been ‘seen’ before by the network. The
detection results were compared to manually obtained ‘ground-truth’ locations
using YOLOv$5$’s built in performance metrics. Of particular significance are
the mean Average Precision (mAP) values and the precision recall (F$1$)
scores, since these metrics contain information about both false negatives and
false positives [40]. The model with the highest recorded mAP score was the
smallest model, YOLOv$5$s, which achieved an mAP of $0.970$ and a peak F$1$
score of $0.96$. The metrics computed for the various models are summarized in
Table 1 in Appendix B.
Although using image cross-correlation to identify defects yielded accurate
results in experiments where the density of defects in the film was low [14],
defect detection using this method becomes intractable in films with high
defect densities, as is the case at early times in the quenching experiments
described here. Cross-correlation is also sensitive to the presence of
artifacts in the image, such as the liquid crystal deposits on the film
chamber window visible as black speckles in most of our experimental images of
quenched films. The resulting false predictions cause this method to have a
low mAP, typically around $0.5$ (see Table 1 in Appendix B). Deep learning
networks, in contrast, can be trained to ignore such artifacts. The machine
learning method developed by Minor et al. using YOLOv2 trained on synthetic
images to analyze the quenching experiments occasionally mis-identified the
black speckles in the experimental images as topological defects [27], and the
highest mAP achieved with this approach was around $0.8$. In the present
study, the YOLOv$5$ network trained on real images that included speckles and
the edges of the field stop obtained significantly better mAP scores.
## IV Defect Tracking and Sign Classification
### IV.1 Trajectory Linking
Once the defect locations had been determined in each frame, linked
trajectories were constructed using TrackPy [41], a Python implementation of
an algorithm originally developed for tracking colloidal particles [32]. The
‘memory’ feature of this program enables complete trajectories to be
constructed even when there are occasional missed detections.
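The role of the 'memory' feature can be illustrated with a simplified greedy linker, a stand-in for TrackPy's algorithm (which additionally optimizes the frame-to-frame assignments globally): a track that vanishes for up to `memory` frames is still eligible to be extended, so a missed detection does not split a trajectory.

```python
import math

def link_frames(frames, search_range, memory=2):
    """Greedy nearest-neighbour linking of per-frame detections into
    trajectories, keeping vanished tracks alive for `memory` frames
    (a simplified stand-in for the TrackPy linker)."""
    tracks = []                        # each: {'points': [...], 'last': frame}
    for f, points in enumerate(frames):
        unmatched = list(points)
        # Try to extend tracks last seen within `memory` frames.
        for tr in tracks:
            if f - tr['last'] > memory or not unmatched:
                continue
            p = tr['points'][-1]
            best = min(unmatched, key=lambda q: math.dist(p, q))
            if math.dist(p, best) <= search_range:
                tr['points'].append(best)
                tr['last'] = f
                unmatched.remove(best)
        for q in unmatched:            # remaining detections start new tracks
            tracks.append({'points': [q], 'last': f})
    return tracks

# One defect drifts right; its detection is missing in frame 2, but the
# memory setting bridges the gap into a single trajectory.
frames = [[(0.0, 0.0)], [(1.0, 0.0)], [], [(3.0, 0.0)]]
tracks = link_frames(frames, search_range=2.5, memory=2)
print(len(tracks))   # -> 1
```

With `memory=0`, the same input would instead yield two separate trajectories.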
Visual comparisons of the computed defect trajectories following several
different quenching events with manually determined defect locations
demonstrated broad agreement, providing validation of the predictions of the
neural network. A typical example of computed defect trajectories is shown in
Fig. 4.
Two key assumptions about the defect dynamics were made in linking the defect
locations to form trajectories. First, it was assumed that all of the
topological defects present in the field of view were created at the time of
the quench. Second, we assumed that defects of opposite topological sign
annihilate and disappear in a pairwise manner. Defects that entered or left
the frame during the coarsening experiment were exceptions that required
special treatment when constructing their trajectories.
Figure 4: Defect trajectories in a quenched film as determined by the trained
neural network (green tracks) compared with defect locations identified
manually (white crosses) every $500$ frames (at $1$ second intervals). The
trajectories, which were determined over the course of $3000$ frames (an
elapsed time of $6$ seconds), are superimposed here on the final image in the
video sequence, with the surviving $+1$ and $-1$ defects shown respectively in
red and blue. The dark speckles are liquid crystal droplets deposited on the
chamber window by previously ruptured films. The neural network was trained to
ignore these artifacts. The polarizer/analyzer settings are as in Fig. 3.
### IV.2 Defect Sign Classification from Brush Orientation Dynamics
When SmC films at equilibrium are observed in real time in the polarized light
microscope, the signs of any topological defects may be readily determined by
judiciously varying the decrossing angle of the polarizers and rotating the
film. This procedure can not, however, be followed in the quenching
experiments, where most of the defects disappear less than a second after the
quench occurs. The $+1$ and $-1$ defects are generally very similar in
appearance and distinguishing them in static images alone is difficult. Near
their cores, both kinds of defect resemble bow ties when the polarizers are
decrossed but their orientations and the schlieren textures around them are
highly variable, depending in a non-trivial way on their locations relative to
other defects in the film. We have nevertheless been able to solve this
fundamental classification problem by using their characteristic orientational
dynamics to discriminate between the $+1$ and $-1$ defects.
First, the orientations of all of the defects, by which we mean the
orientations of the bright brushes in the schlieren texture around the
defects, were determined in every video frame. This was achieved by the
previously mentioned technique of cross-correlating the region around every
defect core with small, synthetic defect templates generated at different
angles. When the computed defect orientations are plotted as a function of
time, as in the example of Fig. 5, it is immediately apparent that the defects
fall into two classes. The brush orientations of the first class of defect
show little variation over time, with only small azimuthal fluctuations of
less than $10\unit{\degree}$. The mean orientation of these defects was found,
in all of the quenched films, to be ${\sim}135\unit{\degree}$. The brush
orientations of the second class of defects, in contrast, are not confined to
a particular azimuth and vary substantially over time.
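As an illustration of this template-matching step, the sketch below generates synthetic two-brush ("bow tie") templates from an assumed analytic $\cos^{2}$ intensity profile and selects the angle maximizing the normalized cross-correlation with the image patch around a defect core. The analytic template form is an assumption for illustration; the actual templates and scoring used in the study may differ.

```python
import numpy as np

def bowtie_template(n, theta):
    """Synthetic two-brush ('bow tie') intensity pattern on an n x n grid,
    oriented at angle theta (radians). The cos^2 profile is an assumed,
    illustrative stand-in for the study's synthetic defect templates."""
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    phi = np.arctan2(ys - c, xs - c)
    r = np.hypot(xs - c, ys - c)
    return np.cos(phi - theta) ** 2 * (r > 1)  # mask out the defect core

def estimate_orientation(patch, angles_deg):
    """Return the angle (degrees) whose template maximizes the normalized
    cross-correlation with the patch around a defect core."""
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    scores = []
    for a in angles_deg:
        t = bowtie_template(patch.shape[0], np.deg2rad(a))
        t = (t - t.mean()) / (t.std() + 1e-12)
        scores.append(np.sum(p * t))
    return angles_deg[int(np.argmax(scores))]
```

Since the brush pattern has a two-fold symmetry, orientations are only meaningful modulo $180\unit{\degree}$, so candidate angles need only span $[0\unit{\degree},180\unit{\degree})$.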
Since $+1$ defects have full axial ($C_{\infty}$) symmetry, their brush
orientation is relatively insensitive to thermal orientation fluctuations of
the $c$-director. Defects of strength $-1$, on the other hand, are
characterized by alternating bend and splay distortions of the surrounding
$c$-director field and have only two-fold ($C_{2}$) symmetry. A consequence of
this anisotropy is that the appearance of the $-1$ defects is effectively more
sensitive to thermal fluctuations and to their environment, responding readily
to reorientations of the surrounding $c$-director field caused, for example,
by distant defect annihilations or resulting from spatial rearrangements
associated with the approach of other defects in the film. The orientational
mobility of the brushes around $-1$ defects was previously reported by Wachs
[15], who observed that in thin films, an isolated $-1$ defect typically
reorients shortly before annihilating with a $+1$ defect, maintaining the
continuity of the director field along the line connecting the two defects.
The appearance of the $+1$ defect, in contrast, seems to be relatively
unaffected by the impending annihilation. This phenomenon has recently also
been observed by Missaoui et al. [16] in thick SmC films and modelled
theoretically by Tang and Selinger [42]. We conclude, therefore, that the
first observed class of defects in our experiments has topological charge $+1$
and the second class topological charge $-1$.
Figure 5: Bright brush orientation as a function of time for a typical pair of
annihilating defects. Quenching occurs at $t=0$ and the pair annihilates at
$t=9.6\,$s. Over the lifetime of the pair, the brushes around the $+1$ defect
(red trace) fluctuate about an azimuth of $135\unit{\degree}$ but do not
change their average orientation significantly. The $-1$ defect (blue trace),
in contrast, is more orientationally mobile. Initially oriented at
$75\unit{\degree}$ in this example, the brushes soon rotate to around
$20\unit{\degree}$, where they remain until annihilation. The insets show
snapshots of the defects shortly after the quench ($t=0.5\,$s) and shortly
before annihilation ($t=8\,$s). The variations in brush orientation over time
are evaluated by a binary classification network in order to determine the
topological signs of all of the defects.
Second, determination of defect sign on the basis of the brush orientation
dynamics was achieved using a custom binary classification network comprising
a fully connected model with one hidden layer. The hidden layer adds the
complexity necessary to account for nonlinearities in the relationship between
brush orientation dynamics and defect strength. The orientational data were
reduced to three input features for each defect: the mean brush orientation,
the standard deviation of the orientation, and the defect lifetime. The model
determined the probability of each defect having strength $+1$ or strength
$-1$. Defects with readily identifiable strengths were selected manually to
generate training data for the model. $74$ such defects were labelled in
total, $60$ for training and $14$ for testing. The model was trained for
$1000$ epochs.
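The classification step can be sketched as a minimal one-hidden-layer binary network in plain NumPy, trained by gradient descent on the three features named above. The layer width, learning rate, feature scaling, and the synthetic feature distributions in the usage below are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_classifier(X, y, hidden=8, lr=0.5, epochs=1000):
    """Fully connected binary classifier with one hidden layer, trained by
    full-batch gradient descent on the cross-entropy loss. Features per
    defect: mean brush orientation, orientation standard deviation, and
    defect lifetime (all scaled to O(1))."""
    n, d = X.shape
    W1 = rng.normal(0.0, 1.0, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, hidden); b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)            # hidden activations
        p = sig(H @ W2 + b2)            # P(defect has strength -1)
        g = (p - y) / n                 # gradient of loss w.r.t. output logit
        gH = np.outer(g, W2) * H * (1.0 - H)   # backprop to hidden layer
        W2 -= lr * H.T @ g; b2 -= lr * g.sum()
        W1 -= lr * X.T @ gH; b1 -= lr * gH.sum(axis=0)
    return lambda Xq: sig(sig(Xq @ W1 + b1) @ W2 + b2)

# Illustrative synthetic features: [mean orientation (deg), std (deg), lifetime (s)].
# +1 defects: brushes near 135 deg with small spread; -1: mobile, large spread.
Xp = np.column_stack([rng.normal(135, 5, 40), rng.normal(5, 2, 40), rng.uniform(1, 10, 40)])
Xm = np.column_stack([rng.uniform(0, 180, 40), rng.normal(40, 10, 40), rng.uniform(1, 10, 40)])
X = np.vstack([Xp, Xm]) / np.array([180.0, 90.0, 10.0])   # crude feature scaling
y = np.concatenate([np.zeros(40), np.ones(40)])
predict = train_classifier(X, y)
```

On features this well separated, the orientation standard deviation alone essentially determines the class, which is consistent with the observation that brush mobility is the distinguishing signature of the $-1$ defects.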
## V Results
### V.1 Detection and Classification of Defects
As we have seen in Fig. 4, the defect locations determined by YOLOv$5$ are in
very good agreement with those determined manually. Starting at early times,
soon after the quench ($t<1\,$s), the network even identifies defects in high-
density regions that are missed by visual inspection. The numbers of $+1$ and
$-1$ defects identified in each video frame were found to be roughly equal, as
expected. Small deviations from parity are inevitable since only a finite
region of the film is imaged, causing some of the partner defects to be
located outside the field of view at any given time. Visual inspection
confirmed that the number of false positives in any image was typically very
small, even near the field stop defining the edges of the field of view or in
the presence of artifacts in the image. An example of defects detected by the
neural network and classified by topological sign is shown in Fig. 6.
Figure 6: Defect locations predicted by the neural network and classified
according to topological sign. The image shows a SmC film six seconds after
quenching, with the defects color-coded according to the topological sign
predicted by the custom binary classification network. The defect locations
determined by the deep-learning model proved generally to be highly
accurate. In this example, every defect was detected: $18$ of strength $+1$
and $21$ of strength $-1$. The film diameter is $5\,$mm, much larger than the
image, the black borders here corresponding to the edges of the microscope
field stop. The polarizer/analyzer settings are as in Fig. 3.
### V.2 Coarsening Dynamics
The decay of $N(t)$, the number of topological defects, as a function of time
measured in nine different quenching experiments showed similar behavior.
Defects could only be detected starting about $0.2$ seconds after the quench
was initiated, when the previously distorted film was flat and in focus again.
In every experiment, $N$ was observed to decrease slowly at first and then
more quickly, decaying algebraically at longer times, with a roughly constant
exponent. A typical measurement of $N(t)$ is shown in Fig. 7.
The overdamped, continuous XY model describing locally interacting, classical
spins in $2$D predicts an inverse power-law relation, $N\propto t^{-1}$ [1].
It is readily apparent from the graph, however, that the observed decay occurs
more slowly overall than predicted by this model and deviates significantly
from a simple power law at early times. Yurke et al. carried out computer
simulations of the $2$D XY model and described the observed coarsening
behavior with $N\ln{N}\propto t^{-1}$, the logarithmic correction accounting
for the effective drag on the defect cores [21]. Jang et al. also carried out
numerical simulations including these frictional forces, finding that $N$
varied as $N\propto t^{-0.9}$ [22], in agreement with the asymptotic scaling
behavior predicted by the Yurke model. As is evident from Fig. 7, our
experimental data are fit well by the Yurke and Jang predictions at
intermediate and long times ($t\gtrsim 0.4$ s).
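Extracting the asymptotic decay exponent amounts to a linear fit in log-log coordinates. The sketch below applies this to synthetic counts following the Jang et al. scaling; the amplitude is an arbitrary illustrative choice.

```python
import numpy as np

def decay_exponent(t, N):
    """Least-squares estimate of alpha in N(t) ~ t**(-alpha),
    from the slope of log N versus log t."""
    slope, _intercept = np.polyfit(np.log(t), np.log(N), 1)
    return -slope

# Synthetic defect counts with N proportional to t^-0.9 (amplitude assumed)
t = np.linspace(0.4, 10.0, 50)   # seconds, intermediate/long times
N = 250.0 * t ** -0.9
```

Applied to real measurements, such a fit should only be performed on the asymptotic regime ($t\gtrsim 0.4$ s here), since the early-time data deviate from power-law behavior.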
Figure 7: Defect number vs. time in a quenched smectic-C film. The large
number of topological defects generated in a typical quenching experiment
decreases over time by the mutual annihilation of $+1$, $-1$ defect pairs.
Images of the film were analyzed starting when the film came back into focus,
about $0.2$ seconds after the mechanical quench. A fit with the diffusive
model (green), which describes algebraic decay of the defect number $N$ with
an exponent of $1$, predicts coarsening at long times that is faster than
observed. The logarithmic correction term of Yurke’s model (blue) results in a
slower annihilation rate that closely approximates the experimental
measurements. Power-law decay with an exponent of $0.9$, as suggested by the
simulations of Jang et al., is in accord with the Yurke model and matches the
data similarly well (red). The experimental data are duplicated here for visual
clarity.
At early times, between $0.2$ and $0.4$ seconds after the quench, the observed
dynamics deviate significantly from power-law behavior. Extrapolating the
asymptotic slope to early times leads to the prediction that at $0.2$ seconds
there should be around $800$ topological defects, about $200$ more defects
than were detected by the neural network. This discrepancy is attributed to
the undercounting of defects that are closer than about $10\,\mu$m ($11$
pixels), which is the resolution limit for detection by the trained network.
This limit is also manifest in defect pair correlation functions computed from
the images, which are identically zero for defect separations closer than
$10\,\mu$m, as shown in Fig. 10 in Appendix C. Comparing the pair correlation
curves derived from experimental data with those computed from simulations,
also plotted in Fig. 10, supports the notion that the neural network is unable
to discern topological defects that are very close together in the
experimental images, resulting in undercounting at early times when the defect
density is highest.
Finally, it has been suggested that there are circumstances in which the $2$D
XY model would be expected to exhibit exponential rather than power-law decay
at early times (see, for example, [43, 44]), which would clearly result in a
knee of the kind shown in the plot of Fig. 7. However, the inherent
uncertainty in our early-time data precludes any systematic consideration of
this possibility.
## VI Summary
In summary, we have demonstrated a deep-learning approach that allows us to
detect topological defects in thin smectic-C liquid crystal films with a high
level of accuracy. We have also developed a rigorous method for classifying
the topological signs of the defects, using the distinctive orientational
dynamic behavior of the director fields around them. A binary classification
network trained to perform this task gives predictions consistent with the
known physical properties of such arrays of defects, for example that mutual
annihilation only occurs between defects of opposite sign and that in any
given region of the film, the numbers of $+1$ and $-1$ defects are expected to
be approximately equal.
We compared our experimental observations of the defect coarsening dynamics in
films with several theoretical models. At long times, the number of defects
was observed to decrease more slowly than predicted by the purely diffusive XY
model, showing instead power-law behavior with an exponent of $0.9$ in
agreement with the model of Yurke et al. and the simulations of Jang et al.
The anomalously slow annihilation rate observed at early times is attributed
to the undercounting of defects, an unavoidable consequence of the inability
of the neural network to resolve defects with separations smaller than
$10\,\mu$m.
The deep-learning method described here could be improved by designing custom
networks explicitly for detecting topological defects, which could allow a
reduction in the number of required layers in the neural network, making
defect detection faster and more transparent. The kind of machine learning
implemented here could readily be applied to other detection tasks in soft
materials, such as tracking a variety of inclusions in smectic films,
colloidal particles in suspension, and bacterial cells in fluid media, or
could be used to analyze the evolution of bubbles in foam coarsening
experiments.
###### Acknowledgements.
The authors wish to acknowledge many useful discussions with Matt Glaser. This
work was supported by NASA Grants NAGNNX07AE48G and NNX-13AQ81G, and by the
Soft Materials Research Center under NSF MRSEC Grants DMR 0820579 and
DMR-1420736. R.A.C. was supported by a UROP research award from the University
of Colorado Boulder.
## Appendix A The Neural Network
Figure 8: Architecture of the YOLOv$5$ neural network. YOLOv$5$ comprises
three parts, shown in the diagram: The model backbone is a Cross Stage Partial
Network (CSPDarknet) for feature extraction. A Path Aggregation Network
(PANet) combines features, which are then passed to the output layer for
detection.
The architecture of YOLOv$5$, the neural network used in this study, is shown
in Fig. 8. Unlike previous versions of YOLO that were developed in the Darknet
framework, YOLOv$5$ is built using the PyTorch framework in Python. The
backbone uses a Cross Stage Partial Network (CSPNet) to compress predicted
features into fewer channels. A Spatial Pyramid Pooling Network (SPPNet)
restructures the input to bypass fixed-size input constraints. The extracted
features from the YOLOv$5$ backbone are then passed to a Path Aggregation
Network (PANet) for feature fusion. Finally, the fused features are passed to
the output layer (also called the YOLO layer), where the detection results are
computed.
There are four popular YOLOv$5$ model sizes, with increasing numbers of
layers: YOLOv$5$s, YOLOv$5$m, YOLOv$5$l, and YOLOv$5$x. We measured the
training time, mAP value, and peak F$1$ score of all four models. The training
times, shown in Table 1, ranged from $50$ minutes for the smallest model to
$160$ minutes for the largest.
Figure 9: Performance of the YOLOv$5$ model trained to detect topological
defects. (a) The precision-recall curve yields a mean Average Precision (mAP)
of $0.97$, corresponding to a high degree of confidence in defect detection by
the trained network. (b) The highest value on the F$1$ curve (the F$1$ score)
was $0.96$. The best F$1$ score was obtained at a confidence threshold of
$0.27$.
## Appendix B Precision-Recall Curve and F$1$ Plot
In order to determine whether a detected result is a true or false positive,
an intersection over union (IoU) of the detection bounding box with the ground
truth bounding box is calculated. If the IoU ratio is above a pre-specified
confidence threshold, the detection is classified as a true positive, while if
it is below the threshold, it is deemed a false positive. The precision-recall
(PR) curve is created by sampling results from a range of confidence threshold
values, usually from $0.5$ to $0.95$. This is the range used, for example, by
the Microsoft COCO data set, a large annotated collection of images with only
a few sizeable objects per image [45]. In our case, in contrast, there could
be hundreds of defects per image with bounding box sizes as small as
$11\times{}11$ pixels. For bounding boxes as small as these, small pixel-wise
differences between manually created bounding boxes and those determined by
YOLOv$5$ may impact the IoU dramatically. As a result, the likelihood of a
true detection producing an IoU below $0.5$ is relatively high. To obtain a
better measure of the network’s results, we extended the threshold range from
$0.0$ to $0.95$.
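The IoU computation, and its sensitivity for boxes as small as $11\times{}11$ pixels, can be sketched as follows (a corner-coordinate box convention is assumed):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0
```

A horizontal shift of just two pixels between otherwise identical $11\times{}11$ boxes already drops the IoU to $99/143\approx 0.69$, illustrating why small pixel-wise differences in bounding boxes affect the metric so strongly.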
The mean average precision (mAP) is defined as the area under the precision-recall (PR) curve (Fig. 9(a)), a plot of the ratio of true positives to all
detections (precision) vs. the ratio of true positives to all ground truth
values (recall). As seen in Table 1, the smaller model sizes were found to
yield slightly better mAP scores in this study, with the smallest model,
YOLOv$5$s, achieving the highest mAP score of $0.970$, marginally better than
the score of $0.960$ achieved by YOLOv$5$x, the largest model we looked at.
These results indicate that the additional layers present in the larger
YOLOv$5$ models do not result in better defect detection.
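Computing the mAP thus reduces to numerical integration of the PR curve; a minimal sketch using the trapezoidal rule (the sample points in any usage are illustrative):

```python
import numpy as np

def average_precision(recall, precision):
    """Area under the precision-recall curve via the trapezoidal rule,
    with sample points sorted by increasing recall."""
    r = np.asarray(recall, dtype=float)
    p = np.asarray(precision, dtype=float)
    order = np.argsort(r)
    r, p = r[order], p[order]
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * (r[1:] - r[:-1])))
```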
The F$1$ score is the harmonic mean of the precision and recall values,
calculated for a range of confidence thresholds. Since every threshold value
has an associated F$1$ score, the threshold that yields the highest F$1$ score
is routinely reported. As is evident from Fig. 9(b), the highest F$1$ score (of
$0.96$) occurs when no detections are ignored, demonstrating that our model is
unlikely to detect false positives or make double detections (two detections
of the same defect). The F$1$ score drops off rapidly above a confidence
threshold of ${\sim}0.70$ and falls to zero before a threshold of $0.95$ can
be reached. This is a reflection of the sensitivity of the F$1$ score to small
differences of a few pixels in bounding box locations and/or dimensions from
the values manually determined in the control set. The highest F$1$ score for
all four model sizes was $0.96$, as shown in Table 1. Also reported in the
Table are the performance metrics for defect detection carried out both using
template image cross-correlation and in the previous study using YOLOv2 [27].
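At a fixed confidence threshold, precision, recall, and the F$1$ score are simple count ratios; the counts passed in below are illustrative only:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and their harmonic mean (the F1 score), from
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

As a harmonic mean, the F$1$ score is dominated by the weaker of precision and recall, so a score of $0.96$ requires both to be high simultaneously.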
Table 1: Neural network model training time and defect detection accuracy given by the mAP and F$1$ metrics. Shown are the metrics for topological defect detection using different YOLOv$5$ models, as well as those achieved when cross-correlating the experimental images with synthetic defect templates and using YOLOv2 trained using synthetic images.
Model | Training Time | mAP | F$1$ Score
---|---|---|---
Cross-Correlation | – | 0.498 | 0.68
ForLL YOLOv2 | 1.1 hours | 0.818 | 0.81
RealData11$\times$11 YOLOv$5$s | 0.8 hours | 0.970 | 0.96
RealData11$\times$11 YOLOv$5$m | 0.9 hours | 0.967 | 0.96
RealData11$\times$11 YOLOv$5$l | 1.4 hours | 0.963 | 0.96
RealData11$\times$11 YOLOv$5$x | 2.6 hours | 0.960 | 0.96
## Appendix C Defect Pair Correlations
The normalized pair correlation (radial distribution) function, $g(r)$, a
measure of the average defect density as a function of distance from any given
defect, has been computed from the experimental data as a function of time. In
these calculations, we used a radial bin size of $3\,\mu$m and the
correlations were averaged over $10$ frames ($\Delta t=0.02\,$s) in order to
reduce the statistical noise. We show in Fig. 10, by way of example, the
sign-agnostic defect pair correlation function at $t=0.2\,$s for the quench event
analyzed in this paper. This correlation function is identically zero below
$10\,\mu$m because no topological defects closer than this distance were
identified by the neural network. This is an artifact of the detection
process: defects with separations smaller than the bounding box size of
$11\times{11}$ pixels (or about $10\,\mu$m) could not be detected.
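The computation of $g(r)$ described above can be sketched as follows (ideal-gas normalization, no edge correction; the bin size and box geometry in the usage are assumptions):

```python
import numpy as np

def pair_correlation(points, r_max, dr, box_area):
    """Sign-agnostic pair correlation g(r) for 2D point positions,
    normalized so that g(r) -> 1 for uncorrelated (ideal-gas) points.
    No correction is made for pairs cut off by the box edges."""
    n = len(points)
    diffs = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)[np.triu_indices(n, k=1)]  # unique pairs
    edges = np.arange(0.0, r_max + dr, dr)
    counts, _ = np.histogram(d, bins=edges)
    shell_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    density = n / box_area
    # each unordered pair counted once, hence the factor of 1/2
    g = counts / (0.5 * n * density * shell_area)
    return 0.5 * (edges[:-1] + edges[1:]), g
```

For uniformly scattered points, $g(r)$ fluctuates around $1$ (slightly below, because of the uncorrected edge losses); a detection cutoff like the one in the experiments would instead force $g(r)=0$ below the resolution limit.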
A numerical simulation similar to [22] of the evolution of a large number
(${\sim}15,000$) of diffusing, interacting topological charges initially
generated with an average density similar to that observed in the film
quenching experiments was carried out. The pair correlation function derived
from the simulated data, also shown in Fig. 10, is seen to grow continuously
from zero, as expected, saturating at long distances with a value $g(r)=1$.
Comparing this to the experimental correlation function confirms that defects
in the experimental images with separations below $10\,\mu$m are not detected
by the neural network, leading to undercounting at early times following the
quenching event.
Figure 10: Sign-agnostic topological defect pair correlation functions
computed from experimental and simulation data shortly after a quench
($t=0.2\,$s). The correlation function derived from the simulation (blue)
grows continuously from zero, while the experimental curve (red) has a sharp
cutoff at $10\,\mu$m, the resolution limit for defect detection by the neural
network.
## References
* Chaikin and Lubensky [2013] P. M. Chaikin and T. C. Lubensky, _Principles of Condensed Matter Physics_ , 7th ed. (Cambridge University Press, Cambridge, 2013).
* Komura and Furukawa [1988] S. Komura and H. Furukawa, eds., _Dynamics of Ordering Processes in Condensed Matter_ (Springer US, Boston, MA, 1988).
* Chuang _et al._ [1991] I. Chuang, R. Durrer, N. Turok, and B. Yurke, Cosmology in the laboratory: defect dynamics in liquid crystals, Science 251, 1336 (1991).
* Srivastava [2001] A. M. Srivastava, Topological defects in condensed matter physics, in _Field Theories in Condensed Matter Physics_ (Hindustan Book Agency, Gurgaon, 2001) pp. 189–237, series Title: Texts and Readings in Physical Sciences.
* Kiselev _et al._ [2011] N. S. Kiselev, A. N. Bogdanov, R. Schäfer, and U. K. Rößler, Chiral skyrmions in thin magnetic films: new objects for magnetic storage technologies?, Journal of Physics D: Applied Physics 44, 392001 (2011).
* Owczarek [1991] R. Owczarek, Topological defects in superfluid helium, International Journal of Theoretical Physics 30, 1605 (1991).
* Harth and Stannarius [2020] K. Harth and R. Stannarius, Topological Point Defects of Liquid Crystals in Quasi-Two-Dimensional Geometries, Frontiers in Physics 8, 112 (2020).
* Young _et al._ [1978] C. Y. Young, R. Pindak, N. A. Clark, and R. B. Meyer, Light-Scattering Study of Two-Dimensional Molecular-Orientation Fluctuations in a Freely Suspended Ferroelectric Liquid-Crystal Film, Physical Review Letters 40, 773 (1978).
* Rosenblatt _et al._ [1980] C. Rosenblatt, R. B. Meyer, R. Pindak, and N. A. Clark, Temperature behavior of ferroelectric liquid-crystal thin films: A classical ${XY}$ system, Physical Review A 21, 140 (1980).
* Pieranski _et al._ [1993] P. Pieranski, L. Beliard, J.-P. Tournellec, X. Leoncini, C. Furtlehner, H. Dumoulin, E. Riou, B. Jouvin, J.-P. Fénerol, P. Palaric, J. Heuving, B. Cartier, and I. Kraus, Physics of Smectic Membranes, Physica A: Statistical Mechanics and its Applications 194, 364 (1993).
* Pindak _et al._ [1980] R. Pindak, C. Y. Young, R. B. Meyer, and N. A. Clark, Macroscopic orientation patterns in smectic-C films, Physical Review Letters 45, 1193 (1980).
* Link _et al._ [2005] D. R. Link, N. Chattham, J. E. Maclennan, and N. A. Clark, Effect of high spontaneous polarization on defect structures and orientational dynamics of tilted chiral smectic freely suspended films, Physical Review E 71, 021704 (2005).
* Muzny and Clark [1992] C. D. Muzny and N. A. Clark, Direct observation of the Brownian motion of a liquid-crystal topological defect, Physical Review Letters 68, 804 (1992).
* Silvestre _et al._ [2009] N. M. Silvestre, P. Patrício, M. M. Telo da Gama, A. Pattanaporkratana, C. S. Park, J. E. Maclennan, and N. A. Clark, Modeling dipolar and quadrupolar defect structures generated by chiral islands in freely suspended liquid crystal films, Physical Review E 80, 041708 (2009).
* Wachs [2014] K. Wachs, _Dynamics of smectic-C point disclinations in freely-suspended liquid crystal films_ , Undergraduate Honors Thesis Thesis, University of Colorado at Boulder (2014).
* Missaoui _et al._ [2020] A. Missaoui, K. Harth, P. Salamon, and R. Stannarius, Annihilation of point defect pairs in freely suspended liquid-crystal films, Physical Review Research 2, 013080 (2020).
* Schmidt _et al._ [2019] J. Schmidt, M. R. G. Marques, S. Botti, and M. A. L. Marques, Recent advances and applications of machine learning in solid-state materials science, NPJ Computational Materials 5, 83 (2019).
* Moen _et al._ [2019] E. Moen, D. Bannon, T. Kudo, W. Graf, M. Covert, and D. Van Valen, Deep learning for cellular image analysis, Nature Methods 16, 1233 (2019).
* Greener _et al._ [2022] J. G. Greener, S. M. Kandathil, L. Moffat, and D. T. Jones, A guide to machine learning for biologists, Nature Reviews Molecular Cell Biology 23, 40 (2022).
* AlQuraishi [2021] M. AlQuraishi, Machine learning in protein structure prediction, Current Opinion in Chemical Biology 65, 1 (2021).
* Yurke _et al._ [1993] B. Yurke, A. N. Pargellis, T. Kovacs, and D. A. Huse, Coarsening dynamics of the XY model, Physical Review E 47, 1525 (1993).
* Jang _et al._ [1995] W. G. Jang, V. V. Ginzburg, C. D. Muzny, and N. A. Clark, Annihilation rate and scaling in a two-dimensional system of charged particles, Physical Review E 51, 411 (1995).
* Ginzburg _et al._ [1995] V. V. Ginzburg, P. D. Beale, and N. A. Clark, Scaling theory of particle annihilation in systems with a long-range interaction, Physical Review E 52, 2583 (1995).
* Burlatsky _et al._ [1996] S. F. Burlatsky, V. V. Ginzburg, and N. A. Clark, Scaling model of annihilation-diffusion kinetics for charged particles with long-range interactions, Physical Review E 54, R1056 (1996).
* Rojas and Rutenberg [1999] F. Rojas and A. D. Rutenberg, Dynamical scaling: The two-dimensional XY model following a quench, Physical Review E 60, 212 (1999).
* Muzny [1994] C. D. Muzny, _Properties of defects in smectic-C thin films_ , Ph.D. thesis, University of Colorado at Boulder, University of Colorado at Boulder (1994).
* Minor _et al._ [2020] E. N. Minor, S. D. Howard, A. A. S. Green, M. A. Glaser, C. S. Park, and N. A. Clark, End-to-end machine learning for experimental physics: using simulated data to train a neural network for object detection in video microscopy, Soft Matter 16, 1751 (2020).
* Green [2019] A. A. S. Green, _Liquid and crystal: the applicability of the XY model to experimental systems of two-dimensional topological fluids; and revealing the nanoscale structure of the bent-core alpha phase_ , Ph.D. thesis, University of Colorado at Boulder (2019).
* [29] SYNTHON Chemicals GmbH & Co. KG.
* Harth [2016] K. Harth, _Episodes of the life and death of thin fluid membranes - patterns and dynamics at the cross-over from two to three dimensions_ , Ph.D. thesis, Otto von Guericke University, Magdeburg (2016).
* Link [1998] D. R. Link, _Symmetry and structure of freely suspended liquid crystal films_ , Ph.D. thesis, University of Colorado at Boulder, University of Colorado at Boulder (1998).
* Crocker and Grier [1996] J. C. Crocker and D. G. Grier, Methods of Digital Video Microscopy for Colloidal Studies, Journal of Colloid and Interface Science 179, 298 (1996).
* Qi _et al._ [2014] Z. Qi, Z. H. Nguyen, C. S. Park, M. A. Glaser, J. E. Maclennan, N. A. Clark, T. Kuriabova, and T. R. Powers, Mutual diffusion of inclusions in freely suspended smectic liquid crystal films, Physical Review Letters 113, 128304 (2014).
* Qi _et al._ [2016] Z. Qi, C. S. Park, M. A. Glaser, J. E. Maclennan, and N. A. Clark, Experimental realization of an incompressible Newtonian fluid in two dimensions, Physical Review E 93, 012706 (2016).
* Qi _et al._ [2017] Z. Qi, K. Ferguson, Y. Sechrest, T. Munsat, C. S. Park, M. A. Glaser, J. E. Maclennan, N. A. Clark, T. Kuriabova, and T. R. Powers, Active microrheology of smectic membranes, Physical Review E 95, 022702 (2017).
* Hedlund _et al._ [2022] E. Hedlund, K. Hedlund, A. Green, R. Chowdhury, C. S. Park, J. E. Maclennan, and N. A. Clark, Detection of islands and droplets on smectic films using machine learning, Physics of Fluids 34, 103608 (2022).
* Redmon and Farhadi [2016] J. Redmon and A. Farhadi, YOLO9000: Better, Faster, Stronger, arXiv:1612.08242 [cs] (2016), arXiv: 1612.08242.
* Jocher _et al._ [2021] G. Jocher, A. Stoken, J. Borovec, NanoCode012, A. Chaurasia, TaoXie, L. Changyu, A. V, Laughing, Tkianai, YxNONG, A. Hogan, Lorenzomammana, AlexWang1900, J. Hajek, L. Diaconu, Marc, Y. Kwon, Oleg, Wanghaoyang0106, Y. Defretin, A. Lohia, Ml5ah, B. Milanko, B. Fineran, D. Khromov, D. Yiwei, Doug, Durgesh, and F. Ingham, ultralytics/yolov5: v5.0 - YOLOv5-P6 1280 models, AWS, Supervise.ly and YouTube integrations (2021).
* [39] Google Colaboratory, https://colab.research.google.com/.
* Salton and McGill [1983] G. Salton and M. J. McGill, _Introduction to modern information retrieval_ , McGraw-Hill computer science series (McGraw-Hill, New York, 1983).
* [41] D. B. Allan, T. Caswell, N. C. Keim, C. M. van der Wel, and V. W. Ruben, Trackpy: Fast, Flexible Particle-Tracking Toolkit, http://soft-matter.github.io/trackpy/.
* Tang and Selinger [2020] X. Tang and J. V. Selinger, Annihilation trajectory of defects in smectic-C films, Physical Review E 102, 012702 (2020).
* Toussaint and Wilczek [1983] D. Toussaint and F. Wilczek, Particle–antiparticle annihilation in diffusive motion, The Journal of Chemical Physics 78, 2642 (1983).
* Loft and DeGrand [1987] R. Loft and T. A. DeGrand, Numerical simulation of dynamics in the _XY_ model, Physical Review B 35, 8528 (1987).
* Lin _et al._ [2014] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, Microsoft COCO: Common Objects in Context, in _Computer Vision – ECCV 2014_, Vol. 8693, edited by D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars (Springer International Publishing, Cham, 2014) pp. 740–755.
ATL-PHYS-PROC-2022-117
Charge asymmetry in top-pair production
with the ATLAS detector
Barbora Eckerova
Comenius University, Bratislava, Slovakia
on behalf of the ATLAS Collaboration
> Measurement of the inclusive and differential top-quark charge asymmetry is
> carried out using full Run 2 data from proton-proton collisions at a center-
> of-mass energy of $13$ TeV collected by the ATLAS detector. The single-
> lepton and dilepton $t\overline{t}$ decay channels are combined. Distorting
> detector effects are removed using fully Bayesian unfolding. In the dilepton
> decay channel, the leptonic charge asymmetry is determined. Results from the
> measurements are compared with the Standard Model prediction. An excess of
> charge asymmetry over the zero charge-asymmetry hypothesis is observed at
> the level of $4.7$ standard deviations.
> PRESENTED AT
> $15^{\mathrm{th}}$ International Workshop on Top Quark Physics
> Durham, UK, 4–9 September, 2022
††footnotetext: Copyright 2022 CERN for the benefit of the ATLAS
Collaboration. CC-BY-4.0 license.
## 1 Introduction
The top-quark charge asymmetry is a characteristic feature of top-quark pair
production: the cross-section formula of $t\overline{t}$ production is not
symmetric under the exchange of the final-state top quark and top anti-quark.
The asymmetric contribution arises once next-to-leading-order diagrams of
$t\overline{t}$ production are included.
The charge asymmetry is not present in top-quark pair production initiated by
gluon fusion. The largest contribution originates from interactions of
initial-state quarks ($q$) and anti-quarks ($\overline{q}$) [1, 2]. Due to the
charge asymmetry, top quarks are preferentially produced in the direction of
the initial-state $q$, and top anti-quarks in the direction of the
initial-state $\overline{q}$, as illustrated in Fig. 1 (left). Hence, due to
the longitudinal momentum imbalance of the initial-state $q$ and
$\overline{q}$, the charge asymmetry is manifested by different rapidity
distributions of the top quark ($t$) and top anti-quark ($\overline{t}$), as
shown in Fig. 1 (right).
The charge asymmetry is based on the absolute rapidity difference of $t$ and
$\overline{t}$, $\Delta|y|=|y_{t}|-|y_{\overline{t}}|$, whose sign signifies
the direction of the $t$. The charge asymmetry is evaluated as
$A_{C}^{t\overline{t}}=\frac{N(\Delta|y|>0)-N(\Delta|y|<0)}{N(\Delta|y|>0)+N(\Delta|y|<0)}$.
For the dilepton $t\overline{t}$ decay channel specifically, the leptonic
charge asymmetry $A_{C}^{\ell\overline{\ell}}$ is defined analogously, with
$\Delta|y|$ replaced by the absolute pseudorapidity difference
$\Delta|\eta_{\ell\overline{\ell}}|=|\eta_{\ell}|-|\eta_{\overline{\ell}}|$ of
the two leptons originating from the top-quark pair decay.
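At parton level, this counting definition can be sketched directly (detector effects, unfolding, and uncertainties are of course omitted here; the rapidity values in any usage are illustrative):

```python
import numpy as np

def charge_asymmetry(y_t, y_tbar):
    """Event-counting estimator of the ttbar charge asymmetry
    A_C = (N(dy > 0) - N(dy < 0)) / (N(dy > 0) + N(dy < 0)),
    with dy = |y_t| - |y_tbar| computed per event."""
    dy = np.abs(np.asarray(y_t)) - np.abs(np.asarray(y_tbar))
    n_pos = np.count_nonzero(dy > 0)
    n_neg = np.count_nonzero(dy < 0)
    return (n_pos - n_neg) / (n_pos + n_neg)
```

The leptonic asymmetry $A_{C}^{\ell\overline{\ell}}$ follows from the same estimator with lepton pseudorapidities in place of top-quark rapidities.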
Figure 1: (Left) Preferred direction of the top quark and top anti-quark in
$q\overline{q}$ annihilation. Top quarks are generally more longitudinally
boosted than top anti-quarks. (Right) An illustration of rapidity
distributions of top quark and top anti-quark as measured by the LHC [3].
In proton-proton collisions, $t\overline{t}$ pairs are produced in
approximately $90$% of cases by charge-symmetric gluon-gluon fusion. The size
of the charge asymmetry is therefore below $1$%, making its measurement
challenging.
An enhancement of the charge asymmetry is expected in specific regions of the
phase space. Hence, the charge asymmetry is studied as a function of three
different observables: the invariant mass $m_{t\overline{t}}$, the
longitudinal boost of the $t\overline{t}$ pair along the beam axis,
$\beta_{z,t\overline{t}}$, and the transverse momentum of the $t\overline{t}$
pair, $p_{T,t\overline{t}}$.
## 2 Event selection
The inclusive and differential charge asymmetry is measured using data
collected by the ATLAS detector [4] at $\sqrt{s}=13$ TeV, corresponding to an
integrated luminosity of $139~{}\textrm{fb}^{-1}$. Both the single-lepton and
dilepton $t\overline{t}$ decay channels are used in the measurement.
For single-lepton events, two topology types are exploited: resolved and
boosted. In the resolved topology, $t\overline{t}$ events are reconstructed
using a Boosted Decision Tree (BDT). Data in the dilepton decay channel are
divided according to lepton flavor into same-flavor ($ee+\mu\mu$) and
opposite-flavor ($e\mu$) regions. The $t\overline{t}$ system is reconstructed
using Neutrino Weighting [5]. Events in both decay channels are further
divided according to $b$-jet multiplicity into $1~{}b$-tag exclusive and
$2~{}b$-tag inclusive categories.
## 3 Fully Bayesian unfolding
The measured distribution of $\Delta|y|$ or $\Delta|\eta|$ is unfolded to the
parton level using fully Bayesian unfolding (FBU) [6]. Distorting effects of
the detector, like limited acceptance and resolution, are corrected by the
unfolding.
The FBU exploits Bayesian inference,
$p(T|D)\propto\mathcal{L}(D|T)\cdot\pi(T)$, to obtain the posterior
probability distribution $p(T|D)$ of the true distribution $T$ from the
measured data $D$ and the simulated response matrix $\mathcal{M}$. Systematic
uncertainties are embedded in the likelihood function $\mathcal{L}$ through
nuisance parameters $\theta$. After marginalization over the nuisance
parameters (NPs), the obtained constraints, together with the correlations
among the NPs, reduce the uncertainty of the unfolded result.
A uniform prior probability $\pi(T)$ is chosen for the bins of the true
spectrum, whereas Gaussian probability terms are used for the systematic NPs.
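To make the inference step concrete, here is a minimal, self-contained toy version of Bayesian unfolding, not the actual FBU package: a two-bin truth spectrum, a hypothetical response matrix, a Poisson likelihood, a uniform prior, and a brute-force grid scan of the posterior instead of the sampling used in the real analysis. All numbers are illustrative assumptions:

```python
import numpy as np

# Hypothetical 2x2 response matrix M[reco, truth] and pseudo-data.
M = np.array([[0.8, 0.2],
              [0.2, 0.8]])
truth = np.array([100.0, 50.0])   # illustrative truth spectrum T
data = M @ truth                  # pseudo-data D taken at the expectation

def log_likelihood(T):
    """Poisson log-likelihood log L(D|T) up to a T-independent constant."""
    mu = M @ T                    # expected reco-level yields
    return np.sum(data * np.log(mu) - mu)

# Uniform prior pi(T): the posterior is proportional to the likelihood
# on a grid of truth hypotheses.
t1 = np.arange(60.0, 141.0)
t2 = np.arange(20.0, 91.0)
grid = np.array([[log_likelihood(np.array([a, b])) for b in t2] for a in t1])
post = np.exp(grid - grid.max())  # unnormalized posterior p(T|D)

# Posterior mode (MAP) of the unfolded spectrum recovers the truth:
i, j = np.unravel_index(post.argmax(), post.shape)
print("MAP estimate:", t1[i], t2[j])
```

With pseudo-data taken exactly at the expectation and an invertible response matrix, the posterior peaks at the injected truth; the real FBU additionally marginalizes over nuisance parameters.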
## 4 Results
The charge asymmetry is unfolded inclusively and differentially as a function
of $m_{t\overline{t}}$, $\beta_{z,t\overline{t}}$ and $p_{T,t\overline{t}}$.
Single-lepton and dilepton events are unfolded both separately and in
combination. Similarly, the leptonic charge asymmetry is measured for events
in the dilepton channel, inclusively and differentially as a function of
$m_{\ell\overline{\ell}}$, $\beta_{z,\ell\overline{\ell}}$ and
$p_{T,\ell\overline{\ell}}$. In Fig. 2, the unfolded charge asymmetry values
for the inclusive and $m_{t\overline{t}}$ differential measurements are
summarized for the combination of single-lepton and dilepton data, as well as
for each decay channel separately.
The inclusive $A_{C}^{t\overline{t}}$ from the combined measurement differs
from the hypothesis of zero charge asymmetry by $4.7$ standard deviations.
The result therefore provides evidence of charge asymmetry in $t\overline{t}$
production at the LHC.
Figure 2: Unfolded $A_{C}^{t\overline{t}}$ in inclusive (left) and
$m_{t\overline{t}}$ differential measurements (right) [7].
The leptonic charge asymmetry is measured to be $0.0054\pm 0.0026$ in the
inclusive measurement. The results of all measurements, for both
$A_{C}^{t\overline{t}}$ and $A_{C}^{\ell\overline{\ell}}$, are in good
agreement with the SM predictions, calculated at NLO in electroweak theory
and at NNLO (NLO) in QCD for the $t\overline{t}$ (leptonic) charge asymmetry
[9, 10].
## 5 EFT interpretation
The charge asymmetry measurement has the potential to provide additional
input to global SMEFT fits.
The inclusive and $m_{t\overline{t}}$ differential measurements of
$A_{C}^{t\overline{t}}$ are used to derive constraints on various Wilson
coefficients. In Fig. 3 (left), the bounds obtained from the individual
$m_{t\overline{t}}$ differential bins are summarized together with the
constraint derived from the inclusive measurement. The most stringent limit
on the $C^{8}_{tu}$ coefficient is obtained using the information from all
$m_{t\overline{t}}$ differential bins; this combined constraint is more than
a factor of two more stringent than the bound from the inclusive measurement.
The sensitivity increases with higher values of $m_{t\overline{t}}$.
Constraints derived from previous measurements are shown in Fig. 3 (left)
for reference.
The measurement of the energy asymmetry $A_{E}$ [11] is complementary to the
charge asymmetry measurement in the context of the EFT interpretation. This
is illustrated in Fig. 3 (right), where the limits derived from both
measurements ($A_{E}$ and $A_{C}^{t\overline{t}}$) are plotted for the
coefficients $C^{1,8}_{Qq}$ and $C^{8}_{tq}$. The charge asymmetry
measurement does not supply any information in one part of the parameter
space (upper left panel of the plot), whereas the energy asymmetry
measurement is sensitive precisely there. Combined plots of the bounds on EFT
operators therefore provide more information.
Figure 3: (Left) Bounds on the Wilson coefficient $C_{tu}^{8}$ derived from
inclusive and $m_{t\overline{t}}$ differential $A_{C}^{t\overline{t}}$
measurement. (Right) Bounds from $m_{t\overline{t}}$ differential
$A_{C}^{t\overline{t}}$ measurement and limits derived from $A_{E}$
measurement [11, 7].
## References
* [1] J.H. Kuhn and G. Rodrigo, 1999 Phys. Rev. D 59 054017
* [2] J.H. Kuhn and G. Rodrigo, 2012 JHEP 01 063
* [3] L. Evans and P. Bryant (editors) 2008 JINST 3 S08001
* [4] ATLAS Collaboration, 2008 JINST 3 S08003
* [5] D0 Collaboration, 2016 Phys. Lett. B 752 18
* [6] G. Choudalakis, 2012 arXiv:1201.4612 [hep-ex]
* [7] ATLAS Collaboration, 2022 arXiv:2208.12095 [hep-ex], submitted to JHEP
* [8] A. Gelman and M.D. Hoffman, 2014 JMLR 15 1593
* [9] M. Czakon, D. Heymes, A. Mitov, D. Pagani, I. Tsinikos and M. Zaro, 2018 Phys. Rev. D 98 014003
* [10] W. Bernreuther and Z.-G. Si, 2012 Phys. Rev. D 86 034026
* [11] ATLAS Collaboration, 2022, Eur. Phys. J. C. 82 374
# Possible existence of stable compact stars in
$\kappa(\mathcal{R},\mathcal{T})-$gravity
G R P Teruel<EMAIL_ADDRESS>Departamento de Matemáticas, IES Carrús,
Elche 03205, Alicante, Spain Ksh. Newton Singh<EMAIL_ADDRESS>Department
of Physics, National Defence Academy, Khadakwasla, Pune-411023, India Farook
Rahaman<EMAIL_ADDRESS>Department of Mathematics, Jadavpur
University, Kolkata 700032, West Bengal, India Tanmoy chowdhury
<EMAIL_ADDRESS>Department of Mathematics, Jadavpur University, Kolkata
700032, West Bengal, India
###### Abstract
We present the first interior solutions representing compact stars in
$\kappa(\mathcal{R},\mathcal{T})$ gravity, obtained by solving the modified
field equations in isotropic coordinates. We have assumed metric potentials
of Schwarzschild form depending on a few parameters, along with the isotropy
condition on the pressure. To solve the equations, we use the specific choice
of the running gravitational constant
$\kappa(\mathcal{R},\mathcal{T})=8\pi-\lambda\mathcal{T}~{}~{}(G=\tilde{c}=1)$.
Having arrived at the reduced field equations, we investigate two solutions,
with $c=1$ and $c\neq 1$, where $c$ denotes a constant that should not be
confused with the speed of light. We then analyze each solution by
determining the thermodynamic variables, viz. the pressure, density, speed of
sound, and adiabatic index. We find that these solutions satisfy the Bondi
criterion, the causality condition, and the energy conditions. We also find
that the $M-R$ curves generated from these solutions satisfy the stringent
constraints provided by the gravitational-wave observations of the neutron
star merger GW 170817.
## I Introduction
Recent advances in astrophysical observations have led to wide interest in
the study of the composition of astrophysical compact objects. These objects
contain ultra-dense nuclear matter in their interiors, which makes them
excellent physical laboratories to test possible departures from general
relativity (GR). Traditionally, the term compact objects, or compact stars,
refers collectively to white dwarfs, neutron stars, and black holes. Compact
stars are also called stellar remnants, as they are often the endpoints of
catastrophic events such as supernova explosions and binary stellar
collisions. The state and type of a stellar remnant depend mainly on the
properties and composition of the dense matter of the star. However, due to
the lack of knowledge of the extreme conditions and the complex composition,
it is a formidable task to determine the exact equation of state (EoS) that
represents compact stars. Several astrophysical observations measure the masses and
the EoS of dense matter in compact stars. The observation of 2 solar-mass
neutron stars de10 ; an13 indicates that the EoS for such objects should be
sufficiently stiffer than the ordinary nuclear matter to accommodate the large
mass. This fact enables us to think about the possibility of stable mass
configurations with an interior composed of exotic matter to some extent. Even
in the case of low mass compact stars, the density of the core matter is much
larger than the normal matter. Due to the extreme density, the nuclear matter
may consist, apart from ordinary nucleons and leptons, of exotic components in
their different forms and phases, such as Bose-Einstein condensates of strange
mesons ka86 ; ne87 ; gl98 ; ba03 , baryon resonances and hyperons gl97 , as
well as strange quark matter pr97 ; fa84 ; sc00 . Applying the embedding class
one technique, many authors have explored well-behaved solutions ntn1 ; ntn2 ;
mau1 ; mau2 ; gad1 ; gad2 .
The task of building the EoS of matter beyond the nuclear saturation density,
important for the description of compact stars, is an active and vast field
of research. For a given EoS, the study of the physical features of
relativistic compact objects in GR proceeds by obtaining analytic solutions
of the static Einstein field equations and then imposing conditions for
physical viability. Due to the non-linear character of Einstein's field
equations, this usually represents a challenging and complicated task. Since
the first exact solution of the GR field equations was found by Schwarzschild
sch16 , the number of exact analytic solutions has been increasing, and they
are extensively used in studies of neutron stars and of black hole formation,
both in GR and in modified gravity schemes, which include GR as a particular
case in the appropriate limit ra03 ; fe99 . To model relativistic fluids from
a more realistic point of view, one should also include the Buchdahl buc67
and Tolman VII to39 ; rag15 ; new20 solutions.
It is well known that the Tolman–Oppenheimer–Volkoff (TOV) equation tol34 ;
tol39 ; Oppen39 constrains the structure of a spherically symmetric body of
isotropic material in static gravitational equilibrium, as modelled by GR.
Isotropic and anisotropic compact star models have also been explored in the
framework of modified gravity. In particular, in the context of the
Lagrangian $f(\mathcal{R},\mathcal{T})$ theory, Sharif et al. sha14 have
discussed the stability of isotropic self-gravitating stars, while compact
solutions with conformal Killing vectors have been studied by Rahaman et al.
co16 . Regarding the anisotropic case, also in the scheme of
$f(\mathcal{R},\mathcal{T})$ gravity theory, Biswas and his collaborators
bish19 established a new model for a highly anisotropic star system based on
a previous model of metric potentials due to Krori-Barua kro75 . A charged
star system supported by a Chaplygin EoS was explored by Bhar bhar20 .
Recently, one of us gr18 proposed a new modified theory named
$\kappa(\mathcal{R},\mathcal{T})$ gravity. Instead of following a standard
modified-gravity approach, this theory is inspired by Maxwell's and
Einstein's original approaches of adding new possible source terms directly
to the field equations. We refer here explicitly to the addition of the
displacement-current term by Maxwell to complete the electromagnetic field
equations, and to the incorporation of the key trace term
$-\frac{1}{2}\mathcal{R}g^{\mu\nu}$ by Einstein to complete the GR field
equations. Indeed, it is interesting to note that, although the variational
method is a major tool for building a physical theory and its possible
generalizations, it should not be placed on an equal footing with truly
foundational principles such as the equivalence principle and the principle
of general covariance. These were the two foundational principles of GR,
while the variational principle (the Einstein-Hilbert action) was derived and
incorporated into the theory only after the first correct derivation of GR
hil15 ; ren07 .
In addition, the huge number of modified-gravity Lagrangian theories
available in the literature seems to suggest that some other foundational
principle, beyond the equivalence principle and the principle of general
covariance, is still lacking. Moreover, there is no reason to believe that
ordinary symmetries and standard conservation laws will always be present in
a final theory of Nature. These observations lead us to think that the
non-Lagrangian approach also deserves to be investigated. Examples of
non-Lagrangian theories can be found in other branches of theoretical
physics, such as quantum field theory (QFT), where it is becoming
increasingly clear that such theories populate much of the QFT landscape and
also offer new opportunities in the search for new types of 4-manifold
invariants gu17 ; gadd15 . In light of the above, the study of viable
non-Lagrangian gravity theories deserves consideration as a legitimate
alternative to the standard variational approach.
The non-Lagrangian $\kappa(\mathcal{R},\mathcal{T})$ gravity theory is a
relatively unknown and still not well-explored proposal. In the last few
years, however, some works have been devoted to studying its cosmological
implications ahm22 ; arc22 , collecting results that seem compatible with
observational data. A wormhole model in the context of
$\kappa(\mathcal{R},\mathcal{T})$ theory was discussed recently by S. Sarkar
et al. sark19 .
So far, no one has investigated compact star solutions in the framework of
the $\kappa(\mathcal{R},\mathcal{T})$ theory of gravity; hence it is
interesting to study whether this non-Lagrangian theory can support compact
star solutions that are acceptable from a physical point of view. This is the
purpose of this work. The paper is organized as follows: In Sec. II, we
describe the field equations in the scheme of this modified gravity theory.
The isotropic coordinates and the line element chosen to solve the equations
are presented in Sec. III, underlining the differences between the $c=1$ and
the $c\neq 1$ cases. The boundary conditions required to match the internal
solution with the external vacuum solution are discussed in Sec. IV. Sec. V
deals with the physical acceptability of the obtained solutions, discussing
in detail the role of the energy conditions as well as other constraints. The
mass-radius relationship is analyzed in Sec. VI. Finally, our conclusions are
presented in Sec. VII.
## II Einstein’s field equations in $\kappa(\mathcal{R},\mathcal{T})$ gravity
The field equations in $\kappa(\mathcal{R},\mathcal{T})$ modified gravity are
obtained by adding new possible source terms directly to GR field equations as
gr18
$R_{ij}-\frac{1}{2}\mathcal{R}\,g_{ij}-\Lambda
g_{ij}=\kappa(\mathcal{R},\mathcal{T})\,T_{ij}$ (1)
where $g_{ij}$ is the metric tensor, $R_{ij}$ is the Ricci tensor,
$\Lambda$ is a cosmological constant, $T_{ij}$ is the energy-momentum tensor
of the matter source, and $\kappa(\mathcal{R},\mathcal{T})$ generalizes the
Einstein gravitational constant, being proposed as a function of the trace
$\mathcal{T}=g_{ij}T^{ij}$ of the energy-momentum tensor and of the Ricci
scalar $\mathcal{R}=g_{ij}R^{ij}$. Since the gravitational constant $\kappa$
depends on these scalars, we can explore the possibility of a varying
gravitational constant, i.e., a generalization of the original Einstein
gravitational constant (not at the level of an action functional). A varying
gravitational constant in the action leads instead to a Brans-Dicke
scalar-tensor type theory cb61 ; cb05 with entirely different field equations
from Eq. (1). Since the left-hand side of the field Eq. (1) is divergence
free, we have
$\displaystyle\nabla^{j}\Big{[}\kappa(\mathcal{R},\mathcal{T})T_{ij}\Big{]}=0$
(2)
These field equations imply the non-covariant conservation of $T_{ij}$,
which can be expressed as
$\displaystyle\nabla^{j}T_{ij}=-\frac{\nabla^{j}\kappa(\mathcal{R},\mathcal{T})}{\kappa(\mathcal{R},\mathcal{T})}T_{ij}.$
(3)
provided that $\kappa(\mathcal{R},\mathcal{T})\neq 0$. At very high energies,
for some specific choices of the running gravitational constant such as, for
example, $\kappa(\mathcal{R},\mathcal{T})=\kappa(\mathcal{T})=8\pi
G-\lambda\mathcal{T}$, we could have $\kappa(\mathcal{R},\mathcal{T})=0$;
this may imply an exponential expansion in the early universe ruled by a
cosmological constant. Examples of other non-conservative theories include
Rastall's proposal ras72 and the more recent Lagrangian
$f(\mathcal{R},\mathcal{T})$ theory of gravity by Harko and his collaborators
har11 .
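The step from Eq. (2) to Eq. (3) is a single application of the Leibniz rule to the product $\kappa T_{ij}$, followed by division by $\kappa(\mathcal{R},\mathcal{T})\neq 0$:

```latex
0 = \nabla^{j}\Big[\kappa(\mathcal{R},\mathcal{T})\,T_{ij}\Big]
  = \kappa(\mathcal{R},\mathcal{T})\,\nabla^{j}T_{ij}
    + T_{ij}\,\nabla^{j}\kappa(\mathcal{R},\mathcal{T})
\;\Longrightarrow\;
\nabla^{j}T_{ij}
  = -\frac{\nabla^{j}\kappa(\mathcal{R},\mathcal{T})}
          {\kappa(\mathcal{R},\mathcal{T})}\,T_{ij}.
```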
Teruel gr18 proposed and analyzed the cosmological implications of two
particular models. The first one is defined by setting
$\kappa(\mathcal{T})=8\pi G-\lambda\mathcal{T}$ and corresponds to a
matter-matter coupling. The second model is characterized by a gravitational
constant that varies as $\kappa^{\prime}(\mathcal{R})=8\pi
G+\xi\mathcal{R}$, which provides a coupling between matter and curvature
terms. The choice of the minus sign in the expression for the running
gravitational constant $\kappa(\mathcal{R},\mathcal{T})=8\pi
G-\lambda\mathcal{T}$, ($\lambda>0$), is motivated by cosmological reasons.
Indeed, one can show that the modified Friedmann equations for the opposite
choice, i.e., $\kappa(\mathcal{R},\mathcal{T})=8\pi G+\lambda\mathcal{T}$,
imply that at sufficiently high densities $H^{2}\sim\rho^{2}$, where $H$ is
the Hubble parameter; this behavior is worse in terms of divergences than the
GR case gr18 ; olm18 . In the next section, we will proceed to solve the
field equations of the $\kappa(\mathcal{R},\mathcal{T})$ gravity theory for a
specific (isotropic) line element that represents the interior of a compact
object, assuming the stress-energy tensor of a perfect fluid as the matter
source.
## III Isotropic Star Solution
To obtain exact solutions of the field equations in the framework of
$\kappa(\mathcal{R},\mathcal{T})$ gravity we proceed as in Ref. fr20 .
First, we assume that the static spherically symmetric uncharged matter
distribution corresponds to the isotropic line element given by
$ds^{2}=e^{\nu}dt^{2}-e^{\mu}(dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta\,d\phi^{2}),$
(4)
where $\nu$ and $\mu$ are functions of the radial coordinate $r$ only. For the
specific choice of the running gravitational constant given by
$\kappa(\mathcal{R},\mathcal{T})=8\pi-\lambda\mathcal{T}~{}~{}(G=\tilde{c}=1)$,
we find that the field equations can be written as
$\displaystyle
e^{-\mu}\Big{(}\frac{{\mu^{\prime}}^{2}}{4}+\frac{\mu^{\prime}\nu^{\prime}}{2}+\frac{\mu^{\prime}+\nu^{\prime}}{r}\Big{)}=8\pi
p(r)\big{[}1-\alpha\\{\rho(r)-3p(r)\\}\big{]},$ (5) $\displaystyle
e^{-\mu}\Big{(}\frac{\mu^{\prime\prime}}{2}+\frac{\nu^{\prime\prime}}{2}+\frac{{\nu^{\prime}}^{2}}{4}+\frac{\mu^{\prime}+\nu^{\prime}}{2r}\Big{)}=8\pi
p(r)\big{[}1-\alpha\\{\rho(r)-3p(r)\\}\big{]},$ (6)
$\displaystyle-e^{-\mu}\Big{(}\mu^{\prime\prime}+\frac{{\mu^{\prime}}^{2}}{4}+\frac{2\mu^{\prime}}{r}\Big{)}=8\pi\rho(r)\big{[}1-\alpha\\{\rho(r)-3p(r)\\}\big{]}.$
(7)
In the above equations, $\alpha=\lambda/8\pi$ is a non-zero real constant,
$\rho$ and $p$ are the matter-energy density and pressure, respectively, and
the prime denotes the derivative with respect to the radial coordinate.
Since, under the isotropy condition, Eq. (5) must equal Eq. (6), assuming
$e^{\nu}=A\psi(r)^{-a}$ and $e^{\mu}=B\psi(r)^{b}$ we obtain the differential
equation
$\psi^{\prime\prime}-c~{}\frac{{\psi^{\prime}}^{2}}{\psi}-\frac{\psi^{\prime}}{r}=0,$
(8)
where we have defined the constant $c$ as
$c=\frac{\frac{1}{2}b^{2}-\frac{1}{2}a^{2}-ab+b-a}{b-a},$ (9)
with $a\neq b$. All the parameters $A,B,a,b$ above are real constants.
Analytical solutions can be found by setting different conditions on the
parameters.
### III.1 The case for $c=1$
For this specific value we obtain the following solutions
$\displaystyle\psi(r)$ $\displaystyle=$ $\displaystyle
C_{1}e^{C_{0}r^{2}}~{}~{},~{}~{}e^{\nu}=A_{1}e^{-a_{1}r^{2}}~{}~{},~{}~{}e^{\mu}=B_{1}e^{b_{1}r^{2}}.$
(10)
Here $A_{1}=AC_{1}^{-a}$, $B_{1}=BC_{1}^{b}$, $a_{1}=aC_{0}$ and
$b_{1}=bC_{0}$. The corresponding physical quantities are given below:
$\displaystyle\frac{C_{1}^{-b}}{B}\Big{[}b(b-2a)C_{0}^{2}r^{2}+2(b-a)C_{0}\Big{]}e^{-bC_{0}r^{2}}=8\pi
p(r)\big{[}1-\alpha\\{\rho(r)-3p(r)\\}\big{]},$ (11)
$\displaystyle-\frac{C_{1}^{-b}}{B}\Big{[}6bC_{0}+b^{2}C_{0}^{2}r^{2}\Big{]}e^{-bC_{0}r^{2}}=8\pi\rho(r)\big{[}1-\alpha\\{\rho(r)-3p(r)\\}\big{]}.$
(12)
The variations of the energy density and the pressure are shown in Fig. 1.
From the above equations we get
$\frac{\rho(r)}{p(r)}=\frac{b(6+bC_{0}r^{2})}{b(2a-b)C_{0}r^{2}+2(a-b)}.$ (13)
The speed of sound and the adiabatic index can be determined as
$\displaystyle v^{2}={dp\over d\rho}~{}~{};~{}~{}\Gamma={\rho+p\over
p}\,{dp\over d\rho}.$ (14)
These variations with respect to radial coordinates are shown in Fig. 2.
Figure 1: Density and Pressure for case A.
Figure 2: Speed of sound and adiabatic index for case A.
### III.2 The case for $c\neq 1$
For this particular choice we have
$\psi(r)^{1-c}=C_{0}r^{2}+C_{1}~{}~{},~{}~{}e^{\nu}=A\Big{(}C_{0}r^{2}+C_{1}\Big{)}^{-a_{1}}~{}~{},~{}~{}e^{\mu}=B\Big{(}C_{0}r^{2}+C_{1}\Big{)}^{b_{1}}$
(15)
with $a_{1}=a/(1-c)$ and $b_{1}=b/(1-c)$. In these two particular cases,
$C_{0}$ and $C_{1}$ are integration constants that may be expressed in terms
of $M$ and $R$. Consequently, the physical parameters for this solution are
$\displaystyle\frac{b^{2}C_{0}r^{2}-2abC_{0}r^{2}+2(b-a)(1-c)(C_{0}r^{2}+C_{1})}{3(C_{0}r^{2}+C_{1})^{\frac{b}{1-c}+2}(1-c)^{2}}$
$\displaystyle=$ $\displaystyle 8\pi
p(r)\big{[}1-\alpha\\{\rho(r)-3p(r)\\}\big{]}$ (16)
$\displaystyle\frac{2bC_{0}(c-1)(3C_{1}+C_{0}r^{2})-b^{2}C_{0}^{2}r^{2}}{B(C_{0}r^{2}+C_{1})^{\frac{b}{1-c}+2}(1-c)^{2}}$
$\displaystyle=$ $\displaystyle
8\pi\rho(r)\big{[}1-\alpha\\{\rho(r)-3p(r)\\}\big{]}$ (17)
$\displaystyle\frac{b^{2}C_{0}r^{2}-2abC_{0}r^{2}+2(b-a)(1-c)(C_{0}r^{2}+C_{1})}{2bC_{0}(c-1)(3C_{1}+C_{0}r^{2})-b^{2}C_{0}^{2}r^{2}}$
$\displaystyle=$ $\displaystyle\frac{p(r)}{\rho(r)}.$ (18)
The variations of the energy density and pressure can be seen in Fig. 3.
Finally, the speed of sound and the adiabatic index can be determined as
$\displaystyle v^{2}={dp\over d\rho}~{}~{};~{}~{}\Gamma={\rho+p\over
p}\,{dp\over d\rho}.$ (19)
These variations with respect to radial coordinates are shown in Fig. 4.
Figure 3: Density and Pressure for case B.
Figure 4: Speed of sound and adiabatic index for case B.
## IV Boundary Conditions
A spherically symmetric static fluid distribution described by the metric (4)
should match with the exterior field described by the Schwarzschild solution
given by
$ds^{2}=\Big{(}1-\frac{2M}{r}\Big{)}dt^{2}-\Big{(}1-\frac{2M}{r}\Big{)}^{-1}dr^{2}-r^{2}(d\theta^{2}+\sin^{2}\theta\,d\phi^{2}\Big{)}$
(20)
Introducing the radial coordinate transformation
$r=\tilde{r}\Big{(}1+\frac{M}{2\tilde{r}}\Big{)}^{2},$ (21)
the metric takes the following form
$ds^{2}=\frac{\Big{(}1-{M^{2}/4\tilde{r}^{2}}\Big{)}^{2}}{\Big{(}1+{M/2\tilde{r}}\Big{)}^{4}}dt^{2}-\Big{(}1+\frac{M}{2\tilde{r}}\Big{)}^{4}(d\tilde{r}^{2}+\tilde{r}^{2}d\theta^{2}+\tilde{r}^{2}\sin^{2}\theta\,d\phi^{2})$
(22)
Thus, by matching the interior solution to the external vacuum solution on the
boundary $\tilde{r}=R$ we get
$\displaystyle
e^{\nu(R)}=\frac{\Big{(}1-{M^{2}/4R^{2}}\Big{)}^{2}}{\Big{(}1+{M/2R}\Big{)}^{4}}~{}~{}~{}\mbox{and}~{}~{}~{}e^{\mu(R)}=\Big{(}1+\frac{M}{2R}\Big{)}^{4},$
(23)
where the quantity $R$ represents the boundary of the star, at which the
pressure vanishes, and $M$ is the total gravitational mass of the star. These
conditions are used in the following subsections to fix the constants of the
solutions.
### IV.1 For $c=1$:
Using the matching condition (23), we can rewrite it as
$\displaystyle A_{1}e^{-a_{1}R^{2}}$ $\displaystyle=$
$\displaystyle\frac{\Big{(}1-{M^{2}/4R^{2}}\Big{)}^{2}}{\Big{(}1+{M/2R}\Big{)}^{4}}~{}~{}~{};~{}~{}B_{1}e^{b_{1}R^{2}}=\Big{(}1+{M\over
2R}\Big{)}^{4}.$ (24)
This leads to
$\displaystyle A_{1}$ $\displaystyle=$ $\displaystyle
e^{a_{1}R^{2}}\frac{\Big{(}1-{M^{2}/4R^{2}}\Big{)}^{2}}{\Big{(}1+{M/2R}\Big{)}^{4}}~{}~{};~{}~{}B_{1}=e^{-b_{1}R^{2}}\Big{(}1+{M\over
2R}\Big{)}^{4},$ (25)
and the vanishing pressure at the boundary $r=R$ gives
$\displaystyle a_{1}=\frac{b_{1}\left(b_{1}R^{2}+2\right)}{2b_{1}R^{2}+2}.$
(26)
Further, we have chosen $M,R$ and $b_{1}$ as free parameters.
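As a numerical sanity check of Eqs. (25)-(26), the sketch below, with illustrative values of $M$, $R$, $b_{1}$ in geometric units (km), computes $A_{1}$, $B_{1}$, $a_{1}$, verifies that the interior potentials reproduce the exterior metric at $r=R$, and confirms that the pressure factor of Eq. (11), written with $a_{1}=aC_{0}$ and $b_{1}=bC_{0}$, vanishes at the boundary:

```python
import math

# Illustrative inputs in geometric units (km); M ~ 1.4 solar masses.
M, R, b1 = 2.07, 10.0, 0.02

# Eq. (26): vanishing pressure at the boundary fixes a1.
a1 = b1 * (b1 * R**2 + 2.0) / (2.0 * b1 * R**2 + 2.0)

# Eq. (25): matching to the exterior Schwarzschild metric fixes A1, B1.
ext_tt = (1.0 - M**2 / (4.0 * R**2))**2 / (1.0 + M / (2.0 * R))**4
ext_rr = (1.0 + M / (2.0 * R))**4
A1 = math.exp(a1 * R**2) * ext_tt
B1 = math.exp(-b1 * R**2) * ext_rr

# Interior potentials at r = R reproduce the exterior values:
assert math.isclose(A1 * math.exp(-a1 * R**2), ext_tt)
assert math.isclose(B1 * math.exp(b1 * R**2), ext_rr)

# Pressure factor b1*(b1 - 2*a1)*R^2 + 2*(b1 - a1) from Eq. (11)
# vanishes at the boundary, i.e. p(R) = 0:
assert abs(b1 * (b1 - 2.0 * a1) * R**2 + 2.0 * (b1 - a1)) < 1e-12
```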
### IV.2 For $c\neq 1$:
Similarly, the matching condition (23) can be rewritten as
$\displaystyle A\left(C_{0}R^{2}+C_{1}\right)^{a_{1}}$ $\displaystyle=$
$\displaystyle\frac{\Big{(}1-{M^{2}/4R^{2}}\Big{)}^{2}}{\Big{(}1+{M/2R}\Big{)}^{4}}~{}~{}~{};~{}~{}B\left(C_{0}R^{2}+C_{1}\right)^{b_{1}}=\Big{(}1+{M\over
2R}\Big{)}^{4}.$ (27)
This leads to
$\displaystyle A$ $\displaystyle=$
$\displaystyle\frac{(M-2R)^{2}\left(C_{0}R^{2}+C_{1}\right)^{-a_{1}}}{(M+2R)^{2}}~{}~{};~{}~{}B=\frac{(M+2R)^{4}\left(C_{0}R^{2}+C_{1}\right)^{-b_{1}}}{16R^{4}},$
(28)
and the vanishing pressure at the boundary $r=R$ gives
$\displaystyle
a_{1}=-\frac{b_{1}\left[(b_{1}+2)C_{0}R^{2}+2C_{1}\right]}{2\left[(b_{1}+1)C_{0}R^{2}+C_{1}\right]}.$
(29)
Further, we have taken $M,~{}R,~{}b_{1},~{}C_{0}$ and $C_{1}$ as free
parameters.
## V Physical acceptability of the solution
The obtained solutions satisfy the following physical acceptability
conditions:
* (a)
The density $\rho$ is finite and positive at $r=0$ and non-increasing towards
the stellar surface, i.e. $d\rho/dr\leq 0$.
* (b)
The pressure is finite and positive at $r=0$ and vanishes at the stellar
surface, i.e., $p(R)=0$.
* (c)
At the very center, it satisfies the Harrison-Zeldovich-Novikov criterion
Harr65 ; zel71 i.e.
$\displaystyle{p(0)\over\rho(0)}$ $\displaystyle=$ $\displaystyle{3b\over
a-b}\leq 1\Rightarrow
4b<a,~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\mbox{(Case
A)}$ (30) $\displaystyle{p(0)\over\rho(0)}$ $\displaystyle=$
$\displaystyle{(b-a)(1-c)\over 3bC_{0}(c-1)}\leq 1\Rightarrow
1+3C_{0}\leq{a\over b}.~{}~{}~{}\mbox{(Case B)}$ (31)
* (d)
The isotropic star satisfies the following energy conditions (see Figs. 1 and
3):
1. Null energy condition (NEC): $\rho>0$
2. Weak energy condition (WEC): $p+\rho>0$
3. Strong energy condition (SEC): $\rho+3p>0$
4. Dominant energy condition (DEC): $\rho-p>0$
* (e)
The solution satisfies the causality condition, i.e., $1\geq dp/d\rho\geq 0$
(see Figs. 2 and 4).
* (f)
The solution also satisfies Bondi's criterion bondi64 , i.e. $\Gamma\geq 4/3$
(see Figs. 2 and 4). Hence, the solution is non-collapsing.
* (g)
The solution satisfies Buchdahl’s bound Buch59 (see Fig. 5).
Figure 5: $M-R$ curves for the solutions case A and B.
## VI Mass-radius relationship
For these two solutions we have generated the $M-R$ curves shown in Fig. 5.
One can see that the solution with $c=1$ can support a more massive object
with a larger radius ($M_{max}=2.00M_{\odot},~{}R=10.95\,km$) than the case B
solution ($c\neq 1$, $M_{max}=1.97M_{\odot},~{}R=9.41\,km$). Further, we have
incorporated the GW 170817 observational constraints, according to which
neutron stars of mass $1.6M_{\odot}$ and $1.4M_{\odot}$ should have radii
larger than $10.68^{+0.15}_{-0.04}\,km$ bau17 and $11.0^{+0.9}_{-0.6}\,km$
cap20 , respectively. From our solutions, a neutron star of mass
$1.6M_{\odot}$ has a radius of $13.73\,km$ (case B) or $14.85\,km$ (case A),
and one of mass $1.4M_{\odot}$ has a radius of $14.77\,km$ (case B) or
$15.68\,km$ (case A). Hence, the solutions satisfy the observed constraints
from the neutron star merger event GW 170817, and we can conclude that these
first interior solutions in $\kappa(\mathcal{R},\mathcal{T})-$gravity are
physically viable. From Fig. 5, it can also be seen that the ratio of mass to
radius is below the Buchdahl limit, signifying that the solutions are
non-collapsing. The $M-R$ curves generated from these solutions are similar
to those of the polytropic/SLy4 parameterizations Cha97 ; Cha98 ; Art17 .
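The Buchdahl-limit statement can be verified with one line of arithmetic per case, using $GM_{\odot}/c^{2}\simeq 1.4766$ km to convert the quoted maximum masses to geometric units:

```python
G_MSUN_KM = 1.4766                     # GM_sun/c^2 in km
BUCHDAHL = 8.0 / 9.0                   # Buchdahl bound on 2M/R

cases = {
    "Case A (c = 1)":  (2.00, 10.95),  # (M_max in M_sun, R in km) from Fig. 5
    "Case B (c != 1)": (1.97, 9.41),
}
for name, (m_sun, r_km) in cases.items():
    compactness = 2.0 * m_sun * G_MSUN_KM / r_km
    print(f"{name}: 2M/R = {compactness:.3f} < 8/9 = {BUCHDAHL:.3f}")
    assert compactness < BUCHDAHL      # both solutions respect the bound
```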
Table 1: Central density, surface density, central pressure and predicted
radii for neutron stars of masses $1.6M_{\odot}$ and $1.4M_{\odot}$.
Case A | Case B
---|---
$\alpha~(km^{2})$ | $\rho_{c}~(MeV/fm^{3})$ | $\rho_{s}~(MeV/fm^{3})$ | $p_{c}~(MeV/fm^{3})$ | $R_{1.6}~(km)$ | $R_{1.4}~(km)$ | $\alpha~(km^{2})$ | $\rho_{c}~(MeV/fm^{3})$ | $\rho_{s}~(MeV/fm^{3})$ | $p_{c}~(MeV/fm^{3})$ | $R_{1.6}~(km)$ | $R_{1.4}~(km)$
500 | 1515.77 | 1273.65 | 58.07 | | | 400 | 1908.95 | 1699.85 | 115.21 | |
520 | 1447.93 | 1213.99 | 55.54 | | | 420 | 1794.53 | 1556.64 | 108.34 | |
540 | 1387.11 | 1157.15 | 53.14 | 14.85 | 15.68 | 440 | 1689.13 | 1473.83 | 101.52 | 13.73 | 14.77
560 | 1328.62 | 1106.38 | 51.00 | | | 460 | 1591.26 | 1395.53 | 95.77 | |
580 | 1274.82 | 1060.76 | 48.87 | | | 480 | 1500.93 | 1324.77 | 90.29 | |
## VII Conclusions
We have presented two exact interior solutions, for the first time, in
$\kappa(\mathcal{R},\mathcal{T})$ gravity. The field equations have been
derived assuming isotropic coordinates, with the metric potentials taken in
Schwarzschild's form. To solve them, we have chosen the running gravitational
constant as $\kappa(\mathcal{R},\mathcal{T})=8\pi-\lambda\mathcal{T}$ along
with the isotropy of the pressure. We have then obtained two solutions, for
$c=1$ and $c\neq 1$. The case A solution is of the Krori-Barua form, while
the case B solution is closer to the Tolman-Finch-Skea form. To fix the
constant parameters, we have matched the interior solution with the exterior
Schwarzschild spacetime in isotropic form, along with the condition
$p(R)=0$. We have used the mass $M$ and radius $R$ as input observational
parameters, along with a few other constants. Further, we have plotted the
variations of the density, pressure, speed of sound, and adiabatic index to
examine the thermodynamic nature of these solutions. We found that they are
non-singular at $r=0$, show non-increasing trends of density and pressure,
obey the causality condition, and satisfy Bondi's criterion. The solutions
also satisfy the energy conditions and can therefore represent a
non-collapsing stellar system. The central values of the density and
pressure, the surface density, and the predicted radii of $1.6M_{\odot}$ and
$1.4M_{\odot}$ neutron stars are given in Table 1. Finally, we have shown
that these solutions satisfy the constraints on the masses and radii of
neutron stars provided by the observation of gravitational waves from the
neutron star merger GW 170817. Hence, we conclude that these solutions are
physically viable.
### Acknowledgments
Farook Rahaman would like to thank the authorities of the Inter-University
Centre for Astronomy and Astrophysics, Pune, India for providing the research
facilities.
University of Delaware, Newark, DE, USA
# A comparison between Automatically versus Manually Parallelized NAS
Benchmarks
Parinaz Barakhshan (ORCID 0000-0001-7232-3923)
Rudolf Eigenmann (ORCID 0000-0003-1651-827X)
###### Abstract
We compare automatically and manually parallelized NAS Benchmarks in order to
identify code sections that differ. We discuss opportunities for advancing
automatic parallelizers. We find ten patterns that pose challenges for current
parallelization technology. We also measure the potential impact of advanced
techniques that could perform the needed transformations automatically. While
some of our findings are not surprising and difficult to attain – compilers
need to get better at identifying parallelism in outermost loops and in loops
containing function calls – other opportunities are within reach and can make
a difference. They include combining loops into parallel regions, avoiding
load imbalance, and improving reduction parallelization.
Advancing compilers through the study of hand-optimized code is a necessary
path to move the forefront of compiler research. Very few recent papers have
pursued this goal, however. The present work tries to fill this void.
###### Keywords:
source-to-source automatic parallelizer · Cetus · NPB Benchmark · manually-parallelized programs · automatically-parallelized programs
## 1 Introduction
Since the end of Dennard scaling [22] at the turn of the millennium, nearly
all computer systems include parallel architectures that are exposed to their
programmers. In the past two decades, we have witnessed a significant increase
in computer applications in nearly all domains of science, engineering,
business, and our daily lives. As a result, the number of program developers
has drastically increased, including many software engineers trained on the
intricacies of parallel computer architectures and applications, but also an
even larger number of non-experts. Tools that help create and efficiently
implement parallel applications on modern architectures are more important
than ever. While the relevance of automatic parallelizers is obvious for non-
expert programmers, the same tools can also greatly benefit the specialists,
assisting them in efficiently performing many of the tedious programming
tasks.
After four decades of research in automatic parallelization, a large number of
techniques have been developed. Nevertheless, automatic parallelization tools
succeed only in about half of today’s science and engineering applications.
And there is little success in many of the business and daily-life
applications, which represent the major part of today’s software. Users of
parallelizers are often frustrated by the unpredictable performance of
automatic tools, which at times degrade the speed below that of the original
program. Manual parallelization is often a necessity, but its complexity and
tediousness make it amenable to only a minority of highly trained experts.
Even for these experts, creating parallel applications is an expensive and
time-consuming task.
Developing tools that automate these tasks is even more challenging. One of
the biggest questions is how to bring about advancements in this area. The
premise of this paper is that we need to study representative applications,
investigate how manual programmers have performed their tasks, compare the
transformations they have applied with those of automatic parallelizers, and
learn from these comparisons how to improve our tools. Amazingly, there are
very few papers that pursue this direction. We will discuss these papers in
the section on related work.
The present paper tries to fill this void. We identify programming patterns
that differ between manually parallelized and auto-parallelized codes, find
the limitations of auto-parallelizers, and suggest improvements for such
tools, so that they generate programs that are closer to hand-optimized code.
We do this by studying the NAS Parallel Benchmark (NPB) applications [14]. The
NPB applications are representative of real-world applications. While their
first release was in 1991, they are continually being modernized and include
codes with irregular code and data patterns. The OpenMP versions of NPB
are used as our hand-parallelized applications, which we compare to the serial
versions parallelized automatically by the Cetus translator [21]. Cetus is an
advanced parallelizer and compiler infrastructure for C programs. We use it to
represent modern parallelization technology.
The remainder of the paper is organized as follows. Section 2 outlines our
experimental design. Section 3 identifies and measures the code sections that
differ between manual and automatic parallelization. Section 4 presents the
main findings, including a description of the code patterns that differ
between automatically and manually parallelized applications, an assessment of
the performance impact of each pattern, and a discussion of opportunities for
compilers. We describe related work in Section 5, followed by conclusions in
Section 6.
## 2 Experimental Design
##### Application Benchmarks:
We use the NAS Parallel Benchmarks NPB 3.3, which includes serial, OpenMP, and
MPI codes for ten applications. The original codes are written in Fortran, but
we use the variants written in C [18]. We evaluate the codes EP, IS, BT, SP,
MG, and CG, which present opportunities for automatic parallelization. For our
experiments, we measure the performance of the applications for input Class A,
which is a small data set, but representative of larger sets, as we will show
in section 3.4. We report the average performance of three program runs.
##### Automatic Parallelization:
We use the Cetus open-source automatic parallelizer, which is a source-to-
source translator for C programs. Cetus represents some of the most advanced
parallelization technology [16, 13, 3], including symbolic program analysis.
It generates OpenMP-annotated C code on output, invoking GCC as a backend code
generator (GCC v4.8.5 with option -O3). We ran Cetus with its default option
to parallelize the codes. Among the major passes applied [4] are range
analysis, points-to and alias analysis, data dependence analysis, data
privatization, induction variable substitution, and reduction parallelization.
This experiment uses Cetus as a representative of current auto-parallelizers.
Cetus has been actively maintained, with recent improvements [3].
##### Platforms:
The key measurements are performed on a four-core system. All CPUs are located
on one NUMA node. Each CPU has a 512 KiB L1d cache and a 512 KiB L1i cache, as
well as a 4 MiB L2 cache. This system provides full access for easy
experimentation; we refer to it as the Interactive System.
To validate our findings on a larger system, we also make use of the
University of Delaware (UD)’s Caviness Cluster [20]. Caviness is a
distributed-memory Linux cluster that was initially deployed in July 2018. A
variety of compute nodes are present with different configurations on this
cluster. Each node consists of multi-core processors (CPUs), memory, and local
disk space. It consists of 126 compute nodes (4536 cores, 24.6 TB memory). The
nodes are built of Intel “Broadwell” 18-core processors in a dual-socket
configuration for 36 cores per node. Experiments on Caviness need to be
submitted via batch queues; we refer to this cluster as the Batch System.
## 3 Examining Differences between Manual and Automatic Parallelization
This section presents overall experimental results. We compare the performance
of the automatically and manually parallelized applications (Section 3.1) and
identify program sections that exhibit differences (Section 3.2) between auto-
and hand-optimized. We also measure the overheads introduced by program
parallelization (Section 3.3). These measurements were taken on the 4-core
Interactive System, introduced in Section 2, using data class A. To validate
our findings on larger data sets and systems, Section 3.4 and Section 3.5 also
measure and discuss the benchmarks for data class B and up to 36 cores, using
the Batch System described in Section 2.
### 3.1 Performance of Auto-Parallelized and Hand-Parallelized Applications
on the interactive system using 1 and 4 cores
Table 1 shows execution times and speedups of the auto-parallelized and hand-
parallelized codes, running on the 4-core Interactive System, using the Class
A data set.
Table 1: Execution times of the auto- and hand-parallelized codes in seconds. Parallel execution is measured on 1 and 4 cores on the Interactive System. The parallel speedup is calculated as the run time of the parallel code on 1 core divided by the run time on 4 cores.

Application | Auto (1 core, s) | Auto (4 cores, s) | Auto Speedup | Manual (1 core, s) | Manual (4 cores, s) | Manual Speedup
---|---|---|---|---|---|---
SP | 417 | 362 | 1.2 | 425 | 110 | 3.8
BT | 414 | 356 | 1.2 | 450 | 116 | 3.8
EP | 86 | 63 | 1.4 | 87 | 22 | 3.9
MG | 35 | 15 | 2.3 | 31 | 8 | 3.8
IS | 8 | 7 | 1.1 | 9 | 3 | 3.0
CG | 12 | 5 | 2.4 | 11 | 3 | 3.7
With auto-parallelization, the applications SP, BT, EP, MG, and CG have gained
noticeable speedup. For the IS application, there is little gain; the code has
not been parallelized substantially due to irregular data patterns.
The hand-parallelized code yields speedups for all applications. On average,
the hand-parallelized codes run 2.5 times faster than the auto-parallelized ones.
### 3.2 Performance of Individual Code Sections
Table 2 on page 2 shows the differences between automatically and manually
parallelized code sections. In addition to the execution times, the table
identifies the differing programming patterns.
Table 2: Differences in individual code sections between auto- and hand-
parallelized applications
App | Loop Name | Auto | Manual | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9
---|---|---|---|---|---|---|---|---|---|---|---|---
CG | main#1-#3 | 2.13 | 0.57 | 1 | 0 | 3 | 0 | 0 | 1 | 0 | 0 | 0
CG | conj_grad#0-#4 | 0.15 | 0.15 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 1 | 1
CG | sparse#6 | 0.03 | 0.01 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
CG | Program | 5.00 | 3.00 | 3 | 0 | 8 | 0 | 0 | 1 | 0 | 1 | 1
IS | create_seq#0 | 3.48 | 0.88 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1
IS | full_verify#0 | 0.12 | 0.05 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1
IS | rank#1-#7 | 0.38 | 0.09 | 1 | 0 | 3 | 1 | 1 | 1 | 0 | 0 | 1
IS | Program | 7.39 | 2.80 | 2 | 1 | 5 | 2 | 2 | 3 | 0 | 0 | 3
MG | rprj3#0 | 0.32 | 0.08 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0
MG | norm2u3#0 | 0.42 | 0.12 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1
MG | comm3#0 | 0.003 | 0.003 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 1 | 1
MG | zran3#0 | 1.77 | 0.43 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1
MG | zran3#1-#3 | 0.25 | 0.03 | 1 | 1 | 3 | 0 | 0 | 0 | 0 | 0 | 1
MG | Program | 15.4 | 8.54 | 4 | 3 | 9 | 0 | 0 | 0 | 0 | 2 | 4
EP | main#0 | 0.002 | 0.007 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0
EP | main#3 | 62.5 | 22.4 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1
EP | Program | 63.0 | 22.0 | 1 | 1 | 2 | 0 | 0 | 2 | 1 | 0 | 1
BT | initialize#0-#7 | 0.44 | 0.12 | 8 | 7 | 8 | 0 | 0 | 0 | 0 | 6 | 0
BT | exact_rhs#0-#4 | 0.52 | 0.14 | 5 | 3 | 5 | 0 | 0 | 1 | 0 | 2 | 0
BT | compute_rhs#0-#10 | 20.5 | 20.4 | 0 | 0 | 11 | 0 | 0 | 0 | 0 | 7 | 0
BT | x_solve#0 | 110 | 31.3 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0
BT | y_solve#0 | 110 | 31.5 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0
BT | z_solve#0 | 113 | 32.2 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0
BT | error_norm#1 | 0.08 | 0.02 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 1
BT | rhs_norm#1 | 0.004 | 0.003 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1
BT | Program | 356 | 116 | 17 | 14 | 29 | 0 | 0 | 4 | 2 | 17 | 2
SP | error_norm#1 | 0.04 | 0.01 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 1
SP | rhs_norm#1 | 0.003 | 0.002 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1
SP | exact_rhs#0-#4 | 0.77 | 0.13 | 1 | 3 | 5 | 0 | 0 | 1 | 0 | 2 | 0
SP | initialize#0-#7 | 0.14 | 0.04 | 1 | 7 | 8 | 0 | 0 | 0 | 0 | 6 | 0
SP | lhsinit#0 | 0.71 | 0.13 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0
SP | lhsinitj#0 | 1.10 | 0.30 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0
SP | compute_rhs#0-#10 | 23.3 | 20.1 | 0 | 0 | 11 | 0 | 0 | 0 | 0 | 7 | 0
SP | x_solve#0 | 87.2 | 20.4 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0
SP | y_solve#0 | 123 | 20.8 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0
SP | z_solve#0 | 123 | 21.4 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0
SP | Program | 362 | 111 | 7 | 14 | 30 | 0 | 0 | 6 | 2 | 17 | 2
* Auto – execution time of the auto-parallelized code (seconds) on 4 cores
* Manual – execution time of the manually parallelized code (seconds) on 4 cores
* P1 – number of loops in which the outermost loop is parallelized; more in Section 4.1
* P2 – number of loops containing function calls inside the region; more in Section 4.2
* P3 – number of loops inside the parallel region; more in Section 4.3
* P4 – dynamic schedule (1 means the pattern is applied); more in Section 4.4
* P5 – irregular patterns such as indirect array accesses (1 means the pattern is applied); more in Section 4.5
* P6 – threadprivate data access (1 means threadprivate data has been accessed); more in Section 4.6
* P7 – array reduction (1 means the pattern is applied); more in Section 4.7
* P8 – number of NOWAIT clauses; more in Section 4.8
* P9 – code modification (1 means the pattern is applied); more in Section 4.10
We number adjacent (nests of) loops in each subroutine from 0. For example,
main#1-#3 indicates the second through the fourth for loops in function main.
Our comparison of auto- and hand-parallelized codes revealed several differing
program patterns. The table shows which of these patterns have been used in
the manually parallelized codes. Section 4 explains each of these patterns in
more detail and quantifies the performance differences they make. We omit code
sections with insignificant performance differences.
We examine the reasons behind the differences between the auto-parallelized
and hand-parallelized execution times in Section 4 and discuss the programming
patterns responsible for these differences.
### 3.3 Overhead of parallel transformations in hand-parallelized
applications
The inserted OpenMP directives (also called compiler pragmas) determine how
the compiler and run-time system transform the code, manage and synchronize
teams of threads and distribute work among the threads. Table 3 shows the
serial and 1-core parallel execution of the hand-parallelized applications,
exhibiting the introduced overhead. Two of the six applications incur
overheads of more than 3% while in two of the applications the performance
improves by 9% and 12%. Some of the sources of overhead are:
Table 3: Parallel overhead in manually optimized codes.

Application | Serial Execution Time (s) | Hand-Parallelized Execution Time (s, 1 core) | Difference
---|---|---|---
SP | 416 | 425 | 3%
BT | 414 | 450 | 9%
EP | 86 | 87 | 1%
MG | 35 | 31 | -12%
IS | 8 | 9 | 11%
CG | 12 | 11 | -9%
* OpenMP in compilers such as GCC, LLVM, and Intel ICC is implemented using outlining [12], which extracts the parallel region into its own function. The extra function call, initiating communication with the helper threads, copying shared variables to the helpers, assigning work shares, and synchronizing the threads all contribute to this overhead.
* In addition, we found that parallel regions that use thread-private data may incur extra overheads, as described in more detail in Section 4.6.
* Moreover, the addition of OpenMP directives can sometimes prevent the compiler from performing certain optimizations.
### 3.4 Application scalability with increasing data set and system
performance
To verify the validity of our findings on larger systems and data sets, we
executed our benchmarks on the Batch System described in Section 2. We used
the same core counts as in the prior sections (1 and 4 cores) but included in
our measurements data set Class B, next to Class A.
Tables 4 and 5 show the code performance on the Batch System using Class A and
Class B data sets, respectively. Using Class A, the hand-parallelized codes
show an average 4-core speedup of 3.6. The corresponding gain for the auto-
parallelized codes is 1.9.
Using the larger class-B data set, the average speedups are close to those for
Class A – 3.7 for manually parallelized and 2.1 for automatically
parallelized. While the hand-parallelized codes achieve a speedup close to the
available core count, four out of the six auto-parallelized codes yield
substantially lower performance. These findings match the conclusions obtained
for Class A on the Interactive System, justifying the use of the more nimble
Interactive System and Class A for our detailed experiments.
Table 4: Performance of auto- and hand-parallelized codes in seconds. Parallel execution is measured on 1 and 4 cores. Parallel speedup is calculated as the parallel run time on 1 core divided by the run time on 4 cores. The measurements use the Class A data set and the Batch System.

Application | Auto (1 core, s) | Auto (4 cores, s) | Auto Speedup | Manual (1 core, s) | Manual (4 cores, s) | Manual Speedup
---|---|---|---|---|---|---
SP | 134 | 72 | 1.9 | 133 | 37 | 3.6
BT | 146 | 126 | 1.2 | 155 | 43 | 3.6
EP | 33 | 23 | 1.4 | 33 | 8.8 | 3.7
MG | 5.7 | 2.1 | 2.7 | 5.7 | 1.5 | 3.7
IS | 0.8 | 0.8 | 1.0 | 0.7 | 0.2 | 3.1
CG | 2 | 0.5 | 3.7 | 2 | 0.5 | 3.7
Table 5: Performance of auto- and hand-parallelized codes in seconds. Parallel execution is measured on 1 and 4 cores. Parallel speedup is calculated as the parallel run time on 1 core divided by the run time on 4 cores. The measurements use the Class B data set and the Batch System.

Application | Auto (1 core, s) | Auto (4 cores, s) | Auto Speedup | Manual (1 core, s) | Manual (4 cores, s) | Manual Speedup
---|---|---|---|---|---|---
SP | 543 | 284 | 1.9 | 556 | 151 | 3.7
BT | 595 | 518 | 1.1 | 638 | 169 | 3.8
EP | 133 | 94 | 1.4 | 129 | 35 | 3.7
MG | 26 | 8.5 | 3.1 | 25.5 | 7 | 3.7
IS | 3.4 | 3 | 1.0 | 3.2 | 0.9 | 3.6
CG | 84 | 22 | 3.9 | 80 | 22 | 3.7
### 3.5 Application Scalability with increasing core counts using Class B
Data set
The previous section shows good efficiency of the hand-parallelized codes on
four cores. The auto-parallelized versions execute at about half that
performance, on average. This section tests the scalability of programs to up
to 36 cores, using the Class B data set.
(a) Auto-parallelized codes
(b) Hand-parallelized codes
Figure 1: Comparing the execution time of auto-parallelized and hand-
parallelized codes of Class B data-set using different core counts
(a) Auto-parallelized codes
(b) Hand-parallelized codes
Figure 2: Comparing the speedup of the auto-parallelized and hand-parallelized
codes of Class B data-set on different core counts
Figures 1 and 2 show the code execution times and speedups, respectively, with
increasing core counts. Auto-parallelized codes cannot efficiently utilize the
available cores. Except for CG and MG, adding more than 4 cores does not
increase the speed. When increasing the number of cores beyond 32, the
performance gain is not significant and may even worsen. For the hand-
parallelized codes, as the number of cores increases from 32 to 36, the
performance continues to improve, except for MG. In general, the hand-
parallelized benchmarks make efficient use of the available cores.
Overall, the qualitative findings of the 4-core measurements remain the same:
While the hand-parallelized codes show good scalability, the performance of
the auto-parallelized versions is limited. That limitation increases with
higher core counts. Hence, the importance of the techniques presented in the
next section, which aim to bring the efficiency of the auto-parallelized codes
closer to that of the manually parallelized versions, will be even greater than
suggested by the 4-core evaluation.
## 4 Code Patterns, Performance Impact, Opportunities for Compilers
We now analyze the program sections in each benchmark that differ between the
manually and automatically parallelized versions. We have identified ten
programming patterns that represent these differences. In rough order of
importance, we first explain the pattern, followed by assessing the
performance impact of enabling/disabling the pattern. We then discuss the
potential for improving compilers to implement the pattern. We quantify the
potential impact of these techniques on our 4-core Interactive System. As
mentioned at the end of Section 3, these numbers represent lower bounds, which
will increase for systems with higher core counts.
### 4.1 Parallelizing nested loops at the outermost level possible
#### 4.1.1 The Pattern
It is well understood that outermost parallelism yields the best performance
and that automatic parallelization may fail to do so, finding parallelism in
inner loops, only. Running outermost loops in parallel minimizes the number of
parallel loop invocations, and the associated fork/join overheads, including
implicit barriers at the loop end. Not surprisingly, we found several program
sections that differed between manually and automatically parallelized in this
way. Among the causes were irregular data accesses, function calls (discussed
later), or dependences that could not be disproven.
#### 4.1.2 Performance Impact
Subroutines x_solve(), y_solve(), and z_solve() in programs BT and SP are
examples of compute-intensive functions that are not parallelized at the
outermost level by the Cetus compiler – due to function calls present in the
outermost loop. Table 6 shows the differences between the auto- and hand-
parallelized code execution times.
Table 6: Impact of parallelizing the outermost loop in nested loops, comparing the execution time of the Cetus-parallelized code with the manually parallelized code.

Application | Loop Name | Technique Not Applied (s) | Technique Applied (s) | Impact
---|---|---|---|---
BT | x_solve#0 | 110 | 31 | 255%
BT | y_solve#0 | 110 | 31 | 255%
BT | z_solve#0 | 113 | 32 | 253%
BT | Program | 356 | 116 | 206%
SP | x_solve#0 | 87 | 20 | 335%
SP | y_solve#0 | 123 | 21 | 486%
SP | z_solve#0 | 123 | 21 | 486%
SP | Program | 362 | 111 | 226%
#### 4.1.3 Opportunities for Compilers
The presence of function calls requires inter-procedural analysis or inline
expansion capabilities, which we will discuss in the following subsection.
Irregular access patterns have long been a challenge for compilers, with both
run-time and recent compile-time approaches pursuing improvements. For
disproving data dependences, we have often found that the opportunity is in
the propagation of information across the program (such as interprocedural
symbolic analysis) rather than in increasing the power of data dependence
tests themselves.
### 4.2 Parallelizing loops containing function calls
#### 4.2.1 The Pattern
Most auto-parallelizers, including Cetus, do not consider loops with function
calls or I/O statements for parallelization, unless those functions are known
to be side-effect free. Our study found many examples in which a function call
inside a loop prevented auto-parallelization. The same loops were parallelized
in the manually parallelized codes.
Inline expansion, which replaces a function call with the body of the called
subroutine, can help parallelize such patterns. Users of the Cetus compiler
have that option available. We will measure the effect of doing so, next.
#### 4.2.2 Performance Impact
We performed an experiment to determine how much parallelization is enabled
through inline expansion in Cetus-parallelized codes. Table 7 shows the
result.
Table 7: Impact of inlining on parallelizing automatically parallelized codes; comparing the execution time of auto-parallelized codes before and after inlining.

App | Loop Name | Inlined Functions | Parallelized after Inlining? | Technique Not Applied (s) | Technique Applied (s) | Impact
---|---|---|---|---|---|---
BT | initialize#0-#7 | 9 | Yes¹ | 0.44 | 0.14 | 214%
BT | exact_rhs#0-#4 | 3 | Yes² | 0.52 | 0.63 | -17%
BT | x_solve#0 | 8 | No | 110 | 122 | -10%
BT | y_solve#0 | 8 | No | 110 | 123 | -11%
BT | z_solve#0 | 8 | No | 113 | 124 | -9%
BT | error_norm#1 | 1 | Yes³ | 0.08 | 0.03 | 167%
BT | Program | | | 356 | 395 | -10%
SP | initialize#0-#7 | 9 | Yes¹ | 0.14 | 0.04 | 250%
SP | exact_rhs#0-#4 | 3 | Yes² | 0.77 | 0.7 | 10%
SP | x_solve#0 | 1 | No | 87 | 87 | 0%
SP | y_solve#0 | 1 | No | 123 | 124 | -1%
SP | z_solve#0 | 1 | No | 123 | 123 | 0%
SP | error_norm#1 | 1 | Yes³ | 0.04 | 0.03 | 33%
SP | Program | | | 362 | 377 | -4%

* ¹ In nested loop structures, the outermost loops are parallelized. In the manually parallelized code, however, all parallel loops are enclosed in a parallel region.
* ² In nested loop structures, inner loops are parallelized.
* ³ While the outermost loop is parallelized, the array reduction implementation differs from the hand-parallelized code, as discussed later.
We found that auto-parallelization indeed could detect additional parallelism
in several of the loops in question after applying inlining. As displayed in
Table 7, subroutine initialize() in both BT and SP shows significant
performance gain due to parallelization of the outermost loops. However, in
exact_rhs(), the transformation led to performance degradation. While
additional loops could be parallelized, these were inner loops where
parallelization was not profitable. What’s more, the most compute-intensive
loops, in subroutines x_solve(), y_solve(), and z_solve() of both
applications, remained unaffected by inline expansion, as Cetus is still
unable to disprove data dependences.
#### 4.2.3 Opportunities for Compilers
Despite studies on interprocedural analysis (IPA) that have been carried out
for more than three decades, IPA is not available in most compilers. Among the
reasons are the complexity of the technique, the fact that most analyses need
specialized IPA algorithms, and the resulting increase in compilation times.
By interacting with the user, it is possible to identify user functions that
have no side effects and add them to the default list of side-effect-free
functions, which consist primarily of math functions. Other opportunities
include selective subroutine inline expansion during the compiler analysis
only. The former technique could identify additional parallelism with user
input, while the latter would eliminate overheads, such as excessive code
growth.
### 4.3 Parallel regions enclosing multiple parallel loops
#### 4.3.1 The Pattern
In OpenMP, multiple adjacent parallel loops can be converted into a parallel
region. This way, the parallel threads are spawned only once, at the beginning
of the region, reducing fork/join overhead. The original parallel loops will
become worksharing constructs, which simply distribute their iterations onto
the available threads. In some cases, the programmers had inserted NOWAIT
clauses to eliminate barrier synchronizations at the end of the worksharing
constructs. In the hand-parallelized codes, we found this pattern frequently.
By contrast, auto-parallelizers, including Cetus, typically examine and
parallelize loops individually.
#### 4.3.2 Performance Impact
We have measured the impact of such a technique by converting the hand-
parallelized programs to variants without parallel regions. The loops inside
the regions were changed to individual parallel loops. Note that doing so also
forces a barrier synchronization at the end of each parallel loop. The results
are presented in Table 8.
Table 8: Impact of enclosing multiple parallel loops in a parallel region – comparing the execution times of the code containing individual parallel loops with the hand-optimized code containing a parallel region.

App | Loop Name | Number of Loops | Technique Not Applied (s) | Technique Applied (s) | Impact
---|---|---|---|---|---
MG | comm3#0-#2 | 3 | 0.003 | 0.003 | 0%
MG | zran3#1-#3 | 3 | 0.034 | 0.033 | 4%
MG | Program | 6 | 9 | 8.5 | 6%
BT | initialize#0-#7 | 8 | 0.124 | 0.116 | 7%
BT | exact_rhs#0-#4 | 5 | 0.167 | 0.142 | 18%
BT | compute_rhs#0-#10 | 11 | 0.117 | 0.108 | 8%
BT | Program | 24 | 129 | 116 | 11%
#### 4.3.3 Opportunities for Compilers
Developing transformations that combine adjacent parallel loops into a
parallel region seems feasible in some situations, but we are not aware of
auto-parallelizers that do so. In other cases, creating parallel regions can
be challenging because sequential code sections may be present between the
parallel loops. There exists work on eliminating barrier synchronization,
which can be incorporated into such techniques.
### 4.4 Avoiding load imbalance through dynamic scheduling
#### 4.4.1 The Pattern
Load imbalance can be caused by the uneven distribution of work across worker
threads. Loop scheduling defines chunks of loop iterations and their
distribution onto the threads. In loops that are prone to uneven workload, due
to conditional execution or work that depends on the iteration number, loop
scheduling can affect performance noticeably. Two schedule clauses offered by
OpenMP for resolving load imbalance are dynamic and guided. They make
scheduling decisions at run-time, assigning chunks of iterations to idle
threads.
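A minimal sketch of the pattern (the loop and its workload are illustrative, not taken from the benchmarks): a triangular loop nest whose per-iteration work grows with the iteration number, so a static schedule loads the threads unevenly; `schedule(dynamic, 8)` instead hands out chunks of 8 iterations to whichever thread becomes idle.

```c
#include <assert.h>

#define ROWS 256

static double row_sum[ROWS];

/* Triangular workload: iteration i performs i+1 units of work, so the
   default static schedule would load the highest-numbered chunks far
   more heavily.  Dynamic scheduling assigns chunks to idle threads. */
void triangular_sums(void) {
    #pragma omp parallel for schedule(dynamic, 8)
    for (int i = 0; i < ROWS; i++) {
        double s = 0.0;
        for (int j = 0; j <= i; j++)    /* work grows with i */
            s += 1.0;
        row_sum[i] = s;
    }
}
```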
The developers of the hand-parallelized codes have made use of these clauses.
By contrast, the Cetus compiler currently does not change the default loop
schedule, which is the static distribution of an equal share of iterations to
all worker threads.
#### 4.4.2 Performance Impact
We have found some loops where loop scheduling made a substantial difference.
Table 9 shows two such loops in the IS program. The improvements in code
performance of 43% and 17%, respectively, translate to a noticeable overall
program speed improvement of 6%. The impact of dynamic scheduling on the whole
application is significant, as the rank function is invoked multiple times.
Table 9: Impact of adding dynamic scheduling. The execution time of the code when scheduling is disabled is compared to the execution time of the manually parallelized code.

Application Name | Loop Name | Technique Not Applied | Technique Applied | Impact
---|---|---|---|---
IS | full_verify#0 | 0.07 | 0.05 | 43%
IS | rank#1-#7 | 0.10 | 0.09 | 17%
IS | program | 2.14 | 2.02 | 6%
#### 4.4.3 Opportunities for Compilers
Program and loop workloads are affected by both program and execution
characteristics. Dynamic factors, such as external programs in shared
machines, and conditional executions guided by input data, are difficult to
assess. However, the compiler can analyze programs for conditional execution
patterns that may depend on input data, iteration numbers that tend to load
threads unevenly, and inner loops whose workload depends on outer loop
iterations (e.g., triangular loops).
### 4.5 Analyzing Irregular Data Access Patterns in Hand-Parallelized Codes
#### 4.5.1 The Pattern
Applications that have irregular data access patterns with complex code
structures prevent auto-parallelizers from succeeding.
The IS application exhibits such patterns. The loops full_verify#0, rank#0,
rank#2, rank#4, and rank#6 include indirect array accesses, which prevent
Cetus from detecting parallelism.
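The difficulty can be sketched with a hypothetical histogram-style loop (not taken from IS): the written element `count[key[i]]` is a subscripted subscript, so whether two iterations conflict depends on the runtime contents of `key[]`, and a compile-time dependence test must conservatively assume a conflict. A programmer who knows the access pattern can still parallelize it, guarding the update:

```c
#include <assert.h>

#define NKEYS    1024
#define NBUCKETS 16

static int key[NKEYS];
static int count[NBUCKETS];

void fill_keys(void) {
    for (int i = 0; i < NKEYS; i++)
        key[i] = i;
}

/* Indirect (subscripted-subscript) access: two iterations may or may not
   update the same count[] element, depending on key[] at run time, so
   static analysis cannot prove independence.  The atomic construct makes
   the manual parallelization correct regardless. */
void histogram(void) {
    #pragma omp parallel for
    for (int i = 0; i < NKEYS; i++) {
        #pragma omp atomic
        count[key[i] % NBUCKETS]++;
    }
}
```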
#### 4.5.2 Performance Impact
Table 10 reports execution times of these loops in the IS application when
they are not parallelized by the auto-parallelizer due to such patterns, and
when they are parallelized in the manually parallelized code.
Table 10: Impact of parallelizing irregular patterns. The execution time of the auto-parallelized code, where irregular patterns remain serial, is compared to the execution time of the manually parallelized code, where the same loops are parallelized.

Application Name | Loop Name | Technique Not Applied | Technique Applied | Impact
---|---|---|---|---
IS | full_verify#0 | 0.12 | 0.05 | 135%
IS | rank#1-#7 | 0.38 | 0.09 | 318%
IS | program | 7.39 | 2.80 | 163%
#### 4.5.3 Opportunities for Compilers
Loops containing subscripted subscripts are among the most complex patterns
for compilers to analyze. A number of run-time techniques have been developed,
such as run-time data-dependence tests and inspector-executor schemes. Recent
work has also begun to develop compile-time techniques based on the
observation that, in some cases, the information needed to prove the absence
of data dependences is present in the application program [3, 5].
### 4.6 Threadprivate Data
#### 4.6.1 Pattern Explanation
The OpenMP threadprivate directive specifies that variables are replicated,
with a copy being kept in each thread. It privatizes static or global
variables that are modified by multiple parallel regions. Threadprivate
variables persist across regions. The manually parallelized benchmarks make
use of this concept in a number of program sections.
Auto-parallelizers, including Cetus, do not create threadprivate data. Data
that must be replicated across threads and persist across parallel regions or
loops has to be implemented through data expansion or by copying region/loop-
private data in and out – sometimes via first/last-private clauses.
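A minimal sketch of the directive (the counter variable is hypothetical): `threadprivate` replicates a file-scope variable per thread, and the copies persist from one parallel region into the next. OpenMP guarantees this persistence only if the team size is unchanged and dynamic thread adjustment is disabled; `copyin` seeds every copy from the master thread's value.

```c
#include <assert.h>

static int per_thread_count;
#pragma omp threadprivate(per_thread_count)

/* First region accumulates per-thread partial counts; second region
   combines the persistent copies (assumes an unchanged team size). */
int count_items(int n) {
    int total = 0;
    per_thread_count = 0;

    #pragma omp parallel copyin(per_thread_count)
    {
        #pragma omp for
        for (int i = 0; i < n; i++)
            per_thread_count += 1;  /* no sharing: each thread owns a copy */
    }

    #pragma omp parallel
    {
        #pragma omp atomic
        total += per_thread_count;  /* add each thread's persistent copy */
    }
    return total;
}
```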
#### 4.6.2 Performance Impact
We measured the impact of using threadprivate data by considering those
program sections where conversion to loop-private data was possible. We
compared the performance of the variants without threadprivate data (loop-
private data only) with the hand-parallelized variants, which use
threadprivate data.
The result was unexpected. Table 11 shows that using threadprivate
data lowers the performance in all of our cases. The compute-intensive loops
in BT, subroutine x/y/z_solve, see a 25% performance reduction. The superior
performance of regions without the use of threadprivate data is consistent
with the findings of others [11], who attribute this effect to inefficient
OpenMP implementations.
We did not measure other program sections where additional programming steps
would be necessary for the transformation to region/loop-private data. In
these cases, the additional steps would likely add overhead, making the
threadprivate variant more desirable.
Table 11: Impact of using Threadprivate directives. We compare the execution time of the code where threadprivate data is replaced by loop-private data, with the execution time of the manually parallelized code.

Application Name | Loop Name | Technique Not Applied | Technique Applied | Impact
---|---|---|---|---
EP | main#0 | 0.003 | 0.006 | -50%
EP | main#3 | 20.93 | 22.40 | -7%
EP | Program | 21.0 | 22.5 | -7%
BT | exact_rhs#0-#4 | 0.055 | 0.142 | -61%
BT | x_solve#0 | 23.43 | 31.24 | -25%
BT | y_solve#0 | 23.63 | 31.51 | -25%
BT | z_solve#0 | 24.45 | 32.17 | -24%
BT | Program | 93 | 116 | -20%
#### 4.6.3 Opportunities for Compilers
Identifying threadprivate variables would involve analyses similar to current
data privatization, combined with liveness analysis across loops. While this
appears feasible, careful consideration will need to be given to
profitability, so as to avoid situations with negative impacts.
### 4.7 Array Reductions
#### 4.7.1 The Pattern
Reduction operations are parallelized as follows: Each thread concurrently
performs a reduction operation on the assigned loop iterations, creating
partial results, followed by a step that combines the partial results. We have
found differences in this combination step. For array reductions, the hand-
parallelized versions perform the needed mutual exclusion operation on each
element individually, using an OpenMP atomic construct. By contrast, the Cetus
implementation performs the mutual exclusion for the entire array, by means of
a critical section. This is the major difference, aside from a minor variation
in how the data is allocated.
The following example is taken from the BT, rhs_norm() subroutine. A reduction
optimization can be applied to the rms array shown in Listing 1, on line 6,
using a (+) reduction operator.
⬇
1 for (k = 1; k <= grid_points[2]-2; k++) {
2 for (j = 1; j <= grid_points[1]-2; j++) {
3 for (i = 1; i <= grid_points[0]-2; i++) {
4 for (m = 0; m < 5; m++) {
5 add = rhs[k][j][i][m];
6 rms[m] = rms[m] + add*add;
7 }
8 }
9 }
10 }
Listing 1: Example taken from the BT rhs_norm() procedure in which an array
reduction technique can be applied
Listing 2 shows how Cetus implements array reduction. On line 3, the
temporary reduction array, _reduce_ , is dynamically allocated in shared space
and initialized by the loop on line 5. The array is used on line 20 to perform
the partial reduction. On line 26, a Critical construct is used to protect the
concurrent update of _rms_ by multiple threads. The critical section encloses
a loop that will be executed by each thread sequentially.
⬇
1 #pragma omp parallel
if((10000<(((((((-43L+(92L*grid_points[0L]))+(86L*grid_points[1L]))+(89L*grid_points[2L]))+((-46L*grid_points[0L])*grid_points[1L]))+((-46L*grid_points[0L])*grid_points[2L]))+((-43L*grid_points[1L])*grid_points[2L]))+(((23L*grid_points[0L])*grid_points[1L])*grid_points[2L]))))
private(add, i, j, k, m)
2 {
3 double * reduce = (double * )malloc(5*sizeof (double));
4 int reduce_span_0;
5 for (reduce_span_0=0; reduce_span_0<5; reduce_span_0 ++ ) {
6 reduce[reduce_span_0]=0;
7 }
8
9 #pragma loop name rhs_norm#1
10 #pragma omp for
11 for (k=1; k<=(grid_points[2]-2); k ++ ) {
12 #pragma loop name rhs_norm#1#0
13 for (j=1; j<=(grid_points[1]-2); j ++ ) {
14 #pragma loop name rhs_norm#1#0#0
15 for (i=1; i<=(grid_points[0]-2); i ++ ) {
16 #pragma loop name rhs_norm#1#0#0#0
17 for (m=0; m<5; m ++ )
18 {
19 add=rhs[k][j][i][m];
20 reduce[m]=(reduce[m]+(add*add));
21 }
22 }
23 }
24 }
25
26 #pragma omp critical
27 {
28 for (reduce_span_0=0; reduce_span_0<5; reduce_span_0 ++ ) {
29 rms[reduce_span_0]+=reduce[reduce_span_0];
30 }
31 }
32 }//end of parallel region
Listing 2: Cetus implementation of the array reduction
Listing 3 demonstrates how array reduction is implemented in the
hand-parallelized code for the same example. This implementation differs from
the previous one in the following ways:
* •
Using a privatized version of the reduction array rms_local, instead of a
dynamically allocated array.
* •
On line 7, a NOWAIT clause is used to omit the barrier at the end of the for
loop, allowing each thread to immediately proceed to the update step.
* •
Using an atomic construct inside the for loop on line 18 to combine the
partial results.
⬇
1 double rms_local[5];
2 #pragma omp parallel default(shared) private(i,j,k,m,add,rms_local)
shared(rms)
3 {
4 for (m = 0; m < 5; m++) {
5 rms_local[m] = 0.0;
6 }
7 #pragma omp for nowait
8 for (k = 1; k <= grid_points[2]-2; k++) {
9 for (j = 1; j <= grid_points[1]-2; j++) {
10 for (i = 1; i <= grid_points[0]-2; i++) {
11 for (m = 0; m < 5; m++) {
12 add = rhs[k][j][i][m];
13 rms_local[m] = rms_local[m] + add*add;
14 }
15 }
16 }
17 }
18 for (m = 0; m < 5; m++) {
19 #pragma omp atomic
20 rms[m] += rms_local[m];
21 }
22 } //end of parallel region
Listing 3: Array reduction implemented in hand-parallelized code
#### 4.7.2 Performance Impact
We compared the two variants, individual element synchronization (manual) and
overall synchronization (automatic), by replacing two of the array reduction
patterns in the hand-parallelized codes with the Cetus implementation scheme.
We measured the execution time of those code sections and the entire program.
Table 12 shows a significant performance impact on the two loops. The effect
on the overall programs is minor, as loop rhs_norm#1 is small and executed
once per application in both SP and BT. In general, however, as reduction
operations can show up in compute-intensive sections of programs, the impact
may be much larger.
Table 12: The table compares the performance of the Cetus-applied array-reduction transformation versus the manually applied technique in the hand-parallelized codes.

Application Name | Loop Name | Technique Not Applied (Cetus) | Technique Applied (Manual) | Impact
---|---|---|---|---
BT | rhs_norm#1 | 0.005 | 0.003 | 66%
BT | program | 117 | 116 | 1%
SP | rhs_norm#1 | 0.006 | 0.002 | 200%
SP | program | 112 | 111 | 1%
#### 4.7.3 Opportunities for Compilers
Compilers can easily transform array reductions in either of the described
variants. The choice of the best implementation depends on several factors,
including how efficiently the OpenMP atomic directive is implemented. If the
implementation simply uses a critical section (which we have seen in some
OpenMP libraries), the current Cetus transformation likely performs the same
or better. This calls for compilers to have knowledge of architectural and
platform parameters, which we discuss further in Section 4.9 on conditional
parallelization.
### 4.8 NOWAIT – Eliminating Barrier Synchronizations
#### 4.8.1 The Pattern
A barrier is implicitly included at the end of some OpenMP constructs,
including parallel, for, and single constructs. This barrier is the safe
default so that all threads have completed their share of work before
proceeding with the execution. This synchronization is not needed if threads
do not access data previously operated on by a different thread, or on the
last worksharing loop inside a parallel region, where the region's closing
barrier follows anyway. The OpenMP NOWAIT clause eliminates the barrier.
NOWAIT clauses have been inserted on many parallel loops inside the parallel
regions of the hand-parallelized programs, removing substantial overhead. The
auto-parallelized codes do not include such techniques.
#### 4.8.2 Performance Impact
In order to test the performance impact of the technique, we have created
program variants of hand-parallelized codes with removed NOWAIT clauses. Table
13 compares these codes with the hand-parallelized variants. The
impact in most of the programs is only about 1%, even though individual loops
see a gain of up to 16%. It is likely that the impact would increase with a
larger number of threads (recall that we use four) or in programs/loops with
imbalanced load.
#### 4.8.3 Opportunities for Compilers
Compile-time techniques for barrier elimination have been explored.
Engineering them into available compilers is still an opportunity to be
seized. Given the relatively minor impact, other techniques may need to be
prioritized. Note also that this technique is related to enclosing loops in a
parallel region.
Table 13: Impact of Eliminating Barrier Synchronization: Execution times of removed versus present NOWAIT clauses in hand-parallelized codes.

Application Name | Loop Name | Number of NOWAIT Clauses | Technique Not Applied | Technique Applied | Impact
---|---|---|---|---|---
CG | Conj_grad#0-4 | 1 | 0.173 | 0.149 | 16%
CG | program | 1 | 3 | 2.98 | 1%
MG | norm2u3#0 | 1 | 0.121 | 0.119 | 2%
MG | comm3#0 | 1 | 0.003 | 0.003 | 0%
MG | program | 2 | 8.7 | 8.5 | 2%
BT | initialize#0-#7 | 6 | 0.116 | 0.116 | 0%
BT | exact_rhs#0-#4 | 2 | 0.142 | 0.142 | 0%
BT | compute_rhs#0-#10 | 7 | 21.343 | 20.345 | 5%
BT | error_norm#1 | 1 | 0.019 | 0.019 | 0%
BT | rhs_norm#1 | 1 | 0.003 | 0.003 | 0%
BT | program | 17 | 117 | 116 | 1%
SP | initialize#0-#7 | 6 | 0.043 | 0.043 | 0%
SP | exact_rhs#0-#4 | 2 | 0.133 | 0.132 | 1%
SP | compute_rhs#0-#10 | 7 | 21.452 | 20.050 | 7%
SP | error_norm#1 | 1 | 0.012 | 0.011 | 9%
SP | rhs_norm#1 | 1 | 0.002 | 0.002 | 0%
SP | program | 17 | 111.5 | 110.5 | 1%
### 4.9 Conditional Parallelization
#### 4.9.1 The Pattern
A technique present in auto-parallelized codes, but not in the manual
variants, is the conditional parallelization of loops. The Cetus compiler
estimates the workload of loops, as the product of the number of statements
and iterations. It parallelizes only those with high workloads. If the
estimate is an expression that cannot be computed at compile time, it uses
OpenMP’s conditional parallelization clause with an if condition that the
expression exceeds a threshold. The manually parallelized programs do not use
such conditional parallelization. One can expect that conditional
parallelization benefits programs with small data sets, as some of the loops
will not beneficially run in parallel.
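The clause-based mechanism can be sketched as follows (the threshold value of 10000 is illustrative, not the actual Cetus estimate): the OpenMP `if` clause runs the loop in parallel only when the runtime workload estimate exceeds the threshold; otherwise the loop executes serially, avoiding the parallelization overhead for small trip counts.

```c
#include <assert.h>

/* Conditional parallelization: the if clause defers the
   parallel-vs-serial decision to run time. */
double dot(const double *a, const double *b, int n) {
    double s = 0.0;
    #pragma omp parallel for reduction(+:s) if(n > 10000)
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

static double ones[4] = {1.0, 1.0, 1.0, 1.0};
```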
#### 4.9.2 Performance Impact
We have found some conditionally parallelized loops that are too small to run
in parallel beneficially. But their impact on the overall programs was very
small, even when they were executed in parallel, as these loops do not take
significant execution time. We have also found some loops that Cetus did not
parallelize since they were not profitable. While conditional parallelization
adds a small runtime overhead, due to the if clause added to the OpenMP
directive that checks for exceeding a threshold, the technique is generally
beneficial.
To estimate the overhead conditional parallelization may add to the execution
time of the loop, we measured the execution time of the rhs_norm#1 loop of the
BT application with and without conditional parallelization in the Cetus-
parallelized code. The results of this measurement are presented in Table 14.
Table 14: Impact of conditional parallelization; a comparison is made between the execution time of the code with and without conditional parallelization in the Cetus-parallelized code.

Application Name | Loop Name | Technique Applied | Technique Disabled | Impact
---|---|---|---|---
BT | rhs_norm#1 | 0.004 | 0.003 | 33%
#### 4.9.3 Opportunities for Compilers
Like many other optimizers, Cetus uses a crude profitability model. There is
much room for improving these models to estimate the execution time of loops
or program sections under various optimization variants.
### 4.10 Code Modifications in Hand-Parallelized Codes
#### 4.10.1 The Pattern
Our comparison of hand-parallelized and auto-parallelized versions of the
codes revealed additional modifications that were made to the hand-
parallelized codes. They include:
* •
Enclosing a mix of parallel loops and serial sections in a parallel region. An
example of this is in the CG application, loops conj_grad#0-#4.
* •
Changing the scope of variables to enable the creation of parallel regions. An
example of this is in the CG application, conj_grad() subroutine.
* •
Explicitly mapping tasks or iterations to threads. An example is the
create_seq#0 loop in the IS application.
* •
Resolving some dependences (mostly output dependences) to enable
parallelization. An example is the IS application’s full_verify#0 loop.
* •
Improving cache performance. An example is the rank() subroutine in the IS
application.
* •
Merging loops that perform similar tasks. An example is the MG application’s
comm3#0 loop.
#### 4.10.2 Performance Impact
These modifications were often applied to enable later parallelization steps.
Their impact was thus difficult to isolate. In general, they contributed
significantly to the good performance of the parallel applications.
#### 4.10.3 Opportunities for Compilers
While, at a high level, the mentioned modifications seem automatable, they
tend to make use of application-specific knowledge. They are thus non-trivial
to implement with general benefit.
## 5 Related Work
Many studies have been conducted on the NAS Parallel Benchmarks, including
analyses, evaluation, parallelization, and tuning, but no comparison has been
made between automatically and manually parallelized codes. Some studies do
propose improvements to auto-parallelizers, based upon limitations of such
tools that they have encountered in their experiments. The following are
studies in this regard.
A study by Prema et al. [15], comparing different auto-parallelizers such as
Cetus, Par4all [1], Pluto [7], Parallware [10], ROSE [17], and the Intel C++
Compiler (ICC) [19] on the NAS Parallel Benchmarks (NPB), finds that
auto-parallelizers have limitations that programmers should be aware of, so
that they can intervene manually if necessary during parallelization. The
study proposes the development of an interactive environment that highlights
the difficulties encountered while parallelizing loops.
Blume [6] and Eigenmann et al. [9] discuss the successes and limitations of
auto-parallelizers, based on a study performed on the Perfect Benchmarks. A
modified version of the KAP restructurer and the VAST restructurer were used
as representatives of parallelizing compilers for Fortran programs. Based on
the limitations of auto-parallelizers at that time, this study proposes new
techniques.
Dave et al. [8] have measured the serial performance as well as the
performance of the manually parallelized codes on a subset of the NAS Parallel
and SPEC OMP2001 benchmarks. In contrast to the present paper, their
experiment compared the performance of these programs with that of auto-tuned
codes.
A distinguishing feature of the present paper is that it compares auto-
parallelized with hand-parallelized codes in order to identify opportunities
for compiler improvements. Performance differences attributable to the
identified program patterns are also measured, so as to quantify their
importance for future compiler developments. Our proposed improvements could
help auto-parallelizers reach performance approaching that of hand-
parallelized code.
## 6 Conclusion
We compared how expert programmers parallelize programs with how automatic
parallelization does the same. The goal is to learn how to improve auto-
parallelizers so as to approach hand-optimized performance. We believe that
such studies are essential to push the forefront of research in compilers for
parallel computing.
Currently, auto-parallelized codes are not as efficient as hand-parallelized
codes. Our analysis of a subset of the NAS Parallel benchmarks found that
auto-parallelized codes perform better than serial codes in many programs, but
hand-parallelized codes perform significantly better. We have identified the
code sections and program patterns where differences occur and measured their
performance impact.
Additionally, we found examples in which hand-parallelized codes performed
better after the use of threadprivate data was undone.
Among the patterns that showed the biggest performance differences were:
Parallelizing the outermost loop in nested loops, parallelizing loops that
enclose function calls, parallelizing loops with irregular patterns, enclosing
loops in parallel regions, applying dynamic scheduling to cases with
imbalanced loads, and using NOWAIT clauses to eliminate implicit barriers.
Opportunities for advancing compilers exist in several areas, including
advanced analysis techniques, improved transformation techniques, and
efficient OpenMP code generation. These opportunities are explained along with
the differentiating patterns that have been identified. The findings of this
study will be used to guide future developments of the Cetus parallelizing
compiler platform and the interactive Cetus parallelizer (iCetus) [2].
#### 6.0.1 Acknowledgements
This work was supported by the National Science Foundation (NSF) under Awards
Nos. 1931339, 2209639, and 1833846.
## References
* [1] Amini, M., Creusillet, B., Even, S., Keryell, R., Goubier, O., Guelton, S., McMahon, J.O., Pasquier, F.X., Péan, G., Villalon, P., et al.: Par4all: From convex array regions to heterogeneous computing. In: 2nd International Workshop on Polyhedral Compilation Techniques, Impact (Jan 2012) (2012)
* [2] Barakhshan, P., Eigenmann, R.: iCetus: A semi-automatic parallel programming assistant. In: Li, X., Chandrasekaran, S. (eds.) Languages and Compilers for Parallel Computing. p. 18–32. Springer International Publishing, Cham (2022)
* [3] Bhosale, A., Barakhshan, P., Rosas, M.R., Eigenmann, R.: Automatic and interactive program parallelization using the cetus source to source compiler infrastructure v2.0. Electronics 11(5), 809 (2022)
* [4] Bhosale, A., Barakhshan, P., Rosas, M.R., Eigenmann, R.: The Cetus Compiler Manual (2022), https://sites.udel.edu/cetus-cid/the-cetus-compiler-manual/
* [5] Bhosale, A., Eigenmann, R.: On the automatic parallelization of subscripted subscript patterns using array property analysis. In: Proceedings of the ACM International Conference on Supercomputing. pp. 392–403 (2021)
* [6] Blume, W.J.: Success and Limitations in Automatic Parallelization of the Perfect Benchmarks Programs. Master’s thesis, Univ. of Illinois at Urbana-Champaign, Center for Supercomputing Res. & Dev. (July 1992)
* [7] Bondhugula, U., Hartono, A., Ramanujam, J., Sadayappan, P.: A practical automatic polyhedral parallelizer and locality optimizer. In: Proceedings of the 29th ACM SIGPLAN Conference on Programming Language Design and Implementation. p. 101–113. PLDI ’08, Association for Computing Machinery, New York, NY, USA (2008). https://doi.org/10.1145/1375581.1375595, https://doi.org/10.1145/1375581.1375595
* [8] Dave, C., Eigenmann, R.: Automatically tuning parallel and parallelized programs. In: Languages and Compilers for Parallel Computing. p. 126–139 (2010)
* [9] Eigenmann, R., Hoeflinger, J., Padua, D.: On the automatic parallelization of the perfect benchmarks (r). IEEE Transactions on Parallel and Distributed Systems 9(1), 5–23 (1998)
* [10] Gomez-Sousa, H., Arenaz, M., Rubinos-Lopez, O., Martinez-Lorenzo, J.A.: Novel source-to-source compiler approach for the automatic parallelization of codes based on the method of moments. In: 2015 9th European Conference on Antennas and Propagation (EuCAP). pp. 1–6 (2015)
* [11] Martorell, X., Gonzalez, M., Duran, A., Balart, J., Ferrer, R., Ayguade, E., Labarta, J.: Techniques supporting threadprivate in openmp. In: Proceedings 20th IEEE International Parallel & Distributed Processing Symposium. pp. 7 pp.– (2006). https://doi.org/10.1109/IPDPS.2006.1639501
* [12] Mattson, T., Chapman, B.: OpenMP in action (2022), https://www.openmp.org/wp-content/uploads/omp-in-action-SC05.pdf
* [13] Mosseri, I., Alon, L.o., Harel, R., Oren, G.: Compar: optimized multi-compiler for automatic openmp s2s parallelization. In: International Workshop on OpenMP. pp. 247–262. Springer (2020)
* [14] NASA Advanced Supercomputing (NAS) Division: NAS Parallel Benchmarks (2022), https://www.nas.nasa.gov/software/npb.html
* [15] Prema, S., Jehadeesan, R., Panigrahi, B.K.: Identifying pitfalls in automatic parallelization of NAS parallel benchmarks. In: 2017 National Conference on Parallel Computing Technologies (PARCOMPTECH). p. 1–6 (Feb 2017). https://doi.org/10.1109/PARCOMPTECH.2017.8068329
* [16] Prema, S., Nasre, R., Jehadeesan, R., Panigrahi, B.: A study on popular auto-parallelization frameworks. Concurrency and Computation: Practice and Experience 31(17), e5168 (2019). https://doi.org/10.1002/cpe.5168
* [17] Quinlan, D., Liao, C.: The ROSE source-to-source compiler infrastructure. In: Cetus users and compiler infrastructure workshop, in conjunction with PACT. vol. 2011, p. 1. Citeseer (2011)
* [18] SNUNPB (2013): NAS Parallel Benchmarks, C version. Available online (2019): http://aces.snu.ac.kr/software/snu-npb/
* [19] Tian, X., Bik, A., Girkar, M., Grey, P., Saito, H., Su, E.: Intel® openmp c++/fortran compiler for hyper-threading technology: Implementation and performance. Intel Technology Journal 6(1) (2002)
* [20] UD IT RESEARCH CYBER INFRASTRUCTURE: Caviness Cluster (2022), https://docs.hpc.udel.edu/abstract/caviness/caviness
* [21] University of Delaware: Cetus, A Parallelizing Source-to-Source Compiler for C Programs (2022), https://sites.udel.edu/cetus-cid/
* [22] Wikipedia: Dennard scaling (2022), https://en.wikipedia.org/wiki/Dennard_scaling
# Robust Circuitry-Based Scores of Structural Importance of Human Brain Areas
Dániel Hegedűs<EMAIL_ADDRESS> Vince Grolmusz<EMAIL_ADDRESS>

PIT Bioinformatics Group, Eötvös University, H-1117 Budapest, Hungary; Uratim Ltd., H-1118 Budapest, Hungary
###### Abstract
We consider the 1015-vertex human consensus connectome computed from the
diffusion MRI data of 1064 subjects. We define seven different orders on these
1015 graph vertices, where the orders depend on parameters derived from the
brain circuitry, that is, from the properties of the edges (or connections)
incident to the vertices ordered. We order the vertices according to their
degree, the sum, the maximum, and the average of the fiber counts on the
incident edges, and the sum, the maximum and the average length of the fibers
in the incident edges. We analyze the similarities of these seven orders by
the Spearman correlation coefficient and by their inversion numbers and have
found that all of these seven orders have great similarities. In other words,
if we interpret the orders as scoring of the importance of the vertices in the
consensus connectome, then the scores of the vertices will be similar in all
seven orderings. That is, important vertices of the human connectome typically
have many neighbors, connected with long and thick axonal fibers (where
thickness is measured by fiber numbers), and their incident edges have high
maximum and average values of length and fiber-number parameters, too.
Therefore, these parameters may yield robust ways of deciding which vertices
are more important in the anatomy of our brain circuitry than the others.
Running head: Circuitry-Based Scores of Human Brain Areas
Keywords: Connectome; importance of cerebral areas; brain circuitry
## Introduction
Identifying the most important nodes in large networks solely from their
graph-theoretical properties was an important problem in the late 1990s,
applied in scoring the web search engine hits. The most well-known solutions,
the PageRank of Google [1] and the HITS algorithm of Kleinberg [2],
fundamentally influenced the related areas.
Both the PageRank and the HITS algorithms score the nodes of a directed,
unweighted graph, which originally corresponded to the graph of the World Wide
Web; later, these algorithms were successfully applied to directed and
undirected biological, social, and chemical graphs, among other applications
[3, 4, 5, 6].
Since all human activities are governed by the cooperation of the cells in our
brain, the study of the connections of these cells has specific interest.
Unfortunately, the connections of the 80 billion neurons of the human brain
are not mapped yet, and will not be mapped in the foreseeable future: to date,
the only adult organism with completely mapped connections between its neurons
(also called the connectome or braingraph) is the nematode C. elegans, having
only 302 neurons [7]. Recently, after many years of concentrated effort, the
neuronal-level connectome of a part of the brain of the adult fruit fly
Drosophila melanogaster, its central brain, was mapped and published [8]. Of
the 100,000 neurons of the fruit fly, the central brain contains around
25,000. The whole Drosophila melanogaster connectome has not been published
yet.
Instead of the neuronal-level connections, today's imaging methods are capable
of mapping the human connectome on a much coarser scale than the level of the
neurons. Due to the technical developments of magnetic resonance imaging (MRI)
in the last fifteen years [9, 10], today we can map the macroscopic
connections between 1000 anatomically identified brain areas. These
developments have opened up a new area of brain science called
“connectomics”, which examines the connections between brain areas and,
instead of comparing the volumes of brain areas between healthy and diseased,
old and young, or male and female subjects, as in hundreds of previous
cerebral volumetric studies, concentrates on a more central question: the
connections between those areas.
Our research group has studied the mathematical properties of human
connectomes by applying strict graph-theoretical methods, terms, and
approaches. We have used the public release imaging data sets of the Human
Connectome Project [11], and prepared publicly available braingraphs from the
imaging data, downloadable at the address https://braingraph.org in five
different resolutions [12, 13, 14, 15]. The vertices of the braingraphs
correspond to the anatomically identified areas of the cortical and sub-
cortical gray matter, and two of the vertices are connected by an edge if the
tractography phase [16, 17] of the processing identified axonal fibers between
the areas, mapped to the vertices.
Using the exact methods and deep algorithms and approaches of graph theory, we
have discovered numerous connectomical properties, related to the human sex
differences [18, 19, 20, 21, 22], early brain development [23, 24, 25, 13],
different lobal structures and organizations [26, 27, 28], and frequent edge
sets in the whole brain or only those which are adjacent to the hippocampus
[29, 30, 31, 32].
In the present contribution, we consider an averaged consensus connectome,
computed from the imaging data of 1064 subjects. We order the vertices of the
consensus braingraph in several ways intended to capture their order of
“importance”, and we compare the resulting orders of the vertices. Our main
result is that the orders generated from the
* 1.
degree of the nodes,
* 2.
the sum of the fiber numbers,
* 3.
the maximum fiber number,
* 4.
and the average fiber number in the incident edges,
* 5.
the sum of the fiber lengths,
* 6.
the maximum fiber length,
* 7.
the average fiber length in the incident edges
are similar to one another, their Spearman’s rank correlation is high, and
their inversion numbers are low.
This result means that ordering by any of the seven parameters above produces
similar orders of the nodes, where “similar” is made precise later in this
work.
In other words, the result can be interpreted as follows: the most important
nodes in our braingraph statistically have numerous and long incident axonal
fibers, with high maximum and average values both for the fiber lengths and
for the fiber numbers. That is, if a node precedes others in one of the seven
parameters above, then, typically, it has high values in the remaining six
parameters, too. Therefore, each of these seven orders is robust in
comparison with the other six.
In what follows, we describe precisely our methods and results.
## Methods
### Graph construction
The data source of the present work is the 1200-subject public release of the
Human Connectome Project [11]. The 3 Tesla diffusion magnetic resonance
imaging data were processed with the help of the Connectome Mapper Tool Kit
[16].
We have computed five different graphs for each subject with 83, 129, 234,
463, and 1015 nodes, where each node corresponded to an anatomic area of the
cortical- and sub-cortical gray matter. The parcellation tool FreeSurfer was
applied here [33, 34, 17].
The details of the workflow we followed are described in [14]. Concisely, the
axonal fibers were mapped by the MRtrix 0.2 tractography software, and
repeated ten times for each subject. We connected two graph vertices, which
corresponded to two gray matter areas, by an edge if, in all the 10 runs,
axonal fibers were found running between the two areas. In this case, the
maximum and the minimum number of fibers were deleted, and the remaining eight
integer values were averaged and assigned to the edge as the fiber number
weight. The length of the edge is also determined as the average length of the
defining fibers. Consequently, all graph edges carry a positive weight
(meaning the average of 8 fiber numbers) and a positive length (in
millimeters).
Next, we constructed one single consensus graph on 1015 vertices from the 1064
individual graphs as follows. We have averaged the weight and the length for
each edge, but followed different strategies for the two quantities. For
averaging the weight, we added up the edge weights in all subjects' graphs
and divided the sum by 1064; if an edge was not present in a subject, then we
counted it as an edge with (an artificial) weight of 0. In the case of
computing the average length, we counted the subjects in which the edge
exists and divided the length-sum by this integer (in subjects where a vertex
pair does not appear as an edge, a length of 0 was assigned).
Consequently, if $\#\{i,j\}$ denotes the number of appearances of edge
$\{i,j\}$, and $s_{i,j,k}$ and $h_{i,j,k}$ denote the weight and the length
of edge $\{i,j\}$ in subject $k$, respectively, then
$s_{i,j}={1\over 1064}\sum_{k=1}^{1064}s_{i,j,k}$ (1)
$h_{i,j}={1\over{\#\{i,j\}}}\sum_{k=1}^{1064}h_{i,j,k}.$ (2)
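As a concrete illustration, the two averaging rules in Eqs. (1)-(2) can be sketched in code as follows; the data layout and the function name are our own assumptions, not part of the original processing pipeline.

```python
from collections import defaultdict

def consensus_edges(subject_graphs):
    """Average edge weights and lengths over subjects, following Eqs. (1)-(2).

    subject_graphs: one dict per subject, mapping an edge (i, j) to a
    (fiber_number, fiber_length) pair; a missing key means the edge was
    not detected in that subject.
    """
    n_subjects = len(subject_graphs)
    weight_sum = defaultdict(float)  # numerator of Eq. (1)
    length_sum = defaultdict(float)  # numerator of Eq. (2)
    appearances = defaultdict(int)   # #{i,j}: subjects containing the edge

    for g in subject_graphs:
        for edge, (w, length) in g.items():
            weight_sum[edge] += w
            length_sum[edge] += length
            appearances[edge] += 1

    # Weights are averaged over ALL subjects (absent edges count as 0);
    # lengths only over the subjects in which the edge actually exists.
    s = {e: weight_sum[e] / n_subjects for e in weight_sum}
    h = {e: length_sum[e] / appearances[e] for e in length_sum}
    return s, h
```

The asymmetry between the two denominators mirrors the two averaging strategies described above.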
We note that our earlier works [35, 36] also describe parameterizable
consensus graphs by user-selectable parameters at the website of the Budapest
Reference Connectome https://pitgroup.org/connectome. In contrast, the dataset
of the present contribution is a static graph.
### Ordering the nodes
Here we consider seven different orders on the set of the 1015 vertices of our
consensus graph, with abbreviations:
* 1.
by the degree of the nodes (Degree);
* 2.
by the sum of the number of fibers in the incident edges (SUM-weight);
* 3.
by the maximum number of fibers in the incident edges (MAX-weight);
* 4.
by the average number of fibers in the incident edges (AVG-weight);
* 5.
by the sum of the fiber lengths in the incident edges (SUM-length);
* 6.
by the maximum fiber length in the incident edges (MAX-length);
* 7.
by the average fiber length in the incident edges (AVG-length).
The orders are defined by the decreasing values of these parameters.
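The seven parameters above can be computed directly from a weighted edge list; the following is a minimal sketch with our own names and data layout, not the authors' code.

```python
from collections import defaultdict

def vertex_parameters(edges):
    """Compute the seven per-vertex ordering parameters.

    edges: iterable of (i, j, weight, length) tuples, one per graph edge.
    Returns vertex -> dict with keys Degree, SUM/MAX/AVG-weight,
    SUM/MAX/AVG-length.
    """
    incident = defaultdict(list)
    for i, j, w, length in edges:
        incident[i].append((w, length))
        incident[j].append((w, length))

    params = {}
    for v, wl in incident.items():
        weights = [w for w, _ in wl]
        lengths = [l for _, l in wl]
        d = len(wl)
        params[v] = {
            "Degree": d,
            "SUM-weight": sum(weights),
            "MAX-weight": max(weights),
            "AVG-weight": sum(weights) / d,
            "SUM-length": sum(lengths),
            "MAX-length": max(lengths),
            "AVG-length": sum(lengths) / d,
        }
    return params

def order_by(params, key):
    """Vertices sorted by decreasing value of the given parameter."""
    return sorted(params, key=lambda v: params[v][key], reverse=True)
```

Note that the identities used later in the text (Degree times AVG-weight equals SUM-weight, and likewise for lengths) hold by construction here.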
The Degree describes the number of the incident edges, i.e., more important
nodes are believed to be connected to many other nodes, so, consequently,
their degree should be larger than the degree of the less important nodes.
SUM-weight is the weighted version of the Degree parameter. Here the fiber
numbers of the incident edges are added up. Edges with a higher fiber number,
or weight, may connect more small gray matter areas than those with less
weight. Consequently, we think that the SUM-weight parameter is more relevant
for deciding the importance of a node than the Degree alone.
MAX-weight – in a certain sense – is a simplification of the SUM-weight
parameter. It describes only the weight of the largest-weight incident edge
instead of adding up all the weights of the incident edges. Theoretically, it
may happen that the ordering according to the MAX-weight differs a lot from
the order defined by SUM-weight if, in many vertices, the incident edges have
a small number of large or a large number of small weights. It turns out that
in the case of our graph, this is not so; the orders are similar.
AVG-weight: Clearly, for each vertex, the Degree times AVG-weight is the SUM-
weight. Therefore, the AVG-weight-based vertex-order may differ strongly from
both the Degree-order and the SUM-weight order. As we show, the AVG-weight
based order is also similar to the Degree and to the SUM-weight order.
SUM-length is the length-weighted version of the Degree. The Degree and the
SUM-length values may differ a lot if a node is adjacent to many other
vertices by short edges, or to few other nodes by very long edges. If an
important node usually has numerous and long incident edges, then the orders
by Degree and SUM-length would not differ a lot. We show that in the case of
our consensus braingraph, this is the situation.
MAX-length can be large, while SUM-length is small, so the order according to
these parameters can be different in numerous positions. We show that this is
not the case in our graph.
AVG-length times the Degree is the SUM-length for each vertex. Therefore, the
AVG-length-based order can strongly differ from both the Degree and the SUM-
length based order. We show the opposite for the case of the consensus
braingraph.
The seven orders are explicitly given in the Appendix.
In the following subsections, we introduce two tools for the analysis of the
similarity of these orders: the Spearman correlation and the inversion
numbers.
### Spearman’s rank correlation coefficient
The Spearman $\varrho$ coefficient [37] is an ideal tool for comparing
different orders on the same base set. In our case, the base set is the set of
vertices, and the seven different orders are defined by the seven parameters
Degree, SUM-weight, MAX-weight, AVG-weight, SUM-length, MAX-length, and AVG-
length.
The $\varrho$ coefficient gives information about the correlation of two
attributes, using the two orderings of the elements by the two attributes.
Two ranks are associated with every element, giving its position in each of
the two orderings. There is a simple equation for the coefficient if the
ranks are unique, meaning that no two attribute values are the same.
(Luckily, this is the case for the consensus braingraph.) If we calculate two
attributes of $n$ elements and $d_{i}$ is the difference of the $i$-th
element's two ranks, then:
$\varrho=1-\frac{6\cdot\sum_{i=1}^{n}d_{i}^{2}}{n^{3}-n}$ (3)
The value of Spearman’s correlation coefficient $\varrho$ satisfies
$-1\leq\varrho\leq 1$, where $\varrho=1$ means the perfect correlation and
$\varrho=-1$ means the perfect opposition.
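Formula (3) is straightforward to implement for tie-free data; the sketch below assumes distinct attribute values, as is the case for the consensus braingraph (the function name and rank helper are our own).

```python
def spearman_rho(xs, ys):
    """Spearman's rho via Eq. (3), assuming no ties in either attribute."""
    n = len(xs)

    def ranks(vals):
        # rank of element k = its 1-based position when sorting by the attribute
        order = sorted(range(len(vals)), key=lambda k: vals[k])
        r = [0] * len(vals)
        for pos, k in enumerate(order):
            r[k] = pos + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n ** 3 - n)
```

Perfectly agreeing attributes give $\varrho=1$ and perfectly opposed ones give $\varrho=-1$, matching the bounds stated above.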
###### Remark 1.
The bounds for $\varrho$ can be proven by using the formula
$\frac{n(n+1)(2n+1)}{6}$ for the sum of the first $n$ square numbers. In the
case of perfect opposition, we should calculate the sum of the first
$\frac{n}{2}$ odd square numbers. This can be obtained from the sum of the
first $n$ square numbers by subtracting the even square numbers among them,
whose sum is exactly four times the sum of the first $\frac{n}{2}$ square
numbers.
###### Remark 2.
The closer the coefficient $\varrho$ is to $0$, the less we can say about the
predicted correlation. As $n$ grows, the coefficients with smaller absolute
values can also be significant. So the $p$-value is not only defined by
$\varrho$, but it also depends on $n$. The acquired $p$-value is the
probability of the correlation being that extreme under the assumption that
the null hypothesis is true.
### Inversion numbers
###### Definition 1.
In two given permutations of $n$ elements, two different elements are in
inversion with each other if their order is opposite in the permutations.
###### Definition 2.
Two permutations’ inversion number is the number of (not ordered) pairs of
elements which are in inversion. An element’s inversion is the number of
elements with which it is in inversion.
###### Lemma 3.
The expected value of the inversion number of two permutations of length $n$
is $\frac{n(n-1)}{4}$.
###### Proof.
Look at an arbitrary unordered pair $\{i,j\}$ of elements. By symmetry, the
expected value of the contribution of this pair to the inversion number is
$\frac{1}{2}$. (Indeed, the transposition swapping $i$ and $j$, that is, the
function that only swaps these two elements, pairs every permutation
bijectively with one in which the order of $i$ and $j$ is reversed.) Since
$\binom{n}{2}=\frac{n(n-1)}{2}$ unordered pairs of elements exist in a set of
$n$ elements, by the linearity of expectation, the expected value of the
inversion number is the desired
$\frac{1}{2}\cdot\frac{n(n-1)}{2}=\frac{n(n-1)}{4}$. ∎
###### Corollary 4.
The expected value of the inversion of any element is $\frac{n-1}{2}$.
###### Proof.
The linearity of the expected value can be used again, now for the result of
Lemma 3. As there are $n$ elements and each inversion is counted twice (once
for each element of the pair), the expected value per element is
$\frac{n(n-1)}{4}\cdot\frac{2}{n}=\frac{n-1}{2}$. ∎
Alternative proof. Pair each permutation with its reverse; this is a
bijection on the set of permutations. In every such pair of permutations,
each unordered pair of elements $\{i,j\}$ appears in one order in exactly one
member of the permutation pair and in the opposite order in the other. So for
every element $i$, there are $n-1$ different elements $j$, and the expected
value of each of these being in inversion with $i$ is $\frac{1}{2}$. So the
expected value of the inversion of an arbitrary element $i$ is
$\frac{n-1}{2}$. $\square$
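The inversion number of Definition 2 can be computed directly from the two permutations, and the expectation of Lemma 3 checked by simulation; the implementation below is a sketch with our own naming.

```python
from itertools import combinations

def inversion_number(perm_a, perm_b):
    """Number of unordered pairs whose relative order differs in perm_a and perm_b."""
    pos_b = {x: k for k, x in enumerate(perm_b)}
    # Relabel perm_a by positions in perm_b; the ordinary inversions of the
    # resulting sequence are exactly the pairs ordered oppositely in the two
    # permutations.
    seq = [pos_b[x] for x in perm_a]
    return sum(1 for i, j in combinations(range(len(seq)), 2) if seq[i] > seq[j])
```

Identical permutations give 0, and a permutation against its reverse gives the maximum $\binom{n}{2}$; averaging over random pairs approaches the $\frac{n(n-1)}{4}$ of Lemma 3.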
## Discussion and Results
### Spearman-correlations of different orderings
Correlations with the degree
| Degree vs. | SUM-weight | MAX-weight | AVG-weight | SUM-length | MAX-length | AVG-length |
|---|---|---|---|---|---|---|
| $\varrho$ | $0.88$ | $0.84$ | $0.79$ | $0.98$ | $0.51$ | $0.76$ |
| $p$ | $0.0$ | $10^{-278}$ | $10^{-220}$ | $0.0$ | $10^{-68}$ | $10^{-194}$ |
Table 1 Spearman-correlations between the Degree-based and the six other
orders. The first row contains the $\varrho$ coefficients, and the second the
significance-characterizing p values. We note that the weakest correlation in
the case of MAX-length is still very far from 0, and its p-value is very
small.
Correlations with SUM-weight and MAX-weight
| | SUM-weight vs. SUM-length | SUM-weight vs. MAX-length | SUM-weight vs. AVG-length | MAX-weight vs. SUM-length | MAX-weight vs. MAX-length | MAX-weight vs. AVG-length |
|---|---|---|---|---|---|---|
| $\varrho$ | $0.86$ | $0.42$ | $0.63$ | $0.83$ | $0.41$ | $0.62$ |
| $p$ | $10^{-302}$ | $10^{-45}$ | $10^{-112}$ | $10^{-262}$ | $10^{-43}$ | $10^{-110}$ |
Table 2 Spearman-correlations between the SUM-weight and MAX-weight orders and
the three length-based orders. The first row contains the $\varrho$
coefficients, the second the significance-characterizing p-values. We note
that the weakest correlations in the case of MAX-length are still very far
from 0, and their p-values are very small.
Correlations with AVG-weight
| AVG-weight vs. | SUM-length | MAX-length | AVG-length |
|---|---|---|---|
| $\varrho$ | $0.77$ | $0.37$ | $0.54$ |
| $p$ | $10^{-199}$ | $10^{-33}$ | $10^{-79}$ |
Table 3 Spearman-correlations between the AVG-weight order and the three
length-based orders. The first row contains the $\varrho$ coefficients, the
second the significance-characterizing p values. We note that the weakest
correlation in the case of MAX-length is still very far from 0, and its
p-value is very small.
Correlations between weight-weight and length-length orders
| | weight SUM-MAX | weight SUM-AVG | weight MAX-AVG | length SUM-MAX | length SUM-AVG | length MAX-AVG |
|---|---|---|---|---|---|---|
| $\varrho$ | $0.97$ | $0.98$ | $0.96$ | $0.58$ | $0.86$ | $0.70$ |
| $p$ | $0.0$ | $0.0$ | $0.0$ | $10^{-91}$ | $10^{-296}$ | $10^{-148}$ |
Table 4 Spearman correlations and p-values among the weight-based orders and
among the length-based orders. All the correlations are high; the lowest
value belongs to the SUM-length vs. MAX-length correlation.
### A simple control
For a simple control, we have computed the Spearman correlation between two
obviously unrelated orders. Namely, we have taken the ordinal numbers of the
vertices assigned by the parcellation software and the AVG-weight-defined
order. The ordinal numbers are assigned so that roughly the first 500
vertices are situated in the left hemisphere and the second 500 vertices in
the right hemisphere of the brain, in the same order. For these two orders,
$\varrho=0.01$ and $p=0.65$; therefore, our results in Tables 1-4 reflect a
biological regularity.
### Analysis of the order-similarity by inversion numbers
In the Methods section, we have defined the inversion numbers and listed some
of their fundamental properties. Here we present a graphical evaluation of the
inversion numbers between the pairs of the seven orders studied by the
Spearman correlation in the previous section.
Figure 1: The number of vertices with high inversion numbers for distinct
parameter pairs. Point $n$ on the $x$ axis corresponds to the most important
$n$ vertices in one of the two orders of the pair, while the height (i.e.,
the $y$ coordinate) of the point corresponds to the number of vertices among
these $n$ whose inversion between the two orderings is higher than expected.
On each panel, the black line shows the $n/2$ expectation. It is easy to see
on all panels that higher-than-expected inversion numbers appear for very few
vertices, almost independently of the examined pair of orderings.
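Our reading of the curves in Figure 1 can be sketched as follows; the exact definition of the plotted quantity is an assumption based on the caption, and the names are our own.

```python
def higher_than_expected_counts(order_a, order_b):
    """For each prefix size n, count how many of the top-n vertices of order_a
    have an element inversion against order_b above the expected (N-1)/2."""
    N = len(order_a)
    pos_b = {x: k for k, x in enumerate(order_b)}
    seq = [pos_b[x] for x in order_a]
    # Element inversion of the vertex at position i of order_a: the number of
    # other vertices whose relative order differs between the two orderings.
    inv = [sum(1 for j in range(N) if (j < i) != (seq[j] < seq[i]))
           for i in range(N)]
    expected = (N - 1) / 2
    counts, running = [], 0
    for n in range(1, N + 1):
        if inv[n - 1] > expected:
            running += 1
        counts.append(running)
    return counts
```

For identical orders every count is 0; for fully opposed orders every vertex exceeds the expectation, so the curve is the diagonal.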
## Conclusions
We have analyzed the order of vertex-importance in the anatomically labeled
consensus graph of the human brain, defined by circuitry-based parameters of
the vertices: the degree (Degree), and the following parameters, computed on
the incident edges for the vertex: Sum of fiber counts (SUM-weight), Max of
fiber counts (MAX-weight), Average of fiber counts (AVG-weight), Sum of fiber
lengths (SUM-length), Max of fiber lengths (MAX-length) and the Average of
fiber lengths (AVG-length). For the analysis, we have used the Spearman
correlation coefficient and the inversion numbers between the orders. We have
found that the seven orders of vertex importance, defined by these seven
circuitry-based parameters of the vertices, have a great similarity: the most
important vertices, statistically, have many neighbors, connected by long and
numerous fibers. We have also shown that the orders defined by the maximum
weight or length of the incident edges, or those defined by the average
weight or length of the incident edges, do not differ very much from the
orders defined by the sum of these parameters.
The results show the robustness of the orders by these seven parameters and
also show that vertex importance in the human brain can be characterized by
numerous parameters, while the list of the most important vertices (or
anatomical brain areas) changes little from one parameter to another.
## References
* Brin and Page [1998] Sergey Brin and Lawrence Page. The anatomy of a large-scale hypertextual web search engine. _Computer Networks and ISDN Systems_ , 30:107–117, 1998.
* Kleinberg [1998] Jon M. Kleinberg. Authoritative sources in a hyperlinked environment. In _Proceedings of the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms_ , pages 668–677, San Francisco, California, 25–27 January 1998. ACM Press.
* Iván and Grolmusz [2011] Gábor Iván and Vince Grolmusz. When the web meets the cell: using personalized pagerank for analyzing protein interaction networks. _Bioinformatics_ , 27(3):405–407, 2011.
* Grolmusz et al. [2012] Vince Grolmusz, Gabor Ivan, Daniel Banky, and Balazs Szerencsi. How to find non hub important nodes in protein networks? _Biophysical Journal_ , 102(3):184a, 2012.
* Bánky et al. [2013] Dániel Bánky, Gábor Iván, and Vince Grolmusz. Equal opportunity for low-degree network nodes: a pagerank-based method for protein target identification in metabolic graphs. _PLoS One_ , 8(1):e54204, 2013.
* Grolmusz [2015] Vince Grolmusz. A note on the pagerank of undirected graphs. _Information Processing Letters_ , 115(6):633–634, 2015.
* White et al. [1986] JG White, E Southgate, JN Thomson, and S Brenner. The structure of the nervous system of the nematode Caenorhabditis elegans: the mind of a worm. _Phil. Trans. R. Soc. Lond_ , 314:1–340, 1986.
* Scheffer et al. [2020] Louis Scheffer, Shan Xu, Michal Januszewski, and Zhiyuan Lu et al. A connectome and analysis of the adult Drosophila central brain. _eLife_ , 9, 2020. doi: 10.7554/eLife.57443. URL https://doi.org/10.7554/eLife.57443.
* Hagmann et al. [2008] Patric Hagmann, Leila Cammoun, Xavier Gigandet, Reto Meuli, Christopher J. Honey, Van J. Wedeen, and Olaf Sporns. Mapping the structural core of human cerebral cortex. _PLoS Biol_ , 6(7):e159, Jul 2008. doi: 10.1371/journal.pbio.0060159. URL http://dx.doi.org/10.1371/journal.pbio.0060159.
* Hagmann et al. [2012] Patric Hagmann, Patricia E. Grant, and Damien A. Fair. MR connectomics: a conceptual framework for studying the developing brain. _Front Syst Neurosci_ , 6:43, 2012. doi: 10.3389/fnsys.2012.00043. URL http://dx.doi.org/10.3389/fnsys.2012.00043.
* McNab et al. [2013] Jennifer A. McNab, Brian L. Edlow, Thomas Witzel, Susie Y. Huang, Himanshu Bhat, Keith Heberlein, Thorsten Feiweier, Kecheng Liu, Boris Keil, Julien Cohen-Adad, M Dylan Tisdall, Rebecca D. Folkerth, Hannah C. Kinney, and Lawrence L. Wald. The Human Connectome Project and beyond: initial applications of 300 mT/m gradients. _Neuroimage_ , 80:234–245, Oct 2013. doi: 10.1016/j.neuroimage.2013.05.074. URL http://dx.doi.org/10.1016/j.neuroimage.2013.05.074.
* Kerepesi et al. [2017] Csaba Kerepesi, Balazs Szalkai, Balint Varga, and Vince Grolmusz. The braingraph.org database of high resolution structural connectomes and the brain graph tools. _Cognitive Neurodynamics_ , 11(5):483–486, 2017.
* Szalkai et al. [2019a] Balazs Szalkai, Csaba Kerepesi, Balint Varga, and Vince Grolmusz. High-resolution directed human connectomes and the consensus connectome dynamics. _PLoS ONE_ , 14(4):e0215473, September 2019a. URL https://doi.org/10.1371/journal.pone.0215473.
* Varga and Grolmusz [2021] Balint Varga and Vince Grolmusz. The braingraph.org database with more than 1000 robust human structural connectomes in five resolutions. _Cognitive Neurodynamics_ , 2021. URL https://doi.org/10.1007/s11571-021-09670-5.
* [15] Laszlo Keresztes, Evelin Szogi, Balint Varga, and Vince Grolmusz. Introducing and applying newtonian blurring: An augmented dataset of 126,000 human connectomes at braingraph.org. _Scientific Reports_. doi: 10.1038/s41598-022-06697-4. URL https://www.nature.com/articles/s41598-022-06697-4.
* Daducci et al. [2012] Alessandro Daducci, Stephan Gerhard, Alessandra Griffa, Alia Lemkaddem, Leila Cammoun, Xavier Gigandet, Reto Meuli, Patric Hagmann, and Jean-Philippe Thiran. The connectome mapper: an open-source processing pipeline to map connectomes with MRI. _PLoS One_ , 7(12):e48121, 2012. doi: 10.1371/journal.pone.0048121. URL http://dx.doi.org/10.1371/journal.pone.0048121.
* Tournier et al. [2012] J Tournier, Fernando Calamante, Alan Connelly, et al. Mrtrix: diffusion tractography in crossing fiber regions. _International Journal of Imaging Systems and Technology_ , 22(1):53–66, 2012.
* Szalkai et al. [2015a] Balázs Szalkai, Bálint Varga, and Vince Grolmusz. Graph theoretical analysis reveals: Women’s brains are better connected than men’s. _PLoS One_ , 10(7):e0130045, 2015a. doi: 10.1371/journal.pone.0130045. URL http://dx.doi.org/10.1371/journal.pone.0130045.
* Szalkai et al. [2018a] Balázs Szalkai, Bálint Varga, and Vince Grolmusz. Brain size bias-compensated graph-theoretical parameters are also better in women’s connectomes. _Brain Imaging and Behavior_ , 12(3):663–673, 2018a. doi: 10.1007/s11682-017-9720-0. URL http://dx.doi.org/10.1007/s11682-017-9720-0.
* Szalkai et al. [2021] Balázs Szalkai, Bálint Varga, and Vince Grolmusz. The graph of our mind. _Brain Sciences_ , 11(3), 2021. URL https://doi.org/10.3390/brainsci11030342.
* Keresztes et al. [2021] Laszlo Keresztes, Evelin Szogi, Balint Varga, and Vince Grolmusz. Identifying super-feminine, super-masculine and sex-defining connections in the human braingraph. _Cognitive Neurodynamics_ , 15(6):949–959, 2021. URL https://doi.org/10.1007/s11571-021-09687-w.
* Keresztes et al. [2022] Laszlo Keresztes, Evelin Szogi, Balint Varga, and Vince Grolmusz. Discovering sex and age implicator edges in the human connectome. _Neuroscience letters_ , 791:136913, November 2022. ISSN 1872-7972. doi: 10.1016/j.neulet.2022.136913.
* Kerepesi et al. [2016] Csaba Kerepesi, Balazs Szalkai, Balint Varga, and Vince Grolmusz. How to direct the edges of the connectomes: Dynamics of the consensus connectomes and the development of the connections in the human brain. _PLOS One_ , 11(6):e0158680, June 2016. URL http://dx.doi.org/10.1371/journal.pone.0158680.
* Szalkai et al. [2017a] Balázs Szalkai, Bálint Varga, and Vince Grolmusz. The robustness and the doubly-preferential attachment simulation of the consensus connectome dynamics of the human brain. _Scientific Reports_ , 7(16118), 2017a. doi: 10.1038/s41598-017-16326-0.
* Kerepesi et al. [2018a] Csaba Kerepesi, Balint Varga, Balazs Szalkai, and Vince Grolmusz. The dorsal striatum and the dynamics of the consensus connectomes in the frontal lobe of the human brain. _Neuroscience Letters_ , 673:51–55, March 2018a. doi: 10.1016/j.neulet.2018.02.052.
* Kerepesi et al. [2018b] Csaba Kerepesi, Balázs Szalkai, Bálint Varga, and Vince Grolmusz. Comparative connectomics: Mapping the inter-individual variability of connections within the regions of the human brain. _Neuroscience Letters_ , 662(1):17–21, 2018b. doi: 10.1016/j.neulet.2017.10.003.
* Szalkai et al. [2018b] Balazs Szalkai, Balint Varga, and Vince Grolmusz. Comparing advanced graph-theoretical parameters of the connectomes of the lobes of the human brain. _Cognitive Neurodynamics_ , 12(6):549–559, 2018b.
* Szalkai et al. [2019b] Balazs Szalkai, Balint Varga, and Vince Grolmusz. Mapping correlations of psychological and connectomical properties of the dataset of the human connectome project with the maximum spanning tree method. _Brain Imaging and Behavior_ , 13(5):1185–1192, feb 2019b. doi: https://doi.org/10.1007/s11682-018-9937-6.
* Fellner et al. [2019] Mate Fellner, Balint Varga, and Vince Grolmusz. The frequent subgraphs of the connectome of the human brain. _Cognitive Neurodynamics_ , 13(5):453–460, 2019. URL https://doi.org/10.1007/s11571-019-09535-y.
* Fellner et al. [2020a] Mate Fellner, Balint Varga, and Vince Grolmusz. The frequent network neighborhood mapping of the human hippocampus shows much more frequent neighbor sets in males than in females. _PLOS One_ , 15(1):e0227910, 2020a. URL https://doi.org/10.1371/journal.pone.0227910.
* Fellner et al. [2020b] Máté Fellner, Bálint Varga, and Vince Grolmusz. The frequent complete subgraphs in the human connectome. _PloS One_ , 15(8):e0236883, 2020b. URL https://doi.org/10.1371/journal.pone.0236883.
* Fellner et al. [2020c] Mate Fellner, Balint Varga, and Vince Grolmusz. Good neighbors, bad neighbors: The frequent network neighborhood mapping of the hippocampus enlightens several structural factors of the human intelligence on a 414-subject cohort. _Scientific Reports_ , 10(11967), 2020c. URL https://doi.org/10.1038/s41598-020-68914-2.
* Fischl [2012] Bruce Fischl. Freesurfer. _Neuroimage_ , 62(2):774–781, 2012.
* Desikan et al. [2006] Rahul S. Desikan, Florent Segonne, Bruce Fischl, Brian T. Quinn, Bradford C. Dickerson, Deborah Blacker, Randy L. Buckner, Anders M. Dale, R Paul Maguire, Bradley T. Hyman, Marilyn S. Albert, and Ronald J. Killiany. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. _Neuroimage_ , 31(3):968–980, Jul 2006. doi: 10.1016/j.neuroimage.2006.01.021. URL http://dx.doi.org/10.1016/j.neuroimage.2006.01.021.
* Szalkai et al. [2015b] Balázs Szalkai, Csaba Kerepesi, Bálint Varga, and Vince Grolmusz. The Budapest Reference Connectome Server v2.0. _Neuroscience Letters_ , 595:60–62, 2015b.
* Szalkai et al. [2017b] Balazs Szalkai, Csaba Kerepesi, Balint Varga, and Vince Grolmusz. Parameterizable consensus connectomes from the Human Connectome Project: The Budapest Reference Connectome Server v3.0. _Cognitive Neurodynamics_ , 11(1):113–116, feb 2017b. doi: http://dx.doi.org/10.1007/s11571-016-9407-z.
* [37] Carl Spearman. The proof and measurement of association between two things. _The American Journal of Psychology_ , 15(1):72–101. URL https://doi.org/10.2307/1412159.
## Appendix
In this section we list the orderings of the 1015 vertices of our consensus
braingraph by the examined weight parameters. The nodes are denoted by the
cerebral areas, which they are corresponded to. The columns contain the
orderings by different parameters, denoted in the column header.
See pages 1-23 of Vertices.pdf
# Time-Efficient Reward Learning
via Visually Assisted Cluster Ranking
David Zhang Micah Carroll Andreea Bobu Anca Dragan
###### Abstract
One of the most successful paradigms for reward learning uses human feedback
in the form of comparisons. Although these methods hold promise, human
comparison labeling is expensive and time consuming, constituting a major
bottleneck to their broader applicability. Our insight is that we can greatly
improve how effectively human time is used in these approaches by batching
comparisons together, rather than having the human label each comparison
individually. To do so, we leverage data dimensionality-reduction and
visualization techniques to provide the human with an interactive GUI
displaying the state space, in which the user can label subportions of the
state space. Across some simple Mujoco tasks, we show that this high-level
approach holds promise and is able to greatly increase the performance of the
resulting agents, given the same amount of human labeling time.
## 1 Introduction
Learning reward functions from human feedback has been a successful paradigm
in sequential decision-making problems [1, 46, 19, 6, 17, 14], enabling
AI agents to learn behaviors that are challenging, or perhaps even impossible,
to specify manually. Although these methods hold promise, the main bottleneck
to applying them broadly in real-world scenarios is the amount of human effort
they require, sometimes requiring hours of uninterrupted human attention and
interaction [17].
To make it easier for the human to provide input, many recent approaches use a
flexible and human-friendly type of reward feedback in the form of comparisons
between trajectory snippets [17]. However, this approach is still too slow for
practical use because human time is not utilized very effectively: every time
the AI agent interacts with the human, it typically receives feedback for a
single query at a time.
Figure 1: The CLRVis algorithm. Our method consists of several stages: we
collect environment interactions using our current policy $\pi$ and show them
to the user through a t-SNE visualization. The user then provides feedback in
the form of cluster orderings, which are then used to update the reward model
$\hat{r}$; in turn, the reward model $\hat{r}$ is then used to update the
policy $\pi$ and the process repeats.
Recent techniques based on active learning attempt to further alleviate the
burden on the human by shifting it onto the robot: rather than randomly select
which queries to ask the human for feedback on, the robot itself tries to come
up with the most informative ones, in order to be efficient with the human’s
time. In this project, we are interested in a complementary question: rather
than design metrics for having the robot decide which queries are most
informative, we want to build tools to assist humans in providing feedback on
multiple related queries at once.
For instance, imagine you were given an interactive visualization of the state
space for a self-driving task, in which similar states are placed close
together: in the top-right, you hover over various states and by looking at
the states, you can tell they all are bendy roads; you zoom in and see
subclusters of states in which the car is swerving in the other lane, or well-
centered in its lane. This could enable the human to provide feedback on many
semantically similar states at once, at the level of granularity that the
human deems most appropriate: a visual interface of this kind would let the
human give quick, rich feedback on entire sets of similar states.
While this kind of tool might seem taken out of a sci-fi hologram scene, in
this work we set out to build a similar type of system. We test whether such
an approach can, in fact, make more efficient use of the human's time than
current state-of-the-art methods. Our insight is that enabling the human to
provide input over large swaths of the state space at once relies on how that
state space is visually presented to them: if states that behave similarly are
visualized close together, then the person should be able to more easily think
of them similarly and group them together. As such, we use high-dimensional
data visualization techniques [40] to cluster states based on both their
visual and reward characteristics. We make use of recent contrastive
techniques [28, 25, 16] to learn an embedding that maps visually similar
states near each other, and combine it with the current trained reward
embedding to visualize states close both in reward and appearance near each
other.
We show in our experiments that our method (which we call CLRVis) requires up
to 7x less human time than a DRLHP-like baseline across some Mujoco reward-
learning tasks. We validate this with some real human data and a suite of
simulated human-feedback experiments.
## 2 Related Work
Reward Learning from Human Input. A popular paradigm for learning robot
behaviors from human input is inverse reinforcement learning (IRL), where the
robot seeks to extract a reward function capturing why specific behaviors may
be desirable. Traditionally, IRL relies on human demonstrations to recover the
reward function responsible for the human’s input [1, 46, 3, 21, 19, 44].
Recent research goes beyond demonstrations, utilizing other types of human
input to learn rewards, such as corrections [6], comparisons [43, 17], labels
[42, 34], rankings [14], or some combination [24, 30, 26]. Comparisons have
been especially successful in recent NLP approaches [29, 38, 5].
The issue that persists across all these methods is that the human still bears
most of the labeling burden. Active learning (AL) attempts to alleviate some
of this burden on the human by shifting it onto the robot: to reduce the total
amount of necessary human labels, the robot tries to come up with the most
informative queries to ask the human for help with. Robot learning methods
have investigated a variety of metrics the robot can optimize to select the
best queries, from maximizing uncertainty estimates [37], minimizing
performance risk [15], maximizing exploration [9], minimizing human answer
uncertainty [10], to a combination of the above [34, 11]. While researchers in
data visualization and labeling find that active learning is an effective way
to correctly label data, it is not necessarily the most efficient [36]. This
is in part due to the cold start problem of AL, where without a good initial
model all of these selection strategies start poorly. [36] finds that user-
interactive labeling can be much more efficient than designing active learning
metrics for collecting better reward queries from the human. Following this
work's footsteps, we are interested in constructing tools that assist humans in
labeling more reward data faster.
AI Assistance in User-Interactive Feedback. Prior work in user-interactive
labeling has used semi-supervised [18] or self-supervised [41] models to
predict the most likely label associated with the query. This strategy can
assist the user in labeling individual queries faster, but the user still has
to label each data point one at a time. Instead, more recent approaches
propose using visual interfaces to help humans label more data, which can even
enable unifying visual assistance with active learning [8]. Based on this,
[22] proposes a visual interactive labeling approach where the human can
select clusters via a lasso tool, after which a model recommends a label for
each cluster. Then, the interface allows the human to deselect individual
points that don’t match the cluster’s overall label. In contrast, [7] uses a
clustering algorithm to propose clusters automatically to the human, who can
manually remove instances that don’t belong to each cluster. Both methods have
primarily been demonstrated on MNIST and only work on categorical variables,
whereas the reward learning scenarios we are interested in require predicting
reward as a continuous variable. Since humans are notoriously noisy when
labeling continuous variables [13], we similarly look at visual interfaces
that use clustering to assist the user, but opt for ranking clusters instead
of labeling them.
High-dimensional Data Visualization. As we just saw, one way to construct
better tools for labeling data is to construct better ways to visualize that
data. t-SNE is a non-linear dimensionality reduction technique commonly used
for visualizing high dimensional data in an easy and intuitive way [40]. Most
relevant to our work is its widespread use in reinforcement learning as a tool
for visually evaluating the quality of the learned RL agent embeddings [27,
45, 4, 2]. In these works, the intuition behind using t-SNE is that if an
agent has learned a good embedding for Q-values, rewards, etc., then states that are
embedded similarly – i.e. have similar Q-values or rewards – will appear close
in t-SNE space, even if they are perceptually dissimilar. We use a similar
visualization technique, but we differ in that we don’t only use it for
qualitative evaluation purposes at the end of training an RL agent. Rather, we
seek to recompute the t-SNE visualization at every labeling iteration, after
having both experienced more states and retrained the embedding.
The above methods typically apply t-SNE on states sampled during the fully
trained RL agent’s execution of its policy [27] or during the agent’s entire
training experience [45]. Since we recompute t-SNE at every iteration, we
instead only have access to states from the current iteration policy or from
prior policies. In the future, we could assume a small amount of successful
task demonstrations and sample states according to a curriculum [20] or along
an expert demonstration of the task [35]. Furthermore, these methods have
either visualized the raw pixel representation or the final layer Q-value
network embedding of the sampled states. Since we are primarily interested in
relieving the visual labeling burden from the human, we also train a
contrastive network [28, 16, 39, 25], which seeks to learn visually
discriminative embeddings. We concatenate the learned reward embedding and the
contrastive one, and compute the t-SNE visualization from the concatenated
vector. Prior work has looked at constructing representation embeddings that
result in visualizations more interpretable to the human [23], but there the
assumption is that the robot is the teacher and the human is the student
trying to learn most effectively from that visualization.
## 3 CLRVis
We call our methodology CLusters Rankings with Visual assistance (CLRVis). On
a high-level, our method consists of iteratively showing a person
visualizations of subsets of the state space, which can be used as an
interface to efficiently provide additional reward labels (see Figure 1).
Every iteration of CLRVis can be broken down into various steps:
1. State space sampling: we sample some states using our current policy $\pi$.
2. Visualizing states and obtaining cluster rankings: we visualize the states sampled in step 1 with t-SNE, have the user identify $M$ clusters of similar states through a graphical interface, and obtain a ranking of these $M$ clusters in terms of increasing reward.
3. Converting cluster rankings to pairwise state comparisons: for each pair of clusters in our cluster ranking, we consider all individual pairs of states and treat them as single-state comparisons (if cluster $A$ was ranked higher than cluster $B$, we treat this as equivalent to stating that each state $a\in A$ has higher reward than each state $b\in B$). This gives us $O(M^{2}S^{2})$ comparisons (if each cluster has size $S$), where the $M^{2}$ term comes from transforming cluster rankings into cluster comparisons, and the $S^{2}$ term comes from comparing two clusters.
4. Updating the reward model: with these $O(M^{2}S^{2})$ comparisons, we update our running reward model $\hat{r}$.
5. Updating the policy: with our now-improved reward model $\hat{r}$, we train $\pi$ further and return to step 1. For a schematic description of the algorithm, see Algorithm 1.
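To make step 3 concrete, here is a minimal sketch of expanding a cluster ranking into pairwise state comparisons; the function and variable names are illustrative, not from our implementation:

```python
from itertools import combinations

def rankings_to_comparisons(ranked_clusters):
    """Expand a ranking of state clusters into pairwise state comparisons.

    `ranked_clusters` is a list of clusters ordered by increasing reward;
    each cluster is a list of states. For every pair of clusters (A, B)
    with A ranked below B, every state a in A is labeled as lower-reward
    than every state b in B, yielding O(M^2 S^2) comparisons overall.
    """
    comparisons = []  # tuples (s0, s1, y) with y = 1 iff r(s0) > r(s1)
    for lo, hi in combinations(range(len(ranked_clusters)), 2):
        for a in ranked_clusters[lo]:
            for b in ranked_clusters[hi]:
                comparisons.append((b, a, 1))  # b comes from the higher-ranked cluster
    return comparisons

# Three clusters of two states each: 3 cluster pairs x 4 state pairs = 12 comparisons
ranked = [["s1", "s2"], ["s3", "s4"], ["s5", "s6"]]
print(len(rankings_to_comparisons(ranked)))  # 12
```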
In short, providing cluster reward-rankings with the aid of an appropriate
visual interface should not take many times longer than comparing individual
states. However, such cluster rankings could contain up to $O(M^{2}S^{2})$
more information than a single state comparison. While we might not
expect a $O(M^{2}S^{2})$ speedup in human-time in practice (because many of
the $O(M^{2}S^{2})$ individual comparisons might be similar and not very
informative), conceptually this approach still seems significantly better than
individual state comparisons, which is the current standard approach.
However, the ease (and thus speed) of cluster ranking will crucially depend on
how interpretable the state space visualization is: are highly dissimilar
states next to each other? Do states in similar regions of the visualization
have similar reward (and thus labeling them as a cluster is easy)? For the
purpose of encouraging visualizations which are conducive to ease of cluster
labeling, we combine two approaches: 1) we train a contrastive learning
embedding for each new set of state images, and 2) after the first update of
the reward model $\hat{r}$, we append the reward model embedding to the
contrastive learning one to encourage the states that we already suspect have
similar rewards to be close together in the state space visualization.
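As an illustration of this combination, the following sketch concatenates the two embeddings before projecting to 2D. It assumes scikit-learn's t-SNE implementation; the function name and array shapes are illustrative:

```python
import numpy as np
from sklearn.manifold import TSNE

def visualize_states(contrastive_emb, reward_emb):
    """Project states to 2D via t-SNE on the concatenation of a
    contrastive (visual) embedding and the current reward-model embedding.

    Both inputs are (n_states, d) arrays; `reward_emb` may be None before
    the first reward-model update, in which case only the contrastive
    embedding is used.
    """
    if reward_emb is None:
        features = contrastive_emb
    else:
        features = np.concatenate([contrastive_emb, reward_emb], axis=1)
    # perplexity must be smaller than the number of samples
    tsne = TSNE(n_components=2, perplexity=min(30, len(features) - 1),
                init="pca", random_state=0)
    return tsne.fit_transform(features)  # (n_states, 2) coordinates

coords = visualize_states(np.random.rand(50, 16), np.random.rand(50, 8))
print(coords.shape)  # (50, 2)
```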
Reward model update. To update the reward model $\hat{r}$, we use the dataset
$D$ of pairwise comparisons gathered from cluster rankings. Each comparison
can be represented as a tuple $(s_{0},s_{1},y)\in D$, where $s_{0}$, $s_{1}$
are the two states, and $y\in\\{0,1\\}$ is a label indicating which state has
higher reward. Under the Bradley-Terry model [12], the reward model acts as a
score function that induces the probability that one state has higher reward
than another:
$P(s_{0}>s_{1})=\frac{e^{\hat{r}(s_{0})}}{e^{\hat{r}(s_{0})}+e^{\hat{r}(s_{1})}}$ (1)
We optimize $\hat{r}$ with a cross-entropy loss between the predicted labels
and true labels as follows:
$\text{loss}(\hat{r})=-\sum_{(s_{0},s_{1},y)\in D}\left[y\log(P(s_{0}>s_{1}))+(1-y)\log(1-P(s_{0}>s_{1}))\right]$ (2)
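Equations (1) and (2) can be sketched directly in NumPy. In practice $\hat{r}$ is a neural network trained with automatic differentiation; this standalone version is only illustrative:

```python
import numpy as np

def bt_prob(r0, r1):
    """Bradley-Terry probability that state 0 beats state 1, Eq. (1)."""
    return np.exp(r0) / (np.exp(r0) + np.exp(r1))

def bt_loss(rewards0, rewards1, labels):
    """Cross-entropy loss of Eq. (2) over a batch of comparisons.

    rewards0 / rewards1: predicted rewards for s0 and s1 in each pair;
    labels: 1 if s0 was ranked higher, else 0.
    """
    p = bt_prob(np.asarray(rewards0), np.asarray(rewards1))
    y = np.asarray(labels)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Equal scores give p = 0.5, so one comparison contributes log 2
print(round(bt_loss([0.0], [0.0], [1]), 4))  # 0.6931
```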
Algorithm 1 The CLRVis algorithm
Require: an initial policy $\pi^{0}$ and an initial reward model $\hat{r}^{0}$.
for $i$ in $0,\dots,K$ do
Sample $s_{0},\dots,s_{N}$ from rollouts of $\pi^{i}$.
Visualize $s_{0},\dots,s_{N}$ with t-SNE, based on embeddings from
$\hat{r}^{i}$.
Human selects clusters of states with similar rewards, and orders them by
increasing reward.
Obtain $\hat{r}^{i+1}$ by updating $\hat{r}^{i}$ with cluster ordering
information.
Obtain $\pi^{i+1}$ by training $\pi^{i}$ with the updated reward model
$\hat{r}^{i+1}$.
end for
## 4 Experiments
### 4.1 Experimental Setup
Environments. We test our method on 3 MuJoCo environments – Swimmer, Reacher,
HalfCheetah – but we modify the goal-task. We do this because, similarly to
[17], we want to showcase that our method can also work for tasks that we
don’t have an explicit reward for. However, for the purposes of evaluating
CLRVis, we found it useful to hardcode ground truth reward functions for our
new task-goals, even though they were hidden from CLRVis’s learning process.
The advantage is that we can use such ground truth reward functions both as an
oracle performance indicator and as a way to ground simulated models of user
feedback, which we use in our experiments.
Modified environment tasks. For Reacher, we set the goal to get the arm as
close as possible to a fixed dot position in the bottom left. For HalfCheetah,
we change the goal to have the cheetah stand upright on its head, rather than
running forward. For Swimmer, we set the goal to curl up in a horseshoe shape.
See Appendix A for more information and for the custom reward functions.
Mixed policy. We noticed that CLRVis would sometimes get stuck in local optima
during training (i.e. low-reward policies). We suspected this might be due to
low diversity of states in the t-SNE visualizations (potentially caused by low
entropy of the latest policy, used to collect states for the t-SNE). To
mitigate this issue, we combine a random policy with our current policy while
collecting episodes. We start each episode by using our current policy, and
switch to a random policy after $T$ timesteps, where $T$ is drawn uniformly at
random between $0$ and the maximum possible episode length.
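A sketch of the mixed-policy rollout, assuming a classic Gym-style environment interface (all names here are illustrative):

```python
import random

def mixed_rollout(env, policy, max_len):
    """Collect one episode: follow the current policy for T steps, then
    switch to uniformly random actions, with T ~ Uniform{0, ..., max_len}.

    `env` follows the classic Gym API (reset / step / action_space.sample());
    `policy` maps an observation to an action.
    """
    switch_t = random.randint(0, max_len)
    obs = env.reset()
    states = []
    for t in range(max_len):
        # current policy until the switch time, random actions afterwards
        action = policy(obs) if t < switch_t else env.action_space.sample()
        obs, reward, done, info = env.step(action)
        states.append(obs)
        if done:
            break
    return states
```

This keeps the start of each episode on-policy while the random tail injects state diversity into the t-SNE visualization.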
Figure 2: State-space Visualization Interface. A real human can provide reward
feedback by selecting clusters in our t-SNE state visualization and inputting
a ranking. Sample clusters are denoted with different colors. To identify
clusters of states with similar rewards, the user hovers over different points
on the t-SNE which will display an image of the corresponding states on the
right.
Visualization and ranking interface. To collect feedback on reward preferences
from the user, we implemented the web interface in Figure 2. At each iteration
we display a t-SNE visualization of $N$ states, sampled with the mixed policy
described above. The user can then hover over each point in the graph, which
will display the corresponding image of the environment state. To provide
feedback, the user uses a lasso tool to select clusters which they believe to
have similar reward values. After selecting $M$ such clusters, the user ranks
them by entering their order into a text box. Based on this ranking, we
extract pair-wise preferences between states to update our reward model.
Human-time. In our context of learning a reward function (and optimal policy)
from human feedback, the main practical bottleneck is that of human time: how
much time are we requesting from humans to provide labels to update our reward
model $\hat{r}$? Human time is expensive, and in comparison, the computational
cost of updating the policy $\pi$ is negligible. Because of this, we report
algorithm performance across different points of human-time, rather than
different numbers of iterations.
Simulated Human – CLRVis. Since collecting real human feedback for every
experiment was prohibitively expensive, we tested our method with a simulated
human, which we calibrated to be at most as accurate as a real human. We show
comparisons between our simulated human and a real human in Figure 3. To
simulate human cluster choice, we use agglomerative
clustering [31] over the t-SNE plot, and filter out the clusters that are too
small or have high variance relative to other clusters. We then choose $M$
clusters that are equally spaced in terms of average reward. To do this, we
first compute $M$ "target values" that are equally spaced from minimum to
maximum cluster reward. Then, we select the cluster with average reward
closest to each target value. Using the ground truth reward function, we
finally rank the clusters from $1$ to $M$ based on their average reward. Note
that this simulated human is noisy: the selected clusters will have some
variance, and so the cluster rankings will lead to some mis-labeled data,
similarly to what would happen with a real human. We calibrate how long we
expect the simulated human to take per-cluster-ranking for each environment by
timing multiple real human CLRVis runs for a small number of iterations, and
take the average of the cluster-ranking times.
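The target-value selection step of the simulated human can be sketched as follows (illustrative names; the clusters themselves come from agglomerative clustering over the t-SNE plot [31]):

```python
import numpy as np

def pick_ranked_clusters(clusters, reward_fn, M):
    """Simulated human: pick M clusters whose average oracle reward is
    closest to M equally spaced target values, then rank them by that
    average (lowest to highest).

    `clusters` is a list of state collections (already filtered for size
    and variance); `reward_fn` is the hidden ground-truth reward.
    Returns indices into `clusters`, ordered by increasing reward.
    """
    means = np.array([np.mean([reward_fn(s) for s in c]) for c in clusters])
    # M targets equally spaced from minimum to maximum cluster reward
    targets = np.linspace(means.min(), means.max(), M)
    chosen = [int(np.argmin(np.abs(means - t))) for t in targets]
    chosen = list(dict.fromkeys(chosen))  # drop duplicates, keep order
    return sorted(chosen, key=lambda i: means[i])

# Clusters with average rewards 1.5, 5.5, 9.5 are ranked in that order
ranking = pick_ranked_clusters([[1, 2], [5, 6], [9, 10]], lambda s: s, 3)
print(ranking)  # [0, 1, 2]
```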
Simulated Human – DRLHP. We also need to simulate human feedback for DRLHP. To
err in favor of the baseline method, we assume that its human feedback is
noiseless: every comparison is perfectly labeled according to the oracle
reward function. For each single-state comparison in DRLHP, we assumed that
the human labeling time would be 3 seconds (lower bound from [17]).
Reinforcement Learning. To update our policy after learning a reward function,
we use PPO for all experiments. While we use images to learn a contrastive
representation, both the reward function and policy function use the
vectorized observation space as input. For more details, see Appendix B.
### 4.2 Hypotheses
Manipulated Variables. We test the effect of two different strategies of human
feedback collection: a version of DRLHP [17] which elicits comparisons between
individual states (i.e. in which the trajectory snippet length is equal to 1);
and CLRVis (our method), which instead elicits cluster rankings made through
our graphical user interface.
Dependent Measures. To compare each method’s performance, we report the oracle
Episode Reward that each policy type collects after a certain amount of human
labeling time.
Hypotheses. We have two hypotheses: H1. In low human-time regimes, CLRVis will
recover better rewards than DRLHP with the same amount of labeling time; H2.
In high human-time regimes, DRLHP will eventually recover similar or better
rewards than CLRVis (because DRLHP has less mislabeled datapoints).
### 4.3 Quantitative Results
Figure 3: CLRVis simulated vs human. We run CLRVis with a real human for
Swimmer and Reacher. In Reacher, the simulated human performs roughly the same
as the real one. Swimmer only requires one iteration of CLRVis feedback to
converge – only one seed does poorly in simulation, but the rest match the
real human.
All experiments shown are run with 5 seeds, except for those with real human
data, which are run with 3 seeds. All error bars reported are standard errors
of the mean across seeds.
Human model realism. For Reacher and Swimmer, we ran CLRVis with both a real
human and our simulated human to teach an agent the intended tasks. We show
the results in Figure 3, from which we see that the simulated human
performance is similar to, or underestimates, the real human's. We did not
repeat this procedure for Cheetah because the human time required was
prohibitive at this stage of the research. As a quick sanity check, we notice
that all models perform similarly when human labeling hasn’t yet started
(where the x-axis is $0$), so no reward model has yet been learned. While the
real human seems to perform significantly better than the simulated human in
Swimmer, this is only caused by a single bad seed. The others match the
simulated human well.
CLRVis’s time-efficiency. From Figure 4, we see that CLRVis is more time-
efficient in the short run and achieves greater reward in the same amount of
human time. Despite having some degree of noise in its cluster rankings,
CLRVis is able to get close to the oracle policy with little human effort,
performing much better than DRLHP in Reacher and Cheetah. CLRVis matches DRLHP
in Swimmer, but both methods converge to the oracle policy. Therefore, these
results support H1.
DRLHP under unlimited human-time. From Figure 5, we see that if one could
collect vast amounts of human data, DRLHP’s performance will eventually
approach CLRVis. However, despite the inherent noise in providing cluster
rankings, DRLHP does not surpass CLRVis, and in fact does slightly worse,
partially supporting H2. Instead, we see a large speedup in human time when
using CLRVis to reach the optimal DRLHP point. In Reacher, DRLHP takes 3345
seconds to reach its optimum, while CLRVis takes 1330 seconds to reach the
same point. In Cheetah, DRLHP takes 12489 seconds to reach its optimum, while
CLRVis only takes 1800. Therefore, we observe roughly a 2.5x and 7x speedup
respectively. We do not run long-term experiments on Swimmer because it is
able to converge to the oracle policy in the short-term.
For more details about our treatment of human time across our methods and
results, see Section B.3.
Figure 4: CLRVis’s time-limited performance. We see that when human time is
constrained, CLRVis tends to perform significantly better than DRLHP for the
same amount of time, and is able to quickly approach oracle performance across
all 3 environments. Figure 5: Long-term results of CLRVis and DRLHP. When we
allow DRLHP to use much more human time to collect comparisons, its
performance approaches that of CLRVis. However, we still see a significant
speedup in human-time when switching from DRLHP to CLRVis, roughly 2.5x and 7x
for Reacher and Cheetah respectively.
### 4.4 Qualitative Results
Figure 6: t-SNE progression. Over time, the t-SNE state visualization becomes
easier for a human to cluster. States with similar reward are grouped more
closely together as our reward embedding improves. Figure 7: CLRVis end
behavior. CLRVis is able to learn novel behaviors in the Reacher, Swimmer, and
Cheetah environments.
t-SNE Progression. As our agent and reward model embedding improves from human
feedback, we expect states of similar reward to be more closely grouped in our
t-SNE visualization. Also, we expect that the average reward in our sampled
states will increase due to a better policy.
In Figure 6, we plot the t-SNE visualization of Reacher states after 0, 4, and
8 iterations of CLRVis feedback. We color the t-SNE plots according to the
oracle reward values (which would not be accessible to the real human labeler).
Note how there is a large increase in average reward between iterations 0 and
4, and a small increase between iterations 4 and 8. This correlates with Figure
5, where we see Reacher’s episode reward grow sharply towards the beginning of
training, and rise slowly later on. In addition, we notice how it is much
easier for a human to label cluster rankings in later iterations of feedback.
In particular, states of similar reward tend to become grouped more closely
together, and movements in t-SNE space correspond to gradual movements along a
gradient from high reward to low reward.
Behavior Examples. An example of agents reaching their goals in CLRVis can be
found in Figure 7. Videos of agents trained from CLRVis can be found at
https://bit.ly/clrvis-videos.
## 5 Conclusion
Discussion. Before deciding to focus on cluster rankings, we also tried other
forms of feedback, such as labeling clusters with average reward values or
providing individual cluster comparisons. However, we found that labeling
clusters with continuous reward values was difficult for a human to do, while
cluster rankings performed better than comparisons. Although there was some
noise from using clusters instead of individual states, it didn’t have a
significant effect on our overall results. In our approach, users are also
able to actively select the states they want to provide feedback on. As a
result, users have more flexibility to choose states in which they expect the
reward function to be more nuanced.
This is in contrast to DRLHP, where users are forced to compare video snippets
chosen by the algorithm.
Limitations. In this work, we have the users provide feedback over single
states (images) instead of trajectory snippets (videos). While comparing
images might be easier than comparing video snippets for a human, this limits
us to tasks where reward only depends on the current state (a user only needs
a single frame to tell if a Cheetah is standing upright). Another clear
limitation of our work is that most of our experiments are based on simulated
humans.
Future Work. In addition to running more extensive real user studies for both
CLRVis and DRLHP, future work could extend our interface to handle trajectory
snippets, enabling the learning of more complex tasks that depend on a sequence of
images, such as getting the Cheetah to perform a backflip. In addition, future
work might utilize task discovery to pretrain policies in the beginning,
rather than rely on a random policy. After doing so, one could fine-tune the
model rather than learn from scratch. This is similar to existing NLP methods,
where language models are trained on large corpora of text, and fine-tuned
for specific purposes.
Summary. Having users provide individual state comparisons for reward feedback
can be expensive and take a lot of time. We alleviate this issue by providing
an interface where users can provide feedback on large clusters of the state
space, which is visualized using t-SNE. Generally, we want states with similar
reward to be closer to each other, so we make use of contrastive learning and
the reward model representation. To prevent our policy from being stuck in
local optima, we also introduce the notion of a mixed-policy, where the states
shown in our t-SNE come from a combination of our current policy and random
policy. Through experiments in MuJoCo environments, we find that CLRVis can
substantially outperform DRLHP given the same amount of human feedback time.
## Acknowledgments and Disclosure of Funding
We thank the members of the InterACT lab for their invaluable feedback. This
work was supported by ONR YIP and the NSF Fellowship.
## References
* Abbeel and Ng [2004] Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In _Machine Learning (ICML), International Conference on_. ACM, 2004\.
* Annasamy and Sycara [2019] Raghuram Mandyam Annasamy and Katia Sycara. Towards better interpretability in deep q-networks. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 33(01):4561–4569, Jul. 2019. doi: 10.1609/aaai.v33i01.33014561. URL https://ojs.aaai.org/index.php/AAAI/article/view/4377.
* Argall et al. [2009] Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. _Robotics and autonomous systems_ , 57(5):469–483, 2009.
* Aytar et al. [2018] Yusuf Aytar, Tobias Pfaff, David Budden, Tom Le Paine, Ziyu Wang, and Nando de Freitas. Playing hard exploration games by watching youtube. In _Proceedings of the 32nd International Conference on Neural Information Processing Systems_ , NIPS’18, page 2935–2945, Red Hook, NY, USA, 2018. Curran Associates Inc.
* Bai et al. [2022] Yushi Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, T. J. Henighan, Nicholas Joseph, Saurav Kadavath, John Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, Christopher Olah, Benjamin Mann, and Jared Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback. _ArXiv_ , abs/2204.05862, 2022.
* Bajcsy et al. [2017] Andrea Bajcsy, Dylan P. Losey, Marcia K. O’Malley, and Anca D. Dragan. Learning robot objectives from physical human interaction. In Sergey Levine, Vincent Vanhoucke, and Ken Goldberg, editors, _Proceedings of the 1st Annual Conference on Robot Learning_ , volume 78 of _Proceedings of Machine Learning Research_ , pages 217–226. PMLR, 13–15 Nov 2017. URL http://proceedings.mlr.press/v78/bajcsy17a.html.
* Beil and Theissler [2020] David Beil and Andreas Theissler. Cluster-clean-label: An interactive machine learning approach for labeling high-dimensional data. In _Proceedings of the 13th International Symposium on Visual Information Communication and Interaction_ , VINCI ’20, New York, NY, USA, 2020\. Association for Computing Machinery. ISBN 9781450387507. doi: 10.1145/3430036.3430060. URL https://doi.org/10.1145/3430036.3430060.
* Bernard et al. [2018] Jürgen Bernard, Matthias Zeppelzauer, Michael Sedlmair, and Wolfgang Aigner. Vial: a unified process for visual interactive labeling. _The Visual Computer_ , 34, 09 2018. doi: 10.1007/s00371-018-1500-3.
* Biyik et al. [2019] Erdem Biyik, Kenneth Wang, Nima Anari, and Dorsa Sadigh. Batch active learning using determinantal point processes. _CoRR_ , abs/1906.07975, 2019.
* Biyik et al. [2020] Erdem Biyik, Malayandi Palan, Nicholas C. Landolfi, Dylan P. Losey, and Dorsa Sadigh. Asking easy questions: A user-friendly approach to active reward learning. In Leslie Pack Kaelbling, Danica Kragic, and Komei Sugiura, editors, _Proceedings of the Conference on Robot Learning_ , volume 100 of _Proceedings of Machine Learning Research_ , pages 1177–1190. PMLR, 30 Oct–01 Nov 2020. URL https://proceedings.mlr.press/v100/biyik20a.html.
* Bobu et al. [2022] Andreea Bobu, Chris Paxton, Wei Yang, Balakumar Sundaralingam, Yu-Wei Chao, Maya Cakmak, and Dieter Fox. Learning perceptual concepts by bootstrapping from human queries. _IEEE Robotics and Automation Letters_ , 7(4):11260–11267, 2022. doi: 10.1109/LRA.2022.3196164.
* Bradley and Terry [1952] Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. _Biometrika_ , 39(3/4):324–345, 1952. ISSN 00063444. URL http://www.jstor.org/stable/2334029.
* Braziunas and Boutilier [2008] Darius Braziunas and Craig Boutilier. Elicitation of factored utilities. _AI Magazine_ , 29(4):79, Dec. 2008. doi: 10.1609/aimag.v29i4.2203. URL https://ojs.aaai.org/index.php/aimagazine/article/view/2203.
* Brown et al. [2019] Daniel Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum. Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. In _International Conference on Machine Learning_ , pages 783–792. PMLR, 2019.
* Brown et al. [2018] Daniel S Brown, Yuchen Cui, and Scott Niekum. Risk-aware active inverse reinforcement learning. In _Conference on Robot Learning_ , pages 362–372. PMLR, 2018.
* Chen et al. [2020] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In _Proceedings of the 37th International Conference on Machine Learning_ , ICML’20. JMLR.org, 2020.
* Christiano et al. [2017] Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. 06 2017.
* Desmond et al. [2021] Michael Desmond, Michael Muller, Zahra Ashktorab, Casey Dugan, Evelyn Duesterwald, Kristina Brimijoin, Catherine Finegan-Dollak, Michelle Brachman, Aabhas Sharma, Narendra Nath Joshi, and Qian Pan. Increasing the speed and accuracy of data labeling through an ai assisted interface. In _26th International Conference on Intelligent User Interfaces_ , IUI ’21, page 392–401, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450380171. doi: 10.1145/3397481.3450698. URL https://doi.org/10.1145/3397481.3450698.
* Finn et al. [2016] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In _Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48_ , ICML’16, page 49–58. JMLR.org, 2016.
* Florensa et al. [2017] Carlos Florensa, David Held, Markus Wulfmeier, Michael Zhang, and Pieter Abbeel. Reverse curriculum generation for reinforcement learning, 2017. URL https://arxiv.org/abs/1707.05300.
* Fu et al. [2018] Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adverserial inverse reinforcement learning. In _International Conference on Learning Representations_ , 2018. URL https://openreview.net/forum?id=rkHywl-A-.
* Grimmeisen and Theissler [2020] Benedikt Grimmeisen and Andreas Theissler. The machine learning model as a guide: Pointing users to interesting instances for labeling through visual cues. In _Proceedings of the 13th International Symposium on Visual Information Communication and Interaction_ , VINCI ’20, New York, NY, USA, 2020\. Association for Computing Machinery. ISBN 9781450387507. doi: 10.1145/3430036.3430058. URL https://doi.org/10.1145/3430036.3430058.
* Hilgard et al. [2021] Sophie Hilgard, Nir Rosenfeld, Mahzarin R Banaji, Jack Cao, and David Parkes. Learning representations by humans, for humans. In Marina Meila and Tong Zhang, editors, _Proceedings of the 38th International Conference on Machine Learning_ , volume 139 of _Proceedings of Machine Learning Research_ , pages 4227–4238. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/hilgard21a.html.
* Ibarz et al. [2018] Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, and Dario Amodei. Reward learning from human preferences and demonstrations in atari. _ArXiv_ , abs/1811.06521, 2018.
* Laskin et al. [2020] Michael Laskin, Aravind Srinivas, and Pieter Abbeel. CURL: Contrastive unsupervised representations for reinforcement learning. In Hal Daumé III and Aarti Singh, editors, _Proceedings of the 37th International Conference on Machine Learning_ , volume 119 of _Proceedings of Machine Learning Research_ , pages 5639–5650. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/laskin20a.html.
* Mehta and Losey [2022] Shaunak A. Mehta and Dylan P. Losey. Unified learning from demonstrations, corrections, and preferences during physical human-robot interaction, 2022. URL https://arxiv.org/abs/2207.03395.
* Mnih et al. [2015] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charlie Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. _Nature_ , 518:529–533, 2015.
* Oord et al. [2018] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding, 2018. URL https://arxiv.org/abs/1807.03748.
* Ouyang et al. [2022] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. Training language models to follow instructions with human feedback. _ArXiv_ , abs/2203.02155, 2022.
* Palan et al. [2019] Malayandi Palan, Nicholas C. Landolfi, Gleb Shevchuk, and Dorsa Sadigh. Learning reward functions by integrating human demonstrations and preferences. _ArXiv_ , abs/1906.08928, 2019.
* Pedregosa et al. [2011] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_ , 12:2825–2830, 2011\.
* Raffin [2020] Antonin Raffin. RL Baselines3 Zoo. https://github.com/DLR-RM/rl-baselines3-zoo, 2020.
* Raffin et al. [2021] Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, and Noah Dormann. Stable-baselines3: Reliable reinforcement learning implementations. _Journal of Machine Learning Research_ , 22(268):1–8, 2021. URL http://jmlr.org/papers/v22/20-1364.html.
* Reddy et al. [2020] S. Reddy, A. Dragan, Sergey Levine, S. Legg, and J. Leike. Learning human objectives by evaluating hypothetical behavior. In _ICML_ , 2020.
* Salimans and Chen [2018] Tim Salimans and Richard J. Chen. Learning montezuma’s revenge from a single demonstration. _ArXiv_ , abs/1812.03381, 2018.
* Sedlmair [2018] J. Bernard, Marco Hutter, Matthias Zeppelzauer, Dieter W. Fellner, and Michael Sedlmair. Comparing visual-interactive labeling with active learning: An experimental study. _IEEE Transactions on Visualization and Computer Graphics_ , 24:298–308, 2018.
* Singh et al. [2019] Avi Singh, Larry Yang, Chelsea Finn, and Sergey Levine. End-to-end robotic reinforcement learning without reward engineering. In Antonio Bicchi, Hadas Kress-Gazit, and Seth Hutchinson, editors, _Robotics: Science and Systems XV, University of Freiburg, Freiburg im Breisgau, Germany, June 22-26, 2019_ , 2019. doi: 10.15607/RSS.2019.XV.073. URL https://doi.org/10.15607/RSS.2019.XV.073.
* Stiennon et al. [2020] Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan J. Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. Learning to summarize from human feedback. _ArXiv_ , abs/2009.01325, 2020.
* Stooke et al. [2021] Adam Stooke, Kimin Lee, Pieter Abbeel, and Michael Laskin. Decoupling representation learning from reinforcement learning. In Marina Meila and Tong Zhang, editors, _Proceedings of the 38th International Conference on Machine Learning_ , volume 139 of _Proceedings of Machine Learning Research_ , pages 9870–9879. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/stooke21a.html.
* van der Maaten and Hinton [2008] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. _Journal of Machine Learning Research_ , 9(86):2579–2605, 2008. URL http://jmlr.org/papers/v9/vandermaaten08a.html.
* Wang et al. [2022] Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen, and Junbo Zhao. PiCO: Contrastive label disambiguation for partial label learning, 2022. URL https://arxiv.org/abs/2201.08984.
* Warnell et al. [2018] Garrett Warnell, Nicholas R. Waytowich, Vernon J. Lawhern, and Peter Stone. Deep tamer: Interactive agent shaping in high-dimensional state spaces. _ArXiv_ , abs/1709.10163, 2018.
* Wirth et al. [2017] Christian Wirth, Riad Akrour, Gerhard Neumann, Johannes Fürnkranz, et al. A survey of preference-based reinforcement learning methods. _Journal of Machine Learning Research_ , 18(136):1–46, 2017.
* Wulfmeier et al. [2016] M. Wulfmeier, D. Z. Wang, and I. Posner. Watch this: Scalable cost-function learning for path planning in urban environments. In _2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 2089–2095, 2016.
* Zahavy et al. [2016] Tom Zahavy, Nir Ben-Zrihem, and Shie Mannor. Graying the black box: Understanding dqns. In Maria Florina Balcan and Kilian Q. Weinberger, editors, _Proceedings of The 33rd International Conference on Machine Learning_ , volume 48 of _Proceedings of Machine Learning Research_ , pages 1899–1908, New York, New York, USA, 20–22 Jun 2016. PMLR. URL https://proceedings.mlr.press/v48/zahavy16.html.
* Ziebart et al. [2008] Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In _Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 3_ , AAAI’08, pages 1433–1438. AAAI Press, 2008. ISBN 978-1-57735-368-3. URL http://dl.acm.org/citation.cfm?id=1620270.1620297.
## Appendix A Environment Details
Custom rewards. Each environment is modified with its own reward function. In
general, we remove the torque-based penalties from the MuJoCo rewards, since
they are not apparent to a human. We also use fixed-length episodes to ensure
that termination conditions do not influence our reward function.
Reacher: We use the same reward function as the original environment, but keep
only the $reward\_dist$ component and discard $reward\_control$.
Cheetah: We change the reward function to $-|obs[1]-1.25|$, where $obs[1]$ is
the angle of the main body. The target angle of $1.25$ means we want the
cheetah to be upright with a slight tilt.
Swimmer: We change the reward function to $obs[1]\cdot obs[2]$, where $obs[1]$
and $obs[2]$ are the angles of the left and right rotor. Multiplying them
rewards configurations where the rotors bend in the same direction rather than
into a Z shape.
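The three custom rewards above can be sketched as plain NumPy functions. This is a hedged illustration, not the paper's code: the function names are ours, and the observation indexing follows the descriptions above together with the standard MuJoCo observation layouts, which are assumptions.

```python
import numpy as np

def reacher_reward(fingertip_pos, target_pos):
    # Keep only the distance component (reward_dist); the control
    # penalty (reward_control) from the original environment is dropped.
    diff = np.asarray(fingertip_pos) - np.asarray(target_pos)
    return -float(np.linalg.norm(diff))

def cheetah_reward(obs, target_angle=1.25):
    # Penalize deviation of the main body angle obs[1] from the target
    # (upright with a slight tilt).
    return -abs(obs[1] - target_angle)

def swimmer_reward(obs):
    # Product of the two rotor angles obs[1], obs[2]: positive when both
    # bend the same way, negative for a Z-shaped configuration.
    return obs[1] * obs[2]
```

In practice these would be applied per step inside a reward-wrapper around the environment; the fixed-length-episode change is handled separately by the environment's time limit.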
State and action spaces. We use the same observation and action spaces as the
original MuJoCo environments. In general, the observation space contains joint
angles, angular and linear velocities, and position coordinates. All of the
action spaces are continuous.
## Appendix B Experimental Details
Table 1: DRLHP Hyperparameters

Parameter | DRLHP
---|---
Initial reward training epochs | 200
Reward training epochs | 2
Reward batch size | 50
Reward learning rate | $3\times 10^{-4}$
Label schedule | Hyperbolic
Table 2: CLRVis Hyperparameters

Parameter | Value
---|---
Initial reward training steps | 2000
Reward training steps | 500
Reward batch size | 500
Reward learning rate | $3\times 10^{-4}$
CL batch size | 50
CL learning rate | $5\times 10^{-5}$
CL embedding dim | 512
PCA dim | 50
t-SNE perplexity | 30
### B.1 Preprocessing and Hyperparameters
We train all agents using PPO. The PPO parameters for Reacher are from rl-
baselines3-zoo [32], while the ones for Cheetah and Swimmer are from stable-
baselines3 [33]. Agents receive reward feedback every 250000 steps for Reacher
and Swimmer, and every 100000 steps for Cheetah. These parameters are kept
consistent between DRLHP and CLRVis.
To train our contrastive network, we render the MuJoCo images and resize them
to 100×100. We apply a random crop augmentation to both the anchor and
positive example. To form our t-SNE representation, we first apply
dimensionality reduction with PCA to the concatenation of our contrastive and
reward embedding. Then, we run the t-SNE algorithm with a perplexity of 30. We
show 2000 states to the user each time.
Additional parameters for DRLHP and CLRVis are in Table 1 and Table 2
respectively.
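The PCA-then-t-SNE projection described above can be sketched with scikit-learn, which the paper cites. This is a minimal sketch rather than the authors' implementation: the function name, the guard against tiny inputs, and the fixed seed are our additions, and `features` is assumed to be the concatenated contrastive and reward embeddings, one row per state.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def embed_states(features, pca_dim=50, perplexity=30, seed=0):
    # First reduce to pca_dim components with PCA (Table 2: PCA dim 50),
    # guarding against inputs too small for the requested dimension.
    pca_dim = min(pca_dim, min(features.shape) - 1)
    reduced = PCA(n_components=pca_dim, random_state=seed).fit_transform(features)
    # Then run t-SNE on the reduced features (Table 2: perplexity 30)
    # to obtain the 2-D layout shown in the interface.
    tsne = TSNE(n_components=2, perplexity=perplexity,
                init="pca", random_state=seed)
    return tsne.fit_transform(reduced)
```

With 2000 states per feedback round, the `features` matrix would have 2000 rows, each the concatenation of the 512-dimensional contrastive embedding and the reward embedding.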
### B.2 DRLHP
The initial reward model is trained on states sampled from a random policy,
before we begin training our agent. In addition, $25\%$ of the total
comparisons are used to train the initial reward model, and comparisons are
provided on a hyperbolic schedule in future iterations.
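One way to realize a hyperbolic comparison schedule is to weight iteration $t$ by $1/(t+1)$ after reserving the initial fraction. The allocation rule below is our own illustration; only the 25% initial share and the hyperbolic decay come from the text above, and the function name is hypothetical.

```python
def hyperbolic_label_schedule(total_labels, num_iters, initial_frac=0.25):
    # Reserve a fixed fraction of the comparison budget for the initial
    # reward model, then spread the remainder over iterations with
    # weights proportional to 1/(t+1) (hyperbolic decay).
    initial = round(total_labels * initial_frac)
    remaining = total_labels - initial
    weights = [1.0 / (t + 1) for t in range(num_iters)]
    scale = remaining / sum(weights)
    per_iter = [round(w * scale) for w in weights]
    return initial, per_iter
```

Early iterations thus receive the most comparisons, with later iterations receiving progressively fewer.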
### B.3 Human timings
To estimate the amount of time needed to provide a set of cluster rankings, a
real human timed themselves over a small number of feedback iterations, and we
used the average. For each iteration, the time measurement starts when the t-SNE
first loads into the interface and ends when the human submits their cluster
orderings.
For Figure 4, we use the average time per CLRVis iteration to determine the
number of comparisons DRLHP gets at each iteration. This ensures that each
point in DRLHP has a corresponding point in CLRVis with the same “human time”,
so we can directly compare the two methods. For Figure 5, by contrast, we do
not attempt to equalize the amount of human time per iteration across methods.
This enables us to sweep over DRLHP parameters such as the number of feedback
comparisons per iteration that had to be fixed for Figure 4. By doing this, we
can find the best performing hyperparameters without the time-per-iteration
constraint. However, this also means that we won’t have the same number of
datapoints on the x-axis across methods: DRLHP performs best with many
comparisons per iteration, leading to a much higher human-time per iteration
relative to CLRVis.
### B.4 Computational cost
All experiments were run on an on-premise server. The server has 2 Intel Xeon
6130 32-Core CPUs and 4 NVIDIA GTX 1080 GPUs, shared between multiple
projects. The short-term experiments in Figure 4 took around 6 hours of
compute. The long-term experiments in Figure 5 took around 60 hours of
compute; this is mainly due to running CLRVis for many iterations to compare
against DRLHP. In practice, one would require far less time (closer to 6
hours) to achieve a reasonable policy.
## Appendix C Additional results
Episode reward in timesteps. In Figure 8, we show how CLRVis and DRLHP perform
after a certain number of timesteps. Since CLRVis uses less human time per
timestep, we run it for longer than DRLHP. While CLRVis might require more
timesteps to converge in reward, our goal is to minimize the amount of
human time required instead.
Figure 8: Mean episode reward vs timesteps. Instead of using human time for
our x-axis, we switch to environment timesteps. Since CLRVis uses less human
time per timestep, we are able to run it for longer than DRLHP while spending
the same amount of human time.
a Leinweber Center for Theoretical Physics, Randall Laboratory of Physics, Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA.
b CPHT, CNRS, École Polytechnique, Institut Polytechnique de Paris, F-91128 Palaiseau, France.
c School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London E1 4NS, UK.
d DAMTP, Centre for Mathematical Sciences, Cambridge University, Wilberforce Road, Cambridge CB3 0WA, UK.
e Trinity College, Cambridge, CB2 1TQ, UK.
# Singular Supertranslations and Chern-Simons Theory on the Black Hole Horizon
Ratindranath Akhoury (a), Sangmin Choi (b), and Malcolm J. Perry (c,d,e)
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
We construct the standard and dual supertranslation charges on the future
horizon of the Schwarzschild black hole, using the first-order formulation of
gravity with the Holst action. The Dirac bracket algebra of standard and dual
supertranslation charges is shown to exhibit a central term in the presence of
singularities in the two-sphere function associated with supertranslation. We
show that one can cancel this anomalous term and restore the asymptotic
symmetry algebra by introducing a gravitational Chern-Simons theory on the
horizon. This demonstrates that consistency of the asymptotic symmetry algebra
requires a new structure on the horizon.
## 1 Introduction
Black hole physics provides us with a paradox that has been around now for
almost fifty years Hawking:1976ra . The information paradox illustrates an
apparent conflict between classical and semi-classical general relativity and
the fundamental tenets of quantum theory. The classical black hole uniqueness
theorems appear to indicate that the Kerr-Newman family of black holes,
characterised by their mass, angular momentum and electric charge, is
sufficient to describe all black hole stationary states111There are some
exceptions to this, but they do not change the general picture.. If this were
true in a complete quantum theory, then it would not be possible to
distinguish between a black hole formed from matter and one formed form
antimatter. It appears that to any observer outside a black hole that has
reached a stationary state, the black hole is independent of the details of
its formation. In particular, the black hole has no memory of the quantum
state of the material that formed it. The trouble comes when black holes
evaporate. Hawking showed that the outgoing radiation is thermal and so has a
large von Neumann entropy. Suppose that the matter forming the black hole was
in a pure quantum state. In quantum mechanics, the von Neumann entropy is
constant because the time evolution operator is unitary but that is
inconsistent with the picture outlined above. Identifying what is wrong with
this picture has been a huge challenge and, despite much hard labour, has not
yet yielded any clear solution.
Recently, it has been realised that black holes can have soft hair
Hawking:2016msc ; Hawking:2016sgy . Soft hair consists of extra degrees of freedom
that a black hole can have. The geometry remains that of the Kerr-Newman
sequence with the soft hair being described by a particular class of gauge
transformations. Suppose we look at an asymptotically flat spacetime that does
not contain a black hole. Bondi-Metzner-Sachs (BMS) transformations acting on
the gravitational field at both past and future null infinity generalise the
Poincaré symmetry group familiar from non-gravitational settings Bondi:1962px
; Sachs:1962wk . The Poincaré group acts on Minkowski space with large gauge
transformations generating translations, rotations or boosts. Each of these
large gauge transformations is associated with a charge, namely the momentum,
angular momentum and the boost charge. Similarly, the BMS transformations are
associated with a charge conjugate to large gauge transformations. These
charges distinguish the infinite number of distinct vacua of the gravitational
field. In electromagnetism, one is familiar with integrating a current over a
spacelike three-surface to describe the charge passing through the surface.
Gauss’ theorem then guarantees that this integral can be turned into a surface
integral that measures the charge inside that surface. Exactly the same thing
happens for the BMS charges so they can be described by surface integrals on
sections of past or future null infinity. These charges can change as the
result of incoming matter or gravitational waves passing through past null
infinity or outgoing matter or gravitational waves passing through future null
infinity. Gravitational memory gives a method of observing the changes in
these charges. We refer interested readers to Strominger:2013lka ;
Strominger:2013jfa ; He:2014laa ; Hyun:2014kfa ; Adamo:2014yya ; He:2014cra ;
Campiglia:2015kxa ; Campiglia:2015lxa ; Campiglia:2015qka ; Campiglia:2015yka
; Kapec:2015ena ; Avery:2015gxa ; Avery:2015iix ; Lysov:2015jrs for some
earlier literature on this development. See Strominger:2017zoo for a review.
In black hole spacetimes, the horizon is a boundary of what can be observed
from the exterior. The integrals of currents may then have two boundary
components, one at null infinity and the other on the horizon. As a
consequence black holes will also carry soft charges in much the same way as
they can be found at null infinity. This paper is concerned with some of the
consequences of this observation. Our main aim here is to consider pure
gravity without matter. However, in the interests of clarity and simplicity,
we will also provide an outline discussion of the case of electromagnetism as
a model of the more complicated case of gravitation. The main results of this
paper were outlined in the letter Akhoury:2022sfj ; here we
provide the details of the calculation.
We examine in detail the physics of horizon (standard and dual) BMS charges
for the Schwarzschild black hole. The horizon charges are computed using the
machinery presented by Godazgar:2020gqd ; Godazgar:2020kqd in the first-order
formalism of gravity. The algebra of the charges is expected to reflect the
algebra of the vector fields that generate the corresponding symmetry. We find
an anomaly in the algebra of charges. To preserve the symmetry of the theory,
we need to introduce some degrees of freedom to cancel the anomaly since
otherwise the theory would be inconsistent. We show that this can be done by
the introduction of a (holographic) gravitational Chern-Simons theory on the
horizon. It would be satisfying to show that the states of this Chern-Simons
theory reproduce the correct black hole entropy and thereby describe the
states of the black hole itself. Such a goal is currently beyond us but a
subject of current investigation222Were this true in the most obvious simple
way, it would appear run into difficulties because of the species problem. The
Chern-Simons theory on the boundary depends not only on the gravitational
degrees of freedom but also on the matter degrees of freedom. So you would
expect the spectrum of states to depend on the entropy. However, the black
hole entropy as given by Hawking, is just one quarter of the area of the event
horizon and does not depend on the matter content. All state counting
arguments appear to run into this type of difficulty.. We find the Chern-
Simons theory for electromagnetism to have gauge group $U(1)\otimes U(1)$ and
for gravitation to be $SL(2,{\mathbb{C}})$ Witten:1989ip . Thus from this
point of view, consistency requires the introduction of a new structure at the
horizon.
In section 2, we describe the analog of standard and dual BMS transformations
on the horizon in the Bondi gauge. The use of the Bondi gauge on the horizon
makes computations particularly simple for the case of the Schwarzschild
metric. It is noteworthy that the algebra of vector fields that generate
supertranslations and superrotations is identical to that found for the BMS
group at null infinity. However, to establish this result, we had to revisit
some earlier work of Barnich and Troessaert where a modified Lie bracket was
introduced; the rationale and description are also discussed in section 2. In
section 3, we introduce the charges associated to the diffeomorphism
symmetries. In parallel, we also discuss the dual (magnetic) counterpart of
the diffeomorphism symmetries. We restrict ourselves here to use of smooth
vector fields to generate the symmetries. In section 4, we continue the
discussion of charges but allow for the possibility that there could be
singularities in the supertranslations. We examine in detail the case of the
supertranslation generator having a pole when expressed in the usual complex
coordinates on the $S^{2}$ of the horizon. In section 5, we show that the
algebra of electric and magnetic supertranslation charges is anomalous and
discover the nature of a central charge. In section 6, we give an alternative
derivation of the same result. In section 7, we examine electromagnetic soft
hair and show that a singularity leads to an anomaly in the charge algebra when
one has both electric and magnetic transformations. We show that this anomaly
can be canceled by supposing that the horizon has a Chern-Simons theory living
on it. It is fortunate that the Chern-Simons is a topological theory as it is
metric independent. There are two nice properties that follow. The first is
that since the horizon is a null surface, the metric is degenerate there and
one cannot invert the metric. Had the theory been metric-dependent, as most
are, it would have been impossible to formulate a theory that is restricted to
the null surface. The second also follows from being metric-independent. The
energy-momentum tensor of a theory is given by varying the action with respect
to the metric. Therefore, in the Chern-Simons case, the energy-momentum tensor
vanishes and the holographic theory does not disturb the black hole geometry.
In section 8, we repeat this analysis for the gravitational case. Finally,
there is a brief discussion of our results in section 9. In addition, there
are three appendices that deal with some technical matters involved in our
computations.

Notation: We work in units where $G_{N}=1$. We will use lower-case Latin
letters $a,b,c,\ldots$ for the four-dimensional curved indices, Greek letters
$\alpha,\beta,\gamma,\ldots$ for the four-dimensional flat (Lorentz) indices,
and capital Latin letters $A,B,C,\ldots$ for the two-dimensional curved
indices corresponding to angular variables on a sphere. The indices
$a,b,c,\ldots$ are lowered/raised by $g_{ab}$ and its inverse $g^{ab}$, while
$\alpha,\beta,\gamma,\ldots$ are lowered/raised by $\eta_{\alpha\beta}$ and
its inverse $\eta^{\alpha\beta}$. The two-dimensional indices $A,B,C,\ldots$
are lowered and raised by the unit 2-sphere metric $\gamma_{AB}$ and its
inverse $\gamma^{AB}$. An exception to this convention is used in appendix B,
where $g_{AB}$ and its inverse $g^{AB}$ are used to lower and raise indices.
## 2 Horizon BMS transformations in the Bondi gauge
We briefly review the BMS supertranslations and superrotations on the future
horizon of a Schwarzschild black hole Hawking:2016sgy . Throughout our paper,
we work in the Bondi gauge,
$\displaystyle
g_{rr}=g_{rA}=0,\qquad{\partial}_{r}\det\left(\frac{g_{AB}}{r^{2}}\right)=0.$
(1)
In terms of the ingoing Eddington-Finkelstein coordinates, the Schwarzschild
metric is given by
$\displaystyle ds^{2}=-\Lambda
dv^{2}+2dvdr+r^{2}\gamma_{AB}d\Theta^{A}d\Theta^{B},\qquad\Lambda\equiv
1-\frac{2M}{r},$ (2)
where $\gamma_{AB}$ is the metric on the unit 2-sphere. A diffeomorphism $\xi$
that preserves these conditions should satisfy
$\displaystyle\mathcal{L}_{\xi}g_{rr}=\mathcal{L}_{\xi}g_{rA}=0,\qquad\gamma^{AB}\mathcal{L}_{\xi}g_{AB}=0.$
(3)
Such diffeomorphisms can be parametrized as Hawking:2016sgy
$\displaystyle\xi=X{\partial}_{v}-\frac{1}{2}\left(rD_{A}X^{A}+D^{2}X\right){\partial}_{r}+\left(X^{A}+\frac{1}{r}D^{A}X\right){\partial}_{A},$
(4)
where $X^{A}=X^{A}(v,\Theta)$ is an arbitrary vector field and $X=X(v,\Theta)$
is an arbitrary scalar field on the future horizon $\mathcal{H}^{+}$. Here
$D_{A}$ denotes the covariant derivative on the unit 2-sphere and so
$D^{A}=\gamma^{AB}D_{B}$ and also $D^{2}\equiv
D^{A}D_{A}=\gamma^{AB}D_{A}D_{B}$.
A supertranslation is given by
$\displaystyle X=f(\Theta),\qquad X^{A}=0,$ (5)
where $f$ is a smooth function on the 2-sphere. In later sections, we relax
the smoothness condition to allow $f$ to have poles.
A superrotation is given by
$\displaystyle X=\frac{v}{2}D_{A}Y^{A},\qquad X^{A}=Y^{A}(\Theta),$ (6)
where $Y^{A}$ is a smooth vector field on the 2-sphere.
Since supertranslations and superrotations are metric-dependent, the
diffeomorphisms (4) do not form a closed algebra under the Lie bracket of
vector fields. To see why, consider a transformation of the metric generated
by $\xi_{1}^{a}$. Under such a transformation $g_{ab}\rightarrow
g_{ab}+h_{ab}$ with
$\displaystyle
h_{ab}={\mathcal{L}}_{\xi_{1}}g_{ab}=\xi_{1}^{c}\partial_{c}g_{ab}+g_{ac}\partial_{b}\xi_{1}^{c}+g_{cb}\partial_{a}\xi_{1}^{c}.$
(7)
Now a second transformation generated by $\xi_{2}^{a}$ will produce the second
order variation of the metric but will also produce a variation of
$\xi_{1}^{a}$. The variation of $\xi_{1}^{a}$ needs to be removed in order to
isolate the second order variation of the metric. The Lie bracket
$[\xi_{1},\xi_{2}]$ of two vector fields $\xi_{1}^{a}$ and $\xi_{2}^{a}$ is
conventionally defined by
$\displaystyle\mathcal{L}_{\xi_{1}}\mathcal{L}_{\xi_{2}}-\mathcal{L}_{\xi_{2}}\mathcal{L}_{\xi_{1}}=\mathcal{L}_{[\xi_{1},\xi_{2}]}$
(8)
so that
$\displaystyle[\xi_{1},\xi_{2}]^{a}=\xi_{1}^{b}{\partial}_{b}\xi_{2}^{a}-\xi_{2}^{b}{\partial}_{b}\xi_{1}^{a}.$
(9)
The Lie bracket needs to be modified in order to isolate just the second order
variation of the metric. An appropriately modified Lie bracket of vector
fields was introduced by Barnich and Troessaert Barnich:2011mi and is
$\displaystyle[\xi_{1},\xi_{2}]^{a}_{M}$
$\displaystyle=[\xi_{1},\xi_{2}]^{a}-\delta_{\xi_{1}}\xi_{2}^{a}+\delta_{\xi_{2}}\xi_{1}^{a},$
(10)
where $\delta_{\xi_{1}}\xi_{2}^{a}$ denotes the change in the vector component
$\xi_{2}^{a}$ induced by the diffeomorphism $\xi_{1}$. Supertranslations and
superrotations acting on the metric then form a closed algebra under the
modified bracket.
For example, given a pair of vector fields $\xi_{i}$ ($i=1,2$) that generate a
supertranslation $f_{i}$ and a superrotation $Y_{i}$, one can show that
$\displaystyle[\xi_{1},\xi_{2}]_{M}=\xi_{3},$ (11)
where $\xi_{3}$ is a vector field that generates both a supertranslation
$\hat{f}$ and a superrotation $\hat{Y}$ given by
$\displaystyle\hat{f}$
$\displaystyle=\frac{1}{2}f_{1}D_{A}Y^{A}_{2}-\frac{1}{2}f_{2}D_{A}Y^{A}_{1}+Y_{1}^{A}D_{A}f_{2}-Y_{2}^{A}D_{A}f_{1},$
(12) $\displaystyle\hat{Y}^{A}$
$\displaystyle=Y_{1}^{B}D_{B}Y_{2}^{A}-Y_{2}^{B}D_{B}Y_{1}^{A}.$ (13)
A derivation of the above result is given in appendix A. We note that this is
the same as for the BMS4 algebra at null infinity Barnich:2011mi .
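As a quick consistency check (our own worked example, not part of the original derivation), setting $Y_{1}^{A}=Y_{2}^{A}=0$ in (12)–(13) shows that two pure supertranslations commute under the modified bracket:

```latex
% Pure supertranslations: X_i = f_i(\Theta), \; X_i^A = 0 \; (i = 1,2).
% Equations (12)-(13) reduce to
\hat{f} = \tfrac{1}{2} f_1 D_A Y_2^A - \tfrac{1}{2} f_2 D_A Y_1^A
        + Y_1^A D_A f_2 - Y_2^A D_A f_1 = 0,
\qquad
\hat{Y}^A = Y_1^B D_B Y_2^A - Y_2^B D_B Y_1^A = 0,
% since every term carries a factor of Y_1 or Y_2. Hence
% [\xi_{f_1}, \xi_{f_2}]_M = 0: the supertranslations form an abelian
% subalgebra, as in the BMS_4 algebra at null infinity.
```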
Another important ingredient that plays a central role in this work is dual
supertranslation, which is a new set of asymptotic symmetries of gravity that
has recently been uncovered Godazgar:2018qpq . Interestingly, dual
supertranslations are not diffeomorphisms of any kind Kol:2019nkc , and they
have a natural interpretation as the magnetic dual of the standard BMS
supertranslation Godazgar:2018dvh ; Godazgar:2018qpq ; Godazgar:2019dkh ;
Kol:2019nkc . In electromagnetism, magnetic large gauge symmetry is tied to
the complexification of the large gauge transformation charge (see
Strominger:2015bla for instance). Similarly in gravity, the appearance of
dual supertranslation can be understood as the complexification of the BMS
charge. Just like the BMS supertranslation charge $Q_{f}^{\mathcal{I}^{+}}$
can be written as the real part of a complex Weyl scalar,
$\displaystyle Q_{f}^{\mathcal{I}^{+}}$
$\displaystyle=\frac{1}{4\pi}\int_{\mathcal{I}^{+}_{-}}d^{2}z\sqrt{\gamma}f(z,{\bar{z}})\operatorname{Re}\left[\Psi^{0}_{2}(u,z,{\bar{z}})\right],$
(14)
the dual supertranslation charge $\widetilde{Q}_{f}^{\mathcal{I}^{+}}$ is
associated to its imaginary part Godazgar:2018qpq ; Kol:2019nkc ,
$\displaystyle\widetilde{Q}_{f}^{\mathcal{I}^{+}}$
$\displaystyle=\frac{1}{4\pi}\int_{\mathcal{I}^{+}_{-}}d^{2}z\sqrt{\gamma}f(z,{\bar{z}})\operatorname{Im}\left[\Psi^{0}_{2}(u,z,{\bar{z}})\right].$
(15)
A prime example of a spacetime with a non-trivial global dual supertranslation
charge is the Taub-NUT spacetime Taub:1950ez ; Newman:1963yy , which has been
studied in detail in the context of dual supertranslation in Kol:2019nkc .
There are also examples of asymptotically flat spacetimes with bulk dust
configurations that lead to a non-trivial dual supertranslation at the null
infinity, see section III.D of Satishchandran:2019pyc .
More recently, it has been demonstrated by Godazgar:2020gqd ; Godazgar:2020kqd
that dual supertranslation charges (or dual diffeomorphism charges in general)
can be computed using covariant phase space formalism in first-order formalism
of gravity with the Holst action Holst:1995pc . In the next section, we employ
this method to compute the dual supertranslation charge on the future
Schwarzschild horizon. This dual charge is then used along with the standard
horizon supertranslation charge to compute the Dirac bracket algebra of
horizon charges.
## 3 Horizon charges
We will now construct the supertranslation charges on the future horizon
$\mathcal{H}^{+}$ assuming smoothness of the supertranslation parameter $f$.
Following Hawking:2016sgy , let us define $\Sigma$ to be a spacelike
hypersurface extending from a section of $\mathcal{I}^{+}$ to a section of the
horizon $\mathcal{H}^{+}$. A charge $Q^{\Sigma}$ associated with $\Sigma$
breaks into two parts, one being on the horizon and the other on null
infinity. These two parts of $Q^{\Sigma}$ correspond to the two components of
the boundary of $\Sigma$, $\partial\Sigma$.
$\displaystyle Q^{\Sigma}$
$\displaystyle=Q^{\mathcal{H}^{+}}+Q^{\mathcal{I}^{+}}.$ (16)
In Godazgar:2020gqd ; Godazgar:2020kqd , the authors provide a formula for the
(possibly non-integrable) variation of electric and magnetic charges
associated with a vector field $\xi$. The metric is varied, inducing a
variation $\delta\omega^{\alpha\beta}$ of the connection $1$-form
$\omega^{\alpha\beta}$:
$\displaystyle\not{\delta}Q^{\Sigma}_{E}$
$\displaystyle=\frac{1}{16\pi}\epsilon_{{\alpha\beta\gamma\delta}}\int_{{{\partial}\Sigma}}(i_{\xi}E^{\gamma})\delta{\omega}^{{\alpha\beta}}\wedge
E^{\delta},$ (17) $\displaystyle\not{\delta}Q^{\Sigma}_{M}$
$\displaystyle=\frac{1}{8\pi}\int_{{{\partial}\Sigma}}(i_{\xi}E^{\alpha})\delta{\omega}_{\alpha\beta}\wedge
E^{\beta},$ (18)
Each of these breaks into two contributions $\not{\delta}Q^{\mathcal{H}^{+}}$
and $\not{\delta}Q^{\mathcal{I}^{+}}$. On the horizon, there is an advanced
time coordinate $v$ and the horizon contributions at time $v_{0}$ take the
form
$\displaystyle\not{\delta}Q^{\mathcal{H}^{+}}_{E}$
$\displaystyle=\frac{1}{16\pi}\epsilon_{{\alpha\beta\gamma\delta}}\int_{\,\mathcal{H}^{+}_{v_{0}}}(i_{\xi}E^{\gamma})\delta{\omega}^{{\alpha\beta}}\wedge
E^{\delta},$ (19) $\displaystyle\not{\delta}Q^{\mathcal{H}^{+}}_{M}$
$\displaystyle=\frac{1}{8\pi}\int_{\,\mathcal{H}^{+}_{v_{0}}}(i_{\xi}E^{\alpha})\delta{\omega}_{\alpha\beta}\wedge
E^{\beta}.$ (20)
Throughout this paper, we will take the viewpoint that the black hole
ultimately evaporates. Therefore, although there is a future boundary
${\mathcal{H}^{+}}$ to the horizon, we assume that there is no contribution to
the charge there. If a horizon has a future end-point, in classical general
relativity it must be singular. We presume, in conformity with common
practice, that this is not an issue and that quantum phenomena will take care
of matters. We therefore take
${\partial}{\mathcal{H}^{+}}\equiv\mathcal{H}^{+}_{-}$, the past endpoint of
the horizon, and ignore all possible contributions of $\mathcal{H}^{+}_{+}$,
the future endpoint of the horizon. (For eternal black holes, one should add
boundary degrees of freedom on $\mathcal{H}^{+}_{+}$ such that they cancel the
contribution of $\mathcal{H}^{+}_{+}$ to the integral, since
$\mathcal{H}^{+}_{+}$ is not a genuine part of the boundary
${{\partial}\Sigma}$. See Geiller:2017whh ; Geiller:2017xad ; Speranza:2017gxd
; Hosseinzadeh:2018dkh ; Freidel:2018fsk for a discussion of electromagnetism
on $\mathcal{I}^{+}$.)
Expressions for the horizon contributions in Bondi coordinates are derived in
appendix B. Taking $\xi$ to be the supertranslation vector field
$\displaystyle\xi=f{\partial}_{v}-\frac{1}{2}D^{2}f{\partial}_{r}+\frac{1}{r}D^{A}f{\partial}_{A},$
(21)
we obtain the horizon supertranslation charge
${\not{\delta}{Q_{f}^{\mathcal{H}^{+}}}}$ from (300) and the dual
supertranslation charge ${\not{\delta}{\widetilde{Q}_{f}^{\mathcal{H}^{+}}}}$
from (320) to be
$\displaystyle{\not{\delta}{Q_{f}^{\mathcal{H}^{+}}}}$
$\displaystyle=\frac{M}{8\pi}\int_{\mathcal{H}^{+}_{v_{0}}}d^{2}\Theta\sqrt{\gamma}\bigg{[}D^{A}\left(\frac{f}{M}h_{vA}+(D_{A}f)h_{vr}\right)$
(22) $\displaystyle\hskip
85.35826pt-(D^{A}f){\partial}_{r}h_{vA}+2fh_{vv}+(D^{2}f)h_{vr}\bigg{]},$ (23)
$\displaystyle{\not{\delta}{\widetilde{Q}_{f}^{\mathcal{H}^{+}}}}$
$\displaystyle=-\frac{1}{32\pi
M}\int_{\mathcal{H}^{+}_{v_{0}}}d^{2}\Theta\sqrt{\gamma}(D^{B}f)\epsilon_{A}{}^{C}D^{A}h_{BC}.$
(24)
Here $\epsilon^{AB}$ is the alternating tensor on the unit 2-sphere, and we
take $\epsilon^{\theta\phi}=\frac{1}{\sin\theta}$.
For functions that are smooth everywhere, we can discard total derivatives in
integrand, and the supertranslation charge is then in exact agreement with
that of Hawking:2016sgy . After residual gauge fixing and using a combination
of the constraints on $\mathcal{H}^{+}$, the supertranslation charge
simplifies to the expression
$\displaystyle{\not{\delta}{Q_{f}^{\mathcal{H}^{+}}}}$
$\displaystyle=\frac{1}{16\pi
M}\int_{\mathcal{H}^{+}}dv\,d^{2}\Theta\sqrt{\gamma}f(\Theta)D^{A}D^{B}\sigma_{AB},$
(25)
where $\sigma_{AB}=\frac{1}{2}{\partial}_{v}h_{AB}$ is the conjugate momentum
of $h_{AB}$. The integral over the advanced time parameter $v$ is taken from
$\mathcal{H}^{+}_{-}$ to $v_{0}$. The phase space of the horizon
${\mathcal{H}^{+}}$ has the Dirac bracket Hawking:2016sgy ,
$\displaystyle\\{\sigma_{AB}(v,\Omega),h_{CD}(v^{\prime},\Omega^{\prime})\\}_{D}=32\pi
M^{2}\gamma_{ABCD}\delta(v-v^{\prime})\delta(\Omega-\Omega^{\prime}),$ (26)
where
$\gamma_{ABCD}\equiv\gamma_{AC}\gamma_{BD}+\gamma_{AD}\gamma_{BC}-\gamma_{AB}\gamma_{CD}$
is proportional to the DeWitt metric DeWitt:1967yk .
Since we can integrate by parts freely without having to worry about boundary
terms, we can move all covariant derivatives to act on $f$. As such, we can
now identify the integrable horizon supertranslation charge
${\delta{Q_{f}^{\mathcal{H}^{+}}}}$ and dual supertranslation charge
${\delta{\widetilde{Q}_{f}^{\mathcal{H}^{+}}}}$ as
$\displaystyle{\delta{Q_{f}^{\mathcal{H}^{+}}}}$
$\displaystyle\equiv\frac{1}{16\pi
M}\int_{\mathcal{H}^{+}}dv\,d^{2}\Theta\sqrt{\gamma}\,(D^{B}D^{A}f)\sigma_{AB},$
(27) $\displaystyle{\delta{\widetilde{Q}_{f}^{\mathcal{H}^{+}}}}$
$\displaystyle\equiv-\frac{1}{32\pi
M}\int_{\mathcal{H}^{+}_{-}}d^{2}\Theta\sqrt{\gamma}\,(D^{B}D^{A}f)\epsilon_{A}{}^{C}h_{BC}.$
(28)
Notice that in this form, the dual supertranslation charge is related to the
supertranslation charge by the twisting procedure
$h_{AB}\to\epsilon_{A}{}^{C}h_{CB}$ proposed in Godazgar:2018dvh ;
Godazgar:2019dkh . When all functions involved are smooth everywhere,
${\not{\delta}{Q_{f}^{\mathcal{H}^{+}}}}={\delta{Q_{f}^{\mathcal{H}^{+}}}}$
and
${\not{\delta}{\widetilde{Q}_{f}^{\mathcal{H}^{+}}}}={\delta{\widetilde{Q}_{f}^{\mathcal{H}^{+}}}}$,
i.e. the charges are integrable.
## 4 Supertranslation charge with poles on the complex plane
We now extend the construction of the previous section to allow for the
possibility that the supertranslation parameters, $f$, have simple poles.
The easiest way to explore this possibility is to use complex stereographic
coordinates $(z,{\bar{z}})$, defined as
$\displaystyle
z=e^{i\phi}\tan\frac{\theta}{2},\qquad{\bar{z}}=e^{-i\phi}\tan\frac{\theta}{2},$
(29)
where $\theta$ and $\phi$ are the standard spherical coordinates on a unit
sphere. The metric on the unit sphere in these coordinates is
${\gamma_{z\bar{z}}}=\frac{2}{(1+z{\bar{z}})^{2}}$,
$\gamma_{zz}=\gamma_{{\bar{z}}{\bar{z}}}=0$. The integration measure on the
sphere is
$\displaystyle d^{2}\Theta\sqrt{\gamma}$ $\displaystyle=d^{2}z\sqrt{\gamma},\
\ \ {\rm with}\ \ \ d^{2}z\equiv idz\wedge d{\bar{z}},\ \ \ {\rm and}\ \ \
\sqrt{\gamma}={\gamma_{z\bar{z}}}.$ (30)
The notation has been organized such that $d^{2}z$ is real. The alternating
tensor is defined such that $\epsilon_{z{\bar{z}}}=i\sqrt{\gamma}$. The only
non-vanishing Christoffel symbols are
${}^{(2)}\Gamma^{z}_{zz}=\frac{-2{\bar{z}}}{1+z{\bar{z}}}$ and
${}^{(2)}\Gamma^{\bar{z}}_{{\bar{z}}{\bar{z}}}=\frac{-2z}{1+z{\bar{z}}}$.
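As a side check (not part of the derivation), the Christoffel symbols quoted above can be recomputed symbolically from the metric $\gamma_{z\bar z}=2/(1+z\bar z)^{2}$, treating $z$ and $\bar z$ as independent variables; the sketch below uses sympy and confirms the two non-vanishing components.

```python
import sympy as sp

# treat z and zbar as independent symbols (standard complexification trick)
z, zb = sp.symbols('z zb')
gamma = 2/(1 + z*zb)**2  # gamma_{z zbar} of the unit round sphere

# metric and inverse in coordinates x^0 = z, x^1 = zbar
g = sp.Matrix([[0, gamma], [gamma, 0]])
ginv = g.inv()
x = [z, zb]

def christoffel(a, b, c):
    """Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})."""
    return sp.simplify(sum(
        sp.Rational(1, 2)*ginv[a, d]*(sp.diff(g[d, c], x[b])
                                      + sp.diff(g[d, b], x[c])
                                      - sp.diff(g[b, c], x[d]))
        for d in range(2)))

print(christoffel(0, 0, 0))  # Gamma^z_{zz}            -> -2*zb/(z*zb + 1)
print(christoffel(1, 1, 1))  # Gamma^zbar_{zbar zbar}  -> -2*z/(z*zb + 1)
print(christoffel(0, 0, 1))  # mixed components vanish -> 0
```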
Let us compute the supertranslation charge
${\not{\delta}{Q_{f}^{\mathcal{H}^{+}}}}$ when $f(z,{\bar{z}})$ has a pole at
some complex coordinate $w$, that is, $f=\frac{1}{z-w}$. After fully fixing
the residual gauge freedom on $\mathcal{H}^{+}$, as in Hawking:2016sgy , we
have
$\displaystyle\begin{split}h_{vv}&=h_{vA}=0,\\\
h_{vr}&=\frac{1}{4M^{2}}[D^{2}-1]^{-1}D^{B}D^{C}h_{BC},\\\
{\partial}_{r}h_{vA}&=-\frac{1}{4M^{2}}D_{A}[D^{2}-1]^{-1}D^{B}D^{C}h_{BC}+\frac{1}{4M^{2}}D^{B}h_{AB},\end{split}$
(31)
and the supertranslation charge (23) takes the form
$\displaystyle{\not{\delta}{Q_{f}^{\mathcal{H}^{+}}}}$
$\displaystyle=\frac{M}{8\pi}\int_{{\partial}\mathcal{H}^{+}}d^{2}z\sqrt{\gamma}\left(-(D^{A}f)\frac{1}{4M^{2}}D^{B}h_{AB}+2D_{A}(D^{A}fh_{vr})\right).$
(32)
In obtaining this we have used (31) for $D_{A}h_{vr}+{\partial}_{r}h_{vA}$.
Now consider the total derivative term $D_{A}(D^{A}fh_{vr})$. For
$f=\frac{1}{z-w}$ we have,
$\displaystyle\int d^{2}z\sqrt{\gamma}\,D^{A}((D_{A}f)h_{vr})$
$\displaystyle=i\int dz\wedge
d{\bar{z}}\,({\partial}_{\bar{z}}(h_{vr}{\partial}_{z}f)+{\partial}_{z}(h_{vr}{\partial}_{\bar{z}}f))$
(33)
$\displaystyle=-i\oint_{w}dz\,h_{vr}{\partial}_{z}f+i\oint_{w}d{\bar{z}}\,h_{vr}{\partial}_{\bar{z}}f$
(34) $\displaystyle=-2\pi{\partial}_{z}h_{vr}\big{|}_{z=w}.$ (35)
In the second line, the contour is a small circle taken counter-clockwise
around $z=w$. The second term on the r.h.s. of the second line vanishes
because $f=\frac{1}{z-w}$ satisfies the identity (here we normalize
$\delta^{2}(z-w)$ as a real density, so $1=\int
d^{2}z\,\delta^{2}(z-w)=\int{\bf{\epsilon}}\frac{1}{\sqrt{\gamma}}\delta^{2}(z-w)$,
where ${\bf{\epsilon}}=d^{2}z\sqrt{\gamma}$ is the volume form on the unit
sphere)
$\displaystyle{\partial}_{\bar{z}}f=2\pi\delta^{2}(z-w).$ (36)
The contour of $\oint_{w}d{\bar{z}}$ is a small circle around $z=w$ and so
does not pick up any contribution from the delta-function. In the first term
of the second line, since ${\partial}_{z}f=\frac{-1}{(z-w)^{2}}$, there is a
contribution proportional to ${\partial}_{z}h_{vr}$ evaluated at $w$, which is
the result (35). Substituting in the expression (31) for $h_{vr}$, we find
$\displaystyle{\not{\delta}{Q_{f}^{\mathcal{H}^{+}}}}$
$\displaystyle=-\frac{1}{16\pi
M}\int_{\mathcal{H}^{+}}dv\,d^{2}z\sqrt{\gamma}\,(D^{A}f)D^{B}\sigma_{AB}-\frac{1}{4M}\int_{-\infty}^{\infty}dvD_{z}[D^{2}-1]^{-1}D^{B}D^{A}\sigma_{AB}\bigg{|}_{z=w}.$
(37)
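The contour manipulation in (33)-(35) reduces to the statement that $-i\oint_{w}dz\,h\,{\partial}_{z}f=-2\pi\,{\partial}_{z}h\big|_{z=w}$ for $f=\frac{1}{z-w}$. As a quick numerical sanity check (with a holomorphic stand-in for $h_{vr}$, so that ${\partial}_{z}h$ is the ordinary derivative; the test function and pole location are arbitrary choices):

```python
import numpy as np

w = 0.3 + 0.2j                    # arbitrary pole location of f = 1/(z - w)
h = lambda z: z**3 + 2*z          # holomorphic stand-in for h_vr
dh = lambda z: 3*z**2 + 2         # its z-derivative
df = lambda z: -1.0/(z - w)**2    # partial_z f

# counter-clockwise circle of radius 0.5 around z = w, midpoint rule
t = np.linspace(0.0, 2.0*np.pi, 4001)
zc = w + 0.5*np.exp(1j*t)
zm = 0.5*(zc[1:] + zc[:-1])       # segment midpoints
dz = np.diff(zc)

lhs = -1j*np.sum(h(zm)*df(zm)*dz)
rhs = -2.0*np.pi*dh(w)
print(abs(lhs - rhs) < 1e-3)      # True
```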
Partial integration of the first term gives
$\displaystyle{\not{\delta}{Q_{f}^{\mathcal{H}^{+}}}}$
$\displaystyle=\frac{1}{16\pi
M}\int_{\mathcal{H}^{+}}dv\,d^{2}z\sqrt{\gamma}\,(D^{B}D^{A}f)\sigma_{AB}-\frac{1}{4M}\int_{-\infty}^{\infty}dvD_{z}[D^{2}-1]^{-1}D^{B}D^{A}\sigma_{AB}\bigg{|}_{z=w}.$
(38)
In (38), the first term vanishes since
$\displaystyle\int d^{2}z\sqrt{\gamma}\,D^{B}(\sigma_{AB}D^{A}f)$
$\displaystyle=\int
d^{2}z\left({\partial}_{\bar{z}}(\sigma_{zz}D^{z}f)+{\partial}_{z}(\sigma_{{\bar{z}}{\bar{z}}}D^{\bar{z}}f)\right)$
(39)
$\displaystyle=-i\oint_{w}dz{\gamma^{z\bar{z}}}\sigma_{zz}{\partial}_{\bar{z}}f+i\oint_{w}d{\bar{z}}{\gamma^{z\bar{z}}}\sigma_{{\bar{z}}{\bar{z}}}{\partial}_{z}f$
(40) $\displaystyle=0.$ (41)
In obtaining (41) we have again used
${\partial}_{\bar{z}}f=2\pi\delta^{2}(z-w)$, the contour of $\oint_{w}dz$ is a
circle around $w$, and
$\sigma_{{\bar{z}}{\bar{z}}}{\partial}_{z}f=-\frac{1}{2}(z-w)^{-2}{\partial}_{v}h_{{\bar{z}}{\bar{z}}}$
does not have poles in ${\bar{z}}$.
We recognize the first term in (38) for general $f$ to be the integrable
supertranslation charge ${\delta{Q_{f}^{\mathcal{H}^{+}}}}$ (27). Thus, we
find that a pole in $f$ leads ${\delta{Q_{f}^{\mathcal{H}^{+}}}}$ to acquire a
non-integrable part ${\mathcal{N}_{f}^{\mathcal{H}^{+}}}$,
$\displaystyle{\not{\delta}{Q_{f}^{\mathcal{H}^{+}}}}={\delta{Q_{f}^{\mathcal{H}^{+}}}}+{\mathcal{N}_{f}^{\mathcal{H}^{+}}},$
(42)
where ${\delta{Q_{f}^{\mathcal{H}^{+}}}}$ is given by (27), and
$\displaystyle{\mathcal{N}_{f}^{\mathcal{H}^{+}}}$
$\displaystyle=-\frac{1}{4M}\int_{-\infty}^{\infty}dvD_{z}[D^{2}-1]^{-1}D^{B}D^{A}\sigma_{AB}\bigg{|}_{z=w}.$
(43)
This splitting into integrable and non-integrable parts is, of course, not
unique (see for instance Godazgar:2020kqd ). Our choice is justified because,
firstly, ${\delta{Q_{f}^{\mathcal{H}^{+}}}}$ is the horizon supertranslation
charge in the absence of poles in $f$, and secondly,
${\mathcal{N}_{f}^{\mathcal{H}^{+}}}$ has zero Dirac bracket with both
${\delta{Q_{g}^{\mathcal{H}^{+}}}}$ and
${\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}$ and so carries no degrees of
freedom. We encountered the first observation at the end of section 3 and we
will demonstrate the second in appendix C.
## 5 Dirac bracket between charges
We now compute the Dirac bracket
$\\{{\delta{Q_{f}^{\mathcal{H}^{+}}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\\}_{D}$,
where $f=\frac{1}{z-w}$ and $g$ is assumed to be smooth. This bracket probes
central terms of the algebra of charges. To see this, note that the charges
have the expansions,
$\displaystyle{Q_{f}^{\mathcal{H}^{+}}}$
$\displaystyle=Q_{f}^{(h=0)}+{\delta{Q_{f}^{\mathcal{H}^{+}}}}+O(h^{2}),$ (44)
$\displaystyle{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}$
$\displaystyle=\widetilde{Q}_{g}^{(h=0)}+{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}+O(h^{2}),$
(45)
where $Q_{f}^{(h=0)}$ and $\widetilde{Q}_{g}^{(h=0)}$ are the constant charges
of the background metric and hence do not carry degrees of freedom. This
gives,
$\displaystyle\\{{Q_{f}^{\mathcal{H}^{+}}},{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}\\}_{D}$
$\displaystyle=\underbrace{\\{{\delta{Q_{f}^{\mathcal{H}^{+}}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\\}_{D}}_{\text{constant}}+O(h).$
(46)
The constant term corresponds to the central charge of the charge algebra.
Now let us compute
$\\{{\delta{Q_{f}^{\mathcal{H}^{+}}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\\}_{D}$,
with $f=\frac{1}{z-w}$ and $g$ smooth. Using the expressions (27) and (28) and
applying (26), we obtain
$\displaystyle\left\\{{\delta{Q_{f}^{\mathcal{H}^{+}}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\right\\}_{D}$
$\displaystyle=\frac{-1}{2(16\pi
M)^{2}}\left\\{\int_{\mathcal{H}^{+}}dv\,d^{2}z\sqrt{\gamma}\,(D^{B}D^{A}f)\sigma_{AB},\int_{\mathcal{H}^{+}_{-}}d^{2}z\sqrt{\gamma}\,(D^{E}D^{C}g)\epsilon_{E}{}^{D}h_{CD}\right\\}_{D}$
$\displaystyle=-\frac{1}{16\pi}\int_{\mathcal{H}^{+}_{-}}d^{2}z\sqrt{\gamma}\,(D^{B}D^{A}f)(D^{E}D^{C}g)\epsilon_{E}{}^{D}\gamma_{ABCD}.$
(47)
Rearranging $D^{B}$ in (47) results in
$\displaystyle\left\\{{\delta{Q_{f}^{\mathcal{H}^{+}}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\right\\}_{D}$
$\displaystyle=-\frac{1}{16\pi}\int_{\mathcal{H}^{+}_{-}}d^{2}z\sqrt{\gamma}\bigg{(}D^{B}\left((D^{A}f)(D^{E}D^{C}g)\epsilon_{E}{}^{D}\gamma_{ABCD}\right)-(D^{A}f)(D^{B}D^{E}D^{C}g)\epsilon_{E}{}^{D}\gamma_{ABCD}\bigg{)}.$
(48)
Substituting in the expressions for $\epsilon_{A}{}^{B}$ and $\gamma_{ABCD}$,
we can see that the first term is zero,
$\displaystyle\int_{\mathcal{H}^{+}_{-}}d^{2}z\sqrt{\gamma}\,D^{B}\left((D^{A}f)(D^{E}D^{C}g)\epsilon_{E}{}^{D}\gamma_{ABCD}\right)$
$\displaystyle=\int_{\mathcal{H}^{+}_{-}}d^{2}z\,{\partial}_{\bar{z}}\left((D^{z}f)(D^{\bar{z}}D^{\bar{z}}g)\epsilon_{\bar{z}}{}^{\bar{z}}\gamma_{zz{\bar{z}}{\bar{z}}}\right)$
$\displaystyle\quad+\int_{\mathcal{H}^{+}_{-}}d^{2}z\,{\partial}_{z}\left((D^{\bar{z}}f)(D^{z}D^{z}g)\epsilon_{z}{}^{z}\gamma_{{\bar{z}}{\bar{z}}zz}\right)$
$\displaystyle=-2\oint_{w}dz\,({\partial}_{\bar{z}}f)(D^{\bar{z}}D^{\bar{z}}g){\gamma_{z\bar{z}}}-2\oint_{w}d{\bar{z}}\,({\partial}_{z}f)(D^{z}D^{z}g){\gamma_{z\bar{z}}}$
$\displaystyle=0.$ (49)
In obtaining this result, we have used the fact that the $\oint_{w}dz$
integral vanishes since its contour is a circle around $w$ and does not
intersect the singularity of the delta function
${\partial}_{\bar{z}}f=2\pi\delta^{2}(z-w)$, and the $\oint_{w}d{\bar{z}}$
integral vanishes since $({\partial}_{z}f)(D^{z}D^{z}g){\gamma_{z\bar{z}}}$
does not have a pole in ${\bar{z}}$. We obtain
$\displaystyle\left\\{{\delta{Q_{f}^{\mathcal{H}^{+}}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\right\\}_{D}$
$\displaystyle=\frac{1}{8\pi}\int_{\mathcal{H}^{+}_{-}}d^{2}z\sqrt{\gamma}\,{\gamma_{z\bar{z}}}^{2}\left((D^{z}f)(D^{z}D^{\bar{z}}D^{\bar{z}}g)\epsilon_{\bar{z}}{}^{\bar{z}}+(D^{\bar{z}}f)(D^{\bar{z}}D^{z}D^{z}g)\epsilon_{z}{}^{z}\right)$
(50)
$\displaystyle=\frac{-i}{8\pi}\int_{\mathcal{H}^{+}_{-}}d^{2}z\left(({\partial}_{\bar{z}}f)D^{z}D_{z}^{2}g-({\partial}_{z}f)D^{\bar{z}}D_{\bar{z}}^{2}g\right).$
(51)
$\displaystyle=\frac{-i}{8\pi}\int_{\mathcal{H}^{+}_{-}}d^{2}z{\gamma^{z\bar{z}}}\bigg{(}({\partial}_{\bar{z}}f)[D_{\bar{z}},D_{z}]D_{z}g+({\partial}_{\bar{z}}f)D_{z}D_{\bar{z}}D_{z}g$
$\displaystyle\qquad\qquad\qquad\qquad-({\partial}_{z}f)[D_{z},D_{\bar{z}}]D_{\bar{z}}g-({\partial}_{z}f)D_{\bar{z}}D_{z}D_{\bar{z}}g\bigg{)}.$
(52)
The commutators are $[D_{\bar{z}},D_{z}]D_{z}g={\gamma_{z\bar{z}}}D_{z}g$ and
$[D_{z},D_{\bar{z}}]D_{\bar{z}}g={\gamma_{z\bar{z}}}D_{\bar{z}}g$. Thus we
have
$\displaystyle\left\\{{\delta{Q_{f}^{\mathcal{H}^{+}}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\right\\}_{D}$
$\displaystyle=\frac{-i}{8\pi}\int
d^{2}z\left(({\partial}_{\bar{z}}f)D_{z}g-({\partial}_{z}f)D_{\bar{z}}g+({\partial}_{\bar{z}}f)D_{z}D_{\bar{z}}D^{\bar{z}}g-({\partial}_{z}f)D_{\bar{z}}D_{z}D^{z}g\right).$
(53)
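The commutators used above, e.g. $[D_{\bar{z}},D_{z}]D_{z}g={\gamma_{z\bar{z}}}D_{z}g$, encode the fact that the Ricci tensor of the unit sphere equals its metric. This can be verified symbolically from the two Christoffel symbols listed in section 4 (a verification sketch, with $z,\bar z$ treated as independent symbols):

```python
import sympy as sp

z, zb = sp.symbols('z zb')
u = 1 + z*zb
gamma = 2/u**2                 # gamma_{z zbar}
Gz = -2*zb/u                   # Gamma^z_{zz}

g = sp.Function('g')(z, zb)    # arbitrary scalar
Vz = sp.diff(g, z)             # V_z = D_z g (covariant derivative of a scalar)

# D_z V_z and D_zbar V_z (only Gamma^z_{zz} and Gamma^zbar_{zbar zbar} are nonzero)
Tzz = sp.diff(Vz, z) - Gz*Vz
Tzbz = sp.diff(Vz, zb)

# second covariant derivatives of the covector V_z
DzbDzV = sp.diff(Tzz, zb)            # D_zbar T_zz: no Christoffel contributes
DzDzbV = sp.diff(Tzbz, z) - Gz*Tzbz  # D_z T_{zbar z}

comm = sp.simplify(DzbDzV - DzDzbV - gamma*Vz)
print(comm)  # 0, i.e. [D_zbar, D_z] D_z g = gamma_{z zbar} D_z g
```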
For the last two terms in the parentheses, we have purposely used
${\gamma^{z\bar{z}}}$ to raise the index of the first derivative acting on
$g$. This allows us to write the third covariant derivatives acting on $g$ as
partial derivatives,
$\displaystyle\left\\{{\delta{Q_{f}^{\mathcal{H}^{+}}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\right\\}_{D}$
$\displaystyle=\frac{-i}{8\pi}\int
d^{2}z\left(({\partial}_{\bar{z}}f)D_{z}g-({\partial}_{z}f)D_{\bar{z}}g+({\partial}_{\bar{z}}f){\partial}_{z}D_{\bar{z}}D^{\bar{z}}g-({\partial}_{z}f){\partial}_{\bar{z}}D_{z}D^{z}g\right).$
(54)
Now we integrate by parts all the ${\partial}_{A}f$'s inside the parentheses.
Only the boundary terms survive, since partial derivatives commute and
$\displaystyle D_{\bar{z}}D^{\bar{z}}g-D_{z}D^{z}g$
$\displaystyle={\gamma^{z\bar{z}}}({\partial}_{\bar{z}}{\partial}_{z}g-{\partial}_{z}{\partial}_{\bar{z}}g)=0.$
(55)
Therefore, we have via Stokes’ theorem,
$\displaystyle\left\\{{\delta{Q_{f}^{\mathcal{H}^{+}}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\right\\}_{D}$
$\displaystyle=\frac{-i}{8\pi}\int
d^{2}z\left({\partial}_{\bar{z}}(fD_{z}g)-{\partial}_{z}(fD_{\bar{z}}g)+{\partial}_{\bar{z}}(f{\partial}_{z}D_{\bar{z}}D^{\bar{z}}g)-{\partial}_{z}(f{\partial}_{\bar{z}}D_{z}D^{z}g)\right)$
(56)
$\displaystyle=-\frac{1}{8\pi}\oint_{w}\left(dz\frac{(D_{z}g+{\partial}_{z}D_{\bar{z}}D^{\bar{z}}g)}{z-w}+d{\bar{z}}\frac{(D_{\bar{z}}g+{\partial}_{\bar{z}}D_{z}D^{z}g)}{z-w}\right).$
(57)
The $\oint_{w}d{\bar{z}}$ integral vanishes due to the absence of
${\bar{z}}$-poles. Now observe that we can use
$[D_{\bar{z}},D_{z}]D_{z}g={\gamma_{z\bar{z}}}D_{z}g$ to simplify
$\displaystyle D_{z}g+{\partial}_{z}D_{\bar{z}}D^{\bar{z}}g$
$\displaystyle=D_{z}g+D_{z}D_{\bar{z}}D^{\bar{z}}g$ (58)
$\displaystyle=D_{z}g+{\gamma^{z\bar{z}}}D_{z}D_{\bar{z}}D_{z}g$ (59)
$\displaystyle=D_{z}g+{\gamma^{z\bar{z}}}[D_{z},D_{\bar{z}}]D_{z}g+{\gamma^{z\bar{z}}}D_{\bar{z}}D_{z}D_{z}g$
(60) $\displaystyle=D^{z}D_{z}D_{z}g,$ (61)
and write
$\displaystyle\left\\{{\delta{Q_{f}^{\mathcal{H}^{+}}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\right\\}_{D}$
$\displaystyle=-\frac{1}{8\pi}\oint_{w}dz\frac{D^{z}D_{z}D_{z}g}{z-w}$ (62)
The residue theorem then gives
$\displaystyle\left\\{{\delta{Q_{f}^{\mathcal{H}^{+}}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\right\\}_{D}$
$\displaystyle=-\frac{i}{4}D^{z}D_{z}^{2}g\bigg{|}_{z=w}.$ (63)
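The last step (62) to (63) is the residue theorem, $\oint_{w}dz\,F(z)/(z-w)=2\pi i\,F(w)$ with $F=D^{z}D_{z}^{2}g$ regular at $w$, so the prefactor works out to $-\tfrac{1}{8\pi}\cdot 2\pi i=-\tfrac{i}{4}$. A short symbolic check of this factor, using a hypothetical regular placeholder for $F$:

```python
import sympy as sp

z, w = sp.symbols('z w')
F = z**2 + 1   # hypothetical placeholder for D^z D_z^2 g, regular at z = w

contour = 2*sp.pi*sp.I*sp.residue(F/(z - w), z, w)  # residue theorem for the dz contour
bracket = -contour/(8*sp.pi)

print(sp.simplify(bracket + sp.I*F.subs(z, w)/4))   # 0, i.e. bracket = -(i/4) F(w)
```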
## 6 Another approach to the computation of the central term
The result for the central term is new and has important implications. We will
now reproduce the central term of the previous section using a completely
different method.
We start from our expression (28) for the integrable variation
${\delta{\widetilde{Q}_{f}^{\mathcal{H}^{+}}}}$ of the dual supertranslation
charge, which reads
$\displaystyle{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}$
$\displaystyle=-\frac{1}{32\pi
M}\int_{\mathcal{H}^{+}_{-}}d^{2}z\sqrt{\gamma}\,(D^{B}D^{A}g)\epsilon_{A}{}^{C}h_{BC},$
(64)
and invoke equation (3.4) in the work of Barnich and Troessaert Barnich:2011mi
,
$\displaystyle\\{{Q_{f}^{\mathcal{H}^{+}}},{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}\\}_{D}$
$\displaystyle=\delta_{f}{\widetilde{Q}_{g}^{\mathcal{H}^{+}}},$ (65)
where $\delta_{f}\widetilde{Q}_{g}$ denotes taking the expression (64) for
${\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}$ and replacing $h_{AB}$ with a
diffeomorphism constructed from $f$, with $f$ a function of $z$ and $\bar{z}$
only. A general diffeomorphism is of the form
$h_{ab}=\nabla_{a}\xi_{b}+\nabla_{b}\xi_{a}$. Let $\xi_{a}=\partial_{a}f$.
Then restricting $h_{ab}$ to the sphere gives
$\displaystyle h_{BC}\ \to\ 2M(2D_{B}D_{C}f-\gamma_{BC}D^{2}f).$ (66)
This leads to the expression
$\displaystyle\\{{Q_{f}^{\mathcal{H}^{+}}},{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}\\}_{D}$
$\displaystyle=-\frac{1}{16\pi}\int
d^{2}z\sqrt{\gamma}\,(D^{B}D^{A}g)\epsilon_{A}{}^{C}(2D_{B}D_{C}f-\gamma_{BC}D^{2}f)$
(67) $\displaystyle=-\frac{1}{8\pi}\int
d^{2}z\sqrt{\gamma}\,(D^{B}D^{A}g)\epsilon_{A}{}^{C}D_{B}D_{C}f+\frac{1}{16\pi}\int
d^{2}z\sqrt{\gamma}\,(D^{B}D^{A}g)\epsilon_{AB}D^{2}f.$ (68)
The second term on the r.h.s. is zero, since $D^{B}D^{A}g$ is symmetric and
$\epsilon_{AB}$ is antisymmetric. We are just left with the first term,
$\displaystyle\\{{Q_{f}^{\mathcal{H}^{+}}},{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}\\}_{D}$
$\displaystyle=-\frac{1}{8\pi}\int
d^{2}z\sqrt{\gamma}\,(D^{B}D^{A}g)\epsilon_{A}{}^{C}D_{B}D_{C}f.$ (69)
Rewrite this as the sum of two terms
$\displaystyle\\{{Q_{f}^{\mathcal{H}^{+}}},{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}\\}_{D}$
$\displaystyle=-\frac{1}{8\pi}(X+Y),$ (70)
with
$\displaystyle X$ $\displaystyle\equiv\int
d^{2}z\sqrt{\gamma}D_{B}\left((D^{B}D^{A}g)\epsilon_{A}{}^{C}D_{C}f\right),$
(71) $\displaystyle Y$ $\displaystyle\equiv-\int
d^{2}z\sqrt{\gamma}(D^{2}D^{A}g)\epsilon_{A}{}^{C}D_{C}f.$ (72)
$X$ is of the form of an integral over the sphere of the divergence of a
vector field $V^{A}$. So,
$\displaystyle\int d^{2}z\sqrt{\gamma}D_{B}V^{B}$ $\displaystyle=i\int
dz\wedge d{\bar{z}}\,{\gamma_{z\bar{z}}}(D_{z}V^{z}+D_{\bar{z}}V^{\bar{z}})$
(73) $\displaystyle=i\int dz\wedge
d{\bar{z}}\,({\partial}_{z}V_{\bar{z}}+{\partial}_{\bar{z}}V_{z})$ (74)
$\displaystyle=i\oint_{w}d{\bar{z}}\,V_{\bar{z}}-i\oint_{w}dz\,V_{z}.$ (75)
In the second line, we have used the fact that the only non-vanishing
Christoffel symbols are $\Gamma^{z}_{zz}$ and
$\Gamma^{\bar{z}}_{{\bar{z}}{\bar{z}}}$ to write
$D_{z}V_{\bar{z}}={\partial}_{z}V_{\bar{z}}$ and
$D_{\bar{z}}V_{z}={\partial}_{\bar{z}}V_{z}$. Finally we use Stokes’ theorem
to write $X$ as
$\displaystyle X$
$\displaystyle=i\oint_{w}d{\bar{z}}(D_{\bar{z}}D^{A}g)\epsilon_{A}{}^{C}D_{C}f-i\oint
dz(D_{z}D^{A}g)\epsilon_{A}{}^{C}D_{C}f.$ (76)
Everything is smooth except for $f=\frac{1}{z-w}$, so the first term with
$\oint d{\bar{z}}$ never sees a pole in ${\bar{z}}$ and therefore vanishes.
Writing out the second term while noting that the only non-vanishing
components of $\epsilon_{A}{}^{B}$ are
$\epsilon_{z}{}^{z}=-\epsilon_{\bar{z}}{}^{\bar{z}}=i$, we obtain
$\displaystyle X$ $\displaystyle=\oint dz(D_{z}D^{z}g){\partial}_{z}f-\oint
dz(D_{z}D^{\bar{z}}g){\partial}_{\bar{z}}f.$ (77)
The second term vanishes since it has
${\partial}_{\bar{z}}f=2\pi\delta^{2}(z-w)$ and the contour never meets $w$.
We can partial integrate the first term and using the residue theorem and
$f=\frac{1}{z-w}$ obtain
$\displaystyle X$ $\displaystyle=-\oint dz({\partial}_{z}D_{z}D^{z}g)f$ (78)
$\displaystyle=-\oint dz\frac{{\partial}_{z}D_{z}D^{z}g}{z-w}$ (79)
$\displaystyle=-2\pi i{\partial}_{z}D_{z}D^{z}g|_{z=w}.$ (80)
Now we turn to $Y$ in (70), which reads
$\displaystyle Y$ $\displaystyle=-\int
d^{2}z\sqrt{\gamma}(D^{2}D^{A}g)\epsilon_{A}{}^{C}D_{C}f$ (81)
$\displaystyle=-\int
d^{2}z\sqrt{\gamma}D_{C}\left((D^{2}D^{A}g)\epsilon_{A}{}^{C}f\right)+\int
d^{2}z\sqrt{\gamma}(D_{C}D^{2}D^{A}g)\epsilon_{A}{}^{C}f.$ (82)
One quickly sees that the second term vanishes as
$\displaystyle\epsilon_{A}{}^{C}D_{C}D^{2}D^{A}g$
$\displaystyle=\epsilon^{AC}D_{C}D^{2}D_{A}g$ (83)
$\displaystyle=\epsilon^{AC}D_{C}[D^{2},D_{A}]g+\epsilon^{AC}D_{C}D_{A}D^{2}g$
(84) $\displaystyle=\epsilon^{AC}D_{C}D_{A}g+\epsilon^{AC}D_{C}D_{A}D^{2}g$
(85) $\displaystyle=0,$ (86)
since both $D_{C}D_{A}g$ and $D_{C}D_{A}D^{2}g$ are symmetric in $A$ and $C$
and $[D^{2},D_{A}]g=D_{A}g$. So we are left with just
$\displaystyle Y$ $\displaystyle=-\int
d^{2}z\sqrt{\gamma}D_{C}\left((D^{2}D^{A}g)\epsilon_{A}{}^{C}f\right),$ (87)
which again is of the form (75). Writing the explicit form of $f$ as
$\frac{1}{z-w}$ and using
$\epsilon_{z{\bar{z}}}=-\epsilon_{{\bar{z}}z}=i{\gamma_{z\bar{z}}}$, we find
$\displaystyle Y$
$\displaystyle=-i\oint_{w}d{\bar{z}}(D^{2}D^{z}g)\epsilon_{z{\bar{z}}}f+i\oint_{w}dz(D^{2}D^{\bar{z}}g)\epsilon_{{\bar{z}}z}f$
(88)
$\displaystyle=\oint_{w}d{\bar{z}}\frac{(D^{2}D_{\bar{z}}g)}{z-w}+\oint_{w}dz\frac{(D^{2}D_{z}g)}{z-w}.$
(89)
The first term is zero since there are no poles in ${\bar{z}}$, and the second
term yields the residue at $z=w$,
$\displaystyle Y$ $\displaystyle=2\pi D^{2}D_{z}g|_{z=w}.$ (90)
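In passing through (83)-(86) we used $[D^{2},D_{A}]g=D_{A}g$, which again encodes the fact that the Ricci tensor of the unit sphere equals its metric. It can be confirmed in the same symbolic setup (a check, not part of the argument):

```python
import sympy as sp

z, zb = sp.symbols('z zb')
u = 1 + z*zb
gamma = 2/u**2                  # gamma_{z zbar};  gamma^{z zbar} = 1/gamma
Gz = -2*zb/u                    # Gamma^z_{zz}

g = sp.Function('g')(z, zb)
Vz = sp.diff(g, z)              # D_z g

# Laplacian of the scalar: D^2 g = 2 gamma^{z zbar} d_z d_zbar g
lap = 2/gamma*sp.diff(g, z, zb)

# D^2 acting on the covector V_z = D_z g
Tzz = sp.diff(Vz, z) - Gz*Vz             # D_z V_z
Tzbz = sp.diff(Vz, zb)                   # D_zbar V_z
D2Vz = (sp.diff(Tzz, zb) + sp.diff(Tzbz, z) - Gz*Tzbz)/gamma

# [D^2, D_z] g - D_z g should vanish (D_z of a scalar is d_z)
print(sp.simplify(D2Vz - sp.diff(lap, z) - Vz))  # 0
```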
Collecting the results (80) and (90) and plugging them into (70), we obtain
$\displaystyle\\{{Q_{f}^{\mathcal{H}^{+}}},{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}\\}_{D}$
$\displaystyle=-\frac{1}{8\pi}\left(X+Y\right)$ (91)
$\displaystyle=-\frac{i}{4}\left.\left(-{\partial}_{z}D_{z}D^{z}g+D^{2}D_{z}g\right)\right|_{z=w}.$
(92)
Simplifying
$\displaystyle-{\partial}_{z}D_{z}D^{z}g+D^{2}D_{z}g$
$\displaystyle=-{\partial}_{z}D^{z}D_{z}g+D^{2}D_{z}g$ (93)
$\displaystyle=-D_{z}D^{z}D_{z}g+D_{z}D^{z}D_{z}g+D_{\bar{z}}D^{\bar{z}}D_{z}g$
(94) $\displaystyle=D_{\bar{z}}D^{\bar{z}}D_{z}g$ (95)
$\displaystyle=D^{z}D_{z}D_{z}g,$ (96)
where in the second line we have used
${\partial}_{z}D^{z}D_{z}g=D_{z}D_{\bar{z}}D^{\bar{z}}g=D_{z}D^{z}D_{z}g$.
This finally leads to
$\displaystyle\\{{Q_{f}^{\mathcal{H}^{+}}},{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}\\}_{D}$
$\displaystyle=-\frac{i}{4}D^{z}D_{z}^{2}g|_{z=w}.$ (97)
This is in complete agreement with our earlier result (63) for the
infinitesimal bracket
$\\{{\delta{Q_{f}^{\mathcal{H}^{+}}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\\}_{D}$.
What are the implications of this central term? It is usually understood that
this is indicative of an anomaly in the theory which must be cancelled in
order for the theory to make sense. In order to understand how to remove the
central term in the supertranslation algebra we will first take a look at the
simpler case of the electromagnetic charges of large gauge transformation
which is discussed in the next section.
## 7 Electromagnetism
Consider now electromagnetic soft charges on the Schwarzschild horizon. Our
discussion is parallel to the case of future null infinity $\mathcal{I}^{+}$
since both ${\mathcal{H}^{+}}$ and $\mathcal{I}^{+}$ are null hypersurfaces.
We refer the reader to Hosseinzadeh:2018dkh ; Freidel:2018fsk for a treatment
of the electromagnetic case on $\mathcal{I}^{+}$.
Just like the BMS charges, the electromagnetic charges split into the
${\mathcal{H}^{+}}$ and $\mathcal{I}^{+}$ contributions (16). Horizon
contributions to the (soft) electric and magnetic charges are given by
$\displaystyle{\mathcal{Q}_{\lambda}^{\mathcal{H}^{+}}}$
$\displaystyle=\int_{{\mathcal{H}^{+}}}d\lambda\wedge*F,$ (98)
$\displaystyle{\widetilde{\mathcal{Q}}_{\lambda}^{\mathcal{H}^{+}}}$
$\displaystyle=\int_{{\mathcal{H}^{+}}}d\lambda\wedge F,$ (99)
where $\lambda$ is an arbitrary function on the sphere. We use the curly
letter $\mathcal{Q}$ to distinguish these charges from the diffeomorphism
charges.
We can write these charges as integrals over the null surface
${\mathcal{H}^{+}}$ subject to the same boundary conditions as described in
section three. In the complex coordinates (29)
$\displaystyle{\mathcal{Q}_{\lambda}^{\mathcal{H}^{+}}}$
$\displaystyle=-i\int_{{\mathcal{H}^{+}}}dv\,d^{2}z\left({\partial}_{\bar{z}}\lambda(*F)_{vz}-{\partial}_{z}\lambda(*F)_{v{\bar{z}}}\right)$
(100)
$\displaystyle=-\int_{{\mathcal{H}^{+}}}dv\,d^{2}z\left(F_{vz}{\partial}_{\bar{z}}\lambda+F_{v{\bar{z}}}{\partial}_{z}\lambda\right),$
(101) $\displaystyle{\widetilde{\mathcal{Q}}_{\lambda}^{\mathcal{H}^{+}}}$
$\displaystyle=-i\int_{{\mathcal{H}^{+}}}dv\,d^{2}z\left(F_{vz}{\partial}_{\bar{z}}\lambda-
F_{v{\bar{z}}}{\partial}_{z}\lambda\right).$ (102)
Alternatively, we can write the charges as integrals over a section of the
horizon at some instant of advanced time $v$. In the temporal gauge $A_{v}=0$,
we have $F_{vz}={\partial}_{v}A_{z}$ and
$\displaystyle{\widetilde{\mathcal{Q}}_{\lambda}^{\mathcal{H}^{+}}}$
$\displaystyle=i\int_{{\mathcal{H}^{+}_{-}}}d^{2}z\left(A_{z}{\partial}_{\bar{z}}\lambda-
A_{\bar{z}}{\partial}_{z}\lambda\right).$ (103)
The relevant Dirac bracket is Freidel:2018fsk (see He:2014cra ;
Strominger:2017zoo for details on the symplectic structure),
$\displaystyle\\{{\mathcal{Q}_{\lambda}^{\mathcal{H}^{+}}},A_{z}\\}_{D}=-{\partial}_{z}\lambda$
(104)
using which we obtain
$\displaystyle\\{{\mathcal{Q}_{\lambda}^{\mathcal{H}^{+}}},{\widetilde{\mathcal{Q}}_{\sigma}^{\mathcal{H}^{+}}}\\}_{D}$
$\displaystyle=\int_{\mathcal{H}^{+}_{-}}d^{2}z\sqrt{\gamma}\,\epsilon^{AB}{\partial}_{A}\lambda{\partial}_{B}\sigma$
(105) $\displaystyle=\int_{S^{2}}d\lambda\wedge d\sigma.$ (106)
For $\lambda$ with singularities in $z$, this gives rise to a central term in
the algebra, just as in the case of gravity.
To get rid of the central term in the algebra, one may imagine that there
exists a boundary theory on $\mathcal{H}^{+}$ whose purpose is to cancel the
anomalous contribution suggested by the central charge discussed above. For
this purpose, let us consider a $U(1)\times U(1)$ Chern-Simons theory with two
independent 1-form fields $a$ and $\widetilde{a}$ on a null surface $\Sigma$,
$\displaystyle S=\frac{k}{4\pi}\int_{\Sigma}a\wedge d\widetilde{a}.$ (107)
Under an electric large gauge transformation $a$ and $\widetilde{a}$ transform
as
$\displaystyle a$ $\displaystyle\ \to\ a+d\phi,$ (108)
$\displaystyle\widetilde{a}$ $\displaystyle\ \to\ \widetilde{a},$ (109)
and under a magnetic large gauge transformation they transform as
$\displaystyle a$ $\displaystyle\ \to\ a,$ (110) $\displaystyle\widetilde{a}$
$\displaystyle\ \to\ \widetilde{a}+d\chi.$ (111)
From the action we find the equations of motion to be $da=0$ and
$d\widetilde{a}=0$. Variation of the action yields
$\displaystyle\delta S$ $\displaystyle=\frac{k}{4\pi}\int_{\Sigma}(\delta
a\wedge d\widetilde{a}+a\wedge d\delta\widetilde{a})$ (112)
$\displaystyle=\frac{k}{4\pi}\int_{\Sigma}(\delta a\wedge
d\widetilde{a}-da\wedge\delta\widetilde{a})+\frac{k}{4\pi}\int_{{\partial}\Sigma}a\wedge\delta\widetilde{a},$
(113)
from which we obtain the symplectic potential as,
$\displaystyle\theta(a,\widetilde{a},\delta a,\delta\widetilde{a})$
$\displaystyle=\frac{k}{4\pi}a\wedge\delta\widetilde{a}.$ (114)
Accordingly, the symplectic current density is
$\displaystyle{\omega}(a,\widetilde{a},\delta_{1}a,\delta_{1}\widetilde{a},\delta_{2}a,\delta_{2}\widetilde{a})$
$\displaystyle=\frac{k}{4\pi}\left(\delta_{1}a\wedge\delta_{2}\widetilde{a}-\delta_{2}a\wedge\delta_{1}\widetilde{a}\right).$
(115)
Since there are two types of large gauge transformations, we have two
integrable charge variations. One is the electric charge,
$\displaystyle\delta\mathcal{Q}_{\phi}$
$\displaystyle=\int_{{\partial}\Sigma}{\omega}(a,\widetilde{a},\delta
a,\delta\widetilde{a},d\phi,0)$ (116)
$\displaystyle=-\frac{k}{4\pi}\int_{{\partial}\Sigma}d\phi\wedge\delta\widetilde{a},$
(117)
the other is the magnetic charge,
$\displaystyle\delta\widetilde{\mathcal{Q}}_{\chi}$
$\displaystyle=\int_{{\partial}\Sigma}{\omega}(a,\widetilde{a},\delta
a,\delta\widetilde{a},0,d\chi)$ (118)
$\displaystyle=\frac{k}{4\pi}\int_{{\partial}\Sigma}\delta a\wedge d\chi.$
(119)
We can compute the algebra using either one of the variations,
$\displaystyle\\{\mathcal{Q}_{\phi},\widetilde{\mathcal{Q}}_{\chi}\\}_{D}$
$\displaystyle=\delta_{\phi}\widetilde{\mathcal{Q}}_{\chi}=-\delta_{\chi}\mathcal{Q}_{\phi},$
(120)
and one can see that we get the same answer for both cases,
$\displaystyle\\{\mathcal{Q}_{\phi},\widetilde{\mathcal{Q}}_{\chi}\\}_{D}$
$\displaystyle=-\frac{k}{4\pi}\int_{{\partial}\Sigma}d\phi\wedge d\chi.$ (121)
The electric-electric and magnetic-magnetic brackets vanish regardless of the
presence of poles,
$\displaystyle\\{\mathcal{Q}_{\phi},\mathcal{Q}_{\phi^{\prime}}\\}_{D}$
$\displaystyle=0,$ (122)
$\displaystyle\\{\widetilde{\mathcal{Q}}_{\chi},\widetilde{\mathcal{Q}}_{\chi^{\prime}}\\}_{D}$
$\displaystyle=0.$ (123)
Therefore, one finds the algebra to be exactly parallel to that of standard
and dual large gauge transformation charges on the horizon. The algebra (121),
(122) and (123) tells us that by putting a $U(1)\times U(1)$ Chern-Simons
theory with the proper choice of the level $k$ on the horizon, we can get rid
of the central term obtained earlier in the standard and dual large gauge
transformation algebra.
Chern-Simons theory is a topological theory and as such is independent of the
metric. This is how it is possible to have a holographic theory defined on the
null surface forming the horizon. There is no obstacle to the theory being
defined on a surface with a degenerate metric. A further benefit is that,
being independent of the metric, the theory has a vanishing energy-momentum
tensor and so does not affect the spacetime geometry.
## 8 Gravitational Chern-Simons theory
Gravity in three dimensions is a topological theory. Suppose one starts from
the Einstein action, with or without a cosmological term, and counts up the
number of physical degrees of freedom at each point in spacetime. In
$d$ dimensions the metric has $\tfrac{1}{2}d(d+1)$ components.
Diffeomorphisms, being generated by first class constraints associated with a
vector field, subtract out $2d$ degrees of freedom. The total number of
physical degrees of freedom is therefore $\tfrac{1}{2}d(d-3)$. So in $d=3$
there are no local degrees of freedom. We should therefore expect to find a
topological gravitational theory that is independent of any metric. The
Einstein action is not such a construct. However, Witten Witten:1989ip found
a Chern-Simons theory that is equivalent to the Einstein theory provided some
of its fields are identified with the metric. The Chern-Simons theory is
independent of any metric and can therefore be formulated consistently on null
surfaces where the spacetime metric is degenerate. In more conventional
theories the necessity of using an inverse metric prevents their formulation
on null surfaces.
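The degree-of-freedom counting above is easy to tabulate (a throwaway illustration):

```python
def physical_dof(d):
    # d(d+1)/2 metric components minus 2d removed by diffeomorphisms
    # (d first class constraints plus d gauge conditions)
    return d*(d + 1)//2 - 2*d

for d in (3, 4, 5):
    print(d, physical_dof(d))
# 3 0  -- topological: no local degrees of freedom
# 4 2  -- the two graviton polarizations
# 5 5
```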
### 8.1 Chern-Simons Actions
We now briefly describe Witten’s gravitational Chern-Simons theory. The
ingredients are a basis of one-forms $e^{a}=e^{a}_{i}dx^{i}$, a connection
one-form $\omega^{ab}=\omega_{i}{}^{ab}dx^{i}$ and a dimensionful real
parameter $\lambda$ that is in many ways analogous to a cosmological constant.
The indices $i,j\ldots$ are spacetime indices whereas $a,b\ldots$ are tangent
space indices. Spacetime indices never need to be raised or lowered; however,
we do need, as an extra piece of spacetime structure, the alternating symbol
$\epsilon^{ijk}$. By contrast, tangent space indices are raised or lowered
using the Lorentz metric $\eta_{ab}$. To construct a three-dimensional
spacetime, we construct its metric $g_{ij}$ using
$e_{i}^{a}e_{j}^{b}\eta_{ab}$ where $\eta_{ab}={\rm diag}(-++)$. In Witten’s
approach to gravity in three spacetime dimensions, this identification is used
to conclude the equivalence with the Einstein theory. A Chern-Simons theory
needs a gauge group ${\bf G}$, and in the case $\lambda=0$, ${\bf G}$ is
chosen to be $ISO(2,1)$. If $\lambda<0$, ${\bf G}$ is $SO(3,1)$ and if
$\lambda>0$, ${\bf G}$ is $SO(2,2)$. Note that in the last case the gauge
group can be factorized as $SO(2,2)\equiv SL(2,{\mathbb{R}})\otimes
SL(2,{\mathbb{R}})$. The case of $SO(3,1)$ cannot be factorized, but it can be
regarded as the complex group $SL(2,{\mathbb{C}})$.
One can write an all-encompassing gauge field
$A_{i}=e_{i}^{a}P_{a}+\omega_{i}^{a}J_{a}$ where
$\omega_{i}^{a}=\tfrac{1}{2}\epsilon^{abc}\omega_{i}{\,}_{bc}$ and $P_{a}$ and
$J_{a}$ are the generators of the gauge group. They have the commutation
relations
$[J_{a},J_{b}]=\epsilon_{abc}J^{c},\ \ \ [J_{a},P_{b}]=\epsilon_{abc}P^{c},\ \
\ [P_{a},P_{b}]=\lambda\epsilon_{abc}J^{c}$ (124)
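That (124) defines a Lie algebra for every value of $\lambda$, interpolating between $ISO(2,1)$, $SO(3,1)$ and $SO(2,2)$, can be confirmed by checking the Jacobi identity on the structure constants. The sketch below builds them with $\eta={\rm diag}(-++)$ and $\epsilon_{012}=+1$ (conventions assumed here; the source does not fix them explicitly):

```python
import sympy as sp

lam = sp.symbols('lam')
eta = sp.diag(-1, 1, 1)
n = 6  # basis: T_0..T_2 = J_a, T_3..T_5 = P_a

# structure constants: [T_i, T_j] = F[i][j][k] T_k, read off from (124)
F = [[[sp.Integer(0)]*n for _ in range(n)] for _ in range(n)]
for a in range(3):
    for b in range(3):
        for c in range(3):
            for d in range(3):
                coef = sp.LeviCivita(a, b, c)*eta[c, d]
                F[a][b][d] += coef          # [J_a, J_b] = eps_{abc} J^c
                F[a][3+b][3+d] += coef      # [J_a, P_b] = eps_{abc} P^c
                F[3+b][a][3+d] -= coef      # [P_b, J_a] = -[J_a, P_b]
                F[3+a][3+b][d] += lam*coef  # [P_a, P_b] = lam eps_{abc} J^c

# Jacobi identity: sum_m (F_ij^m F_mk^l + F_jk^m F_mi^l + F_ki^m F_mj^l) = 0
jacobi_ok = all(
    sp.simplify(sum(F[i][j][m]*F[m][k][l] + F[j][k][m]*F[m][i][l]
                    + F[k][i][m]*F[m][j][l] for m in range(n))) == 0
    for i in range(n) for j in range(n) for k in range(n) for l in range(n))
print(jacobi_ok)  # True, independent of lam
```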
For arbitrary $\lambda$, the Killing form is given by
$\langle J_{a},J_{b}\rangle=0,\ \ \ \langle J_{a},P_{b}\rangle=\eta_{ab},\ \ \
\langle P_{a},P_{b}\rangle=0.$ (125)
However, when $\lambda\neq 0$, so that ${\bf G}$ factorises, a second Killing
form exists
$\langle J_{a},J_{b}\rangle=\eta_{ab},\ \ \ \langle J_{a},P_{b}\rangle=0,\ \ \
\langle P_{a},P_{b}\rangle=\lambda\eta_{ab}.$ (126)
If $\lambda=0$, the second Killing form is degenerate and not particularly
useful.
From these relations we can construct a Chern-Simons theory from the general
expression
$I_{CS}=\frac{k}{4\pi}\int\operatorname{tr}\ \bigl{(}A\wedge
dA+\tfrac{2}{3}A\wedge A\wedge A\bigr{)}.$ (127)
If $\lambda\neq 0$, we can construct two different actions using the two
different Killing forms. For any value of $\lambda$, we can construct an
“electric" theory using the Killing form of (125). In terms of the
differential forms $e^{a}$ and $\omega^{a}$ we get
$I_{electric}=\frac{k}{2\pi}\int 2e^{a}\wedge
d\omega_{a}+\epsilon_{abc}e^{a}\wedge\omega^{b}\wedge\omega^{c}+\,\tfrac{1}{3}\lambda\epsilon_{abc}e^{a}\wedge
e^{b}\wedge e^{c},$ (128)
or, perhaps more conveniently for some of the following calculations, in terms
of components
$I_{electric}=\frac{k}{2\pi}\int\ d^{3}x\ \epsilon^{ijk}\
e_{i}^{a}\,\bigl{(}2\partial_{j}\omega_{ka}+\epsilon_{abc}\omega_{j}^{b}\omega_{k}^{c}+\tfrac{1}{3}\lambda\epsilon_{abc}e^{b}_{j}e^{c}_{k}\bigr{)}.$
(129)
When $\lambda\neq 0$, we can use the alternative Killing form (126) to
construct a different action, the “magnetic” action
$I_{magnetic}=\frac{\widetilde{k}}{\pi}\int\omega^{a}\wedge
d\omega_{a}+\tfrac{1}{3}\epsilon_{abc}\omega^{a}\wedge\omega^{b}\wedge\omega^{c}+\lambda
e^{a}\wedge de_{a}+\lambda\epsilon_{abc}\omega^{a}\wedge e^{b}\wedge e^{c}.$
(130)
This too can, more conveniently for practical calculations, be written in
terms of components as
$I_{magnetic}=\frac{\widetilde{k}}{\pi}\int\ d^{3}x\ \epsilon^{ijk}\
\Bigl{(}\omega_{i}^{a}\,\bigl{(}\partial_{j}\omega_{ka}+\tfrac{1}{3}\epsilon_{abc}\omega_{j}^{b}\omega_{k}^{c}\bigr{)}+\lambda
e_{i}^{a}\partial_{j}e_{ka}+\lambda\epsilon_{abc}\omega^{a}_{i}e^{b}_{j}e^{c}_{k}\Bigr{)}.$
(131)
Both the electric and the magnetic action have the same equations of motion
and the same gauge invariance. The equation of motion from variation of
$e^{a}$ in the electric action is
$d\omega^{a}+\frac{1}{2}\epsilon^{abc}\omega_{b}\wedge\omega_{c}+\tfrac{1}{2}\lambda\epsilon^{abc}e_{b}\wedge
e_{c}=0.$ (132)
It is the analog of the Einstein equation and specifies the curvature of the
connection $\omega^{a}$. Variation of $\omega^{a}$ in the electric action
gives
$de^{a}+\epsilon^{abc}\omega_{b}\wedge e_{c}=0$ (133)
which shows that the connection is torsion-free. For the magnetic action, it
is the variation of $e^{a}$ that specifies the curvature of the connection and
the variation of $\omega^{a}$ that tells us that it is torsion-free. It is in
this sense that these two actions are dual to each other.
The gauge transformations are of two types. The first is labeled by a
tangent-space vector $\rho^{a}$. The gauge variations of $e^{a}$ and
$\omega^{a}$ are
$\delta e_{i}^{a}=-\partial_{i}\rho^{a}-\epsilon^{abc}\omega_{ib}\rho_{c}$ (134)
and
$\delta\omega_{i}^{a}=-\lambda\epsilon^{abc}e_{i\,b}\rho_{c}.$ (135)
The second gauge transformation is generated by a second vector $\tau^{a}$.
The resulting gauge variations are
$\delta e_{i}^{a}=-\epsilon^{abc}e_{ib}\tau_{c}$ (136)
and
$\delta\omega_{i}^{a}=-\partial_{i}\tau^{a}-\epsilon^{abc}\omega_{ib}\tau_{c}.$
(137)
After recalling that one has dualised the spin connection, one observes that
the $\tau$-transformations are just local Lorentz rotations.
The nature of diffeomorphisms is not quite so straightforward. Suppose that
one has a diffeomorphism generated by an infinitesimal vector field $v^{i}$.
Then the variation of the components of the basis of $1$-forms is
$\displaystyle\delta
e_{i}^{a}=-v^{k}(\partial_{k}e_{i}^{a}-\partial_{i}e_{k}^{a})-\partial_{i}(v^{k}e_{k}^{a}).$
(138)
Similarly, the variation of the spin connection is
$\displaystyle\delta\omega_{i}^{a}=-v^{k}(\partial_{k}\omega_{i}^{a}-\partial_{i}\omega_{k}^{a})-\partial_{i}(v^{k}\omega_{k}^{a}).$
(139)
We now see how to find a diffeomorphism in terms of $\rho^{a}$ and $\tau^{a}$.
Setting
$\displaystyle\rho^{a}=v^{k}e_{k}^{a}\ \ \ {\rm and}\ \ \
\tau^{a}=v^{k}\omega_{k}^{a}$ (140)
reproduces what is expected for the transformations of both $e^{a}$ and
$\omega^{a}$ under a diffeomorphism.
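To make the last statement explicit for $e^{a}$ (a short check added here; the computation for $\omega^{a}$ is analogous, using the curvature equation (132)), note that the sum of the two gauge variations with the choice (140) is

```latex
% Sum of the gauge variations (134) and (136) with
% \rho^a = v^k e_k^a and \tau^a = v^k \omega_k^a:
\delta e_i^a
  = -\partial_i\bigl(v^k e_k^a\bigr)
    - \epsilon^{abc}\,\omega_{ib}\,v^k e_{kc}
    - \epsilon^{abc}\,e_{ib}\,v^k \omega_{kc}\,.
% The torsion-free condition (133) in components,
%   \partial_k e_i^a - \partial_i e_k^a
%     = \epsilon^{abc}\bigl(\omega_{ib} e_{kc} - \omega_{kb} e_{ic}\bigr),
% converts the two epsilon terms into -v^k(\partial_k e_i^a - \partial_i e_k^a),
% which is precisely the diffeomorphism variation (138).
```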
### 8.2 The Charges
We now need to find the soft charges resulting from this pair of actions. The
calculation is routine in the covariant phase space formalism. Firstly one
performs a variation of the action in terms of the variation of the fields
$\delta e^{a}$ and $\delta\omega^{a}$. The bulk term then gives the usual
equations of motion which we have already described. However, there is also a
boundary term, the symplectic potential $\theta$. For our actions we find for
the electric case
$\displaystyle\theta_{electric}=-\frac{k}{\pi}e^{a}\wedge\delta\omega_{a}$
(141)
and for the magnetic case
$\displaystyle\theta_{magnetic}=-\frac{\widetilde{k}}{\pi}\left(\omega^{a}\wedge\delta\omega_{a}+\lambda
e^{a}\wedge\delta e_{a}\right).$ (142)
Given a symplectic potential, one finds the symplectic form $\Omega$ by
carrying out a second variation of the fields in $\theta$,
$\delta^{\prime}e^{a}$ and $\delta^{\prime}\omega^{a}$, antisymmetrising over
the two variations and integrating the resulting $2$-form over a spacelike
surface $\Sigma$. For the electric action we find
$\displaystyle\Omega_{electric}=-\frac{k}{\pi}\int_{\Sigma}\delta
e^{a}\wedge\delta^{\prime}\omega_{a}+\delta\omega^{a}\wedge\delta^{\prime}e_{a}$
(143)
and for the magnetic case
$\displaystyle\Omega_{magnetic}=-\frac{2\widetilde{k}}{\pi}\int_{\Sigma}\delta^{\prime}\omega^{a}\wedge\delta\omega_{a}+\lambda\delta^{\prime}e^{a}\wedge\delta
e_{a}.$ (144)
The charges are now found by setting the second variation
$\delta^{\prime}e^{a}$ and $\delta^{\prime}\omega^{a}$ to be pure gauge
transformations determined by $\rho^{\prime}$ and $\tau^{\prime}$. Now
substituting these variations into the symplectic form and using the equations
of motion, one finds that the integral for $\Omega$ collapses into boundary
terms giving the variation of the charges conjugate to $\rho^{\prime}$ and
$\tau^{\prime}$ on $\partial\Sigma$ under the variation of the fields $\delta
e^{a}$ and $\delta\omega^{a}$.
For the electric case, we find
$\displaystyle\delta
Q^{E}_{\rho,\tau}=-\frac{k}{\pi}\int_{\partial\Sigma}\tau_{a}\delta
e^{a}+\rho_{a}\delta\omega^{a}$ (145)
and for the magnetic case
$\displaystyle\delta
Q^{M}_{\rho,\tau}=-\frac{2\widetilde{k}}{\pi}\int_{\partial\Sigma}\tau_{a}\delta\omega^{a}+\lambda\rho_{a}\delta
e^{a}.$ (146)
Both of these charges are integrable, and so we will define the charges to be
$\displaystyle Q^{E}_{\rho,\tau}=-\frac{k}{\pi}\int_{\partial\Sigma}\
\tau_{a}e^{a}+\rho_{a}\omega^{a}$ (147)
for the electric case and
$\displaystyle
Q^{M}_{\rho,\tau}=-\frac{2\widetilde{k}}{\pi}\int_{\partial\Sigma}\tau_{a}\omega^{a}+\lambda\rho_{a}e^{a}$
(148)
for the magnetic case.
A knowledge of the symplectic form allows one to compute the Dirac bracket of
various quantities of importance in the theory. For reasons that will be
explained later, we do this now for just the electric theory. On the sphere
coordinatised by the complex coordinate $z$, the (electric) symplectic form
becomes
$\displaystyle\Omega_{electric}=\frac{ik}{\pi}\int d^{2}z\bigl{(}\delta
e^{a}_{z}\ \delta^{\prime}\omega_{\bar{z}\,a}-\delta
e^{a}_{\bar{z}}\delta^{\prime}\omega_{z\,a}-(\delta\leftrightarrow\delta^{\prime})\bigr{)}.$
(149)
From this it follows that the only non-trivial Dirac brackets are
$\displaystyle\\{e^{a}_{z}(z,\bar{z}),\omega^{b}_{\bar{z}}(z^{\prime},\bar{z}^{\prime})\\}=-\frac{i\pi}{k}\eta^{ab}\delta^{2}(z-z^{\prime})$
(150)
and its complex conjugate.
We can use these expressions to compute the bracket of the charges with the
field variables. Modulo the equations of motion, these brackets should
reproduce the gauge transformations of the fields. Explicit calculation
reveals that
$\displaystyle\\{Q^{E},e_{i}^{a}\\}=-\partial_{i}\rho^{a}-\epsilon^{abc}e_{i\,b}\tau_{c}-\epsilon^{abc}\omega_{i\,b}\rho_{c},$
$\displaystyle\\{Q^{E},\omega^{a}_{i}\\}=-\partial_{i}\tau^{a}-\epsilon^{abc}\omega_{i\,b}\tau_{c}-\lambda\epsilon^{abc}e_{i\,b}\rho_{c},$
(151)
as expected. Similarly, the brackets of the magnetic charges with $e^{a}$ and
$\omega^{a}$ are
$\displaystyle\\{Q^{M},e^{a}_{i}\\}=2\frac{\widetilde{k}}{k}\Bigl{(}-\partial_{i}\tau^{a}-\epsilon^{abc}\omega_{i\,b}\tau_{c}-\lambda\epsilon^{abc}e_{i\,b}\rho_{c}\Bigr{)},$
$\displaystyle\\{Q^{M},\omega^{a}_{i}\\}=2\lambda\frac{\widetilde{k}}{k}\Bigl{(}-\partial_{i}\rho^{a}-\epsilon^{abc}\omega_{i\,b}\rho_{c}-\epsilon^{abc}e_{i\,b}\tau_{c}\Bigr{)}.$
(152)
Again, these are gauge transformations but with the roles of $\rho$ and
$\tau$ interchanged and rescaled.
### 8.3 Charge algebra
We now compute the charge algebra. From here on we work exclusively with the
electric theory. One might then wonder what the point of introducing the
magnetic theory is. The answer is that it allows us to find the magnetic
charges in a straightforward fashion. Had we not done so, finding the
magnetic charges would have been an involved, convoluted and obscure process.
The magnetic charges still exist in the electric theory just as electric
charges exist in the magnetic theory. However, one needs to make a choice of
symplectic form at some point, and we choose the electric picture.
#### 8.3.1 Electric-electric bracket
The bracket between two electric charges is
$\displaystyle\\{Q^{E}_{\tau,\rho},Q^{E}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=-\frac{k}{\pi}\int_{{\partial}\Sigma}\left(\epsilon^{abc}\left(\tau^{\prime}_{b}\tau_{c}+\lambda\rho^{\prime}_{b}\rho_{c}\right)e_{a}+\epsilon^{abc}\left(\tau^{\prime}_{b}\rho_{c}-\tau_{b}\rho^{\prime}_{c}\right){\omega}_{a}\right)$
$\displaystyle\quad-\frac{k}{\pi}\int_{{\partial}\Sigma}\left(\rho^{a}d\tau^{\prime}_{a}+\tau^{a}d\rho^{\prime}_{a}\right).$
(153)
Recall from (147) that the integrated electric charge takes the form
$\displaystyle Q^{E}_{\tau,\rho}$
$\displaystyle=-\frac{k}{\pi}\int_{{\partial}\Sigma}\left(\tau_{a}e^{a}+\rho_{a}{\omega}^{a}\right).$
(154)
Comparing this with the result for the bracket, we observe that
$\displaystyle\\{Q^{E}_{\tau,\rho},Q^{E}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=Q^{E}_{\tau^{\prime\prime},\rho^{\prime\prime}}-\frac{k}{\pi}\int_{{\partial}\Sigma}\left(\rho^{a}d\tau^{\prime}_{a}+\tau^{a}d\rho^{\prime}_{a}\right),$
(155)
where
$\displaystyle\tau^{\prime\prime a}$
$\displaystyle=\epsilon^{abc}\left(\tau^{\prime}_{b}\tau_{c}+\lambda\rho^{\prime}_{b}\rho_{c}\right),$
(156) $\displaystyle\rho^{\prime\prime a}$
$\displaystyle=\epsilon^{abc}\left(\tau^{\prime}_{b}\rho_{c}-\tau_{b}\rho^{\prime}_{c}\right).$
(157)
With $\rho^{a}=v^{i}e_{i}^{a}$, the central term is zero whenever $\tau^{a}=0$
or $\tau^{a}=v^{i}{\omega}_{i}^{a}$.
#### 8.3.2 Electric-magnetic bracket
The bracket between electric and magnetic charges can be obtained in two
distinct ways since
$\displaystyle\\{Q^{E}_{\tau,\rho},Q^{M}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=\delta^{E}_{\tau,\rho}Q^{M}_{\tau^{\prime},\rho^{\prime}}=-\delta^{M}_{\tau^{\prime},\rho^{\prime}}Q^{E}_{\tau,\rho},$
(158)
where here $\delta^{E}_{\tau,\rho}$ denotes the gauge transformation generated
by $Q^{E}_{\rho,\tau}$ given in (151) and
$\delta^{M}_{\tau^{\prime},\rho^{\prime}}$ denotes the gauge transformation
generated by $Q^{M}_{\rho,\tau}$ given in (152). These two results must agree.
Let us first compute
$\displaystyle\delta^{E}_{\tau,\rho}Q^{M}_{\tau^{\prime},\rho^{\prime}}$
$\displaystyle=-\frac{2\widetilde{k}}{\pi}\int_{{\partial}\Sigma}\left(\epsilon^{abc}\left(\tau^{\prime}_{b}\tau_{c}+\lambda\rho^{\prime}_{b}\rho_{c}\right){\omega}_{a}+\lambda\epsilon^{abc}\left(\tau^{\prime}_{b}\rho_{c}-\tau_{b}\rho^{\prime}_{c}\right)e_{a}\right)$
$\displaystyle\quad-\frac{2\widetilde{k}}{\pi}\int_{{\partial}\Sigma}\left(\tau_{a}d\tau^{\prime
a}+\lambda\rho_{a}d\rho^{\prime a}\right).$ (159)
Comparing this to the magnetic charge (148), we can see that
$\displaystyle\delta^{E}_{\tau,\rho}Q^{M}_{\tau^{\prime},\rho^{\prime}}$
$\displaystyle=\widetilde{Q}^{M}_{\tau^{\prime\prime},\rho^{\prime\prime}}-\frac{2\widetilde{k}}{\pi}\int_{{\partial}\Sigma}\left(\tau_{a}d\tau^{\prime
a}+\lambda\rho_{a}d\rho^{\prime a}\right)$ (160)
where $\tau^{\prime\prime}$ and $\rho^{\prime\prime}$ are defined in (156) and
(157).
Let us next use (152) to compute
$-\delta^{M}_{\tau^{\prime},\rho^{\prime}}Q^{E}_{\tau,\rho}$. We obtain
$\displaystyle-\delta^{M}_{\tau^{\prime},\rho^{\prime}}Q^{E}_{\tau,\rho}$
$\displaystyle=-\frac{2\widetilde{k}}{\pi}\int_{{\partial}\Sigma}\left(\epsilon^{abc}\left(\tau^{\prime}_{b}\tau_{c}+\lambda\rho^{\prime}_{b}\rho_{c}\right){\omega}_{a}+\lambda\epsilon^{abc}\left(\tau^{\prime}_{b}\rho_{c}-\tau_{b}\rho^{\prime}_{c}\right)e_{a}\right)$
$\displaystyle\quad-\frac{2\widetilde{k}}{\pi}\int_{{\partial}\Sigma}\left(\tau_{a}d\tau^{\prime
a}+\lambda\rho_{a}d\rho^{\prime a}\right).$ (161)
Observe that this is exactly the same as the expression for
$\delta^{E}_{\tau,\rho}Q^{M}_{\tau^{\prime},\rho^{\prime}}$. This is a nice
consistency check.
Therefore, we conclude that the electric-magnetic charge bracket is
$\displaystyle\\{Q^{E}_{\tau,\rho},Q^{M}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=\widetilde{Q}^{M}_{\tau^{\prime\prime},\rho^{\prime\prime}}-\frac{2\widetilde{k}}{\pi}\int_{{\partial}\Sigma}\left(\tau_{a}d\tau^{\prime
a}+\lambda\rho_{a}d\rho^{\prime a}\right),$ (162)
with $\tau^{\prime\prime}$ and $\rho^{\prime\prime}$ given in (156) and (157).
#### 8.3.3 Magnetic-magnetic bracket
The bracket between two magnetic charges is
$\displaystyle\\{Q^{M}_{\tau,\rho},Q^{M}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=\delta^{M}_{\tau,\rho}Q^{M}_{\tau^{\prime},\rho^{\prime}}.$
(163)
Using (146) and (152), we obtain
$\displaystyle\\{Q^{M}_{\tau,\rho},Q^{M}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=-\frac{4\lambda\widetilde{k}^{2}}{\pi
k}\int_{{\partial}\Sigma}\left(\epsilon^{abc}\left(\tau^{\prime}_{b}\tau_{c}+\lambda\rho^{\prime}_{b}\rho_{c}\right)e_{a}+\epsilon^{abc}\left(\tau^{\prime}_{b}\rho_{c}-\tau_{b}\rho^{\prime}_{c}\right){\omega}_{a}\right)$
$\displaystyle\quad-\frac{4\lambda\widetilde{k}^{2}}{\pi
k}\int_{{\partial}\Sigma}\left(\rho^{a}d\tau^{\prime}_{a}+\tau^{a}d\rho^{\prime}_{a}\right).$
(164)
Comparing this to the electric charge (147), we conclude that
$\displaystyle\\{Q^{M}_{\tau,\rho},Q^{M}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=4\lambda\frac{\widetilde{k}^{2}}{k^{2}}Q^{E}_{\tau^{\prime\prime},\rho^{\prime\prime}}-4\lambda\frac{\widetilde{k}^{2}}{\pi
k}\int_{{\partial}\Sigma}\left(\rho^{a}d\tau^{\prime}_{a}+\tau^{a}d\rho^{\prime}_{a}\right).$
(165)
Again, $\tau^{\prime\prime}$ and $\rho^{\prime\prime}$ are given in (156) and
(157). The central term is the same (up to a constant) as that of
$\\{Q^{E},Q^{E}\\}$, so it vanishes for supertranslations.
It may be worth noting that there is a relation
$\displaystyle\\{Q^{M}_{\tau,\rho},Q^{M}_{\tau^{\prime},\rho^{\prime}}\\}=4\lambda\frac{\widetilde{k}^{2}}{k^{2}}\\{Q^{E}_{\tau,\rho},Q^{E}_{\tau^{\prime},\rho^{\prime}}\\}.$
(166)
The factor of $4\lambda$ seems to be just an artifact of a less-than-optimal
choice of scale for $Q^{M}$ (and, before that, for $I_{magnetic}$). For
instance, if we had started from $2I_{magnetic}$ we would have $2Q^{M}$ in
place of $Q^{M}$, which would have led to $16\lambda$ in place of the factor
$4\lambda$.
### 8.4 $e^{a}$ and ${\omega}^{a}$ on the horizon
In this section, we consider putting a gravitational Chern-Simons theory on
the future Schwarzschild horizon ${\mathcal{H}^{+}}$, and find the solutions
of the equations of motion for $e^{a}$ and ${\omega}^{a}$. We observe that the
“cosmological constant” $\lambda$ is fixed by the equations of motion.
In the context of our work, $g_{ij}$ is the pullback of the four-dimensional
metric in advanced Eddington-Finkelstein coordinates to the future
Schwarzschild horizon,
$\displaystyle g_{ij}=4M^{2}\begin{pmatrix}0&0&0\\\
0&0&\frac{2}{(1+z{\bar{z}})^{2}}\\\
0&\frac{2}{(1+z{\bar{z}})^{2}}&0\end{pmatrix},$ (167)
where $i,j$ span $(v,z,{\bar{z}})$. The “flat metric” $\eta_{ab}$ is the
Cartan metric $\eta_{ab}=\operatorname{diag}(-1,1,1)$. They are connected by
the “triad”
$\displaystyle e_{i}{}^{a}$ $\displaystyle=2M\begin{pmatrix}0&0&0\\\
0&\frac{1}{1+z{\bar{z}}}&\frac{i}{1+z{\bar{z}}}\\\
0&\frac{1}{1+z{\bar{z}}}&\frac{-i}{1+z{\bar{z}}}\end{pmatrix}$ (168)
that satisfies
$\displaystyle e_{i}{}^{a}e_{j}{}^{b}\eta_{ab}=g_{ij}.$ (169)
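As a sanity check (not part of the original derivation), relation (169) for the triad (168) can be verified symbolically; here `sympy` treats $z$ and $\bar{z}$ as independent symbols, a standard trick for complexified coordinates:

```python
# Sketch: verify that the triad (168) reproduces the degenerate
# horizon metric (167) through g_ij = e_i^a e_j^b eta_ab.
import sympy as sp

z, zb, M = sp.symbols('z zbar M')      # z and zbar treated as independent
eta = sp.diag(-1, 1, 1)                # tangent-space metric diag(-++)
f = 1/(1 + z*zb)

# Rows: i in (v, z, zbar); columns: a in (0, 1, 2), as in (168).
e = 2*M*sp.Matrix([[0, 0,       0],
                   [0, f,  sp.I*f],
                   [0, f, -sp.I*f]])

g = sp.simplify(e * eta * e.T)         # g_ij = e_i^a eta_ab e_j^b

# Expected pullback metric (167): only g_{z zbar} = 8 M^2/(1+z zbar)^2 survives.
g_expected = 4*M**2*sp.Matrix([[0, 0,      0],
                               [0, 0, 2*f**2],
                               [0, 2*f**2, 0]])
assert sp.simplify(g - g_expected) == sp.zeros(3)
```

The check also makes the next remark in the text explicit: the matrix `e` is singular, so no inverse relation for $\eta^{ab}$ can exist.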
We do not have the inverse relation $g^{ij}e_{i}{}^{a}e_{j}{}^{b}=\eta^{ab}$
because $g_{ij}$ is not invertible. We can write the above matrix as a
collection of one-forms,
$\displaystyle e^{0}$ $\displaystyle=0,$ (170) $\displaystyle e^{1}$
$\displaystyle=\frac{2M}{1+z{\bar{z}}}dz+\frac{2M}{1+z{\bar{z}}}d{\bar{z}},$
(171) $\displaystyle e^{2}$
$\displaystyle=\frac{2iM}{1+z{\bar{z}}}dz-\frac{2iM}{1+z{\bar{z}}}d{\bar{z}},$
(172)
from which we obtain
$\displaystyle dz$
$\displaystyle=\frac{1}{4M}(1+z{\bar{z}})\left(e^{1}-ie^{2}\right),$ (173)
$\displaystyle d{\bar{z}}$
$\displaystyle=\frac{1}{4M}(1+z{\bar{z}})\left(e^{1}+ie^{2}\right),$ (174)
$\displaystyle dz\wedge d{\bar{z}}$
$\displaystyle=\frac{i}{8M^{2}}(1+z{\bar{z}})^{2}e^{1}\wedge e^{2}.$ (175)
The spin connection can be obtained using the equations of motion and the
anholonomy coefficients
$\displaystyle de^{a}$ $\displaystyle=-{\omega}^{a}{}_{b}\wedge
e^{b}=\frac{1}{2}c^{a}{}_{bc}e^{b}\wedge e^{c},$ (176)
$\displaystyle{\omega}_{ab}$
$\displaystyle=\frac{1}{2}(c_{abc}-c_{bac}-c_{cab})e^{c},$ (177)
where we keep in mind that ${\omega}^{bc}=-{\omega}^{a}\epsilon_{a}{}^{bc}$.
The exterior derivative of $e^{a}$ yields
$\displaystyle de^{0}$ $\displaystyle=0$ (178) $\displaystyle de^{1}$
$\displaystyle=2M\frac{(z-{\bar{z}})}{(1+z{\bar{z}})^{2}}dz\wedge
d{\bar{z}}=\frac{i}{4M}(z-{\bar{z}})e^{1}\wedge e^{2},$ (179) $\displaystyle
de^{2}$ $\displaystyle=2iM\frac{(z+{\bar{z}})}{(1+z{\bar{z}})^{2}}dz\wedge
d{\bar{z}}=-\frac{1}{4M}(z+{\bar{z}})e^{1}\wedge e^{2},$ (180)
from which we read off
$\displaystyle c^{1}{}_{12}=c_{112}=\frac{i}{4M}(z-{\bar{z}}),\qquad
c^{2}{}_{12}=c_{212}=-\frac{1}{4M}(z+{\bar{z}}),$ (181)
with all other coefficients vanishing. Accordingly, the only non-vanishing
component of the spin connection is
$\displaystyle{\omega}_{12}$
$\displaystyle=-\frac{i}{4M}(z-{\bar{z}})e^{1}+\frac{1}{4M}(z+{\bar{z}})e^{2}$
(182)
$\displaystyle=\frac{i{\bar{z}}}{1+z{\bar{z}}}dz-\frac{iz}{1+z{\bar{z}}}d{\bar{z}}.$
(183)
The only non-vanishing component of the dual
${\omega}^{a}=\frac{1}{2}\epsilon^{abc}{\omega}_{bc}$ is thus
$\displaystyle{\omega}^{0}$
$\displaystyle=-{\omega}_{12}=-\frac{i{\bar{z}}}{1+z{\bar{z}}}dz+\frac{iz}{1+z{\bar{z}}}d{\bar{z}}$
(184)
since $\epsilon^{012}=-1$.
Let us see if this satisfies the other set of equations of motion
$\displaystyle
d{\omega}^{a}+\frac{1}{2}\epsilon^{abc}{\omega}_{b}\wedge{\omega}_{c}+\frac{\lambda}{2}\epsilon^{abc}e_{b}\wedge
e_{c}$ $\displaystyle=0.$ (185)
Since only ${\omega}^{0}$ is non-zero, we have
$\epsilon^{abc}{\omega}_{b}\wedge{\omega}_{c}=0$. The only non-vanishing
component of $d{\omega}^{a}$ is
$\displaystyle d{\omega}^{0}$
$\displaystyle=\frac{2i}{(1+z{\bar{z}})^{2}}dz\wedge d{\bar{z}},$ (186)
and the only non-vanishing term of $\frac{\lambda}{2}\epsilon^{abc}e_{b}\wedge
e_{c}$ is
$\displaystyle\frac{\lambda}{2}\epsilon^{0bc}e_{b}\wedge e_{c}$
$\displaystyle=-\lambda e^{1}\wedge
e^{2}=\frac{8iM^{2}\lambda}{(1+z{\bar{z}})^{2}}dz\wedge d{\bar{z}}.$ (187)
Therefore, the above equations of motion boil down to fixing $\lambda$,
$\displaystyle\lambda=-\frac{1}{4M^{2}}.$ (188)
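The fixing of $\lambda$ can be confirmed with a short symbolic computation (a sketch added here, not from the original text), checking (186) and then solving the $a=0$ component of (185):

```python
# Sketch: compute d(omega^0) from (184) and solve the a = 0 equation of
# motion (185) for lambda, with the source term (187).
import sympy as sp

z, zb, M, lam = sp.symbols('z zbar M lambda')

# Components of omega^0 = -i zbar/(1+z zbar) dz + i z/(1+z zbar) dzbar, cf. (184).
w_z  = -sp.I*zb/(1 + z*zb)
w_zb =  sp.I*z /(1 + z*zb)

# d(omega^0) = (d_z w_zbar - d_zbar w_z) dz ^ dzbar
dw = sp.simplify(sp.diff(w_zb, z) - sp.diff(w_z, zb))
assert sp.simplify(dw - 2*sp.I/(1 + z*zb)**2) == 0          # reproduces (186)

# Coefficient of dz ^ dzbar in (lambda/2) eps^{0bc} e_b ^ e_c, cf. (187).
source = 8*sp.I*M**2*lam/(1 + z*zb)**2

lam_sol = sp.solve(sp.Eq(dw + source, 0), lam)[0]
assert sp.simplify(lam_sol + 1/(4*M**2)) == 0               # lambda = -1/(4 M^2)
```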
#### 8.4.1 Compensating $\tau$-transformation for central term
We have seen that the electric and magnetic charges satisfy the algebra (155),
(162) and (165), which reads
$\displaystyle\\{Q^{E}_{\tau,\rho},Q^{E}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=Q^{E}_{\tau^{\prime\prime},\rho^{\prime\prime}}-\frac{k}{\pi}\int_{{\partial}\Sigma}\left(\rho^{a}d\tau^{\prime}_{a}+\tau^{a}d\rho^{\prime}_{a}\right),$
(189)
$\displaystyle\\{Q^{E}_{\tau,\rho},Q^{M}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=Q^{M}_{\tau^{\prime\prime},\rho^{\prime\prime}}-\frac{2\widetilde{k}}{\pi}\int_{{\partial}\Sigma}\left(\tau_{a}d\tau^{\prime
a}+\lambda\rho_{a}d\rho^{\prime a}\right),$ (190)
$\displaystyle\\{Q^{M}_{\tau,\rho},Q^{M}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=4\lambda\frac{\widetilde{k}^{2}}{k^{2}}Q^{E}_{\tau^{\prime\prime},\rho^{\prime\prime}}-4\lambda\frac{\widetilde{k}^{2}}{\pi
k}\int_{{\partial}\Sigma}\left(\rho^{a}d\tau^{\prime}_{a}+\tau^{a}d\rho^{\prime}_{a}\right),$
(191)
with the composition $\tau^{\prime\prime}$ and $\rho^{\prime\prime}$ given by
(156) and (157). We want the central terms of this algebra to cancel the
central term of the supertranslation algebra on the Schwarzschild horizon. Recall
that the $\rho$ transformation is related to a diffeomorphism $v^{i}$ by
$\displaystyle v^{i}$
$\displaystyle=\left(f,\frac{1}{2M}D^{z}f,\frac{1}{2M}D^{\bar{z}}f\right),$
(192) $\displaystyle\rho^{a}$ $\displaystyle=v^{i}e_{i}^{a}$ (193)
$\displaystyle=\frac{1}{1+z{\bar{z}}}\left(0,\ D^{z}f+D^{\bar{z}}f,\
i(D^{z}f-D^{\bar{z}}f)\right).$ (194)
We demand that, in this Chern-Simons theory, a supertranslation is accompanied
by a compensating Lorentz transformation ($\tau$-transformation) given by
$\displaystyle\tau^{a}$
$\displaystyle=\left(\frac{1}{8\widetilde{k}^{1/2}}(D^{2}+2)f,i\sqrt{\lambda}\rho^{2},-i\sqrt{\lambda}\rho^{1}\right).$
(195)
This leads to the algebra
$\displaystyle\\{Q^{E}_{\tau,\rho},Q^{E}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=Q^{E}_{\tau^{\prime\prime},\rho^{\prime\prime}},$ (196)
$\displaystyle\\{Q^{E}_{\tau,\rho},Q^{M}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=Q^{M}_{\tau^{\prime\prime},\rho^{\prime\prime}}+\frac{i}{4}(D^{z}D_{z}^{2}f^{\prime})|_{z=w},$
(197)
$\displaystyle\\{Q^{M}_{\tau,\rho},Q^{M}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=4\lambda\frac{\widetilde{k}^{2}}{k^{2}}Q^{E}_{\tau^{\prime\prime},\rho^{\prime\prime}}.$
(198)
Observe that the standard supertranslation charges close among themselves
without a central term, as do the dual supertranslation charges, while the
bracket of a standard with a dual charge carries the correct form of central
term. Thus, exactly the form of the central term obtained in sections 5 and 6
is reproduced. The Chern-Simons
theory can then be used to cancel the anomalous behavior of the
supertranslation charge algebra in the case that the supertranslation
parameter $f$ has a pole.
Finally, we note that the constant $\widetilde{k}$ can also be fixed in terms
of $k$ by demanding that the complexified charge algebra is closed up to the
central terms. One may readily check that the complexified charge
$\textbf{Q}_{\tau,\rho}=Q^{E}_{\tau,\rho}+iQ^{M}_{\tau,\rho}$ satisfies the
bracket
$\displaystyle\\{\textbf{Q}_{\tau,\rho},\textbf{Q}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=\left[1-4\lambda\frac{\widetilde{k}^{2}}{k^{2}}\right]Q_{\tau^{\prime\prime},\rho^{\prime\prime}}^{E}+2iQ_{\tau^{\prime\prime},\rho^{\prime\prime}}^{M}-\left.\frac{1}{2}(D^{z}D_{z}^{2}f^{\prime})\right|_{z=w}.$
(199)
For this to close up to the central term, we demand that the coefficient of
$Q^{E}_{\tau^{\prime\prime},\rho^{\prime\prime}}$ be $2$, which fixes
$\widetilde{k}^{2}=-\frac{k^{2}}{4\lambda}=k^{2}M^{2}$. Then, we obtain
$\displaystyle\\{\textbf{Q}_{\tau,\rho},\textbf{Q}_{\tau^{\prime},\rho^{\prime}}\\}$
$\displaystyle=\textbf{Q}_{2\tau^{\prime\prime},2\rho^{\prime\prime}}-\left.\frac{1}{2}(D^{z}D_{z}^{2}f^{\prime})\right|_{z=w}.$
(200)
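The fixing of $\widetilde{k}$ can likewise be reproduced symbolically (a sketch, assuming only the closure condition from (199) and the value of $\lambda$ in (188)):

```python
# Sketch: solve the closure condition from (199), namely that the
# coefficient 1 - 4*lambda*ktilde^2/k^2 of Q^E equals 2, and then
# substitute lambda = -1/(4 M^2) from (188).
import sympy as sp

k, lam, M = sp.symbols('k lambda M')
kt2 = sp.symbols('ktilde_sq')                     # stands for ktilde^2

coeff = 1 - 4*lam*kt2/k**2
kt2_sol = sp.solve(sp.Eq(coeff, 2), kt2)[0]
assert sp.simplify(kt2_sol + k**2/(4*lam)) == 0   # ktilde^2 = -k^2/(4 lambda)
assert sp.simplify(kt2_sol.subs(lam, -1/(4*M**2)) - k**2*M**2) == 0
```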
## 9 Discussion
We have constructed standard and dual supertranslation charges on the future
horizon of the Schwarzschild black hole using the first-order formalism of
Godazgar:2020gqd ; Godazgar:2020kqd . Then, we have explored the consequences
of allowing for singularities in the parameter function of supertranslations.
Singular supertranslations arise naturally in the extended phase space
associated with the BMS algebra Barnich:2011mi . Also, in electrodynamics
singular large gauge transformations are closely related to Dirac string
configurations in the bulk Freidel:2018fsk , and singular supertranslations
can be considered as their gravitational analog. Using a simple pole as an
example, we have demonstrated that singularities lead to the presence of a
central term in the Dirac bracket charge algebra, implying that the symmetry
algebra becomes anomalous. In order to remove such a term, we have introduced
a gravitational Chern-Simons theory Witten:1989ip with gauge group
$SL(2,{\mathbb{C}})$ on the horizon. Being topological, this theory is suited
to live on the horizon, which is a null surface, and in addition does not
contribute a stress-energy tensor that would perturb the gravitational
field. We have shown that the large gauge transformation of this boundary
theory can be organized such that its charge algebra cancels the anomalous
central term of the bulk gravity theory.
Some comments are in order. In this paper, we have shown that an
$SL(2,{\mathbb{C}})$ Chern-Simons theory on the horizon can cancel the central
term, but what we have not shown is that this theory is unique in being
capable of this job. Whether there exist other topological field theories that
can cancel the central term is an interesting question, as the properties
shared by the set of such theories will teach us more about the fundamental
nature of the structure on the black hole horizon.
Since the standard and dual supertranslation algebra on the horizon is an
asymptotic symmetry algebra and hence is not gauged, one may observe the
anomalous central term and decide to extend the symmetry algebra to
incorporate such a term instead of removing it. As an example of this
viewpoint, central extensions of classical asymptotic symmetry algebras are
already present in the literature, for example Brown:1986nw . It would be very interesting
to explore this direction, as the work of Brown and Henneaux is intimately
related to the existence of a dual two-dimensional holographic boundary CFT.
We leave this for future investigation.
In electromagnetism, there are specific examples of configurations that are
associated with singular gauge transformations Freidel:2018fsk . Then, one may
ask whether there are well-known gravitational configurations associated with
singular supertranslations. It has been shown by Strominger and Zhiboedov
Strominger:2016wns that finite superrotations at the null infinity map
asymptotically flat spacetimes to spacetimes with isolated defects, which are
interpreted as cosmic strings. It is not clear whether singular
supertranslations can have similar effects. It would be very interesting to
find such an example associated with singular supertranslations.
Finally, the structure of null infinity is very similar to that of the future
Schwarzschild horizon, and thus we expect a similar structure to be present
at null infinity as well. It would be interesting to explore how such a
structure could affect scattering amplitudes.
###### Acknowledgements.
SC thanks the participants of the Corfu 2022 Workshop on Celestial Amplitudes
and Flat Space Holography for stimulating discussions. MJP acknowledges
funding from the Science and Technology Facilities Council (STFC) Consolidated
Grant ST/T000686/1 “Amplitudes, Strings and duality”. MJP would also like to
thank the UK STFC for financial support under grant ST/L000415/1. No new data
were generated or analysed during this study. The work of SC is supported by
the European Research Council (ERC) under the European Union’s Horizon 2020
research and innovation programme (grant agreement No 852386). SC also
acknowledges financial support from the Samsung Scholarship.
## Appendix A Modified Lie bracket
In this appendix we describe, in detail, the construction of the modified Lie
bracket Barnich:2011mi on the Schwarzschild horizon ${\cal H}^{+}.$
The vector field $\xi$ that generates a supertranslation $f(\Theta)$ and a
superrotation $Y(\Theta)$ is
$\displaystyle\xi$
$\displaystyle=\left(f+\frac{v}{2}\psi\right){\partial}_{v}-\frac{1}{2}\left(D^{2}(f+\frac{v}{2}\psi)+r\psi\right){\partial}_{r}+\left(\frac{1}{r}D^{A}(f+\frac{v}{2}\psi)+Y^{A}\right){\partial}_{A},$
(201)
where $\psi\equiv D_{A}Y^{A}$. Let
$\displaystyle F(v,\Theta)\equiv f(\Theta)+\frac{v}{2}\psi(\Theta),$ (202)
such that ${\partial}_{v}F=\frac{1}{2}\psi$. Then,
$\displaystyle\xi$
$\displaystyle=F{\partial}_{v}-\frac{1}{2}D^{2}F{\partial}_{r}+\frac{1}{r}D^{A}F{\partial}_{A}-\frac{r}{2}\psi{\partial}_{r}+Y^{A}{\partial}_{A},$
$\displaystyle\xi_{v}$ $\displaystyle=-\Lambda
F-\frac{1}{2}D^{2}F-\frac{r}{2}\psi,\qquad\xi_{r}=F,\qquad\xi_{A}=rD_{A}F+r^{2}Y_{A},$
(203)
where $Y_{A}=\gamma_{AB}Y^{B}$. In this form, $\xi$ is like a $v$-dependent
supertranslation $F$ but with “corrections”
$-\frac{r}{2}\psi{\partial}_{r}+Y^{A}{\partial}_{A}$. Since we are only
interested in terms linear in $\xi$, we can compute the contributions of $F$
and the remainders separately.
For the unperturbed Schwarzschild spacetime, the non-vanishing Christoffel
symbols $\bar{\Gamma}^{a}_{bc}$ are
$\displaystyle\bar{\Gamma}^{v}_{vv}=\frac{M}{r^{2}},\quad\bar{\Gamma}^{v}_{AB}=-r\gamma_{AB},\quad\bar{\Gamma}^{r}_{vv}=\frac{M\Lambda}{r^{2}},\quad\bar{\Gamma}^{r}_{vr}=-\frac{M}{r^{2}},$
$\displaystyle\bar{\Gamma}^{r}_{AB}=-r\Lambda\gamma_{AB},\quad\bar{\Gamma}^{A}_{rB}=\frac{1}{r}\delta^{A}_{B},\quad\bar{\Gamma}^{A}_{BC}={}^{(2)}\Gamma^{A}_{BC}.$
(204)
Now, the metric perturbations $\delta\bar{g}_{ab}$ generated by $\xi$ are
$\delta\bar{g}_{ab}\equiv\mathcal{L}_{\xi}\bar{g}_{ab}$, and so we find
$\displaystyle\delta\bar{g}_{vv}$
$\displaystyle=\frac{M}{r^{2}}D^{2}F-\psi+\frac{3M}{r}\psi-\frac{1}{2}D^{2}\psi,$
(205) $\displaystyle\delta\bar{g}_{vr}$ $\displaystyle=0,$ (206)
$\displaystyle\delta\bar{g}_{vA}$ $\displaystyle=-D_{A}\left(\Lambda
F+\frac{1}{2}D^{2}F\right),$ (207) $\displaystyle\delta\bar{g}_{AB}$
$\displaystyle=2rD_{A}D_{B}F-r\gamma_{AB}D^{2}F+r^{2}\left(D_{A}Y_{B}+D_{B}Y_{A}-\gamma_{AB}\psi\right).$
(208)
The perturbed metric is then
$\displaystyle ds^{2}$
$\displaystyle=-\left(\Lambda-\frac{M}{r^{2}}D^{2}F+\psi-\frac{3M}{r}\psi+\frac{1}{2}D^{2}\psi\right)dv^{2}+2dvdr-
D_{A}\left(2\Lambda F+D^{2}F\right)dvd\Theta^{A}$
$\displaystyle\quad+\left[r^{2}\gamma_{AB}+2rD_{A}D_{B}F-r\gamma_{AB}D^{2}F+r^{2}\left(D_{A}Y_{B}+D_{B}Y_{A}-\gamma_{AB}\psi\right)\right]d\Theta^{A}d\Theta^{B}.$
(209)
Using this and the relation
$\displaystyle\Gamma^{a}_{bc}$
$\displaystyle=\bar{\Gamma}^{a}_{bc}+\frac{1}{2}\bar{g}^{ad}\left(\bar{\nabla}_{b}\delta\bar{g}_{dc}+\bar{\nabla}_{c}\delta\bar{g}_{db}-\bar{\nabla}_{d}\delta\bar{g}_{bc}\right)+O(\delta\bar{g}^{2}),$
(210)
we compute some of the perturbed Christoffel symbols to linear order in $\xi$,
$\displaystyle\Gamma^{v}_{rr}$
$\displaystyle=\Gamma^{r}_{rr}=\Gamma^{A}_{rr}=0,$ (211)
$\displaystyle\Gamma^{v}_{rA}$ $\displaystyle=0,$ (212)
$\displaystyle\Gamma^{r}_{rA}$
$\displaystyle=\frac{1}{r}D_{A}F-\frac{3M}{r^{2}}D_{A}F+\frac{1}{2r}D_{A}D^{2}F,$
(213) $\displaystyle\Gamma^{B}_{rA}$
$\displaystyle=\frac{1}{r}\delta^{B}_{A}-\frac{1}{2r^{2}}\left(2D^{B}D_{A}F-\delta^{B}_{A}D^{2}F\right),$
(214)
which turn out to be exactly the same as the components of the
supertranslated metric with just $f\to F$. Also
$\displaystyle\gamma^{AB}\Gamma^{v}_{AB}$ $\displaystyle=-2r,$ (215)
$\displaystyle\gamma^{AB}\Gamma^{r}_{AB}$
$\displaystyle=-2r\Lambda-D^{2}F+\frac{4M}{r}D^{2}F-2r\psi+6M\psi-
rD^{2}\psi-\frac{1}{2}D^{2}D^{2}F,$ (216)
$\displaystyle\gamma^{AB}\Gamma^{C}_{AB}$
$\displaystyle=\gamma^{AB}{}^{(2)}\Gamma^{C}_{AB}+\frac{4M}{r^{2}}D^{C}F+Y^{C}+D^{2}Y^{C}.$
(217)
Using the above, we can write for any vector field $\zeta_{a}$
$\displaystyle\nabla_{r}\zeta_{r}$ $\displaystyle={\partial}_{r}\zeta_{r},$
(218) $\displaystyle\nabla_{r}\zeta_{A}+\nabla_{A}\zeta_{r}$
$\displaystyle={\partial}_{r}\zeta_{A}+D_{A}\zeta_{r}-\zeta_{r}\left(\frac{2}{r}D_{A}F-\frac{6M}{r^{2}}D_{A}F+\frac{1}{r}D_{A}D^{2}F\right)$
$\displaystyle\quad-\frac{2}{r}\zeta_{A}+\frac{1}{r^{2}}\zeta_{B}\left(2D^{B}D_{A}F-\delta^{B}_{A}D^{2}F\right),$
(219) $\displaystyle\gamma^{AB}\nabla_{A}\zeta_{B}$
$\displaystyle=\zeta_{r}\left(D^{2}F-\frac{4M}{r}D^{2}F+2r\psi-6M\psi+rD^{2}\psi+\frac{1}{2}D^{2}D^{2}F+2r\Lambda\right)$
$\displaystyle\quad+D^{A}\zeta_{A}+2r\zeta_{v}-\zeta_{C}\Bigg{(}\frac{4M}{r^{2}}D^{C}F+Y^{C}+D^{2}Y^{C}\Bigg{)}.$
(220)
Now we can relate the components of any contravariant vector field $\zeta^{a}$
to the components of a covariant vector field $\zeta_{a}$ using the perturbed
metric,
$\displaystyle\zeta_{v}$
$\displaystyle=-\Lambda\zeta^{v}+\zeta^{v}\left(\frac{M}{r^{2}}D^{2}F-\psi+\frac{3M}{r}\psi-\frac{1}{2}D^{2}\psi\right)+\zeta^{r}-\zeta^{A}D_{A}\left(\Lambda
F+\frac{1}{2}D^{2}F\right),$ (221) $\displaystyle\zeta_{r}$
$\displaystyle=\zeta^{v},$ (222) $\displaystyle\zeta_{A}$
$\displaystyle=-\zeta^{v}D_{A}\left(\Lambda
F+\frac{1}{2}D^{2}F\right)+r^{2}\gamma_{AB}\zeta^{B}$
$\displaystyle\quad+\zeta^{B}\left(2rD_{A}D_{B}F-r\gamma_{AB}D^{2}F+r^{2}\left(D_{A}Y_{B}+D_{B}Y_{A}-\gamma_{AB}\psi\right)\right).$
(223)
Now let $\zeta^{a}$ be a new Schwarzschild supertranslation plus
superrotation vector field parametrized by $g(\Theta)$ and $Z^{A}(\Theta)$.
Employing the shorthand $\phi\equiv D_{A}Z^{A}$ and $G\equiv
g+\frac{v}{2}\phi$,
$\displaystyle\zeta^{v}=G+\delta\zeta^{v},\qquad\zeta^{r}=-\frac{1}{2}D^{2}G-\frac{r}{2}\phi+\delta\zeta^{r},\qquad\zeta^{A}=\frac{1}{r}D^{A}G+Z^{A}+\delta\zeta^{A},$
(224)
where $\delta\zeta^{a}$ is the change in $\zeta^{a}$ due to the original
diffeomorphism $\xi^{a}$. To first order in the perturbation,
$\displaystyle\zeta_{v}$
$\displaystyle=-\Lambda\delta\zeta^{v}+\delta\zeta^{r}+G\left(-\Lambda+\frac{M}{r^{2}}D^{2}F-\psi+\frac{3M}{r}\psi-\frac{1}{2}D^{2}\psi\right)-\frac{1}{2}D^{2}G-\frac{r}{2}\phi$
$\displaystyle\quad-\left(\frac{1}{r}D^{A}G+Z^{A}\right)\left(\Lambda
D_{A}F+\frac{1}{2}D_{A}D^{2}F\right),$ (225) $\displaystyle\zeta_{r}$
$\displaystyle=G+\delta\zeta^{v},$ (226) $\displaystyle\zeta_{A}$
$\displaystyle=\left(\frac{1}{r}D^{B}G+Z^{B}\right)\left(2rD_{A}D_{B}F-r\gamma_{AB}D^{2}F+r^{2}\left(D_{A}Y_{B}+D_{B}Y_{A}-\gamma_{AB}\psi\right)\right)$
$\displaystyle\quad-G\left(\Lambda
D_{A}F+\frac{1}{2}D_{A}D^{2}F\right)+rD_{A}G+r^{2}Z_{A}+r^{2}\gamma_{AB}\delta\zeta^{B}.$
(227)
Plugging back in and demanding that
$\nabla_{r}\zeta_{r}=\nabla_{A}\zeta_{r}+\nabla_{r}\zeta_{A}=\gamma^{AB}\nabla_{A}\zeta_{B}=0$,
we obtain
$\displaystyle 0$ $\displaystyle={\partial}_{r}\delta\zeta^{v},$ (228)
$\displaystyle 0$
$\displaystyle=r^{2}\gamma_{AB}{\partial}_{r}\delta\zeta^{B}+D_{A}\delta\zeta^{v}-\frac{2}{r}(D^{B}G)D_{A}D_{B}F$
$\displaystyle\quad+\frac{1}{r}(D_{A}G)D^{2}F-(D^{B}G)(D_{A}Y_{B}+D_{B}Y_{A})+(D_{A}G)\psi,$
(229) $\displaystyle 0$
$\displaystyle=-\Lambda(D^{A}G)D_{A}F+2(D^{A}D^{B}G)D_{A}D_{B}F+\frac{r^{2}}{2}(D^{A}Z^{B}+D^{B}Z^{A})(D_{A}Y_{B}+D_{B}Y_{A})$
$\displaystyle\quad-(D^{2}G)D^{2}F-r^{2}\phi\psi-r(D^{2}G)\psi-r(D^{2}F)\phi+2r(D^{A}D^{B}G)D_{A}Y_{B}$
$\displaystyle\quad-\frac{1}{2}(D^{A}G)D_{A}D^{2}F+2r(D^{A}Z^{B})D_{A}D_{B}F+r^{2}D_{A}\delta\zeta^{A}+2r\delta\zeta^{r}.$
(230)
Solving for $\delta\zeta$, we obtain
$\displaystyle\delta\zeta^{v}$ $\displaystyle=0,$ (231)
$\displaystyle\delta\zeta^{r}$
$\displaystyle=\frac{1}{2r}\left(\Lambda(D^{A}G)D_{A}F-(D^{A}D^{B}G)D_{A}D_{B}F+\frac{1}{2}(D^{2}G)D^{2}F\right)-rD^{(A}Z^{B)}D_{(A}Y_{B)}$
$\displaystyle\quad+\frac{1}{2r}(D^{A}G)D^{2}D_{A}F-(D^{A}Z^{B})D_{A}D_{B}F+\frac{1}{2}(D^{2}F)\phi+\frac{1}{2}(D_{B}G)D^{2}Y^{B}$
$\displaystyle\quad+\frac{1}{2}(D_{B}G)Y^{B}+\frac{r}{2}\phi\psi,$ (232)
$\displaystyle\delta\zeta^{A}$
$\displaystyle=-\frac{1}{r^{2}}(D^{B}G)D^{A}D_{B}F+\frac{1}{2r^{2}}(D^{A}G)D^{2}F-\frac{1}{r}(D_{B}G)(D^{A}Y^{B}+D^{B}Y^{A})$
$\displaystyle\quad+\frac{1}{r}(D^{A}G)\psi.$ (233)
We emphasize that these $\delta\zeta^{a}$ are the changes in $\zeta^{a}$
induced by the transformation $\xi^{a}$. To make this dependence explicit, we
change notation to $\delta\zeta^{a}\to\delta_{\xi}\zeta^{a}$. The changes in
$\xi^{a}$ due to $\zeta^{a}$ can be obtained by exchanging
$\xi\leftrightarrow\zeta$, and we will denote them by $\delta_{\zeta}\xi^{a}$.
The regular Lie bracket
$[\xi,\zeta]^{a}=\xi^{b}{\partial}_{b}\zeta^{a}-\zeta^{b}{\partial}_{b}\xi^{a}$
of two vector fields can be computed straightforwardly from (201),
$\displaystyle[\xi,\zeta]^{v}$
$\displaystyle=\frac{1}{2}F\phi-\frac{1}{2}G\psi+Y^{A}D_{A}G-Z^{A}D_{A}F,$
(234) $\displaystyle[\xi,\zeta]^{r}$
$\displaystyle=-\frac{1}{4}FD^{2}\phi+\frac{1}{4}GD^{2}\psi+\frac{1}{4}\phi
D^{2}F-\frac{1}{4}\psi
D^{2}G-\frac{1}{2r}(D^{A}F)D_{A}D^{2}G+\frac{1}{2r}(D^{A}G)D_{A}D^{2}F$
$\displaystyle\quad-\frac{1}{2}(D^{A}F)D_{A}\phi+\frac{1}{2}(D^{A}G)D_{A}\psi-\frac{1}{2}Y^{A}D_{A}D^{2}G+\frac{1}{2}Z^{A}D_{A}D^{2}F-\frac{r}{2}Y^{A}D_{A}\phi$
$\displaystyle\quad+\frac{r}{2}Z^{A}D_{A}\psi,$ (235)
$\displaystyle[\xi,\zeta]^{A}$
$\displaystyle=\frac{1}{2r}FD^{A}\phi-\frac{1}{2r}GD^{A}\psi+\frac{1}{2r^{2}}(D^{2}F)D^{A}G-\frac{1}{2r^{2}}(D^{2}G)D^{A}F+\frac{1}{2r}\psi
D^{A}G-\frac{1}{2r}\phi D^{A}F$
$\displaystyle\quad+\frac{1}{r^{2}}(D^{B}F)D_{B}D^{A}G-\frac{1}{r^{2}}(D^{B}G)D_{B}D^{A}F+\frac{1}{r}Y^{B}D_{B}D^{A}G-\frac{1}{r}Z^{B}D_{B}D^{A}F$
$\displaystyle\quad+\frac{1}{r}(D^{B}F)D_{B}Z^{A}-\frac{1}{r}(D^{B}G)D_{B}Y^{A}+Y^{B}D_{B}Z^{A}-Z^{B}D_{B}Y^{A}.$
(236)
We define the modified bracket by correcting this by $\delta_{\xi}\zeta^{a}$
and $\delta_{\zeta}\xi^{a}$,
$\displaystyle[\xi,\zeta]^{a}_{M}$
$\displaystyle=[\xi,\zeta]^{a}-\delta_{\xi}\zeta^{a}+\delta_{\zeta}\xi^{a}.$
(237)
Using the expressions for $\delta_{\xi}\zeta^{a}$ that we have computed
earlier, we obtain
$\displaystyle[\xi,\zeta]^{v}_{M}$
$\displaystyle=\frac{1}{2}F\phi+Y^{A}D_{A}G-(\xi\leftrightarrow\zeta),$ (238)
$\displaystyle[\xi,\zeta]^{r}_{M}$
$\displaystyle=-\frac{1}{4}FD^{2}\phi-\frac{1}{4}(D^{2}F)\phi-\frac{1}{2}(D^{A}F)D_{A}\phi-\frac{1}{2}Y^{A}D^{2}D_{A}G-(D^{A}Y^{B})D_{A}D_{B}G$
$\displaystyle\quad-\frac{1}{2}(D_{B}G)D^{2}Y^{B}-\frac{r}{2}Y^{A}D_{A}\phi-(\xi\leftrightarrow\zeta),$
(239) $\displaystyle[\xi,\zeta]^{A}_{M}$
$\displaystyle=\frac{1}{2r}FD^{A}\phi-\frac{1}{2r}\psi
D^{A}G+\frac{1}{r}Y^{B}D_{B}D^{A}G+Y^{B}D_{B}Z^{A}+\frac{1}{r}(D_{B}G)D^{A}Y^{B}$
$\displaystyle\quad-(\xi\leftrightarrow\zeta).$ (240)
The $v$-component can be reorganized as
$\displaystyle[\xi,\zeta]^{v}$
$\displaystyle=\frac{1}{2}f\phi-\frac{1}{2}g\psi+Y^{A}D_{A}g-Z^{A}D_{A}f+\frac{v}{2}D_{A}\left(Y^{B}D_{B}Z^{A}-Z^{B}D_{B}Y^{A}\right).$
(241)
Let us define
$\displaystyle\hat{f}$
$\displaystyle=\frac{1}{2}f\phi-\frac{1}{2}g\psi+Y^{A}D_{A}g-Z^{A}D_{A}f,$
(242) $\displaystyle\hat{Y}^{A}$
$\displaystyle=Y^{B}D_{B}Z^{A}-Z^{B}D_{B}Y^{A}.$ (243)
Then, define $\hat{\psi}\equiv D_{A}\hat{Y}^{A}$, and take
$\hat{F}\equiv\hat{f}+\frac{v}{2}\hat{\psi}$ so that we have
$[\xi,\zeta]^{v}=\hat{F}$,
$\displaystyle\hat{F}$
$\displaystyle=\frac{1}{2}F\phi+Y^{A}D_{A}G-(\xi\leftrightarrow\zeta).$ (244)
With this definition, observe that we have exactly the modified bracket
components
$\displaystyle-\frac{1}{2}D^{2}\hat{F}-\frac{r}{2}\hat{\psi}$
$\displaystyle=-\frac{1}{4}FD^{2}\phi-\frac{1}{4}(D^{2}F)\phi-\frac{1}{2}(D^{A}F)D_{A}\phi-\frac{1}{2}Y^{A}D^{2}D_{A}G$
$\displaystyle\quad-(D^{A}Y^{B})D_{A}D_{B}G-\frac{1}{2}(D_{A}G)D^{2}Y^{A}-\frac{r}{2}Y^{A}D_{A}\phi-(\xi\leftrightarrow\zeta)$
(245) $\displaystyle=[\xi,\zeta]^{r}_{M},$ (246)
and
$\displaystyle\frac{1}{r}D^{A}\hat{F}+\hat{Y}^{A}$
$\displaystyle=\frac{1}{2r}FD^{A}\phi-\frac{1}{2r}\psi
D^{A}G+\frac{1}{r}Y^{B}D_{B}D^{A}G+Y^{B}D_{B}Z^{A}+\frac{1}{r}(D_{B}G)D^{A}Y^{B}$
$\displaystyle\quad-(\xi\leftrightarrow\zeta)$ (247)
$\displaystyle=[\xi,\zeta]^{A}_{M}.$ (248)
This implies that
$\displaystyle[\xi,\zeta]_{M}$
$\displaystyle=\left(\hat{f}+\frac{v}{2}\hat{\psi}\right){\partial}_{v}-\frac{1}{2}\left(D^{2}\left(\hat{f}+\frac{v}{2}\hat{\psi}\right)+r\hat{\psi}\right){\partial}_{r}+\left(\frac{1}{r}D^{A}\left(\hat{f}+\frac{v}{2}\hat{\psi}\right)+\hat{Y}^{A}\right){\partial}_{A}.$
(249)
Comparing the RHS to the expression (201), we can see that it is another
supertranslation $\hat{f}$ together with a superrotation $\hat{Y}^{A}$.
We conclude that given two pairs $(f_{1},Y_{1}),(f_{2},Y_{2})$ of
supertranslation and superrotation, the modified bracket has the algebra
$\displaystyle[(f_{1},Y_{1}),(f_{2},Y_{2})]_{M}=(\hat{f},\hat{Y}),$ (250)
with the product being another supertranslation together with a superrotation
parametrized by
$\displaystyle\hat{f}$
$\displaystyle=\frac{1}{2}f_{1}D_{A}Y^{A}_{2}-\frac{1}{2}f_{2}D_{A}Y^{A}_{1}+Y_{1}^{A}D_{A}f_{2}-Y_{2}^{A}D_{A}f_{1},$
(251) $\displaystyle\hat{Y}^{A}$
$\displaystyle=Y_{1}^{B}D_{B}Y_{2}^{A}-Y_{2}^{B}D_{B}Y_{1}^{A},$ (252)
which is equivalent to the BMS algebra at null infinity Barnich:2011mi.
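The covariant derivatives in (252) appear in an antisymmetrized combination, so the Christoffel terms cancel and $\hat{Y}^{A}$ is just the Lie bracket of $Y_{1}$ and $Y_{2}$. As a hedged numerical sanity check (the explicit rotation-generator components below are our own choice of standard conventions, not taken from the text), one can verify that (252) closes on the rotation subalgebra: $[R_{x},R_{y}]=-R_{z}$ for the vector fields generating rotations of the sphere.

```python
import numpy as np

# Rotation generators of the round sphere in (theta, phi) components
# (standard conventions; an assumption made for this illustration).
def Rx(p):
    th, ph = p
    return np.array([-np.sin(ph), -np.cos(ph) / np.tan(th)])

def Ry(p):
    th, ph = p
    return np.array([np.cos(ph), -np.sin(ph) / np.tan(th)])

def Rz(p):
    return np.array([0.0, 1.0])

def bracket(Y1, Y2, p, h=1e-6):
    """Superrotation bracket (252): Y1^B d_B Y2^A - Y2^B d_B Y1^A at p.

    The Christoffel terms cancel in this combination, so partial
    derivatives (here: central finite differences) suffice."""
    def jac(Y):
        cols = []
        for a in range(2):
            dp = np.zeros(2)
            dp[a] = h
            cols.append((Y(p + dp) - Y(p - dp)) / (2 * h))
        return np.column_stack(cols)      # jac[A, B] = d_B Y^A
    return jac(Y2) @ Y1(p) - jac(Y1) @ Y2(p)

for p in [np.array([0.9, 0.4]), np.array([1.7, 2.1])]:
    assert np.allclose(bracket(Rx, Ry, p), -Rz(p), atol=1e-6)
```

The sign matches the usual convention for rotation vector fields $R_{i}=\epsilon_{ijk}x_{j}{\partial}_{k}$, for which $[R_{i},R_{j}]=-\epsilon_{ijk}R_{k}$.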
## Appendix B Derivation of horizon charges
In this section, we will give a derivation of the supertranslation and dual
supertranslation charges using the formula of Godazgar:2020gqd ;
Godazgar:2020kqd ,
$\displaystyle\not{\delta}Q_{E}^{\mathcal{H}^{+}}$
$\displaystyle=\frac{1}{16\pi}\epsilon_{{\alpha\beta\gamma\delta}}\int_{{\partial}\mathcal{H}^{+}}(i_{\xi}E^{\gamma})\delta{\omega}^{{\alpha\beta}}\wedge
E^{\delta},$ (253) $\displaystyle\not{\delta}Q_{M}^{\mathcal{H}^{+}}$
$\displaystyle=\frac{1}{8\pi}\int_{{\partial}\mathcal{H}^{+}}(i_{\xi}E^{\alpha})\delta{\omega}_{\alpha\beta}\wedge
E^{\beta}.$ (254)
Here ${\omega}_{\alpha\beta}$ is the (torsion-free) spin connection 1-form,
and $\delta{\omega}$ is the change in ${\omega}$ induced by the variation
$\delta g_{ab}=h_{ab}$ of the metric.
In order to incorporate the variation of the metric, we will parametrize a
generic metric in Bondi gauge by
$\displaystyle g_{ab}$ $\displaystyle=\begin{pmatrix}V+W_{A}W^{A}&U&W_{B}\\\
U&0&0\\\ W_{A}&0&g_{AB}\end{pmatrix},$ (255)
where $V$, $U$, $W_{A}$ are real functions of $v$, $r$, $\Theta^{A}$. The
inverse metric is
$\displaystyle g^{ab}$ $\displaystyle=\begin{pmatrix}0&U^{-1}&0\\\
U^{-1}&-VU^{-2}&-U^{-1}W^{B}\\\ 0&-U^{-1}W^{A}&g^{AB}\end{pmatrix},$ (256)
where $g^{AB}$ is the inverse of the two-dimensional metric $g_{AB}$, and
$W^{A}=g^{AB}W_{B}$ (not $\gamma^{AB}W_{B}$). Since this metric may deviate
from that of Schwarzschild, the two-dimensional curved indices $A,B,C,\ldots$
in this section, and only in this section, are raised and lowered using
$g^{AB}$ and $g_{AB}$ rather than $\gamma^{AB}$ and $\gamma_{AB}$, the metric
on the unit 2-sphere.
We will employ the following set of vielbeins
$E^{\alpha}=E^{\alpha}{}_{a}dx^{a}$,
$\displaystyle E^{1}$ $\displaystyle=\frac{V}{2}dv+Udr,$ (257) $\displaystyle
E^{2}$ $\displaystyle=-dv,$ (258) $\displaystyle E^{3}$
$\displaystyle=W_{A}\mu^{A}dv+\mu_{A}d\Theta^{A},$ (259) $\displaystyle E^{4}$
$\displaystyle=W_{A}\bar{\mu}^{A}dv+\bar{\mu}_{A}d\Theta^{A},$ (260)
where $\mu_{A}$, $\bar{\mu}_{A}$ are complex functions of $v$, $r$,
$\Theta^{A}$, and $\mu^{A}=g^{AB}\mu_{B}$, $\bar{\mu}^{A}=g^{AB}\bar{\mu}_{B}$
(bar denotes complex conjugation, so $\bar{\mu}_{A}$ is the complex conjugate
of $\mu_{A}$ and hence $E^{3}=\overline{E^{4}}$). They satisfy the conditions
$\displaystyle\mu_{A}\bar{\mu}_{B}+\mu_{B}\bar{\mu}_{A}$
$\displaystyle=g_{AB},\qquad\mu^{A}\bar{\mu}_{A}=1,\qquad\mu^{A}\mu_{A}=\bar{\mu}^{A}\bar{\mu}_{A}=0.$
(261)
The tangent space metric and its inverse are
$\displaystyle\eta_{\alpha\beta}=\eta^{\alpha\beta}=\begin{pmatrix}0&-1&0&0\\\
-1&0&0&0\\\ 0&0&0&1\\\ 0&0&1&0\end{pmatrix},$ (262)
and the inverse vielbeins $E_{\alpha}=E_{\alpha}{}^{a}{\partial}_{a}$ are
$\displaystyle E_{1}$ $\displaystyle=U^{-1}{\partial}_{r},$ (263)
$\displaystyle E_{2}$
$\displaystyle=-{\partial}_{v}+\frac{V}{2U}{\partial}_{r}+W^{A}{\partial}_{A},$
(264) $\displaystyle E_{3}$ $\displaystyle=\bar{\mu}^{A}{\partial}_{A},$ (265)
$\displaystyle E_{4}$ $\displaystyle=\mu^{A}{\partial}_{A}.$ (266)
One can readily check that
$\displaystyle E_{\alpha}{}^{a}E^{\alpha}{}_{b}=\delta^{a}{}_{b}$
$\displaystyle,\qquad
E_{\alpha}{}^{a}E^{\beta}{}_{a}=\delta_{\alpha}{}^{\beta},$ (267)
$\displaystyle E^{\alpha}{}_{a}E^{\beta}{}_{b}\eta_{\alpha\beta}=g_{ab}$
$\displaystyle,\qquad
E_{\alpha}{}^{a}E_{\beta}{}^{b}\eta^{\alpha\beta}=g^{ab}.$ (268)
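As a numerical sanity check, a short script can verify the conditions (261), the completeness relation (268), and the value $\epsilon_{1234}=-i$ used in (289) below, on the Schwarzschild background. This is a sketch under an assumed explicit choice of $\mu_{A}$ (any choice obeying (261) works; the sign of $\epsilon_{1234}$ depends on this orientation choice).

```python
import numpy as np

# Schwarzschild background in these coordinates: U = 1, W_A = 0,
# V = -(1 - 2M/r). The choice mu_A = (r/sqrt(2)) (1, -i sin(theta)) is
# an illustrative assumption satisfying (261).
M, r, th = 1.0, 3.0, 0.7
V, U = -(1.0 - 2.0 * M / r), 1.0
mu = (r / np.sqrt(2)) * np.array([1.0, -1j * np.sin(th)])

g2 = np.diag([r**2, (r * np.sin(th))**2])   # g_AB = r^2 gamma_AB
mu_up = np.linalg.inv(g2) @ mu              # mu^A = g^AB mu_B

# conditions (261)
assert np.isclose(mu_up @ np.conj(mu), 1.0)
assert np.isclose(mu_up @ mu, 0.0)
assert np.allclose(np.outer(mu, np.conj(mu)) + np.outer(np.conj(mu), mu), g2)

# vielbein one-forms (257)-(260): rows alpha, columns a = (v, r, theta, phi)
E = np.array([[V / 2, U, 0, 0],
              [-1, 0, 0, 0],
              [0, 0, mu[0], mu[1]],
              [0, 0, np.conj(mu[0]), np.conj(mu[1])]], dtype=complex)

eta = np.zeros((4, 4))
eta[0, 1] = eta[1, 0] = -1
eta[2, 3] = eta[3, 2] = 1

g = np.zeros((4, 4), dtype=complex)
g[0, 0] = V
g[0, 1] = g[1, 0] = U
g[2:, 2:] = g2

assert np.allclose(E.T @ eta @ E, g)        # completeness relation (268)

# eps_1234 = det(E_alpha^a) eps_{vr theta phi} = eps_{vr theta phi}/det(E)
eps1234 = U * r**2 * np.sin(th) / np.linalg.det(E)
assert np.isclose(eps1234, -1j)             # matches (289)
```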
The spin connection 1-form ${\omega}_{\alpha\beta}$ is defined as
$\displaystyle dE^{\alpha}$ $\displaystyle=-{\omega}^{\alpha}{}_{\beta}\wedge
E^{\beta}=\frac{1}{2}c^{\alpha}{}_{\beta\gamma}E^{\beta}\wedge E^{\gamma},$
(269) $\displaystyle{\omega}_{\alpha\beta}$
$\displaystyle=\frac{1}{2}(c_{\alpha\beta\gamma}-c_{\beta\alpha\gamma}-c_{\gamma{\alpha\beta}})E^{\gamma},$
(270)
where $c_{\alpha\beta\gamma}$ are the anholonomy coefficients. Explicit
expressions for the coefficients read
$\displaystyle c_{1\beta\gamma}$ $\displaystyle=0,$ (271) $\displaystyle
c_{212}$
$\displaystyle=\frac{1}{U}\left(\frac{1}{2}V^{\prime}-\dot{U}+W^{A}{\partial}_{A}U\right),$
(272) $\displaystyle c_{213}$
$\displaystyle=\frac{\bar{\mu}^{A}{\partial}_{A}U}{U},$ (273) $\displaystyle
c_{223}$
$\displaystyle=-\frac{1}{2}\bar{\mu}^{A}{\partial}_{A}V+\frac{\bar{\mu}^{A}{\partial}_{A}U}{2U}V,$
(274) $\displaystyle c_{234}$ $\displaystyle=0,$ (275) $\displaystyle c_{312}$
$\displaystyle=-\frac{W^{A}{}^{\prime}\bar{\mu}_{A}}{U},$ (276) $\displaystyle
c_{313}$ $\displaystyle=\frac{\bar{\mu}^{A}\bar{\mu}_{A}^{\prime}}{U},$ (277)
$\displaystyle c_{314}$
$\displaystyle=\frac{\mu^{A}\bar{\mu}_{A}^{\prime}}{U},$ (278) $\displaystyle
c_{323}$
$\displaystyle=\bar{\mu}^{A}{\partial}_{A}(W\cdot\bar{\mu})-\bar{\mu}^{A}\dot{\bar{\mu}}_{A}+\frac{\bar{\mu}^{A}\bar{\mu}_{A}^{\prime}}{2U}V+W^{A}\bar{\mu}^{B}({\partial}_{A}\bar{\mu}_{B}-{\partial}_{B}\bar{\mu}_{A}),$
(279) $\displaystyle c_{324}$
$\displaystyle=\mu^{A}{\partial}_{A}(W\cdot\bar{\mu})-\mu^{A}\dot{\bar{\mu}}_{A}+\frac{\mu^{A}\bar{\mu}_{A}^{\prime}}{2U}V+W^{A}\mu^{B}({\partial}_{A}\bar{\mu}_{B}-{\partial}_{B}\bar{\mu}_{A}),$
(280) $\displaystyle c_{334}$
$\displaystyle=\left(\mu^{A}\bar{\mu}^{B}-\bar{\mu}^{A}\mu^{B}\right){\partial}_{B}\bar{\mu}_{A}.$
(281)
The remaining coefficients can be obtained using the antisymmetry
$c_{\alpha\beta\gamma}=-c_{\alpha\gamma\beta}$ and the fact that
$E^{3}=\overline{E^{4}}$ implies that switching the indices $3\leftrightarrow 4$
corresponds to complex conjugation, for instance $c_{213}=\overline{c_{214}}$
and $c_{434}=\overline{c_{343}}=-\overline{c_{334}}$. Using this to compute
${\omega}_{{\alpha\beta}}$, we obtain
$\displaystyle{\omega}_{12}$
$\displaystyle=\frac{1}{U}\left(-\frac{1}{2}V^{\prime}+\dot{U}-W^{A}{\partial}_{A}U\right)E^{2}$
$\displaystyle\quad+\frac{1}{2U}\left(-\bar{\mu}^{A}{\partial}_{A}U+{W^{A}}^{\prime}\bar{\mu}_{A}\right)E^{3}+\frac{1}{2U}\left(-\mu^{A}{\partial}_{A}U+{W^{A}}^{\prime}\mu_{A}\right)E^{4},$
(282) $\displaystyle{\omega}_{13}$
$\displaystyle=\frac{1}{2U}\left(W^{A}{}^{\prime}\bar{\mu}_{A}-\bar{\mu}^{A}{\partial}_{A}U\right)E^{2}-\frac{\bar{\mu}^{A}\bar{\mu}_{A}^{\prime}}{U}E^{3}-\frac{1}{2U}\left(\mu^{A}\bar{\mu}_{A}^{\prime}+\bar{\mu}^{A}\mu_{A}^{\prime}\right)E^{4},$
(283) $\displaystyle{\omega}_{23}$
$\displaystyle=\frac{1}{2U}\left(-\bar{\mu}^{A}{\partial}_{A}U-W^{A}{}^{\prime}\bar{\mu}_{A}\right)E^{1}+\frac{1}{2}\left(\bar{\mu}^{A}{\partial}_{A}V-\frac{\bar{\mu}^{A}{\partial}_{A}U}{U}V\right)E^{2}$
$\displaystyle\quad-\left(\bar{\mu}^{A}{\partial}_{A}(W\cdot\bar{\mu})-\bar{\mu}^{A}\dot{\bar{\mu}}_{A}+\frac{\bar{\mu}^{A}\bar{\mu}_{A}^{\prime}}{2U}V+W^{A}\bar{\mu}^{B}({\partial}_{A}\bar{\mu}_{B}-{\partial}_{B}\bar{\mu}_{A})\right)E^{3}$
$\displaystyle\quad-\frac{1}{2}\left(\mu^{A}{\partial}_{A}(W\cdot\bar{\mu})-\mu^{A}\dot{\bar{\mu}}_{A}+\frac{\mu^{A}\bar{\mu}_{A}^{\prime}}{2U}V+W^{A}\mu^{B}({\partial}_{A}\bar{\mu}_{B}-{\partial}_{B}\bar{\mu}_{A})+\text{c.c.}\right)E^{4},$
(284) $\displaystyle{\omega}_{34}$
$\displaystyle=\frac{1}{2U}\left(-\mu^{A}\bar{\mu}_{A}^{\prime}+\bar{\mu}^{A}\mu_{A}^{\prime}\right)E^{1}$
$\displaystyle\quad+\frac{1}{2}\left(\bar{\mu}^{A}{\partial}_{A}(W\cdot\mu)-\bar{\mu}^{A}\dot{\mu}_{A}+\frac{\bar{\mu}^{A}\mu_{A}^{\prime}}{2U}V+W^{A}\bar{\mu}^{B}({\partial}_{A}\mu_{B}-{\partial}_{B}\mu_{A})-\text{c.c.}\right)E^{2}$
$\displaystyle\quad-\left(\mu^{A}\bar{\mu}^{B}-\bar{\mu}^{A}\mu^{B}\right){\partial}_{B}\bar{\mu}_{A}E^{3}-\left(\mu^{A}\bar{\mu}^{B}-\bar{\mu}^{A}\mu^{B}\right){\partial}_{B}\mu_{A}E^{4}.$
(285)
We keep in mind that $E^{3}=\overline{E^{4}}$. The remaining components can be
obtained by antisymmetry and complex conjugation, for instance
${\omega}_{42}=\overline{{\omega}_{32}}=-\overline{{\omega}_{23}}$.
### B.1 Supertranslation charge
The conserved electric charge involves the differential form
$\displaystyle\frac{1}{16\pi}\epsilon_{{\alpha\beta\gamma\delta}}(i_{\xi}E^{\gamma})\delta{\omega}^{{\alpha\beta}}\wedge
E^{\delta}.$ (286)
We are interested in integrating
$\displaystyle\frac{1}{2}\epsilon_{{\alpha\beta\gamma\delta}}(i_{\xi}E^{\gamma})\delta{\omega}^{{\alpha\beta}}\wedge
E^{\delta}$
$\displaystyle=\epsilon_{1234}i_{\xi}E^{3}\delta{\omega}^{12}\wedge
E^{4}+\epsilon_{1324}i_{\xi}E^{2}\delta{\omega}^{13}\wedge
E^{4}+\epsilon_{2314}i_{\xi}E^{1}\delta{\omega}^{23}\wedge E^{4}$
$\displaystyle\quad+\epsilon_{1243}i_{\xi}E^{4}\delta{\omega}^{12}\wedge
E^{3}+\epsilon_{1423}i_{\xi}E^{2}\delta{\omega}^{14}\wedge
E^{3}+\epsilon_{2413}i_{\xi}E^{1}\delta{\omega}^{24}\wedge E^{3}$
$\displaystyle\quad+\cdots$ (287)
over $S^{2}$. Observe that the alternating tensor
$\epsilon_{\alpha\beta\gamma\delta}$ is purely imaginary,
$\displaystyle\overline{\epsilon_{1234}}=\epsilon_{1243}=-\epsilon_{1234}.$
(288)
By explicit computation, one finds that
$\displaystyle\epsilon_{1234}=E_{1}{}^{a}E_{2}{}^{b}E_{3}{}^{c}E_{4}{}^{d}\epsilon_{abcd}=-i,$
(289)
where $\epsilon_{abcd}$ is the alternating tensor in the curved coordinates
with $\epsilon_{vr\theta\phi}=\sqrt{-\det g}=Ur^{2}\sin\theta$. Using this and
rearranging the indices, we obtain
$\displaystyle\frac{i}{2}\epsilon_{{\alpha\beta\gamma\delta}}(i_{\xi}E^{\gamma})\delta{\omega}^{{\alpha\beta}}\wedge
E^{\delta}$ $\displaystyle=-i_{\xi}E^{3}\delta{\omega}_{12}\wedge
E^{4}+i_{\xi}E^{2}\delta{\omega}_{24}\wedge
E^{4}-i_{\xi}E^{1}\delta{\omega}_{14}\wedge E^{4}$
$\displaystyle\quad+i_{\xi}E^{4}\delta{\omega}_{12}\wedge
E^{3}-i_{\xi}E^{2}\delta{\omega}_{23}\wedge
E^{3}+i_{\xi}E^{1}\delta{\omega}_{13}\wedge E^{3}$
$\displaystyle\quad+\cdots.$ (290)
Let us look at this expression term by term. We are interested only in the
coefficients of $E^{3}\wedge E^{4}$, as we are integrating over the two-sphere
on the horizon. The first and fourth terms combine to yield
$\displaystyle-i_{\xi}E^{3}\delta{\omega}_{12}\wedge
E^{4}+i_{\xi}E^{4}\delta{\omega}_{12}\wedge E^{3}$
$\displaystyle=\frac{1}{2}\xi^{A}\left({\partial}_{A}h_{vr}+\frac{2}{r}h_{vA}-{\partial}_{r}h_{vA}\right)E^{3}\wedge
E^{4}$ $\displaystyle\quad+\cdots.$ (291)
For the second term we have
$\displaystyle i_{\xi}E^{2}\delta{\omega}_{24}\wedge E^{4}$
$\displaystyle=\frac{\xi^{v}}{2}\left(\frac{1}{r}h_{vv}+{\partial}_{A}h^{A}{}_{v}+h^{A}{}_{v}\left(\bar{\mu}^{B}{\partial}_{A}\mu_{B}+\mu^{B}{\partial}_{A}\bar{\mu}_{B}\right)\right)E^{3}\wedge
E^{4}$ $\displaystyle\quad+\cdots,$ (292)
where we have used
$\delta(\bar{\mu}^{A}\dot{\mu}_{A})=\delta(\mu^{A}\dot{\bar{\mu}}_{A})=0$. It
turns out that
$\displaystyle{\partial}_{A}h^{A}{}_{v}+h^{A}{}_{v}\left(\bar{\mu}^{B}{\partial}_{A}\mu_{B}+\mu^{B}{\partial}_{A}\bar{\mu}_{B}\right)=g^{AB}D_{A}h_{vB}=\frac{1}{r^{2}}\gamma^{AB}D_{A}h_{vB},$
(293)
where $D_{A}$ denotes covariant derivative on the unit 2-sphere (that is,
compatible with $\gamma_{AB}$, not $g_{AB}$). Thus, we can write
$\displaystyle i_{\xi}E^{2}\delta{\omega}_{24}\wedge
E^{4}=\frac{\xi^{v}}{2}\left(\frac{1}{r}h_{vv}+\frac{1}{r^{2}}\gamma^{AB}D_{A}h_{vB}\right)E^{3}\wedge
E^{4}+\cdots.$ (294)
The coefficient of $E^{3}\wedge E^{4}$ is real,
$\displaystyle i_{\xi}E^{2}\delta{\omega}_{24}\wedge
E^{4}-i_{\xi}E^{2}\delta{\omega}_{23}\wedge E^{3}$
$\displaystyle=\xi^{v}\left(\frac{1}{r}h_{vv}+\frac{1}{r^{2}}\gamma^{AB}D_{A}h_{vB}\right)E^{3}\wedge
E^{4}+\cdots.$ (295)
We also have
$\displaystyle-i_{\xi}E^{1}\delta{\omega}_{14}\wedge E^{4}$
$\displaystyle=\xi^{r}\delta\left[\frac{1}{2U}\left(\bar{\mu}^{A}\mu_{A}^{\prime}+\mu^{A}\bar{\mu}_{A}^{\prime}\right)\right]E^{3}\wedge
E^{4}+\frac{\xi^{r}}{2}\left(\bar{\mu}^{A}\mu_{A}^{\prime}+\mu^{A}\bar{\mu}_{A}^{\prime}\right)\delta
E^{3}\wedge E^{4}$ $\displaystyle\quad+\cdots,$ (296) $\displaystyle
i_{\xi}E^{1}\delta{\omega}_{13}\wedge E^{3}$
$\displaystyle=\xi^{r}\delta\left[\frac{1}{2U}\left(\mu^{A}\bar{\mu}_{A}^{\prime}+\bar{\mu}^{A}\mu_{A}^{\prime}\right)\right]E^{3}\wedge
E^{4}+\frac{\xi^{r}}{2}\left(\mu^{A}\bar{\mu}_{A}^{\prime}+\bar{\mu}^{A}\mu_{A}^{\prime}\right)E^{3}\wedge\delta
E^{4}$ $\displaystyle\quad+\cdots.$ (297)
Together we have
$\displaystyle-i_{\xi}E^{1}\delta{\omega}_{14}\wedge
E^{4}+i_{\xi}E^{1}\delta{\omega}_{13}\wedge E^{3}$
$\displaystyle=-\frac{2\xi^{r}}{r}h_{vr}E^{3}\wedge
E^{4}+\frac{\xi^{r}}{r}\delta(E^{3}\wedge E^{4})+\cdots,$ (298)
where we have used
$\delta(\mu^{A}\bar{\mu}_{A}^{\prime})=\delta(\bar{\mu}^{A}\mu_{A}^{\prime})=0$.
With $\delta r=0$, we also have $\delta(E^{3}\wedge E^{4})=0$ due to the Bondi
gauge condition $\gamma^{AB}h_{AB}=0$.
Collecting the results, we obtain
$\displaystyle\frac{i}{2}\epsilon_{{\alpha\beta\gamma\delta}}(i_{\xi}E^{\gamma})\delta{\omega}^{{\alpha\beta}}\wedge
E^{\delta}$
$\displaystyle=\Bigg{[}\frac{1}{2}\xi^{A}\left({\partial}_{A}h_{vr}+\frac{2}{r}h_{vA}-{\partial}_{r}h_{vA}\right)+\xi^{v}\left(\frac{1}{r}h_{vv}+\frac{1}{r^{2}}\gamma^{AB}D_{A}h_{vB}\right)$
$\displaystyle\qquad-\frac{2\xi^{r}}{r}h_{vr}\Bigg{]}E^{3}\wedge
E^{4}+\cdots.$ (299)
Plugging this into (253), we obtain the electric diffeomorphism charge
associated with the vector field $\xi$ on the Schwarzschild horizon $r=2M$ to be
$\displaystyle\not{\delta}Q_{E}^{\mathcal{H}^{+}}$
$\displaystyle=\frac{M^{2}}{4\pi}\int
d^{2}\Theta\sqrt{\gamma}\Bigg{[}\xi^{A}\left({\partial}_{A}h_{vr}+\frac{1}{M}h_{vA}-{\partial}_{r}h_{vA}\right)$
$\displaystyle\hskip
85.35826pt+\frac{1}{M}\xi^{v}\left(h_{vv}+\frac{1}{2M}\gamma^{AB}D^{A}h_{vB}\right)-\frac{2\xi^{r}}{M}h_{vr}\Bigg{]}.$
(300)
For a smooth function $f(\Theta)$ and the horizon supertranslation vector
field (21), this formula is in exact agreement with the horizon
supertranslation charge derived in Hawking:2016sgy , as anticipated.
### B.2 Dual supertranslation charge
The magnetic diffeomorphism charge associated with a vector field $\xi$ takes
the form
$\displaystyle\not{\delta}Q_{M}^{\mathcal{H}^{+}}$
$\displaystyle=\frac{i}{8\pi}\int_{{\partial}{\mathcal{H}^{+}}}(i_{\xi}E^{\alpha})\delta{\omega}_{\alpha\beta}\wedge
E^{\beta}.$ (301)
Again, we only need to compute the $E^{3}\wedge E^{4}$ component of the
two-form
$\displaystyle(i_{\xi}E^{\alpha})\delta{\omega}_{\alpha\beta}\wedge
E^{\beta}.$ (302)
The only part of the expression relevant to the $S^{2}$ integral is
$\displaystyle(i_{\xi}E^{\alpha})\delta{\omega}_{\alpha\beta}\wedge E^{\beta}$
$\displaystyle=(i_{\xi}E^{1})(\delta{\omega}_{13}\wedge
E^{3}+\delta{\omega}_{14}\wedge
E^{4})+(i_{\xi}E^{2})(\delta{\omega}_{23}\wedge
E^{3}+\delta{\omega}_{24}\wedge E^{4})$
$\displaystyle\quad+(i_{\xi}E^{3})\delta{\omega}_{34}\wedge
E^{4}+(i_{\xi}E^{4})\delta{\omega}_{43}\wedge E^{3}+\cdots,$ (303)
where $\cdots$ contains all the irrelevant components. Using the expression
(284) for the spin connection, we can write
$\displaystyle(\delta{\omega}_{13}\wedge E^{3}+\delta{\omega}_{14}\wedge
E^{4})|_{d\Theta^{A}\wedge d\Theta^{B}}$
$\displaystyle=-\delta\left[\frac{1}{2U}\left(\bar{\mu}^{A}\mu_{A}^{\prime}+\mu^{A}\bar{\mu}_{A}^{\prime}\right)\right](E^{3}\wedge
E^{4}+E^{4}\wedge E^{3})$
$\displaystyle\quad-\frac{1}{2}\left(\bar{\mu}^{A}\mu_{A}^{\prime}+\mu^{A}\bar{\mu}_{A}^{\prime}\right)(\delta
E^{3}\wedge E^{4}+\delta E^{4}\wedge E^{3})$
$\displaystyle\quad-(\bar{\mu}^{A}\bar{\mu}_{A}^{\prime})\delta E^{3}\wedge
E^{3}-(\mu^{A}\mu_{A}^{\prime})\delta E^{4}\wedge E^{4}.$ (304)
The first line on the RHS is clearly zero since $E^{3}\wedge E^{4}+E^{4}\wedge
E^{3}=0$. The third line is also zero since
$\displaystyle\bar{\mu}^{A}\bar{\mu}_{A}^{\prime}=\frac{1}{r}\bar{\mu}^{A}\bar{\mu}_{A}=0,\qquad\mu^{A}\mu_{A}^{\prime}=\frac{1}{r}\mu^{A}\mu_{A}=0.$
(305)
In the second line, we have
$\displaystyle\delta E^{3}\wedge E^{4}+\delta E^{4}\wedge E^{3}$
$\displaystyle=(\delta\mu_{A}\bar{\mu}_{B}+\delta\bar{\mu}_{A}\mu_{B})d\Theta^{A}\wedge
d\Theta^{B}.$ (306)
One can show that the expression in parentheses on the RHS is
$\frac{1}{2}h_{AB}$ and is therefore symmetric,
$\displaystyle
h_{AB}=\delta(\mu_{A}\bar{\mu}_{B}+\bar{\mu}_{A}\mu_{B})=2(\delta\mu_{A}\bar{\mu}_{B}+\delta\bar{\mu}_{A}\mu_{B}),$
(307)
which implies $\delta E^{3}\wedge E^{4}+\delta E^{4}\wedge E^{3}=0$. Therefore
we have
$\displaystyle(\delta{\omega}_{13}\wedge E^{3}+\delta{\omega}_{14}\wedge
E^{4})|_{d\Theta^{A}\wedge d\Theta^{B}}=0.$ (308)
The expression for $\delta{\omega}_{23}\wedge E^{3}+\delta{\omega}_{24}\wedge
E^{4}$ is similar, just with more complicated coefficients. To see this,
first observe that the $E^{3}$ and $E^{4}$ components of ${\omega}_{23}$ and
${\omega}_{24}$ have the form
$\displaystyle{\omega}_{23}$ $\displaystyle=\cdots-
AE^{3}-BE^{4},\qquad{\omega}_{24}=\cdots-BE^{3}-\bar{A}E^{4},$ (309)
where $A$ is complex and $B$ is real,
$\displaystyle A$
$\displaystyle=\bar{\mu}^{A}{\partial}_{A}(W\cdot\bar{\mu})-\bar{\mu}^{A}\dot{\bar{\mu}}_{A}+\frac{\bar{\mu}^{A}\bar{\mu}_{A}^{\prime}}{2U}(V-W^{2})+W^{A}\bar{\mu}^{B}({\partial}_{A}\bar{\mu}_{B}-{\partial}_{B}\bar{\mu}_{A}),$
(310) $\displaystyle B$
$\displaystyle=\frac{1}{2}\left(\mu^{A}{\partial}_{A}(W\cdot\bar{\mu})-\mu^{A}\dot{\bar{\mu}}_{A}+\frac{\mu^{A}\bar{\mu}_{A}^{\prime}}{2U}(V-W^{2})+W^{A}\mu^{B}({\partial}_{A}\bar{\mu}_{B}-{\partial}_{B}\bar{\mu}_{A})+\text{c.c.}\right).$
(311)
Note that $A=B=0$ on Schwarzschild; it is only the variations $\delta A$ and
$\delta B$ that do not necessarily vanish. Thus, we have
$\displaystyle(\delta{\omega}_{23}\wedge E^{3}+\delta{\omega}_{24}\wedge
E^{4})|_{d\Theta^{A}\wedge d\Theta^{B}}$ $\displaystyle=-(\delta
B)(E^{3}\wedge E^{4}+E^{4}\wedge E^{3})-B(\delta E^{3}\wedge E^{4}+\delta
E^{4}\wedge E^{3})$ $\displaystyle\quad-A\delta E^{3}\wedge
E^{3}-\bar{A}\delta E^{4}\wedge E^{4}$ $\displaystyle=0,$ (312)
where the second line vanishes since $A=B=0$, and the first line vanishes due
to $E^{3}\wedge E^{4}+E^{4}\wedge E^{3}=0$.
At this point we are left with the two terms,
$\displaystyle(i_{\xi}E^{3})\delta{\omega}_{34}\wedge
E^{4}+(i_{\xi}E^{4})\delta{\omega}_{43}\wedge E^{3}.$ (313)
We first note that the $E^{3}$ and $E^{4}$ components of
${\omega}_{34}=-{\omega}_{43}$ can be written compactly using
$\mu^{A}\bar{\mu}^{B}-\bar{\mu}^{A}\mu^{B}=i\epsilon^{AB}$ as
$\displaystyle{\omega}_{34}$
$\displaystyle=\cdots+i\epsilon^{AB}\left({\partial}_{A}\bar{\mu}_{B}E^{3}+{\partial}_{A}\mu_{B}E^{4}\right).$
(314)
The variation $\delta\epsilon^{AB}$ is proportional to the trace
$\gamma^{AB}h_{AB}$ and therefore vanishes in Bondi gauge. Therefore if we
vary ${\omega}_{34}$, the variation only acts on the expression inside the
parentheses,
$\displaystyle\delta{\omega}_{34}$
$\displaystyle=\cdots+i\epsilon^{AB}\delta\left({\partial}_{A}\bar{\mu}_{B}E^{3}+{\partial}_{A}\mu_{B}E^{4}\right)$
$\displaystyle=\cdots+i\epsilon^{AB}\left({\partial}_{A}\delta\bar{\mu}_{B}E^{3}+{\partial}_{A}\delta\mu_{B}E^{4}+{\partial}_{A}\bar{\mu}_{B}\delta
E^{3}+{\partial}_{A}\mu_{B}\delta E^{4}\right).$ (315)
Plugging this in and using $i_{\xi}E^{3}=\xi^{A}\mu_{A}$ and
$i_{\xi}E^{4}=\xi^{A}\bar{\mu}_{A}$, we obtain
$\displaystyle(i_{\xi}E^{3})\delta{\omega}_{34}\wedge
E^{4}+(i_{\xi}E^{4})\delta{\omega}_{43}\wedge E^{3}$
$\displaystyle=i\epsilon^{AB}\xi^{C}\mu_{C}\left({\partial}_{A}\delta\bar{\mu}_{B}E^{3}+{\partial}_{A}\bar{\mu}_{B}\delta
E^{3}+{\partial}_{A}\mu_{B}\delta E^{4}\right)\wedge E^{4}$
$\displaystyle\quad-i\epsilon^{AB}\xi^{C}\bar{\mu}_{C}\left({\partial}_{A}\delta\mu_{B}E^{4}+{\partial}_{A}\bar{\mu}_{B}\delta
E^{3}+{\partial}_{A}\mu_{B}\delta E^{4}\right)\wedge E^{3}$
$\displaystyle=\xi^{C}X_{C},$ (316)
where $X_{C}$ takes the form
$\displaystyle X_{C}$
$\displaystyle=i\epsilon^{AB}\mu_{C}\left({\partial}_{A}\delta\bar{\mu}_{B}E^{3}+{\partial}_{A}\bar{\mu}_{B}\delta
E^{3}+{\partial}_{A}\mu_{B}\delta E^{4}\right)\wedge E^{4}$
$\displaystyle\quad-i\epsilon^{AB}\bar{\mu}_{C}\left({\partial}_{A}\delta\mu_{B}E^{4}+{\partial}_{A}\bar{\mu}_{B}\delta
E^{3}+{\partial}_{A}\mu_{B}\delta E^{4}\right)\wedge E^{3}$
$\displaystyle=i\epsilon^{AB}\Big{[}\mu_{C}({\partial}_{A}\delta\bar{\mu}_{B})\mu_{D}\bar{\mu}_{E}+\mu_{C}({\partial}_{A}\bar{\mu}_{B})\delta\mu_{D}\bar{\mu}_{E}+\mu_{C}({\partial}_{A}\mu_{B})\delta\bar{\mu}_{D}\bar{\mu}_{E}$
$\displaystyle\quad+\bar{\mu}_{C}({\partial}_{A}\delta\mu_{B})\mu_{D}\bar{\mu}_{E}+\bar{\mu}_{C}({\partial}_{A}\bar{\mu}_{B})\mu_{D}\delta\mu_{E}+\bar{\mu}_{C}({\partial}_{A}\mu_{B})\mu_{D}\delta\bar{\mu}_{E}\Big{]}d\Theta^{D}\wedge
d\Theta^{E}.$ (317)
One finds that this expression is
$\displaystyle X_{C}$
$\displaystyle=\frac{1}{2}\left({\partial}_{\theta}\frac{h_{\phi\theta}}{\sin\theta}+\frac{2\cos\theta}{\sin^{2}\theta}h_{\phi\theta}-\frac{{\partial}_{\phi}h_{\theta\theta}}{\sin\theta},\sin\theta{\partial}_{\theta}\frac{h_{\phi\phi}}{\sin^{2}\theta}+\frac{2\cos\theta}{\sin^{2}\theta}h_{\phi\phi}-{\partial}_{\phi}\frac{h_{\theta\phi}}{\sin\theta}\right)d\Omega$
$\displaystyle=-\frac{r^{2}}{2}\epsilon^{AB}D_{A}h_{BC}d\Omega,$ (318)
where $d\Omega=\sin\theta d\theta\wedge d\phi$, and $D_{A}$ denotes the unit
2-sphere covariant derivative compatible with $\gamma_{AB}$. Notice that
$\epsilon^{AB}$ here is the Levi-Civita tensor for the metric $g_{AB}$, which
contains the $r^{2}$ factor. If we write $\bar{\epsilon}^{AB}$ for the
Levi-Civita tensor corresponding to the $S^{2}$ metric $\gamma_{AB}$, we have
the relation $\bar{\epsilon}^{AB}=r^{2}\epsilon^{AB}$ and
$\displaystyle X_{C}=-\frac{1}{2}\bar{\epsilon}^{AB}D_{A}h_{BC}d\Omega.$ (319)
Collecting the results, we obtain the magnetic diffeomorphism charge
associated with a vector field $\xi$ to be
$\displaystyle\not{\delta}Q_{M}^{\mathcal{H}^{+}}$
$\displaystyle=\frac{1}{8\pi}\int_{{\partial}{\mathcal{H}^{+}}}\xi^{C}X_{C}$
$\displaystyle=-\frac{1}{16\pi}\int_{{\partial}{\mathcal{H}^{+}}}d^{2}\Theta\sqrt{\gamma}\,\xi^{C}\bar{\epsilon}^{AB}D_{A}h_{BC}.$
(320)
## Appendix C Dirac bracket of non-integrable piece
We can re-write ${\mathcal{N}_{f}^{\mathcal{H}^{+}}}$ in terms of the delta
function ${\partial}_{\bar{z}}f=2\pi\delta^{2}(z-w)$. Doing so, and noting
that the covariant derivative $D_{z}$ acts on a scalar and is therefore a
plain partial derivative, we obtain
$\displaystyle{\mathcal{N}_{f}^{\mathcal{H}^{+}}}$
$\displaystyle=-\frac{1}{8\pi
M}\int_{\mathcal{H}^{+}}dv\,d^{2}z\,({\partial}_{\bar{z}}f){\partial}_{z}[D^{2}-1]^{-1}D^{B}D^{A}\sigma_{AB}$
(321)
Integrating the second term by parts in ${\bar{z}}$ yields
$\displaystyle{\mathcal{N}_{f}^{\mathcal{H}^{+}}}$
$\displaystyle=\frac{1}{8\pi
M}\int_{\mathcal{H}^{+}}dv\,d^{2}z\,({\partial}_{z}{\partial}_{\bar{z}}f)[D^{2}-1]^{-1}D^{B}D^{A}\sigma_{AB}.$
(322)
The boundary term arising from this vanishes, since
${\partial}_{\bar{z}}f=2\pi\delta^{2}(z-w)$ and the contour does not cross
$w$. To treat $[D^{2}-1]^{-1}$ explicitly, let us consider the Green's
function ${\Delta}(z,z^{\prime})$ of $D^{2}-1$ (strictly, the Green's function
depends on both $(z,{\bar{z}})$ and $(z^{\prime},{\bar{z}}^{\prime})$, so we
should write ${\Delta}(z,{\bar{z}},z^{\prime},{\bar{z}}^{\prime})$ to be
precise; we use the shorthand ${\Delta}(z,z^{\prime})$ for notational
brevity),
$\displaystyle(D^{2}-1){\Delta}(z,z^{\prime})=\frac{1}{{\gamma_{z\bar{z}}}}\delta^{2}(z-z^{\prime}),$
(323)
which is derived in appendix C.1 to be,
$\displaystyle{\Delta}(z,z^{\prime})=\frac{1}{4\sin(\pi\lambda)}P_{\lambda}(-\mathbf{n}_{z}\cdot\mathbf{n}_{z^{\prime}}),$
(324)
where $\lambda=\frac{1}{2}(-1+i\sqrt{3})$, $P_{\lambda}$ is the Legendre
function, and
$\displaystyle\mathbf{n}_{z}=\left(\frac{z+{\bar{z}}}{1+z{\bar{z}}},\frac{i({\bar{z}}-z)}{1+z{\bar{z}}},\frac{1-z{\bar{z}}}{1+z{\bar{z}}}\right)$
(325)
are the Cartesian coordinates of the unit vector on the sphere characterized
by $(z,{\bar{z}})$. The quantity $\mathbf{n}_{z}\cdot\mathbf{n}_{z^{\prime}}$
reduces to $\cos\theta$ when $(z^{\prime},{\bar{z}}^{\prime})$ is set to the
north pole, as it should. Using ${\Delta}$, we can write (43) as
$\displaystyle{\mathcal{N}_{f}^{\mathcal{H}^{+}}}$
$\displaystyle=\frac{1}{8\pi
M}\int_{\mathcal{H}^{+}}dv\,d^{2}z\,({\partial}_{z}{\partial}_{\bar{z}}f)\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}\,{\Delta}(z,z^{\prime})D^{B^{\prime}}D^{A^{\prime}}\sigma_{A^{\prime}B^{\prime}}.$
(326)
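As a quick numerical sanity check (our addition, not part of the derivation), one can verify that the vector in (325) has unit norm and that $\mathbf{n}_{z}\cdot\mathbf{n}_{z^{\prime}}$ reduces to $\cos\theta$ when $z^{\prime}$ is placed at the north pole; here we assume the standard stereographic relation $z=\tan(\theta/2)e^{i\phi}$, which is consistent with (325).

```python
import cmath
import math
import random

def n_vec(z):
    # Cartesian components of the unit vector in (325)
    zb = z.conjugate()
    d = 1 + (z * zb).real
    return ((z + zb).real / d, (1j * (zb - z)).real / d, (1 - (z * zb).real) / d)

random.seed(0)
for _ in range(100):
    theta = random.uniform(0.1, 3.0)
    phi = random.uniform(0.0, 2 * math.pi)
    z = math.tan(theta / 2) * cmath.exp(1j * phi)  # assumed stereographic coordinate
    nx, ny, nz = n_vec(z)
    # unit norm
    assert abs(nx * nx + ny * ny + nz * nz - 1) < 1e-12
    # dot product with the north pole (z' = 0, i.e. n = (0, 0, 1)) gives cos(theta)
    assert abs(nz - math.cos(theta)) < 1e-12
```

The same function is reused below when checking the coincidence-limit identity for the Green's function.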
In the second term on the r.h.s., let us partially integrate the two covariant
derivatives off $\sigma_{A^{\prime}B^{\prime}}$ and onto ${\Delta}$. This gives rise
to two boundary terms, but one can use (324) to show that they vanish; see
appendix C.2 for details,
$\displaystyle{\mathcal{N}_{f}^{\mathcal{H}^{+}}}$
$\displaystyle=\frac{1}{8\pi
M}\int_{\mathcal{H}^{+}}d^{2}z\,({\partial}_{z}{\partial}_{\bar{z}}f)\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}\,(D^{A^{\prime}}D^{B^{\prime}}{\Delta}(z,z^{\prime}))\sigma_{A^{\prime}B^{\prime}}.$
(327)
First, let us compute the Dirac bracket
$\\{{\mathcal{N}_{f}^{\mathcal{H}^{+}}},{\delta{Q_{g}^{\mathcal{H}^{+}}}}\\}_{D}$.
This is zero, since it is proportional to the expression
$\displaystyle\left\\{\int_{\mathcal{H}^{+}}dv\,d^{2}z({\partial}_{z}{\partial}_{{\bar{z}}}f)\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}(D^{A^{\prime}}D^{B^{\prime}}{\Delta}(z,z^{\prime}))\sigma_{A^{\prime}B^{\prime}},\int_{\mathcal{H}^{+}}dv\,d^{2}z^{\prime\prime}\sqrt{\gamma^{\prime\prime}}(D^{E^{\prime\prime}}D^{C^{\prime\prime}}g)\sigma_{D^{\prime\prime}C^{\prime\prime}}\right\\}_{D}$
that vanishes. Next, we compute
$\\{{\mathcal{N}_{f}^{\mathcal{H}^{+}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\\}_{D}$.
It is proportional to the quantity
$\displaystyle\left\\{\int_{\mathcal{H}^{+}}dv\,d^{2}z({\partial}_{z}{\partial}_{{\bar{z}}}f)\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}(D^{A^{\prime}}D^{B^{\prime}}{\Delta}(z,z^{\prime}))\sigma_{A^{\prime}B^{\prime}},\int_{\mathcal{H}^{+}_{-}}d^{2}z^{\prime\prime}\sqrt{\gamma^{\prime\prime}}(D^{E^{\prime\prime}}D^{C^{\prime\prime}}g)\epsilon_{E^{\prime\prime}}{}^{D^{\prime\prime}}h_{D^{\prime\prime}C^{\prime\prime}}\right\\}_{D}$
$\displaystyle=32\pi
M^{2}\int_{\mathcal{H}^{+}_{-}}d^{2}z\,({\partial}_{z}{\partial}_{{\bar{z}}}f)\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}(D^{A^{\prime}}D^{B^{\prime}}{\Delta}(z,z^{\prime}))(D^{E^{\prime}}D^{C^{\prime}}g)\epsilon_{E^{\prime}}{}^{D^{\prime}}\gamma_{A^{\prime}B^{\prime}D^{\prime}C^{\prime}},$
(328)
where we have used (26), with
$\gamma_{ABCD}=\gamma_{AC}\gamma_{BD}+\gamma_{AD}\gamma_{BC}-\gamma_{AB}\gamma_{CD}$.
Partially integrating the two covariant derivatives from ${\Delta}$ onto $g$, while
noting that $D_{A}\epsilon_{BC}=0$ and $D_{A}\gamma_{BCDE}=0$, we obtain
$\displaystyle\left\\{\int_{\mathcal{H}^{+}}dv\,d^{2}z({\partial}_{z}{\partial}_{{\bar{z}}}f)\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}(D^{A^{\prime}}D^{B^{\prime}}{\Delta}(z,z^{\prime}))\sigma_{A^{\prime}B^{\prime}},\int_{\mathcal{H}^{+}_{-}}d^{2}z^{\prime\prime}\sqrt{\gamma^{\prime\prime}}(D^{E^{\prime\prime}}D^{C^{\prime\prime}}g)\epsilon_{E^{\prime\prime}}{}^{D^{\prime\prime}}h_{D^{\prime\prime}C^{\prime\prime}}\right\\}_{D}$
$\displaystyle=32\pi M^{2}\int
d^{2}z\,({\partial}_{z}{\partial}_{{\bar{z}}}f)\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}{\Delta}(z,z^{\prime})(D^{B^{\prime}}D^{A^{\prime}}D^{E^{\prime}}D^{C^{\prime}}g)\epsilon_{E^{\prime}}{}^{D^{\prime}}\gamma_{A^{\prime}B^{\prime}D^{\prime}C^{\prime}}$
$\displaystyle=64\pi iM^{2}\int
d^{2}z\,({\partial}_{z}{\partial}_{{\bar{z}}}f)\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}{\Delta}(z,z^{\prime})\left(D^{{\bar{z}}^{\prime}}D^{{\bar{z}}^{\prime}}D^{z^{\prime}}D^{z^{\prime}}g-D^{z^{\prime}}D^{z^{\prime}}D^{{\bar{z}}^{\prime}}D^{{\bar{z}}^{\prime}}g\right)(\gamma_{z^{\prime}{\bar{z}}^{\prime}})^{2}$
$\displaystyle=64\pi iM^{2}\int
d^{2}z\,({\partial}_{z}{\partial}_{{\bar{z}}}f)\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}{\Delta}(z,z^{\prime})(\gamma^{z^{\prime}{\bar{z}}^{\prime}})^{2}[D_{z^{\prime}}^{2},D_{{\bar{z}}^{\prime}}^{2}]g.$
(329)
The boundary term arising from the partial integration is similar to that
discussed in appendix C.2 and vanishes for the same reason.777 The boundary term
arising from the partial integration is proportional to the expression
$\displaystyle\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}\bigg{[}D^{A^{\prime}}\left(D^{B^{\prime}}{\Delta}(z,z^{\prime})D^{E^{\prime}}D^{C^{\prime}}g\right)-D^{B^{\prime}}\left({\Delta}(z,z^{\prime})D^{A^{\prime}}D^{E^{\prime}}D^{C^{\prime}}g\right)\bigg{]}\epsilon_{E^{\prime}}{}^{D^{\prime}}\gamma_{A^{\prime}B^{\prime}D^{\prime}C^{\prime}}$
$\displaystyle=2i\oint_{z}dz^{\prime}\gamma^{z^{\prime}{\bar{z}}^{\prime}}\left(\left(D_{z^{\prime}}^{2}g\right){\partial}_{{\bar{z}}^{\prime}}{\Delta}(z,z^{\prime})-{\Delta}(z,z^{\prime})D_{{\bar{z}}^{\prime}}D_{z^{\prime}}^{2}g\right)+2i\oint_{z}d{\bar{z}}^{\prime}\gamma^{z^{\prime}{\bar{z}}^{\prime}}\left(\left(D_{{\bar{z}}^{\prime}}^{2}g\right){\partial}_{z^{\prime}}{\Delta}(z,z^{\prime})-{\Delta}(z,z^{\prime})D_{z^{\prime}}D_{{\bar{z}}^{\prime}}^{2}g\right).$ It is shown in appendix C.1 that
${\Delta}\sim\frac{1}{4\pi}\log|z-z^{\prime}|^{2}$ as $z\to z^{\prime}$, so the
above expression vanishes due to lack of appropriate poles. In the second
equation, we have used the fact that the only non-vanishing components of
$\epsilon_{A}{}^{B}$ and $\gamma_{ABCD}$ are
$\epsilon_{z}{}^{z}=-\epsilon_{\bar{z}}{}^{\bar{z}}=i$ and
$\gamma_{zz{\bar{z}}{\bar{z}}}=\gamma_{{\bar{z}}{\bar{z}}zz}=\frac{8}{(1+z{\bar{z}})^{2}}=2{\gamma_{z\bar{z}}}^{2}$
respectively. One can readily check that $[D_{z}^{2},D_{\bar{z}}^{2}]g=0$.
We conclude that ${\mathcal{N}_{f}^{\mathcal{H}^{+}}}$ has zero bracket with
both charges,
$\displaystyle\\{{\mathcal{N}_{f}^{\mathcal{H}^{+}}},{\delta{Q_{g}^{\mathcal{H}^{+}}}}\\}_{D}$
$\displaystyle=0,\qquad\\{{\mathcal{N}_{f}^{\mathcal{H}^{+}}},{\delta{\widetilde{Q}_{g}^{\mathcal{H}^{+}}}}\\}_{D}=0,$
(330)
and therefore we need not be concerned about this term when computing Dirac
brackets.
### C.1 Green’s function for $D^{2}-1$
In this section, we present a derivation of the Green’s function for the
negative-definite operator $D^{2}-1$ on the unit sphere using standard
textbook techniques. As operators of this form are of interest in various
areas of physics, their Green’s functions can be found in many places in the
literature; see for example Szmytkowski2006 and references therein.
The Green’s function ${\Delta}(\Omega,\Omega^{\prime})$ for $D^{2}-1$ is a
solution to the equation
$\displaystyle(D^{2}-1){\Delta}(\Omega,\Omega^{\prime})=\delta(\Omega-\Omega^{\prime})\equiv\frac{1}{\sin\theta}\delta(\theta-\theta^{\prime})\delta(\phi-\phi^{\prime}),$
(331)
where $\Omega$ and $\Omega^{\prime}$ represent points on the unit sphere, and
the differential operator acts on $\Omega$. Due to spherical symmetry, the
Green’s function will only depend on the geodesic distance between $\Omega$
and $\Omega^{\prime}$. Without any loss of generality, we can assign the
coordinates on the sphere such that $\Omega^{\prime}$ sits at the north pole.
Then, the geodesic distance between $\Omega$ and $\Omega^{\prime}$ is given by
$\theta$. By spherical symmetry, this solution must be the same as when
$\Omega^{\prime}$ is not necessarily at the north pole but instead
$\phi=\phi^{\prime}$, in which case the geodesic distance is
$|\theta-\theta^{\prime}|$. Thus, we will solve the following equation first,
$\displaystyle(D^{2}-1){\Delta}(|\theta-\theta^{\prime}|)=\frac{1}{2\pi\sin\theta}\delta(\theta-\theta^{\prime}),$
(332)
and restore the $\phi$-dependence later. The operator $D^{2}$ in spherical
coordinates reads
$\displaystyle
D^{2}=\frac{1}{\sin\theta}\frac{{\partial}}{{\partial}\theta}\sin\theta\frac{{\partial}}{{\partial}\theta}+\frac{1}{\sin^{2}\theta}\frac{{\partial}^{2}}{{\partial}\phi^{2}},$
(333)
so by changing variables to $t=\cos\theta$, we can write (332) as
$\displaystyle\left(\frac{d}{dt}(1-t^{2})\frac{d}{dt}-1\right){\Delta}(t,t^{\prime})=\frac{1}{2\pi}\delta(t-t^{\prime}).$
(334)
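The change of variables from (333) to (334) can be spot-checked by finite differences; the sketch below (our addition, using $f(\theta)=\cos^{3}\theta$ as an arbitrary test function) confirms that $\frac{1}{\sin\theta}\partial_{\theta}(\sin\theta\,\partial_{\theta}f)$ equals $\frac{d}{dt}\big[(1-t^{2})\frac{df}{dt}\big]$ under $t=\cos\theta$; for this test function both sides equal $6t-12t^{3}$.

```python
import math

def f_theta(theta):
    # test function f(theta) = cos^3(theta), i.e. g(t) = t^3 with t = cos(theta)
    return math.cos(theta) ** 3

def lhs(theta, h=1e-4):
    # (1/sin)(d/dtheta)(sin * df/dtheta) via nested central differences
    def a(th):
        df = (f_theta(th + h) - f_theta(th - h)) / (2 * h)
        return math.sin(th) * df
    return (a(theta + h) - a(theta - h)) / (2 * h) / math.sin(theta)

theta = 1.0
t = math.cos(theta)
# analytic value of d/dt[(1-t^2) * 3t^2] = 6t - 12t^3
assert abs(lhs(theta) - (6 * t - 12 * t ** 3)) < 1e-5
```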
We can obtain the Green’s function ${\Delta}$ by solving this equation for
$t<t^{\prime}$ and $t>t^{\prime}$, and then stitching the two solutions
together at $t=t^{\prime}$.
The differential equation (334) states that a second-order differential
operator acting on ${\Delta}$ yields a delta function. This implies that
${\Delta}$ is continuous at $t=t^{\prime}$; otherwise the discontinuity could
locally be written in terms of the Heaviside step function, and
$\frac{d^{2}}{dt^{2}}$ acting on it would yield a derivative of the delta
function, which is not present in (334). So, we have
$\displaystyle\lim_{\epsilon\to
0^{+}}{\Delta}(t^{\prime}-\epsilon,t^{\prime})=\lim_{\epsilon\to
0^{+}}{\Delta}(t^{\prime}+\epsilon,t^{\prime}).$ (335)
On the other hand, $\frac{d{\Delta}}{dt}$ is discontinuous, which can be seen
by integrating (334) around an infinitesimal region around $t=t^{\prime}$,
$\displaystyle\lim_{\epsilon\to 0^{+}}(1-t^{\prime
2})\left(\left.\frac{d{\Delta}}{dt}\right|_{t=t^{\prime}+\epsilon}-\left.\frac{d{\Delta}}{dt}\right|_{t=t^{\prime}-\epsilon}\right)=\frac{1}{2\pi}.$
(336)
With the stitching conditions (335) and (336) in mind, let us solve (334) for
$t\neq t^{\prime}$. Equation (334) for $t\neq t^{\prime}$ takes the form of a
Legendre equation,
$\displaystyle\left(\frac{d}{dt}(1-t^{2})\frac{d}{dt}+\lambda(\lambda+1)\right){\Delta}(t,t^{\prime})=0,$
(337)
with $\lambda=\frac{-1\pm i\sqrt{3}}{2}$ (such that $\lambda(\lambda+1)=-1$).
Being a second-order ordinary differential equation, this has two linearly
independent solutions, the Legendre functions $P_{\lambda}(t)$ and
$Q_{\lambda}(t)$ of the first and second kind. When $\lambda=n$ where $n$ is
an integer, $P_{n}(t)$ is a Legendre polynomial. Legendre polynomials have a
definite parity, so for instance $P_{n}(t)$ and $P_{n}(-t)=(-1)^{n}P_{n}(t)$
are not linearly independent. However, for non-integer $\lambda$,
$P_{\lambda}(t)$ is linearly independent of $P_{\lambda}(-t)$ (eqns. 8.2.3
and 8.3.1 of Abramowitz1974):
$\displaystyle
P_{\lambda}(-t)=\cos(\lambda\pi)P_{\lambda}(t)-\frac{2}{\pi}\sin(\pi\lambda)Q_{\lambda}(t).$
(338)
This relation implies that for non-integer $\lambda$, we can use
$P_{\lambda}(t)$ and $P_{\lambda}(-t)$ (instead of the standard pair
$P_{\lambda}(t)$ and $Q_{\lambda}(t)$) as a basis of solutions to (337). Thus,
we can write
$\displaystyle{\Delta}(t,t^{\prime})=\begin{cases}a_{1}P_{\lambda}(t)+a_{2}P_{\lambda}(-t)&\qquad\text{for
$t<t^{\prime}$,}\\\ b_{1}P_{\lambda}(t)+b_{2}P_{\lambda}(-t)&\qquad\text{for
$t>t^{\prime}$,}\end{cases}$ (339)
where $a_{1}$, $a_{2}$, $b_{1}$ and $b_{2}$ are functions of $t^{\prime}$
only. We demand that the Green’s function ${\Delta}(t,t^{\prime})$ be well-
defined everywhere except at $t=t^{\prime}$. Noting that $P_{\lambda}(1)=1$ while
$P_{\lambda}(t)$ diverges as $t\to-1$, one sees that this fixes $a_{1}=b_{2}=0$,
$\displaystyle{\Delta}(t,t^{\prime})=\begin{cases}a_{2}P_{\lambda}(-t)&\qquad\text{for
$t<t^{\prime}$,}\\\ b_{1}P_{\lambda}(t)&\qquad\text{for
$t>t^{\prime}$.}\end{cases}$ (340)
The remaining coefficients $a_{2}$ and $b_{1}$ are fixed by the stitching
conditions (335) and (336), which read
$\displaystyle a_{2}P_{\lambda}(-t^{\prime})$
$\displaystyle=b_{1}P_{\lambda}(t^{\prime}),$ (341) $\displaystyle
b_{1}P_{\lambda}^{\prime}(t^{\prime})+a_{2}P^{\prime}_{\lambda}(-t^{\prime})$
$\displaystyle=\frac{1}{2\pi(1-t^{\prime 2})}.$ (342)
These can equivalently be written as
$\displaystyle\begin{pmatrix}P_{\lambda}(-t^{\prime})&-P_{\lambda}(t^{\prime})\\\
P^{\prime}_{\lambda}(-t^{\prime})&P_{\lambda}^{\prime}(t^{\prime})\end{pmatrix}\begin{pmatrix}a_{2}\\\
b_{1}\end{pmatrix}=\begin{pmatrix}0\\\ \frac{1}{2\pi(1-t^{\prime
2})}\end{pmatrix}.$ (343)
Solving for $a_{2}$ and $b_{1}$, we obtain
$\displaystyle\begin{pmatrix}a_{2}\\\ b_{1}\end{pmatrix}$
$\displaystyle=\frac{1}{(P_{\lambda}(-t^{\prime})P_{\lambda}^{\prime}(t^{\prime})+P_{\lambda}(t^{\prime})P^{\prime}_{\lambda}(-t^{\prime}))}\begin{pmatrix}P_{\lambda}^{\prime}(t^{\prime})&P_{\lambda}(t^{\prime})\\\
-P^{\prime}_{\lambda}(-t^{\prime})&P_{\lambda}(-t^{\prime})\end{pmatrix}\begin{pmatrix}0\\\
\frac{1}{2\pi(1-t^{\prime 2})}\end{pmatrix}$
$\displaystyle=\frac{-1}{2\pi(1-t^{\prime
2})\mathcal{W}\\{P_{\lambda}(t),P_{\lambda}(-t)\\}|_{t=t^{\prime}}}\begin{pmatrix}P_{\lambda}(t^{\prime})\\\
P_{\lambda}(-t^{\prime})\end{pmatrix},$ (344)
where $\mathcal{W}\\{\cdot,\cdot\\}|_{t=t^{\prime}}$ is the Wronskian,
$\displaystyle\mathcal{W}\\{P_{\lambda}(t),P_{\lambda}(-t)\\}$
$\displaystyle=\begin{vmatrix}P_{\lambda}(t)&P_{\lambda}(-t)\\\
\frac{d}{dt}P_{\lambda}(t)&\frac{d}{dt}P_{\lambda}(-t)\end{vmatrix}=\begin{vmatrix}P_{\lambda}(t)&P_{\lambda}(-t)\\\
P^{\prime}_{\lambda}(t)&-P^{\prime}_{\lambda}(-t)\end{vmatrix}$
$\displaystyle=-\left(P_{\lambda}(t)P_{\lambda}^{\prime}(-t)+P_{\lambda}(-t)P^{\prime}_{\lambda}(t)\right),$
(345)
evaluated at $t=t^{\prime}$. To compute the Wronskian of $P_{\lambda}(t)$ and
$P_{\lambda}(-t)$, we first note that the Wronskian of $P_{\lambda}(t)$ and
$Q_{\lambda}(t)$ is (eqn. 8.1.9 of Abramowitz1974 )
$\displaystyle\mathcal{W}\\{P_{\lambda}(t),Q_{\lambda}(t)\\}=\frac{1}{1-t^{2}}.$
(346)
Then, we use the relation (338) to obtain
$\displaystyle\mathcal{W}\\{P_{\lambda}(t),P_{\lambda}(-t)\\}$
$\displaystyle=\cos(\lambda\pi)\mathcal{W}\\{P_{\lambda}(t),P_{\lambda}(t)\\}-\frac{2}{\pi}\sin(\pi\lambda)\mathcal{W}\\{P_{\lambda}(t),Q_{\lambda}(t)\\}$
$\displaystyle=\frac{-2\sin(\pi\lambda)}{\pi(1-t^{2})},$ (347)
since $\mathcal{W}\\{P_{\lambda}(t),P_{\lambda}(t)\\}=0$. This with (344)
implies that $a_{2}$ and $b_{1}$ are
$\displaystyle\begin{pmatrix}a_{2}\\\ b_{1}\end{pmatrix}$
$\displaystyle=\frac{1}{4\sin(\pi\lambda)}\begin{pmatrix}P_{\lambda}(t^{\prime})\\\
P_{\lambda}(-t^{\prime})\end{pmatrix}.$ (348)
Plugging these into (340), we obtain the Green’s function
$\displaystyle{\Delta}(t,t^{\prime})=\frac{1}{4\sin(\pi\lambda)}\begin{cases}P_{\lambda}(t^{\prime})P_{\lambda}(-t)&\qquad\text{for
$t<t^{\prime}$,}\\\ P_{\lambda}(-t^{\prime})P_{\lambda}(t)&\qquad\text{for
$t>t^{\prime}$.}\end{cases}$ (349)
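The stitched solution (349) can be verified numerically. The sketch below (our addition) implements $P_{\lambda}$ through the standard hypergeometric series $P_{\lambda}(x)={}_{2}F_{1}(-\lambda,\lambda+1;1;\tfrac{1-x}{2})$, truncated at 300 terms (the series converges geometrically for the arguments used), and checks the continuity condition (335) and the jump condition (336). Note that $(k-\lambda)(k+\lambda+1)=k^{2}+k+1$ is real, so $P_{\lambda}$ is real on $(-1,1]$ despite the complex degree.

```python
import cmath
import math

lam = (-1 + 1j * math.sqrt(3)) / 2          # satisfies lambda*(lambda + 1) = -1
assert abs(lam * (lam + 1) + 1) < 1e-12

def legendre_p(x, terms=300):
    # P_lambda(x) = 2F1(-lambda, lambda+1; 1; (1-x)/2); the series
    # coefficients are real because (k - lam)(k + lam + 1) = k^2 + k + 1
    w = (1 - x) / 2
    total, coef = 0.0, 1.0
    for k in range(terms):
        total += coef
        coef *= (k * k + k + 1) / ((k + 1) ** 2) * w
    return total

sin_pl = cmath.sin(cmath.pi * lam)          # equals -cosh(pi*sqrt(3)/2), real
tp = 0.2                                    # a sample value of t'
a2 = legendre_p(tp) / (4 * sin_pl)          # coefficients from (348)
b1 = legendre_p(-tp) / (4 * sin_pl)

# continuity (335): both branches of (349) agree at t = t'
assert abs(a2 * legendre_p(-tp) - b1 * legendre_p(tp)) < 1e-12

# jump condition (336): (1 - t'^2) * [dDelta/dt]_{t'-}^{t'+} = 1/(2*pi)
h = 1e-5
d_above = b1 * (legendre_p(tp + h) - legendre_p(tp - h)) / (2 * h)
d_below = a2 * (legendre_p(-(tp + h)) - legendre_p(-(tp - h))) / (2 * h)
jump = (1 - tp ** 2) * (d_above - d_below)
assert abs(jump - 1 / (2 * math.pi)) < 1e-6
```

The jump check implicitly confirms the Wronskian value (347), since the coefficients (348) were derived from it.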
Putting $\Omega^{\prime}$ back at the north pole (and hence
$\theta^{\prime}=0$ and $t^{\prime}=1$) and recalling that
$\lambda=\frac{-1+i\sqrt{3}}{2}$, we obtain
$\displaystyle{\Delta}(\theta)=\frac{1}{4\sin(\pi\lambda)}P_{\frac{-1+i\sqrt{3}}{2}}(-\cos\theta).$
(350)
So, this is the Green’s function when $\Omega^{\prime}$ is the north pole. For
a generic point $\Omega^{\prime}$ on the sphere, spherical symmetry demands
that ${\Delta}$ only depend on the geodesic distance $\gamma$ between $\Omega$
and $\Omega^{\prime}$, which is given as
$\displaystyle\cos\gamma=\cos\theta\cos\theta^{\prime}+\sin\theta\sin\theta^{\prime}\cos(\phi-\phi^{\prime}),$
(351)
and we have
$\displaystyle{\Delta}(\Omega,\Omega^{\prime})=\frac{1}{4\sin(\pi\lambda)}P_{\frac{-1+i\sqrt{3}}{2}}(-\cos\gamma),$
(352)
as a solution to equation (331). We note that it does not matter which of
the two roots $\lambda=\frac{-1\pm i\sqrt{3}}{2}$ we choose, since
$P_{\lambda}(t)=P_{\lambda^{*}}(t)$; we have chosen the plus sign for
definiteness.
### C.2 Treatment of boundary term
In this section, we show that the boundary terms arising from partially
integrating the r.h.s. of (326) vanish.
One can see that this partial integration involves
$\displaystyle\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}{\Delta}(z,z^{\prime})D^{B^{\prime}}D^{A^{\prime}}\sigma_{A^{\prime}B^{\prime}}$
$\displaystyle=\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}D^{B^{\prime}}\left({\Delta}(z,z^{\prime})D^{A^{\prime}}\sigma_{A^{\prime}B^{\prime}}\right)-\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}D^{A^{\prime}}\left(\sigma_{A^{\prime}B^{\prime}}D^{B^{\prime}}{\Delta}(z,z^{\prime})\right)$
$\displaystyle\quad+\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}\left(D^{A^{\prime}}D^{B^{\prime}}{\Delta}(z,z^{\prime})\right)\sigma_{A^{\prime}B^{\prime}},$
(353)
so the boundary term arising from this procedure is proportional to the
quantity
$\displaystyle\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}D^{B^{\prime}}\left({\Delta}(z,z^{\prime})D^{A^{\prime}}\sigma_{A^{\prime}B^{\prime}}\right)-\int
d^{2}z^{\prime}\sqrt{\gamma^{\prime}}D^{A^{\prime}}\left(\sigma_{A^{\prime}B^{\prime}}D^{B^{\prime}}{\Delta}(z,z^{\prime})\right)$
$\displaystyle=-i\oint_{z}dz^{\prime}\gamma^{z^{\prime}{\bar{z}}^{\prime}}\left({\Delta}(z,z^{\prime}){\partial}_{{\bar{z}}^{\prime}}\sigma_{z^{\prime}z^{\prime}}-\sigma_{z^{\prime}z^{\prime}}{\partial}_{{\bar{z}}^{\prime}}{\Delta}(z,z^{\prime})\right)$
$\displaystyle\quad+i\oint_{z}d{\bar{z}}^{\prime}\gamma^{z^{\prime}{\bar{z}}^{\prime}}\left({\Delta}(z,z^{\prime}){\partial}_{z^{\prime}}\sigma_{{\bar{z}}^{\prime}{\bar{z}}^{\prime}}-\sigma_{{\bar{z}}^{\prime}{\bar{z}}^{\prime}}{\partial}_{z^{\prime}}{\Delta}(z,z^{\prime})\right),$
(354)
where we have used Stokes’ theorem. This vanishes if (a) ${\Delta}$ and
${\partial}_{{\bar{z}}^{\prime}}{\Delta}$ do not have $z^{\prime}$-poles at
$z^{\prime}=z$ and (b) ${\Delta}$ and ${\partial}_{z^{\prime}}{\Delta}$ do not
have ${\bar{z}}^{\prime}$-poles at $z^{\prime}=z$.
To show that both (a) and (b) are true, we start from the Green’s function
${\Delta}(z,z^{\prime})$ given in (324). For the moment, let us put
$z^{\prime},{\bar{z}}^{\prime}=0$ (the north pole) and restore them later.
This gives
$\displaystyle{\Delta}(z,0)$
$\displaystyle=\frac{1}{4\sin(\lambda\pi)}P_{\lambda}\left(\frac{z{\bar{z}}-1}{z{\bar{z}}+1}\right).$
(355)
Only the asymptotic behavior of ${\Delta}(z,0)$ near $z,{\bar{z}}=0$ is
relevant for the boundary contribution (354), and for this we need the
asymptotic behavior of $P_{\lambda}(t)$ near $t=-1$. This can be derived via
the asymptotic behaviors of $P_{\lambda}(t)$ and $Q_{\lambda}(t)$ near $t=1$,
which read DLMF
$\displaystyle P_{\lambda}(t)$ $\displaystyle\sim 1,\qquad
Q_{\lambda}(t)\sim\frac{1}{2}\ln\left(\frac{2}{1-t}\right),\qquad\text{as
$t\to 1$,}$ (356)
and using the relation (338), which yields
$\displaystyle P_{\lambda}(t)$
$\displaystyle\sim\frac{1}{\pi}\sin(\pi\lambda)\ln\left(1+t\right),\qquad\text{as
$t\to-1$.}$ (357)
Applying this to the Green’s function (355) with
$t=(z{\bar{z}}-1)/(z{\bar{z}}+1)$, we obtain
$\displaystyle{\Delta}(z,0)\sim\frac{1}{4\pi}\ln\left(z{\bar{z}}\right),\qquad\text{as
$z,{\bar{z}}\to 0$.}$ (358)
Restoring the reference point $z^{\prime}$, the asymptotic form of the Green’s
function near $z=z^{\prime}$ is888 One can also derive this without putting
$z^{\prime}=0$ in the first place. To do so, one notes that $\cos\gamma$ in
(351) for generic $z$ and $z^{\prime}$ can be obtained by taking the dot
product of two vectors of the form (325), and that it satisfies
$\displaystyle
1-\cos\gamma=1-\mathbf{n}_{z}\cdot\mathbf{n}_{z^{\prime}}=\frac{2(z^{\prime}-z)({\bar{z}}^{\prime}-{\bar{z}})}{(1+z{\bar{z}})(1+z^{\prime}{\bar{z}}^{\prime})}.$
(359) Then, taking $z=z^{\prime}+re^{i\phi}$ and expanding around $r=0$ leads
to $\displaystyle
1-\cos\gamma=\frac{2r^{2}}{(1+z^{\prime}{\bar{z}}^{\prime})^{2}}+O(r^{3}),$
(360) which, plugged into (357) for $P_{\lambda}(-\cos\gamma)$ and then into
(324), leads to ${\Delta}(z,z^{\prime})\sim\frac{1}{4\pi}\ln
r^{2}=\frac{1}{4\pi}\ln(z-z^{\prime})({\bar{z}}-{\bar{z}}^{\prime})$ for $r\to
0$, in agreement with (361).
$\displaystyle{\Delta}(z,z^{\prime})\sim\frac{1}{4\pi}\ln|z-z^{\prime}|^{2},\qquad\text{as
$(z,{\bar{z}})\to(z^{\prime},{\bar{z}}^{\prime})$.}$ (361)
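The algebraic identity (359) quoted in the footnote can also be spot-checked numerically; the sketch below (our addition) draws random points and compares $1-\mathbf{n}_{z}\cdot\mathbf{n}_{z^{\prime}}$, built from (325), against the closed form.

```python
import random

def n_vec(z):
    # Cartesian components of the unit vector in (325)
    zb = z.conjugate()
    d = 1 + (z * zb).real
    return ((z + zb).real / d, (1j * (zb - z)).real / d, (1 - (z * zb).real) / d)

random.seed(1)
for _ in range(100):
    z = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    w = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    dot = sum(a * b for a, b in zip(n_vec(z), n_vec(w)))
    # closed form (359): 1 - n.n' = 2(z'-z)(zbar'-zbar)/((1+z zbar)(1+z' zbar'))
    rhs = 2 * ((w - z) * (w.conjugate() - z.conjugate())).real \
        / ((1 + abs(z) ** 2) * (1 + abs(w) ** 2))
    assert abs((1 - dot) - rhs) < 1e-12
```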
One immediately sees that ${\Delta}$ has a logarithmic singularity at
$z=z^{\prime}$ and therefore has no poles there. Also,
${\partial}_{z^{\prime}}{\Delta}=\frac{1}{4\pi(z^{\prime}-z)}$ has no
${\bar{z}}^{\prime}$-pole at $z^{\prime}=z$, and
${\partial}_{{\bar{z}}^{\prime}}{\Delta}=\frac{1}{4\pi({\bar{z}}^{\prime}-{\bar{z}})}$
has no $z^{\prime}$-pole at $z=z^{\prime}$. Therefore, the boundary term (354)
receives no residues and vanishes.
## References
* (1) S. W. Hawking, _Breakdown of predictability in gravitational collapse_ , _Phys. Rev._ D14 (1976) 2460–2473.
* (2) S. W. Hawking, M. J. Perry and A. Strominger, _Soft hair on black holes_ , _Phys. Rev. Lett._ 116 (2016) 231301, [1601.00921].
* (3) S. W. Hawking, M. J. Perry and A. Strominger, _Superrotation charge and supertranslation hair on black holes_ , _JHEP_ 05 (2017) 161, [1611.09175].
* (4) H. Bondi, M. G. J. van der Burg and A. W. K. Metzner, _Gravitational waves in general relativity. 7. Waves from axisymmetric isolated systems_ , _Proc. Roy. Soc. Lond._ A269 (1962) 21–52.
* (5) R. K. Sachs, _Gravitational waves in general relativity. 8. Waves in asymptotically flat space-times_ , _Proc. Roy. Soc. Lond._ A270 (1962) 103–126.
* (6) A. Strominger, _Asymptotic symmetries of Yang-Mills theory_ , _JHEP_ 07 (2014) 151, [1308.0589].
* (7) A. Strominger, _On BMS invariance of gravitational scattering_ , _JHEP_ 07 (2014) 152, [1312.2229].
* (8) T. He, V. Lysov, P. Mitra and A. Strominger, _BMS supertranslations and Weinberg’s soft graviton theorem_ , _JHEP_ 05 (2015) 151, [1401.7026].
* (9) S. Hyun, S.-A. Park and S.-H. Yi, _Quasi-local charges and asymptotic symmetry generators_ , _JHEP_ 06 (2014) 151, [1403.2196].
* (10) T. Adamo, E. Casali and D. Skinner, _Perturbative gravity at null infinity_ , _Class. Quant. Grav._ 31 (2014) 225008, [1405.5122].
* (11) T. He, P. Mitra, A. P. Porfyriadis and A. Strominger, _New symmetries of massless QED_ , _JHEP_ 10 (2014) 112, [1407.3789].
* (12) M. Campiglia and A. Laddha, _Asymptotic symmetries of gravity and soft theorems for massive particles_ , _JHEP_ 12 (2015) 094, [1509.01406].
* (13) M. Campiglia, _Null to time-like infinity Green’s functions for asymptotic symmetries in Minkowski spacetime_ , _JHEP_ 11 (2015) 160, [1509.01408].
* (14) M. Campiglia and A. Laddha, _Asymptotic symmetries of QED and Weinberg’s soft photon theorem_ , _JHEP_ 07 (2015) 115, [1505.05346].
* (15) M. Campiglia and A. Laddha, _New symmetries for the gravitational S-matrix_ , _JHEP_ 04 (2015) 076, [1502.02318].
* (16) D. Kapec, M. Pate and A. Strominger, _New symmetries of QED_ , _Adv. Theor. Math. Phys._ 21 (2015) 1769–1785, [1506.02906].
* (17) S. G. Avery and B. U. W. Schwab, _Burg-Metzner-Sachs symmetry, string theory, and soft theorems_ , _Phys. Rev._ D93 (2016) 026003, [1506.05789].
* (18) S. G. Avery and B. U. W. Schwab, _Residual Local Supersymmetry and the Soft Gravitino_ , _Phys. Rev. Lett._ 116 (2016) 171601, [1512.02657].
* (19) V. Lysov, _Asymptotic Fermionic Symmetry From Soft Gravitino Theorem_ , 1512.03015.
* (20) A. Strominger, _Lectures on the infrared structure of gravity and gauge theory_ , 1703.05448.
* (21) R. Akhoury, S. Choi and M. J. Perry, _Holography from Singular Supertranslations on a Black Hole Horizon_ , 2205.07923.
* (22) H. Godazgar, M. Godazgar and M. J. Perry, _Asymptotic gravitational charges_ , _Phys. Rev. Lett._ 125 (2020) 101301, [2007.01257].
* (23) H. Godazgar, M. Godazgar and M. J. Perry, _Hamiltonian derivation of dual gravitational charges_ , _JHEP_ 09 (2020) 084, [2007.07144].
* (24) E. Witten, _Quantization of Chern-Simons Gauge Theory With Complex Gauge Group_ , _Commun. Math. Phys._ 137 (1991) 29–66.
* (25) G. Barnich and C. Troessaert, _BMS charge algebra_ , _JHEP_ 12 (2011) 105, [1106.0213].
* (26) H. Godazgar, M. Godazgar and C. N. Pope, _New dual gravitational charges_ , _Phys. Rev. D_ 99 (2019) 024013, [1812.01641].
* (27) U. Kol and M. Porrati, _Properties of Dual Supertranslation Charges in Asymptotically Flat Spacetimes_ , _Phys. Rev._ D100 (2019) 046019, [1907.00990].
* (28) H. Godazgar, M. Godazgar and C. N. Pope, _Tower of subleading dual BMS charges_ , _JHEP_ 03 (2019) 057, [1812.06935].
* (29) H. Godazgar, M. Godazgar and C. N. Pope, _Dual gravitational charges and soft theorems_ , _JHEP_ 10 (2019) 123, [1908.01164].
* (30) A. Strominger, _Magnetic Corrections to the Soft Photon Theorem_ , _Phys. Rev. Lett._ 116 (2016) 031602, [1509.00543].
* (31) A. H. Taub, _Empty space-times admitting a three parameter group of motions_ , _Annals Math._ 53 (1951) 472–490.
* (32) E. Newman, L. Tamburino and T. Unti, _Empty space generalization of the Schwarzschild metric_ , _J. Math. Phys._ 4 (1963) 915.
* (33) G. Satishchandran and R. M. Wald, _Asymptotic behavior of massless fields and the memory effect_ , _Phys. Rev._ D99 (2019) 084007, [1901.05942].
* (34) S. Holst, _Barbero’s Hamiltonian derived from a generalized Hilbert-Palatini action_ , _Phys. Rev. D_ 53 (1996) 5966–5969, [gr-qc/9511026].
* (35) M. Geiller, _Lorentz-diffeomorphism edge modes in 3d gravity_ , _JHEP_ 02 (2018) 029, [1712.05269].
* (36) M. Geiller, _Edge modes and corner ambiguities in 3d Chern–Simons theory and gravity_ , _Nucl. Phys. B_ 924 (2017) 312–365, [1703.04748].
* (37) A. J. Speranza, _Local phase space and edge modes for diffeomorphism-invariant theories_ , _JHEP_ 02 (2018) 021, [1706.05061].
* (38) V. Hosseinzadeh, A. Seraj and M. M. Sheikh-Jabbari, _Soft Charges and Electric-Magnetic Duality_ , _JHEP_ 08 (2018) 102, [1806.01901].
* (39) L. Freidel and D. Pranzetti, _Electromagnetic duality and central charge_ , _Phys. Rev. D_ 98 (2018) 116008, [1806.03161].
* (40) B. S. DeWitt, _Quantum Theory of Gravity. 1. The Canonical Theory_ , _Phys. Rev._ 160 (1967) 1113–1148.
* (41) S. He, Y.-t. Huang and C. Wen, _Loop Corrections to Soft Theorems in Gauge Theories and Gravity_ , _JHEP_ 12 (2014) 115, [1405.1410].
* (42) J. D. Brown and M. Henneaux, _Central Charges in the Canonical Realization of Asymptotic Symmetries: An Example from Three-Dimensional Gravity_ , _Commun. Math. Phys._ 104 (1986) 207–226.
* (43) A. Strominger and A. Zhiboedov, _Superrotations and Black Hole Pair Creation_ , _Class. Quant. Grav._ 34 (2017) 064002, [1610.00639].
* (44) R. Szmytkowski, _Closed form of the generalized green’s function for the helmholtz operator on the two-dimensional unit sphere_ , _Journal of Mathematical Physics_ 47 (2006) 063506.
* (45) M. Abramowitz and I. Stegun, _Handbook of Mathematical Functions, With Formulas, Graphs, and Mathematical Tables_. Dover Publications, Inc., USA, 1974.
* (46) “NIST Digital Library of Mathematical Functions.” http://dlmf.nist.gov/, Release 1.0.28 of 2020-09-15.
# SPADE: Semi-supervised Anomaly Detection under
Distribution Mismatch
Jinsung Yoon, Kihyuk Sohn, Chun-Liang Li, Sercan Ö. Arik, Tomas Pfister
{jinsungyoon, kihyuks, chunliang, soarik<EMAIL_ADDRESS>
Google Cloud AI
###### Abstract
Semi-supervised anomaly detection is a common problem, as often the datasets
containing anomalies are partially labeled. We propose a canonical framework:
Semi-supervised Pseudo-labeler Anomaly Detection with Ensembling (SPADE) that
isn’t limited by the assumption that labeled and unlabeled data come from the
same distribution. Indeed, the assumption is often violated in many
applications – for example, the labeled data may contain only anomalies unlike
unlabeled data, or unlabeled data may contain different types of anomalies, or
labeled data may contain only ‘easy-to-label’ samples. SPADE utilizes an
ensemble of one class classifiers as the pseudo-labeler to improve the
robustness of pseudo-labeling with distribution mismatch. Partial matching is
proposed to automatically select the critical hyper-parameters for pseudo-
labeling without validation data, which is crucial with limited labeled data.
SPADE shows state-of-the-art semi-supervised anomaly detection performance
across a wide range of scenarios with distribution mismatch in both tabular
and image domains. In some common real-world settings, such as the model facing new
types of unlabeled anomalies, SPADE outperforms the state-of-the-art
alternatives by 5% AUC on average.
## 1 Introduction
Anomaly detection has numerous real-world applications, including
identification of manufacturing defects, network security threats, and
financial fraud (Chalapathy & Chawla, 2019; Ahmed et al., 2016; Vanerio &
Casas, 2017). Anomaly detection can be considered in different settings. One
is the fully-supervised setting, where the labels for all samples are
available, for both normal and anomalous samples (Chawla et al., 2002;
Estabrooks et al., 2004; Hwang et al., 2011; Barua et al., 2012). This setting
is typically addressed with specialized approaches for data imbalance, e.g.
weighted loss functions or resampling methods. An important special case of
this fully-supervised setting is when only labeled normal samples exist
(Schölkopf et al., 1999; Tax & Duin, 2004; Ruff et al., 2018; Golan & El-
Yaniv, 2018; Sohn et al., 2021; Li et al., 2021), for which one class
classifiers (OCCs) (e.g. with SVM (Schölkopf et al., 1999) or auto-encoder
(Ruff et al., 2018)) and Isolation Forest (Liu et al., 2008) are popular
approaches. Despite being widely studied, these supervised settings face a key
challenge for real-world use: their tedious labeling requirement. At
the other extreme, there is the fully unsupervised anomaly detection setting
where no labeled data is available (Breunig et al., 2000; Liu et al., 2008;
Zong et al., 2018; Bergman & Hoshen, 2019; Yoon et al., 2022). While the
labeling costs can be entirely eliminated for this setting, the performance
degradation is often significant compared to the supervised setting (Bergman &
Hoshen, 2019; Zong et al., 2018), limiting its applicability for deployment.
Figure 1: Three common real-world settings with labeled and unlabeled data
coming from different distributions. (Left) Labeled data only include
anomalous samples while unlabeled data have both anomalous and normal.
(Middle) The anomaly is a new type (yellow boxes) which isn’t in labeled data.
(Right) Labeled data only have ‘easy-to-label’ samples while unlabeled data
include ‘hard-to-label’ samples (yellow boxes).
To achieve the best of both worlds, we focus on the semi-supervised anomaly
detection setting, aiming to achieve high performance with a limited labeling
budget. In previous works on semi-supervised anomaly detection (Zhang & Zuo,
2008; Bekker & Davis, 2020; Blanchard et al., 2010; Akcay et al., 2018;
Görnitz et al., 2013; Ruff et al., 2020), some focus on the positive-unlabeled
setting (Zhang & Zuo, 2008; Bekker & Davis, 2020), and others utilize one-
class classifiers or adversarial training on semi-supervised learning (Görnitz
et al., 2013; Akcay et al., 2018). Ruff et al. (2020) treats all unlabeled
data as normal samples to construct an anomaly detector in semi-supervised
settings. In addition, any semi-supervised learning method (even when they
aren’t developed for anomaly detection) can be adapted to the semi-supervised
anomaly detection setting (Sohn et al., 2020; Chen et al., 2020a; Grill et
al., 2020).
Most semi-supervised learning methods assume that the labeled and unlabeled
data come from the same distributions (Sohn et al., 2020; Chen et al., 2020a;
Grill et al., 2020). In other words, the subsets of the data are labeled such
that sampling from the unlabeled data is uniformly random. However, in
practice, this assumption often does not hold: _distribution mismatch_
commonly occurs, with labeled and unlabeled data coming from different
distributions. Some works (Kim et al., 2020) tackle this in a limited setting
where only the label distributions are different (e.g., the anomalous ratio is
10% for training but 50% for testing); however, there are other, more general
real-world scenarios, as exemplified in Fig. 1. First, positive and unlabeled
(PU) or negative and unlabeled (NU) settings are common, where the
distributions between labeled (either positive or negative) and unlabeled
(both positive and negative) samples are different (see Fig. 1(Left)) (Zhang &
Zuo, 2008; Bekker & Davis, 2020). Second, additional unlabeled data can be
gathered after labeling, causing distribution shift. For example,
manufacturing processes may keep evolving and thus, the corresponding defects
can change and the defect types at labeling differ from the defect types in
unlabeled data (see Fig. 1(Middle)). In addition, for financial fraud
detection and anti-money laundering applications, new anomalies can appear
after the data labeling process, as the criminals adapt themselves. Lastly,
human labelers are more confident on easy samples; thus, easy samples are more
likely to be included in the labeled data and difficult samples are more
likely to be included in the unlabeled data (see Fig. 1(Right)). For example,
with some crowd-sourcing-based labeling tools, only the samples with some
consensus on the labels (as a measure of confidence) are included in the
labeled set.
As we experimentally demonstrate (in Sec. 5), standard semi-supervised
learning methods (Sohn et al., 2020; Chen et al., 2020a; Grill et al., 2020)
are sub-optimal for anomaly detection under distribution mismatch, because
they are developed with the assumption that labeled and unlabeled data come
from the same distribution. Generated pseudo-labels are highly dependent on a
small set of labeled data; thus, the trained semi-supervised models would be
biased toward the labeled data distribution. Transfer learning methods or the
frameworks for distribution shifts may constitute alternatives (Pan & Yang,
2009; Yu et al., 2020; Raina et al., 2007) by treating source/target data as
labeled/unlabeled data. However, these have not been effective with a small
number of source (labeled) samples (as shown in Sec. 5).
In this paper, we propose a novel semi-supervised anomaly detection framework,
SPADE, which yields strong and robust performance even under distribution
mismatch. The contributions of this paper can be summarized as follows:
* •
Motivated by these common real-world scenarios, we tackle the distribution
mismatch problem for semi-supervised anomaly detection, which is critical but
under-explored.
* •
We propose a novel semi-supervised learning framework, SPADE. Carefully-
designed components of SPADE enable robust semi-supervised learning. As such,
SPADE introduces a pseudo-labeling mechanism using an ensemble of OCCs and a
judicious way of combining supervised and self-supervised learning.
* •
SPADE reduces the dependence on the labeled data as the predictors are trained
with a small number of labeled and pseudo-labeled samples.
* •
We propose a novel approach using a partial matching method (Du Plessis &
Sugiyama, 2014) to pick hyperparameters without a validation set. This
innovation is critical, as conventional hyperparameter selection relies on a
validation set, which is often unavailable in real-world applications with
limited labeled data.
* •
We show state-of-the-art semi-supervised anomaly detection performance of
SPADE in multiple settings that represent common real-world scenarios. AUC
improvements of SPADE can be up to $10.6\%$ on tabular data and $3.6\%$ on
image data.
* •
We focus on an important real-world machine learning challenge: fraud
detection with distribution shifts over time due to the adversarial nature of
the environment. We show that SPADE consistently outperforms existing methods
considered.
Frameworks | Description | Use of data | Examples
---|---|---|---
Supervised classification | Train supervised model with labeled data | L | MLP, RF, XGBoost
Negative supervised classification | Train supervised model while treating unlabeled data as normal data | L+U | MLP, RF, XGBoost
One-class classifier (OCC) | Train OCC only with labeled normal data | L(normal) | OC-SVM, GDE
Negative OCC | Train OCC while treating unlabeled data as normal data | L(normal)+U | OC-SVM, GDE
Unsupervised OCC | Train OCC with unlabeled data refinement | L(normal)+U | SRR (Yoon et al., 2022)
Semi-supervised learning | Train a predictive model via pseudo-labeling and representation learning | L+U | FixMatch (Sohn et al., 2020), VIME (Yoon et al., 2020)
Domain adaptation | Train a predictive model via domain-invariant representation learning | L+U | DANN (Ganin et al., 2016)
PU learning | Train a predictive model only with L (anomalous) + U via weighted ensemble learning | L(anomalous)+U | Elkanoto(Elkan & Noto, 2008), BaggingPU(Mordelet & Vert, 2014)
Table 1: Conventional approaches to tackle anomaly detection in semi-
supervised settings with distribution mismatch. (L: Labeled data, U: Unlabeled
data, MLP: Multi-layer Perceptron, RF: Random Forest, GDE: Gaussian
Distribution Estimator).
## 2 Related Work
Semi-supervised learning. State-of-the-art methods (Sohn et al., 2020; Chen et
al., 2020a; Grill et al., 2020) are developed under the assumption that both
labeled and unlabeled samples come from the same distribution. They use
pseudo-labeling approaches based on the consistency of label predictions under
different augmentations. Such approaches are highly dependent on the small
amount of labeled data. Thus, the bias from the labeled data would propagate
to the pseudo-labels of the unlabeled data, yielding a biased
predictive model when there is distribution mismatch between labeled and
unlabeled data. Kim et al. (2020) tackles this in the setting where only the
label priors are different. DeepSAD (Ruff et al., 2020) tackles the semi-
supervised anomaly detection problem while treating unlabeled samples as
normal.
By employing OCCs, SPADE differs from typical pseudo-labeling
methods used in semi-supervised learning (Lee et al., 2013; Sohn et al., 2020)
that require building binary classifiers to assign pseudo-labels. We argue
that OCC-based pseudo-labeling is better-suited when there exists distribution
mismatch between labeled and unlabeled data, a common pitfall for semi-
supervised anomaly detection applications, and more universally applicable
(e.g., a binary classifier is not available in PU settings). Yoon et al.
(2022) also employs an ensemble of OCCs, but for fully-unsupervised settings.
However, it only identifies pseudo-normal samples from unlabeled data and it
requires prior knowledge of the label distribution, which may not be available
in practice (more details can be found in Appendix A.4).
Distribution mismatch. Some recent works directly addressed the distribution
mismatch between labeled and unlabeled data. Chen et al. (2020b) and Saito et
al. (2021) assume that the distributions of the labeled and testing data are
the same but the unlabeled data include additional out-of-distribution
samples. Both papers focus on filtering out out-of-distribution samples from
the unlabeled data to match the distribution between labeled and unlabeled
data. On the other hand, in SPADE, the testing distribution is the union of
the labeled and unlabeled distributions and the labeled data distribution is
different from the testing distribution. Pang et al. (2019) assumes the
existence of positively labeled samples which are included in the PU scenarios
in SPADE. Pang et al. (2021) further assumes new anomaly types in unlabeled
data, which is also addressed in this paper (see Sec. 5.1).
Domain adaptation. Various methods have been proposed to address the issue of
the training distribution being different from the testing distribution (Long
et al., 2016; Baktashmotlagh et al., 2013; Sun et al., 2019). These often
focus on learning domain-invariant representations for better generalization
to testing set with different distributions. If we assume that we have access
to features of the test data (which is a common assumption in domain
adaptation), we can consider the domain adaptation problem as a semi-
supervised learning problem where training data are treated as labeled and
test data are treated as unlabeled. However, with a small amount of labeled
data (which is less common in the domain adaptation setting), the performance
of a model trained on such small source data would be limited.
Positive-Unlabeled (PU) learning. An important special scenario is when we
only have the positive samples as the labeled data, while unlabeled data
include both positive and negative samples (Zhang & Zuo, 2008). In this
setting, the labeled data distribution is clearly different from the unlabeled
data, as a special case of semi-supervised anomaly detection with distribution
mismatch. Related literature on PU learning is summarized in Bekker & Davis
(2020). There are two commonly-used approaches: (i) two-stage models (He et
al., 2018; Chaudhari & Shevade, 2012), where the first stage is discovering
the confident negative labels and the second stage is training the supervised
model using positive labels and confident negative labels; (ii) biased
learning by treating all the unlabeled data as negative samples with class
label noise (Liu et al., 2003; Sellamanickam et al., 2011). The shortcoming of
(i) is excluding the possible positive samples from unlabeled data, whereas
the shortcoming of (ii) is contamination of unlabeled data that affects model
training. While being relevant, these are limited to the special case of PU
setting, and sub-optimal when applied to the general semi-supervised settings.
## 3 Problem Formulation
We focus on the general semi-supervised anomaly detection problem with
distribution mismatch. Consider the given labeled training data
$\mathcal{D}^{l}=\{(\textbf{x}_{i}^{l},y_{i}^{l})\}_{i=1}^{N_{l}}$ and
unlabeled training data
$\mathcal{D}^{u}=\{\textbf{x}_{j}^{u}\}_{j=1}^{N_{u}}$.
$\textbf{x}^{l}\sim\mathcal{P}_{X}^{l}$ and
$\textbf{x}^{u}\sim\mathcal{P}_{X}^{u}$ are the feature vectors and
$\mathcal{P}_{X}^{l}$ and $\mathcal{P}_{X}^{u}$ are corresponding feature
distributions of the labeled and unlabeled data, respectively. For anomaly
detection, the labels $y\in\mathcal{Y}$ are either normal ($0$) or anomalous
($1$) and there are far more normal examples than anomalies, i.e.,
$\mathbb{P}(y=0)\gg\mathbb{P}(y=1)$. Most semi-supervised methods assume that
both labeled and unlabeled data come from the same distribution (i.e.,
$\mathcal{P}_{X}^{l}=\mathcal{P}_{X}^{u}$). In this work, we do not rely on
this assumption and allow the distributions of labeled and unlabeled data to
be different (i.e.,
$\mathcal{P}_{X}^{l}\neq\mathcal{P}_{X}^{u}$). We exemplify such scenarios in
Fig. 1. For instance, if new anomaly types are only included in the unlabeled
data, $\mathcal{P}_{X}^{u}$ would be different from $\mathcal{P}_{X}^{l}$. The
labels $y$ are determined by the unknown function
$f^{*}:\mathcal{X}\rightarrow\mathcal{Y}$ where
$\textbf{x}^{l},\textbf{x}^{u}\in\mathcal{X}$. Our main objective is to
construct an anomaly detection model $f:\mathcal{X}\rightarrow\mathcal{Y}$
that can minimize the test loss $\mathcal{L}(f(x),y)$ in the union of
$\mathcal{P}_{X}^{l}$ and $\mathcal{P}_{X}^{u}$. As a way of motivation, the
conventional approaches to tackle this problem along with their limitations
are summarized in Table. 1. All these are quantitatively compared with SPADE
in Sec. 5. Further details can be found in Appendix A.
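To make the mismatch concrete, the setting above can be sketched as a split in which some anomaly types never appear in the labeled data (the helper name and its arguments are our own illustration, not part of the paper's setup):

```python
import numpy as np

def make_mismatched_split(y, anomaly_type, labeled_types, n_labeled, seed=0):
    """Return a boolean mask over samples marking the labeled subset.

    y: 0/1 labels; anomaly_type: per-sample type id (-1 for normal samples);
    labeled_types: anomaly types allowed in the labeled set. Anomaly types
    outside labeled_types end up only in the unlabeled pool, so the feature
    distributions P_X^l and P_X^u differ by construction.
    """
    rng = np.random.default_rng(seed)
    eligible = (y == 0) | np.isin(anomaly_type, labeled_types)
    idx = rng.permutation(np.flatnonzero(eligible))[:n_labeled]
    labeled = np.zeros(len(y), dtype=bool)
    labeled[idx] = True
    return labeled  # ~labeled is the unlabeled pool
```

Under this construction, any detector trained only on the labeled mask never sees the held-out anomaly types, which is exactly the "new anomalies" scenario evaluated in Sec. 5.1.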
## 4 Proposed Method - SPADE
Sec. 4.1 first explains the design principles of SPADE, and then the
implementation details are provided in the subsequent subsections. Sec. 4.2
introduces building blocks of the framework, Sec. 4.3 and 4.4 explain the
details of the pseudo-labeler and Sec. 4.5 describes loss functions and
optimization.
### 4.1 Desiderata
The core idea of our framework, Semi-supervised Pseudo-labeler Anomaly
Detection with Ensembling (SPADE), is based on self-training, following recent
advances in semi-supervised learning (Sohn et al., 2020; Chen et al., 2020a).
We aim to train a binary classifier for normal and anomalous data by
iteratively learning from labeled and pseudo-labeled data. As such, the key
component is the pseudo-labeler to assign binary labels to unlabeled data.
While it is common to use a trained binary classifier for pseudo-labeling (Lee
et al., 2013; Sohn et al., 2020), we argue that it may be sub-optimal for
anomaly detection with distribution shift as the decision boundaries of binary
classifiers could be highly biased by the small labeled data. As shown in Fig.
2(b, c), this would have a negative impact when labeled and unlabeled data
distributions are mismatched. Instead, we decouple the pseudo-labeler from the
trained binary classifier and build it with OCCs. Although this cannot utilize
the positively labeled data as binary classifiers do, it prevents
overfitting to the small amount of labeled data and thus can be more robust
to distribution shifts, as shown in Fig. 2(d).
Figure 2: Examples in semi-supervised anomaly detection with distribution
mismatch. (a) Original data distribution. Note that the labeled (color) and
unlabeled (grey) data distributions are different; (b) Standard supervised
learning approach only with labeled data; (c) Standard supervised learning
approach after treating all the unlabeled data as normal samples; and (d) OCC
without using labels. Purple line represents the decision boundary.
### 4.2 Building blocks
Figure 3: (Left) Block diagram of the proposed semi-supervised anomaly
detection framework, SPADE. (Right) Zoomed-in block diagram of
the proposed pseudo-labeler, an ensemble of OCCs. The predictor is a
binary classifier. The blue line represents the inference path.
Fig. 3 illustrates the four components of the SPADE framework: (i) (data) encoder,
(ii) predictor, (iii) pseudo-labeler, and (iv) projection head. First, the
encoder $h:\mathcal{X}\rightarrow\mathcal{H}$ maps the input features x into
latent representations $\textbf{r}=h(\textbf{x})$. Any neural network
architecture can be employed as the encoder – in our experiments, we use a
multi-layer perceptron (MLP) for tabular data and convolutional neural
networks (CNNs) for image data. The predictor $q:\mathcal{H}\rightarrow\mathcal{Y}$ utilizes the
learned representation r to output the anomaly scores $q(\textbf{r})$. The
anomaly score is determined by the encoder ($h$) and predictor ($q$) as
follows: $q(h(\textbf{x}))$. The pseudo-labeler and projection head support
the training of the encoder and predictor. The pseudo-labeler
$v:\mathcal{H}\rightarrow\{0,1,-1\}$ determines the pseudo-labels of the
unlabeled data $\textbf{x}^{u}$ using an ensemble of OCCs:
$v(h(\textbf{x}^{u}))=1/0/-1$ represents pseudo-anomalous/pseudo-
normal/unlabeled. The predictor only utilizes the labeled data and the unlabeled
data with $v(h(\textbf{x}^{u}))\in\{0,1\}$ for training. Lastly, the projection head
$g:\mathcal{H}\rightarrow\mathcal{G}$ supports representation
learning of the encoder. Any representation learning method can be utilized,
such as contrastive learning or pretext-task prediction (e.g., a masked
autoencoder).
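The composition $q(h(\textbf{x}))$ can be illustrated with a minimal numpy sketch (the toy layer sizes and random initializations are our assumptions; any encoder/predictor pair fits the framework):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

class Encoder:                      # h : X -> H
    def __init__(self, d_in, d_hid):
        self.W = rng.normal(scale=0.1, size=(d_in, d_hid))
    def __call__(self, x):
        return relu(x @ self.W)     # latent representation r = h(x)

class Predictor:                    # q : H -> anomaly score in (0, 1)
    def __init__(self, d_hid):
        self.w = rng.normal(scale=0.1, size=d_hid)
    def __call__(self, r):
        return 1.0 / (1.0 + np.exp(-(r @ self.w)))

h = Encoder(d_in=8, d_hid=16)
q = Predictor(d_hid=16)
x = rng.normal(size=(4, 8))
scores = q(h(x))                    # anomaly score = q(h(x))
```

The pseudo-labeler and projection head (not shown) attach to the same latent representation r during training.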
### 4.3 Pseudo-labeling via consensus
A major novel component of SPADE is the design of its pseudo-labeler. The pseudo-
labeler ($v$ in Fig. 3) is composed of an ensemble of $K$ OCCs
($o_{1},o_{2},...,o_{K}$). Each OCC is trained with the negative labeled data
($\mathcal{D}_{0}^{l}$) and one of $K$ disjoint subsets of unlabeled data
($\mathcal{D}_{1}^{u},\mathcal{D}_{2}^{u},...,\mathcal{D}_{K}^{u}$).
$o_{k}(\textbf{x})$ outputs the anomaly score of x. We assign a positive
pseudo-label (i.e., an anomalous prediction) to an unlabeled sample if all
OCCs agree on it: $v(h(\textbf{x}^{u}))=1$ if
$\prod_{k=1}^{K}\hat{y}^{pu}_{k}=1$, where

$\hat{y}^{pu}_{k}=\begin{cases}1&\text{if }o_{k}(h(\textbf{x}^{u}))>\eta_{k}^{p}\\ 0&\text{otherwise}\end{cases}$
(1)
Similarly, we assign a negative pseudo-label (i.e., normal) if all OCCs agree
on negative pseudo-labels: $v(h(\textbf{x}^{u}))=0$ if
$\prod_{k=1}^{K}\hat{y}^{nu}_{k}=1$, where

$\hat{y}^{nu}_{k}=\begin{cases}1&\text{if }o_{k}(h(\textbf{x}^{u}))<\eta_{k}^{n}\\ 0&\text{otherwise}\end{cases}$
(2)
Unlabeled data without consensus are annotated as unknown:
$v(h(\textbf{x}^{u}))=-1$ otherwise.
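Eqs. 1 and 2 amount to a unanimous vote over the $K$ OCC scores. A sketch, assuming the scores are arranged as a $(K, N)$ array (our convention):

```python
import numpy as np

def consensus_pseudo_labels(scores, eta_p, eta_n):
    """scores: (K, N) anomaly scores from K OCCs; eta_p/eta_n: (K,) thresholds.

    Returns per-sample pseudo-labels: 1 (pseudo-anomalous), 0 (pseudo-normal),
    -1 (no consensus), following Eqs. 1-2.
    """
    scores = np.asarray(scores, dtype=float)
    pos = np.all(scores > np.asarray(eta_p)[:, None], axis=0)  # all OCCs say anomalous
    neg = np.all(scores < np.asarray(eta_n)[:, None], axis=0)  # all OCCs say normal
    v = np.full(scores.shape[1], -1)
    v[pos] = 1
    v[neg] = 0
    return v
```

Requiring unanimity trades recall for precision: a sample is pseudo-labeled only when every OCC, each trained on a different unlabeled subset, agrees.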
### 4.4 Determining $\eta^{p},\eta^{n}$ using partial matching
In the SPADE framework, the thresholds $\eta^{p}$ and $\eta^{n}$ are critical
parameters. One option is to consider them as user-defined hyper-parameters
and determine them by hyper-parameter optimization. However, hyper-parameter
tuning requires extra validation data, which would have to come from the
labeled training set (with the same impact as reducing the number of labeled
training samples, which matters in the semi-supervised setting). Instead, we
learn these parameters without sacrificing labeled data for validation, by
adapting the partial matching method (Christoffel et
al., 2016), which was developed to estimate the marginal distribution of
unlabeled data by matching it to a known one-class (either
positive or negative) distribution. The underlying intuition is that normal
samples are closer to other normal samples, and anomalous samples are closer
to other anomalous samples. In our case, we match the distribution of anomaly
scores of the positive labeled data to that of unlabeled data to estimate
their marginal distribution and determine $\eta^{p}$ accordingly. The same is
applied to determine $\eta^{n}$ using negative labeled data. Formulations for
$\eta^{p}$ and $\eta^{n}$ are given in Eqs. 3 and 4 below:
$\displaystyle\eta^{p}_{k}=\arg\min_{\eta}D_{w}\big(\{o_{k}(h(\textbf{x}^{l}))\mid y^{l}=1\},\{o_{k}(h(\textbf{x}^{u}))>\eta\}\big)$ (3)

$\displaystyle\eta^{n}_{k}=\arg\min_{\eta}D_{w}\big(\{o_{k}(h(\textbf{x}^{l}))\mid y^{l}=0\},\{o_{k}(h(\textbf{x}^{u}))<\eta\}\big)$ (4)
where $D_{w}$ is the Wasserstein distance between two distributions. That is,
we select for pseudo-labeling the subsets of the unlabeled data whose
Wasserstein distance from the labeled data is minimal.
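A sketch of the threshold search of Eq. 3, with a simple quantile-based approximation of the one-dimensional Wasserstein distance standing in for $D_{w}$ (the grid size and the exhaustive candidate search over observed scores are our simplifications):

```python
import numpy as np

def w1(a, b, n_grid=512):
    # quantile-based approximation of the 1-D Wasserstein-1 distance
    qs = (np.arange(n_grid) + 0.5) / n_grid
    return float(np.mean(np.abs(np.quantile(a, qs) - np.quantile(b, qs))))

def partial_match_threshold(pos_labeled_scores, unlabeled_scores):
    """Eq. 3: eta minimizing D_w between the scores of positively labeled
    data and the unlabeled scores above eta (Eq. 4 is symmetric, with '<')."""
    u = np.sort(np.asarray(unlabeled_scores, dtype=float))
    best_eta, best_d = None, np.inf
    for eta in u[:-1]:               # keep at least one score above eta
        subset = u[u > eta]
        d = w1(pos_labeled_scores, subset)
        if d < best_d:
            best_eta, best_d = eta, d
    return best_eta
```

Intuitively, the search slides eta until the tail of the unlabeled score distribution looks most like the labeled-anomaly score distribution.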
In some semi-supervised settings such as PU and NU, only one class of labeled
samples is available. In that case, we employ Otsu’s method (Otsu, 1979) to
identify the threshold for the class without labeled samples. Otsu’s method
determines the threshold that minimizes the intra-class anomaly
score variance in an unsupervised way. For instance, in the PU setting, we set
$\eta^{p}$ using Eq. 3 and $\eta^{n}$ using Otsu’s method.
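For the class without labeled samples, Otsu's method applied to anomaly scores can be sketched as follows (the bin count is our choice; maximizing between-class variance is equivalent to minimizing intra-class variance):

```python
import numpy as np

def otsu_threshold(scores, n_bins=64):
    """Otsu's method: pick the score threshold maximizing the between-class
    variance of the resulting two-way split (equivalently, minimizing the
    intra-class variance)."""
    scores = np.asarray(scores, dtype=float)
    hist, edges = np.histogram(scores, bins=n_bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = edges[1], -1.0
    for i in range(1, n_bins):
        w0, w1 = p[:i].sum(), p[i:].sum()      # class weights below/above
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:i] * centers[:i]).sum() / w0  # class means
        mu1 = (p[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = edges[i], var_between
    return best_t
```

On a clearly bimodal score distribution the threshold lands in the gap between the two modes, which is the behavior the NU/PU fallback relies on.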
### 4.5 Loss functions and optimization
We train the anomaly detection model $q(h(\cdot))$ using three loss functions:
(i) binary cross entropy (BCE) on labeled and (ii) BCE on pseudo-labeled data,
and (iii) self-supervised loss on the entire data. The self-supervised module
$g$ (e.g., decoder for reconstruction loss, MLP projection head for
contrastive loss) is jointly trained with an auxiliary self-supervised loss.
Next, we describe the loss formulations. The BCE loss on the labeled data is
defined as:
$\mathcal{L}_{Y^{l}}=\mathbb{E}\big[\mathcal{L}_{BCE}(q(h(\textbf{x}^{l})),y^{l})\big],$
and the BCE loss on pseudo-labeled data as:
$\mathcal{L}_{Y^{u}}=\mathbb{E}\big[\mathcal{L}_{BCE}(q(h(\textbf{x}^{u})),v(h(\textbf{x}^{u})))\times\mathbbm{1}\{v^{u}\,{\in}\,\{0,1\}\}\big].$
Here, instead of subsampling unlabeled data with known pseudo-labels, we
assign a binary weight ($\mathbbm{1}\{v^{u}\,{\in}\,\{0,1\}\}$) to each
unlabeled sample so that the loss contribution from pseudo-labeled data can be
controlled based on the model quality.
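The masked pseudo-label loss can be sketched in numpy as follows (a stand-in for the training framework; the clipping constant is our choice for numerical stability):

```python
import numpy as np

def masked_bce(scores, v):
    """BCE over unlabeled samples, zero-weighted where v == -1 (no consensus).

    scores: predicted anomaly probabilities q(h(x^u)); v: pseudo-labels in
    {0, 1, -1}. Implements E[L_BCE * 1{v in {0, 1}}] over the unlabeled set.
    """
    scores = np.clip(np.asarray(scores, dtype=float), 1e-7, 1 - 1e-7)
    v = np.asarray(v)
    mask = (v == 0) | (v == 1)                 # binary weight 1{v in {0,1}}
    y = np.where(mask, v, 0).astype(float)     # dummy target where masked
    per_sample = -(y * np.log(scores) + (1 - y) * np.log(1 - scores))
    return float(np.mean(per_sample * mask))   # expectation over all samples
```

Samples without consensus thus contribute zero gradient while still counting in the expectation, matching the indicator-weighted formulation above.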
To improve the quality of the encoder ($h$), we utilize auxiliary self-
supervised losses with various pretext tasks depending on application domain.
This may include the reconstruction objective:
$\mathcal{L}_{R}=\mathbb{E}\big[\mathcal{L}_{MSE}(\textbf{x},g(h(\textbf{x})))\big],$
or objectives more specific to the data type, such as contrastive learning (Chen
et al., 2020a) and CutPaste (Li et al., 2021) for images.
Overall, the encoder ($h$), predictor ($q$), and the self-supervised module
($g$) are trained by solving the following optimization problem:
$h^{*},g^{*},q^{*}=\arg\min_{h,g,q}\big[\mathcal{L}_{Y^{l}}+\alpha\mathcal{L}_{Y^{u}}+\beta\mathcal{L}_{R}\big],$
(5)
where $\alpha,\beta$ are hyper-parameters (we set both $\alpha$ and $\beta$ to
1.0 in the experiments). The training loss is used as the convergence
criterion: if no improvement is observed in the loss for 5 epochs, we consider
the models converged. Note that the pseudo-labeler also converges during
training, often faster. The overall pseudo-code can be found in Alg. 1.
Algorithm 1 Semi-supervised Pseudo-labeler Anomaly Detection with Ensembling
(SPADE).
Input: Labeled / unlabeled training data $\mathcal{D}^{l}$ / $\mathcal{D}^{u}$
Output: Trained encoder ($h$), predictor ($q$)
1:function Pseudo-
labeler($\mathcal{D}^{l}_{1},\mathcal{D}^{l}_{0},\mathcal{D}^{u},h$)
2: Divide $\mathcal{D}^{u}$ into $K$ disjoint subsets
$\{\mathcal{D}_{k}^{u}\}_{k=1}^{K}$
3: for k=1:K do
4: Train OCC models $o_{k}$ on $\mathcal{D}_{k}^{u}\cup\mathcal{D}^{l}_{0}$
5: Set $\eta_{k}^{p}/\eta_{k}^{n}$ via partial matching with
$\mathcal{D}^{l}_{1},\mathcal{D}^{l}_{0}$ using Eqs. 3 and 4.
6: end for
7: Build pseudo-labeler $v$ following Eqs. 1 and 2.
8: Return pseudo-labeler $v$.
9:end function
10:Initialize $g,h,q$.
11:Set positively / negatively labeled data
$\mathcal{D}^{l}_{1},\mathcal{D}^{l}_{0}$
12:while $g,h,q$ not converged do
13: $v$=Pseudo-
labeler($\mathcal{D}^{l}_{1},\mathcal{D}^{l}_{0},\mathcal{D}^{u},h$)
14: Update $g,h,q$ using Eq. 5.
15:end while
## 5 Experiments
We conduct extensive experiments to highlight the benefits of the proposed
method, SPADE, in various practical settings of semi-supervised learning with
distribution mismatch. We consider multiple anomaly detection datasets for
image and tabular data types. As image data, we use MVTec anomaly detection
(Bergmann et al., 2019) and Magnetic tile datasets (Huang et al., 2020). As
tabular data, we use Covertype, Thyroid, and Drug datasets (see Appendix for
detailed data description). In Sec. 5.4, we further utilize two real-world
fraud detection datasets (Kaggle credit and Xente) to evaluate the performance
of SPADE.
In all experiments, unless the dataset comes with its own train and test
split, we randomly divide the dataset into disjoint train and test data. Then,
we further divide the training data into disjoint labeled and unlabeled data.
Note that we construct labeled and unlabeled data such that they come from
different distributions (more details can be found in the following
subsections). We run 5 independent experiments and report average values
(standard deviations can be found in Appendix C). We use AUC as the evaluation
metric. More experimental details (on model architectures, training settings,
and pseudo-labelers) are provided in Appendix B. Computational complexity
analyses can be found in Appendix B.7.
We compare SPADE to baselines from Table 1. Note that not all baselines are
applicable to every scenario. More specifically, we use Gaussian Distribution
Estimator (GDE) for both OCC (only using the negatively labeled data) and
Negative OCC (only excluding the positively labeled data). Note that GDE
performs best among the alternatives in our experiments
(including Isolation Forest and OC-SVM). We use SRR (Yoon et al., 2022) as the
unsupervised OCC baseline and Random Forest as the supervised (only using the
labeled data) and negative supervised (treating unlabeled data as negative)
baselines. For image data, FixMatch is used instead of VIME as the semi-
supervised baseline. We use CutPaste (Li et al., 2021) as the baseline
architecture for Negative OCC, Unsupervised OCC, and SPADE for MVTec and
Magnetic datasets.
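As an illustration of the GDE baseline, a minimal sketch that fits a multivariate Gaussian to (assumed-normal) training points and scores by Mahalanobis distance (the regularization constant is our choice; not the exact implementation used in the experiments):

```python
import numpy as np

class GDE:
    """Gaussian Distribution Estimator: fit a Gaussian to training points;
    anomaly score = Mahalanobis distance to the fitted mean."""
    def fit(self, X):
        X = np.asarray(X, dtype=float)
        self.mu = X.mean(axis=0)
        # small ridge keeps the covariance invertible
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        self.prec = np.linalg.inv(cov)
        return self
    def score(self, X):
        d = np.asarray(X, dtype=float) - self.mu
        # per-row quadratic form d^T * precision * d
        return np.sqrt(np.einsum("ij,jk,ik->i", d, self.prec, d))
```

Points far from the fitted Gaussian receive high scores, which is all the consensus pseudo-labeler needs from each ensemble member.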
### 5.1 New types of anomalies
Anomalies can evolve over time in many applications. For fraud detection,
criminals might invent new fraudulent approaches to trick existing
systems; in manufacturing, a modified process might yield defects that
have never been seen before. Therefore, labeled data can become outdated and
newly-gathered unlabeled data can come from different distributions. To mimic
such scenarios, we construct datasets with multiple anomaly types. Among the
anomaly types, we provide a subset of them (together with normal
samples) as the labeled data, so that the remaining anomaly types appear only in
the unlabeled data. Detailed experimental settings can be found in Appendix
B.2.
Datasets | Thyroid | Drug | Covertype
---|---|---|---
Metrics (AUC) | Overall | Given | Missed | Overall | Given | Missed | Overall | Given | Missed
Supervised | 0.815 | 0.996 | 0.741 | 0.818 | 0.810 | 0.833 | 0.858 | 0.988 | 0.693
Negative Supervised | 0.622 | 0.837 | 0.533 | 0.676 | 0.670 | 0.685 | 0.761 | 0.881 | 0.610
OCC | 0.711 | 0.876 | 0.643 | 0.741 | 0.727 | 0.765 | 0.897 | 0.910 | 0.880
Negative OCC | 0.446 | 0.637 | 0.367 | 0.731 | 0.700 | 0.780 | 0.825 | 0.832 | 0.815
Unsupervised OCC | 0.429 | 0.612 | 0.353 | 0.769 | 0.747 | 0.803 | 0.843 | 0.853 | 0.831
VIME | 0.592 | 0.724 | 0.538 | 0.792 | 0.777 | 0.820 | 0.837 | 0.967 | 0.672
DANN | 0.725 | 0.876 | 0.662 | 0.744 | 0.730 | 0.768 | 0.791 | 0.979 | 0.552
SPADE (Ours) | 0.921 | 0.997 | 0.891 | 0.837 | 0.831 | 0.849 | 0.928 | 0.957 | 0.892
Table 2: Experimental results with new types of anomalies scenario in terms of
Overall / Given / Not given (Missed) AUC. Overall/Given/Missed: Put
all/given/missed anomaly types and normal samples in the test set for
evaluation.
Tables 2 and 3 (left) show that SPADE achieves consistently and significantly
better performance in all 3 metrics (overall, given, and missed AUC),
demonstrating its generalizability to unseen anomalies. On the other hand,
supervised and semi-supervised (VIME and FixMatch) methods remain highly
biased towards given anomalies and generalize poorly to new types of
anomalies. Compared to the best baseline, SPADE improves overall AUC by 0.106,
0.015, and 0.031 on the three tabular datasets.
Each baseline has its own limitations. Supervised classifiers cannot utilize
unlabeled data at all, and the negative supervised classifier trains the
predictive model on contaminated labels. OCC models are
suboptimal as they cannot utilize the anomalous label information. Semi-
supervised learning baselines suffer from the distribution mismatch between
labeled and unlabeled data. The domain adaptation baseline shows poor
performance with a small number of source samples.
Scenarios | New anomalies | Easiness
---|---|---
Datasets | MVTec | Magnetic | MVTec | Magnetic
Supervised | 84.3 | 82.3 | 90.9 | 81.7
Negative Supervised | 76.5 | 63.5 | 79.2 | 59.3
Negative OCC | 81.3 | 69.0 | 87.6 | 70.1
Unsupervised OCC | 85.4 | 72.2 | 88.4 | 73.1
FixMatch | 81.4 | 69.1 | 83.5 | 70.8
SPADE (Ours) | 87.9 | 85.2 | 92.1 | 83.9
Table 3: Experimental results on image domain with (left) new types of
anomalies, (right) labeling based on easiness scenarios in terms of overall
AUC.
### 5.2 Labeling based on the ‘easiness’ of samples
The difficulty of human labeling may differ across samples – while
some samples are easy to label, others can be deceptively difficult for
humans because they appear different from known cases. To simulate this
scenario, we focus on an experiment where the labeled data only includes easy-
to-label samples, while hard-to-label samples are included in the unlabeled
dataset. To this end, we train a logistic regression model on the entire
training data and label only the samples for which the model's confidence
exceeds a certain threshold and the prediction is correct. Details can be
found in Appendix B.3.
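This labeling simulation can be sketched as follows, given class-1 probabilities from the classifier trained on all data (the default threshold value and helper name are our assumptions):

```python
import numpy as np

def select_easy_samples(proba, y, tau=0.9):
    """Label only 'easy' samples: confidence above tau AND prediction correct.

    proba: predicted P(y=1) from a classifier trained on all training data;
    y: ground-truth 0/1 labels; tau: assumed confidence threshold.
    Returns a boolean mask of the simulated labeled set.
    """
    proba = np.asarray(proba, dtype=float)
    y = np.asarray(y)
    pred = (proba >= 0.5).astype(int)
    conf = np.where(pred == 1, proba, 1 - proba)  # confidence of the prediction
    return (conf > tau) & (pred == y)
```

Everything outside the mask (low-confidence or misclassified samples) goes to the unlabeled pool, producing the easy/hard distribution mismatch studied here.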
Datasets | Thyroid | Drug | Covertype
---|---|---|---
Supervised | 0.805 | 0.848 | 0.878
Negative Supervised | 0.626 | 0.701 | 0.599
OCC | 0.787 | 0.838 | 0.888
Negative OCC | 0.464 | 0.741 | 0.826
Unsupervised OCC | 0.484 | 0.786 | 0.846
VIME | 0.728 | 0.849 | 0.843
DANN | 0.731 | 0.754 | 0.835
SPADE (Ours) | 0.833 | 0.846 | 0.892
Table 4: Experimental results with labeling based on the ‘easiness’ of samples
in terms of overall AUC.
Tables 3 (right) and 4 show that SPADE achieves superior or similar anomaly
detection performance compared to the best alternative. This shows great
potential for reducing human labeling costs, by allowing labelers to
skip samples that would take too long to label correctly. The experimental
results with the opposite setting (only labeling the high-risk samples) can
also be found in Appendix D.1.
### 5.3 PU (Positive & Unlabeled) learning
With only positive samples as the labeled data and all other samples being
unlabeled, i.e. the positive and unlabeled (PU) settings, the distributions
between labeled (only positive samples) and unlabeled (both positive and
negative samples) would be different. We use the same experimental settings
as in the ‘new types of anomalies’ scenario, except that we additionally exclude
normal samples from the labeled data to represent the PU setting. Detailed
experimental settings can be found in Appendix B.4.
Datasets | Thyroid | Drug | Covertype
---|---|---|---
Metrics (AUC) | Overall | Given | Missed | Overall | Given | Missed | Overall | Given | Missed
Negative Supervised | 0.786 | 0.997 | 0.698 | 0.839 | 0.839 | 0.840 | 0.846 | 0.996 | 0.657
Negative OCC | 0.470 | 0.695 | 0.377 | 0.739 | 0.709 | 0.787 | 0.849 | 0.864 | 0.831
Unsupervised OCC | 0.519 | 0.707 | 0.441 | 0.771 | 0.748 | 0.809 | 0.863 | 0.880 | 0.842
Weighted Elkanoto (Elkan & Noto, 2008) | 0.772 | 0.934 | 0.705 | 0.711 | 0.714 | 0.706 | 0.699 | 0.917 | 0.422
BaggingPU (Mordelet & Vert, 2014) | 0.787 | 0.964 | 0.714 | 0.734 | 0.740 | 0.724 | 0.726 | 0.907 | 0.497
SPADE (Ours) | 0.929 | 0.996 | 0.901 | 0.840 | 0.842 | 0.837 | 0.896 | 0.940 | 0.839
Table 5: Experimental results on PU settings on 3 tabular datasets in AUC of
overall/given/missed (not given). Due to the absence of negatively-labeled
samples, Supervised, OCC, semi-supervised, and domain adaptation baselines are
excluded. Instead, two PU baselines are included.
Table 5 compares the performance of the proposed method (SPADE) in PU
settings on multiple tabular datasets. SPADE generalizes much better and
outperforms all alternatives, with significantly better AUC on missed
(not given) anomaly types. Note that PU baselines severely suffer from
distribution mismatches when new types of anomalies are included in the
unlabeled data.
### 5.4 Time-varying distributions: real-world fraud detection
We evaluate the proposed framework with two real-world fraud detection
datasets: (i) Kaggle credit card fraud
(https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud; 0.17% anomaly ratio,
284807 samples in total), and (ii) Xente fraud detection
(https://zindi.africa/competitions/xente-fraud-detection-challenge/data;
0.20% anomaly ratio, 95662 samples in total). For
these tasks, anomalies are evolving (i.e., their distributions are changing
over time) (Grover et al., 2022). To catch evolving anomalies, we need to
keep labeling new anomalies and retrain the anomaly detection model.
However, labeling is costly and time consuming. Even without additional
labeling, SPADE can improve the anomaly detection performance using both
labeled data and newly-gathered unlabeled data.
Datasets | Kaggle Credit Fraud | Xente Fraud
---|---|---
Labeling ratio | 5% | 10% | 10% | 20%
Supervised | 0.975 | 0.977 | 0.906 | 0.925
Negative Supervised | 0.971 | 0.976 | 0.909 | 0.918
OCC | 0.717 | 0.803 | 0.891 | 0.920
Negative OCC | 0.838 | 0.835 | 0.608 | 0.630
Unsupervised OCC | 0.897 | 0.897 | 0.806 | 0.912
VIME | 0.941 | 0.943 | 0.859 | 0.893
DANN | 0.921 | 0.922 | 0.798 | 0.822
SPADE (Ours) | 0.982 | 0.983 | 0.920 | 0.931
Table 6: Experimental results on two real-world fraud detection datasets in
terms of overall AUC.
In our experiments, we split the train and test data based on the measurement
time. The latest samples are included in the test data (50%) and the earliest
acquired data in the training data (50%). We further divide the training
data into labeled and unlabeled data: early-acquired data form the
labeled data (5%-20%), while later-acquired data form the unlabeled
data (80%-95%). We use AUC as the anomaly detection metric. As shown in Table
6, SPADE consistently outperforms alternatives for different labeling ratio
values on both datasets, taking advantage of the unlabeled data and showing
robustness to evolving distributions.
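The chronological split above can be sketched as follows (the fractions and the helper name are our assumptions, mirroring the setup described here):

```python
import numpy as np

def time_based_split(times, test_frac=0.5, labeled_frac=0.1):
    """Chronological split: the latest samples form the test set and the
    earliest slice of the training half forms the labeled set; the rest of
    the training half is the unlabeled pool."""
    order = np.argsort(times)
    n_train = len(order) - int(len(order) * test_frac)
    train, test = order[:n_train], order[n_train:]
    n_lab = int(n_train * labeled_frac)
    return train[:n_lab], train[n_lab:], test  # labeled, unlabeled, test
```

Because anomalies evolve over time, this split guarantees that the labeled set is strictly older than the unlabeled set, which in turn is older than the test set.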
## 6 Discussions
Accuracy of the pseudo-labels. SPADE is based on the proposed pseudo-labeling
mechanism. The accuracy of the pseudo-labeler is highly related to the
robustness of semi-supervised anomaly detection. We analyze the accuracy (in
terms of precision) of the pseudo-labels vs. anomaly score percentiles for both
normal and anomalous samples.
(a) Thyroid
(b) Drug
(c) Covertype
Figure 4: Precision of the pseudo-labelers across anomaly score percentiles on
3 tabular datasets with new types of anomalies. $\eta^{n},\eta^{p}$ denote the
thresholds (in percentiles) for normal and anomalous pseudo-labels discovered
by partial matching.
Fig. 4 shows that the proposed pseudo-labeler achieves fairly robust pseudo-
labeling for normal samples. For anomalous samples, on the other hand, pseudo-
labeling precision typically becomes high once the anomaly scores exceed the
80th percentile; however, we observe a drop in precision in some cases, which
we attribute to imperfect OCC fitting. While this indicates room for
improvement in pseudo-labeling, the robustness of partial matching keeps the
impact of imperfect precision on anomaly detection performance low. Note that
our partial matching algorithm finds this threshold fairly accurately, making
the pseudo-labels robust without any threshold parameter tuning.
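As a rough sketch of the mechanism discussed here, the following assigns pseudo-labels by unanimous vote over an ensemble of OCC anomaly scores. The fixed percentile thresholds `eta_p` and `eta_n` are placeholders for the thresholds SPADE discovers via partial matching, and `pseudo_label` is our own helper name.

```python
import numpy as np

def pseudo_label(scores, eta_p=80.0, eta_n=20.0):
    """Unanimous-vote pseudo-labeling over an ensemble of OCC scores.

    scores: (n_models, n_samples) anomaly scores, higher = more anomalous.
    eta_p / eta_n: percentile thresholds (fixed here for illustration).
    Returns 1 (anomalous), 0 (normal), or -1 (abstain) per sample."""
    hi = np.percentile(scores, eta_p, axis=1, keepdims=True)
    lo = np.percentile(scores, eta_n, axis=1, keepdims=True)
    vote_anom = (scores > hi).all(axis=0)   # every model flags it
    vote_norm = (scores < lo).all(axis=0)   # every model calls it normal
    labels = np.full(scores.shape[1], -1)
    labels[vote_anom] = 1
    labels[vote_norm] = 0
    return labels
```

Samples on which the ensemble disagrees, or which sit between the two thresholds, receive no pseudo-label, which is what keeps the precision of the emitted labels high.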
Ablation studies. SPADE consists of multiple components, and understanding the
impact of each component is important. In Table 7, we demonstrate the
performance impact of 5 different components of SPADE on the Thyroid data in
the new-anomaly-types setting. All components of SPADE contribute considerably
to robust anomaly detection performance. The self-supervised learning
component contributes a 0.018 AUC improvement to the SPADE framework. In
addition, with majority votes instead of unanimous votes for pseudo-labeling,
the performance of SPADE degrades by 0.024 AUC. Additional ablation studies on
other datasets can be found in Appendix D.2.
$\alpha$ is a critical hyper-parameter of SPADE, determining the importance of
the pseudo-label loss relative to the given labeled data. We analyze the
impact of this hyper-parameter in Fig. 5. With $\alpha=0$, performance is much
worse than with $\alpha>0$ on the Thyroid (0.08 lower AUC) and Covertype (0.06
lower AUC) datasets, underlining the benefit of utilizing the unlabeled data.
In addition, performance is similar across different $\alpha>0$, demonstrating
that SPADE is not sensitive to the hyper-parameter $\alpha$. Note that all the
experiments in Sec. 5 use $\alpha=1$ as the default value.
Figure 5: Overall AUC across different values of $\alpha$ using three tabular
datasets. ($\alpha=0$ represents SPADE without utilizing pseudo-labels.)
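The role of $\alpha$ can be made concrete with a small sketch (illustrative only; `bce` and `spade_loss` are hypothetical helper names): the total loss is the supervised loss on labeled samples plus $\alpha$ times the loss on confidently pseudo-labeled samples, so $\alpha=0$ discards the pseudo-labels entirely.

```python
import numpy as np

def bce(p, y):
    # binary cross-entropy, clipped for numerical stability
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def spade_loss(p_l, y_l, p_u, y_pseudo, alpha=1.0):
    """Supervised BCE on labeled samples plus an alpha-weighted BCE on
    pseudo-labeled samples; abstained samples (y_pseudo == -1) are dropped."""
    keep = y_pseudo != -1
    pseudo_term = bce(p_u[keep], y_pseudo[keep]) if keep.any() else 0.0
    return bce(p_l, y_l) + alpha * pseudo_term
```

Increasing $\alpha$ monotonically raises the weight of the pseudo-label term, while the abstention mask ensures only confident pseudo-labels influence training.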
Variants | Overall AUC
---|---
(i) No partial matching | 0.898
(ii) No ensemble | 0.894
(iii) $\beta=0$ (No self-supervised) | 0.903
(iv) No normal samples | 0.901
(v) Majority vote | 0.897
SPADE | 0.921
Table 7: Ablation studies on Thyroid dataset in new anomaly settings: (i)
without partial matching, (ii) without an ensemble of OCC, (iii) with
$\beta=0$ (No self-supervised learning), (iv) without normal samples for
pseudo-labeler training, (v) majority vote instead of unanimous votes for
pseudo-labeling.
## 7 Conclusions
Semi-supervised anomaly detection is a highly important challenge in practice
– in many scenarios, we cannot assume that the distributions of labeled and
unlabeled samples are the same. In this paper, we focus on this setting and
demonstrate the underperformance of standard frameworks within it. We
propose a novel framework, SPADE, which introduces a novel pseudo-labeling
mechanism using an ensemble of OCCs and a judicious way of combining
supervised and self-supervised learning. In addition, our framework involves a
novel approach to pick hyperparameters without a validation set, a crucial
component for data-efficient anomaly detection. Overall, we show that SPADE
consistently outperforms the alternatives in various scenarios – AUC
improvements with SPADE can be up to $10.6\%$ on tabular data and $3.6\%$ on
image data. In addition to anomaly detection, future work can extend this
semi-supervised framework to multi-class classification or regression with
distribution mismatch.
## References
* Ahmed et al. (2016) Mohiuddin Ahmed, Abdun Naser Mahmood, and Md Rafiqul Islam. A survey of anomaly detection techniques in financial domain. _Future Generation Computer Systems_ , 55:278–288, 2016.
* Akcay et al. (2018) Samet Akcay, Amir Atapour-Abarghouei, and Toby P Breckon. Ganomaly: Semi-supervised anomaly detection via adversarial training. In _Asian conference on computer vision_ , pp. 622–637. Springer, 2018.
* Baktashmotlagh et al. (2013) Mahsa Baktashmotlagh, Mehrtash T Harandi, Brian C Lovell, and Mathieu Salzmann. Unsupervised domain adaptation by domain invariant projection. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 769–776, 2013.
* Barua et al. (2012) Sukarna Barua, Md Monirul Islam, Xin Yao, and Kazuyuki Murase. Mwmote–majority weighted minority oversampling technique for imbalanced data set learning. _IEEE Trans on knowledge and data engineering_ , 26(2):405–425, 2012.
* Bekker & Davis (2020) Jessa Bekker and Jesse Davis. Learning from positive and unlabeled data: A survey. _Machine Learning_ , 109(4):719–760, 2020.
* Bergman & Hoshen (2019) Liron Bergman and Yedid Hoshen. Classification-based anomaly detection for general data. In _International Conference on Learning Representations_ , 2019.
* Bergmann et al. (2019) Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. MVTec AD–a comprehensive real-world dataset for unsupervised anomaly detection. In _CVPR_ , 2019.
* Bergmann et al. (2021) Paul Bergmann, Kilian Batzner, Michael Fauser, David Sattlegger, and Carsten Steger. The mvtec anomaly detection dataset: a comprehensive real-world dataset for unsupervised anomaly detection. _International Journal of Computer Vision_ , 129(4):1038–1059, 2021.
* Blanchard et al. (2010) Gilles Blanchard, Gyemin Lee, and Clayton Scott. Semi-supervised novelty detection. _JMLR_ , 11:2973–3009, 2010.
* Breunig et al. (2000) Markus M Breunig, Hans-Peter Kriegel, Raymond T Ng, and Jörg Sander. Lof: identifying density-based local outliers. In _Proceedings of the 2000 ACM SIGMOD international conference on Management of data_ , pp. 93–104, 2000.
* Chalapathy & Chawla (2019) Raghavendra Chalapathy and Sanjay Chawla. Deep learning for anomaly detection: A survey. _arXiv preprint arXiv:1901.03407_ , 2019.
* Chaudhari & Shevade (2012) Sneha Chaudhari and Shirish Shevade. Learning from positive and unlabelled examples using maximum margin clustering. In _International Conference on Neural Information Processing_ , pp. 465–473. Springer, 2012.
* Chawla et al. (2002) Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. Smote: synthetic minority over-sampling technique. _Journal of artificial intelligence research_ , 16:321–357, 2002.
* Chen et al. (2020a) Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In _International conference on machine learning_ , pp. 1597–1607. PMLR, 2020a.
* Chen et al. (2020b) Yanbei Chen, Xiatian Zhu, Wei Li, and Shaogang Gong. Semi-supervised learning under class distribution mismatch. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pp. 3569–3576, 2020b.
* Christoffel et al. (2016) Marthinus Christoffel, Gang Niu, and Masashi Sugiyama. Class-prior estimation for learning from positive and unlabeled data. In _Asian Conference on Machine Learning_ , pp. 221–236. PMLR, 2016.
* Du Plessis & Sugiyama (2014) Marthinus Christoffel Du Plessis and Masashi Sugiyama. Class prior estimation from positive and unlabeled data. _IEICE TRANSACTIONS on Information and Systems_ , 97(5):1358–1362, 2014.
* Elkan & Noto (2008) Charles Elkan and Keith Noto. Learning classifiers from only positive and unlabeled data. In _Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining_ , pp. 213–220, 2008.
* Estabrooks et al. (2004) Andrew Estabrooks, Taeho Jo, and Nathalie Japkowicz. A multiple resampling method for learning from imbalanced data sets. _Computational intelligence_ , 20(1):18–36, 2004.
* Ganin et al. (2016) Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. _The journal of machine learning research_ , 17(1):2096–2030, 2016.
* Golan & El-Yaniv (2018) Izhak Golan and Ran El-Yaniv. Deep anomaly detection using geometric transformations. In _Proceedings of the 32nd International Conference on Neural Information Processing Systems_ , pp. 9781–9791, 2018.
* Görnitz et al. (2013) Nico Görnitz, Marius Kloft, Konrad Rieck, and Ulf Brefeld. Toward supervised anomaly detection. _Journal of Artificial Intelligence Research_ , 46:235–262, 2013.
* Grill et al. (2020) Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. _arXiv preprint arXiv:2006.07733_ , 2020.
* Grover et al. (2022) Prince Grover, Zheng Li, Jianbo Liu, Jakub Zablocki, Hao Zhou, Julia Xu, and Anqi Cheng. Fdb: Fraud dataset benchmark. _arXiv preprint arXiv:2208.14417_ , 2022.
* He et al. (2018) Fengxiang He, Tongliang Liu, Geoffrey I Webb, and Dacheng Tao. Instance-dependent pu learning by bayesian optimal relabeling. _arXiv preprint arXiv:1808.02180_ , 2018.
* Huang et al. (2020) Yibin Huang, Congying Qiu, and Kui Yuan. Surface defect saliency of magnetic tile. _The Visual Computer_ , 36(1):85–96, 2020.
* Hwang et al. (2011) Jae Pil Hwang, Seongkeun Park, and Euntai Kim. A new weighted approach to imbalanced data classification problem via support vector machine with quadratic cost function. _Expert Systems with Applications_ , 38(7):8580–8585, 2011.
* Kim et al. (2020) Jaehyung Kim, Youngbum Hur, Sejun Park, Eunho Yang, Sung Ju Hwang, and Jinwoo Shin. Distribution aligning refinery of pseudo-label for imbalanced semi-supervised learning. _arXiv preprint arXiv:2007.08844_ , 2020.
* Lee et al. (2013) Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In _Workshop on challenges in representation learning, ICML_ , volume 3, pp. 896, 2013.
* Li et al. (2021) Chun-Liang Li, Kihyuk Sohn, Jinsung Yoon, and Tomas Pfister. Cutpaste: Self-supervised learning for anomaly detection and localization. In _CVPR_ , 2021.
* Liu et al. (2003) Bing Liu, Yang Dai, Xiaoli Li, Wee Sun Lee, and Philip S Yu. Building text classifiers using positive and unlabeled examples. In _Third IEEE International Conference on Data Mining_ , pp. 179–186. IEEE, 2003.
* Liu et al. (2008) Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. Isolation forest. In _ICDM_ , 2008.
* Long et al. (2016) Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Unsupervised domain adaptation with residual transfer networks. _arXiv preprint arXiv:1602.04433_ , 2016.
* Mordelet & Vert (2014) Fantine Mordelet and J-P Vert. A bagging svm to learn from positive and unlabeled examples. _Pattern Recognition Letters_ , 37:201–209, 2014.
* Otsu (1979) Nobuyuki Otsu. A threshold selection method from gray-level histograms. _IEEE transactions on systems, man, and cybernetics_ , 9(1):62–66, 1979.
* Pan & Yang (2009) Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. _IEEE Transactions on knowledge and data engineering_ , 22(10):1345–1359, 2009.
* Pang et al. (2019) Guansong Pang, Chunhua Shen, and Anton van den Hengel. Deep anomaly detection with deviation networks. In _Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining_, pp. 353–362, 2019.
* Pang et al. (2021) Guansong Pang, Anton van den Hengel, Chunhua Shen, and Longbing Cao. Toward deep supervised anomaly detection: Reinforcement learning from partially labeled anomaly data. In _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining_, pp. 1298–1308, 2021.
* Raina et al. (2007) Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y Ng. Self-taught learning: transfer learning from unlabeled data. In _Proceedings of the 24th international conference on Machine learning_ , pp. 759–766, 2007.
* Ruff et al. (2018) Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, and Marius Kloft. Deep one-class classification. In _ICML_ , 2018.
* Ruff et al. (2020) Lukas Ruff, Robert A Vandermeulen, Nico Görnitz, Alexander Binder, Emmanuel Müller, Klaus-Robert Müller, and Marius Kloft. Deep semi-supervised anomaly detection. In _ICLR_ , 2020.
* Saito et al. (2021) Kuniaki Saito, Donghyun Kim, and Kate Saenko. Openmatch: Open-set semi-supervised learning with open-set consistency regularization. _Advances in Neural Information Processing Systems_ , 34, 2021.
* Schölkopf et al. (1999) Bernhard Schölkopf, Robert C Williamson, Alexander J Smola, John Shawe-Taylor, John C Platt, et al. Support vector method for novelty detection. In _NIPS_ , 1999.
* Sellamanickam et al. (2011) Sundararajan Sellamanickam, Priyanka Garg, and Sathiya Keerthi Selvaraj. A pairwise ranking based approach to learning with positive and unlabeled examples. In _Proceedings of the 20th ACM international conference on Information and knowledge management_ , pp. 663–672, 2011.
* Sohn et al. (2020) Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. _NeurIPS_ , 2020.
* Sohn et al. (2021) Kihyuk Sohn, Chun-Liang Li, Jinsung Yoon, Minho Jin, and Tomas Pfister. Learning and evaluating representations for deep one-class classification. In _ICLR_ , 2021.
* Sun et al. (2019) Yu Sun, Eric Tzeng, Trevor Darrell, and Alexei A Efros. Unsupervised domain adaptation through self-supervision. _arXiv preprint arXiv:1909.11825_ , 2019.
* Tax & Duin (2004) David MJ Tax and Robert PW Duin. Support vector data description. _Machine learning_ , 54(1):45–66, 2004.
* Vanerio & Casas (2017) Juan Vanerio and Pedro Casas. Ensemble-learning approaches for network security and anomaly detection. In _Proceedings of the Workshop on Big Data Analytics and Machine Learning for Data Communication Networks_ , pp. 1–6, 2017.
* Yoon et al. (2020) Jinsung Yoon, Yao Zhang, James Jordon, and Mihaela van der Schaar. Vime: Extending the success of self-and semi-supervised learning to tabular domain. _Advances in Neural Information Processing Systems_ , 33, 2020.
* Yoon et al. (2022) Jinsung Yoon, Kihyuk Sohn, Chun-Liang Li, Sercan O Arik, Chen-Yu Lee, and Tomas Pfister. Self-supervise, refine, repeat: Improving unsupervised anomaly detection. _Transactions on Machine Learning Research_ , 2022. URL https://openreview.net/forum?id=b3v1UrtF6G.
* Yu et al. (2020) Zhongjie Yu, Lin Chen, Zhongwei Cheng, and Jiebo Luo. Transmatch: A transfer-learning scheme for semi-supervised few-shot learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 12856–12864, 2020.
* Zhang & Zuo (2008) Bangzuo Zhang and Wanli Zuo. Learning from positive and unlabeled examples: A survey. In _2008 International Symposiums on Information Processing_ , 2008.
* Zong et al. (2018) Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Daeki Cho, and Haifeng Chen. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In _ICLR_ , 2018.
## Appendix
## Appendix A Details of the conventional solutions
### A.1 Standard supervised learning
The most straightforward approach is applying the standard supervised learning
framework. We can construct the supervised model $g_{sup}$ only with the
labeled data $\mathcal{D}^{l}$ as follows.
$g_{sup}=\arg\min_{g}\sum_{i=1}^{N_{l}}\mathcal{L}(g(x_{i}^{l}),y_{i}^{l})$
However, in this case, we cannot benefit from the unlabeled data
$\mathcal{D}^{u}$, which could further boost performance via various semi-
supervised learning frameworks. Also, the training data distribution
$\mathcal{X}^{l}$ differs from the testing distribution $\mathcal{X}$, which
can negatively impact test performance. Alternatively, we may treat all the
unlabeled data as normal samples and apply the supervised learning framework
($g_{sup+}$) as follows:
$g_{sup+}=\arg\min_{g}\big{[}\frac{1}{N_{l}}\sum_{i=1}^{N_{l}}\mathcal{L}(g(x_{i}^{l}),y_{i}^{l})+\frac{1}{N_{u}}\sum_{j=1}^{N_{u}}\mathcal{L}(g(x_{j}^{u}),0)\big{]}.$
However, in this case, the samples treated as normal are contaminated by
anomalies hidden in the unlabeled data.
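The two baselines above can be sketched with scikit-learn on toy data (the data, seed, and model choice are illustrative, not the paper's setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy labeled set: 30 normal samples near 0, 10 anomalies near 3.
x_l = np.vstack([rng.normal(0, 1, size=(30, 2)),
                 rng.normal(3, 1, size=(10, 2))])
y_l = np.array([0] * 30 + [1] * 10)
x_u = rng.normal(0, 1, size=(200, 2))  # unlabeled pool (may hide anomalies)

# g_sup: standard supervised model trained on the labeled data only.
g_sup = LogisticRegression().fit(x_l, y_l)

# g_sup+: additionally treat every unlabeled sample as normal (label 0),
# which contaminates the normal class whenever x_u hides true anomalies.
g_sup_plus = LogisticRegression().fit(
    np.vstack([x_l, x_u]),
    np.concatenate([y_l, np.zeros(len(x_u), dtype=int)]),
)
```

Both models learn to score points near 3 as more anomalous than points near 0, but `g_sup_plus` silently absorbs any anomalies hiding in the unlabeled pool into its notion of "normal".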
### A.2 Standard one-class classifiers (OCCs)
OCCs are among the most promising methods for tackling the anomaly detection
problem. Instead of using incomplete anomaly labels, we can utilize only the
labeled normal samples
$\mathcal{D}_{0}^{l}=\{(x_{j},y_{j})\in\mathcal{D}^{l}\mid y_{j}=0\}$ to
construct the OCC ($g_{one}$). However, in this case, we must drop all labeled
abnormal samples and unlabeled samples, which may contain critical
information. We can include the unlabeled data ($\mathcal{D}^{u}$) to
construct another OCC ($g_{one+}$), as in SRR (Yoon et al., 2022); however, it
still cannot utilize the labeled abnormal samples.
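A minimal sketch of a GDE-style OCC, the simplest of the one-class classifiers mentioned here: fit a single Gaussian to the labeled normal samples and score new points by squared Mahalanobis distance, so higher scores mean more anomalous. The class name and the regularization constant are our own choices.

```python
import numpy as np

class GDE:
    """Gaussian Distribution Estimator one-class classifier (sketch).
    Fits a single Gaussian to labeled normal samples; the anomaly score
    is the squared Mahalanobis distance (higher = more anomalous)."""

    def fit(self, x):
        self.mu = x.mean(axis=0)
        # Small diagonal regularizer keeps the covariance invertible.
        cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
        self.prec = np.linalg.inv(cov)
        return self

    def score(self, x):
        d = x - self.mu
        return np.einsum("ij,jk,ik->i", d, self.prec, d)

# Fit on toy "normal" samples drawn around the origin.
occ = GDE().fit(np.random.default_rng(0).normal(size=(200, 2)))
```

Because fitting is just a mean and covariance estimate, an ensemble of such OCCs (as SPADE uses) is cheap to re-train every epoch.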
### A.3 Semi-supervised learning
With both labeled and unlabeled data, we would typically apply semi-supervised
learning approaches, constructing the anomaly detection model as follows.
$g_{semi}=\arg\min_{g}\sum_{i=1}^{N_{l}}\mathcal{L}(g(x_{i}^{l}),y_{i}^{l})+\lambda\sum_{j=1}^{N_{u}}\mathcal{L}^{u}(g(x_{j}^{u}))$
Most semi-supervised learning frameworks assume that the labeled data
$\mathcal{D}^{l}$ and unlabeled data $\mathcal{D}^{u}$ come from the same
distribution. However, this assumption does not hold in our problem
formulation, so the possibly biased labeled-data distribution can negatively
affect the trained semi-supervised model.
### A.4 Detailed comparison with SRR (Yoon et al., 2022)
SPADE bears some resemblance to SRR (Yoon et al., 2022); however, there are
clear differences between the two. First, the problem setting is different.
One of the biggest novelties of SPADE is tackling an important but under-
explored problem: semi-supervised learning with distribution mismatch (e.g.,
common labeling bias). This is not discussed in SRR, which focused only on the
fully unsupervised setting, and extending from the fully unsupervised to the
general semi-supervised setting is not straightforward. Second, how to utilize
the positive and negative samples is not discussed in SRR but is critical in
SPADE: we must consider how to utilize the normal samples to improve pseudo-
labeler training (see the ablation studies in Table 7) and how to utilize the
labeled samples to determine the thresholds (Lines 4 and 5 in Algorithm 1).
Third, SPADE can automatically determine the threshold parameters, without
true anomaly ratios or a validation set, via the proposed partial matching.
## Appendix B Detailed experimental settings
### B.1 Convert multi-class datasets into anomaly detection datasets
* •
For the Thyroid data (https://archive.ics.uci.edu/ml/datasets/thyroid+disease),
there are 3 classes, with class distribution (1: 2.47%, 2: 5.06%, 3: 92.47%).
We treat label 3 as the normal samples and labels 1 and 2 as the abnormal
samples. We use the pre-defined training and testing dataset division.
* •
For the Drug data
(https://archive.ics.uci.edu/ml/datasets/Drug+consumption+%28quantified%29),
there are 7 classes, with class distribution (1: 75.27%, 2: 2.02%, 3: 4.56%,
4: 8.28%, 5: 3.29%, 6: 2.12%, 7: 4.46%). We treat label 1 as the normal
samples and all other labels as the abnormal samples. We divide the entire
dataset into training (50%) and testing (50%).
* •
For the Covertype data (https://archive.ics.uci.edu/ml/datasets/covertype),
there are 7 classes, with class distribution (1: 36.55%, 2: 48.75%, 3: 6.14%,
4: 0.47%, 5: 1.64%, 6: 2.94%, 7: 3.50%). We treat labels 1 and 2 as the normal
samples and all other labels as the abnormal samples. We divide the entire
dataset into training (50%) and testing (50%).
* •
For the MVTec data (Bergmann et al., 2021;
https://www.mvtec.com/company/research/datasets/mvtec-ad), the 15 categories
have different numbers of anomaly types. We set type 0 as the normal samples
and all other types as abnormal samples. Note that we first mix the given
training and testing data and divide them into training (80%) and testing
(20%) so that the abnormal ratio is the same between the training and testing
sets.
* •
For the Magnetic Tile dataset (Huang et al., 2020;
https://github.com/abin24/Magnetic-tile-defect-datasets.), there are 6 types
of samples: free, blowhole, crack, break, fray, and uneven. We set the free
type as normal and the other 5 types as anomalies. We mix the given training
and testing data and divide them into training (80%) and testing (20%) so that
the abnormal ratio is the same between the training and testing sets.
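Each conversion above is an instance of the same binary relabeling, which can be written as a one-line helper (`to_anomaly_labels` is our own name for it):

```python
import numpy as np

def to_anomaly_labels(y, normal_classes):
    """Convert multi-class labels into binary anomaly labels:
    0 for classes listed in normal_classes, 1 otherwise.
    E.g. for Covertype, normal_classes = {1, 2}."""
    return np.where(np.isin(y, list(normal_classes)), 0, 1)
```

For Thyroid one would call it with `normal_classes={3}`, for Drug with `{1}`, and for Covertype with `{1, 2}`.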
### B.2 Detailed experimental settings in Scenario 1: New types of anomalies
Each of the 5 datasets used in this paper contains multiple types of
anomalies. In this scenario, we provide only a subset of the anomaly types as
labeled data, while the remaining anomaly types appear only in the unlabeled
data. Below we list which anomaly types are provided as labeled data for each
dataset:
* •
For the Thyroid data, we provide anomaly type 1 in the labeled data (anomaly
type 2 appears only in the unlabeled data).
* •
For the Drug data, we provide anomaly types 2, 3, and 4 in the labeled data
(anomaly types 5, 6, and 7 appear only in the unlabeled data).
* •
For the Covertype data, we provide anomaly types 3, 4, and 5 in the labeled
data (anomaly types 6 and 7 appear only in the unlabeled data).
* •
For the MVTec and Magnetic Tile datasets, different categories have different
numbers of anomaly types. We provide the odd-numbered anomaly types in the
labeled data; all even-numbered anomaly types are included only in the
unlabeled data.
Note that for the new-types-of-anomalies scenario, we provide only 5% of the
data as labeled data for tabular datasets and 20% for image datasets.
### B.3 Detailed experimental settings in Scenario 2: Labeling based on the
easiness of samples
To identify the easiness of the samples, we train a logistic regression model
on the entire training data and gather as labeled those samples for which the
trained model's confidence exceeds a certain threshold and its prediction is
correct. To balance the labeled data between normal and abnormal samples, we
select the top 10% most confident samples (per the trained logistic
regression) of each class as the labeled data for tabular datasets (top 20%
for image datasets). The rest of the samples are treated as unlabeled.
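The selection rule described above can be sketched as follows; `select_easy_labeled` is a hypothetical helper, and the reference model's confidences and predictions are taken as given arrays.

```python
import numpy as np

def select_easy_labeled(confidence, pred, y, frac=0.10):
    """Among samples the reference model predicts correctly, keep the top
    `frac` most confident samples of each class as labeled data; the rest
    stay unlabeled. Returns a boolean mask over the dataset."""
    mask = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where((y == c) & (pred == y))[0]   # correct predictions of class c
        k = max(1, int(round(frac * len(idx))))
        top = idx[np.argsort(confidence[idx])[-k:]]  # the k most confident
        mask[top] = True
    return mask
```

Selecting per class is what keeps the labeled set balanced between normal and abnormal samples, at the cost of biasing it toward "easy" examples.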
### B.4 Detailed experimental settings in Scenario 3: PU learning
The experimental settings for PU learning are the same as in Scenario 1 (new
types of anomalies) except for the following points:
* •
We exclude all the normal samples from the labeled data to create the PU
setting.
* •
We provide 50% of the given anomaly types as labeled data. Note that the
amount of labeled data is smaller than in Scenario 1 because all normal
samples are excluded from the labeled data.
### B.5 Details on model architecture and training
For image data, we use ResNet-18 as the base network architecture. For
representation learning, we incorporate CutPaste (Li et al., 2021) for MVTec
and Magnetic Tile datasets. We follow all the training details in (Li et al.,
2021) (including all the hyper-parameters).
For tabular data, we use a two-layer perceptron as the base network
architecture, where the hidden dimension is half of the original feature
dimension. Pseudo-labelers consist of 5 Gaussian Distribution Estimator (GDE)
based OCCs. At each epoch, we update the ensemble of OCCs to refresh the
pseudo-labels and further train the data encoder, projection head, and
predictor. We set $\alpha=1$ and $\beta=1$ for all the experiments except the
ablation studies. We use the training loss as the convergence criterion;
specifically, if the training loss does not improve for 5 epochs, we stop
model training.
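The convergence criterion can be expressed as a tiny early-stopping helper (a sketch with an illustrative class name):

```python
class EarlyStopping:
    """Stop training when the training loss has not improved for
    `patience` consecutive epochs."""

    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, loss):
        if loss < self.best:
            self.best, self.bad_epochs = loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training
```

Calling `step` once per epoch with the current training loss returns `True` as soon as five epochs pass without improvement.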
### B.6 Baselines
We compare SPADE with various baselines in different settings. Below describes
the details of the baselines used in this paper:
* •
Gaussian Distribution Estimator (GDE; https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html)
for both OCC (using only the negatively labeled data) and Negative OCC (only
excluding the positively labeled data).
* •
Random Forests (https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)
for the supervised baseline (using only the labeled data) and the negative
supervised baseline (treating all the unlabeled data as negative).
* •
VIME (https://github.com/jsyoon0823/VIME) for the tabular semi-supervised
learning baseline and FixMatch (https://github.com/google-research/fixmatch)
for the image semi-supervised learning baseline.
* •
Domain Adversarial Neural Network (DANN;
https://github.com/pumpikano/tf-dann) for the domain adaptation baseline.
* •
Weighted Elkanoto and BaggingPU (both from
https://pulearn.github.io/pulearn/doc/pulearn/index.html) for the PU learning
baselines.
* •
CutPaste (https://github.com/Runinho/pytorch-cutpaste) as the base
architecture for image-domain anomaly detection.
### B.7 Computational complexity analyses
All the experiments are run on a single V100 GPU. For tabular datasets,
training and inference take at most 1 hour per experiment (with the largest
dataset, Covertype). For image datasets, training and inference take at most 4
hours per experiment, mostly due to the representation learning with CutPaste.
Note that the pseudo-labeler (an ensemble of OCCs) is re-trained once per
epoch (not per iteration), and we use shallow OCCs (GDEs) in the ensemble to
further reduce the computational cost.
## Appendix C Standard deviations of the experiment results
In this section, we report the standard deviations of the experimental results
described in the main manuscript across 5 independent runs.
Datasets | Thyroid | Drug | Covertype
---|---|---|---
Metrics (AUC) | Overall | Given | Missed | Overall | Given | Missed | Overall | Given | Missed
Supervised | 0.051 | 0.003 | 0.076 | 0.028 | 0.031 | 0.031 | 0.003 | 0.001 | 0.008
Negative Supervised | 0.037 | 0.094 | 0.025 | 0.058 | 0.062 | 0.055 | 0.003 | 0.004 | 0.004
OCC | 0.094 | 0.074 | 0.108 | 0.062 | 0.071 | 0.052 | 0.001 | 0.001 | 0.001
Negative OCC | 0.002 | 0.006 | 0.001 | 0.020 | 0.022 | 0.021 | 0.001 | 0.002 | 0.001
Unsupervised OCC | 0.017 | 0.034 | 0.010 | 0.013 | 0.016 | 0.018 | 0.001 | 0.002 | 0.001
VIME | 0.068 | 0.064 | 0.072 | 0.075 | 0.080 | 0.067 | 0.014 | 0.001 | 0.032
DANN | 0.063 | 0.075 | 0.061 | 0.084 | 0.083 | 0.088 | 0.010 | 0.001 | 0.022
SPADE (Ours) | 0.029 | 0.001 | 0.041 | 0.024 | 0.026 | 0.026 | 0.001 | 0.001 | 0.002
Table 8: Standard deviations of experiments with the new-types-of-anomalies scenario in terms of Overall / Given / Missed (not given) AUC. Overall/Given/Missed: put all/given/missed anomaly types together with normal samples in the test set for evaluation.
Scenarios | New anomalies | Easiness
---|---|---
Datasets | MVTec | Magnetic | MVTec | Magnetic
Supervised | 0.048 | 0.034 | 0.035 | 0.025
Negative Supervised | 0.074 | 0.025 | 0.049 | 0.034
Negative OCC | 0.034 | 0.025 | 0.028 | 0.026
Unsupervised OCC | 0.038 | 0.024 | 0.034 | 0.029
FixMatch | 0.033 | 0.025 | 0.037 | 0.034
SPADE (Ours) | 0.041 | 0.032 | 0.032 | 0.025
Table 9: Standard deviations of experiments on the image domain with (left) the new-types-of-anomalies scenario and (right) the labeling-based-on-easiness scenario, in terms of overall AUC.
Datasets | Thyroid | Drug | Covertype
---|---|---|---
Supervised | 0.013 | 0.009 | 0.002
Negative Supervised | 0.010 | 0.033 | 0.002
OCC | 0.031 | 0.016 | 0.001
Negative OCC | 0.004 | 0.020 | 0.002
Unsupervised OCC | 0.015 | 0.016 | 0.001
VIME | 0.033 | 0.015 | 0.017
DANN | 0.045 | 0.037 | 0.018
SPADE (Ours) | 0.021 | 0.014 | 0.001
Table 10: Standard deviations of experiments with labeling based on the
‘easiness’ of samples in terms of overall AUC.
Datasets | Thyroid | Drug | Covertype
---|---|---|---
Metrics (AUC) | Overall | Given | Missed | Overall | Given | Missed | Overall | Given | Missed
Negative Supervised | 0.028 | 0.001 | 0.040 | 0.011 | 0.013 | 0.014 | 0.001 | 0.000 | 0.002
Negative OCC | 0.007 | 0.018 | 0.003 | 0.020 | 0.021 | 0.020 | 0.001 | 0.001 | 0.001
Unsupervised OCC | 0.016 | 0.016 | 0.017 | 0.016 | 0.016 | 0.020 | 0.001 | 0.001 | 0.001
Weighted Elkanoto (Elkan & Noto, 2008) | 0.022 | 0.035 | 0.026 | 0.018 | 0.022 | 0.021 | 0.006 | 0.006 | 0.010
BaggingPU (Mordelet & Vert, 2014) | 0.029 | 0.019 | 0.036 | 0.019 | 0.020 | 0.020 | 0.021 | 0.016 | 0.027
SPADE (Ours) | 0.042 | 0.001 | 0.060 | 0.008 | 0.008 | 0.016 | 0.002 | 0.001 | 0.002
Table 11: Standard deviations of the experiments in PU settings on 3 tabular datasets in terms of overall/given/missed (not given) AUC.
Datasets | Kaggle Credit Fraud | Xente Fraud
---|---|---
Labeling ratio | 5% | 10% | 10% | 20%
Supervised | 0.002 | 0.001 | 0.024 | 0.009
Negative Supervised | 0.002 | 0.002 | 0.022 | 0.012
OCC | 0.021 | 0.043 | 0.064 | 0.010
Negative OCC | 0.011 | 0.007 | 0.005 | 0.010
Unsupervised OCC | 0.004 | 0.004 | 0.090 | 0.011
VIME | 0.012 | 0.013 | 0.023 | 0.019
DANN | 0.033 | 0.027 | 0.013 | 0.021
SPADE (Ours) | 0.001 | 0.001 | 0.001 | 0.009
Table 12: Standard deviations of the experiments with two real-world fraud
detection datasets in terms of overall AUC.
## Appendix D Additional Experiments
### D.1 Labeling high-risk samples
In this subsection, we evaluate the performance of SPADE in PNU settings in
which only high-risk samples are labeled, a common practical setting in fraud
detection applications (including anti-money laundering). More specifically, a
predictive model first estimates the anomaly scores of the unlabeled data; the
users then manually check only the samples with high anomaly scores and label
them as either positive or negative. Thus, most labeled samples are high-risk
samples and most unlabeled samples are low-risk samples, which makes the
labeled and unlabeled data distributions differ.
Similar to the easiness scenario, we first train a simple logistic regression
model (with the full labels) and compute the anomaly scores of the unlabeled
data. We then extract only the high-risk samples (e.g., those with the top 2%
highest anomaly scores) and provide true labels for 50% of them, selected
uniformly at random. This means we have 1% labeled data (either positive or
negative) and 99% unlabeled data. We exclude the original OCC as a baseline
because in some cases there are too few negatively labeled samples for the OCC
to converge.
Datasets | Thyroid | Drug | Covertype
---|---|---|---
Labeling ratio | 1% | 1.5% | 2.5% | 1% | 1.5% | 2.5% | 1% | 1.5% | 2.5%
Supervised | 0.758 | 0.984 | 0.984 | 0.578 | 0.655 | 0.615 | 0.619 | 0.602 | 0.669
Negative Supervised | 0.726 | 0.814 | 0.905 | 0.697 | 0.727 | 0.778 | 0.635 | 0.667 | 0.734
Negative OCC | 0.466 | 0.468 | 0.469 | 0.725 | 0.729 | 0.734 | 0.829 | 0.836 | 0.848
Unsupervised OCC | 0.502 | 0.526 | 0.519 | 0.763 | 0.766 | 0.769 | 0.846 | 0.851 | 0.865
VIME | 0.677 | 0.703 | 0.717 | 0.669 | 0.681 | 0.690 | 0.841 | 0.843 | 0.847
DANN | 0.735 | 0.744 | 0.749 | 0.724 | 0.747 | 0.761 | 0.749 | 0.762 | 0.769
SPADE (Ours) | 0.924 | 0.983 | 0.981 | 0.828 | 0.835 | 0.838 | 0.871 | 0.867 | 0.865
Table 13: Experimental results with labeling only on high-risk samples in
terms of overall AUC.
Table 13 shows that SPADE achieves superior or similar anomaly detection
performance compared to the best alternative.
### D.2 Additional ablation studies
Scenarios | New anomaly types | | Easiness | |
---|---|---|---|---|---
Variants | Drug | Covertype | Thyroid | Drug | Covertype
(i) No partial matching | 0.827 | 0.916 | 0.811 | 0.830 | 0.869
(ii) No ensemble | 0.830 | 0.915 | 0.786 | 0.830 | 0.876
(iii) $\beta=0$ (No self-supervised) | 0.829 | 0.919 | 0.818 | 0.827 | 0.877
(iv) No normal samples | 0.835 | 0.922 | 0.822 | 0.841 | 0.887
(v) Majority vote | 0.835 | 0.918 | 0.807 | 0.839 | 0.890
SPADE | 0.837 | 0.928 | 0.833 | 0.846 | 0.892
Table 14: Ablation studies on multiple tabular datasets with new anomaly and
easiness settings: (i) without partial matching, (ii) without an ensemble of
OCC, (iii) with $\beta=0$ (No self-supervised learning), (iv) without normal
samples for pseudo-labeler training, (v) majority vote instead of unanimous
votes for pseudo-labeling.
# The disappearances of six supernova progenitors
Schuyler D. Van Dyk,1 Asia de Graw,2 Raphael Baer-Way,2 WeiKang Zheng,2 Alexei
V. Filippenko,2 Ori D. Fox,3 Nathan Smith,4 Thomas G. Brink,2 Thomas de
Jaeger,2,5 Patrick L. Kelly6 and Sergiy S. Vasylyev2
1Caltech/IPAC, Mailcode 100-22, Pasadena, CA 91125, USA
2Department of Astronomy, University of California, Berkeley, CA 94720-3411,
USA
3Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD
21218, USA
4Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson,
AZ 85721, USA
5Institute for Astronomy, University of Hawai‘i, 2680 Woodlawn Dr., Honolulu,
HI 96822, USA
6University of Minnesota, School of Physics and Astronomy, 116 Church St. SE,
Minneapolis, MN 55455, USA
E-mail: <EMAIL_ADDRESS> (SVD)
0000-0001-9038-9950 0000-0002-2636-6508 0000-0003-3460-0103
0000-0003-2238-1572 0000-0001-5510-2424 0000-0001-5955-2502
0000-0001-6069-1139 0000-0003-3142-997X 0000-0002-4951-8762
(Accepted 2022 November 29. Received 2022 November 28; in original form 2022
September 16)
###### Abstract
As part of a larger completed Hubble Space Telescope (HST) Snapshot program,
we observed the sites of six nearby core-collapse supernovae (SNe) at high
spatial resolution: SN 2012A, SN 2013ej, SN 2016gkg, SN 2017eaw, SN 2018zd,
and SN 2018aoq. These observations were all conducted at sufficiently late
times in each SN’s evolution to demonstrate that the massive-star progenitor
candidate identified in each case in pre-explosion imaging data had indeed
vanished and was therefore most likely the actual progenitor. However, we have
determined for SN 2016gkg that the progenitor candidate was most likely a
blend of two objects: the progenitor, which itself has likely vanished, and
another closely-neighbouring star. We thus provide a revised estimate of that
progenitor’s properties: a binary system with a hydrogen-stripped primary star
at explosion with effective temperature $\approx 6300$–7900 K, bolometric
luminosity $\approx 10^{4.65}$ L⊙, radius $\approx 118$–154 $R_{\odot}$, and
initial mass 9.5–11 M⊙. Utilising late-time additional archival HST data
nearly contemporaneous with our Snapshots, we also show that SN 2017eaw had a
luminous ultraviolet excess, which is best explained as a result of ongoing
interaction of the SN shock with pre-existing circumstellar matter. We offer
the caveat, particularly in the case of SN 2013ej, that obscuration from SN
dust may be compromising our conclusions. This sample adds to the growing list
of confirmed or likely core-collapse SN progenitors.
###### keywords:
stars: massive – stars: evolution – supergiants – binaries: general –
supernovae: individual – SN 2012A, SN 2013ej, SN 2016gkg, SN 2017eaw, SN
2018zd, SN 2018aoq
## 1 Introduction
Supernovae (SNe), the explosive endpoints of stellar evolution, are among the
most energetic events in the Universe. A consensus is that explosions of white
dwarfs with donor companions in binary systems lead to thermonuclear Type Ia
SNe. The terminal event in the lives of stars with zero-age main sequence
(ZAMS) masses $M_{\rm ZAMS}\gtrsim 8$ M⊙ is the collapse of the stellar core
at the end of nuclear burning — such core-collapse SNe (CCSNe) are observed as
Type Ib, Ic, and II. (See Filippenko 1997 for a review of SN classification.)
In the local Universe, CCSNe constitute the overwhelming majority, $\sim$76%,
of all SNe (Li et al., 2011; Graur et al., 2017). Among CCSNe, the Type II SNe
(SNe II) have classically been further divided into II-Plateau (II-P) and II-
Linear (II-L; although the actual existence of a II-L distinction has been
recently questioned, e.g., Anderson et al. 2014; Valenti et al. 2016); SNe
II-P are $\sim$48% of all CCSNe (Smith et al., 2011). A minority ($\sim$14%)
of all SNe II appear to be dominated by interaction of the SN shock with a
pre-existing circumstellar medium (CSM) and manifest themselves
spectroscopically as II-narrow (IIn; e.g., Schlegel, 1990; Smith, 2017). The
remainder of the CCSNe are the so-called “stripped-envelope” SNe (SESNe), in
which the outer H-rich envelope of the progenitor star has been greatly or
entirely removed, and for some SESNe even the He layer has been stripped away,
prior to explosion. Included among the SESNe are the SNe IIb ($\sim$10% of all
CCSNe; Smith et al. 2011; Shivvers et al. 2017), for which the progenitor
retains a mass of hydrogen $M_{H}\lesssim 0.15$ M⊙ at the time of its demise
(Yoon et al. 2017; although more-extended SN IIb progenitors may exceed this
limit).
Mapping the various SN types to their progenitor systems is one of the primary
goals of SN astrophysics. This knowledge can be acquired in various indirect
ways, such as modeling SN light curves via progenitor population synthesis
(e.g., Eldridge et al. 2018), probing the SN ejecta at late times (e.g.,
Milisavljevic et al. 2012; Jerkstrand et al. 2012), analysing the emission
from the CSM ionized by early X-ray/ultraviolet (UV) radiation from the SN
explosion (e.g., Khazov et al. 2016), and inferring ages and therefore turn-
off masses from the SN’s local stellar environment (e.g., Maund 2017; Williams
et al. 2018; Corgan et al. 2022). However, the most direct means of obtaining
insight into the progenitor-SN connection is the precise identification of the
star itself as seen at some time before the explosion (Smartt et al., 2009;
Smartt, 2015; Van Dyk, 2017b). Whilst a few progenitors have been located
using ground-based data, including SN 1978K (Ryder et al., 1993), SN 1987A
(e.g., Sonneborn et al. 1987), SN 1993J (Richmond et al., 1993; Aldering et
al., 1994), SN 2004et (Li et al., 2005), and SN 2008bk (Mattila et al., 2008;
Van Dyk et al., 2012), the majority have been identified in serendipitous pre-
explosion imaging obtained with the Hubble Space Telescope (HST; Van Dyk
2017a, and references therein).
However, the identified star, or stellar system, remains merely a progenitor
candidate, until its confirmation as the true progenitor through its
disappearance. The key to this pursuit is waiting until the SN has faded
sufficiently so one can successfully determine that the remaining light is
less luminous than the original luminosity of the progenitor. This anticipated
fading can be curtailed, though, if the SN is still interacting with a CSM,
leading to excess luminosity above what would be expected from the gradual
exponential decline rate associated with the reprocessing of $\gamma$-rays and
positrons from radioactive decay. To date, the community has confirmed the
progenitors of SN 1987A (Gilmozzi et al., 1987; Walborn et al., 1987;
Sonneborn et al., 1987); SN 1993J and SN 2003gd (Maund & Smartt, 2009); SN
2004A, SN 2005cs, and SN 2006my (Maund et al., 2014); SN 2005gl (Gal-Yam &
Leonard, 2009); SN 2008ax (Folatelli et al., 2015); SN 2008bk (Mattila et al.,
2010; Van Dyk, 2013); SN 2008cn (Maund et al., 2015); SN 2009ip (Smith et al.,
2022); SN 2010bt (Elias-Rosa et al., 2018); SN 2011dh (Van Dyk et al., 2013);
SN 2012aw (Van Dyk et al., 2015; Fraser, 2016); iPTF13bvn (Folatelli et al.,
2016; Eldridge & Maund, 2016); SN 2015bh (Jencson et al., 2022); and AT
2016jbu (Brennan et al., 2022). On the other hand, observing at sufficiently
late times has also revealed that some candidates likely were not the
progenitors at all, including SN 1999ev (Maund et al., 2014); SN 2006ov
(Crockett et al., 2011); SN 2009kr and SN 2009md (Maund et al., 2015); and, SN
2010jl (Fox et al., 2017).
Of course, it is possible that, rather than the progenitor candidate outright
vanishing, dust has formed and accumulated after the explosion and is
obscuring the remaining emission from the SN. This would be particularly
relevant if the progenitor candidate had experienced and survived a non-
terminal explosion and was then obscured by the dust. In order to eliminate
this possibility, observations of the SN would have to be undertaken in the
infrared (IR). This has generally not been done for the previously-mentioned
SNe. However, in a number of objects, such as SN 2003gd, the mass of dust
estimated to be present at late times is quite low ($4\times 10^{-5}$ M⊙;
Meikle et al. 2007) based on Spitzer Space Telescope IR observations. For
other SNe this may not necessarily be the case; we therefore offer this as a
caveat on results of this kind. Also, unfortunately very few progenitor
candidates have had pre-explosion IR counterparts isolated with which to
compare directly. This situation may well change with the advent of the James
Webb Space Telescope (JWST).
In this paper, as part of an HST observing program that we had executed, we
report on the apparent disappearances of six recent SNe (SNe 2012A, 2013ej,
2016gkg, 2017eaw, 2018zd, and 2018aoq), adding to the growing list of
confirmed progenitors. This sample includes two normal SNe II-P, one low-
luminosity SN II-P, one luminous possible SN II-P/II-L hybrid, one possible
electron-capture event (or another SN II-P), and one SN IIb.
## 2 Observations and Reductions
Observations were successfully conducted with HST as a Cycle 28 Snapshot
program, GO-16179 (PI A. Filippenko). The sole instrument utilised during this
program was the Wide Field Camera 3 (WFC3) with the UV/Visible channel (UVIS).
A total of 37 visits were executed (out of 54 requested), in which data were
acquired in two photometric bands within a single orbit per visit. A small
line dither was employed between the two short exposures (frames) in each of
the two bands, in an attempt to mitigate against cosmic-ray (CR) hits and
imaging-array imperfections. A full description of the program and the
complete results from it will be presented in a forthcoming paper. We note
that no Exclusive Access period was imposed on the program’s data, which were
public as soon as they were posted in the Mikulski Archive for Space
Telescopes (MAST). Here we highlight the results from the six visits that
included the SNe mentioned above. Several of these SNe were expressly targeted
in order to determine whether the progenitors had indeed vanished.
The two frames per band per visit were mosaicked using the routine
Astrodrizzle (STScI Development Team, 2012). An important outcome from running
this routine is that CR hits are masked in the data quality (DQ) array of each
frame. All of the frames per visit were then processed with Dolphot (Dolphin,
2016), in order to extract photometry via point-spread-function (PSF) fitting
of all of the stellar (or stellar-like) objects detected above a nominal
3$\sigma$ threshold in the mosaic of one band that is used as a reference
image. We adopted the Dolphot input parameters FitSky=3, RAper=8, and
InterpPSFlib=1 (since the UVIS frames have already been corrected in the
standard pipeline for charge-transfer efficiency, we set WFC3useCTE=0). The
photometric results from Dolphot are returned in Vega magnitudes. We then
employed prior images, which in most cases were previous HST images of the SN
or of its progenitor, to precisely isolate the late-time counterpart of the SN
in a Snapshot visit. In some cases below, Dolphot did not detect a counterpart
at the SN position; even a visual inspection in these cases readily revealed
that the SN was no longer detectable to the exposure depth of the data.
Calculating upper limits in these instances is not straightforward. After some
consideration we have chosen to set the upper limits based on the formal
signal-to-noise ratio (S/N) provided by Dolphot. However, we point out that
Williams et al. (2014) found that Dolphot tends to underestimate photometric
uncertainties, particularly in crowded environments; thus, setting accurate
limits, based on S/N, at the lowest flux levels is hampered by this
underestimation. Similarly, we have since found that injecting artificial
stars with Dolphot at an SN site at flux levels near the image noise floor
(e.g., Van Dyk et al. 2016) tends to lead to recovered fluxes that are
systematically too high (possibly as a result of the PSF fitting including too
much background around the injected star). Fortunately, the locations for
which we need to impose upper limits are relatively uncrowded environments,
and the photometric uncertainties are probably only underestimated by factors
of a few (Williams et al., 2014). To be conservative, therefore, we hereafter
set upper limits at S/N $=5$ based on photometry around the SN site, with the
realisation that these limits could well be at S/N $\approx 1.7$–2.5. The
reader should take all of these limitations and caveats into account when we
quote detection upper limits in this paper.
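As a minimal sketch of how an S/N-based limit translates into a magnitude: given a photometric zeropoint ZP and a flux uncertainty $\sigma$ in the same linear flux units, the S/N $=5$ limiting magnitude is ${\rm ZP}-2.5\log_{10}(5\sigma)$. The zeropoint and noise level below are illustrative values, not the actual calibration of these data:

```python
import math

def snr_limit_mag(zeropoint: float, sigma_flux: float, snr: float = 5.0) -> float:
    """Magnitude corresponding to a flux of snr * sigma_flux."""
    return zeropoint - 2.5 * math.log10(snr * sigma_flux)

# Illustrative numbers only (hypothetical Vega zeropoint and flux noise):
m_lim = snr_limit_mag(zeropoint=24.6, sigma_flux=0.0662)
```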
## 3 Discussion of Individual Supernovae
### 3.1 SN 2012A
Tomasella et al. (2013) demonstrated through detailed optical/IR photometric
and spectroscopic monitoring of SN 2012A in NGC 3239 that it was a normal SN
II-P. We now have overwhelming evidence that SNe II-P, as theoretically
expected, arise from massive stars in the red supergiant (RSG) phase, with
$M_{\rm ZAMS}$ in the range of $\sim$8–17 M⊙ (Smartt et al., 2009; Smartt,
2015), although arguments have been made that the range may extend up to
$\sim$19 M⊙ or higher (Davies & Beasor, 2018). In the case of SN 2012A, Prieto
et al. (2012) first identified a point source at the SN position in high-
resolution pre-explosion Gemini-North Near-InfraRed Imager and Spectrometer
(NIRI) $K^{\prime}$ images. The detection was only in this one single band.
Their preliminary estimate of the apparent brightness of the progenitor
candidate was $K=20.1\pm 0.2$ mag. Adjusting this measurement to an assumed
distance to the host galaxy resulted in an absolute brightness
$M_{K}\approx-9.4$ mag, consistent with expectations for RSGs.
Tomasella et al. (2013) reanalysed the Gemini data and, photometrically
calibrating with 2MASS $K_{s}$ (similarly to Prieto et al. 2012), found that
the candidate had $K^{\prime}$= $20.29\pm 0.13$ mag. Given the distance to the
host galaxy and the $K$-band extinction they assumed, as well as an estimated
bolometric correction for $K$, they concluded that the candidate had a
bolometric magnitude $-7.08\pm 0.36$ and thus a bolometric luminosity
$\log(L/{\rm L}_{\odot})=4.73\pm 0.14$. From a comparison to theoretical
single-star evolutionary tracks, Tomasella et al. (2013) concluded that the
RSG star had $M_{\rm ZAMS}=10.5^{+4.5}_{-2}$ M⊙. As those authors noted, their
treatment of the progenitor candidate had not included possible circumstellar
dust in the RSG envelope, although they reasoned that any circumstellar
extinction in the optical would be less by about a factor of $\sim$10 at $K$,
so the impacts of dust should be minimal on their luminosity estimate for the
star.
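The conversion from bolometric magnitude to luminosity used above follows from $\log(L/{\rm L}_{\odot})=(M_{{\rm bol},\odot}-M_{\rm bol})/2.5$; taking the solar bolometric magnitude to be 4.74 (a standard assumed value, not stated in the paper) reproduces the quoted number:

```python
M_BOL_SUN = 4.74                      # assumed solar bolometric magnitude

def log_luminosity(m_bol: float) -> float:
    """log10(L / L_sun) from an absolute bolometric magnitude."""
    return (M_BOL_SUN - m_bol) / 2.5

# Tomasella et al. (2013): M_bol = -7.08 +/- 0.36 for the SN 2012A candidate.
logL = log_luminosity(-7.08)          # approximately 4.73
```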
The HST observations of SN 2012A from our program in F606W and F814W occurred
on 2021 February 16 (UT dates are used throughout this paper), 3329 d ($\sim
9.1$ yr) after explosion (Tomasella et al., 2013). The total exposure times
were 710 s (F606W) and 780 s (F814W). (Total exposure times will be listed
hereafter in parentheses.) We focus on the F814W band, since it is the closer
of the two Snapshot bands to $K^{\prime}$ in wavelength. A comparison between
$K^{\prime}$ and F814W is shown in Figure 1. For the purposes of the figure,
we reconstructed the $K^{\prime}$ mosaic from the original NIRI data (in a
similar fashion to that of Tomasella et al. 2013). To locate the SN position
in the F814W image, we used 12 stars in common to astrometrically register the
two images, with a $1\sigma$ root-mean-square (RMS) uncertainty of 0.19 UVIS
pixel (7.5 mas). We estimated an upper limit at that position of $>25.8$ mag
in F814W (for completeness we note an upper limit in F606W of $>26.9$ mag).
For context we further extrapolated the observed exponential decline in the
$I$-band light curve from Tomasella et al. (2013) to the SN 2012A Snapshot
observation date and would expect the SN to have been at $I\approx 55$ mag
(!). We can therefore fairly safely rule out any residual SN emission. We note
that we have considered the upper limit at F814W, but the actual upper limit
at $K^{\prime}$ would depend on the expected ${\rm F814W}-K^{\prime}$ colour
for an RSG. Assuming an RSG with effective temperature $T_{\rm eff}=3600$ K,
and including the assumed extinction to SN 2012A, the colour of the star would
be $\sim 2.48$ mag, such that the presumed limit on the progenitor candidate
is then $K^{\prime}\gtrsim 23.3$ mag. If dust from the SN were obscuring the
candidate, this would require an extinction $A_{K}\gtrsim 3.0$ mag
($A_{V}\gtrsim 27.3$ mag [!]) from such dust. We consider it far more likely
that the putative RSG progenitor candidate has vanished.
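The chain of estimates in the last few sentences can be checked directly; the only assumption beyond the text is the near-infrared extinction ratio $A_{K}/A_{V}\approx 0.11$, a commonly adopted value:

```python
f814w_limit = 25.8      # measured F814W upper limit (mag)
colour = 2.48           # assumed F814W - K' colour of a 3600 K RSG (mag)
k_progenitor = 20.29    # pre-explosion K' of the candidate (mag)

k_limit = f814w_limit - colour    # presumed K' limit, ~23.3 mag
a_k = k_limit - k_progenitor      # extinction needed to hide the star
a_v = a_k / 0.11                  # assumed A_K / A_V ~ 0.11
```

This recovers $A_{V}\gtrsim 27$ mag, in line with the quoted value.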
Figure 1: Left: A portion of the pre-explosion Gemini-N NIRI $K^{\prime}$-band
mosaic from 2006 May 13, with the progenitor candidate for SN 2012A indicated
by tick marks (see also Tomasella et al. 2013, their Figure 16). Right: A
portion of the WFC3/UVIS F814W mosaic from 2021 February 16, with the
corresponding position of the progenitor candidate encircled. No source is
detected at the SN position to $>25.8$ mag in that band. North is up, and east
is to the left.
### 3.2 SN 2013ej
SN 2013ej in NGC 628 (M74) drew immediate attention, owing to its relative
proximity (9.8 Mpc) and its occurrence in a host of multiple SNe. Several
studies presented results of intensive UV/optical/IR photometric and
spectroscopic monitoring (e.g., Valenti et al. 2014; Bose et al. 2015; Huang
et al. 2015; Dhungana et al. 2016; Yuan et al. 2016). The unusually long rise,
comparatively shorter and steeper plateau phase, high luminosity, and spectral
indications of CSM interaction were highly suggestive of an unusual SN II,
with characteristics pointing toward the classical II-L designation. At the
very least, it appears it could be considered an atypical SN II-P or an SN
II-P/II-L hybrid, as some in the community have already been tagging it.
Chakraborti et al. (2016) observed and modeled X-ray emission from SN 2013ej
and found evidence for steady mass loss during the final 400 yr of the
progenitor star. Polarimetry, both photometric (Kumar et al., 2016) and
spectroscopic (Mauerhan et al., 2017; Nagao et al., 2021), have pointed to a
complex circumstellar environment around the SN.
Fraser et al. (2014) identified a progenitor candidate in pre-explosion HST
images obtained with the Advanced Camera for Surveys (ACS)/Wide-Field channel
(WFC) in bands F435W, F555W, and F814W on 2003 November 20, 2003 December 29
(F555W only), and 2005 June 16. What those authors found was that SN 2013ej
was most coincident with the source at F814W, at $22.66\pm 0.03$ mag (Mauerhan
et al. 2017 measured a slightly brighter $22.60\pm 0.01$ and $22.57\pm 0.02$
mag from 2003 and 2005, respectively), but that the photocentres of the source
at both F435W and F555W were significantly offset (by $\sim$40–47 mas, an
$8\sigma$ displacement; at the M74 distance, this is $\sim 2$ pc) relative to
F814W, implying that the source detected at F435W and F555W is unrelated to
the one at F814W, which they ascribed to the SN progenitor. (Lead author S.
Van Dyk had conducted a similar unpublished analysis and found essentially the
same results.) The unrelated source was at $25.14\pm 0.07$ and $24.98\pm 0.05$
mag in F435W and F555W, respectively. (Wide Field and Planetary Camera 2
[WFPC2] F336W data also exist, with the detected source at $23.31\pm 0.14$
mag.) Assuming that the candidate was an RSG, Fraser et al. (2014), based on
F814W only, concluded that its luminosity was in the range $\log(L/{\rm
L}_{\odot})=4.46$–4.85; comparing to the endpoints of stellar evolution
models, $M_{\rm ZAMS}=8$–15.5 M⊙, or at the least, $<16$ M⊙. We note that,
based on the photometric and spectroscopic observations, the progenitor mass
was estimated to be $M_{\rm ZAMS}\approx 14$ M⊙, 12–13 M⊙, $\sim$15 M⊙, and
12–15 M⊙ by Bose et al. (2015), Huang et al. (2015), Dhungana et al. (2016),
and Yuan et al. (2016), respectively.
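The quoted physical offset follows from small-angle geometry: a separation $\theta$ (in arcseconds) at distance $D$ corresponds to $D\,\theta/206265$. Taking the midpoint of the 40–47 mas range at 9.8 Mpc (the midpoint is our choice for illustration):

```python
ARCSEC_PER_RAD = 206265.0

def offset_pc(distance_mpc: float, theta_mas: float) -> float:
    """Projected physical separation in parsecs."""
    return distance_mpc * 1e6 * (theta_mas * 1e-3) / ARCSEC_PER_RAD

sep = offset_pc(9.8, 43.5)   # midpoint of the 40-47 mas offset; ~2 pc
```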
The Snapshot observations of SN 2013ej were obtained on 2021 August 19 at
F555W (710 s) and F814W (780 s), 2949.5 d ($\sim$8.1 yr) after explosion.
(Note that we had also obtained F438W and F625W snapshots, as part of the same
HST program, in 2021 February of the site of AT 2019krl, which is also in M74;
however, the SN 2013ej site unfortunately fell within the chip gap for both of
those bands.) One can see in Figure 2 that a source at the SN position is
still clearly detected in the observations. We measured brightnesses of
$24.26\pm 0.03$ and $23.39\pm 0.03$ mag in the two respective bands. Comparing
with the published values (Fraser et al., 2014), the light at the position of
the progenitor candidate is $\sim$0.75 mag fainter in F814W than in 2003 or
2005. Mauerhan et al. (2017) reported that, based on WFC3/UVIS observations
from 2015 December (GO-14116; PI S. Van Dyk) and 2016 October (GO-14668; PI A.
Filippenko), the candidate was $\sim$0.46 mag fainter; however, we consider
that the decline to the 2021 value is a more convincing decrease. We therefore
believe we can now safely and strongly say that the candidate has vanished and
was therefore the progenitor.
Nevertheless, the light in F555W is still brighter, by $\sim$0.61 mag, than
what was measured pre-explosion, implying that the SN itself is still
contributing significantly, likely as a result of ongoing CSM interaction
(e.g., the H$\alpha$ and [O i] emission spectroscopically detected by Mauerhan
et al. 2017 in 2016 are within the F555W bandpass). Furthermore, the SN image
in F555W is offset by 0.38 UVIS pixel relative to F814W (compared to an
astrometric 1$\sigma$ uncertainty of 0.15 pixel), and the Dolphot output
parameter “roundness” is 0.353 at F555W, whereas it is 0.222 at F814W. So, we
surmise that the light at F555W includes not only the old SN, but also light
from the previously-mentioned contaminating neighbouring source or sources.
Therefore, although the SN has now faded substantially in F814W, it is not yet
faint enough in both bands that we can obtain a more detailed and less
confused view of the immediate SN environment (within $\sim$1–2 pixels).
Before we can be certain about the SN 2013ej progenitor, though, we must note
that this case is not nearly as clear-cut as for SN 2012A, above, owing to the
likely presence of dust formed in a cold dense shell behind the reverse shock
in the SN-CSM interaction region (Mauerhan et al., 2017). Mauerhan et al.
(2017) observed a very late-time rebrightening in the mid-IR, based on Spitzer
data, between $\sim$700 and 1000 d after explosion (see also Szalai et al.
2021). Furthermore, the SN was clearly detected in early observations in 2022
with the JWST NIRCam (Pierel et al., 2022). So, we cannot at this point
neglect the effects of dust on the light that we measure in our Snapshot data.
We therefore strongly encourage further monitoring of SN 2013ej with HST, as
well as with JWST, for as long as possible.
Figure 2: Left: A portion of the pre-explosion ACS/WFC F814W mosaic from 2003
November 20, with the progenitor candidate for SN 2013ej indicated by tick
marks (see also Fraser et al. 2014, their Figure 1). Right: A portion of the
WFC3/UVIS F814W mosaic from 2021 August 19, with the corresponding position of
the progenitor candidate indicated by tick marks. A source is still clearly
visible in the 2021 Snapshots in both F555W and F814W. North is up, and east
is to the left.
### 3.3 SN 2016gkg
SN 2016gkg in NGC 613 was discovered by an Argentinian amateur astronomer
literally within a few hours after explosion, and was soon shown to be an SN
IIb. Arcavi et al. (2017) modeled the very early shock cooling after the
initial luminosity peak and determined that the progenitor had a radius of
$\sim$40–150 $R_{\odot}$ with $\sim 2$–$40\times 10^{-2}$ M⊙ in its extended
envelope. Piro et al. (2017) also modeled the first cooling peak and found
$\sim$180–260 $R_{\odot}$ and $\sim 0.02$ M⊙. The double-peaked light curve,
characteristic of some SNe IIb, became evident with continued early
monitoring. Tartaglia et al. (2017) reported on the isolation of a progenitor
candidate in HST/WFPC2 F450W, F606W, and F814W images. Based on analysis of
their photometry of what they considered “Source A” (the candidate), they
inferred that the progenitor was of about mid-F spectral type and $M_{\rm
ZAMS}=15$–20 M⊙. Kilpatrick et al. (2017) had also identified the progenitor
candidate in the same HST data and characterised it as having a temperature of
9500 K and luminosity $\log(L/{\rm L}_{\odot})=5.15$, consistent with an A0 Ia
class; their estimate of the radius was $\sim 257\,R_{\odot}$. Through
Bayesian inference, Sravan et al. (2018) concluded the probability that the
progenitor was a binary system was as high as 44%, with a main-sequence or
red-giant companion. At late times (300–800 d), Kuncarayakti et al. (2020)
found evidence from nebular spectroscopy for an asymmetric explosion with two-
velocity-component ejecta, whilst radio emission detected up to 1429 d
provided evidence of SN shock-CSM interaction.
Bersten et al. (2018) also fully characterised both the SN properties and the
progenitor. They found from their hydrodynamic modeling that an envelope
radius of $\sim 320\,R_{\odot}$ with $\sim$ 0.01 M⊙, and ejecta mass $\sim$3.4
M⊙, best reproduced the observed SN properties. Furthermore, numerical
simulations of the properties of the progenitor — $T_{\rm eff}\approx 7250$ K
and $\log(L/{\rm L}_{\odot})\approx 5.10$ — inferred from analysis of the pre-
explosion photometry after unambiguously identifying the candidate, implied
that the progenitor was a binary system consisting of components $M_{\rm
ZAMS}=19.5$ M⊙ (primary) and 13.5 M⊙ (companion) with an initial orbital
period of 70 d. Those were the initial parameters of the binary; in the same
model Bersten et al. (2018) proposed the stripped progenitor’s mass at the
time of explosion was reduced to 4.6 M⊙, whereas the companion had gained mass
through mass transfer and was likely a main sequence star with a mass of
17–20.5 M⊙ (depending on the accretion efficiency) at the time of the
progenitor’s explosion, and the final orbital period had increased
substantially to 631 d.
Our Snapshot observations in F438W (710 s) and F606W (780 s) occurred on 2021
August 19, 1794.7 d (4.9 yr) after explosion. We show in Figures 3 and 4 the
results compared to the pre-SN data. It is clear that the SN environment, as
Tartaglia et al. (2017) had first found, is quite complicated. This is by far
the most complex environment of any of the SNe that we consider in this paper.
We were able to confidently identify which source was most likely associated
with the SN, much as Bersten et al. (2018) had done, by astrometrically
registering our Snapshots to the early-time 2016 WFC3/UVIS imaging of the SN
in F555W (GO-14115; PI S. Van Dyk) with a $1\sigma$ RMS uncertainty of 0.76
UVIS pixel ($0\farcs03$). We note that Kilpatrick et al.
(2022) also analysed our Snapshot images and identified the same SN
counterpart, although they arrived at slightly different values for the
source’s brightness, with somewhat larger uncertainties, i.e., $26.61\pm 0.27$
and $25.10\pm 0.07$ mag in F438W and F606W, respectively; we measured the
corresponding brightness of that source to be $>26.8$ and $24.88\pm 0.04$ mag.
We therefore find the SN to be somewhat brighter in F606W than did Kilpatrick
et al. (2022) and not detected at all in F438W. The sustained late-time
brightness in F606W likely arises, as Kuncarayakti et al. (2020) and
Kilpatrick et al. (2022) have shown, from strong [O i] $\lambda\lambda$6300,
6364 emission, and weaker H$\alpha$+[N ii] emission, within that band.
In the SN environment, as indicated in Figures 3 and 4, we identify three
other objects, in addition to the SN itself; these are labelled as Stars A, B,
and C. The first two stars, along with the SN, are confined to a tight
cluster-like gathering over $\sim 0\farcs4$, whereas
Star C is clearly separated from that cluster, and from the SN, by $\sim
0\farcs2$ (in our assessment, this object is likely
“Star B” of Tartaglia et al. 2017). We measure brightnesses for the three
stars in F438W and F606W (respectively) of $24.96\pm 0.06$ and $24.85\pm 0.04$
(A); $25.85\pm 0.12$ and $25.14\pm 0.05$ (B); and, $25.40\pm 0.08$ and
$25.04\pm 0.05$ mag (C).
We reprocessed the pre-explosion WFPC2 data for the host galaxy yet again with
astrodrizzle and Dolphot, obtaining $24.00\pm 0.15$ and $23.84\pm 0.06$ mag in
F450W and F606W, respectively, for the progenitor candidate. These results
agree quite well at F450W with those of Bersten et al. (2018), although here
we find the progenitor candidate to be somewhat brighter in F606W. (The main
differences in this processing with that of Bersten et al. 2018 is that here
we set WFPC2UseCTE=1 and have also flagged CR hits prior to Dolphot, which had
not been done in the prior study.) We also very carefully inspected the
residual images created after the PSF fitting photometry. By comparing to our
Snapshot data, the progenitor candidate in WFPC2, as extracted in the PSF
fitting, appears to be a blend of the profiles of the progenitor and Star A —
Dolphot detects Star C as a separate object and does not detect Star B at all.
Although we cannot be certain that the blend did not also include at least
some of the light from Star B or from other, fainter objects that were not
detectable, if we subtract the brightness of Star A from the progenitor
candidate, we infer that the brightness of the progenitor alone would have
been $24.58\pm 0.22$ and $24.38\pm 0.22$ mag in F450W and F606W,
respectively111With the compounding effects of a $5\sigma$ detection limit of
$24.8$ mag in F450W, and a $\sim 1.3$ mag difference in brightness and
separation of only $\sim 0\farcs015$ from the
progenitor, it is not surprising that Star B was not detected in the WFPC2
data.. We note that this is in overall agreement, to within the uncertainties,
with the brightness found by Kilpatrick et al. (2022) based on forced
photometry at the progenitor site.
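The inferred progenitor-only brightness follows from subtracting Star A's flux from the blended candidate's flux in each band. Reproducing it with the magnitudes quoted above (uncertainties omitted, and treating Star A's F438W measurement as a proxy for its F450W brightness, as the text implicitly does):

```python
import math

def subtract_star(m_blend: float, m_star: float) -> float:
    """Magnitude of the residual after removing one star's flux from a blend."""
    flux = 10 ** (-0.4 * m_blend) - 10 ** (-0.4 * m_star)
    return -2.5 * math.log10(flux)

m_f450w = subtract_star(24.00, 24.96)   # blend minus Star A; ~24.58 mag
m_f606w = subtract_star(23.84, 24.85)   # ~24.38 mag
```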
If this inference is correct, then we can certainly say that, based on our
measurements from the Snapshot data, the progenitor candidate for SN 2016gkg
has vanished. Kilpatrick et al. (2022) came to a similar conclusion.
If we assume a host-galaxy distance of 18.2 Mpc (based on the Numerical Action
Methods model; Shaya et al. 2017; Kourkchi et al. 2020) — note that this is
significantly different from the distance assumed by Bersten et al. (2018) and
less than that assumed by Kilpatrick et al. (2022) — and only Galactic
foreground extinction, $A_{V}=0.053$ mag (as did Bersten et al. 2018) with
$R_{V}=3.1$, then the absolute brightnesses of the progenitor would be $M_{\rm
F450W}=-6.79\pm 0.22$ and $M_{\rm F606W}=-6.97\pm 0.22$ mag. In Figure 5, we
show this locus for the progenitor in a colour-colour diagram, along with (for
comparison) the endpoints for BPASS v2.2.2 (Stanway & Eldridge 2018) model
binary systems which would lead to SNe IIb, following the prescription of
Eldridge et al. (2017; also J. Eldridge, private communication). As can be
seen from the figure, a number of models compare well, to within the
uncertainties in the progenitor brightness (although, in general, these models
are all systematically somewhat more luminous), with $M_{\rm ZAMS}\leq 11$ M⊙
and a range of component mass ratios $q$; all of the model systems have long
initial orbital periods, $\log(P/{\rm d})\approx 2.4$–2.8 (i.e., $P\approx 251$–631 d). The primary in
all of these model systems terminates with $T_{\rm eff}\approx 6300$–7900 K,
$\log(L/{\rm L}_{\odot})\approx 4.65$, $R_{\rm primary}\approx 118$–154
$R_{\odot}$, and $M_{\rm ejecta}\approx 1.20$–1.45 M⊙. The $T_{\rm eff}$,
$\log(L/{\rm L}_{\odot})$, and $R_{\rm primary}$ are somewhat cooler, more
luminous, and slightly larger (respectively) than those found by Kilpatrick et
al. (2022); however, both that study and this one determined that the primary
was less luminous than previous estimates. The range in radii that we
determine now is significantly less than those by Piro et al. (2017) and
Bersten et al. (2018), but it is still roughly consistent with that of Arcavi
et al. (2017). The model primaries we find also terminate with $M_{H}\approx
0.001$–0.004 M⊙, consistent with the results from the modeling of SNe Ib and
SNe IIb by Dessart et al. (2011). All of these parameters
should be considered upper-limit constraints, since there is bound to have
been fainter stellar emission, other than that of Star A, which was blended
with the progenitor in the detected candidate; however, we have no means here
of determining that beyond much deeper future observations.
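The apparent-to-absolute conversion used above can be sketched numerically. Note that the band-specific extinction scalings below ($A_{\rm F450W}\approx 1.3\,A_V$, $A_{\rm F606W}\approx A_V$) are our own rough approximations for a standard $R_V=3.1$ law, not values quoted in the text:

```python
import math

def absolute_mag(m_app, d_mpc, a_band):
    """M = m - mu - A_band, with distance modulus mu = 5 log10(d / 10 pc)."""
    mu = 5.0 * math.log10(d_mpc * 1e6 / 10.0)
    return m_app - mu - a_band

d = 18.2     # Mpc, Numerical Action Methods distance
a_v = 0.053  # Galactic foreground extinction (mag)

# Assumed band scalings for an R_V = 3.1 law (illustrative only):
m_450 = absolute_mag(24.58, d, 1.3 * a_v)  # F450W, roughly B-band
m_606 = absolute_mag(24.38, d, 1.0 * a_v)  # F606W, roughly V-band
print(round(m_450, 2), round(m_606, 2))
```

With these inputs the sketch reproduces the quoted $M_{\rm F450W}\approx-6.79$ and $M_{\rm F606W}\approx-6.97$ mag.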
We mention here that the most luminous companion (for model [11 M⊙, $q=0.9$,
$\log P=2.4$]) has $T_{\rm eff}\approx 20,750$ K and $\log(L/{\rm
L}_{\odot})=4.10$ at the terminus of the primary. More importantly for
attempts at detecting the companion (e.g., Fox et al. 2022), it has absolute
magnitudes $-5.57$ and $-5.05$ at F275W and F336W, respectively. At the
distance and reddening to SN 2016gkg, these are apparent magnitudes 25.84 and
26.34. In some future companion search, one could reach these brightnesses at
S/N = 4 in total exposure times of $\sim 8700$ and $\sim 6200$ s,
respectively, in these two bands (according to the HST WFC3/UVIS Exposure
Time Calculator, https://etc.stsci.edu/etc/input/wfc3uvis/imaging/).
Interestingly, Kilpatrick et al. (2022) already set an upper limit of 26.0 AB
mag (24.5 Vega mag) at F275W, based on an available 1500 s Snapshot
observation containing SN 2016gkg on 2021 May 14 (GO-16287; PI J. Lyman).
However, we mention two final notes related to this discussion. First, for SNe
IIb the default approach has been to invoke binary models for their
progenitors, whereas the fact that Cassiopeia A was an SN IIb (e.g., Krause et
al. 2008) and also not a binary at death (Kochanek 2018) seems to contradict
the need to search for a surviving companion for this SN type. Second, if
core-collapse SNe produce dust, as is generally believed and observed for SN
1987A (e.g., Matsuura et al. 2015), Cas A (e.g., De Looze et al. 2017), and
the Crab Nebula (e.g., Owen & Barlow 2015), then a binary companion will be
obscured by the dust at these early phases (unless one invokes the scenario
that Kochanek 2017 proposed, of a sufficiently hot companion which can
suppress dust formation) and also should not be detectable; see Kochanek
(2017) for a general discussion of this problem.
Figure 3: Left: A portion of the pre-explosion WFPC2 F450W mosaic from 2001
August 21, with the progenitor candidate for SN 2016gkg indicated by tick
marks (see also, e.g., Bersten et al. 2018, their Extended Data Figure 6). The
progenitor site was very near the edge of the mosaic. Right: A portion of the
WFC3/UVIS F438W mosaic from 2021 August 19 (nearly 20 yr later!), with the
corresponding position of the progenitor candidate indicated by tick marks. We
also identify three other resolved sources in the immediate environment, Stars
“A,” “B,” and “C.” North is up, and east is to the left.
Figure 4: Same as Figure 3, but with left, WFPC2 F606W from 2001 August 21
and, right, WFC3/UVIS F606W mosaic from 2021 August 19.
Figure 5: Left: A colour-magnitude diagram showing the SN 2016gkg progenitor
(green filled square), based on our new estimate of its properties (see text).
For comparison, we show several model binary systems at the terminus of the
primary star from BPASS v2.2.2 (Stanway & Eldridge 2018; purple open circles),
focusing in particular on those nine models which are consistent with the
progenitor locus, to within the uncertainties. See figure legend — the models
are distinguished by the initial mass of the primary, $M_{\rm prim}$; the mass
ratio, $q$; and the initial orbital period (in d), $\log P$. Right: A
Hertzsprung-Russell diagram showing the evolutionary tracks for the primary of
the nine BPASS models (solid lines), as well as the tracks for several of the
corresponding secondaries (companions; dashed lines) — less luminous companion
tracks are not shown. The open stars denote the locus of the terminus for each
model primary.
### 3.4 SN 2017eaw
SN 2017eaw in NGC 6946 generated significant interest in the community, since
it is the tenth historical SN to occur in what has been nicknamed the
“Fireworks Galaxy.” Extensive optical and infrared follow-up observations of
this luminous SN II-P ensued (e.g., Tsvetkov et al. 2018; Tinyanont et al.
2019; Van Dyk et al. 2019; Szalai et al. 2019). Kilpatrick & Foley (2018), Van
Dyk et al. (2019), and Rui et al. (2019) all characterised the progenitor
candidate that was identified in several HST bands, as well as in the
shortest-wavelength Spitzer data. Specifically, Van Dyk et al. (2019) measured
the progenitor candidate to have $26.40\pm 0.05$ and $22.87\pm 0.05$ mag in
F606W and F814W, respectively, on 2016 October 26. Those authors conducted
detailed dust radiative-transfer modeling of the candidate’s spectral energy
distribution (SED) and concluded that the star was a dusty, luminous RSG with
$M_{\rm ZAMS}\approx 15$ M⊙.
Our Snapshot observations in F555W (710 s) and F814W (780 s) are from 2020
November 11, 1279.6 d (3.5 yr) after explosion. We show in Figure 6 the F814W
observation compared to the pre-explosion F814W image from 2016 October 26.
From these observations we measure $23.84\pm 0.02$ and $23.17\pm 0.03$ mag in
F555W and F814W, respectively. For further interpretation here, we assume a
distance to the host galaxy of $7.11\pm 0.38$ Mpc ($\mu=29.26\pm 0.12$ mag),
adopting the tip-of-the-red-giant-branch (TRGB) distance estimate from the
Extragalactic Distance Database (http://edd.ifa.hawaii.edu/; note that this
differs from the TRGB distance that Van Dyk et al. 2019 determined and
assumed). Following Van Dyk et al. (2019), we assume that the extinction to SN
2017eaw is only due to the Galactic foreground, $A_{V}=0.941$ mag (Schlafly &
Finkbeiner 2011; with $R_{V}=3.1$). Referring to the candidate measurements by
Van Dyk et al. (2019), the SN at F555W is still $\sim 2.7$ mag brighter than
the progenitor was at F606W (the difference in the bandpasses between WFC3
F555W and ACS/WFC F606W has no effect on this brightness disparity). However,
the SN has faded just enough in F814W that we can state it is highly likely
that the progenitor candidate has vanished. This is demonstrated graphically
in Figure 7.
We note that, as the SN has faded and at the higher resolution of WFC3/UVIS,
there does appear visually to be faint, extended, diffuse emission,
particularly in F555W. The star to the southeast of the SN was obvious in the
pre-explosion images, as it is now. However, there are indications in these
late-time images in both bands that, at least partially, fainter stars in the
immediate environment could be contributing to the extended profile and
detected diffuse emission. We can likely exclude a luminous compact light
echo, since no obvious indication of scattered early-time SN light appears in
the Weil et al. (2020) day 900 SN spectrum (although a low-luminosity echo
could still be a component of the late-time emission).
Additionally, we analysed nearly-contemporaneous, publicly-available archival
WFC3 data, from 2020 November 3 (only 8.3 d prior to our Snapshots), in F275W
and F336W from GO-15877 (PI E. Levesque) that serendipitously covered the SN
site. From these we measure $22.72\pm 0.02$ and $24.09\pm 0.04$ mag in the two
respective short-wavelength bands. Astoundingly, we can see in Figure 7 that
the emission at the SN position is significantly brighter in both F336W and
F275W than in F555W. We cannot completely rule out, based on the pre-SN F606W
and F814W progenitor detections, a pre-existing UV-bright source which is
within the PSF of the progenitor; at the distance of the host galaxy, the PSF
full width at half-maximum intensity (FWHM) of $\sim$2.1 UVIS pixels
($\sim$0\farcs08) is $\sim$2.8 pc, so it is conceivable that
the profile could contain additional sources within it. For instance, a single
O-type star with $M_{\rm ZAMS}=15$ M⊙ (i.e., a closely neighbouring massive
star at the same turnoff mass as the progenitor) could have been concealed in
the light of the progenitor, if we take the F606W and F814W pre-SN
measurements at face value (the O-star would have negligible contributions in
the IR bands). However, more than one such star would have been harder to
reconcile without having substantially more effect on the measured progenitor
brightness in the blue. Additionally, what we have detected in the UV/blue at
late times is enormously more luminous than a putative O-star neighbour or
two; furthermore, the shape of the SED of the UV excess would be highly
unusual for a normal star.
We believe that a far simpler explanation for the UV excess seen in 2020 is
ongoing late-time SN shock/CSM interaction. This is further illustrated via
the $U$-band light curve for SN 2017eaw shown in Figure 7, with the addition
of the 2020 F336W detection, which is many magnitudes more luminous than
extrapolation of the decline trend for the early-time $U$ emission, again
implying an additional source for the $U$ flux in 2020. The possibility of
continued interaction is entirely consistent with that implied by the late-
time spectroscopic observations of the SN obtained by Weil et al. (2020) – the
elevated flux at F555W could well be produced by a possible continuation of
the strong, boxy H$\alpha$ emission from day 900 to day 1279.6, since that
prominent emission would be just within the redward wing of the F555W
bandpass. The luminous excess in F275W could be the result of strong Mg ii
$\lambda\lambda$2796, 2803 emission, similar to that observed at late times
from two extraordinary SNe IIn, SN 2010jl (Fransson et al. 2014) and SN 2005ip
(Fox et al. 2020). The slight upturn at F814W could be further enhanced as the
result of SN dust, since we know that dust has been forming (Tinyanont et al.
2019; Szalai et al. 2019).
As a final note here, we can more solidly confirm that the progenitor has
likely vanished, from WFC3 data obtained on 2022 February 12 (GO-16691; PI R.
Foley) with the SN at $23.93\pm 0.04$ and $23.36\pm 0.05$ mag in F555W and
F814W, respectively.
Figure 6: Left: A portion of the pre-explosion ACS/WFC F814W mosaic from 2016
October 26, with the progenitor candidate for SN 2017eaw indicated by tick
marks (see also, e.g., Van Dyk et al. 2019, their Figure 20). Right: A portion
of the WFC3/UVIS F814W mosaic from 2020 November 11, with the corresponding
position of the progenitor candidate indicated by tick marks. North is up, and
east is to the left.
Figure 7: Left: The SED for SN 2017eaw based on WFC3/UVIS observations from
2020 November (solid circles). Also shown is the SED for the progenitor
candidate (open circles), as well as a best-fitting model for the SED (solid
curve, adjusted to the distance assumed here for the host galaxy), from Van
Dyk et al. (2019). Additionally, we show the SED of a $M_{\rm ZAMS}=15$ M⊙
O-type star at solar metallicity (dashed curve). Right: The $U$-band light
curve of SN 2017eaw (solid circles; Tsvetkov et al. 2018; Szalai et al. 2019),
with an extrapolation in time from the observed light-curve decline (dashed
line). Also shown is the F336W data point corresponding to WFC3/UVIS
observations at F336W from 2020 November 3 (GO-15877; PI E. Levesque).
### 3.5 SN 2018zd
SN 2018zd in NGC 2146 has been characterised as either a low-luminosity SN
II-P with CSM interaction, from a $M_{\rm ZAMS}\approx 12$ M⊙ star which
produced a relatively small amount of 56Ni in the core-collapse explosion
(Zhang et al. 2020; Callis et al. 2021), or an electron-capture (EC) event
from a super-asymptotic giant branch (SAGB) progenitor (Hiramatsu et al.
2021). Much of the source of the debate of SN 2018zd’s nature stems from a
lack of definitive knowledge of the host galaxy’s distance (Callis et al.
2021). The EC scenario has underpinnings, among other aspects, in the
identification of a progenitor candidate in HST image data. Hiramatsu et al.
(2021) determined that an object seen pre-explosion at the SN position was
likely stellar and had $25.05\pm 0.15$ mag in F814W (upper limits to detection
were also estimated in F225W and F658N); given the distance they assumed
($9.6\pm 1.0$ Mpc), the luminosity was more consistent with an SAGB star than
with a more massive RSG. However, should the host be at a much larger distance
(e.g., $\sim 18$ Mpc), the identified progenitor might be more in agreement
with an RSG, albeit still at the lower-luminosity end of known SN II-P
progenitors.
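The leverage that the distance ambiguity has on the inferred luminosity is straightforward to quantify: moving the host from 9.6 Mpc to $\sim 18$ Mpc brightens every absolute magnitude by $5\log_{10}(18/9.6)\approx 1.4$ mag, a factor of $\sim 3.5$ in luminosity. A minimal check:

```python
import math

def dist_modulus(d_mpc):
    """Distance modulus mu = 5 log10(d / 10 pc)."""
    return 5.0 * math.log10(d_mpc * 1e6 / 10.0)

# How much brighter every absolute magnitude becomes at the larger distance:
delta_m = dist_modulus(18.0) - dist_modulus(9.6)  # mag
lum_ratio = 10 ** (0.4 * delta_m)                 # corresponding luminosity factor
print(round(delta_m, 2), round(lum_ratio, 1))
```

This factor is large enough to move the candidate from the SAGB regime toward the low-luminosity end of RSG SN II-P progenitors, as noted above.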
Our Snapshots were obtained in F606W (710 s) and F814W (780 s) on 2021
February 7, 1074.7 d (2.9 yr) after explosion. We show the SN site in Figure
8. Neither the SN nor the precursor object are detected in either band, to
$>27.0$ and $>26.1$ mag, respectively. We therefore consider it likely that
the object has vanished and that the candidate, whether an SAGB or RSG, was
indeed the progenitor of SN 2018zd. Again, $\gtrsim 1$ mag of extinction at
F814W from freshly-formed dust would be required to obscure the object at the
SN position, if it were not directly associated with SN 2018zd.
Figure 8: Left: A portion of the pre-explosion ACS/WFC F814W mosaic from 2004
April 10, with the progenitor candidate for SN 2018zd indicated by tick marks
(see also Hiramatsu et al. 2021, their Extended Data Figure 1). Right: A
portion of the WFC3/UVIS F814W mosaic from 2021 February 7, with the
corresponding position of the progenitor candidate encircled. No source is
detected at the SN position to $>26.1$ mag in that band. Note that the pre-SN
image was a single exposure, and CR hits were removed via a deep-learning
algorithm (Hiramatsu et al. 2021); the Snapshot mosaic was created from two
dithered short exposures. Therefore, some residual CR hits may still be
visible in both panels. North is up, and east is to the left.
### 3.6 SN 2018aoq
SN 2018aoq in NGC 4151 is an SN II-P, possibly less luminous than more-normal
events such as SN 1999em, an assessment by O’Neill et al. (2019) that is also
supported by overall lower observed ejecta expansion velocities. Tsvetkov et
al. (2019) provided further photometric and spectroscopic follow-up data, as
well as distance estimates based on the SN itself. O’Neill et al. (2019) took
advantage of the rich array of multiband, multi-epoch pre-explosion WFC3 data,
which were used to measure a Cepheid-based distance to this well-studied
Seyfert 1 galaxy (Yuan et al. 2020), to determine the nature of the progenitor
candidate isolated in those data. O’Neill et al. (2019) had measured an
average brightness for the candidate of $26.68\pm 0.11$ and $23.99\pm 0.04$
mag in F555W and F814W, respectively; along with measurements in F350LP and
F160W, they concluded that the star was an RSG with $\log(L/{\rm
L}_{\odot})\approx 4.7$ and $M_{\rm ZAMS}\approx 10$ M⊙.
The Snapshots were obtained in F555W (710 s) and F814W (780 s) on 2020
December 5, 981.3 d (2.7 yr) post-explosion. We show the image in F814W,
compared to the pre-SN image in the same band, in Figure 9. We do not detect
anything at the SN site to $>26.9$ and $>25.9$ mag in F555W and F814W,
respectively. If the candidate were totally unrelated to the SN progenitor,
extinctions of $A_{\rm F555W}$ and $A_{\rm F814W}\gtrsim 1.3$ mag as a result
of dust would be required to make the candidate no longer detectable. It is
far more likely, in our opinion, that since lower-luminosity SNe II-P do not
tend to be associated with much post-explosion dust (e.g., Meikle et al.
2007), the identified candidate was the progenitor of SN 2018aoq.
Figure 9: Left: A portion of the pre-explosion WFC3/UVIS F814W mosaic from
2015 December 21, with the progenitor candidate for SN 2018aoq indicated by
tick marks (see also O’Neill et al. 2019, their Figure 2). Right: A portion of
the WFC3/UVIS F814W mosaic from 2020 December 5, with the corresponding
position of the progenitor candidate encircled. No source is detected at the
SN position to $>25.9$ mag in that band. North is up, and east is to the left.
## 4 Summary
We have presented the results for six of the 37 visits of nearby SNe from a
successfully completed HST Snapshot program and shown that the progenitor
candidates for these CCSNe have likely vanished, confirming these objects as
the actual progenitors. This became possible only because each of these SNe (SN
2012A, SN 2013ej, SN 2016gkg, SN 2017eaw, SN 2018zd, and SN 2018aoq) had faded
sufficiently in at least one band that the SN became less luminous than
the progenitor candidate. (For SN 2012A, the progenitor candidate was
identified in only $K^{\prime}$ from the ground, not in HST data as was the
case for the other five.) We therefore have added to the list of 17 CCSN
progenitors that have been previously confirmed; this increases the current
sample size of confirmed CCSN progenitors by about 35%. It is essential that
other SNe whose progenitor has not yet been confirmed be observed with HST, or
even JWST, at sufficiently late times.
We offered the caveat that CSM dust could be obscuring a progenitor candidate
enough that it merely appears to have vanished, when in fact it has only been
dimmed by the dust and could still be present. For all but one of the six SNe
presented here, we were able to argue that this is most likely not the case.
However, we urge further monitoring, with both HST and JWST, of SN 2013ej, for
which evidence points to the presence of dust at late times, to confirm our
tentative result presented here.
With the addition of HST F275W and F336W archival data nearly contemporaneous
with our Snapshots, we also showed that SN 2017eaw exhibited a UV/blue excess
that can best be explained by the existence of ongoing, late-time interaction
of the SN with the progenitor’s CSM.
With the notable exception of SN 2016gkg, the other five SNe occurred
relatively isolated from other stars in their immediate environment, although
indications possibly exist of fainter objects in the close vicinity of SN
2017eaw, and we cannot yet successfully distinguish SN 2013ej from a likely
luminous blue object near it. Further observations of SN 2017eaw should reveal
whether it was a member of a stellar cluster, although the aforementioned CSM
interaction could delay such observations from being fruitful for some unknown
duration of time. Similar circumstances, as well as the dust mentioned above,
could complicate future observations of SN 2013ej.
We also demonstrated that the progenitor candidate originally identified in
WFPC2 images of SN 2016gkg likely consisted of a blend of the actual
progenitor with a closely neighbouring star, with the latter object now
more evident in our WFC3 Snapshot data. We then subtracted the light of the
neighbour from that of the candidate and found that the progenitor, if the
primary in a binary system, likely had at explosion $T_{\rm eff}\approx
6300$–7900 K, $\log(L/{\rm L}_{\odot})\approx 4.65$, $R\approx 118$–154
$R_{\odot}$, and $M_{\rm ZAMS}\approx 9.5$–11 M⊙. These parameters represent a
revision of estimates made at earlier times in the SN’s evolution.
In future attempts at progenitor identification, we therefore recommend that,
particularly for SESNe such as SN 2016gkg, any characterization of progenitor
properties based on a candidate identified — especially in pre-explosion WFPC2
images — should be considered provisional, until very late-time follow-up
observations with WFC3/UVIS are possible. Another salient example of this,
although not an SESN, is SN II 2008cn, for which Maund et al. (2015) (via
late-time ACS/WFC imaging) determined that the progenitor candidate, also
identified in WFPC2 data (Elias-Rosa et al. 2009; 2010), was actually a blend
of two sources, the RSG progenitor and another neighbouring star. Other cases
include SN 1999ev (Maund et al. 2014), and SN 2009kr and SN 2009md (Maund et
al. 2015), for which each progenitor candidate persisted and had not vanished.
In general, this cautionary message is particularly pertinent for SNe
occurring in host galaxies at larger distances (e.g., at $\gtrsim 5$–10 Mpc)
and those clearly occurring in relatively crowded environments.
## Acknowledgements
We thank the referee for their review, and for providing comments regarding SN
IIb progenitors and detection of their putative companions. We are also
grateful to Jay Anderson for a discussion regarding source detection and WFC3
charge-transfer efficiency. This research is based on observations, associated
with programs GO-16179 and GO-15877, made with the NASA/ESA Hubble Space
Telescope and obtained from the Space Telescope Science Institute (STScI),
which is operated by the Association of Universities for Research in
Astronomy, Inc., under NASA contract NAS5-26555. Support for GO-16179 was
provided by NASA through a grant from STScI. A.V.F.’s SN team at U.C. Berkeley
also received generous support from the Miller Institute for Basic Research in
Science (where A.V.F. was a Miller Senior Fellow), Gary and Cynthia Bengier
(T. deJ. was a Bengier Postdoctoral Fellow), the Christopher R. Redlich Fund,
and many individual donors.
## Data Availability
All of the HST data analysed herein are publicly available via the Mikulski
Archive for Space Telescopes (MAST) portal,
https://mast.stsci.edu/search/ui/#/hst. All of the photometric results that we
have obtained from these data have been listed above. All other data (e.g.,
BPASS models) are available via other known public sources (e.g.,
https://bpass.auckland.ac.nz/).
## References
* Aldering et al. (1994) Aldering G., Humphreys R. M., Richmond M., 1994, AJ, 107, 662
* Anderson et al. (2014) Anderson J. P., et al., 2014, ApJ, 786, 67
* Arcavi et al. (2017) Arcavi I., et al., 2017, ApJ, 837, L2
* Bersten et al. (2018) Bersten M. C., et al., 2018, Nature, 554, 497
* Bose et al. (2015) Bose S., et al., 2015, ApJ, 806, 160
* Brennan et al. (2022) Brennan S. J., Elias-Rosa N., Fraser M., Van Dyk S. D., Lyman J. D., 2022, A&A, 664, L18
* Callis et al. (2021) Callis E., et al., 2021, arXiv e-prints, p. arXiv:2109.12943
* Chakraborti et al. (2016) Chakraborti S., et al., 2016, ApJ, 817, 22
* Corgan et al. (2022) Corgan A., Smith N., Andrews J., Filippenko A. V., Van Dyk S. D., 2022, MNRAS, 510, 1
* Crockett et al. (2011) Crockett R. M., Smartt S. J., Pastorello A., Eldridge J. J., Stephens A. W., Maund J. R., Mattila S., 2011, MNRAS, 410, 2767
* Davies & Beasor (2018) Davies B., Beasor E. R., 2018, MNRAS, 474, 2116
* De Looze et al. (2017) De Looze I., Barlow M. J., Swinyard B. M., Rho J., Gomez H. L., Matsuura M., Wesson R., 2017, MNRAS, 465, 3309
* Dessart et al. (2011) Dessart L., Hillier D. J., Livne E., Yoon S.-C., Woosley S., Waldman R., Langer N., 2011, MNRAS, 414, 2985
* Dhungana et al. (2016) Dhungana G., et al., 2016, ApJ, 822, 6
* Dolphin (2016) Dolphin A., 2016, DOLPHOT: Stellar photometry, Astrophysics Source Code Library, record ascl:1608.013 (ascl:1608.013)
* Eldridge & Maund (2016) Eldridge J. J., Maund J. R., 2016, MNRAS, 461, L117
* Eldridge et al. (2017) Eldridge J. J., Stanway E. R., Xiao L., McClelland L. A. S., Taylor G., Ng M., Greis S. M. L., Bray J. C., 2017, Publ. Astron. Soc. Australia, 34, e058
* Eldridge et al. (2018) Eldridge J. J., Xiao L., Stanway E. R., Rodrigues N., Guo N. Y., 2018, Publ. Astron. Soc. Australia, 35, e049
* Elias-Rosa et al. (2009) Elias-Rosa N., et al., 2009, ApJ, 706, 1174
* Elias-Rosa et al. (2010) Elias-Rosa N., et al., 2010, ApJ, 711, 1343
* Elias-Rosa et al. (2018) Elias-Rosa N., et al., 2018, ApJ, 860, 68
* Filippenko (1997) Filippenko A. V., 1997, ARA&A, 35, 309
* Folatelli et al. (2015) Folatelli G., Bersten M. C., Kuncarayakti H., Benvenuto O. G., Maeda K., Nomoto K., 2015, ApJ, 811, 147
* Folatelli et al. (2016) Folatelli G., et al., 2016, ApJ, 825, L22
* Fox et al. (2017) Fox O. D., et al., 2017, ApJ, 836, 222
* Fox et al. (2020) Fox O. D., et al., 2020, MNRAS, 498, 517
* Fox et al. (2022) Fox O. D., et al., 2022, ApJ, 929, L15
* Fransson et al. (2014) Fransson C., et al., 2014, ApJ, 797, 118
* Fraser (2016) Fraser M., 2016, MNRAS, 456, L16
* Fraser et al. (2014) Fraser M., et al., 2014, MNRAS, 439, L56
* Gal-Yam & Leonard (2009) Gal-Yam A., Leonard D. C., 2009, Nature, 458, 865
* Gilkis & Arcavi (2022) Gilkis A., Arcavi I., 2022, MNRAS, 511, 691
* Gilmozzi et al. (1987) Gilmozzi R., et al., 1987, Nature, 328, 318
* Graur et al. (2017) Graur O., Bianco F. B., Modjaz M., Shivvers I., Filippenko A. V., Li W., Smith N., 2017, ApJ, 837, 121
* Hiramatsu et al. (2021) Hiramatsu D., et al., 2021, Nature Astronomy, 5, 903
* Huang et al. (2015) Huang F., et al., 2015, ApJ, 807, 59
* Jencson et al. (2022) Jencson J. E., Sand D. J., Andrews J. E., Smith N., Strader J., Aghakhanloo M., Pearson J., Valenti S., 2022, ApJ, 935, L33
* Jerkstrand et al. (2012) Jerkstrand A., Fransson C., Maguire K., Smartt S., Ergon M., Spyromilio J., 2012, A&A, 546, A28
* Khazov et al. (2016) Khazov D., et al., 2016, ApJ, 818, 3
* Kilpatrick & Foley (2018) Kilpatrick C. D., Foley R. J., 2018, MNRAS, 481, 2536
* Kilpatrick et al. (2017) Kilpatrick C. D., et al., 2017, MNRAS, 465, 4650
* Kilpatrick et al. (2022) Kilpatrick C. D., Coulter D. A., Foley R. J., Piro A. L., Rest A., Rojas-Bravo C., Siebert M. R., 2022, ApJ, 936, 111
* Kochanek (2017) Kochanek C. S., 2017, MNRAS, 471, 3283
* Kochanek (2018) Kochanek C. S., 2018, MNRAS, 473, 1633
* Kourkchi et al. (2020) Kourkchi E., Courtois H. M., Graziani R., Hoffman Y., Pomarède D., Shaya E. J., Tully R. B., 2020, AJ, 159, 67
* Krause et al. (2008) Krause O., Birkmann S. M., Usuda T., Hattori T., Goto M., Rieke G. H., Misselt K. A., 2008, Science, 320, 1195
* Kumar et al. (2016) Kumar B., Pandey S. B., Eswaraiah C., Kawabata K. S., 2016, MNRAS, 456, 3157
* Kuncarayakti et al. (2020) Kuncarayakti H., et al., 2020, ApJ, 902, 139
* Li et al. (2005) Li W., Van Dyk S. D., Filippenko A. V., Cuillandre J.-C., 2005, PASP, 117, 121
* Li et al. (2011) Li W., et al., 2011, MNRAS, 412, 1441
* Matsuura et al. (2015) Matsuura M., et al., 2015, ApJ, 800, 50
* Mattila et al. (2008) Mattila S., Smartt S. J., Eldridge J. J., Maund J. R., Crockett R. M., Danziger I. J., 2008, ApJ, 688, L91
* Mattila et al. (2010) Mattila S., Smartt S., Maund J., Benetti S., Ergon M., 2010, arXiv e-prints, p. arXiv:1011.5494
* Mauerhan et al. (2017) Mauerhan J. C., et al., 2017, ApJ, 834, 118
* Maund (2017) Maund J. R., 2017, MNRAS, 469, 2202
* Maund & Smartt (2009) Maund J. R., Smartt S. J., 2009, Science, 324, 486
* Maund et al. (2014) Maund J. R., Reilly E., Mattila S., 2014, MNRAS, 438, 938
* Maund et al. (2015) Maund J. R., Fraser M., Reilly E., Ergon M., Mattila S., 2015, MNRAS, 447, 3207
* Meikle et al. (2007) Meikle W. P. S., et al., 2007, ApJ, 665, 608
* Milisavljevic et al. (2012) Milisavljevic D., Fesen R. A., Chevalier R. A., Kirshner R. P., Challis P., Turatto M., 2012, ApJ, 751, 25
* Nagao et al. (2021) Nagao T., et al., 2021, MNRAS, 505, 3664
* O’Neill et al. (2019) O’Neill D., et al., 2019, A&A, 622, L1
* Owen & Barlow (2015) Owen P. J., Barlow M. J., 2015, ApJ, 801, 141
* Pierel et al. (2022) Pierel J., et al., 2022, Transient Name Server AstroNote, 147, 1
* Piro et al. (2017) Piro A. L., Muhleisen M., Arcavi I., Sand D. J., Tartaglia L., Valenti S., 2017, ApJ, 846, 94
* Prieto et al. (2012) Prieto J. L., Osip D., Palunas P., 2012, The Astronomer’s Telegram, 3863, 1
* Richmond et al. (1993) Richmond M., et al., 1993, IAU Circ., 5737, 1
* Rui et al. (2019) Rui L., et al., 2019, MNRAS, 485, 1990
* Ryder et al. (1993) Ryder S., Staveley-Smith L., Dopita M., Petre R., Colbert E., Malin D., Schlegel E., 1993, ApJ, 416, 167
* STScI Development Team (2012) STScI Development Team 2012, DrizzlePac: HST image software, Astrophysics Source Code Library, record ascl:1212.011 (ascl:1212.011)
* Schlafly & Finkbeiner (2011) Schlafly E. F., Finkbeiner D. P., 2011, ApJ, 737, 103
* Schlegel (1990) Schlegel E. M., 1990, MNRAS, 244, 269
* Shaya et al. (2017) Shaya E. J., Tully R. B., Hoffman Y., Pomarède D., 2017, ApJ, 850, 207
* Shivvers et al. (2017) Shivvers I., et al., 2017, PASP, 129, 054201
* Smartt (2015) Smartt S. J., 2015, Publ. Astron. Soc. Australia, 32, e016
* Smartt et al. (2009) Smartt S. J., Eldridge J. J., Crockett R. M., Maund J. R., 2009, MNRAS, 395, 1409
* Smith (2017) Smith N., 2017, in Alsabti A. W., Murdin P., eds, Handbook of Supernovae. Springer International Publishing AG, p. 403, doi:10.1007/978-3-319-21846-5_38
* Smith et al. (2011) Smith N., Li W., Filippenko A. V., Chornock R., 2011, MNRAS, 412, 1522
* Smith et al. (2022) Smith N., Andrews J. E., Filippenko A. V., Fox O. D., Mauerhan J. C., Van Dyk S. D., 2022, MNRAS, 515, 71
* Sonneborn et al. (1987) Sonneborn G., Altner B., Kirshner R. P., 1987, ApJ, 323, L35
* Sravan et al. (2018) Sravan N., Marchant P., Kalogera V., Margutti R., 2018, ApJ, 852, L17
* Stanway & Eldridge (2018) Stanway E. R., Eldridge J. J., 2018, MNRAS, 479, 75
* Szalai et al. (2019) Szalai T., et al., 2019, ApJ, 876, 19
* Szalai et al. (2021) Szalai T., et al., 2021, ApJ, 919, 17
* Tartaglia et al. (2017) Tartaglia L., et al., 2017, ApJ, 836, L12
* Tinyanont et al. (2019) Tinyanont S., et al., 2019, ApJ, 873, 127
* Tomasella et al. (2013) Tomasella L., et al., 2013, MNRAS, 434, 1636
* Tsvetkov et al. (2018) Tsvetkov D. Y., et al., 2018, Astronomy Letters, 44, 315
* Tsvetkov et al. (2019) Tsvetkov D. Y., et al., 2019, MNRAS, 487, 3001
* Valenti et al. (2014) Valenti S., et al., 2014, MNRAS, 438, L101
* Valenti et al. (2016) Valenti S., et al., 2016, MNRAS, 459, 3939
* Van Dyk (2013) Van Dyk S. D., 2013, AJ, 146, 24
* Van Dyk (2017a) Van Dyk S. D., 2017a, in Alsabti A. W., Murdin P., eds, Handbook of Supernovae. Springer International Publishing AG, p. 693, doi:10.1007/978-3-319-21846-5_126
* Van Dyk (2017b) Van Dyk S. D., 2017b, Philosophical Transactions of the Royal Society of London Series A, 375, 20160277
* Van Dyk et al. (2012) Van Dyk S. D., et al., 2012, AJ, 143, 19
* Van Dyk et al. (2013) Van Dyk S. D., et al., 2013, ApJ, 772, L32
* Van Dyk et al. (2015) Van Dyk S. D., et al., 2015, ApJ, 806, 195
* Van Dyk et al. (2016) Van Dyk S. D., de Mink S. E., Zapartas E., 2016, ApJ, 818, 75
* Van Dyk et al. (2019) Van Dyk S. D., et al., 2019, ApJ, 875, 136
* Walborn et al. (1987) Walborn N. R., Lasker B. M., Laidler V. G., Chu Y.-H., 1987, ApJ, 321, L41
* Weil et al. (2020) Weil K. E., Fesen R. A., Patnaude D. J., Milisavljevic D., 2020, ApJ, 900, 11
* Williams et al. (2014) Williams B. F., et al., 2014, ApJS, 215, 9
* Williams et al. (2018) Williams B. F., Hillis T. J., Murphy J. W., Gilbert K., Dalcanton J. J., Dolphin A. E., 2018, ApJ, 860, 39
* Yoon et al. (2017) Yoon S.-C., Dessart L., Clocchiatti A., 2017, ApJ, 840, 10
* Yuan et al. (2016) Yuan F., et al., 2016, MNRAS, 461, 2003
* Yuan et al. (2020) Yuan W., et al., 2020, ApJ, 902, 26
* Zhang et al. (2020) Zhang J., et al., 2020, MNRAS, 498, 84
|
# Targeted cutting of random recursive trees
Laura Eslava Instituto de Investigaciones en Matemáticas Aplicadas y en
Sistemas, Universidad Nacional Autónoma de México. Sergio I. López Facultad
de Ciencias, Universidad Nacional Autónoma de México. Marco L. Ortiz Facultad
de Ciencias, Universidad Nacional Autónoma de México.
###### Abstract
We propose a method for cutting down a random recursive tree that focuses on
its higher degree vertices. Enumerate the vertices of a random recursive tree
of size $n$ according to a decreasing order of their degrees; namely, let
$(v^{(i)})_{i=1}^{n}$ be so that $\deg(v^{(1)})\geq\cdots\geq\deg(v^{(n)})$.
The targeted, vertex-cutting process is performed by sequentially removing
vertices $v^{(1)}$, $v^{(2)},\ldots,v^{(n)}$ and keeping only the subtree
containing the root after each removal. The algorithm ends when the root is
picked to be removed. The total number of steps for this procedure,
$X_{n}^{targ}$, is upper bounded by $Z_{\geq D}$, which denotes the number of
vertices that have degree at least as large as the degree of the root. We
obtain that the first order growth of $X_{n}^{targ}$ is upper bounded by
$n^{1-\ln 2}$, which is substantially smaller than the required number of
removals if, instead, the vertices were selected uniformly at random. More
precisely, we prove that $\ln(Z_{\geq D})$ grows as $\ln(n)$ asymptotically
and obtain its limiting behavior in probability. Moreover, we obtain that the
$k$-th moment of $\ln(Z_{\geq D})$ is proportional to $(\ln(n))^{k}$.
## 1 Introduction
Random recursive trees (abbreviated as RRTs) are rooted trees, where each
vertex has a unique label, obtained by the following procedure: Let $T_{1}$ be
a single vertex labeled $1$. For $n>1$ the tree $T_{n}$ is obtained from the
tree $T_{n-1}$ by adding an edge directed from a new vertex labeled $n$ to a
vertex with label in $\\{1,...,n-1\\}$, chosen uniformly at random and
independently for each $n$. We say that $T_{n}$ has size $n$ and that the degree
of a vertex $v$ is the number of edges directed towards $v$.
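The recursive construction above is easy to simulate. The sketch below is ours, not from the paper; the function names (`random_recursive_tree`, `degrees`) are our own:

```python
import random

def random_recursive_tree(n, rng=None):
    """Build a RRT of size n; returns parent[i] for each vertex i.

    Vertex 1 is the root (parent[1] is None); for 1 < i <= n, vertex i
    attaches to a uniformly random vertex in {1, ..., i-1}.
    """
    rng = rng or random.Random(0)
    parent = [None, None]            # index 0 unused; the root has no parent
    for i in range(2, n + 1):
        parent.append(rng.randint(1, i - 1))
    return parent

def degrees(parent):
    """In-degree of each vertex, i.e. the number of edges directed towards it."""
    n = len(parent) - 1
    deg = [0] * (n + 1)
    for i in range(2, n + 1):
        deg[parent[i]] += 1
    return deg
```

Since a tree of size $n$ has $n-1$ edges, the degrees always sum to $n-1$.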
The idea of cutting random recursive trees was introduced by Meir and Moon
[27]. They studied the following procedure: Start with a random recursive tree
on $n$ vertices. Choose an edge at random and remove it, along with the cut
subtree that does not contain the root. Repeat until the remaining tree
consists only of the root; at which point, we say that the tree has been
deleted. Let $X_{n}$ be the number of edge removals needed to delete a RRT
with $n$ vertices.
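The Meir–Moon procedure can be sketched as follows (an illustrative implementation of the edge-cutting just described, assuming the `parent`-array representation of a rooted tree; not code from the paper). Choosing a uniform edge of the remaining tree is the same as choosing a uniform remaining non-root vertex $v$ and cutting the edge from $v$ to its parent:

```python
import random

def meir_moon_cuts(parent, rng=None):
    """Count the edge removals needed to delete a rooted tree.

    parent[i] is the parent of vertex i (vertex 1 is the root). Each cut
    picks a uniform remaining non-root vertex v and discards the whole
    subtree rooted at v; we stop when only the root remains.
    """
    rng = rng or random.Random(0)
    n = len(parent) - 1
    children = [[] for _ in range(n + 1)]
    for i in range(2, n + 1):
        children[parent[i]].append(i)
    alive = set(range(2, n + 1))     # non-root vertices still in the tree
    cuts = 0
    while alive:
        v = rng.choice(sorted(alive))
        stack = [v]                  # remove v together with its subtree
        while stack:
            u = stack.pop()
            alive.discard(u)
            stack.extend(children[u])
        cuts += 1
    return cuts
```

On a star every cut removes a single leaf, so exactly $n-1$ cuts are needed; on deeper trees a single cut can remove many vertices at once.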
By a recursive approach using that the remaining tree after one deletion has
itself the distribution of a RRT of smaller, random size, Meir and Moon proved
that the expectation of $X_{n}$ grows asymptotically as $\frac{n}{\ln(n)}$.
Panholzer [28] proposed an extension of this procedure: to study the total
cost of the algorithm until deletion, which is the sum of the costs at every
step, where cutting an edge of a tree of size $n$ has a cost of $n^{c}$ for
some nonnegative constant $c$ (the original cutting corresponds to $c=0$). By
the use of generating functions and recursion, they obtained the asymptotic
behavior of the $k$-th moment of the total cost and proved, as a corollary, that
$\frac{\ln(n)}{n}X_{n}$ converges to one in probability. Iksanov and Möhle
[19] obtained the second-order behavior of $X_{n}$; namely, that
$\displaystyle Y_{n}:=\frac{(\ln(n))^{2}}{n}X_{n}-\ln(n)-\ln(\ln(n))$ (1)
converges weakly to a random variable $Y$ with characteristic function given
by
$\varphi_{Y}(\lambda):=\exp\Big{\\{}i\lambda\ln|\lambda|-\frac{\pi|\lambda|}{2}\Big{\\}}.$
Their proof is based on the construction of a coupling of $X_{n}$ with the
first passage time of a certain random walk, while Drmota et al. [15] give a
proof of this theorem using recursive methods.
In this work we consider random recursive trees and propose a cutting
procedure that corresponds to a targeted cutting (focused on high-degree
vertices). We first present our model together with the main result; we then
overview the proof strategy and discuss some possible interpretations of this
procedure and several related models of tree-deletion.
To define the targeted cutting, let $T_{n}$ be a RRT of size $n$ and enumerate
its vertices as $(v^{(i)})_{i=1}^{n}$ according to a decreasing order of their
degrees; that is, $\deg(v^{(1)})\geq\cdots\geq\deg(v^{(n)})$, breaking ties
uniformly at random. The targeted vertex-cutting process is performed by
sequentially removing vertices $v^{(1)}$, $v^{(2)},\ldots,v^{(n)}$ and keeping
only the subtree containing the root after each removal (skip a step if the
chosen vertex had been previously removed). The procedure ends when the root
is picked to be removed. Let $X^{targ}_{n}$ denote the number of vertex
deletions before we select the root to be removed.
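The targeted process can be sketched in code (ours, not the paper's; it assumes the `parent`-array representation of a rooted tree, with vertex 1 as the root):

```python
import random

def targeted_cuts(parent, rng=None):
    """Number of deletions X_n^targ in the targeted vertex-cutting.

    The enumeration by decreasing degree (ties broken at random) is fixed
    from the initial tree; a vertex already removed with an earlier
    subtree is skipped, and we stop when the root is selected.
    """
    rng = rng or random.Random(0)
    n = len(parent) - 1
    children = [[] for _ in range(n + 1)]
    deg = [0] * (n + 1)
    for i in range(2, n + 1):
        children[parent[i]].append(i)
        deg[parent[i]] += 1
    order = sorted(range(1, n + 1), key=lambda v: (-deg[v], rng.random()))
    alive = [True] * (n + 1)
    cuts = 0
    for v in order:
        if v == 1:
            break                    # the root is picked: stop counting
        if not alive[v]:
            continue                 # skipped: removed with an earlier subtree
        stack = [v]                  # remove v together with its subtree
        while stack:
            u = stack.pop()
            if alive[u]:
                alive[u] = False
                stack.extend(children[u])
        cuts += 1
    return cuts
```

In a star the root has the unique maximum degree, so it is selected first and $X^{targ}_{n}=0$.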
Our main theorem is a stochastic upper bound for the deletion time. In doing
so, we analyse the number of vertices with degree as large as that of the
root; see Section 1.1.
###### Theorem 1.1.
The random number of cuts $X_{n}^{targ}$ in the targeted cutting of $T_{n}$
satisfies, for any $\varepsilon>0$,
$X_{n}^{targ}=O_{p}(n^{\gamma+\varepsilon})$ where $\gamma:=1-\ln 2$. Namely,
for each $\delta>0$, there is $M=M(\varepsilon,\delta)>0$ and
$N=N(\varepsilon,\delta)\in\mathbb{N}$ such that for all $n\geq N$,
$\displaystyle{\mathbf{P}}\left(X_{n}^{targ}>Mn^{\gamma+\varepsilon}\right)<\delta.$
(2)
Theorem 1.1 gives a precise answer to an expected outcome: the targeted
cutting deletion time is significantly smaller than the deletion time
for uniformly random removals.
In the next section we provide precise statements on a random upper bound for
$X_{n}^{targ}$ from which Theorem 1.1 follows.
### 1.1 A random tail of the degree sequence
For $d\in\mathbb{N}$, we let $Z_{d}$ and $Z_{\geq d}$ be the number of
vertices in $T_{n}$ that have degree $d$ and degree at least $d$, respectively
(we omit the dependence on $n$ for these variables throughout). The asymptotic
joint distribution of $(Z_{d})_{d\geq 0}$ is described by the limiting
distribution, as $n\to\infty$, of certain urn models in [20], while the
limiting joint distribution of $(Z_{\geq
d+\lfloor\log_{2}n\rfloor})_{d\in\mathbb{Z}}$ is described (along suitable
subsequences) by an explicit Poisson point process in
$\mathbb{Z}\cup\\{\infty\\}$ in [2]; the lattice shift by $\log_{2}n$ stems
from the fact that $\Delta_{n}/\log_{2}n$, the renormalized maximum degree in
$T_{n}$, converges a.s. to 1 [13]. In contrast, the degree of the root in
$T_{n}$ is asymptotically normal with mean $\ln n$ (note that
$\log_{2}n\approx 1.4\ln n$).
In this paper we are interested in the random variable $Z_{\geq D}$, which
corresponds to the number of vertices that have degree at least the degree of
the root of $T_{n}$, henceforth denoted by $D=D(n)$. The techniques developed
in [16] provide bounds on the total variation distance between $Z_{\geq c\ln
n}$ and a Poisson random variable with mean $n^{\alpha}$ with a suitable
$\alpha=\alpha(c)$ for $c\in(1,\log_{2}n)$. However, there is far less control
over the random variables $Z_{\geq c\ln n}$ for $c\leq 1$ [2]. The following
two theorems provide precise statements to back up the informal approximation
$Z_{\geq D}\sim n^{1-\ln 2}$.
###### Theorem 1.2.
Let $\gamma:=1-\ln(2)$. The following convergence in probability holds:
$\dfrac{\ln(Z_{\geq D})}{\ln(n)}\stackrel{{\scriptstyle
p}}{{\longrightarrow}}\gamma.$
###### Theorem 1.3.
Let $\gamma:=1-\ln(2)$. For any positive integer $k$,
$\displaystyle{\mathbf{E}}\left[\big{(}\ln\left(Z_{\geq
D}\right)\big{)}^{k}\right]=\big{(}\gamma\ln\left(n\right)\big{)}^{k}\left(1+o(1)\right).$
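These statements can be probed numerically. The Monte Carlo sketch below is ours (not from the paper); since the convergence is at logarithmic scale, at moderate $n$ the ratio $\ln(Z_{\geq D})/\ln(n)$ only roughly approaches $\gamma\approx 0.307$ and fluctuates substantially from sample to sample:

```python
import math
import random

def ln_ratio(n, rng):
    """Sample a RRT of size n; return ln(Z_{>=D}) / ln(n), where D is the
    root degree and Z_{>=D} counts vertices of degree at least D."""
    deg = [0] * (n + 1)
    for i in range(2, n + 1):
        deg[rng.randint(1, i - 1)] += 1   # vertex i picks a uniform parent
    D = deg[1]
    z = sum(1 for v in range(1, n + 1) if deg[v] >= D)  # the root counts itself
    return math.log(z) / math.log(n)

rng = random.Random(1)
gamma = 1 - math.log(2)                   # ~ 0.3069
vals = [ln_ratio(50000, rng) for _ in range(20)]
avg = sum(vals) / len(vals)               # averages roughly near gamma
```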
Theorem 1.1 follows immediately from Theorem 1.2.
###### Proof of Theorem 1.1 assuming Theorem 1.2.
Since the removal of vertices is done according to their degrees, $Z_{\geq
D}$ gives the worst-case destruction time for the targeted cutting
procedure; that is, $X^{targ}_{n}\leq Z_{\geq D}$ a.s. It then follows that
(2) is satisfied, for $\varepsilon,\delta>0$, by letting
$M(\varepsilon,\delta)=1$ and $N$ large enough that ${\mathbf{P}}\left(Z_{\geq
D}\geq n^{\gamma+\varepsilon}\right)<\delta$, for $n\geq N$. ∎
The proof of Theorem 1.2 is based on the concentration of $D$ and the first
and second moment method, see Section 2. Unfortunately, the tails of $D$ do
not vanish as fast as a naive bounding of the moments of $Z_{\geq D}$ would
require by the current control we have on the distribution of $Z_{\geq d}$ for
$d\sim\ln n$ (see (8) and Propositions 2.1 and 2.2, respectively). Instead, to
establish Theorem 1.3, we resort to a coupling between a random recursive tree
$T_{n}$ of size $n$ and a random recursive tree $T_{n}^{(\varepsilon)}$ of
size $n$ conditioned on $D$ to take values in
$((1-\varepsilon)\ln(n),(1+\varepsilon)\ln(n))$, for any given
$\varepsilon\in(0,1)$, see Section 3.
Briefly described, the coupling is the following. Let $n\in\mathbb{N}$, for
the construction of both $T_{n}$ and $T_{n}^{(\varepsilon)}$ it suffices to
define the parent of vertex $i$, for each $1<i\leq n$. For each tree, we break
down this choice in two steps: First sample a random Bernoulli to decide
whether $i$ attaches to the root or not. Then the parent of $i$ is either the
root or a uniformly sampled vertex among the rest of the possible parents. The
Bernoulli variables are coupled in such a way that $T_{n}$ is constructed by
independent variables while the Bernoulli random variables related to the tree
$T_{n}^{(\varepsilon)}$ imply the conditioning on $D$ taking values in the
aforementioned interval. On the other hand, if a given vertex chooses a parent
distinct from the root, then this choice is exactly the same for both the
unconditioned and conditioned tree.
### 1.2 Discussion on related models
Cutting processes of random trees and graphs can serve as a proxy to the
resilience of a particular network to breakdowns, either from intentional
attacks or fortuitous events. What is considered a breakdown and resilience
may differ depending on the context. The first cutting procedure, introduced
by Meir and Moon for random recursive trees [27], can be interpreted as
contamination from a certain source within an organism; the number of cuts is
then the number of steps needed to isolate the source of the contamination.
Janson [21] noted that the number of cuts needed to destroy a tree is
equivalent to the number of records that arise from a random labeling of the
edges; this approach is further used by Holmgren [17, 18] to study split trees
and binary search trees, respectively.
Several modifications of the uniform edge deletion process have been proposed
by different authors. Javanian and Vahini-Asl [23] modified the process with
the objective of isolating the last vertex added to the network. Kuba and
Panholzer [24, 25, 26] published a series of articles focusing specifically on
isolating a distinguished vertex; for example, the isolation of a leaf or
isolating multiple distinguished vertices.
In the context of Galton-Watson trees, cutting down trees has been
predominantly studied from the perspective of vertex-cutting. That is,
vertices are selected to be removed, rather than edges, and once a vertex has
been removed, we keep the subtree containing the root; the procedure is
repeated until the root is removed. Note that selecting an edge uniformly at
random is equivalent to uniformly selecting a vertex other than the root.
Bertoin and Miermont [5] constructed a method to compare vertex-cutting versus
edge cutting and studied the tree destruction process. Addario-Berry et al.
[1] studied the case of Galton-Watson trees with critical, finite-variance
offspring distribution, conditioned to have a fixed number of total progeny.
Dieuleveut [14] modified the vertex-cutting process so that the probability of
a given vertex to be cut, at each step, is proportional to its degree. This
process was generalized by Cai et al. [10] by considering that every vertex
can stand $k$ attacks before being removed. This generalization was later
studied by Berzunza et al. [7] for deterministic trees and by Berzunza et al.
[6] for conditioned Galton-Watson trees.
Similar cutting procedures have been studied in more general graphs. To name a
few, Berche et al. [4] studied public transport networks; Xu et al. [29]
studied a broad range of dynamical processes including birth-death processes,
regulatory dynamics and epidemic processes on scale free graphs and
Erdős–Rényi networks under two different targeted attacks, on high-degree and
low-degree vertices; Alenazi and Sterbenz [3] compared different graph
metrics, such as node betweenness, node centrality and node degree, to measure
network resilience to random and directed attacks.
As we mentioned before, what is considered a breakdown and resilience may
differ depending on the context. In the internet, for example, the failures of
connectivity that happen over time can be modeled by random cuts, while
malicious attacks, from a hacker or enemy trying to disconnect some given
network of servers, can be mathematically described by targeted cutting
towards highly connected servers. These ideas on resilience were posed by
Cohen et al. [11, 12] for scale free graphs. Later, Bollobás and Riordan [8]
compared random versus malicious attacks on scale free graphs. Since malicious
attacks may follow a strategy that takes advantage of some characteristics
of the network, it is expected that the number of cuts required would be
significantly smaller than in a completely random attack.
For scale-free networks, Cohen et al. [12] obtained the following result. Assume the
degree of a uniformly random vertex of the graph follows a power law with
decay $\alpha$ on the support $m,\dots,n$. If a proportion $p$ of the vertices
with the highest degree is deleted, then the probability $\bar{p}$ that a
randomly chosen node connects to a deleted vertex is roughly approximated by
$p^{\frac{2-\alpha}{1-\alpha}}$, for $\alpha>2$. In the case $\alpha=2$ the
approximation of $\bar{p}$ is given by $\ln\Big{(}\frac{np}{m}\Big{)}$. This
result supports the intuition of needing a small proportion of vertices to be
removed in a targeted attack to delete a network.
### 1.3 Notation
We use $|A|$ to denote the cardinality of a set $A$. For $n\in\mathbb{N}$ we
write $[n]:=\\{1,2,\dots,n\\}$ and, for $m\leq n$, $[m,n]:=\\{m,m+1,\ldots,n\\}$. For
$a,b,c,x\in\mathbb{R}$ with $b,c>0$, we use $x\in(a\pm b)c$ as an abbreviation
for $(a-b)c\leq x\leq(a+b)c$. In what follows we denote natural and base $2$
logarithms by $\ln(\cdot)$ and $\log(\cdot)$, respectively. We often use the
identity $\ln(a)=\ln(2)\log(a)$ for $a>0$.
For $r\in\mathbb{R}$ and $a\in\mathbb{N}$ define
$(r)_{a}:=r(r-1)\cdots(r-a+1)$ and $(r)_{0}=1$. For real functions $f,g$ we
write $f(x)=o(g(x))$ when $\lim_{x\to\infty}f(x)/g(x)=0$ and $f(x)=O(g(x))$
when $|f(x)/g(x)|\leq C$ for some $C>0$. The convergence in probability will
be written as $\stackrel{{\scriptstyle p}}{{\longrightarrow}}$. A rooted tree
is a tree with a distinguished vertex, which we call the root. We always
consider the edges to be directed towards the root. A directed edge $e=uv$ is
directed from $u$ to $v$ and, in this case, we will say that $v$ is the parent
of $u$. Given a rooted tree $T$ and one of its vertices $v$, the degree of $v$
in $T$, denoted by $d_{T}(v)$, is the number of edges directed towards $v$. We
say that $T$ has size $n$ if it has $n$ vertices, and $[n]$ will denote the
set of vertices of a tree of size $n$.
## 2 Deterministic tails of the degree sequence
Given a random recursive tree $T_{n}$, let $Z_{\geq d}$ denote the number of
vertices with degree at least $d$, that is
$\displaystyle Z_{\geq d}\equiv Z_{\geq
d}(n):=\big{|}\\{v\in[n]:d_{T_{n}}(v)\geq d\\}\big{|}.$ (3)
Our theorems build upon results on the convergence of the variables $Z_{\geq
d}$, since they are non-increasing in $d$ and $Z_{\geq D}$ is a tail of the
degree sequence with random index; recall that $D=D(n)$ denotes the degree of
the root of $T_{n}$. The following two propositions are simplified versions of
Proposition 2.1 in [2] and Theorem 1.8 in [16], respectively.
###### Proposition 2.1 (Moments of $Z_{\geq d}$).
For any $d\in\mathbb{N}$, ${\mathbf{E}}\left[Z_{\geq d}\right]\leq
2^{\log(n)-d}$. Moreover, there exists $\alpha>0$ such that if
$d<\frac{3}{2}\ln(n)$ and $k\in\\{1,2\\}$ then
$\displaystyle{\mathbf{E}}\left[(Z_{\geq d})_{k}\right]$
$\displaystyle=(2^{\log(n)-d})^{k}(1+o(n^{-\alpha})),$ (4)
$\displaystyle{\mathbf{E}}\left[Z_{\geq d}^{2}\right]$
$\displaystyle={\mathbf{E}}\left[Z_{\geq
d}\right]^{2}\big{(}1+o(n^{-\alpha})\big{)}.$ (5)
###### Proposition 2.2 (Total variation distance).
Let $0<\varepsilon<\frac{1}{3}$, then for
$(1+\varepsilon)\ln(n)<d<(1+3\varepsilon)\ln(n)$ there exists
$\alpha^{\prime}=\alpha^{\prime}(\varepsilon)>0$ such that
$\mathrm{d}_{\mathrm{TV}}(Z_{\geq d},\mathrm{Poi}({\mathbf{E}}\left[Z_{\geq
d}\right]))\leq O(n^{-\alpha^{\prime}}).$
As we mentioned before, the error bounds in the previous propositions are not
strong enough to estimate the moments of $Z_{\geq D}$. Instead we focus on the
variable $\ln(Z_{\geq D})$. Furthermore, we will transfer the task of moment
estimation, for Theorem 1.3, to ${\mathbf{E}}\left[(\ln X)^{\ell}\right]$
where instead of having $X=Z_{\geq D}$ we consider either $1+Z_{\geq m_{-}}$
or $1+Z_{\geq m_{+}}$ for suitable values $m_{\pm}=c_{\pm}\ln n$ with
$c_{-}<1<c_{+}$.
The upper bound in the following proposition follows from a straightforward
application of Jensen’s inequality, together with Proposition 2.1; while the
lower bound uses the refined bounds given in Proposition 2.2.
###### Proposition 2.3.
Let $\varepsilon\in(0,\frac{1}{3})$ and $\ell\in\mathbb{N}$. Let
$m_{-}:=\lfloor(1-\frac{\varepsilon}{2\ln(2)})\ln(n)\rfloor$,
$m_{+}:=\lceil(1+\frac{\varepsilon}{2\ln(2)})\ln(n)+1\rceil$. There exist
$\alpha^{\prime}=\alpha^{\prime}(\varepsilon)>0$ and
$C_{\ell}=C_{\ell}(\varepsilon)>0$ such that
$\displaystyle{\mathbf{E}}\left[\big{(}\ln\left(1+Z_{\geq
m_{-}}\right)\big{)}^{\ell}\right]$
$\displaystyle\leq\left[\big{(}1-\ln(2)+\varepsilon\big{)}\ln(n)\right]^{\ell}+C_{\ell},$
(6) $\displaystyle{\mathbf{E}}\left[\big{(}\ln\left(1+Z_{\geq
m_{+}}\right)\big{)}^{\ell}\right]$
$\displaystyle\geq\left[\left(1-\ln(2)-\varepsilon\right)\ln(n)\right]^{\ell}\left(1-O(n^{-\alpha^{\prime}})\right).$
(7)
###### Proof.
First, let $f(x)=(\ln(x))^{\ell}$ and note that $f^{\prime\prime}(x)\leq 0$
for $x>e^{\ell-1}$. Hence, by Jensen’s inequality, for any nonnegative random
variable $X$ it holds that
${\mathbf{E}}\left[(\ln(X))^{\ell}\right]={\mathbf{E}}\left[(\ln(X))^{\ell}\mathbf{1}_{\\{X>e^{\ell-1}\\}}\right]+{\mathbf{E}}\left[(\ln(X))^{\ell}\mathbf{1}_{\\{X\leq
e^{\ell-1}\\}}\right]\leq(\ln({\mathbf{E}}\left[X\right]))^{\ell}+(\ell-1)^{\ell}.$
Next we will use the upper bound in Proposition 2.1 for
${\mathbf{E}}\left[Z_{\geq m_{-}}\right]$. Note that
$\log(n)-m_{-}\leq(1-\ln(2)+\frac{\varepsilon}{2})\log(n)$. Thus,
${\mathbf{E}}\left[1+Z_{\geq m_{-}}\right]\leq
1+2n^{1-\ln(2)+\frac{\varepsilon}{2}}\leq n^{1-\ln(2)+\varepsilon}$; where the
second inequality holds for $n_{0}=n_{0}(\varepsilon)$ large enough. Then
$\displaystyle{\mathbf{E}}\left[\left(\ln(1+Z_{\geq
m_{-}})\right)^{\ell}\right]$
$\displaystyle\leq\left[\ln\left({\mathbf{E}}\left[1+Z_{\geq
m_{-}}\right]\right)\right]^{\ell}+(\ell-1)^{\ell}\leq\left[(1-\ln(2)+\varepsilon)\ln(n)\right]^{\ell}+C_{\ell};$
where $C_{\ell}=\sup_{n\leq
n_{0}}\\{(\ln(1+2n^{1-\ln(2)+\frac{\varepsilon}{2}}))^{\ell}\\}+(\ell-1)^{\ell}$.
Next, let $\mu={\mathbf{E}}\left[Z_{\geq m_{+}}\right]$ and let
$(X,X^{\prime})$ be random variables coupled so that
$X\stackrel{{\scriptstyle\mathcal{L}}}{{=}}Z_{\geq m_{+}}$,
$X^{\prime}\stackrel{{\scriptstyle\mathcal{L}}}{{=}}\mathrm{Poi}(\mu)$ and
${\mathbf{P}}\left(X=X^{\prime}\right)$ is maximized; that is,
${\mathbf{P}}\left(X\neq
X^{\prime}\right)=\mathrm{d}_{\mathrm{TV}}(X,X^{\prime})$. Note that
$\displaystyle{\mathbf{E}}\left[(\ln(1+X))^{\ell}\right]\geq{\mathbf{E}}\left[(\ln(1+X))^{\ell}{\mathbf{1}}_{[X>n^{\gamma-\varepsilon}]}\right]\geq\left(\ln\left(n^{\gamma-\varepsilon}\right)\right)^{\ell}{\mathbf{P}}\left(X>n^{\gamma-\varepsilon}\right);$
then (7) boils down to lower bounding
${\mathbf{P}}\left(X>n^{\gamma-\varepsilon}\right)$. Since $X<n$, by the
coupling assumption, we have
$\displaystyle{\mathbf{P}}\left(X>n^{\gamma-\varepsilon}\right)$
$\displaystyle\geq{\mathbf{P}}\left(n^{\gamma-\varepsilon}<X^{\prime}<n\right)-{\mathbf{P}}\left(X\neq
X^{\prime}\right)$ $\displaystyle=1-{\mathbf{P}}\left(X^{\prime}\geq
n\right)-{\mathbf{P}}\left(X^{\prime}\leq
n^{\gamma-\varepsilon}\right)-\mathrm{d}_{\mathrm{TV}}(X,X^{\prime}).$
By Proposition 2.2,
$\mathrm{d}_{\mathrm{TV}}(X,X^{\prime})=O(n^{-\alpha^{\prime}})$ for
$\alpha^{\prime}(\varepsilon)>0$. Using the Chernoff bounds for the tails of a
Poisson variable (see, e.g. Section 2.2 in [9]) and that
$\mu=n^{\gamma-\frac{\varepsilon}{2}}(1+o(1))$ we have
$\displaystyle{\mathbf{P}}\left(X^{\prime}\geq n\right)$
$\displaystyle\leq\left(\frac{en^{\gamma-\frac{\varepsilon}{2}}}{n}\right)^{n}e^{-n^{\gamma-\frac{\varepsilon}{2}}}\leq\left(\frac{e^{n^{\ln(2)}}}{en^{\ln(2)}}\right)^{-n},$
$\displaystyle{\mathbf{P}}\left(X^{\prime}\leq n^{\gamma-\varepsilon}\right)$
$\displaystyle\leq\left(\frac{en^{\gamma-\frac{\varepsilon}{2}}}{n^{\gamma-\varepsilon}}\right)^{n^{\gamma-\varepsilon}}e^{-n^{\gamma-\frac{\varepsilon}{2}}}\leq\left(\frac{e^{n^{\varepsilon/2}}}{en^{\frac{\varepsilon}{2}}}\right)^{-n^{\gamma-\varepsilon}};$
both bounds are $o(n^{-\alpha^{\prime}})$ so the proof is completed. ∎
### 2.1 Proof of Theorem 1.2
We will use the first and second moment methods, together with Proposition 2.1,
the concentration of $D$, and the fact that, for each $n$, $Z_{\geq m}$ is
non-increasing in $m$.
For completeness, we show that $D$ is concentrated around $\ln n$. Indeed, $D$
is a sum of independent Bernoulli random variables $(B_{i})_{1<i\leq n}$, each
with mean $1/(i-1)$ and so ${\mathbf{E}}\left[D\right]=H_{n-1}>\ln n$ where
$H_{n}$ denotes the $n$-th harmonic number. From the fact that $H_{n}-\ln n$
is a decreasing sequence we infer that: for any $0<\varepsilon<3/2$ and $n$
sufficiently large, $|D-H_{n-1}|\leq\frac{\varepsilon}{2}H_{n-1}$ implies
$|D-\ln n|\leq\varepsilon\ln n$. Using the contrapositive of this statement
and Bernstein’s inequality (see, e.g. Theorem 2.8 in [22]) we obtain, for $n$
large enough,
${\mathbf{P}}\left(D\notin(1\pm\varepsilon)\ln(n)\right)\leq{\mathbf{P}}\left(\left|D-H_{n-1}\right|>\dfrac{\varepsilon}{2}H_{n-1}\right)\leq
2\exp\left\\{-\dfrac{\varepsilon^{2}}{12}H_{n-1}\right\\}\leq
2n^{-\varepsilon^{2}/12}.$ (8)
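The representation of $D$ as a sum of independent Bernoulli variables is easy to check numerically. The sketch below is illustrative (ours, not from the paper), and the tolerance in the final comparison is deliberately loose:

```python
import math
import random

def root_degree(n, rng):
    """Sample D as a sum of independent Bernoulli(1/(i-1)), i = 2..n:
    vertex i attaches to the root with probability 1/(i-1)."""
    return sum(rng.random() < 1.0 / (i - 1) for i in range(2, n + 1))

rng = random.Random(2)
n, reps = 2000, 500
H = sum(1.0 / k for k in range(1, n))        # H_{n-1} = E[D] > ln(n)
mean_hat = sum(root_degree(n, rng) for _ in range(reps)) / reps
```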
Recall $\gamma=1-\ln(2)$. It suffices to prove that for every $\varepsilon>0$,
$\lim_{n\to\infty}{\mathbf{P}}\left(Z_{\geq
D}\notin(n^{\gamma-\varepsilon},n^{\gamma+\varepsilon})\right)=0.$
We infer from (8) that ${\mathbf{P}}\left(Z_{\geq
D}\notin(n^{\gamma-\varepsilon},n^{\gamma+\varepsilon}),D\notin(1\pm\varepsilon)\ln(n)\right)$
vanishes as $n$ grows.
Let $m_{-}:=m_{-}(\varepsilon)=\lfloor(1-\varepsilon)\ln(n)\rfloor$ and
$m_{+}:=m_{+}(\varepsilon)=\lceil(1+\varepsilon)\ln(n)\rceil$. Using the
monotonicity of $Z_{\geq m}$ on $m$, we have
${\mathbf{P}}\left(Z_{\geq
D}\notin(n^{\gamma-\varepsilon},n^{\gamma+\varepsilon}),\,D\in(1\pm\varepsilon)\ln(n)\right)\leq{\mathbf{P}}\left(Z_{\geq
m_{-}}\geq n^{\gamma+\varepsilon}\right)+{\mathbf{P}}\left(Z_{\geq m_{+}}\leq
n^{\gamma-\varepsilon}\right);$ (9)
so it remains to show that both terms on the right side of (9) vanish. First,
using that $\ln(n)=\ln(2)\log n$ and so $\log(n)-(1\pm\varepsilon)\ln
n=(1-\ln(2)\mp\varepsilon\ln(2))\log(n)$, we infer from Proposition 2.1 that
$\displaystyle\frac{1}{2}n^{\gamma-\varepsilon\ln(2)}(1-o(n^{-\alpha}))\leq{\mathbf{E}}\left[Z_{\geq
m_{+}}\right]\leq{\mathbf{E}}\left[Z_{\geq m_{-}}\right]\leq
2n^{\gamma+\varepsilon\ln(2)}.$ (10)
Markov’s inequality then gives ${\mathbf{P}}\left(Z_{\geq m_{-}}\geq
n^{\gamma+\varepsilon}\right)\leq 2n^{-\varepsilon\gamma}\to 0$. Next, let
$\theta$ be defined so that
$n^{\gamma-\varepsilon}=\theta{\mathbf{E}}\left[Z_{\geq m_{+}}\right]$; in
particular, $\theta\leq 2n^{-\varepsilon\gamma}$. The Paley–Zygmund inequality
gives
$\displaystyle{\mathbf{P}}\left(Z_{\geq
m_{+}}>n^{\gamma-\varepsilon}\right)\geq{\mathbf{P}}\left(Z_{\geq
m_{+}}>\theta{\mathbf{E}}\left[Z_{\geq
m_{+}}\right]\right)\geq(1-\theta)^{2}\dfrac{{\mathbf{E}}\left[Z_{\geq
m_{+}}\right]^{2}}{{\mathbf{E}}\left[Z_{\geq m_{+}}^{2}\right]};$
which tends to 1 as $n\to\infty$ by the upper bound for $\theta$ and (5). This
implies ${\mathbf{P}}\left(Z_{\geq m_{+}}\leq n^{\gamma-\varepsilon}\right)$
vanishes, as desired.
## 3 Control on $D$ through a coupling
Let $n\in\mathbb{N}$. Write $\mathcal{I}_{n}$ for the set of increasing trees
of size $n$; namely, labelled rooted trees with label set $[n]$ such that
vertex labels are increasing along any path starting from the root. It is
straightforward to verify that the law of $T_{n}$ is precisely the uniform
distribution on $\mathcal{I}_{n}$.
Consider the following construction of an increasing tree of size $n$. Let
$(b_{i})_{1<i\leq n}$ and $(y_{i})_{1<i\leq n}$ be integer-valued sequences
such that $b_{2}=y_{2}=1$, $b_{i}\in\\{0,1\\}$ and $2\leq y_{i}\leq i-1$ for
$3\leq i\leq n$. Let vertex labelled 1 be the root and, for each $1<i\leq n$,
let vertex $i$ be connected to vertex $1$ if $b_{i}=1$ and, otherwise, let
vertex $i$ be connected to vertex $y_{i}$.
The following coupling is exploited in the proof of Theorem 1.3. Define random
vectors $(B_{i})_{1<i\leq n},(B_{i}^{(\varepsilon)})_{1<i\leq
n},(Y_{i})_{1<i\leq n}$ as follows. Let $(B_{i})_{1<i\leq n}$ be independent
$\mathrm{Bernoulli}(\frac{1}{i-1})$ random variables, let
$(B_{i}^{(\varepsilon)})_{1<i\leq n}$ have the law of $(B_{i})_{1<i\leq n}$
conditioned on $\sum_{i=2}^{n}B_{i}\in(1\pm\varepsilon)\ln(n)$ and let
$(Y_{i})_{1<i\leq n}$ be independent random variables such that $Y_{2}=1$ a.s.
and $Y_{i}$ is uniform over $\\{2,\ldots,i-1\\}$ for $3\leq i\leq n$. We
assume that the vector $(Y_{i})_{1<i\leq n}$ is independent from the rest,
while the coupling of $(B_{i})_{1<i\leq n}$ and
$(B_{i}^{(\varepsilon)})_{1<i\leq n}$ is arbitrary.
The tree obtained from $(B_{i})_{1<i\leq n},(Y_{i})_{1<i\leq n}$ and the
construction above has the distribution of a RRT. To see this, write $v_{i}$
for the parent of vertex $i$; then $1\leq v_{i}<i$ for each $1<i\leq n$.
First, note that each $v_{i}$ is independent from the rest since
$(B_{i})_{1<i\leq n}$ and $(Y_{i})_{1<i\leq n}$ are independent. Next we show
that $v_{i}$ is chosen uniformly at random from $\\{1,2,\dots,i-1\\}$. First,
we have $v_{2}=1$ almost surely. For $2\leq\ell<i\leq n$, by the independence
of $B_{i}$ and $Y_{i}$, we have
$\displaystyle{\mathbf{P}}\left(v_{i}=1\right)$
$\displaystyle={\mathbf{P}}\left(B_{i}=1\right)=\frac{1}{i-1}={\mathbf{P}}\left(B_{i}=0,Y_{i}=\ell\right)={\mathbf{P}}\left(v_{i}=\ell\right);$
therefore, the tree obtained has the law of a RRT and so we denote it by
$T_{n}$. Analogously, write $T_{n}^{(\varepsilon)}$ for the tree obtained from
$(B_{i}^{(\varepsilon)})_{1<i\leq n}$ and $(Y_{i})_{1<i\leq n}$, and let
$D^{(\varepsilon)}$ be the degree of its root.
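The two-step decomposition behind this construction can be verified directly: for $3\leq i\leq n$, ${\mathbf{P}}\left(v_{i}=1\right)=\frac{1}{i-1}$ and, for $2\leq\ell<i$, ${\mathbf{P}}\left(v_{i}=\ell\right)=(1-\frac{1}{i-1})\frac{1}{i-2}=\frac{1}{i-1}$, so the parent is uniform. A small empirical check (our sketch, not from the paper):

```python
import random
from collections import Counter

def two_step_parent(i, rng):
    """Parent of vertex i (i >= 3): a Bernoulli(1/(i-1)) decides whether
    i attaches to the root; otherwise the parent is uniform on {2,...,i-1}."""
    if rng.random() < 1.0 / (i - 1):
        return 1
    return rng.randint(2, i - 1)

rng = random.Random(3)
i, trials = 6, 100000
freq = Counter(two_step_parent(i, rng) for _ in range(trials))
# Each candidate parent 1, ..., 5 should appear with frequency close to 1/5.
```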
By definition of $D$ and the construction above we have
$D=\sum_{i=2}^{n}B_{i}$. Thus, conditioning on $\sum_{i=2}^{n}B_{i}$ amounts,
under this construction, to conditioning $T_{n}$ on the root degree $D$. In
particular, the distribution of $(B_{i}^{(\varepsilon)})_{1<i\leq n}$ is
defined so that $T_{n}^{(\varepsilon)}$ has the distribution of a RRT of size
$n$ conditioned on $D^{(\varepsilon)}\in(1\pm\varepsilon)\ln(n)$.
Since $D$ is concentrated around $\ln(n)$, the conditioning defining
$T_{n}^{(\varepsilon)}$ is on an event of probability close to one, and so
the degree sequences of $T_{n}$ and $T_{n}^{(\varepsilon)}$ do not differ by
much. Hence, the proof strategy of Theorem 1.3 is to estimate the moments
${\mathbf{E}}\left[(\ln(Z_{\geq D}))^{k}\right]$ using the monotonicity of
$Z_{\geq d}$, by conditioning on $D\in(1\pm\varepsilon)\ln(n)$ while retaining
$Z_{\geq d}$ instead of $Z_{\geq d}^{(\varepsilon)}$; see (22)–(24).
The following two propositions make this idea rigorous. For $d\geq 0$, let
$\displaystyle W_{d}$ $\displaystyle=\frac{1+Z_{\geq
d}^{(\varepsilon)}}{1+Z_{\geq d}}$ (11)
where $Z_{\geq d}$ is defined as in (3) and, similarly, let $Z_{\geq
d}^{(\varepsilon)}:=|\\{v\in
T_{n}^{(\varepsilon)}:\,d_{T_{n}^{(\varepsilon)}}(v)\geq d\\}|$.
The key to the proof of Proposition 3.1 lies in (14), which yields an upper
bound on the number of vertices that have differing degrees in $T_{n}$ and
$T_{n}^{(\varepsilon)}$ under the coupling. In turn, this allows us to infer
bounds on the ratio $W_{d}$ that hold with high probability and are uniform in
$d$.
###### Proposition 3.1.
Let $\varepsilon,\delta\in(0,1)$ and $0\leq
d\leq(1+\varepsilon)\ln\left(n\right)$. There is $C>0$,
$\beta=\beta(\varepsilon)$ and $n_{0}=n_{0}(\delta)$ such that for $n\geq
n_{0}$, under the coupling described above, we have
${\mathbf{P}}\left(W_{d}\in(1\pm\delta)\right)\geq 1-Cn^{-\beta}.$
###### Proof.
Let $m_{+}=\lceil(1+\varepsilon)\ln n\rceil$ so that $Z_{\geq d}\geq Z_{\geq
m_{+}}$. By Chebyshev’s inequality and (5), for $c\in(0,1)$,
$\displaystyle{\mathbf{P}}\left(Z_{\geq m_{+}}\leq c{\mathbf{E}}\left[Z_{\geq
m_{+}}\right]\right)\leq{\mathbf{P}}\left(\left|Z_{\geq
m_{+}}-{\mathbf{E}}\left[Z_{\geq
m_{+}}\right]\right|\geq(1-c){\mathbf{E}}\left[Z_{\geq
m_{+}}\right]\right)=o(n^{-\alpha}).$ (12)
Rewrite $W_{d}=1+\frac{Z_{\geq d}^{(\varepsilon)}-Z_{\geq d}}{1+Z_{\geq d}}$
to see that $W_{d}\in(1\pm\delta)$ is equivalent to $|Z_{\geq
d}^{(\varepsilon)}-Z_{\geq d}|\in[0,\delta(1+Z_{\geq d}))$. Hence, it suffices
to show that there is $n_{0}(\delta)\in\mathbb{N}$, such that for $n\geq
n_{0}$,
$\displaystyle\\{Z_{\geq m_{+}}\geq c{\mathbf{E}}\left[Z_{\geq
m_{+}}\right],D\in(1\pm\varepsilon)\ln(n)\\}\subset\\{|Z_{\geq
d}^{(\varepsilon)}-Z_{\geq d}|\leq\delta(1+Z_{\geq d})\\};$ (13)
which by a contrapositive argument, together with (8) and (12), yields for
$\beta=\min{\\{\alpha,\varepsilon^{2}/12\\}}>0$,
$\displaystyle{\mathbf{P}}\left(|Z_{\geq d}^{(\varepsilon)}-Z_{\geq
d}|>\delta(1+Z_{\geq d})\right)$
$\displaystyle\leq{\mathbf{P}}\left(D\notin(1\pm\varepsilon)\ln(n)\right)+{\mathbf{P}}\left(Z_{\geq
m_{+}}\leq c{\mathbf{E}}\left[Z_{\geq m_{+}}\right]\right)=O(n^{-\beta}).$
We extend the notation introduced for the coupling; let
${\mathcal{S}}:=\\{i\in[2,n]:v_{i}=v_{i}^{(\varepsilon)}\\}$ be the set of
vertices that have the same parent in $T_{n}$ and $T_{n}^{(\varepsilon)}$ and
for $i\in[n]$ denote the set of children of $i$ in $T_{n}$ and
$T_{n}^{(\varepsilon)}$, respectively, by
$\displaystyle\mathcal{C}(i):=\\{j\in[2,n]:v_{j}=i\\}\qquad\text{and}\qquad\mathcal{C}^{(\varepsilon)}(i):=\\{j\in[2,n]:v_{j}^{(\varepsilon)}=i\\}.$
By the coupling construction,
${\mathcal{S}}\cup(\mathcal{C}(1)\bigtriangleup\mathcal{C}^{(\varepsilon)}(1))$
is a partition of $[2,n]$; that is, whenever the parent of a vertex
$i\in[2,n]$ differs in $T_{n}$ and $T_{n}^{(\varepsilon)}$ we infer $B_{i}\neq
B_{i}^{(\varepsilon)}$ and so either
$i\in\mathcal{C}(1)\setminus\mathcal{C}^{(\varepsilon)}(1)$ or
$i\in\mathcal{C}^{(\varepsilon)}(1)\setminus\mathcal{C}(1)$. The consequence
of this observation is two-fold: First, for any $i\in[2,n]$, a necessary
condition for $d_{T_{n}}(i)\neq d_{T_{n}^{(\varepsilon)}}(i)$ is that
$\mathcal{C}(i)\neq\mathcal{C}^{(\varepsilon)}(i)$. Second, the function
$j\mapsto Y_{j}$ that maps
$\mathcal{C}(1)\bigtriangleup\mathcal{C}^{(\varepsilon)}(1)$ to
$\\{i\in[2,n]:\mathcal{C}(i)\neq\mathcal{C}^{(\varepsilon)}(i)\\}$ is
surjective. Indeed, if $i\neq 1$ and
$\mathcal{C}(i)\neq\mathcal{C}^{(\varepsilon)}(i)$ then there exists
$j\in\mathcal{C}(1)\bigtriangleup\mathcal{C}^{(\varepsilon)}(1)$ such that
$Y_{j}=i$. Together they imply the following chain of inequalities,
$|\\{i\in[2,n]:d_{T_{n}}(i)\neq
d_{T_{n}^{(\varepsilon)}}(i)\\}|\leq|\\{i\in[2,n]:\mathcal{C}(i)\neq\mathcal{C}^{(\varepsilon)}(i)\\}|\leq|\mathcal{C}(1)\bigtriangleup\mathcal{C}^{(\varepsilon)}(1)|;$
(14)
the first inequality by containment of the corresponding sets and the second
one by the surjective function described above. On the other hand, $|Z_{\geq
d}-Z_{\geq d}^{(\varepsilon)}|$ equals
$\displaystyle\left|\sum_{i=1}^{n}1_{\\{d_{T_{n}}(i)\geq
d\\}}(i)-\sum_{i=1}^{n}1_{\\{d_{T_{n}^{(\varepsilon)}\\!}(i)\geq
d\\}}(i)\right|$ $\displaystyle\leq\sum_{i=1}^{n}\Big{|}1_{\\{d_{T_{n}}(i)\geq
d\\}}(i)-1_{\\{d_{T_{n}^{(\varepsilon)}\\!}(i)\geq d\\}}(i)\Big{|}$
$\displaystyle\leq 1+|\\{i\in\\{2,...,n\\}:d_{T_{n}}(i)\neq
d_{T_{n}^{(\varepsilon)}}(i)\\}|.$
Therefore,
$\displaystyle|Z_{\geq d}-Z_{\geq d}^{(\varepsilon)}|\leq
1+|\mathcal{C}(1)\bigtriangleup\mathcal{C}^{(\varepsilon)}(1)|\leq
1+2\max\\{|\mathcal{C}(1)|,|\mathcal{C}^{(\varepsilon)}(1)|\\}.$ (15)
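Inequality (15) can be checked numerically. The sketch below is a toy instance of the coupling (a simplifying assumption: the perturbed tree only detaches some children of the root, which is one admissible way for the two parent sequences to differ); it builds a random recursive tree, perturbs it, and verifies the bound for every degree threshold $d$, with the $1$ accounting for the root itself.

```python
import random

def degrees(parent, n):
    # parent[i] is the parent of vertex i for i = 2..n; vertex 1 is the root
    deg = [0] * (n + 1)
    for i in range(2, n + 1):
        deg[parent[i]] += 1
    return deg

def z_geq(deg, d, n):
    # number of vertices with at least d children
    return sum(1 for i in range(1, n + 1) if deg[i] >= d)

random.seed(0)
n = 500
# random recursive tree: vertex i attaches to a uniform earlier vertex
parent = [0, 0] + [random.randint(1, i - 1) for i in range(2, n + 1)]
# toy perturbation: each child of the root is reattached elsewhere with prob 1/2,
# so every differing parent lies in the symmetric difference of the root's children
parent_eps = parent[:]
for i in range(3, n + 1):
    if parent[i] == 1 and random.random() < 0.5:
        parent_eps[i] = random.randint(2, i - 1)

sym_diff = sum(1 for i in range(2, n + 1) if parent[i] != parent_eps[i])
deg, deg_eps = degrees(parent, n), degrees(parent_eps, n)
for d in range(0, n):
    assert abs(z_geq(deg, d, n) - z_geq(deg_eps, d, n)) <= 1 + sym_diff
```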
We are now ready to prove (13). Fix, e.g., $c=1/2$ and let
$n_{0}=n_{0}(\delta)$ be large enough that
$\frac{1+4\ln(n)}{\delta}<c{\mathbf{E}}\left[Z_{\geq m_{+}}\right]$; this is
possible since ${\mathbf{E}}\left[Z_{\geq m_{+}}\right]$ grows polynomially in
$n$, by Proposition 2.1. In particular, recalling $Z_{\geq d}\geq Z_{\geq
m_{+}}$, for $n\geq n_{0}$ and any $\varepsilon\in(0,1)$ we have
$\displaystyle\\{Z_{\geq m_{+}}\geq c{\mathbf{E}}\left[Z_{\geq
m_{+}}\right]\\}\subset\left\\{Z_{\geq
d}\geq\frac{1+2(1+\varepsilon)\ln(n)}{\delta}\right\\}.$ (16)
Moreover, by the construction of $T_{n}^{(\varepsilon)}$,
$|\mathcal{C}^{(\varepsilon)}(1)|=D^{(\varepsilon)}\leq(1+\varepsilon)\ln n$,
so that (15) implies
$\displaystyle\left\\{Z_{\geq
d}\geq\frac{1+2(1+\varepsilon)\ln(n)}{\delta},D\in(1\pm\varepsilon)\ln(n)\right\\}\subset\\{|Z_{\geq
d}^{(\varepsilon)}-Z_{\geq d}|\leq\delta(1+Z_{\geq d})\\};$ (17)
note that (17) holds for all $n\in\mathbb{N}$. Together with (16), this implies
(13) for $n\geq n_{0}$, as desired. ∎
###### Proposition 3.2.
Let $\varepsilon\in(0,1)$, $\ell\in\mathbb{N}$. For $0\leq
d\leq(1+\varepsilon)\ln\left(n\right)$, there is $\beta=\beta(\varepsilon)>0$
such that, under the coupling described above, we have
$\left|{\mathbf{E}}\left[(\ln\left(W_{d}\right))^{\ell}\right]\right|\leq(\ln\left(n\right))^{\ell}O(n^{-\beta})+1.$
###### Proof.
We first simplify to consider
$\displaystyle\left|{\mathbf{E}}\left[(\ln\left(W_{d}\right))^{\ell}\right]\right|\leq{\mathbf{E}}\left[\left|(\ln(W_{d}))^{\ell}\right|\right]\leq{\mathbf{E}}\left[|\ln(W_{d})|^{\ell}\right].$
Now, $\frac{1}{n}\leq W_{d}\leq n$ for every $0\leq d\leq(1+\varepsilon)\ln(n)$, since $W_{0}\equiv 1$ while $\max\\{Z_{\geq
d},Z_{\geq d}^{(\varepsilon)}\\}<n$ for $d\geq 1$, as in any tree there is at
least one vertex of degree zero. Then, for any $\delta\in(0,1)$, we have
$\displaystyle{\mathbf{E}}\left[|\ln(W_{d})|^{\ell}{\mathbf{1}}_{[W_{d}<1-\delta]}\right]\leq(\ln(n))^{\ell}{\mathbf{P}}\left(W_{d}<1-\delta\right),$
$\displaystyle{\mathbf{E}}\left[|\ln(W_{d})|^{\ell}{\mathbf{1}}_{[W_{d}>1+\delta]}\right]\leq(\ln(n))^{\ell}{\mathbf{P}}\left(W_{d}>1+\delta\right).$
Proposition 3.1 implies these two terms are
$(\ln(n))^{\ell}O\left(n^{-\beta}\right)$, where the implicit constant depends
on the choice of $\delta$. With foresight fix $\delta$ to satisfy
$\left(\frac{\delta}{1-\delta}\right)^{\ell}+\delta^{\ell}=1$. Using that
$x>0$ satisfies $1-\frac{1}{x}\leq\ln(x)\leq x-1$,
$\displaystyle{\mathbf{E}}\left[|\ln(W_{d})|^{\ell}{\mathbf{1}}_{[W_{d}\in(1\pm\delta)]}\right]$
$\displaystyle\leq{\mathbf{E}}\left[\Big{|}1-\frac{1}{W_{d}}\Big{|}^{\ell}{\mathbf{1}}_{[W_{d}\in(1-\delta,1)]}\right]+{\mathbf{E}}\left[|W_{d}-1|^{\ell}{\mathbf{1}}_{[W_{d}\in[1,1+\delta)]}\right]$
$\displaystyle\leq\Big{(}\frac{\delta}{1-\delta}\Big{)}^{\ell}+\delta^{\ell}=1;$
and so the result follows. ∎
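The calibration of $\delta$ in the proof above can be carried out numerically. The sketch below solves $(\delta/(1-\delta))^{\ell}+\delta^{\ell}=1$ by bisection on $(0,1/2]$ (the left-hand side is increasing in $\delta$ there) and spot-checks the elementary logarithm bounds used in the last display.

```python
import math

def calibrate_delta(ell, tol=1e-12):
    # solve (delta / (1 - delta))**ell + delta**ell = 1 for delta in (0, 1/2]
    f = lambda d: (d / (1 - d)) ** ell + d ** ell - 1
    lo, hi = 1e-9, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

# the elementary bounds 1 - 1/x <= ln(x) <= x - 1, valid for x > 0
for x in [0.3, 0.9, 1.0, 1.7, 12.0]:
    assert 1 - 1 / x <= math.log(x) <= x - 1

for ell in range(1, 6):
    d = calibrate_delta(ell)
    assert abs((d / (1 - d)) ** ell + d ** ell - 1) < 1e-9
```

For $\ell=1$ the equation reduces to $\delta^{2}-3\delta+1=0$, so the solver should return $\delta=(3-\sqrt{5})/2\approx 0.382$.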
### 3.1 Proof of Theorem 1.3
Fix $k\in\mathbb{N}$ and recall $\gamma:=1-\ln(2)$. We claim that for any
$\varepsilon\in(0,\frac{1}{3})$ there exists $c=c(\varepsilon)>0$ and
$C=C(k,\varepsilon)>0$ such that
$\displaystyle\big{(}(\gamma-\varepsilon)\ln\left(n\right)\big{)}^{k}\big{(}1+O(n^{-c})\big{)}-C\leq{\mathbf{E}}\left[\big{(}\ln\left(Z_{\geq
D}\right)\big{)}^{k}\right]\leq\big{(}(\gamma+\varepsilon)\ln\left(n\right)\big{)}^{k}+C.$
(18)
It is straightforward to verify that (18) establishes Theorem 1.3. So it
remains to prove (18).
Let $\varepsilon^{\prime}=\varepsilon/(2\ln(2))$ and write
$\displaystyle
m_{-}=\lfloor(1-\varepsilon^{\prime})\ln(n)\rfloor\quad\text{and}\quad
m_{+}=\lceil(1+\varepsilon^{\prime})\ln(n)+1\rceil.$ (19)
We focus on the term ${\mathbf{E}}\left[(\ln\left(Z_{\geq
D}\right))^{k}\mathbf{1}_{\\{D\in(1\pm\varepsilon^{\prime})\ln(n)\\}}\right]$
as (8) and $1\leq Z_{\geq D}\leq n$ imply
$0\leq{\mathbf{E}}\left[(\ln\left(Z_{\geq
D}\right))^{k}\mathbf{1}_{\\{D\notin(1\pm\varepsilon^{\prime})\ln(n)\\}}\right]\leq(\ln\left(n\right))^{k}{\mathbf{P}}\left(D\notin(1\pm\varepsilon^{\prime})\ln\left(n\right)\right)=(\ln\left(n\right))^{k}O(n^{-(\varepsilon^{\prime})^{2}/12}).$
(20)
Using the monotonicity of $Z_{\geq d}$, we have
$\displaystyle{\mathbf{E}}\left[(\ln\left(Z_{\geq
D}\right))^{k}\mathbf{1}_{\\{D\in(1\pm\varepsilon^{\prime})\ln(n)\\}}\right]\leq{\mathbf{E}}\left[(\ln\left(Z_{\geq
m_{-}}\right))^{k}\mathbf{1}_{\\{D\in(1\pm\varepsilon^{\prime})\ln(n)\\}}\right]\leq{\mathbf{E}}\left[(\ln\left(Z_{\geq
m_{-}}\right))^{k}\right],$
which by (6) yields,
${\mathbf{E}}\left[(\ln\left(Z_{\geq
D}\right))^{k}\mathbf{1}_{\\{D\in(1\pm\varepsilon^{\prime})\ln(n)\\}}\right]\leq((\gamma+\varepsilon)\ln(n))^{k}+C_{k}.$
(21)
For the lower bound we consider the conditional variable
$Z^{(\varepsilon^{\prime})}_{\geq d}$ defined in the previous section with
$d=m_{+}$. Observe that, if $D\in(1\pm\varepsilon^{\prime})\ln(n)$ then
$Z_{\geq D}\geq 1+Z_{\geq m_{+}}$, and thus we obtain
$\displaystyle{\mathbf{E}}\left[(\ln\left(Z_{\geq
D}\right))^{k}\mathbf{1}_{\\{D\in(1\pm\varepsilon^{\prime})\ln(n)\\}}\right]$
$\displaystyle\geq{\mathbf{E}}\left[(\ln\left(1+Z_{\geq
m_{+}}\right))^{k}\mathbf{1}_{\\{D\in(1\pm\varepsilon^{\prime})\ln(n)\\}}\right]$
$\displaystyle\geq{\mathbf{E}}\left[\left(\ln\left(1+Z^{(\varepsilon^{\prime})}_{\geq
m_{+}}\right)\right)^{k}\right]{\mathbf{P}}\left(D\in(1\pm\varepsilon^{\prime})\ln(n)\right).$
(22)
The definition of $W_{d}$ gives
$\displaystyle\ln\left(1+Z_{\geq
m_{+}}^{(\varepsilon^{\prime})}\right)=\ln\left(\left(1+Z_{\geq
m_{+}}^{(\varepsilon^{\prime})}\right)\dfrac{1+Z_{\geq m_{+}}}{1+Z_{\geq
m_{+}}}\right)=\ln\left(1+Z_{\geq m_{+}}\right)+\ln\left(W_{m_{+}}\right);$
(23)
similarly, for $k\geq 2$, the binomial expansion implies
$\displaystyle\left(\ln\left(1+Z_{\geq
m_{+}}^{(\varepsilon^{\prime})}\right)\right)^{k}=\left(\ln(1+Z_{\geq
m_{+}})\right)^{k}+\sum_{\ell=1}^{k}\binom{k}{\ell}\ln(1+Z_{\geq
m_{+}})^{k-\ell}\ln(W_{m_{+}})^{\ell}.$ (24)
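Expansion (24) is the binomial theorem applied to $(a+b)^{k}$ with $a=\ln(1+Z_{\geq m_{+}})$ and $b=\ln(W_{m_{+}})$; a quick numeric sanity check, with arbitrary placeholder values standing in for the two logarithms:

```python
import math
from math import comb

def expand(a, b, k):
    # right-hand side of (24): a**k plus the binomial correction terms
    return a ** k + sum(comb(k, l) * a ** (k - l) * b ** l
                        for l in range(1, k + 1))

# placeholder values for ln(1 + Z) and ln(W)
a, b = math.log(1 + 37.0), math.log(0.93)
for k in range(2, 7):
    assert abs((a + b) ** k - expand(a, b, k)) < 1e-6
```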
We use (7) for a lower bound on the expectation of the main term in these last
two decompositions. For the error terms involving $W_{m_{+}}$ we use
Proposition 3.2. If $k=1$, we directly get
$\displaystyle{\mathbf{E}}\left[\ln\left(1+Z_{\geq m_{+}}^{(\varepsilon^{\prime})}\right)\right]$
$\displaystyle={\mathbf{E}}\left[\ln\left(1+Z_{\geq
m_{+}}\right)\right]+{\mathbf{E}}\left[\ln\left(W_{m_{+}}\right)\right]$
$\displaystyle\geq(\gamma-\varepsilon)\ln(n)\left(1+O(n^{-\alpha^{\prime}})\right)+\ln\left(n\right)O(n^{-\beta})-1.$
If $k\geq 2$, we control each of the terms in the sum of (24). For
$1\leq\ell\leq k$, the Cauchy-Schwarz inequality gives
$\displaystyle\bigg{|}{\mathbf{E}}\left[\ln(1+Z_{\geq
m_{+}})^{k-\ell}\ln(W_{m_{+}})^{\ell}\right]\bigg{|}$
$\displaystyle\leq{\mathbf{E}}\left[\ln(1+Z_{\geq
m_{+}})^{2(k-\ell)}\right]^{1/2}{\mathbf{E}}\left[\ln(W_{m_{+}})^{2\ell}\right]^{1/2}.$
(25)
The deterministic bound $Z_{\geq m_{+}}<n$ implies
${\mathbf{E}}\left[\ln(1+Z_{\geq
m_{+}})^{2(k-\ell)}\right]^{1/2}\leq(\ln(n))^{k-\ell}$. On the other hand,
Proposition 3.2 yields
$\displaystyle{\mathbf{E}}\left[\ln(W_{m_{+}})^{2\ell}\right]^{1/2}$
$\displaystyle\leq\left((\ln\left(n\right))^{2\ell}O(n^{-\beta})+1\right)^{1/2}\leq\left(\ln(n)\right)^{\ell}O(n^{-\beta})+1.$
(26)
Thus, after taking expectations in (24), we get
$\displaystyle{\mathbf{E}}\left[\left(\ln\left(1+Z^{(\varepsilon^{\prime})}_{\geq
m_{+}}\right)\right)^{k}\right]\geq((\gamma-\varepsilon)\ln(n))^{k}(1-O(n^{-c}))-C;$
(27)
where $c=\min\\{\alpha^{\prime},\beta\\}$ and $C=\max\\{C_{k},2^{k}\\}$. Then
(20), (21), and (27), together with (8), imply (18), completing the proof.
## 4 Conclusion and open problems
As far as we know, our results provide the first quantitative estimate for the
deletion time $X^{targ}_{n}$ for random recursive trees. Our main result,
Theorem 1.1, confirms the intuition that the targeted procedure requires
substantially fewer cuts than the random edge deletion procedure. It remains
an open question whether $\ln(X^{targ}_{n})$ also grows asymptotically as
$\ln(n)$. Contrary to the case of uniform edge-cutting, in the targeted
vertex-cutting process it is challenging to describe, at each step, the
distribution of either the cut tree or the remaining tree. Even keeping track
of the number of vertices in the first cut tree remains an open question.
# Learning Agile Paths from Optimal Control
Alex Beaudin
McGill University
Department of Computer Science
Montreal, Quebec, Canada
<EMAIL_ADDRESS>
&Hsiu-Chin Lin
McGill University
Department of Computer Science
Montreal, Quebec, Canada
<EMAIL_ADDRESS>
###### Abstract
Efficient motion planning algorithms are of central importance for deploying
robots in the real world. Unfortunately, these algorithms often drastically
reduce the dimensionality of the problem for the sake of feasibility, thereby
foregoing optimal solutions. This limitation is most readily observed in agile
robots, where the solution space can have multiple additional dimensions.
Optimal control approaches partially solve this problem by finding optimal
solutions without sacrificing the complexity of the environment, but do not
meet the efficiency demands of real-world applications. This work proposes an
approach to resolve these issues simultaneously by training a machine learning
model on the outputs of an optimal control approach.
> Keywords: Legged Robots, Imitation Learning, Optimal Control
## 1 Introduction
Autonomous robotic systems are of particular interest for many fields,
especially those that can be dangerous for human intervention like search and
rescue, and maintenance on rigs. However, motion planning in unstructured
environments is still a hard problem for legged robots, and their success
depends largely on their ability to plan their paths robustly. Moreover, the
method in which a controller deals with obstacles has great consequences on
the planned trajectory, and these optimizations are quintessential in
generating agile motions for real-world robots.
Trajectory optimization is a common practice for generating motion for legged
systems [1, 2, 3], since it can produce optimal trajectories which satisfy the
physical and environmental constraints of the robot. However, the solution
from trajectory optimization is only valid for a particular pair of initial
and target positions, and one needs to re-plan if the pair changes. Due to
high-dimensionality and complexity, solving such an optimization problem for
legged robots is infeasible in real-time.
Previous work simplified the problem by using a reduced-order model [4] and
refining the trajectory using model predictive control [5]. However, the issue
is exacerbated in the presence of obstacles, since collision avoidance
constraints are non-linear algebraic constraints and so harder to solve.
In recent years, imitation learning [6, 7] and reinforcement learning [8, 9]
have become the dominant focus in the research community. The data-driven
approach offers a global solution and removes the hurdle of re-planning. On
the other hand, collecting data for imitation learning is labour-intensive; it is typically done with motion capture [10] or animal data [11], both of which are difficult to obtain for legged robots. Reinforcement learning
does not require any data, but it is extremely time-consuming to learn a
policy.
For planning with obstacles, most work focuses on modelling the environment as
a 2-dimensional grid that represents the height of the obstacles [12]. The
collision avoidance method finds the traversable paths in the plane [13].
However, the paths may be sub-optimal, since completely circumventing an
obstacle is time consuming at best, and completely impossible at worst.
To mitigate the limitations of optimal control and imitation learning, we
propose a self-supervised learning approach for efficient 3D collision
avoidance in real-time. Specifically, we generate a set of motion data from
optimal control with a reduced model to create a rough plan and learn a policy
that reproduces the motion data. The learned policy is refined through whole-
body model predictive control which satisfies the physical constraints of the
robot.
## 2 Background
Let $\mathbf{x}_{k},\mathbf{u}_{k}$ represent the states and actions of the
robot at time-step $k$. The goal of optimal control is to find a trajectory, i.e., a sequence of states and actions $\mathbf{x},\mathbf{u}$, such that a given cost function is minimized.
Assuming that $\mathbf{x}^{i}$, $\mathbf{x}^{t}$ are the initial and target
state of the robot specified by the user, a typical problem can be formulated
as the following
$\displaystyle\min_{\mathbf{x}_{0},\dots,\mathbf{x}_{N},\,\mathbf{u}_{0},\dots,\mathbf{u}_{N}}\quad\sum_{k}\mathcal{L}(\mathbf{x}_{k},\mathbf{u}_{k})$ (cost function) (1)
subject to
$\mathbf{x}_{0}=\mathbf{x}^{i}$ (initial condition)
$\mathbf{x}_{N}=\mathbf{x}^{t}$ (terminal condition)
$\mathbf{x}_{k+1}=\mathcal{F}(\mathbf{x}_{k},\mathbf{u}_{k})$ (forward dynamics)
$\vdots$ (other constraints)
For legged robots, motion planning is normally done through optimization. It
is well-known that the states of legged robots drift, and predicting a long
trajectory is not ideal. In addition, trajectory optimization, especially for
long horizon, is not feasible in real-time. This is particularly an issue in
the presence of obstacles, since collision constraints are generally non-
linear and thus require non-linear solvers. A common remedy is to combine trajectory optimization for long-horizon planning with model predictive control for short-horizon, real-time planning and control.
In the proposed work, we will use trajectory optimization to plan a rough path
for the robot torso while avoiding collisions with the environment. The
outcome of trajectory optimization is generated using an approximated model,
which may not be realistic for the robot. Therefore, we use model-predictive
control to refine the path from the reduced model.
## 3 Methods
Let $\mathbf{q},\dot{\mathbf{q}},\ddot{\mathbf{q}}\in\mathbb{R}^{12}$ denote
the joint positions, velocities, and accelerations of a 12 degree-of-freedom
quadruped robot. We assume that the target position of the robot torso is
given, and the environment constraints are fully provided. Our goal is to find
the control torques $\bm{\tau}\in\mathbb{R}^{12}$ that can reach the given
target while avoiding the obstacles. The objective of the proposed framework
is to enable robots to learn a novel skill from self-labelled data.
### 3.1 Autonomous Data Generation
Our approach is to formulate the problem as an optimal control problem. We
simplify the problem by focusing on the trajectory of the robot torso.
$\mathbf{s}=[x,y,z,\theta_{z}]^{T}\in\mathbb{R}^{4}$ (2)
where $x,y,z$ denote the translation of the torso, $\theta_{z}$ denotes the
rotation about its local z-axis, and roll and pitch are fixed during the
optimization. Given the initial position of the robot state $\mathbf{s}^{i}$,
the task is to find a sequence of states $\\{\mathbf{s}_{k}\\}_{k=1}^{N}$
that guides the robot from its initial pose $\mathbf{s}^{i}\in\mathbb{R}^{4}$
to its target pose $\mathbf{s}^{t}\in\mathbb{R}^{4}$ while minimizing the time
$\mathcal{T}$ and avoiding the obstacles at position $\mathbf{s}^{o}$.
We formulate this as a trajectory optimization problem, where the state is the
positions and velocities
$\mathbf{x}=\left[\mathbf{s},\dot{\mathbf{s}}\right]^{T}\in\mathbb{R}^{8}$,
and the command is the acceleration
$\mathbf{u}=\ddot{\mathbf{s}}\in\mathbb{R}^{4}$. The decision variables are
the sequences of N states and commands, as described in Equation 3.
$\displaystyle\min_{\mathbf{x}_{0},\cdots,\mathbf{x}_{N},\,\mathbf{u}_{0},\cdots,\mathbf{u}_{N}}\quad\mathcal{T}$ (minimum time) (3)
subject to
$\mathbf{x}_{k+1}=\mathcal{F}(\mathbf{x}_{k},\mathbf{u}_{k}),\quad\forall k=0,\dots,N-1$ (forward dynamics)
$\mathbf{x}_{0}=[\mathbf{s}^{i},\mathbf{0}]^{T},\ \mathbf{u}_{0}=\mathbf{0}$ (initial condition)
$\mathbf{x}_{N}=[\mathbf{s}^{t},\mathbf{0}]^{T},\ \mathbf{u}_{N}=\mathbf{0}$ (terminal condition)
$\mathbf{x}_{min}\leq\mathbf{x}_{k}\leq\mathbf{x}_{max},\quad\forall k=0,\dots,N$ (state boundary conditions)
$\mathbf{u}_{min}\leq\mathbf{u}_{k}\leq\mathbf{u}_{max},\quad\forall k=0,\dots,N$ (action boundary conditions)
$\mathcal{D}(\mathbf{p}_{k},\mathbf{p}^{o})\geq\epsilon,\quad\forall k=0,\dots,N$ (collision constraints)
where $\mathcal{F}$ defines the dynamic equation of the system,
$\mathbf{x}_{min}$, $\mathbf{x}_{max}$, $\mathbf{u}_{min}$,$\mathbf{u}_{max}$
are the lower and upper bounds of states and actions, and
$\mathcal{D}(\mathbf{p}_{k},\mathbf{p}^{o})$ denotes the distance between the
robot and the obstacles. This problem is transcribed into a direct collocation
problem [14] and solved using CasADi [15].
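The structure of this transcription can be sketched in a few lines of Python. This is an illustrative toy, not the actual CasADi formulation: it uses forward-Euler dynamics on the 4-dimensional torso state, a single point obstacle, and arbitrary step-size and clearance values, and it only evaluates the constraint residuals on a hand-rolled trajectory rather than solving the NLP.

```python
import math

DT = 0.05    # integration step (illustrative value)
EPS = 0.30   # minimum clearance epsilon (illustrative value)

def step(x, u):
    # forward Euler on the double-integrator torso model:
    # x = (s, s_dot) with s = (x, y, z, theta_z), u = s_ddot
    s, sd = x[:4], x[4:]
    s_next = [s[i] + DT * sd[i] for i in range(4)]
    sd_next = [sd[i] + DT * u[i] for i in range(4)]
    return s_next + sd_next

def dynamics_residuals(xs, us):
    # residual of x_{k+1} = F(x_k, u_k) at each k; zero on a consistent rollout
    return [max(abs(a - b) for a, b in zip(xs[k + 1], step(xs[k], us[k])))
            for k in range(len(us))]

def clearance(x, obstacle):
    # distance D(p_k, p^o) from the torso position to a point obstacle
    return math.dist(x[:3], obstacle)

# roll out accelerate-then-coast commands from rest, then check that the
# transcription constraints hold along the generated sequence
x0 = [0.0] * 8
us = [[0.1, 0.0, 0.0, 0.0]] * 10 + [[0.0] * 4] * 10
xs = [x0]
for u in us:
    xs.append(step(xs[-1], u))

obstacle = [5.0, 5.0, 0.0]
assert all(r == 0.0 for r in dynamics_residuals(xs, us))
assert all(clearance(x, obstacle) >= EPS for x in xs)
```

In the real problem the same residuals become equality constraints handed to the NLP solver, rather than quantities checked after the fact.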
### 3.2 Learning a Predictive Model
Assume that data are generated as in the previous section as a set of positions $\mathbf{p}$ and velocities $\dot{\mathbf{p}}$. Our goal is to learn a mapping $\bm{\pi}:\mathbb{R}^{4}\rightarrow\mathbb{R}^{4}$ that predicts the most suitable velocity given the current state, $\tilde{\dot{\mathbf{p}}}_{k}=\bm{\pi}(\mathbf{p}_{k})$.
We use a neural network to encode this relationship. The architecture consists
of six fully connected layers, each separated by a $\tanh$ activation
function. The network is trained using stochastic gradient descent to optimize
the mean squared error between the generated $\dot{\mathbf{p}}_{k}$ and the
predicted velocity $\tilde{\dot{\mathbf{p}}}_{k}$.
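A minimal version of such a network can be sketched as follows, reduced for illustration to one hidden tanh layer and a scalar toy mapping from position to desired velocity; the hidden width, learning rate, and sine-shaped data are placeholder choices, not the six-layer architecture trained here.

```python
import math, random

random.seed(1)
H = 16  # hidden width (toy value)
w1 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def sgd_step(p, target, lr=0.05):
    # one stochastic-gradient update on the squared error for sample (p, target)
    global b2
    h = [math.tanh(w1[j] * p + b1[j]) for j in range(H)]
    out = sum(w2[j] * h[j] for j in range(H)) + b2
    err = out - target
    for j in range(H):
        grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
        w2[j] -= lr * err * h[j]
        w1[j] -= lr * grad_h * p
        b1[j] -= lr * grad_h
    b2 -= lr * err
    return err ** 2

# toy "trajectory" data: a desired velocity as a function of a scalar position
data = [(p / 10, math.sin(p / 10)) for p in range(-30, 31)]
first = sum(sgd_step(p, v, lr=0.0) for p, v in data)  # lr = 0: loss only
for _ in range(200):
    random.shuffle(data)
    last = sum(sgd_step(p, v) for p, v in data)
assert last < first
```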
### 3.3 Whole-Body Model Predictive Control
Assuming that $\mathbf{x}_{k}^{ref}$ is the reference state produced by the
learnt model in Sec. 3.2, the whole-body model predictive control [16]
computes the torques $\bm{\tau}\in\mathbb{R}^{12}$ that track the desired
inputs $\mathbf{x}_{k}^{ref}$. This component minimizes the ground reaction
force $\mathbf{F}=[\mathbf{f}_{1},\dots,\mathbf{f}_{K}]^{T}$ for $K$ stance
legs while satisfying the physical constraints of the robot and the friction
cone constraints, which prevent slippage.
$\displaystyle\min_{\mathbf{F}_{0},\mathbf{F}_{1},\dots,\mathbf{F}_{M}}\quad\sum_{k}||\mathbf{x}_{k}-\mathbf{x}_{k}^{ref}||+||\mathbf{F}_{k}||$ (loss function) (4)
subject to
$\mathbf{x}_{k+1}=\mathcal{F}(\mathbf{x}_{k},\mathbf{u}_{k})$ (forward dynamics)
$\mu f_{z}\geq\sqrt{f_{x}^{2}+f_{y}^{2}}$ (friction cone constraints, per stance leg)
$\bm{\tau}^{min}\leq\bm{\tau}_{k}\leq\bm{\tau}^{max}$ (torque limit constraints)
$\mathbf{q}^{min}\leq\mathbf{q}_{k}\leq\mathbf{q}^{max}$ (joint limit constraints)
Here, the horizon $M$ is a relatively small number. Once $\mathbf{F}$ is
found, the first solution $\mathbf{F}_{0}$ is taken and the rest are
discarded. The ground reaction forces are converted into the equivalent
torques.
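The friction cone constraint in (4) requires each stance-leg force to lie within a cone of half-angle $\arctan(\mu)$ about the contact normal; a small helper for checking candidate forces (the value of $\mu$ is illustrative):

```python
import math

def in_friction_cone(f, mu=0.7):
    # f = (fx, fy, fz): inside the cone iff fz >= 0 and
    # mu * fz >= sqrt(fx**2 + fy**2), i.e. the tangential force cannot
    # exceed mu times the normal force
    fx, fy, fz = f
    return fz >= 0 and mu * fz >= math.hypot(fx, fy)

assert in_friction_cone((1.0, 0.0, 2.0))      # tangential/normal = 0.5 < 0.7
assert not in_friction_cone((2.0, 0.0, 2.0))  # ratio 1.0 > 0.7: would slip
assert not in_friction_cone((0.0, 0.0, -1.0)) # pulling on the ground
```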
The swing leg motion is independent of the model predictive control. We use a simple footstep planner which reads the desired base velocity and generates the next footstep position [17]. A simple interpolation is applied between the current footstep position and the next footstep position, and the result is tracked by standard PD control.
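One common choice for this interpolation, shown here as a hedged sketch (the linear horizontal profile, half-sine vertical lift, and step height are assumptions rather than the exact planner used):

```python
import math

def swing_foot(p_cur, p_next, phase, step_height=0.08):
    # phase in [0, 1]: linear interpolation in the horizontal plane,
    # half-sine lift in z so the foot leaves and lands at ground height
    x = p_cur[0] + phase * (p_next[0] - p_cur[0])
    y = p_cur[1] + phase * (p_next[1] - p_cur[1])
    z = step_height * math.sin(math.pi * phase)
    return (x, y, z)

start, end = (0.0, 0.1), (0.2, 0.1)
assert swing_foot(start, end, 0.0) == (0.0, 0.1, 0.0)   # lift-off on the ground
mid = swing_foot(start, end, 0.5)
assert abs(mid[2] - 0.08) < 1e-12                       # apex at half phase
landing = swing_foot(start, end, 1.0)
assert abs(landing[0] - 0.2) < 1e-12 and abs(landing[2]) < 1e-12
```

A PD controller then tracks the sampled reference at each control tick.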
Finally, a flowchart of the proposed framework is summarized in Fig. 1.
Figure 1: The pipeline of self-supervised collision avoidance planning
## 4 Experiments
The experiments were carried out on a quadruped robot in PyBullet [18]. We
created a simulated world where the robot needs to move from its initial
position to its target position with an obstacle between them. The nominal
height of the robot torso is 28 cm, and the height of the table is 25 cm. The
robot can crawl under the obstacle only if it lowers its torso height.
Fig. 2 illustrates the simulated setup. The robot starts at the origin and
must move to the black arrow at $(3,0)$. The red curve shows the path
generated by planning the 2D motion, and the green curve shows the path
generated by the 3D optimal control approach.
Figure 2: The experimental setup from the front and the side view. The robot
starts at the origin and must move to the target (black arrow). The red curve
shows the path generated by planning motion in 2D, and the green curve shows
the path generated via 3D optimal control.
The target position is $(3,0)$, the table is placed at $(1.5,0)$, and the
initial positions of the robot are randomly drawn from
$\mathbf{p}^{i}\sim\mathcal{N}([0.5,0.066,0.026],[0.5,0.66,0.02])$. We use the
trajectory optimization method discussed in Sec. 3.1 to generate a path for
each initial position, which yields 10000 trajectories, each with $\approx
100$ data points.
We use the methods from Sec. 3.2 for learning a predictive model. The network
architecture is [256,1024,1024,1024,1024,256] in the hidden layers, and it
took 80 seconds to train a model. This was done with batch sizes of 1024 data
points for 20 epochs with the stochastic gradient descent optimizer and an
initial learning rate of 0.5. The train-validate-test size proportions were
$80\%-10\%-10\%$. The trained model achieves an average mean squared error of $10^{-5}$.
Fig. 3 shows a snapshot of an example trajectory generated using the proposed method. We can see that the robot lowers its body in order to crawl under the table.
Figure 3: A snapshot of motion generated from learned model
## 5 Conclusion
This work proposes a self-supervised learning approach to learn a rough plan
for a quadruped robot to maneuver around obstacles. We use optimal control to
generate a rough plan and then use supervised learning to learn a predictive
model. The learned model provides the desired base motion, which is then refined by whole-body model predictive control. Further
improvements include relaxing more control variables to include the pitch and
roll of the base and incorporating cameras and LiDARs for perceiving the
environment.
#### Acknowledgments
We acknowledge the support of the Natural Sciences and Engineering Research
Council of Canada (NSERC).
Nous remercions le Conseil de recherches en sciences naturelles et en génie du
Canada (CRSNG) de son soutien.
## References
* Kalakrishnan et al. [2011] M. Kalakrishnan, S. Chitta, E. Theodorou, P. Pastor, and S. Schaal. STOMP: Stochastic trajectory optimization for motion planning. In _IEEE international conference on robotics and automation_ , pages 4569–4574, 2011.
* Posa et al. [2016] M. Posa, S. Kuindersma, and R. Tedrake. Optimization and stabilization of trajectories for constrained dynamical systems. In _IEEE International Conference on Robotics and Automation (ICRA)_ , pages 1366–1373, 2016.
* Bjelonic et al. [2020] M. Bjelonic, P. K. Sankar, C. D. Bellicoso, H. Vallery, and M. Hutter. Rolling in the deep–hybrid locomotion for wheeled-legged robots using online trajectory optimization. _IEEE Robotics and Automation Letters_ , 5(2):3626–3633, 2020.
* Apgar et al. [2018] T. Apgar, P. Clary, K. Green, A. Fern, and J. W. Hurst. Fast online trajectory optimization for the bipedal robot Cassie. In _Robotics: Science and Systems_ , volume 101, page 14, 2018.
* Di Carlo et al. [2018] J. Di Carlo, P. M. Wensing, B. Katz, G. Bledt, and S. Kim. Dynamic locomotion in the MIT Cheetah 3 through convex model-predictive control. In _IEEE/RSJ international conference on intelligent robots and systems (IROS)_ , pages 1–9, 2018.
* Schaal [1996] S. Schaal. Learning from demonstration. _Advances in neural information processing systems_ , 9, 1996.
* [7] Y. Duan, M. Andrychowicz, B. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba. One-shot imitation learning. _Advances in neural information processing systems_ , 30, 2017.
* Yang et al. [2022] Y. Yang, T. Zhang, E. Coumans, J. Tan, and B. Boots. Fast and efficient locomotion via learned gait transitions. In _Conference on Robot Learning_ , pages 773–783, 2022.
* Li et al. [2019] T. Li, H. Geyer, C. G. Atkeson, and A. Rai. Using deep reinforcement learning to learn high-level policies on the ATRIAS biped. In _2019 International Conference on Robotics and Automation (ICRA)_ , pages 263–269, 2019. doi:10.1109/ICRA.2019.8793864.
* Lin et al. [2014] H.-C. Lin, M. Howard, and S. Vijayakumar. A novel approach for representing and generalising periodic gaits. _Robotica_ , 32(8):1225–1244, 2014.
* Peng et al. [2020] X. B. Peng, E. Coumans, T. Zhang, T.-W. E. Lee, J. Tan, and S. Levine. Learning agile robotic locomotion skills by imitating animals. In _Robotics: Science and Systems_ , 2020.
* Lee et al. [2020] J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter. Learning quadrupedal locomotion over challenging terrain. _Science robotics_ , 5(47):eabc5986, 2020.
* Gasparino et al. [2022] M. V. Gasparino, A. N. Sivakumar, Y. Liu, A. E. Velasquez, V. A. Higuti, J. Rogers, H. Tran, and G. Chowdhary. WayFAST: Navigation with predictive traversability in the field. _IEEE Robotics and Automation Letters_ , 7(4):10651–10658, 2022.
* Von Stryk [1993] O. Von Stryk. _Numerical solution of optimal control problems by direct collocation_. Springer, 1993.
* Andersson et al. [2019] J. A. Andersson, J. Gillis, G. Horn, J. B. Rawlings, and M. Diehl. CasADi: A software framework for nonlinear optimization and optimal control. _Mathematical Programming Computation_ , 11(1):1–36, 2019.
* Di Carlo et al. [2018] J. Di Carlo, P. M. Wensing, B. Katz, G. Bledt, and S. Kim. Dynamic locomotion in the MIT Cheetah 3 through convex model-predictive control. In _IEEE/RSJ international conference on intelligent robots and systems (IROS)_ , pages 1–9, 2018.
* Raibert [1986] M. H. Raibert. _Legged robots that balance_. MIT press, 1986.
* Coumans and Bai [2016] E. Coumans and Y. Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning, 2016.
# Closed Loop Quantum Interferometry for
Phase-Resolved Rydberg Atom Field Sensing
Samuel Berweger$^{1}$, Alexandra B. Artusio-Glimpse$^{1}$, Andrew P. Rotunno$^{1}$, Nikunjkumar Prajapati$^{1}$, Joseph D. Christesen$^{2}$, Kaitlin R. Moore$^{3}$, Matthew T. Simons$^{1}$, Christopher L. Holloway$^{1}$
$^{1}$National Institute of Standards and Technology, Boulder, CO, 80305
$^{2}$SRI International, Boulder, CO, 80302
$^{3}$SRI International, Princeton, NJ, 08540
###### Abstract
Although Rydberg atom-based electric field sensing provides key advantages
over traditional antenna-based detection, it remains limited by the need for a
local oscillator (LO) for low-field and phase resolved detection. In this
work, we demonstrate that closed-loop quantum interferometric schemes can be
used to generate a system-internal reference that can directly replace an
external LO for Rydberg field sensing. We reveal that this quantum-
interferometrically defined internal reference phase and frequency can be used
analogously to a traditional LO for atom-based down-mixing to an intermediate
frequency for lock-in phase detection. We demonstrate that this LO-equivalent
functionality provides analogous benefits to an LO, including full 360∘ phase
resolution as well as improved sensitivity. The general applicability of this
approach is confirmed by demodulating a four phase-state signal broadcast on
the atoms. Our approach opens up new sensing schemes, and although the present
implementation still uses an auxiliary RF field, it provides a clear path
towards all-optical Rydberg atom sensing implementations.
## I Introduction
Rydberg atom-based field sensing is an emerging technology that uses resonant
transitions between excited states at high principal quantum numbers, $n$, to
detect radio frequency (RF) electric (E) fields [1, 2, 3]. This technology has
the potential to replace traditional wavelength-scaling antenna architectures
with compact atomic vapor cells [4] that use an optical readout of the atomic
response. However, similar to traditional RF demodulation schemes, phase-
sensitive detection often requires a local reference field. For the case of
Rydberg atoms, an additional local oscillator (LO) field can be applied, which
will be mixed to an intermediate frequency (IF) by the atoms themselves
("Rydberg mixer") [5, 6]. This Rydberg mixer provides benefits including
improved sensitivity [7, 6], frequency selectivity [8, 9], and phase
sensitivity [10] that allows angle-of-arrival [11] detection and demodulation
of phase-modulated communication signals [12]. However, a Rydberg mixer
nevertheless requires an additional LO RF field radiating the atoms with a
frequency within a few MHz of the measured field and a matched phase front,
which can be difficult to achieve and is undesirable in many applications.
One possible way to eliminate the need for an externally applied LO is to use
a closed loop scheme [13, 14]. These schemes exploit the quantum mechanical
interference across a set of driven transitions between discrete states that
form a closed loop. These can be used to mutually reference the phases of
fields across large frequency ranges [15]. Any transition between states in
this loop will simultaneously occur in both directions, where interference
between the two paths results in a transition probability that depends on the
relative phases of all fields involved [16]. Such approaches have typically
been applied to atomic ground-state transitions [15], but more recently
Rydberg atom loop schemes have emerged as an attractive means for phase
transfer between optical and microwave photons for quantum communications
applications [17, 18]. Closed-loop schemes can be complex because of orbital
angular momentum selection rules that require at least four transitions, and
in this respect a proposed scheme for Rydberg sensing that requires four RF
fields is impractical [14]. A recent experimental implementation that drives
two degenerate RF transitions succeeded at eliminating the need for additional
RF fields but was unable to achieve the full 360∘ phase resolution necessary
for modern digital modulation schemes such as phase-shift keying [19].
In this work, we demonstrate the general applicability of closed loop schemes
using Rydberg states for phase-sensitive field sensing applications and show
how they can directly produce LO-equivalent functionality. We implement a
quantum interferometric loop scheme for RF sensing where we leverage the
versatility of such an approach by using a loop whose four transitions comprise two optical and two non-degenerate RF frequencies. We show that
this scheme provides full 360∘ phase resolution on both RF fields.
Furthermore, we clearly demonstrate that a closed loop scheme establishes an
LO-free quantum coherent reference frequency and phase that can be exploited
analogously to the LO used in established Rydberg mixer measurements [5].
Using this quantum reference we perform LO-free demodulation of a quadrature
phase-shift keying (QPSK)-equivalent four-phase-state signal at a symbol rate of
800 Hz. We reveal that the sensitivity relative to a traditional LO-based
Rydberg mixer is reduced by as little as a factor of 5, and we expect
significantly improved sensitivity and bandwidth using optimally chosen states
[20]. Although the present implementation still requires an auxiliary RF
field, a notable feature of this scheme is the possibility of an RF-free all-
optical implementation that closes the loop using three phase-locked optical
fields to measure a fourth RF field.
Figure 1: Experimental Details. (a) Schematic of the EIT ladder and Rydberg
states used for the quantum interference scheme, as well as (b) the phase-
modulating electro-optic modulator (EOM) used for coupling laser sideband
generation. (c) Schematic of the experimental setup with counterpropagating
probe and coupling beams. (d) EIT spectra of the Rydberg states used with the
two RF fields turned on and off as indicated.
## II Experimental
A schematic of the closed-loop scheme used for our experiment is shown in Fig.
1(a). The loop is comprised of four fields, $E_{i},i=1,\ldots 4$, each with
corresponding frequencies $\omega_{i}$, Rabi frequencies $\Omega_{i}$, and
phases $\phi_{i}$. The probe field on the D2 transition first couples the 85Rb
5S1/2 ground state to the 5P3/2 state. An electro-optic modulator (EOM) driven
by an external phase-stable signal at a frequency $\omega_{mod}=2\pi\times$
5.822 GHz generates sidebands on the coupling field (Fig. 1(b)) at
$\omega_{1}=\omega_{c}-\omega_{mod}$ and $\omega_{4}=\omega_{c}+\omega_{mod}$ that then couple to the 79S1/2 and 78D5/2 states, respectively.
These states are then linked through the 79P3/2 state via two RF-frequency
transitions at $\omega_{2}=2\pi\times$ 7.292 GHz (SP transition) and
$\omega_{3}=2\pi\times$ 4.352 GHz (DP transition). The phases and frequencies
of this loop state arrangement are related by
$\omega_{1}+\omega_{2}+\omega_{3}=\omega_{4}$ and
$\phi_{1}+\phi_{2}+\phi_{3}=\phi_{4}$. Using the two EOM-generated sidebands
(Fig. 1(b)), these relationships reduce to
$\omega_{2}+\omega_{3}-2\omega_{mod}=0$ and $\phi_{2}+\phi_{3}-2\phi_{mod}=0$.
One notable benefit of this scheme is that any frequency or phase noise and/or
drift in the coupling laser cancels out, with the only remaining dependence on
our phase-locked RF fields. This set of states is chosen based on the narrow
bandwidth of our EOM around 5.8 GHz, but this approach is generally applicable
and we have verified that it works for other closed-loop state manifolds.
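The closure relations above can be checked directly against the quoted frequencies; the following sketch (illustrative, with values taken from the text and the common factors of $2\pi$ dropped since they cancel) verifies that the two RF fields and the EOM drive close the loop:

```python
import math

# Numerical check of the closure conditions, using the frequencies quoted in
# the text (GHz; the common 2*pi factors cancel in the closure relations).
omega_mod = 5.822   # EOM drive
omega_2 = 7.292     # SP RF transition
omega_3 = 4.352     # DP RF transition

# With sidebands omega_1 = omega_c - omega_mod and omega_4 = omega_c +
# omega_mod, the loop condition omega_1 + omega_2 + omega_3 = omega_4
# reduces to omega_2 + omega_3 - 2*omega_mod = 0:
closure = omega_2 + omega_3 - 2 * omega_mod
assert math.isclose(closure, 0.0, abs_tol=1e-9)

# The phase condition reduces the same way: phi_2 + phi_3 = 2*phi_mod, so
# phase drifts common to both sidebands (coupling-laser noise) cancel.
def reference_phase(phi_mod):
    return 2.0 * phi_mod
```

Because the coupling laser frequency $\omega_{c}$ drops out of both reduced conditions, only the phase-locked RF sources enter the reference.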
Our setup uses the established electromagnetically-induced transparency (EIT)
ladder approach to excite and probe the high lying Rydberg states [21]. Shown
in Fig. 1(c) is a schematic of the experimental setup, consisting of
counterpropagating coupling and probe lasers that are spatially overlapped in
a rubidium vapor cell; the RF fields are broadcast onto the cell using a
standard gain horn antenna. The EIT-induced change in probe transmission is
detected using balanced photodetection. The EOM is inserted into the coupling
beam path to generate sidebands on the coupling laser frequency $\omega_{c}$,
located at $\omega_{1}$ and $\omega_{4}$ as described above.
## III Results
We begin by examining the EIT spectra of our system of states. Shown in Fig.
1(d) are a set of spectra of the bare 79S and 78D states (left and right,
respectively) with the EOM turned off, where the 79S state EIT amplitude is
approximately 5 times weaker than the 78D. As we probe the bare states, we see
Autler-Townes (AT) splitting when the RF fields corresponding to the adjacent
transitions are turned on ($\omega_{2}$ for 79S and $\omega_{3}$ for 78D, Fig.
1(d) left and right plots, respectively), where we ensure that the Rabi
frequencies are equivalent: $\Omega_{2}=\Omega_{3}=2\pi\times$ 80 MHz. In
contrast, when only the non-adjacent RF fields are applied ($\omega_{3}$ for 79S and $\omega_{2}$ for 78D), there is little effect on the EIT spectra;
though a limited effect on the 78D state is seen due to the nearby transition
to 76F7/2. This nearby transition is also likely responsible for the asymmetry
seen in the AT doublet. When both RF fields are applied, we see two key
effects: the AT-splitting further increases and the central EIT peak
reappears. The reappearance of the EIT peak is due to a two-photon Raman
transition that we described previously [22], which here resonantly links the
79S and 78D states.
We set up our interferometric loop by turning on the EOM and setting the
coupling laser frequency halfway between the 79S and 78D states, i.e., the
sidebands are on-resonance with these transitions. As a result, the sideband-
generated 79S and 78D EIT peaks are spectrally superimposed when no RF is
applied in Fig. 1(d) (center). With applied $\omega_{2}$ or $\omega_{3}$ RF
fields, the corresponding constituent 79S and 78D peaks undergo AT splitting
and the other peak remains unchanged as a residual central EIT peak. This
superposition peak is dominated by the significantly stronger contribution of
the 78D transition. When both RF fields are applied – thus completing the
interferometric loop – we see a superposition of the AT-doublets from the 79S
and 78D states, as well as a central EIT peak that is due to the two-photon
Raman transition. It is this configuration with the superimposed AT doublets
and the two-photon Raman peak that we will use to examine the effect of RF
phase.
Figure 2: Phase Sensitivity. Demonstration of phase sensitivity on the DP
transition (a) while the phase of the other field is held constant. The false-
color plot shows the evolution of the EIT amplitude as a function of the
corresponding RF phase ($\phi_{3})$. The right panel shows the line cuts along
each of the prominent EIT peaks as a function of phase and the bottom panel
shows the EIT spectra taken along the dashed lines in the false color plot
indicated by the colored arrows. Spectra showing the equivalent maxima in EIT
amplitude as a function of the RF phase applied to the SP transition
($\phi_{2}$) are shown in (b). Modeled results in (c)–(e) show the evolution of
the phase modulation depth as a function of the difference in coupling laser
Rabi frequencies.
We begin our examination of the RF phase with the effect of the phase of
$\omega_{3}$ ($\phi_{3}$). Shown in Fig. 2(a) is a false color plot of the
superposition EIT peak amplitude as $\phi_{3}$ is swept over 360∘. Phase-
dependent (vertical) line cuts taken from the central (blue circles) and side
peaks (open black circles and squares), shown on the right, reveal a clear
oscillation in the amplitude with a depth of around 20$\%$ and a periodicity
of 360∘. The error bars in the phase-dependent line cuts are calculated from
the standard deviation of 10 measurements taken in sequence, and primarily
reflect the noise of the probe laser while larger variations arise from
fluctuations in our balanced detection due to thermal variations and drift. As
confirmed in spectral (horizontal) line cuts taken along the white dashed
lines and shown on the bottom, the central and AT-split side peaks oscillate
out of phase.
Since the accumulated phase of our quantum interferometric loop is due to all
fields involved, our measurement is sensitive to changes in phase of any of
the fields. Shown in Fig. 2(b) is the response of the EIT signal to the phase
of $\omega_{2}$ ($\phi_{2}$), where a comparable 20$\%$ modulation of the peak
is also seen.
While our modulation depth is only around 20$\%$, this is comparable to the
relative amplitudes of the 79S and 78D EIT peaks that we are superimposing for
our measurements. The optical field strengths of our EOM-generated sidebands
are locked at the same value, so any differences in peak amplitudes are
expected to be due to the transition dipole moments of the coupling laser
transitions. However, with transition dipole moments $\mu_{4}\approx 2\mu_{1}$
we similarly expect a factor of two difference in the EIT amplitudes. While we
see EIT amplitudes differing by a factor of 2 for $n<60$ – in good agreement
in our modeled EIT amplitudes – we see a large difference here for $n\approx
78$. The origin of this discrepancy is unclear, but we expect that it relates
to the increasingly higher density of nearby high angular momentum states at
large $n$.
We theoretically investigate the effect of the optical Rabi frequencies
$\Omega_{1}$ and $\Omega_{4}$ on the phase-dependent modulation depths. Shown
in Fig. 2(c)-(e) are the modeled [23, 22] EIT amplitudes at in-phase and out-
of-phase conditions with $\Omega_{1}$ and $\Omega_{4}$ as indicated. Here we
see that a full modulation depth can be achieved if the Rabi frequencies – and
thus EIT amplitudes – of the two fields are the same, which decreases to
42$\%$ for $\Omega_{1}=4\times\Omega_{4}$. We note that although quantitative
changes in the modulation depth depend on the effective Rabi frequency,
larger differences between $\Omega_{1}$ and $\Omega_{4}$ always lead to
reduced contrast. These models also confirm the experimental observation noted
above that the central and AT-doublet EIT peaks oscillate 180∘ out of phase as
a function of RF phase.
We now turn to the utility of our phase-sensitive quantum interferometric
scheme. As noted above, the relative phases and frequencies of the three
applied fields fix a reference frequency and phase on the fourth transition.
For our measurements in Fig. 2(a), we use a frequency locked to that of this
reference, which represents a homodyne measurement. We again emphasize that
our reference phase and frequency are not the result of an applied field, but
rather are encoded in the quantum mechanical wave functions of the Rydberg
states adjacent to our transition. This reference can then be used in a
heterodyne configuration analogously to a conventional Rydberg mixer [5].
Figure 3: Quantum Interferometric Rydberg Mixer. In-phase (I) lock-in
demodulated signal (a) as a function of $\phi_{2}$, together with (b) a line
cut along the signal maximum along with the corresponding quadrature (Q)
signal and the lock-in magnitude. An E-field strength-dependent measurement
(c) shows a monotonic mixer sensitivity over a broad range. Sensitivity
measurements (d) compare the field-dependent amplitudes of Rydberg mixers on
the bare DP and SP transitions with sensitivities achieved using the quantum
interferometric mixer on the DP and SP transitions independently.
In this approach, one of the RF fields, e.g., $\omega_{3}$, is applied at a
detuned frequency $\omega^{\prime}_{3}=\omega_{3}+\delta$. This detuned frequency is equivalent to a resonant frequency with a time-varying phase, $\omega_{3}+\delta=\omega_{3}+d\phi/dt$, and the
resulting oscillation in the EIT signal can be demodulated using lock-in
detection at frequency $\delta$, where the lock-in phase provides a direct
measure of the RF phase. For Fig. 3(a)-(c) we detune and demodulate
$\omega_{3}$, which can be used to measure either $\phi_{2}$ or $\phi_{3}$.
Shown in Fig. 3(a) is the $\phi_{2}$-dependent in-phase (I) lock-in mixer
signal as a function of $\Delta_{C}$, showing the expected 360∘ phase
sensitivity. The corresponding $\phi_{2}$-dependent evolution of the in-phase
and out-of-phase (quadrature, Q) signals is shown in Fig. 3(b), taken along
the spectral position of the dashed line in (a). We can clearly demodulate the
I and Q components of the RF signal simultaneously while the overall signal
magnitude remains constant.
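The atom-based down-mixing and lock-in step described above can be sketched numerically. The toy model below is illustrative only: the sample rate, intermediate frequency, modulation depth, and RF phase are assumed values, not the experimental ones.

```python
import numpy as np

# Toy model of the lock-in step: detuning omega_3 by delta makes the EIT
# amplitude oscillate at delta, and the RF phase phi_rf appears as the
# phase of that oscillation.
fs = 100_000.0               # sample rate, Hz (assumed)
delta = 10_000.0             # intermediate frequency, Hz (assumed)
phi_rf = np.deg2rad(137.0)   # RF phase to recover (assumed)
n = 5_000                    # 50 ms of data: 500 full IF periods
t = np.arange(n) / fs

# Toy EIT signal: DC offset plus a 20% modulation at delta carrying phi_rf.
signal = 1.0 + 0.2 * np.cos(2 * np.pi * delta * t + phi_rf)

# Lock-in: mix with in-phase/quadrature references and low-pass (mean).
I = 2 * np.mean(signal * np.cos(2 * np.pi * delta * t))
Q = 2 * np.mean(signal * -np.sin(2 * np.pi * delta * t))
phase = np.arctan2(Q, I)     # recovered RF phase, full 360-deg range
magnitude = np.hypot(I, Q)   # recovered modulation depth
```

Averaging over an integer number of IF periods makes the DC and double-frequency mixing terms vanish, so `phase` returns the injected 137° and `magnitude` the 20% modulation depth, mirroring the constant lock-in magnitude seen in Fig. 3(b).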
Next we examine the dependence of our quantum interferometric Rydberg mixer on
electric field strength. Shown in Fig. 3(c) is the mixer magnitude as a
function of $E_{2}$ at different values of $E_{3}$ as indicated. We can clearly see that for all values of $E_{3}$ there is a corresponding optimum value of $E_{2}$ that provides maximum sensitivity, where a weaker $E_{2}$ favors a weaker $E_{3}$ and vice versa. Since the signal magnitude remains monotonic over a large range of $E_{2}$
this approach can enable both phase and amplitude sensing.
We examine the low-field sensitivity of our approach in Fig. 3(d). Here we use
a lock-in filter bandwidth of 1 Hz to measure the signal amplitude at low-
field strengths for normal Rydberg mixers applied on the SP and DP transitions
individually [7], as well as our interferometric loop mixer probing the SP and
DP resonant transitions. The normal Rydberg mixers are measured using an LO
field and signal field sourced from two different signal generators to produce
a beat note at 10 kHz while the coupling laser Rabi frequencies are identical
to those in the loop measurements. In all cases, the LO or complementary loop
RF fields were empirically optimized for low-field sensitivity. As we are
measuring the magnitude (R) output of our lock-in, the noise floor is
calculated based on the average of the measured data points below the
sensitivity threshold for the DP loop and used to determine the value for a
signal-to-noise ratio (SNR) of 1. From this, we estimate sensitivities of
approximately 0.15 $\rm mV/m\cdot\sqrt{\rm{Hz}}$ and 1 $\rm
mV/m\cdot\sqrt{\rm{Hz}}$ for the DP and SP mixers, respectively, and
sensitivities of 2 $\rm mV/m\cdot\sqrt{\rm{Hz}}$ and 6 $\rm
mV/m\cdot\sqrt{\rm{Hz}}$ for the DP and SP interferometric loop mixers,
respectively.
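The noise-floor and SNR = 1 extraction just described can be sketched as follows. All numbers below are hypothetical placeholders, not the measured amplitudes, and the proportional-signal assumption is only a convenient model of the linear low-field regime.

```python
import numpy as np

# Sketch of the SNR = 1 sensitivity extraction (hypothetical data).
E = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])   # field strength, mV/m
R = np.array([0.8, 0.9, 1.1, 2.0, 4.1, 10.2, 20.5])  # lock-in magnitude, arb.

# Noise floor: average of the measured points below the sensitivity threshold.
below = R < 1.5
noise_floor = R[below].mean()

# Assume the signal is proportional to E in the linear region, then
# extrapolate back to the noise floor to find the field at SNR = 1.
slope = np.mean(R[~below] / E[~below])
E_snr1 = noise_floor / slope        # mV/m at SNR = 1

# With a 1 Hz lock-in bandwidth, this field is directly the sensitivity
# in mV/m/sqrt(Hz).
bandwidth = 1.0
sensitivity = E_snr1 / np.sqrt(bandwidth)
```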
Figure 4: Signal Demodulation. The phase plot showing detection of a four-phase-state signal using the quantum phase mixer, along with the corresponding I and Q signals, is shown in (a) at a symbol rate of 800 Hz. The corresponding constellation diagram (b) shows the well-resolved phase states.
## IV Discussion
To demonstrate the general utility of our quantum interferometric loop Rydberg
mixer we broadcast a four phase-state $\phi_{3}$ signal onto the atoms,
generated using an IQ mixer to simulate a QPSK signal. Shown in Fig. 4(a) is
the lock-in output of our simulated QPSK signal with a symbol rate of 800 Hz,
showing the phase (black dots) together with the corresponding orthogonal I
and Q channels (green and blue, respectively). The received constellation
diagram in Fig. 4(b) shows the four well-resolved phase states detected using
our scheme. We emphasize that our 800 Hz symbol rate was not limited by the measurement bandwidth; rather, without the vector signal generator and analyzer used in previous work [24], it was chosen for illustrative purposes based on instrumental and signal-level limitations of our approach.
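The per-symbol demodulation of such a four-phase-state stream can be sketched with the same I/Q lock-in step used above. The parameters below are assumptions chosen so that each symbol spans an integer number of IF periods; they are not the experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-symbol demodulation of a four-phase-state (QPSK-like) stream.
fs = 80_000.0          # sample rate, Hz (assumed)
delta = 8_000.0        # intermediate frequency, Hz (assumed)
symbol_rate = 800.0    # symbols per second, as in the text
sps = int(fs / symbol_rate)           # samples per symbol (100)
phases = np.deg2rad([45.0, 135.0, 225.0, 315.0])
symbols = rng.integers(0, 4, size=32)

t = np.arange(sps) / fs
ref_i = np.cos(2 * np.pi * delta * t)
ref_q = -np.sin(2 * np.pi * delta * t)
decoded = []
for s in symbols:
    seg = np.cos(2 * np.pi * delta * t + phases[s])  # one symbol of IF signal
    angle = np.arctan2(2 * np.mean(seg * ref_q), 2 * np.mean(seg * ref_i))
    decoded.append(np.rad2deg(angle) % 360.0)

# Map each recovered angle back to its constellation point.
recovered = [int(round(a / 90.0 - 0.5)) % 4 for a in decoded]
assert recovered == list(symbols)
```

Each recovered angle lands on one of the four constellation points, the digital analogue of the well-resolved states in Fig. 4(b).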
The use of our phase-coherent quantum interferometric scheme holds notable
advantages compared to conventional Rydberg mixer approaches. First off, due
to the dependence of the signal on the accumulated phase over the
interferometric loop, we can now detune one field to generate a mixer that
measures the phase of a different field. With the necessary presence of at
least four fields to complete the loop, this may enable new modulation and
frequency mixing schemes for detection and demodulation of RF fields. The use
of degenerate RF frequencies [19] is a special case of our approach. Using two
degenerate transitions, the RF phase is accumulated on both transitions
$\phi_{tot}=\phi_{2}+\phi_{3}=2\phi_{RF}$, where the doubled phase provides
only 180∘ phase resolution and thus renders common phase modulation schemes
unusable. And although our present implementation does not do away with RF
fields altogether, the ability to apply a field on one RF transition in order
to measure another allows frequency separation of the two fields, potentially
into distinct bands. This may be useful for sensing applications where
broadcasting an LO field in the spectral vicinity of the signal of interest is
undesirable.
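The 180∘ ambiguity of the degenerate case can be seen in a two-line numerical check; the signal model below is a toy stand-in for the full atomic response.

```python
import numpy as np

# Toy model: the loop observable depends only on the accumulated RF phase.
# With two degenerate transitions, phi_2 = phi_3 = phi_rf, so the observable
# is a function of 2*phi_rf and cannot tell phi_rf from phi_rf + 180 deg.
def loop_signal(phi_2, phi_3):
    return np.cos(phi_2 + phi_3)

phi = np.deg2rad(30.0)
degenerate = loop_signal(phi, phi)               # cos(2*phi)
aliased = loop_signal(phi + np.pi, phi + np.pi)  # cos(2*phi + 360 deg)
assert np.isclose(degenerate, aliased)           # only 180-deg resolution

# Non-degenerate transitions (as here): shifting a single phase by 180 deg
# changes the signal, so the full 360-deg range is resolved.
assert not np.isclose(loop_signal(phi + np.pi, phi), loop_signal(phi, phi))
```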
Our sensitivity measurements show a diminished sensitivity of our loop
approach compared to standard Rydberg mixers on the same transitions. The DP
interferometric loop mixer shows an order of magnitude decrease in sensitivity
compared to the DP mixer, though it is only a factor of 5 less than the SP
mixer. This underscores a key conclusion of Fig. 2(c)-(e): the overall
sensitivity of this approach is maximized when $\Omega_{1}=\Omega_{4}$, i.e.,
the EIT amplitudes are the same, and we see that it is ultimately limited by
the weaker of the two. As such, we expect that high sensitivity could
nevertheless be achieved with our approach given a better matched state
manifold. Although the nD-(n-1)F-(n+1)D state manifold provides better matched
and large transition dipole moments, we have found that the transitions
connecting the nD-(n+2)P-(n+1)D states are within a few hundred MHz of the D-F
ones and thus adversely affect the sensitivity of the associated loop. We do
expect improvement in Cs, however, where the D-P-D transitions are well-
isolated.
Our overall mixer sensitivity is also reduced by a factor of around 20
relative to record values reported in the literature [25, 6]. The E-field
sensitivity is a trade-off between high RF transition dipole moments at high
$n$ and better state isolation and higher coupling laser transition dipole
moments at low $n$. Thus, it is not surprising that these high sensitivity
values are achieved using higher angular momentum states, but they also rely
on lower principal quantum numbers in the vicinity of $n\approx 50$. As such,
we would expect dramatic improvement using an EOM with higher operating
frequency. In this context it is important to note that although the origin of
the discrepancy between our measured S- and D-state EIT amplitudes and those
predicted by our model is unclear, this further underscores the benefit of
operating at lower values of $n$ where this discrepancy disappears and smaller
EIT ratios are seen. We conclude that the overall sensitivity of our quantum interferometric scheme can approach that of a traditional LO-based mixer; it is also clear that the sensitivity improvements afforded by an LO-based approach apply here as well.
Although we have used a four-photon scheme involving two optical and two RF
fields, our demonstration of weak-field sensitivity suggests that it should be
possible to replace one of the RF fields with an additional optical one –
whose Rabi frequencies are typically low – to enable all-optical Rydberg atom
E-field sensing. Eliminating the need for an external RF field altogether is
attractive for angle-of-arrival measurements where a phase front (propagation
direction) mismatch between the external field and the signal of interest will
cause a reduced signal amplitude and phase accuracy. The inherent challenge of phase-locking three optical fields to obtain a phase-stable loop can be addressed by several possible means. While potentially difficult to achieve, parametric generation produces phase-locked fields whose pump-field phase noise cancels around the loop, while phase-locking using a
frequency comb requires three separate lasers but may allow for a broader
range of frequencies.
## V Conclusion
We have demonstrated a Rydberg atom-based RF electric field sensor scheme that
uses quantum interference over a closed loop of atomic transitions with phase
and frequency fixed by the fields used. This approach provides full 360∘
phase-resolved field sensing without an applied local oscillator near the
detected frequency. We further show that this approach enables LO-free
functionality analogous to a traditional LO-based Rydberg mixer, where we
demonstrate phase-resolved demodulation of a four phase-state QPSK signal
using our quantum interferometric mixer. These experiments demonstrate the
clear-cut advantages of closed-loop quantum interferometric schemes for
Rydberg atom-based RF field sensing, and further hold potential for all-
optical field sensing.
## Acknowledgements
This research was developed with funding from the Defense Advanced Research
Projects Agency (DARPA). The views, opinions and/or findings expressed are
those of the author and should not be interpreted as representing the official
views or policies of the Department of Defense or the U.S. Government. A
contribution of the US government, not subject to copyright in the United
States.
## References
* Artusio-Glimpse _et al._ [2022] A. Artusio-Glimpse, M. T. Simons, N. Prajapati, and C. L. Holloway, Modern RF measurements with hot atoms: A technology review of Rydberg atom-based radio frequency field sensors, IEEE Microwave Magazine 23, 44 (2022).
* Sedlacek _et al._ [2012] J. A. Sedlacek, A. Schwettman, H. Kübler, R. Löw, T. Pfau, and J. P. Shaffer, Microwave electrometry with Rydberg atoms in a vapour cell using bright atomic resonances, Nat. Phys. 8, 819 (2012).
* Fan _et al._ [2015] H. Fan, S. Kumar, J. Sedlacek, H. Kübler, S. Karimkashi, and J. P. Shaffer, Atom based RF electric field sensing, J. Phys. B: At. Mol. Opt. Phys 48, 202001 (2015).
* Cox _et al._ [2018] K. C. Cox, D. H. Meyer, F. K. Fatemi, and P. D. Kunz, Quantum-limited atomic receiver in the electrically small regime, Phys. Rev. Lett. 121, 110502 (2018).
* Simons _et al._ [2019] M. T. Simons, A. H. Haddab, J. A. Gordon, and C. L. Holloway, A Rydberg atom-based mixer: Measuring the phase of a radio frequency wave, Appl. Phys. Lett. 114, 114101 (2019).
* Jing _et al._ [2020] M. Jing, Y. Hu, J. Ma, H. Zhang, L. Zhang, L. Xiao, and S. Jia, Atomic superheterodyne receiver based on microwave-dressed Rydberg spectroscopy, Nat. Phys. 16, 911 (2020).
* Gordon _et al._ [2019] J. A. Gordon, M. T. Simons, A. H. Haddab, and C. L. Holloway, Weak electric-field detection with sub-1 Hz resolution at radio frequencies using a Rydberg atom-based mixer, AIP Advances 9, 045030 (2019).
* Meyer _et al._ [2021] D. H. Meyer, P. D. Kunz, and K. C. Cox, Waveguide-coupled Rydberg spectrum analyzer from 0 to 20 GHz, Phys. Rev. Appl. 15, 014053 (2021).
* Liu _et al._ [2022] X.-H. Liu, K.-Y. Liao, Z.-X. Zhang, H.-T. Tu, W. Bian, Z.-Q. Li, S.-Y. Zheng, H.-H. Li, W. Huang, H. Yan, and S.-L. Zhu, Continuous-frequency microwave heterodyne detection in an atomic vapor cell, Phys. Rev. Appl. 18, 054003 (2022).
* Meyer _et al._ [2018] D. H. Meyer, K. C. Cox, F. K. Fatemi, and P. D. Kunz, Digital communication with Rydberg atoms and amplitude-modulated microwave fields, Appl. Phys. Lett. 112, 211108 (2018).
* Robinson _et al._ [2021] A. K. Robinson, N. Prajapati, D. Senic, M. T. Simons, and C. L. Holloway, Determining the angle-of-arrival of a radio-frequency source with a Rydberg atom-based sensor, Appl. Phys. Lett. 118, 114001 (2021).
* Holloway _et al._ [2019a] C. L. Holloway, M. T. Simons, J. A. Gordon, and D. Novotny, Detecting and receiving phase-modulated signals with a Rydberg atom-based receiver, IEEE Antennas and Wireless Propagation Letters 18, 1853 (2019a).
* Morigi _et al._ [2002] G. Morigi, S. Franke-Arnold, and G.-L. Oppo, Phase-dependent interaction in a four-level atomic configuration, Phys. Rev. A 66, 053409 (2002).
* Shylla _et al._ [2018] D. Shylla, E. Ogaro, and K. Pandey, Highly sensitive atomic based MW interferometry, Sci. Rep. 8, 8692 (2018).
* Huss _et al._ [2004] A. F. Huss, R. Lammegger, C. Neureiter, E. A. Korsunsky, and L. Windholz, Phase correlation of laser waves with arbitrary frequency spacing, Phys. Rev. Lett. 93, 223601 (2004).
* Kajari-Schröder _et al._ [2007] S. Kajari-Schröder, G. Morigi, S. Franke-Arnold, and G.-L. Oppo, Phase-dependent light propagation in atomic vapors, Phys. Rev. A 75, 013816 (2007).
* Han _et al._ [2018] J. Han, T. Vogt, C. Gross, D. Jaksch, M. Kiffner, and W. Li, Coherent microwave-to-optical conversion via six-wave mixing in Rydberg atoms, Phys. Rev. Lett. 120, 093201 (2018).
* Kumar _et al._ [2023] A. Kumar, A. Suleymanzade, M. Stone, L. Taneya, A. Anferov, D. I. Schuster, and J. Simon, Quantum-enabled millimetre wave to optical transduction using neutral atoms, Nature 615, 614 (2023).
* Anderson _et al._ [2022] D. A. Anderson, R. E. Sapiro, L. F. Gonçalves, R. Cardman, and G. Raithel, Optical radio-frequency phase measurement with an internal-state Rydberg atom interferometer, Phys. Rev. Appl. 17, 044020 (2022).
* Chopinaud and Pritchard [2021] A. Chopinaud and J. D. Pritchard, Optimal state choice for Rydberg-atom microwave sensors, Phys. Rev. Appl. 16, 024008 (2021).
* Mohapatra _et al._ [2008] A. K. Mohapatra, M. G. Bason, B. Butscher, K. J. Weatherill, and C. S. Adams, A giant electro-optic effect using polarizable dark states, Nat. Phys. 4, 890 (2008).
* Berweger _et al._ [2023] S. Berweger, N. Prajapati, A. B. Artusio-Glimpse, A. P. Rotunno, R. Brown, C. L. Holloway, M. T. Simons, E. Imhof, S. R. Jefferts, B. N. Kayim, M. A. Viray, R. Wyllie, B. C. Sawyer, and T. G. Walker, Rydberg-state engineering: Investigations of tuning schemes for continuous frequency sensing, Phys. Rev. Appl. 19, 044049 (2023).
* Holloway _et al._ [2017] C. L. Holloway, M. T. Simons, J. A. Gordon, A. Dienstfrey, D. A. Anderson, and G. Raithel, Electric field metrology for SI traceability: Systematic measurement uncertainties in electromagnetically induced transparency in atomic vapor, J. Appl. Phys. 121, 233106 (2017).
* Holloway _et al._ [2019b] C. L. Holloway, M. T. Simons, J. A. Gordon, and D. Novotny, Detecting and receiving phase-modulated signals with a Rydberg atom-based receiver, IEEE Antenn. Wirel. Pr. 18, 1853 (2019b).
* Prajapati _et al._ [2021] N. Prajapati, A. K. Robinson, S. Berweger, M. T. Simons, A. B. Artusio-Glimpse, and C. L. Holloway, Enhancement of electromagnetically induced transparency based Rydberg-atom electrometry through population repumping, Appl. Phys. Lett. 119, 214001 (2021).
# Multi-Task Imitation Learning for Linear Dynamical Systems
Thomas T. Zhang* (University of Pennsylvania), Katie Kang* (University of California, Berkeley), Bruce D. Lee* (University of Pennsylvania), Claire Tomlin (University of California, Berkeley), Sergey Levine (University of California, Berkeley), Stephen Tu (Google Research, Brain Team), Nikolai Matni (University of Pennsylvania; Google Research, Brain Team)
*These authors contributed equally to this work.
###### Abstract
We study representation learning for efficient imitation learning over linear
systems. In particular, we consider a setting where learning is split into two
phases: (a) a pre-training step where a shared $k$-dimensional representation
is learned from $H$ source policies, and (b) a target policy fine-tuning step
where the learned representation is used to parameterize the policy class. We
find that the imitation gap over trajectories generated by the learned target
policy is bounded by
$\tilde{O}\left(\frac{kn_{x}}{HN_{\mathrm{shared}}}+\frac{kn_{u}}{N_{\mathrm{target}}}\right)$,
where $n_{x}>k$ is the state dimension, $n_{u}$ is the input dimension,
$N_{\mathrm{shared}}$ denotes the total amount of data collected for each
policy during representation learning, and $N_{\mathrm{target}}$ is the amount
of target task data. This result formalizes the intuition that aggregating
data across related tasks to learn a representation can significantly improve
the sample efficiency of learning a target task. The trends suggested by this
bound are corroborated in simulation.
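The scaling in the stated bound can be made concrete with a small sketch; the dimensions and sample counts below are arbitrary illustrations, and the constants and log factors hidden in the $\tilde{O}$ are dropped, so only trends are meaningful.

```python
# Illustrative evaluation of the imitation-gap bound
#   ~O( k*n_x / (H * N_shared) + k*n_u / N_target ),
# ignoring constants and log factors.
def imitation_gap_bound(k, n_x, n_u, H, N_shared, N_target):
    return k * n_x / (H * N_shared) + k * n_u / N_target

# Arbitrary example dimensions (assumptions, not values from the paper).
k, n_x, n_u = 5, 20, 3
N_shared, N_target = 1_000, 100

# More source tasks H shrink the representation-learning term, so the bound
# decreases monotonically toward the target-only floor k*n_u/N_target.
bounds = [imitation_gap_bound(k, n_x, n_u, H, N_shared, N_target)
          for H in (1, 5, 25, 125)]
```

As `H` grows the first term vanishes, formalizing the intuition that aggregating data across related source tasks pays for the shared representation, leaving only the target-task cost of fitting the $k \times n_{u}$ head.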
Keywords: Imitation learning, transfer learning, multi-task learning,
representation learning
## 1 Introduction
Imitation learning (IL), which learns control policies by imitating expert
demonstrations, has demonstrated success across a variety of domains including
self-driving cars (Codevilla et al., 2018) and robotics (Schaal, 1999).
However, using IL to learn a robust behavior policy may require a large amount
of training data (Ross et al., 2011), and expert demonstrations are often
expensive to collect. One remedy for this problem is multi-task learning:
using data from other tasks (source tasks) in addition to data from the task of
interest (the target task) to jointly learn a policy. We study the application of
multi-task learning to IL over linear systems, and demonstrate improved sample
efficiency when learning a controller via representation learning.
Our results expand on prior work that studies multi-task representation
learning for supervised learning (Du et al., 2020; Tripuraneni et al., 2021),
addressing the new challenges that arise in the imitation learning setting.
First, the data for IL is temporally dependent, as it is generated from a
dynamical system $x[t+1]=f(x[t],u[t],w[t])$. In contrast, the supervised
learning setting assumes that both the train and test data are independent and
identically distributed (i.i.d.) from the same underlying distribution.
Furthermore, we are interested in the performance of the learned controller in
closed-loop rather than its error on expert-controlled trajectories. Hence,
bounds on excess risk, which corresponds to the one-step prediction error of
the learned controller under the expert distribution, are not immediately
informative for the closed-loop performance. We instead focus our analysis on
the tracking error between the learned and expert policies, which requires us
to account for the distribution shift between the learned and expert
controllers.
We address these challenges in the setting of IL for linear systems. The
following statement captures the benefits of multi-task representation
learning on sample complexity:
###### Theorem 1.1 (main result, informal)
Suppose that the source task controllers are sufficiently related to the
target task controller. Then, the tracking error between the learned target
controller and the corresponding expert is bounded with high probability by:
$\displaystyle\textrm{tracking error}\lesssim\frac{\textrm{rep.\ dimension}\times\textrm{state dimension}}{\textrm{\# source task datapoints}}+\frac{\textrm{rep.\ dimension}\times\textrm{input dimension}}{\textrm{\# target task datapoints}}.$
The first term in this bound corresponds to the error from learning a common
representation, and the second term the error in fitting the remaining weights
of the target task controller. The key upshot of this result is that the
numerator of the second term (rep. dimension $\times$ input dimension) is
smaller than the number of parameters (input dimension $\times$ state
dimension) in the target controller. This demonstrates an improvement in
sample complexity of multi-task IL over direct IL, where the error scales as
$\frac{\#\mathrm{parameters}}{\#\mathrm{datapoints}}$. Furthermore, we note
that the error in learning the representation decays along all axes of the
data: # of tasks $\times$ # of trajs $\times$ traj length for source tasks,
and # of trajs $\times$ traj length for the target task. It is non-trivial to
demonstrate that the error decays with the trajectory length, and doing so
requires tools that handle causally dependent data in our analysis.
The remainder of the paper formulates the multi-task IL problem, and the
assumptions required to prove Theorem 1.1. The main contributions may be
summarized as follows:
* •
We provide novel interpretable notions of source task overlap with the target
task (§2 and §3).
* •
We bound the imitation gap achieved by multi-task IL as in Theorem 1.1 (§3).
* •
We empirically show the efficacy of multi-task IL when the assumptions are
satisfied (§4).
### 1.1 Related Work
Multi-task imitation and reinforcement learning: Multi-task RL and IL methods
seek to represent policies solving different tasks with shared parameters,
enabling the transfer of knowledge across related tasks (Teh et al., 2017;
Espeholt et al., 2018; Hessel et al., 2018; Singh et al., 2020; Deisenroth et
al., 2014), and rapid test-time adaptation to new tasks (Finn et al., 2017;
Rakelly et al., 2019; Duan et al., 2016; Yu et al., 2021; Yang and Nachum,
2021). There also exists a body of work which theoretically analyses the
sample complexity of representation learning in multi-task RL and IL (Lu et
al., 2021; Cheng et al., 2022; Lu et al., 2022; Xu et al., 2020; Maurer et
al., 2015; Arora et al., 2020). While this line of work considers a more
general MDP setting compared with the linear dynamical systems we consider,
the specific results are often stated with incompatible assumptions (such as
bounded states/cost functions and discrete action spaces), and/or do not
reflect how system-theoretic properties such as closed-loop task stability
affect the final rates.
Multi-task system identification and adaptive control: Recent work has also
considered applications of multi-task learning where the dynamics change
between tasks, and the goal is to perform adaptive control (Harrison et al.,
2018; Richards et al., 2021, 2022; Shi et al., 2021; Muthirayan et al., 2022)
or dynamics forecasting (Wang et al., 2021). Multi-task system identification
(Modi et al., 2021) and stabilization using data from related systems (Li et
al., 2022) have also been considered. Our work instead studies the problem of
learning to imitate different expert controllers while the system remains the
same, and demonstrates bounds on the tracking error between the learned
controller and its corresponding expert.
Sample complexity of multi-task learning: Numerous works have studied the
sample efficiency gains of multi-task learning for regression and
classification under various task similarity assumptions (Baxter, 1995;
Crammer et al., 2008; Maurer et al., 2016; Tripuraneni et al., 2020; Chua et
al., 2021). Most closely related to our results are Du et al. (2020) and
Tripuraneni et al. (2021), both of which show multi-task representation
learning sample complexity bounds in the linear regression setting in which
the error from learning the representation decays with the total number of
source training samples. Our work leverages these results to tackle the
setting of linear imitation learning, which has the additional challenges of
non-i.i.d. data and test time distribution shift.
## 2 Problem Formulation
### 2.1 Multi-Task Imitation Learning
Imitation learning uses state/action pairs
$(x,u)\in\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{u}}$ of expert demonstrations
to learn a controller
$\hat{\pi}:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}^{n_{u}}$, by matching the
learned controller actions to the expert actions. In particular, if
$\mathcal{D}$ is the training set of expert state/action pairs, then
$\hat{\pi}\in\operatorname*{argmin}_{\pi}\sum_{(x,u)\in\mathcal{D}}\left\|\pi(x)-u\right\|^{2}.$
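For a linear policy class $\pi(x)=Kx$, the minimization above reduces to ordinary least squares on the stacked state/action pairs. A minimal sketch, where all dimensions, the expert gain, and the noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, N = 4, 2, 500

# Hypothetical linear expert: a fixed gain plus small "label" (actuator) noise.
K_expert = rng.standard_normal((n_u, n_x))
X = rng.standard_normal((N, n_x))                          # states, one per row
U = X @ K_expert.T + 0.01 * rng.standard_normal((N, n_u))  # expert actions

# argmin_K sum_i ||K x_i - u_i||^2  ==  least squares on the stacked data.
K_hat = np.linalg.lstsq(X, U, rcond=None)[0].T             # shape (n_u, n_x)

print(np.linalg.norm(K_hat - K_expert))                    # small
```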
We are interested in the problem of _multi-task_ imitation learning, where we
consider $H+1$ different expert controllers. We call the first $H$ controllers
source controllers and the $(H+1)^{\textrm{st}}$ controller the target
controller. We assume that we have access to $N_{1}$ trajectories for each
source task, and $N_{2}\leq N_{1}$ trajectories for the target task. For
simplicity, we assume all trajectories are of the same length $T$. In
particular, for each source task
$h\in\mathopen{}\left\\{1,\dots,H\right\\}\mathclose{}$, our source data
consists of
$\mathopen{}\left\\{\mathopen{}\left\\{(x_{i}^{(h)}[t],u_{i}^{(h)}[t])\right\\}\mathclose{}_{t=0}^{T-1}\right\\}\mathclose{}_{i=1}^{N_{1}}$,
while our target data consists of
$\mathopen{}\left\\{\mathopen{}\left\\{(x_{i}^{(H+1)}[t],u_{i}^{(H+1)}[t])\right\\}\mathclose{}_{t=0}^{T-1}\right\\}\mathclose{}_{i=1}^{N_{2}}$.
Our goal is to learn a controller which effectively imitates the target
controller. However, because we only have access to a small number ($N_{2}$)
of target expert trajectories, we leverage the $HN_{1}$ expert trajectories
from the source controllers to accelerate the learning of the target
controller.
To do so, we break our training into two stages: a pre-training stage which
learns from the combined source task data, and a target training stage which
only learns from the target task data. In the pre-training stage, we extract a
common, low dimensional representation for the source controllers, which is
used later in the target training stage. More specifically, we learn a common,
low dimensional representation mapping
$\hat{\phi}:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}^{k}$, where $k<n_{x}$ is
the dimension of the representation, and linear predictors
$\hat{F}^{(h)}\in\mathbb{R}^{n_{u}\times k}$ unique to each task:
$\displaystyle\hat{\phi},\hat{F}^{(1)},\dots,\hat{F}^{(H)}\in\operatorname*{argmin}_{\phi,F^{(1)},\dots,F^{(H)}}\sum_{h=1}^{H}\sum_{i=1}^{N_{1}}\sum_{t=0}^{T-1}\left\|F^{(h)}\phi(x_{i}^{(h)}[t])-u_{i}^{(h)}[t]\right\|^{2}.$
(1)
We do not address the details of solving the empirical risk minimization
problem, and instead perform our analysis assuming (1) can be solved to
optimality. Note however that Tripuraneni et al. (2021) demonstrate in the
linear regression setting that a method-of-moments-based algorithm can
efficiently find approximate empirical risk minimizers.
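One simple heuristic for (1) with linear representations is alternating least squares: each step solves a convex subproblem exactly, so the training loss is non-increasing. A sketch on synthetic data (problem sizes are illustrative; this is not the method-of-moments algorithm of Tripuraneni et al. (2021)):

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, k, H, N = 6, 2, 2, 4, 200

# Synthetic source tasks sharing a true representation Phi_true.
Phi_true = rng.standard_normal((k, n_x))
tasks = []
for _ in range(H):
    F = rng.standard_normal((n_u, k))
    X = rng.standard_normal((N, n_x))
    tasks.append((X, X @ (F @ Phi_true).T))   # (states, expert actions)

def loss(Phi, Fs):
    return sum(np.sum((X @ (F @ Phi).T - U) ** 2)
               for (X, U), F in zip(tasks, Fs))

Phi = rng.standard_normal((k, n_x))
losses = []
for _ in range(100):
    # F-step: per task, least squares in F^(h) with Phi fixed.
    Fs = [np.linalg.lstsq(X @ Phi.T, U, rcond=None)[0].T for X, U in tasks]
    # Phi-step: least squares in vec(Phi) with all F^(h) fixed;
    # kron(X, F) pairs with the column-major vectorization of Phi.
    M = np.vstack([np.kron(X, F) for (X, _), F in zip(tasks, Fs)])
    b = np.concatenate([U.reshape(-1) for _, U in tasks])
    v = np.linalg.lstsq(M, b, rcond=None)[0]
    Phi = v.reshape(n_x, k).T
    losses.append(loss(Phi, Fs))
```

Each recorded loss is no larger than the previous one, since both steps exactly minimize the shared objective over their own block of variables.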
Once a common representation $\hat{\phi}$ is obtained, we move on to target
task training. During target task training, we use the common representation
mapping $\hat{\phi}$ learned from the pre-training step to map the states into
the lower dimensional representation, and learn an additional linear predictor
$\hat{F}^{(H+1)}$ unique to the target task to model the target controller:
$\displaystyle\hat{F}^{(H+1)}=\operatorname*{argmin}_{F}\sum_{i=1}^{N_{2}}\sum_{t=0}^{T-1}\left\|F\hat{\phi}\mathopen{}\left(x_{i}^{(H+1)}[t]\right)\mathclose{}-u_{i}^{(H+1)}[t]\right\|^{2}.$
(2)
Since the representation $\hat{\phi}$ is fixed from pre-training, (2) is an
ordinary least squares problem.
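Since $\hat{\Phi}$ is frozen, (2) is a least squares problem in only $kn_{u}$ unknowns. A sketch with synthetic noiseless target data (all dimensions illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, n_u, k, N2 = 6, 2, 2, 100

Phi_hat = rng.standard_normal((k, n_x))   # representation fixed by pre-training
F_true = rng.standard_normal((n_u, k))    # target task weights to recover
X = rng.standard_normal((N2, n_x))        # target task states
U = X @ (F_true @ Phi_hat).T              # noiseless expert actions

# Map states through the fixed representation, then ordinary least squares.
Z = X @ Phi_hat.T                         # (N2, k) features
F_hat = np.linalg.lstsq(Z, U, rcond=None)[0].T

print(np.linalg.norm(F_hat - F_true))     # ~ 0 in the noiseless case
```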
### 2.2 System and Data Assumptions
We focus our analysis on a linear systems setting, with state
$x[t]\in\mathbb{R}^{n_{x}}$, input $u[t]\in\mathbb{R}^{n_{u}}$, and Gaussian
process noise $w[t]\in\mathbb{R}^{n_{x}}$ obeying dynamics
$\displaystyle x[t+1]=Ax[t]+Bu[t]+w[t].$ (3)
Let each expert controller be of the form $u[t]=K^{(h)}x[t]+z[t]$, where
$z[t]\in\mathbb{R}^{n_{u}}$ is Gaussian actuator noise. (As the control
actions are the labels in IL, actuator noise corresponds to label noise in
supervised learning; in the absence of such noise, the controller is recovered
exactly from $n_{x}$ linearly independent states and the corresponding expert
inputs.) We assume the system matrices $(A,B)$ remain the same between tasks,
but the process noise covariance and the controllers may change, as would be
the case, for instance, if different controllers were designed for different
levels of noise. In particular, we have
$w^{(h)}[t]\overset{i.i.d.}{\sim}\mathcal{N}(0,\Sigma_{w}^{(h)})$ and
$z^{(h)}[t]\overset{i.i.d.}{\sim}\mathcal{N}(0,\sigma_{z}^{2}I)$ with
$\Sigma_{w}^{(h)}\succ 0$ for all $h\in[H+1]$ and $\sigma_{z}^{2}>0$.
We assume all of the expert controllers $K^{(h)}$ are stabilizing, i.e., the
spectral radii of $A+BK^{(h)}$ are less than one. Note that this implies that
$(A,B)$ is stabilizable. No other assumptions on $(A,B)$ are required. The
state distribution of the system under each expert controller will converge to
the stationary distribution $\mathcal{N}(0,\Sigma_{x}^{(h)})$, where
$\Sigma_{x}^{(h)}$ solves the following discrete Lyapunov equation:
$\Sigma_{x}^{(h)}=(A+BK^{(h)})\Sigma_{x}^{(h)}(A+BK^{(h)})^{\top}+\sigma_{z}^{2}BB^{\top}+\Sigma_{w}^{(h)}.$
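The stationary covariance can be computed numerically by solving this discrete Lyapunov equation; a small sketch with an illustrative stable closed-loop matrix (scipy's `solve_discrete_lyapunov` returns the fixed point $\Sigma$ of $\Sigma = A\Sigma A^{\top}+Q$):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative closed-loop matrix A + B K^(h) (stable) and noise covariances.
A_cl = np.array([[0.5, 0.2, 0.0],
                 [0.0, 0.3, 0.1],
                 [0.1, 0.0, 0.4]])
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.5]])
sigma_z2 = 0.1
Sigma_w = 0.2 * np.eye(3)

assert max(abs(np.linalg.eigvals(A_cl))) < 1    # controller is stabilizing

# Sigma_x = A_cl Sigma_x A_cl^T + sigma_z^2 B B^T + Sigma_w
Q = sigma_z2 * B @ B.T + Sigma_w
Sigma_x = solve_discrete_lyapunov(A_cl, Q)

# Verify the fixed point.
resid = Sigma_x - (A_cl @ Sigma_x @ A_cl.T + Q)
print(np.abs(resid).max())                      # ~ 0
```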
For simplicity, we assume that the initial states of the expert demonstrations
in our datasets are sampled
$x_{i}^{(h)}[0]\overset{i.i.d.}{\sim}\mathcal{N}(0,\Sigma_{x}^{(h)})$; thus at
all times, the marginal state distribution of the expert demonstrations is
$\mathcal{N}(0,\Sigma_{x}^{(h)})$. (This assumption is not restrictive, as
stable systems converge exponentially fast to stationarity.)
Finally, we assume that the expert controllers share a low dimensional
representation. Specifically, there exists some
$\Phi_{\star}\in\mathbb{R}^{k\times n_{x}}$ with $2k\leq n_{x}$ (this is more
stringent than the intuitive requirement $k<n_{x}$; it arises because the
residual of the stacked source controllers may have rank $2k$) and weights
$F^{(1)}_{\star},F^{(2)}_{\star},\dots,F^{(H+1)}_{\star}\in\mathbb{R}^{n_{u}\times
k}$ such that $K^{(h)}=F_{\star}^{(h)}\Phi_{\star}$ for all $h\in[H+1]$, and
the action taken at time $t$ of trajectory $i$ is
$u_{i}^{(h)}[t]=F^{(h)}_{\star}\Phi_{\star}x_{i}^{(h)}[t]+z_{i}^{(h)}[t].$
One setting in which the expert controllers satisfy this assumption is a
system with high dimensional states that exhibit low dimensional structure,
e.g., when $A$ and $B$ decompose as
$A=\Phi_{\star}^{\dagger}\tilde{A}\Phi_{\star}$ and
$B=\Phi_{\star}^{\dagger}\tilde{B}$, where $\tilde{A}\in\mathbb{R}^{k\times
k}$ and $\tilde{B}\in\mathbb{R}^{k\times n_{u}}$. Here, linear policies $K$
that optimize an objective in terms of the low dimensional features of the
system decompose as $K=\tilde{K}\Phi_{\star}$, where
$\tilde{K}\in\mathbb{R}^{n_{u}\times k}$, mirroring the assumption on our
expert controllers. We provide a concrete example in Section 4.
Under this assumption, the learned common representation $\hat{\phi}$ in
Section 2.1 can be restricted to linear representations, i.e.,
$\hat{\phi}(x)=\hat{\Phi}x$, where $\hat{\Phi}\in\mathbb{R}^{k\times n_{x}}$.
Note that solving Problem (2) with $\hat{\Phi}$ fixed involves solving for
only $kn_{u}$ parameters, which is smaller than the $n_{u}n_{x}$ unknown
parameters when learning from scratch. In particular, by representing the
controller as $F^{(H+1)}\Phi$, we have $k(n_{u}+n_{x})$ unknown parameters:
$kn_{x}$ of the parameters are, however, learned using the source task data,
leaving only $kn_{u}$ parameters to learn with target task data.
### 2.3 Notation
The Euclidean norm of a vector $x$ is denoted $\left\|x\right\|$. For a matrix
$A$, the spectral norm is denoted $\left\|A\right\|$, and the Frobenius norm
is denoted $\left\|A\right\|_{F}$. The spectral radius of a square matrix is
denoted $\rho(A)$. We use $\dagger$ to denote the Moore-Penrose pseudo-
inverse. For a square matrix $A$ with $\rho(A)<1$, define
$\mathcal{J}(A)=\sum_{t\geq 0}\left\|A^{t}\right\|<\infty$. A symmetric,
positive semi-definite (psd) matrix $A=A^{\top}$ is denoted $A\succeq 0$.
Similarly $A\succeq B$ denotes that $A-B$ is positive semidefinite. The
condition number of a positive definite matrix $A$ is denoted
$\kappa(A)=\frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}$, where
$\lambda_{\max}$ and $\lambda_{\min}$ denote the maximum and minimum
eigenvalues, respectively. Similarly, $\sigma_{i}(A)$ denote the singular
values of $A$. We denote the normal distribution with mean $\mu$ and
covariance $\Sigma$ by $\mathcal{N}(\mu,\Sigma)$. We use standard
$\mathcal{O}(\cdot)$, $\Theta(\cdot)$ and $\Omega(\cdot)$ to omit universal
constant factors, and $\tilde{\mathcal{O}}(\cdot)$, $\tilde{\Theta}(\cdot)$ and
$\tilde{\Omega}(\cdot)$ to also omit polylog factors. We also use $a\lesssim
b$ to denote $a=O(b)$. We use the indexing shorthand
$[K]:=\mathopen{}\left\\{1,\dots,K\right\\}\mathclose{}$. For a given task
$h\in[H+1]$, the matrix of stacked states is defined as
$\displaystyle\mathbf{X}^{(h)}$
$\displaystyle=\begin{bmatrix}x_{1}^{(h)}[0]&\dots&x_{1}^{(h)}[T-1]&\dots&x_{N_{1}}^{(h)}[0]&\dots&x_{N_{1}}^{(h)}[T-1]\end{bmatrix}^{\top}\in\mathbb{R}^{N_{1}T\times
n_{x}}.$ (4)
Lastly, let $\bar{\lambda}=\max_{1\leq h\leq
H}\lambda_{\max}(\Sigma_{x}^{(h)})$ and $\underline{\lambda}=\min_{1\leq h\leq
H}\lambda_{\min}(\Sigma_{x}^{(h)})$.
## 3 Sample Complexity of Multi-Task Imitation Learning
In order to derive any useful information from source tasks for a downstream
task, the source tasks must satisfy some notion of task diversity that
sufficiently covers the downstream task. To that end, we introduce the
following notions of source tasks covering the target task.
###### Definition 3.1 (target task covariance coverage (Du et al., 2020))
Define the constant $c$ as:
$\displaystyle
c:=\min_{h\in[H]}\lambda_{\min}((\Sigma_{x}^{(H+1)})^{-1/2}\Sigma_{x}^{(h)}(\Sigma_{x}^{(H+1)})^{-1/2}).$
(5)
Note that $c$ is well-defined and positive by our assumption that
$\Sigma_{w}^{(h)}\succ 0$ for all $h\in[H+1]$.
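The constant $c$ is straightforward to compute from the stationary covariances; a sketch with illustrative diagonal covariances ($H=2$ source tasks):

```python
import numpy as np

# Hypothetical stationary covariances for two source tasks and the target.
Sigmas_src = [np.diag([1.0, 2.0]), np.diag([0.5, 1.5])]
Sigma_tgt = np.diag([2.0, 1.0])

# Sigma_tgt^{-1/2} via eigendecomposition (Sigma_tgt is symmetric PD).
w, V = np.linalg.eigh(Sigma_tgt)
S = V @ np.diag(w ** -0.5) @ V.T

# c = min_h lambda_min( Sigma_tgt^{-1/2} Sigma_src^(h) Sigma_tgt^{-1/2} )
c = min(np.linalg.eigvalsh(S @ Sig @ S).min() for Sig in Sigmas_src)
print(c)  # 0.25 for these covariances
```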
Definition 3.1 captures the degree to which the closed-loop distribution of
states for each source task aligns with that of the target task. We then
introduce the following notion of task similarity between the source and
target task weights, which generalizes the well-conditioning assumptions in Du
et al. (2020) and Tripuraneni et al. (2021).
###### Assumption 3.1 (diverse source controllers)
We assume the target task weights $F_{\star}^{(H+1)}$ and the source task
weights $F_{\star}^{(1)},\dots,F_{\star}^{(H)}$ satisfy
$\displaystyle\left\|F_{\star}^{(H+1)}\begin{bmatrix}F_{\star}^{(1)}\\\
\vdots\\\
F_{\star}^{(H)}\end{bmatrix}^{\dagger}\right\|^{2}\leq\mathcal{O}\mathopen{}\left(\frac{1}{H}\right)\mathclose{}.$
(6)
Assumption 3.1 states that the alignment and loadings of the singular spaces
of the stacked source task weights and the target task weights closely match
along the low-dimensional representation. For example, if
$F_{\star}^{(h)}=F_{\star}^{(H+1)}$ for each $h\in[H]$, the LHS of (6) equals
exactly $1/H$. We note that this assumption subsumes and is more geometrically
informative than a direct bound on the ratio of singular values, e.g.
$\sigma_{\max}^{2}(F_{\star}^{(H+1)})/\sigma_{k}^{2}\mathopen{}\left(\begin{bmatrix}F_{\star}^{(1)}\\\
\vdots\\\
F_{\star}^{(H)}\end{bmatrix}\right)\mathclose{}\leq\mathcal{O}(1/H),$
which would follow by naively extending the well-conditioning assumptions in
Du et al. (2020) and Tripuraneni et al. (2021). Notably, such a condition
might not be satisfied even if $F_{\star}^{(h)}=F_{\star}^{(H+1)}$, $\forall
h\in[H]$, e.g., if $F_{\star}^{(H+1)}$ is rank-deficient.
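The quantity in (6) is easy to evaluate numerically. As a sanity check of the identical-weights example (matrices illustrative), the quantity on the left of (6) comes out to exactly $1/H$:

```python
import numpy as np

rng = np.random.default_rng(2)
n_u, k, H = 2, 3, 8

F = rng.standard_normal((n_u, k))      # shared target/source weight matrix
stacked = np.vstack([F] * H)           # stacked source weights, (H*n_u, k)

# || F^(H+1) [F^(1); ...; F^(H)]^dagger ||^2  (spectral norm squared)
lhs = np.linalg.norm(F @ np.linalg.pinv(stacked), ord=2) ** 2
print(lhs, 1.0 / H)                    # the two agree
```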
### 3.1 Excess Risk Bound: Generalization Along Expert Target Task
Trajectories
First we show that learning controllers through multi-task representation
learning leads to favorable generalization bounds on the excess risk of the
learned controller inputs on the expert target task state distribution,
analogous to the bounds on multi-task linear regression in Du et al. (2020);
Tripuraneni et al. (2021). However, a key complicating factor in our setting
is the fact that the input noise $z^{(h)}[t]$ enters the process, and thus the
data $x^{(h)}[t]$ is causally dependent on the “label noise”. In order to
overcome this issue and preserve our statistical gains along time $T$, we
leverage the theory of self-normalized martingales, in particular generalizing
tools from Abbasi-Yadkori et al. (2011) to the matrix-valued setting. The full
argument is detailed in Appendix A and Appendix B. This culminates in the
following target task excess risk bound.
###### Theorem 3.1 (target task excess risk bound)
Given $\delta\in(0,1)$, suppose that
$\displaystyle N_{1}T$
$\displaystyle\gtrsim\max_{h\in[H]}\mathcal{J}\mathopen{}\left(A+BK^{(h)}\right)\mathclose{}^{2}\kappa\mathopen{}\left(\Sigma_{x}^{(h)}\right)\mathclose{}(n_{x}+\log(H/\delta)),$
$\displaystyle N_{2}T$
$\displaystyle\gtrsim\mathcal{J}\mathopen{}\left(A+BK^{(H+1)}\right)\mathclose{}^{2}\kappa\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}(k+\log(1/\delta)).$
Define $\mathcal{P}_{0:T-1}^{(H+1)}$ as the distribution over target task
trajectories $(x^{(H+1)}[0],\cdots,x^{(H+1)}[T-1])$. Then with probability at
least $1-\delta$, the excess risk of the learned representation $\hat{\Phi}$
and target task weights $\hat{F}^{(H+1)}$ is bounded by
$\displaystyle\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})$
$\displaystyle:=\frac{1}{2T}\mathbb{E}_{\mathcal{P}_{0:T-1}^{(H+1)}}\mathopen{}\left[\sum_{t=0}^{T-1}\left\|(F_{\star}^{(H+1)}\Phi_{\star}-\hat{F}^{(H+1)}\hat{\Phi})x^{(H+1)}[t]\right\|^{2}\right]\mathclose{}$
$\displaystyle\lesssim\sigma_{z}^{2}\mathopen{}\left(\frac{kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}}{cN_{1}TH}+\frac{kn_{u}+\log(\frac{1}{\delta})}{N_{2}T}\right)\mathclose{}.$
(7)
Note that when we operate in the setting with much more source data than
target data, the second term dominates the excess risk bound in (7).
The second term scales with $kn_{u}$, which is smaller than the number of
total parameters in the controller $n_{u}n_{x}$, or $k(n_{u}+n_{x})$ under the
assumption of a low rank (rank-$k$) controller. Therefore, the benefit of
multi-task learning exhibited by this bound is most clear in the setting of
underactuation, i.e., when $n_{u}\leq n_{x}$. It should also be noted that the
quantity $kn_{x}$ in the numerator of the first term will only be smaller than
the number of source controller parameters ($n_{x}n_{u}H$) if $k$ is much
smaller than $n_{u}H$. This is reasonable, because if $k\geq n_{u}H$, an
optimal representation could simply contain all of the source task
controllers.
### 3.2 Closed-Loop Guarantees: Tackling Distribution Shift
We show that using multi-task representation learning leads to favorable
generalization bounds of the performance of the learned target controller in
closed-loop. As we are studying the pure offline imitation learning
(“behavioral cloning”) setting, we do not assume that the expert controllers
are optimizing any particular objective. Therefore, to quantify the
performance of the controller, we bound the deviation of states generated by
the learned and expert target controller run in closed-loop, i.e., the
tracking error, which implies general expected-cost bounds.
In order to transfer a bound on the excess risk of the target task
$\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})$ into a bound on the tracking error,
we must account for the fundamental distribution shift between the expert
trajectories seen during training and the trajectories generated by running
the learned controller in closed-loop. We leverage the recent framework of
Pfrommer et al. (2022) to bound the tracking error, making the necessary
modifications to handle stochasticity. Our bound formalizes the notion that
“low training error implies low test error,” even under the aforementioned
distribution shift. A detailed exposition can be found in Appendix C.
Let us define the following coupling of the states of the expert versus
learned target task closed-loop systems: given a learned controller
$\hat{K}=\hat{F}^{(H+1)}\hat{\Phi}$ from solving the pre-training and fine-
tuning optimization problems (1) and (2), for a realization of process
randomness $x[0]\sim\mathcal{N}(0,\Sigma_{x}^{(H+1)})$ and
$z[t]\overset{\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,\sigma_{z}^{2}I)$,
$w[t]\overset{\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,\Sigma_{w}^{(H+1)})$ for
$t=0,\dots,T-1$, we write
$\displaystyle x_{\star}[t+1]$
$\displaystyle=(A+BK^{(H+1)})x_{\star}[t]+Bz[t]+w[t],\quad x_{\star}[0]=x[0],$
$\displaystyle\hat{x}[t+1]$
$\displaystyle=(A+B\hat{K})\hat{x}[t]+Bz[t]+w[t],\quad\hat{x}[0]=x[0].$
Thus $\hat{x}[t]$ and $x_{\star}[t]$ are the states visited by the learned and
expert target task systems with the _same_ draw of process randomness. We show
a high probability bound on the closed-loop tracking error
$\left\|x_{\star}[t]-\hat{x}[t]\right\|$ that scales with the excess risk of
the learned controller. Denote by $\mathcal{P}^{\star}_{1:T}$ and
$\hat{\mathcal{P}}_{1:T}$ the distributions of trajectories
$\mathopen{}\left\\{x_{\star}[t]\right\\}\mathclose{}_{t=1}^{T}$ and
$\mathopen{}\left\\{\hat{x}[t]\right\\}\mathclose{}_{t=1}^{T}$.
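The coupling above is straightforward to simulate: run the expert and learned closed loops on the same noise realization and record the gap. A sketch with illustrative matrices, with a small perturbation of the expert gain standing in for the learned controller:

```python
import numpy as np

rng = np.random.default_rng(5)
n_x, n_u, T = 3, 2, 50

A = np.array([[0.5, 0.1, 0.0], [0.0, 0.4, 0.1], [0.1, 0.0, 0.3]])
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.0]])
K_star = -0.1 * np.ones((n_u, n_x))                       # hypothetical expert gain
K_hat = K_star + 0.01 * rng.standard_normal((n_u, n_x))   # "learned" gain

x_star = x_hat = rng.standard_normal(n_x)                 # common initial state
errs = []
for t in range(T):
    z = 0.1 * rng.standard_normal(n_u)                    # shared actuator noise
    w = 0.1 * rng.standard_normal(n_x)                    # shared process noise
    x_star = (A + B @ K_star) @ x_star + B @ z + w
    x_hat = (A + B @ K_hat) @ x_hat + B @ z + w
    errs.append(np.linalg.norm(x_star - x_hat))

print(max(errs))  # small when K_hat is close to K_star
```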
###### Theorem 3.2 (Target task tracking error bound)
Let $(\hat{\Phi},\hat{F}^{(H+1)})$ denote the learned representation and
target task weights, and $\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})$ denote the
corresponding excess risk. Define $A_{\mathsf{cl}}:=A+BK^{(H+1)}$. Assume that
the excess risk satisfies:
$\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})\lesssim\frac{\lambda_{\min}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}}{\mathcal{J}\mathopen{}\left(A_{\mathsf{cl}}\right)\mathclose{}^{2}\left\|B\right\|^{2}}.$
(8)
Then with probability greater than $1-\delta$, for a new target task
trajectory sampled with process randomness
$x[0]\sim\mathcal{N}(0,\Sigma_{x}^{(H+1)})$ and
$z[t]\overset{\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,\sigma_{z}^{2}I)$,
$w[t]\overset{\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,\Sigma_{w}^{(H+1)})$ for
$t=0,\dots,T-1$, the tracking error satisfies
$\displaystyle\max_{1\leq t\leq T}\left\|\hat{x}[t]-x_{\star}[t]\right\|^{2}$
$\displaystyle\lesssim\mathcal{J}\mathopen{}\left(A_{\mathsf{cl}}\right)\mathclose{}^{2}\left\|B\right\|^{2}\log\mathopen{}\left(\frac{T}{\delta}\right)\mathclose{}\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)}).$
(9)
Furthermore, for any cost function $h(\cdot)$ that is $L$-Lipschitz with
respect to the trajectory-wise metric
$d\mathopen{}\left(\bm{x}_{1:T},\bm{y}_{1:T}\right)\mathclose{}=\max_{1\leq
t\leq T}\left\|x[t]-y[t]\right\|$, we have the following bound on the expected
cost gap
$\displaystyle{\left|\mathbb{E}_{\hat{\mathcal{P}}_{1:T}}\mathopen{}\left[h(\hat{\bm{x}}_{1:T})\right]\mathclose{}-\mathbb{E}_{\mathcal{P}^{\star}_{1:T}}\mathopen{}\left[h(\bm{x}^{\star}_{1:T})\right]\mathclose{}\right|}$
$\displaystyle\lesssim
L\mathcal{J}(A_{\mathsf{cl}})\left\|B\right\|\sqrt{\log{T}}\sqrt{\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})}$
(10)
By invoking the bound on the excess risk from Theorem 3.1, condition (8) is
satisfied with probability at least $1-\delta^{\prime}$ if we have
sufficiently many samples $H$, $T$, $N_{1}$, $N_{2}$ such that
$\sigma_{z}^{2}\mathopen{}\left(\frac{kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}}{cN_{1}TH}+\frac{kn_{u}+\log(\frac{1}{\delta^{\prime}})}{N_{2}T}\right)\mathclose{}\lesssim\frac{\lambda_{\min}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}}{\mathcal{J}\mathopen{}\left(A_{\mathsf{cl}}\right)\mathclose{}^{2}\left\|B\right\|^{2}}.$
The bound on excess risk from Theorem 3.1 may also be substituted into the
tracking error bound in (9) to find that with probability at least
$1-\delta-\delta^{\prime}$, the tracking error satisfies
$\max_{1\leq t\leq
T}\left\|\hat{x}[t]-x_{\star}[t]\right\|^{2}\lesssim\mathcal{J}\mathopen{}\left(A_{\mathsf{cl}}\right)\mathclose{}^{2}\left\|B\right\|^{2}\log\mathopen{}\left(\frac{T}{\delta}\right)\mathclose{}\sigma_{z}^{2}\mathopen{}\left(\frac{kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}}{cN_{1}TH}+\frac{kn_{u}+\log(\frac{1}{\delta^{\prime}})}{N_{2}T}\right)\mathclose{}.$
The above inequality provides the informal statement of the main result in
Theorem 1.1 by hiding $\log$ terms as well as the terms dependent on system
parameters. A bound for the expected cost gap
${\left|\mathbb{E}_{\hat{\mathcal{P}}_{1:T}}\mathopen{}\left[h(\hat{\bm{x}}_{1:T})\right]\mathclose{}-\mathbb{E}_{\mathcal{P}^{\star}_{1:T}}\mathopen{}\left[h(\bm{x}^{\star}_{1:T})\right]\mathclose{}\right|}$
can be similarly instantiated.
###### Remark 3.1
The dependence of the tracking error bound in (9) on the stability of the
target-task closed-loop system through $\mathcal{J}(A_{\mathsf{cl}})$ is tight
(see Appendix C). Intuitively, less stable systems exacerbate the input errors
from the learned controller.
###### Remark 3.2
Some immediate examples of $h(\cdot)$ include LQR state costs
$h(\bm{x}_{1:T})=\max_{t}\left\|Q^{1/2}x[t]\right\|$ and regularized tracking
costs
$h(\bm{x}_{1:T})=\max_{t}\left\|x[t]-x_{\mathrm{goal}}[t]\right\|+\lambda\left\|Rx[t]\right\|$.
Since $\frac{1}{T}\sum_{t=1}^{T}\left\|x[t]-y[t]\right\|\leq\max_{1\leq t\leq
T}\left\|x[t]-y[t]\right\|$, (10) holds with no modification for time-averaged
costs $h(\cdot)$. Bounds on the full LQR cost
$h\mathopen{}\left(\mathopen{}\left(\bm{x}_{1:T},K\right)\mathclose{}\right)\mathclose{}:=\max_{1\leq
t\leq T}\left\|\begin{bmatrix}Q^{1/2}\\\ R^{1/2}K\end{bmatrix}x[t]\right\|$
can be similarly derived, and are detailed in Appendix D.
## 4 Numerical Results
We consider a simple system with $n_{x}=4$ and $n_{u}=2$ from Hong et al.
(2021). In particular, let
$\displaystyle x[t+1]=\begin{bmatrix}.99&.03&-.02&-.32\\\ .01&.47&4.7&.00\\\
.02&-.06&.40&.00\\\
.01&-.04&.72&.99\end{bmatrix}x[t]+\begin{bmatrix}.01&.99\\\ -3.44&1.66\\\
-.83&.44\\\ -.47&.25\end{bmatrix}u[t]=:Ax[t]+Bu[t].$
We generate a collection of stabilizing controllers $K^{(1)}$,
$K^{(2)},\dots,K^{(H+1)}$ as LQR controllers with different cost matrices.
Specifically, let $R=I_{2}$, and $Q^{(h)}=\alpha^{(h)}I_{4}$ for
$\alpha^{(h)}\in\texttt{logspace}(-2,2,H+1)$, where $H=9$. The controllers
$K^{(h)}$ are then given by
$K^{(h)}=-(B^{\top}P^{(h)}B+R)^{-1}B^{\top}P^{(h)}A$, where $P^{(h)}$ solves
the following discrete algebraic Riccati equation (DARE):
$P^{(h)}=A^{\top}P^{(h)}A+A^{\top}P^{(h)}B(B^{\top}P^{(h)}B+R)^{-1}B^{\top}P^{(h)}A+Q^{(h)}$.
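This controller family can be reproduced with scipy's DARE solver; a sketch, with the matrices copied from the system above:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[.99, .03, -.02, -.32],
              [.01, .47, 4.70, .00],
              [.02, -.06, .40, .00],
              [.01, -.04, .72, .99]])
B = np.array([[.01, .99], [-3.44, 1.66], [-.83, .44], [-.47, .25]])
R = np.eye(2)
H = 9

controllers = []
for alpha in np.logspace(-2, 2, H + 1):
    Q = alpha * np.eye(4)
    P = solve_discrete_are(A, B, Q, R)                       # DARE solution
    K = -np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)       # LQR gain
    assert max(abs(np.linalg.eigvals(A + B @ K))) < 1        # stabilizing
    controllers.append(K)
```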
Next, assume that rather than directly observing the state, we obtain a high
dimensional observation given by an injective linear function of the state:
such an observation model can be viewed as a linear “perceptual sensor” or
camera. In particular, we suppose that $y[t]=Gx[t]$, where
$G\in\mathbb{R}^{50\times 4}$. For simplicity, we select the elements of $G$
i.i.d. from $\mathcal{N}(0,1)$, which ensures that $G$ is injective almost
surely. The dynamics of the observations may be written
$y[t+1]=GAx[t]+GBu[t]=GAG^{\dagger}y[t]+GBu[t],$ with the input
$u[t]=K^{(h)}x[t]=K^{(h)}G^{\dagger}y[t]$. Define $\bar{A}=GAG^{\dagger}$ and
$\bar{B}=GB$, and $\bar{K}^{(h)}=K^{(h)}G^{\dagger}$. Consider the dynamics in
the face of process noise $w[t]\overset{i.i.d.}{\sim}\mathcal{N}(0,I_{50})$,
along with inputs corrupted by noise
$z[t]\overset{i.i.d.}{\sim}\mathcal{N}(0,I_{2})$:
$\displaystyle y[t+1]$
$\displaystyle=(\bar{A}+\bar{B}\bar{K}^{(h)})y[t]+\bar{B}z[t]+w[t],\quad
u[t]=\bar{K}^{(h)}y[t]+z[t].$ (11)
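The lifted dynamics rely on $G^{\dagger}G=I$ for injective $G$; a quick numerical check of the identity $\bar{A}(Gx)=GAx$, with a random $A$ and $G$ for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n_x, n_y = 4, 50

A = rng.standard_normal((n_x, n_x))
G = rng.standard_normal((n_y, n_x))       # injective almost surely
G_pinv = np.linalg.pinv(G)                # left inverse: G_pinv @ G = I

# For observations y = G x, the lifted matrix A_bar = G A G^+ reproduces
# the dynamics on the range of G: A_bar (G x) = G A x.
x = rng.standard_normal(n_x)
A_bar = G @ A @ G_pinv
print(np.linalg.norm(A_bar @ (G @ x) - G @ (A @ x)))  # ~ 0
```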
For the first $H$ controllers, we collect $N_{1}$ trajectories of length
$T=20$ to get the pairs
$\mathopen{}\left\\{\mathopen{}\left\\{\mathopen{}\left\\{(y_{i}^{h}[t],u_{i}^{h}[t])\right\\}\mathclose{}_{t=0}^{T-1}\right\\}\mathclose{}_{i=1}^{N_{1}}\right\\}\mathclose{}_{h=1}^{H}$.
For the last controller, we collect $N_{2}$ length $T=20$ trajectories to get
the dataset
$\mathopen{}\left\\{\mathopen{}\left\\{(y_{i}^{H+1}[t],u_{i}^{H+1}[t])\right\\}\mathclose{}_{t=0}^{T-1}\right\\}\mathclose{}_{i=1}^{N_{2}}$.
Our goal is to learn the controller $\bar{K}^{(H+1)}$ from the collected state
measurements and inputs. We compare the following ways of doing so:
[Figure 1: (a) Tracking Error; (b) Parameter Error; (c) Percent Stable]
Figure 1: We plot the tracking error between trajectories from the expert and
learned controllers, $\underset{{1\leq t\leq
T_{\textrm{test}}}}{\max}\left\|\hat{y}[t]-y_{\star}[t]\right\|^{2}$, the
parameter error,
$\left\|\hat{F}^{(H+1)}\hat{\Phi}-\bar{K}^{(H+1)}\right\|_{F}$,
and the percent of stable closed-loop systems for varying amounts of target
task data to compare multi-task IL to directly learning the controller from
target task data only. All metrics are plotted with respect to the lifted
system in Equation (11). Multi-task IL demonstrates a significant benefit over
direct IL in all metrics, especially when there is limited target task data.
* •
Multi-task Imitation Learning: We observe that the data generating mechanism
ensures the existence of a low dimensional representation. In particular, one
possible $\Phi_{\star}$ is $G^{\dagger}$. Therefore, the stage is set for the
two step approach outlined in Section 2. In particular, we assume that the
true underlying state dimension is known, and we set the low dimensional
representation dimension to $k=4$, and jointly optimize over $\Phi$,
$F^{(1)},\dots,F^{(H)}$ in Problem (1). We approximately solve this problem
with $10000$ steps of alternating gradient descent using the Adam optimizer
(Kingma and Ba, 2014) in optax (Babuschkin et al., 2020) with a learning rate
of $0.0001$. The learned representation is then fixed, and the target training
data is used to optimize $F^{(H+1)}$.
* Direct Imitation Learning: We compare multi-task learning to direct learning,
which does not leverage the source data. In particular, given the target data,
direct learning solves the problem
$\operatorname*{minimize}_{F^{(H+1)}}\sum_{i=1}^{N_{2}}\sum_{t=0}^{T-1}\left\|F^{(H+1)}y_{i}^{(H+1)}[t]-u_{i}^{(H+1)}[t]\right\|^{2}$
(Footnote: another baseline leverages the fact that a $k$-dimensional
representation of the state exists, and learns it using target task data only
by solving
$\operatorname*{minimize}_{F^{(H+1)},\Phi}\sum_{i=1}^{N_{2}}\sum_{t=0}^{T-1}\left\|F^{(H+1)}\Phi
y^{(H+1)}_{i}[t]-u_{i}^{(H+1)}[t]\right\|^{2}$. For the current example,
however, $n_{u}<k$, so this approach is less efficient than the direct
learning approach proposed.) In this setting, we let $\hat{\Phi}=I_{50}$.
Note that a $2\times 50$ controller has $n_{u}\times n_{x}=100$ parameters to
learn from the target data. Meanwhile, multi-task imitation learning needs to
learn a total of $k\times n_{u}+k\times n_{x}=208$ parameters for the target
controller, but the $k\times n_{x}$ representation parameters are learned using
source task data. This leaves only the $k\times n_{u}=8$ parameters of
$F^{(H+1)}$ to learn using target task data.
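The two pipelines compared above can be sketched concretely. Below is a minimal NumPy illustration with function names of our own choosing; the plain-gradient update of $\Phi$ is a simplification of the Adam/optax optimizer used in the experiment. `fit_direct` solves the direct least-squares problem, while `fit_multitask` alternates exact per-task least-squares updates of each $F^{(h)}$ with gradient steps on the shared representation $\Phi$.

```python
import numpy as np

def fit_direct(Y, U):
    """Direct IL: least-squares fit of u ~ F y from stacked data.
    Y: (samples, n_x) states, U: (samples, n_u) inputs; returns F of shape (n_u, n_x)."""
    Ft, *_ = np.linalg.lstsq(Y, U, rcond=None)
    return Ft.T

def fit_multitask(Ys, Us, k, n_iters=300, lr=1e-3, Phi0=None, seed=0):
    """Multi-task IL sketch: alternate exact least squares for each F_h with a
    gradient step on the shared Phi for
        min_{Phi, F_1..F_H}  sum_h ||Y_h Phi^T F_h^T - U_h||_F^2."""
    rng = np.random.default_rng(seed)
    n_x = Ys[0].shape[1]
    n_samples = sum(Y.shape[0] for Y in Ys)
    Phi = (rng.standard_normal((k, n_x)) / np.sqrt(n_x)) if Phi0 is None else Phi0.copy()
    for _ in range(n_iters):
        # Exact per-task least squares given the current representation.
        Fs = [np.linalg.lstsq(Y @ Phi.T, U, rcond=None)[0].T for Y, U in zip(Ys, Us)]
        grad = np.zeros_like(Phi)
        for Y, U, F in zip(Ys, Us, Fs):
            R = Y @ Phi.T @ F.T - U          # task residuals
            grad += F.T @ R.T @ Y            # dL/dPhi for this task
        Phi -= lr * grad / n_samples         # small gradient step on Phi
    # Final per-task weights for the returned representation.
    Fs = [np.linalg.lstsq(Y @ Phi.T, U, rcond=None)[0].T for Y, U in zip(Ys, Us)]
    return Phi, Fs
```

On noiseless, realizable data `fit_direct` recovers the full controller $F^{(h)}\Phi_{\star}$ exactly, while the multi-task pipeline only has to fit $k\times n_{u}$ parameters per task once $\Phi$ is learned.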
Figure 1 plots three metrics that provide insight into the efficacy of these
approaches: the imitation gap given by $\max_{1\leq t\leq
T_{\textrm{test}}}\left\|\hat{y}[t]-y_{\star}[t]\right\|^{2}$ for length
$T_{\textrm{test}}=100$ observation trajectories $\hat{y}$ and $y_{\star}$
rolled out under the learned controller and expert controller, respectively,
with the same noise realizations; the parameter error,
$\left\|\hat{F}^{(H+1)}\hat{\Phi}-\bar{K}^{(H+1)}\right\|_{F}$; and the
percentage of trials where the learned controller is stabilizing. The trials
are over ten realizations of $G$, as well as ten realizations of the noise,
for a total of $100$ trials. For each trial, $N_{1}=10$, while $N_{2}$ sweeps
values in $[20]$. The medians for the imitation gap and parameter error are
shown, with the $20\%-80\%$ quantiles shaded.
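The imitation-gap metric above can be computed by rolling out both closed loops under a shared noise realization; a minimal sketch (function name ours):

```python
import numpy as np

def tracking_error(A, B, K_hat, K_star, T=100, seed=0):
    """Max-over-time squared tracking error between closed-loop rollouts of the
    learned (K_hat) and expert (K_star) controllers, driven by the same noise."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    y_hat = np.zeros(n)
    y_star = np.zeros(n)
    err = 0.0
    for _ in range(T):
        w = rng.standard_normal(n)           # shared noise realization
        y_hat = (A + B @ K_hat) @ y_hat + w
        y_star = (A + B @ K_star) @ y_star + w
        err = max(err, float(np.linalg.norm(y_hat - y_star) ** 2))
    return err
```

Driving both systems with the same $w[t]$ isolates the effect of the controller mismatch, matching the evaluation described above.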
Figure 2 panels: (a) Tracking Error, (b) Parameter Error, (c) Percent Stable.
Figure 2: We plot the tracking error between trajectories from the expert and
learned controllers, $\max_{1\leq t\leq T_{\textrm{test}}}\left\|\hat{y}[t]-y_{\star}[t]\right\|^{2}$, the
parameter error, $\left\|\hat{F}^{(H+1)}\hat{\Phi}-\bar{K}^{(H+1)}\right\|_{F}$,
and the percent of stable closed-loop systems for varying amounts of target
task data to compare multi-task IL to directly learning the controller from
target task data only. Similar to the setting of multi-task IL for transfer to
a new task, leveraging all source data to learn the controller for a single
source task provides a significant benefit in all three metrics over direct
IL.
In Figure 2, we additionally plot these metrics on one of the $H$ source
training tasks (arbitrarily chosen as $h=7$) for varying amounts of training
data to demonstrate the efficacy of the approach for multi-task learning.
Here, $N_{1}$ ranges from $1$ to $10$. We compare the controller
$\hat{F}^{(h)}\hat{\Phi}$ resulting from the shared training in Problem (1)
with a controller trained directly on this task, without leveraging the data
from the other source tasks. We note that our theoretical results, with mild
modification, also support the efficacy of this simultaneous training of a
representation and task weights.
## 5 Conclusion and Future Work
In this work, we study the sample complexity of multi-task imitation learning
for linear systems. We find that if the different sets of expert
demonstrations share a low dimensional representation, and the demonstrations
are sufficiently diverse, then doing multi-task representation learning will
lead to a smaller tracking error when deploying the learned policy in closed-
loop, compared to learning a policy only from target task data. Our results
are a first step towards understanding how the performance of a controller
trained on multi-task data relates to the characteristics of the multi-task
training data and the system being controlled. Some exciting directions for
future work would be to extend our analysis to nonlinear systems and nonlinear
representation functions, as well as other types of learning algorithms such
as model-based and model-free RL.
## Acknowledgements
Katie Kang is supported by an NSF Graduate Research Fellowship. Bruce D. Lee is
supported by the DoD through a National Defense Science & Engineering
Fellowship. Sergey Levine is supported in part by the DARPA Assured Autonomy
program. Nikolai Matni and Thomas Zhang are supported in part by NSF awards
CPS-2038873, CAREER award ECCS-2045834, and a Google Research Scholar award.
## References
* Codevilla et al. (2018) Felipe Codevilla, Matthias Müller, Antonio López, Vladlen Koltun, and Alexey Dosovitskiy. End-to-end driving via conditional imitation learning. In _2018 IEEE international conference on robotics and automation (ICRA)_ , pages 4693–4700. IEEE, 2018.
* Schaal (1999) Stefan Schaal. Is imitation learning the route to humanoid robots? _Trends in cognitive sciences_ , 3(6):233–242, 1999.
* Ross et al. (2011) Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In _Proceedings of the fourteenth international conference on artificial intelligence and statistics_ , pages 627–635. JMLR Workshop and Conference Proceedings, 2011.
* Du et al. (2020) Simon S Du, Wei Hu, Sham M Kakade, Jason D Lee, and Qi Lei. Few-shot learning via learning the representation, provably. _arXiv preprint arXiv:2002.09434_ , 2020.
* Tripuraneni et al. (2021) Nilesh Tripuraneni, Chi Jin, and Michael Jordan. Provable meta-learning of linear representations. In _International Conference on Machine Learning_ , pages 10434–10443. PMLR, 2021.
* Teh et al. (2017) Yee Whye Teh, Victor Bapst, Wojciech Marian Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. Distral: Robust multitask reinforcement learning, 2017. URL https://arxiv.org/abs/1707.04175.
* Espeholt et al. (2018) Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures, 2018. URL https://arxiv.org/abs/1802.01561.
* Hessel et al. (2018) Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, and Hado van Hasselt. Multi-task deep reinforcement learning with popart, 2018. URL https://arxiv.org/abs/1809.04474.
* Singh et al. (2020) Avi Singh, Eric Jang, Alexander Irpan, Daniel Kappler, Murtaza Dalal, Sergey Levine, Mohi Khansari, and Chelsea Finn. Scalable multi-task imitation learning with autonomous improvement, 2020. URL https://arxiv.org/abs/2003.02636.
* Deisenroth et al. (2014) Marc Peter Deisenroth, Peter Englert, Jan Peters, and Dieter Fox. Multi-task policy search for robotics. In _2014 IEEE International Conference on Robotics and Automation (ICRA)_ , pages 3876–3881, 2014. doi: 10.1109/ICRA.2014.6907421.
* Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks, 2017. URL https://arxiv.org/abs/1703.03400.
* Rakelly et al. (2019) Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, and Sergey Levine. Efficient off-policy meta-reinforcement learning via probabilistic context variables, 2019. URL https://arxiv.org/abs/1903.08254.
* Duan et al. (2016) Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. RL2: Fast reinforcement learning via slow reinforcement learning, 2016. URL https://arxiv.org/abs/1611.02779.
* Yu et al. (2021) Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, and Chelsea Finn. Conservative data sharing for multi-task offline reinforcement learning, 2021. URL https://arxiv.org/abs/2109.08128.
* Yang and Nachum (2021) Mengjiao Yang and Ofir Nachum. Representation matters: Offline pretraining for sequential decision making, 2021. URL https://arxiv.org/abs/2102.05815.
* Lu et al. (2021) Rui Lu, Gao Huang, and Simon S. Du. On the power of multitask representation learning in linear MDP, 2021. URL https://arxiv.org/abs/2106.08053.
* Cheng et al. (2022) Yuan Cheng, Songtao Feng, Jing Yang, Hong Zhang, and Yingbin Liang. Provable benefit of multitask representation learning in reinforcement learning, 2022. URL https://arxiv.org/abs/2206.05900.
* Lu et al. (2022) Rui Lu, Andrew Zhao, Simon S. Du, and Gao Huang. Provable general function class representation learning in multitask bandits and mdps, 2022. URL https://arxiv.org/abs/2205.15701.
* Xu et al. (2020) Zhiyuan Xu, Kun Wu, Zhengping Che, Jian Tang, and Jieping Ye. Knowledge transfer in multi-task deep reinforcement learning for continuous control, 2020. URL https://arxiv.org/abs/2010.07494.
* Maurer et al. (2015) Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes. The benefit of multitask representation learning, 2015. URL https://arxiv.org/abs/1505.06279.
* Arora et al. (2020) Sanjeev Arora, Simon S. Du, Sham Kakade, Yuping Luo, and Nikunj Saunshi. Provable representation learning for imitation learning via bi-level optimization, 2020. URL https://arxiv.org/abs/2002.10544.
* Harrison et al. (2018) James Harrison, Apoorva Sharma, Roberto Calandra, and Marco Pavone. Control adaptation via meta-learning dynamics. In _Workshop on Meta-Learning at NeurIPS_ , volume 2018, 2018.
* Richards et al. (2021) Spencer M Richards, Navid Azizan, Jean-Jacques Slotine, and Marco Pavone. Adaptive-control-oriented meta-learning for nonlinear systems. _arXiv preprint arXiv:2103.04490_ , 2021.
* Richards et al. (2022) Spencer M Richards, Navid Azizan, Jean-Jacques Slotine, and Marco Pavone. Control-oriented meta-learning. _arXiv preprint arXiv:2204.06716_ , 2022.
* Shi et al. (2021) Guanya Shi, Kamyar Azizzadenesheli, Michael O’Connell, Soon-Jo Chung, and Yisong Yue. Meta-adaptive nonlinear control: Theory and algorithms. _Advances in Neural Information Processing Systems_ , 34:10013–10025, 2021.
* Muthirayan et al. (2022) Deepan Muthirayan, Dileep Kalathil, and Pramod P Khargonekar. Meta-learning online control for linear dynamical systems. _arXiv preprint arXiv:2208.10259_ , 2022.
* Wang et al. (2021) Rui Wang, Robin Walters, and Rose Yu. Meta-learning dynamics forecasting using task inference. _arXiv preprint arXiv:2102.10271_ , 2021.
* Modi et al. (2021) Aditya Modi, Mohamad Kazem Shirani Faradonbeh, Ambuj Tewari, and George Michailidis. Joint learning of linear time-invariant dynamical systems. _arXiv preprint arXiv:2112.10955_ , 2021.
* Li et al. (2022) Lidong Li, Claudio De Persis, Pietro Tesi, and Nima Monshizadeh. Data-based transfer stabilization in linear systems. _arXiv preprint arXiv:2211.05536_ , 2022.
* Baxter (1995) Jonathan Baxter. Learning internal representations. In _Proceedings of the eighth annual conference on Computational learning theory_ , pages 311–320, 1995.
* Crammer et al. (2008) Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from multiple sources. _Journal of Machine Learning Research_ , 9(8), 2008.
* Maurer et al. (2016) Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes. The benefit of multitask representation learning. _Journal of Machine Learning Research_ , 17(81):1–32, 2016.
* Tripuraneni et al. (2020) Nilesh Tripuraneni, Michael Jordan, and Chi Jin. On the theory of transfer learning: The importance of task diversity. _Advances in Neural Information Processing Systems_ , 33:7852–7862, 2020.
* Chua et al. (2021) Kurtland Chua, Qi Lei, and Jason D Lee. How fine-tuning allows for effective meta-learning. _Advances in Neural Information Processing Systems_ , 34:8871–8884, 2021.
* Abbasi-Yadkori et al. (2011) Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Online least squares estimation with self-normalized processes: An application to bandit problems. _arXiv preprint arXiv:1102.2670_ , 2011.
* Pfrommer et al. (2022) Daniel Pfrommer, Thomas TCK Zhang, Stephen Tu, and Nikolai Matni. Tasil: Taylor series imitation learning. _arXiv preprint arXiv:2205.14812_ , 2022.
* Hong et al. (2021) J. Hong, N. Moehle, and S. Boyd. Lecture notes in “Introduction to Matrix Methods”, 2021.
* Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014.
* Babuschkin et al. (2020) Igor Babuschkin, Kate Baumli, Alison Bell, Surya Bhupatiraju, Jake Bruce, Peter Buchlovsky, David Budden, Trevor Cai, Aidan Clark, Ivo Danihelka, Claudio Fantacci, Jonathan Godwin, Chris Jones, Ross Hemsley, Tom Hennigan, Matteo Hessel, Shaobo Hou, Steven Kapturowski, Thomas Keck, Iurii Kemaev, Michael King, Markus Kunesch, Lena Martens, Hamza Merzic, Vladimir Mikulik, Tamara Norman, John Quan, George Papamakarios, Roman Ring, Francisco Ruiz, Alvaro Sanchez, Rosalia Schneider, Eren Sezener, Stephen Spencer, Srivatsan Srinivasan, Luyu Wang, Wojciech Stokowiec, and Fabio Viola. The DeepMind JAX Ecosystem, 2020. URL http://github.com/deepmind.
* Vershynin (2018) Roman Vershynin. _High-dimensional probability: An introduction with applications in data science_ , volume 47. Cambridge university press, 2018.
* Laurent and Massart (2000) Beatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic functional by model selection. _Annals of Statistics_ , pages 1302–1338, 2000.
* Jedra and Proutiere (2020) Yassir Jedra and Alexandre Proutiere. Finite-time identification of stable linear systems optimality of the least-squares estimator. In _2020 59th IEEE Conference on Decision and Control (CDC)_ , pages 996–1001, 2020. doi: 10.1109/CDC42340.2020.9304362.
* Chen et al. (2016) Yan Mei Chen, Xiao Shan Chen, and Wen Li. On perturbation bounds for orthogonal projections. _Numerical Algorithms_ , 73(2):433–444, 2016.
* Xu (2020) Xuefeng Xu. On the perturbation of an l2-orthogonal projection. _Journal of Computational and Applied Mathematics_ , 368:112327, 2020.
* Liang et al. (2015) Tengyuan Liang, Alexander Rakhlin, and Karthik Sridharan. Learning with square loss: Localization through offset Rademacher complexity. In _Conference on Learning Theory_ , pages 1260–1285. PMLR, 2015.
* Khalil (1996) H.K. Khalil. _Nonlinear Systems_. Prentice Hall, 1996. ISBN 9780132280242. URL https://books.google.com/books?id=qiBuQgAACAAJ.
* Horn and Johnson (2012) Roger A Horn and Charles R Johnson. _Matrix analysis_. Cambridge university press, 2012.
* Fazel et al. (2018) Maryam Fazel, Rong Ge, Sham Kakade, and Mehran Mesbahi. Global convergence of policy gradient methods for the linear quadratic regulator. In _International Conference on Machine Learning_ , pages 1467–1476. PMLR, 2018.
* Mania et al. (2019) Horia Mania, Stephen Tu, and Benjamin Recht. Certainty equivalence is efficient for linear quadratic control. _Advances in Neural Information Processing Systems_ , 32, 2019.
* Villani (2021) Cédric Villani. _Topics in optimal transportation_ , volume 58. American Mathematical Soc., 2021.
#### Additional Notation
The set of $n\times m$ matrices $(n\geq m)$ with orthonormal columns is
denoted by $\mathcal{O}_{n,m}$. The set of $d$ dimensional unit vectors is
denoted by $\mathcal{S}^{d-1}$. We let $P_{A}=A(A^{\top}A)^{\dagger}A^{\top}$
denote the projection onto $\textbf{span}(A)$, and let $P_{A}^{\perp}=I-P_{A}$
denote the projection onto $\textbf{span}(A)^{\perp}$. The Kronecker product
of a matrix $A$ with a matrix $B$ is denoted $A\otimes B$. The vectorization
of a matrix $A$ with columns $A_{1},\dots,A_{n}$ is denoted
$\operatorname{Vec}(A)=\begin{bmatrix}A_{1}^{\top}&A_{2}^{\top}&\dots&A_{n}^{\top}\end{bmatrix}^{\top}$.
We denote the chi-squared distribution with $k$ degrees of freedom by
$\chi^{2}(k)$. Finally, we define the following $N_{1}T\times n_{u}$ data
matrices:
$\displaystyle\mathbf{U}^{(h)}$
$\displaystyle=\begin{bmatrix}u_{1}^{(h)}[0]&\dots&u_{1}^{(h)}[T-1]&\dots&u_{N_{1}}^{(h)}[0]&\dots&u_{N_{1}}^{(h)}[T-1]\end{bmatrix}^{\top},$
$\displaystyle\mathbf{Z}^{(h)}$
$\displaystyle=\begin{bmatrix}z_{1}^{(h)}[0]&\dots&z_{1}^{(h)}[T-1]&\dots&z_{N_{1}}^{(h)}[0]&\dots&z_{N_{1}}^{(h)}[T-1]\end{bmatrix}^{\top}.$
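As a quick illustration of this notation, the projector $P_{A}$, the vectorization, and the Kronecker product can be formed directly (a small NumPy sketch; the example matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))          # full column rank almost surely

P_A = A @ np.linalg.pinv(A.T @ A) @ A.T  # projection onto span(A)
P_A_perp = np.eye(6) - P_A               # projection onto span(A)^perp

vec_A = A.flatten(order="F")             # Vec(A) stacks the columns of A
kron = np.kron(np.eye(2), A)             # I_2 (x) A is block-diagonal with two copies of A
```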
## Appendix A Bounding the covariance concentration
We introduce the following version of the Hanson-Wright inequality for
sub-Gaussian quadratic forms (Vershynin, 2018, Theorem 6.3.2), specialized to
the Gaussian case via Laurent and Massart (2000).
###### Proposition A.1 (Concentration of Gaussian Quadratic Forms)
Let $z\sim\mathcal{N}(0,I_{n})$ be a standard Gaussian vector, and let
$R\in\mathbb{R}^{m\times n}$ be an arbitrary fixed matrix. Then the following
concentration inequalities hold for all $\varepsilon>0$:
$\displaystyle\mathbb{P}\left[\left\|Rz\right\|^{2}\geq(1+\varepsilon)\left\|R\right\|_{F}^{2}\right]\leq\exp\left(-\frac{1}{4}\min\left\{\frac{\varepsilon^{2}}{4},\varepsilon\right\}\frac{\left\|R\right\|_{F}^{2}}{\left\|R\right\|^{2}}\right)$ (12)
$\displaystyle\mathbb{P}\left[{\left|\left\|Rz\right\|^{2}-\left\|R\right\|_{F}^{2}\right|}\geq\varepsilon\left\|R\right\|_{F}^{2}\right]\leq 2\exp\left(-\frac{1}{4}\min\left\{\frac{\varepsilon^{2}}{4},\varepsilon\right\}\frac{\left\|R\right\|_{F}^{2}}{\left\|R\right\|^{2}}\right).$ (13)
Proof By the rotational invariance of Gaussian random vectors, we have
$z^{\top}R^{\top}Rz\overset{d}{=}\sum_{i=1}^{\min\left\{m,n\right\}}\sigma_{i}^{2}z_{i}^{2}$,
where $\sigma_{i}$ is the $i$th singular value of $R$. Noting that $z_{i}^{2}$
are i.i.d. $\chi^{2}$ random variables with $1$ degree of freedom, Laurent and
Massart (2000, Lemma 1) provides the following upper and lower tail bounds on
Gaussian quadratic forms:
$\displaystyle\mathbb{P}\mathopen{}\left[\left\|Rz\right\|^{2}\geq\left\|R\right\|_{F}^{2}+2\left\|R^{\top}R\right\|_{F}\sqrt{t}+2\left\|R\right\|^{2}t\right]\mathclose{}\leq\exp(-t)$
$\displaystyle\mathbb{P}\mathopen{}\left[\left\|Rz\right\|^{2}\leq\left\|R\right\|_{F}^{2}-2\left\|R^{\top}R\right\|_{F}\sqrt{t}\right]\mathclose{}\leq\exp(-t).$
We want to leverage these bounds to prove multiplicative concentration bounds.
Focusing on the upper tail first, we observe that
$\displaystyle\left\|R\right\|_{F}^{2}+2\left\|R^{\top}R\right\|_{F}\sqrt{t}+2\left\|R\right\|^{2}t$
$\displaystyle\leq\left\|R\right\|_{F}^{2}+2\left\|R\right\|\left\|R\right\|_{F}\sqrt{t}+2\left\|R\right\|^{2}t$
$\displaystyle\leq\left\|R\right\|_{F}^{2}+4\max\mathopen{}\left\\{\left\|R\right\|\left\|R\right\|_{F}\sqrt{t},\left\|R\right\|^{2}t\right\\}\mathclose{}$
$\displaystyle=:(1+\varepsilon)\left\|R\right\|_{F}^{2},$
such that by construction
$\mathbb{P}\mathopen{}\left[\left\|Rz\right\|^{2}\geq(1+\varepsilon)\left\|R\right\|_{F}^{2}\right]\mathclose{}\leq\exp(-t).$
Solving for the two cases of $t$ in terms of $\varepsilon$ and taking their
minimum, we find that
$\mathbb{P}\mathopen{}\left[\left\|Rz\right\|^{2}\geq(1+\varepsilon)\left\|R\right\|_{F}^{2}\right]\mathclose{}\leq\exp\mathopen{}\left(-\frac{1}{4}\min\mathopen{}\left\\{\frac{\varepsilon^{2}}{4},\varepsilon\right\\}\mathclose{}\frac{\left\|R\right\|_{F}^{2}}{\left\|R\right\|^{2}}\right)\mathclose{},$
which completes the upper tail bound. Repeating this for the lower tail bound,
we set
$t=\frac{1}{4}\frac{\left\|R\right\|_{F}^{2}}{\left\|R\right\|^{2}}\varepsilon^{2}$
to get
$\mathbb{P}\mathopen{}\left[\left\|Rz\right\|^{2}\leq\left\|R\right\|_{F}^{2}-\varepsilon\left\|R\right\|_{F}^{2}\right]\mathclose{}\leq\exp\mathopen{}\left(-\frac{1}{4}\frac{\left\|R\right\|_{F}^{2}}{\left\|R\right\|^{2}}\varepsilon^{2}\right)\mathclose{},$
which for a given $\varepsilon>0$ is always upper bounded by the upper tail
bound.
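The upper-tail bound (12) can be sanity-checked by Monte Carlo; the sketch below (helper names ours) compares the empirical exceedance frequency of $\left\|Rz\right\|^{2}$ against the stated bound:

```python
import numpy as np

def empirical_upper_tail(R, eps, n_trials=20000, seed=0):
    """Empirical frequency of ||R z||^2 >= (1 + eps) ||R||_F^2 for z ~ N(0, I_n)."""
    rng = np.random.default_rng(seed)
    n = R.shape[1]
    Z = rng.standard_normal((n_trials, n))
    vals = np.sum((Z @ R.T) ** 2, axis=1)           # ||R z||^2 for each trial
    return float(np.mean(vals >= (1 + eps) * np.linalg.norm(R, "fro") ** 2))

def tail_bound(R, eps):
    """Right-hand side of the upper-tail inequality (12)."""
    fro2 = np.linalg.norm(R, "fro") ** 2
    op2 = np.linalg.norm(R, 2) ** 2                 # squared operator norm
    return float(np.exp(-0.25 * min(eps ** 2 / 4, eps) * fro2 / op2))
```

Since the inequality is conservative by design, the empirical frequency typically sits well below the bound.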
The following $\varepsilon$-net argument is standard, and a proof may be found
in Chapter 4 of Vershynin (2018).
###### Lemma A.1
Let $W$ be a $d\times d$ random matrix, and $\varepsilon\in[0,1/2)$.
Furthermore, let $\mathcal{N}$ be an $\varepsilon$-net of $\mathcal{S}^{d-1}$
with minimal cardinality. Then for all $\rho>0$,
$\operatorname{\mathbb{P}}\left[\left\|W\right\|>\rho\right]\leq\left(\frac{2}{\varepsilon}+1\right)^{d}\max_{x\in\mathcal{N}}\operatorname{\mathbb{P}}\left[{\left|x^{\top}Wx\right|}>(1-2\varepsilon)\rho\right].$
We now prove a generalized result on the concentration of covariances from
Jedra and Proutiere (2020), in which the covariance is pre- and post-multiplied
by the transpose of a matrix with orthonormal columns and by the matrix
itself, respectively. This yields error bounds that scale with the smaller
dimension of the orthonormal matrix.
###### Lemma A.2 (Modified Lemma 2 of Jedra and Proutiere (2020))
Let $A\in\mathbb{R}^{n\times n}$ satisfy $\rho(A)<1$ and let $\Sigma_{w}\succ
0$ with dimension $n\times n$. Let $\Sigma_{x}$ solve the discrete Lyapunov
equation:
$\Sigma_{x}=A\Sigma_{x}A^{\top}+\Sigma_{w}.$
Consider drawing $N_{1}$ trajectories of length $T$ from a linear system
$x_{i}[t+1]=Ax_{i}[t]+w_{i}[t]$, where
$w_{i}[t]\overset{i.i.d.}{\sim}\mathcal{N}(0,\Sigma_{w})$ for $i=1,\dots,N_{1}$
and $t=0,\dots,T-1$, and
$x_{i}[0]\overset{i.i.d.}{\sim}\mathcal{N}(0,\Sigma_{x})$ for
$i=1,\dots,N_{1}$. Let $\mathbf{X}$ be the matrix of stacked state data as in
Equation (4). Let $U\in\mathcal{O}_{n,d}$ with $d\leq n$ be independent of
$\mathbf{X}$. Define
$M=\mathopen{}\left(N_{1}TU^{\top}\Sigma_{x}U\right)\mathclose{}^{-1/2}$. Then
$\left\|(\mathbf{X}UM)^{\top}\mathbf{X}UM-I\right\|>\max\mathopen{}\left\\{\varepsilon,\varepsilon^{2}\right\\}\mathclose{}$
holds with probability at most
$2\exp\mathopen{}\left(-c_{1}\varepsilon^{2}\frac{N_{1}T}{\kappa(\Sigma_{x})\mathcal{J}(A)^{2}}+c_{2}d\right)\mathclose{}$
for some absolute constants $c_{1}$ and $c_{2}$.
Proof
Note that we can write
$\mathbb{E}\mathopen{}\left[(\mathbf{X}U)^{\top}\mathbf{X}U\right]\mathclose{}=\mathbb{E}\mathopen{}\left[U^{\top}\sum_{i=1}^{N_{1}}\sum_{t=0}^{T-1}x_{i}[t]x_{i}[t]^{\top}U\right]\mathclose{}=N_{1}TU^{\top}\Sigma_{x}U.$
Then
$\displaystyle\left\|(\mathbf{X}UM)^{\top}(\mathbf{X}UM)-I\right\|$
$\displaystyle=\sup_{v\in\mathcal{S}^{d-1}}{\left|v^{\top}\mathopen{}\left((\mathbf{X}UM)^{\top}\mathbf{X}UM-I\right)\mathclose{}v\right|}$
$\displaystyle=\sup_{v\in\mathcal{S}^{d-1}}{\left|v^{\top}\mathopen{}\left((\mathbf{X}UM)^{\top}\mathbf{X}UM\right)\mathclose{}v-\mathbb{E}v^{\top}\mathopen{}\left((\mathbf{X}UM)^{\top}\mathbf{X}UM\right)\mathclose{}v\right|}$
$\displaystyle=\sup_{v\in\mathcal{S}^{d-1}}{\left|\left\|\mathbf{X}UMv\right\|^{2}-\mathbb{E}\left\|\mathbf{X}UMv\right\|^{2}\right|}$
$\displaystyle\overset{(i)}{=}\sup_{v\in\mathcal{S}^{d-1}}{\left|\left\|\sigma_{Mv}^{\top}\tilde{\Gamma}\xi\right\|^{2}-\mathbb{E}\left\|\sigma_{Mv}^{\top}\tilde{\Gamma}\xi\right\|^{2}\right|}$
$\displaystyle=\sup_{v\in\mathcal{S}^{d-1}}{\left|\left\|\sigma_{Mv}^{\top}\tilde{\Gamma}\xi\right\|^{2}-\left\|\sigma_{Mv}^{\top}\tilde{\Gamma}\right\|_{F}^{2}\right|},$
where $(i)$ follows by defining
$\Gamma=I_{N_{1}}\otimes\begin{bmatrix}\Sigma_{x}^{1/2}&&&\\ A\Sigma_{x}^{1/2}&\Sigma_{w}^{1/2}&&\\ \vdots&&\ddots&\\ A^{T-1}\Sigma_{x}^{1/2}&\cdots&A\Sigma_{w}^{1/2}&\Sigma_{w}^{1/2}\end{bmatrix},$
$\xi=\operatorname{Vec}\left(\begin{bmatrix}\eta_{1}[-1]&\dots&\eta_{1}[T-2]&\dots&\eta_{N_{1}}[-1]&\dots&\eta_{N_{1}}[T-2]\end{bmatrix}\right),$
where $\eta_{i}[t]\overset{i.i.d.}{\sim}\mathcal{N}(0,I)$ for $t\in[-1,T-2]$,
and
$\sigma_{Mv}:=I_{N_{1}T}\otimes(Mv),\qquad\tilde{\Gamma}:=(I_{N_{1}T}\otimes
U^{\top})\Gamma,$
and recalling that
$\operatorname{Vec}(U^{\top}\mathbf{X}^{\top})=(I_{N_{1}T}\otimes
U^{\top})\Gamma\xi=\tilde{\Gamma}\xi$. Next observe that for any
$v\in\mathcal{S}^{d-1}$, we can apply Proposition A.1 to show that
$\operatorname{\mathbb{P}}\left[{\left|\left\|\sigma_{Mv}^{\top}\tilde{\Gamma}\xi\right\|^{2}-\left\|\sigma_{Mv}^{\top}\tilde{\Gamma}\right\|_{F}^{2}\right|}>\rho\left\|\sigma_{Mv}^{\top}\tilde{\Gamma}\right\|_{F}^{2}\right]\leq 2\exp\left(-C\min\left\{\rho^{2},\rho\right\}\frac{\left\|\sigma_{Mv}^{\top}\tilde{\Gamma}\right\|_{F}^{2}}{\left\|\sigma_{Mv}^{\top}\tilde{\Gamma}\right\|^{2}}\right)$
for some positive universal constant $C$. Next observe that
$\left\|\sigma_{Mv}^{\top}\tilde{\Gamma}\right\|_{F}^{2}=1$ and
$\left\|\sigma_{Mv}^{\top}\tilde{\Gamma}\right\|\leq\left\|M\right\|\left\|\tilde{\Gamma}\right\|$,
and thus the right hand side reduces to
$2\exp\mathopen{}\left(-\frac{C\min\mathopen{}\left\\{\rho^{2},\rho\right\\}\mathclose{}}{\left\|M\right\|^{2}\left\|\tilde{\Gamma}\right\|^{2}}\right)\mathclose{}.$
Applying Lemma A.1 with $\varepsilon=\frac{1}{4}$, we have that
$\operatorname{\mathbb{P}}\mathopen{}\left[\left\|(\mathbf{X}UM)^{\top}\mathbf{X}UM-I\right\|>\rho\right]\mathclose{}\leq
2\cdot
9^{d}\exp\mathopen{}\left(-\frac{C\min\mathopen{}\left\\{\rho^{2}/4,\rho/2\right\\}\mathclose{}}{\left\|M\right\|^{2}\left\|\tilde{\Gamma}\right\|^{2}}\right)\mathclose{}.$
Setting
$\rho=\max\mathopen{}\left\\{\varepsilon,\varepsilon^{2}\right\\}\mathclose{}$
and rearranging constants provides
$\operatorname{\mathbb{P}}\mathopen{}\left[\left\|(\mathbf{X}UM)^{\top}\mathbf{X}UM-I\right\|>\max\mathopen{}\left\\{\varepsilon,\varepsilon^{2}\right\\}\mathclose{}\right]\mathclose{}\leq
2\exp\mathopen{}\left(-\frac{c_{1}\varepsilon^{2}}{\left\|M\right\|^{2}\left\|\Gamma\right\|^{2}}+c_{2}d\right)\mathclose{}.$
Lastly, note that
$\displaystyle\left\|M\right\|=\frac{\left\|(U^{\top}\Sigma_{x}U)^{-1/2}\right\|}{\sqrt{N_{1}T}}=\frac{1}{\sqrt{N_{1}T}}\frac{1}{\lambda_{d}(U^{\top}\Sigma_{x}U)^{1/2}}\leq\frac{1}{\sqrt{N_{1}T}}\frac{1}{\lambda_{\min}(\Sigma_{x})^{1/2}}$
and
$\displaystyle\left\|\tilde{\Gamma}\right\|$
$\displaystyle\leq\left\|\Gamma\right\|$
$\displaystyle=\left\|\begin{bmatrix}\Sigma_{x}^{1/2}&&&\\ A\Sigma_{x}^{1/2}&\Sigma_{w}^{1/2}&&\\ \vdots&&\ddots&\\ A^{T-1}\Sigma_{x}^{1/2}&\cdots&A\Sigma_{w}^{1/2}&\Sigma_{w}^{1/2}\end{bmatrix}\right\|$
$\displaystyle\leq\left\|\begin{bmatrix}\Sigma_{x}^{1/2}\\ \vdots\\ A^{T-1}\Sigma_{x}^{1/2}\end{bmatrix}\right\|+\left\|\begin{bmatrix}\Sigma_{w}^{1/2}&&\\ \vdots&\ddots&\\ A^{T-2}\Sigma_{w}^{1/2}&\cdots&\Sigma_{w}^{1/2}\end{bmatrix}\right\|$
$\displaystyle\leq\sum_{s=0}^{T-1}\left\|A^{s}\right\|\left\|\Sigma_{x}\right\|^{1/2}+\left\|\begin{bmatrix}I&&\\ \vdots&\ddots&\\ A^{T-2}&\cdots&I\end{bmatrix}\right\|\left\|\Sigma_{w}\right\|^{1/2}$
$\displaystyle\leq 2\sum_{s\geq 0}\left\|A^{s}\right\|\left\|\Sigma_{x}\right\|^{1/2}=2\mathcal{J}(A)\left\|\Sigma_{x}\right\|^{1/2},$
where the final inequality follows by applying Lemma 5 of Jedra and Proutiere
(2020) and the fact that $\Sigma_{w}\preceq\Sigma_{x}$. The lemma then follows
from the fact that
$\kappa(\Sigma_{x})=\frac{\left\|\Sigma_{x}\right\|}{\lambda_{\min}(\Sigma_{x})}$.
The previous lemma can now be applied to show concentration of the empirical
covariance for both the source and target task.
###### Lemma A.3
Suppose $N_{1}T\geq
C\max_{h}\mathcal{J}\mathopen{}\left(A+BK^{(h)}\right)\mathclose{}^{2}\kappa\mathopen{}\left(\Sigma_{x}^{(h)}\right)\mathclose{}(n_{x}+\log(H/\delta))$
for $\delta\in(0,1)$, where $C$ is a universal numerical constant. Then with
probability at least $1-\frac{\delta}{10}$ over the states
$\mathbf{X}^{(1)},\dots,\mathbf{X}^{(H)}$ in the source tasks, we have
$0.9\Sigma_{x}^{(h)}\preceq\frac{1}{N_{1}T}\mathopen{}\left(\mathbf{X}^{(h)}\right)\mathclose{}^{\top}\mathbf{X}^{(h)}\preceq
1.1\Sigma_{x}^{(h)},\qquad\forall h\in[H].$
Proof Applying Lemma A.2 with $U=I$ and $\varepsilon=0.1$, as long as $C\geq 100\frac{\max\left\{c_{2},\log(20)\right\}}{c_{1}}$,
then for any task $h\in[H]$, we have that
$\left\|(\mathbf{X}^{(h)}M)^{\top}\mathbf{X}^{(h)}M-I\right\|\leq 0.1,$
with probability at least
$1-2\exp\mathopen{}\left(-\frac{0.01c_{1}N_{1}T}{\mathcal{J}(A+BK^{(h)})^{2}\kappa(\Sigma_{x}^{(h)})}+c_{2}n_{x}\right)\mathclose{}\geq
1-\frac{\delta}{10H}$. This may be written equivalently as
$0.9\Sigma_{x}^{(h)}=0.9\frac{M^{-2}}{N_{1}T}\preceq\frac{\mathbf{X}^{(h)\top}\mathbf{X}^{(h)}}{N_{1}T}\preceq
1.1\frac{M^{-2}}{N_{1}T}=1.1\Sigma_{x}^{(h)}.$
Taking a union bound over the $H$ tasks provides the desired result.
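Lemma A.3 can be illustrated empirically: simulate a stable system started from its stationary distribution and compare the empirical covariance with the Lyapunov solution $\Sigma_{x}$. A sketch (names ours; a simple fixed-point iteration stands in for a dedicated Lyapunov solver):

```python
import numpy as np

def lyap_fixed_point(A, Sigma_w, iters=200):
    """Solve Sigma_x = A Sigma_x A^T + Sigma_w by fixed-point iteration (rho(A) < 1)."""
    S = Sigma_w.copy()
    for _ in range(iters):
        S = A @ S @ A.T + Sigma_w
    return S

def empirical_covariance(A, Sigma_x, Sigma_w, N, T, seed=0):
    """(1/NT) X^T X from N length-T trajectories with x[0] ~ N(0, Sigma_x)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Lx, Lw = np.linalg.cholesky(Sigma_x), np.linalg.cholesky(Sigma_w)
    S = np.zeros((n, n))
    for _ in range(N):
        x = Lx @ rng.standard_normal(n)      # stationary initial condition
        for _ in range(T):
            S += np.outer(x, x)
            x = A @ x + Lw @ rng.standard_normal(n)
    return S / (N * T)
```

With $N_{1}T$ large, the empirical covariance lands within a small multiplicative band around $\Sigma_{x}$, as the lemma predicts.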
###### Lemma A.4
Suppose $N_{2}T\geq
C\mathcal{J}\mathopen{}\left(A+BK^{(H+1)}\right)\mathclose{}^{2}\kappa\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}(k+\log(1/\delta))$
for $\delta\in(0,1)$, where $C$ is a universal numerical constant. Then for
any given matrix $\Phi\in\mathbb{R}^{n_{x}\times 2k}$ independent of
$\mathbf{X}^{(H+1)}$, with probability at least $1-\frac{\delta}{10}$ over
$\mathbf{X}^{(H+1)}$ we have
$0.9\Phi^{\top}\Sigma_{x}^{(H+1)}\Phi\preceq\frac{1}{N_{2}T}\Phi^{\top}\mathopen{}\left(\mathbf{X}^{(H+1)}\right)\mathclose{}^{\top}\mathbf{X}^{(H+1)}\Phi\preceq
1.1\Phi^{\top}\Sigma_{x}^{(H+1)}\Phi.$
Proof Let $\Phi=USV^{\top}$ be the singular value decomposition of $\Phi$
with $U\in\mathbb{R}^{n_{x}\times 2k}$. Applying Lemma A.2 with
$\varepsilon=0.1$, we have that
$\displaystyle
0.9U^{\top}\Sigma_{x}^{(H+1)}U=0.9\frac{M^{-2}}{N_{2}T}\preceq\frac{U^{\top}\mathbf{X}^{(H+1)\top}\mathbf{X}^{(H+1)}U}{N_{2}T}\preceq
1.1\frac{M^{-2}}{N_{2}T}=1.1U^{\top}\Sigma_{x}^{(H+1)}U$ (14)
with probability at least
$1-2\exp\mathopen{}\left(-0.01c_{1}\frac{N_{2}T}{\mathcal{J}(A+BK^{(H+1)})^{2}\kappa(\Sigma_{x}^{(H+1)})}+2c_{2}k\right)\mathclose{}\geq
$1-\frac{\delta}{10}$ as long as $C\geq 100\frac{\max\left\{c_{2},\log(20)\right\}}{c_{1}}$.
Left and right multiplying Equation (14) by $VS$ and $SV^{\top}$, respectively,
results in
$0.9\Phi^{\top}\Sigma_{x}^{(H+1)}\Phi\preceq\frac{\Phi^{\top}\mathbf{X}^{(H+1)\top}\mathbf{X}^{(H+1)}\Phi}{N_{2}T}\preceq
1.1\Phi^{\top}\Sigma_{x}^{(H+1)}\Phi$
with probability at least $1-\frac{\delta}{10}$.
## Appendix B Data Guarantees
We will now use the covariance concentration results to show concentration of
the source and target controllers to the optimal controllers. We begin by
recalling Lemma A.5 of Du et al. (2020).
###### Lemma B.1
There exists a subset $\mathcal{N}\subset\mathcal{O}_{d_{1},d_{2}}$ that is an
$\varepsilon$-net of $\mathcal{O}_{d_{1},d_{2}}$ in Frobenius norm such that
${\left|\mathcal{N}\right|}\leq\mathopen{}\left(\frac{6\sqrt{d_{2}}}{\varepsilon}\right)\mathclose{}^{d_{1}d_{2}}$.
Also, note the following fact.
###### Fact B.1
For size conforming $Z,X$, with $X$ full column rank,
$\displaystyle\sup_{F}\left[4\langle
Z,XF^{\top}\rangle-\left\|XF^{\top}\right\|_{F}^{2}\right]=4\left\|(X^{\top}X)^{-1/2}X^{\top}Z\right\|_{F}^{2}=4\left\|P_{X}Z\right\|_{F}^{2}.$
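Fact B.1 follows by maximizing the concave quadratic in $F$, whose optimum is attained at $F^{\top}=2(X^{\top}X)^{-1}X^{\top}Z$; a small numerical check (variable names ours):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 4))   # full column rank almost surely
Z = rng.standard_normal((20, 3))

G_star = 2 * np.linalg.solve(X.T @ X, X.T @ Z)  # maximizing F^T
val = 4 * np.trace(Z.T @ X @ G_star) - np.linalg.norm(X @ G_star, "fro") ** 2
P_X = X @ np.linalg.solve(X.T @ X, X.T)         # projection onto span(X)
target = 4 * np.linalg.norm(P_X @ Z, "fro") ** 2
```

The attained value matches $4\left\|P_{X}Z\right\|_{F}^{2}$, and any perturbation of the maximizer strictly decreases the objective.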
The following result on perturbation of projection matrices may be found in
Chen et al. (2016); Xu (2020).
###### Lemma B.2 (Perturbation of projection matrices)
Let $A,B$ be size conforming matrices both of the same rank. We have:
$\left\|P_{A}-P_{B}\right\|\leq\min\left\\{\left\|A^{\dagger}\right\|,\left\|B^{\dagger}\right\|\right\\}\left\|A-B\right\|.$
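A quick numerical illustration of this perturbation bound for a small random perturbation (helper name ours):

```python
import numpy as np

def proj(M):
    """Orthogonal projection onto span(M)."""
    return M @ np.linalg.pinv(M.T @ M) @ M.T

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3))
B = A + 0.05 * rng.standard_normal((10, 3))  # small perturbation, same rank a.s.

lhs = np.linalg.norm(proj(A) - proj(B), 2)
rhs = min(np.linalg.norm(np.linalg.pinv(A), 2),
          np.linalg.norm(np.linalg.pinv(B), 2)) * np.linalg.norm(A - B, 2)
```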
###### Lemma B.3
Let $(x_{t})_{t\geq 1}$ be an $\mathbb{R}^{d}$-valued process adapted to a
filtration $(\mathcal{F}_{t})_{t\geq 1}$. Let $(\eta_{t})_{t\geq 1}$ be an
$\mathbb{R}^{m}$-valued process adapted to $(\mathcal{F}_{t})_{t\geq 2}$.
Suppose that $(\eta_{t})_{t\geq 1}$ is a $\sigma$-sub-Gaussian martingale
difference sequence, i.e.:
$\displaystyle\mathbb{E}[\eta_{t}\mid\mathcal{F}_{t}]$ $\displaystyle=0,$ (15)
$\displaystyle\mathbb{E}[\exp(\lambda\langle
v,\eta_{t}\rangle)\mid\mathcal{F}_{t}]$
$\displaystyle\leq\exp\mathopen{}\left(\frac{\lambda^{2}\sigma^{2}\left\|v\right\|^{2}}{2}\right)\mathclose{}\>\>\forall\mathcal{F}_{t}\text{-measurable
}\lambda\in\mathbb{R},v\in\mathbb{R}^{m}.$ (16)
For $\Lambda\in\mathbb{R}^{m\times d}$, let $(M_{t}(\Lambda))_{t\geq 1}$ be
the $\mathbb{R}$-valued process:
$\displaystyle
M_{t}(\Lambda)=\exp\left(\frac{1}{\sigma}\sum_{i=1}^{t}\langle\Lambda
x_{i},\eta_{i}\rangle-\frac{1}{2}\sum_{i=1}^{t}\left\|\Lambda
x_{i}\right\|^{2}\right).$ (17)
The process $(M_{t}(\Lambda))_{t\geq 1}$ satisfies
$\mathbb{E}[M_{t}(\Lambda)]\leq 1$ for all $t\geq 1$.
Proof Let $M_{0}(\Lambda):=1$. Fix $t\geq 1$. Observe:
$\displaystyle\mathbb{E}[M_{t}(\Lambda)]$
$\displaystyle=\mathbb{E}[\mathbb{E}[M_{t}(\Lambda)\mid\mathcal{F}_{t}]]=\mathbb{E}\left[M_{t-1}(\Lambda)\mathbb{E}\left[\exp\mathopen{}\left(\frac{1}{\sigma}\langle\Lambda
x_{t},\eta_{t}\rangle\right)\mathclose{}\,\bigg{|}\,\mathcal{F}_{t}\right]\exp\mathopen{}\left(-\frac{1}{2}\left\|\Lambda
x_{t}\right\|^{2}\right)\mathclose{}\right]$
$\displaystyle\leq\mathbb{E}[M_{t-1}(\Lambda)],$
where the inequality applies (16) with $v=\Lambda x_{t}$ and $\lambda=1/\sigma$. Iterating down to $t=0$ gives $\mathbb{E}[M_{t}(\Lambda)]\leq\mathbb{E}[M_{0}(\Lambda)]=1$.
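The supermartingale property can be illustrated numerically: for Gaussian noise the sub-Gaussian bound (16) holds with equality, so $\mathbb{E}[M_{T}(\Lambda)]=1$ exactly. The Monte Carlo check below uses made-up scalar dynamics ($d=m=1$) with a bounded, adapted $x_{t}$; it is illustrative only:

```python
# Monte Carlo check that E[M_T(Lambda)] = 1 for Gaussian eta_t (where the
# sub-Gaussian inequality is tight). x_t is adapted: it depends on eta_{t-1}.
# All parameter values are made up for illustration.
import math
import random

random.seed(2)
T, Lam, sigma, N = 10, 0.4, 1.0, 100000
total = 0.0
for _ in range(N):
    log_m, x = 0.0, 1.0
    for t in range(T):
        eta = random.gauss(0.0, sigma)            # N(0, sigma^2): sigma-sub-Gaussian
        log_m += (Lam * x * eta) / sigma - 0.5 * (Lam * x) ** 2
        x = math.tanh(eta)                        # next x_t is measurable w.r.t. the past
    total += math.exp(log_m)
mean = total / N
assert abs(mean - 1.0) < 0.05                     # E[M_T] = 1 exactly for Gaussian noise
```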
The following result generalizes the self-normalized martingale inequality
from Abbasi-Yadkori et al. (2011) to handle multiple matrix valued self-
normalized martingales.
###### Proposition B.1 (Generalized self-normalized martingale inequality)
Fix $H\in\mathbb{N}_{+}$. For $h\in[H]$, let $(x_{t}^{h},\eta_{t}^{h})_{t\geq
1}$ be a $\mathbb{R}^{d}\times\mathbb{R}^{m}$-valued process and
$(\mathcal{F}_{t}^{h})_{t\geq 1}$ be a filtration such that
$(x_{t}^{h})_{t\geq 1}$ is adapted to $(\mathcal{F}_{t}^{h})_{t\geq 1}$,
$(\eta_{t}^{h})_{t\geq 1}$ is adapted to $(\mathcal{F}_{t}^{h})_{t\geq 2}$,
and $(\eta_{t}^{h})_{t\geq 1}$ is a $\sigma$-sub-Gaussian martingale
difference sequence. Suppose that for all $h_{1}\neq h_{2}$, the process
$(x_{t}^{h_{1}},\eta_{t}^{h_{1}})$ is independent of
$(x_{t}^{h_{2}},\eta_{t}^{h_{2}})$. Fix (non-random) positive definite
matrices $\\{V^{h}\\}_{h=1}^{H}$. For $t\geq 1$ and $h\in[H]$, define:
$\displaystyle\bar{V}_{t}^{h}:=V^{h}+V^{h}_{t},\>\>V^{h}_{t}:=\sum_{i=1}^{t}x_{i}^{h}(x_{i}^{h})^{\top},\>\>S_{t}^{h}:=\sum_{i=1}^{t}x_{i}^{h}(\eta_{i}^{h})^{\top}.$
(18)
For any fixed $T\in\mathbb{N}_{+}$, with probability at least $1-\delta$:
$\displaystyle\sum_{h=1}^{H}\left\|(\bar{V}^{h}_{T})^{-1/2}S^{h}_{T}\right\|_{F}^{2}\leq
2\sigma^{2}\left[\sum_{h=1}^{H}\log\mathrm{det}((\bar{V}_{T}^{h})^{m/2}(V^{h})^{-m/2})+\log(1/\delta)\right].$
(19)
Proof By rescaling, we assume that $\sigma=1$ without loss of generality. For
$h\in[H]$, define:
$\displaystyle M_{T}^{h}(\Lambda):=\exp\left(\sum_{t=1}^{T}\langle\Lambda
x_{t}^{h},\eta_{t}^{h}\rangle-\frac{1}{2}\sum_{t=1}^{T}\left\|\Lambda
x_{t}^{h}\right\|^{2}\right),\>\>\Lambda\in\mathbb{R}^{m\times d}.$ (20)
By Lemma B.3, $\mathbb{E}[M_{T}^{h}(\Lambda)]\leq 1$ for any fixed $\Lambda$.
Now, we use the method of mixtures argument from Abbasi-Yadkori et al. (2011).
Let $\Gamma\in\mathbb{R}^{m\times d}$ be a random matrix whose rows are
i.i.d. $N(0,V^{-1})$ (with $V=V^{h}$ for the task $h$ under consideration),
independent of everything else. By the tower property,
$\mathbb{E}[M_{T}^{h}(\Gamma)]=\mathbb{E}[\mathbb{E}[M_{T}^{h}(\Gamma)\mid\Gamma]]\leq
1$. On the other hand, let us compute
$\mathbb{E}[M_{T}^{h}(\Gamma)\mid\mathcal{F}^{h}_{\infty}]$. Let $p(\Lambda)$
denote the density of $\Gamma$, and let $p(g)$ denote the density of a
$N(0,V_{0}^{-1})$ vector in $\mathbb{R}^{d}$. Let us momentarily drop the
superscript $h$, since the computation is identical for every $h$. First, we
note that the proof of Theorem 3 in Abbasi-Yadkori et al. (2011) shows the
following identity for any fixed $u\in\mathbb{R}^{d}$,
$V\in\mathbb{R}^{d\times d}$ positive semidefinite, and
$V_{0}\in\mathbb{R}^{d\times d}$ positive definite:
$\displaystyle\int\exp\mathopen{}\left(\langle
g,u\rangle-\frac{1}{2}\left\|g\right\|^{2}_{V}\right)\mathclose{}p(g)\,\mathrm{d}g=\mathopen{}\left(\frac{\mathrm{det}(V_{0})}{\mathrm{det}(V_{0}+V)}\right)\mathclose{}^{1/2}\exp\mathopen{}\left(\frac{1}{2}\left\|u\right\|^{2}_{(V+V_{0})^{-1}}\right)\mathclose{}.$
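This identity can be sanity-checked by quadrature in the scalar case $d=1$, taking $p(g)$ to be the $N(0,V_{0}^{-1})$ density; all numeric values below are illustrative:

```python
# Scalar (d = 1) check of the Gaussian integral identity via a Riemann sum.
# u, V, V0 are arbitrary illustrative values.
import math

u, V, V0 = 0.7, 2.0, 1.5
step, lhs = 1e-3, 0.0
for i in range(24001):                             # integrate over [-12, 12]
    g = -12.0 + i * step
    p = math.sqrt(V0 / (2 * math.pi)) * math.exp(-0.5 * V0 * g * g)  # N(0, 1/V0) density
    lhs += math.exp(g * u - 0.5 * V * g * g) * p * step
rhs = math.sqrt(V0 / (V0 + V)) * math.exp(0.5 * u * u / (V + V0))
assert abs(lhs - rhs) < 1e-4
```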
With this identity and independence of the rows of $\Gamma$:
$\displaystyle\mathbb{E}[M_{T}(\Gamma)\mid\mathcal{F}_{\infty}]$
$\displaystyle=\int\exp\mathopen{}\left(\langle\Lambda^{\top},S_{T}\rangle-\frac{1}{2}\left\|\Lambda^{\top}\right\|_{V_{T}}^{2}\right)\mathclose{}p(\Lambda)\,\mathrm{d}\Lambda$
$\displaystyle=\int\exp\mathopen{}\left(\sum_{i=1}^{m}\mathopen{}\left[\langle\Lambda^{\top}e_{i},S_{T}e_{i}\rangle-\frac{1}{2}\left\|\Lambda^{\top}e_{i}\right\|^{2}_{V_{T}}\right]\mathclose{}\right)\mathclose{}p(\Lambda)\,\mathrm{d}\Lambda$
$\displaystyle=\prod_{i=1}^{m}\int\exp\mathopen{}\left(\langle
g,S_{T}e_{i}\rangle-\frac{1}{2}\left\|g\right\|^{2}_{V_{T}}\right)\mathclose{}p(g)\,\mathrm{d}g$
$\displaystyle=\mathopen{}\left(\frac{\mathrm{det}(V)}{\mathrm{det}(V+V_{T})}\right)\mathclose{}^{m/2}\exp\mathopen{}\left(\frac{1}{2}\sum_{i=1}^{m}\left\|S_{T}e_{i}\right\|^{2}_{(V+V_{T})^{-1}}\right)\mathclose{}$
$\displaystyle=\mathopen{}\left(\frac{\mathrm{det}(V)}{\mathrm{det}(V+V_{T})}\right)\mathclose{}^{m/2}\exp\mathopen{}\left(\frac{1}{2}\left\|(V+V_{T})^{-1/2}S_{T}\right\|_{F}^{2}\right)\mathclose{}.$
That is, for every $h\in[H]$, we have:
$\displaystyle\mathbb{E}\left[\mathopen{}\left(\frac{\mathrm{det}(V^{h})}{\mathrm{det}(\bar{V}^{h}_{T})}\right)\mathclose{}^{m/2}\exp\mathopen{}\left(\frac{1}{2}\left\|(\bar{V}_{T}^{h})^{-1/2}S_{T}^{h}\right\|_{F}^{2}\right)\mathclose{}\right]\leq
1.$
Now by Markov’s inequality and independence of the processes across $h$:
$\displaystyle\operatorname{\mathbb{P}}\left(\sum_{h=1}^{H}\left\|(\bar{V}_{T}^{h})^{-1/2}S_{T}^{h}\right\|_{F}^{2}>2\left[\sum_{h=1}^{H}\log\mathrm{det}((\bar{V}_{T}^{h})^{m/2}(V^{h})^{-m/2})+\log(1/\delta)\right]\right)$
$\displaystyle=\operatorname{\mathbb{P}}\left(\exp\mathopen{}\left(\frac{1}{2}\sum_{h=1}^{H}\left\|(\bar{V}_{T}^{h})^{-1/2}S_{T}^{h}\right\|_{F}^{2}\right)\mathclose{}>\delta^{-1}\prod_{h=1}^{H}\mathopen{}\left(\frac{\mathrm{det}(\bar{V}_{T}^{h})}{\mathrm{det}(V^{h})}\right)\mathclose{}^{m/2}\right)$
$\displaystyle=\operatorname{\mathbb{P}}\left(\prod_{h=1}^{H}\mathopen{}\left(\frac{\mathrm{det}(V^{h})}{\mathrm{det}(\bar{V}_{T}^{h})}\right)\mathclose{}^{m/2}\exp\mathopen{}\left(\frac{1}{2}\left\|(\bar{V}_{T}^{h})^{-1/2}S_{T}^{h}\right\|_{F}^{2}\right)\mathclose{}>\delta^{-1}\right)$
$\displaystyle\leq\delta\mathbb{E}\left[\prod_{h=1}^{H}\mathopen{}\left(\frac{\mathrm{det}(V^{h})}{\mathrm{det}(\bar{V}_{T}^{h})}\right)\mathclose{}^{m/2}\exp\mathopen{}\left(\frac{1}{2}\left\|(\bar{V}_{T}^{h})^{-1/2}S_{T}^{h}\right\|_{F}^{2}\right)\mathclose{}\right]$
$\displaystyle=\delta\prod_{h=1}^{H}\mathbb{E}\left[\mathopen{}\left(\frac{\mathrm{det}(V^{h})}{\mathrm{det}(\bar{V}_{T}^{h})}\right)\mathclose{}^{m/2}\exp\mathopen{}\left(\frac{1}{2}\left\|(\bar{V}_{T}^{h})^{-1/2}S_{T}^{h}\right\|_{F}^{2}\right)\mathclose{}\right]$
$\displaystyle\leq\delta.$
###### Remark B.1
The original self-normalized martingale bound from Abbasi-Yadkori et al.
(2011) holds for an arbitrary stopping time, and consequently _uniformly_ for
all time $t\in\mathbb{N}$ by a simple reduction. In contrast, Proposition B.1
holds for a fixed $t\in\mathbb{N}$, which suffices for our purposes.
###### Lemma B.4 (source controller concentration)
Under the setting and data assumptions of Theorem 3.1, with probability at
least $1-\frac{\delta}{5}$,
$\displaystyle\sum_{h=1}^{H}\left\|\mathbf{X}^{(h)}\mathopen{}\left(\hat{F}^{(h)}\hat{\Phi}-F^{(h)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}\right\|_{F}^{2}\lesssim\sigma_{z}^{2}\mathopen{}\left(kn_{u}H+kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}+\log\mathopen{}\left(\frac{1}{\delta}\right)\mathclose{}\right)\mathclose{}.$
Proof Before we start the proof, we first define two subsets:
$\displaystyle\Theta_{1}$ $\displaystyle:=\\{(F_{1}\Phi,\dots,F_{H}\Phi)\mid
F_{i}\in\mathbb{R}^{n_{u}\times k},\>\>\Phi\in\mathbb{R}^{k\times n_{x}}\\},$
$\displaystyle\Theta_{2}(a,b)$
$\displaystyle:=\\{(F_{1}\Phi,\dots,F_{H}\Phi)\mid
F_{i}\in\mathbb{R}^{n_{u}\times
b},\>\>\Phi^{\top}\in\mathcal{O}_{a,b}\\},\>\>a\geq b.$
It is easy to see that $\Theta_{1}=\Theta_{2}(n_{x},k)$. Furthermore, a simple
argument shows that
$\Theta_{2}(n_{x},k)-\Theta_{2}(n_{x},k)\subset\Theta_{2}(n_{x},2k),$
where the minus sign indicates the Minkowski difference of two sets.
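The containment $\Theta_{2}(n_{x},k)-\Theta_{2}(n_{x},k)\subset\Theta_{2}(n_{x},2k)$ rests on re-expressing a difference $F_{1}\Phi_{1}-F_{2}\Phi_{2}$ over an orthonormal basis for the combined row spaces. A small pure-Python illustration with $k=n_{u}=1$ and $n_{x}=4$ (made-up values; not part of the proof):

```python
# Express f1*phi1 - f2*phi2 as F*Phi with Phi^T orthonormal of 2k = 2 rows,
# by Gram-Schmidt on the rows phi1, phi2. All data is illustrative.
import math
import random

random.seed(3)
nx = 4

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

phi1 = [random.gauss(0, 1) for _ in range(nx)]
phi2 = [random.gauss(0, 1) for _ in range(nx)]
f1, f2 = 1.7, -0.6                                 # n_u = 1, k = 1 for simplicity
# Gram-Schmidt the two rows into an orthonormal pair (q1, q2)
q1 = [x / math.sqrt(dot(phi1, phi1)) for x in phi1]
proj = dot(phi2, q1)
r = [x - proj * y for x, y in zip(phi2, q1)]
q2 = [x / math.sqrt(dot(r, r)) for x in r]
# coefficients of phi1, phi2 in the basis (q1, q2)
c1 = (dot(phi1, q1), dot(phi1, q2))
c2 = (dot(phi2, q1), dot(phi2, q2))
F = (f1 * c1[0] - f2 * c2[0], f1 * c1[1] - f2 * c2[1])   # new 1 x 2k factor
for j in range(nx):                                 # F*Phi reproduces f1*phi1 - f2*phi2
    delta = f1 * phi1[j] - f2 * phi2[j]
    assert abs(F[0] * q1[j] + F[1] * q2[j] - delta) < 1e-9
```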
Throughout this proof, we will condition on the following event, which holds
with probability at least $1-\frac{\delta}{10}$ by Lemma A.3:
$\displaystyle
0.9\Sigma_{x}^{(h)}\preceq\frac{1}{N_{1}T}\mathopen{}\left(\mathbf{X}^{(h)}\right)\mathclose{}^{\top}\mathbf{X}^{(h)}\preceq
1.1\Sigma_{x}^{(h)},\qquad\forall h\in[H].$ (21)
By optimality of $\hat{F}^{(1)},\hat{F}^{(2)},\dots,\hat{F}^{(H)}$ and
$\hat{\Phi}$ for Problem (1), we have that
$\displaystyle\sum_{h=1}^{H}\left\|\mathbf{U}^{(h)}-\mathbf{X}^{(h)}(\hat{F}^{(h)}\hat{\Phi})^{\top}\right\|_{F}^{2}\leq\sum_{h=1}^{H}\left\|\mathbf{U}^{(h)}-\mathbf{X}^{(h)}(F^{(h)}_{\star}\Phi_{\star})^{\top}\right\|_{F}^{2}.$
Substituting in
$\mathbf{U}^{(h)}=\mathbf{X}^{(h)}(F^{(h)}_{\star}\Phi_{\star})^{\top}+\mathbf{Z}^{(h)}$
and re-arranging yields the basic inequality:
$\displaystyle\sum_{h=1}^{H}\left\|\mathbf{X}^{(h)}(\hat{F}^{(h)}\hat{\Phi}-F^{(h)}_{\star}\Phi_{\star})^{\top}\right\|_{F}^{2}\leq
2\sum_{h=1}^{H}\operatorname{Tr}\mathopen{}\left(\mathopen{}\left(\mathbf{Z}^{(h)}\right)\mathclose{}^{\top}\mathbf{X}^{(h)}(F^{(h)}_{\star}\Phi_{\star}-\hat{F}^{(h)}\hat{\Phi})^{\top}\right)\mathclose{}.$
(22)
Let $\Delta^{(h)}:=\hat{F}^{(h)}\hat{\Phi}-F^{(h)}_{\star}\Phi_{\star}$.
Multiplying the basic inequality above by two and re-arranging again, we
obtain the _offset basic inequality_ (Liang et al., 2015):
$\displaystyle\sum_{h=1}^{H}\left\|\mathbf{X}^{(h)}(\Delta^{(h)})^{\top}\right\|^{2}_{F}$
$\displaystyle\leq\sum_{h=1}^{H}4\operatorname{Tr}\mathopen{}\left({(\mathbf{Z}^{(h)})^{\top}}{\mathbf{X}^{(h)}}{(\Delta^{(h)})^{\top}}\right)\mathclose{}-\left\|\mathbf{X}^{(h)}(\Delta^{(h)})^{\top}\right\|_{F}^{2}$
$\displaystyle\leq\sup_{\\{\Delta^{(h)}\\}_{h=1}^{H}\in\Theta_{1}-\Theta_{1}}\sum_{h=1}^{H}4\operatorname{Tr}\mathopen{}\left({(\mathbf{Z}^{(h)})^{\top}}{\mathbf{X}^{(h)}}{(\Delta^{(h)})^{\top}}\right)\mathclose{}-\left\|\mathbf{X}^{(h)}(\Delta^{(h)})^{\top}\right\|_{F}^{2}$
$\displaystyle=\sup_{\\{\Delta^{(h)}\\}_{h=1}^{H}\in\Theta_{2}(n_{x},k)-\Theta_{2}(n_{x},k)}\sum_{h=1}^{H}4\operatorname{Tr}\mathopen{}\left({(\mathbf{Z}^{(h)})^{\top}}{\mathbf{X}^{(h)}}{(\Delta^{(h)})^{\top}}\right)\mathclose{}-\left\|\mathbf{X}^{(h)}(\Delta^{(h)})^{\top}\right\|_{F}^{2}$
$\displaystyle\leq\sup_{\\{\Delta^{(h)}\\}_{h=1}^{H}\in\Theta_{2}(n_{x},2k)}\sum_{h=1}^{H}4\operatorname{Tr}\mathopen{}\left({(\mathbf{Z}^{(h)})^{\top}}{\mathbf{X}^{(h)}}{(\Delta^{(h)})^{\top}}\right)\mathclose{}-\left\|\mathbf{X}^{(h)}(\Delta^{(h)})^{\top}\right\|_{F}^{2}$
$\displaystyle=\sup_{\Phi^{\top}\in\mathcal{O}_{n_{x},2k}}\sum_{h=1}^{H}\sup_{F_{h}\in\mathbb{R}^{n_{u}\times
2k}}\left[4\operatorname{Tr}\mathopen{}\left({(\mathbf{Z}^{(h)})^{\top}}{\mathbf{X}^{(h)}}\Phi^{\top}F_{h}^{\top}\right)\mathclose{}-\left\|\mathbf{X}^{(h)}\Phi^{\top}F_{h}^{\top}\right\|_{F}^{2}\right]$
$\displaystyle=4\sup_{\Phi^{\top}\in\mathcal{O}_{n_{x},2k}}\sum_{h=1}^{H}\left\|(\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{X}^{(h)}\Phi^{\top})^{-1/2}\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{Z}^{(h)}\right\|_{F}^{2}.$
The last equality above follows from Fact B.1. We now derive an upper bound on the
sum for a fixed $\Phi^{\top}\in\mathcal{O}_{n_{x},2k}$. To do this, we invoke
Proposition B.1 with the trajectories within each task concatenated in
sequence: $V^{h}\leftarrow 0.9N_{1}T\Phi\Sigma_{x}^{(h)}\Phi^{\top}$,
$x_{t}^{h}\leftarrow\Phi x_{i}^{(h)}[t]$, and $\eta_{t}^{h}\leftarrow
z_{i}^{(h)}[t]$. Note that since
$\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{X}^{(h)}\Phi^{\top}\succeq V^{h}$, we
then have that:
$(\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{X}^{(h)}\Phi^{\top})^{-1}\preceq
2(\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{X}^{(h)}\Phi^{\top}+V^{h})^{-1}.$
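(The comparison of inverses used here, $A\succeq V\Rightarrow A^{-1}\preceq 2(A+V)^{-1}$, is standard; the following illustrative check verifies it on random $2\times 2$ positive definite matrices, with a small regularizer added purely for numerical stability.)

```python
# Check that A >= V implies A^{-1} <= 2 (A + V)^{-1} in the Loewner order,
# for random 2x2 positive definite matrices. Values are illustrative.
import random

random.seed(5)

def psd2():
    a, b, c = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
    # G^T G + 0.1 I for upper-triangular G: positive definite by construction
    return [[a * a + 0.1, a * b], [a * b, b * b + c * c + 0.1]]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def inv(X):
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

for _ in range(500):
    V = psd2()
    A = add(V, psd2())                          # A = V + (positive definite) >= V
    Ai, Si = inv(A), inv(add(A, V))
    D = [[2 * Si[i][j] - Ai[i][j] for j in range(2)] for i in range(2)]
    # symmetric 2x2 D is PSD iff trace(D) >= 0 and det(D) >= 0
    assert D[0][0] + D[1][1] >= -1e-9
    assert D[0][0] * D[1][1] - D[0][1] * D[1][0] >= -1e-9
```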
Consequently, with probability at least $1-\delta^{\prime},$
$\displaystyle\sum_{h=1}^{H}$
$\displaystyle\left\|(\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{X}^{(h)}\Phi^{\top})^{-1/2}\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{Z}^{(h)}\right\|_{F}^{2}$
$\displaystyle\lesssim\sum_{h=1}^{H}\left\|(\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{X}^{(h)}\Phi^{\top}+V^{h})^{-1/2}\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{Z}^{(h)}\right\|_{F}^{2}$
$\displaystyle\lesssim\sigma^{2}\mathopen{}\left[\sum_{h=1}^{H}\log\mathrm{det}\mathopen{}\left(\mathopen{}\left(\Phi\mathopen{}\left(\mathbf{X}^{(h)}\right)\mathclose{}^{\top}\mathbf{X}^{(h)}\Phi^{\top}+V^{h}\right)\mathclose{}^{n_{u}/2}\mathopen{}\left(V^{h}\right)\mathclose{}^{-n_{u}/2}\right)\mathclose{}+\log(1/\delta^{\prime})\right]\mathclose{}$
$\displaystyle=\sigma^{2}\mathopen{}\left[\frac{n_{u}}{2}\sum_{h=1}^{H}\log\mathrm{det}\mathopen{}\left(\mathopen{}\left(\Phi\mathopen{}\left(\mathbf{X}^{(h)}\right)\mathclose{}^{\top}\mathbf{X}^{(h)}\Phi^{\top}+V^{h}\right)\mathclose{}\mathopen{}\left(V^{h}\right)\mathclose{}^{-1}\right)\mathclose{}+\log(1/\delta^{\prime})\right]\mathclose{}$
$\displaystyle=\sigma^{2}\mathopen{}\left[\frac{n_{u}}{2}\sum_{h=1}^{H}\log\mathrm{det}\mathopen{}\left(\frac{1}{0.9N_{1}T}\mathopen{}\left(\Phi\mathopen{}\left(\mathbf{X}^{(h)}\right)\mathclose{}^{\top}\mathbf{X}^{(h)}\Phi^{\top}\right)\mathclose{}\mathopen{}\left(\Phi\Sigma_{x}^{(h)}\Phi^{\top}\right)\mathclose{}^{-1}+I_{2k}\right)\mathclose{}+\log(1/\delta^{\prime})\right]\mathclose{}$
$\displaystyle\leq\sigma^{2}\mathopen{}\left[\frac{n_{u}}{2}\sum_{h=1}^{H}\log\mathrm{det}\mathopen{}\left(\mathopen{}\left(\frac{1.1}{0.9}+1\right)\mathclose{}I_{2k}\right)\mathclose{}+\log(1/\delta^{\prime})\right]\mathclose{}$
$\displaystyle\lesssim\sigma_{z}^{2}(kHn_{u}+\log(1/\delta^{\prime})).$ (23)
This holds for a fixed $\Phi^{\top}\in\mathcal{O}_{n_{x},2k}$, so it remains
to union bound over $\mathcal{O}_{n_{x},2k}$. Let
$\\{\Phi_{i}^{\top}\\}_{i=1}^{N}$ be an $\varepsilon$-net of
$\mathcal{O}_{n_{x},2k}$ in the spectral norm of resolution to be determined.
For $\Phi^{\top}\in\mathcal{O}_{n_{x},2k}$, let $\Phi_{i}^{\top}$ denote the
nearest element in the cover. By triangle inequality and $(a+b)^{2}\leq
2(a^{2}+b^{2})$:
$\displaystyle\sum_{h=1}^{H}$
$\displaystyle\left\|(\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{X}^{(h)}\Phi^{\top})^{-1/2}\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{Z}^{(h)}\right\|_{F}^{2}$
$\displaystyle=\sum_{h=1}^{H}\left\|P_{\mathbf{X}^{(h)}\Phi}\mathbf{Z}^{(h)}\right\|_{F}^{2}$
$\displaystyle\leq
2\sum_{h=1}^{H}\left\|(P_{\mathbf{X}^{(h)}\Phi}-P_{\mathbf{X}^{(h)}\Phi_{i}})\mathbf{Z}^{(h)}\right\|_{F}^{2}+2\sum_{h=1}^{H}\left\|P_{\mathbf{X}^{(h)}\Phi_{i}}\mathbf{Z}^{(h)}\right\|^{2}_{F}$
$\displaystyle\leq
2\sum_{h=1}^{H}\left\|P_{\mathbf{X}^{(h)}\Phi}-P_{\mathbf{X}^{(h)}\Phi_{i}}\right\|^{2}_{2}\left\|\mathbf{Z}^{(h)}\right\|_{F}^{2}+2\sum_{h=1}^{H}\left\|P_{\mathbf{X}^{(h)}\Phi_{i}}\mathbf{Z}^{(h)}\right\|^{2}_{F}.$
By Lemma B.2,
$\displaystyle\left\|P_{\mathbf{X}^{(h)}\Phi}-P_{\mathbf{X}^{(h)}\Phi_{i}}\right\|$
$\displaystyle\leq\left\|(\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{X}^{(h)}\Phi^{\top})^{-1}\Phi(\mathbf{X}^{(h)})^{\top}\right\|\left\|\mathbf{X}^{(h)}(\Phi-\Phi_{i})\right\|$
$\displaystyle\leq\frac{1.1}{0.9}\frac{\lambda_{\max}(\Sigma_{x}^{(h)})}{\lambda_{\min}(\Sigma_{x}^{(h)})}\varepsilon\lesssim\frac{\overline{\lambda}}{\underline{\lambda}}\varepsilon.$
Next, we need to bound
$\sum_{h=1}^{H}\left\|\mathbf{Z}^{(h)}\right\|^{2}_{F}$. Because each
$z_{i}^{(h)}[t]$ is i.i.d. $\mathcal{N}(0,\sigma_{z}^{2}I)$, this sum is
distributed as $\sigma_{z}^{2}\psi$, where $\psi$ is a $\chi^{2}$ random
variable with $HN_{1}Tn_{u}$ degrees of freedom. Hence by standard $\chi^{2}$
tail bounds, with probability at least $1-\delta/20$,
$\sum_{h=1}^{H}\left\|\mathbf{Z}^{(h)}\right\|_{F}^{2}\lesssim\sigma_{z}^{2}(HN_{1}Tn_{u}+\log(1/\delta))$.
This prompts us to select
$\varepsilon\lesssim\frac{k}{N_{1}T\overline{\lambda}/\underline{\lambda}}$,
from which we conclude:
$\sum_{h=1}^{H}\left\|P_{\mathbf{X}^{(h)}\Phi}-P_{\mathbf{X}^{(h)}\Phi_{i}}\right\|^{2}\left\|\mathbf{Z}^{(h)}\right\|_{F}^{2}\lesssim\sigma_{z}^{2}kHn_{u}.$
By Lemma B.1, we can then bound the size of the $\varepsilon$-covering of
$\mathcal{O}_{n_{x},2k}$ by:
$N\leq\mathopen{}\left(\frac{c\sqrt{k}}{k}N_{1}T\frac{\overline{\lambda}}{\underline{\lambda}}\right)\mathclose{}^{2n_{x}k}.$
So now we take $\delta^{\prime}=\delta/(20N)$, and conclude. In particular, by
union bounding over the elements of
$\mathopen{}\left\\{\Phi_{i}\right\\}\mathclose{}_{i=1}^{N}$, we have that
with probability at least $1-\frac{\delta}{5}$,
$\displaystyle\sup_{\Phi^{\top}\in\mathcal{O}_{n_{x},2k}}$
$\displaystyle\sum_{h=1}^{H}\left\|(\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{X}^{(h)}\Phi^{\top})^{-1/2}\Phi(\mathbf{X}^{(h)})^{\top}\mathbf{Z}^{(h)}\right\|_{F}^{2}$
$\displaystyle\lesssim\sigma_{z}^{2}\mathopen{}\left(kHn_{u}+n_{x}k\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}+\log\mathopen{}\left(\frac{1}{\delta}\right)\mathclose{}\right)\mathclose{}.$
The probability of $1-\frac{\delta}{5}$ arises from union bounding over
Equation (21), which holds with probability at least $1-\frac{\delta}{10}$,
the bound on $\sum_{h=1}^{H}\left\|\mathbf{Z}^{(h)}\right\|_{F}^{2}$ which
holds with probability at least $1-\frac{\delta}{20}$, and the bound in
Equation (23), which holds for all
$\Phi_{i}\in\mathopen{}\left\\{\Phi_{i}\right\\}\mathclose{}_{i=1}^{N}$ with
probability at least $1-\frac{\delta}{20}$.
###### Lemma B.5 (Lemma A.7 in Du et al. (2020))
Let $A_{1}$ and $A_{2}$ be matrices with the same number of columns that
satisfy $A_{1}^{\top}A_{1}\succeq A_{2}^{\top}A_{2}$. Then for any matrix $B$,
we have
$A_{1}^{\top}P_{A_{1}B}^{\perp}A_{1}\succeq
A_{2}^{\top}P_{A_{2}B}^{\perp}A_{2}.$
As a consequence, for any matrices $B$ and $B^{\prime}$, we have
$\left\|P_{A_{1}B}^{\perp}A_{1}B^{\prime}\right\|_{F}^{2}\geq\left\|P_{A_{2}B}^{\perp}A_{2}B^{\prime}\right\|_{F}^{2}.$
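This consequence can be checked numerically in a small case: taking $A_{1}$ to be $A_{2}$ with one extra row appended guarantees $A_{1}^{\top}A_{1}\succeq A_{2}^{\top}A_{2}$, and rank-one $B,B^{\prime}$ keep the projections easy to compute by hand. An illustrative pure-Python check (made-up dimensions):

```python
# Check ||P^perp_{A1 b} A1 b'||^2 >= ||P^perp_{A2 b} A2 b'||^2 when
# A1 is A2 with an extra row (so A1^T A1 >= A2^T A2). Data is illustrative.
import random

random.seed(4)

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def perp_norm2(v, w):
    # ||P_v^perp w||^2 = ||w||^2 - (v.w)^2 / ||v||^2 for rank-one projections
    vv = sum(x * x for x in v)
    vw = sum(x * y for x, y in zip(v, w))
    return sum(x * x for x in w) - vw * vw / vv

for _ in range(500):
    A2 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(3)]
    A1 = A2 + [[random.gauss(0, 1) for _ in range(2)]]  # A1^T A1 = A2^T A2 + r r^T
    b = [random.gauss(0, 1) for _ in range(2)]
    bp = [random.gauss(0, 1) for _ in range(2)]
    lhs = perp_norm2(matvec(A1, b), matvec(A1, bp))
    rhs = perp_norm2(matvec(A2, b), matvec(A2, bp))
    assert lhs >= rhs - 1e-9
```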
###### Lemma B.6 (target controller concentration)
Under the setting and data assumptions of Theorem 3.1, with probability at
least $1-\frac{2\delta}{5}$,
$\displaystyle\frac{1}{N_{2}T}$
$\displaystyle\left\|\mathbf{X}^{(H+1)}\mathopen{}\left(\hat{F}^{(H+1)}\hat{\Phi}-F^{(H+1)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}\right\|_{F}^{2}\lesssim\bar{\sigma}_{z}^{2}\mathopen{}\left(\frac{kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}}{cN_{1}TH}+\frac{kn_{u}+\log(\frac{1}{\delta})}{N_{2}T}\right)\mathclose{}.$
Proof By the optimality of $\hat{\Phi},\hat{F}^{(1)},\dots,\hat{F}^{(H)}$ for
Problem (1), we know that
$\mathopen{}\left(\hat{F}^{(h)}\right)\mathclose{}^{\top}=\mathopen{}\left(\hat{\Phi}\mathopen{}\left(\mathbf{X}^{(h)}\right)\mathclose{}^{\top}\mathbf{X}^{(h)}\hat{\Phi}^{\top}\right)\mathclose{}^{-1}\hat{\Phi}\mathopen{}\left(\mathbf{X}^{(h)}\right)\mathclose{}^{\top}\mathbf{U}^{(h)}.$
Therefore,
$\mathbf{X}^{(h)}\mathopen{}\left(\hat{F}^{(h)}\hat{\Phi}\right)\mathclose{}^{\top}=P_{\mathbf{X}^{(h)}\hat{\Phi}^{\top}}\mathbf{U}^{(h)}=P_{\mathbf{X}^{(h)}\hat{\Phi}^{\top}}\mathopen{}\left(\mathbf{X}^{(h)}\mathopen{}\left(F^{(h)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}+\mathbf{Z}^{(h)}\right)\mathclose{}$.
Then by applying Lemma B.4, we have that with probability at least
$1-\frac{\delta}{5}$,
$\displaystyle\bar{\sigma}_{z}^{2}$
$\displaystyle\mathopen{}\left(kn_{u}H+kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}+\log\mathopen{}\left(\frac{1}{\delta}\right)\mathclose{}\right)\mathclose{}\gtrsim\sum_{h=1}^{H}\left\|\mathbf{X}^{(h)}\mathopen{}\left(\hat{F}^{(h)}\hat{\Phi}-F^{(h)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}\right\|_{F}^{2}$
$\displaystyle=\sum_{h=1}^{H}\left\|P_{\mathbf{X}^{(h)}\hat{\Phi}^{\top}}\mathopen{}\left(\mathbf{X}^{(h)}\mathopen{}\left(F^{(h)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}+\mathbf{Z}^{(h)}\right)\mathclose{}-\mathbf{X}^{(h)}\mathopen{}\left(F^{(h)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}\right\|_{F}^{2}$
$\displaystyle=\sum_{h=1}^{H}\left\|(P_{\mathbf{X}^{(h)}\hat{\Phi}^{\top}}-I)\mathbf{X}^{(h)}\mathopen{}\left(F^{(h)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}+P_{\mathbf{X}^{(h)}\hat{\Phi}^{\top}}\mathbf{Z}^{(h)}\right\|_{F}^{2}$
$\displaystyle=\sum_{h=1}^{H}\left\|P_{\mathbf{X}^{(h)}\hat{\Phi}^{\top}}^{\perp}\mathbf{X}^{(h)}\mathopen{}\left(F^{(h)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}\right\|_{F}^{2}+\left\|P_{\mathbf{X}^{(h)}\hat{\Phi}^{\top}}\mathbf{Z}^{(h)}\right\|_{F}^{2}$
$\displaystyle\geq\sum_{h=1}^{H}\left\|P_{\mathbf{X}^{(h)}\hat{\Phi}^{\top}}^{\perp}\mathbf{X}^{(h)}\mathopen{}\left(F^{(h)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}\right\|_{F}^{2}$
$\displaystyle\overset{(i)}{\geq}0.9N_{1}T\sum_{h=1}^{H}\left\|P_{\mathopen{}\left(\Sigma^{(h)}\right)\mathclose{}^{1/2}\hat{\Phi}^{\top}}^{\perp}\mathopen{}\left(\Sigma^{(h)}\right)\mathclose{}^{1/2}\mathopen{}\left(F^{(h)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}\right\|_{F}^{2}$
$\displaystyle\overset{(ii)}{\geq}0.9cN_{1}T\sum_{h=1}^{H}\left\|P_{\mathopen{}\left(\Sigma^{(H+1)}\right)\mathclose{}^{1/2}\hat{\Phi}^{\top}}^{\perp}\mathopen{}\left(\Sigma^{(H+1)}\right)\mathclose{}^{1/2}\mathopen{}\left(F^{(h)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}\right\|_{F}^{2}$
where $(i)$ follows from the inequality in Lemma A.3 combined with Lemma B.5,
while $(ii)$ follows from the inequality in (5) (note that we have already
conditioned on this event through Lemma B.4) combined with Lemma B.5. If we
define
$\displaystyle\mathbf{F}_{\star}=\begin{bmatrix}F_{\star}^{(1)}\\\ \vdots\\\
F_{\star}^{(H)}\end{bmatrix},$
we can write the above result concisely as
$\displaystyle\bar{\sigma}_{z}^{2}$
$\displaystyle\mathopen{}\left(kn_{u}H+kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}+\log\mathopen{}\left(\frac{1}{\delta}\right)\mathclose{}\right)\mathclose{}\gtrsim
0.9cN_{1}T\left\|P_{\mathopen{}\left(\Sigma^{(H+1)}\right)\mathclose{}^{1/2}\hat{\Phi}^{\top}}^{\perp}\mathopen{}\left(\Sigma^{(H+1)}\right)\mathclose{}^{1/2}\Phi_{\star}^{\top}\mathbf{F}_{\star}^{\top}\right\|_{F}^{2}$
(24)
Now, letting
$\Phi=\begin{bmatrix}\hat{\Phi}^{\top}&\Phi_{\star}^{\top}\end{bmatrix}$,
Lemma A.4 gives us
$\frac{1}{N_{2}T}\Phi^{\top}\mathopen{}\left(\mathbf{X}^{(H+1)}\right)\mathclose{}^{\top}\mathbf{X}^{(H+1)}\Phi\preceq
1.1\Phi^{\top}\Sigma_{x}^{(H+1)}\Phi$. Combining with Lemma B.5, we have that
with probability at least $1-\frac{\delta}{10}$,
$\displaystyle
1.1\left\|P_{\mathopen{}\left(\Sigma^{(H+1)}\right)\mathclose{}^{1/2}\hat{\Phi}^{\top}}^{\perp}\mathopen{}\left(\Sigma^{(H+1)}\right)\mathclose{}^{1/2}\Phi_{\star}^{\top}\mathbf{F}_{\star}^{\top}\right\|_{F}^{2}\geq\frac{1}{N_{2}T}\left\|P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}^{\perp}\mathbf{X}^{(H+1)}\Phi_{\star}^{\top}\mathbf{F}_{\star}^{\top}\right\|_{F}^{2}.$
Plugging this in above, we have that with probability at least
$1-\frac{2\delta}{5}$,
$\displaystyle\bar{\sigma}_{z}^{2}$
$\displaystyle\mathopen{}\left(kn_{u}H+kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}+\log\mathopen{}\left(\frac{1}{\delta}\right)\mathclose{}\right)\mathclose{}\gtrsim\frac{cN_{1}}{N_{2}}\left\|P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}^{\perp}\mathbf{X}^{(H+1)}\Phi_{\star}^{\top}\mathbf{F}_{\star}^{\top}\right\|_{F}^{2}.$
To conclude the proof, note that
$\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}\mathopen{}\left(\hat{F}^{(H+1)}\right)\mathclose{}^{\top}=P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}\mathbf{U}^{(H+1)}$,
and define
$\mathbf{F}_{\star}^{\dagger}:=(\mathbf{F}_{\star}^{\top}\mathbf{F}_{\star})^{-1}\mathbf{F}_{\star}^{\top}$,
so that $\mathbf{F}_{\star}^{\dagger}\mathbf{F}_{\star}=I_{k}$. Thus
$\displaystyle\frac{1}{N_{2}T}{\left\|\mathbf{X}^{(H+1)}\mathopen{}\left(\hat{F}^{(H+1)}\hat{\Phi}-F^{(H+1)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}\right\|_{F}^{2}}$
$\displaystyle=\frac{1}{N_{2}T}{\left\|P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}^{\perp}\mathbf{X}^{(H+1)}\mathopen{}\left(F^{(H+1)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}+P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}\mathbf{Z}^{(H+1)}\right\|_{F}^{2}}$
$\displaystyle\leq\frac{1}{N_{2}T}{\left\|P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}^{\perp}\mathbf{X}^{(H+1)}\mathopen{}\left(F^{(H+1)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}\right\|_{F}^{2}}+\frac{1}{N_{2}T}\left\|P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}\mathbf{Z}^{(H+1)}\right\|_{F}^{2}$
$\displaystyle=\frac{1}{N_{2}T}{\operatorname{Tr}\mathopen{}\left(P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}^{\perp}\mathbf{X}^{(H+1)}\Phi_{\star}^{\top}\mathopen{}\left(F^{(H+1)}_{\star}\right)\mathclose{}^{\top}F^{(H+1)}_{\star}\Phi_{\star}\mathopen{}\left(\mathbf{X}^{(H+1)}\right)\mathclose{}^{\top}P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}^{\perp}\right)\mathclose{}}$
$\displaystyle\qquad\qquad+\frac{1}{N_{2}T}\left\|P_{X^{(H+1)}\hat{\Phi}^{\top}}\mathbf{Z}^{(H+1)}\right\|_{F}^{2}$
$\displaystyle=\frac{1}{N_{2}T}{\operatorname{Tr}\mathopen{}\left(P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}^{\perp}\mathbf{X}^{(H+1)}\Phi_{\star}^{\top}\mathbf{F}_{\star}^{\top}\mathopen{}\left(\mathbf{F}_{\star}^{\dagger}\right)\mathclose{}^{\top}\mathopen{}\left(F^{(H+1)}_{\star}\right)\mathclose{}^{\top}F^{(H+1)}_{\star}\mathbf{F}_{\star}^{\dagger}\mathbf{F}_{\star}\Phi_{\star}\mathopen{}\left(\mathbf{X}^{(H+1)}\right)\mathclose{}^{\top}P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}^{\perp}\right)\mathclose{}}$
$\displaystyle\qquad\qquad+\frac{1}{N_{2}T}\left\|P_{X^{(H+1)}\hat{\Phi}^{\top}}\mathbf{Z}^{(H+1)}\right\|_{F}^{2}$
$\displaystyle\leq\frac{1}{N_{2}T}\left\|\mathopen{}\left(\mathbf{F}^{\dagger}_{\star}\right)\mathclose{}^{\top}{\mathopen{}\left(F^{(H+1)}_{\star}\right)\mathclose{}^{\top}F^{(H+1)}_{\star}}\mathbf{F}^{\dagger}_{\star}\right\|\operatorname{Tr}\mathopen{}\left(P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}^{\perp}\mathbf{X}^{(H+1)}\Phi_{\star}^{\top}\mathbf{F}_{\star}^{\top}\mathbf{F}_{\star}\Phi_{\star}\mathopen{}\left(\mathbf{X}^{(H+1)}\right)\mathclose{}^{\top}P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}^{\perp}\right)\mathclose{}$
$\displaystyle\qquad\qquad+\frac{1}{N_{2}T}\left\|P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}\mathbf{Z}^{(H+1)}\right\|_{F}^{2}$
$\displaystyle\lesssim\frac{1}{N_{2}T}\left\|F^{(H+1)}_{\star}\mathbf{F}_{\star}^{\dagger}\right\|^{2}\left\|P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}^{\perp}\mathbf{X}^{(H+1)}\Phi_{\star}^{\top}\mathbf{F}_{\star}^{\top}\right\|_{F}^{2}+\frac{1}{N_{2}T}\left\|P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}\mathbf{Z}^{(H+1)}\right\|_{F}^{2}$
$\displaystyle\lesssim\frac{1}{cN_{1}TH}\bar{\sigma}_{z}^{2}\mathopen{}\left(kn_{u}H+kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}+\log\mathopen{}\left(\frac{1}{\delta}\right)\mathclose{}\right)\mathclose{}+\frac{1}{N_{2}T}\left\|P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}\mathbf{Z}^{(H+1)}\right\|_{F}^{2}$
where the last line follows from the inequality in (24) and applying
Assumption 3.1 on $\left\|F^{(H+1)}_{\star}\mathbf{F}_{\star}^{\dagger}\right\|^{2}$.
As $\hat{\Phi}$ is independent of $\mathbf{X}^{(H+1)}$ and
$\mathbf{Z}^{(H+1)}$, the second term may be bounded by applying Proposition
B.1 as in Equation (23). In particular, with probability at least
$1-\frac{\delta}{5}$,
$\left\|P_{\mathbf{X}^{(H+1)}\hat{\Phi}^{\top}}\mathbf{Z}^{(H+1)}\right\|_{F}^{2}\lesssim\sigma_{z}^{2}(kn_{u}+\log(1/\delta)).$
Therefore,
$\displaystyle\frac{1}{N_{2}T}\left\|\mathbf{X}^{(H+1)}\mathopen{}\left(\hat{F}^{(H+1)}\hat{\Phi}-F^{(H+1)}_{\star}\Phi_{\star}\right)\mathclose{}^{\top}\right\|_{F}^{2}$
$\displaystyle\lesssim\frac{1}{cN_{1}TH}\sigma_{z}^{2}\mathopen{}\left(kn_{u}H+kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}+\log\mathopen{}\left(\frac{1}{\delta}\right)\mathclose{}\right)\mathclose{}+\frac{\sigma_{z}^{2}kn_{u}+\sigma_{z}^{2}\log(\frac{1}{\delta})}{N_{2}T}$
$\displaystyle=\sigma_{z}^{2}\mathopen{}\left(\frac{kn_{u}H}{cN_{1}TH}+\frac{kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}}{cN_{1}TH}+\frac{\log(1/\delta)}{cN_{1}TH}+\frac{kn_{u}+\log(\frac{1}{\delta})}{N_{2}T}\right)\mathclose{}$
$\displaystyle\lesssim\sigma_{z}^{2}\mathopen{}\left(\frac{kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}}{cN_{1}TH}+\frac{kn_{u}+\log(\frac{1}{\delta})}{N_{2}T}\right)\mathclose{}.$
###### Theorem 3.1 (restated)
Proof The excess risk may be written as follows:
$\displaystyle\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})$
$\displaystyle=\frac{1}{2T}\mathbb{E}_{x[0],w[0],\dots,w[T-1]}\mathopen{}\left[\sum_{t=0}^{T-1}\left\|(F^{(H+1)}_{\star}\Phi_{\star}-\hat{F}^{(H+1)}\hat{\Phi})x[t]\right\|_{2}^{2}\right]\mathclose{}$
$\displaystyle=\frac{1}{2T}\mathbb{E}_{x[0],w[0],\dots,w[T-1]}\mathopen{}\left[\sum_{t=0}^{T-1}\operatorname{Tr}\mathopen{}\left(\mathopen{}\left(F^{(H+1)}_{\star}\Phi_{\star}-\hat{F}^{(H+1)}\hat{\Phi}\right)\mathclose{}x[t]x[t]^{\top}\mathopen{}\left(F^{(H+1)}_{\star}\Phi_{\star}-\hat{F}^{(H+1)}\hat{\Phi}\right)\mathclose{}^{\top}\right)\mathclose{}\right]\mathclose{}$
$\displaystyle=\frac{1}{2T}\sum_{t=0}^{T-1}\operatorname{Tr}\mathopen{}\left(\mathopen{}\left(F^{(H+1)}_{\star}\Phi_{\star}-\hat{F}^{(H+1)}\hat{\Phi}\right)\mathclose{}\Sigma_{x}^{(H+1)}\mathopen{}\left(F^{(H+1)}_{\star}\Phi_{\star}-\hat{F}^{(H+1)}\hat{\Phi}\right)\mathclose{}^{\top}\right)\mathclose{}$
$\displaystyle=\frac{1}{2}\operatorname{Tr}\mathopen{}\left(\mathopen{}\left(F^{(H+1)}_{\star}\Phi_{\star}-\hat{F}^{(H+1)}\hat{\Phi}\right)\mathclose{}\Sigma_{x}^{(H+1)}\mathopen{}\left(F^{(H+1)}_{\star}\Phi_{\star}-\hat{F}^{(H+1)}\hat{\Phi}\right)\mathclose{}^{\top}\right)\mathclose{}.$
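The exchange of expectation and trace above uses $\mathbb{E}\left\|Ax\right\|^{2}=\operatorname{Tr}(A\Sigma A^{\top})$ for $x\sim\mathcal{N}(0,\Sigma)$; a quick Monte Carlo illustration with made-up $2\times 2$ data:

```python
# Monte Carlo check of E||A x||^2 = Tr(A Sigma A^T) for x ~ N(0, Sigma).
# The covariance factor L and the matrix A are illustrative values.
import random

random.seed(6)
L = [[1.0, 0.0], [0.7, 0.5]]                 # Sigma = L L^T
A = [[0.3, -1.2], [2.0, 0.4]]
Sigma = [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
# Tr(A Sigma A^T)
ASA = sum(A[i][k] * Sigma[k][l] * A[i][l] for i in range(2) for k in range(2) for l in range(2))
N, acc = 200000, 0.0
for _ in range(N):
    g = [random.gauss(0, 1), random.gauss(0, 1)]
    x = [L[0][0] * g[0] + L[0][1] * g[1], L[1][0] * g[0] + L[1][1] * g[1]]  # x ~ N(0, Sigma)
    acc += sum((A[i][0] * x[0] + A[i][1] * x[1]) ** 2 for i in range(2))
mc = acc / N
assert abs(mc - ASA) / ASA < 0.05            # Monte Carlo mean matches the trace
```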
Then by applying the inequality in Lemma A.4 with
$\Phi=\begin{bmatrix}\hat{\Phi}^{\top}&\Phi_{\star}^{\top}\end{bmatrix}$, we have
$\displaystyle
0.9\Phi^{\top}\Sigma_{x}^{(H+1)}\Phi\preceq\frac{1}{N_{2}T}\Phi^{\top}\mathopen{}\left(\mathbf{X}^{(H+1)}\right)\mathclose{}^{\top}\mathbf{X}^{(H+1)}\Phi.$
Using this result above, we have that
$\displaystyle\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})$
$\displaystyle\leq\frac{1}{1.8N_{2}T}\operatorname{Tr}\mathopen{}\left(\mathopen{}\left(F^{(H+1)}_{\star}\Phi_{\star}-\hat{F}^{(H+1)}\hat{\Phi}\right)\mathclose{}\mathopen{}\left(\mathbf{X}^{(H+1)}\right)\mathclose{}^{\top}\mathbf{X}^{(H+1)}\mathopen{}\left(F^{(H+1)}_{\star}\Phi_{\star}-\hat{F}^{(H+1)}\hat{\Phi}\right)\mathclose{}^{\top}\right)\mathclose{}$
$\displaystyle=\frac{1}{1.8N_{2}T}\left\|\mathbf{X}^{(H+1)}\mathopen{}\left(F^{(H+1)}_{\star}\Phi_{\star}-\hat{F}^{(H+1)}\hat{\Phi}\right)\mathclose{}^{\top}\right\|_{F}^{2}.$
The claim then follows by application of Lemma B.6.
## Appendix C Bounding the imitation gap
We recall that the issue with translating a bound on the excess risk of the
target task $\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})$ into a bound on the
tracking error between the closed-loop learned and expert trajectories comes
from the fundamental distribution shift between the expert trajectories seen
during training and the trajectories generated by running the learned
controller in closed-loop. This has traditionally been a difficult problem to
analyze due to the circular nature of requiring the feedback controller error
to be small to guarantee small state deviation, which depends on the state
deviation being small to begin with.
Toward addressing this issue, recent work by Pfrommer et al. (2022) proposes a
theoretical framework for non-linear systems to bound the tracking error by
the imitation error via matching the higher-order input/state derivatives
$D^{p}_{x}\pi(x)$ of the expert controller. For linear systems, matching the
Jacobian is sufficient, where we note the Jacobian of a linear controller
$u[t]=Kx[t]$ is simply $K$. Furthermore, a generalization bound on the
empirical risk of a learned controller such as Theorem 3.1 implicitly bounds
the controller error $\left\|\hat{K}-K^{(H+1)}\right\|$.
However, the framework described in Pfrommer et al. (2022) crucially relies on
assuming a noiseless system, whereas the problem considered in this work is
only non-trivial in the presence of process noise. Furthermore, due to the
lack of excitatory noise and the generality of non-linear systems, the
tracking error bounds in Pfrommer et al. (2022) scale with
$\tilde{\mathcal{O}}\mathopen{}\left(\frac{\mathrm{\#param}}{N_{2}\delta^{\prime}}\right)\mathclose{}$,
where $\delta^{\prime}$ is the failure probability over a new trajectory. To
that end, the main goal of this section is to introduce bounds on the tracking
error that scale multiplicatively with the favorable generalization bounds on
the excess risk, which improve with the trajectory length $T$, and also
improving the system-agnostic Markov scaling $1/\delta^{\prime}$ to
$\log(1/\delta^{\prime})$ in our linear systems setting.
Toward proving a bound on the imitation gap, we first introduce the notion of
incremental stochastic input-to-state stability ($\delta$-SISS). We recall
definitions of standard comparison functions (Khalil, 1996): a function
$\gamma(x)$ is class $\mathcal{K}$ if it is continuous, strictly increasing,
and satisfies $\gamma(0)=0$. A function $\beta(x,t)$ is class
$\mathcal{K}\mathcal{L}$ if it is continuous, $\beta(\cdot,t)$ is class
$\mathcal{K}$ for each fixed $t$, and $\beta(x,\cdot)$ decreases to zero as
$t\to\infty$ for each fixed $x$.
###### Definition C.1 ($\delta$SISS)
Consider the discrete-time control system $x[t+1]=f(x[t],u[t],w[t])$ subject
to input perturbations $\Delta[t]$ and additive zero-mean process noise
$\mathopen{}\left\\{w[t]\right\\}\mathclose{}_{t\geq 0}$. The system
$f(x[t],u[t],w[t])$ is _incrementally stochastic input-to-state stable_
($\delta$-SISS) if for all initial conditions $x[0],y[0]\in\mathcal{X}$,
perturbation sequences
$\mathopen{}\left\\{\Delta[t]\right\\}\mathclose{}_{t\geq 0}$ and noise
realizations $\mathopen{}\left\\{w[t]\right\\}\mathclose{}_{t\geq 0}$ there
exists a class $\mathcal{K}\mathcal{L}$ function $\beta$ and a class
$\mathcal{K}$ function $\gamma$ such that the trajectories
$x[t+1]=f(x[t],u[t],w[t])$ and $y[t+1]=f(y[t],u[t]+\Delta[t],w[t])$ satisfy
for all $t\in\mathbb{N}$
$\displaystyle\left\|x[t]-y[t]\right\|$
$\displaystyle\leq\beta\mathopen{}\left(\left\|x[0]-y[0]\right\|,t\right)\mathclose{}+\gamma\mathopen{}\left(\max_{0\leq
k\leq t-1}\left\|\Delta[k]\right\|\right)\mathclose{}.$ (25)
###### Lemma C.1 (Stable linear dynamical systems are $\delta$SISS)
Let $(A,B)$ describe a linear time-invariant system $x[t+1]=Ax[t]+Bu[t]+w[t]$. Let
$A$ be Schur-stable, i.e. $\rho(A)<1$, where $\rho(\cdot)$ denotes the
spectral radius. Recall the constant $\mathcal{J}(A):=\sum_{t\geq
0}\left\|A^{t}\right\|$. Furthermore, for any given $\nu$ such that
$\rho(A)<\nu<1$, define the corresponding constant
$\displaystyle\tau(A,\nu):=\sup_{k\in\mathbb{N}}\frac{\left\|A^{k}\right\|}{\nu^{k}},$
which is guaranteed to be finite by Gelfand’s formula (Horn and Johnson, 2012,
Corollary 5.6.13). The linear system $(A,B)$ is incrementally stochastic
input-to-state stable, where we may set
$\displaystyle\beta(x,t)$ $\displaystyle:=\tau(A,\nu)\nu^{t}x$
$\displaystyle\gamma(x)$ $\displaystyle:=\mathcal{J}(A)\left\|B\right\|x.$
Proof Let us consider the nominal and perturbed states $x[t]$ and $y[t]$
defined by
$\displaystyle x[t+1]$ $\displaystyle=Ax[t]+Bu[t]+w[t],\quad x[0]=\xi_{1}$
$\displaystyle y[t+1]$ $\displaystyle=Ay[t]+B(u[t]+\Delta[t])+w[t],\quad
y[0]=\xi_{2}.$
We observe that by linearity, we may write
$\displaystyle x[t]$
$\displaystyle=A^{t}x[0]+\sum_{k=0}^{t-1}(A^{t-k-1}Bu[k]+A^{t-k-1}w[k])$
$\displaystyle y[t]$
$\displaystyle=A^{t}y[0]+\sum_{k=0}^{t-1}(A^{t-k-1}Bu[k]+A^{t-k-1}B\Delta[k]+A^{t-k-1}w[k]).$
Therefore, their difference can be written as
$\displaystyle x[t]-y[t]$
$\displaystyle=A^{t}(x[0]-y[0])-\sum_{k=0}^{t-1}A^{t-k-1}B\Delta[k]$
$\displaystyle\implies\left\|x[t]-y[t]\right\|$
$\displaystyle\leq\left\|A^{t}\right\|\left\|x[0]-y[0]\right\|+\mathopen{}\left(\sum_{k=0}^{t-1}\left\|A^{t-k-1}\right\|\right)\mathclose{}\left\|B\right\|\max_{0\leq
k\leq t-1}\left\|\Delta[k]\right\|$
$\displaystyle\leq\tau(A,\nu)\nu^{t}\left\|x[0]-y[0]\right\|+\mathcal{J}(A)\left\|B\right\|\max_{0\leq
k\leq t-1}\left\|\Delta[k]\right\|$
where we can match the functions $\beta(\cdot,t)$ and $\gamma(\cdot)$ to their
respective quantities above. Crucially, we observe that due to linearity, the
noise terms in $x[t]$ and $y[t]$ cancel out, and therefore $\delta$SISS is
effectively the same as the standard $\delta$ISS.
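As a numerical sanity check on Lemma C.1 (a minimal sketch assuming `numpy`; the matrices $A$, $B$ and all constants below are illustrative, not taken from the paper), we can compute truncated versions of $\mathcal{J}(A)$ and $\tau(A,\nu)$ and verify the $\delta$-SISS bound along a pair of rollouts sharing inputs and noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Schur-stable (A, B); not from the paper.
A = np.array([[0.5, 0.2], [0.0, 0.4]])
B = np.array([[1.0], [0.5]])

# Truncated J(A) = sum_t ||A^t|| and tau(A, nu) = sup_k ||A^k|| / nu^k.
nu = 0.7
powers = [np.linalg.matrix_power(A, t) for t in range(200)]
J = sum(np.linalg.norm(P, 2) for P in powers)
tau = max(np.linalg.norm(P, 2) / nu**t for t, P in enumerate(powers))

# Nominal and perturbed rollouts with shared inputs u[t] and noise w[t].
T = 50
x = rng.standard_normal(2)
y = rng.standard_normal(2)
x0_gap = np.linalg.norm(x - y)
Delta = 0.1 * rng.standard_normal((T, 1))
max_Delta = np.abs(Delta).max()

ok = True
for t in range(T):
    # delta-SISS bound of Lemma C.1 at time t.
    bound = tau * nu**t * x0_gap + J * np.linalg.norm(B, 2) * max_Delta
    ok = ok and np.linalg.norm(x - y) <= bound + 1e-9
    u = rng.standard_normal(1)
    w = 0.05 * rng.standard_normal(2)  # shared noise cancels in the gap
    x = A @ x + B @ u + w
    y = A @ y + B @ (u + Delta[t]) + w

print(ok)
```

Because the noise enters both rollouts identically, the gap obeys a deterministic bound, matching the remark that $\delta$-SISS reduces to $\delta$-ISS for linear systems.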
By showing stable linear systems are $\delta$SISS, we may adapt Corollary A.1
from Pfrommer et al. (2022) to yield the following guarantee with respect to
the expert closed-loop system of the target task $A+BK^{(H+1)}$, which allows
us to bound the imitation gap in terms of the maximal deviation between the
learned and target controller inputs.
###### Proposition C.1
Let the target task closed-loop system $(A+BK^{(H+1)},B)$ be $\delta$SISS for
$\beta(\cdot,t)$ and $\gamma(\cdot)$ defined in Lemma C.1. For a given test
controller $K$, and realizations of randomness
$x[0]\sim\mathcal{N}(0,\Sigma_{x}^{(H+1)})$,
$z[t]\sim\mathcal{N}(0,\sigma_{z}^{2}I)$, $w[t]\sim
N(0,\Sigma_{w}^{(H+1)}),t=0,\dots,T-1$, we write
$\displaystyle\begin{split}x_{\star}[t+1]&=(A+BK^{(H+1)})x_{\star}[t]+Bz[t]+w[t],\quad
x_{\star}[0]=x[0]\\\
\hat{x}[t+1]&=(A+BK)\hat{x}[t]+Bz[t]+w[t],\quad\hat{x}[0]=x[0].\end{split}$
(26)
In other words, $x_{\star}[t]$ is the state from rolling out the expert target
task controller $K^{(H+1)}$, and $\hat{x}[t]$ is the state from rolling out
the test controller $K$ under the same conditions. If $K$ satisfies
$\displaystyle\left\|K-K^{(H+1)}\right\|\leq\frac{1}{2\mathcal{J}(A+BK^{(H+1)})\left\|B\right\|},$
then the tracking error satisfies for any $T\geq 1$,
$\displaystyle\max_{1\leq t\leq
T}\left\|\hat{x}[t]-x_{\star}[t]\right\|\leq\max_{0\leq t\leq
T-1}2\mathcal{J}(A+BK^{(H+1)})\left\|B\right\|\left\|\mathopen{}\left(K-K^{(H+1)}\right)\mathclose{}x_{\star}[t]\right\|.$
In other words, Proposition C.1 allows us to bound the imitation gap by the
excess risk induced by controller $K$, as long as $K$ is sufficiently close to
the expert controller $K^{(H+1)}$ in the spectral norm.
Proof We borrow Proposition 3.1 from Pfrommer et al. (2022), which states a
general result for non-linear controllers and non-linear systems, but in the
noiseless setting, and specify it to the linear systems setting with process
noise. In particular, the result states that for a given state-feedback
controller $\pi(x)$ and an expert state-feedback controller $\pi_{\star}(x)$
that induces a $\delta$-SISS closed-loop system
$x_{\star}[t+1]=f(x_{\star}[t],\pi_{\star}(x_{\star}[t]))+w[t]$ (the result
can be extended from $\delta$-ISS with no modification to the proof), then for
a given $\varepsilon>0$, if $\pi$ satisfies the following on an expert
trajectory generated by realizations of randomness $x[0],w[0],\dots,w[T-1]$:
$\displaystyle\max_{0\leq t\leq
T-1}\sup_{\left\|\delta\right\|\leq\varepsilon}\left\|\pi_{\star}(x_{\star}[t]+\delta)-\pi(x_{\star}[t]+\delta)\right\|$
$\displaystyle\leq\gamma^{-1}(\varepsilon),$ (27)
where $\gamma^{-1}$ is the (generalized) inverse of the class-$\mathcal{K}$
function $\gamma(\cdot)$, then the imitation gap is bounded by
$\max_{0\leq t\leq T-1}\left\|\hat{x}[t]-x_{\star}[t]\right\|\leq\varepsilon,$
where $\hat{x}[t]$ are the states generated by running controller $\pi$ in
closed-loop on the same noise realizations as those generating $x_{\star}[t]$.
Now, substituting $\pi(x):=Kx$, $\pi_{\star}(x):=K^{(H+1)}x$, and
$\gamma(x):=\mathcal{J}(A+BK^{(H+1)})\left\|B\right\|x$, such that
$\gamma^{-1}(x)=\mathopen{}\left(\mathcal{J}(A+BK^{(H+1)})\left\|B\right\|\right)\mathclose{}^{-1}x$,
we observe
$\displaystyle\max_{0\leq t\leq
T-1}\sup_{\left\|\delta\right\|\leq\varepsilon}\left\|\pi_{\star}(x_{\star}[t]+\delta)-\pi(x_{\star}[t]+\delta)\right\|$
$\displaystyle=$ $\displaystyle\max_{0\leq t\leq
T-1}\sup_{\left\|\delta\right\|\leq\varepsilon}\left\|(K-K^{(H+1)})x_{\star}[t]+(K-K^{(H+1)})\delta\right\|$
$\displaystyle\leq$ $\displaystyle\max_{0\leq t\leq
T-1}\left\|(K-K^{(H+1)})x_{\star}[t]\right\|+\left\|K-K^{(H+1)}\right\|\varepsilon.$
Therefore, in order to satisfy (27), it suffices to satisfy
$\displaystyle\max_{0\leq t\leq
T-1}\left\|(K-K^{(H+1)})x_{\star}[t]\right\|+\left\|K-K^{(H+1)}\right\|\varepsilon\leq\mathopen{}\left(\mathcal{J}(A+BK^{(H+1)})\left\|B\right\|\right)\mathclose{}^{-1}\varepsilon.$
Since $\varepsilon>0$ is arbitrary, setting
$\varepsilon:=\max_{t}2\mathcal{J}(A+BK^{(H+1)})\left\|B\right\|\left\|(K-K^{(H+1)})x_{\star}[t]\right\|$,
it is sufficient for
$\left\|K-K^{(H+1)}\right\|\leq\frac{1}{2\mathcal{J}(A+BK^{(H+1)})\left\|B\right\|},$
to satisfy the above inequality, which leads to the following bound on the
imitation gap
$\displaystyle\max_{1\leq t\leq T}\left\|\hat{x}[t]-x_{\star}[t]\right\|$
$\displaystyle\leq\varepsilon=\max_{0\leq t\leq
T-1}2\mathcal{J}(A+BK^{(H+1)})\left\|B\right\|\left\|Kx_{\star}[t]-K^{(H+1)}x_{\star}[t]\right\|.$
This completes the proof.
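To illustrate Proposition C.1 numerically (a sketch assuming `numpy`; the system, expert gain, and noise scales are illustrative), we can roll out the expert and a test controller inside the required spectral-norm ball on shared noise, and check the pathwise tracking-error bound:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative target-task system and expert gain (not from the paper).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.eye(2)
K_star = np.array([[-0.5, 0.0], [0.0, -0.4]])
A_cl = A + B @ K_star  # Schur-stable closed loop

# Truncated J(A_cl) = sum_t ||A_cl^t||.
J = sum(np.linalg.norm(np.linalg.matrix_power(A_cl, t), 2) for t in range(300))

# Test controller inside the 1/(2 J ||B||) spectral-norm ball around K_star.
radius = 1.0 / (2 * J * np.linalg.norm(B, 2))
K = K_star + 0.5 * radius * np.eye(2)

# Shared-noise rollouts, as in the coupling (26).
T = 100
x_star = rng.standard_normal(2)
x_hat = x_star.copy()
xs, gaps = [], []
for _ in range(T):
    z = 0.1 * rng.standard_normal(2)
    w = 0.1 * rng.standard_normal(2)
    xs.append(x_star)
    x_star = (A + B @ K_star) @ x_star + B @ z + w
    x_hat = (A + B @ K) @ x_hat + B @ z + w
    gaps.append(np.linalg.norm(x_hat - x_star))

rhs = max(2 * J * np.linalg.norm(B, 2) * np.linalg.norm((K - K_star) @ x)
          for x in xs)
print(max(gaps) <= rhs + 1e-9)
```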
Before moving on, we discuss a few qualitative properties of Proposition C.1.
First, a high-probability bound on the tracking error is somewhat higher
resolution than the LQR-type bounds in Fazel et al. (2018); Mania et al.
(2019), where the quantity of interest is the difference in the expected
infinite-horizon costs of the two controllers, which does not directly imply a
bound on the deviation between the states induced by the two controllers at a
given time. Furthermore, we note Proposition C.1 is meaningful precisely
because there is driving process noise. If the target task is noiseless, and
test controller $K$ is stabilizing, then
$\left\|\hat{x}[t]-x_{\star}[t]\right\|$ is trivially upper bounded by an
exponentially decaying quantity, which for a sufficiently long horizon $T$
will beat out any generalization bound scaling with
$\mathrm{poly}(N_{1},N_{2},H,T)$; however, this noiseless setting is
correspondingly uninteresting to study as a statistical problem.
We propose the following key result in the form of a high probability bound on
the tracking error.
###### Theorem C.1 (Full version of Theorem 3.2, (9))
Let the target task closed-loop system $(A+BK^{(H+1)},B)$ be $\delta$SISS for
$\beta(\cdot,t)$ and $\gamma(\cdot)$ as defined in Lemma C.1. Given a
generalization bound on the excess risk of the learned representation and
target task weights $(\hat{\Phi},\hat{F}^{(H+1)})$ (such as Theorem 3.1) of
the form: with probability greater than $1-\delta$
$\displaystyle\mathrm{ER}\mathopen{}\left(\hat{\Phi},\hat{F}^{(H+1)}\right)\mathclose{}$
$\displaystyle\leq f(N_{1},T,H,N_{2},\delta),$
we have the following bound on the tracking error. Assuming we have enough
samples such that
$f(N_{1},T,H,N_{2},\delta)\leq\frac{\lambda_{\min}(\Sigma_{x}^{(H+1)})}{8\mathcal{J}(A+BK^{(H+1)})^{2}\left\|B\right\|^{2}},$
then with probability greater than $1-\delta-\delta^{\prime}$, for a new
target task trajectory sampled with i.i.d. process randomness
$x[0]\sim\mathcal{N}(0,\Sigma_{x}^{(H+1)})$,
$w[t]\sim\mathcal{N}(0,\Sigma_{w}^{(H+1)})$, $t=0,\dots,T-1$,
the tracking error satisfies
$\displaystyle\max_{1\leq t\leq T}\left\|\hat{x}[t]-x_{\star}[t]\right\|^{2}$
$\displaystyle\leq
4\mathcal{J}(A+BK^{(H+1)})^{2}\left\|B\right\|^{2}\mathopen{}\left(1+4\log\mathopen{}\left(\frac{T}{\delta^{\prime}}\right)\mathclose{}\right)\mathclose{}\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})$
(28)
$\displaystyle\lesssim\mathcal{J}(A+BK^{(H+1)})^{2}\left\|B\right\|^{2}\log\mathopen{}\left(\frac{T}{\delta^{\prime}}\right)\mathclose{}f(N_{1},T,H,N_{2},\delta),$
(29)
where the learned and expert trajectory states $\hat{x}[t]$ and $x_{\star}[t]$
are as defined in Proposition C.1.
Proof By Proposition C.1, we have that if the learned controller
$\hat{K}=\hat{F}^{(H+1)}\hat{\Phi}$ satisfies
$\displaystyle\left\|\hat{K}-K^{(H+1)}\right\|\leq\frac{1}{2\mathcal{J}(A+BK^{(H+1)})\left\|B\right\|},$
where we plugged in the $\beta(\cdot,t)$, $\gamma(\cdot)$ functions derived
from Lemma C.1 on the target task expert closed-loop system
$(A+BK^{(H+1)},B)$, then we have for learned and expert trajectories generated
by a fresh draw of randomness $x[0],w[0],\dots,w[T-1]$, shared between the two
rollouts, the following imitation gap:
$\max_{1\leq t\leq T}\left\|\hat{x}[t]-x_{\star}[t]\right\|\leq\max_{0\leq
t\leq
T-1}2\mathcal{J}(A+BK^{(H+1)})\left\|B\right\|\left\|\mathopen{}\left(\hat{K}-K^{(H+1)}\right)\mathclose{}x_{\star}[t]\right\|.$
In order to derive sample complexity bounds from this, we observe that a
generalization bound on the excess risk
$\displaystyle\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})$
$\displaystyle:=\frac{1}{2}\mathbb{E}_{\mathopen{}\left\\{x_{\star}[t]\right\\}\mathclose{}\sim\mathcal{P}^{(H+1)}}\mathopen{}\left[\frac{1}{T}\sum_{t=0}^{T-1}\left\|\mathopen{}\left(\hat{K}-K^{(H+1)}\right)\mathclose{}x_{\star}[t]\right\|^{2}\right]\mathclose{}$
$\displaystyle=\frac{1}{2T}\sum_{t=0}^{T-1}\mathbb{E}\mathopen{}\left[\operatorname{Tr}\mathopen{}\left((\hat{K}-K^{(H+1)})x[t]x[t]^{\top}(\hat{K}-K^{(H+1)})^{\top}\right)\mathclose{}\right]\mathclose{}$
$\displaystyle=\frac{1}{2}\operatorname{Tr}\mathopen{}\left((\hat{K}-K^{(H+1)})\Sigma_{x}^{(H+1)}(\hat{K}-K^{(H+1)})^{\top}\right)\mathclose{}$
$\displaystyle\leq f(N_{1},T,H,N_{2},\delta)\quad\text{w.p. }\geq 1-\delta$
directly implies a generalization bound on the Frobenius norm deviation
between the learned and expert controllers:
$\displaystyle\frac{1}{2}\operatorname{Tr}\mathopen{}\left((\hat{K}-K^{(H+1)})\Sigma_{x}^{(H+1)}(\hat{K}-K^{(H+1)})^{\top}\right)\mathclose{}\geq\frac{1}{2}\lambda_{\min}(\Sigma_{x}^{(H+1)})\left\|\hat{K}-K^{(H+1)}\right\|_{F}^{2}$
$\displaystyle\implies$
$\displaystyle\left\|\hat{K}-K^{(H+1)}\right\|_{F}^{2}\leq\frac{2}{\lambda_{\min}(\Sigma_{x}^{(H+1)})}f(N_{1},T,H,N_{2},\delta)\text{
w.p. }\geq 1-\delta.$
Since the Frobenius norm upper bounds the spectral norm, it suffices to have
enough samples $N_{1},N_{2},T,H$ such that
$f(N_{1},T,H,N_{2},\delta)\leq\frac{\lambda_{\min}(\Sigma_{x}^{(H+1)})}{8\mathcal{J}(A+BK^{(H+1)})^{2}\left\|B\right\|^{2}}.$
If the sample complexity requirement is satisfied, then with probability
greater than $1-\delta$ the required spectral norm gap holds:
$\displaystyle\left\|\hat{K}-K^{(H+1)}\right\|\leq\frac{1}{2\mathcal{J}(A+BK^{(H+1)})\left\|B\right\|},$
which in turn implies the bound on the tracking error
$\displaystyle\max_{1\leq t\leq
T}\left\|\hat{x}[t]-x_{\star}[t]\right\|^{2}\leq\max_{0\leq t\leq
T-1}4\mathcal{J}(A+BK^{(H+1)})^{2}\left\|B\right\|^{2}\left\|\mathopen{}\left(\hat{K}-K^{(H+1)}\right)\mathclose{}x_{\star}[t]\right\|^{2}.$
(30)
In order to convert the RHS of the above expression, which involves a maximum
over time of the imitation error between $\hat{K}$ and $K^{(H+1)}$, into
something involving the excess risk of the controller $\hat{K}$, which is the
expected value of the imitation error, we again appeal to the Hanson-Wright
inequality, adapted specifically for Gaussian quadratic forms in Proposition
A.1.
We recall from (30) that, with probability greater than $1-\delta$, the
tracking error is bounded by
$\max_{1\leq t\leq T}\left\|\hat{x}[t]-x_{\star}[t]\right\|^{2}\leq\max_{0\leq
t\leq
T-1}4\mathcal{J}\mathopen{}\left(A+BK^{(H+1)}\right)\mathclose{}^{2}\left\|B\right\|^{2}\left\|\mathopen{}\left(\hat{K}-K^{(H+1)}\right)\mathclose{}x_{\star}[t]\right\|^{2}.$
In order to bound the RHS of the above inequality by a multiplicative factor
of the excess risk $\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})$, we apply
Proposition A.1, setting
$\displaystyle
R:=\mathopen{}\left(\hat{K}-K^{(H+1)}\right)\mathclose{}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}^{1/2},\quad
z_{t}:=\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}^{-1/2}x_{\star}[t].$
(31)
By an application of the union bound, we have
$\displaystyle\mathbb{P}\mathopen{}\left[\max_{t\leq
T-1}z_{t}^{\top}R^{\top}Rz_{t}\geq(1+\varepsilon)\left\|R\right\|_{F}^{2}\right]\mathclose{}$
$\displaystyle\leq\sum_{t=0}^{T-1}\mathbb{P}\mathopen{}\left[z_{t}^{\top}R^{\top}Rz_{t}\geq(1+\varepsilon)\left\|R\right\|_{F}^{2}\right]\mathclose{}$
$\displaystyle\leq
T\exp\mathopen{}\left(-\frac{1}{4}\min\mathopen{}\left\\{\frac{\varepsilon^{2}}{4},\varepsilon\right\\}\mathclose{}\frac{\left\|R\right\|_{F}^{2}}{\left\|R\right\|^{2}}\right)\mathclose{}.$
Setting the last line less than the desired failure probability
$\delta^{\prime}\in(0,1)$, we get
$\displaystyle\min\mathopen{}\left\\{\frac{\varepsilon^{2}}{4},\varepsilon\right\\}\mathclose{}$
$\displaystyle\geq
4\frac{\left\|R\right\|^{2}}{\left\|R\right\|_{F}^{2}}\log\mathopen{}\left(\frac{T}{\delta^{\prime}}\right)\mathclose{}.$
Since $\left\|R\right\|\leq\left\|R\right\|_{F}$, it suffices to choose
$\displaystyle\varepsilon$
$\displaystyle=4\log\mathopen{}\left(\frac{T}{\delta^{\prime}}\right)\mathclose{},$
such that with probability greater than $1-\delta^{\prime}$
$\displaystyle\max_{0\leq t\leq
T-1}\left\|\mathopen{}\left(\hat{K}-K^{(H+1)}\right)\mathclose{}x_{\star}[t]\right\|^{2}$
$\displaystyle\leq\mathopen{}\left(1+4\log\mathopen{}\left(\frac{T}{\delta^{\prime}}\right)\mathclose{}\right)\mathclose{}\left\|\mathopen{}\left(\hat{K}-K^{(H+1)}\right)\mathclose{}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}^{1/2}\right\|_{F}^{2}$
$\displaystyle=\mathopen{}\left(1+4\log\mathopen{}\left(\frac{T}{\delta^{\prime}}\right)\mathclose{}\right)\mathclose{}\mathrm{ER}\mathopen{}\left(\hat{\Phi},\hat{F}^{(H+1)}\right)\mathclose{}.$
Plugging this back into the tracking error bound and union bounding over the
generalization bound event on
$\mathrm{ER}\mathopen{}\left(\hat{\Phi},\hat{F}^{(H+1)}\right)\mathclose{}$ of
probability $1-\delta$ and the concentration event of probability
$1-\delta^{\prime}$ yields the desired result.
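The union-bound-plus-Hanson-Wright step above can be probed empirically. The sketch below (assuming `numpy`; $R$ is an arbitrary illustrative matrix standing in for $(\hat{K}-K^{(H+1)})(\Sigma_{x}^{(H+1)})^{1/2}$) estimates how often $\max_{t}z_{t}^{\top}R^{\top}Rz_{t}$ exceeds $(1+\varepsilon)\left\|R\right\|_{F}^{2}$ with $\varepsilon=4\log(T/\delta^{\prime})$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative R (stand-in for (K_hat - K*) Sigma^{1/2}).
R = rng.standard_normal((3, 4))
fro2 = np.linalg.norm(R, 'fro')**2

T, delta_p = 20, 0.1
eps = 4 * np.log(T / delta_p)          # the choice of epsilon in the proof
threshold = (1 + eps) * fro2

trials, fails = 2000, 0
for _ in range(trials):
    Z = rng.standard_normal((T, 4))    # z_t ~ N(0, I), t = 0, ..., T-1
    quad = ((Z @ R.T)**2).sum(axis=1)  # z_t' R' R z_t = ||R z_t||^2
    fails += quad.max() >= threshold

print(fails / trials)  # empirical failure rate; should be far below delta_p
```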
The high probability bound introduced in Theorem C.1 provides fine-grained
control over the deviation between closed-loop learned and expert
states, where we control the maximum deviation of states over time by the
expected time-averaged excess risk of the learned controller, accruing only a
$\log(T)$ factor in the process. Furthermore, our high-probability bounds are
multiplicative in only $\log(1/\delta^{\prime})$ rather than the naive
Markov's inequality factor $1/\delta^{\prime}$, and are therefore immediately
higher resolution than the corresponding in-expectation bound (for example,
one derived from Proposition C.2), to which only Markov's inequality can be
applied without more detailed analysis. That said, in much of the existing
literature on control and reinforcement learning, the performance of a policy
is evaluated as an expected cost/reward over the trajectories it generates,
for example LQG cost (Fazel et al., 2018), or expected reward in online
imitation learning (Ross et al., 2011). This motivates understanding
whether the tools we develop directly imply bounds in-expectation on general
cost functions evaluated on the learned trajectory distribution versus the
expert trajectory distribution. Intuitively, if we treat the tracking error
between two trajectories as a metric, if the cost function varies continuously
(e.g. Lipschitz) with respect to this metric, our generalization bounds should
imply some sort of distributional distance between the closed-loop learned and
expert trajectory distributions. This is made explicit in the following
result.
###### Theorem C.2 (Full version of Theorem 3.2, (10))
Let us denote stacked trajectory vectors
$\bm{x}_{1:T}=\begin{bmatrix}x[1]^{\top}&\cdots&x[T]^{\top}\end{bmatrix}^{\top}\in\mathbb{R}^{n_{x}T}$,
and denote $\bm{x}^{\star}_{1:T}\sim\mathcal{P}^{\star}_{1:T}$ and
$\hat{\bm{x}}_{1:T}\sim\hat{\mathcal{P}}_{1:T}$ as the distributions of
closed-loop expert and learned trajectories generated by $K^{(H+1)}$ and
$\hat{K}$, respectively. Let $h$ be any cost function that is $L$-Lipschitz
with respect to the metric between trajectories
$d\mathopen{}\left(\bm{x}_{1:T},\bm{y}_{1:T}\right)\mathclose{}=\max_{1\leq
t\leq T}\left\|x[t]-y[t]\right\|$, i.e.,
$\displaystyle{\left|h(x)-h(y)\right|}\leq Ld(x,y),\;\forall
x,y\in\mathcal{X}.$
Assume the sample requirements of Theorem C.1 are satisfied for given
$\delta$. Then with probability greater than $1-\delta$, the following bound
holds on the gap between the expected costs of expert and learned
trajectories:
$\displaystyle{\left|\mathbb{E}_{\hat{\mathcal{P}}_{1:T}}\mathopen{}\left[h(\hat{\bm{x}}_{1:T})\right]\mathclose{}-\mathbb{E}_{\mathcal{P}^{\star}_{1:T}}\mathopen{}\left[h(\bm{x}^{\star}_{1:T})\right]\mathclose{}\right|}$
$\displaystyle\leq
2\sqrt{3}L\mathcal{J}(A_{\mathsf{cl}})\left\|B\right\|\sqrt{1+\log(T)}\sqrt{\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})}$
where $A_{\mathsf{cl}}:=A+BK^{(H+1)}$.
###### Remark C.1
We note that since
$\frac{1}{T}\sum_{t=1}^{T}\left\|x[t]-y[t]\right\|\leq\max_{1\leq t\leq
T}\left\|x[t]-y[t]\right\|$, the above result holds with minimal modification
for
$d(\bm{x}_{1:T},\bm{y}_{1:T})=\frac{1}{T}\sum_{t=1}^{T}\left\|x[t]-y[t]\right\|$.
Before proceeding with the proof of Theorem C.2, the following are valid
examples of $h(\cdot)$:
* •
$h(\bm{x}_{1:T})=\max_{1\leq t\leq T}\left\|Q^{1/2}x[t]\right\|$,
* •
$h(\bm{x}_{1:T})=\max_{1\leq t\leq
T}\left\|x[t]-x_{\mathrm{goal}}[t]\right\|+\lambda\left\|Rx[t]\right\|$.
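As a quick check that the first example is indeed Lipschitz in the metric $d$ (a minimal sketch assuming `numpy`; $Q^{1/2}$ and the random trajectories are illustrative), we can verify $|h(\bm{x}_{1:T})-h(\bm{y}_{1:T})|\leq L\,d(\bm{x}_{1:T},\bm{y}_{1:T})$ with $L=\|Q^{1/2}\|$:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative cost weighting; L = ||Q^{1/2}|| is a valid Lipschitz constant
# for h(x_{1:T}) = max_t ||Q^{1/2} x[t]|| under d(x, y) = max_t ||x[t]-y[t]||.
Q_sqrt = np.diag([2.0, 0.5])
L = np.linalg.norm(Q_sqrt, 2)

def h(traj):
    return max(np.linalg.norm(Q_sqrt @ x) for x in traj)

def d(tx, ty):
    return max(np.linalg.norm(x - y) for x, y in zip(tx, ty))

ok = True
for _ in range(100):
    tx = rng.standard_normal((10, 2))
    ty = rng.standard_normal((10, 2))
    ok = ok and abs(h(tx) - h(ty)) <= L * d(tx, ty) + 1e-12

print(ok)
```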
Proof We appeal to an application of Kantorovich-Rubinstein duality to
establish general in-expectation guarantees. Let us define the metric
$\displaystyle d\mathopen{}\left(\bm{x}_{1:T},\bm{y}_{1:T}\right)\mathclose{}$
$\displaystyle=\max_{1\leq t\leq T}\left\|x[t]-y[t]\right\|.$
Define $\Gamma(\mathcal{P}^{\star}_{1:T},\hat{\mathcal{P}}_{1:T})$ as the set
of all couplings between the distributions $\mathcal{P}^{\star}_{1:T}$ and
$\hat{\mathcal{P}}_{1:T}$. In particular, trajectories
$\bm{x}^{\star}_{1:T},\hat{\bm{x}}_{1:T}$ following the same instances of
$w[t]$ and $z[t]$ considered in (26) is one such coupling. The Wasserstein
$1$-distance between $\mathcal{P}^{\star}_{1:T}$ and $\hat{\mathcal{P}}_{1:T}$
is defined as
$\displaystyle\mathcal{W}_{1}\mathopen{}\left(\mathcal{P}^{\star}_{1:T},\hat{\mathcal{P}}_{1:T}\right)\mathclose{}$
$\displaystyle=\inf_{\gamma\in\Gamma(\mathcal{P}^{\star}_{1:T},\hat{\mathcal{P}}_{1:T})}\mathbb{E}_{\mathopen{}\left(\bm{x}^{\star}_{1:T},\hat{\bm{x}}_{1:T}\right)\mathclose{}\sim\gamma}\mathopen{}\left[d\mathopen{}\left(\bm{x}^{\star}_{1:T},\hat{\bm{x}}_{1:T}\right)\mathclose{}\right]\mathclose{}$
$\displaystyle=\inf_{\gamma\in\Gamma(\mathcal{P}^{\star}_{1:T},\hat{\mathcal{P}}_{1:T})}\mathbb{E}_{\mathopen{}\left(\bm{x}^{\star}_{1:T},\hat{\bm{x}}_{1:T}\right)\mathclose{}\sim\gamma}\mathopen{}\left[\max_{1\leq
t\leq T}\left\|x_{\star}[t]-\hat{x}[t]\right\|\right]\mathclose{}.$
In particular, defining the coupling (26) as $\overline{\gamma}$, we
immediately have
$\displaystyle\mathcal{W}_{1}\mathopen{}\left(\mathcal{P}^{\star}_{1:T},\hat{\mathcal{P}}_{1:T}\right)\mathclose{}$
$\displaystyle\leq\mathbb{E}_{\mathopen{}\left(\bm{x}^{\star}_{1:T},\hat{\bm{x}}_{1:T}\right)\mathclose{}\sim\overline{\gamma}}\mathopen{}\left[\max_{1\leq
t\leq T}\left\|x_{\star}[t]-\hat{x}[t]\right\|\right]\mathclose{}.$
Kantorovich-Rubinstein duality (cf. Villani (2021)) provides the following
dual characterization of the Wasserstein distance:
$\displaystyle\sup_{\left\|h\right\|_{\mathrm{Lip}}\leq
L}\frac{1}{L}\mathopen{}\left(\mathbb{E}_{\hat{\mathcal{P}}_{1:T}}\mathopen{}\left[h(\hat{\bm{x}}_{1:T})\right]\mathclose{}-\mathbb{E}_{\mathcal{P}^{\star}_{1:T}}\mathopen{}\left[h(\bm{x}^{\star}_{1:T})\right]\mathclose{}\right)\mathclose{}$
$\displaystyle=\mathcal{W}_{1}\mathopen{}\left(\mathcal{P}^{\star}_{1:T},\hat{\mathcal{P}}_{1:T}\right)\mathclose{},$
where
$\left\|h\right\|_{\mathrm{Lip}}=\sup_{x,y\in\mathcal{X}}\frac{{\left|h(x)-h(y)\right|}}{d(x,y)}.$
Therefore, given any cost function $h$ that is $L$-Lipschitz continuous with
respect to $d(x,y)$, we can chain inequalities to bound the gap between the
expected closed-loop costs of the learned and expert controllers:
$\displaystyle\mathbb{E}_{\hat{\mathcal{P}}_{1:T}}\mathopen{}\left[h(\hat{\bm{x}}_{1:T})\right]\mathclose{}-\mathbb{E}_{\mathcal{P}^{\star}_{1:T}}\mathopen{}\left[h(\bm{x}^{\star}_{1:T})\right]\mathclose{}\leq
L\mathbb{E}_{\mathopen{}\left(\bm{x}^{\star}_{1:T},\hat{\bm{x}}_{1:T}\right)\mathclose{}\sim\overline{\gamma}}\mathopen{}\left[\max_{1\leq
t\leq T}\left\|x_{\star}[t]-\hat{x}[t]\right\|\right]\mathclose{}.$
It remains to provide a bound on
$\mathbb{E}_{\mathopen{}\left(\bm{x}^{\star}_{1:T},\hat{\bm{x}}_{1:T}\right)\mathclose{}\sim\overline{\gamma}}\mathopen{}\left[\max_{1\leq
t\leq T}\left\|x_{\star}[t]-\hat{x}[t]\right\|\right]\mathclose{}$. Recalling
the bound (30), this reduces to bounding the imitation error along expert
trajectories $x_{\star}[t]$. In order to relate the expected maximum deviation
$\mathbb{E}\mathopen{}\left[\max_{1\leq t\leq
T}\left\|x_{\star}[t]-\hat{x}[t]\right\|\right]\mathclose{}$ to the
expectation of its square, we apply Jensen's inequality to yield
$\displaystyle\mathbb{E}\mathopen{}\left[\max_{1\leq t\leq
T}\left\|x_{\star}[t]-\hat{x}[t]\right\|\right]\mathclose{}^{2}\leq\mathbb{E}\mathopen{}\left[\max_{1\leq
t\leq T}\left\|x_{\star}[t]-\hat{x}[t]\right\|^{2}\right]\mathclose{}$
The rest follows from an application of the maximal inequality.
###### Proposition C.2
Given a test controller $K$, we have the following maximal-type inequality
between the excess maximal risk and the excess average risk over the expert
target task data:
$\displaystyle\mathbb{E}\mathopen{}\left[\max_{0\leq t\leq
T-1}\left\|(K-K^{(H+1)})x_{\star}[t]\right\|^{2}\right]\mathclose{}$
$\displaystyle\leq
3(1+\log(T))\mathbb{E}\mathopen{}\left[\frac{1}{T}\sum_{t=0}^{T-1}\left\|(K-K^{(H+1)})x_{\star}[t]\right\|^{2}\right]\mathclose{}.$
Proof We recall that the population distribution of $x_{\star}[t]$ is
$\mathcal{N}(0,\Sigma_{x}^{(H+1)})$. Recalling the definitions in (31), we can
re-write
$\displaystyle\left\|(K-K^{(H+1)})x_{\star}[t]\right\|^{2}$
$\displaystyle=z_{t}^{\top}R^{\top}Rz_{t},$
where $z_{t}\sim\mathcal{N}(0,I)$. Therefore, what we need to show is a
maximal inequality on
$\displaystyle\mathbb{E}\mathopen{}\left[\max_{0\leq t\leq
T-1}z_{t}^{\top}R^{\top}Rz_{t}\right]\mathclose{}.$
Toward establishing such an inequality, we prove the following bound on the
moment-generating function of $z_{t}^{\top}R^{\top}Rz_{t}$.
###### Lemma C.2
Let $z\sim\mathcal{N}(0,I_{n})$ be a standard Gaussian random vector, and
$R\in\mathbb{R}^{d\times n}$ is an arbitrary fixed matrix. Then, we have the
following bound on the moment-generating function of $z^{\top}R^{\top}Rz$
$\displaystyle\mathbb{E}\mathopen{}\left[\exp\mathopen{}\left(\lambda
z^{\top}R^{\top}Rz\right)\mathclose{}\right]\mathclose{}$
$\displaystyle\leq\exp\mathopen{}\left(3\lambda\left\|R\right\|_{F}^{2}\right)\mathclose{},\quad\text{for
}0\leq\lambda\leq\frac{1}{3\left\|R\right\|^{2}}.$
Proof The proof largely follows standard results, for example from Vershynin
(2018, Chapter 6.2), but we derive the result from scratch to preserve
explicit numerical constants. By the rotational invariance of Gaussian random
vectors, we may write
$\displaystyle\mathbb{E}\mathopen{}\left[\exp\mathopen{}\left(\lambda
z^{\top}R^{\top}Rz\right)\mathclose{}\right]\mathclose{}$
$\displaystyle=\mathbb{E}\mathopen{}\left[\exp\mathopen{}\left(\sum_{i=1}^{\min\mathopen{}\left\\{n,d\right\\}\mathclose{}}\lambda\sigma_{i}^{2}g_{i}^{2}\right)\mathclose{}\right]\mathclose{}$
$\displaystyle=\prod_{i=1}^{\min\mathopen{}\left\\{n,d\right\\}\mathclose{}}\mathbb{E}\mathopen{}\left[\exp\mathopen{}\left(\lambda\sigma_{i}^{2}g_{i}^{2}\right)\mathclose{}\right]\mathclose{},$
where $\sigma_{i}$ is the $i$th singular value of $R$ and
$g_{i}\sim\mathcal{N}(0,1)$ are i.i.d. standard Gaussian random variables.
Thus we have reduced the problem to bounding the MGF of a $\chi^{2}(1)$ random
variable. We recall that the MGF of a $\chi^{2}(1)$ random variable is given
by
$\mathbb{E}\mathopen{}\left[\exp\mathopen{}\left(tg_{i}^{2}\right)\mathclose{}\right]\mathclose{}=\frac{1}{\sqrt{1-2t}},\quad
t<\frac{1}{2}.$
We claim that there exist constants $C,c>0$ such that
$\mathbb{E}\mathopen{}\left[\exp\mathopen{}\left(tg_{i}^{2}\right)\mathclose{}\right]\mathclose{}\leq\exp\mathopen{}\left(Ct\right)\mathclose{},\quad
0\leq t\leq c<\frac{1}{2}.$
We can derive candidates for $C,c$ by solving, for $0\leq t<1/2$,
$\displaystyle\frac{1}{\sqrt{1-2t}}$
$\displaystyle\leq\exp\mathopen{}\left(Ct\right)\mathclose{}$
$\displaystyle\iff-\frac{1}{2}\log(1-2t)$ $\displaystyle\leq Ct.$
We observe the function $f(t):=-\frac{1}{2}\log(1-2t)-Ct$ satisfies $f(0)=0$
and $f^{\prime}(t)=\frac{1}{1-2t}-C$ is monotonically increasing for $t<1/2$,
and $f^{\prime}(0)<0$ for $C>1$. Therefore, for a given $C>1$ it suffices to
find $0<c<1/2$ such that $f^{\prime}(c)=0$, which implies $f(c)\leq 0$.
Plugging in $C=3$, $c=1/3$ satisfies this. Therefore, we get the MGF bound
$\mathbb{E}\mathopen{}\left[\exp\mathopen{}\left(tg_{i}^{2}\right)\mathclose{}\right]\mathclose{}\leq\exp\mathopen{}\left(3t\right)\mathclose{},\quad
0\leq t\leq\frac{1}{3}.$
Applying this back on the MGF of $z^{\top}R^{\top}Rz$, we have
$\displaystyle\mathbb{E}\mathopen{}\left[\exp\mathopen{}\left(\lambda
z^{\top}R^{\top}Rz\right)\mathclose{}\right]\mathclose{}$
$\displaystyle=\prod_{i=1}^{\min\mathopen{}\left\\{n,d\right\\}\mathclose{}}\mathbb{E}\mathopen{}\left[\exp\mathopen{}\left(\lambda\sigma_{i}^{2}g_{i}^{2}\right)\mathclose{}\right]\mathclose{}$
$\displaystyle\leq\prod_{i=1}^{\min\mathopen{}\left\\{n,d\right\\}\mathclose{}}\exp\mathopen{}\left(3\lambda\sigma_{i}^{2}\right)\mathclose{},\quad\lambda\leq\frac{1}{3\sigma_{i}^{2}}\;\forall
i$
$\displaystyle=\exp\mathopen{}\left(3\lambda\sum_{i}\sigma_{i}^{2}\right)\mathclose{},\quad\lambda\leq\frac{1}{3\sigma_{i}^{2}}\;\forall
i$
$\displaystyle=\exp\mathopen{}\left(3\lambda\left\|R\right\|_{F}^{2}\right)\mathclose{},\quad\lambda\leq\frac{1}{3\left\|R\right\|^{2}},$
which completes the proof.
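Since the $\chi^{2}(1)$ MGF is available in closed form, the inequality $1/\sqrt{1-2t}\leq e^{3t}$ underlying Lemma C.2 can be checked directly on a grid (a minimal sketch assuming `numpy`; note the bound is invoked only for nonnegative $t$):

```python
import numpy as np

# Closed-form chi-squared(1) MGF versus the exp(3t) bound on [0, 1/3].
t = np.linspace(0.0, 1.0 / 3.0, 1000)
mgf = 1.0 / np.sqrt(1.0 - 2.0 * t)
bound = np.exp(3.0 * t)
print(bool(np.all(mgf <= bound + 1e-12)))
```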
Now, leveraging Lemma C.2: for $\lambda>0$, we write
$\displaystyle\exp\mathopen{}\left(\lambda\mathbb{E}\mathopen{}\left[\max_{t=0,\dots,T-1}z_{t}^{\top}R^{\top}Rz_{t}\right]\mathclose{}\right)\mathclose{}$
$\displaystyle\leq\mathbb{E}\mathopen{}\left[\exp\mathopen{}\left(\lambda\max_{t}z_{t}^{\top}R^{\top}Rz_{t}\right)\mathclose{}\right]\mathclose{}$
Jensen’s
$\displaystyle=\mathbb{E}\mathopen{}\left[\max_{t}\exp\mathopen{}\left(\lambda
z_{t}^{\top}R^{\top}Rz_{t}\right)\mathclose{}\right]\mathclose{}$ monotonicity
$\displaystyle\leq\sum_{t=0}^{T-1}\mathbb{E}\mathopen{}\left[\exp\mathopen{}\left(\lambda
z_{t}^{\top}R^{\top}Rz_{t}\right)\mathclose{}\right]\mathclose{}$
We note that in the last line, we are taking the expectation of each $z_{t}$
over its population distribution, such that $z_{t}\sim\mathcal{N}(0,I)$. Thus,
$\displaystyle\exp\mathopen{}\left(\lambda\mathbb{E}\mathopen{}\left[\max_{t=0,\dots,T-1}z_{t}^{\top}R^{\top}Rz_{t}\right]\mathclose{}\right)\mathclose{}$
$\displaystyle\leq\sum_{t=0}^{T-1}\exp\mathopen{}\left(3\lambda\left\|R\right\|_{F}^{2}\right)\mathclose{}=T\exp\mathopen{}\left(3\lambda\left\|R\right\|_{F}^{2}\right)\mathclose{},\quad\lambda\leq\frac{1}{3\left\|R\right\|^{2}}.$
Taking the logarithm of both sides, re-arranging, and taking the infimum over
$\lambda$, we have
$\displaystyle\mathbb{E}\mathopen{}\left[\max_{t=0,\dots,T-1}z_{t}^{\top}R^{\top}Rz_{t}\right]\mathclose{}$
$\displaystyle\leq\inf_{\lambda\in\big(0,\frac{1}{3\left\|R\right\|^{2}}\big]}\frac{\log(T)}{\lambda}+3\left\|R\right\|_{F}^{2}$
$\displaystyle=3\left\|R\right\|^{2}\log(T)+3\left\|R\right\|_{F}^{2}$
$\displaystyle\leq
3\mathopen{}\left(1+\log(T)\right)\mathclose{}\left\|R\right\|_{F}^{2}.$
Substituting back
$R=(K-K^{(H+1)})\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}^{1/2}$,
and observing
$\displaystyle\left\|R\right\|_{F}^{2}=\left\|(K-K^{(H+1)})\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}^{1/2}\right\|_{F}^{2}$
$\displaystyle=\operatorname{Tr}\mathopen{}\left((K-K^{(H+1)})\Sigma_{x}^{(H+1)}(K-K^{(H+1)})^{\top}\right)\mathclose{}$
$\displaystyle=\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\mathopen{}\left[\left\|Kx_{\star}[t]-K^{(H+1)}x_{\star}[t]\right\|^{2}\right]\mathclose{},$
the proof of Proposition C.2 is complete.
With the maximal inequality in hand, and defining
$A_{\mathsf{cl}}:=A+BK^{(H+1)}$, we may now bound:
$\displaystyle\mathbb{E}_{\hat{}\mathcal{P}_{1:T}}\mathopen{}\left[h(\hat{\bm{x}}_{1:T})\right]\mathclose{}-\mathbb{E}_{\mathcal{P}^{\star}_{1:T}}\mathopen{}\left[h(\bm{x}^{\star}_{1:T})\right]\mathclose{}$
$\displaystyle\leq\;$ $\displaystyle
L\mathbb{E}_{\mathopen{}\left(\bm{x}^{\star}_{1:T},\hat{\bm{x}}_{1:T}\right)\mathclose{}\sim\overline{\gamma}}\mathopen{}\left[\max_{1\leq
t\leq T}\left\|x_{\star}[t]-\hat{x}[t]\right\|\right]\mathclose{}$
$\displaystyle\leq\;$ $\displaystyle
L\sqrt{\mathbb{E}_{\mathopen{}\left(\bm{x}^{\star}_{1:T},\hat{\bm{x}}_{1:T}\right)\mathclose{}\sim\overline{\gamma}}\mathopen{}\left[\max_{1\leq
t\leq T}\left\|x_{\star}[t]-\hat{x}[t]\right\|^{2}\right]\mathclose{}}$
$\displaystyle\leq\;$ $\displaystyle
L\sqrt{\mathbb{E}_{\mathcal{P}^{\star}_{0:T-1}}\mathopen{}\left[\max_{0\leq
t\leq
T-1}4\mathcal{J}(A_{\mathsf{cl}})^{2}\left\|B\right\|^{2}\left\|\mathopen{}\left(\hat{K}-K^{(H+1)}\right)\mathclose{}x_{\star}[t]\right\|^{2}\right]\mathclose{}}$
$\displaystyle\leq\;$ $\displaystyle
2\sqrt{3}L\mathcal{J}(A_{\mathsf{cl}})\left\|B\right\|\sqrt{1+\log(T)}\sqrt{\mathbb{E}_{\mathcal{P}^{\star}_{0:T-1}}\mathopen{}\left[\frac{1}{T}\sum_{t=0}^{T-1}\left\|(\hat{K}-K^{(H+1)})x_{\star}[t]\right\|^{2}\right]\mathclose{}}$
$\displaystyle\leq\;$ $\displaystyle
2\sqrt{3}L\mathcal{J}(A_{\mathsf{cl}})\left\|B\right\|\sqrt{1+\log(T)}\sqrt{\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})}.$
### C.1 Tightness of Dependence on Spectral Radius in Theorem 3.2
Consider the following scalar LTI system
$\displaystyle x[t+1]=ax[t]+u[t]+w[t],\quad w[t]\sim\mathcal{N}(0,1),$
where $a>0$ without loss of generality. We are given the expert controller
$u^{\star}[t]=k^{\star}x[t]$ that stabilizes the above system, such that we
have
$0<a+k^{\star}<1.$
Suppose the learned controller $\hat{k}$ attains an error of $\varepsilon>0$ with
respect to $k^{\star}$, i.e., $\hat{k}=k^{\star}+\varepsilon$, with
$a+\hat{k}<1$. We recall a couple of facts: the stationary distribution
induced by $k^{\star}$ can be derived by solving the scalar Lyapunov equation
$\displaystyle\sigma^{2}=(a+k^{\star})^{2}\sigma^{2}+1\iff\sigma^{2}=\frac{1}{1-(a+k^{\star})^{2}}.$
Therefore, akin to the setting considered in our paper, we assume the initial
state distribution is the expert stationary distribution. We now study the
tracking between the expert and learned states evolving in the coupled manner
analogous to (26)
$\displaystyle x_{\star}[t+1]$
$\displaystyle=(a+k^{\star})x_{\star}[t]+w[t],\quad
x_{0}\sim\mathcal{N}\mathopen{}\left(0,\frac{1}{1-(a+k^{\star})^{2}}\right)\mathclose{}$
$\displaystyle\hat{x}[t+1]$
$\displaystyle=(a+k^{\star}+\varepsilon)\hat{x}[t]+w[t],\quad
x_{0}\sim\mathcal{N}\mathopen{}\left(0,\frac{1}{1-(a+k^{\star})^{2}}\right)\mathclose{}.$
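The coupled recursions above are straightforward to simulate. The following sketch (illustrative values for $a$, $k^{\star}$, and $\varepsilon$, not taken from the paper's experiments) shares the noise realizations $w[t]$ and the initial condition between the two trajectories, and compares the empirical terminal gap $\mathbb{E}[(x_{\star}[T]-\hat{x}[T])^{2}]$ against its closed-form limit.

```python
import math
import random

random.seed(1)

a, k_star, eps = 1.2, -0.5, 0.05      # illustrative values; 0 < a + k_star < 1
k_hat = k_star + eps
sigma0 = math.sqrt(1.0 / (1.0 - (a + k_star) ** 2))  # expert stationary std dev

T = 50
trials = 5000
gap_sq = 0.0
for _ in range(trials):
    x0 = random.gauss(0, sigma0)      # shared initial condition
    x_star, x_hat = x0, x0
    for _ in range(T):
        w = random.gauss(0, 1)        # shared noise realization (the coupling)
        x_star = (a + k_star) * x_star + w
        x_hat = (a + k_hat) * x_hat + w
    gap_sq += (x_star - x_hat) ** 2
gap_sq /= trials                      # Monte Carlo estimate of E[(x*[T] - xhat[T])^2]

# closed-form limit of E[(x*[T] - xhat[T])^2] as T grows
u, v = a + k_star, a + k_hat
limit = 1 / (1 - u * u) + 1 / (1 - v * v) - 2 / (1 - u * v)
print(gap_sq, limit)
```

With these values the empirical gap at $T=50$ already agrees with the limiting expression to within Monte Carlo error.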
In particular, we show the following lower bound on the tracking error.
###### Proposition C.3
Given the proposed scalar system, the tracking error satisfies the following
lower bound for sufficiently large $T$,
$\displaystyle\mathbb{E}\mathopen{}\left[\max_{1\leq t\leq
T}{\left|x_{\star}[t]-\hat{x}[t]\right|}^{2}\right]\mathclose{}$
$\displaystyle\gtrsim\;$
$\displaystyle\frac{1}{\mathopen{}\left(1-(a+k^{\star})\right)\mathclose{}^{2}}\underbrace{\frac{1}{T}\mathbb{E}\mathopen{}\left[\sum_{t=0}^{T-1}{\left|(\hat{k}-k^{\star})x_{\star}[t]\right|}^{2}\right]\mathclose{}}_{\text{excess
risk of }\hat{k}\text{ on expert
data}}=:\frac{1}{\mathopen{}\left(1-(a+k^{\star})\right)\mathclose{}^{2}}\mathrm{ER}(\hat{k}).$
We note that instantiating the in-expectation upper bound using the maximal
inequality in Proposition C.2 yields
$\displaystyle\mathbb{E}\mathopen{}\left[\max_{1\leq t\leq
T}{\left|x_{\star}[t]-\hat{x}[t]\right|}^{2}\right]\mathclose{}$
$\displaystyle\leq
12\mathcal{J}(a+k^{\star})^{2}\left\|B\right\|^{2}(1+\log(T))\mathrm{ER}(\hat{k})$
$\displaystyle=\frac{12}{(1-(a+k^{\star}))^{2}}(1+\log(T))\mathrm{ER}(\hat{k}).$
Therefore, Proposition C.3 states that up to a log-factor in horizon length
$T$, the polynomial dependence on the expert system’s spectral radius
$a+k^{\star}$ matches that in the upper bound.
Proof We can immediately compute the excess risk in closed form
$\displaystyle\mathrm{ER}(\hat{k}):=\frac{1}{T}\mathbb{E}\mathopen{}\left[\sum_{t=0}^{T-1}{\left|(\hat{k}-k^{\star})x_{\star}[t]\right|}^{2}\right]\mathclose{}$
$\displaystyle=(\hat{k}-k^{\star})^{2}\mathbb{E}\mathopen{}\left[\frac{1}{T}\sum_{t=0}^{T-1}{x_{\star}[t]}^{2}\right]\mathclose{}$
$\displaystyle=\frac{\varepsilon^{2}}{1-(a+k^{\star})^{2}}\quad\text{by
stationarity}.$
Therefore, in addition to the spectral radius showing up in the excess risk,
we need to show the tracking error accrues another factor of the spectral
radius on top of that. We observe
$\displaystyle\max_{1\leq t\leq T}{\left|x_{\star}[t]-\hat{x}[t]\right|}^{2}$
$\displaystyle\geq{\left|x_{\star}[{T}]-\hat{x}[{T}]\right|}^{2}$
$\displaystyle\implies\mathbb{E}\mathopen{}\left[\max_{1\leq t\leq T}{\left|x_{\star}[t]-\hat{x}[t]\right|}^{2}\right]\mathclose{}$
$\displaystyle\geq\mathbb{E}\mathopen{}\left[{\left|x_{\star}[T]-\hat{x}[T]\right|}^{2}\right]\mathclose{}$
$\displaystyle=\mathbb{E}\mathopen{}\left[{x_{\star}[{T}]}^{2}\right]\mathclose{}+\mathbb{E}\mathopen{}\left[\hat{x}[{T}]^{2}\right]\mathclose{}-2\mathbb{E}\mathopen{}\left[x_{\star}[{T}]\hat{x}[{T}]\right]\mathclose{}$
$\displaystyle=\frac{1}{1-(a+k^{\star})^{2}}+\frac{(a+\hat{k})^{2T}}{1-(a+k^{\star})^{2}}+\sum_{t=0}^{T-1}(a+\hat{k})^{2(T-1-t)}$
$\displaystyle\qquad-\frac{2(a+k^{\star})^{T}(a+\hat{k})^{T}}{1-(a+k^{\star})^{2}}-2\sum_{t=0}^{T-1}(a+k^{\star})^{T-1-t}(a+\hat{k})^{T-1-t},$
where the summations come from the fact that the initial condition is drawn
from the stationary distribution induced by the expert, not the learned
controller, in conjunction with the formula
$x[t]=a^{t}x_{0}+\sum_{k=0}^{t-1}a^{t-1-k}w[k]$ for the system
$x[t+1]=ax[t]+w[t]$. Since in the limit we have
$\displaystyle\lim_{T\to\infty}\mathbb{E}\mathopen{}\left[{\left|x_{\star}[{T}]-\hat{x}[{T}]\right|}^{2}\right]\mathclose{}$
$\displaystyle=\frac{1}{1-(a+k^{\star})^{2}}+\frac{1}{1-(a+\hat{k})^{2}}-\frac{2}{1-(a+k^{\star})(a+\hat{k})},$
we simply take $T$ large enough such that
$\displaystyle\mathbb{E}\mathopen{}\left[{\left|x_{\star}[{T}]-\hat{x}[{T}]\right|}^{2}\right]\mathclose{}$
$\displaystyle\geq\frac{1}{2}\mathopen{}\left(\frac{1}{1-(a+k^{\star})^{2}}+\frac{1}{1-(a+\hat{k})^{2}}-\frac{2}{1-(a+k^{\star})(a+\hat{k})}\right)\mathclose{}.$
We now bound
$\displaystyle\frac{1}{1-(a+k^{\star})^{2}}+\frac{1}{1-(a+\hat{k})^{2}}-\frac{2}{1-(a+k^{\star})(a+\hat{k})}$
$\displaystyle=\;$
$\displaystyle\sum_{t=0}^{\infty}(a+k^{\star})^{2t}+(a+\hat{k})^{2t}-2(a+k^{\star})^{t}(a+\hat{k})^{t}$
$\displaystyle=\;$
$\displaystyle\sum_{t=0}^{\infty}((a+\hat{k})^{t}-(a+k^{\star})^{t})^{2}$
$\displaystyle=\;$
$\displaystyle\sum_{t=0}^{\infty}\mathopen{}\left((\hat{k}-k^{\star})\mathopen{}\left((a+k^{\star})^{t-1}+(a+k^{\star})^{t-2}(a+\hat{k})+\cdots+(a+k^{\star})(a+\hat{k})^{t-2}+(a+\hat{k})^{t-1}\right)\mathclose{}\right)\mathclose{}^{2}$
$\displaystyle\geq\;$
$\displaystyle\varepsilon^{2}\sum_{t=0}^{\infty}\mathopen{}\left(t(a+k^{\star})^{t-1}\right)\mathclose{}^{2}$
$\displaystyle=\;$
$\displaystyle\varepsilon^{2}\frac{1+(a+k^{\star})^{2}}{(1-(a+k^{\star})^{2})^{3}},$
where we used the algebraic identity
$u^{t}-v^{t}=(u-v)(u^{t-1}+u^{t-2}v+\cdots+uv^{t-2}+v^{t-1}),$
and the inequality (assuming $u\leq v$)
$u^{t-1}+u^{t-2}v+\cdots+uv^{t-2}+v^{t-1}\geq tu^{t-1},$
setting $u:=a+k^{\star}$ and $v:=a+\hat{k}=a+k^{\star}+\varepsilon$. Now
recalling that the excess risk by stationarity is given by
$\mathrm{ER}(\hat{k})=\frac{\varepsilon^{2}}{1-(a+k^{\star})^{2}},$
we can immediately infer
$\displaystyle\mathbb{E}\mathopen{}\left[{\left|x_{\star}[{T}]-\hat{x}[{T}]\right|}^{2}\right]\mathclose{}$
$\displaystyle\geq
0.5\varepsilon^{2}\frac{1+(a+k^{\star})^{2}}{(1-(a+k^{\star})^{2})^{3}}$
$\displaystyle=0.5\mathrm{ER}(\hat{k})\frac{1+(a+k^{\star})^{2}}{(1-(a+k^{\star})^{2})^{2}}$
$\displaystyle>\frac{0.5}{(1-(a+k^{\star})^{2})^{2}}\mathrm{ER}(\hat{k}).$
In short, we have established a lower bound on the expected imitation gap:
$\displaystyle\mathbb{E}\mathopen{}\left[\max_{0\leq t\leq T-1}{\left|x_{\star}[t]-\hat{x}[t]\right|}^{2}\right]\mathclose{}$
$\displaystyle\geq\mathbb{E}\mathopen{}\left[{\left|x_{\star}[{T-1}]-\hat{x}[{T-1}]\right|}^{2}\right]\mathclose{}$
$\displaystyle\gtrsim\frac{1}{(1-(a+k^{\star})^{2})^{2}}\mathrm{ER}(\hat{k}).$
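The geometric-series manipulations in the proof above can be checked numerically. The sketch below (pure Python, illustrative values $u=0.7$ and $\varepsilon=0.05$) confirms that the truncated series $\sum_{t}(v^{t}-u^{t})^{2}$ matches the closed form $\frac{1}{1-u^{2}}+\frac{1}{1-v^{2}}-\frac{2}{1-uv}$ and dominates the claimed lower bound $\varepsilon^{2}\frac{1+u^{2}}{(1-u^{2})^{3}}$.

```python
u, eps = 0.7, 0.05            # illustrative: u = a + k*, v = a + k_hat = u + eps
v = u + eps

# truncated series sum_{t >= 0} (v^t - u^t)^2; tail beyond t = 2000 is negligible
series = sum((v ** t - u ** t) ** 2 for t in range(2000))

# closed form from summing the three geometric series separately
closed = 1 / (1 - u * u) + 1 / (1 - v * v) - 2 / (1 - u * v)

# claimed lower bound eps^2 * (1 + u^2) / (1 - u^2)^3,
# i.e. eps^2 * sum_{t >= 1} t^2 (u^2)^(t-1)
lower = eps * eps * (1 + u * u) / (1 - u * u) ** 3

print(series, closed, lower)
```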
We now claim that $\frac{1}{(1-(a+k^{\star})^{2})^{2}}$ matches the dependence
on the spectral radius in the upper bound. Combining Proposition C.1 and
Proposition C.2, we have the following upper bound on the expected tracking
error:
$\displaystyle\mathbb{E}\mathopen{}\left[\max_{0\leq t\leq
T-1}{\left|x_{\star}[t]-\hat{x}[t]\right|}^{2}\right]\mathclose{}$
$\displaystyle\leq\frac{12}{\mathopen{}\left(1-(a+k^{\star})\right)\mathclose{}^{2}}(1+\log(T))\mathrm{ER}(\hat{k}).$
Comparing $\frac{1}{(1-(a+k^{\star})^{2})^{2}}$ to
$\frac{1}{\mathopen{}\left(1-(a+k^{\star})\right)\mathclose{}^{2}}$, we get
$\displaystyle\frac{\frac{1}{\mathopen{}\left(1-(a+k^{\star})\right)\mathclose{}^{2}}}{\frac{1}{(1-(a+k^{\star})^{2})^{2}}}$
$\displaystyle=\frac{\mathopen{}\left(1-(a+k^{\star})\right)\mathclose{}^{2}\mathopen{}\left(1+(a+k^{\star})\right)\mathclose{}^{2}}{\mathopen{}\left(1-(a+k^{\star})\right)\mathclose{}^{2}}=\mathopen{}\left(1+(a+k^{\star})\right)\mathclose{}^{2}\geq
1,$
which states that the dependence on the spectral radius in the lower and
upper bounds matches up to a constant factor (using
$(1+(a+k^{\star}))^{2}\leq 4$):
$\displaystyle\frac{1}{8(1-(a+k^{\star}))^{2}}\mathrm{ER}(\hat{k})\leq\mathbb{E}\mathopen{}\left[\max_{0\leq
t\leq
T-1}{\left|x_{\star}[t]-\hat{x}[t]\right|}^{2}\right]\mathclose{}\leq\frac{12}{(1-(a+k^{\star}))^{2}}(1+\log(T))\mathrm{ER}(\hat{k}),$
which completes the result.
## Appendix D In-Expectation Bounds for LQR via the Tracking Error
As previewed in Remark 3.2, the generalization bounds on the excess risk and
tracking error are strong enough to directly imply in-expectation bounds on
the LQR costs of the closed-loop expert and learned systems. However, this
does not follow trivially from an application of Lipschitzness with respect to the
trajectory-wise metric
$d(\bm{x}_{1:T},\bm{y}_{1:T})=\max_{t}\left\|x[t]-y[t]\right\|$, due to the
input cost $u^{\top}Ru$ depending on different controllers $\hat{K}$ and
$K^{(H+1)}$, which cannot be captured purely as a metric on trajectory states.
Therefore, some massaging is required to get the analogous bounds.
Let us define the (root) LQR cost on pairs
$\mathopen{}\left(\bm{x}_{1:T},K\right)\mathclose{}$
$\displaystyle\begin{split}h\mathopen{}\left(\mathopen{}\left(\bm{x}_{1:T},K\right)\mathclose{}\right)\mathclose{}&:=\max_{1\leq
t\leq
T}\sqrt{x[t]^{\top}\mathopen{}\left(Q+K^{\top}RK\right)\mathclose{}x[t]}\\\
&=\max_{1\leq t\leq T}\left\|\begin{bmatrix}Q^{1/2}\\\
R^{1/2}K\end{bmatrix}x[t]\right\|\end{split}$ (32)
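The identity underlying (32), namely $x^{\top}(Q+K^{\top}RK)x=\big\|\begin{bmatrix}Q^{1/2}\\ R^{1/2}K\end{bmatrix}x\big\|^{2}$, is easy to confirm numerically. Below is a minimal sketch with hypothetical diagonal $Q$ and $R$ (so the square roots are entrywise) and an arbitrary gain $K$:

```python
import math

# hypothetical 2x2 cost matrices (diagonal) and gain; arbitrary state x
Q = [2.0, 0.5]                  # diagonal entries of Q
R = [1.0, 3.0]                  # diagonal entries of R
K = [[0.3, -0.1], [0.2, 0.4]]
x = [1.0, -2.0]

Kx = [sum(K[i][j] * x[j] for j in range(2)) for i in range(2)]

# left-hand side: x^T (Q + K^T R K) x = x^T Q x + (Kx)^T R (Kx)
quad = sum(Q[i] * x[i] ** 2 for i in range(2)) \
     + sum(R[i] * Kx[i] ** 2 for i in range(2))

# right-hand side: squared norm of the stacked vector [Q^{1/2} x; R^{1/2} K x]
stacked = [math.sqrt(Q[i]) * x[i] for i in range(2)] \
        + [math.sqrt(R[i]) * Kx[i] for i in range(2)]
norm_sq = sum(v * v for v in stacked)

print(quad, norm_sq)
```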
Our goal is to show that $h(\cdot)$ is somewhat Lipschitz with respect to the
trajectory (pseudo)metric
$\displaystyle
d\mathopen{}\left(\mathopen{}\left(\bm{x}_{1:T},K_{1}\right)\mathclose{},\mathopen{}\left(\bm{y}_{1:T},K_{2}\right)\mathclose{}\right)\mathclose{}$
$\displaystyle:=\max_{1\leq t\leq T}\left\|x[t]-y[t]\right\|,$ (33)
such that the tools we developed for bounding the tracking error imply
an in-expectation bound for LQR. This culminates in the following result.
###### Proposition D.1
Let $(\hat{\Phi},\hat{F}^{(H+1)})$ denote the learned representation and
target task weights, and $\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})$ denote the
corresponding excess risk. Define $A_{\mathsf{cl}}:=A+BK^{(H+1)}$. As in
Theorem 3.2, assume that the excess risk satisfies:
$\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})\lesssim\frac{\lambda_{\min}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}}{\mathcal{J}\mathopen{}\left(A_{\mathsf{cl}}\right)\mathclose{}^{2}\left\|B\right\|^{2}}.$
Let the cost $h(\cdot)$ be the LQR cost (32). The following in-expectation
bound on the gap between the closed-loop LQR costs induced by the expert
$K^{(H+1)}$ and learned controller $\hat{K}$ holds:
$\displaystyle\begin{split}&{\left|\mathbb{E}_{\hat{\mathcal{P}}_{1:T}}\mathopen{}\left[h\mathopen{}\left(\mathopen{}\left(\hat{\bm{x}}_{1:T},\hat{K}\right)\mathclose{}\right)\mathclose{}\right]\mathclose{}-\mathbb{E}_{\mathcal{P}^{\star}_{1:T}}\mathopen{}\left[h\mathopen{}\left(\mathopen{}\left(\bm{x}^{\star}_{1:T},K^{(H+1)}\right)\mathclose{}\right)\mathclose{}\right]\mathclose{}\right|}\\\
\lesssim\;&\mathcal{C}^{(H+1)}\sigma_{z}\sqrt{\log(T)}\sqrt{\frac{kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}}{cN_{1}TH}+\frac{kn_{u}+\log(\frac{1}{\delta^{\prime}})}{N_{2}T}},\end{split}$
where
$\mathcal{C}^{(H+1)}:=\lambda_{\max}(Q)^{1/2}\mathcal{J}(A_{\mathsf{cl}})\left\|B\right\|+\lambda_{\max}(R)^{1/2}\mathopen{}\left(\left\|K^{(H+1)}\right\|+\sqrt{\frac{\operatorname{Tr}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}}{\lambda_{\min}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}}}\right)\mathclose{},$
is a constant depending on the expert system and the LQ cost.
Proof First, consider the following elementary inequality:
$\displaystyle\left\|M_{1}x_{1}\right\|-\left\|M_{2}x_{2}\right\|$
$\displaystyle=\left\|M_{1}x_{1}\right\|-\left\|M_{1}x_{2}\right\|+\left\|M_{1}x_{2}\right\|-\left\|M_{2}x_{2}\right\|$
$\displaystyle\leq\left\|M_{1}(x_{1}-x_{2})\right\|+\left\|(M_{1}-M_{2})x_{2}\right\|$
$\displaystyle\leq\left\|M_{1}\right\|\left\|x_{1}-x_{2}\right\|+\left\|x_{2}\right\|\left\|M_{1}-M_{2}\right\|$
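This elementary inequality is easy to verify numerically. The sketch below checks one instance with arbitrary $2\times 2$ matrices and vectors, computing the exact spectral norm from the eigenvalues of $M^{\top}M$:

```python
import math

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def norm(x):
    return math.sqrt(sum(v * v for v in x))

def spec_norm_2x2(M):
    # exact spectral norm of a 2x2 matrix: sqrt of the largest eigenvalue of M^T M
    a = M[0][0] * M[0][0] + M[1][0] * M[1][0]
    b = M[0][0] * M[0][1] + M[1][0] * M[1][1]
    d = M[0][1] * M[0][1] + M[1][1] * M[1][1]
    tr, det = a + d, a * d - b * b
    lam_max = (tr + math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2
    return math.sqrt(lam_max)

M1 = [[1.0, 0.2], [0.0, 0.8]]     # arbitrary illustrative matrices and vectors
M2 = [[0.9, 0.1], [0.1, 0.7]]
x1, x2 = [1.0, -2.0], [0.5, 1.5]

lhs = norm(matvec(M1, x1)) - norm(matvec(M2, x2))
Mdiff = [[M1[i][j] - M2[i][j] for j in range(2)] for i in range(2)]
rhs = spec_norm_2x2(M1) * norm([x1[i] - x2[i] for i in range(2)]) \
      + norm(x2) * spec_norm_2x2(Mdiff)
print(lhs, rhs)
```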
Applying this to
$h\mathopen{}\left(\mathopen{}\left(\bm{x}_{1:T},K_{1}\right)\mathclose{}\right)\mathclose{}-h\mathopen{}\left(\mathopen{}\left(\bm{y}_{1:T},K_{2}\right)\mathclose{}\right)\mathclose{}$,
for fixed $K_{1}$, $K_{2}$ we get
$\displaystyle\begin{split}&h\mathopen{}\left(\mathopen{}\left(\bm{x}_{1:T},K_{1}\right)\mathclose{}\right)\mathclose{}-h\mathopen{}\left(\mathopen{}\left(\bm{y}_{1:T},K_{2}\right)\mathclose{}\right)\mathclose{}\\\
=\;&\max_{1\leq t\leq T}\left\|\begin{bmatrix}Q^{1/2}\\\
R^{1/2}K_{1}\end{bmatrix}x[t]\right\|-\max_{1\leq t\leq
T}\left\|\begin{bmatrix}Q^{1/2}\\\ R^{1/2}K_{2}\end{bmatrix}y[t]\right\|\\\
\leq\;&\max_{1\leq t\leq T}\left\|\begin{bmatrix}Q^{1/2}\\\
R^{1/2}K_{1}\end{bmatrix}x[t]\right\|-\left\|\begin{bmatrix}Q^{1/2}\\\
R^{1/2}K_{2}\end{bmatrix}y[t]\right\|\\\ \leq\;&\max_{1\leq t\leq
T}\left\|\begin{bmatrix}Q^{1/2}\\\
R^{1/2}K_{1}\end{bmatrix}\right\|\left\|x[t]-y[t]\right\|+\left\|y[t]\right\|\left\|\begin{bmatrix}Q^{1/2}\\\
R^{1/2}K_{1}\end{bmatrix}-\begin{bmatrix}Q^{1/2}\\\
R^{1/2}K_{2}\end{bmatrix}\right\|\\\ \leq\;&\left\|\begin{bmatrix}Q^{1/2}\\\
R^{1/2}K_{1}\end{bmatrix}\right\|\max_{1\leq t\leq
T}\left\|x[t]-y[t]\right\|+\lambda_{\max}(R)^{1/2}\left\|K_{1}-K_{2}\right\|\max_{1\leq
t\leq T}\left\|y[t]\right\|\end{split}$ (34)
We can take the expectation of inequality (34) to yield, for any coupling of
the learned and expert trajectory distributions
$\Gamma(\hat{\mathcal{P}}_{1:T},\mathcal{P}^{\star}_{1:T})$,
$\displaystyle\begin{split}&{\left|\mathbb{E}_{\hat{\mathcal{P}}_{1:T}}\mathopen{}\left[h\mathopen{}\left(\mathopen{}\left(\hat{\bm{x}}_{1:T},\hat{K}\right)\mathclose{}\right)\mathclose{}\right]\mathclose{}-\mathbb{E}_{\mathcal{P}^{\star}_{1:T}}\mathopen{}\left[h\mathopen{}\left(\mathopen{}\left(\bm{x}^{\star}_{1:T},K^{(H+1)}\right)\mathclose{}\right)\mathclose{}\right]\mathclose{}\right|}\\\
\leq\;&\lambda_{\max}\mathopen{}\left(Q+\hat{K}^{\top}R\hat{K}\right)\mathclose{}^{1/2}\mathbb{E}_{\Gamma(\hat{\mathcal{P}}_{1:T},\mathcal{P}^{\star}_{1:T})}\mathopen{}\left[\max_{t\leq
T}\left\|\hat{x}[t]-x_{\star}[t]\right\|\right]\mathclose{}\\\
\quad\quad&+\lambda_{\max}(R)^{1/2}\left\|\hat{K}-K^{(H+1)}\right\|\mathbb{E}_{\mathcal{P}^{\star}_{1:T}}\mathopen{}\left[\max_{1\leq
t\leq T}\left\|x_{\star}[t]\right\|\right]\mathclose{}.\end{split}$ (35)
Setting $\Gamma(\hat{\mathcal{P}}_{1:T},\mathcal{P}^{\star}_{1:T})$ to be the
coupling described in (26) where the learned and expert trajectories are
evaluated on the same realizations of randomness, we may apply the tools
developed in Theorem C.2 to bound the first term. As for the second term, we
may apply a maximal-type inequality akin to Proposition C.2. In particular,
from earlier computations we have
$\displaystyle\mathbb{E}_{\Gamma(\hat{\mathcal{P}}_{1:T},\mathcal{P}^{\star}_{1:T})}\mathopen{}\left[\max_{t\leq
T}\left\|\hat{x}[t]-x_{\star}[t]\right\|\right]\mathclose{}$
$\displaystyle\leq
2\sqrt{3}\mathcal{J}(A_{\mathsf{cl}})\left\|B\right\|\sqrt{1+\log(T)}\sqrt{\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})},$
(36)
and adapting Proposition C.2 we get
$\displaystyle\begin{split}\mathbb{E}_{\mathcal{P}^{\star}_{1:T}}\mathopen{}\left[\max_{1\leq
t\leq
T}\left\|x_{\star}[t]\right\|\right]\mathclose{}&\leq\sqrt{\mathbb{E}_{\mathcal{P}^{\star}_{1:T}}\mathopen{}\left[\max_{1\leq
t\leq T}\left\|x_{\star}[t]\right\|^{2}\right]\mathclose{}}\\\
&\leq\sqrt{3}\sqrt{1+\log(T)}\sqrt{\mathbb{E}\mathopen{}\left[\left\|x_{\star}[0]\right\|^{2}\right]\mathclose{}}\\\
&\leq\sqrt{3}\sqrt{1+\log(T)}\sqrt{\operatorname{Tr}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}}.\end{split}$
Lastly, from the definition of the excess risk we have
$\displaystyle\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})$
$\displaystyle=\frac{1}{2}\operatorname{Tr}\mathopen{}\left((\hat{K}-K^{(H+1)})\Sigma_{x}^{(H+1)}(\hat{K}-K^{(H+1)})^{\top}\right)\mathclose{}$
$\displaystyle\implies\left\|\hat{K}-K^{(H+1)}\right\|$
$\displaystyle\leq\sqrt{2}\,\lambda_{\min}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}^{-1/2}\sqrt{\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})}.$
(37)
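The step from the excess risk to the controller gap rests on the trace inequality $\operatorname{Tr}(\Delta\Sigma\Delta^{\top})\geq\lambda_{\min}(\Sigma)\left\|\Delta\right\|_{F}^{2}$. Below is a small numerical check with hypothetical values for $\Sigma$ and $\Delta$ (the latter standing in for $\hat{K}-K^{(H+1)}$), with the factor $\sqrt{2}$ coming from the $\tfrac{1}{2}$ in the definition of the excess risk:

```python
import math

# illustrative 2x2 SPD covariance and controller gap (hypothetical values)
Sigma = [[2.0, 0.3], [0.3, 1.0]]
Delta = [[0.1, -0.2], [0.05, 0.15]]   # stands in for K_hat - K^(H+1)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

# excess risk ER = (1/2) Tr(Delta Sigma Delta^T)
ER = 0.5 * trace(matmul(matmul(Delta, Sigma), transpose(Delta)))

# lambda_min of the symmetric 2x2 Sigma via the quadratic eigenvalue formula
tr, det = trace(Sigma), Sigma[0][0] * Sigma[1][1] - Sigma[0][1] * Sigma[1][0]
lam_min = (tr - math.sqrt(tr * tr - 4 * det)) / 2

fro = math.sqrt(sum(v * v for row in Delta for v in row))   # ||Delta||_F >= ||Delta||
bound = math.sqrt(2 * ER / lam_min)                         # sqrt(2) * lam_min^{-1/2} * sqrt(ER)
print(fro, bound)
```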
Therefore, plugging expressions (36) and (37) back into the bound (35), we have
essentially written the bound on the expected cost gap to scale with
$\sqrt{\log(T)\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})},$
modulo the system/LQ cost-related parameters. The rest of the proof follows by
instantiating the generalization bound on
$\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})$ from Theorem 3.1, and combining the
problem-related parameters. In particular, the first term of (35) involves the
learned controller:
$\lambda_{\max}\mathopen{}\left(Q+\hat{K}^{\top}R\hat{K}\right)\mathclose{}$.
This can be crudely upper bounded by
$\displaystyle\lambda_{\max}\mathopen{}\left(Q+\hat{K}^{\top}R\hat{K}\right)\mathclose{}^{1/2}$
$\displaystyle\leq\lambda_{\max}(Q)^{1/2}+\lambda_{\max}(R)^{1/2}\left\|\hat{K}\right\|$
$\displaystyle\leq\lambda_{\max}(Q)^{1/2}+\lambda_{\max}(R)^{1/2}\mathopen{}\left(\left\|K^{(H+1)}\right\|+\frac{1}{2\mathcal{J}(A_{\mathsf{cl}})\left\|B\right\|}\right)\mathclose{},$
where the last line comes from the reverse triangle inequality applied to the
burn-in requirement (8). A tighter bound can be derived by using (37), but the scaling
of the final bound is the same. Therefore, we have
$\displaystyle\begin{split}&{\left|\mathbb{E}_{\hat{\mathcal{P}}_{1:T}}\mathopen{}\left[h\mathopen{}\left(\mathopen{}\left(\hat{\bm{x}}_{1:T},\hat{K}\right)\mathclose{}\right)\mathclose{}\right]\mathclose{}-\mathbb{E}_{\mathcal{P}^{\star}_{1:T}}\mathopen{}\left[h\mathopen{}\left(\mathopen{}\left(\bm{x}^{\star}_{1:T},K^{(H+1)}\right)\mathclose{}\right)\mathclose{}\right]\mathclose{}\right|}\\\
\lesssim\;&\mathcal{C}^{(H+1)}\sqrt{\log(T)\mathrm{ER}(\hat{\Phi},\hat{F}^{(H+1)})}\\\
\lesssim\;&\mathcal{C}^{(H+1)}\sigma_{z}\sqrt{\log(T)}\sqrt{\frac{kn_{x}\log\mathopen{}\left(N_{1}T\frac{\bar{\lambda}}{\underline{\lambda}}\right)\mathclose{}}{cN_{1}TH}+\frac{kn_{u}+\log(\frac{1}{\delta^{\prime}})}{N_{2}T}},\end{split}$
where
$\displaystyle\mathcal{C}^{(H+1)}$
$\displaystyle=\mathcal{J}(A_{\mathsf{cl}})\left\|B\right\|\mathopen{}\left(\lambda_{\max}(Q)^{1/2}+\lambda_{\max}(R)^{1/2}\mathopen{}\left(\left\|K^{(H+1)}\right\|+\frac{1}{2\mathcal{J}(A_{\mathsf{cl}})\left\|B\right\|}\right)\mathclose{}\right)\mathclose{}$
$\displaystyle\quad\quad+\lambda_{\max}(R)^{1/2}\sqrt{\frac{\operatorname{Tr}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}}{\lambda_{\min}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}}}$
$\displaystyle\cong\lambda_{\max}(Q)^{1/2}\mathcal{J}(A_{\mathsf{cl}})\left\|B\right\|+\lambda_{\max}(R)^{1/2}\mathopen{}\left(\left\|K^{(H+1)}\right\|+\sqrt{\frac{\operatorname{Tr}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}}{\lambda_{\min}\mathopen{}\left(\Sigma_{x}^{(H+1)}\right)\mathclose{}}}\right)\mathclose{}.$
This completes the proof.
Five Properties of Specific Curiosity You Didn't Know Curious Machines Should Have
Nadia M. Ady
Dept. of Computing Science, University of Alberta &
Alberta Machine Intelligence Institute (Amii)
Edmonton, Alberta, Canada
Roshan Shariff
Dept. of Computing Science, University of Alberta &
Alberta Machine Intelligence Institute (Amii)
Edmonton, Alberta, Canada
Johannes Günther
Dept. of Computing Science, University of Alberta
Edmonton, Alberta, Canada &
Sony AI
Patrick M. Pilarski
Depts. of Medicine and Computing Science, University of Alberta &
Alberta Machine Intelligence Institute (Amii)
Edmonton, Alberta, Canada
Curiosity for machine agents has been a focus of lively research activity. The study of human and animal curiosity, particularly specific curiosity, has unearthed several properties that would offer important benefits for machine learners, but that have not yet been well-explored in machine intelligence. In this work, we conduct a comprehensive, multidisciplinary survey of the field of animal and machine curiosity. As a principal contribution of this work, we use this survey as a foundation to introduce and define what we consider to be five of the most important properties of specific curiosity: 1) directedness towards inostensible referents, 2) cessation when satisfied, 3) voluntary exposure, 4) transience, and 5) coherent long-term learning. As a second main contribution of this work, we show how these properties may be implemented together in a proof-of-concept reinforcement learning agent: we demonstrate how the properties manifest in the behaviour of this agent in a simple non-episodic grid-world environment that includes curiosity-inducing locations and induced targets of curiosity. As we would hope, our example of a computational specific curiosity agent exhibits short-term directed behaviour while updating long-term preferences to adaptively seek out curiosity-inducing situations. This work, therefore, presents a landmark synthesis and translation of specific curiosity to the domain of machine learning and reinforcement learning and provides a novel view into how specific curiosity operates and in the future might be integrated into the behaviour of goal-seeking, decision-making computational agents in complex environments.
§ BOOKS, BOOKSTORES, AND MACHINE CURIOSITY
Imagine you have a favourite corner bookstore near your home. You walk into the shop, browse the shelves for something new, take it home and read it end-to-end in less than a week—perhaps at the expense of sleep. The reading feels good; the unfolding plot makes you almost unable to put down the book. You read the last page and want more. There isn't any more book left, so you walk directly back to the bookstore for a chance at another great read. You don't buy and read the same book (that would be silly and you know how it ends); instead, you know that the bookstore can give you a new engaging reading experience. The more you read, the more you like reading—and that corner bookstore too!
This example is rooted in the properties of human curiosity. In this paper, we focus on improving the specificity of how we think about curiosity with the goal of facilitating the implementation of key properties of human curiosity in machines.
Humans have thought about their own curiosity for thousands of years, dating back at least to Aristotle in 350 BCE <cit.>. The study of human curiosity remains an active area of research with many diverse interpretations; recent psychological, neuroscientific and philosophical accounts by <cit.> and <cit.> review some of this diversity of thought. Over the last three decades, curiosity has started to also catch the focused attention of researchers seeking to create increasingly intelligent non-human machines. Select ideas from the study of human curiosity inspired fantastic breakthroughs in machine intelligence, from a robot dog shifting its own learning focus across progressively more difficult situations <cit.>
to a simulated agent achieving higher-than-ever-before scores in Montezuma's Revenge, one of the so-called “hard exploration” games in the Atari suite <cit.>. Work on machine curiosity is expected to continue to play an influential role in machine intelligence research. Advancing curiosity in the domain of machine intelligence is also expected to have substantial reciprocal benefits for research domains focused on human and animal curiosity, like those in psychology, education, philosophy, neuroscience, and behavioural economics.[Advances in machine intelligence have long supported the development of new theories of biological intelligence <cit.>. We propose that the understanding of curiosity, as a facet of intelligence, can be similarly bolstered through the development of models of curiosity for machine intelligence. Given curiosity's political role “equip[ping] us to pursue a more intellectually vibrant and equitable world” (p. xi–xii), scholars like <cit.> have emphasized the urgency of transdisciplinary conversation on curiosity.]
To readers focused on curiosity in those domains: this paper is in large part addressed to you. One goal of this paper is to provide a new perspective on curiosity applicable to any learner, whether human, animal, or machine. Implementing any concept as an algorithm requires a different way of thinking. The abstractions that have been used thus far to improve our understanding of biological curiosity are different from those needed to build a curious machine. This approach and this work provide a unique perspective that may help researchers from multiple disciplines understand curiosity more deeply. This paper is meant to contribute as much to the field of curiosity studies <cit.> as to that of machine intelligence.
The synthesis completed in this work has led us to define and explore five key properties needed to capture the full value of human curiosity for machine curiosity. While existing frameworks for curiosity in machine intelligence have offered clear successes, we suggest that learners implementing those frameworks would not exhibit the full range of curious behaviours exhibited in our bookstore example[This argument can be found in Section <ref>,
but we recommend understanding the five key properties in Section <ref> first.]
—and we do want to see that full range. In contrast, with our five properties, we posit that a learner would exhibit recognizably curious behaviour.
Our five properties are all drawn from the study of specific curiosity. Specific curiosity has been described by <cit.> as “the desire for a particular piece of information.”
In this paper, the term specific curiosity refers to one of the most intuitive uses of the unadorned term curiosity, as it is the cognitive and emotional condition humans imply by saying, “I am curious to know $X$.” However, we aim not to consider specific curiosity through the lens of any particular definition in this work. Instead, we focus on detailed descriptions of its properties, allowing for future proposals of aspects of specific curiosity that this list does not include.
To understand why we refer to specific curiosity in particular, it helps to know some history of curiosity research. The term curiosity has been used as an umbrella term to describe a number of phenomena, generally information-seeking and knowledge-seeking behaviours in humans and other animals <cit.>. When a particular subset of these phenomena is studied, it often acquires a distinct name to define the scope of the study and clarify that the behaviours of interest may or may not represent a “different” phenomenon than other phenomena under the curiosity umbrella <cit.>. This choice allows authors to leave open the possibility that the phenomenon may be “different” in any of a number of ways, such as having different underlying mechanisms. Specific curiosity is one such subset of the curiosity umbrella.
In particular, the identifier `specific' is typically used in contrast with `diversive' <cit.>. While only later used with the word `curiosity,' the specific–diversive division derives from <cit.>, who differentiated taking exploratory action for the purpose of learning something specific (specific exploration) versus for the purpose of relieving boredom or increasing stimulation (diversive exploration) <cit.>. While <cit.> felt that diversive exploration seemed “to be motivated by factors quite different from curiosity” (p. 26), the terms “specific curiosity” and “diversive curiosity” have been used by other authors over the intervening years. We have chosen to adopt the term specific curiosity not only to emphasize that we are interested in a motivation to learn something specific,
but also to differentiate our goals from those of works on machine curiosity typical today (see Section <ref> for an overview).
In Section <ref>, we describe the five key properties of specific curiosity in detail, specifically considering in Section <ref> their translation to the domain of reinforcement learning and related curiosity methodologies therein. In Sec. <ref>, we then offer an experimental demonstration of a computational specific curiosity agent inspired by those properties, along with detailed analysis of the resulting behaviour both with the properties intact and when each individual property is ablated in turn. Would a machine learner exhibit behaviour similar to that of the curious reader (you!) if placed in a similar setting? In our experiment, we will show how including just three of the key properties already helps a machine learner exhibit behaviour analogous to yours in the bookstore example. A machine learner might indeed return to the bookstore with the addition of a few specific and possibly easy to implement computational properties of specific curiosity.
§ UNDERSTANDING SPECIFIC CURIOSITY
As you were reading your book, why did you have trouble putting it down? A clever author can walk the reader from question to question along the narrative. Each individual question seems to be a variation on: “What's going to happen next?” but each question is new and specific to the moment (How did they get out of the locked room? What is that character's motivation? Did the butler do it?) You know how to find each answer and in doing so, satisfy your curiosity—keep reading!
§.§ A Framework for Expressing Specific Curiosity
Our first major act of synthesis in this manuscript will be to conceptually separate the moment where curiosity is induced from the moment where curiosity is satisfied. A curious learner cycles between these two types of situations. While <cit.> proposed a similar cycle—the Prediction, Appraisal, Curiosity, and Exploration (PACE) cycle (p. 1015)—their focus was on the development of a neuroscientific framework. Their neuroscientific focus does not emphasize the conceptual options and multiplicity of theoretical positions that we believe will best support the machine intelligence research community. In contrast, our synthesis is designed to support exploration of the range of possibilities for effective machine curiosity.
Two key ideas are needed for understanding specific curiosity: (1) specific curiosity involves the consideration and manipulation of something the learner does not know: an inostensible concept; (2) inducing and satisfying curiosity require substantially different cognitive (and often physical) activities from a learner. Neither of these ideas is commonplace in the machine curiosity literature to date. Within this section, we set the stage by providing detail to develop the reader's intuition of these ideas, as this intuition will be needed to understand the five key properties that follow. From a computing perspective, where we aim to implement these ideas, some of the language we will use to describe our framework will be uncomfortably vague. This language includes abstractions like knowledge, concept, and object. However, given the research community's current understanding of minds—both biological and machine—these abstractions are still necessary.[Our view on our inability to define these abstractions mirrors <cit.>'s defense of using the word concept without definition (as translated by Geach, pp. 42–43): “If something has been discovered that is simple, or at least must count as simple for the time being, we shall have to coin a term for it, since language will not contain an expression that exactly answers.”] The challenging work of understanding mechanisms of mind is ongoing, and with progress towards solidifying these abstractions, our understanding and implementations of curiosity will improve as well.
§.§.§ Inostensible Concepts
As you asked each question about the narrative of your book, you were able to think about something you wanted to know. This experience follows the perspective put forward by <cit.> where specific curiosity[<cit.> actually expresses the information-gap perspective as a description of specific epistemic state curiosity, a term which delineates his concept of interest on traditional axes of types of curiosity: specific vs. diversive, perceptual vs. epistemic, and state vs. trait. As we noted in Section <ref>, the specific-diversive axis stems from the difference between taking exploratory action for the purpose of learning something specific (specific exploration) versus the purpose of relieving boredom or increasing stimulation (diversive exploration) (; ). See Footnote <ref> for more description of the state-trait distinction. In this paper, we focus on specific state curiosity, but primarily use the simplified term specific curiosity with `state' implied throughout. Finally, the perceptual-epistemic axis is meant to provisionally differentiate motivation relieved by perception from motivation relieved “by the acquisition of knowledge" (; ; ). For the purposes of this paper, we need not make a distinction along this axis, as the properties effectively describe either perceptual or epistemic forms. Even , who made the original distinction, suggested that epistemic and perceptual curiosity seem to be closely related ().] arises when a learner becomes focused on an information gap[
While <cit.> appears to have popularized the information gap as a theory of curiosity, the connection between curiosity and “gaps in information” goes back at least to <cit.>, who extended 's () suggestion that thinking arises as a “reaction to a gap" <cit.> to suggest that such “gaps in information" (; cf. ) are similar to his own idea of conflict, and not only evoke thinking, but other knowledge-seeking behaviours <cit.>.
]—a gap between what they know and what they want to know. However, the term information gap is not well-specified[The prevalence of the term `information gap' in the study of curiosity, and the breadth of definitions and posited types of curiosity have led to the term occasionally being stretched beyond our area of interest in this paper. For example, <cit.> has recently extended the term to include “a gap between current knowledge and the as yet unknown, expanded knowledge that could be gained by unspecific exploration" (p. 908) to account for diversive curiosity. We do not include this non-specific viewpoint in our treatment of the idea, a choice which we believe is appropriate, as multiple authors have called into question whether diversive `curiosity' should be considered a form of curiosity at all (; ).]
and needs to be clarified before we can implement it algorithmically.
We can partially clarify the meaning of information gap via the term inostensible concept, as coined by (; ).[Beyond <cit.>'s idea of an information gap, Inan's idea of the inostensible concept follows earlier work describing similar ideas: <cit.> described questions evoking “mediating `concepts' or `meaning' responses” (p. 182).]
An inostensible concept can be simplified as a “known unknown": something you know you do not know. If you are experiencing specific curiosity, you have an inostensible concept at the focus of that specific curiosity. In thinking about something you don't know, you are manipulating a concept of that something you don't know.
To make this more concrete, let's think through an example.
As you were reading the book you acquired at the bookstore, perhaps you stumbled upon the following description:
“Except for an odd splash of some dark fluid on one of the white-papered walls, the whole place appeared neat, cheerful and ordinary.”
If you're anything like me, you might ask yourself, “Why is there dark fluid on the wall?" An inostensible concept is implicit to this question.[Here, we use the word question loosely, to refer to a feeling of recognizing an inostensible concept, because these can often be reasonably approximated as questions in the linguistic sense. We do not assume that curiosity requires linguistic abilities, and suggest that curiosity can arise prior to, or without, putting such a feeling into words. This view may be in contrast with <cit.>, who argues that pre-language children and animals cannot experience curiosity beyond instinctual “novelty seeking, sensation seeking, or exploratory behavior" (p. 125). Our understanding of concepts in the minds of pre-language children is still extremely limited <cit.> and it seems hasty to assume that if concepts aren't communicated to us, they don't exist.] The inostensible concept could be approximated as “how dark fluid ended up on the wall." If we knew the story of the dark fluid, our question would be answered: the concept would be ostensible, rather than inostensible.
“How dark fluid ended up on the wall" can be manipulated like any other concept in the mind. Note that you don't need to know how dark fluid ended up on the wall to be able to think about the inostensible concept “how dark fluid ended up on the wall." Your inostensible concept (your known unknown) is composed of other concepts you are already familiar with: you have enough of an idea of what “fluids" and “walls" are and what “dark" and “ended up" mean to roughly conceptualize what it would mean to know “how dark fluid ended up on the wall." This rough concept cobbled together from concepts you already know is the inostensible concept. It is in this sense that <cit.> indicates that curiosity does not require you “to conceive of its satisfier," rather, “curiosity requires you to conceive only of everything your questions are about" (p. 671). The inostensible concept is the concept formed from everything your question is about.
Each inostensible concept has an object that it refers to, also called an inostensible referent. For our example concept, the inostensible referent is the story of how dark fluid ended up on the wall. The term object is not well-defined from a computational perspective, but we can think instead about what we are trying to achieve. While we might talk about trying to acquire this object, this story, we're really thinking of a particular objective: we want to incorporate the story of how the dark fluid ended up on the wall into our knowledge base.
This incorporation is the act of making an inostensible concept ostensible. There may be multiple approaches to make the inostensible concept ostensible: while we could read the next few pages of the book, we could also ask someone who has read this book before. It is computationally relevant that there are likely many different possible sets of observations one could make through different sensory apparatuses (e.g., eyes or ears) to make a given inostensible concept ostensible.
The term inostensible concept gives us additional power over the information gap perspective alone, as it gives us a foundation for satisfying our curiosity, for closing the gap. Our inostensible concept is defined by properties of the object of our curiosity (for example, the story must involve dark fluid ending up on the white-papered walls) that help us differentiate our particular object of interest from others.
This foundation allows us to use mental simulation—“the capacity to imagine what will or what could be" <cit.>—to plan out sequences of actions we could take to make an inostensible concept ostensible. But before focusing on satisfying curiosity, we should talk about inducing curiosity.
§.§.§ Inducing and Satisfying Curiosity
Within the context of this work, we are focused on specific curiosity as temporary[
Curiosity has been studied both as occurring temporarily, as is our focus, and as a persistent personality characteristic (; ). In the literature, the former is termed state curiosity while the latter is called trait curiosity. In the study of reinforcement learning—a field central to the computational components of this text—the term state has a formal meaning (see Section <ref>) to which we will want to refer. For this reason, we avoid using the term state curiosity in this work, despite recognizing it as the accepted term. Following the <cit.> definition of `state' as “a particular mental or emotional condition" we will occasionally use condition in places the word state might be used in other works on curiosity.] and emphasize that specific curiosity is largely considered to be binary (it can be `on' or `off'). In particular, each time specific curiosity is induced, it is associated with exactly one inostensible concept at its focus.
When curiosity associated with a different inostensible concept arises, it is not a continuation of the same instance of specific curiosity.
For this reason, the recognition of an inostensible concept is key to an instance of specific curiosity being induced. Specific curiosity primarily arises after processing new observations, where we allow for both observations we might consider external, like your eyes falling across the phrase Except for an odd splash of some dark fluid on one of the white-papered walls, and observations we might consider internal, like a thought. We refer to a set of such observations as curiosity-inducing observations and the situation where we make such observations as a curiosity-inducing situation.
There are multiple theories about what kinds of situations induce curiosity. theorized that curiosity was induced by observations resulting in his concept of conflict, where two or more incompatible responses to an observation are evoked and the brain lacks the information to reconcile which is more appropriate (; ).
<cit.> contended that “a certain kind of interest" (p. 126) is needed for awareness of an inostensible concept to result in curiosity.
<cit.> suggested that curiosity arises when a learner either obtains new information they can't make sense of or becomes aware of a potential way of obtaining “information that could help make sense of existing, stored, information" (p. 145).
<cit.> and <cit.> theorize that a “sense of control that it will be possible to close the gap" <cit.> is necessary to experience the condition of curiosity. However, while <cit.> considers a sense of control to be necessary, it isn't sufficient: the theory also includes “an urge to close the gap" (p. 906) as a separate necessary component of curiosity, leaving open the question of which situations give rise to such an urge.
This diversity of theories parallels the diversity of mechanisms suggested for machine `curiosity' that we will describe in Section <ref>.
Despite this question being a longstanding area of study, we still don't precisely understand the situational determinants of curiosity.
Once induced, specific curiosity is thought to be able to end in two different ways: either attention is distracted (a possibility we discuss further in Section <ref>) or curiosity is satisfied <cit.>.
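The lifecycle described above—an instance of specific curiosity is binary, is tied to exactly one inostensible concept, and ends either through satisfaction or through distraction—can be sketched as a minimal state holder. This is an illustrative sketch only, not a proposed implementation of curiosity; the class and method names (`SpecificCuriosity`, `induce`, `satisfy`, `distract`) are our own hypothetical labels, and representing a concept as a plain value is a simplifying assumption.

```python
from enum import Enum, auto


class Condition(Enum):
    """The binary condition of specific curiosity: `on' or `off'."""
    NOT_CURIOUS = auto()
    CURIOUS = auto()


class SpecificCuriosity:
    """Sketch of one instance of specific curiosity (hypothetical names)."""

    def __init__(self):
        self.condition = Condition.NOT_CURIOUS
        self.focus = None  # the single inostensible concept, if any

    def induce(self, inostensible_concept):
        # Each instance has exactly one inostensible concept at its focus;
        # curiosity for a different concept would be a new instance.
        self.condition = Condition.CURIOUS
        self.focus = inostensible_concept

    def satisfy(self, concept_made_ostensible):
        # Satisfaction ends the instance only if the concept made
        # ostensible is the one at the focus of this instance.
        if (self.condition is Condition.CURIOUS
                and concept_made_ostensible == self.focus):
            self.condition = Condition.NOT_CURIOUS
            self.focus = None
            return True
        return False

    def distract(self):
        # The other way an instance ends: attention is diverted and the
        # instance terminates unsatisfied (transience).
        self.condition = Condition.NOT_CURIOUS
        self.focus = None
```

For example, inducing curiosity with the concept "how dark fluid ended up on the wall" and then making an unrelated concept ostensible leaves the instance active; only making the focal concept ostensible ends it.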
Drawing from the terminology of inostensible concepts used by <cit.>, curiosity is satisfied when the inostensible concept at the focus of that instance of curiosity is made ostensible (pp. 35–36). While it may seem obvious to some readers, we wish to draw attention to the point that the curiosity-inducing situation and the curiosity-satisfying situation must be different.[Why is this distinction between curiosity-inducing situations and curiosity-satisfying situations so critical to us, the authors? In both the psychological literature on curiosity and the literature on intrinsic-reward-based computational curiosity, the curiosity-inducing situation is sometimes not differentiated from the curiosity-satisfying situation, limiting our understanding of how learning occurs through curiosity. We speak more to these limitations in Section <ref>.]
As an example where this requirement may not seem to hold, imagine you're moving through the bookstore and notice a peculiar noise from the floorboard as you transfer your weight onto it. If you experienced curiosity focused on whether your weight transfer caused the noise, you might find yourself satisfying your curiosity by repeating the same action that seemed to generate the noise the first time, transferring your weight back onto the same spot. In this case, it might seem that the curiosity-satisfying observation is the same as the curiosity-inducing observation.
We see this kind of “repeated trial” action for many scientific curiosity questions <cit.>.
For curiosity to be induced, however, the learner needs an inostensible concept. Critically, this means the learner knows there is something they do not know. If the curiosity-inducing situation provided the right information to satisfy this instance of curiosity, specific curiosity would not have been entered to begin with, because the known unknown would not be unknown after all. An observation of the peculiar noise as you transferred your weight does not tell you that your weight transfer caused the noise. Rather, it is the intervention and set of repeated, consistent observations that when you transfer your weight onto that spot, the peculiar noise reoccurs that brings you enough confidence in your understanding for your curiosity to be satisfied.
But what does it mean for curiosity to be satisfied? Turning back to our example inostensible concept of how dark fluid ended up on the wall, if I were to become curious about the content of this inostensible concept,[Yes, a learner can think about an inostensible concept without experiencing curiosity to resolve it <cit.>. The question of whether curiosity will occur or not leads us back to the open question of what kinds of situations induce curiosity.] my curiosity might be satisfied when I read the phrase The two clergymen, said the waiter, that threw soup at the wall, printed on the following page of the book; my observation of this phrase upon turning the page constitutes a curiosity-satisfying situation. Assuming I considered the waiter sufficiently trustworthy, I may be satisfied that I now know that two clergymen threw soup at the wall, leaving a dark stain is how a splash of dark fluid came to be on the wall. My initially inostensible concept is now ostensible, and my curiosity is satisfied.[An illuminating description of the satisfaction of curiosity has been put forth by <cit.>, where curiosity is satisfied “only when the curious being gains some new experience that [they believe] to be sufficient to come to know a certain object as being the object of [their] inostensible concept," and the interested reader might look to 's Chapter 9 for more detail. The curious reader, on the other hand, who simply wants to know where our example inostensible concept was lifted from can instead be directed to The Innocence of Father Brown by G. K. <cit.>.]
We use both the term observation and the term situation with the descriptors curiosity-inducing and curiosity-satisfying because multiple observations may be needed to enter or exit specific curiosity. For curiosity to be induced by reading the phrase Except for an odd splash of some dark fluid on one of the white-papered walls, you likely require multiple placements of gaze on the text. Similarly, you had to transfer your weight over that peculiar-sounding floorboard multiple times to be satisfied about the causal relationship. Without a sufficient set of the right observations, curiosity won't be induced or satisfied, respectively. Using the term situation allows us to refer to the moment of the final observation while recognizing that more observations beyond the final one may have been needed.[We considered following <cit.> in their use of the term curiosity-evoking events rather than curiosity-inducing situations. However, we felt that the connotations of the word event, while allowing for the inclusion of multiple observations, suggests that the observations happen “all at once"—temporally close together—while we mean for situation to imply that a complete set of curiosity-inducing observations might occur across more time than might be considered a single event.]
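The distinction drawn above—that multiple observations may be needed, with the "situation" marking the moment of the final one—can be sketched as a small accumulator. The class name, the set-based sufficiency test, and the example observation labels below are all illustrative assumptions for exposition, not claims about how observations are actually aggregated in biological or machine learners.

```python
class ObservationAccumulator:
    """Sketch: a situation is complete only once a sufficient set of
    observations has been made (hypothetical, simplified model)."""

    def __init__(self, required_observations):
        # The set of observations assumed sufficient for this situation.
        self.required = set(required_observations)
        self.seen = set()

    def observe(self, observation):
        """Record one observation; return True at (and after) the moment
        the accumulated set becomes sufficient."""
        self.seen.add(observation)
        return self.required <= self.seen
```

For instance, treating each placement of gaze on the text as one observation, curiosity is induced only at the final gaze placement that completes the set; a repeated observation does not, by itself, complete it.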
The primary goal of this preliminary section was to clarify specific curiosity as a short-term condition and to differentiate the terms inostensible concept, curiosity-inducing situation and curiosity-satisfying situation. While these terms are not all broadly used, in this work they are meant to help us be specific, as the concepts they refer to have sometimes been described interchangeably in the literature.
For example, <cit.> used the term goal stimuli as both curiosity-inducing (p. 92) and curiosity-satisfying (p. 91).
Similarly, <cit.> use the term stimulus as offering both the curiosity-inducing and curiosity-satisfying observations (e.g. a trivia question and its answer considered as one stimulus without differentiating which of the two the learner is seeking).
In this preliminary section, we have contributed an argument for the importance of separating what occurs when curiosity is induced from what happens when curiosity is satisfied. We believe that future work—both the study of biological curiosity and the design of machine curiosity—can proceed with improved clarity with this separation recognized.
§.§ Directedness Towards Inostensible Referents
Our first key property is directedness towards inostensible referents. When specific curiosity is induced, the learner is motivated to take actions directed towards satisfying their curiosity.
Directedness towards inostensible referents is inherent to many of the experiments used for studying curiosity. One of the most common experimental paradigms for this purpose is the trivia task. In trivia tasks, experimenters attempt to induce curiosity using a trivia question, which can theoretically be satisfied by showing the associated answer to the question. Many trivia task experiments require participants to take specific actions to gain access to a curiosity-satisfying situation, like paying a token <cit.>, breaking a seal to open an envelope <cit.>, or pressing a key to indicate they would like to wait a short period to see the answer rather than skip ahead to another question immediately (; ). When curious, participants generally took the specified actions to gain access to the inostensible referent. Even using an experimental setup that simply displayed the answer after a delay, <cit.> showed that, when curious, participants' behaviour was directed in anticipation of receiving the answer, as they moved their gaze to where the answer would be shown.
Another common experimental paradigm for studying curiosity requires participants to take an action to “uncover" a picture. For example, <cit.> required participants to press a key if they wanted to see an in-focus version of a just-seen blurred picture, while participants studied by <cit.> and <cit.> needed to click a computer mouse if they wanted to remove boxes occluding pictures of animals.
While the above paradigms elicit simple actions to satisfy curiosity, experimenters have also used more complex situations requiring participants to take more extended sequences of directed action to acquire curiosity-satisfying information.
<cit.> observed an increase in stairwell traffic when they placed a curiosity-inducing situation (a placard with a trivia question) by an elevator along with the explanation that the answer could be found in the nearby stairwell.
Notably, all of these experimental paradigms largely make use of what <cit.> call a “curiosity appeal,” where the experimenter or the context induces curiosity and offers a promise that a particular sequence of actions will lead to a curiosity-satisfying situation. However, outside of experiments, there isn't always an obvious plan to follow to satisfy one's curiosity. There have been multiple suggestions of a theoretical connection with creativity <cit.>, at least in part because curiosity appears to often require the creation of non-obvious plans of action to acquire appropriate curiosity-satisfying observations <cit.>. While the theory that curious learners can generate complex, adaptable plans of action to satisfy their curiosity remains understudied, it offers a strong starting point for thinking about how machine learners might demonstrate the directedness characteristic of specific curiosity.
§.§ Cessation When Satisfied
Our second key property is cessation when satisfied. This property refers to the instance of specific curiosity ending immediately once curiosity has been satisfied, so the learner's motivation is no longer directed towards the same kind of observations that were or would have been curiosity-satisfying when the learner was still curious.
Once a learner has achieved the goal of transforming an inostensible concept into an ostensible one, they do not need to seek the same curiosity-satisfying situation again. You didn't repeatedly read the page describing how the protagonist escaped their brush with death; once you knew the answer, curiosity did not drive you to experience it again. Instead, in the process of transforming that particular inostensible concept, you found yourself with a new question as to the relationship of the protagonist with their mysterious saviour, and while curiosity motivates you to investigate the same book, you're no longer interested in the preceding pages, only the following ones.
Theories of specific curiosity regularly reference the satisfaction of curiosity (; ; ), and some authors consider the cessation of curiosity when “the information gap is closed or the conflict is resolved" inherent to curiosity's definition <cit.>.
The idea that specific curiosity ceases when satisfied has influenced the empirical study of curiosity. A number of studies have explored differences in behaviour or physiological changes when curiosity is satisfied. On the behavioural side, results shared by <cit.> suggest that when curiosity is left unsatisfied, humans are more likely to make indulgent choices—choices that provide short-term pleasure but are not in the chooser's long-term interest, like “the consumption of luxuries, hedonics, and other temptations" <cit.>. Another study, by <cit.>, similarly varied whether participants were provided with curiosity-satisfying observations or not, but did not find any significant difference in participants' rating of curiosity or willingness to bid to satisfy their curiosity on a next, unrelated trivia question. On the physiological side, in an fMRI (functional magnetic resonance imaging) experiment performed by <cit.>, participants were shown blurred pictures, sometimes followed by the corresponding clear picture, sometimes followed by an unrelated clear picture. In the condition with the corresponding clear picture—where curiosity induced by the blurred picture was thought to be relieved— <cit.> found both striatal and hippocampal activations were stronger than in the unrelated clear picture condition. Similarly, <cit.> performed an fMRI experiment where participants were shown trivia questions, sometimes followed by the corresponding answer, sometimes followed by an unrelated filler screen. In the condition with the corresponding trivia answer, <cit.> found that observing the answer yielded a ventral striatal response in the brain. The striatum has been implicated in both pain relief and reward responses, while the hippocampus has been implicated in memory, which aligns well with the theory that specific curiosity is an uncomfortable experience which can be relieved and the evidence showing that curiosity improves memory.
Despite evident interest in physiological changes when curiosity is satisfied, there has been minimal empirical work to confirm that curiosity does indeed cease when satisfied. A notable exception is in an experiment performed by <cit.>. In this experiment, all participants were shown a blurred picture, but while the participants in one condition were then shown the clear version of the same picture, participants in the other condition were not.
Participants in both conditions responded to the “10-item state curiosity scale of the State-Trait Personality Inventory (STPI) developed by Spielberger and Reheiser (2009)," and participants who had not been shown the clear picture rated higher on the scale in terms of “the intensity of feelings and cognitions related to curiosity" (p. 1198).
Note that cessation when satisfied contrasts with the properties of behaviour motivated by extrinsic rewards. Extrinsic rewards, in the terminology of psychology, are material outcomes of an activity like obtaining food, water, or money <cit.>. Extrinsic rewards motivate behaviour repeatedly leading to the same target <cit.>. For example, animals confined to a box with a lever will learn to repeatedly press the same lever if pressing it results in the mechanism providing the same food reward <cit.>.
This kind of directly repetitive behaviour is not exhibited towards curiosity-satisfying observations, as specific curiosity is expected to provide no further motivation towards the same target if the target satisfies the learner's curiosity <cit.>.
One view that may appear to contradict cessation when satisfied is that proposed by <cit.>, who have suggested that curiosity may persist even after curiosity-satisfying observations have been provided. In their experiment, they found participants were more likely to demonstrate curiosity for the answer to a trivia question in a sequence of trivia questions if they had been curious for the answer to the preceding question in the sequence—whether or not they had been provided with the answer to that preceding question. <cit.> explain their findings as suggesting that curiosity persists even after the associated answer is provided and curiosity can transfer to a temporally contiguous information gap, and call this effect the curiosity carry-over effect. However, their results do not contradict the property of cessation when satisfied, as in our terminology, while the present instance of curiosity ceases when the associated inostensible concept becomes ostensible, this does not imply that a learner is unlikely to become immediately curious again, but for a different inostensible concept. 's () results rather suggest that human learners remain physiologically “ready" for curiosity for a time interval once curiosity has been induced, whether or not it is satisfied.
Further, we can clarify that the property we are calling “cessation when satisfied" is not the same as the “knowledge satiation" described by <cit.>, in which a learner feels that they “completely understand the topic" (p. 882). In our terminology, satisfaction occurs at the moment of making ostensible the inostensible concept associated with the current instance of specific curiosity—answering a single specific question—and does not imply a feeling of completely understanding an entire topic, where topics are seen as broader categorizations of related knowledge and activities (; ).
§.§ Voluntary Exposure
“An active striving to encounter new experiences, and to assimilate and understand them when encountered, underlies a huge variety of activities highly esteemed by society, from those of the scientist, the artist and the philosopher to those of the polar explorer and the connoisseur of wines." <cit.>.
Our third key property is voluntary exposure. This property refers to a preference for curiosity-inducing situations, and that learners act on that preference to purposefully make themselves curious.
The experience of unresolved curiosity is inherently frustrating, as well-demonstrated by a novel that ends with a cliff-hanger but has no sequel in sight. Curious humans modify their behaviour to alleviate the feeling of unresolved curiosity.[<cit.> provide an overview of the lengths people will go to in order to satisfy their curiosity, including paying for non-instrumental information (information that provides no benefit in terms of traditional extrinsic rewards, like money or food) or exposing themselves to pain or risk.] Despite the aversive quality or discomfort associated with curiosity,[The idea that being in a condition of curiosity is uncomfortable has sparked some debate. <cit.> has argued that the idea of curiosity as aversive is a longstanding assumption with little supporting evidence popularized by 's seminal work (). The difficulty in disentangling evidence of an aversive quality to curiosity from other possible motivating factors still stands in more recent work <cit.>. Indeed, recent accounts of how emotions are constructed in biological brains and bodies suggest that the experience of curiosity may vary by culture <cit.>, and individual differences implicated in interpreting the experience of curiosity may account for some of the controversy. In the computational part of this work (Section <ref>), we take inspiration from the aversive quality of curiosity, but our computational analogue of aversive quality is not needed for our computational learner to demonstrate recognizably curious behaviour (Section <ref>).] humans voluntarily expose themselves to curiosity, choosing to pick up mystery novels and puzzles because they will pique curiosity <cit.>. We aim to capture this tendency with our third property, voluntary exposure.
We want to remind you of the separation of curiosity-inducing observations from curiosity-satisfying observations as we introduced in Section <ref>. While your new book contains examples of both curiosity-inducing observations and curiosity-satisfying observations, if they are associated with the same inostensible concept, then they must be in different places in the book. Re-reading the passage about the butler's shifty behaviour during the officers' interrogation (a plausible curiosity-inducing situation) will not tell you what the butler has done that they don't want the officers to be aware of (the inostensible concept). It is instead in reading the passage where the officers confront the butler about damning evidence of the butler's theft of thousands of dollars worth of their employer's property (a curiosity-satisfying situation) that your curiosity about their behaviour is satisfied.
Voluntary exposure is perhaps best observed via the vast amount of time and money that people across the world devote to activities associated with curiosity. Two of the most obvious such activities are engaging with puzzles and mysteries, both hugely popular. As examples, the puzzle genre raked in the second-highest total revenue across mobile game genres in the United States and Canada in 2021 <cit.> and the mystery genre has held an enduring share of entertainment production over the years in multiple countries <cit.>. While mysteries and puzzles are some of the most obvious curiosity-generating activities, narrative elements that induce curiosity are pervasive across genres of storytelling <cit.>. Since storytelling features across media (including books, television, movies, games, and news), this single example demonstrates a huge swath of human life voluntarily engrossed in curiosity-inducing activities at any given time.
While humans, as a group, seem to be drawn to curiosity-inducing activities, the type of curiosity-inducing activity seems to vary from individual to individual. While one person might be drawn to formulating mathematical proofs, another might prefer crosswords or language puzzles, another might instead spend time on puzzles of shape and geometry, and yet another may opt for mystery novels. All of these individuals demonstrate voluntary exposure to curiosity, yet they are selective <cit.>. This selectivity is a starting point for our hypothesis that voluntary exposure might be learned over time, as an individual learns a preference for curiosity-inducing situations related to their preferred topics, domains, or puzzle styles. We will discuss this preference further in Section <ref>, with the property of coherent long-term learning.
§.§ Transience
Our fourth key property, transience, refers to an instance of curiosity ending when attention is distracted or diverted.
As you went to pay for your book, you became intensely curious to learn the current news of a Hollywood star's familial strife, but only while you paid attention to the magazines placed temptingly close to the checkout. Once you've torn yourself away to pay, your mind is happy to resume other functions, so once you're out the door and on your way home to start your new book, the star's struggles are as good as forgotten (example inspired by ).
When attention is distracted, the instance of curiosity ends, and this property is referred to as transience <cit.>.[The properties cessation when satisfied and transience are similar in that both refer to the condition of curiosity ending, but we have separated them to better align with how the terms are used in the literature. The mechanisms underlying each property may also need to produce different effects. For example, there are some theories that the satisfactory resolution of curiosity is actively rewarding (; ).] While some authors have written about curiosity as though it can be sustained over long periods, even over years <cit.>, transience of curiosity is a frequently recognized property.[While the term transience was used in 's () seminal paper on the information-gap theory of curiosity, the property is sometimes simply referred to as dissipation or decline of curiosity, but specifically that caused by the distraction of attention (; ; ).]
Like cessation when satisfied, transience appears prominently in theories of curiosity and intuitive examples. Early on, <cit.> noted that curiosity can end if distraction occurs (p. 183). More recently, the property of transience appears to have shaped 's () information gap theory: one of the reasons that attention to an information gap is key to the formulation is that curiosity is thought to end when attention is distracted (p. 92).
In recent experiments, <cit.> had participants solve puzzles (1B) or identify emotions associated with facial expressions (2B). <cit.> manipulated the amount of time before participants were offered the solution to one of the more challenging puzzles if they failed to solve it (1B) or the amount of time before participants were offered the chance to view their score on the facial emotion recognition test (2B). Participants who were offered the opportunity to satisfy their curiosity immediately were more likely to click multiple times or complete an unrelated task to obtain the solution/score than those who were offered the same opportunity 24 hours later. Distanced from the original context of a concerted effort to solve the puzzle or test questions, participants showed less impetus to acquire the solution or scores. While this experiment is only a partial demonstration of transience, since by offering the solution/score, <cit.> draw attention back to the inostensible concept, this decrease in demonstrated curiosity suggests that for many participants, curiosity has ended and this simple return of attention is insufficient to rekindle curiosity.
The condition of specific curiosity entails a concerted effort to make an inostensible concept ostensible. It requires adaptive planning, which is likely resource-heavy, and, in biological learners, active movement of the body towards perceiving curiosity-satisfying observations. Transience keeps this effort from becoming all-or-nothing: because an instance of curiosity can simply end, behaviour and attentional resources can be fully reallocated to other matters as needed.
§.§ Coherent Long-Term Learning
The property of coherent long-term learning refers to how specific curiosity works in concert with other mechanisms of attention and value to orient the learner towards inostensible concepts related to the learner's prior knowledge.
In this work, we have attempted to be very careful to model specific curiosity as a short-term motivational effect that begins when curiosity is induced and ends when curiosity is satisfied or when attention is diverted. However, curiosity is choosy. Moment-to-moment, humans are faced with a galaxy of unknowns, but the mechanisms of curiosity choose carefully—and it is not as though curiosity simply chooses the most readily available unknown; rather, curiosity often sends us out on a temporally extended plan to make our inostensible concept ostensible. Importantly, curiosity seems to be biased towards learning ideas related to the learner's pre-existing background knowledge <cit.>.
have recently proposed a connectional account of curiosity (), explicitly critiquing the `acquisitional' metaphors commonly used for curiosity in recent decades. Curiosity is often thought to drive us to acquire information (p. 259-261). The connectional model instead emphasizes curiosity as building connections between ideas (p. 261). The connectional account aligns with 's () notion that a learner's level of curiosity is well-predicted by their metacognitive estimates of their own knowledge (p. 1380). If a learner recognizes metacognitively that they have existing knowledge related to a potential learning opportunity, they are well-prepared to make that connection and integrate it into their knowledge base.
By including the property of coherent long-term learning in our list of key properties, we are formally emphasizing the importance of specific curiosity's integration with the learner's current knowledge base. In humans, this integration may occur via the mechanism of individual interest. Individual interest refers to a predisposition to repeatedly engage with a class of content, where a class of content usually refers to a domain or category of knowledge, objects, or ideas. The class of content may be thought of as broad as ‘science’ or ‘playing tennis’ <cit.> or more narrow, like ‘approaches to machine curiosity that offer the benefits of human curiosity’—the best description will be highly individual and depend on the learner’s organization of their knowledge. The connectional account of curiosity can help us think of a class of content as a set of ideas that have been connected in the learner's mind, woven together by the relationships that the learner recognizes among them. Individual interest is distinguished from other motivational concepts by two components: stored knowledge and stored value, both for the particular class of content.
Curiosity $\rightarrow$ Individual Interest: Curiosity may shape individual interest by increasing both knowledge and value for content areas that a learner experiences curiosity in. By driving learning, curiosity increases knowledge. <cit.> have provided initial evidence that individual interest is a consequence of learning, showing small but significant effects that growing knowledge results in increased individual interest. Indeed, the process of continually developing knowledge (availability of "cognitive challenges") in the content area of interest is required to maintain an individual interest <cit.>. Curiosity provides impetus for a process of continually developing your knowledge.
Curiosity may also play a role in increasing value, as individual interest in a class of content reflects high levels of not only knowledge but value for the content relative to other classes of content <cit.>. Experimentally, <cit.> found that, when subjects experienced the resolution of curiosity about particular well-known brands, they developed increased positive attitudes towards those brands. Such increases in positive attitudes may reflect increased value. In one experiment, <cit.> teased some participants with an animation of a gift card gradually being revealed from an envelope (so these participants needed to wait to find out where the card could be spent) and showed other participants the whole gift card immediately (so these participants immediately knew the card could be spent at Target) (p. 564). When surveyed after, the participants who had to wait for the gift card to be pulled from the envelope had a more positive average attitude toward Target (p. 565). <cit.> also found similar results with different manipulations creating and resolving uncertainty about different brands. Further research into this effect is needed, but we hypothesize that, more generally, learners may develop increased positive attitudes towards topics associated with the inostensible concept when curiosity is resolved.
Individual Interest $\rightarrow$ Curiosity: Individual interest may direct curiosity by guiding a learner's attention. Individual interest steers our attention towards aspects of what we perceive that we relate to our pre-existing interests. <cit.> has described individual interest as acting like a filter on a learner's perception (p. 380). For example, I have an individual interest in curiosity, so when a character in my book says, “No, I'm not curious," my attention is drawn to how curiosity fits into the situation and how my understanding of curiosity explains or fails to explain the character's lack of motivation. A learner with other individual interests would likely focus on other aspects of the same scene. In this way, individual interest can bias curiosity towards inostensible concepts that connect with existing knowledge.
Curiosity $\longleftrightarrow$ Individual interest: Given the early evidence we have described, we hypothesize a bidirectional relationship between curiosity and individual interest. Related bi-directional proposals have been previously raised by <cit.> and <cit.>. There has been a recent surge in effort to understand curiosity’s relationship with interest (e.g., ). More recent work has focused on the direction that experiences of curiosity may build individual interest (; ) and substantial work remains to develop a complete account. However, a relationship with some mechanism to re-engage learning related to prior knowledge is likely necessary to provide specific curiosity with the property of coherent long-term learning.
The property of coherent long-term learning, the last of our five properties, closes the loop of how curiosity can guide a learner over a lifetime. Our list of properties began with the impetus to satisfy our curiosity in a specific, directed way (1, Directedness towards inostensible referents), an effect that ends relatively quickly, either via being satisfied (2, Cessation when satisfied) or via attention being diverted (4, Transience). Our remaining properties speak to aspects of curiosity relevant to a learner's entire lifetime: learners should seek curiosity-inducing situations (3, Voluntary exposure) and curiosity should build up knowledge and value, biasing the learner's future experiences of curiosity towards learning opportunities to build on what they already know (5, Coherent long-term learning).
In this section, we described five properties of curiosity, and in particular, of specific curiosity (defined by <cit.> as “an intrinsically motivated desire for specific information”).
While specific curiosity is associated with other properties, particularly intensity, association with impulsivity, and a tendency to disappoint when satisfied <cit.>, the five properties we described above are expressly valuable to a learner, as we will argue in Section <ref>, after we have described existing computational alternatives in Section <ref>. As researchers work to design curious machine agents, we believe that these are properties we should strive to attain.
§ SPECIFIC CURIOSITY FOR MACHINE INTELLIGENCE
To create a computational form of specific curiosity, we in essence want an algorithm to exhibit the properties of specific curiosity identified and defined in this manuscript. This means that a robot or computer running such an algorithm would take actions reflecting these properties. In this section, we discuss what infrastructure is needed to make such an algorithm possible, and provide the preliminaries for the framework that we argue is most appropriate—reinforcement learning.
For an algorithm to exhibit the properties of specific curiosity, the machine running the algorithm should be able to decide how to act in the world and exhibit preferences about its available choices; we see this especially in the properties of directedness and voluntary exposure. Above the other properties, directedness involves a preference for a sequence of actions that should satisfy curiosity, and voluntary exposure involves a preference for curiosity-inducing situations. It is valuable if the agent can learn what kinds of situations induce curiosity and which sequences of actions might lead to those particular situations, and can develop the appropriate preferences over the course of learning. The capability to demonstrate learned preferences is the primary reason we consider reinforcement learning to be especially appropriate for designing algorithms that reflect machine curiosity, as reinforcement learning centres around algorithms that use (at least partial) access to sensations of their environments to choose actions that affect those environments <cit.>.
Within the framework, instances of reinforcement learning algorithms are often called agents, because they have agency to shape their own experience in the world and learn from their actions. This quality makes the framework well-suited for the design of machine curiosity algorithms.
§.§ Reinforcement Learning
In the remainder of the paper, we will rely on language and a choice of framework drawn from reinforcement learning. In this subsection, we introduce the framework and define some of the language that will help us both to express the differences between the key properties proposed in this paper and existing related methods and to describe our case study in Section <ref>.
One way of representing an agent's experience of the world in a reinforcement learning framework is as an alternating sequence of observations and actions marked by time. We think of time as discrete, and at each time step, a single observation is made and a single action is taken, resulting in a sequence of the form
\begin{equation}
O_0, A_0, O_1, A_1, ..., O_t, A_t, O_{t+1}, A_{t+1}, ...
\end{equation}
The agent has a set of actions, $\mathcal A$, available to them, so the action taken at time $t$ is denoted $A_t \in \mathcal A$. Each observation, denoted $O_t$ for the observation at time $t$, provides (possibly partial) information about the current state of the environment $S_t$. Informally, the state of the environment is the situation that the agent finds itself in; depending on the situation (state), the agent's choice of action will have different effects and could lead to different next situations <cit.>. If I'm standing in the open doorway of the bookstore, a step forward could lead me into the splendorous observation of mountains of books; if I'm standing in front of the closed door, a step forward might lead to me bumping my nose.
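This alternating observation-action loop can be sketched in a few lines of Python. The `CoinFlipEnvironment` class and its `reset`/`step` methods are hypothetical illustrations of our own, not part of any particular library:

```python
import random

class CoinFlipEnvironment:
    """Toy environment: each observation is 0 or 1, chosen at random."""
    def reset(self):
        return random.randint(0, 1)      # initial observation O_0
    def step(self, action):
        return random.randint(0, 1)      # next observation O_{t+1}

env = CoinFlipEnvironment()
actions = [0, 1]                         # the action set, script-A
observation = env.reset()
trajectory = []
for t in range(5):
    action = random.choice(actions)      # A_t, drawn from the action set
    trajectory.extend([observation, action])  # O_0, A_0, O_1, A_1, ...
    observation = env.step(action)
```

Each pass through the loop records one observation-action pair, producing exactly the alternating sequence shown above.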
In classical reinforcement learning, each observation from $t=1$ onwards includes a numerical reward signal $R_t \in \mathbb R$. The agent must choose actions to maximize how much[Grammatically, you may have expected “how many rewards" instead of “how much reward," but within reinforcement learning, each reward can be a different real number, and we are concerned with maximizing return, a function involving the sum of rewards over time. For instance, while the learner receives a reward at each timestep, one reward of $76.243$ is going to be more desirable than accumulating three rewards of $-8$, $0$, and $0.3$, so “how many rewards" wouldn't reflect the meaning of return.] reward accumulates over time, a quantity called the return, $G_t$. There are several possible definitions of return, $G_t$, but for simplicity in this paper, we use discounted return,[<cit.> offer further intuition about this choice.] which relies on a discount rate, $\gamma \in [0,1)$, to place less value on rewards the further into the future they occur.
\begin{equation}
G_t = \sum_{k=0}^\infty \gamma^k R_{t+k+1} \label{eq:return}
\end{equation}
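For an episode that ends after finitely many steps, the infinite sum above truncates, and the discounted return can be computed directly. A minimal sketch in Python (the function name is ours):

```python
def discounted_return(rewards, gamma=0.9):
    """G_t for a finite list of rewards [R_{t+1}, R_{t+2}, ...]."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))

# With gamma = 0.5, rewards [1, 1, 1] give 1 + 0.5 + 0.25 = 1.75.
print(discounted_return([1, 1, 1], gamma=0.5))  # 1.75
```

Because $\gamma < 1$, rewards arriving later contribute less: the same three rewards would be worth 3 undiscounted, but only 1.75 here.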
It is common for a reinforcement learning agent to keep a running estimate of how valuable different parts of the world are so that they can map their representation of the current state $S_t$ (usually formed using the present observation $O_t$) to an estimate of its value and use these estimates to attempt to accumulate more value. A value function, denoted $v_\pi$, is defined as the expected return moving forward from that state, assuming the agent follows policy $\pi$.
\begin{equation}
v_\pi(s) := \mathbb{E}_\pi \left[ G_t \middle\vert S_t = s\right]
\end{equation}
We denote an agent's estimated value function as $V$. Estimated value functions offer an intuitive way to think about agent preferences: states with higher estimated value are preferred by the agent. In this way, we could algorithmically express the property of voluntary exposure as the agent estimating increased value for situations that are expected to induce curiosity.
While there are multiple ways that a reinforcement learning agent might maintain an estimated value function, one of the most important approaches is called temporal-difference (TD) learning <cit.>. When the agent transitions from state $S_t$ to state $S_{t+1}$, receiving reward $R_{t+1}$, we can form a new estimate for $V(S_t)$: $R_{t+1} + \gamma V(S_{t+1})$. However, since we may not always arrive in the same next state or receive the same reward when leaving state $S_t$, we usually only want to shift our estimate of $V(S_t)$ towards $R_{t+1} + \gamma V(S_{t+1})$ by a small step. We use a parameter $\alpha$, known as the step size, to determine the amount of shift, multiplying $\alpha$ by the difference between the new estimate and the old. This difference, the TD error, denoted $\delta$, is defined as
\begin{align}
\delta := R_{t+1} + \gamma V(S_{t+1}) - V(S_t)
\end{align}
The simplest TD method (and the approach we take in the case study described in Section <ref>) updates the estimate of the value of state $S_t$ upon transitioning from $S_t$ to $S_{t+1}$ and receiving a reward of $R_{t+1}$ as follows:
\begin{align}
V(S_t) \gets V(S_t) + \alpha \delta
\end{align}
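Putting the TD error and the update rule together, a tabular TD(0) update can be sketched as follows (the function and variable names are illustrative):

```python
from collections import defaultdict

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """Shift V(s) towards the new estimate r + gamma * V(s_next)."""
    delta = r + gamma * V[s_next] - V[s]   # the TD error
    V[s] += alpha * delta                  # V(S_t) <- V(S_t) + alpha * delta
    return delta

V = defaultdict(float)                     # all estimates start at 0
# One transition from state 'A' to state 'B' with reward 1:
delta = td0_update(V, 'A', 1.0, 'B')
print(V['A'])  # 0.1, i.e. alpha * (1 + 0.9 * 0 - 0)
```

Repeating such updates over many transitions moves the estimates $V$ towards the true value function $v_\pi$ under the policy generating the experience.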
The reinforcement learning framework, with its ideas that the state of the world has a specific value to a learning agent and that the agent's choice of actions over time can influence how valuable the current state is, has been used to study not only what kinds of algorithms make the best choices in such a problem setting, but also how humans and other animals choose actions, especially to consider which algorithms seem to best replicate biological decision-making.
Within the reinforcement learning framework, there has been a long-standing assumed or hypothesized link between curiosity and exploration <cit.>—some researchers hope that the study of curiosity holds the solution for the exploration–exploitation dilemma. The exploration–exploitation dilemma is a long-studied challenge of reinforcement learning <cit.>. Classically, reinforcement learning problems have an optimal solution: a policy of behaviour that can obtain the maximal return. However, the learner doesn't start out knowing what the right policy is. To learn a good policy, the learner has to balance taking actions which have offered the best value in their experience so far (exploiting what they've learned) with taking actions that they haven't tried enough times to be certain of those actions' values (exploring alternative possibilities). In the following section, we will describe a number of existing methods inspired by curiosity, and many of these methods are explicitly designed to improve exploration. In this work, however, we do not assume that curiosity should contribute to exploration in this return-driven sense. Indeed, curiosity might be most interesting in the context where the learner does not have a persistent objective.
§.§ Computational Approaches Inspired by Curiosity: Intrinsic Rewards
The argument that reinforcement learning is an appropriate framework for computational approaches to curiosity has been embraced by many authors over the past few decades.
The mechanisms that have been inspired by curiosity vary widely, with many using the amount of error in their machine-learning predictions (`prediction error') or ideas from information theory in the interests of simulating other constructs, like confidence <cit.>, learning progress <cit.>, surprise (), interest/interestingness (; ), novelty (; ), uncertainty <cit.>, compression progress (), competence (), and information gain (; ; ; ).
Most of the existing methods inspired by curiosity are centred on generating special reward-like signals, called intrinsic reward. In this section, we provide detail on intrinsic-reward methods, including their benefits and limitations. Because intrinsic reward is the approach most commonly associated with curiosity, this section sets up the context for a discussion of computational specific curiosity as an alternative. Specific curiosity may address some of the limitations of intrinsic reward and offer a better choice for some applications of machine curiosity. Our description of specific curiosity provides a specification for computational approaches that aligns with an interdisciplinary understanding of curiosity found in the literature, in part inspired by observing a poor alignment between intrinsic-reward methods and biological curiosity.
Intrinsic rewards can be described in relation to the term reward ($R_t$) that we described in the preceding subsection. As you may recall, reward is given as part of the observations the agent makes of the environment.
The designer of the agent's learning algorithm cannot change the reward and so their algorithm must solve the optimization problem as it stands. Intrinsic rewards, on the other hand, are defined as part of the agent's learning algorithm (they are intrinsic to the agent), but can be optimized for just as the original reward signal could be. For clarity, the original reward signal is often called extrinsic reward to distinguish it from intrinsic reward in the intrinsic reward literature.[Use of the term intrinsic reward in computational reinforcement learning, as described here, differs from its use in psychology. <cit.> offer a discussion of how the terms extrinsic, intrinsic, external, and internal reward and motivation are used within the contexts of psychology versus computational systems.]
Intrinsic reward is usually either (a) treated as a reward bonus added to the extrinsic reward provided by the environment, or (b) treated as the only reward signal, with the learner effectively ignoring any reward provided by the environment. If the intrinsic reward at time $t$ is written $R^I_t$, then standard algorithms for maximizing return can be used on the new, modified return (compare with Equation <ref>):
\begin{align}
\text{(a)} \quad \sum_{k=0}^\infty \gamma^k \left( R_{t+k+1} + R^I_{t+k+1} \right) & & \text{(b)} \quad \sum_{k=0}^\infty \gamma^k R^I_{t+k+1}
\end{align}
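The two variants differ only in what the learner treats as its reward on each step. A sketch of how they would appear in a learning loop; `combined_reward` is a name of our own invention:

```python
def combined_reward(r_extrinsic, r_intrinsic, mode="bonus"):
    """Reward the learner actually maximizes on each step.

    mode="bonus" corresponds to variant (a): intrinsic reward added to
    the extrinsic reward. mode="only" corresponds to variant (b): the
    extrinsic reward is ignored entirely.
    """
    if mode == "bonus":
        return r_extrinsic + r_intrinsic
    elif mode == "only":
        return r_intrinsic
    raise ValueError(f"unknown mode: {mode}")

assert combined_reward(1.0, 0.5, mode="bonus") == 1.5
assert combined_reward(1.0, 0.5, mode="only") == 0.5
```

In either case, the modified reward can be fed to any standard return-maximizing algorithm, such as the TD learning described earlier.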
While many intrinsic reward designs have been inspired by curiosity, there is a wider body of literature about intrinsic rewards that doesn't always reference curiosity. However, many intrinsic rewards in this wider literature are designed for the same reasons that researchers often want to include curiosity in their algorithms, like improved exploration of the environment or allowing for self-directed learning. For this reason, readers interested in learning more about current machine curiosity methods may wish to explore the larger literature on computational intrinsic rewards.[ <cit.>,
<cit.>, and
offer overviews and surveys of intrinsically motivated computational systems.]
§.§.§ Benefits of Intrinsic-Reward Approaches
Intrinsic rewards have been very useful for increasing exploration on some important testbeds <cit.>, and they have been used to perform well on problems where the objective outcome metric is unavailable to the agent <cit.> or to generate developmental behaviour <cit.>. In intrinsic-reward approaches, the agent is rewarded for being in interesting (novel, surprising, uncertainty-reducing, etc.) states. These rewards encourage the agent to stay in or return to the same state repeatedly. Repeatedly visiting the same state has some important benefits for learning.
* Offers a simple way to recognize and remain on an exploration `frontier.' Repeatedly visiting states that have not yet been visited many times can mean staying on the frontier of a part of the world that the agent has yet to explore. By frontier, we mean states of the world from which, if the agent takes a particular action, they can end up in a state of the environment that they have never experienced before. If the agent occasionally takes a random action,[Many reinforcement learning agents occasionally take random actions <cit.>. Such agents learn about the world and develop estimates about which actions will let them accumulate the best return, and take a best action (according to that metric) most of the time. However, the agent occasionally takes one of its other possible actions, just in case its estimates were wrong. This design element to take a random action is considered a type of exploration strategy, and has good properties for ensuring the agent tries all possible actions from any state an infinite number of times <cit.>, at least if the agent has infinite time! Two popular examples are $\epsilon$-greedy exploration and soft-max/Boltzmann policy exploration.] staying on a frontier makes it more likely that it will end up visiting unexplored parts of the world via such a random action[Or, with carefully designed search control, the agent could be biased to take actions it has never taken from a given state before.] than if the agent largely stayed in the middle of the part of the world already explored.
* Offers a way to check if an action results in a consistent reaction. Another perceived benefit of doing the same thing repeatedly is to check for consistency.
In Section <ref>, we described repeating a test of the floorboard to decide if it was the source of a peculiar noise. Similarly, for the experimental section of this paper, we completed multiple trials of each experiment because we are interested in patterns that hold over time, rather than one-time outliers. This benefit relates to an important assumption in many uses of reinforcement learning: that the world is a little bit random. When we want to estimate the value of a particular state, we are really interested in an average so we must observe a state multiple times to form a reasonable estimate. Because of this assumption, many exploration methods try to ensure the agent visits each state of its domain multiple times.
One way algorithm designers have encouraged agents to make exploratory visits to each state multiple times is through intrinsic rewards that decay over visits. This is an important area of study in exploration for reinforcement learning, and some notable approaches include Upper-Confidence bounds for Reinforcement Learning (UCRL, and UCRL2, ), Model-based Interval Estimation (MBIE, ), and Random Network Distillation (RND, )—even the early curiosity system by <cit.> is based on a decaying bonus (assuming it is applied in a deterministic environment). The purpose of the decay is to only temporarily encourage visits to any given state, enough to obtain sufficient samples.
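The exact bonus differs across these methods, but the common shape is a bonus that shrinks as a state's visit count grows. A generic count-based sketch of this idea (not the precise formula of any one of the named methods):

```python
from collections import defaultdict
import math

visit_counts = defaultdict(int)

def count_bonus(state, beta=1.0):
    """Intrinsic reward that decays with the number of visits to `state`,
    in the spirit of count-based bonuses: beta / sqrt(N(s))."""
    visit_counts[state] += 1
    return beta / math.sqrt(visit_counts[state])

first = count_bonus('A')    # 1.0 on the first visit
later = count_bonus('A')    # ~0.707 on the second visit
```

After enough visits the bonus becomes negligible, which is exactly the intended temporary encouragement, and also the root of the detachment problem discussed below.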
§.§.§ Limitations of Intrinsic-Reward Approaches
Although intrinsic-reward approaches have important benefits, they are limited in their ability to achieve those benefits and lack some of the benefits we might expect from an analogue of biological curiosity.
Detachment: One limitation relates to the first benefit we described, that an intrinsic-reward approach offers a simple way to recognize and remain on an exploration `frontier': this approach does not always work. In particular, since most intrinsic rewards are designed to decrease as the agent returns to the same state over and over again, it is possible for the agent to essentially use up the intrinsic reward without ever taking the actions required to continue into the nearby unexplored part of the world. This is an example of the problem <cit.> called detachment, where an agent leaves and fails to return to parts of its environment that are likely on a frontier—likely to be close to new parts of the environment. Detachment makes intrinsic-reward approaches to seeking never-before-seen states quite brittle and unable to achieve this desired benefit in some situations.
Reactivity: If the goal of including a mechanism is to encourage the agent to experience something new, intrinsic reward offers an inelegant approach, as it can only drive that goal indirectly. A reward can only be provided for observing a state once it has been observed—at which point it is no longer new. As <cit.> put it, intrinsic reward methods are reactive and cannot direct a learner towards novel observations. The reward becomes associated with something already observed, not with novelty itself. To best achieve this goal, the agent should be directed towards the new part of the world, rather than pushed to dither near it. Of course, this is easier said than done, so methods like Go-Explore <cit.>, which return to frontier states and then focus on actions that may lead to novel states rather than on staying in such states, offer a useful interim measure.
Lack of motivation in non-stationary environments: Another limitation relates to the second benefit we described, offering a way to check for consistency. Many types of intrinsic reward—decay-based intrinsic rewards in particular—only offer a way to check for consistency if we assume the environment is stationary. By stationary, we mean that patterns and distributions in the environment never change, so once you have collected enough samples to be confident in a pattern or distribution, you never have to return to collect more. If the environment is non-stationary, the pattern could change completely while you're not looking, so you must regularly return to check if you want to be sure of your estimates.
Decay-based rewards, in particular, are generally not designed to encourage an agent to return to parts of the environment that it has already visited a sufficient number of times. However, there are intrinsic rewards designed to account for this concern: one of the earliest intrinsic rewards, used as part of 's () Dyna-Q+ agent, was an additive intrinsic reward that, for a given state, grew with the amount of time since the agent's last visit. The longer it has been since the agent's last visit to that state, the more the value for the state would grow, motivating the agent to return.
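Sutton and Barto describe the Dyna-Q+ bonus as growing with the square root of the time since a state-action pair was last tried, $\kappa\sqrt{\tau}$. A sketch of that bonus (the function name and arguments are our own):

```python
import math

def dyna_q_plus_bonus(t_now, last_visit, kappa=0.01):
    """Dyna-Q+-style bonus: kappa * sqrt(tau), where tau is the number
    of time steps since the state-action pair was last tried."""
    tau = t_now - last_visit
    return kappa * math.sqrt(tau)

recent = dyna_q_plus_bonus(t_now=100, last_visit=99)   # tau = 1
stale = dyna_q_plus_bonus(t_now=100, last_visit=0)     # tau = 100
```

Because the bonus grows rather than decays, long-neglected parts of the environment eventually look valuable again, drawing the agent back to re-check them, which is exactly the behaviour needed in a non-stationary environment.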
Of course, Dyna-Q+ relies on a model of the environment. In reinforcement learning, a model of the environment, sometimes called a transition model, traditionally refers to a function that takes a state and an action to take from that state and returns a next state and reward, mimicking the environment <cit.>. Models of the environment are notoriously challenging to formulate for real-world applications, where environments are so large and complex that building full models would exceed realistic memory and computational limitations. However, the benefits of Dyna-Q+ point to a need to address these challenges to achieve effective curiosity or exploration: without being able to “think about" or simulate experiences far from your current position in the world, it will likely often be impossible to develop specific intentions to observe parts of the world containing the information that an agent needs or wants most.
Reliance on repetition of state: Another limitation connects to the second key benefit—that intrinsic-reward approaches are useful for checking if an action results in a consistent reaction. This benefit may not align with our goals for designing computational curiosity. Curiosity may be misaligned with an underlying assumption about state that is typical in computational reinforcement learning. We mentioned state in Section <ref> as the situation that the agent finds themself in.
State repetition is important for the trial-and-error aspect of reinforcement learning, as reinforcement learning is designed to evaluate how well an action went last time so the agent can adjust their behaviour next time they are in the same situation. If I didn't much like bumping my nose on the door last time, I might choose a different action when I'm next faced with a closed door.
State is usually thought of as essentially separate from the agent, and more importantly, as repeatable, meaning an agent can experience the same state multiple times. Of course, in large complex worlds, the exact same situation isn't likely to repeat multiple times, but with some generalization, this assumption is very helpful. Important features of the state can repeat multiple times and be useful for predicting reward. For example, imagine you're a rat in a box with a lever. Let's say that when a light in the box is turned on, pulling the lever results in the appearance of chocolate for you to eat, but when the light is off, nothing happens when the lever is pulled. In this case, thinking of light on and light off as repeatable features of state can prove very useful in optimizing your chocolate intake.
However, this assumption that state repeats should be complicated in the case of curiosity. Why? With curiosity, a learner's goal is to change their situation by making changes to their own knowledge state. By knowledge state, we mean the state of what the learner knows—what the agent has learned from its observations of the world. One curious learner wants to change their knowledge state from not knowing who the killer is in the book they are reading to include knowing who the killer is. Another wants to get into a knowledge state where they know if it was their own action that generated peculiar noises from the floorboard. In comparison to traditional reinforcement learning state, which we can call environment state, knowledge state is similar in that the learner can take actions to change it, but it is different in that it isn't helpful to think about returning to previous knowledge states. A learner's knowledge state is continually changing and does not have the same repeatability as environment state: it is much more useful to think of the agent's knowledge growing and adapting with each new observation of the world. Sure, an agent might forget things, but that doesn't mean it ever returns to a prior knowledge state.
When checking if an action results in a consistent reaction, the knowledge state of the agent actually changes after each trial. The inostensible concept of interest is not the result of a single trial, but actually some statistic about the distribution of possible results. For the agent trying to learn the value of a state, the inostensible concept might be the mean value, and for the scientist, the inostensible concept is more likely to be some underlying pattern or truth about the world. Appropriate directed behaviour, in this case, is to experience the same environmental state features multiple times, but each visit provides new information and leads to achieving a different knowledge state.
Specific curiosity still needs to make use of repeatable features of environment state. In fact, we believe that an agent learning which features of the environment tend to repeatably lead to curiosity-inducing situations might be critical to the property of voluntary exposure (e.g., sections of bookstores labelled `Mysteries' could be a good feature). And without learning about repeating features of environment state, how could we plan directed action to satisfy our curiosity? (cf. <cit.>)
§.§.§ Specific Curiosity in Relation to the Limitations of Intrinsic Reward Approaches
We believe that specific curiosity can address some of the limitations of intrinsic reward approaches. However, we also recognize that specific curiosity appears to function for a different purpose than intrinsic reward methods and compare the functions and goals of each type of method in this discussion.
Detachment: By being specific, curiosity has different goals than the methods that suffer from detachment. Specific curiosity does not attempt to cover an entire frontier and doesn't regret losing track of a state that is likely to be near novel states. Specific curiosity may be best-suited for huge environments where there is so much possible novelty that the learner needs to be choosy about which new information they seek. Intrinsic reward methods are generally not so choosy about the novelty they seek.
Reactivity: Specific curiosity is less defined by reactivity and is a forward-thinking method. The core piece of specific curiosity is the planning to go retrieve a particular piece of information to create the right knowledge at the right time.
A curiosity-inducing situation seems to stem from an update to the learner's knowledge state that results in the agent recognizing an inostensible concept, or specific piece of knowledge that they don't have. In large, complex worlds where a learner can't expect to do everything it is possible to do, specific curiosity helps the agent to go get the right observations for the agent's knowledge state at the right time.
In summary, while it may sometimes be reasonable to think of learners returning to the same environment state and action, this is not a return to the same knowledge state.
§.§.§ Goals in Reinforcement Learning
The word goal lives a conflicted life within the terminology of reinforcement learning. One traditional use of the word goal refers specifically to maximizing return <cit.>, following the reward hypothesis, stated by <cit.> as:
“That all of what we mean by goals and purposes can be well thought of as the maximization of the expected value of the cumulative sum of a received scalar signal (called reward).”
And yet, when speaking to the intuition around reinforcement learning, there is longstanding use of the word goal to refer to abstract accomplishments like grasp a spoon or get to the refrigerator <cit.>. If we assume the reward hypothesis holds for human learners, the reward signals generated in our bodies were evolved over millions of years to shape our behaviour towards such goals, and it isn't obvious on what basis our reward signal is generated <cit.>.
The use of goal as specifically related to maximizing return is inspired by the way goal can be used in the context of human and animal motivation and behaviour, but defining goal this way is limiting. More recently, taking a computational approach has led authors like <cit.> to define specific curiosity as “the search for observations that explain or elaborate a particular goal concept” (p. 262). We suggest that further consideration of what is meant by goal is needed when approaching the relationships between objectives as they relate to both environment state and knowledge state, as described above, and when attempting to broker the relationship between human and machine curiosity literature.
§.§.§ Approaching the Five Properties in the Computational Literature
Computational reinforcement learning researchers have shown strong interest in aspects of the properties of directedness towards inostensible referents, cessation when satisfied, voluntary exposure, and transience. Their exploration has not always been done in the name of curiosity, however. For example, the idea of directedness (though not necessarily towards inostensible referents) parallels work done on options (as early as <cit.>) and planning. The study of options, a mathematical abstraction of short-term policies, has resulted in a growing body of research. Part of the appeal of options is their potential to get an agent from point A to point B (which could be thought of as a goal) without emphasis on the path to get there. Purposeful exploration using options and related ideas has been actively pursued by researchers such as <cit.>.
The options framework, in particular, further reflects cessation when satisfied and aspects of transience via termination conditions for each option. Some termination conditions are naturally defined by goal states, so the directed behaviour ceases upon reaching a goal state, much like cessation when satisfied; other termination conditions can be based on when the option hasn't succeeded in reaching its goal state in a reasonable amount of time, one of the aspects of transience <cit.>. However, as <cit.> have pointed out, most work with options to date has largely only considered goals within the distribution of goals previously encountered (p. 1177). One notable exception is the IMAGINE architecture, in the design of which <cit.> leveraged the compositionality of language to generate goals—which could be seen as a step towards leveraging the compositionality of concepts to generate inostensible concepts.
Prior work has further aimed to address the lack of directedness that is a characteristic of intrinsic-reward methods. For example, the Model-Based Active eXploration algorithm presented by <cit.> uses planning to allow the agent “to observe novel events" (p. 1). They care about unknowns and about creating paths to them. The Go-Explore family of algorithms also centres on the idea of taking a direct sequence of actions to move to a specific state for the purpose of exploring from it, as per <cit.>. In these examples and others, it is clear that recent work has begun to seek ways to avoid the reactive approach to designing machine curiosity.
<cit.> developed a model rooted in reinforcement learning to describe the reward process involved in knowledge acquisition, designed to help explain curiosity and interest.
<cit.> “also found that methods which add a bonus to their value function tended to explore much more effectively than methods which add a bonus to their rewards" (p. ii). This is part of a growing body of evidence in the literature that additive reward bonuses in many cases do not reflect or lead to the same results as human curiosity. As stated by <cit.>, “the effects of reward and curiosity are not additive, and reward has been shown to undermine curiosity and its effect on memory" (in reference to <cit.>). Finally, active perception is a field of computing science concerned with building systems that take action to change what the system perceives towards specific goals. The needs that arise when considering how to design algorithms for specific curiosity overlap substantially with the concerns of active perception.
In summary, it is encouraging to see a wide body of literature begin to move toward effecting what could be well considered properties of specific curiosity. In the section that follows, we expand on some of the benefits that each of the properties of specific curiosity can bring to curious reinforcement learning agents by way of a concrete implementation and empirical study that highlight how multiple properties work together as a unified whole to generate curious behaviour in a learning machine.
§ CASE STUDY: A SIMPLE LEARNING AGENT THAT EXHIBITS PROPERTIES OF SPECIFIC CURIOSITY
We now present a case study that illustrates one possible way that three of the key properties of specific curiosity might be implemented to shape the behaviour of a reinforcement learning agent. Our intent is for this example to help the reader more deeply understand the properties of specific curiosity identified above, and how the computational principles they represent might be translated to algorithms and implementation. To support this understanding, the case study is designed to model our running bookstore example, so the agent, like you, has the opportunity to discover its own analogue of your corner bookstore.
We specifically hope to show that, even in a simple and focused setting, using the properties of specific curiosity we've highlighted as guidelines allows us to see machine behaviour emerge that approximates specific curiosity from the animal learning domain. Further, we aim to depict how these properties are modular and amenable to extension as further, currently unsolved aspects of specific curiosity become computationally clear and tractable. This example is not, however, to be interpreted as a recommendation for a final or definitive computational implementation of specific curiosity. We diverge from the more common practice of fully tackling a problem without domain knowledge, instead implementing hand-designed rules of thumb or expert knowledge as solutions for some of the more challenging, unsolved aspects of computational specific curiosity, such as the process for recognizing inostensible concepts. The intended purpose of this section is for the reader to gain insight and motivation to further investigate the way the properties of specific curiosity might be integrated into different machine learning frameworks and problem settings.
To this end, we offer three sets of experiments. Sections <ref> and <ref> describe the base agent and base domain, respectively, that will be used throughout—agent interactions with the base domain are directly explored in the first set of experiments (Sections <ref> and <ref>). In our second set of experiments, we investigate agent behaviour when the domain is perturbed in terms of domain geometry and span (Sections <ref> and <ref>). In our third and final set of experiments (Sections <ref> and <ref>), we examine the ablation of individual properties of specific curiosity within the agent and the impact this has on agent behaviour.
§.§ Agent Implementation
In this section, we provide the specification for an agent that, if truly exhibiting the behaviour expected from the biological literature on specific curiosity, would be expected to:
* take a largely direct route to a curiosity-satisfying situation, which we term a target (directedness),
* not repeatedly return to situations that had satisfied curiosity (cessation when satisfied), and
* develop a preference for (increased estimated value for) parts of the world that repeatedly offer curiosity-inducing observations (voluntary exposure).
The full algorithm followed by our curious agent is described in Algorithm <ref>.
Sections <ref>–<ref> provide detail on how each property is included in the algorithm. The agent parameters used for our experiments are shown in Table <ref>.
Algorithm <ref>: A specific example of specific curiosity

Initialize $\alpha$, $\epsilon$, $\gamma$, $\gamma_\textit{curious}$, $V$, $x$
Initialize $V_\textit{curious}$, $R_\textit{curious}$ to zeros
repeat for each time step:
    if agent observation $x$ induces curiosity then
        generate a new curiosity target
        generate $R_\textit{curious} = 0$ if transitioning to the target, $-1$ otherwise    ▷ Aversive Quality
        $V_\textit{curious} \gets \textit{ValueIteration}(R_\textit{curious}, \gamma_\textit{curious})$
    if there is currently a curiosity target (i.e., the agent is curious) then
        $x'$, $R \gets$ move greedily w.r.t. $V_\textit{curious}(x)$    ▷ Directed Behaviour
    else
        $x'$, $R \gets$ move $\epsilon$-greedily w.r.t. $V(x)$    ▷ Ties broken uniformly at random
    $\delta \gets R + \gamma \cdot V(x') - [V(x) + V_\textit{curious}(x)]$    ▷ Voluntary Exposure
    $V(x) \gets V(x) + \alpha \delta$
    if agent observation $x'$ is the target then
        destroy the current target
        reinitialize $V_\textit{curious}$ to zeros    ▷ Cessation when Satisfied
    $x \gets x'$
As a note on the scoping of our empirical work: In the design of the agent used in this case study, we aim to demonstrate interactions between the first three of the five key properties of specific curiosity we contributed in the sections above (directedness, cessation when satisfied, and voluntary exposure). This scope is deliberate: we place our initial focus on foundational properties of specific curiosity that for clarity of investigation can be well studied and perturbed in isolation from the experimental variability of long-term information search and the shifting focus (transience) related to life-long learning. We address these remaining two properties and their conceptual connection to our observed results in the discussion sections below, and explicitly in Section <ref>.
We further contain the scope of these initial experiments by limiting the comparison of secondary computational operations that are involved in specific curiosity but that might have a variety of possible algorithms and implementations—in such cases we chose the clearest, simplest implementation of the many possible alternatives. Specifically, in Section <ref>, we noted the importance of separating curiosity-inducing observations from curiosity-satisfying situations. However, recognizing appropriate curiosity-inducing observations and estimating where in the world the appropriate satisfying observations can be found are complex issues. In this initial case study we chose to isolate the key properties from these complexities so as to better see the impact of the properties themselves on agent behaviour. We achieved this isolation by assuming the existence of an oracle-like mechanism that indicates that curiosity has been induced and indicates the location of an observation that would satisfy it. In what follows, we often refer to this particular location in the domain as the target of curiosity, in reference to the idea that, while there may be many possible ways of making the inostensible concept of focus ostensible, the agent selects one potential curiosity-satisfying situation and then aims its behaviour towards experiencing that situation. We refer in what follows to the mechanism for recognizing curiosity-inducing situations and suggesting appropriate targets as a curiosity-recognizer module.
§.§.§ Base Algorithm
Since we are conceptualizing specific curiosity as resulting in a binary state of curiosity—at a given moment, the agent is either curious or not—we can start with a base algorithm in our experiments that determines the baseline agent behaviour when the agent is not curious. For simplicity, since the intent of this work is to explore behavioural change and not task optimality, we chose TD(0) <cit.> as our base algorithm,[We herein do not rely on eligibility traces to prevent confounding their impact during analysis with the way a system might present its developed preference for curiosity-inducing situations; we expect the practical impact of accumulating or replacing eligibility traces to be one of speeding up the acquisition of preference for curiosity inducing situations, but this is a detailed comparison intended for future work.] with an $\epsilon$-greedy policy[Epsilon-greedy ($\epsilon$-greedy) behaviour refers to choosing the action that has the highest estimated value (being greedy) nearly all of the time, but a small percentage of the time, choosing randomly from the available actions. The `epsilon,' $\epsilon$, in $\epsilon$-greedy is a parameter that sets how likely it is that a given action will be random rather than greedy. For more information on epsilon-greedy behaviour, see <cit.>.] with respect to its estimated value function $V$, with ties broken by equiprobable choice. This behaviour is defined in Line <ref> of Algorithm <ref>. Further, while the agent is not in a state of curiosity, its learning follows the standard TD(0) learning update:
\begin{align}V(x) \gets V(x) + \alpha \delta\text{ where }\delta = R + \gamma V(x') - V(x)
\end{align}
Note that, in Line <ref> of Algorithm <ref>, when the agent isn't curious, our $V_\textit{curious}$ is zero everywhere, so the learning update simplifies to the standard TD(0) update.
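For concreteness, a minimal Python sketch of this standard TD(0) update (with a dict-backed value table; the names here are ours, not from any particular library):

```python
def td0_update(V, x, r, x_next, alpha=0.01, gamma=0.9):
    """Standard TD(0) update: move V(x) toward the one-step
    bootstrapped target r + gamma * V(x'). V maps states to estimated
    values; states not yet seen default to a value of 0."""
    delta = r + gamma * V.get(x_next, 0.0) - V.get(x, 0.0)
    V[x] = V.get(x, 0.0) + alpha * delta
    return delta

V = {}
delta = td0_update(V, "s0", 1.0, "s1")
assert delta == 1.0                     # target 1.0 + 0.9*0, estimate 0
assert abs(V["s0"] - 0.01) < 1e-12      # moved alpha * delta toward target
```

Pairing this update with an $\epsilon$-greedy choice over successor values gives the base (non-curious) behaviour of the agent.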
§.§.§ Recognizing Curiosity-Inducing Observations
To enter a state of curiosity, the algorithm relies on a curiosity-recognizer module, which, upon a curiosity-inducing observation, generates an associated target location (Line <ref>). In our bookstore analogue, looking around the bookstore offers a curiosity-inducing observation, like an intriguing back-of-book blurb, and upon this observation, the reader/agent automatically has a target observation or set of target observations in mind. The target might be observing the first page of the book, and based on the target, the agent can guess how best to act to achieve the target (open the book) and proceed.
As we mentioned earlier in Section <ref>, recognizing when an observation should induce curiosity and estimating where an appropriate satisfier might be found are complex issues with solutions beyond the scope of this paper. For this case study, we instantiated a specific location in the domain to induce curiosity and a set of locations of possible satisfiers. Each time curiosity is induced by visiting the curiosity-inducing location, one satisfier location is chosen randomly from the set; we refer to this location as the target. This simple target generator acts as the curiosity-recognizer module in our experiments, and the exact locations it uses will be described with the domains in Sections <ref> and <ref>.
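A minimal sketch of such an oracle-like target generator (the function name and its interface are our own; locations are passed in rather than hard-coded):

```python
import random

def curiosity_recognizer(observation, inducing_location, satisfier_locations):
    """Oracle-like curiosity-recognizer module: when the agent observes
    the curiosity-inducing location, one satisfier location is drawn
    uniformly at random and returned as the target; any other
    observation induces no curiosity (returns None)."""
    if observation == inducing_location:
        return random.choice(satisfier_locations)
    return None

satisfiers = [(1, c) for c in range(1, 10)]   # e.g., a row of candidate targets
assert curiosity_recognizer((0, 0), (5, 5), satisfiers) is None
assert curiosity_recognizer((5, 5), (5, 5), satisfiers) in satisfiers
```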
§.§.§ Directedness
Once curiosity is induced, the agent changes its behaviour. To achieve the property of directedness, the agent is no longer $\epsilon$-greedy with respect to $V$ and is instead fully greedy with respect to $V_\textit{curious}$, a temporary value function. As mentioned earlier in Section <ref>, the key property of $V_\textit{curious}$ is that it is a gradient leading the agent towards the target provided by the curiosity recognizer: if one location is fewer actions away from curiosity's satisfier than another, the former location has higher value. An agent acting greedily with respect to the temporary value function will travel directly to curiosity's satisfier.
In our implementation, the function $V_\textit{curious}$ is generated via value iteration <cit.> in Line <ref>.
Value iteration generates appropriate gradations in the value function, even taking into account any known obstacles or required detours between the agent's current location—or any given location—and the location of the target. See Figure <ref>(a, $V_\textit{curious}$) for a visualization of a gradient generated by value iteration. Value iteration is performed using the agent's transition model of the space, but uses a special reward model, $R_\textit{curious}: \mathcal S \times \mathcal A \times \mathcal S \to \mathbb R$, which maps a transition from any location other than the target to $-1$, but maps a transition from the target to $0$. Equivalently:
\begin{align} \label{eq:rcurious}
R_\textit{curious}(s,a,s') = \left\{ \begin{array}{cl}
0 & \text{if }s\text{ is the target} \\
-1 & \text{otherwise}
\end{array}\right.
\end{align}
This choice was inspired by the characteristic aversive quality of curiosity mentioned in Section <ref>.
Note that in this simplified agent, we provided the agent a perfect transition model of the world, so that its value iteration produces an exactly direct gradient to the target. The agent could instead learn this model from experience. Future work will need to consider the implications of not giving the agent a perfect model, as using a perfect model is a simplification rarely possible in real-world settings.
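A minimal sketch of generating $V_\textit{curious}$ by value iteration under $R_\textit{curious}$, for a deterministic transition model (the tiny chain environment below is an illustrative stand-in of our own, not the paper's gridworld):

```python
def value_iteration_curious(states, transitions, target, gamma=0.9, iters=500):
    """Compute V_curious under the aversive reward model R_curious:
    every transition costs -1 except transitions from the target, which
    cost 0. transitions[s] lists the states reachable from s (assumed
    deterministic). The result grades upward toward the target: states
    fewer actions from the target have higher (less negative) value."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            if s == target:
                V[s] = 0.0                                  # R_curious = 0
            else:
                V[s] = max(-1.0 + gamma * V[s2] for s2 in transitions[s])
    return V

# Tiny 3-state chain a -> b -> c, with c the target:
T = {"a": ["b"], "b": ["c"], "c": ["c"]}
Vc = value_iteration_curious(["a", "b", "c"], T, target="c")
assert Vc["c"] == 0.0 and Vc["b"] == -1.0
assert Vc["a"] < Vc["b"]     # farther from the target means lower value
```

An agent acting greedily on this gradient moves a → b → c, i.e., directly to the target.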
§.§.§ Cessation When Satisfied
The property of cessation when satisfied refers to the agent's behaviour no longer being affected by curiosity once the agent has observed the target of its curiosity. Once the agent has visited the target (Line <ref>), the agent is no longer curious and returns to its base behaviour. In the algorithm, this return to base behaviour is achieved by removing the target (Line <ref>) and zeroing out $V_\textit{curious}$ (Line <ref>). The agent will only become curious again when it has another curiosity-inducing observation as recognized by the curiosity-recognizer module.
§.§.§ Voluntary Exposure
After several cycles of curiosity being induced, followed, and satisfied, if a particular part of the world repeatedly induces curiosity, the agent can learn a preference for returning to that part of the world. This learning process exemplifies voluntary exposure. In our running example, if you visited your corner bookstore by largely random choice during a few strolls around your neighbourhood and each time you found your curiosity sparked by excellent reads, you might find yourself heading to the bookstore directly to shortcut the process.
While we designed directedness and cessation when satisfied as simple behaviours, voluntary exposure was more interesting because we wanted our design to let the agent learn where in the world it might repeatedly become curious, and therefore voluntarily expose itself to those parts of the world—in reference to our running example, returning to the bookstore. We made a simple change to the TD update that would let the temporary value function, $V_\textit{curious}$, influence the enduring value function, $V$. This change can be found in Line <ref> of Algorithm <ref>:
\begin{align*}
\delta \gets R + \gamma \cdot V(x') - {\color{blue}\boldmath{[ V(x) + V_{curious}(x)]}}
\end{align*}
This change means that when the agent is curious, the temporary value function, $V_\textit{curious}$, affects the learning update to its estimated value function, $V$. Since $V_\textit{curious}$ is negative everywhere, the enduring value for any locations the agent visits while curious will increase.
This design choice came from intuition more than a strong theoretical underpinning. Our intuition was that the experience of curiosity should affect internal value estimates more enduringly, but the result should not be an enduring push towards curiosity's satisfiers, as we might see if visiting a satisfier were intrinsically rewarding. Instead, we hoped to see an enduring effect of increased preference for curiosity-inducing situations, or, algorithmically, increased value. Our initial experiments were designed to uncover whether this algorithmic choice would offer behaviour and value function estimates characterized by the property of voluntary exposure, which we would observe as a learned preference (increased value) for locations where the agent repeatedly makes curiosity-inducing observations.
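As a minimal sketch, the modified update can be written as follows (in Python, with $V$ and $V_\textit{curious}$ as dicts; the names are ours):

```python
def td_update_voluntary(V, V_curious, x, r, x_next, alpha=0.01, gamma=0.9):
    """TD update with the voluntary-exposure modification: V_curious(x)
    is added to the baseline being subtracted. While the agent is
    curious, V_curious is negative everywhere, so delta, and hence the
    enduring value V(x), is pushed upward at visited locations. When
    V_curious is zero everywhere, this reduces to standard TD(0)."""
    delta = (r + gamma * V.get(x_next, 0.0)
             - (V.get(x, 0.0) + V_curious.get(x, 0.0)))
    V[x] = V.get(x, 0.0) + alpha * delta
    return delta

# The same transition yields a larger update while curious:
d_curious = td_update_voluntary({}, {"s0": -2.0}, "s0", 0.0, "s1")
d_plain = td_update_voluntary({}, {}, "s0", 0.0, "s1")
assert d_curious == 2.0 and d_plain == 0.0
```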
In summary, the agent implemented in this section was designed to incorporate three of the five key properties highlighted in this paper: directedness, cessation when satisfied, and voluntary exposure. As a reminder, this design represents only an initial example of how these properties might be simply achieved, and meant to inspire other approaches to agents exhibiting specific curiosity. In the remainder of this experimental section, we describe experiments designed to help us better understand the effects of our algorithmic choices for the agent.
$\alpha$: $0.01$
$\epsilon$: $0.2$
$\gamma$: $0.9$
$\gamma_\textit{curious}$: $0.9$
initial $V: \mathcal S \to \mathbb R$: $V(s) = 0$ $\forall s \in \mathcal S$

Table <ref>: Parameters used at initialization in our experiments.
§.§ Primary Domain
With the goal that our experiments should use a simple and focused setting to highlight machine behaviour approximating biological specific curiosity, we designed a primary domain mirroring our running example of the corner bookstore. The domain is a simple gridworld, meaning that the agent occupies a single square in a grid. Our primary domain is an 11 by 11 grid, depicted in Figure <ref>. In this paper, our references to locations on the grid are 0-indexed using (row, column) notation.
Actions can move the agent one space per step either up, on either upward diagonal, left, or right—but never down or on a downward diagonal. The agent also has a stay-here action that allows it to stay on the same location. These actions are shown visually in Figure <ref>(b). Directly left or right actions that would take an agent beyond the left or right boundary of the grid instead return the agent to the square where it attempted the action. Similarly, diagonal actions that would take an agent beyond the left or right boundary of the grid instead simply move the agent up. Any action that would take the agent beyond the upper boundary of the grid teleports the agent to the midpoint of the lowest row of the grid, $(10,5)$, which we will refer to as the junction location.[The choices to have no downward actions and to teleport off the top of the grid to the midpoint of the bottom of the grid may seem unexpected. This choice was made to allow greater clarity in the visual presentation of the outcomes of the case study. By removing backtracking, the visit counts for each state more clearly show where the agent chooses to move. There are other choices, such as standard cardinal direction actions. The choice to teleport to the junction location rather than treating the grid like a simple cylinder simplifies the learning problem by making it more likely that the agent will return to a state it has already learned about, speeding up the learning process. We evaluated a range of alternatives without some of these constraints on the domain and movement, but they have been omitted from this manuscript for what brevity we can hope to preserve; key observations in settings with backward motion, cardinal motion, cylindrical wrapping, and others are well captured by the presented results.]
A location in the centre of the grid at position $(5,5)$ is considered a permanent curiosity-inducing location, analogous to the bookstore in our example. This choice makes the curiosity-recognizer module mentioned in Section <ref> very simple. When the agent enters the curiosity-inducing location, the module generates a target. Much like your corner bookstore, the curiosity-inducing location reliably induces curiosity in the agent when visited. Every target is generated in row 1 of the grid (the second row from the top) with equal probability of being placed in any of the 9 grid columns besides those directly neighbouring the left or right boundary, i.e., a target chosen from the locations in the row between and including $(1,1)$ and $(1,9)$. Different stories have different endings and different narratives to take the reader to them, so the targets generated at your corner bookstore vary.
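The movement rules above can be sketched as a deterministic transition function (a Python sketch under our reading of the rules; the function and action names are our own):

```python
JUNCTION = (10, 5)   # midpoint of the lowest row of the 11-by-11 grid

def step(state, action, rows=11, cols=11):
    """Deterministic dynamics of the primary domain, 0-indexed as
    (row, column). Actions move up, up-left, up-right, left, right, or
    stay; there is no downward movement. Left/right moves off the side
    bounce back; diagonal moves off the side become straight up; any
    move off the top teleports the agent to the junction location."""
    r, c = state
    dr, dc = {"up": (-1, 0), "up-left": (-1, -1), "up-right": (-1, 1),
              "left": (0, -1), "right": (0, 1), "stay": (0, 0)}[action]
    if dc != 0 and not (0 <= c + dc < cols):
        if dr == 0:
            return state          # left/right off the side: bounce back
        dc = 0                    # diagonal off the side: straight up
    r, c = r + dr, c + dc
    if r < 0:
        return JUNCTION           # off the top: teleport to the junction
    return (r, c)

assert step((10, 5), "up") == (9, 5)
assert step((5, 0), "left") == (5, 0)       # bounce off the left boundary
assert step((5, 0), "up-left") == (4, 0)    # diagonal becomes straight up
assert step((0, 3), "up") == (10, 5)        # teleport to the junction
```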
[Figure <ref>: (a) Domain Mechanics; (b) Agent; (c) Agent Target Generation Mechanics.]
This image provides a graphical expression of the mechanics of the primary domain described in Section <ref>. The junction location is shown at the bottom with a grey dashed outline in (10, 5). The arrows in (a) from the top row back to the start location represent teleportation back to the junction location when the agent takes an upward action off the top of the grid. The grey rectangle shown in (b) will represent the agent in later figures, and (b) also visually shows the six actions available to the agent from any location. For clarity, the target generation mechanics needed for the curiosity-recognizing module (not considered inherent to the domain) are shown separately in (c). The curiosity-generating location has a thick solid grey outline. The possible locations for curiosity targets to be generated, across the second row from the top, are highlighted in purple.
While a reward function is usually included as part of an experimental domain for reinforcement learning, we did not include a reward function as part of the domain for the experiments described in this paper.[As a reminder from Section <ref>, typically the goal pursued by reinforcement learning agents is to maximize their accumulation of extrinsic reward—the reward provided by the environment, usually defined as part of the domain. This goal is not directly relevant to the core of this paper, which is more focused on isolated mechanics of curiosity.] For a standard reinforcement learning agent that expects a reward for its learning algorithm, we could equivalently define $R_t$ to be $0$ at every time $t$. We leave the exploration of how best to balance the scale of $V_\textit{curious}$ with the value generated by a nonzero reward function to future work.
In the experiments showcased in this paper, we initialized each trial with the agent located at the curiosity-inducing location, as the random behaviour to find the curiosity-inducing location is not especially relevant to the mechanics central to this paper. We did run experiments with the agent starting at other locations: the results are not meaningfully affected, but the learning time is extended.
The primary domain described in this section acted as the environment that the agent interacts with in our first and third sets of experiments and as a starting point for domain modifications in the second set of experiments. By using a clear analogue of the bookstore throughout, our intention was to make behaviours characterizing specific curiosity obvious.
§.§ First Set of Experiments: Base Domain and Agent
§.§.§ Experimental Setup: Visit Count and Value Study in the Primary Domain with the Base Agent
As we noted early in Section <ref>, we wanted to design experiments to seek out machine behaviour approximating biological specific curiosity. To achieve this, we observed visit counts (where an agent goes) and the agent's estimated value function (what locations an agent learns to prefer).
To measure what the agent does or how the agent acts, we use visit counts. We use the term visit counts to refer to an array of integers, one integer for each location on the grid equal to the number of times the agent has visited that location. At any given timestep, the values in the visit count array will be identical to the values in the preceding timestep, except at the location that the agent visits, which will be larger by 1. At the end of a trial, the visit counts help us see where the agent spent more time and where it spent less time. Graphical examples of visit counts can be found in the right column of Figure <ref>.
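As a minimal illustration (a hypothetical sketch with assumed names, not the paper's actual implementation), the visit-count bookkeeping described above can be written as:

```python
import numpy as np

# Hypothetical sketch of the visit-count bookkeeping: one integer per grid
# location, incremented at the location the agent visits each timestep.
visit_counts = np.zeros((11, 11), dtype=int)

def record_visit(location):
    """Increment the count at the (row, col) the agent occupies this timestep."""
    row, col = location
    visit_counts[row, col] += 1

# Example: the agent spends three consecutive timesteps at the junction (10, 5).
for _ in range(3):
    record_visit((10, 5))
```

At the end of a trial, the array summarizes where the agent spent more and less time, matching the description above.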
To gain insight into what the agent learns and how its persistent value function changes over time, we can represent the persistent value function as an array with the value equal to the estimated value of that location. Graphical examples of the persistent value function can be found in the left column of Figure <ref>. The agent's curiosity value function can be represented similarly, and, in the context of Algorithm <ref>, a record of the curiosity value function at each timestep can provide insight as to why the agent acted in a particular way or learned a particular change in the persistent value function. Graphical examples of the curiosity value function can be found in the second row of Figure <ref>.
As our initial experiment, we recorded the estimated value function $V$, the curiosity value function $V_\textit{curious}$, and the visit counts of the agent described by Algorithm <ref> in the primary domain in 30 trials of 5000 timesteps each (with each timestep referring to an iteration over the loop in Lines <ref>-<ref> of Algorithm <ref>). For each trial, we recorded the value functions at each timestep. Recording these values allowed us to create frame-by-frame animations for each trial showing the agent's movement through the grid over time along with the changing value functions. An example of agent motion and value learning in video format is provided as supplementary material: <https://youtu.be/TDUpB7OefFc>.
To account for stochasticity in the agent's behaviour, we also aggregated the final estimated value functions and visit counts (after 5000 timesteps) over all 30 trials. Similarly, we aggregated the estimated value of the curiosity-inducing location and potential target locations at each timestep over all 30 trials. Observing the changes in the estimated value function, in particular, allowed us to test our hypothesis of voluntary exposure: that the curiosity-inducing location would strongly accumulate value, while the locations of the targets would accumulate relatively little value.
Overall, this initial experiment in the primary domain allowed us to look for patterns in the agent's behaviour and learning and then compare those patterns to the expectations we developed through conceptual analysis of the properties of curiosity in Section <ref>.
§.§.§ Results and Discussion: Visit Count and Value Study in the Primary Domain with the Base Agent
One question that motivated these experiments was: Does the agent learn to value the curiosity-inducing location, emulating the property of voluntary exposure? In particular, an agent demonstrating specific curiosity would learn a preference to return to the curiosity-inducing situation (think the bookstore) and not learn a preference to return to the curiosity-satisfying targets (think the specific pages of each book). Figure <ref>(c) shows that the final value function, aggregated over all trials, had this property, with the curiosity-inducing location having the highest persistent value of all locations in the grid. We can also see a gradient leading from the bottom row of the grid up to the curiosity-inducing location, showing that, after 5000 steps, the agent had a persistent preference to move to the curiosity-inducing location. Figure <ref>(d) shows that this preference was reflected in the agent's behaviour: visits were concentrated between the junction location (where the agent starts each upward traversal of the grid) and at the curiosity-inducing location.
This figure shows the learned value function and visit counts in the primary domain for a simple reinforcement learning agent that exhibits properties of specific curiosity. From this figure, we can see that the agent learned to value the curiosity-inducing location and therefore follow a direct path to that location, but it does not learn to value the targets of its curiosity. Shown here are (a,c) the learned value function $V$ and (b,d) the total visits the agent made to each state in the $11 \times 11$ grid domain. Totals plotted for trials of 5000 steps, with (a) and (b) showing value and visit counts for one representative trial, while (c) and (d) are averaged over 30 independent trials.
While it is promising to see this indication of voluntary exposure at the end of learning, we also would hope to see the difference in preference between the curiosity-inducing location and potentially curiosity-satisfying targets learned smoothly over time. Indeed, this desired pattern can be seen in the learning curves in Figure <ref>.
This figure shows the mean (line) and standard deviation (shaded area) of the estimated value of the curiosity-inducing location (in blue) and of all the possible target locations (in orange) over time, considering 30 trials. The estimated value of the curiosity-inducing location grows sublinearly while the learned values of the targets hover around $0$ throughout with little variation or growth.
To understand how the agent learned to travel directly to the curiosity-inducing location, it can be helpful to follow the agent through a cycle of curiosity being induced, followed, and satisfied. The first such cycle in one trial is followed in Figure <ref>. The agent started at the curiosity-inducing location at $t=0$, where curiosity is triggered. The leftmost column of Figure <ref> shows the temporary reward function ($R_\textit{curious}$), the temporary value function ($V_\textit{curious}$), the persistent value function ($V$), and the visit counts at time $t=0$. For the agent, the induction of curiosity meant generating a curiosity-satisfying target (in the figure, the target has a dashed line border and is located near the top right of the grid). An associated temporary reward function, $R_\textit{curious}$, was generated, shown in panel (a), which was used to compute an appropriate temporary value function, $V_\textit{curious}$, shown in panel (b).
[Figure: a grid of sixteen panels (a)-(p). Rows show $R_\textit{curious}$, $V_\textit{curious}$, $V$, and the visit counts; columns show snapshots at $t=0$, $t=3$, $t=7$, and $t=16$ along a timeline.]
This figure is meant to offer intuition into the agent's learning behaviour by showing the agent's internal learned value function $V$, curiosity value function $V_\textit{curious}$, the curiosity reward function $R_\textit{curious}$ used to generate $V_\textit{curious}$, and the visit counts at the initialization of the trial ($t = 0$), at the first visit to an induced target ($t=3$), after the agent has crossed off the top of the grid back to the bottom centre ($t=7$), and at the second visit to the curiosity-inducing location ($t=16$). Note the difference in scale between $V$ and $V_\textit{curious}$. While it is not visually obvious, location (6, 4) has a value $V$ of approximately $0.0003$ at $t=16$, which is the first step in learning a path to the curiosity-inducing location.
Acting according to the property of directedness, the agent moved directly to the target and reached that target at $t=3$, as shown in panel (h). At each step, the agent's persistent value function was updated according to Line <ref>, so we see the gradient we saw in $V_\textit{curious}$, panel (b), reflected in the learned value in panel (g). The further from the target, which is where $V_\textit{curious}$ is more negative, the more positive value was accumulated into the persistent value function.
When the agent observed the target at time $t=3$, its curiosity was satisfied, and in accordance with the property of cessation when satisfied, the target-driven behaviour ended. This means that $R_\textit{curious}$ and $V_\textit{curious}$ were zeroed out for all locations, as shown in panels (e) and (f), respectively. In this initial cycle, the agent's behaviour was wandering and largely random (as can be observed via its visits in panels (l) and (p)) until the agent reached a location adjacent to one that had accumulated some persistent value: in this case, a location adjacent to the curiosity-inducing location, from which the greedy action was to move to the curiosity-inducing location. At time $t=16$, the agent visited the curiosity-inducing location, where the cycle restarted with a new target.
We have seen the agent exhibit the properties of directedness, cessation when satisfied, and voluntary exposure, which was the desired result. However, this experiment was performed in a very small domain, so a next obvious question is whether these properties would still be exhibited in larger domains. Is the agent still able to learn a persistent preference for the curiosity-inducing location when the domain is larger, or when there are many possible targets? These questions motivated our second set of experiments, described in the next section.
§.§ Second Set of Experiments: Domain Geometry
§.§.§ Experimental Setup: Domain Geometry Manipulations
While the patterns we observed through the experiments described in Sections <ref> and <ref> are promising reflections of specific curiosity, we were curious about whether we would observe the same patterns in a larger domain. In a larger domain, there is more space for the agent to get `lost,' and not pick up the patterns of behaviour demonstrating learned voluntary exposure and repeated cycles of curiosity. For this reason, in our second set of experiments, we manipulated the geometry, or shape, of our original $11 \times 11$ domain to make similar wide ($11\times101$) and tall ($101\times 11$) domains. In these domains, we ran four experiments:
* 30 trials of 5000 steps in the wide ($11\times101$) domain
* 30 trials of 5000 steps in the wide ($11\times101$) domain without a junction location
* 30 trials of 5000 steps in the tall ($101\times 11$) domain with the curiosity-inducing location near the bottom of the grid
* 30 trials of 5000 steps in the tall ($101\times 11$) domain with the curiosity-inducing location in the centre of the grid
We explain these experiments in more detail in this section. In each of these domains with manipulated geometry, each key aspect of the primary domain has an analogue. The agent had the same six actions available (left, left-up diagonal, up, right-up diagonal, right, and stay-here). The targets were uniformly selected from the second row from the top of the grid: from $(1,1)$ to $(1,99)$ in the wide domain and from $(1,1)$ to $(1,9)$ in the tall domain.
In three of the four experiments, the junction location has an analogue: when the agent moves off the top of the grid, it is returned to the centre of the bottom row of the grid, which is $(10,50)$ in the wide domain and $(100,5)$ in the tall domain. In the second experiment, however, we removed the junction location, making the domain a true cylinder. When the agent moves off the top of the grid, it arrives at the bottom of the grid in the column it attempted to move into (e.g., if the agent moved on a left-up diagonal, it would arrive one column to the left of where it was along the top, unless it was against the left edge, in which case it would arrive in the bottom row in the same column).
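The two top-edge rules described above (teleportation to a junction versus a true cylinder) can be sketched as follows; the function and parameter names are our own assumptions for illustration:

```python
# Hypothetical sketch of the two top-edge rules. Locations are (row, col)
# with row 0 at the top; d_col is -1, 0, or +1 for the left-up, up, and
# right-up actions taken from the top row.

def top_edge_transition(col, d_col, n_rows, n_cols, junction=None):
    """Resolve an upward move off the top of the grid.

    With a junction location, the agent teleports there; without one, the
    grid is a true cylinder and the agent arrives in the bottom row, in the
    column it attempted to move into (clamped at the left/right edges).
    """
    if junction is not None:
        return junction  # teleport back to the junction location
    new_col = min(max(col + d_col, 0), n_cols - 1)  # clamp at the side edges
    return (n_rows - 1, new_col)  # bottom row of the cylinder

# Wide (11 x 101) domain with its junction at (10, 50):
print(top_edge_transition(30, -1, 11, 101, junction=(10, 50)))  # (10, 50)
# Same left-up diagonal move with the junction removed (true cylinder):
print(top_edge_transition(30, -1, 11, 101))  # (10, 29)
```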
Removing the junction location allowed us to explore how important it is for a curiosity-inducing location to be near the agent when the agent isn’t curious. If the agent fails to find a distant curiosity-inducing location, it might not demonstrate the key properties of specific curiosity. Understanding this effect has important implications for the design of an appropriate curiosity-recognizing module. For example, we may need to ensure the module has a sufficiently low threshold for the induction of curiosity to obtain useful behaviour.
We further explored this concern by manipulating the location of the curiosity-inducing location in the tall domain. It was not obvious where to put the curiosity-inducing location in the tall domain: five rows up from the bottom, or in the vertical centre of the grid?
As the third and fourth experiments of this set, we tried both natural possibilities for the curiosity-inducing location, with the third experiment performed with the curiosity-inducing location at $(95,5)$ and the fourth with it at $(50,5)$.
By manipulating the geometry of our original domain, we hoped to find out whether the initial patterns we observed in the first set of experiments generalized to larger domains. Further, larger domains might illuminate other patterns of behaviour that might improve our choices in the design of future, more sophisticated algorithms for machine curiosity.
§.§.§ Results and Discussion: Domain Geometry Manipulations
Through these experiments with larger geometry-manipulated domains, we learned three key lessons:
* Even in expanded domains, following Algorithm <ref> still results in properties of directedness, cessation when satisfied, and voluntary exposure. Of these properties, we were least certain that we would observe voluntary exposure, but by the end of every trial of these experiments, the persistent value is highest at the curiosity-inducing location, which reflects this property. For an aggregate view, see Figures <ref>a and <ref>a,b.
[Figure: (a) Learned Value Function $V$; (b) Visit Count.]
This figure shows the learned value function and visit counts in the modified wide domain for our simple reinforcement learning agent exhibiting properties of specific curiosity. This figure shows how any locations that are visited repeatedly while curious will accumulate value. Shown here are (a) the learned value function $V$ and (b) the total visits the agent made to each location. Totals plotted for trials of 5000 steps and averaged over 30 independent trials. Note that the scale of the visit counts plot differs from that in Figure <ref>.
[Figure: panels (a)-(d) with colour bars: learned value functions in (a) and (b), visit counts in (c) and (d).]
This figure shows the learned value function and visit counts in the modified tall domain for our simple reinforcement learning agent exhibiting properties of specific curiosity. This figure shows that learning is slowed when time to complete a cycle of curiosity is increased, and slowed even more when the curiosity-inducing location isn't near any repeatedly visited location. Shown here are the learned value function, $V$, with (a) the curiosity-inducing location at $(95,5)$ and (b) the curiosity-inducing location at $(50, 5)$, and the total visits the agent made to each location. Totals averaged over 30 independent trials of 5000 steps each.
Directedness and cessation when satisfied are determined directly by the algorithm and do not rely on any learning, so it is unsurprising to see these properties reflected in videos of the agent's behaviour. The directed behaviour of the agent is also reflected in the visit counts for the wide and tall domains shown in Figures <ref>b and <ref>c,d, primarily in the upward-opening funnel shape from the curiosity-inducing location, which occurs because once the agent is in a state of curiosity, it only takes upward (or upward diagonal) actions to reach the targets at the top of the grid.
* Any part of the world that is repeatedly visited while the agent is in a state of curiosity acquires persistent value. We already saw this phenomenon in the primary domain (Figure <ref>a,c), as persistent value accumulated in the funnel shape of locations leading from the curiosity-inducing location towards the targets. However, this phenomenon is more pronounced in the wide and tall domains: in Figures <ref>a and <ref>a,b, while a direct path from the junction location to the curiosity-inducing location has accumulated some value, the magnitude of that value is imperceptible on the scale used for those figures, while the upward funnels are clear.
In the wide domain, this upward funnel includes some of the potential target locations (see the distinctive `bird-wing' shape in Figure <ref>a and the spread of orange lines in Figure <ref>), which might raise concern if you remember that we were aiming for targets not to accumulate value—remember, once you've satisfied your curiosity, you don't read the same page over and over again. However, this is a special case, where these locations accumulate value when they are visited for a different purpose: passing through them on the way to a curiosity-satisfying target. Depending on the context, there may be benefits to learning to value processes that have helped satisfy curiosity in the past, or this may be an undesirable side effect.
This figure shows the persistent value of the curiosity-inducing location (blue) and target locations (orange) over time for three trials. While the growth pattern for the curiosity-inducing location is similar to that seen for the primary domain (Figure <ref>), in the wide domain, some of the target locations grow in value over time. There are three blue lines, with each showing the value of the curiosity-inducing location for a single trial. In orange, the value for each target is shown as a separate line (meaning there are 297 separate orange lines, 99 for each trial).
The accumulation of value in any area visited by the agent while curious is important in the context of our exploration of whether an agent might `get lost' if the curiosity-inducing location is too far away: the agent can get stuck in these areas of accumulated value and not find its way back to the curiosity-inducing location. We observed this exact problem when we removed the junction location from the wide world: the agent spent the majority of the example trial used to generate Figure <ref> in an area to the left of the curiosity-inducing location, where it had previously accumulated value on the way to a target.
[Figure: Visit Count (Single Trial in Wide Domain, No Junction Location).]
This figure shows the visit counts in a single trial in the wide domain with the junction location removed. While the agent's persistent value function is greatest at the curiosity-inducing location, as desired for voluntary exposure, the agent still doesn't find its way back to the curiosity-inducing location because it gets stuck re-visiting an area of the grid that accumulated value while the agent travelled from the curiosity-inducing location to a target. In this trial, the agent only visited the curiosity-inducing location twice (the first visit resulting in a target to the right of the curiosity-inducing location, and the second resulting in a target to the left of the curiosity-inducing location).
Thinking this scenario out beyond the 5000 steps of one trial, the agent should gradually learn that this `sticky' area is not valuable. While the agent is not curious, the values of the locations it visits slowly return toward zero. In Figure <ref>, we see that the potential target locations visited repeatedly in this trial gradually decrease in value over time. Because the agent rapidly learned a persistent value function in which the curiosity-inducing location has the highest persistent value, after many time steps it should theoretically return to the curiosity-inducing location once the value of these areas has decreased sufficiently. However, we can see from the shape of those curves in Figure <ref> that this decrease will be ineffectually slow.
[Figure: Persistent Value over Time (Single Trial in Wide Domain, No Junction Location).]
This figure shows the persistent value of the curiosity-inducing location (blue) and target locations (orange) over time for one trial in the wide domain with no junction location. While the curiosity-inducing location accumulates the most value, the agent gets stuck re-visiting a region of the grid that was on the way to a previous target. Some of the potential target locations are in this region, and so we can see their value grow when the agent visits them while curious. When the agent returns to these locations after curiosity has been satisfied, their value slowly declines. This decline is so slow that the agent will not unlearn its preference for them in a timeframe that we would consider reasonable. The value for each potential target location is shown in orange as a separate line (meaning there are 99 separate orange lines).
In many cases, we suspect that this limitation would not pose a problem. For example, the existence of a junction location is typical to biological learners: where the curiosity-inducing location is like a bookstore, the junction location is much like a home—a place the agent returns to regularly. Once you've learned a path from your home to the bookstore, you are readily able to follow your desire to expose yourself to curiosity. If you didn't return home, however, you might not figure out how to get back to the bookstore, as we observed in our experiments. This observation of our agent getting stuck is the most extreme example of our third and following lesson.
* Learning voluntary exposure requires multiple visits, and the less likely the agent is to return to a curiosity-inducing location, the slower this learning process will be. In the wide world with the junction location removed, the agent rarely followed any repetitive path to the curiosity-inducing location. In many trials, the agent visited the curiosity-inducing location more than once, but did not have the opportunity to learn a habitual path. In these grid worlds, a single visit to the curiosity-inducing location extends the learned path by only one location. For readers unfamiliar with the learning behaviour of model-free reinforcement learning algorithms, you can think of it this way: every time the agent stumbles upon a path it has already noted, it notes where it was just before entering the path, then follows the path the rest of the way. This new note adds one more location to the path. Algorithmically, these `notes' are made as increased persistent value. This procedure means that while developing increased value for the curiosity-inducing location occurs with even a single visit, developing behaviour that reflects voluntary exposure takes multiple visits.
The two experiments in the tall domain reflect our third lesson with more gradation. When the curiosity-inducing location is near the junction location, the agent learns a direct path between the two relatively quickly. When the curiosity-inducing location is placed further away, the agent skips by the curiosity-inducing location more often and spends more time wandering in the part of the domain above the curiosity-inducing location—slowed down by the `sticky' parts of that region that have accumulated value by being visited when the agent is in a state of curiosity. As a result, the curiosity-inducing location accumulates more value and visits overall when it is placed close to the junction location (Figure <ref>a,c) than when it is placed further away (Figure <ref>b,d).
These lessons are valuable because, as described in Section <ref>, assuming curiosity can be used to direct agents towards fruitful learning opportunities, it is desirable for our agents to effectively and efficiently learn voluntary exposure to curiosity-inducing situations. Using Algorithm <ref> or an adaptation of it will require recognizing the effect of domains on whether the agent will visit a curiosity-inducing location enough times while following its non-curious policy to learn habitual paths. With these lessons in mind, our next set of experiments probes the interplay of the properties within Algorithm <ref>.
§.§ Third Set of Experiments: Ablation of Properties
§.§.§ Experimental Setup: Ablation Study
The third and final set of experiments is an ablation study. The term ablation comes from neuroscience, where one way to experimentally learn about the function of a part of the brain is to destroy that part and see how the behaviour of the learner changes. In our case, particular design elements were included in the algorithm to account for each of three key properties of specific curiosity (directedness, cessation when satisfied, and voluntary exposure). In this set of experiments, we ablated (i.e., removed) each of these design elements in turn, running the same experiment as described in Section <ref> and observing what changed from the results we reported in Section <ref>. For each property, the latter part of this subsection includes a reminder from Sections <ref>-<ref> of how the property is incorporated into Algorithm <ref> and a description of how the algorithm proceeds with the property removed.
Beyond using ablations to study the design elements for each key property, in this section we also include an experiment that ablates the design element included to account for the aversive quality of specific curiosity. While we noted in Footnote <ref> that there is some controversy over whether specific curiosity should be characterized as aversive, and we did not argue for aversive quality to be a key property for the implementation of machine specific curiosity, we did incorporate aversive quality in designing Algorithm <ref>, as described in Section <ref>. An aversive quality may not be necessary for specific curiosity generally, but removing it should have a notable effect on the results of using Algorithm <ref>, as the aversive quality both guides the agent to the target and determines the value that is learned in the persistent value function. We therefore tested its importance via an ablation of the associated algorithmic elements, detailed below.
However, since aversive quality defines the curiosity value function, $R_\textit{curious}$, we expected that removing it completely for an ablation should result in uninteresting, random behaviour: the agent will neither have a guide to the target to use for directedness nor learn to value the curiosity-inducing location for voluntary exposure. For this reason, asking what happens when aversive quality is ablated entirely is less interesting than asking what happens if it is replaced with positive quality. How does the agent's learning and behaviour change if $R_\textit{curious}$, rather than being negative everywhere except the target, is positive everywhere, most positive at the target? To answer this question, we additionally ran an experiment where we modified the curiosity reward function in this manner, as detailed below.
Running this series of ablations should allow us to better understand Algorithm <ref> by demonstrating how each property contributes to the agent's learning and behaviour. Each of these experiments is described in more detail in the following subsections.
Ablation of Directedness
To ablate directedness, we removed Line <ref> and the if statement structure around it.
if there is currently a curiosity target (i.e., the agent is curious):
$\quad x', R \gets$ move greedily w.r.t. $V_{curious}(x)$ $\qquad\triangleright$ Directed Behaviour
In Algorithm <ref>, the agent follows the gradient value function $V_\textit{curious}$ greedily to the target, but in the ablation, the agent instead follows an $\epsilon$-greedy policy with respect to $V$, whether or not a target exists. Equivalently, Line <ref>,
\begin{align*}
x', R \gets \text{move epsilon-greedily w.r.t. } V(x) &\qquad\triangleright\text{Ties broken uniform randomly,}
\end{align*}
always determines the agent's next action and gets the next state, $x'$, and reward, $R$.
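The ablated action selection can be sketched as follows (a hypothetical sketch; the neighbour-enumeration interface and names are our own assumptions):

```python
import random

# Hypothetical sketch of the ablated action selection: epsilon-greedy with
# respect to the persistent value function V, ties broken uniformly at random.

def epsilon_greedy_move(neighbours, V, epsilon=0.1):
    """Return the next state: a uniformly random neighbour with probability
    epsilon, otherwise a uniformly chosen maximizer of V among neighbours."""
    if random.random() < epsilon:
        return random.choice(neighbours)
    best = max(V[n] for n in neighbours)
    return random.choice([n for n in neighbours if V[n] == best])
```

With epsilon set to 0 this reduces to the greedy rule the original algorithm uses while curious, except applied to $V$ rather than $V_\textit{curious}$.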
Ablation of Cessation When Satisfied
To ablate cessation when satisfied, we removed Lines <ref> and <ref> of Algorithm <ref> and the if statement structure around them.
if agent observation $x'$ is the target:
$\quad$ destroy the current target
$\quad$ reinitialize $V_{curious}$ to zeros $\qquad\triangleright$ Cessation when Satisfied
With these lines removed, if the agent visits the target, the target remains and the agent continues to greedily follow the gradient value function $V_\textit{curious}$.
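For intuition, the un-ablated cessation step can be sketched like this (a hypothetical sketch with assumed names, not the paper's implementation):

```python
import numpy as np

# Hypothetical sketch of cessation when satisfied: on observing the target,
# the target is destroyed and the temporary value function is zeroed out.

def maybe_satisfy(x_next, target, V_curious):
    """Return the (possibly destroyed) target after the agent reaches x_next."""
    if target is not None and x_next == target:
        V_curious[:] = 0.0  # reinitialize the temporary value function to zeros
        return None         # destroy the current target
    return target

V_curious = np.full((11, 11), -1.0)
target = maybe_satisfy((1, 7), (1, 7), V_curious)  # curiosity satisfied
print(target, V_curious.sum())  # None 0.0
```

The ablation simply skips this check, so the target persists and $V_\textit{curious}$ keeps driving the agent.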
Ablation of Voluntary Exposure
To ablate voluntary exposure, we removed the edit we made to the learning update in Line <ref>. As a reminder, Line <ref> in the original algorithm was as follows:
\begin{align*}
\delta \gets R + \gamma \cdot V(x') - {\color{blue}\left[\, V(x) + V_{curious}(x) \,\right]} &\qquad \triangleright {\color{blue} \textbf{Voluntary Exposure}}
\end{align*}
The ablation reverts that line to the standard TD error, as follows:
\begin{align*}
\delta \gets R + \gamma V(x') - V(x).
\end{align*}
With the $V_\textit{curious}(x)$ term removed, the temporary value function does not affect updates to the persistent value function.
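The two updates can be sketched side by side. This is an illustrative rendering of our own with dictionary-backed value functions, not the authors' implementation:

```python
def td_error_standard(R, gamma, V, x, x_next):
    """Standard TD(0) error (the ablated update):
    delta = R + gamma * V(x') - V(x)."""
    return R + gamma * V[x_next] - V[x]

def td_error_voluntary(R, gamma, V, V_curious, x, x_next):
    """Voluntary-exposure update (sketch of Algorithm 1's Line):
    delta = R + gamma * V(x') - [V(x) + V_curious(x)].
    Because V_curious(x) <= 0, subtracting it enlarges delta, so part
    of the temporary curiosity value is flipped into the persistent
    value function V when V is updated with this error."""
    return R + gamma * V[x_next] - (V[x] + V_curious[x])
```

With $V_\textit{curious}(x) = -0.5$ and everything else zero, the standard error is $0$ while the voluntary-exposure error is $+0.5$, which is exactly the positive value the persistent function accumulates.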
Ablation of Aversive Quality
To ablate aversive quality, we removed Line <ref>,
\begin{align*}
\text{generate } R_{curious} = \left\{
\begin{array}{@{}ll@{}}
0, & \text{if transitioning into target state} \\
{\color{blue} \bf -1}, & \text{otherwise}
\end{array}\right. &\qquad \triangleright {\color{blue} \textbf{Aversive Quality}}
\end{align*}
which accounts for the aversive quality of specific curiosity in Algorithm 1. Without Line <ref>, $R_\textit{curious}$ remains zero for all state transitions, as $R_\textit{curious}$ was initialized to zero in Line <ref>.
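The intact and ablated reward models can be sketched as follows (illustrative helper functions of our own, not from the original code):

```python
def r_curious(transitioning_into_target):
    """Aversive quality (Algorithm 1): reward 0 when transitioning
    into the target state, -1 for every other transition."""
    return 0.0 if transitioning_into_target else -1.0

def r_curious_ablated(transitioning_into_target):
    """With the aversive-quality line removed, R_curious keeps its
    zero initialization on every transition."""
    return 0.0
```

Since the ablated reward is identically zero, value iteration on it yields $V_\textit{curious} \equiv 0$, leaving the agent with no gradient to follow.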
Replacing Aversive Quality with Positive Quality
In addition to ablating aversive quality, we also tested replacing it with positive quality. To achieve this replacement, we modified $R_\textit{curious}$. In the original algorithm, the special reward function, $R_\textit{curious}$, is negative everywhere except at the target, inspired by the aversive quality of curiosity. A different, but still appropriate gradient (temporary value function) could be formulated using an alternative, positive reward model, $\tilde{R}_\textit{curious}$, that would similarly direct the agent towards the target. While there are many possible definitions, we used the following definition:
\begin{align}
\tilde{R}_\textit{curious}(s,a,s') = \left\{ \begin{array}{cl}
1 & \text{if }s\text{ is the target} \\
0 & \text{otherwise}
\end{array}\right.
\end{align}
When $\tilde{V}_\textit{curious}$ is generated via value iteration from $\tilde{R}_\textit{curious}$, it should guide the agent to the target much like the original $V_\textit{curious}$ does. However, in the original learning update in Line <ref>, subtracting the non-positive $V_\textit{curious}(x)$ meant that the agent learned a positive value. To get the same effect with the newly defined, non-negative $\tilde{R}_\textit{curious}$, $\tilde{V}_\textit{curious}(x)$ must be added; consequently, we modified Line <ref> to the following.
\begin{align*}
\delta \gets R + \gamma \cdot V(x') - V(x) + {\color{blue} \tilde{V}_{curious}(x)} &\qquad \triangleright {\color{blue} \textbf{Voluntary Exposure}}.
\end{align*}
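As an illustration of how a $\tilde{V}_\textit{curious}$ could be generated from $\tilde{R}_\textit{curious}$ by value iteration, the following sketch runs value iteration on a hypothetical 1-D chain of states (the chain domain, deterministic left/right transitions, and function name are simplifications of our own, not the paper's grid):

```python
def value_iteration_chain(n_states, target, gamma=0.9, n_iters=200):
    """Value iteration on states 0..n_states-1 with left/right moves.
    Reward follows the positive model R~_curious: 1 when the state is
    the target, 0 otherwise. Sweep: V(s) <- r(s) + gamma * max over
    reachable neighbours (edges self-loop)."""
    V = [0.0] * n_states
    for _ in range(n_iters):
        V = [(1.0 if s == target else 0.0)
             + gamma * max(V[max(s - 1, 0)], V[min(s + 1, n_states - 1)])
             for s in range(n_states)]
    return V
```

The resulting values peak at the target and decay with distance from it, so greedy ascent on $\tilde{V}_\textit{curious}$ leads the agent to the target, mirroring the role of the original $V_\textit{curious}$.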
§.§.§ Results and Discussion: Ablation Studies
The primary result of our ablation studies was that ablating any algorithmic element supporting a key property or the aversive quality results in behaviour that no longer reflects specific curiosity. In particular, the agent no longer exhibits the cycles of curiosity we observed in the primary domain or the wide and tall domains (with junction location). In this section, we will examine the resultant behaviour for each experiment in this set and its implications.
[Figure: $4\times 4$ grid of panels (a)–(p), one ablation per column (Directedness, Cessation, Voluntary, Aversive). Rows: Persistent Value, $V$ (Single Trial); Visits (Single Trial); Persistent Value, $V$ (Mean over 30 trials); Visit Counts (Mean over 30 trials).]
This figure shows the persistent value function and visit counts in the primary domain with each ablation. From this figure, we can see that all of the properties are used together to achieve behaviour that learns to value the curiosity-inducing location, but not the targets. A single ablation is shown in each column. The top and third rows show the learned value function $V$ with zero-valued locations in white, while the second and bottom rows show the visit counts with zero-valued locations in black, each after 5000 time steps. The first two rows show a single representative trial for each ablation, while the bottom two rows are averaged over 30 trials. All subfigures are on logarithmic scales.
Ablation of Directedness
This figure shows a histogram of the number of visits to targets in each trial when directedness is ablated. The histogram is right-skewed. The height of a bar for a given number of target visits is the number of trials with exactly that number of visits. The experiment included 30 trials total.
When directedness is ablated, arbitrary paths through the domain accumulate value. This learning behaviour contrasts with what happens when using the original Algorithm <ref>, where direct paths from the curiosity-inducing location to the appropriate satisfier accumulate value (Figure <ref>a,c). Because the agent with the ablation chooses uniformly at random among maximally-valued alternatives, exactly which path accumulates value varies from trial to trial. This randomness results in the visual difference between the value function for a single trial (top panel of Figure <ref>a) and the aggregated value function across trials (third panel from the top of Figure <ref>a). Further, the persistent values for the curiosity-inducing location and the potential targets vary substantially from trial to trial, depending on which path through the domain the agent gets stuck on (Figure <ref>a).
[Figure: Learned Value vs. Time. Panels: (a) Directedness Ablation; (b) Cessation When Satisfied Ablation; (c) Voluntary Exposure Ablation; (d) Aversive Quality Ablation; (e) Positive Replacing Aversive Quality; (f) Original Algorithm <ref>.]
This figure shows the estimated value of the curiosity-inducing location (blue) and target locations (orange) over time for all thirty trials for each ablation and for the original Algorithm <ref>. Panel (f) shows the same data as Figure <ref>. In the directedness ablation (a), the estimated values of both the curiosity-inducing location and the targets grow over time, with large variation. When cessation when satisfied is ablated (b), the values of both the curiosity-inducing location and the targets remain constant over time, with the value of the curiosity-inducing location reaching $0.315$ during the agent's first and only visit to the curiosity-inducing location at time $t=0$ and the value of the targets remaining $0$ throughout. The learned value for the ablations of voluntary exposure (c) and aversive quality (d) remains zero everywhere. When aversive quality is replaced with positive quality (e), the learned values for the curiosity-inducing location and the targets are similar to those in the original algorithm, but the value of the curiosity-inducing location grows slightly more quickly over time.
[Figure: bar chart of target visits per trial (mean over 30 trials). Bars: Directedness 1.2; Cessation 1.0 first visits (4996.0 including subsequent visits, shown in a zoomed-out inset); Voluntary 54.2; Aversive 22.3; Positive 368.3; Original Algorithm <ref> 362.9.]
This figure compares the number of times the agent visits a target (specified by the curiosity recognizer, not just possible target locations) for each experiment in Section <ref> with the original algorithm. The ablation of cessation when satisfied has two stacked bars: dark orange showing the number of first visits to a target (counting only one visit after the target has been generated) and light orange showing the number of subsequent visits. The upper right inset graph is zoomed out to show the full bar. The original algorithm and the modification replacing aversive quality with positive quality have similar target visit counts while the other ablations result in substantially fewer first visits. Error bars show the standard deviation across 30 trials.
On average, ablating directedness results in the fewest visits to generated targets as compared to the original algorithm and the other ablations (mean of $1.2$, comparison shown in Figure <ref>). While the agent sometimes chooses a path that visits the target, that target is removed once it is visited (cessation when satisfied). Because the agent has already accumulated so much value on its meandering path, it tends to remain on that path. If the next target is not generated on or near that path, then the agent is unlikely to visit it. The result is that the distribution of the number of target visits across trials is right skewed, with the agent failing to visit any targets at all in nearly half of the trials (Figure <ref>).
With directedness ablated, the agent's behaviour is characterized not by cycles of curiosity, but by randomly chosen cycles which continually accumulate more value. The agent does not seek out a satisfier, so unless it stumbles on a satisfier by chance, it can stay in a state of `curiosity,'[Of course, this state no longer reflects curiosity in any way, and is more reflective of wireheading <cit.>.] continually accumulating value in a randomly chosen region of the domain with no off switch.
Ablation of Cessation When Satisfied
When cessation when satisfied is ablated, the agent takes a direct path from the curiosity-inducing location to the target and remains at that target for the remainder of the trial. In each trial, the agent has one first visit to a target, and 4996 subsequent visits (Figure <ref>). As an example, the visit counts and persistent value at the end of a single trial are shown in the top two panels of Figure <ref>, showing how the agent accumulated persistent value on its path to its first target much like the agent following Algorithm <ref> shown in Figure <ref>g. Since this ablation agent's target is not removed, the agent does not move on from this location. The agent therefore only visits one target in every trial and does not benefit from curiosity motivating it towards multiple new experiences.
Of the ablation experiments, the ablation of cessation when satisfied is the only experiment where the agent consistently learns a persistent value function that is maximal at the curiosity-inducing location. Such a value function would reflect voluntary exposure, but since the agent remains fixated on a target, it never has the opportunity to reflect the behaviour component of voluntarily visiting curiosity-inducing situations. Neither the value of the curiosity-inducing location nor the targets changes over time, with the value of the curiosity-inducing location reaching $0.315$ during the agent’s first and only visit to the curiosity-inducing location at time $t = 0$ and the value of the targets remaining $0$ throughout (Figure <ref>). Because the agent remains fixated on a single target, the agent spends little time visiting areas with accumulated persistent value, instead spending the rest of its time at the target (Figure <ref>e–h).
The removal of cessation when satisfied might remind some readers of the reactive behaviour of intrinsic-reward learners, who are driven to visit a novel state repeatedly. Despite this parallel, the ablation of cessation when satisfied is not directly comparable to intrinsic reward methods. As we discussed in Section <ref>, multiple computational intrinsic rewards are designed to decay as the agent visits its target over and over. In our ablation, the level of motivation remains static throughout each trial. We experimented with a decaying motivation level, but do not include the (rather uninteresting) results here because the conceptual purpose of intrinsic rewards is so unlike that of specific curiosity that the comparison is inappropriate in our test domain. Again, two primary benefits of this decaying property of intrinsic rewards are promoting multiple visits to check for consistency (for example of a stochastic reward) or staying on an exploration frontier. In our simple, rewardless domain, there is no benefit to repeated visits, nor are the curiosity targets generated on an exploration frontier.
Ablation of Voluntary Exposure
When voluntary exposure is ablated, no persistent value accumulates in any part of the domain (Figure <ref>c and the line plot in Figure <ref> are zero everywhere). This occurs because the learning update step that flips value from the curiosity value function into the persistent value function has been removed. However, the agent does still demonstrate directed behaviour between the curiosity-inducing location and the targets. As a result, there is a faint but visible funnel shape above the curiosity-inducing location in the bottom panel of Figure <ref>c (compare with the bottom panel of Figure <ref>d, which reflects a true random walk through the domain). This directed behaviour helps the agent make more (first) target visits than any of the other ablations (mean of $54.2$, see Figure <ref>), though still far fewer than an agent following the original Algorithm <ref>.
Ablation of Aversive Quality
When the aversive quality of curiosity is ablated, $V_\textit{curious}$ is not generated, so the agent experiences no difference in value or reward throughout the domain. For this reason, the agent acts randomly throughout each trial. The resulting estimated value function and visit counts are shown in Figure <ref>(d). No value is accumulated anywhere in the grid, as emphasized by Figure <ref>(d), which shows that the estimated value for all of the targets and the curiosity-inducing location remain zero throughout each trial.
Replacing Aversive Quality with Positive Quality
[Figure: panels comparing (a) positive quality with (b) the original aversive algorithm. Rows: Value (single trial); Value (mean); Visits (single trial); Visits (mean).]
This figure shows the final learned value functions and visit counts for the experiment where (a) aversive quality is replaced with positive quality alongside the same for (b) the original algorithm (same data as Figure <ref>, but on a logarithmic scale) for visual comparison. The behaviour and value learned with positive quality (a) is very similar to that of the original algorithm (b)—indeed, given the same random seed, the behaviour is identical for 1071 steps—but value accumulates at different rates in each case, so the value functions do differ by more than just scale. All subfigures are on logarithmic scales.
More interesting than ablating aversive quality is replacing it with positive quality. In this experiment, the agent's behaviour is very similar to that of the agent following the original Algorithm <ref> as described in Section <ref>. The number of visits to generated targets for the agent with this replacement is within error of that of the original algorithm, shown in Figure <ref>. Both agents' final persistent value functions and visit counts are similar (Figure <ref>). The main difference between the persistent value functions is a matter of scale, in that the estimated values for the experiment using positive quality are generally higher. This difference is also visible in the associated lineplot in Figure <ref>, where the value of the curiosity-inducing location grows more quickly when aversive quality is replaced with positive quality. However, the difference is not only in scale; for example, note that squares $(7,0)$ and $(4,1)$ have different mean values in Figure <ref>a (positive quality) and <ref>b (aversive quality).
The agent using positive quality should and does behave differently than the agent following the original Algorithm <ref>, because the value functions generated by $R_\textit{curious}$ and $\tilde{R}_\textit{curious}$ have different shapes. For this reason, the persistent value function accumulates value at different rates in each case. However, a takeaway from this experiment is that using a negative value function, or what we call the aversive quality of Algorithm 1, is not necessary for creating cycles of behaviour reflecting specific curiosity.
In humans, it may be true that the information seeking associated with specific curiosity “is motivated by the aversiveness of not possessing the information more than it is by the anticipation of pleasure from obtaining it” <cit.>, but from the perspective of our simplistic computational RL agent, our choices of implementation for the aversive and positive qualities did not result in appreciably different behaviour.
Taken together, the experiments in our ablation study show us that, in the context of Algorithm 1, the properties of directedness, cessation when satisfied, and voluntary exposure work together, and that curious behaviour is noticeably impaired when any one property is missing.
§ GENERAL DISCUSSION: BENEFITS OF THE PROPERTIES OF SPECIFIC CURIOSITY
Our ablation study provides initial evidence for the interconnected nature of the properties of specific curiosity—effective learning behaviour isn't achieved via one or two properties; the properties work together. Indeed, the benefits of each property are so interwoven that they are best understood via their combined influence on the whole of specific curiosity.
Flexible specialization to a learner's context: In Section <ref>, we noted that the property of coherent long-term learning, the last of our five properties, closes the loop of how curiosity can guide a learner over a lifetime. Curious biological learners, including humans, live long lives, but certainly not long enough to experience every possible situation that the world could throw at them. Further, humans have found ways to survive in a diverse set of possible climates, cultures, and contexts. We believe specific curiosity supports that ability.
Some of what we learn is passive—we learn just by `being there.' Our brains persistently and automatically take the observations from our senses and work to integrate them into our knowledge of the world <cit.>. This passive learning helps build up a foundation of knowledge that is somewhat local to the learner's particular context. Then, specific curiosity insists that we learn actively, almost any time we aren't attending to the obvious needs of keeping our bodies going and our species alive. In particular, the property of coherent long-term learning biases our active learning towards specific concepts that we are ready to build onto our existing knowledge <cit.>, often towards new information that defines a connection across a gap in our existing knowledge <cit.>, much of which may have been passively learned. The better connected our knowledge is, the more useful it is.
Very importantly, curiosity supports us when our context changes. By being biased to direct the learner towards information that supports connections to the learner's existing knowledge, specific curiosity may direct us to learn new information that will help us transfer our existing skills and knowledge into a novel context. How many of our curiosity questions start by orienting on “Wait, that wasn't what I was expecting”? In those kinds of situations, whether we observed a toy performing an unexpected function or a suspect in our mystery performing a suspicious action, there is a waiting connection to be made. The jack-in-the-box doesn't appear except when ...? People don't dump heavy body-sized bags into the lake in the dead of night except when ...? Dark fluid doesn't end up on white-paper walls except when ...? In these situations, making our inostensible referent ostensible repairs the broken understanding created by our prior generalizations failing to hold in a new context, giving us a more accurate foundation of knowledge on which to act.
Specialization as contribution to societal knowledge: Looking at our favourite biological model of curiosity, the human, another key feature of humans is that they are social. Humans in particular seem to get an incredible benefit from individuals having different specialties <cit.>. If each individual instead developed unspecialized, broad knowledge, then the overlap—the knowledge held by our entire society—would be similarly broad, but unfortunately shallow. We would know very little about many things, as a group. Instead, the overlap of all these narrow, deep specializations developed over time lends itself to providing not only broad, but deep knowledge for our larger society, networked together by humanity's social nature.
When a piece of specialized knowledge turns out to be generally applicable, it can be transferred via social contact across a connected network of learners, a more general societal benefit. While we noted that humans are our favourite model of curiosity, the societal transmission of new, specialized behaviours—innovations—appears to benefit social non-human animals too. One example involves birds, British blue tits, who famously discovered how to pierce the foil caps on milk bottles to access the cream on top. The behaviour was first observed in 1921, but by the end of the 1940s, the behaviour was widespread across the U.K. <cit.>. Experiments by <cit.> involving teaching new foraging behaviours to blue tits have provided further evidence that blue tits socially transmit new, useful behaviours across their communities (p. 1230).
As another example, researchers on the isolated Japanese islet of Koshima observed a macaque (a variety of monkey) washing the sand off of a potato—a new behaviour that they had never observed before <cit.>. In the years thereafter, the researchers observed a wave of social learning until nearly the whole colony seemed to clean their potatoes before eating (p. 4). Interestingly, the same macaque who seems to have come up with the potato-cleaning behaviour appeared to later be the first macaque to demonstrate a behaviour of “wheat washing” (p. 13). Initially, when humans scattered wheat across the sand, the monkeys would painstakingly pick up each grain one by one. “Wheat washing,” on the other hand, involves gathering up the sand with the grains and tossing them into water, which allows the sand to drift to the bottom while the wheat floats on top (p. 12). This behaviour also spread throughout the colony, though not quite as pervasively (p. 12). In analogy with the specialized human chef who might design new recipes and share them, the originator of these behaviours might have a specialized interest in food preparation, to the benefit of their community.
The need for directedness towards inostensible referents: Coherent long-term learning requires directedness towards inostensible referents. An inostensible concept, supported by the properties that the learner already knows will be true of the inostensible referent, is the form taken by the next—metacognitively most appropriate <cit.>—learning opportunity to coherently build on existing knowledge. The only sensible way to experience curiosity-satisfying observations is to take a systematic sequence of actions to obtain the specific information that will make the learner's inostensible concept ostensible. Given that the learner will never have a perfect model of the world including the inostensible referent (it wouldn't be inostensible, in that case!), the learner must make a best guess and adapt their plan as they proceed.
The usefulness of cessation when satisfied: Cessation when satisfied creates efficiency by taking advantage of the following idea: what makes an appropriate answer depends on the question. For some inostensible referents, repetitive behaviour might be appropriate: just think back to the example with the peculiar-sounding floorboard. A reasonable way to acquire sufficient evidence to decide if your weight transfer caused the noise is indeed to try repeating that weight transfer several times—once might be a fluke, but three or four times seems sufficient to suggest you're causing the noise.
Our formulation of cessation when satisfied was directly inspired by the behaviour generated by intrinsic-reward methods and how it contrasts with specific curiosity. The reactive nature of intrinsic rewards motivates a learner to re-experience a state multiple times. Specific curiosity, on the other hand, doesn't require this kind of repetition for all inostensible concepts. For many questions, only a single experience of each curiosity-satisfying observation is required. After all, you don't need to re-read `whodunnit' out of curiosity—once you've read that part once, your curiosity for that particular inostensible referent can end.
In most cases, there are multiple possible forms of evidence that we would accept as curiosity-satisfying. One of the seemingly most important for humans is testimony from others <cit.>. This kind of evidence rarely requires repetitive behaviour (unless the person you're asking isn't listening). If anything, it may require probes into how reliable the source of information is, or seeking a second opinion via a different mode of behaviour. Not only does the kind of evidence required vary depending on the inostensible referent, the reliability required of the answer varies even further. How important is it that we have the right answer, versus just a working theory?
In this way, humans demonstrate extreme flexibility when it comes to specifying what makes an acceptable curiosity-satisfying situation. While our next prototypes of curious machines may not have such beautifully tailored recognition systems for sufficient evidence for their curiosity to be satisfied, it is time to move away from simple repetition as a proxy for the satisfaction of curiosity.
The importance of transience: A close relative of cessation when satisfied, transience is necessary for functional curiosity in biological learners. After all, humans and animals can only (physically) be in one place at one time, and their attention is thought to be a similarly limited resource <cit.>. Constantly reorienting those limited bodily and attentional resources is impractical, and so committing to a single goal for a period of time benefits the learner <cit.>. Specific curiosity is one example of this kind of goal-directed behaviour. As detailed by <cit.>, goal-directed behaviour will be more effective in an uncertain environment if the behaviour of the agent can be interrupted by time-sensitive demands, like attending to a loud noise that might indicate danger, pangs of hunger <cit.>, or even the recognition that, in the past, you regretted a decision made in a similar situation <cit.>.
In this sense, transience also has a strong relationship with stay-switch decisions observed in animal decision making, wherein an animal constantly balances its near-term reward with its expectations of long-term average reward, thereby governing the persistence of its current behaviour (cf. human patch foraging and the marginal value theorem; <cit.>).
Even more critically, transience resolves some of the trouble that `un-realizable' inostensible concepts could cause. When we say that some inostensible concepts are un-realizable, we are noting that the very nature of inostensible concepts is that, in some cases, they can't be made ostensible. Not everything that could be dreamt up by a learner is necessarily a thing that the learner could find, especially if the lifetime of the learner is limited. While I could find myself curious about the location of the nearest Earth-orbiting teapot, I would struggle to find out whether such a teapot exists, never mind its location. When asking about unknowns, it is necessary that a learner might sometimes ask the wrong questions, and so needs to be able to stop chasing curiosity-satisfying situations that don't exist.
The condition of specific curiosity is a concerted effort to make an inostensible concept ostensible. Directedness towards an inostensible referent requires adaptive planning, which is likely resource-heavy, and, in biological learners, active movement of the body towards perceiving curiosity-satisfying observations. Transience helps the learner manage an all-or-nothing effort to satisfy their curiosity, because it means that behaviour and use of attentional resources can be fully reallocated to other matters as needed.
Voluntary exposure over curiosity by chance:
Accepting the premise that curiosity will be valuable to our machine agents, we certainly don't want our agents to avoid curiosity. But do we really want voluntary exposure, or would it be sufficient for the agent to stumble across curiosity-inducing observations without increased preference for them?
Before we provide our answer to that question, we would like to note some subtlety to the voluntary exposure that humans exhibit. Humans have been observed to voluntarily expose themselves to some observations that they are aware will be curiosity-inducing <cit.>, like a puzzle or the latest bingeable TV show, but there are other curiosity-inducing observations that humans will not choose to expose themselves to.
<cit.> presented the results of some experiments where humans exhibited specific curiosity, but not voluntary exposure. Their experiments centred on what they called an “uncertainty creation–resolution process" (p. 556). In their experiments, this process consisted of the learner being “first teased with some missing information" (e.g. presented with a trivia question) “and then given that information" (p. 556). In four experiments (see the discussions of Choice for Studies 1 through 4, pp. 561–565), they found that, given a choice between experiencing an `uncertainty creation–resolution process' or not, most of their participants chose not, suggesting that they did not exhibit voluntary exposure.
The authors offered two hypotheses about why their participants failed to exhibit voluntary exposure. One hypothesis was that seeking uncertainty, or choosing to be exposed to curiosity-inducing observations, might be a trait exhibited by a minority of people <cit.>. The very healthy industries producing puzzles, mysteries, and cliff-hanger-laden television series that we mentioned in Section <ref> bring this hypothesis into doubt. Their other hypothesis was that, in cases where people voluntarily expose themselves to curiosity-inducing situations, they “have control over when they receive the missing information" <cit.>, which merits further study.
Based on our computational case study, we suggest a novel hypothesis that voluntary exposure might be learned via multiple experiences of curiosity being induced in similar situations. It is possible that while these people have learned to predict the positive experience associated with their favourite forms of curiosity-inducing situations, be they crossword puzzles, mystery novels, or mathematical problems, the experimental setup might be too unfamiliar to lead to voluntary exposure. In this way, considering the value of voluntary exposure brings us back to coherent long-term learning. Tying voluntary exposure to individual interest enhances learner specialization, a key benefit of coherent long-term learning as we argued above.
Whatever domains we specialize our voluntary exposure towards, specific curiosity tends to drive us into a solving process. Whether racking our brains for the right word in a crossword or picking out the right clues to solve a murder mystery, curiosity helps us build and solidify our knowledge. In particular, human learning benefits from retrieval practice, and curiosity helps when we are in danger of forgetting something we have already been exposed to; if that something is coming up again, it is likely a somewhat consistent part of the context we interact with day-to-day. Learners have to practice to develop skills, so when no more pressing need such as food or sleep demands attention, practicing these kinds of solving processes, especially within an area of individual interest, is a valuable way to build up knowledge in a specialized, individual way.
Most learners are thought to juggle many competing interests. Which of a learner's needs should be prioritized over another is probably situational and difficult to answer, but we argue that, all else being equal, intelligent agents imbued with curiosity should choose to expose themselves to curiosity-inducing situations. With the right implementation, artificial curiosity should direct the agent towards fruitful learning opportunities, much as biological curiosity is thought to <cit.>. Assuming that our design of machine curiosity manages to do the same, we want our machine agents to seek curiosity, which starts with a preference for curiosity-inducing situations—that is, voluntary exposure.
§ CONCLUSION
Curiosity is central to biological intelligence, and machine curiosity is an area of emerging activity for machine intelligence researchers in their pursuit of learning agents that can engage in complex, information-rich environments like the natural world. Throughout this work, we have directly connected insight and empirical evidence from the study of human and animal curiosity to advances in machine intelligence. In particular, we have for the first time translated the idea of specific curiosity to the domain of machine intelligence and shown how it can lead a reinforcement learning machine to exhibit key behaviours associated with curiosity. As a first major contribution of this work, we presented a comprehensive, multidisciplinary survey of animal and machine curiosity. We then used that body of evidence to synthesize and define what we consider to be five of the most important properties of specific curiosity:
* directedness towards inostensible referents;
* cessation when satisfied;
* voluntary exposure;
* transience;
* coherent long-term learning.
As a second main contribution of this work, we constructed a proof-of-concept reinforcement learning agent integrating the most salient and immediate properties of specific curiosity. We then conducted empirical sweeps and ablations to probe the role that these integrated properties play in the agent's curious behaviour (and how the removal of individual properties substantially impacts this behaviour). Our computational specific curiosity agent was found to exhibit short-term directed behaviour, update its long-term preferences, and adaptively seek out curiosity-inducing situations. One major insight we draw from this work is that the separation of curiosity-inducing situations from curiosity-satisfying situations is critical to understanding curious behaviour.
We consider this study a landmark synthesis and translation of specific curiosity to the domain of machine learning and reinforcement learning. It is our hope that this exploration of computational specific curiosity will inspire a new frontier of interdisciplinary work by machine intelligence researchers, and that it will further provide new computational mechanisms to model and study the phenomenon of curiosity in the natural world.
Thank you to all the amazing people who made this work better and clearer, especially Kate Pratt. Both Brian Tanner and Niko Yasui provided valuable conversations on the contents of this paper. The authors also wish to thank their funding providers. NMA was supported by scholarships from the Natural Sciences and Engineering Research Council of Canada (NSERC), the University of Alberta, the Government of Alberta, and the Women Techmakers Scholars Program. Work by PMP was supported by grants or awards from the Canada CIFAR AI Chairs Program, Alberta Innovates, the Alberta Machine Intelligence Institute (Amii), the Government of Alberta, the University of Alberta, NSERC, and the Canada Research Chairs program.
# Mixed Neural Voxels for Fast Multi-view Video Synthesis
Feng Wang1 Sinan Tan1 Xinghang Li1 Zeyue Tian2 Yafei Song3 Huaping Liu1
1Beijing National Research Center for Information Science and
Technology(BNRist),
Department of Computer Science and Technology, Tsinghua University
2Hong Kong University of Science and Technology
3 XR Lab, DAMO Academy, Alibaba Group
<EMAIL_ADDRESS> <EMAIL_ADDRESS>
Corresponding author.
###### Abstract
Synthesizing high-fidelity videos from real-world multi-view input is
challenging due to the complexities of real-world environments and high-
dynamic movements. Previous works based on neural radiance fields have
demonstrated high-quality reconstructions of dynamic scenes. However, training
such models on real-world scenes is time-consuming, usually taking days or
weeks. In this paper, we present a novel method named MixVoxels to efficiently
represent dynamic scenes, enabling fast training and rendering speed. The
proposed MixVoxels represents the $4$D dynamic scenes as a mixture of static
and dynamic voxels and processes them with different networks. In this way,
the computation of the required modalities for static voxels can be processed
by a lightweight model, which essentially reduces the amount of computation as
many daily dynamic scenes are dominated by static backgrounds. To distinguish
the two kinds of voxels, we propose a novel variation field to estimate the
temporal variance of each voxel. For the dynamic representations, we design an
inner product time query method to efficiently query multiple time steps,
which is essential to recover the high-dynamic movements. As a result, with 15
minutes of training for dynamic scenes with inputs of 300-frame videos,
MixVoxels achieves better PSNR than previous methods. For rendering, MixVoxels
can render a novel view video with 1K resolution at 37 fps. Codes and trained
models are available at https://github.com/fengres/mixvoxels.
## 1 Introduction
Dynamic scene reconstruction from multi-view videos is a critical and
challenging problem, with many potential applications such as interactively
free-viewpoint control for movies, cinematic effects like freeze-frame bullet
time, novel view replays for sporting events, and various potential VR/AR
applications. Recently, neural radiance fields [26] have demonstrated the
possibility of rendering photo-realistic novel views for static scenes, with
physically motivated 3D density and radiance modelling. Many methods [19, 20,
49, 13, 10, 31, 28, 29, 44] extend the neural radiance fields to dynamic
scenes with additional time queries or an explicit deformation field. Many of
these methods focus on the monocular input video setting on relatively simple
dynamic scenes. To model more complex real-world dynamic scenes, a more
practical solution is to use multi-view synchronized videos to provide dense
spatial-temporal supervisions [55, 23, 3, 19].
Figure 1: Our method enables rapid reconstruction of 4D dynamic scenes. We
visualize the rendering results with different training schedules. With only
15 minutes of training, our approach achieves comparable PSNRs to other
methods. Increasing the training time further enhances the ability to recover
fine details.
Recently, Li et al. [19] propose a real-world dynamic scene dataset including
many challenging situations such as objects of high specularity, topology
changes, and volumetric effects. They address the problem by a hierarchical
training scheme and the ray importance sampling strategies. Although
significant improvements have been achieved, some challenges still exist: (1)
The training and rendering take substantial time and computational resources. (2)
Highly dynamic scenes with complex motions are still difficult to track.
In this paper, we focus on the multi-view 3D video synthesis problem and
present a novel method named MixVoxels to address the above two challenges.
The proposed MixVoxels is based on the explicit voxel-grid representation,
which is recently popular due to its fast training and rendering speed on
static scenes [50, 40, 8, 27]. We extend the voxel-grid representations to
support dynamic scenes and propose an efficient inner product time querying
method that can query a large number of time steps simultaneously, which is
essential to recover the sharp details for highly-dynamic objects.
Additionally, we represent dynamic scenes as a mixed static-dynamic voxel-grid
representation. Specifically, the 3D spaces are split into static and dynamic
voxels by our proposed variation field. The two components are processed by
different models to reduce the redundant computations for the static space.
Theoretically, whenever a dynamic scene contains some static space, the training speed benefits from the proposed mixed voxels. For most events that occur in the physical world, the static components of the environment are dominant, and the mixed voxels speed up training significantly in these scenarios. Besides, the separation of voxels makes the
time-variant model focus on the dynamic regions, avoiding the time-aware
voxels being biased by the static spaces to produce blurred motions. Our
empirical validation confirms that the separation enables the model to learn
sharp and distinct boundaries in high-dynamic regions. This also frees our
method from complex importance sampling strategies. With these designs,
our method is capable of reconstructing a dynamic scene consisting of 300
frames within 15 minutes. To summarize, the main contributions of this work
are:
* •
We propose a simple yet effective dynamic representation with an inner product time querying method that can efficiently query multiple time steps simultaneously, improving the rendering quality for dynamic objects.
* •
We design an efficient variation field to separate static and dynamic spaces
and present a mixed voxel-grid representation to accelerate training and
rendering.
* •
We conduct qualitative and quantitative experiments to validate our method. As
a result, the proposed MixVoxels achieves competitive or better rendering
qualities with a $5000\times$ training speedup compared to implicit dynamic
scene representations.
## 2 Related Works
Novel View Synthesis for Static Scenes. Synthesizing novel views for static
scenes is a classical and well-studied problem. Different approaches represent
the underlying geometric with different representations. Mesh-based methods
[6, 9, 46, 48, 34, 43] represent the scenes with surfaces which is compact and
easy to render, while optimizing a mesh to fit complex scenes is challenging.
Volume-based methods such as voxel-grid [17, 35, 30, 23, 36] and multi-plane
images (MPIs) [54, 11, 25, 39, 38, 45] are more suitable for modelling complex and translucent scenes such as smoke and fluids. In particular, Neural radiance
fields [26] represent the scenes with an implicit volumetric neural
representation, which employs a coordinate-based neural network to query the
density and color for each point. The achieved photo-realistic rendering
quality of NeRF led to an explosion of developments in the field. Advances
have been made including improving the rendering qualities [42, 4], adapting
to more general scenarios [52, 24, 41, 5], accelerating rendering or training
speed [22, 51, 50, 40, 8], etc.
Novel View Synthesis for Dynamic Scenes. Synthesizing novel views for dynamic
scenes is a more challenging and applicable problem. Recently, many extensions
of NeRF for non-rigid dynamic scenes were proposed, which take a monocular
video as input to learn the deformation and radiance fields. These methods can
be categorized to modelling deformation implicitly [20, 49, 13, 10] (learn the
non-decoupled deformation and appearance jointly) and explicitly [31, 28, 29,
44] (learn separated deformation and radiance fields and the deformation
fields are usually in the form of relative motion with a canonical static
space). Though improvements are achieved, reconstructing the complex general
scenes is still difficult with only monocular videos. Most methods are constrained to specific settings such as human models or restricted motions. For real-world complex scenes, reconstructing from synchronized multi-view videos is
more promising due to the dense supervision for every viewpoint and time
instant. Earlier works [14, 55] explore the problem and show the possibility
of rendering novel videos from a set of input views. Neural Volumes [23]
proposes to use volumetric representations. They employ an encoder-decoder
network to convert input images into a 3D volume, and decode the latent
representations by the differentiable ray marching operation. [3] presents a
data-driven approach for 4D space-time visualization of dynamic scenes by
splitting static and dynamic components and using a U-Net structure in screen
space to convert intermediate representations to images. Different from this method, ours splits the static and dynamic components in the 3D voxel space instead of the pixel space. More recently, DyNerf [19] uses a temporal-
aware neural radiance field to address the problem, and proposes some sampling
strategies to train it efficiently. Compared with previous methods, they
propose a more complicated real-world dataset and validate their method. For
accelerating the reconstruction of dynamic scenes, FourierPlenoctree [47]
proposes to model the dynamics in frequency domain, and generate a Plenoctree
through multi-view blending to accelerate rendering. They focus on the
foreground moving objects extracted via chroma key segmentation, which requires the background to be a pure color (or reliance on segmentation algorithms). Recently, the acceleration of training and rendering for dynamic
scenes has attracted much attention. Concurrent works include StreamRF [18]
which proposes to accelerate the training of dynamic scenes by modeling the
differences of adjacent frames, NeRFPlayer [37] which decomposes the dynamic
scenes into static, new and deforming components, Hyperreel [2] which proposes
an efficient sampling network and models keyframes. K-Planes [12] and
HexPlanes [7] decompose the 4D dynamic scenes into different 2D
representations.
Acceleration of Neural Radiance Fields. While Neural radiance fields can
render novel views with high fidelity, training and rendering require querying
a deep MLP millions of times which is computationally intensive. Many recent
methods propose to accelerate the training and rendering speed of NeRF. For
rendering, Neural Sparse Voxel Fields [22] proposes a voxel-grid
representation to skip over many empty regions. PlenOctree [51] accelerates
the rendering process by pre-tabulating the NeRF into a PlenOctree and using
the spherical harmonic representation of radiance. Derf [32] and Kilonerf [33]
propose to accelerate the rendering speed by dividing the scenes into multiple
areas, and employ multiple small networks in each area. AutoInt [21] proposes
to restructure the MLP network to accelerate the computations of ray
integrals, which helps accelerate the rendering speed. For accelerating the
training of NeRF, some methods use explicit voxel-grid representations [50,
40] to accelerate the training process and convergence speed. Instant-NGP [27]
proposes a multi-resolution hash table structure to accelerate the training.
The model sizes of most fast training methods are relatively large due to a
large number of voxels. TensoRF [8] proposes to reduce the model size by
factorizing the 4D scene tensor into multiple compact low-rank tensor
components.
## 3 Method
In this section, we introduce the proposed MixVoxels, which represents the 4D
dynamic scenes as mixtures of static and dynamic voxels. Fig. 2 illustrates
the overview of our method. In the following subsections, we will first
introduce the voxel-grid representations for static scenes and our extension
to dynamic scenes. Then we introduce the variation field for identifying the
dynamic voxels. At last, we introduce the training of MixVoxels.
### 3.1 Static Voxel-grid Representation
Neural radiance fields [26] have demonstrated photo-realistic novel viewpoint
synthesis, while the training of NeRF requires extensive computation due to
millions of neural network queries. For accelerating NeRF, many recent works
[22, 50, 40, 8] have explored the explicit volumetric representation, which
avoids much of the computation of querying a deep neural network.
Specifically, a 3D scene is split into $N_{x}\times N_{y}\times N_{z}$ voxels.
The densities and color features are stored in these voxels and denoted as
$\mathcal{S}^{\sigma}\in\mathbb{R}^{N_{x}\times N_{y}\times N_{z}}$ and
$\mathcal{S}^{c}\in\mathbb{R}^{N_{x}\times N_{y}\times N_{z}\times C}$.
$\mathcal{S}^{\sigma}_{i,j,k}$ and $\mathcal{S}^{c}_{i,j,k}$ represent the
learnable density and color feature of the voxel corner at a discrete position
$(i,j,k)$. For a continuous position $(x,y,z)$, the representation
$\mathcal{S}_{x,y,z}$ can be calculated by interpolating the nearest $8$
discrete positions. A small MLP network $\mathcal{C}_{\theta}$ is used to
parse the color features into RGB values, taking $\mathcal{S}^{c}$ and view direction
$\bm{d}$ as input. Formally, the density $\sigma$ and color $c$ is formulated
as
$\sigma(x,y,z)=\mathcal{S}^{\sigma}_{x,y,z},\quad
c(x,y,z,\bm{d})=\mathcal{C}_{\theta}(\mathcal{S}^{c}_{x,y,z},\bm{d}).$ (1)
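As a concrete illustration (our own sketch, not the authors' code), reading $\mathcal{S}_{x,y,z}$ at a continuous position can be done by blending the 8 nearest discrete corners with trilinear weights; here a small NumPy array stands in for $\mathcal{S}^{\sigma}$, and the query point is assumed to lie strictly inside the grid:

```python
import numpy as np

def trilinear_sample(grid, x, y, z):
    """Interpolate a voxel grid of shape (Nx, Ny, Nz) at a continuous
    position (x, y, z), blending the 8 nearest discrete corners."""
    i0, j0, k0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    tx, ty, tz = x - i0, y - j0, z - k0
    out = 0.0
    # Each corner contributes with weight (1 - t) or t along each axis.
    for i, wx in ((i0, 1 - tx), (i0 + 1, tx)):
        for j, wy in ((j0, 1 - ty), (j0 + 1, ty)):
            for k, wz in ((k0, 1 - tz), (k0 + 1, tz)):
                out += wx * wy * wz * grid[i, j, k]
    return out
```

In practice this lookup is vectorized over all sample points of a batch of rays, but the per-point arithmetic is exactly the weighted sum above.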
### 3.2 Dynamic Voxel-grid Representation
For dynamic scenes, a direct extension is to add the time dimension to the
static voxel-grid representation $\mathcal{S}$ explicitly. However, this
direct extension is memory-prohibitive due to its large, linearly increasing memory footprint: for a 300-frame video, the learned model would occupy about $30$ GB, which is difficult to fit within GPU memory. To address this problem, we propose a spatially
explicit and temporally implicit representation to reduce the memory
footprint. Specifically, we represent the dynamic scene as a 4D learnable
voxel-grid $\mathcal{G}^{\sigma}\in\mathbb{R}^{N_{x}\times N_{y}\times
N_{z}\times C_{1}}$ and $\mathcal{G}^{c}\in\mathbb{R}^{N_{x}\times N_{y}\times
N_{z}\times C_{2}}$. Different from the static scene representations, the
densities and colors for all time steps are implicitly encoded as compact
features stored in each voxel corner. The compact features will be processed
by a time-aware projection to acquire density and color for each time step.
Concretely, for the compact density feature $\mathcal{G}^{\sigma}_{x,y,z}$ and
color feature $\mathcal{G}^{c}_{x,y,z}$ at any position $(x,y,z)$, we employ
two MLPs $\mathcal{T}^{\sigma}_{\theta_{1}}$ and
$\mathcal{T}^{c}_{\theta_{2}}$ to increase the feature dimensions for better
parsing time-variant density and color. The MLPs here can be viewed as
decompressors that decompress the compact low-dimensional voxel-grid features
into more tractable ones. Compared with directly storing high-dimensional
features in each voxel, the temporally implicit representation reduces the
memory footprint significantly since the shared MLPs only increase memory
slightly.
Figure 2: Overview of our method. Given a ray, we first sample points, and
split them into static and dynamic ones using the variation field. After that,
we feed these points to the corresponding branches and query the required
properties. Then we merge the static output and dynamic output for rendering
the ray color. An L2 loss is employed to calculate loss and back-propagate.
Inner product time query. For a discrete time step $t$, we use a learnable
time-variant latent representation $\omega_{t}$ to represent the time query.
Instead of concatenating the time query with the intermediate features, we
propose to calculate the inner product between the learned time query and the
decompressed features as the required output $\sigma$ and $c$. Formally, the
density and color of a space-time query $(x,y,z,t)$ are formulated as
$\sigma(x,y,z,t)=\omega_{t}^{\sigma}\cdot\mathcal{T}^{\sigma}_{\theta_{1}}(\mathcal{G}^{\sigma}_{x,y,z}),$
(2)
$c(x,y,z,\bm{d},t)=\omega_{t}^{c}\cdot\mathcal{T}^{c}_{\theta_{2}}(\mathcal{G}^{c}_{x,y,z},\bm{d}).$
(3)
In practice, simultaneously querying multiple time steps helps reconstruct the
detail of high-dynamic motions and reduces the training iterations to traverse
through all time steps. The inner product based query will facilitate the
training speed when simultaneously querying many time steps in a training
iteration. Specifically, we denote the FLOPs of the MLP $\mathcal{T}_{\theta_{1}}$ and of the inner product operation as $\mathrm{FLOP}_{mlp}$ and $\mathrm{FLOP}_{inn}$, respectively. For a $T$-frame video, the FLOPs of the concatenation query [19] exceed $T\cdot\mathrm{FLOP}_{mlp}$ (due to the extra temporal embedding dimension), while the FLOPs of our inner product query are only $\mathrm{FLOP}_{mlp}+T\cdot\mathrm{FLOP}_{inn}$, where $\mathrm{FLOP}_{mlp}\gg\mathrm{FLOP}_{inn}$.
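To make this cost structure concrete, here is a minimal NumPy sketch (our illustration; the sizes $C_1$, $H$, $T$ and the one-layer stand-in for $\mathcal{T}^{\sigma}_{\theta_{1}}$ are assumptions): the decompressor runs once per voxel feature, after which each of the $T$ frames costs only a single inner product with its time query $\omega_t^{\sigma}$, as in Eq. 2.

```python
import numpy as np

rng = np.random.default_rng(0)
C1, H, T = 8, 32, 300   # assumed sizes: compact feature dim, hidden dim, frames

G_sigma = rng.normal(size=C1)           # compact density feature of one voxel
W = 0.1 * rng.normal(size=(H, C1))      # one-layer stand-in for the MLP T_theta1
omega = 0.1 * rng.normal(size=(T, H))   # learnable per-frame time queries omega_t

# Decompress once, then query all T frames with one inner product each.
feat = np.maximum(W @ G_sigma, 0.0)     # single MLP pass, shared by all frames
sigma_all = omega @ feat                # (T,) densities, one per time step
```

With a concatenation query, the MLP pass would instead be repeated for every time step, which is the $T\cdot\mathrm{FLOP}_{mlp}$ cost avoided here.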
### 3.3 Variation Field
In this subsection, we introduce the variation field to identify which voxels
in the 3D space are dynamic, i.e., the densities or colors are not constant
over different time steps. By separating the static and dynamic voxels, the
redundant computations caused by using a relatively heavy time-varying model
to process the static components will be avoided, which accelerates the
training and rendering.
A simpler way to accelerate training would be to separate static and dynamic regions at the pixel level, i.e., to use the temporal variance of each pixel to classify it as static or dynamic. However, this scheme is not feasible: dynamic and static regions can only be separated in the training views, where ground truth is available, whereas for novel views there is no ground truth from which to compute the pixel variance. A feasible solution is therefore to learn the voxel-level temporal variance, which is shared across all possible views. In addition, separating at the voxel level is more efficient than at the pixel level, even given an oracle that made pixel-level separation feasible in novel views. This is because not all voxels projected to a dynamic pixel are dynamic; only a small fraction of voxels around the object surfaces actually are. Therefore, the voxel-level separation produces far fewer dynamic queries.
To perform the voxel-level separation, we utilize the pixel-level temporal
variances from training videos as the supervision to estimate the voxel-level
variances. The pixel-level (or ray-level) temporal variances of different
videos are shown in Fig. 3. Formally, given a ray
$\bm{r}(\mathrm{s})=\bm{o}+\mathrm{s}\cdot\bm{d}$ with origin $\bm{o}$ and
direction $\bm{d}$, the corresponding pixel color at time step $t$ is defined
as $C(\bm{r},t)$. Then the pixel-level temporal variance $D^{2}(\bm{r})$ is
formalized as
$D^{2}(\bm{r})=\frac{1}{T}\sum_{t=1}^{T}(C(\bm{r},t)-\bar{C}(\bm{r}))^{2},\quad\bar{C}(\bm{r})=\frac{1}{T}\sum_{t=1}^{T}C(\bm{r},t),$
where $\bar{C}(\bm{r})$ is the mean color of pixel corresponding to the ray
$\bm{r}$. For identifying the dynamic pixels, the standard deviation
$D(\bm{r})$ is binarized to $M(\bm{r})$ with a threshold $\gamma$ to provide
pixel-level dynamic supervision, i.e. $M(\bm{r})=1$ if $D(\bm{r})\geq\gamma$,
else $M(\bm{r})=0$. In this way, we judge that a ray $\bm{r}$ is dynamic if
$M(\bm{r})=1$. Next, we use the $M(\bm{r})$ as supervision to estimate the
voxel-level variations $\mathcal{V}$.
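A minimal sketch of this pixel-level supervision (our illustration, not the release code): given the frames of one training view, compute $D(\bm{r})$ per pixel and threshold it by $\gamma$. Pooling the variance over the three colour channels into a single $D(\bm{r})$ is our assumption.

```python
import numpy as np

def dynamic_ray_mask(video, gamma=0.1):
    """video: (T, H, W, 3) frames of one training view, values in [0, 1].
    Returns M(r): 1 where the temporal standard deviation D(r) >= gamma."""
    var = video.var(axis=0)              # per-pixel, per-channel temporal variance
    std = np.sqrt(var.mean(axis=-1))     # pool channels into a single D(r)
    return (std >= gamma).astype(np.uint8)
```

The resulting binary map plays the role of $M(\bm{r})$ when training the variation field.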
Figure 3: Implicit interaction of multiple rays to decide which point is
dynamic. For a static ray (blue line), all points are set to be low dynamic.
For a dynamic ray (green line), at least one point is dynamic. The
intersection of multiple dynamic ray is more likely to be a dynamic point,
which is also physically intuitive.
The relations between the pixel-level variance and voxel-level variance lie in
the following aspects: (1) If a pixel is static, then all voxels passed
through by the ray corresponding to the pixel should be static in most cases
(We will discuss some special cases which violate this rule later in this
subsection.). (2) If a pixel is dynamic, then at least one of the voxels
passed through by the corresponding ray is dynamic. Fig. 3 shows the two
situations. With the above two relations, we design the variation field, which
is denoted as $\mathcal{V}\in\mathbb{R}^{N_{x}\times N_{y}\times N_{z}}$ to
represent the voxel-level temporal variance. Specifically, we uniformly sample
$N_{s}$ points from the near plane to the far plane in $\bm{r}$, and build the
following equation to satisfy the two relations mentioned above:
$\small\hat{M}(\bm{r})=\bm{s}\big(\max\big(\{\mathcal{V}_{\bm{r}(s_{i})}\mid i\in\{1,\dots,N_{s}\}\}\big)\big),$
(4)
where $\hat{M}(\bm{r})$ is an estimation of $M(\bm{r})$, and $\bm{s}$ is the
sigmoid function. Then we train the variation field by minimizing the
following binary cross-entropy loss:
$\footnotesize\mathcal{L}_{v}=\mathbb{E}_{\bm{r}}\left[-M(\bm{r}){\rm
log}(\hat{M}(\bm{r}))-(1-M(\bm{r})){\rm log}(1-\hat{M}(\bm{r}))\right].$ (5)
By optimizing the above loss function to all rays, we can get the learned
variation field $\mathcal{V}$. The training of the variation field is very
efficient, usually taking less than 30 seconds.
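The forward pass of Eqs. 4 and 5 for a single ray can be sketched as follows (our illustration; in training, the gradient of the max flows back to the single most-varying sample, which is what lets the loss pin down the voxel values):

```python
import math

def ray_dynamic_prob(variations):
    """Eq. 4: sigmoid of the maximum sampled variation V_{r(s_i)} along one ray."""
    return 1.0 / (1.0 + math.exp(-max(variations)))

def bce(M, M_hat, eps=1e-7):
    """Eq. 5 for one ray, with ground-truth label M in {0, 1}."""
    M_hat = min(max(M_hat, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -M * math.log(M_hat) - (1 - M) * math.log(1 - M_hat)
```

A static ray (label 0) pushes every sampled variation down, while a dynamic ray (label 1) only requires the maximum to be large.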
The maximization operation captures the relation between a pixel and its corresponding voxels. If a pixel is static, then Eqs. 4 and 5 will force all voxels passed through by the corresponding ray to be
static ($\mathcal{V}_{x,y,z}=0$). If a pixel is dynamic, Eq. 4 requires at
least one of the corresponding voxels (i.e., the max value of the voxel
variances) to be dynamic ($\mathcal{V}_{x,y,z}=+\infty$). Although we provide
no information about which specific voxels in a dynamic ray are dynamic, the
implicit interaction of multiple different rays will force the solution to be
physically reasonable. To explain this, we focus on the observable voxels, i.e., those passed through by at least one ray. If a point $(x,y,z)$ is passed through by at least one static ray, then $\mathcal{V}_{x,y,z}$ will tend to be optimized to a value close to zero. If a point $(x,y,z)$ is passed through only by dynamic rays and is not occluded by other dynamic voxels, then $\mathcal{V}_{x,y,z}$ will be optimized towards $+\infty$. This is because, without occlusion, the points in front of $(x,y,z)$ along the dynamic rays will be passed through by other, static rays (the front space along these rays is observable from other views). We illustrate this situation in Fig. 3.
Inference. After the training process, the temporal variation at a specific 3D
position $(x,y,z)$ is $\mathcal{V}_{x,y,z}$, which is easily acquired by
interpolating the discrete variation field. We then identify a voxel in the
scene as dynamic if $\mathcal{V}_{x,y,z}$ is larger than a hyper-parameter
$\beta$, and as static if it is smaller than $\beta$. Formally, we will get a
dynamic mask $\dot{{\rm M}}\in\{0,1\}^{N_{x}\times N_{y}\times N_{z}}$,
which will be used to split sampling points in a ray into static points and
dynamic points. We evaluate the effectiveness of this inference method in the
test views and find that the recall and precision are reasonable for splitting
the dynamic and static parts (recall: 0.97, precision: 0.94 when $\beta=0.9$).
Although the recall seems sufficient to retrieve most dynamic parts, we empirically find that the remaining false negatives in the rendered images affect the rendering quality. To address this problem, we use a ray-wise max-pooling operation that marks points near a dynamic point as dynamic as well. The kernel size of the max-pooling is set to $k_{m}=21$, and the stride is set to $1$. In this way, the recall is very close to $1$. Although several hyper-parameters are involved, we have empirically found that the thresholds $\gamma$ and $\beta$ are not sensitive over a wide range of reasonable values.
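The ray-wise max-pooling amounts to a stride-1 dilation over the per-sample dynamic flags of one ray (our sketch, with $k_m=21$ as in the text):

```python
def dilate_dynamic(flags, k=21):
    """Stride-1, kernel-k max-pooling over one ray's per-sample dynamic flags,
    so samples within k // 2 positions of a dynamic sample become dynamic too."""
    n, half = len(flags), k // 2
    return [max(flags[max(0, i - half):i + half + 1]) for i in range(n)]
```

A single dynamic sample thus widens into a window of $k_m$ dynamic samples, which recovers the near-miss false negatives.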
Discussion. There are situations in which rule (1) is broken due to occlusion. Specifically, when a dynamic voxel is occluded by static voxels, the occluded parts should not be classified as static, which makes the 2D supervision noisy. In practice, however, we found that the learning-based separation has a certain degree of tolerance for this situation. We conducted experiments to verify this and present the results in Fig. 4. The occluded region is visualized in the leftmost view (the area behind the static roadblock). Although the dynamic region marked in yellow is occluded in the left view, it is still classified as dynamic. This can be inferred from another view in which the occluded region has changed, as illustrated in the middle and right images of Fig. 4. The variation field is learning-based and will learn a solution that satisfies most constraints: if it forced some dynamic voxels occluded by static voxels to be $0$, the loss from other, visible views would be high. As a result, the learning process tends to assign a “middle solution” to voxels with inconsistent supervision from different views. We also attempted to use the transmittance as a weight (in the manner of volumetric rendering) to learn the variation field, which would explicitly handle the occlusion problem; however, this incurred a large efficiency drop with similar performance. As a result, we use the proposed variation field and find that this formulation works well for most scenes, including challenging scenes with large areas of motion.
Figure 4: Impact of occlusions on the variation field.
### 3.4 Training of Mixed Neural Voxels
With the help of the variation field, we can split a scene into dynamic voxels
and static voxels. To reduce redundant computations, we use the lightweight
static model described in Sec. 3.1 to compute the densities and colors for
static voxels and the dynamic model described in Sec. 3.2 to compute the
densities and colors for dynamic voxels. The overall architecture is
illustrated in Fig. 2.
Specifically, for a given ray $\bm{r}(s)=\bm{o}+s\bm{d}$ with origin $\bm{o}$
and view direction $\bm{d}$, we apply stratified sampling between the near and
far planes to obtain $N_{s}$ points. These points are then separated into
static and dynamic ones by evaluating the proposed variation field at each
point. We pass the static points through the static branch to retrieve their
colors and densities, and the dynamic points through the dynamic branch,
together with a deferred time query $\omega_{t}$, to retrieve the
corresponding properties. After that, we merge the static and dynamic points
according to their order along the ray and apply volumetric rendering to the
merged points to obtain the rendered color, which is formulated as
$C(\bm{r},t)=\sum_{i=1}^{N_{s}}T_{i,t}\cdot(1-\exp(-\sigma_{i,t}\delta_{i}))\cdot c_{i,t},$ (6)
where $T_{i,t}$ is the accumulated transmittance
$T_{i,t}=\exp(-\sum_{j=1}^{i-1}\sigma_{j,t}\delta_{j})$, and
$\delta_{i}$ is the distance between adjacent samples. Given the ground-truth
color $C_{g}(\bm{r},t)$, an $\ell_2$ loss is employed to train the model:
$\mathcal{L}=\mathbb{E}_{(\bm{r},t)}\left[\|C_{g}(\bm{r},t)-C(\bm{r},t)\|_{2}^{2}\right].$
(7)
For both static and dynamic branches, we omit the computation of color for
points whose densities are close to zero, which is a widely adopted pruning
strategy [22, 50, 8].
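Eq. (6) is the standard volumetric-rendering quadrature. A minimal numpy sketch for one ray at one time step (a stand-in for the authors' implementation, with array shapes assumed):

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Volumetric rendering of Eq. (6) for one ray at one time step.
    sigmas: (N_s,) densities of the merged static+dynamic samples;
    colors: (N_s, 3) RGB values; deltas: (N_s,) inter-sample distances."""
    alpha = 1.0 - np.exp(-sigmas * deltas)               # per-sample opacity
    # T_i = exp(-sum_{j<i} sigma_j * delta_j), with T_1 = 1
    trans = np.exp(-np.concatenate([[0.0], np.cumsum(sigmas * deltas)[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * colors).sum(axis=0)

# a nearly opaque red sample hides the green sample behind it
c = render_ray(np.array([1e3, 1.0]),
               np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
               np.ones(2))
```

The pruning strategy mentioned above corresponds to skipping the color computation for samples whose `alpha` (hence `weights`) is effectively zero.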
Efficiency analysis. We define the proportion of dynamic points in a scene as
$\lambda$ ($\approx 0.05$ for most scenes) and denote the per-point FLOPs of
the static and dynamic branches as FLOPsta and FLOPdyn, respectively. The
total FLOPs of MixVoxels are then FLOPsta + $\lambda\cdot$FLOPdyn, whereas a
fully dynamic model costs FLOPdyn. Empirically, FLOPdyn$/$FLOPsta ranges from
$50$ to $100$ under reasonable dimension settings, so the acceleration ratio
from splitting static and dynamic models, FLOPdyn$/$(FLOPsta +
$\lambda\cdot$FLOPdyn), is roughly $10$. In practice, the actual speedup on a
3090 GPU is about $5$; the gap between analysis and experiment likely comes
from GPU characteristics, which favor a more homogeneous network.
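The acceleration-ratio arithmetic can be checked directly (illustrative values only; the exact figure depends on $\lambda$ and the branch FLOP ratio):

```python
def speedup(lam=0.05, ratio=80.0):
    """Analytical acceleration of the mixed model over an all-dynamic model.
    lam: proportion of dynamic points; ratio: FLOPdyn / FLOPsta (the text
    reports 50-100). Costs are per sample point."""
    flop_sta = 1.0
    flop_dyn = ratio * flop_sta
    # all-dynamic cost / mixed cost
    return flop_dyn / (flop_sta + lam * flop_dyn)
```

With `ratio = 80` and `lam = 0.05` this gives $80/5 = 16$; across the reported 50-100 range the analytical speedup stays on the order of ten, in line with the text.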
Figure 5: Visual comparisons with state-of-the-art methods. K-Planes [12] and
HexPlane [7] are concurrent works. We have selected four representative
patches to better inspect the details. Our method performs well on
reconstructing details and capturing movements.
### 3.5 Implementation Detail
The voxel-grid representation requires large GPU memory to store the cubically
growing number of voxels. To implement the voxel grids in a more memory-
efficient way, we use the tensor factorization technique proposed in TensoRF
[8] to reduce the memory footprint: a 3D tensor is factorized into outer
products of vectors and 2D matrices. We factorize all the voxel-grid tensors,
including the static voxels, the dynamic voxels, and the variation field. With
tensor factorization, the learned model costs about 500MB for a 300-frame
multi-view video scene. For the voxel resolutions, we follow [8] and start
from an initial low resolution of $256^{3}$, upsampling at steps 1500, 2000,
2500, and 2750 with a linear increase in log space; the final resolution is
set to $640^{3}$. Whenever the resolution changes, we re-train the variation
field, which only takes about 15-30s. The voxel-grid feature dimension is set
to $27$, and the hidden dimension of the MLP is set to $512$. For training, we
use the Adam [15] optimizer with a learning rate of $0.02$ for voxels and
$3\mathrm{e}{-3}$ for MLPs. The total variation loss [50] is incorporated as a
regularization to encourage spatial smoothness.
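A minimal sketch of the vector-matrix factorization described above (rank, grid size, and random values are illustrative; TensoRF sums several such rank-1 terms per tensor and per axis pairing):

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z, R = 64, 64, 64, 4             # grid size and rank (illustrative)

vectors = rng.normal(size=(R, Z))      # one vector per component (z-axis)
matrices = rng.normal(size=(R, X, Y))  # one 2D matrix per component (xy-plane)

# dense reconstruction: T[x, y, z] = sum_r M_r[x, y] * v_r[z]
dense = np.einsum('rxy,rz->xyz', matrices, vectors)

params_dense = X * Y * Z               # 262144 values to store
params_fact = R * (X * Y + Z)          # 16640 values, roughly 16x smaller here
```

The memory saving grows with resolution, since the factorized parameter count scales with $R(XY+Z)$ rather than $XYZ$.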
## 4 Experiments
### 4.1 Experiment Setting
Dataset. We validate our method on two datasets: (1) The Plenoptic Video
Dataset [19], which consists of 6 publicly accessible scenes: coffee-martini,
flame-salmon, cook-spinach, cut-roasted-beef, flame-steak and sear-steak. We
conduct experiments on all six scenes. Each scene contains 19 videos with
different camera views. The dataset contains many challenging scenes including
objects with topology changes, objects with volumetric effects, various
lighting conditions, etc. (2) Our proposed dataset, which includes two more
complex dynamic scenes: moving-cars and solving-rubik. The moving-cars scene features
several vehicles passing across the screen, with significant motion and
displacement. Meanwhile, in the solving-rubik scene, a man solves a Rubik’s
cube at a rapid pace, averaging 4 rotations per second, providing an
opportunity to evaluate the model’s ability to capture swift movements. The
collection procedures used are similar to those of DyNeRF. More details are
presented in the appendix.
For training and evaluation, we follow the experimental setting of [19], which
employs 18 views for training and 1 view for evaluation. To quantitatively
evaluate the rendering quality on novel views, we measure PSNR, DSSIM and
LPIPS [53] on the test views. We also provide more metrics in the appendix,
including FLIP [1] and JOD [16], for which we find the comparisons similar to
those of PSNR and LPIPS. We follow the setting of [19] and evaluate our model
frame by frame. For videos of 300 frames or more, we evaluate our model every
10 frames [19] to calculate the frame-by-frame metrics, except for JOD, which
requires a stack of continuous frames.
Table 1: Results on our collected dataset, including two scenes. Scene | Model | PSNR$\uparrow$ | DSSIM$\downarrow$ | LPIPS$\downarrow$
---|---|---|---|---
Moving-Cars | MixVoxels-S | 18.72 | 0.251 | 0.689
MixVoxels-M | 18.97 | 0.228 | 0.552
MixVoxels-L | 18.89 | 0.222 | 0.540
MixVoxels-X | 19.11 | 0.210 | 0.516
Solving-Rubik | MixVoxels-S | 25.39 | 0.065 | 0.339
MixVoxels-M | 26.05 | 0.059 | 0.275
MixVoxels-L | 26.28 | 0.055 | 0.241
MixVoxels-X | 26.80 | 0.047 | 0.209
Training Schedules. To evaluate the effect of training time, we train
MixVoxels with the different configurations shown in Tab. 2, which vary in the
number of training iterations and the number of sample points per ray. By
default, the step size for sampling points is set to four times the voxel
width; the $8\times$ setting uses eight times as many sampling points as the
default.
Table 2: Different training configurations of MixVoxels. Model | Iterations | Sampling points | Training Time
---|---|---|---
MixVoxels-S | 5000 | 1 $\times$ | 15 min
MixVoxels-M | 12500 | 1 $\times$ | 40 min
MixVoxels-L | 25000 | 1 $\times$ | 80 min
MixVoxels-X | 50000 | 8 $\times$ | 300 min
### 4.2 Results
Quantitative results and comparisons. We present the metrics and compare with
other methods in Tab. 3. Compared with the previous state-of-the-art method
DyNeRF, we reduce the training time from 1.3K GPU hours to 15 minutes, making
the training of complex dynamic scenes far more practical. For rendering,
MixVoxels achieves 37.7 fps at 1K resolution. Compared with concurrent works,
MixVoxels requires less training time and renders faster, while attaining
competitive PSNR and LPIPS. For example, with only 15 minutes of training,
MixVoxels achieves a PSNR of 31.03, comparable to other methods trained for
hours; with sufficient training, all metrics improve further. Quantitative
results on our collected, more complex scenes are presented in Tab. 1.
Table 3: Quantitative results comparisons. All metrics are measured on 300-frame scenes. We also report the training time, rendering speed (FPS) and model size. $\ast$ Note DyNeRF is trained on 8 GPUs, while others are trained on one GPU. Method | Train | Render | Size | PSNR$\uparrow$ | DSSIM$\downarrow$ | LPIPS$\downarrow$
---|---|---|---|---|---|---
DyNeRF[19] | 7 days$\ast$ | - | 28 MB | 29.58 | 0.0197 | 0.083
Concurrent work |
StreamRF[18] | 75 min | 8.3 | 5310MB | 28.26 | - | -
NeRFPlayer[37] | 360 min | 0.05 | - | 30.69 | 0.034 | 0.111
Hyperreel[2] | 540 min | 2.0 | 360 MB | 31.10 | 0.036 | 0.096
K-Planes[12] | 108 min | - | - | 31.63 | 0.018 | -
HexPlanes[7] | 720 min | - | 200MB | 31.71 | 0.014 | 0.075
MixVoxels-S | 15 min | 37.7 | 500 MB | 31.03 | 0.022 | 0.129
MixVoxels-M | 40 min | 37.7 | 500 MB | 31.22 | 0.019 | 0.102
MixVoxels-L | 80 min | 37.7 | 500 MB | 31.34 | 0.017 | 0.096
MixVoxels-X | 300 min | 4.6 | 500 MB | 31.73 | 0.015 | 0.064
Qualitative results and comparisons. Fig. 6 demonstrates novel-view rendering
results on different dynamic scenes. The first four rows are novel-view videos
from the Plenoptic Video Dataset [19]; the last two rows are novel-view videos
from our two collected, more complex dynamic scenes. The results show that our
method achieves near photo-realistic rendering quality. We provide video
results in the supplementary material. Qualitative comparisons are shown in
Fig. 5: MixVoxels better reconstructs the moving object (the firing gun) and
textural details such as the hat and the salmon stripes.
Figure 6: Novel view synthesis of MixVoxels. We select some frames at
different views. The last column demonstrates the normalized depth. We provide
videos in the supplementary material.
We further investigate the relation between rendering efficiency and rendering
quality. As shown in the lower part of Tab. 3, increasing the training time
improves both PSNR and LPIPS. Longer training facilitates the reconstruction
of sharp boundaries and fine details. The visual comparisons in Fig. 7 show
that 15 minutes of training produces satisfactory recovery of most scene
components but results in blurry motion details; with longer training, the
moving objects become clearer with distinct boundaries.
### 4.3 Ablation Study
In this subsection, we empirically justify the design of MixVoxels by ablating
or modifying several key features, and provide analysis that intuitively
explains the results. We conduct all experiments in this subsection on the
coffee-martini scene, which we find representative for demonstrating fast-
moving complex objects.
Ablation on splitting voxels. To study the effect of splitting static and
dynamic voxels, we compare MixVoxels with a full-dynamic voxel-grid
representation in which all points are processed by the dynamic model. Tab. 4
shows the comparison. With the same number of training iterations, the
full-dynamic model is more time-consuming, which is intuitive because it
processes all voxels with the dynamic model. Fig. 8 shows the qualitative
comparison: the full-dynamic model recovers blurred motions. We speculate that
the large static regions hinder the capture of dynamic information; the
network is biased by the many motion-free static voxels and tends to learn
low-frequency information.
Figure 7: Qualitative demonstration of different training schedules. Longer
training helps better reconstruct the high-dynamic parts.
Figure 8: Qualitative comparison of MixVoxels and the full-dynamic model. Training the full-dynamic model for the same number of iterations cannot reconstruct the motion details well. Table 4: Ablation on the mixed voxels. Training with the full-dynamic voxel model hurts both efficiency and efficacy. Method | Time | PSNR$\uparrow$ | DSSIM$\downarrow$ | LPIPS$\downarrow$ | FLIP$\downarrow$ | JOD$\uparrow$
---|---|---|---|---|---|---
Full-Dynamic | 2.5h | 28.36 | 0.036 | 0.2236 | 0.1196 | 7.44
MixVoxels | 0.6h | 29.47 | 0.026 | 0.1167 | 0.1223 | 7.99
Table 5: Ablation on three different methods for time query. We substitute only the time query method and train each variant within the proposed MixVoxels framework. Method | Time | PSNR$\uparrow$ | DSSIM$\downarrow$ | LPIPS$\downarrow$ | FLIP$\downarrow$ | JOD$\uparrow$
---|---|---|---|---|---|---
Concat | 58m | 28.95 | 0.037 | 0.2146 | 0.1294 | 7.44
Fourier | 43m | 28.67 | 0.029 | 0.1824 | 0.1286 | 7.60
Inner product | 40m | 29.47 | 0.026 | 0.1167 | 0.1223 | 7.99
Ablation on time query. We compare our inner-product time query with two
variants: (1) concatenation, which concatenates the temporal embedding with
the voxel features before processing by an MLP, and (2) the Fourier head
proposed by [47], which reconstructs the dynamics in the frequency domain.
Tab. 5 shows the performance comparison. The concatenation query is both
space- and time-consuming: querying one time step requires forwarding the
fused features through the whole MLP. Limited by GPU memory, we can only query
50 time steps per iteration with the concatenation method, which harms the
performance on high-dynamic regions. The Fourier head predicts the magnitudes
of different frequency components and its performance is competitive, but it
requires an additional inverse discrete Fourier transform to recover the
information in the temporal domain. Overall, the inner-product query is the
simplest and most efficient.
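A minimal sketch of why the inner-product query amortizes over many time steps (the function name, shapes, and the random stand-ins for learned parameters are our assumptions; in the real model the head and the temporal embeddings are learned):

```python
import numpy as np

def query_times(feat, time_indices, n_frames=300, dim=512, seed=0):
    """Query several time steps for one dynamic point. The MLP head W and
    the temporal embedding table E are random stand-ins fixed by seed."""
    rng = np.random.default_rng(seed)
    E = rng.normal(size=(n_frames, dim))   # per-frame temporal embeddings
    W = rng.normal(size=(feat.size, dim))  # stand-in for the MLP output head
    h = feat @ W                           # heavy part: computed once per point
    # each additional time query costs only one inner product with h
    return E[np.asarray(time_indices)] @ h

vals = query_times(np.ones(27), [0, 10, 299])
```

This is the contrast with the concatenation variant, where every queried time step would require a separate full forward pass through the MLP.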
Number of time queries per iteration. We empirically find that simultaneously
querying multiple time steps in one iteration helps reconstruct the details of
moving parts. Fig. 9 demonstrates the effect of different numbers of time
queries, denoted as Q. With more time queries, the boundaries of the moving
hand and the flowing coffee become clearer. Querying more time steps provides
denser supervision and lets the model acquire global temporal information in
every iteration, which accelerates convergence. The efficient inner-product
time query allows adding more queries with a negligible increase in
computation.
Figure 9: Ablation on the number of time queries, denoted as Q.
### 4.4 Limitations
Our method can synthesize novel-view videos with relatively high quality.
However, for some scenes with complex lighting conditions, inconsistent
property predictions may appear at the boundary between dynamic and static
voxels, as shown in Fig. 10. We suspect that this phenomenon is caused by the
under-sampling of dynamic regions in scenes with challenging conditions. We
will investigate ways to address this problem in future work.
Figure 10: Some inconsistent density and color predictions in the boundaries
between dynamic and static regions.
## 5 Conclusion
This paper presents MixVoxels, a new method for efficiently reconstructing 4D
dynamic scenes and synthesizing novel-view videos. The core of our method is
to split the 3D space into static and dynamic components with the proposed
variation field and to process them with different branches. The separation
speeds up training and lets the dynamic branch focus on the dynamic parts,
improving performance. We also design an efficient dynamic voxel-grid
representation with an inner-product time query. The proposed method achieves
competitive results with only 15 minutes of training, making the training and
rendering of complex dynamic scenes more practical. We believe the fast
training speed will enable applications that are currently bottlenecked by
training efficiency.
## 6 Acknowledgement
This work was supported in part by the National Natural Science Fund for
Distinguished Young Scholars under Grant 62025304.
## References
* [1] Pontus Andersson, Jim Nilsson, Tomas Akenine-Möller, Magnus Oskarsson, Kalle Åström, and Mark D Fairchild. Flip: A difference evaluator for alternating images. Proc. ACM Comput. Graph. Interact. Tech., 3(2):15–1, 2020.
* [2] Benjamin Attal, Jia-Bin Huang, Christian Richardt, Michael Zollhoefer, Johannes Kopf, Matthew O’Toole, and Changil Kim. Hyperreel: High-fidelity 6-dof video with ray-conditioned sampling. arXiv preprint arXiv:2301.02238, 2023.
* [3] Aayush Bansal, Minh Vo, Yaser Sheikh, Deva Ramanan, and Srinivasa Narasimhan. 4d visualization of dynamic events from unconstrained multi-view videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5366–5375, 2020.
* [4] Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan. Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5855–5864, 2021.
* [5] Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5470–5479, 2022.
* [6] Chris Buehler, Michael Bosse, Leonard McMillan, Steven Gortler, and Michael Cohen. Unstructured lumigraph rendering. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 425–432, 2001.
* [7] Ang Cao and Justin Johnson. Hexplane: a fast representation for dynamic scenes. arXiv preprint arXiv:2301.09632, 2023.
* [8] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. arXiv preprint arXiv:2203.09517, 2022.
* [9] Paul E Debevec, Camillo J Taylor, and Jitendra Malik. Modeling and rendering architecture from photographs: A hybrid geometry-and image-based approach. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 11–20, 1996.
* [10] Yilun Du, Yinan Zhang, Hong-Xing Yu, Joshua B Tenenbaum, and Jiajun Wu. Neural radiance flow for 4d view synthesis and video processing. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 14304–14314. IEEE Computer Society, 2021.
* [11] John Flynn, Michael Broxton, Paul Debevec, Matthew DuVall, Graham Fyffe, Ryan Overbeck, Noah Snavely, and Richard Tucker. Deepview: View synthesis with learned gradient descent. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2367–2376, 2019.
* [12] Sara Fridovich-Keil, Giacomo Meanti, Frederik Warburg, Benjamin Recht, and Angjoo Kanazawa. K-planes: Explicit radiance fields in space, time, and appearance. arXiv preprint arXiv:2301.10241, 2023.
* [13] Chen Gao, Ayush Saraf, Johannes Kopf, and Jia-Bin Huang. Dynamic view synthesis from dynamic monocular video. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5712–5721, 2021.
* [14] Takeo Kanade, Peter Rander, and PJ Narayanan. Virtualized reality: Constructing virtual worlds from real scenes. IEEE multimedia, 4(1):34–47, 1997.
* [15] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [16] Vamsi Kiran Adhikarla, Marek Vinkler, Denis Sumin, Rafal K Mantiuk, Karol Myszkowski, Hans-Peter Seidel, and Piotr Didyk. Towards a quality metric for dense light fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 58–67, 2017.
* [17] Kiriakos N Kutulakos and Steven M Seitz. A theory of shape by space carving. International journal of computer vision, 38(3):199–218, 2000.
* [18] Lingzhi Li, Zhen Shen, Zhongshu Wang, Li Shen, and Ping Tan. Streaming radiance fields for 3d video synthesis. arXiv preprint arXiv:2210.14831, 2022.
* [19] Tianye Li, Mira Slavcheva, Michael Zollhoefer, Simon Green, Christoph Lassner, Changil Kim, Tanner Schmidt, Steven Lovegrove, Michael Goesele, Richard Newcombe, et al. Neural 3d video synthesis from multi-view video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5521–5531, 2022.
* [20] Zhengqi Li, Simon Niklaus, Noah Snavely, and Oliver Wang. Neural scene flow fields for space-time view synthesis of dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6498–6508, 2021.
* [21] David B Lindell, Julien NP Martel, and Gordon Wetzstein. Autoint: Automatic integration for fast neural volume rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14556–14565, 2021.
* [22] Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. Advances in Neural Information Processing Systems, 33:15651–15663, 2020.
* [23] Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, and Yaser Sheikh. Neural volumes: Learning dynamic renderable volumes from images. arXiv preprint arXiv:1906.07751, 2019.
* [24] Ricardo Martin-Brualla, Noha Radwan, Mehdi SM Sajjadi, Jonathan T Barron, Alexey Dosovitskiy, and Daniel Duckworth. Nerf in the wild: Neural radiance fields for unconstrained photo collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7210–7219, 2021.
* [25] Ben Mildenhall, Pratul P Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, and Abhishek Kar. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics (TOG), 38(4):1–14, 2019.
* [26] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.
* [27] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. arXiv preprint arXiv:2201.05989, 2022.
* [28] Keunhong Park, Utkarsh Sinha, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Steven M Seitz, and Ricardo Martin-Brualla. Nerfies: Deformable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5865–5874, 2021.
* [29] Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, and Steven M Seitz. Hypernerf: A higher-dimensional representation for topologically varying neural radiance fields. arXiv preprint arXiv:2106.13228, 2021.
* [30] Eric Penner and Li Zhang. Soft 3d reconstruction for view synthesis. ACM Transactions on Graphics (TOG), 36(6):1–11, 2017.
* [31] Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-nerf: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10318–10327, 2021.
* [32] Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, and Andrea Tagliasacchi. Derf: Decomposed radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14153–14161, 2021.
* [33] Christian Reiser, Songyou Peng, Yiyi Liao, and Andreas Geiger. Kilonerf: Speeding up neural radiance fields with thousands of tiny mlps. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14335–14345, 2021.
* [34] Gernot Riegler and Vladlen Koltun. Free view synthesis. In European Conference on Computer Vision, pages 623–640. Springer, 2020.
* [35] Steven M Seitz and Charles R Dyer. Photorealistic scene reconstruction by voxel coloring. International Journal of Computer Vision, 35(2):151–173, 1999.
* [36] Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, and Michael Zollhofer. Deepvoxels: Learning persistent 3d feature embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2437–2446, 2019.
* [37] Liangchen Song, Anpei Chen, Zhong Li, Zhang Chen, Lele Chen, Junsong Yuan, Yi Xu, and Andreas Geiger. Nerfplayer: A streamable dynamic scene representation with decomposed neural radiance fields. arXiv preprint arXiv:2210.15947, 2022.
* [38] Pratul P Srinivasan, Ben Mildenhall, Matthew Tancik, Jonathan T Barron, Richard Tucker, and Noah Snavely. Lighthouse: Predicting lighting volumes for spatially-coherent illumination. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8080–8089, 2020.
* [39] Pratul P Srinivasan, Richard Tucker, Jonathan T Barron, Ravi Ramamoorthi, Ren Ng, and Noah Snavely. Pushing the boundaries of view extrapolation with multiplane images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 175–184, 2019.
* [40] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5459–5469, 2022.
* [41] Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul P Srinivasan, Jonathan T Barron, and Henrik Kretzschmar. Block-nerf: Scalable large scene neural view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8248–8258, 2022.
* [42] Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems, 33:7537–7547, 2020.
* [43] Justus Thies, Michael Zollhöfer, and Matthias Nießner. Deferred neural rendering: Image synthesis using neural textures. ACM Transactions on Graphics (TOG), 38(4):1–12, 2019.
* [44] Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, and Christian Theobalt. Non-rigid neural radiance fields: Reconstruction and novel view synthesis of a dynamic scene from monocular video. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12959–12970, 2021.
* [45] Richard Tucker and Noah Snavely. Single-view view synthesis with multiplane images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 551–560, 2020.
* [46] Michael Waechter, Nils Moehrle, and Michael Goesele. Let there be color! large-scale texturing of 3d reconstructions. In European conference on computer vision, pages 836–850. Springer, 2014.
* [47] Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Jingyi Yu, and Lan Xu. Fourier plenoctrees for dynamic radiance field rendering in real-time. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13524–13534, 2022.
* [48] Daniel N Wood, Daniel I Azuma, Ken Aldinger, Brian Curless, Tom Duchamp, David H Salesin, and Werner Stuetzle. Surface light fields for 3d photography. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 287–296, 2000.
* [49] Wenqi Xian, Jia-Bin Huang, Johannes Kopf, and Changil Kim. Space-time neural irradiance fields for free-viewpoint video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9421–9431, 2021.
* [50] Alex Yu, Sara Fridovich-Keil, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. arXiv preprint arXiv:2112.05131, 2021.
* [51] Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, and Angjoo Kanazawa. Plenoctrees for real-time rendering of neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5752–5761, 2021.
* [52] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020.
* [53] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586–595, 2018.
* [54] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. Stereo magnification: Learning view synthesis using multiplane images. arXiv preprint arXiv:1805.09817, 2018.
* [55] C Lawrence Zitnick, Sing Bing Kang, Matthew Uyttendaele, Simon Winder, and Richard Szeliski. High-quality video view interpolation using a layered representation. ACM transactions on graphics (TOG), 23(3):600–608, 2004.
# Lasing at a Stationary Inflection Point
A. Herrero-Parareda1, N. Furman1, T. Mealy1, R. Gibson2, R. Bedford3,
I. Vitebskiy2, and F. Capolino1
1Department of Electrical Engineering and Computer Science, University of
California, Irvine, CA 92617, USA
2Air Force Research Laboratory, Sensors Directorate, Wright-Patterson Air
Force Base, Ohio 45433, USA
3Air Force Research Laboratory, Materials and Manufacturing Directorate,
Wright-Patterson Air Force Base, Ohio 45433, USA<EMAIL_ADDRESS>
###### Abstract
The concept of lasers based on the frozen mode regime in active periodic
optical waveguides with a 3rd-order exceptional point of degeneracy (EPD) is
advanced. The frozen mode regime in a lossless and gainless waveguide is
associated with a stationary inflection point (SIP) in the Bloch dispersion
relation, where three Bloch eigenmodes coalesce forming the frozen mode. As a
practical example, we consider an asymmetric serpentine optical waveguide
(ASOW). An ASOW operating near the SIP frequency displays a large group delay
of a non-resonant nature that scales as the cube of the waveguide length,
leading to a strong gain enhancement when active material is included.
Therefore, a laser operating in the close vicinity of an SIP has a gain
threshold that scales as the inverse cube of the waveguide length. We
determine that this scaling law is maintained in the presence of small distributed
losses, such as radiation associated with waveguide bends and roughness. In
addition, we show that although gain causes a distortion in the modes
coalescing at the SIP, the properties of the frozen mode are relatively
resistant to such small perturbations and we still observe a large degree of
exceptional degeneracy for gain values that bring the system above threshold.
Finally, our study also reveals that lasing near an SIP is favored over lasing
near a photonic band edge located in close proximity to the SIP. In
particular, we observe that SIP-induced lasing in an ASOW displays a lower
gain threshold than lasing near the photonic regular band edge (RBE),
even though the SIP resonance has a lower quality factor than the RBE
resonance.
Journal: ome. Article type: Research Article.
## 1 Introduction
An exceptional point of degeneracy (EPD) is a point in a parameter space
associated with the coalescing of the eigenvalues and the eigenvectors of a
system, see for example Ref. [1, 2, 3, 4, 5, 6], or [7] where Figotin and
Vitebskiy refer to EPDs as stationary points, without explicitly using the
term EPD but providing detailed math and physics aspects. It has been shown
that EPDs in photonic systems exist when waveguide modes are coupled in the
presence of gain and loss, as in PT-symmetric systems [5, 8, 9, 10, 11, 12].
However, systems do not need to have loss and gain to exhibit an EPD. In this
paper, indeed, we focus on the stationary inflection point (SIP) formation in
periodic lossless and gainless waveguides [13, 14, 15, 16, 17, 18, 19, 20, 21,
22]. The SIP is an EPD of order three [19, 23]. In lossless and gainless
waveguides, second-, third-, and fourth-order exceptional degeneracies of
Floquet-Bloch eigenmodes are associated, respectively, with a regular band
edge (RBE), an SIP [24, 19], and a degenerate band edge (DBE) [25] in the
frequency-wavenumber dispersion relation of the waveguide modes.
an EPD of order $m$ at $\left(k_{m},\omega_{m}\right)$, the frequency-
wavenumber dispersion relation is approximated as [26, 19]
$(\omega-\omega_{m})\propto(k-k_{m})^{m},$ (1)
which shows that the higher the order of the EPD, the flatter the dispersion
diagram in its vicinity. For example, an SIP ($m=3$) has a flatter dispersion
diagram than an RBE ($m=2$). The DBE has a flatness given by $m=4$ and it is a
degenerate version of the RBE. In contrast to the SIP, the RBE and the DBE have a bandgap on one side of their respective frequencies; the SIP therefore exhibits physical properties that differ from those of the RBE and DBE. The
flatness at the SIP frequency $\omega_{S}$ is associated with the inflection
point developed in the dispersion relation of the propagating component of the
frozen mode [27], as is illustrated in Fig. 1(b). The flatness at the RBE
frequency, however, is due to the coalescence of two counter-propagating
modes. The common slow-wave resonances near an RBE are fundamentally different from the resonances that couple to the SIP-associated frozen mode, because the SIP is not flanked by a bandgap [28] and the frozen mode also contains evanescent waves. Moreover, the SIP condition is not particularly sensitive to
the size and shape of the underlying photonic structure [29]. The frozen mode
regime exhibits exceptional properties including low group velocity, greatly
enhanced quality factor and group delay characterized by a strong scaling with
the waveguide length [28].
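As a numerical illustration of Eq. (1), the following sketch (with the proportionality constant set to one, an assumption for illustration) compares the frequency detuning and the group velocity $\partial\omega/\partial k\propto m(k-k_{m})^{m-1}$ for the three degeneracy orders:

```python
def detuning(dk, m):
    # Frequency detuning (omega - omega_m) near an EPD of order m,
    # with the proportionality constant of Eq. (1) set to one.
    return dk**m

dk = 0.1  # small wavenumber detuning from the degeneracy point
for m, name in [(2, "RBE"), (3, "SIP"), (4, "DBE")]:
    vg = m * dk**(m - 1)  # group velocity d(omega)/dk, same normalization
    print(f"{name} (m={m}): |domega| ~ {detuning(dk, m):.1e}, v_g ~ {vg:.1e}")
```

The higher the order $m$, the faster both quantities vanish as $k\to k_{m}$, which is the flatness stated above.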
Figure 1: (a) SIP laser made of a periodic ASOW; gain is assumed to be
distributed uniformly along the waveguide. The SOI waveguide follows a
serpentine path. A unit cell of length $d=2R$ is defined within the two
oblique dashed lines as in [30]. There are bidirectional coupling points
between adjacent loops (denoted by red arrows). (b) Dispersion diagram of the
propagating modes, with SIPs at $\omega_{S}$.
An enhanced group delay allows for large one-pass amplification compared to
conventional lasers of the same cavity length [31]. This boost has been
observed theoretically in lasing systems operating near an RBE [32] and a DBE
[31, 33]. The concept of unidirectional lasing near SIP in a periodic non-
reciprocal multilayered structure was discussed in [18]. A fundamental problem
with the SIP realization in multilayered structures is that it essentially
requires nonreciprocity, which at optical wavelengths is too small to achieve
a well-defined SIP in the Bloch dispersion relation. In other words, the model
considered in [18] can be practically relevant only at microwave frequencies,
where the nonreciprocal effects can be strong enough. By contrast, SIP
realization in three-way periodic optical waveguides does not require the use
of magneto-optical materials and can be achieved in perfectly reciprocal
systems at optical wavelengths. We advance the theory of SIP lasing in a
silicon-on-insulator (SOI) reciprocal waveguide, namely in the periodic
asymmetric serpentine optical waveguide (ASOW). The ASOW is examined using
coupled-mode theory and the transfer matrix method as was done for the
waveguides studied in [34, 19], and therefore our analysis is restricted to
the linear regime. In the future, we aim to expand on the findings presented
in this article by conducting time-domain simulations to provide an insight
into mode selectivity and the impact of nonlinear effects in SIP lasers. The
principal result of this paper is that resonances near the SIP frequency
experience giant power gain when an active material [35] is integrated into
the periodic structure. We establish that the lasing threshold corresponding
to these resonances scales as $N^{-3}$, where $N$ denotes the number of unit
cells in the finite-length waveguide. The presence of distributed losses, such
as those resulting from the roughness or radiation of waveguide bends,
naturally leads to an increase in the lasing threshold. For small distributed
losses, however, the inverse cubic scaling of the lasing threshold with
waveguide length remains intact. In fact, we demonstrate that even though
small values of gain and loss distort the dispersion diagram of the modes near
the SIP, due to the exceptional sensitivity of the mode wavenumbers at an EPD
[7], the distorted modes retain similar characteristics as those exhibited by
the frozen mode. The SIP condition is shown to be superior to the RBE
condition for enhancing the power gain in cavities with the same gain medium
and length. We observe that the SIP resonance induces a lower lasing threshold
than the RBE resonance, even if the SIP resonance has a lower quality factor.
The paper is organized as follows: in Section 2 the ASOW is defined as in [30]
and we show an example of an SIP in ASOW. In Section 3 a gain medium is added
to the ASOW. The effect of bending losses and gain-induced mode distortion on
laser performance is analyzed in Section 4. In Section 5 we study potential
lasing in the vicinity of RBEs close to the SIP and introduce design
considerations to prevent it. The results are summarized in Section 6.
## 2 SIP in ASOW
The ASOW, introduced in [30] as a modification of the symmetric serpentine
optical waveguide in [34], is an SOI periodic asymmetric serpentine optical
waveguide depicted in Fig. 1(a). It is composed of small rings of radius $R$
connected at angles $\alpha_{1}$ and $\alpha_{2}$, which are defined at the
center of the rings and with respect to the horizontal axis. These rings are
also side-coupled to the adjacent rings. Here, the evanescent coupling between
adjacent rings is considered point-like, bidirectional, and lossless, as
traditionally assumed [36, 37]. The lossless condition ensures
$\kappa^{2}+\tau^{2}=1$, where $\kappa$ and $\tau$ are the coupling and
transmission coefficients, respectively. The unit cell of the ASOW is defined
between two oblique lines that descend from the top of two adjacent rings at
an angle $\beta=\alpha_{1}-\alpha_{2}$. The unit cell has a length of $d=2R$,
and it is portrayed in Fig. 2. It is convenient to split the unit cell into
three different sections as was done in [30]: Section A, of which there are
four per unit cell, describes a quarter of a circle and adds a phase of
$\phi_{A}=k_{0}n_{w}\pi R/2$ to the waves traveling through it. The term
$k_{0}=\omega/c$ is the wavenumber in vacuum where $\omega$ is the angular
frequency, $c$ is the speed of light in vacuum, and $n_{w}$ is the mode
effective refractive index. Sections B and C describe the arcs connecting the
top loops with the bottom loop, at the left and right sides of the unit cell,
at angles $\alpha_{1}$ and $\alpha_{2}$ from the horizontal axis,
respectively. They describe the asymmetry of the ASOW through the angle
difference $\beta=\alpha_{1}-\alpha_{2}$, and have an associated phase of
$\phi_{B}=k_{0}n_{w}2\alpha_{1}R$ and $\phi_{C}=k_{0}n_{w}2\alpha_{2}R$. For
an SIP to form, three eigenmodes must collapse on each other (in terms of
wavenumber and polarization states, i.e., eigenvalues and eigenvectors), which
requires them to belong to the same irreducible wavevector symmetry group. A
way to do that is by setting $\beta\neq 0$ [30], which breaks the glide
symmetry of the ASOW, though the SIP has been found also in glide symmetric
waveguides [23].
Figure 2: The $n$-th unit cell of the ASOW is divided into Sections A (four per unit cell, each a quarter of a circle long) and Sections B and C, the arcs connecting the top and bottom loops at the left and right sides of the unit cell, respectively. The angles $\alpha_{1}$ and $\alpha_{2}$ determine the lengths of Sections B and C (respectively), and their difference $\beta$ determines the asymmetry of the unit cell. Besides the coupling to adjacent unit cells via Port 1 at the left and right sides of the unit cell, there are also proximity coupling points through Ports 2 and 3 at each side of the unit cell.
The electromagnetic guided fields are modeled in terms of the forward and
backward waves [30]. Figure 2 shows the three ports at both the $n-1$ and $n$
sides of the unit cell, and the two waves defined at the right side of each
port. The state vector $\bm{\text{$\psi$}}(n)$ with all the six electric field
wave amplitudes is defined as
$\bm{\text{$\psi$}}(n)=\left(E_{1}^{+},\;E_{1}^{-},\;E_{2}^{+},\;E_{2}^{-},\;E_{3}^{+},\;E_{3}^{-}\right)^{T},$ (2)
Figure 3: Modal dispersion diagram of the lossless and gainless SIP-ASOW. (a)
The real part of the eigenvalue versus angular frequency in the fundamental
BZ. Solid black: modes with purely real $k$; dashed colors: modes with complex
$k$ (overlapping dashed colors imply two overlapping branches). Three curves
meet at an inflection point, with reciprocal $k_{S}$ and $-k_{S}$ positions.
(b) The imaginary part of $k$ versus angular frequency. At the SIP, the three
degenerate modes have $\text{Im}(k)=0$. The black curve has $\text{Im}(k)=0$
also near the SIP frequency. (c) Alternative representation of the dispersion
diagram in the complex $k$ space. The arrows point in the direction of
increasing frequency. (d) Coalescence parameter $\sigma$ versus angular
frequency. The coalescence parameter $\sigma$ vanishes at an SIP. The local
minima of $\sigma$ indicate the presence of RBEs near the SIP.
and its “evolution” from one unit cell to the next is given by
$\bm{\text{$\psi$}}(n)=\underline{\mathbf{T}}_{u}\bm{\text{$\psi$}}(n-1)$,
where $\underline{\mathbf{T}}_{u}$ is the transfer matrix of the unit cell,
given in [30]. The time convention $e^{j\omega t}$ is adopted in this paper.
The ASOW is a reciprocal structure; it supports six independent modes (unless
an EPD occurs) that appear in symmetric positions with respect to the center
of the Brillouin zone (BZ) [19]. The unit cell is engineered so the three
modes with the same sign of $\text{Re}(k)$ coalesce at the SIP angular
frequency $\omega_{S}$ [30]. The occurrence of the SIP (which is an EPD of
order three) is demonstrated by the vanishing of the coalescence parameter
$\sigma$, whose concept was introduced in [12] for a DBE (where the authors
referred to it as a figure of merit or hyperdistance), and applied to an SIP
in Refs. [23, 30]. In Fig. 3 we show the modal dispersion diagram of an SIP in
ASOW, an example which is referred to throughout the paper as the SIP-ASOW.
Its SIP frequency is $f_{S}=193.54$ THz and it has structure parameters: $R=6$
$\upmu\text{m}$, $\alpha_{1}=67.4^{\circ}$, $\alpha_{2}=55.6^{\circ}$ and
$\kappa=0.5$. The ASOW is made of a rectangular waveguide with a width of
$450\;\mathrm{nm}$ and a height of $220\;\mathrm{nm}$, whose modal effective
refractive index is $n_{w}=2.36$ assumed constant in the frequency range of
interest [30, 34].
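The eigen-analysis described above can be sketched numerically. The 6×6 matrix `T_u` below is a random placeholder (the actual ASOW transfer matrix is built from the coupled-mode model of Ref. [30]), and the hyperdistance-style coalescence parameter is one common choice that may differ in detail from the $\sigma$ of [30]:

```python
import numpy as np

d = 2 * 6e-6  # unit-cell length d = 2R, with R = 6 um as in the SIP-ASOW

# Placeholder 6x6 unit-cell transfer matrix (random, for illustration only);
# the actual T_u of the ASOW is built from the model of Ref. [30].
rng = np.random.default_rng(0)
T_u = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))

# Floquet-Bloch wavenumbers from the eigenvalues lam = exp(-j*k*d)
lam, vecs = np.linalg.eig(T_u)
k = 1j * np.log(lam) / d  # principal branch of the (multivalued) logarithm

def coalescence_parameter(V):
    # Average sine of the angle between all eigenvector pairs; it tends to
    # zero when eigenvectors coalesce, i.e., at an EPD.
    V = V / np.linalg.norm(V, axis=0)
    sines = []
    for i in range(V.shape[1]):
        for j in range(i + 1, V.shape[1]):
            c = min(1.0, abs(np.vdot(V[:, i], V[:, j])))
            sines.append(np.sqrt(1.0 - c**2))
    return float(np.mean(sines))

print("sigma =", coalescence_parameter(vecs))
```

For a generic matrix the eigenvectors are well separated and $\sigma$ stays of order one; at an EPD the relevant eigenvector pairs align and $\sigma$ drops toward zero, as in Fig. 3(d).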
Figures 3(a) and (b) depict the real and the imaginary parts of the wavenumber
against the angular frequency in the fundamental BZ. Solid black lines
represent modes with purely real $k$ whereas dashed colors denote modes with
complex $k$. The propagating modes (black solid line) in Fig. 3(a) are the
same ones shown also in Fig. 1(b). At the SIP angular frequency $\omega_{S}$,
the wavenumbers of the three modes coalesce at the two inflection points in
reciprocal positions with respect to the center of the BZ, and the imaginary
part of all modes vanishes. Figure 3(c) shows the dispersion diagram in the
complex $k$ space, where the arrows point in the direction of increasing
frequency. The coalescence of the three wavenumbers is very clear in this figure, and at each SIP the angles between the curves are $60^{\circ}$. Figure 3(d) shows
the coalescence parameter $\sigma$, defined as in [30], versus angular
frequency. It vanishes at the SIP frequency, clearly showing the coalescence
of three eigenvectors and therefore indicating the occurrence of an EPD of
order three (the SIP). The two local minima indicate the presence of RBEs at
frequencies near the SIP. While the transfer matrix
$\underline{\mathbf{T}}_{u}$ of the unit cell is non-Hermitian and J-unitary
at any frequency (see Refs. [11, 12, 38]), at the SIP frequency,
$\underline{\mathbf{T}}_{u}$ is similar to a Jordan canonical matrix; it
cannot be diagonalized [25, 19]. The EPD in this paper (the SIP) is found in a
lossless and gainless ASOW, demonstrating that the presence of gain and loss
is not necessary to produce an EPD.
The three descriptors of an SIP in a loaded finite-length ASOW of $N$ unit
cells are the transfer function $T_{f}$, the group delay $\tau_{g}$, and the
quality factor $Q$. The finite-length ASOW is shown in Fig. 1(a) where the
first and last unit cells are terminated on straight waveguides, without any
mirrors and any discontinuity in the waveguide, using Port $1$ on either the
left or the right sides of the unit cell shown in Fig. 2. However, since each
unit cell is defined with three ports on each side, Ports $2$ and $3$ on
either end of the finite-length ASOW are terminated without coupling to
adjacent cells, as in [30]. The transfer function is defined as
$T_{f}(\omega)=\frac{E_{out}}{E_{inc}}=\frac{E_{1}^{+}(N)}{E_{1}^{+}(0)},$ (3)
where $E_{1}^{+}(0)$ is the incident signal and $E_{1}^{+}(N)$ is the
transmitted wave onto the attached straight waveguide on the right side of
Fig. 1(a). The group delay is [31],
$\tau_{g}(\omega)=-\frac{\partial\angle T_{f}(\omega)}{\partial\omega}$ (4)
where $\angle T_{f}(\omega)$ is the phase of the transfer function. The
quality factor for large $N$ is reliably approximated by [19]
$Q=\frac{1}{2}\tau_{g}(\omega_{S,res})\omega_{S,res},$ (5)
where $\omega_{S,res}$ is the resonance frequency closest to the SIP frequency
$\omega_{S}$, referred to as the SIP resonance. For long lossless-gainless
periodic waveguides, the asymptotic trend is that $Q\propto N^{3}$ [28, 30].
From Eq. (5), the group delay at the SIP resonance also scales asymptotically
as $\tau_{g}(\omega_{S,res})\propto N^{3}$. The concept of transfer function
is used later on to determine the lasing threshold.
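Equations (3)-(5) can be evaluated numerically by finite-differencing the phase of the transfer function. The Lorentzian $T_f$ below is a hypothetical stand-in, not the ASOW response, and the linewidth is an arbitrary illustrative value:

```python
import numpy as np

# Hypothetical Lorentzian transfer function standing in for the ASOW T_f,
# centered at the SIP frequency of the example in Section 2.
w0 = 2 * np.pi * 193.54e12      # resonance angular frequency, rad/s
gamma = 2 * np.pi * 1e9         # linewidth, rad/s (illustrative)

def T_f(w):
    return (gamma / 2) / (1j * (w - w0) + gamma / 2)

def group_delay(w, dw=1e6):
    # Eq. (4): tau_g = -d(angle T_f)/d(omega), central finite difference
    ph = np.unwrap([np.angle(T_f(w - dw)), np.angle(T_f(w + dw))])
    return -(ph[1] - ph[0]) / (2 * dw)

tau_g = group_delay(w0)
Q = 0.5 * tau_g * w0            # Eq. (5)
print(f"tau_g = {tau_g:.3e} s, Q = {Q:.3e}")
```

For a Lorentzian the analytic group delay at resonance is $2/\gamma$, which the finite difference reproduces.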
## 3 Lasing theory in coupled waveguides with SIP
### 3.1 Steady-state gain medium response
Lasing in the lossless ASOW requires doping it with a gain medium, e.g.
Erbium, or with other methods, such as using quantum wells. We assume that the
gain bandwidth of the active medium includes the SIP frequency of the
underlying passive structure. Assuming the gain is uniformly distributed along
the waveguide, the mode complex effective refractive index is [37]
$n=n_{w}-jn_{g}^{\prime\prime},$ (6)
where the subscript $g$ stands for gain and the double prime denotes the imaginary part. To first order, we neglect the effect of
gain on the real part of the mode effective refractive index within the
frequency range of interest. The conversion between gain modeled as an
imaginary part of the mode effective refractive index $n_{g}^{\prime\prime}$
and as the gain coefficient $\alpha_{g}$ (in units of $\text{dB}/\text{cm}$)
at an angular frequency $\omega$ is
$n_{g}^{\prime\prime}(\omega)=-\alpha_{g}(100/8.686)(c/\omega)$, where $c$ is
the speed of light in m/s. The imaginary part of the mode effective refractive
index is negative for gain, i.e., $n_{g}^{\prime\prime}<0$.
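The conversion above can be checked numerically; with the SIP frequency of the example in Section 2 it reproduces the equivalence between $n_{g}^{\prime\prime}=-8\times 10^{-6}$ and $\alpha_{g}=2.82\;\text{dB}/\text{cm}$ used later in the paper:

```python
import numpy as np

c = 299792458.0                 # speed of light in vacuum, m/s
omega = 2 * np.pi * 193.54e12   # angular frequency near the SIP, rad/s

def n_imag_from_gain(alpha_g):
    # n_g'' = -alpha_g * (100/8.686) * (c/omega): the factor 100 converts
    # /cm to /m, and 8.686 = 20*log10(e) converts dB to nepers (field).
    return -alpha_g * (100.0 / 8.686) * (c / omega)

print(n_imag_from_gain(2.82))   # about -8e-6 for 2.82 dB/cm
```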
### 3.2 One-pass amplification in ASOW
Waves traveling in an ASOW at a frequency close to $\omega_{S}$ experience a
high group delay [30], which increases the effective length of the cavity
$L_{eff}=c\tau_{g}$ [31, 39]. In turn, this increases the effective power gain
coefficient $g_{eff}=\gamma L_{eff}$ in active devices, where
$\gamma=-2k_{0}n_{g}^{\prime\prime}f$ is the per-unit-length power gain
coefficient [37]. The term $f$ is the filling factor of the active material in
the host medium. Therefore, operating the active ASOW at a resonance near
$\omega_{S}$ results in an enhanced effective power gain coefficient, which is
estimated as
$g_{eff}\approx-2\omega_{res}n_{g}^{\prime\prime}\tau_{g}(\omega_{res})f,$ (7)
and the one-pass effective power gain as
$G_{eff}\approx|T_{f}(\omega_{res})|^{2}e^{g_{eff}},$ (8)
where the $T_{f}$ and the $\tau_{g}$ (used to calculate $g_{eff}$) belong to
the lossless-gainless finite-length ASOW. This approximation is valid for
small values of gain for which the modal properties of the SIP are not
significantly perturbed [31, 40]. Section 4.2 provides a more in-depth
discussion on the effects of gain and loss on the distortion of the modes
associated with an SIP by examining its effect on the dispersion diagram and
on the coalescence parameter of the waveguide modes. The effective power gain
coefficient $g_{eff}$ is rewritten as
$g_{eff}=-4n_{g}^{\prime\prime}Qf,$ (9)
which shows why high-$Q$ lasers, such as those operating near an SIP [30],
require less gain than lasers with lower $Q$ factors to provide the same
output.
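A quick numerical check (with illustrative values for the group delay and filling factor, which are assumptions) confirms that Eq. (9) is just Eq. (7) rewritten via Eq. (5):

```python
import numpy as np

# Illustrative values (assumptions, not the paper's fitted numbers)
omega_res = 2 * np.pi * 193.54e12  # resonance angular frequency, rad/s
tau_g = 1e-9                       # group delay at the resonance, s
n_g = -8e-6                        # imaginary effective index (gain)
f_fill = 1.0                       # filling factor of the active material
Tf_mag = 1.0                       # |T_f| at the resonance

Q = 0.5 * tau_g * omega_res                       # Eq. (5)
g_eff_7 = -2 * omega_res * n_g * tau_g * f_fill   # Eq. (7)
g_eff_9 = -4 * n_g * Q * f_fill                   # Eq. (9)
G_eff = Tf_mag**2 * np.exp(g_eff_7)               # Eq. (8)
print(g_eff_7, g_eff_9, G_eff)
```

The two expressions for $g_{eff}$ agree term by term since $\tau_{g}\omega_{res}=2Q$.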
### 3.3 SIP lasing threshold scaling
We define the lasing threshold as the minimum gain required to maintain
oscillations within a cavity. In a finite-length waveguide operating at a
resonance and with a set gain $n_{g}^{\prime\prime}=n_{th}^{\prime\prime}$,
the corresponding value of the effective power gain coefficient
$g_{eff}(\omega_{res})$ is calculated using Eq. (7). Since $g_{eff}$ is the effective gain parameter that determines whether the system starts oscillating, setting it to its threshold value in Eq. (9) shows that the refractive-index gain threshold satisfies $n_{th}^{\prime\prime}\propto 1/Q$. This relationship indicates only a trend, because the discussion (like the one in the previous section, where Eq. (9) is derived) neglects the fact that the dispersion diagram, and hence $\tau_{g}$ and $Q$, is affected by the presence of gain [31, 40].
Figure 4: Magnitude of the SIP lasing threshold $|n_{S,th}^{\prime\prime}|$ of
a finite-length SIP-ASOW of length $N$. The SIP lasing threshold is fit by
$n_{th}^{\prime\prime}=-0.016\times N^{-3}+7\times 10^{-8}$, calculated for
$N\in[5,40]$. The inset shows a finite-length ASOW of $N$ unit cells
terminated without any discontinuity on straight waveguides. The cavity has no
mirrors.
For long gainless and lossless periodic waveguides operating near an SIP, asymptotically $Q\propto N^{3}$ [28, 30]. Hence, the SIP lasing threshold, which is
calculated at the SIP resonance $\omega_{S,res}$ of an ASOW of finite length,
satisfies $n_{S,th}^{\prime\prime}\propto N^{-3}$ asymptotically. Figure 4
depicts this trend for the lossless SIP-ASOW, where the SIP lasing threshold
is represented by blue dots. The blue line is the fitting curve
$n_{th}^{\prime\prime}=-0.016\times N^{-3}+7\times 10^{-8}$, calculated for
$N\in[5,40]$. We calculate the SIP lasing threshold with the approach
described in Appendix A. The inversely proportional relationship between the
quality factor and the lasing threshold in a conventional cavity [41] has been observed to follow different scaling laws near RBEs
($n_{th}^{\prime\prime}\propto N^{-3}$), DBEs ($n_{th}^{\prime\prime}\propto
N^{-5}$) in photonic crystals [31] and waveguides [33], and $6$DBEs
($n_{th}^{\prime\prime}\propto N^{-7}$) in periodic waveguides [42]. In
comparison, here we have demonstrated that when working near the SIP, the
lasing threshold scaling is $n_{th}^{\prime\prime}\propto N^{-3}$. Note that
cavities made of waveguides possessing the SIP have no mirrors at their left
and right ends; the cavity effect is due to the "structured" degenerate
resonance where the energy distribution vanishes at the cavity edges due to
the frozen mode regime, as was shown in [31] for the DBE and in [43] for the
SIP.
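The fitting procedure behind Fig. 4 can be sketched as a linear least-squares problem in the basis $\{N^{-3},1\}$. Here the threshold data are synthesized from the quoted fit itself (the paper's data points come instead from the threshold search of Appendix A):

```python
import numpy as np

# Synthesize thresholds from the quoted fit and recover the coefficients
# by linear least squares in the basis {N^-3, 1}.
A_true, B_true = -0.016, 7e-8
N = np.arange(5, 41, dtype=float)
n_th = A_true * N**-3 + B_true

M = np.column_stack([N**-3, np.ones_like(N)])
(A_fit, B_fit), *_ = np.linalg.lstsq(M, n_th, rcond=None)
print(A_fit, B_fit)
```

Because the model is linear in its two coefficients, the fit recovers $A$ and $B$ to numerical precision on noiseless data.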
## 4 Practical considerations
### 4.1 Distributed losses
We consider distributed losses and analyze their effect on the lasing
threshold of an active ASOW. Radiation losses due to the waveguide bends in
the ASOW are modeled as a positive imaginary part $n_{r}^{\prime\prime}$ of
the mode effective refractive index, leading to
$n=n_{w}-j\left(n_{g}^{\prime\prime}+n_{r}^{\prime\prime}\right).$ (10)
Based on the full-wave simulations in Ref. [44], bending losses are estimated
as $n_{r}^{\prime\prime}=2.3\times 10^{-5}$ for a waveguide with radius $R=6$
$\upmu$m and $n_{r}^{\prime\prime}=4.7\times 10^{-6}$ for $R=10$ $\upmu$m.
Therefore, the magnitude of the lasing threshold $|n_{L,th}^{\prime\prime}|$
of the lossy ASOW is
$|n_{L,th}^{\prime\prime}|=|n_{th}^{\prime\prime}|+n_{r}^{\prime\prime},$ (11)
which shows radiation losses shift the curve shown in Fig. 4 upwards. Other
distributed losses can also be modeled as a positive imaginary part of the
effective refractive index, causing a further shift of the lasing threshold.
In the following, we will show that the considered losses and gain, even
though they distort the dispersion diagram, are not large enough to
fundamentally modify the properties of the frozen mode associated to the SIP,
i.e., the coalescence parameter still tends to vanish (Fig. 5), which explains
why the SIP lasing threshold still scales asymptotically as a negative cube of
the waveguide length.
### 4.2 Gain and loss-induced mode distortion
We discuss the modal dispersion diagram distortion induced by the presence of
either gain or loss in the otherwise lossless-gainless SIP-ASOW. Eigenmodes at
an EPD display exceptional sensitivity when a system parameter is perturbed by
a small quantity [7]. The insertion of gain or loss in the lossless-gainless
ASOW perturbs the mode effective refractive index (i.e., of the quasi-TE mode
in the infinitely long homogeneous straight rectangular waveguide), by a small
quantity $\Delta n$. Therefore, the degenerate wavenumber $k_{S}$ at the SIP
frequency splits into three wavenumbers $k_{q}$, with $q=1,2,3$, estimated by
a first-order approximation of the Puiseux fractional power series [7, 19, 2]
as
$k_{q}(\Delta n)\approx k_{S}+a_{1}e^{j\frac{2\pi}{3}q}(\Delta n)^{1/3},$ (12)
where $a_{1}$ is a constant calculated following the expressions in Ref. [45].
The wavenumber perturbation at and near the SIP frequency, due to gain or
loss, is stronger than the wavenumber perturbation at frequencies away from
the SIP. This important phenomenon is understood by comparing the result in
Eq. (12) with the conventional Taylor expansion for the wavenumber
perturbation when the frequency is sufficiently far from the SIP, leading to
each of the three wavenumbers (in each direction) $k_{q}$ (with $q=1,2,3$) at
any $\omega$ away from the SIP to be approximated as $k_{q}(\Delta n)\approx
k_{q,0}+b_{q}(\Delta n)$, where $b_{q}=\partial k_{q}/\partial n$ is the
standard coefficient of a Taylor expansion when a perturbation $\Delta n$ of
the modal refractive index $n$ (of the homogeneous straight waveguide quasi-TE
mode) is applied. Therefore, when we look at the wavenumber perturbation from
the SIP (i.e., when the system operates at the SIP frequency), one has $\Delta
k_{SIP}=k_{q}-k_{S}\propto(\Delta n)^{1/3}$. This generates a larger
wavenumber perturbation than $\Delta k_{q}=k_{q}-k_{q,0}\propto(\Delta n)$,
i.e., when not working at the SIP frequency. As an example, when $|\Delta
n|=10^{-3}$, i.e., we modify the third decimal digit of the effective
refractive index, one has $|\Delta n|^{1/3}=10^{-1}$ which is much larger than
$10^{-3}$. Therefore, when in Eq. (10) we consider a small $\Delta n=n-n_{w}=-j\left(n_{g}^{\prime\prime}+n_{r}^{\prime\prime}\right)$, it is
clear that the dispersion diagram at the SIP is more perturbed than at any
other frequency. This is confirmed by the following simulations.
Figure 5 shows the eigenmode perturbation in a SIP-ASOW without loss but with
gain $n_{g}^{\prime\prime}=-8\times 10^{-6}$, equivalent to $\alpha_{g}=2.82$
$\text{dB}/\text{cm}$, superimposed on the dispersion diagram of the same
waveguide without gain (thin gray line). Figures 5(a) and (b) depict the real
and imaginary parts of the wavenumbers of the modes. The inset shows the
details of the perturbation of the wavenumber due to gain at the SIP. Any
wavenumber perturbation is much less visible at any other frequency. The
distortion of the SIP due to the presence of gain is observed more clearly in
Fig. 5(c), which shows the dispersion diagram in the complex $k$ space. The
three wavenumbers do not fully coalesce anymore, and they slightly depart from
$k_{S}$ at angles of $120^{\circ}$ from each other, as shown in Eq. (12).
Larger gain would cause an even larger departure from $k_{S}$. These
observations are in agreement with what was discussed in [40] for an RBE and
in [46] for an SIP: adding gain to the lossless-gainless waveguide
deteriorates the degeneracy and its exceptional properties. The extent of the
deterioration of the properties, however, remains unclear. As discussed in
[40], the presence of a small amount of either gain or loss causes a
perturbation of the wavenumber (and hence of the group velocity) of the same
magnitude.
Although the dispersion diagram experiences its largest perturbation in the neighborhood of the SIP, the EPD-related properties associated with the SIP are nevertheless retained. With the amount of gain considered here, we still observe the near-coalescence of the three eigenvectors (i.e., polarization states), which fully describe the degree of exceptional degeneracy: in Fig. 5(d) we show that even though the coalescence parameter $\sigma$ does not fully vanish at the SIP, it is still significantly smaller than unity at $\omega_{S}$. Therefore, the SIP feature
of three eigenvectors approaching each other is still present despite the SIP
distortion due to gain. Experimental verifications of the existence of an SIP
in non-reciprocal and reciprocal waveguides with small losses were reported in
[16, 23], where the distorted dispersion diagrams were reconstructed, which
resemble the one shown in Fig. 5. Despite the distortion, the modes still
exhibit properties typically associated with the frozen mode. Therefore, the
SIP condition is relatively resistant to the presence of small values of loss
or gain. For increasingly large values of gain, however, the group delay at
frequencies near the SIP eventually diminishes (because the SIP gracefully
loses its properties), and this effect curbs the effective power gain
coefficient $g_{eff}$ in Eq. (7).
In summary, despite the stronger perturbation of the wavenumbers in proximity
of the SIP frequency compared to other frequencies, the coalescence parameter
in Fig. 5(d) still shows that the three eigenvectors are close to each other,
demonstrating a good degree of exceptional degeneracy. On the other hand,
increasingly large gain values would strongly perturb the SIP degeneracy until it is destroyed altogether.
Figure 5: Modal dispersion diagram of the SIP-ASOW with a gain of
$n_{g}^{\prime\prime}=-8\times 10^{-6}$, equivalent to $\alpha_{g}=2.82$
$\text{dB}/\text{cm}$. The black curves represent propagating modes, with
purely real wavenumbers; the dashed, colored curves represent evanescent
modes. The dispersion diagram for the SIP-ASOW without gain is shown in light
grey. (a) Real part of the wavenumber versus angular frequency in the
fundamental BZ. The inset shows the eigenvalue distortion is accentuated at
the SIP. (b) The imaginary part of $k$ versus angular frequency. Due to gain,
$\text{Im}(k)\neq 0$ at all the frequencies, especially around $\omega_{S}$.
(c) Alternative representation of the dispersion diagram in the complex $k$
space. The lack of mode coalescence due to gain is very clear in this figure,
with the three modes departing from $k_{S}$ at $120^{\circ}$ from each other
as expected from Eq. (12). (d) The coalescence parameter $\sigma$ versus
angular frequency. The non-vanishing $\sigma$ indicates that the SIP does not fully
form, but the dip still demonstrates a degree of exceptional three-mode
degeneracy to be exploited for the SIP laser.
## 5 Lasing at SIP versus RBE
It is typical that optical structures capable of supporting an SIP also have
RBEs close to $\omega_{S}$ [28, 18, 19, 30, 21, 47, 48]. Tolerances in
fabrication might prevent the formation of the SIP, and the laser would then
operate near an RBE. The lossless-gainless ASOW is relatively robust to
fabrication tolerances because it is formed by a single waveguide with a
continuous curvature slope, as seen in Fig. 2, and therefore some changes
would only cause a shift of the SIP frequency. For example, a perturbation in
$n_{w}$ is expected to occur uniformly, so the SIP would still be formed,
although at a different frequency. However, some other perturbations may
degrade the quality of the SIP (the coalescence parameter would not approach
zero), and therefore the properties of the frozen mode might be compromised.
In some scenarios, despite the occurrence of an SIP, a laser might emit at the
RBE resonance frequency $\omega_{R,res}$, the closest to the RBE frequency, if
it has a lower lasing threshold than the one associated to the resonance at
$\omega_{S,res}$.
To prevent RBE lasing, one potential solution is to increase the frequency
difference between the SIP and RBE. Consequently, we consider the free
spectral range (FSR), defined by the phase accumulated by propagation along
the optical length of an ASOW unit cell without coupling (i.e., without the
frozen mode). From [34] adapted to the asymmetric case studied in [30] and
here, we have
$\Delta f_{FSR}=\frac{c/n_{w}}{2R(\pi+\alpha_{1}+\alpha_{2})}.$ (13)
Therefore, the distance between the SIP and the RBE is increased by decreasing
$R$, which effectively stretches the dispersion diagram. A smaller loop
radius, however, is associated with higher radiation losses, as shown in Fig.
6. As a compromise, we choose $R=6$ $\upmu$m in the SIP-ASOW example. In the
following discussion, we compare the performance of the SIP-ASOW of different
lengths operating either near the SIP or near an RBE (shown in Fig. 3). We
choose to consider the RBE at a lower frequency ($f_{R}$) than the SIP
($f_{S}$) because it is the one closest to the SIP (versus the RBE at a higher
frequency than the SIP). The frequency difference between these two EPDs is
$\Delta f=f_{S}-f_{R}=76.4\;\mathrm{GHz}$.
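Evaluating Eq. (13) with the SIP-ASOW parameters quoted in Section 2 gives the order of magnitude of the FSR (an uncoupled-cell estimate):

```python
import numpy as np

# Eq. (13) with the SIP-ASOW parameters from Section 2
c = 299792458.0              # speed of light in vacuum, m/s
n_w = 2.36                   # mode effective refractive index
R = 6e-6                     # loop radius, m
alpha1 = np.deg2rad(67.4)
alpha2 = np.deg2rad(55.6)

fsr = (c / n_w) / (2 * R * (np.pi + alpha1 + alpha2))
print(f"FSR = {fsr / 1e12:.2f} THz")
```

The resulting FSR of roughly 2 THz is far larger than the 76.4 GHz SIP-RBE separation, consistent with the stretching argument above.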
Figure 6: The loss values in terms of the imaginary effective refractive index
are extracted from [44], and the FSR values are calculated using Eq. (13) with
the structure parameters from the SIP-ASOW. Low radiation losses imply small
FSR, which in turn implies a large spectral density of features like SIP and
RBE frequencies.
Assuming both the SIP and the RBE frequencies fall within the active frequency
region of the gain medium, as would be the case for an Erbium-doped waveguide
[49], the system may start oscillations at the SIP resonance frequency
$f_{S,res}$ or at the RBE resonance frequency $f_{R,res}$. We denote by
$n_{S,th}^{\prime\prime}$ and $n_{R,th}^{\prime\prime}$ the two gain values
that would start oscillations at those two frequencies, respectively. Lasing
near the RBE would occur if the RBE lasing threshold, which is calculated at
$\omega_{R,res}$ following the method described in Appendix A, satisfies
$|n_{R,th}^{\prime\prime}|<|n_{S,th}^{\prime\prime}|$. Following the
discussion from Section 3.3, one would expect lasing near the RBE if the
quality factor near the RBE is larger than near the SIP. Figure 7(a) shows the
RBE quality factor $Q_{R}$, calculated using Eq. (5) at $\omega_{R,res}$, for
different $N$ (in red dots), and the SIP quality factor $Q_{S}$, calculated at
$\omega_{S,res}$ (in blue dots). Both quality factors are calculated at a
loaded ASOW, continued on the rectangular waveguides on the left and right
sides without mirrors, as in the inset in Fig. 4. We also plot the fitting
curves (solid lines) governed by
$\begin{split}Q_{S}&\approx aN^{3}+b,\\ Q_{R}&\approx cN^{3}+d,\\ \Delta Q&=Q_{R}-Q_{S}\approx(c-a)N^{3}+(d-b),\end{split}$ (14)
with coefficients $a=69.8$, $b=5.2\times 10^{4}$, $c=139.6$, and $d=10^{5}$.
As expected, $Q_{S}$ and $Q_{R}$ scale asymptotically as $N^{3}$ when the
cavity resonant frequency is near the SIP [28, 30] and near the RBE [19],
respectively. Their difference $\Delta Q=Q_{R}-Q_{S}$, shown as a dashed
black line, also follows the trend $\Delta Q\propto N^{3}$ for large $N$, and
for the specific SIP-ASOW under consideration, $Q_{S}\ll Q_{R}$. This shows
that the SIP-associated frozen mode regime in the current design is not as
mismatched to its termination impedances as the cavity associated with the RBE
resonance. The small zig-zag distribution of $Q_{S}$ with waveguide length is
discussed in [30].
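Since the fits in Eq. (14) are linear in their coefficients, they can be obtained with an ordinary least-squares fit. The sketch below is an illustration rather than the authors' code: the data are synthetic, generated from the quoted SIP coefficients $a=69.8$ and $b=5.2\times 10^{4}$, and the function name is ours.

```python
import numpy as np

def fit_cubic_scaling(N, Q):
    """Least-squares fit of Q ≈ a*N**3 + b; the model is linear in (a, b)."""
    A = np.column_stack([N.astype(float) ** 3, np.ones(len(N))])
    (a, b), *_ = np.linalg.lstsq(A, Q, rcond=None)
    return a, b

# Synthetic Q_S data built from the coefficients quoted in the text.
N = np.arange(5, 41)
Q_S = 69.8 * N ** 3 + 5.2e4
a, b = fit_cubic_scaling(N, Q_S)  # recovers the generating coefficients
```

The same call applied to the $Q_{R}$ data would yield $c$ and $d$.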
Figure 7: Comparison between the SIP and RBE ASOW lasers of different lengths,
operating either near the SIP frequency $\omega_{S}$ (in blue) or the RBE
frequency $\omega_{R}$ (in red), respectively. (a) The RBE quality factor
$Q_{R}$ (represented by red dots), fitted by the red curve from the second
equation in Eq. (14); the SIP quality factor $Q_{S}$ (represented by blue
dots), fitted by the blue curve following the first equation in Eq. (14);
and their difference $\Delta Q=Q_{R}-Q_{S}$, represented as a dashed,
black line. The three curves scale as $N^{3}$. (b) Magnitude of the RBE
lasing threshold (represented by red dots), fitted by the magnitude of the red
curve from the second equation in Eq. (15); the magnitude of the SIP lasing
threshold (represented by blue dots), fitted by the magnitude of the blue
curve following the first equation in Eq. (15); and the magnitude of their
difference $\Delta
n_{th}^{\prime\prime}=n_{R,th}^{\prime\prime}-n_{S,th}^{\prime\prime}$,
represented as a dashed, black line. The three thresholds decrease as
$N^{-3}$. The lasing threshold associated with the SIP resonance is smaller
than the RBE one for an ASOW of the same length. (c) Magnitude of the lasing
threshold versus quality factor for an RBE-based laser operating near
$\omega_{R}$ (in black) and for an SIP-based laser operating near
$\omega_{S}$ (in blue). Each dot (red and blue) represents a finite-length
ASOW with a given $Q$ provided by some $N\in[5,40]$. An active ASOW operating
near an SIP has a lower lasing threshold and lower quality factor than the
same ASOW operating near an RBE.
Because of the $Q_{R}\propto N^{3}$ asymptotic scaling, we expect the RBE
lasing threshold to also scale asymptotically as
$n_{R,th}^{\prime\prime}\propto N^{-3}$. Indeed, that is what we observe. Fig.
7(b) shows $|n_{R,th}^{\prime\prime}|$ in red dots, fitted by the red solid-
line curve, and $|n_{S,th}^{\prime\prime}|$ in blue dots, which is fitted by
the blue curve given by
$\begin{split}n_{S,th}^{\prime\prime}&\approx eN^{-3}+f,\\ n_{R,th}^{\prime\prime}&\approx gN^{-3}+h,\\ \Delta n_{th}^{\prime\prime}&=n_{R,th}^{\prime\prime}-n_{S,th}^{\prime\prime}\approx(g-e)N^{-3}+(h-f),\end{split}$ (15)
with coefficients $e=-0.016$, $f=7\times 10^{-8}$, $g=-0.02$, and $h=6.9\times
10^{-8}$. The magnitude of the difference between the lasing thresholds for
lasers working at the RBE resonance or at the SIP resonance, $\Delta
n_{th}^{\prime\prime}=n_{R,th}^{\prime\prime}-n_{S,th}^{\prime\prime}$, is
depicted as a dashed black line. The difference in lasing thresholds between
the SIP and RBE also scales asymptotically as $N^{-3}$, resulting in
comparable thresholds for large waveguides. Therefore, to balance the
preservation of the frozen mode properties with the prevention of accidental
lasing near an RBE, the waveguide length should be chosen as a trade-off
between achieving a small $|n_{S,th}^{\prime\prime}|$ and a relatively large
$\Delta n_{th}^{\prime\prime}$.
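This trade-off can be seen numerically by evaluating Eq. (15) with the fitted coefficients. The sketch below is illustrative only, using the quoted values of $e$, $f$, $g$, and $h$; it compares the SIP threshold magnitude and the SIP-RBE margin for two lengths.

```python
import numpy as np

# Fitted coefficients quoted in the text for Eq. (15).
e, f = -0.016, 7e-8
g, h = -0.02, 6.9e-8

def thresholds(N):
    """Return (|n_S,th''|, |n_R,th''|, |dn_th''|) from the Eq. (15) fits."""
    n_S = e * N ** -3.0 + f
    n_R = g * N ** -3.0 + h
    return abs(n_S), abs(n_R), abs(n_R - n_S)

# Both the SIP threshold and the SIP-RBE margin shrink as N grows,
# hence the trade-off discussed in the text.
nS_10, _, margin_10 = thresholds(10)
nS_40, _, margin_40 = thresholds(40)
```

At $N=10$ the SIP threshold is large but so is the margin; at $N=40$ both are small, which is why an intermediate length is preferable.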
Figure 8: The magnitude of the function $|T_{f}(\omega)|$ in (dB) for the SIP-
ASOW with $N=25$ for different values of gain. The SIP and the RBE frequencies
are shown as vertical dashed blue and red lines, respectively. (a) With no
gain, resonances accumulate around $\omega_{S}$. The function $|T_{f}|$ near
the SIP is already higher than near the RBE. (b) With
$n_{g}^{\prime\prime}=n_{S,th}^{\prime\prime}$, the highest peak occurs at the
SIP resonance frequency. (c) With
$n_{g}^{\prime\prime}=n_{R,th}^{\prime\prime}$, $|T_{f}(\omega_{S,res})|$ is
reduced and the maximum occurs at the RBE resonance $\omega_{R,res}$.
Figure 7(c) shows the lasing threshold against the quality factor of an SIP-
ASOW laser operating near the RBE (in black) and near the SIP (in blue), where
each dot refers to a finite-length ASOW of $N$ unit cells, with $N\in[5,40]$.
Each ASOW has a higher $Q$ at the RBE resonance (nearest resonance to the RBE
frequency) than that at the SIP resonance (nearest resonance to the SIP
frequency). Therefore, one would a priori expect the RBE lasing threshold to
be lower than the SIP lasing threshold. However, the opposite has been
observed. This discrepancy may be attributed to the impact of the field
amplitude distribution within the underlying passive cavity on the lasing
threshold after the inclusion of gain in the cavity. Indeed, Figure 7(c),
which compares the SIP-induced lasing threshold with the RBE-induced lasing
threshold, shows curves similar to those in Figure 16 in [33], which
compares a DBE cavity and an RBE cavity with the same quality factors. The
authors show that the lasing threshold induced by the DBE is lower than that
induced by the RBE because the field amplitude distribution is greater in the
DBE cavity. See also a related discussion in [31] in terms of local density of
states distribution within the cavity, for the DBE and RBE cases. However, in
this paper, we examine the lasing threshold in the vicinity of two EPDs (an
SIP and an RBE) in the same cavity. Our numerical experiments reveal that the
SIP-induced lasing threshold is lower than that induced by the RBE, even
though the RBE quality factor is larger. Although additional investigation is
necessary to understand the difference in the field amplitude distribution in
SIP and RBE cavities, and its effect on the lasing threshold, the evidence
presented in this paper shows that in an active ASOW, SIP-induced lasing is
preferred over lasing near an RBE.
To further illustrate the threshold difference between lasing near the SIP and
the RBE, Figure 8 depicts the magnitude of $T_{f}$ (in dB) against $\omega$
for a finite-length SIP-ASOW with $N=25$ for different values of gain. The
vertical blue and red dashed lines depict the SIP and the RBE frequencies,
respectively. In Fig. 8(a) the ASOW has no gain, $n_{g}^{\prime\prime}=0$. The
magnitude of the $T_{f}$ function at the SIP resonance is already higher than
at the RBE resonance in the passive ASOW. Then, one would a priori expect the
lasing threshold near the SIP to be lower than near the RBE, as is indeed
observed. In Fig. 8(b) the gain is set to the SIP lasing threshold,
$n_{g}^{\prime\prime}=n_{S,th}^{\prime\prime}$, where the $|T_{f}|$
experiences a local maximum close to $\omega_{S}$. The peak at the SIP
resonance decreases as the gain increases above its threshold. This effect is
explained in detail in Appendix A. In Fig. 8(c) the gain is further increased
to the RBE lasing threshold, $n_{g}^{\prime\prime}=n_{R,th}^{\prime\prime}$,
and the maximum of $|T_{f}|$ occurs instead near $\omega_{R}$. Further
investigation is necessary to determine how the enhanced field amplitude
distribution in passive cavities operating near an EPD affects $T_{f}$ and the
lasing threshold of the system with gain. In summary, resonances near the SIP
frequency require a smaller effective power gain coefficient $g_{eff}$ to lase
than resonances near the RBE frequency. Therefore, though not shown here, we
expect that an SIP-based laser has higher lasing efficiency, meaning that it
emits more output power with less gain than an RBE-based laser. In all
likelihood, the SIP lasing threshold is lower than the RBE lasing threshold
due to a combination of factors, which ultimately arise from the contrast
between the exceptional properties of the SIP-associated frozen mode and those
of the standing wave characteristic of the RBE. Moreover, the threshold
difference between lasing near an SIP or an RBE could be increased by using
the two degrees of freedom associated with the two left and right
terminations, which may affect the RBE and SIP resonances differently.
## 6 Conclusion
Lasing conditions in the vicinity of an SIP have been investigated for an ASOW
terminated on a straight waveguide at each end of the ASOW cavity. The ASOW
cavity does not need mirrors to display a high quality factor and low gain
threshold. The mode mismatch between the SIP-ASOW and the straight waveguide
is attributed to the frozen mode, i.e., to its three degenerate modes and the
resulting Bloch impedance. By analyzing the degenerate modes in the periodic
ASOW, we have shown that the eigenmode distortion due to the presence of a net
gain or loss is more severe near the SIP frequency than at other frequencies;
however, the properties associated with the three-mode exceptional degeneracy
are in part preserved for relatively small values of gain, which can
nevertheless bring the system above threshold. We have observed that the
quality factor of both the SIP resonance and the RBE resonance scales as the
cube of the waveguide length, and the lasing threshold as the inverse cube of
the waveguide length. While an SIP cavity displays a lower $Q$ factor than an
RBE cavity of the same gain medium and length, it also displays a lower lasing
threshold. Although we propose that this disparity is caused by the different
field distributions in the cavity at the two resonances, it has yet to be
confirmed. Nonetheless, our research has revealed that SIP lasing is
preferred. While the results here have been shown specifically for the ASOW,
the insights and conclusions outlined in this paper can be readily applied to
other periodic optical waveguides that display an SIP. Further control of the
SIP and RBE lasing thresholds could be exerted by changing the loading at the
two ends of the lasing cavity. Future considerations into SIP lasers shall
also focus on mode selectivity, the impact of nonlinear effects, and their
transient response.
## Acknowledgment
This research is based upon work supported by the Air Force Office of
Scientific Research award numbers LRIR 21RYCOR019 and FA9550-18-1-0355. It has
been approved for public release with unlimited distribution.
## Disclosure
The authors declare no conflicts of interest.
## Data availability statement
Data underlying the results presented in this paper are not publicly available
at this time but may be obtained from the authors upon reasonable request.
Figure 9: Example of three complex poles (red dots) of the transfer function
$T_{f}(\omega)$, mathematically extended also to the case when the system is
unstable, i.e., when a pole has $\text{Im}(p_{i})>0$. The function $T_{f}$ is
calculated as in Eq. (16) for real $\omega$ (shown with an orange arrow) for
different values of gain. (a) With no gain, the system is stable. (b) The gain
is set at the lasing threshold associated either to $\omega_{S}$ or
$\omega_{R}$. The transfer function has a maximum because the distance between
$\omega$ and the poles is minimum. (c) The gain is increased above the lasing
threshold. The value of $T_{f}(\omega)$ decreases as the distance between the
poles and $\omega$ increases.
## Appendix A: Lasing threshold calculations near an SIP and an RBE
We define the lasing threshold in a finite-length ASOW as the minimum gain
required to maintain oscillations in the cavity. It is calculated by gradually
increasing $|n_{g}^{\prime\prime}|$ until the ASOW of finite length terminated
on two straight waveguides is marginally stable. Its stability is tracked
through the poles $p_{i}$ of the function $T_{f}(\omega)$, rewritten as [50,
42]
$T_{f}(\omega)\propto\frac{1}{\prod_{i}|\omega-p_{i}|},$ (16)
where $\omega$ is the sweeping real frequency. This function is the extension
of the transfer function in Eq. (3) to the case of complex poles associated
with an unstable regime. Since the electric field is a real-valued quantity,
poles occur in pairs [42]: if $p_{i}$ is a pole of the system, $-p_{i}^{*}$ is
a pole as well. Figure 9 depicts three poles (the red dots) of the function
$T_{f}(\omega)$ in the complex $\omega$ space for different values of gain.
The orange arrow illustrates that $\omega$ sweeps the real axis while we
calculate $T_{f}(\omega)$. Figure 9(a) shows the stable pole distribution in a
system with no gain. All the poles of the transfer function in a stable ASOW
have negative imaginary parts. The system is unstable when at least one pair
of poles has a positive imaginary part. Increasing the gain, which is modeled
as $n_{g}^{\prime\prime}<0$, moves the poles upwards. Operating at a
resonance, defined here as the real frequency of the peak,
$|T_{f}(\omega_{res})|$ reaches a maximum when the system is marginally stable
[42], i.e., the lasing threshold is the gain that renders the first pair of
poles such that $\text{Im}(p_{i})=0$. Figure 9(b) depicts the complex $\omega$
space for a marginally stable waveguide, where the gain is set at the lasing
threshold. Therefore, there is a real $\omega$ such that $|\omega-p_{i}|=0$,
which renders $|T_{f}|\rightarrow\infty$. If this happens near the SIP
frequency, we say we have an SIP laser; if this peak happens near the RBE
frequency, we say we have an RBE laser. Further adding gain moves those poles
into the unstable region, reducing $|T_{f}|$. Figure 9(c) depicts a pole
distribution for gain above the lasing threshold (depicted by a larger
vertical red arrow on the right of the figure). The resonator made of the
finite-length ASOW terminated onto two straight waveguides is unstable and
$|T_{f}(\omega)|$ would be reduced as the distance between any $\omega$ on the
real axis and the closest pole has increased. Each plot in Fig. 8 corresponds
to one of the three stability scenarios depicted in Fig. 9 when
considering the SIP resonance. Fig. 8 also shows the peak associated with the
threshold at the RBE resonance, happening for a gain value larger than the SIP
threshold.
Losses increase the lasing threshold because they move the poles downwards.
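A toy numerical version of Eq. (16) illustrates this pole picture. In the sketch below, which uses hypothetical pole values rather than the ASOW's actual poles, raising the gain is modeled by moving one pole's imaginary part toward zero, which raises and sharpens the resonance peak seen along the real axis.

```python
import numpy as np

def Tf_mag(omega, poles):
    """|T_f(omega)| proportional to 1 / prod_i |omega - p_i|, real omega."""
    mag = np.ones_like(omega, dtype=float)
    for p in poles:
        mag /= np.abs(omega - p)
    return mag

omega = np.linspace(0.0, 2.0, 2001)          # sweep of the real axis
lossy    = [1.0 - 0.05j,  1.5 - 0.20j]       # stable: poles well below the axis
marginal = [1.0 - 0.001j, 1.5 - 0.15j]       # near threshold: one pole close to Im = 0
peak_lossy    = Tf_mag(omega, lossy).max()
peak_marginal = Tf_mag(omega, marginal).max()  # much larger peak
```

As the near-axis pole reaches $\text{Im}(p_{i})=0$, the peak diverges, matching the marginal-stability condition described above.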
## References
* [1] P. Lancaster, “On eigenvalues of matrices dependent on a parameter,” Numerische Mathematik 6, 377–387 (1964).
* [2] T. Kato, _Perturbation theory for linear operators_ (Springer-Verlag New York Inc., New York, 1966), chap. 2.
* [3] A. P. Seyranian, “Sensitivity analysis of multiple eigenvalues,” Mech. Based Des. Struct. Mach. 21, 261–284 (1993).
* [4] W. Heiss, “Exceptional points of non-hermitian operators,” J. Phys. A Math. Theor. 37, 2455–2464 (2004).
* [5] C. E. Rueter, K. G. Makris, R. El-Ganainy, D. N. Christodoulides, M. Segev, and D. Kip, “Observation of parity-time symmetry in optics,” Nat. Phys. 6, 192–195 (2010).
* [6] W. Heiss, “The physics of exceptional points,” Journal of Physics A Mathematical and Theoretical 45, 444016 (2012).
* [7] A. Figotin and I. Vitebskiy, “Slow light in photonic crystals,” Waves in Random and Complex Media 16, 293–382 (2006).
* [8] H. Ramezani, T. Kottos, R. El-Ganainy, and D. N. Christodoulides, “Unidirectional nonlinear $\mathcal{PT}$-symmetric optical structures,” Phys. Rev. A 82, 043803 (2010).
* [9] J. Schindler, A. Li, M. C. Zheng, F. M. Ellis, and T. Kottos, “Experimental study of active lrc circuits with pt symmetries,” Phys. Rev. A 84, 040101 (2011).
* [10] J. Gear, F. Liu, S. T. Chu, S. Rotter, and J. Li, “Parity-time symmetry from stacking purely dielectric and magnetic slabs,” Phys. Rev. A 91, 033825 (2015).
* [11] M. A. K. Othman and F. Capolino, “Theory of exceptional points of degeneracy in uniform coupled waveguides and balance of gain and loss,” IEEE Trans. Antennas Propag. 65, 5289–5302 (2017).
* [12] A. F. Abdelshafy, M. A. K. Othman, D. Oshmarin, A. T. Almutawa, and F. Capolino, “Exceptional points of degeneracy in periodic coupled waveguides and the interplay of gain and radiation loss: Theoretical and experimental demonstration,” IEEE Trans. Antennas Propag. 67, 6909–6923 (2019).
* [13] A. Figotin and I. Vitebskiy, “Electromagnetic unidirectionality in magnetic photonic crystals,” Physical Review B 67, 165210 (2003).
* [14] M. B. Stephanson, K. Sertel, and J. L. Volakis, “Frozen modes in coupled microstrip lines printed on ferromagnetic substrates,” IEEE Microw Wirel Compon Lett 18, 305–307 (2008).
* [15] G. Mumcu, K. Sertel, and J. L. Volakis, “Lumped circuit models for degenerate band edge and magnetic photonic crystals,” IEEE Microw Wirel Compon Lett 20, 4–6 (2010).
* [16] N. Apaydin, L. Zhang, K. Sertel, and J. L. Volakis, “Experimental validation of frozen modes guided on printed coupled transmission lines,” IEEE Trans Microw Theory Tech 60, 1513–1519 (2012).
* [17] N. Gutman, W. H. Dupree, Y. Sun, A. A. Sukhorukov, and C. M. de Sterke, “Frozen and broadband slow light in coupled periodic nanowire waveguides,” Optics Express 20, 3519–3528 (2012).
* [18] H. Ramezani, S. Kalish, I. Vitebskiy, and T. Kottos, “Unidirectional lasing emerging from frozen light in nonreciprocal cavities,” Phys. Rev. Lett. 112, 043904 (2014).
* [19] M. Y. Nada, M. A. K. Othman, and F. Capolino, “Theory of coupled resonator optical waveguides exhibiting high-order exceptional points of degeneracy,” Physical Review B 96, 184304 (2017).
* [20] I. A. Volkov and R. S. Savelev, “Unidirectional coupling of a quantum emitter to a subwavelength grating waveguide with an engineered stationary inflection point,” Phys Rev B 104, 245408 (2021).
* [21] B. Paul, N. K. Nahar, and K. Sertel, “Frozen mode in coupled silicon ridge waveguides for optical true time delay applications,” Journal of the Optical Society of America B - Optical Physics 38, 1435–1441 (2021).
* [22] W. Tuxbury, R. Kononchuk, and T. Kottos, “Non-resonant exceptional points as enablers of noise-resilient sensors,” Commun. Phys. 5, 210 (2022).
* [23] M. Y. Nada, T. Mealy, and F. Capolino, “Frozen mode in three-way periodic microstrip coupled waveguide,” IEEE Microwave and Wireless Components Letters 31, 229–232 (2021).
* [24] A. Figotin and I. Vitebskiy, “Oblique frozen modes in periodic layered media,” Phys. Rev. E 68, 036609 (2003).
* [25] A. Figotin and I. Vitebskiy, “Frozen light in photonic crystals with degenerate band edge,” Physical Review E 74, 066613 (2006).
* [26] N. Gutman, C. M. de Sterke, A. A. Sukhorukov, and L. C. Botten, “Slow and frozen light in optical waveguides with multiple gratings: Degenerate band edges and stationary inflection points,” Physical Review A 85, 033804 (2012).
* [27] A. Figotin and I. Vitebskiy, _Electromagnetic Unidirectionality in Magnetic Photonic Crystals_ (Springer Berlin Heidelberg, Berlin, Heidelberg, 2013), pp. 35–50.
* [28] A. Figotin and I. Vitebskiy, “Slow wave phenomena in photonic crystals,” Laser & Photonics Reviews 5, 201–213 (2011).
* [29] H. Li, I. Vitebskiy, and T. Kottos, “Frozen mode regime in finite periodic structures,” Phys. Rev. B 96, 180301 (2017).
* [30] A. Herrero-Parareda, I. Vitebskiy, J. Scheuer, and F. Capolino, “Frozen mode in an asymmetric serpentine optical waveguide,” Advanced Photonics Research p. 2100377 (2021).
* [31] M. A. K. Othman, F. Yazdi, A. Figotin, and F. Capolino, “Giant gain enhancement in photonic crystals with a degenerate band edge,” Physical Review B 93, 024301 (2016).
* [32] J. Dowling, M. Scalora, M. Bloemer, and C. Bowden, “The photonic band-edge laser - a new approach to gain enhancement,” Journal of Applied physics 75, 1896–1899 (1994).
* [33] M. Veysi, M. A. K. Othman, A. Figotin, and F. Capolino, “Degenerate band edge laser,” Physical Review B 97, 195107 (2018).
* [34] J. Scheuer and O. Weiss, “The serpentine optical waveguide: engineering the dispersion relations and the stopped light points,” Optics Express 19, 11517–11528 (2011).
* [35] A. E. Siegman, _Lasers_ (Univ. Science books, Mill Valley, Calif, 1986), chap. 2.
* [36] K. Vahala, _Optical Microcavities_ (World Scientific, 2004), vol. 5 of _Advanced Series in Applied Physics_ , chap. 4.
* [37] A. Yariv, P. Yeh, and A. Yariv, _Photonics: optical electronics in modern communications_ (Oxford University Press, New York, 2007), chap. 12, The Oxford series in electrical and computer engineering, 6th ed.
* [38] A. Figotin and I. Vitebskiy, “Gigantic transmission band-edge resonance in periodic stacks of anisotropic layers,” Physical Review E 72, 036619 (2005).
* [39] J. T. Verdeyen, _Laser electronics_ (Prentice Hall, Englewood Cliffs, NJ, 1995), chap. 6, Prentice-Hall series in solid state physical electronics, 3rd ed.
* [40] J. Grgic, J. R. Ott, F. Wang, O. Sigmund, A.-P. Jauho, J. Mork, and N. A. Mortensen, “Fundamental limitations to gain enhancement in periodic media and waveguides,” Physical Review Letters 108, 183903 (2012).
* [41] I. V. Doronin, A. A. Zyablovsky, E. S. Andrianov, A. A. Pukhov, Y. E. Lozovik, and A. P. Vinogradov, “Universal lasing condition,” Sci. Rep. 11, 4197 (2021).
* [42] M. Y. Nada and F. Capolino, “Exceptional point of sixth-order degeneracy in a modified coupled-resonator optical waveguide,” Journal of the Optical Society of America B - Optical Physics 37, 2319–2328 (2020).
* [43] J. Ballato, A. Ballato, A. Figotin, and I. Vitebskiy, “Frozen light in periodic stacks of anisotropic layers,” Phys. Rev. E 71, 036612 (2005).
* [44] M. Y. Nada, T. Mealy, M. S. Islam, I. Vitebskiy, R. Gibson, R. Bedford, O. Boyraz, and F. Capolino, “Design of a modified coupled resonators optical waveguide supporting a frozen mode,” arXiv:2211.01408 (2022).
* [45] A. Welters, “On explicit recursive formulas in the spectral perturbation analysis of a jordan block,” SIAM Journal on Matrix Analysis and Applications 32, 1–22 (2011).
* [46] F. Yazdi, M. A. K. Othman, M. Veysi, A. Figotin, and F. Capolino, “A new amplification regime for traveling wave tubes with third-order modal degeneracy,” IEEE Trans Plasma Sci 46, 43–56 (2018).
* [47] Z. M. Gan, H. Li, and T. Kottos, “Effects of disorder in frozen-mode light,” Opt. Lett. 44, 2891–2894 (2019).
* [48] W. Tuxbury, L. J. Fernandez-Alcazar, I. Vitebskiy, and T. Kottos, “Scaling theory of absorption in the frozen mode regime,” Opt. Lett. 46, 3053–3056 (2021).
* [49] Y. Xu, B. Wu, X. Jiang, H. Guo, and F. Wen, “Experimental measurement of absorption coefficients for effective erbium-doping concentration to optimize few-mode erbium-doped fiber amplifiers with low differential mode gain,” Photonics 8, 185 (2021).
* [50] G. F. Franklin, D. J. Powell, and A. Emami-Naeini, _Feedback Control of Dynamic Systems_ (Prentice Hall PTR, USA, 2001), chap. 3, 4th ed.
# Towards Practical Few-shot Federated NLP
Dongqi Cai (Beiyou Shenzhen Institute), Yaozong Wu (Beiyou Shenzhen
Institute), Haitao Yuan (Beiyou Shenzhen Institute), Shangguang Wang (Beiyou
Shenzhen Institute), Felix Xiaozhu Lin (University of Virginia), and Mengwei
Xu (Beiyou Shenzhen Institute)
(2023)
###### Abstract.
Transformer-based pre-trained models have emerged as the predominant solution
for natural language processing (NLP). Fine-tuning such pre-trained models for
downstream tasks often requires a considerable amount of labeled private data.
In practice, private data is often distributed across heterogeneous mobile
devices and may be prohibited from being uploaded. Moreover, well-curated
labeled data is often scarce, presenting an additional challenge. To address
these challenges, we first introduce a data generator for federated few-shot
learning tasks, which encompasses the quantity and skewness of scarce labeled
data in a realistic setting. Subsequently, we propose AUG-FedPrompt, a prompt-
based federated learning system that exploits abundant unlabeled data for data
augmentation. Our experiments indicate that AUG-FedPrompt can perform on par
with full-set fine-tuning with a limited amount of labeled data. However, such
competitive performance comes at a significant system cost.
Federated Learning, Natural Language Processing, Few-shot Learning
CCS concepts: Human-centered computing → Ubiquitous and mobile computing;
Computing methodologies → Machine learning. Published in the 3rd Workshop on
Machine Learning and Systems (EuroMLSys ’23), May 8, 2023, Rome, Italy.
DOI: 10.1145/3578356.3592575. ISBN: 979-8-4007-0084-2/23/05.
## 1. Introduction
#### Federated NLP
The development of pre-trained models is overwhelming with the rise of BERT
(devlin2018bert, ). Their deployment (zhang2018deep, ; shao2019transformer, ;
van2019does, ; kim2021code, ; svyatkovskiy2020intellicode, ) is commonly
composed of two-step training: pre-training and fine-tuning. Unlike
self-supervised pre-training, fine-tuning is supervised, requiring a
tremendous amount of task-specific labeled data. However, the exploitation of
private user data is
restricted and even prohibited in some cases by several data protection
regulations such as GDPR (voigt2017eu, ) and CCPA (pardau2018california, ).
Recently, federate learning (FL) (mcmahan2017communication, ;
yang2019federated, ) becomes the de-facto approach to train a model with
privacy preserved. As such, federated NLP (FedNLP) (lin-etal-2022-fednlp, ;
cai2022autofednlp, ) is now an important topic towards practical NLP
applications.
#### Problem and challenge
A key obstacle to practical FedNLP is data labeling. It’s much more difficult
to label data on client devices than on centrally collected data (xu2021limu,
; li2017crowdsourced, ). Lack of sufficient labeled data severely limits the
practicality and scalability of FedNLP in real-world NLP applications.
Therefore, it is important to address the issue of few-shot or even zero-shot
FedNLP tasks. There are very few efforts on this topic (fan2021federated, ;
chen2018federated, ; huang2022unsupervised, ), which still assume a fairly
large number (typically $>$1000 in total) of labels that are uniformly
distributed across clients. However, in practice, the labeled data
distribution could be skewed across clients, and such skewness would result in
a significant drop in the accuracy according to our experiments in $\S$2.
#### Our solution and contribution
(1) To tackle the insufficiency and skewness of labeled data, we
design a comprehensive data generator as the first step towards simulating the
distribution of labeled data for few-shot FedNLP tasks. The generator has two
meta-parameters: data quantity and skewness, which encompass most, if not all,
potential scenarios for practical few-shot FedNLP.
(2) To boost the performance of few-shot FedNLP, we design a data-augmented
prompt system, namely AUG-FedPrompt. AUG-FedPrompt orchestrates prompt
learning (schick2020exploiting, ) and pseudo labeling (lee2013pseudo, ).
Prompt learning introduces a task description in NLP training. It helps task-
specific fine-tuning achieve high accuracy with very few labeled data samples
in FedNLP. Furthermore, to tackle performance degradation caused by skewed
label distribution, AUG-FedPrompt leverages enormous and easily accessible
unlabeled data for pseudo labeling-based data augmentation.
(3) Our extensive experiments on four English datasets demonstrate that AUG-
FedPrompt can achieve a substantial performance gain (25%–55% higher accuracy)
over the state-of-the-art FedNLP approaches under various few-shot settings.
Augmentation with unlabeled data enhances AUG-FedPrompt to perform well with
highly skewed labeled distribution across clients. Overall, AUG-FedPrompt can
achieve a comparable performance with the state-of-the-art FedNLP approaches
with less than 0.1% labeled data.
## 2. Problem setup
#### Federated NLP Training Procedure
The two NLP training phases, i.e., pre-training and fine-tuning, require data
of disparate natures. Pre-training is typically done on public text corpora
such as Wikipedia articles, while fine-tuning requires domain-specific
samples, such as user reviews, messages, or emails. For mobile computing,
domain-specific samples are gathered from end-users and distributed across
mobile devices, while ensuring the protection of privacy. To fine-tune models
on such private, distributed data, federated learning is the de-facto approach
(lin-etal-2022-fednlp, ; cai2022autofednlp, ). Prior to training, a cloud
service distributes a pre-trained model to all client devices. In a training
session targeting a specific NLP task and domain, a cloud service selects
multiple mobile devices to participate in training. Each device trains a local
copy of the model with its private data and sends the model updates to the
cloud. Upon aggregating the model updates from multiple devices, the cloud
sends an updated model to the devices. This training procedure is repeated
until the model converges.
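The round-based procedure above can be sketched in a few lines. The example below is a minimal illustration using a linear least-squares model, not the transformer setup of FedNLP; the function and variable names are ours. It shows one cloud-orchestrated round of local training followed by weight averaging (FedAvg).

```python
import numpy as np

def fedavg_round(global_w, client_data, lr=0.1):
    """One FedAvg round: each client takes a local gradient step on its
    private (X, y) data, then the cloud averages the updated weights."""
    updates = []
    for X, y in client_data:
        w = global_w.copy()
        grad = X.T @ (X @ w - y) / len(y)  # local least-squares gradient
        updates.append(w - lr * grad)
    return np.mean(updates, axis=0)        # cloud-side aggregation

rng = np.random.default_rng(0)
true_w = np.array([1.0, -1.0])
clients = []
for _ in range(4):                          # four devices with private data
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):                        # repeated rounds until convergence
    w = fedavg_round(w, clients)
```

After repeated rounds the aggregated model approaches the weights that fit all clients' data, mirroring the convergence loop described in the text.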
Figure 1. Visualizing the skewness of labeled data on YAHOO
(zhang2015character, ) with $n$=1024, $\xi$=32, and
$\gamma\in\{10^{-3},10^{-2},\ldots,10^{2}\}$. Each sub-figure is a
32$\times$10 matrix, where 32 is the number
of clients and 10 is the number of labels. The intensity of each cell
represents the number of labeled samples for a specific label in the client-
side local data.
#### Federated few-shot data generator
Apart from data privacy, the lack of sufficient labeled data is a crucial
issue and an inherent feature of mobile scenarios. Just as data features can
be non-independent and identically distributed (non-iid), the scarce labels
are not always uniformly distributed in the real world. Based on the
definition of non-iid partition strategies (li2022federated, ;
lin-etal-2022-fednlp, ), we further define the quantity and skewness of labels
under the federated few-shot learning scenario.
We define a tuple ($n$, $\gamma$) to represent the practical few-shot
training data distribution, where $n$ is the total number of labeled samples
and $\gamma$ is the skewness of the labeled data.
The quantity of labeled data assigned to each client follows a Dirichlet
allocation $z\sim\mathrm{Dir}_{\xi}(\gamma)$, where $\xi$ is the number of
clients with labeled data111$\xi$ could be an optional hyper-parameter to
restrict the maximum number of clients owning labeled data. In this
manuscript, we fix $\xi$ as 32 for simplicity.. We can then allocate labeled
data from the global labeled dataset to selected clients based on the
distribution $z$, with client$_{i}$ being assigned a labeled dataset of size
$|\mathcal{T}_{i}|=z_{i}n$. For example, in Figure 1, we visualize the
labeled data skewness on Yahoo (zhang2015character, ) with $n$=1024,
$\xi$=32, and $\gamma\in\{10^{-3},10^{-2},\ldots,10^{2}\}$. Each sub-figure
is a 32$\times$10 matrix, the intensity of which represents the number of
labeled samples of a particular label. When $\gamma$ is small ($10^{-3}$,
$10^{-2}$, $10^{-1}$), the labeled data is highly skewed, i.e., only a few
clients own labeled data; when $\gamma$=$10^{2}$, labeled data is nearly
uniformly distributed across all clients.
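A minimal sketch of this generator (illustrative; the function and variable names are ours) draws $z\sim\mathrm{Dir}_{\xi}(\gamma)$ and converts it into per-client label counts:

```python
import numpy as np

def allocate_labels(n=1024, xi=32, gamma=0.1, seed=0):
    """Draw z ~ Dir_xi(gamma) and assign |T_i| = z_i * n labeled samples
    to each of the xi clients (rounded so the counts sum to n)."""
    rng = np.random.default_rng(seed)
    z = rng.dirichlet(np.full(xi, gamma))
    sizes = np.floor(z * n).astype(int)
    sizes[np.argmax(z)] += n - sizes.sum()  # give the rounding remainder to the largest client
    return sizes

skewed  = allocate_labels(gamma=1e-3)  # small gamma: labels pile up on few clients
uniform = allocate_labels(gamma=1e2)   # large gamma: roughly even split
```

Sampling with small versus large $\gamma$ reproduces the two extremes visualized in Figure 1.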
#### Performance degradation under skewed distribution
Figure 2. Average accuracy of federated few-shot learning under different data
quantity and skewness. When skewness $\gamma$ grows larger, labeled data will
be more uniformly distributed, and vice versa. Dataset: YAHOO
(zhang2015character, ).
In Figure 2, we present the impact of label skewness on federated few-shot
learning. We observe that as $\gamma$ decreases, i.e., the labeled data
becomes more skewed, the convergence performance of the model degrades. For
example, with 1024 labeled data points, the uniform distribution
($\gamma$=100) achieves 26% higher accuracy than the skewed distribution
($\gamma$=0.001). The rationale behind this phenomenon is that under a common
non-iid data distribution, individual clients tend to possess more specific
data features. Labels concentrated on certain clients result in a skewed feature
distribution of training data. This bias can lead to unfairness, as the
aggregated model may favor certain labels over others, resulting in a
significant drop in convergence accuracy. We provide a more detailed analysis
of this phenomenon in Section 4.3.
## 3\. System Design
We propose AUG-FedPrompt as a solution to address the challenges posed by data
privacy concerns and label scarcity. AUG-FedPrompt leverages large amounts of
unlabeled data via the federated orchestration of pseudo labeling and prompt
learning. AUG-FedPrompt fine-tunes the pre-trained model through prompt
learning on client devices. After local training and federated aggregation,
AUG-FedPrompt performs inference on unlabeled training data, from which
high-confidence results, i.e., pseudo labels (lee2013pseudo, ;
arazo2020pseudo, ; bengio2009curriculum, ), are selected for subsequent
training.
Figure 3. Workflow of AUG-FedPrompt.
We describe the training workflow of AUG-FedPrompt in Figure 3. A public pre-
trained transformer-based language model $M$ is transferred to chosen clients.
We assume that each client has access to a tiny training set $\mathcal{T}$
(typically $<$ 10) and a much larger set of unlabeled examples $\mathcal{D}$
(typically $>$ 1000).
For local prompt training, we denote by $T$ the vocabulary of model $M$, by
$\\_\\_\in T$ the mask token, and by $T^{*}$ the set of all token sequences.
To clarify, $T$ contains the tokens representing label descriptions, and
$T^{*}$ contains the token sequences representing input texts, which form a
larger corpus. The sequence of input phrases is
$\mathbf{x}=\left(s_{1},\ldots,s_{k}\right)$ where $s_{i}\in T^{*}$. The
pattern-verbalizer pair $\mathbf{p}$ includes: 1) a pattern $P:X\rightarrow
T^{*}$ that maps each input $x$ to a cloze question containing a single mask;
2) a verbalizer $v:Y\rightarrow T$ that maps each output $y$ to a single token
representing its task-specific meaning in the pattern.
The purpose of local prompt training is to derive the probability that $y$ is
the correct output of $x$ from the probability that $v(y)$ is the most likely
token at the masked position in $P(x)$. Based on this rationale, we define the
conditional probability distribution $s_{\mathbf{p}}$ of $y$ given $x$ as:
(1) $s_{\mathbf{p}}(y\mid x)=\frac{\exp q_{\mathbf{p}}(y\mid
x)}{\sum_{y^{\prime}\in Y}\exp q_{\mathbf{p}}\left(y^{\prime}\mid x\right)}$
where $q_{\mathbf{p}}(y\mid x)=M(v(y)\mid P(x))$ is the probability that $M$
assigns to $v(y)$ in sequence $P(x)$.
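Equation (1) amounts to a softmax over the verbalizer tokens' logits at the masked position. A minimal sketch follows; the dict-based interface and token strings are illustrative assumptions, not an actual tokenizer or model API:

```python
import math

def prompt_probs(mask_logits, verbalizer):
    """Compute s_p(y|x) from the masked-LM logits at the mask position.

    mask_logits: dict token -> raw logit M(t | P(x)) at the mask slot.
    verbalizer: dict label y -> token v(y).
    The softmax is taken only over the verbalizer tokens, as in Eq. (1).
    """
    q = {y: mask_logits[tok] for y, tok in verbalizer.items()}
    z = sum(math.exp(v) for v in q.values())
    return {y: math.exp(v) / z for y, v in q.items()}

# Toy logits at the mask position for a sentiment-style task; note that
# the non-verbalizer token "the" is ignored by the normalization.
logits = {"great": 3.2, "terrible": 0.1, "the": 5.0}
probs = prompt_probs(logits, {"positive": "great", "negative": "terrible"})
print(round(probs["positive"], 3))  # 0.957
```

Restricting the softmax to verbalizer tokens is what lets a frozen masked-LM head act as a classifier without any new parameters.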
For client-side fine-tuning, the pre-trained model $M$ is fine-tuned on local
labeled data $(x,y)$ by minimizing the cross-entropy between
$s_{\mathbf{p}}(y\mid x)$ and $y$. For server-side aggregation, in each
iteration $i$, client $k$ sends its updated model $M^{i}_{k}$ to the cloud for
aggregation using the FedAvg algorithm (mcmahan2017communication, ); the
aggregated model is denoted as $M^{i}$.
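The server-side FedAvg step can be sketched as a dataset-size-weighted parameter average. This is a minimal sketch of the McMahan et al. aggregation rule, with each client model represented as a flat parameter vector (an assumption for illustration, not the paper's implementation):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client parameters: a weighted average where each
    client's contribution is proportional to its local dataset size."""
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0], dtype=float)
    for w, n_k in zip(client_weights, client_sizes):
        agg += (n_k / total) * w
    return agg

# Two clients; client 0 holds 3x more data than client 1,
# so its parameters get weight 0.75 versus 0.25.
w = fedavg([np.array([1.0, 2.0]), np.array([5.0, 6.0])], [3, 1])
print(w)  # [2. 3.]
```

In a real system the same weighted average is applied tensor-by-tensor over each layer's parameters rather than over one flat vector.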
For data augmentation, $M^{i}$ is distributed to clients with large amounts of
unlabeled data for pseudo labeling. Each unlabeled example
$\hat{x}\in\mathcal{D}$ is labeled with a pseudo label $\hat{y}$ based on
$s_{\mathbf{p}}(\hat{y}\mid\hat{x})$. The pseudo-labeled dataset is then used
for fine-tuning the client-side model in the subsequent iteration.
The resulting pseudo-labeled dataset may contain many samples with wrong
labels. Directly involving them in the next training iteration would poison
the foundation model, potentially making it even worse than one trained purely
on the limited labeled data. To address this issue, we propose two techniques
to filter out wrong samples and preserve the purity of the augmented dataset:
1) Filtering by model capacity: we eliminate models with low capacity, i.e.,
those that perform poorly on validation datasets. 2) Filtering by confidence:
we remove samples with low confidence, i.e., those for which the probability
of the most likely label is lower than a threshold. Both the capacity and
confidence thresholds are hyper-parameters that can be tuned flexibly
depending on the particular task or dataset.
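The confidence filter can be sketched as follows. `filter_pseudo_labels` is a hypothetical helper; the 0.9 default mirrors the threshold reported in the experiment setup:

```python
def filter_pseudo_labels(predictions, threshold=0.9):
    """Keep only high-confidence pseudo labels.

    predictions: list of (example, {label: prob}) pairs produced by the
    aggregated model on unlabeled data. An example survives only if the
    probability of its most likely label meets the threshold.
    """
    kept = []
    for x, probs in predictions:
        label, conf = max(probs.items(), key=lambda kv: kv[1])
        if conf >= threshold:
            kept.append((x, label))
    return kept

preds = [("doc A", {"pos": 0.95, "neg": 0.05}),
         ("doc B", {"pos": 0.55, "neg": 0.45})]
print(filter_pseudo_labels(preds))  # [('doc A', 'pos')]
```

The capacity filter is applied one level up, by discarding whole aggregated models that underperform on a validation set before any of their pseudo labels are used.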
## 4\. Preliminary Experiments
In this section, we evaluate the performance of AUG-FedPrompt across data
scales. AUG-FedPrompt significantly outperforms naive federated fine-tuning.
It performs on par with full-set training while saving up to 99.9% of labeled
data. Beyond data efficiency, AUG-FedPrompt shows strong robustness across
various practical few-shot scenarios, whether the label distribution is skewed
or uniform.
### 4.1. Experiment Setup
Dataset | Prompt | Train | Test
---|---|---|---
AGNEWS (zhang2015character, ) | a ( ____ ) b | 120,000 | 7,600
MNLI (williams2017broad, ) | “a” ? ‖ ____, “b” | 392,702 | 9,815
YAHOO (zhang2015character, ) | [ Category: ] a ____ b | 1,400,000 | 60,000
YELP-F (zhang2015character, ) | It was ____. a | 650,000 | 50,000
Table 1. Evaluation datasets. Each dataset is distributed to 1000 clients.
Label quantity of each class follows the non-iid label distribution in (lin-
etal-2022-fednlp, ) where $\alpha=1$.
(a) AGNEWS
(b) MNLI
(c) YAHOO
(d) YELP-F
Figure 4. Average accuracy and standard deviation for AUG-FedPrompt across
data scales. FedCLS stands for the vanilla federated fine-tuning. Full-set
stands for fine-tuning on the full labeled data.
#### Dataset and models
We perform our evaluation on four English datasets with manually designed
prompts (we try 6, 2, 6, 4 different prompts for each dataset, respectively,
and report the one that performs best; the verbalizers are the same as in
previous literature (schick2020exploiting, )). Detailed information is shown
in Table 1. (1) AGNEWS (zhang2015character, ) is a news classification
dataset. Given headline and text body, news needs to be classified as one of
the four categories: World, Sports, Business or Science/Tech. (2) MNLI
(williams2017broad, ) is a sentence understanding dataset. Given text pairs
$x=(a,b)$, the task is to determine whether $a$ implies $b$, $a$ and $b$
contradict each other, or neither. (3) YELP Review Full (YELP-F) (zhang2015character, ) is a
restaurant rating dataset. Given a customer’s review, a rating on a 1–5 star
scale must be estimated. (4) YAHOO (zhang2015character, ) is a text classification
dataset. Given a question and an answer, one of ten possible categories needs
to be assigned. All experiments are conducted using the same pre-trained
model, RoBERTa-large (355M parameters) (liu2019roberta, ), which we load from
the transformers (wolf2019huggingface, ) library.
#### Hyper-parameters
In line with previous observations (scao2021many, ), few-shot fine-tuning
performance varies considerably with the choice of labeled data. We run every
experiment 3 times in order to reduce variance. Unless otherwise stated, we
use the recommended set of hyper-parameters from previous work
(schick2020exploiting, ): mini-batch size of 4; local training iterations of
1; learning rate of $10^{-5}$; max sequence length of 256. For pseudo labeling, we
filter out those aggregated models performing worse than the zero-shot model
and those pseudo-labeled data with confidence lower than 0.9. For the FL
configurations at the server side, we follow the prior FedNLP literature
(cai2022autofednlp, ; lin-etal-2022-fednlp, ) to select 5 participants for
each training round by default. The fine-tuned models are collected at the
central server and aggregated through the FedAvg algorithm
(mcmahan2017communication, ).
### 4.2. Performance across Data Scales
AUG-FedPrompt enjoys a substantial advantage on each task. As shown in Figure
4, we compare AUG-FedPrompt with FedCLS, i.e., vanilla federated fine-tuning
in which a generic classifier layer inserted after the pre-trained model is
fine-tuned. The highlighted region shows the accuracy gap between
AUG-FedPrompt and FedCLS: improvements of up to 50%, 25%, 55%, and 38% on the
four datasets, respectively. Both approaches improve with more
labeled data, but AUG-FedPrompt remains better by a varying amount. AUG-
FedPrompt reaches 99% relative performance of full-set with 90% less training
data compared to full-set federated training. AUG-FedPrompt shows a strong
zero-shot inference capability, i.e., without task-specific fine-tuning,
except for the MNLI dataset, which may need more labeled data to make full use
of the prompt with the pre-trained model. For a usable accuracy, i.e., 90%
relative performance of full-set training accuracy, AUG-FedPrompt only needs
64, 256, and 256 labeled samples in total for AGNEWS, YAHOO, and YELP-F,
saving up to 99.9% of training data compared to full-set federated
fine-tuning. Please note that 64 is the total number of labeled samples across
all clients, not per client.
### 4.3. Impact of Data Augmentation
Setting | Method | AGNEWS | MNLI | YAHOO | YELP-F
---|---|---|---|---|---
Uniform | FedCLS | 66.1$\pm$12.8 | 60.1$\pm$0.4 | 57.6$\pm$1.9 | 54.0$\pm$0.1
Uniform | FedPrompt | 87.0$\pm$0.8 | 77.6$\pm$0.8 | 66.0$\pm$0.1 | 61.9$\pm$0.7
Skewed | FedCLS | 64.8$\pm$3.1 | 37.7$\pm$5.6 | 24.4$\pm$10.3 | 38.3$\pm$8.8
Skewed | FedPrompt | 68.4$\pm$2.4 | 42.4$\pm$5.8 | 41.8$\pm$4.3 | 51.2$\pm$1.8
Skewed | w/ augment | 90.2$\pm$0.5 | 75.7$\pm$1.2 | 66.9$\pm$1.1 | 58.2$\pm$2.4
Table 2. AUG-FedPrompt enhances performance under different few-shot learning
settings. FedPrompt stands for AUG-FedPrompt without unlabeled data
augmentation. Data points: 64 for AGNEWS, 1024 for MNLI, 256 for YAHOO and
YELP-F.
AUG-FedPrompt enhances FedPrompt performance when labeled data is skewedly
distributed. As shown in Table 2, FedPrompt, i.e., AUG-FedPrompt without data
augmentation, shows competitive performance when labeled data is uniformly
distributed across clients, while a skewed distribution of labeled data hurts
FedPrompt performance significantly. For example, FedPrompt performance
degrades to 41.8% on YAHOO when 256 labeled samples are skewedly distributed
over 32 clients. Considering that skewed distributions are common in the real
world, we integrate AUG-FedPrompt with data augmentation to mitigate this
performance degradation.
It is important to recall that prompt learning introduces a task description
into NLP training. The prompt helps task-specific fine-tuning perform well
even with little labeled training data. This rationale underpins the
efficiency of pseudo labeling: it helps to label more data correctly at the
early stage of training. Together with our confidence filter for pseudo
labeling, AUG-FedPrompt ensures that pseudo-labeled data seldom hurts. For
example, we annotate 100 unlabeled samples on each client involved per round
for AGNEWS. In the first three rounds, the average ratio of correctly
pseudo-labeled data is 92.5%. The inference accuracy further increases as the
FL training proceeds, reaching 95.3% at the convergence round. Those ‘nail’
samples, about 5 out of 100 in total, are hard to annotate correctly or to
filter out. Fortunately, we observe that they do not affect model convergence,
as shown in Table 2. After pseudo labeling, AUG-FedPrompt performs on par with
full-set fine-tuning and greatly outperforms vanilla few-shot fine-tuning,
reaching a usable accuracy with scarce labeled data.
## 5\. System Cost
Challenges | Possible Solutions
---|---
Huge training latency | Model structure optimization (sanh2019distilbert, ; lan2019albert, ).
Large memory requirement | Rematerialization (chen2016training, ; wang2022melon, ), paging (peng2020capuchin, ).
Excessive inference for pseudo labeling | Pacing (cascante2021curriculum, ; bengio2009curriculum, ), early-exit (zhou2020bert, ; laskaridis2021adaptive, ).
High communication cost | Quantization (wu2018error, ; abdelmoniem2021towards, ), sparsity (li2021hermes, ; frankle2018lottery, ).
Table 3. Challenges and possible solutions.
There is no free lunch for the performance improvement of AUG-FedPrompt. The
orchestration of pseudo labeling and prompt learning results in promising
few-shot performance, but it also incurs a non-trivial system cost. In this
section, we discuss the necessity of large models for AUG-FedPrompt, as well
as the associated system cost. Challenges and possible solutions are
summarized in Table 3.
Figure 5. AUG-FedPrompt convergence performance with different models and
datasets. 0.1% labeled data uniformly distributed in 32 clients.
To begin with, we conduct experiments to evaluate the performance of AUG-
FedPrompt on various foundation models. As demonstrated in Figure 5, RoBERTa-
large outperforms all other models across all four datasets, particularly MNLI
and YELP-F, where it shows a significant improvement (up to 38.2%). In
contrast, BERT-large, despite having a similar parameter count to
RoBERTa-large, performs poorly. Interestingly, certain small models, e.g.,
ALBERT-base (lan2019albert, ), which is optimized from BERT-base, achieve
superior results compared to the standard BERT-base model, despite containing
only 10.7% of the
parameters. These findings suggest that large models can help augment the few-
shot learning abilities of AUG-FedPrompt, and that model structure
optimization shows promise in making AUG-FedPrompt a more practical solution.
The excellent performance of RoBERTa-large aligns with previous research
(schick2020exploiting, ; scao2021many, ; schick2022true, ), highlighting the
need for large-scale foundational models to fully leverage prompt learning.
However, despite its merits, the model’s high memory usage and latency cannot
be overlooked. As shown in Table 4, even on a powerful GPU-embedded edge
device like NVIDIA TX2 (tx2, ), training RoBERTa-large leads to long latency
(about 8.1s per batch). Moreover, our testbed device, which has only 8GB of
RAM, ran out of memory during training, because the peak memory usage of
RoBERTa-large fine-tuning exceeds 10GB (tested on a central server).
Model | ALBERT-base (lan2019albert, ) | BERT-base (devlin2018bert, ) | BERT-large (devlin2018bert, ) | RoBERTa-base (liu2019roberta, ) | RoBERTa-large (liu2019roberta, )
---|---|---|---|---|---
Memory (GB) | 3.7 | 5.4 | OOM (9.8) | 5.8 | OOM (10.4)
Latency (s) | 1.4 | 1.9 | $\sim$7.8 | 2.1 | $\sim$8.1
Param. (M) | 11.7 | 109.5 | 334.9 | 124.6 | 355.3
Table 4. System cost of different NLP models. Tested on NVIDIA TX2. Batch
size: 4.
Apart from local prompt training, a mobile client needs to perform inference on
all of its unlabeled data to generate pseudo labels. However, most of this
inference is ultimately unnecessary, as only a small fraction (the most
confident) of pseudo labels will be selected for subsequent training. As a
result, the inference process dominates the total delay due to the large
volume of unlabeled data that needs to be processed. According to our
measurements, this process accounts for up to 87.4% of the total computation
time. Keeping a balanced pace between training and labeling is crucial to
reduce this redundant inference.
In addition, it should be noted that the overall resource cost of the
AUG-FedPrompt system is considerably higher still, beyond the long,
heavy-duty computation. The reason is the need to transfer the entire model, which
can be several GBs in size, in a federated learning scenario. As the size of
the model increases, so too does the amount of data that needs to be
transferred, leading to higher communication costs. This can be particularly
problematic in settings with limited network bandwidth, such as mobile
devices, where large network traffic can significantly impact system
performance (wu2018error, ; xu2020client, ; reisizadeh2020straggler, ;
wang2021device, ).
## 6\. Conclusions and Future Work
This manuscript explores a crucial yet underexplored issue: data labels can be
scarce in federated learning. We provide a comprehensive definition of a data
generator for federated few-shot learning tasks and demonstrate that the lack
and skewness of labeled data can significantly degrade federated learning
convergence performance. To mitigate this issue, we propose AUG-FedPrompt, a
novel federated few-shot learning system that orchestrates prompt learning and
pseudo labeling. AUG-FedPrompt shows competitive performance under various
federated few-shot learning settings, requiring less than 0.1% data to be
manually labeled.
In conclusion, our experiments have demonstrated the impressive few-shot
performance of AUG-FedPrompt when used with large-scale pre-trained models.
However, fine-tuning these ‘behemoths’ can be extremely resource-intensive,
requiring significant computational power and memory. Additionally, the
communication of large model parameters can consume a considerable amount of
bandwidth. Future work will focus on the development of an optimized system
solution for AUG-FedPrompt to enhance its resource efficiency.
## Acknowledgments
This research was supported by National Key Research and Development Program
of China #2020YFB1805500, the Fundamental Research Funds for the Central
Universities, and NSFC #62032003, #61922017, #61921003. Mengwei Xu was partly
supported by NSFC #62102045, Beijing Nova Program #Z211100002121118, and Young
Elite Scientists Sponsorship Program by CAST #2021QNRC001. The authors thank
the anonymous reviewers for their insightful feedback.
## References
* (1) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.
* (2) Lei Zhang, Shuai Wang, and Bing Liu, “Deep learning for sentiment analysis: A survey,” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 8, no. 4, pp. e1253, 2018.
* (3) Taihua Shao, Yupu Guo, Honghui Chen, and Zepeng Hao, “Transformer-based neural network for answer selection in question answering,” IEEE Access, vol. 7, pp. 26146–26156, 2019.
* (4) Betty Van Aken, Benjamin Winter, Alexander Löser, and Felix A Gers, “How does bert answer questions? a layer-wise analysis of transformer representations,” in Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019, pp. 1823–1832.
* (5) Seohyun Kim, Jinman Zhao, Yuchi Tian, and Satish Chandra, “Code prediction by feeding trees to transformers,” in 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 2021, pp. 150–162.
* (6) Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan, “Intellicode compose: Code generation using transformer,” in Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2020, pp. 1433–1443.
* (7) Paul Voigt and Axel Von dem Bussche, “The eu general data protection regulation (gdpr),” A Practical Guide, 1st Ed., Cham: Springer International Publishing, vol. 10, no. 3152676, pp. 10–5555, 2017.
* (8) Stuart L Pardau, “The california consumer privacy act: Towards a european-style privacy regime in the united states,” J. Tech. L. & Pol’y, vol. 23, pp. 68, 2018.
* (9) Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Artificial Intelligence and Statistics. PMLR, 2017, pp. 1273–1282.
* (10) Qiang Yang, Yang Liu, Yong Cheng, Yan Kang, Tianjian Chen, and Han Yu, “Federated learning,” Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 13, no. 3, pp. 1–207, 2019.
* (11) Bill Yuchen Lin, Chaoyang He, Zihang Zeng, Hulin Wang, Yufen Huang, Christophe Dupuy, Rahul Gupta, Mahdi Soltanolkotabi, Xiang Ren, and Salman Avestimehr, “Fednlp: Benchmarking federated learning methods for natural language processing tasks,” Findings of NAACL, 2022.
* (12) Dongqi Cai, Yaozong Wu, Shangguang Wang, Felix Xiaozhu Lin, and Mengwei Xu, “Autofednlp: An efficient fednlp framework,” arXiv preprint arXiv:2205.10162, 2022.
* (13) Huatao Xu, Pengfei Zhou, Rui Tan, Mo Li, and Guobin Shen, “Limu-bert: Unleashing the potential of unlabeled data for imu sensing applications,” in Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems, 2021, pp. 220–233.
* (14) Guoliang Li, Yudian Zheng, Ju Fan, Jiannan Wang, and Reynold Cheng, “Crowdsourced data management: Overview and challenges,” in Proceedings of the 2017 ACM International Conference on Management of Data, 2017, pp. 1711–1716.
* (15) Chenyou Fan and Jianwei Huang, “Federated few-shot learning with adversarial learning,” in 2021 19th International Symposium on Modeling and Optimization in Mobile, Ad hoc, and Wireless Networks (WiOpt). IEEE, 2021, pp. 1–8.
* (16) Fei Chen, Mi Luo, Zhenhua Dong, Zhenguo Li, and Xiuqiang He, “Federated meta-learning with fast convergence and efficient communication,” arXiv preprint arXiv:1802.07876, 2018.
* (17) Tony Huang, Jack Chu, and Fangyun Wei, “Unsupervised prompt learning for vision-language models,” arXiv preprint arXiv:2204.03649, 2022.
* (18) Timo Schick and Hinrich Schütze, “Exploiting cloze questions for few shot text classification and natural language inference,” arXiv preprint arXiv:2001.07676, 2020.
* (19) Dong-Hyun Lee et al., “Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks,” in Workshop on challenges in representation learning, ICML, 2013, vol. 3, p. 896.
* (20) Xiang Zhang, Junbo Zhao, and Yann LeCun, “Character-level convolutional networks for text classification,” Advances in neural information processing systems, vol. 28, 2015\.
* (21) Qinbin Li, Yiqun Diao, Quan Chen, and Bingsheng He, “Federated learning on non-iid data silos: An experimental study,” in 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE, 2022, pp. 965–978.
* (22) Eric Arazo, Diego Ortego, Paul Albert, Noel E O’Connor, and Kevin McGuinness, “Pseudo-labeling and confirmation bias in deep semi-supervised learning,” in 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020, pp. 1–8.
* (23) Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston, “Curriculum learning,” in Proceedings of the 26th annual international conference on machine learning, 2009, pp. 41–48.
* (24) Adina Williams, Nikita Nangia, and Samuel R Bowman, “A broad-coverage challenge corpus for sentence understanding through inference,” arXiv preprint arXiv:1704.05426, 2017.
* (25) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov, “Roberta: A robustly optimized bert pretraining approach,” arXiv preprint arXiv:1907.11692, 2019.
* (26) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al., “Huggingface’s transformers: State-of-the-art natural language processing,” arXiv preprint arXiv:1910.03771, 2019.
* (27) Teven Le Scao and Alexander M Rush, “How many data points is a prompt worth?,” arXiv preprint arXiv:2103.08493, 2021.
* (28) Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf, “Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter,” arXiv preprint arXiv:1910.01108, 2019.
* (29) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut, “Albert: A lite bert for self-supervised learning of language representations,” arXiv preprint arXiv:1909.11942, 2019.
* (30) Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin, “Training deep nets with sublinear memory cost,” arXiv preprint arXiv:1604.06174, 2016.
* (31) Qipeng Wang, Mengwei Xu, Chao Jin, Xinran Dong, Jinliang Yuan, Xin Jin, Gang Huang, Yunxin Liu, and Xuanzhe Liu, “Melon: Breaking the memory wall for resource-efficient on-device machine learning,” 2022\.
* (32) Xuan Peng, Xuanhua Shi, Hulin Dai, Hai Jin, Weiliang Ma, Qian Xiong, Fan Yang, and Xuehai Qian, “Capuchin: Tensor-based gpu memory management for deep learning,” in Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, 2020, pp. 891–905.
* (33) Paola Cascante-Bonilla, Fuwen Tan, Yanjun Qi, and Vicente Ordonez, “Curriculum labeling: Revisiting pseudo-labeling for semi-supervised learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, vol. 35, pp. 6912–6920.
* (34) Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, and Furu Wei, “Bert loses patience: Fast and robust inference with early exit,” Advances in Neural Information Processing Systems, vol. 33, pp. 18330–18341, 2020.
* (35) Stefanos Laskaridis, Alexandros Kouris, and Nicholas D Lane, “Adaptive inference through early-exit networks: Design, challenges and directions,” in Proceedings of the 5th International Workshop on Embedded and Mobile Deep Learning, 2021, pp. 1–6.
* (36) Jiaxiang Wu, Weidong Huang, Junzhou Huang, and Tong Zhang, “Error compensated quantized sgd and its applications to large-scale distributed optimization,” in International Conference on Machine Learning. PMLR, 2018, pp. 5325–5333.
* (37) Ahmed M Abdelmoniem and Marco Canini, “Towards mitigating device heterogeneity in federated learning via adaptive model quantization,” in Proceedings of the 1st Workshop on Machine Learning and Systems, 2021, pp. 96–103.
* (38) Ang Li, Jingwei Sun, Pengcheng Li, Yu Pu, Hai Li, and Yiran Chen, “Hermes: an efficient federated learning framework for heterogeneous mobile clients,” in Proceedings of the 27th Annual International Conference on Mobile Computing and Networking, 2021, pp. 420–437.
* (39) Jonathan Frankle and Michael Carbin, “The lottery ticket hypothesis: Finding sparse, trainable neural networks,” arXiv preprint arXiv:1803.03635, 2018.
* (40) Timo Schick and Hinrich Schütze, “True few-shot learning with prompts—a real-world perspective,” Transactions of the Association for Computational Linguistics, vol. 10, pp. 716–731, 2022.
* (41) NVIDIA JETSON TX2, “256-core nvidia pascal gpu,” https://developer.nvidia.com/embedded/jetson-tx2.
* (42) Jie Xu and Heqiang Wang, “Client selection and bandwidth allocation in wireless federated learning networks: A long-term perspective,” IEEE Transactions on Wireless Communications, vol. 20, no. 2, pp. 1188–1200, 2020.
* (43) Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, and Ramtin Pedarsani, “Straggler-resilient federated learning: Leveraging the interplay between statistical accuracy and system heterogeneity,” arXiv preprint arXiv:2012.14453, 2020.
* (44) Su Wang, Mengyuan Lee, Seyyedali Hosseinalipour, Roberto Morabito, Mung Chiang, and Christopher G Brinton, “Device sampling for heterogeneous federated learning: Theory, algorithms, and implementation,” in IEEE INFOCOM 2021-IEEE Conference on Computer Communications. IEEE, 2021, pp. 1–10.
# Distilling Reasoning Capabilities into Smaller Language Models
Kumar Shridhar ∗ Alessandro Stolfo Mrinmaya Sachan
Department of Computer Science, ETH Zürich
{shkumar, <EMAIL_ADDRESS>}; ∗Equal contribution
###### Abstract
Step-by-step reasoning approaches like chain of thought (CoT) have proved to
be very effective in inducing reasoning capabilities in large language models.
However, the success of the CoT approach is fundamentally tied to the model
size, and billion parameter-scale models are often needed to get CoT to work.
In this paper, we propose a knowledge distillation approach that leverages the
step-by-step CoT reasoning capabilities of larger models and distills these
abilities into smaller models.
In this work, we propose an alternative reasoning scheme, Socratic CoT, which
learns a decomposition of the original problem into a sequence of subproblems
and uses it to guide the intermediate reasoning steps. We use Socratic CoT to
train a combination of two small distilled models: a problem decomposer and a
subproblem solver. In practice, given a new problem, the two distilled models
work in sync to decompose and solve complex problems. On multiple reasoning
datasets (GSM8K, StrategyQA, and SVAMP), our proposed distillation strategies
boost the performance of smaller models by over 70% compared to the baselines.
Finally, we investigate when Socratic CoT is an effective alternative to CoT,
demonstrating cases where a much smaller model (GPT-2 large) can outperform a
10X larger model (GPT-3 6B). Our code is available here.
## 1 Introduction
Large language models (LLMs) have demonstrated strong performance on a variety
of reasoning tasks (Brown et al., 2020; Hoffmann et al., 2022; Chowdhery et
al., 2022, inter alia). One particularly interesting strategy for prompting
these models is chain-of-thought (CoT), which has been shown to elicit
reasoning abilities in LLMs by asking the model to incorporate intermediate
reasoning steps while solving a problem Nye et al. (2021); Wei et al. (2022b);
Wang et al. (2022). However, CoT has been shown to work primarily on models
with hundreds of billions of parameters Wei et al. (2022b, a) or those tuned
to a wide range of tasks Chung et al. (2022); Iyer et al. (2022).
Figure 1: Illustration of the proposed framework. First, an LLM is prompted to
decompose a multi-step problem providing annotation for the intermediate steps
leading to the final solution. Then, the generated annotation is used to
provide additional supervision when fine-tuning smaller models.
Due to the significant computational resources or expensive API calls required
to access CoT-capable LLMs, we ask whether it is possible to elicit such
reasoning capabilities in smaller models (following Li et al. (2022), we note
that small and large models are relative, context-dependent terms; we consider
models with billions of parameters to be large, and models with millions of
parameters to be small).
Small-sized, non-fine-tuned language models are known to be poor reasoners
Stolfo et al. (2022). Therefore, a possible approach to induce CoT-like
reasoning abilities in smaller models would be fine-tuning them on step-by-
step examples.
In our work, we propose a framework for leveraging the reasoning capabilities
of LLMs to supervise the training of smaller models. This approach can be
thought of as a form of knowledge distillation Hinton et al. (2015), where a
larger teacher model transfers knowledge to a smaller student model. However,
unlike standard knowledge distillation, our method transfers the reasoning
abilities of the teacher model only using its generated solutions as a proxy,
i.e., we do not assume access to the teacher model parameters. Our approach
consists of prompting an LLM to produce step-by-step annotations leading to
the answer for a set of problems. This annotation is then used as supervision
to fine-tune the student model. A high-level illustration of the process is
provided in Figure 1.
Within this framework, we study three different types of annotation structure
for supervising our distillation approach: (i) We consider fine-tuning on the
gold step-by-step solution procedure for datasets where the step-by-step
solutions are available. (ii) We study whether procedural supervision, coming
from the chain of thought (CoT) of the teacher model can improve upon the
baseline. (iii) We propose a third type of supervision structure, which we
call Socratic CoT. This approach relies on learning a semantic decomposition
of the original problem into a sequence of subproblem-solution pairs using two
models – a) a question generator that learns to decompose the problem into a
sequence of subproblems, and b) a question-answering model that solves the
various generated subproblems (more details are in section 3.2). This approach
can be thought of as an extension of the typical chain of thought reasoning
where, unlike CoT, the intermediate steps are now decomposed into subquestion-
solution pairs; the subquestions guide the generation of intermediate steps
that lead to the final answer to the problem.
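At inference time, the two distilled models described above operate in sequence. The following sketch is illustrative pseudocode under our own assumptions; both callables are stand-ins for the fine-tuned question generator and QA solver, not the paper's released implementation:

```python
def socratic_cot_inference(problem, question_generator, qa_solver):
    """Two-model Socratic CoT pipeline: the question generator
    decomposes the problem into subquestions, and the QA solver
    answers each one in turn, conditioning on the dialogue so far.
    The answer to the last subquestion is taken as the final answer."""
    context = problem
    answer = None
    for subq in question_generator(problem):
        answer = qa_solver(context, subq)
        context += f"\nQ: {subq}\nA: {answer}"
    return answer

# Toy stand-ins for the two distilled student models
gen = lambda p: ["How many apples per box?", "How many in 3 boxes?"]
solve = lambda ctx, q: {"How many apples per box?": "4",
                        "How many in 3 boxes?": "12"}[q]
print(socratic_cot_inference("3 boxes of 4 apples each...", gen, solve))  # 12
```

Appending each subquestion-answer pair to the context is what lets later subproblems build on earlier intermediate results, mirroring how the subquestions guide the chain of intermediate steps.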
We train distilled student models with various annotation structures mentioned
above. Depending on the annotation available for the given data, we use the
teacher model to generate either a CoT-like solution to a problem or, if the
step-by-step annotation is available, a set of subquestions leading to the
solution of the problem, or both (examples of different annotations are shown
in Figure 2).
We perform our analyses on three multi-step reasoning datasets: GSM8K Cobbe et
al. (2021), StrategyQA Geva et al. (2021), and SVAMP Patel et al. (2021). We
consider data with various types of annotation to cover a range of realistic
data scenarios. Our results show that supervision by CoT-decomposed examples
helps smaller models perform better, and subquestioning introduced by Socratic
CoT can provide further improvement. We observe performance gains of up to 40%
with LLM-generated step-by-step annotations – this validates the effectiveness
of our distillation framework (detailed analysis in Section 5).
## 2 Related Work
##### Decomposing Multi-Step Reasoning Tasks
Solving multi-step reasoning tasks like math word problems (MWPs) has been a
popular area of research in recent years Kushman et al. (2014); Hosseini et al.
(2014); Roy et al. (2015); Amini et al. (2019); Zhang et al. (2020); Shridhar
et al. (2022); Opedal et al. (2023). However, the majority of the modern
approaches for these problems are shifting towards using large language
models, often relying on approaches involving prompting or in-context learning
(Cobbe et al., 2021; Kojima et al., 2022; Wei et al., 2022b; Chowdhery et al.,
2022; Lewkowycz et al., 2022; Srivastava et al., 2022). One such prompting
approach is the chain of thought prompting Wei et al. (2022b), which prompts
the language model to generate a series of intermediate steps that improve the
reasoning capabilities in LLMs. Wang et al. (2022) took another step forward
and sampled multiple reasoning paths and selected the most relevant output
using majority voting. Huang et al. (2022) used the most voted outputs to
further fine-tune the model for better performance. Kojima et al. (2022)
further improved the reasoning of LLMs in a zero-shot manner by appending
“Let’s think step by step” to the prompt. In contrast, our work does not
propose prompting solutions; instead, we explicitly guide the student model
reasoning using sub-questions at each step. Most similar to ours is the work
by Zhou et al. (2022), which decomposes questions into sub-questions and
asks the language model to solve each sub-question sequentially. However, this
work is also restricted to prompting and only works with LLMs with billions of
parameters.
##### Knowledge Distillation
Our approach is reminiscent of knowledge distillation (Ba and Caruana, 2014;
Hinton et al., 2015) in that we use a student network to mimic the large
teacher language model. Snell et al. (2022) demonstrated the usefulness of
providing instruction that can help models achieve better reasoning skills.
Similar to our hypothesis, Eisenstein et al. (2022) argued that question-
answering systems should focus not only on the final answer, but also on the
rationale that justifies their reasoning, to help them reason better. We go
beyond this; in our work, in addition to the question-answering system, we
also focus on what questions need to be asked at each step that can help to
learn that reasoning step better. Finally, similar to our hypothesis of
injecting reasoning capabilities into smaller models, Li et al. (2022) used
CoT-like reasoning from LLMs to train smaller models on a joint task of
generating the solution and explaining the generated solution. We, on the
other hand, use the LLM to generate subquestion-solution pairs and use
them together to inject reasoning capabilities into smaller models.
##### Subquestioning as supervision
The idea of inquiring or asking information-seeking questions for discovery
learning has been studied well in the past Bruner (1961). Rao and Daumé III
generated clarification questions based on Stack Exchange questions as
supervision, Klein and Nabi (2019) used a joint question answering model to
ask questions from a given span of text and later answer them, and Rajani et
al. (2019); Shwartz et al. (2020) asked questions to improve common sense QA
models. In contrast, our work focuses on multistep reasoning tasks where
intermediate clarifying questions and reasoning steps may not always be
available and may need to be extracted from a teacher model.
## 3 Methodology
The setting we consider consists of a data set $\mathcal{D}$, where each
problem $P_{i}$ is accompanied by a final answer $a_{i}$ that can be reached
by several steps of reasoning. The task of solving the problem with a model
$\psi$ is to predict an answer $\hat{a}_{i}=\psi(P_{i})$ such that $\hat{a}_{i}=a_{i}$. We
consider different data scenarios where intermediate annotations of the
solution may be available in different forms (e.g., step-by-step, as a
semantic decomposition by subquestions) or may not be present. Depending on
the availability of annotations, we propose different approaches to augment
the training of a small model on $\mathcal{D}$ by using LLMs.
Figure 2: Illustration of the three different kinds of annotation structure.
Our proposed approach, Socratic CoT, augments the typical chain-of-thought
step-by-step solution with subquestioning.
### 3.1 Distilling step-by-step reasoning via CoT
A data set may present an annotation that contains intermediate reasoning
steps that lead to the answer $a_{i}$ (i.e., a chain-of-thought annotation).
This intermediate annotation can be used directly to fine-tune a small model.
However, in cases where such step-by-step information is not available, we use
an LLM to generate the reasoning steps that might improve the performance of
the small model.
To achieve this, we consider a small subset of the dataset $\mathcal{D}$ and
decompose each problem $P_{i}$ into $n_{i}$ intermediate reasoning steps. We
construct these intermediate reasoning steps manually, since we only need a
few examples as prompts (examples are provided in Appendix Table 6).
For each remaining problem $P\in\mathcal{D}$, we then prompt a large language
model $\mathcal{M}$ to generate the intermediate reasoning steps. We make sure
that the chain of reasoning steps is meaningful by checking whether the last
solution matches the ground truth answer, i.e. whether
$a_{i}^{(n_{i})}=a_{i}$, where $a_{i}^{(n_{i})}$ represents the answer
corresponding to the last reasoning step. If this is not the case, we discard
the problem and sample a new chain by prompting the model again (for a maximum
of 3 times). In this way, we obtain an augmented dataset $\mathcal{D}^{*}$ in
which a subset of problems is paired with a sequence of reasoning steps
leading to the correct result. Finally, we can distill the reasoning
capabilities into smaller models by fine-tuning them with the generated
intermediate steps.
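The discard-and-resample procedure above can be sketched as follows. This is a minimal sketch; `teacher_generate` is a hypothetical stand-in for prompting the LLM, and the chain representation (a list of step dictionaries) is an assumption, not the paper's actual data format:

```python
def annotate_with_cot(problems, teacher_generate, max_tries=3):
    """Keep only problems whose generated chain ends in the gold answer,
    i.e. a_i^(n_i) == a_i; resample up to `max_tries` times, else discard."""
    augmented = {}
    for problem, gold_answer in problems:
        for _ in range(max_tries):
            chain = teacher_generate(problem)       # list of reasoning steps
            if chain and chain[-1]["answer"] == gold_answer:
                augmented[problem] = chain          # verified chain: keep it
                break                               # otherwise: prompt again
    return augmented
```

The returned dictionary corresponds to the subset of $\mathcal{D}^{*}$ paired with verified reasoning chains.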
Figure 3: Detailed explanation of our framework. First, an LLM is prompted to
decompose the input problem $P$ into a series of subquestion-solution pairs
($q_{i}^{(j)},s_{i}^{(j)}$, $j\in\\{1,\dots,n_{i}\\}$) with an answer at each
step $a_{i}^{(j)}$. The generated subquestions-solutions are used to train two
student models: a) the QG model, which learns to mimic the subquestioning
capability of the LLM, and b) the QA model, which learns to solve each
subquestion. At the bottom, the inference process is depicted for an unseen
problem and no LLM is involved. The QG model breaks the unseen problem into
simpler subquestions and the QA model solves each one of them eventually
leading to the final answer $a_{i}^{(n_{i})}$.
### 3.2 Distilling step-by-step reasoning through Socratic CoT
In this section, we describe how CoT can be enhanced through subquestioning.
An illustration of our approach is shown in Figure 3.
#### 3.2.1 Extracting the Reasoning Capability from the Teacher
In Section 3.1, we detailed how an LLM can be used to generate the
intermediate annotation of a problem $P_{i}$ as a chain of steps leading to
the answer $a_{i}$. We now extend this procedure to include a subquestion at
each step of the solution. Following a similar procedure as described in
Section 3.1, we prompt the LLM with few exemplars of problems decomposed as a
set of intermediate subquestion-solution pairs (the prompts are reported in
Appendix Table 6). This way, we obtain an intermediate annotation that
includes subquestioning. In particular, each of the $n_{i}$ steps constituting
the overall solution is a subquestion-solution pair, denoted
$q_{i}^{(j)},s_{i}^{(j)}$, $j\in\\{1,\dots,n_{i}\\}$ (an example is shown in
Figure 2). We refer to the ordered list of subquestion-solution pairs for
problem $P_{i}$ as
$(q_{i}^{(1)},s_{i}^{(1)}),\dots,(q_{i}^{(n_{i})},s_{i}^{(n_{i})})$.
#### 3.2.2 Transferring the Reasoning Capability into the Student
We present two strategies to distill the reasoning annotation provided by the
LLM into smaller models.
In the first strategy, a single unified student is trained to generate the
subquestion-solution pairs simultaneously, while in the second strategy, the
question generation and question-answering tasks are assigned to two separate
models. We call this second strategy iterative because the question-answering
model is trained to solve each subquestion iteratively.
##### Unified.
Using the problems in $\mathcal{D}$ that contain the chain of intermediate
questions and solutions, we train a unified student model $\mathcal{M}_{uni}$
that learns to generate the sequence of subquestion-solution pairs
$\\{(q^{(1)},s^{(1)}),(q^{(2)},s^{(2)}),\dots\\}$ that lead to the solution of
a given problem. We use a pre-trained transformer-based model Vaswani et al.
(2017) and train it on the chain of subquestion-solution pairs for each
problem $P$. Given a step $j$ of problem $P$ (i.e., the concatenation of
$q^{(j)}$ and $s^{(j)}$) consisting of a sequence of $m_{j}$ tokens
$\\{x_{j}^{(1)},\dots,x_{j}^{(m_{j})}\\}$, we use a typical auto-regressive
language modeling loss, $\mathcal{L}$:
$\displaystyle\mathcal{L}_{j}(P)=-\sum_{k=1}^{m_{j}}\log\mathbb{P}_{uni}(x_{j}^{(k)}|x_{j}^{:(k-1)},P)$ (1)
where $\mathbb{P}_{{uni}}(x|c)$ is the probability assigned by
$\mathcal{M}_{uni}$ to token $x$ given context $c$, and $x^{:(y)}$ indicates
the sequence $\\{x^{(1)},\dots,x^{(y)}\\}$. The loss $\mathcal{L}_{j}$ is
computed for each problem $P_{i}$ and for each pair $(q^{(j)},s^{(j)})$
leading to the final answer $a_{i}$.
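The loss in Eq. (1) reduces to summing the negative log-probabilities of the gold tokens of a step. A minimal sketch, assuming the per-token probabilities of the gold tokens given their left context have already been obtained from a language-model forward pass (`token_probs` is a hypothetical stand-in for that output):

```python
import math

def step_loss(token_probs):
    """Token-level NLL of Eq. (1) for one step j: the sum over the step's
    m_j tokens of -log P_uni(x_j^(k) | x_j^(:k-1), P)."""
    return -sum(math.log(p) for p in token_probs)
```

In practice this quantity is what a standard cross-entropy loss over the step's tokens computes inside a framework such as PyTorch.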
Unified
---
Input: | Output:
A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take? | How many bolts of white fiber does it take? It takes 2/2 = <<2/2=1>> 1 bolt of white fiber. How many bolts in total does it take? So the total amount of fabric is 2+1 = <<2+1=3>> 3 bolts of fabric. The answer is 3.
Iterative (Iteration 1)
---
Input: | Output:
A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take? | QG: How many bolts of white fiber does it take? QA: It takes 2/2 = <<2/2=1>> 1 bolt of white fiber.
Iterative (Iteration 2)
---
Input: | Output:
A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take? How many bolts of white fiber does it take? It takes 2/2 = <<2/2=1>> 1 bolt of white fiber. | QG: How many bolts in total does it take? QA: So the total amount of fabric is 2+1 = <<2+1=3>> 3 bolts of fabric. The answer is 3.
Table 1: Example demonstrating the input-output format for the unified vs. iterative setup. QG denotes the question generation model and QA the question answering model. Note that the QA model uses the QG output to answer it, as shown in Figure 3.
##### Iterative.
The iterative version of the student separates the tasks of generating the
subquestions and providing an intermediate answer to each subquestion into two
distinct models: a question generation (QG) model and a question answering
(QA) model. Both the QG and QA models are implemented using a Transformer-
based language model Vaswani et al. (2017). In particular, the QA model
$\mathcal{M}_{qa}$ is iteratively trained to answer the teacher-generated sub-
questions. The learning objective is computed at the token level for each
intermediate solution:
$\displaystyle\mathcal{L}(P,s^{(j)})=-\sum_{k=1}^{l_{j}}\log\mathbb{P}_{\mathcal{QA}}(y_{j}^{(k)}|y_{j}^{:(k-1)},q^{:(j)},s^{:(j-1)},P)$
where $l_{j}$ and the $y_{j}$’s represent, respectively, the length and the
tokens of the intermediate solution $s^{(j)}$. $s^{:(j-1)}$ denotes the
previous solutions generated by the QA model in earlier iterations.
Similarly, the QG model is trained to acquire the ability of the teacher model
to decompose the problem’s main question into a series of sub-steps, each of
which corresponds to a subquestion. The loss for this model is analogous to
Equation 1, with the only difference being that the intermediate solutions are
not considered for the QG model. During training, the previous intermediate
solutions generated by the QA model are replaced with the teacher-generated
solutions using teacher forcing Cho et al. (2014). However, the intermediate
solutions generated by the model are used at inference time.
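The teacher-forced construction of QA training examples can be sketched as follows. The concatenation format is an assumption for illustration; the key point is that the gold (teacher-generated) solutions, not the student's own predictions, are fed back into the context during training:

```python
def qa_training_examples(problem, questions, solutions):
    """Teacher-forced training pairs for the QA student: at step j, the
    input is P plus the teacher-generated q^(:j) and the gold s^(:j-1);
    the target is s^(j)."""
    examples, context = [], problem
    for q, s in zip(questions, solutions):
        examples.append((context + " " + q, s))
        context += " " + q + " " + s   # append gold solution for the next step
    return examples
```

At inference time the same loop runs, but with the model's own generated solutions appended instead.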
### 3.3 Inference-time Predictions
Given an unseen problem $P$, the unified student model can directly predict a
solution as a sequence of subquestions and answers. In the iterative approach,
we first generate the subquestions conditioning the generation of the QG model
on $P$. After these questions are generated, they are provided to the QA model
one by one, decoding the intermediate solution $\hat{s}^{(j)}$ at step $j$
token by token according to the model’s probability distribution over its
vocabulary:
$\displaystyle\mathbb{P}_{\mathcal{QA}}(y_{j}^{(k)}|y_{j}^{:(k-1)},\hat{q}^{:(j)},\hat{s}^{:(j-1)},P),$ (2)
where $y_{j}^{(k)}$ is the $k$-th token being decoded in greedy fashion.
After the last solution $\hat{s}^{(n)}$ has been generated, the numerical
prediction $\hat{a}^{(n)}$ is parsed from the text using simple heuristics.
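The iterative inference procedure can be sketched as a simple loop. All callables here are hypothetical stand-ins for the fine-tuned GPT-2 students and the answer-parsing heuristic:

```python
def solve_iteratively(problem, qg_model, qa_model, parse_answer):
    """Inference with the iterative student (Section 3.3): the QG model
    proposes subquestions, the QA model answers them one by one, and each
    generated answer conditions the next step."""
    context, solution = problem, None
    for question in qg_model(problem):
        solution = qa_model(context + " " + question)  # decode s-hat^(j)
        context += " " + question + " " + solution
    return parse_answer(solution)                      # numeric a-hat^(n)
```

Note that no LLM is involved here: both models are the small distilled students.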
## 4 Empirical Analysis
### 4.1 Datasets
We study how smaller models can learn to reason better on three multi-step
reasoning datasets: GSM8K Cobbe et al. (2021), StrategyQA Geva et al. (2021),
and SVAMP Patel et al. (2021). GSM8K consists of 8.5K grade school math word
problems, each requiring 2 to 8 steps of reasoning to solve. The solutions
primarily involve a sequence of elementary calculations using basic arithmetic
operations ($+$, $-$, $\times$, $\div$). The dataset is divided into 7.5K
training problems and 1K test problems. To evaluate the model on SVAMP, we
train the model on 761 multi-step math word problems taken from the ASDiv Miao
et al. (2020) training set and evaluate it on 237 multi-step SVAMP problems.
For StrategyQA, the test set with facts is not available, so we split the data
into 80% training, 10% as validation data, and the last 10% as test data. We
do not shuffle the data to maintain reproducibility.
### 4.2 Experimental Setup
We use three kinds of annotation, corresponding to the three datasets that we
consider.
##### Step-by-step solution
: The GSM8K dataset falls into this category and includes a Socratic version
where intermediate subquestion-solution pairs are provided for each MWP. While
the intermediate step-by-step solutions were manually annotated, the authors
report that the subquestions were generated by prompting GPT-3. We reproduced
a subset of these subquestions using a GPT-3 model with prompts, and we
observed a high similarity between the questions provided and the ones
generated by us (BERT $F_{1}$ score of 95%). For Socratic CoT, we thus use the
subquestioning annotation already provided.
##### Supporting facts
: We study the StrategyQA dataset, which falls in this category. StrategyQA
consists of factual questions with a binary True/False final answer.
Additional supporting facts and decomposed questions are provided. However,
the set of facts and the decomposed questions provided with a given question
are not always aligned (i.e., a fact is not necessarily the answer to one
subquestion). Therefore, having a setup similar to the one for GSM8K is not
possible. We thus consider two versions of the data. One in which the
supporting facts are used as CoT and the corresponding questions are generated
by prompting a GPT-3 model, and a second in which we take the provided
questions and generate the facts (this time aligned with the questions) using
GPT-3.
##### Final answers only
: ASDiv/SVAMP falls in this category, and for training, we use GPT-3 to
generate both intermediate subquestions and solutions. Intermediate solutions
are used as CoT and the generated subquestion-solution pairs for Socratic CoT.
### 4.3 Implementation Details
We use GPT-2 variants Radford et al. (2019) as student models. GPT-3 175B
Brown et al. (2020) served as the teacher model for decomposing complex
problems into a series of simpler substeps (we report the prompts used in
Appendix Table 6).
All models were trained using the Huggingface library Wolf et al. (2020) on an
NVIDIA Tesla A100 GPU with 40 GB of memory. Each experiment was run for the
same number of iterations to ensure fairness with periodic evaluation over the
validation set. Teacher forcing was used during training to replace the
generated responses with ground truth answers from the training dataset.
##### Evaluation Metric.
To evaluate the question-answering performance on the GSM8K, SVAMP, and
StrategyQA datasets, we compute the accuracy based on the final answer
provided by the student model.
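The evaluation metric can be sketched as follows. The exact parsing heuristic is an assumption (the paper only says "simple heuristics"); here we take the last number appearing in the generated solution:

```python
import re

def final_answer(text):
    """Parse the final numeric answer from generated text (assumed
    heuristic: last number in the string, commas stripped)."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return numbers[-1] if numbers else None

def accuracy(predictions, gold_answers):
    """Final-answer accuracy: fraction of problems whose parsed
    prediction matches the ground-truth answer."""
    hits = sum(final_answer(p) == str(g)
               for p, g in zip(predictions, gold_answers))
    return hits / len(gold_answers)
```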
Dataset | Model | Answer Only | GT Steps | GT Facts | CoT | Iterative SocCoT | Iterative SocGT | Unified SocCoT
---|---|---|---|---|---|---|---|---
GSM8K | Small (124M) | 1.45 | 5.05 | - | 4.70 | 5.98 | 6.44 ($\uparrow 20\%$) | 5.10
GSM8K | Medium (355M) | 2.90 | 7.88 | - | 7.10 | 11.57 | 12.74 ($\uparrow 38\%$) | 7.90
GSM8K | Large (774M) | 4.62 | 14.10 | - | 12.85 | 17.89 | 21.08 ($\uparrow 33\%$) | 13.25
GSM8K | GPT-3 (6B) | - | 21.00 | - | - | - | - | -
StrategyQA | Medium (355M) | 54.10 | - | 52.02 | 55.01 | 52.05 | 60.31 ($\uparrow 13\%$) | 52.05
StrategyQA | Large (774M) | 61.10 | - | 62.80 | 55.90 | 61.32 | 66.40 ($\uparrow 5\%$) | 59.45
StrategyQA | XL (1.5B) | 60.51 | - | 66.30 | 58.07 | 62.30 | 63.56 ($\downarrow 4\%$) | 62.05
SVAMP | Small (124M) | 2.15 | - | - | 5.35 | 6.79 | - | 5.82
SVAMP | Medium (355M) | 4.80 | - | - | 17.30 | 18.99 | - | 17.62
SVAMP | Large (774M) | 7.40 | - | - | 23.60 | 18.14 | - | 17.45
Table 2: Accuracy comparison (in %) on the three considered datasets. We
consider three human-annotated baselines: final answers only (Answer Only),
ground-truth step-by-step solution (GT Steps), and supporting facts (GT
Facts). We compare the different supervision strategies for fine-tuning the
small models: CoT represents the case where the chain of intermediate
reasoning steps is generated by GPT-3, SocCoT represents the case where both
the chain of intermediate solutions and the subquestions are generated by LLM
and used to fine-tune small models. SocGT represents the case where GT
solutions/facts are used when prompting GPT-3 to generate the subquestions.
Iterative and Unified represent the two SocCoT strategies described above. All
models are GPT-2 versions and their size is reported within parentheses. All
experiments were run at least 3 times and the average is reported. GPT-3 6B
results are taken from Cobbe et al. (2021).
## 5 Results and Discussion
##### Can our framework improve the reasoning capabilities of smaller models?
Table 2 demonstrates that leveraging LLMs' reasoning capabilities using our
framework can improve the reasoning results for all dataset types.
##### Step-by-Step Solution.
When human-annotated step-by-step solutions are available, training smaller
models with LLM-generated CoT is not advantageous, as shown on GSM8K. This is
to be expected since the annotation generated by an LLM is likely to be
noisier and of lower quality than human-annotated data. However, the ground-
truth step-by-step annotation can be leveraged to prompt an LLM to generate
subquestions for the Socratic CoT approach, giving a performance boost of up
to 38% when the LLM-generated subquestions are used at inference time. When
the subquestions are learned by the QG model (Iterative SocCoT), the accuracy
of the student model decreases slightly but still improves over the step-by-
step annotation without subquestions (17.89 vs. 14.10). Figure 5 shows a
comparison of predictions generated by SocCoT models and a model trained on
the GT step-by-step annotation. Unified Socratic CoT performs similarly to
training with the step-wise ground-truth annotation. We additionally include
the score produced by GPT-3 6B to show that training with Socratic CoT can
help a small model (GPT-2 large with 774M parameters) perform as well as a
nearly 10x larger model fine-tuned with human annotated data.
Figure 4: Accuracy comparison for different supervision strategies on
StrategyQA. The baseline method consists of fine-tuning on final answers only
(Ans only), and it is compared to fine-tuning with: ground-truth supporting
facts (GT Facts), GPT-3-generated supporting facts (CoT), ground-truth
supporting facts with GPT-3-generated subquestions (SocCoT), and LLM-generated
facts with human-annotated subquestions (SocGT).
##### Supporting facts.
On StrategyQA, we observe that the inclusion of ground-truth supporting facts
in the fine-tuning procedure improves the performance of the small models.
However, surprisingly, when the supporting facts are generated by GPT-3, their
inclusion actually hurts performance (58.07 vs 60.51 for GPT-2 Large). We
hypothesize that this is likely due to the imperfect factual knowledge
provided by the LLM, which mars the quality of the supervision. We have
observed that the GT supporting facts provided often do not represent a
logical sequence of propositions leading to the final answer. This is likely
the reason why decomposing the problem through subquestions based on such
facts actually harms accuracy (see SocCoT column in Table 2). Instead, using
the provided subquestions and using an LLM to generate the answers
(representing coherent facts leading to the final answer) proves to be an
effective strategy (60.31 vs. 52.02 for GPT-2 Medium). A more detailed
comparison between our proposed approaches is presented in Figure 4. However,
GPT-2 XL models perform well when trained on facts because, unlike smaller
models, larger models can encode more facts in their parameters, which assists
in answering factual questions.
##### Answers only.
On the SVAMP dataset, which includes only final answers and no intermediate
annotation, LLMs can be used to generate both the intermediate steps and the
subquestions. Both the consideration of intermediate solutions without
subquestions (CoT) and the consideration of intermediate solutions with
subquestions (SocCoT) lead to an improvement in performance. The trend here is
similar to what was observed for StrategyQA, with Socratic CoT being more
effective for the two smaller models but falling back to CoT for the larger
model.
Figure 5: Example of predictions generated by a GPT-2 Large model fine-tuned
with GT steps and Socratic CoT on GSM8K dataset.
Models | Methodology | Accuracy
---|---|---
GPT-3 (175B), 1-shot | CoT | 27.5
GPT-3 (175B), 1-shot | Sub-ques | 47.1 ($\uparrow 41\%$)
Table 3: Accuracy comparison (in %) of CoT vs. Socratic CoT (Sub-ques) on the
GSM8K dataset for the GPT-3 model with prompting.
##### Can Socratic CoT be used as a prompting strategy?
We experimented with Socratic CoT as a prompting strategy. First, we prompted
GPT-3 (175B) to decompose the main problem into simpler steps by formulating
subquestions. Then, GPT-3 is used again to solve the sequence of subproblems
in a single-shot setting with a problem decomposed into intermediate
subquestions and solutions included in the prompt. The introduction of
subquestioning boosts accuracy by over 40% compared to standard CoT prompting
(Table 3). Other work (e.g., Wei et al. 2022b) has used a larger number of
exemplars in the few-shot prompt, achieving higher overall accuracy. We
limited our experiments to single-shot prompts due to budget constraints.
## 6 Ablation Studies
In this section, we describe additional analyses regarding specific components
of the framework we propose, as well as negative results that we obtained with
alternative strategies.
##### How good are the sub-questioning capabilities of a smaller model?
We investigate in more detail the ability of a small model to decompose a
problem by generating meaningful subquestions. We fine-tuned GPT-2 Large on
the GPT-3 generated subquestions provided in the GSM8K dataset. We then
evaluated the quality of the generated questions in terms of BLEU score Post
(2018), BERT F1 score Zhang et al. (2019), and by measuring for how many
problems the number of questions generated by GPT-2 (#Q) matches the number of
GPT-3 annotated questions for a given problem.
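The "# Q" metric amounts to counting, over problems, how often the two question lists have the same length (a sketch of how we read the metric's definition):

```python
def question_count_match(generated, reference):
    """'# Q' metric: fraction of problems for which the number of
    subquestions generated by the student equals the number of
    reference (GPT-3 annotated) subquestions."""
    matches = sum(len(g) == len(r) for g, r in zip(generated, reference))
    return matches / len(reference)
```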
We found that the fine-tuned GPT-2 predicted an incorrect number of
subquestions for the majority of problems (see Table 4, first row). Thus,
following previous work on subquestion generation Shridhar et al. (2022), we
introduced a guidance mechanism that conditions the generation of subquestions
for a problem $P$ on the equations describing the intermediate solutions of
$P$. This strategy improved the quality of the generated questions for all
three metrics considered (Table 4, second row). To avoid the dependence on the
step-by-step annotation of the equations for each problem $P$ at inference
time, we train an additional sequence-to-sequence model to predict, given $P$,
the set of equations that lead to the solution of the problem. At inference
time, the predictions for the guidance model are used to condition the
generation by the QG model. Although the predicted equations often do not lead
to the correct solution of the problem, they help the QG model to generate
more meaningful sub-questions. Figure 6 shows the overall accuracy of the
GPT-2 student models (QA + QG) fine-tuned with Socratic CoT on the GSM8K data
with and without equation conditioning provided by the guide model. We have
extended this guidance mechanism to StrategyQA and SVAMP, where the generation
of subquestions is conditioned on the number of facts (StrategyQA) or steps
(SVAMP) needed to answer the problem.
Methodology | BLEU | BERT $F_{1}$ | # Q
---|---|---|---
No-guidance | 51.5 | 0.78 | 0.42
Guidance | 58.8 | 0.81 | 0.80
Table 4: BLEU, BERT $F_{1}$, and number-of-questions (# Q) comparison between
the question generator model and the Socratic subquestions present in the
GSM8K dataset, using a GPT-2 Large model.
Figure 6: Accuracy of student models (QA + QG) when question generation is
conditioned using the guidance model (Guide) versus non-guided question
generation (No guide). Ans only represents the baseline. All models are GPT-2
versions.
##### Eliminating the need for a subquestion module.
We have experimented with an alternative training solution that does not
involve a question-generation model. This strategy aims to improve the
supervision for fine-tuning a small model through subquestioning, but without
relying on the presence of subquestions at test time. The procedure consists
of training the student model to generate the entire chain of steps leading to
an intermediate answer. That is, when the sub-question $q^{(1)}$ is asked, the
model is trained to generate the answer $s^{(1)}$, but when $q^{(j)}$ is
asked, the model is trained to generate the chain of thought reasoning
$\\{s^{(1)},s^{(2)},\dots,s^{(j)}\\}$ (instead of just $s^{(j)}$). This
eliminates the need for the intermediate sub-questions at inference time, as
the model is trained to implicitly decompose the main problem into smaller
reasoning steps. However, this method leads to significant performance
degradation (results are reported in Table 5), highlighting the need for
subquestions at inference time.
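The construction of training targets for this ablation can be sketched as follows. The pairing format is an assumption for illustration; the defining feature is that the target at step $j$ is the whole chain $\{s^{(1)},\dots,s^{(j)}\}$:

```python
def chain_targets(problem, subquestions, solutions):
    """Training pairs for the no-QG ablation: when q^(j) is asked, the
    target is the full chain s^(1), ..., s^(j) rather than just s^(j),
    so the model learns to decompose the problem implicitly."""
    pairs = []
    for j, question in enumerate(subquestions):
        pairs.append((problem + " " + question,
                      " ".join(solutions[: j + 1])))
    return pairs
```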
##### Example outputs
In Figures 5 and 7, we report example outputs predicted by GPT-2 models for a
set of GSM8K and SVAMP problems.
GPT-2 | No SubQ | SubQ with QG
---|---|---
Small | 2.70 | 5.98
Medium | 7.20 | 11.57
Large | 8.18 | 17.89
Table 5: Accuracy comparison (in %) of student models trained with (SubQ with
QG) and without (No SubQ) a question generation model on GSM8K.
Figure 7: Example of predictions generated by a GPT-2 Medium model fine-tuned
with GT steps and Socratic CoT on the SVAMP dataset.
## 7 Conclusion
The chain-of-thought style of step-by-step reasoning has proven to be very
effective for reasoning in LLMs. In this work, we propose ways to distill
these reasoning capabilities into smaller models and suggest ways to further
improve them by explicitly asking stepwise questions. We demonstrate the
effectiveness of our proposed methodology on three popular multi-step
reasoning datasets, and discuss cases where one method should be preferred
over the other for different datasets.
## Limitations
In our work, we use only one solution from the LLM to distill information into
the student model. Following Wang et al. (2022), multiple subquestion-solution
pairs could instead be sampled, and majority voting could select all pairs
leading to the most frequent answer for distillation into the student models.
Also, due to our computational budget, we used a single prompt to compare CoT
and Socratic CoT; using more prompts (up to 8) might lead to a fairer
comparison and better results Wei et al. (2022b). We leave these experiments
for future work.
## Ethical Considerations
Although this work improves the reasoning capabilities of smaller models, the
models are still not powerful enough to be used in sensitive settings such as
education. We plan to release our code and model checkpoints, but the models
must be used carefully by users, as many generative models, including ours,
are prone to hallucination.
## Acknowledgements
Alessandro Stolfo is supported by Armasuisse Science and Technology through a
CYD Doctoral Fellowship.
## References
* Amini et al. (2019) Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. Mathqa: Towards interpretable math word problem solving with operation-based formalisms. _arXiv preprint arXiv:1905.13319_.
* Ba and Caruana (2014) Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? _Advances in neural information processing systems_ , 27.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901.
* Bruner (1961) Jerome S Bruner. 1961. The act of discovery. _Harvard educational review_ , 31:21–32.
* Cho et al. (2014) Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. _arXiv preprint arXiv:1406.1078_.
* Chowdhery et al. (2022) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. _arXiv preprint arXiv:2204.02311_.
* Chung et al. (2022) Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. _arXiv preprint arXiv:2210.11416_.
* Cobbe et al. (2021) Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. _arXiv preprint arXiv:2110.14168_.
* Eisenstein et al. (2022) Jacob Eisenstein, Daniel Andor, Bernd Bohnet, Michael Collins, and David Mimno. 2022. Honest students from untrusted teachers: Learning an interpretable question-answering pipeline from a pretrained language model. _arXiv preprint arXiv:2210.02498_.
* Geva et al. (2021) Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies. _Transactions of the Association for Computational Linguistics (TACL)_.
* Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015. Distilling the knowledge in a neural network. _arXiv preprint arXiv:1503.02531_ , 2(7).
* Hoffmann et al. (2022) Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. _arXiv preprint arXiv:2203.15556_.
* Hosseini et al. (2014) Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 523–533.
* Huang et al. (2022) Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022. Large language models can self-improve. _arXiv preprint arXiv:2210.11610_.
* Iyer et al. (2022) Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Dániel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, et al. 2022. OPT-IML: Scaling language model instruction meta learning through the lens of generalization. _arXiv preprint arXiv:2212.12017_.
* Klein and Nabi (2019) Tassilo Klein and Moin Nabi. 2019. Learning to answer by learning to ask: Getting the best of GPT-2 and BERT worlds. _arXiv preprint arXiv:1911.02365_.
* Kojima et al. (2022) Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. _arXiv preprint arXiv:2205.11916_.
* Kushman et al. (2014) Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 271–281, Baltimore, Maryland. Association for Computational Linguistics.
* Lewkowycz et al. (2022) Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. 2022. Solving quantitative reasoning problems with language models. _arXiv preprint arXiv:2206.14858_.
* Li et al. (2022) Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen, Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian, Baolin Peng, Yi Mao, et al. 2022. Explanations from large language models make small reasoners better. _arXiv preprint arXiv:2210.06726_.
* Miao et al. (2020) Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing English math word problem solvers. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 975–984, Online. Association for Computational Linguistics.
* Nye et al. (2021) Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. 2021. Show your work: Scratchpads for intermediate computation with language models. _arXiv preprint arXiv:2112.00114_.
* Opedal et al. (2023) Andreas Opedal, Niklas Stoehr, Abulhair Saparov, and Mrinmaya Sachan. 2023. World models for math story problems. In _Findings of the Association for Computational Linguistics: ACL 2023_ , Toronto, Canada.
* Patel et al. (2021) Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math word problems? _arXiv preprint arXiv:2103.07191_.
* Post (2018) Matt Post. 2018. A call for clarity in reporting BLEU scores. In _Proceedings of the Third Conference on Machine Translation: Research Papers_ , pages 186–191, Belgium, Brussels. Association for Computational Linguistics.
* Radford et al. (2019) Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
* Rajani et al. (2019) Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain Yourself! Leveraging Language Models for Commonsense Reasoning. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4932–4942, Florence, Italy. Association for Computational Linguistics.
* Rao and Daumé III (2019) Sudha Rao and Hal Daumé III. 2019. Answer-based Adversarial Training for Generating Clarification Questions. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_.
* Roy et al. (2015) Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. _Transactions of the Association for Computational Linguistics_ , 3:1–13.
* Shridhar et al. (2022) Kumar Shridhar, Jakub Macina, Mennatallah El-Assady, Tanmay Sinha, Manu Kapur, and Mrinmaya Sachan. 2022. Automatic generation of socratic subquestions for teaching math word problems. _arXiv preprint arXiv:2211.12835_.
* Shwartz et al. (2020) Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Unsupervised commonsense question answering with self-talk. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 4615–4629, Online. Association for Computational Linguistics.
* Snell et al. (2022) Charlie Snell, Dan Klein, and Ruiqi Zhong. 2022. Learning by distilling context. _arXiv preprint arXiv:2209.15189_.
* Srivastava et al. (2022) Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. _arXiv preprint arXiv:2206.04615_.
* Stolfo et al. (2022) Alessandro Stolfo, Zhijing Jin, Kumar Shridhar, Bernhard Schölkopf, and Mrinmaya Sachan. 2022. A causal framework to quantify the robustness of mathematical reasoning with language models. _arXiv preprint arXiv:2210.12023_.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. _Advances in neural information processing systems_ , 30.
* Wang et al. (2022) Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. _ArXiv_ , abs/2203.11171.
* Wei et al. (2022a) Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022a. Emergent abilities of large language models. _arXiv preprint arXiv:2206.07682_.
* Wei et al. (2022b) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. _arXiv preprint arXiv:2201.11903_.
* Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 38–45, Online. Association for Computational Linguistics.
* Zhang et al. (2020) Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan Wang, Jie Shao, and Ee-Peng Lim. 2020. Graph-to-tree learning for solving math word problems. Association for Computational Linguistics.
* Zhang et al. (2019) Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. _arXiv preprint arXiv:1904.09675_.
* Zhou et al. (2022) Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2022. Least-to-most prompting enables complex reasoning in large language models. _ArXiv_ , abs/2205.10625.
Let’s generate sub-questions for these problems. Use exactly one operation per
step.
—
Q: Zoe was unboxing some of her old winter clothes . She found number0 boxes
of clothing and inside each box there were number1 scarves and number2 mittens
. How many pieces of winter clothing did Zoe have total ?
SQ1: How many pieces of winter clothing did Zoe have in each box?
A1: Zoe had <<+ number1 number2>> pieces of winter clothing in each box.
SQ2: How many pieces of winter clothing did Zoe have total ?
A2: Zoe had <<* number0 + number1 number2>> pieces of winter clothing in
total.
—
Q: Katie picked number0 tulips and number1 roses to make flower bouquets . If
she only used number2 of the flowers though , how many extra flowers did Katie
pick ?
SQ1: How many flowers did Katie pick in total?
A1: Katie picked <<+ number0 number1>> flowers in total.
SQ2: How many extra flowers did Katie pick ?
A2: Katie picked <<- + number0 number1 number2>> extra flowers.
—
Q: Conner has number0 dollars in his bank account . Every month he spends
number1 dollars . He does not add money to the account . How much money will
Conner have in his account after number2 months ?
SQ1: How much money does Conner spend in total?
A1: Conner spends <<* number1 number2>> dollars.
SQ2: How much money will Conner have in his account after 8.0 months ?
A2: After 8.0 months, Conner will have <<- number0 * number1 number2>> dollars.
For each of the following topics, generate intermediate answers to the
subquestions leading to the final answer.
—
Topic: Albany, Georgia (City in Georgia, United States)
Will the Albany in Georgia reach a hundred thousand occupants before the one
in New York?
Albany, GA has around 75,000 people.
Albany, NY has almost 100,000 people.
The difference is 100,000-75,000=25,000
The difference is 100,000-100,000=0
No, 25,000 is not smaller than 0.
The final answer is NO.
—
Topic: The Police (English rock band)
Could the members of The Police perform lawful arrests?
Only law enforcement officers can perform lawful arrests.
No, the members of The Police (rock band) are not law enforcement officers.
The final answer is NO.
—
Topic: Wonder Woman (2017 film) (American superhero film directed by Patty Jenkins)
Is a Boeing 737 cost covered by Wonder Woman (2017 film) box office receipts?
The average cost of a US Boeing 737 plane is 1.6 million dollars.
Wonder Woman (2017 film) grossed over 800 million dollars at the box office.
Yes, 800 is larger than 1.6.
The final answer is YES.
Table 6: Exemplars included in the few-shot prompt for the decomposition of
the problems from the ASDiv (upper row) and StrategyQA (lower row) datasets.
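The `<<...>>` annotations in the ASDiv exemplars above encode each arithmetic step in prefix notation over the `numberN` placeholders. The following minimal evaluator is an illustrative sketch of how such expressions could be resolved once the placeholders are bound to concrete values; the evaluator and its operator set are assumptions, not part of the paper.

```python
# Minimal evaluator for prefix-notation expressions such as
# "* number0 + number1 number2" (illustrative; not from the paper).

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def eval_prefix(tokens, bindings):
    """Recursively evaluate a prefix expression; returns (value, remaining tokens)."""
    head, rest = tokens[0], tokens[1:]
    if head in OPS:
        left, rest = eval_prefix(rest, bindings)
        right, rest = eval_prefix(rest, bindings)
        return OPS[head](left, right), rest
    return bindings[head], rest

def evaluate(expr, bindings):
    value, rest = eval_prefix(expr.split(), bindings)
    assert not rest, "trailing tokens in expression"
    return value

# "* number0 + number1 number2": 4 boxes, each with 2 scarves and 3 mittens
print(evaluate("* number0 + number1 number2",
               {"number0": 4, "number1": 2, "number2": 3}))  # 20
```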
# Data-Efficient Finetuning Using Cross-Task Nearest Neighbors
Hamish Ivisonα Noah A. Smithαβ Hannaneh Hajishirziαβ Pradeep Dasigiα
αAllen Institute for AI
βPaul G. Allen School of Computer Science & Engineering, University of
Washington
<EMAIL_ADDRESS>
###### Abstract
Obtaining labeled data to train a model for a task of interest is often
expensive. Prior work shows training models on multitask data augmented with
task descriptions (prompts) effectively transfers knowledge to new tasks.
Towards efficiently building task-specific models, we assume access to a small
number (32–1000) of unlabeled target-task examples and use those to retrieve
the most similar labeled examples from a large pool of multitask data
augmented with prompts. Compared to the current practice of finetuning models
on uniformly sampled prompted multitask data (e.g., FLAN, T0), our approach of
finetuning on cross-task nearest neighbors is significantly more data-
efficient. Using only 2% of the data from the P3 pool without any labeled
target-task data, our models outperform strong baselines trained on all
available data by 3–30% on 12 out of 14 datasets representing held-out tasks
including legal and scientific document QA. Similarly, models trained on
cross-task nearest neighbors from SuperNaturalInstructions, representing about
5% of the pool, obtain comparable performance to state-of-the-art models on 12
held-out tasks from that pool. Moreover, the models produced by our approach
also provide a better initialization than single multitask finetuned models
for few-shot finetuning on target-task data, as shown by a 2–23% relative
improvement over few-shot finetuned T0-3B models on 8 datasets. We publicly
release our code at https://github.com/allenai/data-efficient-finetuning.
## 1 Introduction
Figure 1: Overview of the DEFT method. Given some unlabeled target-task
instances, we find the most similar instances in a large pool of multitask
data. We train a model on these instances. If we have access to labeled data,
we optionally few-shot finetune the DEFT model.
Finetuning large models with data from a diverse set of tasks, augmented to
include brief descriptions of the tasks (i.e., prompts) has been shown to help
models generalize to unseen tasks (Wei et al., 2021a; Sanh et al., 2021). This
cross-task generalization capability is particularly helpful in cases where it
is expensive to collect labeled target task training sets. Prior work trained
single models with as much prompted data as possible — for example, Sanh et
al. (2021) train a model on roughly 11 million instances (counting different
prompt variations). The training datasets were selected without using any
information about the target tasks, with the goal of allowing models to
generalize to new tasks from instructions alone, making the evaluation “zero-
shot”. However, it is unclear if all the training data is required for good
performance on any given single target task. Furthermore, given that neural
network models have previously been shown to suffer from negative interference
(wherein training on more datasets results in worse performance on certain
downstream tasks) in multitask setups (Aribandi et al., 2022) and benefit from
pretraining on domain-relevant data (Gururangan et al., 2020; Phang et al.,
2018a), it is possible that training only on relevant prompted data could
further improve task generalization while being data-efficient.
Based on this hypothesis, we seek to use unlabeled data to find relevant
subsets of training data in the massive pool of multitask data, allowing
performance similar to, or better than, that of training on the entire pool for
a given target task. Manually finding relevant training data in a massive pool
of data is infeasible since it is not obvious which of the source tasks are
relevant for a given target task, and which instances are most relevant for
target task generalization within a source task dataset (see Section 5.1).
Hence we rely on a simple method to automatically select these subsets.
Additionally, as only some samples within a given dataset may be relevant to a
target task, we select per-instance rather than per-dataset, unlike prior
work, which tries to identify useful datasets for transfer learning (Aribandi
et al., 2022; Phang et al., 2018a) and train on all data within the chosen
datasets. We use a setup similar to work examining retrieval-augmented cross-
task generalization (Lin et al., 2022): we assume access to a small number of
unlabeled target task instances and use these to retrieve cross-task nearest
neighbors—labeled instances from the massive pool of data most similar to our
unlabeled target task instances. The similarity is computed as the distance
between the representations produced by the encoder of a pretrained seq2seq
model. Unlike prior work, we then finetune target task specific models on
these neighbors alone, without using any target task specific labeled data or
any extra data from the pool of multitask data. We hope that the similarity
between the cross-task neighbors and our target task data will enable better
generalization to our target task, with dissimilar examples that may cause
interference removed from the training mixture. We ultimately aim to produce
models that perform at least as well as models trained on the entire multitask
pool despite being trained on a fraction of data, greatly reducing the cost of
training through the use of a few cheap-to-collect unlabeled examples.
We run experiments with T5 (Raffel et al., 2020) models, and use Public Pool
of Prompts (P3; Sanh et al., 2021) as the main pool of prompted multitask data
from which to retrieve cross-task nearest neighbors. We evaluate on the 11
datasets originally used to evaluate T0 (a collection of natural language
understanding and commonsense tasks), as well as 3 additional datasets with
varied domains (e.g., legal, NLP domains). We also experiment with the train
set of SuperNaturalInstructions (SNI; Wang et al., 2022) as a pool of
multitask data, and evaluate on 12 tasks from SNI’s held-out set of test
tasks. Our findings are as follows:
* •
For 12 out of 14 target datasets, we find that their cross-task nearest
neighbors, at most 2% of instances retrieved from P3, are much more relevant
as training data than the rest of the P3 pool—training T5 models, sometimes
even variants smaller than T0-3B, on these subsets yields models with
performance 3–30% better than T0-3B evaluated zero-shot. Similarly, models
trained on cross-task neighbors in SuperNaturalInstructions (at most 5% of the
pool), perform similarly to state-of-the-art models trained on all available
data.
* •
For some target tasks on which T0-3B performs close to random chance, T5
models of the same size trained using cross-task nearest neighbors perform
significantly above chance, supporting our hypothesis that massive multitask
prompted training could lead to negative interference between tasks.
* •
When target task labeled data is available for few-shot finetuning, we find
that T5 models trained with cross-task nearest neighbors provide better
initialization for parameter-efficient finetuning methods than T0-3B,
performing 2–23% better than T0-3B with few-shot finetuning across 10 out of
11 datasets.
* •
An analysis of what relevant data gets retrieved shows that most of the tasks
in the massive pool of multitask data are not retrieved for any target tasks,
confirming our hypothesis that only a small subset of data within the pool is
relevant to any given target task.
* •
We compare model performance from DEFT with that from full finetuning across a
variety of labeling budgets and find that DEFT is more effective for smaller
labeling budgets.
These findings suggest that instead of training single models on all available
data, multi-task data can be used much more efficiently towards improving
model performance on specific target tasks by selecting training data relevant
to those tasks, even with a simple method for identifying such data.
## 2 Related Work
#### Multi-task transfer models
Training on large multi-task mixtures is a common trend within NLP, with most
existing approaches first training a pretrained language model on a large
collection of tasks, and then evaluating these models in either zero- or few-
shot settings on a collection of held-out datasets (Wei et al., 2021a; Sanh et
al., 2021; Khashabi et al., 2020; Mishra et al., 2021; Aribandi et al., 2022).
Most approaches do not customise their task selection to downstream tasks and
assume no knowledge of the target tasks ahead of time, instead focusing on
building a single model most applicable to any arbitrary evaluation task. In
contrast, we show that if we assume access to unlabeled target task instances,
we can make much better use of the multitask data, selecting only instances
useful to a given task. Relatedly, Vu et al. (2020) propose a method for using
gradients from labeled task data to construct task embeddings for predicting
task transferability. Our method instead uses unlabeled data, which is much
cheaper and easier to collect, and does not use gradients, making it easier to
scale to large models such as T5-XL.
#### Retrieval-based methods for NLP
Adding retrieval components to language models has been shown (Khandelwal et
al., 2019; Guu et al., 2020; Lewis et al., 2020) to augment their
generalization capabilities by externalizing memorization. In contrast to
prior work in this direction that mostly focused on language modeling as the
end task, we evaluate on a variety of language understanding tasks. The work
from Shi et al. (2022) used retrieval-based methods for classification tasks
by heuristically mapping the label space of the end-tasks to that of the
predicted next words of the nearest neighbors from a language model. We
instead finetune the models on the nearest neighbors. Lin et al. (2022) also
use unlabeled examples to retrieve relevant data for improving performance but
focus on further finetuning multi-task models. They use representations from
the encoder of a multi-task finetuned model (e.g. T0) to retrieve subsets of
its training data closest to the instances of a target dataset and further
finetune the model to specialize it for the target task. While their results
suggest that using a multi-task model is crucial for good retrieval
performance, we show gains using a model before multitask finetuning. Our
setup allows for data-efficiency via pruning the amount of multi-task data
used during training, letting a practitioner who only cares about specific
downstream tasks train strong task-specific models using much less data and
compute than if they trained on the entire pool of multi-task data.
#### Parameter-efficient fine-tuning
In contrast to work that focused on finetuning fewer parameters in large
models to adapt them to new tasks (Houlsby et al., 2019; Hu et al., 2021; Liu
et al., 2022), our proposal is a data-efficient training method for obtaining
task-specific models without using target task labels. Our method is
complementary to parameter-efficient methods, and they can be used in
conjunction, as shown in section 4.3.
#### Instance attribution
Our approach works by identifying the most relevant training examples for a
given data point, which is called instance attribution. Prior work (Koh and
Liang, 2017; Yeh et al., 2018; Pruthi et al., 2020; Han and Tsvetkov, 2022)
used instance attribution methods to interpret predictions of neural network
models. These methods generally relied on the gradients of the model to
identify the effect specific data points, either in the pretraining or the
finetuning stage, have on the model’s predictions. Our method for identifying
cross-task neighbors is simpler because we do not use gradients and we do not
even rely on the labels of the data. Results from Pezeshkpour et al. (2021)
show that instance attribution based on similarity between the model’s
representations is comparable to gradient-based approaches in terms of finding
the most important training data points.
#### Making use of auxiliary data
Training on intermediate data has been shown to improve performance on target
NLP tasks (Phang et al., 2018b). Recent work has shown that intermediate
datasets can be selected by embedding-based methods (Vu et al., 2020; Poth et
al., 2021; Kung et al., 2021). Most prior work relies on expensive embedding
computation methods, either training a model to generate task embeddings, or
using methods that are difficult to scale to large models (e.g., the Fisher
information matrix used by Vu et al. (2020)). In contrast, we use an extremely
cheap embedding method (mean-pooling over an encoder), and additionally
consider sample-wise selection over a massive pool of tasks, as opposed to
selecting entire tasks.
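The cheap embedding method mentioned above is mean-pooling over an encoder's token representations. A minimal numpy sketch, where the array shapes and the padding-mask convention are assumptions for illustration:

```python
import numpy as np

def mean_pool(token_states, attention_mask):
    """Mean-pool token representations into a single instance embedding.

    token_states:   (seq_len, hidden) array of encoder outputs
    attention_mask: (seq_len,) array, 1 for real tokens, 0 for padding
    """
    mask = attention_mask[:, None].astype(float)   # (seq_len, 1)
    summed = (token_states * mask).sum(axis=0)     # sum over real tokens only
    count = mask.sum()                             # number of real tokens
    return summed / count

# Two real tokens and one padding token; hidden size 3
states = np.array([[1.0, 2.0, 3.0],
                   [3.0, 4.0, 5.0],
                   [9.0, 9.0, 9.0]])  # padding row is ignored by the mask
mask = np.array([1, 1, 0])
print(mean_pool(states, mask))  # [2. 3. 4.]
```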
## 3 Data Efficient Finetuning across Multiple Tasks
Given a large collection of labeled prompted data (i.e., data converted into a
text-to-text form, with task instructions included in the input, e.g., P3),
our core hypothesis is that some tasks in this massive pool of data are more
similar to a given target task than others. Given a target task, we assume we
have access to a small amount of unlabeled target task data, which is often
much easier and cheaper to collect than labeled data (see Section 5.2). Our
aim is to find a relevant subset of data from our pool given a single target
task, ideally allowing us to train a model using this subset that outperforms
a similar model trained on the entire pool of data.
Manually identifying the relevant subsets of these datasets is not feasible
since task boundaries are usually not clearly defined in NLP, and it is hard
to interpret what skills get transferred when a model is trained on one
dataset and evaluated on another. Hence, we use the similarity between the
pretrained model’s representations to compute relevance. We encode all
instances in the large pool of multitask data with a pretrained language model
and build a search index over the resulting representations. Given small
amounts of unlabeled target task data, we retrieve relevant multitask subsets
from the index, which we call cross-task nearest neighbors of the target
tasks. We then build task-specific models by finetuning the pretrained models
on the cross-task neighbors. We refer to this approach as Data-Efficient
FineTuning (DEFT).
We evaluate our approach both in cases where no labeled data is available, and
when a few (20–70) annotated labels are available. In the former case, we
simply use the unlabeled data for retrieval and evaluate the resulting DEFT
model “zero-shot” on the target task. In the latter case, we first train a
DEFT model and then perform parameter-efficient few-shot tuning using IA3 (Liu
et al., 2022) to make use of the labeled data.
#### Retrieving cross-task nearest neighbors
To retrieve the most similar instances to a given set of target task
instances, we first build an index over the massive pool of multi-task data
for efficient retrieval, encoding samples using a pretrained encoder. Then,
given a set of query instances $Q$, we retrieve our subset of similar data by
computing a union of the $k$-nearest neighbors to all $q\in Q$. Note that
there may be an overlap between the sets of nearest neighbors retrieved for
different queries, and hence $|R|\leq|Q|\cdot k$, where $R$ is the retrieved
subset. Empirically, we find $|R|$ tends to be 5–50$\times$ smaller than
$|Q|\cdot k$ due to this overlap.
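The retrieval step above can be sketched as follows. The paper uses an approximate FAISS index; this brute-force L2 stand-in is a sketch of the same union-of-neighbors logic, with the embedding dimensions and pool sizes chosen arbitrarily:

```python
import numpy as np

def cross_task_neighbors(pool, queries, k):
    """Union of the k nearest pool instances over all query instances.

    pool:    (n_pool, d) embeddings of the multitask pool
    queries: (n_query, d) embeddings of unlabeled target-task instances
    Brute-force L2 search; stands in for the HNSW + product-quantization
    FAISS index used in the paper.
    """
    retrieved = set()
    for q in queries:
        dists = np.linalg.norm(pool - q, axis=1)
        retrieved.update(np.argsort(dists)[:k].tolist())  # k-NN of this query
    return sorted(retrieved)

rng = np.random.default_rng(0)
pool = rng.normal(size=(1000, 8))
queries = rng.normal(size=(5, 8))
R = cross_task_neighbors(pool, queries, k=20)
# Overlap between per-query neighbor sets gives |R| <= |Q| * k
assert 20 <= len(R) <= 5 * 20
```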
#### Data-Efficient FineTuning (DEFT)
Given a retrieved set of data $R$, we can then finetune a pretrained language
model on the mixture of data using a cross-entropy loss, as all data are in a
unified text-to-text prompted format. This training is similar to the
multitask prompted training of T0 (Sanh et al., 2021). We refer to models
trained on $R$ as DEFT models. In settings where we have no labeled data
available, we directly evaluate these models on our target tasks.
#### Parameter-efficient few-shot finetuning
For the case where a few annotated labels are available, we make use of
parameter-efficient few-shot finetuning. For this, we take our multi-task
trained DEFT checkpoints and finetune them using IA3 (Liu et al., 2022) on
task-specific few-shot data. Concretely, given a trained transformer model, we
introduce three vectors $l_{\text{k}}$, $l_{\text{v}}$, and $l_{\text{ff}}$
into the attention and feed-forward mechanisms of each layer:
$$\text{Attn}(Q,K,V)=\text{softmax}\left(\frac{Q\,(l_{\text{k}}\odot K^{T})}{\sqrt{d_{k}}}\right)(l_{\text{v}}\odot V)\qquad(1)$$

$$\text{FFN}(x)=(l_{\text{ff}}\odot f(W_{1}x))\,W_{2}\qquad(2)$$
We initialize these vectors with all ones and only update them during the few-
shot finetuning. This provides an efficient method of further training our
DEFT models on task-specific data and has been shown to outperform full
finetuning in the few-shot setting (Liu et al., 2022).
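Equations (1) and (2) can be sketched in numpy as below. The shapes are illustrative assumptions; the key property shown is that with the all-ones initialization the IA3-modified layers reduce exactly to the vanilla ones, so the untrained vectors leave the model unchanged:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ia3_attention(Q, K, V, l_k, l_v):
    """Eq. (1): element-wise rescaling of keys and values by learned vectors."""
    d_k = Q.shape[-1]
    scores = Q @ (l_k * K).T / np.sqrt(d_k)  # l_k broadcasts over the key features
    return softmax(scores) @ (l_v * V)

def ia3_ffn(x, W1, W2, l_ff, f=np.tanh):
    """Eq. (2): rescaling of the intermediate feed-forward activations."""
    return (l_ff * f(x @ W1.T)) @ W2.T

# With all-ones vectors (their initialization), IA3 equals the vanilla layer.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
vanilla = softmax(Q @ K.T / np.sqrt(8)) @ V
assert np.allclose(ia3_attention(Q, K, V, np.ones(8), np.ones(8)), vanilla)
```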
## 4 Experiments
### 4.1 Setup & Hyperparameters
#### Indexing P3
We construct an index of P3 data using FAISS (Johnson et al., 2019), a library
for efficient similarity search over dense vectors. We use a Hierarchical
Navigable Small World index (Malkov and Yashunin, 2020) to approximate the
$k$-nearest neighbor search. To keep the size of the index manageable, we use
Product Quantization (Jegou et al., 2010) and reduce the dimensionality of the
encoded representations using an optimized product quantization transform (Ge
et al., 2013). We encode our instances using the T5 v1.1 model with extra
language model pretraining introduced by Lester et al. (2021). For all
experiments, we match the size of the encoder used to index data and the size
of downstream models trained on this data (e.g., if we train a T5-XL sized
model, we use T5-XL to index and retrieve the data). We use the subset of P3
used to train T0 as our pool of multitask data unless otherwise stated.
#### DEFT
Following T0, we start with the T5 v1.1 model with extra language model
pretraining. Unless otherwise stated, we use the ‘XL’ variant with 3 billion
parameters across our experiments. When training on cross-task nearest
neighbors, we train for 5 epochs with a batch size of 8 using the Adam
optimizer (Kingma and Ba, 2015) and a learning rate of 0.00005. We use a
linear warmup schedule for the first 10% of the total training steps and
linear decay for the rest of training.
#### Few-shot training
We follow the settings suggested by Liu et al. (2022): training for 1000 steps
with a batch size of 8. We use the Adafactor optimizer with a maximum learning
rate of 0.003 and a linear decay schedule with 60 warmup steps. We only update
the IA3 vectors during training.
#### Evaluation datasets
We evaluate on the set of 11 datasets used to evaluate T0 (RTE, ANLI R1/2/3,
CB, HellaSwag, Story Cloze, WinoGrande, WSC, COPA, WiC), which include natural
language inference and commonsense reasoning datasets. In addition to the T0
evaluation datasets, we also evaluate on three additional datasets from
diverse domains: CaseHold (Chalkidis et al., 2022; Zheng et al., 2021), a
legal QA dataset, DROP (Dua et al., 2019), a QA dataset that requires discrete
operations, and a subtask of Qasper (Dasigi et al., 2021), a QA dataset over
entire NLP papers. Qasper has two subtasks—selecting paragraphs in the paper
that provide evidence for answering the questions, and generating the answers.
We focus on the former because it was shown to be the more difficult of the
two, and convert it into a binary classification task where the inputs are
combinations of questions and single paragraphs. We refer to this subtask as
QasperEvidence henceforth and evaluate model performance in terms of document-
level F1 as described by Dasigi et al. (2021). For evaluation and few-shot
training, we convert all datasets to a prompted text-to-text format333For
example, ANLI instances were converted to ‘{premise} Question: {hypothesis}
True, False, or Neither?’, with the answers as ‘true’, ‘false’, or ‘neither’.
either using the prompt templates from P3 for the T0 evaluation datasets or an
original prompt for the other datasets. For CaseHold, DROP, and QasperEvidence
we randomly split out 1000 examples from the existing validation sets to use
for retrieval, and use the remaining data for evaluation. For all other
datasets, we retrieve using up to 1000 randomly chosen examples from the
training splits (if a dataset has fewer than 1000 training examples, we use all
available training data for retrieval). We provide further details in Appendix
B.
#### Model evaluation
Following Sanh et al. (2021) and Brown et al. (2020), we calculate accuracy on
all datasets except DROP using rank classification, where we pick the answer
with the lowest loss across possible answer choices given the instance input as
the model prediction. As DROP is a QA dataset that requires selecting spans or
generating numbers, and does not have answer choices, we generate the
prediction using greedy decoding.
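Rank classification as described above amounts to scoring every candidate answer by the model's loss and taking the minimum. A sketch, where `loss_fn` is a hypothetical stand-in for the per-choice seq2seq cross-entropy:

```python
def rank_classify(input_text, choices, loss_fn):
    """Return the answer choice to which the model assigns the lowest loss.

    loss_fn(input_text, choice) stands in for the cross-entropy a seq2seq
    model assigns to `choice` as the target sequence for `input_text`.
    """
    return min(choices, key=lambda c: loss_fn(input_text, c))

# Toy losses standing in for model scores on an NLI-style prompt
prompt = "It is raining. Question: The ground is wet. True, False, or Neither?"
toy_losses = {"true": 0.4, "false": 1.6, "neither": 0.9}
prediction = rank_classify(prompt, ["true", "false", "neither"],
                           lambda _x, c: toy_losses[c])
print(prediction)  # true
```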
#### Baselines
For zero-shot evaluation, we primarily compare against 4 baselines: 1) T0-3B,
trained on about 10% of the P3 data (Sanh et al. (2021) report that they train
T5-XL on at most 500K instances per prompted dataset in P3, which amounts to
about 10% of the pool); 2) Random, a model trained on a random selection of P3
data the same size as the subsets selected by DEFT; 3) T5-XL, not finetuned any
further; and 4) BM25, which uses BM25 (Robertson and Zaragoza, 2009) for
retrieval instead of dense representations (we use Pyserini (Lin et al., 2021)
with default settings for the BM25 index and retrieve the same amount of data
as the subsets retrieved by DEFT). For few-shot settings, we compare T0-3B with
additional few-shot training against DEFT checkpoints trained on subsets chosen
using (a) 1000 unlabeled instances and (b) the instances used in the few-shot
training without labels. This means (b) uses no additional data compared to
T0-3B with few-shot finetuning.
### 4.2 Data-Efficient Fine-Tuning vs. Massive Multitask Training
Task | DEFT-XL | T0-3B | Rand-XL | Rand-Bal | T5-XL | BM25-XL | DEFT-base | Rand-base | T5-base | Maj. Class
---|---|---|---|---|---|---|---|---|---|---
CaseHold | 37.2 | 30.9 | 19.0 | 38.7 | 11.4 | 27.9 | 18.9 | 17.5 | 11.4 | 6.6
DROP | 31.0 | 27.4 | 24.3 | 27.6 | 11.3 | 22.6 | 21.3 | 18.0 | 4.0 | -
QasperEv. | 28.5 | 19.9 | 17.9 | 23.2 | 8.2 | 20.3 | 15.9 | 11.0 | 8.2 | 19.9
RTE | 74.0 | 70.4 | 78.3 | 78.0 | 53.1 | 74.3 | 61.7 | 61.0 | 52.0 | 53.4
ANLI R1 | 39.8 | 35.0 | 35.3 | 40.0 | 32.9 | 37.5 | 29.6 | 33.3 | 32.9 | 33.4
ANLI R2 | 37.5 | 32.6 | 35.3 | 36.9 | 33.5 | 36.9 | 32.5 | 22.3 | 33.5 | 33.4
ANLI R3 | 41.4 | 35.3 | 38.0 | 41.7 | 33.8 | 41.1 | 31.6 | 33.1 | 32.7 | 33.5
CB | 60.7 | 58.9 | 60.7 | 55.4 | 44.6 | 50.0 | 50.0 | 48.2 | 44.6 | 50.0
HellaSwag | 33.1 | 28.2 | 27.4 | 29.3 | 23.0 | 28.7 | 25.9 | 25.0 | 23.0 | 25.7
StoryCloze | 95.3 | 86.5 | 79.1 | 94.1 | 53.0 | 82.3 | 83.5 | 57.4 | 53.0 | 51.4
WinoGrande | 50.6 | 50.0 | 49.2 | 49.2 | 50.8 | 50.1 | 50.8 | 50.1 | 50.8 | 50.4
WSC | 39.4 | 50.0 | 47.1 | 46.2 | 36.3 | 36.5 | 42.3 | 36.5 | 36.3 | 63.5
COPA | 95.0 | 74.0 | 80.0 | 88.0 | 60.0 | 79.0 | 66.0 | 44.0 | 60.0 | 55.0
WiC | 54.9 | 51.1 | 51.4 | 57.5 | 51.7 | 51.9 | 49.4 | 50.0 | 51.7 | 50.0
Average | 51.3 | 46.5 | 45.9 | 50.4 | 35.9 | 45.7 | 41.4 | 37.0 | 35.3 | -
Table 1: Performance of XL (3B) and base size ($\sim$250 million) models
across datasets. ‘Rand’ refers to performance of models trained on randomly
chosen P3 subsets of equivalent size to the ones chosen by DEFT, with ‘Rand-Bal’
using uniform random sampling across tasks for subset selection. ‘T5’
refers to performance of a non-finetuned T5 model. ‘BM25’ refers to models
trained on subsets of equivalent size to DEFT subsets from P3, retrieved using
BM25. DROP and QasperEv. results are F1 scores, CaseHold results are micro F1,
and all others are accuracy.
We first assume we have access only to unlabeled task-specific data and cannot
train on any target-task labeled data. We sample 1000 unlabeled instances per
dataset and retrieve the 500 nearest neighbors of each instance (for T5-base
we retrieve 2500 nearest neighbors, as more retrieved neighbors led to better
performance). We then train dataset-specific models on each of the retrieved
sets. As seen in Table 1, our DEFT-XL models generally outperform T0-3B and
other baselines, with a median relative improvement of 13% over T0-3B (the
exceptions, WSC and RTE, have small evaluation sets and large variance, see
Appendix C, leading us to believe these differences are not significant).
Base-sized models also improve over baselines in Table 1: the DEFT-base
models have a median relative improvement of 8% over
the random baseline. All DEFT models are trained on subsets of P3 consisting
of 0.1–2% of all P3 data. This confirms our hypothesis that training on a
well-chosen subset of P3 is more beneficial for target task performance than
training on a uniform sample of all available data. We also note that using
dense representations appears crucial, as using BM25 for retrieval
underperforms most baselines. Our results suggest that a general language
model encoder can still retrieve relevant cross-task neighbors, contrary to
the claims made by Lin et al. (2022).
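The retrieval step itself reduces to nearest-neighbor search over instance embeddings. The sketch below is a minimal brute-force version using NumPy cosine similarity; the actual pipeline would use an approximate-nearest-neighbor index over encoder representations, and all names and data here are illustrative:

```python
import numpy as np

def retrieve_cross_task_subset(query_embs, pool_embs, k):
    """Indices of the union of each query's k nearest pool examples."""
    # Normalize rows so that dot products equal cosine similarities.
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    p = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    sims = q @ p.T                              # (num_queries, pool_size)
    topk = np.argsort(-sims, axis=1)[:, :k]     # k best pool indices per query
    return sorted(set(topk.ravel().tolist()))   # deduplicated training subset

rng = np.random.default_rng(0)
pool = rng.normal(size=(1000, 16))                    # stand-in pool embeddings
queries = pool[:5] + 0.01 * rng.normal(size=(5, 16))  # queries near pool[0..4]
subset = retrieve_cross_task_subset(queries, pool, k=3)
```

Training data for the target task is then the union of these per-query neighbor sets, which is why the retrieved subset grows with both the query count and the number of neighbors.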
Remarkably, DEFT-XL outperforms the majority baselines on two target datasets
(QasperEvidence, ANLI R2) where T0-3B does not, and DEFT-base on one (COPA).
This observation further confirms that multitask models trained on uniformly
sampled data might be suffering from negative interference between tasks.
We run a similar experiment with SuperNaturalInstructions (SNI; Wang et al.,
2022), a recent instruction-tuning dataset, as our pool of multitask data (for
a fair comparison with Tk-Instruct, we use the split of SNI used by Wang et
al. (2022), with only 100 training samples per task, as our underlying pool),
and evaluate on a set of 12 diverse held-out test tasks. We use the same pool of
data used to train Tk-Instruct (Wang et al., 2022), which consists of 100
examples from each English-language task in SNI. Notably, this means that DEFT
has a much smaller pool of data to retrieve over compared to P3 (75K vs. 100M
examples). We find in Table 2 that DEFT models are able to achieve performance
similar to a model trained on all data, with each DEFT model only trained on
5% of the total available data. DEFT models also significantly outperform
training on randomly-chosen subsets. See Appendix E for more details.
Model | Avg. RougeL | Avg. # Training Samples
---|---|---
DEFT-XL | 49.2 | 3523
Rand-XL | 45.7 | 3523
Tk-Instruct | 50.7 | 75317
Table 2: Performance of models over 12 held-out tasks from SNI. Models are
trained on data retrieved from SNI (DEFT, Rand), or all SNI data (Tk-
Instruct).
### 4.3 Few-shot Finetuning of DEFT Models
Task | T0-3B+IA3 | T5+IA3 | Rand+IA3 | Rand-Bal+IA3 | DEFT-Few (1kQ) | DEFT-Few (20-70Q)
---|---|---|---|---|---|---
RTE | $77.5_{2.0}$ | $57.0_{4.3}$ | $\mathbf{83.3_{1.1}}$ | $82.9_{1.0}$ | $79.4_{1.3}$ | $81.3_{1.6}$
ANLI R1 | $44.9_{3.0}$ | $39.6_{1.8}$ | $43.3_{2.3}$ | $46.5_{0.9}$ | $\mathbf{47.3_{1.4}}$ | $\mathbf{47.3_{1.5}}$
ANLI R2 | $39.5_{1.7}$ | $36.5_{1.4}$ | $40.3_{1.6}$ | $\mathbf{42.9_{1.8}}$ | $40.8_{2.8}$ | $42.2_{2.7}$
ANLI R3 | $40.2_{2.2}$ | $34.8_{1.1}$ | $39.3_{2.3}$ | $\mathbf{44.3_{2.1}}$ | $\mathbf{44.3_{2.1}}$ | $42.9_{1.8}$
CB | $78.9_{3.9}$ | $67.9_{2.5}$ | $81.4_{3.5}$ | $81.4_{2.0}$ | $82.5_{2.6}$ | $\mathbf{84.6_{4.3}}$
HellaSwag | $34.7_{0.6}$ | $27.5_{1.1}$ | $38.1_{1.1}$ | $42.1_{1.6}$ | $42.5_{2.1}$ | $\mathbf{45.9_{1.8}}$
StoryCloze | $93.0_{0.6}$ | $83.0_{3.1}$ | $92.6_{0.8}$ | $95.7_{0.3}$ | $96.2_{0.2}$ | $\mathbf{96.5_{0.2}}$
WinoGrande | $50.6_{1.3}$ | $49.8_{0.8}$ | $51.4_{2.3}$ | $54.0_{2.6}$ | $\mathbf{55.9_{3.0}}$ | $55.2_{3.1}$
WSC | $\mathbf{64.8_{3.5}}$ | $51.0_{1.0}$ | $55.8_{3.0}$ | $61.5_{5.3}$ | $63.3_{5.2}$ | $59.6_{3.8}$
COPA | $82.0_{2.7}$ | $61.6_{4.2}$ | $86.6_{1.7}$ | $91.4_{3.0}$ | $\mathbf{95.4_{1.5}}$ | $92.6_{2.2}$
WiC | $54.9_{1.9}$ | $56.6_{3.0}$ | $54.5_{2.4}$ | $56.2_{2.2}$ | $\mathbf{57.7_{2.9}}$ | $57.4_{2.9}$
Average | 60.1 | 51.4 | 60.6 | 63.6 | 64.1 | 64.1
Table 3: Performance of IA3 few-shot finetuned models using XL-size
checkpoints. For all models we report the mean over 5 runs with the standard
deviation as subscript. We report performance for DEFT-Few models using 1000
unlabeled queries (‘1kQ’) and few-shot queries (‘20-70Q’). We find both
iterations of DEFT-Few perform statistically significantly better ($p<0.05$)
than all baselines. See Section 4.3 for details.
Next, we assume we are able to label a small number of task-specific examples,
and further train our DEFT models. We reuse the XL-size models trained in
Section 4.2 and further train them using the parameter-efficient IA3 on the
few-shot data used by Liu et al. (2022). As seen in Table 3, DEFT models with
few-shot finetuning (‘DEFT-Few (1kQ)’) perform on average 7% better than T0-3B
models with few-shot finetuning (‘T0-3B+IA3’), with statistically significant
gains on 5 datasets. This shows that DEFT models serve as better starting
points for few-shot finetuning than T0-3B, providing similar or better
performance across all datasets despite being exposed to much less training
data. Notably, DEFT-Few significantly outperforms T0-3B+IA3 on WinoGrande, for
which zero-shot DEFT did not significantly outperform zero-shot T0-3B. These
results suggest DEFT models are more amenable to few-shot fine-tuning than
T0-3B. We also find that DEFT-Few performs statistically significantly better
than the strong Rand-Bal baseline with few-shot finetuning, further
highlighting that DEFT is preferable for both zero and few-shot settings.
#### Few-shot retrieval
In this experiment, we evaluate DEFT in a setting where we have access only to
a small number of target-task labeled examples (exactly what is available to
T0-3B+IA3), and no additional unlabeled examples. We construct 5 few-shot sets
for each dataset, for each set retrieve cross-task neighbors using the few-
shot data, finetune T5 models on the retrieved data, and then finally finetune
using IA3 on the labeled few-shot data itself. To make up for the smaller
query set, we retrieve the closest 2000 neighbors per query instance. As seen
in Table 3, this still results in a model that outperforms T0-3B with few-shot
tuning (‘DEFT-Few (20-70Q)’), and overall achieves similar performance to
DEFT-Few (1kQ). Crucially, this shows that DEFT followed by few-shot
finetuning may be a better alternative to few-shot finetuning T0-3B even when
both methods have exactly the same target-task information available.
## 5 Analysis
### 5.1 Cross-Task Retrieval
#### What gets retrieved?
We analyse what source datasets get selected during retrieval for each
evaluation dataset (see Appendix F, Figure 4). We find that for most target
datasets, the majority of source datasets are not selected, further
strengthening our hypothesis that much of the massive multitask pool is not
relevant to a given target task, and no single mixture of datasets is optimal
for all target tasks. We additionally find that no more than 27% of all
instances within any source dataset are retrieved, suggesting that our approach
is also effective at finding relevant subsets of data within large datasets.
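Computing the per-source-dataset retrieval fractions reported above is a simple aggregation over the retrieved indices; a sketch with toy dataset labels (not the actual P3 source datasets):

```python
from collections import Counter

def retrieval_fractions(retrieved_ids, pool_dataset_of, pool_sizes):
    """Fraction of each source dataset's instances that was retrieved.

    `pool_dataset_of` maps a pool index to its source-dataset name;
    `pool_sizes` maps each source-dataset name to its total size.
    """
    counts = Counter(pool_dataset_of[i] for i in retrieved_ids)
    return {name: counts.get(name, 0) / size for name, size in pool_sizes.items()}

# Toy pool: instances 0-9 come from 'nli', instances 10-14 from 'qa'.
dataset_of = {i: ("nli" if i < 10 else "qa") for i in range(15)}
fracs = retrieval_fractions({0, 1, 2, 12}, dataset_of, {"nli": 10, "qa": 5})
# fracs == {'nli': 0.3, 'qa': 0.2}
```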
Figure 2: RTE accuracy (above) and CaseHold micro F1 (below) by number of P3
samples retrieved for DEFT-XL across a varying number of neighbors.
#### Retrieval hyperparameters
When retrieving cross-task data, the amount and quality of data retrieved is
highly dependent on the query size (i.e., the number of task-specific
instances used for retrieval) and number of neighbors (i.e., the number of
cross-task samples retrieved per task-specific instance). In Figure 2, we show
the effect of varying both query size (sweeping from 32 to all training data)
and the number of neighbors (sweeping from 1 to 5000) on dataset performance
on RTE and CaseHold. We find that increasing the amount of data retrieved,
whether through increasing the number of neighbors or query set size, results
in improved performance up to a point, and then either plateaus or decreases,
providing evidence for our hypothesis that using ‘too much’ data can result in
reduced downstream performance due to negative interference.
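The sweep described above is a grid over query size and neighbor count; a minimal harness, with a toy stand-in for the expensive retrieve-train-evaluate step (all names are illustrative):

```python
import itertools

def sweep(train_and_eval, query_sizes, neighbor_counts):
    """Grid-sweep retrieval hyperparameters; returns {(q, k): score}.

    `train_and_eval(q, k)` is assumed to retrieve k neighbors for each of
    q query instances, train on the retrieved subset, and return a score.
    """
    return {
        (q, k): train_and_eval(q, k)
        for q, k in itertools.product(query_sizes, neighbor_counts)
    }

# Toy scoring function that peaks at a moderate amount of retrieved data,
# mimicking the improve-then-plateau-or-decline pattern in Figure 2.
scores = sweep(lambda q, k: -abs(q * k - 50000), [32, 1000], [50, 500, 5000])
best = max(scores, key=scores.get)
```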
Figure 3: Performance of DEFT-XL and full-finetuning methods with the same
annotation budget used for obtaining either labeled or unlabeled data for
QasperEvidence. Unless one has a large annotation budget, collecting unlabeled
examples is superior to collecting labeled ones. $|R|$ and $|T|$ refer to the
size of the retrieval and train sets respectively. See Section 5.2 for
details.
#### What model should you use for retrieval?
To determine the effect of model size on indexing and retrieval, we train
models using the cross-task neighbors retrieved by base and XL-size models
when the query size and number of neighbors are held constant. We find that
using a larger (XL size) indexing model generally results in better
performance, but this gap is much larger when training a base size model (8%)
than when training XL-size models (1%), suggesting that smaller models benefit
more from larger retrieval models. We provide detailed results in Appendix D.
#### Are prompts useful for retrieval?
All P3 data is in a prompted format, where the input is made up of (a) the
input instance and (b) a prompt that contains information about the task.
Training on prompted data greatly aids zero-shot generalisation (Wei et al.,
2021b; Sanh et al., 2021), but it is unclear how useful it is for retrieval.
To examine this, we run experiments using SuperNaturalInstructions. We index
and retrieve the data with and without instructions in the input and compare
the performance after training on retrieved subsets (we add instructions back
into samples without them in order to isolate the effect of instructions on
retrieval from their effect during finetuning). We find that
retrieving without instructions outperforms retrieving with instructions by a
small margin, suggesting that DEFT relies more on instance information rather
than task information for retrieval. We provide details in Appendix E.
### 5.2 Practicality of Assuming Access to Unlabeled Data
Contrary to prior work, our approach assumes access to unlabeled data. This is
a practical assumption given that unlabeled data is often readily available or
is far cheaper to acquire than labeled data. This is especially true for tasks
such as Qasper or CaseHold, which require experts to carefully read (sometimes
quite long) texts to provide labels. We argue that DEFT’s use of unlabeled
data can make it a cost-efficient method to obtain a well-performing task-
specific model when the data labeling budget is limited.
We examine this by studying a scenario where QasperEvidence data was collected
and assume we have access to P3 and DEFT to make efficient use of it.
Obtaining labeled instances for QasperEvidence cost 3.25 times as much as
acquiring unlabeled (question-paragraph) instances (based on an estimate
provided by the authors of the dataset; questions were written after reading
paper abstracts, and evidence selection required reading entire papers). We compare
(Figure 3) performance on the test set of a T5-XL model trained on a varying
number of labeled instances with a DEFT-XL model trained on cross-task nearest
neighbors of 3.25 times as many unlabeled instances. DEFT yields better results for
smaller annotation budgets ($<1000$ labelled examples), and underperforms
models trained on thousands of labelled examples. This confirms our suggestion
that DEFT is preferable to regular finetuning for limited data budgets. We
also note the DEFT setup makes it easy to use target-task labeled data when
available, as shown in Section 4.3.
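The underlying budget arithmetic is simple: a fixed annotation budget, expressed in labeled-example units, buys either that many labeled examples or 3.25 times as many unlabeled query instances for DEFT. A sketch with hypothetical numbers:

```python
def data_for_budget(budget, cost_ratio=3.25):
    """How many labeled vs. unlabeled examples a budget buys.

    `budget` is expressed in labeled-example units; one labeled example
    costs as much as `cost_ratio` unlabeled (question-paragraph) examples.
    """
    n_labeled = int(budget)
    n_unlabeled = int(budget * cost_ratio)
    return n_labeled, n_unlabeled

# A budget that buys 400 labeled examples instead buys 1300 unlabeled queries.
labeled, unlabeled = data_for_budget(400)
```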
## 6 Conclusion
In this work, we propose Data-Efficient FineTuning, a novel method for
efficiently using multitask data by training task-specific models using only a
small amount of unlabeled target task data. We use the unlabeled data to
select subsets of the multitask data, and train models on these subsets. Our
approach performs strongly even when as few as 20 unlabeled examples are
available, and is more effective than full finetuning on labelled data when
labelled data is expensive to gather or fewer than 3000 labelled data points
are available. DEFT models can outperform same-sized models trained on all
available data (e.g., T0), despite being trained on significantly less data.
Overall, our results strongly suggest that training on all available data,
even with large models, is not always the optimal choice and that focusing on
ways to better curate higher-quality, smaller datasets is a better path
forward.
## Limitations
Our approach is based on the assumption of a limited data budget, and the
observation that general multi-task training may not be the most efficient
method when one cares about single target tasks. As such, DEFT is not
applicable to “true” zero-shot settings where one has no information about the
target task, since it relies on the existence of at least some unlabelled
examples. Furthermore, for some tasks it may be possible to cheaply gather
many examples for finetuning beyond the point where DEFT is useful. In some
cases, gathering unlabelled examples may not be sufficiently cheaper than
gathering labelled examples to justify choosing unlabelled over labelled data
collection. Additionally, the recent rise of sparse mixture-of-experts
models (Shazeer et al., 2017; Fedus et al., 2022) may reduce the negative
interference effect observed throughout our work, where DEFT models often
outperform models trained on all multitask data and random subsets of the
multitask data. Finally, we note that in pilot experiments we found that task
diversity was a key element of strong held-out task performance. However, DEFT
does not explicitly correct for task diversity, and we leave further
exploration for extending DEFT to account for this to future work.
## Ethics Statement
We believe that the impact of our work is largely positive, showing a case
where we are able to achieve good results with significant reductions in the
amount of data used to train a model. We hope that this encourages future work
in data-efficiency, where we attempt to reduce the amount of data required to
train an effective NLP model. Such research could aid in making the analysis
of the data used to train models easier and cheaper, and reduce the training
time and associated carbon cost (Strubell et al., 2020) of models. However, we
note also that our work currently assumes access to a large pool of multitask
data, making it data-efficient only when it comes to training models, and
relies on large language models already pretrained over massive datasets.
## References
* Aribandi et al. (2022) Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, and Donald Metzler. 2022. Ext5: Towards extreme multi-task scaling for transfer learning. In _International Conference on Learning Representations_.
* Bar Haim et al. (2006) Roy Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second PASCAL recognising textual entailment challenge.
* Bentivogli et al. (2009) Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth PASCAL recognizing textual entailment challenge.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In _Advances in Neural Information Processing Systems_ , volume 33, pages 1877–1901. Curran Associates, Inc.
* Chalkidis et al. (2022) Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras. 2022. LexGLUE: A benchmark dataset for legal language understanding in English. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics_ , Dublin, Ireland.
* Dagan et al. (2006) Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In _Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment_ , pages 177–190. Springer.
* Dasigi et al. (2021) Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, and Matt Gardner. 2021. A dataset of information-seeking questions and answers anchored in research papers. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 4599–4610, Online. Association for Computational Linguistics.
* De Marneffe et al. (2019) Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The CommitmentBank: Investigating projection in naturally occurring discourse. To appear in proceedings of Sinn und Bedeutung 23. Data can be found at https://github.com/mcdm/CommitmentBank/.
* Dua et al. (2019) Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics.
* Fedus et al. (2022) William Fedus, Barret Zoph, and Noam Shazeer. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. _Journal of Machine Learning Research_ , 23(120):1–39.
* Ge et al. (2013) Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. 2013. Optimized product quantization for approximate nearest neighbor search. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 2946–2953.
* Giampiccolo et al. (2007) Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In _Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing_ , pages 1–9. Association for Computational Linguistics.
* Gururangan et al. (2020) Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don’t stop pretraining: Adapt language models to domains and tasks. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 8342–8360, Online. Association for Computational Linguistics.
* Guu et al. (2020) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In _International Conference on Machine Learning_ , pages 3929–3938. PMLR.
* Han and Tsvetkov (2022) Xiaochuang Han and Yulia Tsvetkov. 2022. Orca: Interpreting prompted language models via locating supporting data evidence in the ocean of pretraining data. _arXiv preprint arXiv:2205.12600_.
* Houlsby et al. (2019) Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In _International Conference on Machine Learning_ , pages 2790–2799. PMLR.
* Hu et al. (2021) Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. _arXiv preprint arXiv:2106.09685_.
* Jegou et al. (2010) Herve Jegou, Matthijs Douze, and Cordelia Schmid. 2010. Product quantization for nearest neighbor search. _IEEE transactions on pattern analysis and machine intelligence_ , 33(1):117–128.
* Johnson et al. (2019) Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. _IEEE Transactions on Big Data_ , 7(3):535–547.
* Khandelwal et al. (2019) Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2019. Generalization through memorization: Nearest neighbor language models. _arXiv preprint arXiv:1911.00172_.
* Khashabi et al. (2020) Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 1896–1907, Online. Association for Computational Linguistics.
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In _ICLR (Poster)_.
* Koh and Liang (2017) Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In _International conference on machine learning_ , pages 1885–1894. PMLR.
* Kung et al. (2021) Po-Nien Kung, Sheng-Siang Yin, Yi-Cheng Chen, Tse-Hsuan Yang, and Yun-Nung Chen. 2021. Efficient multi-task auxiliary learning: Selecting auxiliary data by feature similarity. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 416–428, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Lester et al. (2021) Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Levesque et al. (2011) Hector J Levesque, Ernest Davis, and Leora Morgenstern. 2011. The Winograd schema challenge. In _AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning_ , volume 46, page 47.
* Lewis et al. (2020) Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. _Advances in Neural Information Processing Systems_ , 33:9459–9474.
* Lin et al. (2022) Bill Yuchen Lin, Kangmin Tan, Chris Miller, Beiwen Tian, and Xiang Ren. 2022. Unsupervised cross-task generalization via retrieval augmentation. _ArXiv_ , abs/2204.07937.
* Lin et al. (2021) Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In _Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021)_ , pages 2356–2362.
* Liu et al. (2022) Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel. 2022. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. _arXiv preprint arXiv:2205.05638_.
* Malkov and Yashunin (2020) Yu A. Malkov and D. A. Yashunin. 2020. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 42(4):824–836.
* Mishra et al. (2021) Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2021. Natural instructions: Benchmarking generalization to new tasks from natural language instructions. _CoRR_ , abs/2104.08773.
* Mostafazadeh et al. (2017) Nasrin Mostafazadeh, Michael Roth, Annie Louis, Nathanael Chambers, and James Allen. 2017. Lsdsem 2017 shared task: The story cloze test. In _Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics_ , pages 46–51.
* Nie et al. (2020) Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4885–4901, Online. Association for Computational Linguistics.
* Pezeshkpour et al. (2021) Pouya Pezeshkpour, Sarthak Jain, Byron C Wallace, and Sameer Singh. 2021. An empirical comparison of instance attribution methods for nlp. _arXiv preprint arXiv:2104.04128_.
* Phang et al. (2018a) Jason Phang, Thibault Févry, and Samuel R. Bowman. 2018a. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. _ArXiv_ , abs/1811.01088.
* Phang et al. (2018b) Jason Phang, Thibault Févry, and Samuel R. Bowman. 2018b. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. _arXiv preprint arXiv:1811.01088v2_.
* Pilehvar and Camacho-Collados (2019) Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: The word-in-context dataset for evaluating context-sensitive meaning representations. In _Proceedings of NAACL-HLT_.
* Poth et al. (2021) Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, and Iryna Gurevych. 2021. What to pre-train on? Efficient intermediate task selection. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 10585–10605, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Pruthi et al. (2020) Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. 2020. Estimating training data influence by tracing gradient descent. _Advances in Neural Information Processing Systems_ , 33:19920–19930.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research_ , 21(140):1–67.
* Robertson and Zaragoza (2009) Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. _Foundations and Trends in Information Retrieval_ , 3(4):333–389.
* Roemmele et al. (2011) Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In _2011 AAAI Spring Symposium Series_.
* Sakaguchi et al. (2021) Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Winogrande: An adversarial winograd schema challenge at scale. _Commun. ACM_ , 64(9):99–106.
* Sanh et al. (2021) Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. _arXiv preprint arXiv:2110.08207_.
* Shazeer et al. (2017) Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In _International Conference on Learning Representations_.
* Shi et al. (2022) Weijia Shi, Julian Michael, Suchin Gururangan, and Luke Zettlemoyer. 2022. Nearest neighbor zero-shot inference. _arXiv preprint arXiv:2205.13792_.
* Strubell et al. (2020) Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2020. Energy and policy considerations for modern deep learning research. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 34(09):13693–13696.
* Vu et al. (2020) Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessandro Sordoni, Adam Trischler, Andrew Mattarella-Micke, Subhransu Maji, and Mohit Iyyer. 2020. Exploring and predicting transferability across NLP tasks. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 7882–7926, Online. Association for Computational Linguistics.
* Wang et al. (2019) Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. _arXiv preprint 1905.00537_.
* Wang et al. (2022) Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. 2022. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ tasks. In _EMNLP_.
* Wei et al. (2021a) Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021a. Finetuned language models are zero-shot learners. _arXiv preprint arXiv:2109.01652_.
* Wei et al. (2021b) Jason Wei, Chengyu Huang, Soroush Vosoughi, Yu Cheng, and Shiqi Xu. 2021b. Few-shot text classification with triplet networks, data augmentation, and curriculum learning. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 5493–5500, Online. Association for Computational Linguistics.
* Yeh et al. (2018) Chih-Kuan Yeh, Joon Kim, Ian En-Hsu Yen, and Pradeep K Ravikumar. 2018. Representer point selection for explaining deep neural networks. _Advances in neural information processing systems_ , 31.
* Zellers et al. (2019) Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_.
* Zheng et al. (2021) Lucia Zheng, Neel Guha, Brandon R. Anderson, Peter Henderson, and Daniel E. Ho. 2021. When does pretraining help? Assessing self-supervised learning for law and the CaseHOLD dataset of 53,000+ legal holdings. In _Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law_, ICAIL ’21, pages 159–168, New York, NY, USA. Association for Computing Machinery.
## Appendix A Compute Resources
We ran all experiments on a server with 8 80GB A100 GPUs. Most models took
7-10 hours to train on a single 80GB A100 GPU.
## Appendix B Dataset Details
Dataset | Retrieval | Eval | #Shots | Retrieval from
---|---|---|---|---
CaseHold (Zheng et al., 2021) | 1000 | 2900 | - | Validation
DROP (Dua et al., 2019) | 1000 | 8535 | - | Validation
QasperEvidence (Dasigi et al., 2021) | 1000 | 43673 | - | Validation
RTE* | 1000 | 277 | 32 | Train
ANLI R1 (Nie et al., 2020) | 1000 | 1000 | 50 | Train
ANLI R2 (Nie et al., 2020) | 1000 | 1000 | 50 | Train
ANLI R3 (Nie et al., 2020) | 1000 | 1000 | 50 | Train
CB (De Marneffe et al., 2019) | 250 | 56 | 32 | Train
HellaSwag (Zellers et al., 2019) | 1000 | 10003 | 20 | Train
StoryCloze (Mostafazadeh et al., 2017) | 1000 | 1871 | 70 | Train
WinoGrande (Sakaguchi et al., 2021) | 1000 | 1767 | 50 | Train
WSC (Levesque et al., 2011) | 554 | 104 | 32 | Train
COPA (Roemmele et al., 2011) | 400 | 100 | 32 | Train
WiC (Pilehvar and Camacho-Collados, 2019) | 1000 | 638 | 32 | Train
Table 4: Size of splits used for experiments across datasets. ‘#Shots’
indicates the number of shots used in few-shot experiments, and ‘retrieval
from’ indicates which split we selected retrieval data from. *Following
SuperGLUE Wang et al. (2019), RTE data is from RTE 1/2/3/5 (Dagan et al.,
2006; Bar Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al.,
2009).
#### Sizes and Splits
For each dataset used, we provide the number of retrieval and validation
examples used in Table 4. We also indicate whether the retrieval data was
drawn from the validation or training split. Note that any data used for
retrieval is held out of the validation split to avoid information leakage. We additionally
provide the number of shots used for each dataset. We follow the number of
splits used by Liu et al. (2022) and use the data shared by the authors
(available at https://github.com/r-three/t-few/tree/master/data/few_shot).
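The held-out retrieval pool described above can be sketched as follows. The sizes mirror the CaseHold row of Table 4 (1000 retrieval, 2900 evaluation examples); the splitting function itself is a hypothetical illustration, not the authors' code:

```python
import random

def split_retrieval_pool(examples, pool_size, seed=0):
    """Hold out `pool_size` examples as a retrieval pool; the rest stay in
    the evaluation split, so no example is both retrieved and scored."""
    rng = random.Random(seed)
    shuffled = examples[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    return shuffled[:pool_size], shuffled[pool_size:]

validation = [{"id": i, "text": f"example {i}"} for i in range(3900)]
pool, eval_split = split_retrieval_pool(validation, pool_size=1000)
assert {e["id"] for e in pool}.isdisjoint({e["id"] for e in eval_split})
```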
#### Prompts
We list the prompts used for each dataset. {x} indicates a space that is
filled in by instance data.
* •
CaseHold: What is the correct holding statement for the following text? Text:
{context} (A): {ending 1} (B): {ending 2} (C): {ending 3} (D): {ending 4} (E):
{ending 5}
* •
DROP: Passage: {passage} Question: {question} Answer:
* •
QasperEvidence: Question: {question} Paragraph: {paragraph} Is the answer to
the question in the paragraph? Answer Yes or No.
* •
RTE: {premise} Question: Does this imply that “{hypothesis}”? Yes or no?
* •
ANLI: {premise} Question: {hypothesis} True, False, or Neither?
* •
CB: {premise} Question: {hypothesis} True, False, or Neither?
* •
HellaSwag: Complete the description with an appropriate ending: First,
{context a} Then, {context b} … (a) {ending 1} (b) {ending 2} (c) {ending 3}
(d) {ending 4}
* •
StoryCloze: {input sentence 1} {input sentence 2} {input sentence 3} {input
sentence 4} What is a possible continuation for the story given the following
options ? - {answer 1} - {answer 2}
* •
WinoGrande: {sentence} What does the _ in the above sentence refer to?
{option1} or {option2}?
* •
WSC: Passage: {text} Question: In the passage above, does the pronoun ‘{span
1}’ refer to ‘{span 2}’? Answer:
* •
COPA: {premise} As a consequence… Help me pick the more plausible option: -
{choice 1} - {choice 2}
* •
WiC: {sentence 1} {sentence 2} Question: Is the word ‘{word}’ used in the same
sense in the two sentences above? Yes, No?
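As an illustration, a template such as the RTE prompt above can be filled with Python's `str.format`; the instance text here is invented for the example, and straight quotes stand in for the curly quotes used in the prompt listing:

```python
# Hypothetical RTE instance; the field names mirror the template slots above.
rte_template = (
    '{premise} Question: Does this imply that "{hypothesis}"? Yes or no?'
)
instance = {
    "premise": "The cat sat on the mat.",
    "hypothesis": "There is a cat.",
}
prompt = rte_template.format(**instance)
print(prompt)
```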
## Appendix C Few-shot Results without IA3
For ‘DEFT-Few (20-70Q)’ in Table 3, we trained 5 models using DEFT (as we used
5 few-shot sets per dataset). In Table 5 we report the performance of these
models without IA3 training. Note we did not train few-shot models for
CaseHold, QasperEvidence, or DROP, and so do not report results on these
datasets. Notably, RTE, CB, and WSC all have quite large standard deviation
($>3.0$), which suggests our improvements (or deterioration, for WSC) over
T0-3B for these datasets may not be significant.
## Appendix D Index Model Size Experiments
We explored mismatching the index and training model sizes, training XL-size
models on cross-task neighbor splits indexed and retrieved using T5-Base, and
vice versa. We use a query size of 1000 and retrieve 500 neighbors per query
instance. We present the results in Table 6.
## Appendix E SuperNaturalInstructions Experiments
We use version 2.7 of the SuperNaturalInstructions dataset and use the
official splits provided, with 100 samples per train and evaluation tasks.
This results in a pool of 75,317 train examples. For evaluation, we randomly
select one task per evaluation category in Table 5 of Wang et al. (2022). Task
names are given in Table 7. We then generate two indices for retrieval: one
where each sample is encoded including the task instruction, and one where
each sample is encoded without any instruction. We then retrieve using the 100
unlabeled test instances from each chosen evaluation task, matching the format
used for the index (i.e., if we retrieve from the index with instructions, we
encode our query data with instructions included). In order to isolate the
effect of instructions on retrieval, after retrieving examples, we always
train on the corresponding examples with instructions included (i.e., when we
retrieve examples without using instructions, we add the instructions back
into the inputs before finetuning). On average, we retrieve 3.5k training
examples, roughly 5% of the total training data. Additionally, we finetune a
T5-XL model using all available training data (‘Tk-instruct’), and a random
baseline using random subsets of the training data of the same size as the
retrieved subsets (‘Rand-XL’).
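The retrieval step described above (encode a pool, query with unlabeled test instances, keep the nearest neighbors) can be sketched with cosine similarity over embeddings. The random vectors below stand in for encoder outputs, and `topk_neighbors` is a hypothetical helper, not the authors' implementation:

```python
import numpy as np

def topk_neighbors(query_vecs, index_vecs, k):
    """Return indices of the k most cosine-similar index rows per query."""
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    x = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    sims = q @ x.T                          # (num_queries, index_size)
    return np.argsort(-sims, axis=1)[:, :k]

rng = np.random.default_rng(0)
index_vecs = rng.normal(size=(500, 64))     # stand-in for encoded pool
queries = rng.normal(size=(100, 64))        # stand-in for encoded test set
nn = topk_neighbors(queries, index_vecs, k=5)
retrieved = np.unique(nn)                   # union over queries, deduplicated
```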
We present our results in Table 8. We find that the instruction-augmented and
no-instruction retrieval DEFT models achieve similar performance on average,
although the no-instruction variant performs slightly higher. Both DEFT models
significantly outperform the Rand-XL baseline, suggesting that the retrieval
is still effective even when using a large pool of multitask data without
instructions or prompts. However, we find that neither DEFT model
significantly outperforms Tk-instruct, which we hypothesise is related to the
significantly smaller size of SuperNaturalInstructions compared to P3.
However, we note that our DEFT-XL models are trained on significantly less
data than Tk-instruct, and training all 12 DEFT models is still cheaper than
training the Tk-instruct model, using roughly 42,000 examples overall, roughly
56% of the data used to train Tk-instruct.
Task | DEFT-Few (20-70Q)
---|---
RTE | $73.2_{4.0}$
ANLI R1 | $36.1_{3.0}$
ANLI R2 | $34.1_{0.9}$
ANLI R3 | $40.6_{2.0}$
CB | $58.2_{10.5}$
HellaSwag | $34.1_{0.7}$
StoryCloze | $95.1_{0.3}$
WinoGrande | $50.6_{1.2}$
WSC | $51.0_{5.1}$
COPA | $87.8_{1.1}$
WiC | $50.8_{1.7}$
Average | 55.6
Table 5: Performance of XL size models trained using DEFT with few-shot
queries. We report the mean and standard deviation over 5 runs.
Task | Train Base, Index Base | Train Base, Index XL | Train XL, Index Base | Train XL, Index XL
---|---|---|---|---
CaseHold | 14.8 | 15.8 | 32.6 | 37.2
DROP | 20.8 | 21.3 | 30.4 | 31.0
Qasper | 15.7 | 18.0 | 23.3 | 28.5
RTE | 53.4 | 61.7 | 77.3 | 74.0
ANLI R1 | 33.3 | 33.3 | 39.5 | 39.8
ANLI R2 | 33.4 | 32.8 | 35.3 | 37.5
ANLI R3 | 33.2 | 33.3 | 42.5 | 41.4
CB | 50.0 | 50.0 | 75.0 | 60.7
HellaSwag | 26.0 | 27.9 | 31.7 | 33.1
StoryCloze | 74.0 | 76.8 | 94.4 | 95.3
WinoGrande | 49.5 | 50.4 | 51.4 | 50.6
WSC | 41.4 | 42.3 | 43.3 | 39.4
COPA | 63.0 | 60.0 | 85.0 | 95.0
WiC | 48.8 | 48.3 | 49.5 | 54.9
Average | 39.8 | 42.8 | 50.8 | 51.3
Table 6: Performance of DEFT models trained on cross-task neighbors retrieved
using different-size models.
Evaluation Category | Task
---|---
Answerability | task020_mctaco_answerability_classification
Cause Effect Classification | task391_cod3s_cause_effect_classification
Coreference | task1391_winogrande_coreference_resolution
Data to Text | task957_e2e_data_to_text
Dialogue Act Recognition | task879_schema_guided_dstc8_dialogue_act_recognition
Entailment | task937_defeasible_nli_atomic_textual_entailment
Grammar Error Correction | task1557_jfleg_grammar_error_correction
Keyword Tagging | task613_liar_keyword_tagging
Overlap | task039_qasc_overlap_extraction
Question Rewriting | task670_ambigqa_question_rewriting
Title Generation | task1356_xlsum_title_generation
Word Analogy | task1155_bard_word_analogy
Table 7: List of tasks used for each evaluation category given in Table 8.
Evaluation Category | DEFT-XL (Instr.) | DEFT-XL (No Instr.) | Rand | Tk-Instruct
---|---|---|---|---
Answerability | 48.0 | 48.0 | 49.0 | 47.0
Cause Effect Classification | 83.3 | 83.3 | 84.7 | 87.7
Coreference | 61.0 | 51.0 | 43.0 | 83.0
Data to Text | 34.0 | 34.4 | 33.4 | 37.9
Dialogue Act Rec. | 65.0 | 61.0 | 59.0 | 68.0
Entailment | 50.0 | 68.0 | 13.0 | 19.0
Grammar Error Correction | 86.3 | 84.8 | 84.7 | 84.8
Keyword Tagging | 17.4 | 17.6 | 19.2 | 13.3
Overlap | 17.7 | 20.2 | 22.3 | 17.8
Question Rewriting | 45.8 | 64.0 | 59.9 | 68.8
Title Generation | 21.4 | 20.9 | 20.3 | 20.4
Word Analogy | 60.0 | 41.0 | 60.0 | 61.3
Average | 49.2 | 49.5 | 45.7 | 50.7
Table 8: Performance of XL-size models on 12 tasks from evaluation categories
in Wang et al. (2022). All results are in RougeL. ‘Instr.’ and ‘No Instr.’
variants of DEFT-XL refer to models trained using subsets of
SuperNaturalInstructions that were retrieved using instructions and without
using instructions, respectively.
## Appendix F Retrieved Data
We present a breakdown of the data retrieved for each task using DEFT in
Figure 4.
Figure 4: Proportion of the retrieved training data for each evaluation
dataset (columns) that comes from each dataset in P3 (rows). The final column
shows these values for all of P3.
## Appendix G Retrieved Examples
For a single query from each dataset, we present the top two closest
datapoints retrieved below. Content warning: some of these datapoints
reference sensitive topics. Queries are chosen randomly. Answers are in
italics.
RTE Query: Thanks to a global ban on the ivory trade that was passed in 1989
by the Convention on International Trade in Endangered Species of Wild Fauna
and Flora (CITES), the African elephant population may be reversing its spiral
toward extinction\n Question: Does this imply that "The ban on ivory trade has
been effective in protecting the elephant from extinction."? Yes or no?
Retrieved #1: Title: Dissappointed\n Review: The software works OK, but
haven’t gotten any more than three numbers on a draw six lottery after 8
months of trying. The biggest thing to watch out for is support, or lack of.
If you rebuild your computer or buy a new one and have to re-install their
software, you have to get another product ID from them. It took me almost two
weeks of begging and a phone call (just an answering machine on their end) to
get a response from them. I am coming up on a week of trying to get a response
from them for a product ID for my new computer. Funny, because they responded
the next day when I first baught the program and they had my money in hand!\n
Does this product review convey a negative or positive sentiment? Negative
Retrieved #2: You are considering whether to buy a product. You look at the
reviews. Would the following review decrease or increase the chances of you
buying the product?\n Review title: Amazon Rip Off\n Product review: What a
huge waste of money. I paid $$$ on this very site not but a month ago, now it
is $$. Got it home, followed the instructions and the silly thing will not get
but about a foot off the ground if that, and then it just falls over and beats
itself into the ground. Don’t waste your cash on this, give your kid a fifty
dollar bill and let them light it on fire, they’ll have for fun. decrease
ANLI R1 Query: Secrets of the Cryptkeeper’s Haunted House was a children’s
Saturday-morning game show that ran on CBS. It premiered on September 14, 1996
and lasted until August 23, 1997. It featured the Cryptkeeper of "Tales from
the Crypt" (with John Kassir as the voice) now serving as an announcer. It is
the last TV series in the "Tales From the Crypt" franchise.\n Question: The
Secrets of the Crypt Keeper’s House television show aired on CBS until 1997,
and then was picked up and aired on NBC for an additional season. True, False,
or Neither? Retrieved #1: Is there a negative or positive tone to this product
review?\n ===\n Title: Not quite as good as some others\n Review: This is a
fair book, but it is not near as good as Peter O. Steiner’s "Thursday Night
Poker." Andy Nelson’s book can’t decide whether it is for beginners or
advanced, so it tries to fit advanced technique into too short of space. It
barely scratches the surface of any of the topics it brings up. When it
doesn’t do that, it simply says, "Play so tight that you don’t even have to
think. Fold 99% of your hands." That does not make for a fun night, in my
opinion.\n Answer: Negative
Retrieved #2: a delegation from the islamic resistance movement -lrb- hamas
-rrb- left the gaza strip monday morning , heading for egypt to hear israel ’s
response regarding a cairo - mediated ceasefire . In a nutshell, hamas leaders
leave to cairo for final ceasefire discussions
ANLI R2 Query: The Sea Wall (French: Un barrage contre le Pacifique ) is a
2008 film by Cambodian director Rithy Panh in a French/Cambodian/Belgian co-
production. The film opened on 7 January 2009 in France. It was adapted from
the 1950 novel "The Sea Wall" by Marguerite Duras. The novel had previously
been adapted as "This Angry Age" by René Clément in 1958.\n Question:
Marguerite Duras directed the film. True, False, or Neither? Retrieved #1:
Title: Exactly what I had been looking for!\n Review: I’ve gone through two
other iPod FM transmitters that I ended up giving away because the quality was
less than desirable. After seeing this one pop up in my Quick Picks last week
I decided to give it a try. I used it the very first evening I received it and
I’m happy to say my search is over. As others noted, use a low FM frequency
for the best results (87.9 in my area works well). I don’t receive any
interference and the music on my iPod comes through just like I expected. For
the price, this is definitely the best deal out there.\n Is this product
review negative? No
Retrieved #2: Based on this review, would the user recommend this
product?\n===\n Review: My friend tried to commit suicide, and while he was
bleeding to death, he was watching mtv, and the video for "Hold On" was
playing, and he was like "yeah" and after he was done rocking out he got all
inspired and called for an ambulance. And now he’s still here, and he takes
pills that make him tired, and everyone is careful to be very nice to him and
be his best friend, even though we all secretly pity him. Thank you so much.\n
Answer: No
ANLI R3 Query: Well, I think during the campaign, particularly now during this
difficult period, we ought to be speaking with one voice, and I appreciate the
way the administration has worked hard to calm the tensions. Like the vice
president, I call on Chairman Arafat to have his people pull back to make the
peace.\n Question: Chairman Arafat needs to pull back his people during this
difficult time. True, False, or Neither? Retrieved #1: Title: clinton pushes
for greater diversity on wall street\n\n===\n\n Write an article with the
given title: u.s. president bill clinton urged wall street brokers to pursue
business in america ’s economically distressed cities , saying it ’s an
untapped market with more buying power than mexico .
Retrieved #2: You are considering whether to buy a product. You look at the
reviews. Would the following review decrease or increase the chances of you
buying the product?\n Review title: Mistake\n Product review: I didn’t want to
"purchase" Bars and Tones". It was a mistake to click on it. This review
doesn’t deserve so many words.\n decrease
WiC Query: It may rain in which case the picnic will be canceled.\n A window
case.\n Question: Is the word ’case’ used in the same sense in the two
sentences above? Yes, No? Retrieved #1: Title: remains of ## exhumed from mass
graves in eastern croatia\n \n===\n \n Write an article with the given title:
thirty bodies believed to be croats killed by ethnic serbs at the outbreak of
the ####-## serbo-croatian war in former yugoslavia have been exhumed from two
mass graves in eastern croatia , an official said tuesday .
Retrieved #2: You are considering whether to buy a product. You look at the
reviews. Would the following review decrease or increase the chances of you
buying the product?\n Review title: For the 50-cent table\n Product review: My
favorite author has run out of steam! His co-author does not, repete, does not
have the Paterson style. After sampling this "tandemly"-wriiten book, it
becomes obvious that this is a time-waster. Even the editing is bad. I didn’t
feel guilty about not finishing it. It’s headed for the community library’s
monthly book sale–fifty cent table.\n decrease
COPA Query: The woman filed a restraining order against the man. As a
consequence… \n Help me pick the more plausible option:\n- The man called
her.\n- The man stalked her. Retrieved #1: First sentence of the article: when
christopher darden got a recent early-morning call from his publisher that his
book “ in contempt ” had become no. # on the new york times best-seller list ,
he mumbled something like “ ok , ” then rolled over and went back to sleep
.\n\n Title: contempt does n’t fill christopher darden
Retrieved #2: "Extract the answer to the following question from the movie
plot. If the question isn’t answerable, please output "Can’t answer".\n
Question: Who is the toy’s leader and Andy’s favorite toy?\n Title: Toy
Story\n Movie plot: A boy called Andy Davis (voice: John Morris) uses his toys
to act out a bank robbery. The bank is a cardboard box, the robber is Mr.
Potato Head (voice: Don Rickles) assisted by Slinky Dog (voice: Jim Varney),
and the bystanders include Bo Peep (voice: Annie Potts) and her sheep. The day
is saved by cowboy doll Woody (voice: Tom Hanks) playing the sheriff, with
help from Rex the dinosaur (voice: Wallace Shawn). Woody is the only toy who
gets to say his own lines because he has a pull-string that makes him say
things like "Reach for the sky!" and "You’re my favorite deputy!"During the
opening credits (soundtrack: Randy Newman’s "You’ve Got a Friend in Me"), Andy
takes Woody downstairs to find his mother (voice: Laurie Metcalf) decorating
the dining room for his birthday party. He asks if they can leave the
decorations up until they move, and his mom agrees. She says the guests will
arrive soon and sends him back upstairs to get his baby sister Molly (voice:
Hannah Unkrich), whose crib is in his room. Andy tosses Woody onto his bed
before he pulls Molly out of her crib and carries her away.Woody and the other
toys have seemed limp and inanimate up to this point, but as soon as Andy
leaves the room, Woody sits up and expresses surprise that the birthday party
is today. <cut for space> …\n Woody
WSC Query: Passage: Dan took the rear seat while Bill claimed the front
because his "Dibs!" was quicker. \n Question: In the passage above, does the
pronoun "his" refer to Dan?\n Answer: Retrieved #1: Title: I want to READ it
on my Kindle\n Review: Why can’t I get the readable version of night for my
kindle? I don’t want the auidio version…Help! I downloaded it thinking that I
would have the choice to read it or to listen to it but that was not the case
at all. I’m extremely disappointed.\n Does this product review convey a
negative or positive sentiment? Negative
Retrieved #2: You are considering whether to buy a product. You look at the
reviews. Would the following review decrease or increase the chances of you
buying the product?\n Review title: Look weird - feel great!\n Product review:
These look so weird and also feel weird when you first put them on but they
are so much fun. I love them for my yoga class, and sometimes wear them at
night watching TV because the separation they give your toes is good for your
feet overall. Try them… you’ll become a fan too!\n increase
WinoGrande Query: The phone of Donald is a lot better than Adam’s because _
paid extra for his phone.\n What does the _ in the above sentence refer to?
Donald or Adam? Retrieved #1: Title: more than you expect\n Product review:
The thing about these tillers is that they do things you might not think
about. For instance, they’re great for dealing with long-rooted weeds. You can
hack your way down to the root, then pull up the plant and not leave a huge
hole in the ground.\n Would you say this review depicts the product in a
flattering or unflattering light?\n flattering
Retrieved #2: Title: purported statement from al-qaida-linked group says
ultimatum against italy ends threatens attacks\n\n===\n\n Write an article
with the given title: a statement released sunday in the name of an al-qaida-
linked group said the italian government has “ dug its grave by its own hands
” after it ignored a warning to withdraw its troops from iraq by aug. ## .
HellaSwag Query: Complete the description with an appropriate ending:\n First,
[header] How to make a butterfly out of plastic spoons [title] Gather the
materials you will need for this project, listed below. [title] Put a craft
cloth or some newspaper down on your working surface. [title] Cut the top
portion of the four spoons off (leaving about half an inch of the handle left.
Then, … Retrieved #1: Title: hmm…\n Review: I bought this costume in hopes of
wearing for Halloween ( last year). I had even separately purchased the duster
( which I am now using to really dust things). Uhh… I tried it on ( I got a
X-Small) and its just big… the net piece ( part of the dress with the dots) go
all the way down to almost my knees. Which makes it awkward and not sexy at
all- its just weird I tried tucking the net part in to my undies to hold it,
but it just becomes supper puffy- again looks weird. I never wore it and its
still brand new sitting in my closet somewhere.Maybe its just for my body- I
am not sure, but the material isn’t as great either compared to the picture.
Def. does not look anything close to how the model looks in it.Sorry- this was
not a good buy at all. The model sure looks good in it.\n Does this product
review convey a negative or positive sentiment? Negative
Retrieved #2: What type of details about adolf heeb\n can be gathered from the
following bio?\n\n Bio: adolf heeb -lrb- born 11 july 1940 -rrb- is a former
cyclist and politician from liechtenstein .\n he competed in the individual
road race at the 1960 summer olympics .\n he later served as a member of the
landtag of liechtenstein and leader of the patriotic union party.
CB Query: B: boy, he’s a big one. A: he’s pretty big. That’s why it really
surprises me, you know, that he hasn’t come back, because, like I said, he’s
never gone away like this before, and, I would think, you know, I mean, he
might could get hurt by a car or something. I don’t know that he could really
get killed that easily because he is so big.\n Question: he could really get
killed that easily True, False, or Neither? Retrieved #1: Summarize this
document: Glen Water Limited also paid costs of £1,600 to restore fish
stocks in the Tall River near Richhill.\n About 250 metres of the river was
affected when untreated sewage was discharged into it.\n It caused what was
described as a "moderate" fish kill.\n Inspectors found a plume of untreated
sewage coming from a discharge pipe at Richhill waste water treatment works in
2014.\n An investigation found that an "uninterruptable power source" at the
plant had failed.\n In addition, a power cut to the alarm system meant staff
were unaware of the problem.\n Glen Water Limited is based at Dartford in
Kent.\n Under a 25-year public private partnership it has the contract for 25%
of Northern Ireland’s waste water treatment capacity.\n It operates and
maintains nine treatment works or pumping stations up to 2032 in return for
monthly payments.\n Summary: A company which treats sewage for NI Water under
a public private partnership contract has been fined £2,500 for polluting
a County Armagh river.
Retrieved #2: Title: Good\n Review: Well, I’d say all of these songs are well
constructed, dope lyrics whatever… but wth? all the basslines sound the same
or what? Personally i prefer Violent By Design over this.\n Is this product
review negative? No
StoryCloze Query: Andy had always wanted a big kids bike. When he turned six
Year’s old he asked for a bike for his birthday. He did not know how to ride a
bike. On Andy’s birthday his mother gave him a bike. What is a possible
continuation for the story given the following options ?\n - Andy cried for
hours.\n - His dad taught him how to ride it. Retrieved #1: Based on this
review, would the user recommend this product?\n ===\n Review: I love most
Neil Young but every fan knows that about one in three of his albums really
sucks. After Greendale and Greatest hits, I’m very disapointed.\n Answer: No
Retrieved #2: hong kong share prices rose a mere #.## percent on late overseas
buying thursday despite early profit-taking , dealers said .\n \n ===\n \n
Given the above sentence, write its title: hong kong shares close #.## percent
firmer
CaseHOLD Query: What is the correct holding statement for the following
text?\n Text: component of the res judicata doctrine. The Ohio Supreme Court
held that the original criminal proceedings in Krahn were insufficient to
invoke collateral estoppel in the later malpractice case because the claimed
error by Krahn’s criminal lawyer in plea negotiations was not “ ‘actually and
necessarily litigated and determined’ in the denial of her motion to vacate
the criminal judgment against her.” Krahn, 43 Ohio St.3d at 108, 538 N.E.2d
1058, quoting Goodson v. McDonough Power Equip., Inc. (1983), 2 Ohio St.3d
193, 195, 2 OBR 732, 443 N.E.2d 978. The Supreme Court by no means suggested
that collateral estoppel was completely inapplicable in the context of a
criminal conviction when, as here, matters genuinely were litigated and
determined. Id. at 107, 538 N.E.2d 1058 (<HOLDING>). Decisions in Ohio other
than Krahn relative \n (A): recognizing the doctrine of collateral estoppel in
agency proceedings\n (B): holding that the facts prevent the invocation of
collateral estoppel as a bar to krahns cause of action in this case\n (C):
holding collateral estoppel elements met considering changed circumstances in
the context of an exception to the general rule of collateral estoppel\n (D):
recognizing the cause of action\n (E): holding that collateral estoppel
applies to 1983 claims Retrieved #1: Is there a negative or positive tone to
this product review?\n ===\n Title: Too steep\n Review: I bought this for my
dog who had back problems, it was way too steep and my dog had to jump about
3/4’s of the way up to my bed because the measurement of the ramp on the
description was incorrect. It totally defeated the purpose of my dog having to
not jump. I had to go back to the stairs I had been using\n Answer: Negative
Retrieved #2: Write a title for this sentence: the fate of president barack
obama ’s top domestic priority – a remake of the u.s. health care system – now
rests in the hands of a pivotal but deeply divided senate committee . \n \n
Title: toughest test coming up for health care overhaul
DROP Query: Passage: Coming off their overtime win at San Diego, the Broncos
traveled to the Mall of America Field at the Hubert H. Humphrey Metrodome for
an interconference duel with the Minnesota Vikings. The game’s first points
came from the Vikings, when defensive end Jared Allen tackled running back
Willis McGahee in the end zone for a safety. The Broncos grabbed the lead when
linebacker Mario Haggan returned an interception off Vikings’ quarterback
Christian Ponder 16 yards for a touchdown … <cut for space> … On the Broncos’
next possession, McGahee rushed 24 yards for a touchdown and Tebow scrambled
for a two-point conversion to tie the game at 29. The Vikings subsequently
reclaimed the lead on Longwell’s 39-yard field goal with 3:06 left in the
game. The Broncos answered with kicker Matt Prater’s 46-yard field goal with
1:33 left to tie the game at 32. On the Vikings’ ensuing possession, Broncos’
cornerback André Goodman returned an interception off Ponder to the
Vikings’ 15-yard line. Six plays later, Prater nailed the game-winning 23-yard
field goal as time expired to give the Broncos their fifth consecutive win.\n
Question: how many yards did longwell make?\n Answer: Retrieved #1: Make a
title for this article: andy roddick hit a record-breaking ### mph -lrb- ###.#
kph -rrb- serve friday in a lopsided win over stefan koubek as the united
states took a #-# davis cup lead over austria . \n \n roddick ginepri give
united states #-# lead over austria
Retrieved #2: Orton does not start against Ohio State Purdue quarterback Kyle
Orton did not start Saturday's game against Ohio State, though he was
listed as available to play. Orton has been bothered by a right hip injury for
the last month. \n \n Which of the following sections of a newspaper would
this article likely appear in? World News, Sports, Business, or Science and
Technology? Sports
Qasper Query: Question: How big is Augmented LibriSpeech dataset? Paragraph:
We introduce a multilingual speech-to-text translation corpus, CoVoST, for 11
languages into English, diversified with over 11,000 speakers and over 60
accents. We also provide baseline results, including, to our knowledge, the
first end-to-end many-to-one multilingual model for spoken language
translation. CoVoST is free to use with a CC0 license, and the additional
Tatoeba evaluation samples are also CC-licensed. Is the answer to the question
in the paragraph? Answer Yes or No. Retrieved #1: Title: make your july #
celebration sizzle\n \n ===\n \n Write an article with the given title: you
have less than a week to get your fourth of july cookout menu set and we
thought we ’d help .
Retrieved #2: Title: A good idea…\n Review: that went terribly bad. I cannot
comprehend how some of these "artists" were chosen for this. "Atlantic City"
and "State Trooper" are embarrasing to say the least, but they sadly showcase
what is now Nashville’s finest. If Johnny Cash and Dar Williams recordings had
not appeared on this CD, one star would have been too many. Thankfully, these
mostly pathetic renderings cannot tarnish the greatness of Mr. Springsteen or
his amazing album. Go get the original. You won’t be sorry.\n Does this
product review convey a negative or positive sentiment? Negative
# GAN-MC: a Variance Reduction Tool for Derivatives Pricing
Weishi Wang <EMAIL_ADDRESS> Department of Statistics, The University of Chicago
## 1 Abstract
We propose a parameter-free model for estimating the price or valuation of
financial derivatives such as options, forwards, and futures using
unsupervised learning networks and Monte Carlo. Although some arbitrage-based
pricing formulas perform well on derivatives pricing, such as Black-Scholes
for option pricing, generative model-based Monte Carlo estimation (GAN-MC) is
more accurate and generalizes better when training samples on derivatives are
scarce, the underlying asset’s price dynamics are unknown, or the no-arbitrage
conditions cannot be solved analytically. We analyze the variance reduction
feature of our model, and to validate its potential value, we collect
real-world market derivatives data and show that our model outperforms other
arbitrage-based pricing models and non-parametric machine learning models. For
comparison, we estimate the price of derivatives using the Black-Scholes
model, ordinary least squares, radial basis function networks, multilayer
perceptron regression, projection pursuit regression, and Monte Carlo-only
models.
## 2 Introduction
Financial derivatives are used for risk management, hedging, speculation, and
arbitrage. Better understanding of pricing of derivatives could help traders
better hedge against risk, and the price of derivatives could reflect the
fluctuations of the underlying assets. Much of the success and growth of the
market for options and other derivative securities can be traced to the
seminal work of Black and Scholes [1] and Merton [2], who introduced
closed-form option pricing formulas through no-arbitrage conditions and
dynamic hedging arguments. The celebrated Black-Scholes and Merton formulas
have been widely generalized, extended and applied to various securities.
El Karoui, Jeanblanc-Picqué and Shreve [3] provide conditions under which the
Black–Scholes formula is robust with respect to a misspecification of
volatility. Wu [4] introduces fuzzy set theory into the Black–Scholes
formula, attaching belief degrees to the European option. Magdziarz [5]
introduces a subdiffusive geometric Brownian motion for the underlying asset
price dynamics and tests the resulting pricing model on European option
prices. Carmona and Durrleman [6] generalize the Black-Scholes formula to all
dimensions via approximate formulas and provide lower and upper bounds for
hedging multivariate contingent claims. Moreover, even when closed-form
expressions are not available in some generalizations and extensions, the
pricing formulas can still be evaluated numerically.
However, the derivation of the pricing formula via the hedging or no-arbitrage
approach, whether analytical or numerical, depends heavily on the particular
parametric form of the underlying asset's price dynamics. Misspecification of
the stochastic process therefore leads to systematic pricing and hedging
errors for derivatives tied to this price. Previous parametric pricing methods
thus stand or fall with their ability to capture the dynamics of the
underlying asset price process.
In this paper, we introduce generative model-based Monte Carlo estimation
(GAN-MC) for derivatives pricing and hedging. We do not assume any specific
dynamics for the underlying asset prices. We treat the asset prices as a
multivariate random variable and approximate its distribution by a neural
network. We then obtain the pricing formula from the derivative's definition,
using Monte Carlo estimation for statistical stability. Compared to previous
non-parametric pricing approaches such as Hutchinson [7], our model relies
less on the derivative's regime and is more stable, thanks to the Monte Carlo
averaging.
In order to better capture the dynamics of underlying asset prices through a
non-parametric approach, we introduce generative adversarial nets (GAN) [8] to
approximate the underlying asset prices' distribution. GAN is a framework for
estimating generative models via an adversarial process. This celebrated
neural network model has been widely generalized and extended. Zhang,
Goodfellow, et al. [9] propose the Self-Attention Generative Adversarial
Network, which allows attention-driven, long-range dependency modeling for
generation tasks. Mirza and Osindero [10] introduce the Conditional GAN, which
uses additional information to direct the data generation process. Chen, Lin,
et al. [11] propose the Depth-image Guided GAN, which adds architectural
constraints to the network and generates realistic depth maps conditioned on
an input image. Arjovsky and Chintala [12] introduce the Wasserstein GAN,
which stabilizes the training process by replacing the original metric with
the Wasserstein-1 distance. Chen, Duan, et al. [13] propose InfoGAN, an
information-theoretic extension of the generative adversarial net that learns
disentangled representations in a completely unsupervised manner.
Monte Carlo can be used for option pricing under different assumptions on the
underlying asset price dynamics. The original approach is due to Boyle [14],
who uses risk neutrality to obtain the equilibrium rate of return on the
underlying assets and Monte Carlo to improve estimation efficiency. Fu and Hu
[15] introduce techniques for the sensitivity analysis of Monte Carlo option
pricing and propose an approach for pricing options with early exercise
features. Birge [16] introduces quasi-Monte Carlo sequences, which have an
order of magnitude better asymptotic error rate and can be used in option
pricing. Mark [17] presents several enhancements that reduce the bias as well
as the variance of Monte Carlo estimators and improve the efficiency of
branching based estimators. Poirot and Tankov [18] relate the underlying asset
prices to tempered stable (also known as CGMY) processes; under an appropriate
equivalent probability measure a tempered stable process becomes a stable
process, which yields a fast Monte Carlo algorithm for European option
pricing.
Attention to training on biased datasets has been increasing recently. Kim,
Mishra, et al. [19] raise the idea of pre-training on synthetic video data:
compared with training directly on real video clips, a model performs better
on downstream tasks when pre-trained on synthetic or biased datasets. Our
GAN-MC model's success on derivatives pricing is analogous: estimation based
on synthetic underlying asset prices outperforms non-parametric models trained
directly on real derivatives prices.
### 2.1 Our Contributions
We summarize the major contributions of our paper as follows:
* •
We first introduce generative model-based Monte Carlo estimation for
derivatives pricing. We assume that the underlying asset prices follow a
multivariate distribution and use a GAN to approximate it. We then use Monte
Carlo estimation to obtain a pricing formula for each derivative: options,
forwards and futures.
* •
We derive consistent estimators for the prices of different derivatives and
validate the accuracy of our pricing algorithms on real market data. Compared
with arbitrage-based pricing formulas such as the Black-Scholes formula,
non-parametric pricing models such as radial basis function networks,
multilayer perceptron regression and projection pursuit regression, and simple
models such as linear regression and Monte Carlo only models, our GAN-MC
pricing model consistently reaches state-of-the-art performance on real market
data.
We organize this paper as follows: In Section 3, we define the problem setup
and some assumptions for data representation and training. In Section 4, we
introduce our GAN-MC model for derivatives pricing, covering options, forwards
and futures. For options we cover European and American calls and puts; for
forwards and futures we cover the commodity and equity markets. We also prove
in this section that the variance of the estimator does not increase as the
generated sample size grows. In Section 5, we conduct experiments to test the
generated samples and the accuracy of our algorithms. Compared with other
models, GAN-MC consistently reaches state-of-the-art performance on real
market option, futures and forward data.
## 3 Problem setup
Figure 1: GAN structure. Random noise $Z$ on $(\Omega_{1},F_{1},\mathbb{P}_{Y})$ is mapped by the generator $G$ to $Y=G_{\theta_{g}}\circ Z$, while the real data mapping $\gamma$ on $(\Omega_{2},F_{2},\mathbb{P}_{\gamma})$ provides real samples; both are fed to the discriminator $D$.
The pricing of financial derivatives rests heavily on the prediction of the
underlying assets, such as stock prices. The basic idea of Monte Carlo for
derivatives pricing is to generate fake stock price samples. Accordingly, we
denote the stock price vector
$\mathbf{S}_{t,T}=\left(S_{t},S_{t+1},\cdots,S_{t+T-1}\right)^{\top}\in\mathbb{R}_{+}^{T}$
as a multivariate random variable from time $t$ to time $t+T-1$, where
$\mathbf{S}_{t,T}:\Omega\to\mathbb{R}_{+}^{T}$ is a measurable function
mapping the sample space to the $T$-dimensional positive real space. One stock
price vector on the real stock market,
$\mathbf{s}_{t,T}=\left(s_{t},s_{t+1},\cdots,s_{t+T-1}\right)$, is then a
realization of $\mathbf{S}_{t,T}$.
For the generative adversarial nets, as shown in Figure 1, we denote by
$G:\mathbb{R}^{n}\times\mathbb{R}^{m_{1}}\to\mathbb{R}^{m\times m}$ the
generator mapping, where $m_{1}$ is the number of parameters of the generator
network and $n$ is the dimension of the random noise $\mathbf{Z}$; by
$D:\mathbb{R}^{T}\times\mathbb{R}^{m_{2}}\to\\{0,1\\}$ the discriminator
mapping, where $m_{2}$ is the number of parameters of the discriminator
network; and by $\gamma:\Omega_{2}\rightarrow\mathbb{R}^{m\times m}$ the
mapping of the real distribution to the image space. The training loss of the
GAN can then be described as $\min_{G}\max_{D}M(\mathbb{P}_{Y},\mathbb{P}_{\gamma})$,
where $M(\cdot,\cdot)$ is a metric between two probability measures,
$\mathbb{P}_{\gamma}$ is the probability measure of the random variable
$\gamma$, and $\mathbb{P}_{Y}$ is the probability measure of the random noise
after the generator mapping $G$.
We first consider the dynamic structure of $\mathbf{S}_{t,T}$.
###### Assumption 1 (Date Independence).
If $T$ is a relatively large number, the distribution of $\mathbf{S}_{t,T}$
does not depend on the initial time point $t$; equivalently, the distribution
of $\mathbf{S}_{t,T}$ can be written as the distribution of $\mathbf{S}_{T}$.
Assumption 1 states that if $T$ is large, the high-dimensional distribution is
complex enough to include all the volatility of the stock market within a
period of time, such as three years. GAN is a powerful tool for learning a
specific high-dimensional distribution from a training set. We can thus treat
the stock price data as the 1-D real distribution ($\gamma$ in Figure 1) under
this assumption.
###### Assumption 2 (Covariance Rank).
The matrix rank of $\textrm{Cov}\left(\mathbf{S}_{T}\right)$ should not be too
small.
Successful training of GAN models requires that we not train the generator on
highly correlated samples, to avoid loss collapse. For a given $T$, the rank
of $\textrm{Cov}\left(\mathbf{S}_{T}\right)$ ranges from $1$ to $T$.
Assumption 2 states that the rank should not be close to 1, which guarantees
that the covariance structure of the stock market data is complex enough for
the GAN to learn, so that the training process rarely collapses.
## 4 Methodology
In this section, we introduce our main algorithm, the Generative Adversarial
Nets-Monte Carlo (GAN-MC) model, for pricing derivatives such as options,
futures and forwards.
### 4.1 GAN-MC for Option Pricing
Option pricing theory estimates the value of an option contract by assigning a
price, known as the premium, based on the calculated probability that the
contract will finish in the money (ITM) at expiration. It provides an
evaluation of an option's fair value, and an accurate pricing model helps
traders incorporate the option value into their strategies.
We denote a specific underlying stock price series by
$\mathbf{s}_{1,n}=(s_{1},s_{2},\cdots,s_{n})$, which starts at day 1 with
length $n$; the continuously compounded annual risk-free interest rate by $r$;
the value of a call or put option on a stock that pays no dividend by $C$ or
$P$; the exercise price of the given option by $X$; the proportion of a year
before the option expires by $T_{0}$; and the time unit by $\Delta t$. $T$ is
a fixed parameter satisfying Assumption 1 and Assumption 2. We set $N_{1}$ as
the sample size threshold for training and $N_{2}$ as the size of the fake
data generation, while $\alpha$ is a proportion parameter controlling the
weights with which different $\mathbf{s}_{t,T}$ contribute to the generation.
Algorithm 1 then illustrates how to use GAN-MC for option pricing.
Input: $\mathbf{s}_{1,n},r,X,T_{0},T,N_{1},N_{2},\alpha$
Output: Estimated call or put option price $\widehat{C}$ or $\widehat{P}$
1 for _$d=1,2,\cdots,T$_ do
2 Partition the stock price data $\mathbf{s}_{1,n}$ into a training set
$\mathcal{S}^{n}_{d,T}$ according to (1);
3 Train GAN on training set $\mathcal{S}^{n}_{d,T}$ and check training loss;
4 if _loss does not collapse and $\left|\mathcal{S}^{n}_{d,T}\right|\geq
N_{1}$_ then
5 break
6
7 end for
8for _$i=1,2,\cdots,N_{2}$_ do
9 Generate random noise $\mathbf{Z}_{i}$ and denote
$\tilde{\mathbf{S}}^{(i)}_{T}=G(\mathbf{Z}_{i})=(\tilde{s}^{(i)}_{n+1},\tilde{s}^{(i)}_{n+2},\cdots,\tilde{s}^{(i)}_{n+T})$
;
10 Calculate
$\textrm{tSim}\left(\tilde{\mathbf{S}}^{(i)}_{T},\mathbf{s}_{n-T,T}\right)$
between two stock prices by (2) for each $i$;
11
12 end for
13Sort the list of similarities to form
$\pi_{\alpha}=(p_{1},p_{2},\cdots,p_{N_{2}})$ such that
$\textrm{tSim}\left(\tilde{\mathbf{S}}^{(p_{i})}_{T},\mathbf{s}_{n-T,T}\right)\leq\textrm{tSim}\left(\tilde{\mathbf{S}}^{(p_{j})}_{T},\mathbf{s}_{n-T,T}\right)$
for every $1\leq i\leq j\leq N_{2}$, take
$\mathcal{I}_{\alpha}=\\{p_{i}:i\geq\lceil\alpha N_{2}\rceil\\}$ ;
14 Calculate $\widehat{C}$ or $\widehat{P}$ accordingly;
return _$\widehat{C}$ or $\widehat{P}$_
Algorithm 1 GAN-MC for option pricing
Given the historical stock price data $\mathbf{s}_{1,n}$, we first need to
separate the data into realizations of $\mathbf{S}_{T}$ to construct a
training set for the subsequent steps. The training set
$\mathcal{S}^{n}_{d,T}$ with sliding-window length $d$ and given $T$ is
defined as
$\mathcal{S}^{n}_{d,T}=\left\\{\mathbf{s}_{1,T},\mathbf{s}_{1+d,T},\cdots,\mathbf{s}_{1+\lfloor\frac{n-T}{d}\rfloor
d,T}\right\\}$ (1)
Obviously, when $d$ equals $0$, all the realizations are identical and the
training process of the GAN fails and collapses. As $d$ increases, the overlap
between different training samples becomes smaller, which makes it much harder
for the generator to deceive the discriminator; but the size of the training
set shrinks at the same time. This trade-off is handled during the iteration:
we start from a small $d$ and keep checking the training loss of the GAN, and
if the loss does not collapse and the training set is not too small, we keep
the GAN model for the subsequent generation.
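The partition (1) can be sketched as follows (0-indexed arrays; the helper name `partition` is ours for illustration):

```python
import numpy as np

def partition(prices, d, T):
    """Build the training set S^n_{d,T} of equation (1): windows
    s_{t,T} = (s_t, ..., s_{t+T-1}) with 1-indexed starts
    t = 1, 1+d, ..., 1 + floor((n-T)/d) * d."""
    n = len(prices)
    starts = range(0, ((n - T) // d) * d + 1, d)   # 0-indexed window starts
    return np.stack([prices[t:t + T] for t in starts])

prices = np.arange(1.0, 11.0)     # s_1, ..., s_10 for illustration
S = partition(prices, d=3, T=4)   # windows starting at s_1, s_4, s_7
```

Here `S` has shape `(3, 4)`: three overlapping windows of length $T=4$.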
Following the original GAN formulation [8], the 1-D optimization problem here
can be stated as
$\min_{G}\max_{D}\mathbb{E}_{\mathbf{s}_{t,T}\sim p(\mathbf{S}_{T})}\left[\log
D(\mathbf{s}_{t,T})\right]+\mathbb{E}_{\mathbf{Z}\sim
p(\mathbf{Z})}\left[\log(1-D(G(\mathbf{Z})))\right]$
where $p(\mathbf{S}_{T})$ is the probability distribution of $\mathbf{S}_{T}$
and $p(\mathbf{Z})$ is the probability distribution of the noise $\mathbf{Z}$.
We build both the generator and the discriminator as fully connected networks
(FCNNs) [20]. Compared with a convolutional neural network, dense layers
better capture the structural information in 1-dimensional data.
After training, the generated data
$\\{\tilde{\mathbf{S}}^{(i)}_{T}\\}_{i=1}^{N_{2}}$ can be treated as fake
stock prices following the distribution of $\mathbf{S}_{T}$. Under
Assumption 1, these fake stock prices can be treated as predictions for the
following $T$ days, so we may write
$\tilde{\mathbf{S}}^{(i)}_{T}=(\tilde{s}^{(i)}_{n+1},\tilde{s}^{(i)}_{n+2},\cdots,\tilde{s}^{(i)}_{n+T})$.
However, real-world stock prices exhibit endogenous variation, which makes
Assumption 1 hard to satisfy completely. The price of an option should clearly
rely much more on recent stock prices, with older prices contributing less.
Similar to the work of Cassisi [21], we define the similarity of two time
series $X=(x_{1},x_{2},\cdots,x_{n}),Y=(y_{1},y_{2},\cdots,y_{n})$ by
$\textrm{tSim}\left(X,Y\right)=\frac{1}{n}\sum_{i=1}^{n}\left(1-\frac{|x_{i}-y_{i}|}{|x_{i}|+|y_{i}|}\right)$
(2)
and we calculate
$\textrm{tSim}\left(\tilde{\mathbf{S}}^{(i)}_{T},\mathbf{s}_{n-T,T}\right)$
for each $i$. Some of the generated fake stock prices are similar to the
recent stock trend, while others look completely different and are closer to
earlier stock data. We therefore rank the list of similarities to form a
unique order $\pi_{\alpha}=(p_{1},p_{2},\cdots,p_{N_{2}})$ in which the
similarities are increasing: for every $1\leq i\leq j\leq N_{2}$ we have
$\textrm{tSim}(\tilde{\mathbf{S}}^{(p_{i})}_{T},\mathbf{s}_{n-T,T})\leq\textrm{tSim}(\tilde{\mathbf{S}}^{(p_{j})}_{T},\mathbf{s}_{n-T,T})$.
For a proportion parameter $\alpha\in(0,1)$ we take the generated fake stock
prices closest to the recent market trend,
$\mathcal{I}_{\alpha}=\\{p_{i}:i\geq\lceil\alpha N_{2}\rceil\\}$, as the
candidate sample index set for Monte Carlo.
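The similarity (2) and the selection of $\mathcal{I}_{\alpha}$ can be sketched as follows (function names `t_sim` and `candidate_indices` are ours for illustration):

```python
import numpy as np

def t_sim(x, y):
    """Similarity of equation (2); equals 1 for identical positive series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.mean(1.0 - np.abs(x - y) / (np.abs(x) + np.abs(y))))

def candidate_indices(fakes, recent, alpha):
    """Index set I_alpha: generated paths whose rank in the increasing
    similarity order pi_alpha is at least ceil(alpha * N2)."""
    sims = [t_sim(f, recent) for f in fakes]
    order = np.argsort(sims)                   # p_1, ..., p_{N2}, increasing
    n2 = len(fakes)
    return order[int(np.ceil(alpha * n2)) - 1:]

fakes = [[1.0, 2.0], [1.1, 2.1], [5.0, 5.0]]
idx = candidate_indices(fakes, recent=[1.0, 2.0], alpha=0.5)
```

With $\alpha=0.5$ and $N_2=3$, the two paths closest to the recent window are kept.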
The basic theory of option pricing relies on risk-neutral valuation. According
to the original Monte Carlo method for European option pricing [14], the
contract holder can purchase the stock at the future date $\frac{T_{0}}{\Delta
t}$ at a price $X$ agreed upon in the contract. The payoff function of a call
option is $f(S)=\max(S-X,0)$, where $S$ is the stock price at the expiration
date. We require that the expected investment payoff equal the compound total
return obtained by investing the option premium $C$; for a European call
option
$\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}f(\tilde{s}^{(i)}_{n+\frac{T_{0}}{\Delta
t}})=(1+r\Delta t)^{\frac{T_{0}}{\Delta t}}C$
and solving this equation gives the GAN-based Monte Carlo estimator for the
call option
$\widehat{C}=\left(1+r\Delta t\right)^{-\frac{T_{0}}{\Delta
t}}\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}\max\left(\tilde{s}^{(i)}_{n+\frac{T_{0}}{\Delta
t}}-X,0\right)$ (3)
Similarly, the only difference for put option pricing lies in the payoff
function. A put option is a contract giving the option buyer the right to sell
a specified amount of an underlying security, so the payoff function for a put
option is $f(S)=\max(X-S,0)$. We obtain the GAN-based Monte Carlo estimator
for the European put option
$\widehat{P}=\left(1+r\Delta t\right)^{-\frac{T_{0}}{\Delta
t}}\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}\max\left(X-\tilde{s}^{(i)}_{n+\frac{T_{0}}{\Delta
t}},0\right)$ (4)
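Estimators (3) and (4) can be sketched as follows (the helper name `european_prices` and the toy inputs are ours for illustration):

```python
import numpy as np

def european_prices(fakes, idx, X, r, dt, T0):
    """GAN-MC estimators (3) and (4): discounted average payoff over the
    candidate paths, evaluated at the expiration step T0 / dt.
    `fakes` is an (N2, T) array of generated future prices; `idx` is the
    candidate index set I_alpha."""
    k = int(round(T0 / dt)) - 1                  # 0-indexed column for s_{n + T0/dt}
    s_exp = fakes[np.asarray(idx), k]            # prices at expiration
    discount = (1.0 + r * dt) ** (-T0 / dt)
    call = discount * float(np.mean(np.maximum(s_exp - X, 0.0)))
    put = discount * float(np.mean(np.maximum(X - s_exp, 0.0)))
    return call, put

fakes = np.array([[10.0], [12.0], [8.0]])        # three one-step paths
call, put = european_prices(fakes, [0, 1, 2], X=9.0, r=0.0, dt=1.0, T0=1.0)
```

With zero rate the call estimate is the plain average payoff $\frac{1}{3}(1+3+0)=\frac{4}{3}$ and the put estimate is $\frac{1}{3}$.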
For an American option, since the contract allows holders to exercise their
right at any time up to and including the expiration date, the equation for
the call option becomes
$C\leq\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}f(\tilde{s}^{(i)}_{n+\frac{T_{0}}{\Delta
t}})\leq(1+r\Delta t)^{\frac{T_{0}}{\Delta t}}C$
and we obtain lower and upper bounds for the American call option
$\left(1+r\Delta t\right)^{-\frac{T_{0}}{\Delta
t}}\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}\max\left(\tilde{s}^{(i)}_{n+\frac{T_{0}}{\Delta
t}}-X,0\right)\leq\widehat{C}\leq\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}\max\left(\tilde{s}^{(i)}_{n+\frac{T_{0}}{\Delta
t}}-X,0\right)$
and similarly for the American put option
$\left(1+r\Delta t\right)^{-\frac{T_{0}}{\Delta
t}}\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}\max\left(X-\tilde{s}^{(i)}_{n+\frac{T_{0}}{\Delta
t}},0\right)\leq\widehat{P}\leq\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}\max\left(X-\tilde{s}^{(i)}_{n+\frac{T_{0}}{\Delta
t}},0\right)$
Having obtained the lower and upper bounds, we take their average as the final
pricing formula for the American option.
One significant advantage of Monte Carlo estimation is that the variance of
the statistic decreases as the sample size increases. Lower variance means
lower risk for investors. This advantage can be stated as the following
theorem.
###### Theorem 1.
Given $r,T_{0},\alpha,X$ fixed, $\textrm{Var}(\widehat{C})$ and
$\textrm{Var}(\widehat{P})$ do not increase as $N_{2}$ increases.
###### Proof.
We design the generator as a fully connected network consisting of dense and
activation layers, so $G$ is a continuous function. The noise samples
$\\{\mathbf{Z}_{1},\mathbf{Z}_{2},\cdots,\mathbf{Z}_{N_{2}}\\}$ are generated
as independent and identically distributed random variables. If we denote by
$G(\mathbf{Z}_{i})_{j}$ the $j$-th element of $G(\mathbf{Z}_{i})$, then the
random variables $\max(G(\mathbf{Z}_{i})_{n+\frac{T_{0}}{\Delta t}}-X,0)$ and
$\max(X-G(\mathbf{Z}_{i})_{n+\frac{T_{0}}{\Delta t}},0)$ are continuous
functions of the independent variables $\\{\mathbf{Z}_{i}\\}_{i=1}^{N_{2}}$.
Therefore $\\{\max(G(\mathbf{Z}_{i})_{n+\frac{T_{0}}{\Delta
t}}-X,0)\\}_{i=1}^{N_{2}}$ and
$\\{\max(X-G(\mathbf{Z}_{i})_{n+\frac{T_{0}}{\Delta t}},0)\\}_{i=1}^{N_{2}}$
are i.i.d. random variables. Since $\mathcal{I}_{\alpha}\subseteq[N_{2}]$,
setting
$\sigma^{2}_{1}\coloneqq\textrm{Var}(\max(G(\mathbf{Z}_{i})_{n+\frac{T_{0}}{\Delta
t}}-X,0))$ and
$\sigma^{2}_{2}\coloneqq\textrm{Var}(\max(X-G(\mathbf{Z}_{i})_{n+\frac{T_{0}}{\Delta
t}},0))$, the variances of the estimated call and put prices satisfy
$\textrm{Var}(\widehat{C})\propto\frac{\sigma^{2}_{1}}{|\mathcal{I}_{\alpha}|}\quad\textrm{Var}(\widehat{P})\propto\frac{\sigma^{2}_{2}}{|\mathcal{I}_{\alpha}|}$
As $N_{2}$ increases, the size of the set $\mathcal{I}_{\alpha}$ increases or
stays the same. So if we keep $r,T_{0},\alpha,X$ unchanged,
$\textrm{Var}(\widehat{C})$ and $\textrm{Var}(\widehat{P})$ decrease or stay
the same. ∎
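Theorem 1 can also be illustrated empirically. The sketch below replaces the trained generator with a lognormal sampler (our own stand-in, not the paper's model) and keeps a fixed fraction of the draws in place of $\mathcal{I}_{\alpha}$; the variance of the call estimator shrinks as $N_{2}$ grows:

```python
import numpy as np

def call_estimate(rng, n2, X=100.0, alpha=0.2):
    """One draw of estimator (3) with r = 0: average payoff over the kept
    fraction (1 - alpha) of n2 simulated terminal prices. A lognormal
    sampler stands in for the trained GAN generator."""
    s = 100.0 * np.exp(rng.normal(0.0, 0.2, size=n2))
    kept = s[int(np.ceil(alpha * n2)) - 1:]    # stand-in for I_alpha
    return float(np.mean(np.maximum(kept - X, 0.0)))

rng = np.random.default_rng(1)
var_small = np.var([call_estimate(rng, 50) for _ in range(2000)])
var_large = np.var([call_estimate(rng, 500) for _ in range(2000)])
```

With roughly ten times as many kept samples, `var_large` comes out well below `var_small`, matching the $\sigma_{1}^{2}/|\mathcal{I}_{\alpha}|$ scaling in the proof.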
### 4.2 GAN-MC for Forward and Futures Pricing
A forward contract is a customized contract between two parties to buy or sell
an asset at a specified price on a future date, and a futures contract is a
standardized legal contract to buy or sell the underlying asset at a
predetermined price for delivery at a specified future time. A forward
contract is similar to a futures contract, but settlement of a forward takes
place at the end of the contract, unlike a futures contract, which settles on
a daily basis. The underlying asset transacted is usually a commodity or
financial instrument. Based on different commodities, securities, currencies
or intangibles such as interest rates and stock indexes, forwards and futures
can be categorized into markets such as the foreign exchange, bond, equity and
commodity markets. Here we focus our pricing model on the equity and commodity
markets. The valuation of an equity forward or futures originates from a
single stock, a customized basket of stocks or a stock index, while the
valuation of a commodity forward or futures depends on the cost of carry
during the interim before delivery.
#### 4.2.1 Equity Market
Similar to the setup in Section 4.1, we denote the underlying stock or index
price as $\mathbf{s}_{1,n}=(s_{1},s_{2},\cdots,s_{n})$, the value of the
equity forward or futures as $F^{\textrm{eq}}$, the historical data set of
annual dividends per share of this stock as $\\{D(t)\\}_{t=1}^{n}$, the
proportion of a year before the delivery date as $T_{0}$, and the time unit as
$\Delta t$. The other parameters $r,T,N_{1},N_{2},\alpha$ are defined as in
Section 4.1. Algorithm 2 applies GAN-MC to equity forward or futures pricing.
Input: $\mathbf{s}_{1,n},r,\\{D(t)\\}_{t=1}^{n},T_{0},T,N_{1},N_{2},\alpha$
Output: Estimated equity futures or forward price $\widehat{F}^{\textrm{eq}}$
1 for _$d=1,2,\cdots,T$_ do
2 Partition the stock price data $\mathbf{s}_{1,n}$ into a training set
$\mathcal{S}^{n}_{d,T}$ according to (1);
3 Train GAN on training set $\mathcal{S}^{n}_{d,T}$ and check training loss;
4 if _loss does not collapse and $\left|\mathcal{S}^{n}_{d,T}\right|\geq
N_{1}$_ then
5 break
6
7 end for
8for _$i=1,2,\cdots,N_{2}$_ do
9 Generate random noise $\mathbf{Z}_{i}$ and denote
$\tilde{\mathbf{S}}^{(i)}_{T}=G(\mathbf{Z}_{i})=(\tilde{s}^{(i)}_{n+1},\tilde{s}^{(i)}_{n+2},\cdots,\tilde{s}^{(i)}_{n+T})$
;
10 Calculate
$\textrm{tSim}\left(\tilde{\mathbf{S}}^{(i)}_{T},\mathbf{s}_{n-T,T}\right)$
between two stock or index prices by (2) for each $i$;
11
12 end for
13Sort the list of similarities to form
$\pi_{\alpha}=(p_{1},p_{2},\cdots,p_{N_{2}})$ such that
$\textrm{tSim}\left(\tilde{\mathbf{S}}^{(p_{i})}_{T},\mathbf{s}_{n-T,T}\right)\leq\textrm{tSim}\left(\tilde{\mathbf{S}}^{(p_{j})}_{T},\mathbf{s}_{n-T,T}\right)$
for every $1\leq i\leq j\leq N_{2}$, take
$\mathcal{I}_{\alpha}=\\{p_{i}:i\geq\lceil\alpha N_{2}\rceil\\}$ ;
14 Fit a linear model $D(t)=at+b+\epsilon_{t}$ on set $\\{D(t)\\}_{t=1}^{n}$
where $\\{\epsilon_{t}\\}_{t=1}^{n}$ is the set of random noise and predict
the annual dividend per share by $\widehat{D}(n+\frac{T_{0}}{\Delta
t})=\hat{a}(n+\frac{T_{0}}{\Delta t})+\hat{b}$;
Calculate
$\widehat{F}^{\textrm{eq}}=s_{n}\cdot\exp{\left\\{\left(r-\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}\frac{\widehat{D}(n+\frac{T_{0}}{\Delta
t})}{\tilde{s}^{(i)}_{n+\frac{T_{0}}{\Delta t}}}\right)T_{0}\right\\}}$ (5)
return _$\widehat{F}^{\textrm{eq}}$_
Algorithm 2 GAN-MC for equity futures/forward pricing
The first step of equity market pricing is the same as for option pricing: we
partition the stock or index data into a training set $\mathcal{S}^{n}_{d,T}$
with a proper sliding window $d$ so that the GAN trains successfully for the
given $T$. As before, the stock or index price at different times $t$
contributes differently to forward or futures pricing, and we again use the
rank of similarities between the generated fake data
$\\{\tilde{\mathbf{S}}^{(i)}_{T}\\}_{i=1}^{N_{2}}$ and $\mathbf{s}_{n-T,T}$ to
control the effect of training samples from different times. Equity forward
and futures prices are usually quoted in the same way as equity prices in the
underlying cash market. A pricing model is mainly used to calculate risk for a
futures contract, although it is used to compute both price and risk for a
forward. The theoretical value of an equity forward or futures depends on the
dividend model assumption [22]; under the dividend yield assumption, the
theoretical equity forward or futures price is given by
$F^{\textrm{eq}}_{\tau}=s_{t}\exp{\left\\{\left(r-\frac{D(\tau)}{s_{\tau}}\right)(\tau-t)\right\\}}$
(6)
where $F^{\textrm{eq}}_{\tau}$ denotes the forward or futures price at the
delivery date $\tau$, $s_{t}$ and $s_{\tau}$ denote the stock or index price
at times $t$ and $\tau$, and $D(\tau)$ is the annual dividend per share at
time $\tau$.
Given the historical annual dividend per share data $\\{D(t)\\}_{t=1}^{n}$, we
need to predict the annual dividend per share at time $n+\frac{T_{0}}{\Delta
t}$, the delivery date of the equity forward or futures. Many prediction
methods exist; here we simply consider the linear model
$D(t)=at+b+\epsilon_{t}\quad\epsilon_{t}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,\sigma^{2})\quad
t\in[n]$
where $\epsilon_{t}$ denotes random noise. Least squares estimation gives
$\hat{a}=\frac{\sum_{t=1}^{n}(t-\bar{t})(D(t)-\overline{D(t)})}{\sum_{t=1}^{n}(t-\bar{t})^{2}}$
and $\hat{b}=\overline{D(t)}-\hat{a}\bar{t}$, where
$\bar{t}=\frac{1}{n}\sum_{t=1}^{n}t$ and
$\overline{D(t)}=\frac{1}{n}\sum_{t=1}^{n}D(t)$. Therefore we predict the
annual dividend per share at the delivery date as
$\widehat{D}(n+\frac{T_{0}}{\Delta t})=\hat{a}(n+\frac{T_{0}}{\Delta
t})+\hat{b}$.
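The least-squares fit and extrapolation can be sketched as follows (the helper name `predict_dividend` is ours for illustration):

```python
import numpy as np

def predict_dividend(D, steps_ahead):
    """Least-squares fit of D(t) = a*t + b on t = 1..n, then extrapolation
    to t = n + steps_ahead, as in step 14 of Algorithm 2."""
    D = np.asarray(D, float)
    n = len(D)
    t = np.arange(1, n + 1, dtype=float)
    a_hat = np.sum((t - t.mean()) * (D - D.mean())) / np.sum((t - t.mean()) ** 2)
    b_hat = D.mean() - a_hat * t.mean()
    return a_hat * (n + steps_ahead) + b_hat

D = [1.0, 1.2, 1.4, 1.6]            # perfectly linear dividend history
pred = predict_dividend(D, steps_ahead=2)   # extrapolate to t = 6
```

For this linear history ($\hat{a}=0.2$, $\hat{b}=0.8$) the prediction at $t=6$ is $2.0$.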
Given the annual dividend per share, the GAN-based Monte Carlo estimate of the
dividend yield is
$\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}\frac{\widehat{D}(n+\frac{T_{0}}{\Delta
t})}{\tilde{s}^{(i)}_{n+\frac{T_{0}}{\Delta t}}}$. Substituting it into the
equity futures pricing formula (6) gives the GAN-based Monte Carlo estimation
formula (5).
As with option pricing, our method again reduces the variance of the estimated
prices.
###### Theorem 2.
Given $r,T_{0},\alpha,\\{D(t)\\}_{t=1}^{n}$ fixed,
$\textrm{Var}(\widehat{F}^{\textrm{eq}})$ does not increase as $N_{2}$
increases.
###### Proof.
We design the generator as a fully connected network consisting of dense and
activation layers, so $G$ is a continuous function. The noise samples
$\\{\mathbf{Z}_{1},\mathbf{Z}_{2},\cdots,\mathbf{Z}_{N_{2}}\\}$ are generated
as independent and identically distributed random variables, so with
$\\{D(t)\\}_{t=1}^{n}$ fixed the random variables
$\\{X_{i}=\frac{\widehat{D}(n+\frac{T_{0}}{\Delta
t})}{G(\mathbf{Z}_{i})_{n+\frac{T_{0}}{\Delta
t}}}\\}_{i\in\mathcal{I}_{\alpha}}$ are i.i.d., because each $X_{i}$ is a
continuous function of the independent variables
$\\{\mathbf{Z}_{i}\\}_{i=1}^{N_{2}}$. Denoting by $\mu_{0}$ the mean and by
$\sigma^{2}_{0}$ the variance of $X_{i}$,
$\mathbb{E}\left(\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}X_{i}\right)=\mu_{0}\quad\textrm{Var}\left(\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}X_{i}\right)=\frac{1}{|\mathcal{I}_{\alpha}|}\sigma^{2}_{0}$
As $N_{2}$ increases, $\mathcal{I}_{\alpha}\subseteq[N_{2}]$ and
$|\mathcal{I}_{\alpha}|$ increases or stays the same, so
$\text{Var}(\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}X_{i})$
does not increase. If $\mathbb{E}(\widehat{F}^{\textrm{eq}})$ remains
unchanged, $\text{Var}(\widehat{F}^{\textrm{eq}})$ decreases or stays the
same. ∎
#### 4.2.2 Commodity Market
For a commodity forward contract or futures, the underlying asset can usually
be divided into food, energy and materials. With the parameter setup of
Section 4.1, we denote the underlying commodity spot price as
$\mathbf{s}_{1,n}=(s_{1},s_{2},\cdots,s_{n})$, the average cost of carry from
time $t$ to $\tau$ as $P(t,\tau)$, the historical commodity forward or futures
prices as $\\{F^{\textrm{co}}_{t}\\}_{t=n-N_{3}}^{n}$, the proportion of a
year before the delivery date as $T_{0}$, and the time unit as $\Delta t$. The
other parameters $r,T,N_{1},N_{2},\alpha$ are defined as in Section 4.1.
Algorithm 3 applies GAN-MC to commodity forward or futures pricing.
Input:
$\mathbf{s}_{1,n},r,\\{F^{\textrm{co}}_{t}\\}_{t=n-N_{3}}^{n},T_{0},T,N_{1},N_{2},N_{3},\alpha$
Output: Estimated commodity forward or futures price
$\widehat{F}^{\textrm{co}}$
1 for _$d=1,2,\cdots,T$_ do
2 Partition the spot price data $\mathbf{s}_{1,n}$ into a training set
$\mathcal{S}^{n}_{d,T}$ according to (1);
3 Train GAN on training set $\mathcal{S}^{n}_{d,T}$ and check training loss;
4 if _loss does not collapse and $\left|\mathcal{S}^{n}_{d,T}\right|\geq
N_{1}$_ then
5 break
6
7 end for
8for _$i=1,2,\cdots,N_{2}$_ do
9 Generate random noise $\mathbf{Z}_{i}$ and denote
$\tilde{\mathbf{S}}^{(i)}_{T}=G(\mathbf{Z}_{i})=(\tilde{s}^{(i)}_{n+1},\tilde{s}^{(i)}_{n+2},\cdots,\tilde{s}^{(i)}_{n+T})$
;
10 Calculate
$\textrm{tSim}\left(\tilde{\mathbf{S}}^{(i)}_{T},\mathbf{s}_{n-T,T}\right)$
between two spot prices by (2) for each $i$;
11
12 end for
13Sort the list of similarities to form
$\pi_{\alpha}=(p_{1},p_{2},\cdots,p_{N_{2}})$ such that
$\textrm{tSim}\left(\tilde{\mathbf{S}}^{(p_{i})}_{T},\mathbf{s}_{n-T,T}\right)\leq\textrm{tSim}\left(\tilde{\mathbf{S}}^{(p_{j})}_{T},\mathbf{s}_{n-T,T}\right)$
for every $1\leq i\leq j\leq N_{2}$, take
$\mathcal{I}_{\alpha}=\\{p_{i}:i\geq\lceil\alpha N_{2}\rceil\\}$ ;
14 Estimate cost of carry $\widehat{P}\left(n,n+\frac{T_{0}}{\Delta t}\right)$
by formula (7);
15 Calculate $\widehat{F}^{\textrm{co}}$ from formula (8);
return _$\widehat{F}^{\textrm{co}}$_
Algorithm 3 GAN-MC for commodity futures/forward pricing
Similar to the equity pricing formula (6), the theoretical commodity forward
price is based on the current spot price plus the cost of carry during the
interim before delivery [1]. A simple commodity forward contract or futures
price can be expressed as
$F_{\tau}^{\textrm{co}}=(s_{t}+P(t,\tau))\exp{\\{r(\tau-t)\\}}$
where $F^{\textrm{co}}_{\tau}$ denotes the forward or futures price at the
delivery date $\tau$ and $s_{t}$ denotes the commodity spot price at time $t$.
If we use $s_{\tau}$ to replace the term $s_{t}\exp{\\{r(\tau-t)\\}}$, the
price of the commodity forward or futures becomes
$F_{\tau}=s_{\tau}+P(t,\tau)\exp{\\{r(\tau-t)\\}}$. Following the notation of
the previous algorithms, for pricing the commodity forward or futures at time
$n+\frac{T_{0}}{\Delta t}$, we use the empirical estimate of
$P\left(n,n+\frac{T_{0}}{\Delta t}\right)$
$\widehat{P}\left(n,n+\frac{T_{0}}{\Delta
t}\right)=\frac{1}{N_{3}+1}\sum_{j=0}^{N_{3}}\left[\frac{F^{\textrm{co}}_{n-j}}{\exp{(rT_{0})}}-s_{n-j}\right]$
(7)
where $N_{3}+1$ is the sample size for the empirical estimation. We then use
GAN-MC to estimate $\tilde{s}_{n+\frac{T_{0}}{\Delta t}}$. Analogous to the
equity futures or forward pricing formula (5), the commodity futures or
forward pricing formula is
$\widehat{F}^{\textrm{co}}=\frac{1}{|\mathcal{I}_{\alpha}|}\sum_{i\in\mathcal{I}_{\alpha}}\tilde{s}^{(i)}_{n+\frac{T_{0}}{\Delta
t}}+\widehat{P}\left(n,n+\frac{T_{0}}{\Delta t}\right)\exp{(rT_{0})}$ (8)
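Formulas (7) and (8) can be sketched in a few lines. The following is a minimal illustration with toy numbers in place of market data; the function names and inputs are ours, and the generated spots stand in for the GAN samples indexed by $\mathcal{I}_{\alpha}$:

```python
import math

def estimate_cost_of_carry(futures_hist, spot_hist, r, T0):
    """Formula (7): average, over the last N_3 + 1 days, of the discounted
    futures price minus the corresponding spot price."""
    assert len(futures_hist) == len(spot_hist)
    return sum(f / math.exp(r * T0) - s
               for f, s in zip(futures_hist, spot_hist)) / len(futures_hist)

def futures_price(generated_spots, p_hat, r, T0):
    """Formula (8): mean generated spot at delivery plus the compounded
    cost-of-carry estimate."""
    mean_spot = sum(generated_spots) / len(generated_spots)
    return mean_spot + p_hat * math.exp(r * T0)

# toy numbers, not market data
p_hat = estimate_cost_of_carry([101.0, 102.0], [100.0, 100.5], r=0.02, T0=0.25)
f_co = futures_price([103.0, 104.0, 105.0], p_hat, r=0.02, T0=0.25)
```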
Similar to the variance reduction theorem in equity forward or futures
pricing, we claim
###### Theorem 3.
Given $r,T_{0},\alpha,\\{F^{\textrm{co}}_{t}\\}_{t=n-N_{3}}^{n}$ fixed,
$\textrm{Var}(\widehat{F}^{\textrm{co}})$ does not increase as $N_{2}$
increases.
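The claim can be illustrated numerically. In the toy sketch below, i.i.d. Gaussian draws stand in for the generated prices: the empirical variance of a Monte Carlo average does not increase as the number of averaged samples grows.

```python
import random
import statistics

def mc_mean(n, rng):
    # Monte Carlo average of n i.i.d. draws; stands in for the average
    # over the index set I_alpha in the pricing estimator
    return sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n

rng = random.Random(42)
# empirical variance of the estimator for two generation sizes
var_small = statistics.variance([mc_mean(10, rng) for _ in range(2000)])
var_large = statistics.variance([mc_mean(100, rng) for _ in range(2000)])
```

With 2000 repetitions, `var_large` comes out roughly ten times smaller than `var_small`, consistent with the $1/N$ scaling of the variance of a mean of independent draws.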
## 5 Experiments
### 5.1 Stock and Index Price Prediction
The accuracy of Monte Carlo pricing models depends strongly on the accuracy
and variation of the predictions of the underlying assets. Equations (3), (4),
(5) and (8) show that the pricing results are directly tied to the generated
future stock prices. Thus, before testing our pricing models on real-world
market data, we first inspect the generated stock price tracks after GAN
training.
Figure 2: Stock and index prediction for the following 80 days. The upper
figure shows prediction results for TSLA: the blue line denotes the real stock
price of TSLA from 11/16/2021 to 3/11/2022, and the red and green dotted lines
denote two generated samples from the generator $G$. The lower figure shows
prediction results for the S&P 500 index: the blue line denotes the real index
price from 2/9/2022 to 6/3/2022, and the red and green dotted lines indicate
two generated samples from the generator $G$.
We collect TSLA historical daily stock prices from 3/22/2019 to 4/11/2022 and
S&P 500 index prices from 6/14/2019 to 7/6/2022 for training and prediction.
Excluding all holidays and weekends, there are 771 daily observations in
total. We set $n$ in $\mathbf{s}_{1,n}$ to 670, the dimension of the generator
output $T$ to 128, the sample size threshold $N_{1}$ to 270, the size of the
generation $N_{2}$ to $1600$ and the proportion parameter $\alpha$ to $0.8$.
All the financial data is collected from the Bloomberg database.
We follow the fake price generation procedures of Algorithms 1 and 2. We then
form the index set $\mathcal{I}_{\alpha}$ and pick two generated price tracks
as predictions to compare with real market data. As shown in Fig. 2, the red
and green dotted lines indicate two generated samples
$G(\mathbf{Z}_{i}),G(\mathbf{Z}_{j})$ where $i,j\in\mathcal{I}_{\alpha}$.
Overall, the generated stock tracks are more turbulent than the real stock
prices, with sharp decreases and increases between consecutive days. In the
TSLA stock price prediction, the generated samples are distributed around the
real stock prices, and for pricing, the variation between different
generations makes the Monte Carlo estimation more accurate. An interesting
observation in the S&P 500 index price prediction is that the two generated
tracks share the same trend, i.e., they have a large linear correlation. The
trends are similar, but the predicted index prices are not exactly the same on
each day. The prediction experiments show that our GAN can generate suitable
samples for pricing.
### 5.2 Option Pricing
Before showing our experiments on option pricing, we first introduce other
basic models for option pricing.
#### 5.2.1 Black–Scholes
The Black-Scholes model, also known as the Black-Scholes-Merton (BSM) model,
estimates the theoretical value of derivatives based on other investment
instruments, taking into account the impact of time and other risk factors.
Proposed by Black and Scholes [23], the Black-Scholes model for option pricing
assumes that no dividends are paid out during the life of the option, that
markets are random, that there are no transaction costs in buying the option,
that the risk-free rate and volatility of the underlying asset are known and
constant, that the returns of the underlying asset are normally distributed,
and that the option can only be exercised at expiration. Denote by $C$ the
call option price and by $N(x)$ the standard normal cumulative distribution
function, $N(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-z^{2}/2}dz$.
Further denote by $X$ the strike price, $s_{t}$ the spot price at time $t$,
$T_{1}$ the date of option expiration, $r$ the annual risk-free interest rate,
and $\sigma$ the standard deviation of the stock's returns, known as the
volatility. Then the formula for the call option is
$C=N(d_{1})s_{t}-N(d_{2})Xe^{-r(T_{1}-t)}$
where
$\displaystyle d_{1}$
$\displaystyle=\frac{1}{\sigma\sqrt{T_{1}-t}}\left[\log\left(\frac{s_{t}}{X}\right)+\left(r+\frac{\sigma^{2}}{2}\right)(T_{1}-t)\right]$
$\displaystyle d_{2}$ $\displaystyle=d_{1}-\sigma\sqrt{T_{1}-t}$
The formula for the put option is
$P=N(-d_{2})Xe^{-r(T_{1}-t)}-N(-d_{1})s_{t}$
where $d_{1},d_{2}$ are defined as in the call option formula.
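The call and put formulas above can be implemented directly; the following is a standard self-contained sketch (not the Bloomberg tool used in the experiments), using an erf-based normal CDF:

```python
import math

def _norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(s, X, r, sigma, tau, kind="call"):
    """Black-Scholes price of a European option; tau = T_1 - t in years."""
    d1 = (math.log(s / X) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    if kind == "call":
        return _norm_cdf(d1) * s - _norm_cdf(d2) * X * math.exp(-r * tau)
    return _norm_cdf(-d2) * X * math.exp(-r * tau) - _norm_cdf(-d1) * s

# at-the-money example: s = X = 100, r = 5%, sigma = 20%, one year to expiry
call = black_scholes(100.0, 100.0, 0.05, 0.2, 1.0, "call")
put = black_scholes(100.0, 100.0, 0.05, 0.2, 1.0, "put")
```

The two prices satisfy put-call parity, $C-P=s_{t}-Xe^{-r(T_{1}-t)}$, which is a quick sanity check on any implementation.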
In implementing the Black-Scholes formula, we use the pricing tools in
Bloomberg, where the volatility is calculated automatically from market data.
#### 5.2.2 Radial Basis Function Network
As mentioned in the work by Hutchinson and Poggio [7], the Radial Basis
Function (RBF) network can be used for data fitting [24]. If we replace the
Euclidean norm with a matrix norm, the fitting process can be expressed as
$\hat{f}(x)=\sum_{i=1}^{k}c_{i}h_{i}((x-z_{i})^{\top}W^{\top}W(x-z_{i}))+\alpha_{0}+\alpha_{1}^{\top}x$
where $W,c_{i},z_{i},\alpha_{0},\alpha_{1}$ are parameters to be optimized and
$h_{i}$ is the basis function, which can be either Gaussian or multiquadric.
In the pricing model we add one more sigmoid layer as the output. The
augmented network has the form $g(\hat{f}(x))$ where $g(u)=1/(1+e^{-u})$.
In implementing the RBF network, we use the multiquadric basis function and
set the model input to $\hat{g}(s/X,1,T_{1}-t)$, where $s$ is the stock price,
$X$ is the strike price and $T_{1}-t$ is the time until maturity.
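A minimal forward pass of the RBF fit with the matrix norm and the sigmoid output layer might look as follows; the array shapes, seed, and random parameters are illustrative assumptions, not fitted values from the paper:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def rbf_forward(x, W, Z, c, a0, a1, basis="multiquadric"):
    """f_hat(x) = sum_i c_i h_i((x - z_i)^T W^T W (x - z_i)) + a0 + a1^T x.
    Shapes: x (d,), W (d, d), centers Z (k, d), c (k,), a1 (d,)."""
    proj = (x - Z) @ W.T                    # rows are W (x - z_i), shape (k, d)
    q = np.sum(proj * proj, axis=1)         # squared matrix norm per center
    h = np.sqrt(1.0 + q) if basis == "multiquadric" else np.exp(-q)
    return float(c @ h + a0 + a1 @ x)

rng = np.random.default_rng(0)
d, k = 3, 4
x = rng.normal(size=d)
# pricing output g(f_hat(x)) with the extra sigmoid layer
out = sigmoid(rbf_forward(x, np.eye(d), rng.normal(size=(k, d)),
                          rng.normal(size=k), 0.0, np.zeros(d)))
```

In a real fit, $W$, the centers $z_i$, and the linear terms would be optimized on the training data; here they are random only to show the shapes.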
#### 5.2.3 Multilayer Perceptrons Regression
Also discussed in the paper by Hutchinson and Poggio [7], multilayer
perceptrons (MLPs) [25] are classical methods for high-dimensional regression.
Consisting of fully connected networks, a general MLP with univariate output
can be written as
$\hat{f}(x)=h\left(\sum_{i=1}^{k}\delta_{i}h(\beta_{0i}+\beta_{1i}^{\top}x)+\delta_{0}\right)$
where $h(\cdot)$ is the sigmoid function and
$\delta_{i},\delta_{0},\beta_{0i},\beta_{1i}$ are parameters to be optimized.
Unlike the RBF network, the nonlinear function $h$ in an MLP is usually fixed
for the entire network.
In implementing MLP regression, we set the model input to
$\hat{f}(s/X,1,T_{1}-t)$, where $s$ is the stock price, $X$ is the strike
price and $T_{1}-t$ is the time until maturity.
#### 5.2.4 Projection Pursuit Regression
Also mentioned in the work of Hutchinson and Poggio [7], projection pursuit
regression (PPR) [26] was developed for high-dimensional regression. PPR
models project the data and estimate nonlinear combining functions from the
data. The regression model can be stated as
$\hat{f}(x)=\sum_{i=1}^{k}\delta_{i}h_{i}(\beta_{i}^{\top}x)+\delta_{0}$
where the $h_{i}$ are functions estimated from the data,
$\delta_{i},\delta_{0},\beta_{i}$ are parameters to be optimized, and $k$ is
the number of projections.
In implementing PPR, we set the model input to $\hat{f}(s/X,1,T_{1}-t)$, where
$s$ is the stock price, $X$ is the strike price and $T_{1}-t$ is the time
until maturity.
#### 5.2.5 Monte Carlo
There are many Monte Carlo methods for option pricing; the main difference
lies in the assumptions on stock price generation. Here we adapt the method of
Brewer et al. [27]. We assume that the stock price follows a geometric
Brownian motion. If we denote the stock price by $s$ and time by $t$, then
$ds=\mu sdt+\sigma sdz\quad dz=\epsilon\sqrt{dt}$
where $\mu$ is the drift parameter, $\sigma$ is the volatility parameter, and
$\epsilon$ is a random draw from the standard normal distribution. Given the
stock price at time $t$, we relate the drift parameter to the expected stock
price and the exercise price, $\mu=\frac{1}{T_{1}-t}\log(X/s_{t})$, where
$T_{1}$ is the date of option expiration. Following the stochastic process, we
generate $N_{2}$ different stock price tracks
$(s^{(i)}_{t},\tilde{s}^{(i)}_{t+1},\cdots,\tilde{s}^{(i)}_{T_{1}})_{i=1}^{N_{2}}$
for pricing. If we denote the time unit by $\Delta t$ and the annual risk-free
interest rate by $r$, the estimated prices are $\widehat{C}=(1+r\Delta
t)^{T_{1}-t}\frac{1}{N_{2}}\sum_{i=1}^{N_{2}}\max(\tilde{s}^{(i)}_{T_{1}}-X,0)$
and $\widehat{P}=(1+r\Delta
t)^{T_{1}-t}\frac{1}{N_{2}}\sum_{i=1}^{N_{2}}\max(X-\tilde{s}^{(i)}_{T_{1}},0)$
for European option pricing; for American option pricing we take the average
of the lower and upper bounds.
In implementing Monte Carlo, we collect the volatility parameter for each
option from Bloomberg.
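The baseline Monte Carlo method described above can be sketched as follows. One hedge: the code discounts the payoff with $(1+r\Delta t)^{-\text{steps}}$, the usual convention, rather than the compounding factor written in the text; the function and parameter names are ours, and the toy inputs are not the experimental settings:

```python
import math
import random

def gbm_mc_european(s_t, X, r, sigma, steps, n_paths, dt=1 / 252, seed=0):
    """Monte Carlo European option pricing under geometric Brownian motion,
    with the drift tied to the strike: mu = log(X / s_t) / (steps * dt)."""
    rng = random.Random(seed)
    tau = steps * dt
    mu = math.log(X / s_t) / tau
    call_payoffs, put_payoffs = [], []
    for _ in range(n_paths):
        s = s_t
        for _ in range(steps):
            eps = rng.gauss(0.0, 1.0)          # dz = eps * sqrt(dt)
            s += mu * s * dt + sigma * s * eps * math.sqrt(dt)
        call_payoffs.append(max(s - X, 0.0))
        put_payoffs.append(max(X - s, 0.0))
    disc = (1.0 + r * dt) ** (-steps)          # discount, not compounding
    call = disc * sum(call_payoffs) / n_paths
    put = disc * sum(put_payoffs) / n_paths
    return call, put

# toy example: 60 trading days to expiry, 2000 simulated paths
c_hat, p_hat = gbm_mc_european(100.0, 105.0, 0.05, 0.2, steps=60, n_paths=2000)
```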
#### 5.2.6 Experiments
We first test our model on option pricing, covering European and American
options, both calls and puts. For American options, we collect TSLA historical
daily stock price data from 3/22/2019 to 1/27/2022 for GAN training and Monte
Carlo estimation; excluding all holidays and weekends, there are 720 daily
observations in total. We collect TSLA call option data with strike price,
last deal price and expiration date from 2/25/2022 to 3/10/2022; there are 8
different expiration dates for each day and 10 different strike prices for
each expiration date. We collect TSLA put option data from 3/21/2022 to
4/1/2022 with the same setup as for calls. Thus we have 800 option data points
each for calls and puts; we use 720 for training the RBF network, MLP
regression, PPR and linear regression, and the rest for testing. We set the
dimension of the generator $T$ to 128, the sample size threshold $N_{1}$ to
290, the size of the generation $N_{2}$ to 5120 and the proportion parameter
$\alpha$ to 0.8. We collect the annual risk-free interest rate from Bloomberg.
We use the mean absolute percentage error (MAPE) as the metric for model
evaluation:
$\text{MAPE}=\frac{100\%}{N_{\text{test}}}\sum_{j=1}^{N_{\text{test}}}\frac{|\widehat{V}-V|}{V}$
where $\widehat{V}$ is the predicted call or put option price, $V$ is the real
option price and $N_{\text{test}}$ is the size of the test set.
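The MAPE metric is a one-liner; for concreteness:

```python
def mape(predicted, actual):
    """Mean absolute percentage error, in percent."""
    assert len(predicted) == len(actual) > 0
    return 100.0 * sum(abs(p - a) / a
                       for p, a in zip(predicted, actual)) / len(actual)

# 10% over- and under-prediction on two prices of 10 gives a MAPE of 10%
err = mape([11.0, 9.0], [10.0, 10.0])
```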
| Model | MAPE |
|---|---|
| GAN-MC | 1.42% |
| MC | 2.55% |
| BS | 1.33% |
| RBF Network | 8.75% |
| MLP Regression | 6.42% |
| PPR | 4.52% |
| LR | 10.3% |
| LR-ITM | 10.88% |
| LR-OTM | 10.55% |

| Model | MAPE |
|---|---|
| *GAN-MC | 1.02% |
| MC | 1.91% |
| BS | 2.82% |
| RBF Network | 12.1% |
| MLP Regression | 9.46% |
| PPR | 10.69% |
| LR | 15.32% |
| LR-ITM | 12.59% |
| LR-OTM | 17.25% |
Table 1: Performance table for TSLA option pricing. Here GAN-MC denotes our
model, MC denotes Monte Carlo estimation, BS abbreviates the Black-Scholes
model, LR is the linear model using all the option data, LR-ITM is the
In-the-Money linear model and LR-OTM is the Out-of-the-Money linear model. The
left table reports performance on call options and the right table on put
options.
As seen from Table 1, our model's performance is close to the Black-Scholes
model on call option pricing, and our model reaches state-of-the-art for TSLA
put option pricing. For a single price prediction, a smaller variance of
$\widehat{C}$ or $\widehat{P}$ makes the prediction more accurate, so we set
$N_{2}$ to a relatively large number to make the Monte Carlo estimation more
accurate. In both cases our GAN-MC performs better than the MC-only model,
which indicates that the GAN has better generation capacity for real market
data. As for the three non-parametric learning models, the RBF network, MLP
regression and PPR, the performances on TSLA call and put options are quite
similar, owing to the equivalence of the three representations.
Apart from common stock prices, our model also works on index prices. For
European options, we test our model on S&P 500 index option prices. Similarly
to TSLA, for call option pricing we collect S&P 500 historical daily index
price data from 6/14/2019 to 1/10/2022 for GAN training and Monte Carlo
estimation; excluding all holidays and weekends, there are 650 daily
observations in total. We collect S&P 500 call option (SPXW) data with strike
price, last deal price and expiration date from 4/6/2022 to 4/28/2022; there
are 8 different expiration dates mixed with call option types for each day and
10 different strike prices for each expiration date. After removing the SPX
part, we have 700 S&P 500 call option data points. We use 650 for training and
the rest for testing. For put options, we collect S&P 500 historical daily
index price data from 1/6/2020 to 9/30/2022 for GAN training and Monte Carlo
estimation; excluding all holidays and weekends, there are 690 daily
observations in total. We collect S&P 500 put option (SPXW) data with strike
price, last deal price and expiration date from 10/7/2022 to 10/28/2022, and
we use 690 data points for training and the rest for testing.
| Model | MAPE |
|---|---|
| *GAN-MC | 4.50% |
| MC | 7.27% |
| BS | 19.96% |
| RBF Network | 17.00% |
| MLP Regression | 14.2% |
| PPR | 10.40% |
| LR | 6.72% |
| LR-ITM | 6.52% |
| LR-OTM | 7.82% |

| Model | MAPE |
|---|---|
| *GAN-MC | 2.68% |
| MC | 20.61% |
| BS | 8.20% |
| RBF Network | 16.83% |
| MLP Regression | 12.28% |
| PPR | 19.97% |
| LR | 10.58% |
| LR-ITM | 10.39% |
| LR-OTM | 11.04% |
Table 2: Performance table for S&P 500 Weeklys (SPXW) option pricing. The left
table reports performance on call options and the right table on put options.
As seen from Table 2, our model performs best among all the pricing models.
Because the last deal price of SPXW sometimes lies outside the range of the
bid and ask prices, Black-Scholes performs poorly on SPXW pricing. The linear
trend also appears significant in SPXW, so the linear models perform better
here than on the other datasets.
### 5.3 Equity Forward or Futures Pricing
Similar to the methods used in option pricing, we conduct experiments on
equity futures pricing with GAN-MC, the RBF network, MLP regression, PPR,
linear regression and Monte Carlo. We first collect S&P 500 historical daily
index price data from 6/14/2019 to 4/21/2022 for GAN training and Monte Carlo
estimation; excluding all holidays and weekends, there are 720 daily
observations in total. We then collect E-Mini S&P 500 futures data, comprising
ESU22 (delivery on 9/16/2022) and ESZ22 (delivery on 12/16/2022) from
7/12/2021 to 7/6/2022. In addition, we collect the historical E-Mini S&P 500
futures ESH21 (delivery on 3/18/2022) from 1/8/2021 to 3/18/2022. All the
futures data include the last futures price, the remaining time before the
delivery date and the S&P 500 index price. We use 720 data points for training
the RBF network, MLP regression, PPR and linear regression, and the rest for
testing. Unlike in option pricing, we use the stock price $s$ and the time
until delivery $T_{1}-t$ as the input variables of the non-parametric machine
learning models for equity futures pricing.
As for Monte Carlo, following the assumption of geometric Brownian motion and
in the absence of a strike price, we estimate the drift parameter as
$\hat{\mu}=\frac{1}{n}\sum_{t=1}^{n}\frac{(ds)_{t}}{s_{t}dt}$
where $(ds)_{t}=s_{t+1}-s_{t}$, and we estimate the volatility parameter as
the historical volatility
$\hat{\sigma}=\sqrt{\frac{1}{n-1}\sum_{t=1}^{n}(R_{t}-\bar{R})^{2}}$
where $R_{t}=\log(s_{t}/s_{t-1})$ and $\bar{R}$ is the mean of the $R_{t}$.
The Monte Carlo estimate of the equity forward or futures price is then
$\widehat{F}^{\textrm{eq}}=s_{t}\cdot\exp{\left\\{\left(r-\frac{1}{N_{2}}\sum_{i=1}^{N_{2}}\frac{\widehat{D}(T_{1})}{\tilde{s}^{(i)}_{T_{1}}}\right)\frac{T_{1}-t}{\Delta
t}\right\\}}$, where $t$ is the current date, $T_{1}$ is the delivery date,
$\Delta t$ is the time unit and $\tilde{s}^{(i)}_{T_{1}}$ is the generated
stock price in the track
$(s^{(i)}_{t},\tilde{s}^{(i)}_{t+1},\cdots,\tilde{s}^{(i)}_{T_{1}})_{i=1}^{N_{2}}$.
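The drift and historical-volatility estimators above can be computed directly from a price series. A sketch follows (per-period volatility, not annualized; the function names and toy prices are ours):

```python
import math

def estimate_drift(prices, dt=1 / 252):
    """mu_hat = (1/n) * sum_t (s_{t+1} - s_t) / (s_t * dt)."""
    increments = [(prices[t + 1] - prices[t]) / (prices[t] * dt)
                  for t in range(len(prices) - 1)]
    return sum(increments) / len(increments)

def estimate_volatility(prices):
    """Historical volatility: sample std of log returns R_t = log(s_t / s_{t-1})."""
    rets = [math.log(prices[t] / prices[t - 1]) for t in range(1, len(prices))]
    mean_r = sum(rets) / len(rets)
    return math.sqrt(sum((r - mean_r) ** 2 for r in rets) / (len(rets) - 1))

prices = [100.0, 101.0, 100.5, 102.0, 101.5]  # toy daily closes
mu_hat = estimate_drift(prices)
sigma_hat = estimate_volatility(prices)
```

A constant price series gives zero drift and zero volatility, a quick sanity check on both estimators.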
We set the dimension of the generator $T$ to 128, the sample size threshold
$N_{1}$ to 290, the size of the generation $N_{2}$ to 5120 and the proportion
parameter $\alpha$ to 0.8. We collect the annual risk-free interest rate from
Bloomberg and use the mean absolute percentage error (MAPE) as the metric for
model evaluation.
| Model | MAPE |
|---|---|
| *GAN-MC | 0.03% |
| MC | 0.36% |
| RBF Network (Gauss) | 0.17% |
| RBF Network (Sqrt) | 0.31% |
| MLP Regression | 0.43% |
| PPR | 0.11% |
| LR | 0.31% |
Table 3: Performance table for E-Mini S&P 500 futures pricing. Here RBF
Network (Gauss) uses Gaussian basis functions and RBF Network (Sqrt) uses
multiquadric basis functions.
As seen from Table 3, our model again performs best on equity futures pricing.
For a single price prediction, a smaller variance of
$\widehat{F}^{\textrm{eq}}$ makes the predicted value more accurate, so we set
$N_{2}$ to a relatively large number to make the Monte Carlo estimation more
accurate. In all cases our GAN-MC performs better than the MC-only model,
which indicates that the GAN has better stability and generation capacity for
market data.
### 5.4 Commodity Forward or Futures Pricing
Quite similar to the methods used in Section 5.3, we conduct experiments on
commodity forward contract pricing with GAN-MC, the RBF network, MLP
regression, PPR, linear regression and Monte Carlo. We first collect LME
copper daily spot price data from 12/4/19 to 9/2/22 for GAN training and Monte
Carlo estimation; excluding all holidays and weekends, there are 700 daily
observations in total. We then collect LME copper 3-month rolling forward
daily price data from 10/17/18 to 9/30/22. All the forward data include the
last forward price, the remaining time before the delivery date (three months)
and the spot price. We use 700 data points for training the RBF network, MLP
regression, PPR and linear regression, and the rest for testing. We use the
spot price $s$ and the time until delivery as the input variables of the
non-parametric machine learning models for commodity forward or futures
pricing.
Similar to the method used in equity futures pricing, the Monte Carlo estimate
of the commodity forward contract or futures price is given by
$\widehat{F}^{\textrm{co}}=\frac{1}{N_{2}}\sum_{i=1}^{N_{2}}\tilde{s}^{(i)}_{T_{1}}+\widehat{P}(t,T_{1})\exp{(r(T_{1}-t))}$
where $t$ is the current date, $T_{1}$ is the settlement date, and
$\tilde{s}^{(i)}_{T_{1}}$ is the generated spot price in the track
$(s^{(i)}_{t},\tilde{s}^{(i)}_{t+1},\cdots,\tilde{s}^{(i)}_{T_{1}})_{i=1}^{N_{2}}$.
We set the dimension of the generator $T$ to 128, the sample size threshold
$N_{1}$ to 290, the size of the generation $N_{2}$ to 5120, the sample size
for estimating the cost of carry $N_{3}$ to 50 and the proportion parameter
$\alpha$ to 0.8. We collect the annual risk-free interest rate from Bloomberg
and use the mean absolute percentage error (MAPE) as the metric for model
evaluation.
| Model | MAPE |
|---|---|
| *GAN-MC | 0.08% |
| MC | 0.53% |
| RBF Network (Gauss) | 0.90% |
| RBF Network (Sqrt) | 2.33% |
| MLP Regression | 1.11% |
| PPR | 1.24% |
| LR | 1.13% |
Table 4: Performance table for LME copper 3 month forward pricing.
The results in Table 4 show that our model performs best on commodity forward
pricing. Comparing GAN-MC with MC, the better performance of our model
demonstrates the generative network's efficiency and capacity. Apart from our
model, the Monte Carlo and RBF Network (Gauss) models also perform well on LME
copper forward pricing.
Beyond commodity forward contracts, GAN-MC can also handle commodity futures.
We next test our model on crude oil futures. Similarly to copper, we collect
the Cushing, OK WTI crude oil historical daily spot price from 7/11/2019 to
8/30/2022 for GAN training and Monte Carlo estimation; excluding all holidays
and weekends, there are 700 daily observations in total. We then collect CLV2,
the WTI crude oil future settled in October 2022, from 2/5/2019 to 2/9/2022,
and CLX2, settled in November 2022, from 2/5/2020 to 2/9/2022. All the futures
data include the last futures price, the remaining time before the delivery
date and the WTI crude oil spot price. We use 700 daily data points for
training the RBF network, MLP regression, PPR and linear regression, and the
rest for testing.
| Model | MAPE |
|---|---|
| *GAN-MC | 0.58% |
| MC | 7.44% |
| RBF Network (Gauss) | 2.59% |
| RBF Network (Sqrt) | 1.88% |
| MLP Regression | 4.75% |
| PPR | 2.26% |
| LR | 4.29% |
Table 5: Performance table for WTI crude oil futures pricing.
The results in Table 5 show that our model performs best on commodity futures
pricing. This success results from the capacity, generating ability and
variance reduction properties of GAN-MC. Apart from our model, the RBF
Network (Sqrt) and PPR models also perform well on WTI crude oil futures
pricing.
## 6 Conclusion
The success of our model across different real-market derivatives pricing
tasks supports the validity of the GAN-MC model. A GAN is a powerful tool for
capturing the trend and variation of underlying asset prices such as stock or
index prices. Monte Carlo can be used to reduce the variance of the estimators
given independent sequences and is efficient for derivatives pricing. Although
parametric derivatives pricing formulas are preferred when they are available,
our results show that generative model-based Monte Carlo alternatives can be
useful substitutes when an arbitrage-based pricing formula or non-parametric
pricing model fails. While our results are promising, we cannot claim that our
approach will be successful in general; we have not yet covered swaps and
other derivatives, and we hope to provide a more comprehensive analysis of
these alternatives in the near future.
## References
* Black [1976] Fischer Black. The pricing of commodity contracts. _Journal of financial economics_ , 3(1-2):167–179, 1976.
* Merton [1973] Robert C Merton. Theory of rational option pricing. _The Bell Journal of economics and management science_ , pages 141–183, 1973.
* Karoui et al. [1998] Nicole El Karoui, Monique Jeanblanc-Picqué, and Steven E Shreve. Robustness of the Black and Scholes formula. _Mathematical finance_ , 8(2):93–126, 1998.
* Wu [2004] Hsien-Chung Wu. Pricing European options based on the fuzzy pattern of Black–Scholes formula. _Computers & Operations Research_, 31(7):1069–1081, 2004.
* Magdziarz [2009] Marcin Magdziarz. Black-Scholes formula in subdiffusive regime. _Journal of Statistical Physics_ , 136(3):553–564, 2009.
* Carmona and Durrleman [2005] René Carmona and Valdo Durrleman. Generalizing the Black-Scholes formula to multivariate contingent claims. _Journal of computational finance_ , 9(2):43, 2005.
* Hutchinson et al. [1994] James M Hutchinson, Andrew W Lo, and Tomaso Poggio. A nonparametric approach to pricing and hedging derivative securities via learning networks. _The journal of Finance_ , 49(3):851–889, 1994.
* Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. _Advances in neural information processing systems_ , 27, 2014.
* Zhang et al. [2019] Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In _International conference on machine learning_ , pages 7354–7363. PMLR, 2019.
* Mirza and Osindero [2014] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. _arXiv preprint arXiv:1411.1784_ , 2014.
* Chen et al. [2020] Liangjian Chen, Shih-Yao Lin, Yusheng Xie, Yen-Yu Lin, Wei Fan, and Xiaohui Xie. Dggan: Depth-image guided generative adversarial networks for disentangling rgb and depth images in 3d hand pose estimation. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_ , pages 411–419, 2020.
* Arjovsky et al. [2017] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In _International conference on machine learning_ , pages 214–223. PMLR, 2017.
* Chen et al. [2016] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. _Advances in neural information processing systems_ , 29, 2016.
* Boyle [1977] Phelim P Boyle. Options: A Monte Carlo approach. _Journal of financial economics_ , 4(3):323–338, 1977.
* Fu and Hu [1995] Michael C Fu and Jian-Qiang Hu. Sensitivity analysis for Monte Carlo simulation of option pricing. _Probability in the Engineering and Informational Sciences_ , 9(3):417–446, 1995.
* Birge [1995] John R Birge. Quasi-Monte Carlo approaches to option pricing. Technical report, 1995.
* Broadie et al. [1997] Mark Broadie, Paul Glasserman, and Gautam Jain. Enhanced Monte Carlo estimates for American option prices. _Journal of Derivatives_ , 5:25–44, 1997.
* Poirot and Tankov [2006] Jérémy Poirot and Peter Tankov. Monte Carlo option pricing for tempered stable (CGMY) processes. _Asia-Pacific Financial Markets_ , 13(4):327–344, 2006.
* Kim [2022] Yo-whan Kim. _How Transferable are Video Representations Based on Synthetic Data?_ PhD thesis, Massachusetts Institute of Technology, 2022.
* Rosenblatt [1961] Frank Rosenblatt. Principles of neurodynamics. Perceptrons and the theory of brain mechanisms. Technical report, Cornell Aeronautical Lab Inc Buffalo NY, 1961.
* Cassisi et al. [2012] Carmelo Cassisi, Placido Montalto, Marco Aliotta, Andrea Cannata, and Alfredo Pulvirenti. Similarity measures and dimensionality reduction techniques for time series data mining. _Advances in data mining knowledge discovery and applications_ , pages 71–96, 2012.
* Quail and Overdahl [2009] Rob Quail and James A Overdahl. _Financial derivatives: pricing and risk management_ , volume 5. John Wiley & Sons, 2009.
* Black and Scholes [2019] Fischer Black and Myron Scholes. The pricing of options and corporate liabilities. In _World Scientific Reference on Contingent Claims Analysis in Corporate Finance: Volume 1: Foundations of CCA and Equity Valuation_ , pages 3–21. World Scientific, 2019.
* Poggio and Girosi [1990] Tomaso Poggio and Federico Girosi. Networks for approximation and learning. _Proceedings of the IEEE_ , 78(9):1481–1497, 1990.
* Rumelhart et al. [1985] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science, 1985.
* Friedman and Stuetzle [1981] Jerome H Friedman and Werner Stuetzle. Projection pursuit regression. _Journal of the American statistical Association_ , 76(376):817–823, 1981.
* Brewer et al. [2012] Kevin D Brewer, Yi Feng, and Clarence CY Kwan. Geometric Brownian motion, option pricing, and simulation: Some spreadsheet-based exercises in financial modeling. _Spreadsheets in Education_ , 5(3):4598, 2012.
# Collinear and triangular solutions to the coplanar and circular three-body
problem in the parametrized post-Newtonian formalism
Yuya Nakamura <EMAIL_ADDRESS> and Hideki Asada <EMAIL_ADDRESS>
Graduate School of Science and Technology, Hirosaki University, Hirosaki
036-8561, Japan
###### Abstract
This paper investigates the coplanar and circular three-body problem in the
parametrized post-Newtonian (PPN) formalism, for which we focus on a class of
fully conservative theories characterized by the Eddington-Robertson
parameters $\beta$ and $\gamma$. It is shown that there can still exist a
collinear equilibrium configuration and a triangular one, each of which is a
generalization of the post-Newtonian equilibrium configuration in general
relativity. The collinear configuration can exist for arbitrary mass ratio,
$\beta$, and $\gamma$. On the other hand, the PPN triangular configuration
depends on the nonlinearity parameter $\beta$ but not on $\gamma$. For any
value of $\beta$, the equilateral configuration is possible, if and only if
three finite masses are equal or two test masses orbit around one finite mass.
For general mass cases, the PPN triangle is not equilateral as in the post-
Newtonian case. It is shown also that the PPN displacements from the Lagrange
points in the Newtonian gravity $L_{1}$, $L_{2}$ and $L_{3}$ depend on $\beta$
and $\gamma$, whereas those to $L_{4}$ and $L_{5}$ rely only on $\beta$.
###### pacs:
04.25.Nx, 45.50.Pk, 95.10.Ce, 95.30.Sf
## I Introduction
The three-body problem is among the classical problems in physics. It led to
the notion of chaos Goldstein . On the other hand, particular solutions such
as Euler’s collinear solution and Lagrange’s equilateral one Danby ; Marchal
express regular orbits and have still attracted interest, e.g. Asada ;
Torigoe ; Seto ; Schnittman ; Connors . If one mass is zero and the other two
masses are finite, the collinear and triangular solutions correspond to the
Lagrange points $L_{1}$, $L_{2}$, $L_{3}$, $L_{4}$ and $L_{5}$ as particular
solutions of the coplanar restricted three-body problem.
In his pioneering work Nordtvedt , Nordtvedt found that the position of the
triangular points is very sensitive to the ratio of the gravitational mass to
the inertial mass in gravitational experimental tests, where the post-
Newtonian (PN) terms are not fully taken into account.
Krefetz Krefetz and Maindl Maindl studied the restricted three-body problem
in the PN approximation and found the PN triangular configuration for a
general mass ratio between two masses. These investigations were extended to
the PN three-body problem for general masses Yamada2010 ; Yamada2011 ;
Ichita2011 ; Yamada2012 ; Yamada2015 ; Yamada2016 , and the PN counterparts
for Euler’s collinear Yamada2010 ; Yamada2011 and Lagrange’s equilateral
solutions Ichita2011 ; Yamada2012 were obtained. It should be noted that the
PN triangular solutions are not necessarily equilateral for general mass
ratios and they are equilateral only for either the equal mass case or two
test masses. The stability of the PN solution and the radiation reaction at
2.5PN order were also examined Yamada2015 ; Yamada2016 .
In a scalar-tensor theory of gravity, a collinear configuration for the
three-body problem was discussed Zhou . In addition to such fully classical
treatments, a possible quantum gravity correction to the Lagrange points was
proposed Battista2015a ; Battista2015b .
Moreover, the recent discovery of a relativistic hierarchical triple system
including a neutron star Ransom has generated renewed interest in the
relativistic three-body problem and the related gravitational experiments
Archibald ; Will2018 ; Voisin .
The main purpose of the present paper is to reexamine the coplanar and
circular three-body problem, especially in the PPN formalism. One may ask
whether collinear and triangular configurations are still solutions of the
coplanar three-body problem in PPN gravity. If so, how large are the PPN
effects on the three-body configuration? We focus on the Eddington-Robertson
parameters
$\beta$ and $\gamma$, because the two parameters are the most important ones;
$\beta$ measures how much nonlinearity there is in the superposition law for
gravity and $\gamma$ measures how much space curvature is produced by unit
rest mass Will ; Poisson . Hence, preferred locations, preferred frames or a
violation of conservation of total momentum will not be considered in this
paper. We confine ourselves to a class of fully conservative theories. See
e.g. Klioner for the celestial mechanics in this class of PPN theories.
This paper is organized as follows. In Section II, collinear configurations
are discussed in the PPN formalism. Section III investigates PPN triangular
configurations. In Section IV, the PPN corrections to the Lagrange points are
examined. For brevity, the Lagrange points defined in Newtonian gravity are
referred to as the Newtonian Lagrange points in this paper. Section V
summarizes this paper. Throughout this paper, $G=c=1$. $A,B$ and
$C\in\\{1,2,3\\}$ label three masses.
## II Collinear configuration in PPN gravity
### II.1 Euler’s collinear solution in Newtonian gravity
Let us begin by briefly reviewing Euler’s collinear solution for the
circular three-body problem in Newtonian gravity Danby ; Marchal , for which each
mass $M_{A}$ ($A=1,2,3$) at $\bm{x}_{A}$ is orbiting around the common center
of mass (COM) at $\bm{x}_{G}$, and the orbital velocity and acceleration are
denoted as $\bm{v}_{A}$ and $\bm{a}_{A}$, respectively. In this section, we
suppose that the three masses are always aligned, for which it is convenient to
use the corotating frame with a constant angular velocity $\omega$ in the
orbital plane, chosen as the $x$-$y$ plane.
Without loss of generality, we assume $x_{1}>x_{2}>x_{3}$ for
$\bm{x}_{A}\equiv(x_{A},0)$. Let $R_{A}$ denote the relative position of each
mass $M_{A}$ from the COM at $\bm{x}_{G}\equiv(x_{G},0)$. Namely,
$R_{A}=x_{A}-x_{G}$. Note that $|R_{A}|\neq|\bm{x}_{A}|$ unless $x_{G}=0$ is
chosen. We define the relative vector between masses as
$\bm{R}_{AB}\equiv\bm{x}_{A}-\bm{x}_{B}$, for which the relative length is
$R_{AB}=|\bm{R}_{AB}|$. See Figure 1 for a configuration of the Euler’s
collinear solution.
Figure 1: Schematic figure for the collinear configuration of three masses.
The coordinate origin $x=0$ is chosen between $M_{1}$ and $M_{3}$, such that
$R_{1}>R_{2}>R_{3}$, $R_{1}>0$ and $R_{3}<0$. By taking account of this sign
convention, the equation of motion becomes
$\displaystyle R_{1}\omega^{2}$ $\displaystyle=$
$\displaystyle\frac{M_{2}}{R_{12}^{2}}+\frac{M_{3}}{R_{13}^{2}},$ (1)
$\displaystyle R_{2}\omega^{2}$ $\displaystyle=$
$\displaystyle-\frac{M_{1}}{R_{12}^{2}}+\frac{M_{3}}{R_{23}^{2}},$ (2)
$\displaystyle R_{3}\omega^{2}$ $\displaystyle=$
$\displaystyle-\frac{M_{1}}{R_{13}^{2}}-\frac{M_{2}}{R_{23}^{2}}.$ (3)
We define the distance ratio as $z\equiv R_{23}/R_{12}$, which plays a key
role in the following calculations. Note that $z>0$ by definition. We subtract
Eq. (2) from Eq. (1) and Eq. (3) from Eq. (2). Eliminating the common angular
velocity $\omega$ between the two results, we obtain a fifth-order equation
for $z$ as
$\displaystyle(M_{1}+M_{2})z^{5}+(3M_{1}+2M_{2})z^{4}+(3M_{1}+M_{2})z^{3}$
$\displaystyle-(M_{2}+3M_{3})z^{2}-(2M_{2}+3M_{3})z-(M_{2}+M_{3})=0,$ (4)
for which there exists a unique positive root Danby ; Marchal . In order to
obtain Eq. (4), we do not have to specify the coordinate origin e.g.
$x_{G}=0$. This is because Eq. (4) does not refer to any coordinate system.
Once Eq. (4) is solved for $z$, we can obtain $\omega$ by substituting $z$
into any of Eqs. (1)-(3).
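As a sanity check, the unique positive root of Eq. (4) is easy to find numerically: the polynomial is negative at $z=0$ and grows without bound as $z\to\infty$, so bisection on a bracketed interval suffices. A minimal sketch in Python (the function name is ours):

```python
def euler_collinear_z(M1, M2, M3, tol=1e-12):
    """Unique positive root z = R23/R12 of the Newtonian quintic, Eq. (4)."""
    def p(z):
        return ((M1 + M2)*z**5 + (3*M1 + 2*M2)*z**4 + (3*M1 + M2)*z**3
                - (M2 + 3*M3)*z**2 - (2*M2 + 3*M3)*z - (M2 + M3))

    lo, hi = 0.0, 1.0
    while p(hi) < 0:          # p(0) = -(M2+M3) < 0 and p -> +inf: bracket the root
        hi *= 2.0
    while hi - lo > tol:      # bisection
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
    return 0.5*(lo + hi)

# equal masses: by symmetry of the configuration the root is z = 1
print(round(euler_collinear_z(1.0, 1.0, 1.0), 6))  # → 1.0
```

For equal masses the coefficients of Eq. (4) are antisymmetric about the middle term, so $z=1$ is a root, consistent with the symmetric configuration.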
### II.2 PPN collinear configuration
In a class of fully conservative theories including only the Eddington-
Robertson parameters $\beta$ and $\gamma$, the equation of motion is Will ;
Poisson
$\displaystyle\bm{a}_{A}=$ $\displaystyle-\sum_{B\neq
A}\frac{M_{B}}{R_{AB}^{2}}\bm{n}_{AB}$ $\displaystyle-\sum_{B\neq
A}\frac{M_{B}}{R_{AB}^{2}}\bigg{\\{}\gamma
v_{A}^{2}-2(\gamma+1)(\bm{v}_{A}\cdot\bm{v}_{B})$
$\displaystyle~{}~{}~{}~{}~{}+(\gamma+1)v_{B}^{2}-\frac{3}{2}(\bm{n}_{AB}\cdot\bm{v}_{B})^{2}-\bigg{(}2\gamma+2\beta+1\bigg{)}\frac{M_{A}}{R_{AB}}$
$\displaystyle~{}~{}~{}~{}~{}-2(\gamma+\beta)\frac{M_{B}}{R_{AB}}\bigg{\\}}\bm{n}_{AB}$
$\displaystyle+\sum_{B\neq
A}\frac{M_{B}}{R_{AB}^{2}}\bigg{\\{}\bm{n}_{AB}\cdot[2(\gamma+1)\bm{v}_{A}-(2\gamma+1)\bm{v}_{B}]\bigg{\\}}(\bm{v}_{A}-\bm{v}_{B})$
$\displaystyle+\sum_{B\neq A}\sum_{C\neq
A,B}\frac{M_{B}M_{C}}{R_{AB}^{2}}\bigg{[}\frac{2(\gamma+\beta)}{R_{AC}}+\frac{2\beta-1}{R_{BC}}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}-\frac{1}{2}\frac{R_{AB}}{R_{BC}^{2}}(\bm{n}_{AB}\cdot\bm{n}_{BC})\bigg{]}\bm{n}_{AB}$
$\displaystyle-\frac{1}{2}(4\gamma+3)\sum_{B\neq A}\sum_{C\neq
A,B}\frac{M_{B}M_{C}}{R_{AB}R_{BC}^{2}}\bm{n}_{BC}+O(c^{-4}),$ (5)
where
$\displaystyle\bm{n}_{AB}$ $\displaystyle\equiv\frac{\bm{R}_{AB}}{R_{AB}}.$
(6)
For three aligned masses, Eq. (5) becomes the force-balance equation as
$\displaystyle\ell\omega^{2}=F_{N}+F_{M}+F_{V}\omega^{2},$ (7)
where we define $\ell\equiv R_{31}$, the mass ratio $\nu_{A}\equiv M_{A}/M$
for $M\equiv\sum_{A}M_{A}$, and
$\displaystyle F_{N}=\frac{M}{\ell^{2}z^{2}}[$ $\displaystyle
1-\nu_{1}-\nu_{3}+2(1-\nu_{1}-\nu_{3})z+(2-\nu_{1}-\nu_{3})z^{2}$
$\displaystyle+2(1-\nu_{1}-\nu_{3})z^{3}+(1-\nu_{1}-\nu_{3})z^{4}],$ (8)
$\displaystyle F_{M}=-\frac{M^{2}}{\ell^{3}z^{3}}[$
$\displaystyle\\{2(\beta+\gamma)\nu_{2}+(1+2\beta+2\gamma)\nu_{3}\\}\nu_{2}$
$\displaystyle+\\{(-1+4\beta+2\gamma)\nu_{1}+6(\beta+\gamma)\nu_{2}$
$\displaystyle~{}~{}~{}~{}~{}+3(1+2\beta+2\gamma)\nu_{3}\\}\nu_{2}z$
$\displaystyle+\\{(-5+12\beta+4\gamma)\nu_{1}+6(\beta+\gamma)\nu_{2}$
$\displaystyle~{}~{}~{}~{}~{}-(1-10\beta-4\gamma)\nu_{3}\\}\nu_{2}z^{2}$
$\displaystyle+\\{2(\beta+\gamma)\nu_{1}^{2}+4(\beta+\gamma)\nu_{2}^{2}$
$\displaystyle~{}~{}~{}~{}~{}-(7-14\beta-2\gamma)\nu_{2}\nu_{3}+2(\beta+\gamma)\nu_{3}^{2}$
$\displaystyle+((-7+14\beta+2\gamma)\nu_{2}$
$\displaystyle~{}~{}~{}~{}~{}+2(1+2\beta+2\gamma)\nu_{3})\nu_{1}\\}z^{3}$
$\displaystyle+\\{(-1+10\beta+4\gamma)\nu_{1}+6(\beta+\gamma)\nu_{2}$
$\displaystyle~{}~{}~{}~{}~{}+(12\beta+4\gamma-5)\nu_{3}\\}\nu_{2}z^{4}$
$\displaystyle+\\{3(1+2\beta+2\gamma)\nu_{1}+6(\beta+\gamma)\nu_{2}$
$\displaystyle~{}~{}~{}~{}~{}+(-1+4\beta+2\gamma)\nu_{3}\\}\nu_{2}z^{5}$
$\displaystyle+\\{(1+2\beta+2\gamma)\nu_{1}+2(\beta+\gamma)\nu_{2}\\}\nu_{2}z^{6}],$
(9)
and
$\displaystyle F_{V}=$ $\displaystyle\frac{M}{(1+z)^{2}z^{2}}$
$\displaystyle\times[-\nu_{1}^{2}\nu_{2}-2\nu_{1}\nu_{2}(2\nu_{1}+\nu_{2})z$
$\displaystyle~{}~{}~{}+\\{\gamma\nu_{1}^{3}+((-2+4\gamma)\nu_{2}+3(1+\gamma)\nu_{3})\nu_{1}^{2}$
$\displaystyle~{}~{}~{}+(2\nu_{2}+\nu_{3})(\gamma\nu_{2}^{2}+(1+2\gamma)\nu_{2}\nu_{3}+\gamma\nu_{3}^{2})$
$\displaystyle~{}~{}~{}+((-1+5\gamma)\nu_{2}^{2}+8(1+\gamma)\nu_{2}\nu_{3}+3(1+\gamma)\nu_{3}^{2})\nu_{1}\\}z^{2}$
$\displaystyle~{}~{}~{}+2(\nu_{1}+2\nu_{2}+\nu_{3})\\{\gamma\nu_{1}^{2}+\gamma\nu_{2}^{2}+(1+2\gamma)\nu_{2}\nu_{3}$
$\displaystyle~{}~{}~{}+\gamma\nu_{3}^{2}+((1+2\gamma)\nu_{2}+(3+2\gamma)\nu_{3})\nu_{1}\\}z^{3}$
$\displaystyle~{}~{}~{}+\\{\gamma\nu_{1}^{3}+2\gamma\nu_{2}^{3}-(1-5\gamma)\nu_{2}^{2}\nu_{3}-2(1-2\gamma)\nu_{2}\nu_{3}^{2}$
$\displaystyle~{}~{}~{}+\gamma\nu_{3}^{3}+((1+4\gamma)\nu_{2}+3(1+\gamma)\nu_{3})\nu_{1}^{2}$
$\displaystyle~{}~{}~{}+((2+5\gamma)\nu_{2}^{2}+8(1+\gamma)\nu_{2}\nu_{3}+3(1+\gamma)\nu_{3}^{2})\nu_{1}\\}z^{4}$
$\displaystyle~{}~{}~{}-2\nu_{2}\nu_{3}(\nu_{2}+2\nu_{3})z^{5}-\nu_{2}\nu_{3}^{2}z^{6}].$
(10)
By rearranging Eq. (5) for the collinear configuration in the same way as in
subsection II.1, we find a seventh-order equation for $z$ as
$\displaystyle\sum_{k=0}^{7}A_{k}z^{k}=0,$ (11)
where the coefficients are
$\displaystyle A_{7}=$
$\displaystyle\frac{M}{\ell}\bigg{[}-2(\beta+\gamma)-2\nu_{1}+4(\beta+\gamma)\nu_{3}+2\nu_{1}^{2}+4\nu_{1}\nu_{3}$
$\displaystyle~{}~{}~{}~{}~{}-2(\beta+\gamma)\nu_{3}^{2}-2\nu_{1}^{2}\nu_{3}-2\nu_{1}\nu_{3}^{2}\bigg{]},$
(12) $\displaystyle A_{6}=$ $\displaystyle 1-\nu_{3}$
$\displaystyle+\frac{M}{\ell}\bigg{[}-(6\beta+7\gamma)-(6+2\beta+2\gamma)\nu_{1}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}-(2-8\beta-11\gamma)\nu_{3}+4\nu_{1}^{2}+(12+2\beta+2\gamma)\nu_{1}\nu_{3}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}+(4-2\beta-4\gamma)\nu_{3}^{2}+2\nu_{1}^{3}-4\nu_{1}^{2}\nu_{3}-6\nu_{1}\nu_{3}^{2}-2\nu_{3}^{3}\bigg{]},$
(13) $\displaystyle A_{5}=$ $\displaystyle 2+\nu_{1}-2\nu_{3}$
$\displaystyle+\frac{M}{\ell}\bigg{[}-3(2\beta+3\gamma)-3(2+2\beta+2\gamma)\nu_{1}-(6-11\gamma)\nu_{3}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}+(12+6\beta+2\gamma)\nu_{1}\nu_{3}+(12+6\beta-2\gamma)\nu_{3}^{2}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}+6\nu_{1}^{3}-6\nu_{1}\nu_{3}^{2}-6\nu_{3}^{3}\bigg{]},$
(14) $\displaystyle A_{4}=$ $\displaystyle 1+2\nu_{1}-\nu_{3}$
$\displaystyle+\frac{M}{\ell}\bigg{[}-2\beta-4\gamma-(2\beta+8\gamma)\nu_{1}-(6+6\beta-8\gamma)\nu_{3}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}-(6+4\beta-2\gamma)\nu_{1}^{2}+(4+2\beta-2\gamma)\nu_{1}\nu_{3}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}+(12+8\beta-4\gamma)\nu_{3}^{2}+6\nu_{1}^{3}+2\nu_{1}^{2}\nu_{3}-4\nu_{1}\nu_{3}^{2}-6\nu_{3}^{3}\bigg{]},$
(15) $\displaystyle A_{3}=$ $\displaystyle-1+\nu_{1}-2\nu_{3}$
$\displaystyle+\frac{M}{\ell}\bigg{[}2\beta+4\gamma+(6+6\beta-8\gamma)\nu_{1}+(2\beta+8\gamma)\nu_{3}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}-(12+8\beta-4\gamma)\nu_{1}^{2}-(4+2\beta-2\gamma)\nu_{1}\nu_{3}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}+(6+4\beta-2\gamma)\nu_{3}^{2}+6\nu_{1}^{3}+4\nu_{1}^{2}\nu_{3}-2\nu_{1}\nu_{3}^{2}-6\nu_{3}^{3}\bigg{]},$
(16) $\displaystyle A_{2}=$ $\displaystyle-2+2\nu_{1}-\nu_{3}$
$\displaystyle+\frac{M}{\ell}\bigg{[}6\beta+9\gamma+(6-11\gamma)\nu_{1}+(6+6\beta+6\gamma)\nu_{3}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}-(12+6\beta-2\gamma)\nu_{1}^{2}-(12+6\beta+2\gamma)\nu_{1}\nu_{3}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}+6\nu_{1}^{3}+6\nu_{1}^{2}\nu_{3}-6\nu_{3}^{3}\bigg{]},$
(17) $\displaystyle A_{1}=$ $\displaystyle-1+\nu_{1}$
$\displaystyle+\frac{M}{\ell}\bigg{[}6\beta+7\gamma+(2-8\beta-11\gamma)\nu_{1}+(6+2\beta+2\gamma)\nu_{3}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}-(4-2\beta-4\gamma)\nu_{1}^{2}-(12+2\beta+2\gamma)\nu_{1}\nu_{3}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}-4\nu_{3}^{2}+2\nu_{1}^{3}+4\nu_{1}\nu_{3}^{2}+6\nu_{1}^{2}\nu_{3}-2\nu_{3}^{3}\bigg{]},$
(18) $\displaystyle A_{0}=$
$\displaystyle\frac{M}{\ell}\bigg{[}2\beta+2\gamma-4(\beta+\gamma)\nu_{1}+2\nu_{3}+2(\beta+\gamma)\nu_{1}^{2}$
$\displaystyle~{}~{}~{}~{}~{}~{}-4\nu_{1}\nu_{3}-2\nu_{3}^{2}+2\nu_{1}^{2}\nu_{3}+2\nu_{1}\nu_{3}^{2}\bigg{]}.$
(19)
It follows that Eq. (11) recovers the PN collinear configuration of Eq. (13)
in Reference Yamada2011 if and only if $\beta=\gamma=1$. This choice is unique
because only two parameters, $\beta$ and $\gamma$, must reproduce all eight
coefficients $A_{0},\cdots,A_{7}$.
From Eq. (7) for $z$ obtained above, the angular velocity $\omega_{PPN}$ of
the PPN collinear configuration is obtained as
$\displaystyle\omega_{PPN}=\omega_{N}\bigg{(}1+\frac{F_{M}}{2F_{N}}+\frac{F_{V}}{2\ell}\bigg{)},$
(20)
where $\omega_{N}=(F_{N}/\ell)^{1/2}$ is the Newtonian angular velocity. The
subscript $N$ denotes the Newtonian case.
## III Triangular configuration in PPN gravity
### III.1 Lagrange’s equilateral solution in Newtonian gravity
In this subsection, we suppose that the three masses are in coplanar and
circular motion while keeping the same separation between each pair of masses,
namely $R_{AB}=a$ for a constant $a$.
It is convenient to choose the coordinate origin as the COM,
$\sum_{A}M_{A}\bm{x}_{A}=0,$ (21)
for which the equation of motion for each mass in the equilateral triangle
configuration takes a compact form as Danby
$\frac{d^{2}\bm{x}_{A}}{dt^{2}}=-\frac{M}{a^{3}}\bm{x}_{A}.$ (22)
See e.g. Eq. (8.6.5) in Reference Danby for the derivation of Eq. (22). A
triangular configuration is a solution, if the Newtonian angular velocity
$\omega_{N}$ satisfies
$(\omega_{N})^{2}=\frac{M}{a^{3}}.$ (23)
The orbital radius $\ell_{A}$ of each mass around the COM is Danby
$\displaystyle\ell_{1}$
$\displaystyle=a\sqrt{\nu_{2}^{2}+\nu_{2}\nu_{3}+\nu_{3}^{2}},$ (24)
$\displaystyle\ell_{2}$
$\displaystyle=a\sqrt{\nu_{1}^{2}+\nu_{1}\nu_{3}+\nu_{3}^{2}},$ (25)
$\displaystyle\ell_{3}$
$\displaystyle=a\sqrt{\nu_{1}^{2}+\nu_{1}\nu_{2}+\nu_{2}^{2}}.$ (26)
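The radii (24)-(26) follow directly from the COM condition (21) applied to an equilateral triangle; a short numerical sketch verifying this (function names and the vertex placement are ours):

```python
import math

def com_distances(nu1, nu2, nu3, a=1.0):
    """Distances of each vertex of an equilateral triangle of side a
    from the COM, Eq. (21), for mass fractions nu_A summing to 1."""
    verts = [(0.0, 0.0), (a, 0.0), (0.5*a, 0.5*math.sqrt(3)*a)]
    nus = (nu1, nu2, nu3)
    xg = sum(n*v[0] for n, v in zip(nus, verts))   # COM x
    yg = sum(n*v[1] for n, v in zip(nus, verts))   # COM y
    return [math.hypot(v[0] - xg, v[1] - yg) for v in verts]

def ell_formulas(nu1, nu2, nu3, a=1.0):
    """Closed forms of Eqs. (24)-(26)."""
    f = lambda p, q: a*math.sqrt(p*p + p*q + q*q)
    return [f(nu2, nu3), f(nu1, nu3), f(nu1, nu2)]

print(com_distances(0.5, 0.3, 0.2))   # agrees with ell_formulas(0.5, 0.3, 0.2)
```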
### III.2 PPN orbital radius
We suppose again that three masses in circular motion are in a triangular
configuration with a constant angular velocity $\omega$. By noting that a
vector in the orbital plane can be expressed as a linear combination of
$\bm{x}_{1}$ and $\bm{v}_{1}$, Eq.(5) becomes
$\displaystyle-\omega^{2}\bm{x}_{1}=$
$\displaystyle-(\omega_{N})^{2}\bm{x}_{1}+g_{1}(\omega_{N})^{2}\bm{x}_{1}$
$\displaystyle+\frac{\sqrt{3}M}{16a}\frac{\nu_{2}\nu_{3}(\nu_{2}-\nu_{3})(16\beta-1-9\nu_{1})}{\nu_{2}^{2}+\nu_{2}\nu_{3}+\nu_{3}^{2}}\omega_{N}\bm{v}_{1},$
(27)
where Eq. (23) is used and
$\displaystyle g_{1}=$
$\displaystyle\frac{M}{a}\left[\left(2\beta+\gamma+(\nu_{2}+\nu_{3})(\nu_{2}+\nu_{3}-1)-\frac{7}{16}\nu_{2}\nu_{3}\right)\right.$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}\left.+\frac{3}{16}\frac{\nu_{2}\nu_{3}\\{9\nu_{2}\nu_{3}+2(\nu_{2}+\nu_{3})(8\beta-5)\\}}{\nu_{2}^{2}+\nu_{2}\nu_{3}+\nu_{3}^{2}}\right].$
(28)
By a cyclic permutation, we obtain the similar equations for $M_{2}$ and
$M_{3}$.
The second and third terms on the right-hand side of Eq. (27) are the PPN
forces. The second term is parallel to $\bm{x}_{1}$, whereas the third term is
parallel to $\bm{v}_{1}$. Note that $\bm{v}_{1}$ is not parallel to
$\bm{x}_{1}$ in circular motion.
The location of the COM in the fully conservative theories of PPN Baker1978 ;
Baker1979 remains the same as that in the PN approximation of general
relativity MTW ; LL
$\displaystyle\bm{G}_{PN}=\frac{1}{E}\sum\limits_{A}M_{A}\bm{x}_{A}\left[1+\frac{1}{2}\left(v_{A}^{2}-\sum\limits_{B\neq
A}\frac{M_{B}}{R_{AB}}\right)\right],$ (29)
where $E$ is defined as
$\displaystyle
E\equiv\sum\limits_{A}M_{A}\left[1+\frac{1}{2}\left(v_{A}^{2}-\sum\limits_{B\neq
A}\frac{M_{B}}{R_{AB}}\right)\right].$ (30)
This coincidence allows us to obtain the PPN orbital radius $\ell^{PPN}_{A}$
around the COM by straightforward calculations. The orbital radius of $M_{1}$
is formally obtained as
$\displaystyle(\ell^{PPN}_{1})^{2}$ $\displaystyle=$
$\displaystyle(\ell_{1})^{2}$ (31)
$\displaystyle+\frac{aM}{2}\left(1-\frac{a^{3}\omega_{N}^{2}}{M}\right)$
$\displaystyle~{}~{}\times(-2\nu_{1}^{2}\nu_{2}^{2}-2\nu_{2}^{2}\nu_{3}^{2}-2\nu_{3}^{2}\nu_{1}^{2}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}+2\nu_{1}\nu_{2}^{3}+\nu_{2}\nu_{3}^{3}+\nu_{2}^{3}\nu_{3}+2\nu_{3}^{3}\nu_{1}$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}-2\nu_{1}^{2}\nu_{2}\nu_{3}+\nu_{1}\nu_{2}^{2}\nu_{3}+\nu_{1}\nu_{2}\nu_{3}^{2}),$
and similar expressions for the orbital radii $\ell^{PPN}_{2}$ and
$\ell^{PPN}_{3}$ of $M_{2}$ and $M_{3}$ are obtained.
Unless the second term of the right-hand side in Eq. (31) vanishes, the
difference between $\ell^{PPN}_{1}$ and $\ell_{1}$ would make our computations
rather complicated. However, it vanishes because $\omega_{N}$ satisfies Eq.
(23). As a result, the PPN orbital radius remains the same as the Newtonian
one. Namely, $\ell^{PPN}_{A}=\ell_{A}$.
### III.3 Equilateral condition
First, we discuss a condition for an equilateral configuration.
For Eq. (27) to hold, the coefficient of the velocity vector $\bm{v_{1}}$ must
vanish, because there are no other terms including $\bm{v_{1}}$. The
coefficient is proportional to $\nu_{2}\nu_{3}(\nu_{2}-\nu_{3})$. The same
holds for $M_{2}$ and $M_{3}$. For any value of $\beta$, therefore, the
equilateral configuration in PPN gravity can exist if and only if the three
finite masses are equal or two test masses orbit around one finite mass.
Note that one can find a very particular value of $\beta$ satisfying
$\displaystyle 16\beta-1-9\nu_{1}=0,$ (32)
which leads to the vanishing coefficient of the velocity vector $\bm{v_{1}}$.
However, this choice is very unlikely, because the particular value of $\beta$
is dependent on the mass ratio $\nu_{1}$ and it is not universal. Hence, this
case will be ignored.
### III.4 PPN triangular configuration for general masses
Next, let us consider a PPN triangle configuration for general masses. For
this purpose, we introduce a nondimensional parameter $\varepsilon_{AB}$ at
the PPN order, such that each side length of the PPN triangle can be expressed
as
$\displaystyle R_{AB}=a(1+\varepsilon_{AB}).$ (33)
The equilateral case is recovered by setting $\varepsilon_{AB}=0$ for every
pair of masses. See Figure 2 for the PPN triangular configuration.
Figure 2: Schematic figure for the PPN triangular configuration of three
masses. An inequilateral triangle is described by the parameter
$\varepsilon_{AB}$. $R_{A}$ coincides with $\ell_{A}$ in the Newtonian limit,
for which $\varepsilon_{AB}$ vanishes.
In order to fix the degree of freedom corresponding to a scale transformation,
we follow Reference Yamada2012 and suppose that the arithmetic mean of the
three side lengths is unchanged:
$\displaystyle\frac{R_{12}+R_{23}+R_{31}}{3}=a\bigg{[}1+\frac{1}{3}(\varepsilon_{12}+\varepsilon_{23}+\varepsilon_{31})\bigg{]}.$
(34)
The left-hand side of Eq. (34) is $a$ in the Newtonian case, which leads to
$\displaystyle\varepsilon_{12}+\varepsilon_{23}+\varepsilon_{31}=0.$ (35)
This is a gauge fixing in $\varepsilon_{AB}$.
In terms of $\varepsilon_{AB}$, Eq. (27) is rearranged as
$\displaystyle-\omega^{2}\bm{x}_{1}=$
$\displaystyle-(\omega_{N})^{2}\bm{x}_{1}$
$\displaystyle-\frac{3}{2}\frac{(\omega_{N})^{2}}{\nu_{2}^{2}+\nu_{2}\nu_{3}+\nu_{3}^{2}}$
$\displaystyle\times\bigg{[}\bigg{\\{}\nu_{2}(\nu_{1}-\nu_{2}-1)\varepsilon_{12}+\nu_{3}(\nu_{1}-\nu_{3}-1)\varepsilon_{31}\bigg{\\}}\bm{x}_{1}$
$\displaystyle~{}~{}~{}~{}~{}~{}+\sqrt{3}\nu_{2}\nu_{3}(\varepsilon_{12}-\varepsilon_{31})\frac{\bm{v}_{1}}{\omega_{N}}\bigg{]}+\bm{\delta}_{1},$
(36)
where
$\displaystyle\bm{\delta}_{1}=$ $\displaystyle
g_{1}(\omega_{N})^{2}\bm{x}_{1}+\frac{\sqrt{3}M\nu_{2}\nu_{3}(\nu_{2}-\nu_{3})(16\beta-1-9\nu_{1})}{16a(\nu_{2}^{2}+\nu_{2}\nu_{3}+\nu_{3}^{2})}\omega_{N}\bm{v}_{1}.$
(37)
By a cyclic permutation, the equations for $M_{2}$ and $M_{3}$ can be
obtained.
A triangular equilibrium configuration can exist if and only if the two
conditions (A) and (B) are simultaneously satisfied; (A) Each mass satisfies
Eq. (36), and (B) the configuration is unchanged in time.
Eq. (36) is the equation of motion for $M_{1}$. To be more accurate,
therefore, $\omega$ in Eq. (36) should be denoted as $\omega_{1}$. Similarly,
we introduce $\omega_{2}$ and $\omega_{3}$ in the equations of motion for
$M_{2}$ and $M_{3}$, respectively. Then, Condition (B) means
$\omega_{1}=\omega_{2}=\omega_{3}$.
Condition (A) is equivalent to requiring that the coefficient of $\bm{v}_{A}$
in each equation of motion vanishes, namely
$\displaystyle\varepsilon_{12}-\varepsilon_{31}-\frac{M}{24a}(\nu_{2}-\nu_{3})(16\beta-1-9\nu_{1})=0,$
(38)
$\displaystyle\varepsilon_{23}-\varepsilon_{12}-\frac{M}{24a}(\nu_{3}-\nu_{1})(16\beta-1-9\nu_{2})=0,$
(39)
$\displaystyle\varepsilon_{31}-\varepsilon_{23}-\frac{M}{24a}(\nu_{1}-\nu_{2})(16\beta-1-9\nu_{3})=0.$
(40)
From Eqs. (38)-(40) and the gauge fixing as
$\varepsilon_{12}+\varepsilon_{23}+\varepsilon_{31}=0$, we obtain
$\displaystyle\varepsilon_{12}=\frac{M}{72a}$
$\displaystyle\bigg{[}(\nu_{2}-\nu_{3})(16\beta-1-9\nu_{1})$
$\displaystyle-(\nu_{3}-\nu_{1})(16\beta-1-9\nu_{2})\bigg{]},$ (41)
$\displaystyle\varepsilon_{23}=\frac{M}{72a}$
$\displaystyle\bigg{[}(\nu_{3}-\nu_{1})(16\beta-1-9\nu_{2})$
$\displaystyle-(\nu_{1}-\nu_{2})(16\beta-1-9\nu_{3})\bigg{]},$ (42)
and
$\displaystyle\varepsilon_{31}=\frac{M}{72a}$
$\displaystyle\bigg{[}(\nu_{1}-\nu_{2})(16\beta-1-9\nu_{3})$
$\displaystyle-(\nu_{2}-\nu_{3})(16\beta-1-9\nu_{1})\bigg{]}.$ (43)
Therefore, the PPN triangle is inequilateral depending on $\beta$ via
$\varepsilon_{AB}$ but not on $\gamma$. This suggests that also the PPN
Lagrange points corresponding to $L_{4}$ and $L_{5}$ are sensitive to $\beta$
but are free from $\gamma$, as shown in Section IV.
It follows that Eqs. (41)-(43) recover their PN counterparts, Eqs. (26)-(28)
of Reference Yamada2012, if and only if $\beta=1$. This choice is unique
because the single parameter $\beta$ must reproduce all three equations
(41)-(43).
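By construction, the corrections (41)-(43) satisfy the gauge condition $\varepsilon_{12}+\varepsilon_{23}+\varepsilon_{31}=0$ of Eq. (35) and vanish for equal masses. A minimal numerical check (the function name is ours):

```python
def ppn_epsilons(nu1, nu2, nu3, M_over_a, beta):
    """Side-length corrections of the PPN triangle, Eqs. (41)-(43)."""
    def f(nuA, nuB, nuC):                  # (nu_A - nu_B)(16 beta - 1 - 9 nu_C)
        return (nuA - nuB)*(16*beta - 1 - 9*nuC)
    pre = M_over_a/72.0
    e12 = pre*(f(nu2, nu3, nu1) - f(nu3, nu1, nu2))
    e23 = pre*(f(nu3, nu1, nu2) - f(nu1, nu2, nu3))
    e31 = pre*(f(nu1, nu2, nu3) - f(nu2, nu3, nu1))
    return e12, e23, e31

e = ppn_epsilons(0.6, 0.3, 0.1, 1e-8, beta=1.0)
print(sum(e))   # gauge condition Eq. (35): numerically ≈ 0
```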
Condition (B) is satisfied, if
$\omega_{1}=\omega_{2}=\omega_{3}\equiv\omega_{PPN}$, where $\omega_{PPN}$
means the angular velocity of the PPN configuration. By substituting Eqs. (41)
and (43) into Eq. (36), $\omega_{PPN}$ is obtained as
$\displaystyle\omega_{PPN}=\omega_{N}\left(1+\delta_{\omega}\right),$ (44)
where, by using Eq. (28), the PPN correction $\delta_{\omega}$ is
$\displaystyle\delta_{\omega}=$
$\displaystyle\frac{3}{4}\frac{\nu_{2}(\nu_{1}-\nu_{2}-1)\varepsilon_{12}+\nu_{3}(\nu_{1}-\nu_{3}-1)\varepsilon_{31}}{\nu_{2}^{2}+\nu_{2}\nu_{3}+\nu_{3}^{2}}-\frac{1}{2}g_{1}$
$\displaystyle=$
$\displaystyle-\frac{M}{48a}\\{64\beta+24\gamma-1-42(\nu_{1}\nu_{2}+\nu_{2}\nu_{3}+\nu_{3}\nu_{1})\\}.$
(45)
There is a symmetry among $M_{1},M_{2},M_{3}$ in the second line of Eq. (45),
which means that $\delta_{\omega}$ is the same for all bodies. Condition (B)
is thus satisfied.
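The permutation symmetry of the second line of Eq. (45) can be verified directly; a small sketch (the function name is ours):

```python
def delta_omega(nu1, nu2, nu3, M_over_a, beta, gamma):
    """PPN fractional correction to the angular velocity, Eq. (45)."""
    s = nu1*nu2 + nu2*nu3 + nu3*nu1        # symmetric in the three masses
    return -(M_over_a/48.0)*(64*beta + 24*gamma - 1 - 42*s)

# the correction is identical for every labeling of the three bodies
a = delta_omega(0.5, 0.3, 0.2, 1e-8, 1.0, 1.0)
b = delta_omega(0.2, 0.5, 0.3, 1e-8, 1.0, 1.0)
print(abs(a - b) < 1e-20)  # → True
```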
## IV PPN corrections to the Lagrange points
### IV.1 PPN Lagrange points $L_{1}$, $L_{2}$ and $L_{3}$
In this section, we discuss PPN modifications of the Lagrange points that are
originally defined in the restricted three-body problem in Newtonian gravity.
We choose $\nu_{A}=1-\nu$, $\nu_{B}=\nu$ and $\nu_{C}=0$, where $\nu$ is the
mass ratio of the secondary object (a planet).
First, we seek PPN corrections to $L_{1},L_{2}$ and $L_{3}$. There are three
ways of assigning $M_{1},M_{2}$ and $M_{3}$ to the Sun, a planet and a test
mass in the collinear configuration; these three assignments lead to the
Lagrange points $L_{1}$, $L_{2}$ and $L_{3}$, respectively.
We consider the collinear solution by Eq. (11). We denote the physical root
for Eq. (11) as $z=z_{N}(1+\varepsilon)$ for the Newtonian root $z_{N}$ with
using a small parameter $\varepsilon$ ($|\varepsilon|\ll 1$) at the PPN order.
We substitute $z$ into Eq. (11) and rearrange it to obtain $\varepsilon$ as
$\displaystyle\varepsilon=-\cfrac{\sum\limits_{k=0}^{7}A^{PPN}_{k}(z_{N})^{k}}{\sum\limits_{k=1}^{6}kA^{N}_{k}(z_{N})^{k}},$
(46)
where $O(\varepsilon^{2})$ is discarded because it is at the 2PN order, and
$A^{N}_{k}$ and $A^{PPN}_{k}$ denote the Newtonian and PPN parts of $A_{k}$,
respectively, as $A_{k}=A^{N}_{k}+A^{PPN}_{k}$ ($A^{N}_{0}=0$ and
$A^{N}_{7}=0$ because there are no counterparts in the Newtonian case).
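Eq. (46) is simply the first-order shift of a polynomial root under a perturbation of its coefficients; the following generic sketch (our naming, with coefficients ordered from $z^{0}$ upward) illustrates it on a toy quadratic:

```python
def first_order_root_shift(A_N, A_PPN, zN):
    """Eq. (46): z = zN(1 + eps) solves sum_k (A^N_k + A^PPN_k) z^k = 0
    to first order in the perturbation A^PPN, given the Newtonian root zN."""
    num = sum(a * zN**k for k, a in enumerate(A_PPN))
    den = sum(k * a * zN**k for k, a in enumerate(A_N))
    return -num / den

# toy check: z^2 - 1 = 0 perturbed to z^2 - 1 + d = 0,
# whose root 1 shifts to sqrt(1 - d) ≈ 1 - d/2, i.e. eps = -d/2
d = 1e-6
eps = first_order_root_shift([-1.0, 0.0, 1.0], [d, 0.0, 0.0], 1.0)
print(eps)  # → -5e-07
```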
Eq. (46) is used for calculating the PPN corrections to $L_{1},L_{2}$ and
$L_{3}$. The PPN displacement from the Newtonian Lagrange point $L_{1}$ is
thus obtained as
$\displaystyle\delta_{PPN}R_{23}$ $\displaystyle\equiv R_{23}-(R_{23})_{N}$
$\displaystyle=\frac{\varepsilon
z_{N}}{(1+z_{N})^{2}}\ell+O(\ell\varepsilon^{2}),$ (47)
where $M_{1}$, $M_{2}$ and $M_{3}$ are chosen as a planet, a test mass and the
Sun, respectively.
Similarly, the PPN displacement from the Newtonian Lagrange point $L_{2}$
becomes
$\displaystyle\delta_{PPN}R_{31}$ $\displaystyle\equiv R_{31}-(R_{31})_{N}$
$\displaystyle=\frac{\varepsilon
z_{N}}{(1+z_{N})}\ell+O(\ell\varepsilon^{2}),$ (48)
where $M_{1}$, $M_{2}$ and $M_{3}$ are chosen as the Sun, a planet and a test
mass, respectively. The PPN displacement from the Newtonian Lagrange point
$L_{3}$ is
$\displaystyle\delta_{PPN}R_{23}$ $\displaystyle\equiv R_{23}-(R_{23})_{N}$
$\displaystyle=\frac{\varepsilon
z_{N}}{(1+z_{N})}\ell+O(\ell\varepsilon^{2}),$ (49)
where $M_{1}$, $M_{2}$ and $M_{3}$ are chosen as a planet, the Sun and a test
mass, respectively. Here, the value of $z_{N}$, given by Eq. (4), depends on
which of $L_{1}$, $L_{2}$ and $L_{3}$ is considered.
### IV.2 PPN Lagrange points $L_{4}$ and $L_{5}$
Next, we discuss PPN corrections to the Lagrange points $L_{4}$ and $L_{5}$,
for which we consider the PPN triangular solution. Let $a$ denote the orbital
separation between the primary object and the secondary one, which equals
$R_{12}=\ell(1+\varepsilon_{12})$. Therefore,
$\ell=a(1-\varepsilon_{12})+O(a\varepsilon^{2})$, where $\varepsilon^{2}$
denotes the second order in $\varepsilon_{AB}$. By using this for $R_{23}$ and
$R_{31}$, we obtain
$R_{23}=a(1+\varepsilon_{23}-\varepsilon_{12})+O(a\varepsilon^{2})$, and
$R_{31}=a(1+\varepsilon_{31}-\varepsilon_{12})+O(a\varepsilon^{2})$.
The PPN displacement from the Newtonian Lagrange point $L_{4}$ (and $L_{5}$)
with respect to the Sun is obtained as
$\displaystyle\delta_{PPN}R_{31}\equiv$ $\displaystyle R_{31}-a$
$\displaystyle=$ $\displaystyle
a(\varepsilon_{31}-\varepsilon_{12})+O(a\varepsilon^{2})$ $\displaystyle=$
$\displaystyle-\frac{\nu(16\beta-10+9\nu)}{24}M$
$\displaystyle+O\left(\frac{M^{2}}{a}\right),$ (50)
where $\nu_{1}=1-\nu$, $\nu_{2}=\nu$ and $\nu_{3}=0$ are used in the last
line.
In a similar manner, the PPN displacement from the Newtonian Lagrange point
$L_{4}$ (and $L_{5}$) with respect to the planet is
$\displaystyle\delta_{PPN}R_{23}$ $\displaystyle\equiv R_{23}-a$
$\displaystyle=a(\varepsilon_{23}-\varepsilon_{12})+O(a\varepsilon^{2})$
$\displaystyle=-\frac{(1-\nu)(16\beta-1-9\nu)}{24}M+O\left(\frac{M^{2}}{a}\right).$
(51)
Eq. (51) can be obtained more easily from Eq. (50) if the correspondence as
$1-\nu\leftrightarrow\nu$ is used.
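Substituting approximate Sun-Jupiter values into Eqs. (50) and (51) reproduces the last two rows of Table 1. In the units $G=c=1$, masses are lengths; the numerical values of $M$ and $\nu$ below are approximate and are our assumptions:

```python
M = 1478.0     # total mass of the Sun-Jupiter system [m], approximate
nu = 9.54e-4   # Jupiter's mass fraction, approximate

def dR31(beta):
    """Eq. (50): L4/L5 displacement with respect to the Sun."""
    return -nu*(16*beta - 10 + 9*nu)/24.0 * M

def dR23(beta):
    """Eq. (51): L4/L5 displacement with respect to Jupiter."""
    return -(1 - nu)*(16*beta - 1 - 9*nu)/24.0 * M

# GR (beta = 1): about -0.35 m toward the Sun and about -920 m toward Jupiter
print(dR31(1.0), dR23(1.0))
```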
Table 1: The PPN displacement from the Newtonian Lagrange points of the Sun-Jupiter system. The PPN corrections to $L_{1}$, $L_{2}$, $L_{3}$ and $L_{4}$ are listed, where the sign convention for $L_{1}$, $L_{2}$, $L_{3}$ is chosen along the direction from the Sun to Jupiter, and the correction to $L_{5}$ is identical to that to $L_{4}$. The PPN displacement for $L_{4}$ is two-dimensional and is hence indicated by the deviations from the Sun and from Jupiter.

Lagrange points | PPN displacement [m]
---|---
$L_{1}$ | $-0.000051+40.00\beta-9.905\gamma$
$L_{2}$ | $0.000040-50.27\beta+12.40\gamma$
$L_{3}$ | $0.000122+1.424\beta+0.01882\gamma$
$L_{4}(L_{5})$-Sun | $-0.05875\times(-9.991+16\beta)$
$L_{4}(L_{5})$-Jupiter | $-61.53\times(-1+16\beta)$
### IV.3 Example: the Sun-Jupiter case
The PPN corrections to $L_{1}$, $L_{2}$ and $L_{3}$ can be expressed as
linear functions of $\beta$ and $\gamma$. The PPN corrections to $L_{4}$ and
$L_{5}$ are linear functions of $\beta$ alone. The results for the Sun-Jupiter
system are summarized in Table 1, where the sign convention is chosen along
the direction from the Sun to the planet.
Before closing this section, we mention gravitational experiments. The lunar
laser ranging experiment put a constraint on $\eta\equiv 4\beta-\gamma-3$ as
$|\eta|<O(10^{-4})$ Williams1996 ; Williams2004 . If one wishes to constrain
$1-\beta$ at the level of $O(10^{-4})$ by using the location of the Lagrange
points, an accuracy of about a few millimeters in the Lagrange point location
(e.g. for $L_{4}$) is needed in the solar system, though this is very unlikely
in the near future.
On the other hand, possible PPN corrections in a three-body system may be
relevant to relativistic astrophysics, e.g. in a relativistic hierarchical
triple system or a supermassive black hole with a compact binary Rosswog ;
Suzuki ; Fang ; Kunz2021 ; Kunz2022 . This subject is beyond the scope of the
present paper.
## V Conclusion
The coplanar and circular three-body problem was investigated for a class of
fully conservative theories in the PPN formalism, characterized by the
Eddington-Robertson parameters $\beta$ and $\gamma$.
The collinear configuration can exist for arbitrary mass ratios, $\beta$ and
$\gamma$. On the other hand, the PPN triangular configuration depends on the
nonlinearity parameter $\beta$ but not on $\gamma$. This is far from trivial,
because $\beta$ is apparently not separable from $\gamma$ at the level of
Eq. (5). For any value of $\beta$, the equilateral configuration in PPN
gravity is possible if and only if the three finite masses are equal or two
test masses orbit around one finite mass. For general masses, the PPN
triangle is not equilateral.
We also showed that the PPN displacements from the Newtonian Lagrange points
$L_{1}$, $L_{2}$ and $L_{3}$ depend on both $\beta$ and $\gamma$, while those
of $L_{4}$ and $L_{5}$ rely only upon $\beta$. The stability of the PPN
configurations is left for future work.
## VI Acknowledgments
We thank Kei Yamada and Yuuiti Sendouda for fruitful conversations. This work
was supported in part by Japan Science and Technology Agency (JST) SPRING,
Grant Number, JPMJSP2152 (Y.N.), and in part by Japan Society for the
Promotion of Science (JSPS) Grant-in-Aid for Scientific Research, No. 20K03963
(H.A.).
## References
* (1) H. Goldstein, Classical Mechanics (Addison-Wesley, MA, 1980).
* (2) J. M. A. Danby, Fundamentals of Celestial Mechanics (William-Bell, VA, 1988).
* (3) C. Marchal, The Three-Body Problem (Elsevier, Amsterdam, 1990).
* (4) H. Asada, Phys. Rev. D 80 064021 (2009).
* (5) Y. Torigoe, K. Hattori and H. Asada, Phys. Rev. Lett. 102, 251101 (2009).
* (6) N. Seto, T. Muto, Phys. Rev. D 81 103004 (2010).
* (7) J. D. Schnittman, Astrophys. J. 724 39 (2010).
* (8) M. Connors, P. Wiegert and C. Veillet, Nature 475, 481 (2011).
* (9) K. Nordtvedt, Phys. Rev. 169 1014 (1968).
* (10) E. Krefetz, Astron. J 72, 471 (1967).
* (11) T. I. Maindl, Completing the Inventory of the Solar System, Astronomical Society of the Pacific Conference Proceedings, edited by T.W. Rettig and J.M. Hahn, (Astronomical Society of the Pacific, San Francisco, 1996), 107, 147.
* (12) K. Yamada, H. Asada, Phys. Rev. D 82, 104019 (2010).
* (13) K. Yamada, H. Asada, Phys. Rev. D 83, 024040 (2011).
* (14) T. Ichita, K. Yamada, H. Asada, Phys. Rev. D 83, 084026 (2011).
* (15) K. Yamada and H. Asada, Phys. Rev. D 86, 124029 (2012).
* (16) K. Yamada, T. Tsuchiya and H. Asada Phys. Rev. D 91, 124016 (2015).
* (17) K. Yamada and H. Asada, Phys. Rev. D 93, 084027 (2016).
* (18) T. Y. Zhou, W. G. Cao, and Y. Xie, Phys. Rev. D 93, 064065 (2016).
* (19) E. Battista, S. Dell’Agnello, G. Esposito, and J. Simo, Phys. Rev. D 91, 084041 (2015); Erratum, Phys. Rev. D 93, 049902(E) (2016).
* (20) E. Battista, S. Dell’Agnello, G. Esposito, L. Di Fiore, J. Simo, and A. Grado, Phys. Rev. D 92, 064045 (2015); Erratum, Phys. Rev. D 93, 109904(E) (2016).
* (21) S. M. Ransom, I. H. Stairs, A. M. Archibald, J. W. T. Hessels, D. L. Kaplan and et al. Nature, 505, 520 (2014).
* (22) Anne M. Archibald, Nina V. Gusinskaia, Jason W. T. Hessels, Adam T. Deller, David L. Kaplan and et al. Nature, 559, 73 (2018).
* (23) C. M. Will, Nature, 559, 40 (2018).
* (24) G. Voisin, I. Cognard, P. C. C. Freire, et al., Astron. Astrophys. 638, A24 (2020).
* (25) C. M. Will, Living Rev. Relativity, 17, 4 (2014).
* (26) E. Poisson, and C. M. Will, Gravity, (Cambridge Univ. Press, UK. 2014).
* (27) S. A. Klioner and M. H. Soffel, Phys. Rev. D 62, 024019 (2000).
* (28) B. M. Barker and R. F. O’Connell, Phys. Lett. A 68, 289 (1978).
* (29) B. M. Barker and R. F. O’Connell, J. Math. Phys. 20, 1427 (1979).
* (30) C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation (Freeman, New York, 1973).
* (31) L. D. Landau and E. M. Lifshitz, The Classical Theory of Fields (Pergamon, New York, 1962).
* (32) J. G. Williams, X. X. Newhall, and J. O. Dickey, Phys. Rev. D 53, 6730 (1996).
* (33) J. G. Williams, S. G. Turyshev, and D. H. Boggs, Phys. Rev. Lett. 93, 261101 (2004).
* (34) S. Rosswog, R. Speith, and G. A. Wynn, Mon. Not. Roy. Astron. Soc. 351, 1121 (2004).
* (35) H. Suzuki, Y. Nakamura, and S. Yamada, Phys. Rev. D 102, 124063 (2020).
* (36) Y. Fang, and Q. G. Huang, Phys. Rev. D 102, 104002 (2020).
* (37) A. Kuntz, F. Serra, and E. Trincherini, Phys. Rev. D 104, 024016 (2021).
* (38) A. Kuntz, Phys. Rev. D 105, 024017 (2022).
Current address: Department of Materials Science and Engineering, Stanford
University, Stanford, CA, USA, 94305
Current address: Department of Physics and Astronomy, University of
Tennessee, Knoxville, TN, 37996, USA
# Evidence of $\phi_{0}$-Josephson junction from skewed diffraction patterns
in Sn-InSb nanowires
B. Zhang Department of Physics and Astronomy, University of Pittsburgh,
Pittsburgh, PA, 15260, USA Z. Li Department of Physics and Astronomy,
University of Pittsburgh, Pittsburgh, PA, 15260, USA V. Aguilar Department
of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA, 15260, USA
P. Zhang Department of Physics and Astronomy, University of Pittsburgh,
Pittsburgh, PA, 15260, USA M. Pendharkar Electrical and Computer
Engineering, University of California, Santa Barbara, CA, 93106, USA C.
Dempsey Electrical and Computer Engineering, University of California, Santa
Barbara, CA, 93106, USA J.S. Lee California NanoSystems Institute,
University of California Santa Barbara, Santa Barbara, CA, 93106, USA S.D.
Harrington Materials Department, University of California Santa Barbara,
Santa Barbara, CA, 93106, USA S. Tan Department of Electrical and Computer
Engineering, University of Pittsburgh, Pittsburgh, PA, 15260, USA Petersen
Institute of NanoScience and Engineering, University of Pittsburgh,
Pittsburgh, PA, 15260, USA J.S. Meyer Univ. Grenoble Alpes, CEA, Grenoble
INP, IRIG, Pheliqs, 38000, Grenoble, France. M. Houzet Univ. Grenoble Alpes,
CEA, Grenoble INP, IRIG, Pheliqs, 38000, Grenoble, France. C.J. Palmstrøm
Electrical and Computer Engineering, University of California, Santa Barbara,
CA, 93106, USA S.M. Frolov Department of Physics and
Astronomy, University of Pittsburgh, Pittsburgh, PA, 15260, USA
###### Abstract
We study Josephson junctions based on InSb nanowires with Sn shells. We
observe skewed critical current diffraction patterns: the maxima in forward
and reverse current bias are at different magnetic flux, with a displacement
of 20-40 mT. The skew is greatest when the external field is nearly
perpendicular to the nanowire, in the substrate plane. This orientation
suggests that spin-orbit interaction plays a role. We develop a
phenomenological model and perform tight-binding calculations, both methods
reproducing the essential features of the experiment. The effect modeled is
the $\phi_{0}$-Josephson junction with higher-order Josephson harmonics. The
system is of interest for Majorana studies: the effects are either precursor
to or concomitant with topological superconductivity. Current-phase relations
that lack inversion symmetry can also be used to design quantum circuits with
engineered nonlinearity.
## I Introduction
Context. Interest in superconductor-semiconductor hybrid structures follows
two directions. On the one hand, they are explored as materials for quantum
technologies, such as superconducting qubits [1, 2]. On the other hand, they
are a platform with high potential for the discovery of topological
superconductivity [3].
Background: Josephson $\varphi_{0}$-junction. In semiconductor nanowires, a
combination of induced superconductivity, spin-orbit interaction and spin
splitting can famously induce Majorana modes and topological superconductivity
[4, 5]. The same ingredients can induce an anomalous Josephson effect, known
as $\varphi_{0}$-junction [6]. The primary characteristic of a Josephson
junction is the current phase relation (CPR) [7]. The most common CPR is a
sinusoidal function $I(\phi)=I_{c}\sin(\phi)$, where $I(\phi)$ is the
Josephson supercurrent, $\phi$ is the phase difference between superconducting
leads, and $I_{c}$ is the critical current. In a $\varphi_{0}$-junction,
$I(\phi=0)\not=0$, which is equivalent to a phase offset $\varphi_{0}$ in a
sinusoidal CPR [8, 9, 10, 11, 12, 13, 6, 14, 15, 16, 17, 18]. The
$\varphi_{0}$-junction state can be accompanied by bias direction-dependent
critical current [6, 14, 15, 16], which was dubbed the “supercurrent diode
effect” [19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31].
A related effect is the $\pi$-junction effect where the additional phase shift
is equal to $\pi$ [32]. Note that the $\pi$-junction is not a special case of
a $\varphi_{0}$-junction. The $\pi$-junction can be realized under more basic
conditions, for instance without any spin-orbit interaction [33].
Challenge. Attempts were made to detect the phase shift $\varphi_{0}$ in
superconducting quantum interference devices (SQUIDs) [34, 35]. However, large
magnetic fields were used. These fields were applied in the SQUID plane in order
not to induce flux in the loop. Yet at high fields of hundreds of millitesla
to a few tesla, and given SQUID areas in the tens of square microns,
multiple flux quanta thread the SQUID due to fringing fields and imperfect
alignment, even when the utmost effort is applied to ensure strictly in-plane
applied fields. Furthermore, Josephson junctions based on semiconductor
nanowires are gate-tunable. This is typically thought of as an advantage due
to an extra control knob. But the electric field from the gate changes the
path of supercurrent in the nanowire, and with it the enclosed flux in the
SQUID. A change by a fraction of a flux quantum is plausible for a 100 nm
nanowire in a field of hundreds of mT. This shifts the SQUID interference
pattern due to extra flux and not due to the intrinsic spin-orbit interaction
and the associated $\varphi_{0}$-junction effect.
Figure 1: A phenomenological model for the skewed diffraction pattern: (a)
CPR of a $\phi_{0}$-Josephson junction ($\delta_{12}=0$) at an external field
$B=0.5B_{c}$ with $\phi_{0}=B$. The local maximum and minimum of the CPR are
taken as the critical currents at positive and negative bias, $I_{c+}$ and
$I_{c-}$. In this configuration we have $|I_{c+}|=|I_{c-}|$. (b) CPR of the
first harmonic (red dotted), the second harmonic (blue dot-dashed), and their
sum (black solid) at an external field $B=0.5B_{c}$. The second harmonic has a
ground-state phase shift $\delta\phi$ from the first harmonic with a magnitude
of $\delta\phi=\delta_{12}/2$. (c) Skewed diffraction pattern generated with
our phenomenological model. (d) Coefficient $\gamma$ as a function of field
$B_{x}$ for $\delta_{12}=0$ and $\delta_{12}\neq 0$, respectively.
Approach. We use supercurrent diffraction patterns as a means of investigating
the $\varphi_{0}$-junction state [19]. The diffraction pattern is the
evolution of the critical current in magnetic field. It can reveal exotic
effects such as d-wave superconductivity in corner junctions [36], the
presence of edge states in planar junctions [37], and higher-order Josephson
harmonics [38]. The fields at which the effect manifests are as low as 10-20
mT, smaller than in previous works [34, 35] but large enough to dominate over
possible self-field effects.
Results list. In InSb-Sn nanowire junctions, we observe skewed diffraction
patterns. When the magnetic field is perpendicular to the nanowire and in-
plane (along the $\hat{x}$-direction), the switching current $I_{c}$ is
inversion symmetric with respect to flux and current bias. The pattern is
nearly-symmetric along the out-of-plane direction ($\hat{y}$) and along the
current flow direction ($\hat{z}$) [Fig. 2(a)]. The effect is observed over
wide ranges of gate voltage, which works against a fine-tuning explanation. At
the same time, no consistent effect of gate voltage on the skew magnitude is
observed. To interpret our results, we develop two models. The first is a
phenomenological model that illustrates how a two-component CPR with
$\varphi_{0}$ can result in a skewed diffraction pattern. The second is a
numerical tight-binding model which yields the $\varphi_{0}$-junction. Using
realistic junction parameters, the numerical model is capable of reproducing
the key experimental observations.
Brief Methods. Junctions are prepared by coating the standing InSb nanowire
with a 15 nm layer of Sn [39]. In front of the nanowire, another nanowire
shadows the flux of Sn to create two disconnected Sn segments. Shadow-junction
wires are transferred onto chips patterned with local gates; contacts to the
wires are made using standard electron beam lithography and thin film
deposition. Measurements are done in a dilution refrigerator with a base
temperature of $\sim$50 mK equipped with a 3D vector magnet. Experimental
panels plotting switching current are extracted from current-voltage
measurements.
## II Figure 1: Phenomenological Model
We present a minimal model capable of reproducing skewed diffraction patterns
due to the $\varphi_{0}$-junction effect. This is not a microscopic model, but
we also perform microscopic tight-binding calculations further below. In the
phenomenological model, we postulate a CPR with the first and second
sinusoidal harmonics. In addition to a variable phase across the junction
$\phi$, we allow for two phase offsets: the global parameter $\varphi_{0}$ and
the relative phase offset between the first and the second harmonics,
$\delta_{12}$:
$\displaystyle I(\phi)=I_{1}\sin(\phi+\varphi_{0})+I_{2}\sin(2\phi+2\varphi_{0}+\delta_{12})$ (1)
where $I_{1}$ and $I_{2}$ are the amplitudes of each harmonic at zero external
field.
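As an illustration of Eq. (1) (a numerical sketch with illustrative parameter values, not fitted to data, and not the code used in our analysis), the two-harmonic CPR can be evaluated on a phase grid and its extrema taken as the bias-direction-dependent critical currents:

```python
import numpy as np

def cpr(phi, I1, I2, phi0, delta12):
    """Two-harmonic current-phase relation of Eq. (1)."""
    return I1 * np.sin(phi + phi0) + I2 * np.sin(2 * phi + 2 * phi0 + delta12)

def critical_currents(I1, I2, phi0, delta12, n=20001):
    """I_c+ and I_c- as the maximum and minimum of the CPR over one period."""
    phi = np.linspace(0.0, 2.0 * np.pi, n)
    i = cpr(phi, I1, I2, phi0, delta12)
    return i.max(), i.min()

# With delta12 = 0 the junction is reciprocal: |I_c+| = |I_c-| even if phi0 != 0.
ic_p, ic_m = critical_currents(I1=1.0, I2=0.3, phi0=0.5, delta12=0.0)
# With delta12 != 0 the critical current depends on the bias direction.
jc_p, jc_m = critical_currents(I1=1.0, I2=0.3, phi0=0.5, delta12=1.0)
```

Running this gives $|I_{c+}|=|I_{c-}|$ in the first case and $|I_{c+}|\neq|I_{c-}|$ in the second, which is the supercurrent diode effect discussed below.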
We first explain how this CPR realizes the so-called supercurrent diode
effect. If $I_{2}=0$, the CPR exhibits $I(\phi=0)\not=0$ with the trace
shifted by $\varphi_{0}$ [Fig. 1(a)]. This offset can, in principle, be
detected in a SQUID, but not in a single junction measurement. This is because
$I_{c+}=I_{c-}$ for the maximum supercurrent in the positive and negative bias
direction. The same is true if $I_{2}\not=0$ but $\delta_{12}=0$. However, if
there is a phase offset $\delta_{12}\not=0$ between the first and the second
harmonics, we get $I_{c+}\not=I_{c-}$, and the current-voltage characteristic
of the junction becomes shifted upwards or downwards [Fig. 1(b)].
This is the supercurrent diode effect. In a single junction, the phase $\phi$
is free to adjust until the maximum supercurrent is reached, detected by a
switch into the finite voltage state. If the CPR has at least two components
with a phase offset between them, the switching current is different for
positive and negative bias directions. Note that the diode effect is not to be
confused with hysteretic supercurrent in underdamped junctions.
Next we generate skewed critical current $I_{c}$ diffraction patterns. For
this we assume a phenomenological magnetic field dependence for the model
parameters: $I_{1},I_{2}\propto(1-(B/B_{c})^{2})$, where $B_{c}$ is the
critical field, reflecting suppression of the critical current by magnetic
field, and $\varphi_{0},\delta_{12}\propto B$ to model the effect of
spin-orbit coupling [6]. Fig. 1(c) shows the maxima in $I_{c+}$ and $I_{c-}$
located symmetrically around $B=0$; the diffraction pattern is skewed and
inversion-symmetric.
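The field sweep just described can be sketched as follows (the amplitudes and the proportionality coefficients for $\varphi_{0}$ and $\delta_{12}$ are illustrative assumptions, not fitted values):

```python
import numpy as np

def diffraction_pattern(B, Bc=1.0, I10=1.0, I20=0.3, a=1.0, b=1.0):
    """Skewed diffraction pattern from the phenomenological dependences
    I1, I2 ~ (1 - (B/Bc)^2), phi0 = a*B, delta12 = b*B.  The amplitudes
    and the coefficients a, b are illustrative, not fitted."""
    phi = np.linspace(0.0, 2.0 * np.pi, 4001)
    env = np.clip(1.0 - (B / Bc) ** 2, 0.0, None)
    icp = np.empty_like(B)
    icm = np.empty_like(B)
    for k, (e, bk) in enumerate(zip(env, B)):
        i = e * I10 * np.sin(phi + a * bk) + \
            e * I20 * np.sin(2 * phi + 2 * a * bk + b * bk)
        icp[k], icm[k] = i.max(), i.min()
    return icp, icm

B = np.linspace(-1.0, 1.0, 201)  # field in units of Bc
icp, icm = diffraction_pattern(B)
```

The resulting pattern is inversion-symmetric, $I_{c+}(B)=-I_{c-}(-B)$, with the maximum of $|I_{c+}|$ displaced to a finite field, as in Fig. 1(c).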
To quantify the skew, we introduce a coefficient $\gamma$, which plays the
same role as the 'supercurrent diode coefficient' (or quality factor) used in
Refs. [27, 24]:
$\displaystyle\gamma=\frac{\Delta
I_{c}}{\overline{|I_{c}|}}=\frac{|I_{c+}|-|I_{c-}|}{(|I_{c+}|+|I_{c-}|)/2}$ (2)
Here $\Delta I_{c}$ is the difference between the magnitudes of $I_{c+}$ and
$I_{c-}$, and $\overline{|I_{c}|}$ is their average. In Fig. 1(d),
$\gamma=0$ at any field when $\delta_{12}=0$. When $\delta_{12}\not=0$,
$\gamma$ peaks at a finite field.
Experimentally, $\gamma$ can also change sign, without going through zero, as
the field is increased. The phenomenological model provides a simple explanation
for this. Within the model, $\gamma=-2(I_{2}/I_{1})\sin\delta_{12}$ when
$I_{2}\ll I_{1}$. Hence, $\gamma$ changes sign when $\delta_{12}=\pi$. There
is no special significance to this situation, and in particular it does not
correspond to any topological transition.
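The quoted small-ratio limit can be checked numerically (a sketch; the parameter values below are arbitrary illustrations):

```python
import numpy as np

def gamma(I1, I2, phi0, delta12, n=40001):
    """Skew coefficient of Eq. (2) for the two-harmonic CPR of Eq. (1)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n)
    i = I1 * np.sin(phi + phi0) + I2 * np.sin(2 * phi + 2 * phi0 + delta12)
    icp, icm = i.max(), abs(i.min())
    return (icp - icm) / ((icp + icm) / 2.0)

# Small-ratio limit quoted in the text: gamma ~ -2 (I2/I1) sin(delta12).
g_num = gamma(I1=1.0, I2=0.01, phi0=0.3, delta12=0.7)
g_approx = -2.0 * 0.01 * np.sin(0.7)
```

The numerical value agrees closely with the approximation, and $\gamma$ vanishes at $\delta_{12}=\pi$, consistent with the sign change discussed above.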
## III Figure 2: Device Description
A schematic of the nanowire device is depicted in Fig. 2(a). An InSb nanowire
(blue) covered by a Sn shell (silver) is placed on top of local gate electrodes
and contacted by Ti/Au (gold) contacts. The direction of supercurrent is along
$\hat{z}$. The spin-orbit field $B_{so}$ arises from the breaking of inversion
symmetry in the device geometry; its direction is indicated along
$-\hat{x}$ based on previous experiments [40].
The scanning electron microscope image of device A is in Fig. 2(b). The Sn
shell covers 3 of the 6 facets of the hexagonal nanowire cross-section. For
device A, the Sn shell faces the bottom, meaning it is oriented opposite to
the schematic in Fig. 2(a). For devices B and C the shell is likely on the
side (see supplementary information for cross-sectional STEM and AFM imaging).
In principle, the shell orientation can influence the direction of $B_{so}$,
but we see no evidence of this from the measurements.
Figure 2: (a) Cartoon of a shadow nanowire Josephson junction device.
Effective spin-orbit magnetic field ($B_{so}$) is indicated. (b) Scanning
Electron Microscope image of Device A. Figure 3: Skewed critical current
diffraction pattern in device A. (a) At gate voltage $V_{gate}=-2\mathrm{V}$.
The normal resistance is 4 k$\Omega$. (b) Line-cuts from (a) labeled with
color squares. Critical current at positive and negative bias are labeled as
$I_{c+}$ and $I_{c-}$. (c) Upper panel: switching currents extracted from
panel (a). Lower panel: coefficient $\gamma$ calculated from the upper panel.
## IV Figure 3: Skewed Diffraction Pattern
Fig. 3 shows a representative skewed diffraction pattern from device A. The
field is applied along $\hat{x}$, in-plane and perpendicular to the nanowire.
In Fig. 3(a) the switching current is where the differential resistance
$dV/dI$ changes from zero (dark blue) to a finite value. The pattern is
visibly inversion-symmetric. In data processing, we treat current source that
gives voltage drop across the device smaller than $10\mu$V as superconducting
regime and vice versa. The switching currents $I_{c+}$ and $I_{c-}$ extracted
from panel (a) exhibit maxima displaced to positive and negative fields
respectively [Figs. 3(b),3(c)]. By taking I-V traces at $B_{x}=50$ mT and
$B_{x}=-50$ mT, we get $\Delta I_{c}=5-8$ nA [Fig. 3(b)], which is about 30%
of $|I_{c}|$. The maxima of $I_{c\pm}$ are at $B_{x}=\pm 20$ mT. The
coefficient $\gamma$ peaks at a higher field of order 100 mT right before the
collapse of $I_{c}$ at higher fields.
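The threshold-based extraction can be sketched as follows (an illustrative reimplementation, not our actual analysis code; the synthetic trace and its parameters are assumptions):

```python
import numpy as np

def switching_currents(i_bias, v_meas, v_thresh=10e-6):
    """Extract I_c+ and I_c- from a current-biased I-V trace: points with
    |V| below the 10 uV threshold count as the superconducting regime,
    and the switching currents are the edges of that window."""
    sc = np.abs(v_meas) < v_thresh
    if not sc.any():
        return 0.0, 0.0
    return i_bias[sc].max(), i_bias[sc].min()

# Synthetic trace with asymmetric switching currents and a voltage jump
# at the switch (R ~ 4 kOhm, as quoted for device A in Fig. 3).
i = np.linspace(-300e-9, 300e-9, 1201)
icp_true, icm_true = 120e-9, -100e-9
R = 4e3
v = np.where((i > icp_true) | (i < icm_true), R * i, 0.0)
icp, icm = switching_currents(i, v)
```

On this synthetic trace the routine recovers the asymmetric switching currents to within one bias-grid step.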
Supercurrent is not hysteretic in this regime. A hysteresis would manifest in
a vertical shift in the $\gamma$ dependence which would not be inversion-
symmetric. We discuss data acquisition and processing in the underdamped
regime in supplementary information. A 10 mT hysteresis in magnetic field is
found in the measurements (Supplementary Section V) and is taken into account
in our discussion.
Figure 4: Field rotation in Device B. Coefficient $\gamma$ as a function of
angle when the field is rotated in three orthogonal planes: x-y (a), x-z (b),
and y-z (c), with fixed strength $|B|=50~{}\text{mT}$. In (a) the external
field is along the $\hat{y}$-axis when $\theta_{xy}=90^{\circ}$ and
$270^{\circ}$. In (b) the external field is along the $\hat{x}$-axis when
$\theta_{z-x}=90^{\circ}$ and $270^{\circ}$. In (c) the external field is
along the $\hat{y}$-axis when $\theta_{z-y}=90^{\circ}$ and $270^{\circ}$.
## V Figure 4: Magnetic Field Anisotropy
Device B has the same geometry as device A and its SEM picture can be found in
Fig. S2(c). We measure device B in the external field $|B|=50$ mT and rotate
the field in three orthogonal planes. Critical currents are traced from zero
bias and extracted where the current bias yields a differential resistance
larger than 2 k$\Omega$. Coefficients $\gamma$ are calculated from the
extracted $I_{c+}$ and $I_{c-}$ [Fig. 4(a)-(c)]. In the x-y and x-z planes,
$\gamma$ reaches zero when the external field is aligned with the $\hat{y}$ or
$\hat{z}$ axis, and is large along $\hat{x}$. In the y-z plane, $\gamma$ is significantly
reduced. More critical current diffraction patterns in fields at a different
angle in the x-y plane can be found in Fig. S4. The junction is made with
nanowires that are half-covered by superconductor shells. Shell orientation is
studied in the supplementary materials. Device A has its Sn shell on the
bottom and device B has the Sn shell on the side [Fig. S2]. Nevertheless, the
same general behavior, with skew coefficient $\gamma$ being largest along
$\hat{x}$ is observed in devices A, B, C (see Supplementary III).
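The angular dependence in Fig. 4 is consistent with the skew tracking the field component along the spin-orbit axis. A minimal sketch of this expectation (the angle conventions below are chosen to match the caption of Fig. 4, and the linear model $\gamma\propto B_{x}$ is an assumption):

```python
import numpy as np

def field_components(Bmag, theta_deg, plane):
    """Field vector (Bx, By, Bz) for rotation by theta in a given plane.
    Angle conventions chosen to match Fig. 4: at 90 deg the field is along
    y (xy plane), x (zx plane), or y (zy plane)."""
    t = np.deg2rad(theta_deg)
    if plane == "xy":
        return Bmag * np.array([np.cos(t), np.sin(t), 0.0])
    if plane == "zx":
        return Bmag * np.array([np.sin(t), 0.0, np.cos(t)])
    if plane == "zy":
        return Bmag * np.array([0.0, np.sin(t), np.cos(t)])
    raise ValueError(plane)

def gamma_model(Bmag, theta_deg, plane, c=1.0):
    """Toy expectation: skew proportional to the x component of the field."""
    return c * field_components(Bmag, theta_deg, plane)[0]
```

In this toy picture $\gamma$ vanishes whenever the field is along $\hat{y}$ or $\hat{z}$, is maximal along $\hat{x}$, and is identically zero for rotations in the y-z plane, qualitatively matching Fig. 4.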
Figure 5: Numerical simulation results: (a) Critical currents ($I_{c+}$ and
$I_{c-}$) as a function of magnetic field $B_{x}$. The chemical potential is
set to two transverse or four spinful modes ($\mu=8~\mathrm{meV}$). (b)
Coefficient $\gamma$ as a function of $B_{x}$ corresponding to different
combinations of terms in the Hamiltonian (see legend). (c) Coefficient
$\gamma$ as a function of angle $\theta$ when the external field is rotating
in three orthogonal planes with fixed strength $|B|=50$ mT. The Zeeman effect
(g = 50) is present in all the results. Other parameters used in the
simulation are $\alpha=200$ $\mathrm{nm}\cdot\mathrm{meV}$,
$m_{eff}=0.015m_{e}$, temperature $T=100$ mK. The lattice constant $a=8$ nm,
the nanowire diameter $d_{1}=120$ nm, the outer diameter (with Sn shell)
$d_{2}=140$ nm and the coverage angle $\phi=180^{\circ}$. How chemical
potentials were chosen in the simulation is discussed in Supplementary Part V.
## VI Figure 5: Tight-Binding Model
We numerically study the microscopic properties of the system within a tight-
binding model that has the same geometry as experiments (see supplementary
information for the description of the 3D model). This model was developed to
study supercurrent interference in nanowires using KWANT [41, 42, 43]. In that
project, the field orientation along the nanowire was primarily investigated.
Here, we rotate the field. We can toggle on and off the spin-orbit interaction
($\alpha$) and the orbital vector-potential effect of the external field (A),
while the Zeeman effect and disorder always remain on. For details of this model, see
Supplementary materials Section VII.
In Fig. 5(a) we reproduce a skewed diffraction pattern within the
tight-binding model for fields oriented along $\hat{x}$. In Fig. 5(b), we
illustrate the role of spin-orbit interaction. The characteristic peak-antipeak
structure is present whenever $\alpha\neq 0$. The coefficient $\gamma$ remains
zero when only the Zeeman effect of the magnetic field is included.
The orbital effect alone ($\alpha=0$, $\textbf{{A}}\not=0$) yields a similar
structure (see $B_{x}=80~\mathrm{mT}$), limited in magnitude and field range.
We believe this is a simulation artefact that appears when the external field
is perpendicular to the line connecting the center of the wire and the center
of the shell (see Supplementary Section VII for simulations at other chemical
potentials). With all contributions toggled on, we reproduce the magnetic field
anisotropy of the coefficient $\gamma$; compare Fig. 4 and Fig. 5(c).
The tight-binding model allows us to generate current-phase relations, which
exhibit $\varphi_{0}$ shifts; higher-order harmonics and an extra
$\delta_{12}$ emerge and increase when the field is along $\hat{x}$ [Fig. S16].
This agrees with the phenomenological model of Fig. 1. The parameters used in
the numerical model are discussed in the supplementary. Some of the
parameters, such as spin-orbit strength, exceed those previously reported for
InSb nanowires [40], however we cannot claim that matching this model to data
is a reliable way of extracting spin-orbit interaction strength.
Figure 6: Comparison between the experiment (a) and the numerical simulation
(b). Coefficient $\gamma$ versus external field $B_{x}$ and gate voltage
(chemical potential $\mu$) is plotted as a 2D map to study the skew shape as a
function of gate voltage, $V_{gate}$ ($\mu$ in the simulation). The
experimental data are taken from device B. Parameters used for the simulation
are the same as in Fig. 5. Based on the discussion in Supplementary Section V,
the gate voltage range in our experiment corresponds to a chemical potential
$\mu$ = 15-30 meV in the simulation. Another 2D map derived from Device C can
be found in Fig. S18.
## VII Figure 6: Gate Voltage Dependence
For device B we take gate sweeps of the supercurrent in a series of external
fields that are along the $\hat{x}$-axis [Fig.6(a)]. We observe that for
$V_{gate}>0$, $\gamma$ has a characteristic double-peak shape in magnetic
field which describes the skewed diffraction pattern. The behavior is observed
over a significant range of gate voltages, from near pinch-off to near
saturation of normal state conductance. The less regular traces at negative
gate voltages are too close to pinch-off where supercurrent is small. See
supplementary information for unprocessed gate voltage data for this and other
devices.
The data demonstrate that skewed diffraction patterns do not appear only at
fine-tuned values of gate voltage. On the other hand, there is no clear
gate-voltage dependence of the magnitude or magnetic-field extent of $\gamma$.
The behavior looks qualitatively similar in the tight-binding simulation [Fig.
6(b)]. Given the gate voltage range, we do not expect to be able to
significantly tune the bulk Rashba spin-orbit coefficient in these InSb
nanowires.
## VIII Alternative Explanation: Self-field effects
Skewed diffraction patterns were observed for decades in Josephson junctions
due to self-field effects, where current through the junction generates flux
in the junction. This effect is more pronounced in wide junctions with high
critical current density. We estimate the magnetic field generated by current
flow through the nanowire with the Biot-Savart law. With a current I = 300 nA,
nanowire radius r = 60 nm and junction width L = 120 nm, we get a self-induced
field B $\sim$ 1 $\mu$T at the surface. In Figs. 3(a) and 3(c), the maxima and
minima of $I_{c}$ are achieved at $|B|=20$ mT, thousands of times
larger than the self-induced field. Therefore, the self-field effect is not
enough to explain the skewed pattern we observe. Furthermore, the skew due to
self-fields should generally manifest for external fields along both $\hat{x}$
and $\hat{y}$, and be sensitive to the shell orientation, while the skew
reported here is strongest along $\hat{x}$ for two different shell
orientations.
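The estimate above can be checked with the field of a long straight wire (a back-of-the-envelope sketch):

```python
from math import pi

MU0 = 4.0 * pi * 1e-7  # vacuum permeability, T*m/A

def wire_surface_field(current, radius):
    """Ampere's-law field at the surface of a long straight wire,
    B = mu0 * I / (2 * pi * r)."""
    return MU0 * current / (2.0 * pi * radius)

B_self = wire_surface_field(current=300e-9, radius=60e-9)  # = 1e-6 T = 1 uT
ratio = 20e-3 / B_self  # 20 mT skew scale vs self-field: ~2e4
```

The self-field is four orders of magnitude below the 20 mT scale at which the skew develops.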
## IX Conclusion
Skewed diffraction patterns, observed in our experiment, are reproduced using
two theoretical models. The phenomenological model and the tight-binding
model agree that the imbalance between $I_{c+}$ and $I_{c-}$, and the critical
current maxima displaced from zero field, are related to a current-phase
relation with two Josephson harmonics and a phase shift between them. The
tight-binding model yields phase shifts $\varphi_{0}$ and $\delta_{12}$ due to
strong Rashba spin-orbit interaction. In turn, the experiment shows that the
skew in the diffraction pattern is largest when the field is oriented along
$\hat{x}$, the likely orientation of the spin-orbit effective field in
nanowires. The effects are
observed in multiple nanowires and require no fine-tuning with gate voltages.
## X Acknowledgements
We thank G. Badawy, S. Gazibegovic, E. Bakkers for providing InSb nanowires.
We thank Y. Nazarov, R. Mong for discussions. We acknowledge the use of shared
facilities of the NSF Materials Research Science and Engineering Center
(MRSEC) at the University of California Santa Barbara (DMR 1720256) and the
Nanotech UCSB Nanofabrication Facility.
## XI Funding
Work supported by the NSF PIRE:HYBRID OISE-1743717, NSF Quantum Foundry funded
via the Q-AMASE-i program under award DMR-1906325, U.S. ONR and ARO and France
ANR through Grant No. ANR-17-PIRE-0001 (HYBRID).
## XII Data Availability
Curated library of data extending beyond what is presented in the paper, as
well as simulation and data processing code are available at [44].
## XIII Duration and Volume of Study
This project was started in May 2021, when the skewed diffraction pattern was
first observed in nanowire Josephson junctions (devices were studied longer).
The experimental measurements ended in May 2022 and the simulation analysis
ended in August 2022. Devices studied in this project are made of nanowires
reported in Ref. [39]. 28 devices on 3 chips were measured during 5
cooldowns in dilution refrigerators, producing about 5200 datasets.
## XIV References
## References
* Larsen _et al._ [2015] T. W. Larsen, K. D. Petersson, F. Kuemmeth, T. S. Jespersen, P. Krogstrup, J. Nygård, and C. M. Marcus, Physical review letters 115, 127001 (2015).
* De Lange _et al._ [2015] G. De Lange, B. Van Heck, A. Bruno, D. Van Woerkom, A. Geresdi, S. Plissard, E. Bakkers, A. Akhmerov, and L. DiCarlo, Physical review letters 115, 127002 (2015).
* Frolov _et al._ [2020] S. Frolov, M. Manfra, and J. Sau, Nature Physics 16, 718 (2020).
* Lutchyn _et al._ [2010] R. M. Lutchyn, J. D. Sau, and S. D. Sarma, Physical review letters 105, 077001 (2010).
* Oreg _et al._ [2010] Y. Oreg, G. Refael, and F. Von Oppen, Physical review letters 105, 177002 (2010).
* Yokoyama _et al._ [2014] T. Yokoyama, M. Eto, and Y. V. Nazarov, Physical Review B 89, 195407 (2014).
* Golubov _et al._ [2004] A. A. Golubov, M. Y. Kupriyanov, and E. Il’Ichev, Reviews of modern physics 76, 411 (2004).
* Buzdin [2008] A. Buzdin, Physical review letters 101, 107005 (2008).
* Reynoso _et al._ [2008] A. Reynoso, G. Usaj, C. Balseiro, D. Feinberg, and M. Avignon, Physical review letters 101, 107001 (2008).
* Zazunov _et al._ [2009] A. Zazunov, R. Egger, T. Jonckheere, and T. Martin, Physical review letters 103, 147004 (2009).
* Margaris _et al._ [2010] I. Margaris, V. Paltoglou, and N. Flytzanis, Journal of Physics: Condensed Matter 22, 445701 (2010).
* Reynoso _et al._ [2012] A. A. Reynoso, G. Usaj, C. Balseiro, D. Feinberg, and M. Avignon, Physical Review B 86, 214519 (2012).
* Brunetti _et al._ [2013] A. Brunetti, A. Zazunov, A. Kundu, and R. Egger, Physical Review B 88, 144515 (2013).
* Silaev _et al._ [2014] M. Silaev, A. Y. Aladyshkin, M. Silaeva, and A. Aladyshkina, Journal of Physics: Condensed Matter 26, 095702 (2014).
* Chen _et al._ [2018] C.-Z. Chen, J. J. He, M. N. Ali, G.-H. Lee, K. C. Fong, and K. T. Law, Physical Review B 98, 075430 (2018).
* Minutillo _et al._ [2018] M. Minutillo, D. Giuliano, P. Lucignano, A. Tagliacozzo, and G. Campagnano, Physical Review B 98, 144510 (2018).
* Campagnano _et al._ [2015] G. Campagnano, P. Lucignano, D. Giuliano, and A. Tagliacozzo, Journal of Physics: Condensed Matter 27, 205301 (2015).
* Dolcini _et al._ [2015] F. Dolcini, M. Houzet, and J. S. Meyer, Physical Review B 92, 035428 (2015).
* Sickinger _et al._ [2012] H. Sickinger, A. Lipman, M. Weides, R. Mints, H. Kohlstedt, D. Koelle, R. Kleiner, and E. Goldobin, Physical review letters 109, 107002 (2012).
* Ando _et al._ [2020] F. Ando, Y. Miyasaka, T. Li, J. Ishizuka, T. Arakawa, Y. Shiota, T. Moriyama, Y. Yanase, and T. Ono, Nature 584, 373 (2020).
* Lyu _et al._ [2021] Y.-Y. Lyu, J. Jiang, Y.-L. Wang, Z.-L. Xiao, S. Dong, Q.-H. Chen, M. V. Milošević, H. Wang, R. Divan, J. E. Pearson, _et al._ , Nature communications 12, 1 (2021).
* Baumgartner _et al._ [2022] C. Baumgartner, L. Fuchs, A. Costa, S. Reinhardt, S. Gronin, G. C. Gardner, T. Lindemann, M. J. Manfra, P. E. Faria Junior, D. Kochan, _et al._ , Nature nanotechnology 17, 39 (2022).
* Wu _et al._ [2022] H. Wu, Y. Wang, Y. Xu, P. K. Sivakumar, C. Pasco, U. Filippozzi, S. S. Parkin, Y.-J. Zeng, T. McQueen, and M. N. Ali, Nature 604, 653 (2022).
* He _et al._ [2022] J. J. He, Y. Tanaka, and N. Nagaosa, New Journal of Physics 24, 053014 (2022).
* Kopasov _et al._ [2021] A. Kopasov, A. Kutlin, and A. Mel’nikov, Physical Review B 103, 144520 (2021).
* Daido _et al._ [2022] A. Daido, Y. Ikeda, and Y. Yanase, Physical Review Letters 128, 037001 (2022).
* Yuan and Fu [2022] N. F. Yuan and L. Fu, Proceedings of the National Academy of Sciences 119, e2119548119 (2022).
* Hou _et al._ [2022] Y. Hou, F. Nichele, H. Chi, A. Lodesani, Y. Wu, M. F. Ritter, D. Z. Haxell, M. Davydova, S. Ilić, F. S. Bergeret, _et al._ , arXiv preprint arXiv:2205.09276 (2022).
* Gupta _et al._ [2022] M. Gupta, G. V. Graziano, M. Pendharkar, J. T. Dong, C. P. Dempsey, C. Palmstrøm, and V. S. Pribiag, arXiv preprint arXiv:2206.08471 (2022).
* Bauriedl _et al._ [2022] L. Bauriedl, C. Bäuml, L. Fuchs, C. Baumgartner, N. Paulik, J. M. Bauer, K.-Q. Lin, J. M. Lupton, T. Taniguchi, K. Watanabe, _et al._ , Nature communications 13, 1 (2022).
* Pal _et al._ [2022] B. Pal, A. Chakraborty, P. K. Sivakumar, M. Davydova, A. K. Gopi, A. K. Pandeya, J. A. Krieger, Y. Zhang, M. Date, S. Ju, _et al._ , Nature physics 18, 1228 (2022).
* Ryazanov _et al._ [2001] V. Ryazanov, V. Oboznov, A. Y. Rusanov, A. Veretennikov, A. A. Golubov, and J. Aarts, Physical review letters 86, 2427 (2001).
* Heikkilä _et al._ [2000] T. T. Heikkilä, F. K. Wilhelm, and G. Schön, EPL (Europhysics Letters) 51, 434 (2000).
* Szombati _et al._ [2016] D. Szombati, S. Nadj-Perge, D. Car, S. Plissard, E. Bakkers, and L. Kouwenhoven, Nature Physics 12, 568 (2016).
* Pita-Vidal _et al._ [2020] M. Pita-Vidal, A. Bargerbos, C.-K. Yang, D. J. Van Woerkom, W. Pfaff, N. Haider, P. Krogstrup, L. P. Kouwenhoven, G. De Lange, and A. Kou, Physical Review Applied 14, 064038 (2020).
* Wollman _et al._ [1995] D. Wollman, D. J. Van Harlingen, J. Giapintzakis, and D. Ginsberg, Physical review letters 74, 797 (1995).
* Hart _et al._ [2014] S. Hart, H. Ren, T. Wagner, P. Leubner, M. Mühlbauer, C. Brüne, H. Buhmann, L. W. Molenkamp, and A. Yacoby, Nature Physics 10, 638 (2014).
* Stoutimore _et al._ [2018] M. Stoutimore, A. Rossolenko, V. Bolginov, V. Oboznov, A. Rusanov, D. Baranov, N. Pugach, S. Frolov, V. Ryazanov, and D. Van Harlingen, Physical review letters 121, 177702 (2018).
* Pendharkar _et al._ [2021] M. Pendharkar, B. Zhang, H. Wu, A. Zarassi, P. Zhang, C. Dempsey, J. Lee, S. Harrington, G. Badawy, S. Gazibegovic, _et al._ , Science 372, 508 (2021).
* Nadj-Perge _et al._ [2012] S. Nadj-Perge, V. Pribiag, J. Van den Berg, K. Zuo, S. Plissard, E. Bakkers, S. Frolov, and L. Kouwenhoven, Physical review letters 108, 166801 (2012).
* Groth _et al._ [2014] C. W. Groth, M. Wimmer, A. R. Akhmerov, and X. Waintal, New Journal of Physics 16, 063065 (2014).
* Nijholt and Akhmerov [2016] B. Nijholt and A. R. Akhmerov, Physical Review B 93, 235434 (2016).
* Zuo _et al._ [2017] K. Zuo, V. Mourik, D. B. Szombati, B. Nijholt, D. J. Van Woerkom, A. Geresdi, J. Chen, V. P. Ostroukh, A. R. Akhmerov, S. R. Plissard, _et al._ , Physical review letters 119, 187704 (2017).
* [44] DOI: 10.5281/zenodo.7374094 .
Supplementary Materials
## XV Fabrication and Measurements
Nanowires are placed onto a Si chip that has predefined local gates.
Electrostatic local gates are patterned by 100 keV electron beam lithography
(EBL) on undoped Si substrates. Local gates have mixed widths of 80 and 200 nm
and are separated by a distance of 40 nm. Electron beam evaporation of
1.5/6 nm Ti/PdAu is used to metalize the gates, which are covered by 10 nm of
ALD HfOx that serves as a dielectric layer [Fig. S1(a)].
Figure S1: Fabrication steps in making nanowires Josephson junction (a) Local
gate chips made on a Si substrate (purple). Ti/AuPd gates (gold) are covered
by HfOx dielectric (green). (b) InSb shadow nanowires that are half-covered by
Sn shells are transferred onto the chips. (c) Metal leads made with Ti/Au are
connected to the shell to form a Josephson junction.
Based on the scanning electron microscope images of all devices
studied in this report, the length of the JJs made with Sn-InSb nanowires is
120-150 nm. The width of the junction is the same as the width of the
nanowires, which is $\approx$ 120 nm. After placing nanowires onto gates [Fig.
S1(b)], we cover the whole chip with PMMA 950 A4 electron beam resist. Resist
is dried at room temperature by pumping with a mechanical pump in a metal
desiccator for 24 hours. Then we use EBL to define normal lead patterns. After
development, we clean the residue of resist in an oxygen plasma asher. In the
electron beam evaporator, we first use an in-situ ion mill to remove AlOx
capping layer from the nanowires in the contact area, after which we deposit
10 nm/130 nm Ti/Au on the chips [Fig. S1(c)].
Two-point transport measurements are used, with a current source and a voltage
measurement module connected in parallel, and with several stages of filtering
placed at different temperatures.
## XVI Device list and shell orientation
3 chips are studied in this project. Each chip contains multiple devices.
Among these chips, 5 devices demonstrate sharp critical currents that are
tuned by the gates. Skewed diffraction patterns taken from these devices are
all plotted and shown in this report. In the main text, we present skewed
diffraction patterns from device A [Fig. 3]. In device B, we study the
field-direction dependence by rotating the field in different planes, and
study the effect of gate voltage and field on the skewed diffraction pattern
with 2D maps [Figs. 4 and 6].
Figure S2: Scanning electron microscope (SEM) images of the devices measured
in this report. (a) Right: SEM image of Device A (Chip: QPC2). Left: Scanning
transmission electron microscopy (STEM) based energy-dispersive x-ray
spectroscopy (EDS) elemental map of Device A. (b) Right: SEM image of Device
A1 (Chip: QPC2). Left: STEM-based EDS elemental map of Device A1. (c) Right:
SEM image of Device B (Chip: QPC4). Left: Atomic force microscopy (AFM) image
of Device B. (d) SEM image of Device C (Chip: QPC3). (e) SEM image of Device
C1 (Chip: QPC3).
Devices A and A1 are on Chip QPC2. The EDS results show InSb nanowires and Sn
shells; no ferromagnetic materials were found in the junction. The Sn shells
half-cover the nanowires from the bottom side and are in contact with the
HfOx dielectric layer.
Device B is on Chip QPC4. The shell orientation of the device is studied with
Atomic Force Microscopy [Fig. S2(c)]. Based on the AFM images, we conclude
that the Sn shell is on the side.
Devices C and C1 are on Chip QPC3. The shell orientations are not studied.
## XVII Diffraction pattern at different gates in field along three axes
### XVII.1 Device A
Figure S3: Diffraction patterns at different gates in the field along three
axes, measured in Device A (QPC2). (a) dV/dI differential resistance as a
function of $\hat{x}$-direction field $B_{x}$ and current bias. $\gamma$ is
calculated with the extracted magnitude $\Delta I_{c}$. (b) Diffraction
pattern when the field is applied parallel to the nanowires, along
$\hat{z}$-direction. (c) Diffraction pattern when the field is applied out of
substrate plane, along $\hat{y}$-direction.
### XVII.2 Device A1
Figure S4: Diffraction patterns at different gates in the field along the
$\hat{x}$ and $\hat{z}$ axes, measured in Device A1 (QPC2). (a) dV/dI
differential resistance as a function of $\hat{x}$-direction field $B_{x}$ and
current bias. $\gamma$ is calculated with the extracted magnitude $\Delta
I_{c}$. (b) Diffraction pattern when the field is applied parallel to the
nanowires, along the $\hat{z}$-direction.
### XVII.3 Device B
Figure S5: Diffraction patterns at different gates in the field along the
$\hat{x}$ and $\hat{z}$ axes, measured in Device B (QPC4). (a) dV/dI
differential resistance as a function of $\hat{x}$-direction field $B_{x}$ and
current bias. $\gamma$ is calculated with the extracted magnitude $\Delta
I_{c}$. (b) Diffraction pattern when the field is applied parallel to the
nanowires, along the $\hat{z}$-direction.
### XVII.4 Device C
Figure S6: Diffraction patterns at different gates in the field along three
axes, measured in Device C (QPC3). (a) dV/dI differential resistance as a
function of $\hat{x}$-direction field $B_{x}$ and current bias. $\gamma$ is
calculated with the extracted magnitude $\Delta I_{c}$. (b) Diffraction
pattern when the field is applied parallel to the nanowires, along the
$\hat{z}$-direction. (c) Diffraction pattern when the field is applied out of
the substrate plane, along the $\hat{y}$-direction.
### XVII.5 Device C1
Figure S7: Diffraction patterns at different gates in the field along three
axes, measured in Device C1 (QPC3). (a) dV/dI differential resistance as a
function of $\hat{x}$-direction field $B_{x}$ and current bias. $\gamma$ is
calculated with the extracted magnitude $\Delta I_{c}$. There is a gate
shift, so the magnitude of $I_{c}$ may differ from the scans with the other
two field directions. (b) Diffraction pattern when the field is applied
parallel to the nanowires, along the $\hat{z}$-direction. (c) Diffraction
pattern when the field is applied out of the substrate plane, along the
$\hat{y}$-direction.
## XVIII Diffraction pattern in the x-y plane, field applied along angle
$\theta$ between $0^{\circ}$ and $180^{\circ}$, Device C
Figure S8: Diffraction pattern in the x-y plane; the field is applied along
angle $\theta_{x-y}$ between $0^{\circ}$ and $180^{\circ}$, measured in Device
C. The critical current difference $\Delta I_{c}$ is extracted from the
measurement data using a peak-finder Python script.
## XIX Characteristics of the Junctions and hysteresis in the measurement
setup
Figure S9: Gate dependence taken at external field $|B|$ = 0; measurement
data are taken from Device A (a-c), Device B (d-f) and Device C (g-i). (a,d,g)
Current through the JJs as a function of gate voltage $V_{gate}$ when the bias
voltage is set to $V_{bias}$ = 10 mV. (b,e,h) V-I characteristics taken at
different gate voltages. (c,f,i) Upper panel: dV/dI differential resistance as
a function of current bias and $V_{gate}$. Lower panel: extracted critical
current $I_{c}$ (blue) and $I_{c}R_{N}$ product (red) as functions of
$V_{gate}$. (j) Electrical conductance of the junction in units of the
conductance quantum ($2e^{2}/h$) as a function of chemical potential $\mu$,
simulated with the parameters used in other figures, followed by the
normal-state conductance as a function of gate voltage $V_{gate}$ measured in
devices A, B, and C, respectively. Differential resistances are extracted
from (c,f,i) and converted to conductance for plotting. (k) Critical current
difference $\Delta I_{c}$ as a function of gate voltage at zero field,
measured in devices A, B, and C. $I_{c+}$ and $I_{c-}$ are extracted from
(c,f,i).
The gate effect is studied by applying a bias voltage $V_{bias}=10\
\mathrm{mV}$ across the device. The conduction channels in Device A [Fig.
S9(a)] can be fully closed by the tunnel gate, while Devices B and C [Fig.
S9(d),(g)] cannot be fully closed. The Josephson effect at zero magnetic field
is best studied in the current-bias configuration [Fig. S9(c),(f),(i)]. The
switching current from the superconducting to the normal-state regime appears
as a sharp peak in the differential resistance (referred to in other figures
as $I_{c}$). The magnitude of $I_{c}$ is calculated as
$|I_{c}|=(|I_{c+}|+|I_{c-}|)/2$, where $I_{c+}$ and $I_{c-}$ are extracted
with a peak-finder Python script by locating the two largest
differential-resistance peaks at each gate voltage. $I_{c}$ increases at more
positive gate voltage, while the extracted products $I_{c}R_{N}$ ($R_{N}$ is
the normal-state resistance) are in the range of 200–300 $\mu eV$.
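The extraction step described above can be sketched as follows; the function
and array names (and the synthetic trace) are ours, not the actual analysis
script:

```python
import numpy as np

def extract_ic(i_bias, dvdi):
    """Sketch of the peak-finder step: take the bias values of the largest
    dV/dI peak on the positive and negative branches as I_c+ and I_c-."""
    pos, neg = i_bias > 0, i_bias < 0
    ic_plus = i_bias[pos][np.argmax(dvdi[pos])]
    ic_minus = i_bias[neg][np.argmax(dvdi[neg])]
    ic_mag = (abs(ic_plus) + abs(ic_minus)) / 2   # |I_c| = (|I_c+| + |I_c-|)/2
    delta_ic = abs(ic_plus) - abs(ic_minus)       # hysteresis / skew measure
    return ic_plus, ic_minus, ic_mag, delta_ic

# Synthetic dV/dI trace with switching peaks at +1.2 and -0.8 (arbitrary units)
i = np.linspace(-2, 2, 4001)
dvdi = np.exp(-((i - 1.2) / 0.05) ** 2) + np.exp(-((i + 0.8) / 0.05) ** 2)
ic_plus, ic_minus, ic_mag, delta_ic = extract_ic(i, dvdi)
```

On real data one would additionally smooth the trace and restrict the search
window, but the core of the method is just this two-sided peak search.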
### XIX.1 Chemical potential used in simulation and corresponding gate
voltage in experimental measurements
In an attempt to establish additional correspondence between experiment and
theory, we study conductance as a function of chemical potential $\mu$ in the
simulation [Fig. S9(j)]. The normal-state resistance read from the skewed
pattern in Fig. 2(a) in the main text is 4 kOhm, which is close to two
transverse modes or four spin-full modes, so we choose $\mu$ = 8 meV to study
the skew shape in Fig. 4. When studying the direction-dependent supercurrent
transport as a function of field direction, we find a normal-state resistance
of about 1.5–2 kOhm, which corresponds to six to eight transverse modes, or
twelve to sixteen spin-full modes, and a chemical potential $\mu\approx 20\
\mathrm{meV}$ in the simulation.
The skew map presented in Fig. 5 in the main text is measured in Device B. By
comparing the normal-state conductance $G$ as a function of $V_{gate}$ in the
simulation and in the measured data [Fig. S9(j)], we conclude that the gate
range used in Fig. 5, -0.5 V to 2 V, corresponds to chemical potentials
$\mu=15$–$60\ \mathrm{meV}$. Another skew map, measured in Device C [Fig.
S18(c)], has a gate voltage range from -3 V to 2 V, which corresponds to
$\mu=30$–$60\ \mathrm{meV}$.
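The mode counting above uses the ballistic relation $R_{N}\approx h/(2e^{2}N)$
for $N$ spin-degenerate (transverse) modes; a minimal sketch, which ignores
any series contact resistance:

```python
# h/(2e^2) ~ 12.9 kOhm: resistance of a single spin-degenerate mode
PLANCK = 6.62607015e-34      # Planck constant (J s)
E_CHARGE = 1.602176634e-19   # elementary charge (C)
R_Q = PLANCK / (2 * E_CHARGE**2)

def transverse_modes(r_n_ohm):
    """Ballistic estimate of the number of spin-degenerate modes from the
    measured normal-state resistance (contact resistance neglected)."""
    return R_Q / r_n_ohm

# 1.5-2 kOhm corresponds to roughly six to eight transverse modes
n_low, n_high = transverse_modes(2000.0), transverse_modes(1500.0)
```

Each transverse mode carries $2e^{2}/h$ (two spin-full modes of $e^{2}/h$
each), which is why the spin-full count is twice the transverse one.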
### XIX.2 Hysteresis in the Josephson junction
Hysteresis in the current-voltage characteristics is studied by extracting
$\Delta I_{c}=|I_{c+}|-|I_{c-}|$ from devices A, B, and C and plotting it as a
function of gate voltage at zero magnetic field [Fig. S9(k)]. We find that the
magnitude of $I_{c-}$ is larger than that of $I_{c+}$ at more positive gate
voltages in Devices A and C. When extracting $\Delta I_{c}$ from the skewed
diffraction patterns [Figs. S3(b), S6(c)], it is also easy to see that the
critical current is larger at negative bias, which results in $\gamma$ as a
function of the $B_{x}$ field no longer being inversion symmetric about zero
field and zero current. In Device B, we did not see an obvious difference
between $I_{c+}$ and $I_{c-}$.
Two methods are used in the current-bias measurements: 1\. Unidirectional
current sweeps, either from positive to negative or from negative to positive
bias. This method is used in Devices A and C and results in hysteresis in the
current bias.
2\. Sweeps from zero bias. At a fixed gate voltage or field, we scan from zero
current bias to the positive and negative sides separately (0 to $I_{+}$ and
0 to $I_{-}$), so that one data set records only scans with $I>0$ and the
other only $I<0$. We then combine the two data sets into a full scan. In this
way we eliminate the hysteresis, because only switching currents are measured.
This method is used in Device B and the results are plotted in [Figs. 4, 6].
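Method 2 amounts to stitching the two half-sweeps together; a minimal sketch
(the array names are ours):

```python
import numpy as np

def combine_sweeps(i_pos, v_pos, i_neg, v_neg):
    """Merge a 0 -> I+ sweep and a 0 -> I- sweep into one monotonic trace,
    so that only switching (not retrapping) currents enter the data set."""
    # i_neg runs from 0 towards negative bias; reverse it and drop the
    # duplicated zero-bias point of the positive sweep
    i = np.concatenate([i_neg[::-1], i_pos[1:]])
    v = np.concatenate([v_neg[::-1], v_pos[1:]])
    return i, v

i, v = combine_sweeps(np.array([0., 1., 2.]), np.array([0., 1., 4.]),
                      np.array([0., -1., -2.]), np.array([0., -1., -4.]))
```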
In the main text, the skewed diffraction pattern from Device A is studied at
a smaller gate voltage, $V_{gate}=-2\ \mathrm{V}$ [Fig. 3]. At this gate
voltage, hysteresis in the junction is not observed.
### XIX.3 Hysteresis in the superconducting magnet
Figure S10: (a) Skewed critical current diffraction patterns; the field is
scanned from positive to negative and from negative to positive in adjacent
panels. Device A is used, with the gate voltage set to $V_{gate}$ = -1 V. (b)
Extracted critical current difference $\Delta I_{c}$ plotted as a function of
the $\hat{x}$-field for the two scans in (a).
## XX Model used in simulation
To simulate the superconductor-nanowire-superconductor Josephson junction in
the presence of an external magnetic field, we consider the following
Hamiltonian for a nanowire covered by a superconducting lead at each end.
$\displaystyle H=\left(\frac{\mathbf{p}^{2}}{2m^{*}}-\mu+\delta
U\right)\tau_{z}+\alpha(p_{z}\sigma_{x}-p_{x}\sigma_{z})\tau_{z}+g\mu_{B}\mathbf{B}\cdot\hat{\sigma}+\Delta\tau_{x},$
(S1)
where $\tau_{i}$ and $\sigma_{i}$ are Pauli matrices acting on particle-hole
and spin space, respectively. $\mathbf{p}=-i\hbar\nabla+e\mathbf{A}\tau_{z}$
is the canonical momentum, and the vector potential $\mathbf{A}$ is chosen to
be $[0,B_{z}x-B_{x}z,-B_{y}x]$, so that it is invariant along the
$y$-direction. Further, $m^{*}$ is the effective mass, $\mu$ is the chemical
potential, and $\delta U$ represents the onsite disorder inside the nanowire.
The Zeeman effect is given by $g\mu_{B}\mathbf{B}\cdot\hat{\sigma}$ and the
Rashba spin-orbit coupling by $\alpha(p_{z}\sigma_{x}-p_{x}\sigma_{z})$.
Finally, $\Delta$ is the superconducting pairing potential.
We first construct a tight-binding model based on the Hamiltonian (S1); the
critical current under different parameter configurations can then be obtained
from the imaginary part of the Green's function. These Green's functions are
computed using the KWANT package. We explain the code in detail below.
The first step is to construct a system with the scattering region and leads.
Here we use the function kwant.continuum.discretize to convert the 3D
translationally symmetric Hamiltonian (S2) into a tight-binding system (Figure
S11).
$\displaystyle
H=\left(\frac{\hbar^{2}(p_{x}^{2}+p_{y}^{2}+p_{z}^{2})}{2m^{*}}-\mu+\delta
U\right)\tau_{z}+\alpha(p_{z}\sigma_{x}-p_{x}\sigma_{z})\tau_{z}+g\mu_{B}\mathbf{B}\cdot\hat{\sigma}+\Delta\tau_{x},$
(S2)
Here the Hamiltonian does not contain the orbital effect, because
kwant.continuum.discretize cannot handle systems with lower symmetry. To
include the orbital effect, we apply the Peierls substitution to the hopping
terms. The hopping between two sites $\vec{x}$ and $\vec{x}_{0}$ becomes
$t\to te^{i\phi}$, where
$\phi=-e\mathbf{A}\cdot(\vec{x}-\vec{x}_{0})/\hbar.$
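A sketch of this substitution in the gauge above, evaluating $\mathbf{A}$ at
the bond midpoint (a common discretization choice; the function names are
ours). A quick consistency check is that the phases around a plaquette sum to
$-e\Phi/\hbar$, with $\Phi$ the enclosed flux:

```python
import numpy as np

def peierls_phase(x, x0, B, e=1.0, hbar=1.0):
    """phi = -e A(midpoint) . (x - x0) / hbar for the hopping x0 -> x,
    with the gauge A = [0, Bz*x - Bx*z, -By*x] used in the text."""
    x, x0 = np.asarray(x, float), np.asarray(x0, float)
    xm = 0.5 * (x + x0)
    Bx, By, Bz = B
    A = np.array([0.0, Bz * xm[0] - Bx * xm[2], -By * xm[0]])
    return -e * np.dot(A, x - x0) / hbar

def peierls_hopping(t, x, x0, B):
    """Hopping with Peierls substitution, t -> t * exp(i phi)."""
    return t * np.exp(1j * peierls_phase(x, x0, B))

# Check: phases around a unit plaquette in the x-y plane enclose flux Bz*a^2
a, Bz = 1.0, 0.3
corners = [(0, 0, 0), (a, 0, 0), (a, a, 0), (0, a, 0)]
total = sum(peierls_phase(corners[(k + 1) % 4], corners[k], (0, 0, Bz))
            for k in range(4))
```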
Figure S11: Tight-binding model generated by the function
kwant.continuum.discretize. Red dots represent the infinite leads.
In order to calculate the critical current, besides the normal and
superconducting leads (red regions in Fig. S11), we need to add a virtual
self-energy lead to the system. Here we attach the lead in the middle of the
nanowire (yellow region in Fig. S12). Note that this self-energy lead is not
connected to external devices and is only used to calculate the Green's
function. In general, the Green's function of an infinite system contains
infinitely many entries, but we can divide the nanowire into two parts: the
self-energy lead (L) and the rest of the system (R). Then we have
$\displaystyle\begin{pmatrix}G_{L}^{r}&G_{LR}^{r}\\\
G_{RL}^{r}&G_{R}^{r}\end{pmatrix}=\begin{pmatrix}E+i\eta-H_{Lead}&H_{C}\\\
H_{C}^{\dagger}&E+i\eta-H_{R}\end{pmatrix}^{-1},$ (S3)
where $H_{C}$ and $H_{C}^{\dagger}$ are the hoppings between the lead and the
rest of the nanowire. Solving this equation gives
$\displaystyle
G_{L}^{r}=\left(E+i\eta-H_{Lead}-H_{C}(E+i\eta-H_{R})^{-1}H_{C}^{\dagger}\right)^{-1}.$
(S4)
Thus the Green’s function of the finite self energy lead contains the
information about the whole system.
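Equation (S4) is just the Schur complement of the block matrix in (S3); a
quick numerical check with random Hermitian blocks (the block sizes are
arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def herm(n):
    """Random Hermitian matrix of size n x n."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return m + m.conj().T

H_L, H_R = herm(2), herm(3)                      # lead block and rest block
H_C = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
E, eta = 0.3, 1e-2
z = E + 1j * eta

# Top-left block of the full inverse, as in (S3)
M = np.block([[z * np.eye(2) - H_L, H_C],
              [H_C.conj().T, z * np.eye(3) - H_R]])
G_L_full = np.linalg.inv(M)[:2, :2]

# Schur-complement expression, as in (S4)
G_L_schur = np.linalg.inv(
    z * np.eye(2) - H_L
    - H_C @ np.linalg.inv(z * np.eye(3) - H_R) @ H_C.conj().T)
```

The small imaginary part $\eta$ guarantees that both inverses exist, since
$H_{R}$ has real eigenvalues.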
In the KWANT package, the retarded Green's function of the self-energy lead
can be obtained using the function kwant.solvers.greens_function. We first
calculate the Green's function $G^{r}_{L}(0)$ without a phase difference
between the two superconducting leads. The Green's function with a phase
difference $\varphi$ is then obtained by modifying the Hamiltonian $H_{Lead}$
in equation (S4): we change the hopping term $t$ to $te^{i\varphi}$, where $t$
is the hopping from the left side of the self-energy lead to the right side.
The critical current at finite temperature $T$ can be calculated from the
imaginary-frequency Green's function. Consider the self-energy lead as a
subsystem; the current equals the rate of change of the electron number on the
left side of the lead:
$\displaystyle I=ie\left\langle\sum_{i\in
L}\frac{\mathrm{d}n_{i}}{\mathrm{d}\tau}\right\rangle=\frac{ie}{\hbar}\left\langle\sum_{i\in
L}[c_{i}^{\dagger}(\tau)c_{i}(\tau),H_{Lead}]\right\rangle.$ (S5)
Here we consider imaginary-time evolution, and $i$ runs over the sites on the
left side of the self-energy lead. For any diagonal term
$c^{\dagger}_{j}c_{j}$ in $H_{Lead}$, we have
$\displaystyle[c_{i}^{\dagger}c_{i},c^{\dagger}_{j}c_{j}]=0.$ (S6)
For $j,k\neq i$, we have
$\displaystyle[c_{i}^{\dagger}c_{i},c_{j}c_{k}]=[c_{i}^{\dagger}c_{i},c^{\dagger}_{j}c_{k}]=[c_{i}^{\dagger}c_{i},c_{j}c_{k}^{\dagger}]=0.$
(S7)
Therefore, to obtain a non-zero commutator, we need at least one operator
$c_{j}$ or $c_{j}^{\dagger}$ with $j\in L$. Suppose $j,k\in L$; then we have
$\displaystyle[c_{j}^{\dagger}c_{j},c_{j}^{\dagger}c_{k}]=c_{j}^{\dagger}c_{j}c_{j}^{\dagger}c_{k}-c_{j}^{\dagger}c_{k}c_{j}^{\dagger}c_{j}=c_{j}^{\dagger}c_{k}-0=c_{j}^{\dagger}c_{k}$
(S8)
$\displaystyle[c_{k}^{\dagger}c_{k},c_{j}^{\dagger}c_{k}]=c_{k}^{\dagger}c_{k}c_{j}^{\dagger}c_{k}-c_{j}^{\dagger}c_{k}c_{k}^{\dagger}c_{k}=0-c_{j}^{\dagger}c_{k}=-c_{j}^{\dagger}c_{k}$
(S9)
These two terms cancel each other; thus only hopping between the left and
right sides of the lead contributes to the current. Equation (S5) then
simplifies to
$\displaystyle I$ $\displaystyle=\frac{ie}{\hbar}\left\langle\sum_{i\in L,j\in
R}[c_{i}^{\dagger}(\tau)c_{i}(\tau),t_{ji}c_{i}^{\dagger}(\tau)c_{j}(\tau)+t_{ij}c_{j}^{\dagger}(\tau)c_{i}(\tau)]\right\rangle$
(S10) $\displaystyle=\frac{ie}{\hbar}\sum_{i\in L,j\in R}(t_{ji}\langle
c_{i}^{\dagger}(\tau)c_{j}(\tau)\rangle-t_{ij}\langle
c_{j}^{\dagger}(\tau)c_{i}(\tau)\rangle)$ (S11)
Using the definition $G(\tau,\tau^{\prime})_{ij}=\langle
c_{i}^{\dagger}(\tau)c_{j}(\tau^{\prime})\rangle$ for $\tau>\tau^{\prime}$, we
have
$\displaystyle I=\frac{ie}{\hbar}\sum_{i\in L,j\in
R}(t_{ji}G(\tau,\tau^{\prime})_{ij}-t_{ij}G(\tau,\tau^{\prime})_{ji}).$ (S12)
Applying the inverse Fourier transform to the right-hand side of this equation
and taking the limit $\tau-\tau^{\prime}\to 0^{+}$, we get
$\displaystyle I$ $\displaystyle=\frac{ie}{\hbar}\sum_{i\in L,j\in
R}\sum_{n\in\mathbb{Z}}k_{B}T\,(t_{ji}e^{i\omega_{n}(\tau-\tau^{\prime})}G(i\omega_{n})_{ij}-t_{ij}e^{i\omega_{n}(\tau-\tau^{\prime})}G(i\omega_{n})_{ji})$
(S13) $\displaystyle=\frac{iek_{B}T}{\hbar}\sum_{n\in\mathbb{Z}}\sum_{i\in
L,j\in R}(t_{ji}G(i\omega_{n})_{ij}-t_{ij}G(i\omega_{n})_{ji})$ (S14)
$\displaystyle=\frac{-4ek_{B}T}{\hbar}\sum_{n\in\mathbb{N}}\mathrm{Im}\{\mathrm{Tr}\left(T_{RL}G(i\omega_{n})_{LR}-T_{LR}G(i\omega_{n})_{RL}\right)\},$
(S15)
where $T_{LR}$ and $T_{RL}$ are the hopping matrices from left to right and
from right to left, respectively, and $\omega_{n}=(2n+1)\pi k_{B}T$ is the
$n$-th fermionic Matsubara frequency. The factor $4$ on the last line comes
from the positive-negative symmetry when summing over all integers $n$ and
from the particle-hole symmetry of the system.
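The structure of the Matsubara sum can be illustrated on a cheap toy example
of our own (a normal-state three-site ring threaded by a Peierls phase
$\phi$, not the nanowire model): the bond current obtained from
$k_{B}T\sum_{n}G(i\omega_{n})$ vanishes at $\phi=0$ and is odd in $\phi$, as
it must be.

```python
import numpy as np

def bond_current(phi, t=1.0, kT=0.05, n_max=2000):
    """Current through the 0-1 bond of a 3-site ring with Peierls phase phi,
    from a symmetric Matsubara sum of the Green's function (e = hbar = 1)."""
    H = np.array([[0, t * np.exp(1j * phi), t],
                  [t * np.exp(-1j * phi), 0, t],
                  [t, t, 0]], dtype=complex)
    # <c_1^dag c_0> = kT * sum_n [ (i w_n - H)^{-1} ]_{01}; the off-diagonal
    # element makes the symmetric sum over n absolutely convergent.
    rho10 = 0.0 + 0.0j
    for n in range(-n_max, n_max):
        wn = (2 * n + 1) * np.pi * kT
        rho10 += kT * np.linalg.inv(1j * wn * np.eye(3) - H)[0, 1]
    # bond current ~ -2 Im( H_{10} <c_1^dag c_0> )
    return -2.0 * np.imag(H[1, 0] * rho10)

i_zero = bond_current(0.0)
i_plus, i_minus = bond_current(0.5), bond_current(-0.5)
```

In the actual calculation the same sum runs over the self-energy-lead blocks
of a much larger tight-binding matrix, and the critical current is the
maximum of the resulting $I(\varphi)$.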
Figure S12: Cross section of the nanowire. Here we add a two-layer self-energy
lead in the middle of the wire, where $t$ is the hopping term from the left
side of the self-energy lead to the right side.
## XXI Parameters used in the simulations
In this section, we discuss the parameter values used in the simulation that
give the best match to the experimental data.
### XXI.1 Temperature
In the measurements, the lattice temperature can only be estimated from the
sensor on the dilution refrigerator mixing chamber plate, and it varies
between 50 and 60 $\mathrm{mK}$ while scanning the external field. However,
the electron temperature is the relevant parameter for the simulation. In our
setup, electrons are cooled from room temperature to base temperature by
several stages of filters, in particular the copper powder filter;
nevertheless, the electron temperature is usually somewhat higher than the
device temperature. We therefore simulate the skewed shape at different
temperatures (Fig. S13) and choose 100 $\mathrm{mK}$ for all simulation
results presented in the main text.
Figure S13: (a) Skewed diffraction patterns from simulation at temperatures T
= 50 $\mathrm{mK}$, 100 $\mathrm{mK}$, 150 $\mathrm{mK}$ and 250
$\mathrm{mK}$, respectively. (b) Coefficient $\gamma$ as a function of
$B_{x}$ at different temperatures.
From the simulation results, we also find that the maximum and minimum values
of the coefficient $\gamma$ become smaller at higher temperature. This is
consistent with our temperature-dependence measurements [Fig. S17].
### XXI.2 Strength of Spin-orbit interaction
The strength of the spin-orbit coupling is estimated by simulating the
critical current diffraction pattern and choosing the value that best
reproduces the experimental results. We set the chemical potential to $\mu$ =
8 meV, which corresponds to two transverse or four spin-full modes, the same
value as used in Fig. 5 in the main text. The skewed shape we observe
experimentally has two features that we aim to reproduce: 1) the largest
critical current $I_{c}$ is not located at zero field; 2) the largest critical
current difference is around $B_{x}=\pm 50\ \mathrm{mT}$. Based on the skewed
shapes simulated with different strengths of spin-orbit coupling, we find the
skew is best reproduced with $\alpha=200\ nm\cdot meV$, so we choose this
value for all simulation results in the main text. Note that the true strength
of the spin-orbit interaction in the nanowires may differ from the parameter
used in a tight-binding simulation.
Figure S14: (a) Critical current as a function of $\hat{x}$-field for
spin-orbit strengths $\alpha=0,60,120,180,240,300\ nm\cdot meV$ at chemical
potential $\mu=8\ \mathrm{meV}$. (b) Coefficient $\gamma$ extracted from (a)
and plotted as a function of $B_{x}$ for the different $\alpha$. (c) At the
same chemical potential, coefficient $\gamma$ versus perpendicular field
$B_{x}$ and spin-orbit strength $\alpha$, plotted as a 2D map to study how
$\alpha$ affects the skew shape.
### XXI.3 Skew shape and field rotation simulation at another chemical
potential
Figure S15: Numerical results from the KWANT simulation with the same
parameters as in Fig. 5 but with a different chemical potential, $\mu=20\
\mathrm{meV}$, corresponding to four transverse or eight spin-full modes. (a)
Critical current through each polarization as a function of magnetic field
$B_{x}$. (b) Coefficient $\gamma$ as a function of $B_{x}$; the magnitude of
$I_{c}$ is derived from (a). (c) Coefficient $\gamma$ as a function of angle
$\theta$ when the external field is rotated in three orthogonal planes with
fixed strength $|B|=50\ \mathrm{mT}$.
In the main text we present the skewed diffraction pattern from Device A and
the rotation from Device B (field rotation was not performed for Device A).
Thus simulations at two separate chemical potentials should be considered in
preparing this report. To maintain consistency, we choose $\mu=8\
\mathrm{meV}$ for the simulation in the main text, Fig. 4, and show the
simulation results at $\mu=20\ \mathrm{meV}$ here.
## XXII Current phase relation derived from simulation results
Figure S16: (a) CPR when the external field is along three directions relative
to the device, at field strengths of 0 T, 0.05 T, and 0.1 T. (b) Amplitudes of
the sine and cosine terms derived from a Fourier expansion of the CPR when the
external field is along the $\hat{x}$-direction. The combined sine term is
plotted as a function of the $B_{x}$ field and the order of the harmonic. (c)
Ground-state phase $\phi_{n0}$ of the first and second combined sine harmonics
as a function of external field $B_{x}$. A constant $\pi$ was subtracted from
all second-order harmonics; this has no effect, as the second-order harmonic
has a period of $\pi$.
The current phase relation (CPR) is studied within the model and its
parameters suggested by comparison with experiment. We plot the CPR curves for
external field strengths of 0 T, 0.05 T, and 0.1 T along the $B_{x}$, $B_{y}$
and $B_{z}$ directions [Fig. S16]. The parameters used in this simulation are
the same as those in Fig. 4 in the main text. In Fig. S16(a), we find that
only when the external field is along the $\hat{x}$-direction is there a shift
of the ground-state phase in the CPR. The numerical CPR curves are similar to
those postulated in the phenomenological model (Eq. 1 and Fig. 1).
To study how time-reversal symmetry is broken when the external field is along
the $\hat{x}$-direction, we first perform a Fourier expansion of the simulated
CPR [Fig. S16(b)]. We find a significant second-order sine term in the
simulation. More interestingly, the first- and second-order cosine terms are
also large compared to the sine terms; they first increase when the field is
applied and then decrease at higher field. This explains why the ground-state
phase shifts by such a large value. Harmonic functions can be combined into
pure sine terms as follows:
$\displaystyle\sum_{i=1}A_{i}\sin(i\phi)+\sum_{i=0}B_{i}\cos(i\phi)=\sum_{i=1}A^{\prime}_{i}\sin(i\phi+\phi_{i0})+\mathrm{const.}$
(S16)
We find the constant contributes less than $2\%$ of the combined function in
all cases, so we drop it. The amplitudes of the first- and second-order
combined sine terms have a ratio of about 4:1 when the field strength is near
zero; this ratio is used in the minimal model presented in the main text.
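The combination in Eq. (S16) is elementary: for each harmonic,
$A\sin(i\phi)+B\cos(i\phi)=A^{\prime}\sin(i\phi+\phi_{i0})$ with
$A^{\prime}=\sqrt{A^{2}+B^{2}}$ and $\phi_{i0}=\operatorname{atan2}(B,A)$. A
quick numerical check:

```python
import numpy as np

def combine_harmonic(A, B):
    """A sin(i phi) + B cos(i phi) = A' sin(i phi + phi_i0)."""
    return np.hypot(A, B), np.arctan2(B, A)

A, B, order = 3.0, 4.0, 2
amp, phase = combine_harmonic(A, B)
phi = np.linspace(0, 2 * np.pi, 100)
lhs = A * np.sin(order * phi) + B * np.cos(order * phi)
rhs = amp * np.sin(order * phi + phase)
```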
Another interesting result concerns the ground-state phases $\phi_{i0}$ of the
first and second harmonics: we find they increase approximately linearly with
the $B_{x}$ field within the range 0–100 mT. This was mentioned in [6], but
here we provide more details. Based on the simulation, we get
$\phi_{20}>2\phi_{10}$, and the grey shaded region indicates that
$\delta_{12}$ increases with field. Hence we can confirm there is a
$\delta\phi$ term in the CPR from the simulation. How this $\delta\phi$ is
related to the strength of the spin-orbit interaction is worth further
discussion in future work.
## XXIII Temperature dependence of skewed diffraction patterns
Figure S17: Temperature dependence in Device A. (a) For $V_{gate}=-2\
\mathrm{V}$ and external field $B_{x}=50\ \mathrm{mT}$, dV/dI differential
resistance is plotted as a function of current bias I and temperature T. (b)
Coefficient $\gamma$ as a function of temperature T. (c) Diffraction patterns
taken at T = 1.1 K; $\gamma$ is extracted from the 2D scan and plotted as a
function of $B_{x}$.
## XXIV $\gamma$ trace in different fields
Figure S18: Coefficient $\gamma$ as a function of $V_{gate}$ in different
external fields $B$. (a) External field with fixed strength $|B|=50\
\mathrm{mT}$ along the $\hat{x}$, $\hat{y}$, and $\hat{z}$ axes. (b) External
field with fixed strength $|B|=100\ \mathrm{mT}$ along the $\hat{x}$ and
$\hat{z}$ axes. (c) The 2D $B_{x}$ versus $V_{gate}$ map of the coefficient
$\gamma$ for Device C. The gate voltage range corresponds to the chemical
potential $\mu=6$–$30\ \mathrm{meV}$.
# An assessment of the Association Between a Fast Radio Burst and Binary
Neutron Star Merger
Alexandra Moroianu∗1,2 Linqing Wen∗1,2 Clancy W. James3 Shunke Ai4,5 Manoj
Kovalam1,2 Fiona Panther1,2 Bing Zhang4,5
###### Abstract
Fast radio bursts (FRBs) are mysterious bright millisecond-duration radio
bursts at cosmological distances[1, 2]. While young magnetars have been put
forward as the leading source candidate[3, 4, 5, 6, 7], recent observations
suggest there may be multiple FRB progenitor classes[2, 8]. It has long been
theorised that FRBs could be emitted from compact object mergers[9] —
cataclysmic events such as binary neutron star (BNS) mergers that may be
detectable in gravitational waves (GWs) by the ground-based Laser
Interferometer Gravitational Wave Observatory (LIGO)[10] and Virgo[11]. Here
we report a potential coincidence between the only BNS merger event
GW190425[12] out of 21 GW sources detected during the first six months of
LIGO-Virgo’s 3rd Science Run and a bright, non-repeating FRB event, FRB
20190425A[2], from a search using public GW and CHIME FRB data. The FRB is
located within the GW’s sky localization area, occurred 2.5 hours after the GW
event, and has a dispersion measure consistent with the distance inferred from
GW parameter estimation[13]. The chance probability of a coincidence between
unrelated FRB and GW events in the databases is estimated to be $0.0052$
($2.8\,\sigma$). We estimate the chance of CHIME detecting such an event to
range from 0.4% for a beam-centre detection to 68% if a bright burst is
detectable in a far sidelobe. This potential association is consistent with
the theory[14] that the BNS merger leaves behind a supramassive, highly
magnetized compact object, which collapses to form a black hole after losing
angular momentum due to spindown and makes an FRB through ejecting the
magnetosphere[15]. If such a physical association is established, the equation
of state of the post-merger compact object is likely stiff, with a Tolman-
Oppenheimer-Volkoff non-spinning maximum mass[16] $M_{\rm
TOV}>2.63_{-0.23}^{+0.39}M_{\odot}$ for a neutron star remnant, or $M_{\rm
TOV}>2.31_{-0.08}^{+0.24}M_{\odot}$ for a quark star remnant.
Australian Research Council Centre of Excellence for Gravitational Wave
Discovery (OzGrav)
Department of Physics, University of Western Australia, Crawley WA 6009,
Australia
International Centre for Radio Astronomy Research, Curtin University, Bentley,
WA 6102, Australia
Nevada Center for Astrophysics, University of Nevada, Las Vegas, NV 89154, USA
Department of Physics and Astronomy, University of Nevada, Las Vegas, NV
89154, USA
To date, more than 600 FRBs have been detected at radio frequencies between
$110\,\mathrm{MHz}$ and $8\,\mathrm{GHz}$[17, 18]. The high all-sky rate[2]
and event rate density[19, 20] of FRBs, combined with the fact that some FRB
sources emit repeated bursts[21, 2], suggest that the majority of FRBs are not
produced from cataclysmic channels such as compact object mergers. However, a
small sub-population of FRBs associated with cataclysmic events would be
difficult to detect in these analyses[2], and would be consistent with
cataclysmic event rates[19]. Furthermore, extensive follow-up studies have
failed to detect repeat bursts from some nearby FRBs[1, 22]. There exist several
theories that predict the association of an FRB with a GW event due to compact
binary coalescence between two neutron stars, a neutron star and a black hole,
or even two charged black holes[9]. In particular, binary neutron star (BNS)
mergers have long been theorised to emit FRB-like signals before[23, 24],
during[25] or after[14] the merger. With the publicly available GW catalogue
GWTC-2[13] and the newly released first FRB catalogue[2] from the Canadian
Hydrogen Intensity Mapping Experiment FRB project (CHIME/FRB), it is possible
to test these theories by searching for GW-FRB associations. We conduct a
search for GW-FRB coincidences using CHIME/FRB’s first FRB catalogue
containing 535 new FRB sources[2] (observed July 2018 - July 2019), 171 of
which overlap with the first half of LIGO’s $3^{\rm rd}$ Science Run (O3a:
April 1, 2019 - October 1, 2019)[13]. Our search time window is chosen to be
asymmetrical and 26 hours wide, encompassing FRBs that occur up to 2 hours
before a GW signal[25] and 24 hours after[14] (see Methods). A GW-FRB pair is
considered to be coincident in time if an FRB falls within the time window of
a GW signal. CHIME/FRB localization[2] is accurate on the order of arcminutes,
more precise than even the best GW localization of tens of square degrees[26].
Therefore, we consider a GW-FRB pair spatially coincident if the FRB lies
within the 90% credible interval[13] of a candidate GW’s localization (see
Methods).
We find an apparently non-repeating FRB 20190425A[2] temporally and spatially
coincident with a GW merger event: GW190425[12, 13]. GW190425 was observed on
April 25, 2019 08:18:05 UTC by LIGO Livingston (LIGO Hanford was offline) with
a false alarm rate of $\rm FAR=1/69,000\ \mathrm{yr^{-1}}$. There was no
significant detection by Virgo due to its lower sensitivity; this
non-detection nevertheless helps constrain the sky localization of the event.
GW190425 is a BNS merger event; however, the remnant mass of the system in the
source frame, $M_{\rm tot}^{\rm final}=3.23^{+0.33}_{-0.11}\ M_{\odot}$, is
significantly larger than that expected from known Galactic BNS
systems[13]. FRB 20190425A is a curiously
bright (see Methods) radio transient with fluence $31.6\pm 4.2\ \mathrm{Jy\
ms}$ (assuming a beam-centre detection) and burst width $(3.799\pm
0.002)\times 10^{-4}\ \mathrm{s}$. It shows broadband emission across CHIME's
400–800 MHz
bandwidth, and a single-peaked morphology, which is observed in 30% of the
CHIME FRB population, and is associated with non-repeating FRBs. In contrast,
repeating FRBs typically exhibit narrow-band structure[2]. It also has an
unusually low dispersion measure (DM) of $128.2\ \mathrm{pc\ cm^{-3}}$,
indicating an origin within $z<0.0394$, assuming DM originates only from the
Milky Way and the intergalactic medium[27]. It was observed 2.5 hours after
GW190425 at 10:46:33 UTC at a peak frequency of 591.8 MHz, and localized to
J2000 celestial coordinates RA = $255.72\pm 0.14^{\circ}$, DEC = $21.52\pm
0.18^{\circ}$. This places it in the 66.7% credible interval of GW190425’s
refined GWTC-2 skymap[13] (Figure 1). Parameter estimation[13] localized the
BNS merger event to redshift $z=0.03^{+0.01}_{-0.02}$. This is within the
upper limit ($z<0.0394$) specified by FRB 20190425A’s DM (see Methods), making
the two signals coincident within the error margins of their distances.
Figure 1: Temporal and spatial coincidence of GW190425 and FRB 20190425A. Top
left: LIGO Livingston signal to noise ratio (SNR) of GW190425 (see Methods).
Top right: flux (in Jy) of FRB 20190425A[2], occurring $2.5\ \rm hours$ after
the merger. Bottom: Sky direction of FRB 20190425A (cyan) and the localization
of GW190425[13] (90% contour indicated by black line). Figure 2: The expected
dispersion measure (DM) distribution inferred from GW190425, compared to non-
repeating CHIME FRBs. Blue solid line: the probability distribution of DM
after subtracting the contribution from the Milky Way’s interstellar medium
(DM-DMMWISM; see Methods). Orange dashed line: the observed value for FRB
20190425A, compared to the number of non-repeating CHIME FRBs ($N_{\rm FRB}$,
green histogram).
To quantify the likelihood of this association being entirely coincidental, we estimate the probability of chance coincidence of GW190425 and FRB 20190425A
assuming that FRB and GW detections are independent. Evidence against this
hypothesis comes from three sources: the spatial, temporal, and DM
coincidence. We exclude repeating FRBs in this analysis (see Methods). The
likelihood of a chance temporal coincidence between GW190425 and FRB 20190425A
($P_{\rm T}$) is taken to be the probability of CHIME randomly detecting an
FRB in a 2.5 hour stretch of time. We estimate a CHIME FRB detection rate of
1.93 per day around the time of GW190425, leading to a Poisson probability of
$P_{\rm T}=0.18$. We define $P_{\rm S}$ to be the probability that a random
CHIME FRB would be detected during this 2.5 hour interval with equal to or
higher likelihood in the GW190425 skymap than FRB 20190425A (i.e. within the
66.7% credible interval). Using the estimated declination dependence of
CHIME’s exposure[2], we estimate $P_{\rm S}=0.265$ (see Methods). If we
instead use our initial temporal (26 hr time window) and spatial (90% credible interval) selection criteria, we find $P_{\rm T}=0.88$ and $P_{\rm S}=0.15$ respectively. The probability of an FRB originating at redshift $z$
having a dispersion measure DM has been estimated using a sample of localized
FRBs[27]. We produce the expected DM probability distribution for the
published redshift posterior distribution[12] of GW190425. We find that FRB
20190425A lies in the 46% credible interval of GW190425’s DM probability
distribution. Only one of 473 other FRBs exhibits a DM with a more-likely
value (Figure 2). We thus find $P_{\rm DM}$, the chance that a random CHIME
FRB would have a DM with an equal or better match to expectations, to be
$2/474\approx 0.004$.
CHIME has shown that the temporal, spatial and DM distributions of FRBs are
independent[28], except for a potential correlation[29] between FRBs with DM
$\sim$ 800 pc cm-3 and large-scale structure at $z\sim 0.4$. We use $P_{\rm
T}P_{\rm S}P_{\rm DM}=0.18\cdot 0.265\cdot 0.004=1.9\cdot 10^{-4}$ as evidence
against the null hypothesis of a chance coincidence. The chance probability of
finding an FRB with smaller product $P_{\rm T}P_{\rm S}P_{\rm DM}$, i.e. the
p-value, is $0.0052$ (see Methods). Note that the close proximity of this FRB to GW190425 in time, sky direction and distance (as implied by its DM) makes this association more significant than the random chance of finding any FRB within the search windows used for initial discovery (13.5% probability).
Of further interest, GW190425-FRB 20190425A is the only GW-FRB pair that
survived our time-spatial coincidence criteria and it is linked to the only
BNS event out of the 21 GW sources detected. This encourages us to consider a
potential astrophysical association between GW190425 and FRB 20190425A.
We search for a host galaxy inside the reported error ellipse of FRB
20190425A’s central localization (see Methods). We identify one candidate in the NASA/IPAC Extragalactic Database (NED; funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology) within a redshift range $0.001<z<0.08$ consistent with the BNS merger redshift range: UGC 10667
(Figure 3). UGC 10667 lies at an offset of 5.05 arcminutes from the optimal FRB location and has a redshift of $z=0.03136\pm 0.00009$, consistent with both the BNS merger and the FRB DM. The expected number of
galaxies[30] within the search volume is $N_{\mathrm{gals}}\sim 0.173$ (see
Methods) assuming a galaxy density of $\rho_{\mathrm{gals}}=2.35\cdot
10^{-3}\,\mathrm{Mpc}^{-3}$ and a maximum luminosity distance of
$d_{\mathrm{L}}=255.85\,\mathrm{Mpc}$. Therefore, to investigate the
association, we encourage follow-up observations of UGC 10667 to search for evidence of the merger event, such as broadband afterglow emission from the ejecta, especially in the radio band at sub-GHz frequencies[31, 32].
Figure 3: Potential host association for GW190425-FRB 20190425A. We
investigate hosts within an error ellipse defined by the 1-$\sigma$
uncertainties in RA ($\pm 0.14^{\circ}$) and DEC ($\pm 0.18^{\circ}$) centered
on the optimal FRB location derived by CHIME[2] (red ellipse). Redshifts of
the identified extragalactic objects are listed in Methods. UGC 10667 (orange
cross) is the only host with a redshift consistent with the redshift derived
for the GW source. Background image: DSS2-red
(http://archive.eso.org/dss/dss).
Though we cannot definitively assign the potential GW-FRB association to a
single theory, it is consistent with the GW, short gamma-ray burst (sGRB) and
FRB association theory invoking the collapse of a post-BNS-merger
magnetar[14]. The FRB generation mechanism is the so-called “blitzar” mechanism[15], which is supported by numerical simulations[33].
Within this scenario, the 2.5 hour delay time between the FRB and the GW event
is the survival time of the supramassive neutron star before collapsing into a
black hole[14], which is consistent with the expected delay timescale range
for a supramassive magnetar both in theory and in observational data (see
Methods). The chance of CHIME detecting such a single burst within its
$1.3^{\circ}$–$2.5^{\circ}$ wide primary beam is 0.4–0.8%, increasing to 14%
when allowing for a detection in the far sidelobes (see Methods). The duration of the FRB is of the order of the light-crossing time of the ejected magnetosphere of the collapsing supramassive neutron star, which is comparable to its spin period. This is consistent with the $\sim$ millisecond duration of FRB
20190425A. As an initially rapidly rotating supramassive neutron star
collapses to a black hole after losing angular momentum, the total ejected
electromagnetic (EM) energy[14, 33] is $E_{\rm EM}\sim 1.7\cdot 10^{44}\ {\rm
erg}\ (B/10^{14}\,\mathrm{Gauss})^{2}(R_{\rm NS}/10\,{\rm km})^{3}$, where $B$
and $R_{\rm NS}$ correspond to the surface magnetic field and the radius of
the neutron star, respectively. We calculate the FRB’s total isotropic radio
emission energy as $E_{\mathrm{FRB,iso}}=3.72\cdot 10^{38}\,\mathrm{erg}$
using the measured fluence of FRB 20190425A ($3.2\cdot
10^{-28}\,\mathrm{J\,m^{-2}}\cdot 400\,\mathrm{MHz}$) and the most probable
luminosity distance ($156\,\mathrm{Mpc}$) of GW190425. The true energy of the
emitter should be $E_{\mathrm{FRB,iso}}f_{b}\eta_{r}^{-1}$, where $f_{b}\leq
1$ is the beaming factor and $\eta_{r}$ is the radio emission efficiency. For
the blitzar FRB model, the EM energy injection is essentially isotropic[15,
33], so $f_{b}\sim 1$. This implies $\eta_{r}\sim 2\cdot 10^{-6}$, similar to
the reported value for the Galactic FRB 200428 (ref. [4, 5, 6, 7]). BNS
mergers eject dense neutron-rich material that could prevent the escape of
coherent radio emission. For an FRB to be observed following the merger, the
line of sight needs to be cleared by a sGRB jet[14]. However, the sGRB does
not need to be bright enough to be detected by current gamma-ray telescopes.
For example, a sGRB with the brightness of GRB 170817A would be able to clear
the ejecta and escape detection at the distance of $156\,\mathrm{Mpc}$ (ref.
[34, 35]). We note that an excess of gamma-rays was detected 0.5–5.9 s after
GW190425 by the International Gamma-Ray Astrophysics Laboratory (INTEGRAL)
using the SPI-ACS subsystem. This was reported as a candidate sGRB of marginal
significance associated with GW190425 by two independent analyses[36, 37].
Both the localization of the sGRB reported by INTEGRAL and the non-detection by Fermi[37] are consistent with the FRB and GW association.
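The energy budget above can be cross-checked numerically. The sketch below recomputes the isotropic radio energy and implied efficiency directly from the fluence, bandwidth and distance quoted in the text; it is a back-of-envelope check, not the authors' pipeline.

```python
import math

# Inputs quoted in the text (a back-of-envelope sketch, not the authors' pipeline).
fluence = 3.2e-28        # J m^-2 Hz^-1 (31.6 Jy ms over CHIME's band)
bandwidth = 400e6        # Hz, CHIME's 400-800 MHz bandwidth
d_L = 156 * 3.086e22     # m, most probable luminosity distance of GW190425

# Isotropic-equivalent radio energy: E = 4 pi d_L^2 * fluence * bandwidth
E_frb_iso_erg = 4 * math.pi * d_L**2 * fluence * bandwidth * 1e7  # J -> erg

# Blitzar EM budget for the fiducial B = 1e14 G, R_NS = 10 km quoted above
E_em_erg = 1.7e44

# Radio efficiency for isotropic injection (f_b ~ 1)
eta_r = E_frb_iso_erg / E_em_erg
print(f"E_FRB,iso ~ {E_frb_iso_erg:.2e} erg, eta_r ~ {eta_r:.1e}")
```

This recovers $E_{\mathrm{FRB,iso}}\approx 3.7\cdot 10^{38}$ erg and $\eta_{r}\approx 2\cdot 10^{-6}$, consistent with the quoted values to rounding.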
If confirmed, the astrophysical association between FRB 20190425A and GW190425
can constrain the poorly known equation of state (EoS) of the post-merger
compact object. GW parameter estimation reveals that the gravitational masses
of the two merger components in the source frame are
$M_{1}=2.03^{+0.58}_{-0.34}\ M_{\odot}$ and $M_{2}=1.35^{+0.26}_{-0.26}\
M_{\odot}$[13], with the final total mass of the merger product $M_{\rm
tot}^{\rm final}=3.23^{+0.33}_{-0.11}\ M_{\odot}$
(https://dcc.ligo.org/LIGO-P2000223/public). Assuming a uniformly rotating
neutron star for the merger remnant, we derive the final gravitational mass
$M_{\rm rem}=3.16^{+0.40}_{-0.24}\ M_{\odot}$ (see Methods), consistent with
the final total mass derived from the GW data. There is a universal maximum
mass ($M_{\rm TOV}$) that a non-rotating neutron or quark star can support
against gravitational collapse into a black hole[38, 39]. This depends on the
uncertain neutron star EoS, and is poorly constrained by data[40, 41, 42]. A
supramassive neutron star remnant can support a higher mass with an
enhancement factor up to $M_{\rm rem}/M_{\rm TOV}\sim 1.2$ for uniform
rotation[43, 38, 39]. If GW190425 produced a spin-supported supramassive
neutron star remnant, this constrains $M_{\rm TOV}>2.63^{+0.39}_{-0.23}\
M_{\odot}$. In addition, requiring that the remnant subsequently collapses
places an upper limit of $M_{\rm TOV}<M_{\rm
rem}/1.046=3.02^{+0.42}_{-0.25}\,\mathrm{M_{\odot}}$ (see Methods). Our
constraints on $M_{\rm TOV}$ are at the high end of, but still consistent
with, constraints derived from observations of GW170817 and X-ray emission of
Galactic pulsars (Figure 4, upper panel) [41, 40]. Given the high-mass, high-
pressure environment of a violent merger, quark deconfinement might occur for
the merger remnant[44, 45], making the enhancement factor as large as 1.4
(ref. [46]). Assuming $M_{\mathrm{rem}}=M_{\rm tot}^{\rm final}$, we find
$2.31_{-0.08}^{+0.24}\ M_{\odot}<M_{\rm TOV}<3.23^{+0.33}_{-0.11}\ M_{\odot}$,
consistent with existing constraints for quark star models[42] (Figure 4,
lower panel).
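The central $M_{\rm TOV}$ bounds follow from dividing the derived remnant mass by the maximum and minimum rotational enhancement factors. A minimal sketch using the central values from the text (error propagation through the full posterior, as done in the paper, is omitted here):

```python
# Central values from the text; error ranges are handled via the full
# posterior in the paper and are omitted in this sketch.
M_rem = 3.16            # M_sun, derived remnant gravitational mass

# Supramassive NS: uniform rotation supports at most ~1.2 * M_TOV,
# and the eventual collapse requires at least the minimum enhancement ~1.046.
M_tov_lower = M_rem / 1.2     # remnant was supramassive
M_tov_upper = M_rem / 1.046   # remnant eventually collapsed
print(f"{M_tov_lower:.2f} M_sun < M_TOV < {M_tov_upper:.2f} M_sun")
```

This recovers the quoted central bounds of $2.63\ M_{\odot}$ and $3.02\ M_{\odot}$.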
Figure 4: Constraints on the Tolman-Oppenheimer-Volkoff non-spinning maximum
mass ($M_{\rm TOV}$) for neutron stars (upper panel) and quark stars (lower
panel). Given the derived remnant mass $M_{\rm rem}$ of GW190425, the required
$M_{\rm rem}/M_{\rm TOV}$ enhancement factor as a function of $M_{\rm TOV}$ is
drawn as the black solid curve with the two associated dashed curves denoting
the error range. The maximum enhancement factor can be up to $\sim 1.2$ and
$\sim 1.4$ for a uniformly rotating neutron star[43, 38, 39] and quark
star[46], respectively. These are shown as the blue solid band in both panels
(the width of the band accommodates the possible range of different EoSs). The
minimum enhancement factor for a uniformly rotating star is $\sim 1.05$ for
neutron stars[39], and we assume $1$ for quark stars as the most conservative
limit, which are denoted as red bands in both panels. The allowed range of
$M_{\rm TOV}$ given a GW190425/FRB 20190425A association is derived from the
intersections of the colored bands and slanted curves; this is shown in dark
(light) gray shading when using the central value (including error ranges) of
$M_{\rm rem}$. Published constraints on the distribution of $M_{\rm TOV}$ are
presented as colored dotted lines[40, 42, 41] for comparison, with the
relevant probability density scale marked on the right axis.
Our constraints on $M_{\rm TOV}$ indicate that mergers with lower component
masses such as GW170817 (and its association with GRB 170817A)[47] would
produce a much longer-lived supramassive or stable neutron star. The late-time
X-ray rebrightening of the GW170817 remnant is consistent with the existence
of such a long-lived remnant [48, 49]. These mergers may produce FRBs that
repeat over a period of time much longer than 2.5 hours if the magnetar-FRB
mechanism applies. This then implies that repeating FRBs could be produced
from old stellar populations, such as repeating FRB 20200120E in a globular
cluster in M81[50, 8]. Since the rate density of BNS mergers is much smaller
than the inferred rate density of FRBs, this channel alone cannot account for
the FRB 20190425A-like FRBs in the CHIME sample. We encourage wide-field radio
observations concurrent with future GW observing runs in order to further test
our proposed GW–FRB association, and the use of FRB localization data to identify potential host galaxies to guide searches for a multiwavelength counterpart.
## References
* [1] Lorimer, D. R., Bailes, M., McLaughlin, M. A., Narkevic, D. J. & Crawford, F. A Bright Millisecond Radio Burst of Extragalactic Origin. _Science_ 318, 777 (2007).
* [2] Amiri, M. _et al._ The First CHIME/FRB Fast Radio Burst Catalog. _ApJS_ 257, 59 (2021).
* [3] Michilli, D. _et al._ An extreme magneto-ionic environment associated with the fast radio burst source FRB 121102. _Nature_ 553, 182–185 (2018).
* [4] CHIME/FRB Collaboration. A bright millisecond-duration radio burst from a Galactic magnetar. _Nature_ 587, 54–58 (2020). URL http://dx.doi.org/10.1038/s41586-020-2863-y.
* [5] Bochenek, C. D. _et al._ A fast radio burst associated with a Galactic magnetar. _Nature_ 587, 59–62 (2020).
* [6] Li, C. K. _et al._ HXMT identification of a non-thermal X-ray burst from SGR J1935+2154 and with FRB 200428. _Nature Astronomy_ (2021).
* [7] Mereghetti, S. _et al._ INTEGRAL Discovery of a Burst with Associated Radio Emission from the Magnetar SGR 1935+2154. _ApJ_ 898, L29 (2020).
* [8] Kirsten, F. _et al._ A repeating fast radio burst source in a globular cluster. _Nature_ 602, 585–589 (2022).
* [9] Platts, E. _et al._ A Living Theory Catalogue for Fast Radio Bursts. _Phys. Rept._ 821, 1–27 (2019).
* [10] Aasi, J. _et al._ Advanced ligo. _Classical and Quantum Gravity_ 32, 074001 (2015). URL http://dx.doi.org/10.1088/0264-9381/32/7/074001.
* [11] Acernese, F. _et al._ Advanced virgo: a second-generation interferometric gravitational wave detector. _Classical and Quantum Gravity_ 32, 024001 (2014). URL http://dx.doi.org/10.1088/0264-9381/32/2/024001.
* [12] Abbott, B. P. _et al._ Gw190425: Observation of a compact binary coalescence with total mass $\sim$ 3.4 m⊙. _The Astrophysical Journal_ 892, L3 (2020). URL http://dx.doi.org/10.3847/2041-8213/ab75f5.
* [13] Abbott, R. _et al._ Gwtc-2: Compact binary coalescences observed by ligo and virgo during the first half of the third observing run. _Physical Review X_ 11 (2021). URL http://dx.doi.org/10.1103/PhysRevX.11.021053.
* [14] Zhang, B. A possible connection between fast radio bursts and gamma-ray bursts. _The Astrophysical Journal_ 780, L21 (2013). URL http://dx.doi.org/10.1088/2041-8205/780/2/L21.
* [15] Falcke, H. & Rezzolla, L. Fast radio bursts: the last sign of supramassive neutron stars. _A &A_ 562, A137 (2014).
* [16] Oppenheimer, J. R. & Volkoff, G. M. On massive neutron cores. _Phys. Rev._ 55, 374–381 (1939). URL https://link.aps.org/doi/10.1103/PhysRev.55.374.
* [17] Pleunis, Z. _et al._ Lofar detection of 110–188 mhz emission and frequency-dependent activity from frb 20180916b. _The Astrophysical Journal Letters_ 911 (2020).
* [18] Gajjar, V. _et al._ Highest frequency detection of frb 121102 at 4–8 ghz using the breakthrough listen digital backend at the green bank telescope. _The Astrophysical Journal_ 863, 2 (2018). URL http://dx.doi.org/10.3847/1538-4357/aad005.
* [19] Ravi, V. The prevalence of repeating fast radio bursts. _Nature Astronomy_ 3, 928–931 (2019). URL http://dx.doi.org/10.1038/s41550-019-0831-y.
* [20] Luo, R. _et al._ On the frb luminosity function – – ii. event rate density. _Monthly Notices of the Royal Astronomical Society_ 494, 665–679 (2020). URL http://dx.doi.org/10.1093/mnras/staa704.
* [21] Spitler, L. G. _et al._ A repeating fast radio burst. _Nature_ 531, 202–205 (2016). URL http://dx.doi.org/10.1038/nature17168.
* [22] James, C. W. _et al._ Which bright fast radio bursts repeat? _MNRAS_ 495, 2416–2427 (2020).
* [23] Piro, A. L. Magnetic interactions in coalescing neutron star binaries. _The Astrophysical Journal_ 755, 80 (2012). URL http://dx.doi.org/10.1088/0004-637X/755/1/80.
* [24] Zhang, B. Fast radio bursts from interacting binary neutron star systems. _The Astrophysical Journal_ 890, L24 (2020). URL http://dx.doi.org/10.3847/2041-8213/ab7244.
* [25] Totani, T. Cosmological fast radio bursts from binary neutron star mergers. _Publications of the Astronomical Society of Japan_ 65, L12 (2013). URL http://dx.doi.org/10.1093/pasj/65.5.L12.
* [26] Abbott, B. P. _et al._ Prospects for observing and localizing gravitational-wave transients with advanced ligo, advanced virgo and kagra. _Living Reviews in Relativity_ 23 (2020). URL http://dx.doi.org/10.1007/s41114-020-00026-9.
* [27] Macquart, J. P. _et al._ A census of baryons in the Universe from localized fast radio bursts. _Nature_ 581, 391–395 (2020).
* [28] Josephy, A. _et al._ No Evidence for Galactic Latitude Dependence of the Fast Radio Burst Sky Distribution. _arXiv e-prints_ arXiv:2106.04353 (2021).
* [29] Rafiei-Ravandi, M. _et al._ CHIME/FRB Catalog 1 results: statistical cross-correlations with large-scale structure. _arXiv e-prints_ arXiv:2106.04354 (2021).
* [30] Gehrels, N. _et al._ Galaxy Strategy for LIGO-Virgo Gravitational Wave Counterpart Searches. _ApJ_ 820, 136 (2016).
* [31] Nakar, E. & Piran, T. Detectable radio flares following gravitational waves from mergers of binary neutron stars. _Nature_ 478, 82–84 (2011).
* [32] Gao, H., Ding, X., Wu, X.-F., Zhang, B. & Dai, Z.-G. Bright Broadband Afterglows of Gravitational Wave Bursts from Mergers of Binary Neutron Stars. _ApJ_ 771, 86 (2013).
* [33] Most, E. R., Nathanail, A. & Rezzolla, L. Electromagnetic Emission from Blitzars and Its Impact on Non-repeating Fast Radio Bursts. _ApJ_ 864, 117 (2018).
* [34] Abbott, B. P. _et al._ Gravitational waves and gamma-rays from a binary neutron star merger: Gw170817 and grb 170817a. _The Astrophysical Journal_ 848, L13 (2017). URL http://dx.doi.org/10.3847/2041-8213/aa920c.
* [35] Zhang, B. B. _et al._ A peculiar low-luminosity short gamma-ray burst from a double neutron star merger progenitor. _Nature Communications_ 9, 447 (2018).
* [36] Savchenko, V. _et al._ LIGO/Virgo S190425z: further analysis of INTEGRAL data. _GRB Coordinates Network_ 24178, 1 (2019).
* [37] Pozanenko, A. S., Minaev, P. Y., Grebenev, S. A. & Chelovekov, I. V. Observation of the second ligo/virgo event connected with a binary neutron star merger s190425z in the gamma-ray range. _Astronomy Letters_ 45, 710–727 (2019). URL http://dx.doi.org/10.1134/S1063773719110057.
* [38] Breu, C. & Rezzolla, L. Maximum mass, moment of inertia and compactness of relativistic stars. _MNRAS_ 459, 646–656 (2016).
* [39] Ai, S., Gao, H. & Zhang, B. What Constraints on the Neutron Star Maximum Mass Can One Pose from GW170817 Observations? _ApJ_ 893, 146 (2020).
* [40] Li, A., Miao, Z., Han, S. & Zhang, B. Constraints on the Maximum Mass of Neutron Stars with a Quark Core from GW170817 and NICER PSR J0030+0451 Data. _ApJ_ 913, 27 (2021).
* [41] Miller, M. C. _et al._ The Radius of PSR J0740+6620 from NICER and XMM-Newton Data. _arXiv e-prints_ arXiv:2105.06979 (2021).
* [42] Li, A., Miao, Z. Q., Jiang, J. L., Tang, S. P. & Xu, R. X. Bayesian inference of quark star equation of state using the NICER PSR J0030+0451 data. _MNRAS_ (2021).
* [43] Cook, G. B., Shapiro, S. L. & Teukolsky, S. A. Rapidly Rotating Neutron Stars in General Relativity: Realistic Equations of State. _ApJ_ 424, 823 (1994).
* [44] Dai, Z. G. & Lu, T. $\gamma$-Ray Bursts and Afterglows from Rotating Strange Stars and Neutron Stars. _Phys. Rev. Lett._ 81, 4301–4304 (1998).
* [45] Drago, A., Lavagno, A., Metzger, B. D. & Pagliara, G. Quark deconfinement and the duration of short gamma-ray bursts. _Phys. Rev. D_ 93, 103001 (2016).
* [46] Li, A. _et al._ Internal x-ray plateau in short GRBs: Signature of supramassive fast-rotating quark stars? _Phys. Rev. D_ 94, 083010 (2016).
* [47] Abbott, B. P. _et al._ Gw170817: Observation of gravitational waves from a binary neutron star inspiral. _Physical Review Letters_ 119 (2017). URL http://dx.doi.org/10.1103/PhysRevLett.119.161101.
* [48] Piro, L. _et al._ A long-lived neutron star merger remnant in GW170817: constraints and clues from X-ray observations. _MNRAS_ 483, 1912–1921 (2019).
* [49] Troja, E. _et al._ Accurate flux calibration of GW170817: is the X-ray counterpart on the rise? _MNRAS_ 510, 1902–1909 (2022).
* [50] Bhardwaj, M. _et al._ A Nearby Repeating Fast Radio Burst in the Direction of M81. _ApJ_ 910, L18 (2021).
## 1 Methods
### 1.1 Search Method
We search for coincidences between GW signals[13] and published CHIME FRBs[2], taking into account all existing theoretical models for potential GW-FRB associations[23, 25, 14, wang2016, zhang2020]. Our initial search considers coincidence in time and sky direction, and aims to identify potentially interesting events for further analysis — distance and dispersion measure are considered only in the combined chance probability calculation[Alex2021thesis].
#### Time
Our search time window is chosen to be asymmetrical and 26 hours wide,
encompassing FRBs that occur up to 2 hours before a GW signal and 24 hours
after. This covers pre-merger emission theories such as magnetospheric
interactions[23, wang2016, zhang2016] and magnetic braking[25], and post-merger
emission theories such as magnetar collapse[14] (note this theory relates only
to BNS merger events). We source our GW data from the published GW catalog
GWTC-2 available at the time of writing[13] and CHIME FRBs from the CHIME/FRB
Public Database (https://www.chime-frb.ca/catalog). Our GW sample consists of
39 events detected by LIGO and Virgo during the O3a observing run (April 1,
2019 – October 1, 2019) and 536 FRBs — 474 of which are apparent non-repeaters
— published in CHIME Catalog 1[2]. Within our sample, 21 GW events and 171
(150 non-repeating) FRBs have overlapping observing windows and are selected
to search for temporal coincidence. A GW and FRB are considered to be
coincident in time if the FRB lies within the designated 26-hr time window of
a GW, and this is found by simply iterating over the GPS times of the two
signals. Any signals that are not coincident in time are cut, and the
remaining candidates are screened for coincidence in sky direction.
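The temporal cut described above can be sketched as a direct iteration over GPS times. The event times below are illustrative placeholders, not catalog values.

```python
# Minimal sketch of the temporal cut: an FRB is coincident with a GW event if
# it arrives between 2 h before and 24 h after the merger. The GPS times used
# here are illustrative placeholders, not catalog values.
def temporal_matches(gw_times, frb_times, before=2 * 3600, after=24 * 3600):
    """Return (GW index, FRB index) pairs within the asymmetric window."""
    pairs = []
    for i, t_gw in enumerate(gw_times):
        for j, t_frb in enumerate(frb_times):
            if -before <= t_frb - t_gw <= after:
                pairs.append((i, j))
    return pairs

gw = [1240215503.0]                  # placeholder merger GPS time
frb = [1240215503.0 + 2.5 * 3600,    # 2.5 h after the merger: kept
       1240215503.0 + 30 * 3600]     # 30 h after the merger: cut
print(temporal_matches(gw, frb))     # [(0, 0)]
```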
#### Sky Direction
The accuracy of GW localization is mainly determined by the measured signal
arrival time between detectors, as well as the relative signal amplitude at
each detector[wen2010, fairhurst2011, klimenko2011, pankow2018]. As a result,
it is largely dependent on the number and geographical separation of
interferometers that successfully detect a signal. Bayesian methods are used
to map the sky direction of a GW source as a posterior probability distribution on the sky[singer2016], with a ‘credible interval’ value being assigned to each RA and DEC coordinate. We obtain these credible interval values from the published GWTC-2 data[13]. Note a smaller credible level value indicates closer proximity to the optimal value[sivia2006]. The current
sensitivities of the Advanced LIGO and Virgo network are able to localize GW
signals to sky areas of tens of square degrees[26] for confident detections by
all three interferometers. CHIME FRBs are localized using beam model
predictions with errors on the order of arcminutes[2]. Therefore, for our sky
direction cut, we consider a GW-FRB pair spatially coincident if the best-fitting FRB coordinates lie within the 90% credible interval of candidate GW signals. This is done using the find_greedy_credible_levels function of the ligo.skymap.postprocess.util module[ligoskymap].
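The greedy credible-level assignment can be reproduced in a few lines of NumPy; the sketch below is equivalent in spirit to find_greedy_credible_levels, not the ligo.skymap implementation itself.

```python
import numpy as np

def greedy_credible_levels(p):
    """For each pixel, the smallest credible level whose greedy credible
    region (pixels accumulated in order of decreasing probability)
    contains that pixel."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(p)[::-1]          # most probable pixels first
    levels = np.empty_like(p)
    levels[order] = np.cumsum(p[order])  # enclosed probability so far
    return levels

# Toy 4-pixel skymap: pixel 1 (p = 0.5) lies inside every credible region
# down to the 50% level, so a 0.9 threshold keeps pixels 1, 2 and 3.
sky = np.array([0.1, 0.5, 0.3, 0.1])
print(greedy_credible_levels(sky))
```

An FRB position is then spatially coincident if the level of its pixel falls below the chosen threshold (0.9 for the 90% cut above).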
### 1.2 Search Results
Our search found $12$ out of $21$ ($\sim 57\%$) GW events coincident in time
with at least one CHIME FRB. This high fraction is not surprising considering
the high rate of CHIME FRB detections (see “Chance Probability Derivation”
section). One GW event initially included in GWTC-2 that was temporally
coincident with two CHIME FRBs — GW190424_180648 — has since been retracted because its significance was reduced upon further re-analysis, and is not included
in any updated GWTC event lists (https://www.gw-openscience.org/GWTC-2.1).
Thus, we eliminate this candidate coincidence from the spatial search. Of the
remaining GW-FRB pairs, only one candidate coincidence passed the sky
direction cut: FRB 20190425A - GW190425. Remarkably, this is also the only BNS
event detected during O3a.
### 1.3 Chance Probability Derivation
Given the wide time window we are searching over, the high detection rate of
CHIME FRBs and the poor localization of GW events, random coincidences in time
and sky direction are expected. Furthermore, since the relation between
dispersion measure (DM) and redshift $z$ for FRBs shows large fluctuations[27]
— and due to the uncertainty in $z$ inferred from GW190425 itself — we have
not placed any a priori cuts on the predicted DM values from the estimated
luminosity distance of GW190425. Nonetheless, FRBs arriving closer in time,
from a sky direction consistent with the GW event localization, and with DMs
consistent with the inferred GW event distance given the ionized matter
content of the Universe, will present greater evidence against such a chance
association. We derive a joint probability of chance association by combining
probabilities of random temporal, spatial and DM (or distance) coincidences
defined as $P_{\rm T}$, $P_{\rm S}$, and $P_{\rm DM}$, respectively. As non-
repeating FRBs best suit compact object merger theories, we choose to analyse
the distributions of non-repeating CHIME FRBs only, though a conservative
value is also calculated using all CHIME FRBs (i.e. including repeaters).
Details of the calculation are below.
#### $P_{\rm T}$
The likelihood of a chance temporal coincidence, $P_{\rm T}$, is the
probability of CHIME randomly detecting an FRB in the 2.5-hour window post-merger, as
per the blitzar scenario. GW190425 was the only BNS event detected during the
overlap of O3a and CHIME Catalog 1 — see P-Value Calculation for calculations
considering other scenarios, e.g. BH–BH mergers. CHIME’s up-time was not constant throughout O3a, during which CHIME’s FRB detection efficiency was affected by scheduled maintenance and unscheduled outages[2]. In particular,
the detection rate was reduced for a short period in the days after GW190425. As a result, we cannot assume a uniform distribution of CHIME FRB detections. To
derive a more accurate value of the rate of FRBs ($R_{\rm FRB}$) around the
time of GW190425, we group CHIME FRBs into 5-day bins, and use the time-
averaged number of detections within a 30-day period (see Extended Data Figure
5) to find $R_{\rm FRB}$. Thus we average over variability in CHIME
sensitivity on small timescales. The resulting rates are 1.93 FRBs per day for
non-repeaters, and 2.03 per day when including repeaters. The number of FRBs
expected to fall within a 2.5 hour time window is then $2.5\cdot 1.93/24=0.20$
for non-repeaters and $2.5\cdot 2.03/24=0.21$ for our conservative value
considering all CHIME FRBs.
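The time-averaged rate estimate amounts to counting detections in 5-day bins over a 30-day window. A sketch with synthetic epochs (not CHIME catalog values), chosen so that 58 events in 30 days reproduce the 1.93 per day figure:

```python
import numpy as np

def rate_per_day(epochs_mjd, t0_mjd, bin_days=5, window_days=30):
    """Time-averaged detection rate over a window centred on t0_mjd."""
    half = window_days / 2
    edges = np.arange(t0_mjd - half, t0_mjd + half + bin_days, bin_days)
    counts, _ = np.histogram(epochs_mjd, bins=edges)
    return counts.sum() / window_days

# Synthetic epochs: 58 detections spread over the 30-day window.
rng = np.random.default_rng(0)
epochs = 58598.0 + rng.uniform(-15, 15, size=58)
print(f"{rate_per_day(epochs, 58598.0):.2f} FRBs per day")  # 1.93
```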
Figure 5: CHIME FRB detection rates. Number $N_{\rm FRB}$ of all (blue
histogram) and non-repeating (orange histogram) CHIME FRBs per 5 days. Average
rate during the period surrounding GW190425 (dashed black line) for all and
non-repeating FRBs shown by the blue and red lines, respectively.
The number of one-off FRB detections in any given time period $T$ will follow
a Poissonian distribution with expectation value $\lambda=TR_{\rm FRB}\approx
0.20$ for a 2.5 hour window. The probability $P_{\rm T}$ to observe a random
CHIME non-repeating FRB with a tighter time coincidence than FRB 20190425A is
then given by:
$\displaystyle P_{\rm T}=1-\exp(-\lambda)\approx 0.18.$ (1)
for non-repeating FRBs. The periodic and/or bursty activity periods for
repeating FRBs will make the time distribution from these sources non-Poissonian[CHIME2020_periodic, Rajwade_121102_repetitions]. Ignoring the non-Poissonian nature of repeating FRBs, and using $\lambda\approx 0.21$ for the
coincidence window, produces a conservative $P_{\rm T}=0.19$.
If instead we consider only our search time window of $T=26$ hr, the expected number of FRBs is $\lambda=2.1$, and hence $P_{\rm T}=0.88$.
#### $P_{\rm S}$
We define $P_{\rm S}$ to be the probability that a random CHIME FRB would be
detected at a location with equal to or higher likelihood in the GW190425
skymap than FRB 20190425A (i.e. within the 66.7% credible interval). This is a
function of GW190425’s refined skymap[abbott2021], weighted by the exposure time
of CHIME at each coordinate. This can be written as:
$\displaystyle P_{\rm S}$ $\displaystyle=$
$\displaystyle\int_{\Omega_{x}}E(\Omega)d\Omega,$ (2)
where $\Omega$ is a general 2D sky coordinate, $\Omega_{x}$ is the 66.7%
credible region and $E(\Omega)$ is the relative likelihood (‘exposure’) of
seeing an FRB at sky position $\Omega$.
CHIME’s instantaneous exposure is a function of its primary beamshape and
CHIME’s location at a latitude of $49^{\circ}19^{\prime}13^{\prime\prime}.08$
North[2]. We ignore declination dependent fluctuations due to the placement of
the synthesized beams, which occur on scales ($\mathcal{O}(0.1^{\circ})$) much smaller than variations in the GW190425 skymap. We also approximate the
instantaneous coverage in local azimuth angle to be very small, which is valid
at the declinations considered here. Therefore, any background FRB detected in
the [0,2.5] hour window after GW190425 — i.e. the 2.5 hour window before FRB
20190425A — will necessarily fall in the RA ($\alpha$) range
$218.22^{\circ}\leq\alpha\leq 255.72^{\circ}$. This method implicitly accounts
for the favourable alignment of the LIGO and CHIME antenna patterns due to
their nearby geographical location, which causes the GW190425 skymap to
significantly overlap with the sky area covered by CHIME’s primary beam within
the 2.5 hr time window.
The CHIME/FRB Collaboration use a sophisticated pulse-injection method —
accounting for beamshape effects — to calculate the exposure $E(\delta)$ to an
FRB as a function of declination $\delta$ [4]. This exposure time is proportional to the detection probability per square degree, and is split into ‘upper’ and ‘lower’ transit curves representing sky regions viewed above and below the north celestial pole. The un-normalized exposure,
$E_{u/l}^{\prime}(\delta)$, is modelled by manually fitting a spline to the
upper ‘u’ and lower ‘l’ transit curves (Figure 5 in Ref. [2]) of this
function. The normalization constant, $C$, is then derived by integrating
$E^{\prime}(\Omega)$ over all declinations and the RA range of interest using
the upper transit curves, and the RA range offset by 12 hr for the lower
transit curves, i.e.
$\displaystyle C=\int_{218.22^{\circ}\leq\alpha\leq
255.72^{\circ}}E_{u}^{\prime}(\delta)d\Omega+\int_{38.22^{\circ}\leq\alpha\leq
75.72^{\circ}}E_{l}^{\prime}(\delta)d\Omega\approx 1658$ (3)
Using our derived value for C, we then integrate the normalized exposure
function, $E_{u/l}(\Omega)=E_{u/l}^{\prime}(\delta)/C$, over the 66.7%
credible sky area in the range $218.22^{\circ}\leq\alpha\leq 255.72^{\circ}$
($38.22^{\circ}\leq\alpha\leq 75.72^{\circ}$) for the upper (lower) transit
respectively as per Eq. (2) to find the total probability. This produces
$P_{\rm S}=0.265$. As a simple check of this method, we also take the optimal measured sky positions of all non-repeating CHIME FRBs, and find that of the 56 in this RA range, 17 (30%) lie within the credible interval. If we
instead randomise the RA of all non-repeating CHIME FRBs uniformly over the RA
range, we find that 29.4% lie within the 66.7% credible sky area of GW190425’s
localization; while shuffling FRB RA values in the CHIME catalog with respect
to DEC finds 28.8% lie in this interval. We attribute the excess beyond our
estimates to be random fluctuations of the observed RA,DEC of FRBs compared to
expectations.
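The normalization and integration of Eqs. (2) and (3) can be sketched numerically. The exposure function below is a hypothetical stand-in for the real curves, which are manual spline fits to Figure 5 of Ref. [2]; the names `toy_exposure` and `band_probability` are illustrative and not from the paper's code, and only a single transit is modelled.

```python
import numpy as np

def toy_exposure(dec_deg):
    """Hypothetical stand-in for CHIME's per-declination exposure spline
    (the real curves are fit to Figure 5 of Ref. [2])."""
    return np.cos(np.radians(dec_deg - 49.3)) ** 2  # peaks near CHIME's latitude

def normalization(exposure, ra_width_deg=37.5, dec_grid=None):
    """Integrate E'(delta) * cos(delta) over the RA window to get C (cf. Eq. 3).
    The cos(delta) factor converts d(dec) d(RA) into solid angle dOmega."""
    if dec_grid is None:
        dec_grid = np.linspace(-11, 90, 2000)  # CHIME sees dec down to ~-11 deg
    integrand = exposure(dec_grid) * np.cos(np.radians(dec_grid))
    return ra_width_deg * np.trapz(integrand, dec_grid)

C = normalization(toy_exposure)

def band_probability(exposure, C, dec_lo, dec_hi, ra_width_deg=37.5):
    """Normalized detection probability within a declination band (cf. Eq. 2)."""
    dec = np.linspace(dec_lo, dec_hi, 500)
    return ra_width_deg * np.trapz(exposure(dec) * np.cos(np.radians(dec)), dec) / C

print(band_probability(toy_exposure, C, 38, 76))  # example credible-area band
```

By construction the band probability integrates to unity over the whole observable declination range, which is the check the normalization constant $C$ enforces.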
If we instead consider an FRB uniformly distributed in time over our [-2,+24]
hr time window (for which $P_{\rm T}=0.88$), we find $P_{S}=0.154$ for the 90%
likelihood region. Thus the total chance of an event passing our initial
selection criteria is $P_{\rm pass}=13.5\%$.
We observe that the distribution of CHIME FRBs has been extensively analysed
for deviations from a simple declination dependent sensitivity[28], with the
only observed anisotropy not due to $E(\delta)$ being a potential correlation
between FRBs with DM $\sim$ 800 pc cm-3 and large-scale structure at $z\sim
0.4$ [29]. The spatial scales of such structure are small compared to the
variation in the GW190425 skymap, and the distances involved are much larger
than those relevant to GW190425, so we ignore this here.
### 1.4 P-value calculation
We use the product $P_{\rm T}P_{\rm S}P_{\rm DM}$ as evidence against our null
hypothesis, $H_{0}$, of a purely chance association. To estimate the
probability of obtaining an equally small product under $H_{0}$ (i.e. the
p-value), we consider the 88% probability of observing an FRB in the time
range [-2,24] hr about the time of GW190425. We use a uniform time
distribution, with RAs located at the centre of the CHIME beam, and DECs
distributed according to CHIME’s exposure. We calculate $P_{\rm S}P_{\rm DM}$
for all such FRBs, rejecting those outside the 90% credible area. Sampling
$P_{\rm DM}$ uniformly from $0$ to $1$ allows us to calculate the probability
of observing a value of $P_{\rm T}P_{\rm S}P_{\rm DM}<1.9\cdot 10^{-4}$ under
$H_{0}$. We find a p-value of 0.0052, i.e. the significance of this event is
$2.8\,\sigma$.
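The Monte Carlo p-value estimate can be illustrated with a simplified sketch in which both $P_{\rm S}$ and $P_{\rm DM}$ are drawn from Uniform(0,1) under $H_{0}$. This is not the paper's procedure (which draws $P_{\rm S}$ from the exposure-weighted sky distribution and rejects FRBs outside the 90% credible area), so it does not reproduce the quoted p-value of 0.0052; it only shows the mechanics, cross-checked against the analytic result $P(XY<c)=c(1-\ln c)$ for two independent uniforms.

```python
import numpy as np

rng = np.random.default_rng(0)
P_T, threshold, n = 0.88, 1.9e-4, 2_000_000

# Under H0: an FRB lands in the time window with probability P_T; its P_S and
# P_DM are then drawn from their null distributions.  Here both are taken as
# Uniform(0,1) purely for illustration; the paper instead draws P_S from the
# exposure-weighted sky distribution.
occurs = rng.random(n) < P_T
p_s = rng.random(n)
p_dm = rng.random(n)
product = np.where(occurs, P_T * p_s * p_dm, np.inf)  # no FRB -> no detection
p_value = np.mean(product < threshold)

# Analytic cross-check for this toy version: P(XY < c) = c(1 - ln c).
c = threshold / P_T
analytic = P_T * c * (1 - np.log(c))
print(p_value, analytic)
```

The agreement between the Monte Carlo and the closed form confirms the counting logic; the real calculation differs only in the distributions sampled.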
BNS are the most commonly proposed merger scenario for FRB progenitors [9,
LVKchime2022]. However, pre-merger emission from the inspiral of binaries
involving a charged black hole has also been proposed [Zhang2016bbh]. In such a
scenario, the possibility of a CHIME FRB occurring during the [-2,0] hr time
window prior to the other 20 GW events should be considered as a trial factor.
For this calculation, we use the mean CHIME FRB detection rate of 1.62
FRBs/day during the overlap period between O3 and the CHIME catalogue (138
once-off FRBs in 85 days), for an expected number of 2.7 coincident FRBs.
Using the GWTC 2.1 skymaps, and the same methods to calculate $P_{S}$ above,
we find that 0.41 of these would also be expected to pass our spatial
selection criteria. Compared to the 13.5% chance of a background event passing
these criteria for GW190425, this effectively counts as three extra trials.
Thus, if this scenario were considered equally plausible (it is not, given the
extremely large charge of $3.3\times 10^{21}\,{\rm C}\,(M/M_{\odot})$ required
on the BHs), our p-value should be raised to 0.021, i.e. 2.3$\sigma$.
#### Notes on GW190425 Localization
The sky localization of GW190425 is poorly constrained due to it being
detected significantly in only a single detector. FRB 20190425A lies within
the 66.7% credible interval for the most recently published GWTC-2 skymap for
GW190425[13] (used in our main analysis). The FRB sky direction is also
largely consistent with all skymaps available for the GW event at various
stages of its discovery (Extended Data Table 1). It is more consistent with
the initial online rapid Bayestar localization [singer2016]; using this skymap
results in a 50% lower sky coincidence probability ($P_{S}$) for a random
spatial coincidence than the GWTC-2 result. The FRB is, however, less
consistent with the online LALInference skymap [lalinference], which yields a
$P_{S}$ larger by 32%. The GWTC-2 skymap is
obtained using cleaned data and a better estimation of the noise level, while
the initial Bayestar and LALInference skymaps are obtained using the available
online data around the time of the detection. Therefore we conclude that
$P_{S}=0.265$.
Table 1: Credible interval that FRB 20190425A lies in for each of the published skymaps for GW190425, and corresponding probabilities for coincidence in sky direction ($P_{S}$, see text). All skymaps were sourced from https://gracedb.ligo.org/superevents/S190425z/view/. Localization | Credible Interval | $P_{S}$ | Date Published
---|---|---|---
Bayestar | 25.0% | 0.14 | Apr 2019
LALInference | 90.2% | 0.35 | Apr 2019
Updated Superevent | 67.6% | 0.21 | Jul 2020
GWTC-2 | 66.7% | 0.265 | Oct 2020
#### $P_{\rm DM}$
An FRB produced at a given redshift, $z$, will have a likelihood distribution
$P({\rm DM|z})$ of dispersion measures, DM. The DM is a measure of the column
density of free electrons, $n_{e}$, along the line of sight $\ell$ from the
FRB source to the observer, in units of pc cm-3. Accounting for the redshift
of an extragalactic object, it is defined as
${\rm DM}=\int\frac{n_{e}(\ell)}{1+z}\ d\ell.$ (4)
The DM of extragalactic FRBs can be characterised in multiple ways. Here, we
use four components: two from the Milky Way, being its interstellar medium
(MWISM) and halo (MWhalo), and two extragalactic contributions, from the
intergalactic medium (IGM), and the host galaxy (host), two of which depend on
redshift $z$:
${\rm DM}(z)={\rm DM}_{\rm MW}+{\rm DM}_{\rm EG}(z)={\rm DM}_{\rm MWISM}+{\rm
DM}_{\rm MWhalo}+{\rm DM}_{\rm IGM}(z)+\frac{{\rm DM}_{\rm host}}{1+z}.$ (5)
We observe that in the CHIME Catalog 1 data, quoted ‘extragalactic’ values for
${\rm DM}_{\rm EG}$ include ${\rm DM}_{\rm MWhalo}$, i.e. ${\rm DM}_{\rm
EG}={\rm DM}-{\rm DM}_{\rm MWISM}$. The contribution from the ISM, ${\rm
DM}_{\rm MWISM}$, can be estimated using various electron density models, e.g.
NE2001 [cordes2002], YMW16 [yao2017], or YT20 [yamasaki2020], giving 48.79,
38.8 and 49.0 pc cm-3 respectively for FRB 20190425A. Such models can suffer
from over-fitting however, and may have poor predictive power
[Schnitzeler2012DM]. ${\rm DM}_{\rm MWhalo}$ is uncertain: lower limits can be
placed using pulsars from the Large Magellanic Cloud [Shannon2018], while
upper limits can be placed using nearby FRBs [CHIME_M81_2021]. Modelling
[ProchaskaZheng2019] suggests a range of 50–80 pc cm-3.
The extragalactic contributions, ${\rm DM}_{\rm IGM}+{\rm DM}_{\rm host}$,
have been estimated by fits to five localized FRBs[27]. Both contributions are
expected to have tails towards high DM values, from the few that intersect
many or dense clumps of matter along the line-of-sight, and/or originate from
dense regions within their host. Therefore DM values only poorly constrain the
distance to an FRB. However, a robust upper limit, $z_{\rm max}$, can be
estimated by setting ${\rm DM}_{\rm host}=0$, taking the lowest of model
values for $\rm DM_{\rm MWISM}$ (38.8 pc cm-3) and ${\rm DM}_{\rm MWhalo}$ (50
pc cm-3), and using the approximate relation [Inoue2004] in the nearby
Universe of $\rm DM_{\rm IGM}=1000\,{\rm pc\,cm}^{-3}\,z$. Such an estimate is
consistent with the nearby low-DM FRB discovered in M81 [50]. Applying this to
FRB 20190425A produces $z_{\rm max}=0.0394$, or $\sim$180 Mpc using standard
cosmology [ade2016, Wright_2006_cosmology].
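This robust bound reduces to a one-line calculation. The catalogue DM of FRB 20190425A, $\approx$128.2 pc cm$^{-3}$, is an assumption here (it is not quoted in this section):

```python
def z_max(dm_obs, dm_mwism=38.8, dm_mwhalo=50.0, dm_per_z=1000.0):
    """Robust redshift upper limit: attribute all remaining DM to the IGM
    (DM_host = 0), using the nearby-Universe scaling DM_IGM ~ 1000 z pc/cm^3."""
    return max(dm_obs - dm_mwism - dm_mwhalo, 0.0) / dm_per_z

# Assumed catalogue DM of FRB 20190425A (~128.2 pc/cm^3):
print(round(z_max(128.2), 4))  # -> 0.0394
```

Multiplying by $c/H_{0}\approx 4400$ Mpc then gives the quoted $\sim$180 Mpc.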
Deriving a realistic DM–z relation using the model described in [27] requires
considering variations in the component DM contributions. We use the NE2001
model for ${\rm DM}_{\rm MWISM}$, and assume ${\rm DM}_{\rm MWhalo}=50$ pc
cm-3. In this model, errors in ${\rm DM}_{\rm MWISM}$ and ${\rm DM}_{\rm
MWhalo}$ are absorbed into the distributions for ${\rm DM}_{\rm IGM}$ and
${\rm DM}_{\rm host}$. In their model, ${\rm DM}_{\rm IGM}$ is a function of
the total baryon content of the Universe, $\Omega_{b}$, and a ‘feedback’
parameter F governing how clumped those baryons are [27]. This DM model is
also explained in [James2021], and implemented in publicly available Python
code [frb, zdm].
The feedback parameter F is poorly constrained by localized FRBs [27]; here,
we somewhat arbitrarily use F $=0.32$. The fitted baryon content was
consistent with expectations from measurements of the cosmic microwave
background [PlanckCosmology2018]. Most importantly for this work, fits to the FRB
host contribution ${\rm DM}_{\rm host}$ (which dominates over ${\rm DM}_{\rm
IGM}$ in the nearby Universe) used a log-normal distribution,
$\displaystyle p({\rm DM}_{\rm host}|\mu_{\rm host},\sigma_{\rm
host})=\frac{1}{{\rm DM_{\rm host}}}\frac{1}{\sigma_{\rm
host}\sqrt{2\pi}}\exp\left[-\frac{(\log{\rm DM}_{\rm host}-\mu_{\rm
host})^{2}}{2\sigma_{\rm host}^{2}}\right].$ (6)
Best-fit values of $e^{\mu_{\rm host}}=65-70$ pc cm-3 and $\sigma_{\rm
host}=0.4$ (where $\mu_{\rm host}$ and $\sigma_{\rm host}$ are the mean and
standard deviation of the fitted lognormal FRB host galaxy DM distributions,
respectively) are obtained using a “gold standard” sample of five localized
FRBs measured with the Australian Square Kilometre Array Pathfinder (ASKAP).
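Eq. (6) uses the natural logarithm. A quick numerical check that the pdf is properly normalized and has median $e^{\mu_{\rm host}}$, taking $\mu_{\rm host}=\ln(67)$ as a representative value from the quoted $e^{\mu_{\rm host}}=65$–$70$ pc cm$^{-3}$ range:

```python
import numpy as np

def p_dm_host(dm, mu=np.log(67.0), sigma=0.4):
    """Log-normal host-DM pdf of Eq. (6).  mu = ln(67) is a representative
    choice within the quoted best-fit range, not a fitted value."""
    return np.exp(-(np.log(dm) - mu) ** 2 / (2 * sigma ** 2)) / (
        dm * sigma * np.sqrt(2 * np.pi))

dm = np.linspace(1e-3, 2000, 400_000)
print(np.trapz(p_dm_host(dm), dm))  # integrates to ~1 over DM_host > 0
```

The median at $e^{\mu}$ (rather than the mean) is what makes this distribution's high-DM tail consistent with the occasional dense-host sightline described above.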
Figure 6: From left to right: probability distribution of redshift $z$ for
GW190425 (from https://dcc.ligo.org/LIGO-P2000223/public); and the probability
distributions of the mean and standard deviation of the fitted lognormal FRB
host galaxy DM distribution[27].
Incorporating uncertainties in $z$ from GW190425, $\mu_{\rm host}$, and
$\sigma_{\rm host}$ (shown in Extended Data Figure 6), the expected DM of an
FRB originating from GW190425 is therefore:
$p({\rm DM|GW190425})=\int p({\rm DM}|z,\mu_{\rm host},\sigma_{\rm
host})p(z)p(\mu_{\rm host})p(\sigma_{\rm host})dzd\mu_{\rm host}d\sigma_{\rm
host},$ (7)
where $p({\rm DM}|z,\mu_{\rm host},\sigma_{\rm host})$ is derived from (5) and
(6). Integrating over these distributions produces the expected DM
distribution. We plot in Figure 2 the expected distribution with the ${\rm
DM}_{\rm MWISM}$ contribution subtracted, to allow comparison with other
once-off CHIME FRBs [2].
We calculate the coincidence probability associated with the DM of FRB
20190425A by ordering all 474 non-repeating CHIME FRBs according to
$p({\rm DM}-{\rm DM}_{\rm MWISM})$ and counting the number with equal or better
probabilities. Using the NE2001 model for ${\rm DM}_{\rm MWISM}$, we find only
one FRB with a better-matching $p({\rm DM}-{\rm DM}_{\rm MWISM})$. Thus, a
simple estimation yields $P_{\rm DM}=2/474\approx 0.0042$. The uncertainties
in ${\rm DM}_{\rm MWISM}$, $\mu_{\rm host}$ and $\sigma_{\rm host}$ have also
been derived using NE2001 [27], i.e. using NE2001 only is self-consistent.
Nonetheless, we calculate a conservative value of $P_{\rm DM}$ using two
methods. Firstly, we simply use the YMW16 model to calculate ${\rm DM}_{\rm
MWISM}$ for all CHIME FRBs. In this case we find four other FRBs with a more
probable DM value than FRB 20190425A, i.e. $P_{\rm DM}=5/474\approx 0.0105$.
We also simulate uncertainty in ${\rm DM}_{\rm MWISM}$ by randomly generating
these values from a Normal distribution with mean equal to the average of
NE2001 and YMW16, and standard deviation equal to the difference. Out of 1000
random iterations, we find on average 4.2 more-probable FRBs, i.e. $P_{\rm
DM}=5.2/474\approx 0.011$. We take this value as a conservative estimate of
$P_{\rm DM}$.
Therefore $P_{\rm tot}=P_{T}P_{S}P_{\rm DM}=0.18\cdot 0.265\cdot 0.004\
(0.19\cdot 0.265\cdot 0.011)=1.9\cdot 10^{-4}\ (5.5\cdot 10^{-4})$ is the
total (conservative) chance probability. These probabilities were derived
using a Frequentist approach, but we note that a Bayesian measure of
significance can also be applied [ashton2018].
### 1.5 GW190425 SNR time series
We use modules from the PyCBC software package [Nitz2021] to perform matched
filtering [Finn1992, Cutler1994] and obtain the SNR time series in Figure 1.
Public glitch subtracted, cleaned GW strain data is used for the filtering
(https://dcc.ligo.org/LIGO-T1900685/public) together with a TaylorF2 waveform
template for the most probable chirp mass of $1.48M_{\odot}$ [12].
### 1.6 Host galaxy association search
Name | z | RA | DEC
---|---|---|---
WISEA J170311.62+212626.6 | 0.078742 | 17 03 11.622 | +21 26 26.61
UGC 10667 | 0.031 | 17 02 38.976 | +21 34 35.91
WISEA J170310.07+212309.9 | 0.047523 | 17 03 10.07 | +21 23 09.9
Table 2: List of extra-galactic sources within the central 68% localization
uncertainty of FRB 20190425A ($0.1\times 0.2\deg^{2}$) retrieved via the NASA
Extragalactic Database (NED) within a redshift range $0.001<z<0.08$ consistent
with the GW PE redshift distribution. Quoted redshifts are spectroscopic. We
exclude objects that do not have a listed spectroscopic redshift.
The properties of potential host galaxies found inside the error ellipse of
FRB 20190425A’s central localization are displayed in Extended Data Table 2.
The only host within the upper limit $z_{max}=0.0394$ is UGC 10667. To
investigate the significance of this host galaxy association we utilize the
methods described in [30]. The total number of galaxies expected within a
solid angle $\Delta\Omega$ at luminosity distance $d_{\mathrm{L}}$ is
$N_{\mathrm{gals}}=\rho_{\mathrm{gals}}\frac{4}{3}\pi
d_{\mathrm{L}}^{3}\frac{\Delta\Omega}{4\pi}$ in the nearby (Cartesian)
Universe. The density of galaxies can be derived from the Schechter function
associated with a galaxy catalog optimized for GW follow-up searches [30]. In
this work we utilize their value of $\rho_{\mathrm{Gals}}=2.35\cdot
10^{-3}\,\mathrm{Mpc}^{-3}$, which considers galaxies that contribute to the top
50% of the luminosity function. Thus, for a luminosity distance of
$255.85\,\mathrm{Mpc}$ (the upper limit derived for the GW event[12]) one
expects $N_{\mathrm{Gals}}\sim 0.173$ within the CHIME 68% FRB 20190425 error
ellipse (i.e. $\sim 17\%$ chance of coincidence).
### 1.7 Delay time
Within the “blitzar” model for FRBs, the delay time between GW190425 and FRB
20190425A is defined by the survival time of the supramassive neutron star
(SMNS) formed at the merger before it collapsed into the black hole. The
collapse time scale depends on the mass of the SMNS. For the majority of the
cases, the collapse time likely coincides with the spindown timescale, which,
for a magnetic-dipole-dominated spindown [shapiro83], can be estimated as
$\displaystyle t_{\rm sd,md}=\frac{E_{\rm rot}}{L_{\rm sd,md}}=3.0\times
10^{4}~{}s~{}\left(\frac{R_{NS}}{18~{}{\rm
km}}\right)^{-6}\left(\frac{B_{p}}{10^{14}~{}{\rm
Gauss}}\right)^{-2}\left(\frac{P_{i}}{0.8~{}{\rm
ms}}\right)^{2}\left(\frac{I}{8\times 10^{45}~{}{\rm g~{}cm^{2}}}\right),$ (8)
where $E_{\rm rot}=(1/2)I\Omega_{i}^{2}$ is the total rotational kinetic
energy of the SMNS, $L_{\rm sd,md}=B_{p}^{2}R_{s}^{6}\Omega_{i}^{4}/(6c^{3})$
is the spindown luminosity due to magnetic dipole radiation, $\Omega_{i}$ and
$P_{i}$ represent the initial angular velocity and period of the new-born
SMNS, $B_{p}$ stands for the strength of the surface magnetic field at the
pole, and $I$ is the moment of inertia of the SMNS. These parameters have
been normalized to typical values at the maximum spin for an SMNS [gao20]. For
these nominal parameters, a collapse time at $2.5$ hr corresponds to
$B_{p}\sim 1.8\times 10^{14}$ G.
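Eq. (8) follows from $t_{\rm sd,md}=E_{\rm rot}/L_{\rm sd,md}=3Ic^{3}/(B_{p}^{2}R^{6}\Omega_{i}^{2})$; a direct cgs evaluation recovers both the quoted $3.0\times10^{4}$ s normalization and the $B_{p}\sim1.8\times10^{14}$ G field implied by a 2.5 hr collapse:

```python
import numpy as np

C_LIGHT = 3.0e10  # speed of light, cm/s

def t_spindown(B_p, P_i=0.8e-3, R=1.8e6, I=8e45):
    """Magnetic-dipole spindown time t = E_rot / L_sd = 3 I c^3 / (B^2 R^6 Omega^2),
    all in cgs units (Eq. 8 with its nominal parameter values as defaults)."""
    omega = 2 * np.pi / P_i
    return 3 * I * C_LIGHT ** 3 / (B_p ** 2 * R ** 6 * omega ** 2)

print(t_spindown(1e14))  # ~3.0e4 s, matching the prefactor of Eq. (8)

# Since t ~ B^-2, the field implied by a 2.5 hr (9000 s) collapse time is:
print(1e14 * np.sqrt(t_spindown(1e14) / 9000.0))  # ~1.8e14 G
```

The $B^{-2}$ scaling means the inferred field is only weakly sensitive to the assumed collapse time.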
The light curves of a fraction of short GRBs are found to exhibit an X-ray
plateau followed by a steep decay [rowlinson10, rowlinson13, lv15]. These
“internal plateaus” are proposed to arise from the wind emission of an SMNS
and to end with the collapse of the SMNS [14]. The distribution of observed
plateau durations falls within the range of $10^{2}$–$10^{4}$ s [lv15,
Gao2016], consistent with our interpretation of the FRB delay time.
### 1.8 Constraints on $M_{\rm TOV}$
We derive constraints on the universal Tolman-Oppenheimer-Volkoff non-spinning
maximum mass ($M_{\rm TOV}$) by constraining the gravitational mass of the
merger remnant at different post-merger phases. To achieve this, we first
derive the total post-merger baryonic mass (a conserved quantity). According
to GW parameter estimation[13], the gravitational masses of the binary neutron
stars in the source frame are $M_{1}=2.03^{+0.58}_{-0.34}M_{\odot}$ and
$M_{2}=1.35^{+0.26}_{-0.26}M_{\odot}$ with the sum of the two $M_{\rm
tot}=3.39^{+0.31}_{-0.11}M_{\odot}$. The remnant mass extracted from the GW
observation is $M_{\rm tot}^{\rm final}=3.23^{+0.33}_{-0.11}M_{\odot}$ in the
source frame (https://dcc.ligo.org/LIGO-P2000026/public). This is smaller than
$M_{\rm tot}$, indicating a fraction of its mass was radiated away in GWs. For
pre-merger NS, we use the non-spin or low-spin universal relation to estimate
the total baryonic mass $M_{b}$ of the system from its gravitational mass $M$:
$M_{b}=M+0.080M^{2}$, which is applicable to NSs that are not spinning near
the break-up limit [timmes96, gao20]. We can safely assume low spins for
pre-merger NSs, as the spindown timescale (Equation 8) of a millisecond object
is about $1.3\times 10^{9}~{}{\rm yr}~{}(B_{p}/10^{8}~{}{\rm Gauss})^{-2}$.
Therefore, even if the initial spin of a BNS merger member is close to a
millisecond, it should have been spun down over the typical BNS merger
timescale of $\sim 10^{9}$ yr [wandermanpiran15], unless its $B$ field is
lower than a few $10^{8}$ G, which is very rare and has never been observed in
the Milky Way. This
gives the total baryonic mass $M_{\rm b,tot}=3.86_{-0.31}^{+0.70}M_{\odot}$,
including $0.1M_{\odot}$ uncertainty introduced by the $M_{b}-M$ relation due
to the unknown neutron star EoS. During the merger, an order of
$0.06M_{\odot}$ baryonic mass is expected to have been ejected to power
kilonova emission[metzger17], therefore the baryonic mass of the final remnant
may be estimated as $M_{\rm b,rem}=3.80_{-0.31}^{+0.70}M_{\odot}$.
A neutron star post-merger remnant is expected to go through a brief
differential-rotation phase (< 1 second), before forming an essentially
uniformly rotating body[shapiro00, margalit19]. We therefore apply a universal
relation between baryonic mass $M_{b}$ and gravitational mass $M$ for
maximally rotating neutron stars, $M_{b}=M+0.064M^{2}$, to convert the
baryonic remnant mass $M_{\rm b,rem}$ to gravitational remnant mass $M_{\rm
rem}$ [gao20]. This leads to $M_{\rm rem}=3.16^{+0.40}_{-0.24}M_{\odot}$.
Note that $M_{\rm rem}$ is slightly lower than $M^{\rm final}_{\rm tot}$ as
$M^{\rm final}_{\rm tot}$ is not necessarily taken from the rigidly rotating
phase. Since uniform rotation can support a higher mass with a maximum
enhancement factor $M_{\rm rem}/M_{\rm TOV}\sim 1.2$ [43, 38, lasota96] (more
precisely, $1.201\pm 0.017$ [39]), a lower limit of $M_{\rm
TOV}>2.63^{+0.39}_{-0.23}M_{\odot}$ can be placed. The minimum enhancement
factor at the maximum rotation has been shown to be $1.046\pm 0.008$ [39].
This places a limit of $M_{\rm TOV}<3.02^{+0.42}_{-0.25}M_{\odot}$.
If a quark star is formed after the merger, due to the lack of a universal
$M_{b}-M$ relation, the gravitational mass of the remnant is difficult to
estimate. In this case, we use the final mass $M_{\rm tot}^{\rm final}$
extracted from the GW waveform as the remnant mass $M_{\rm rem}$ to constrain
$M_{\rm TOV}$. For rotating quark stars, the enhancement factor can be as high
as $M_{\rm rem}/M_{\rm TOV}\sim 1.4$[46]. On the other hand, since there is no
detailed calculation of the enhancement factor for maximum rotation, we adopt
the enhancement factor $\sim 1$ to represent a most conservative constraint on
the upper limit of $M_{\rm TOV}$ of quick stars. We then find that
$2.31_{-0.08}^{+0.24}M_{\odot}<M_{\rm TOV}<3.23^{+0.33}_{-0.11}M_{\odot}$.
In principle, $M_{\rm TOV}$ can be further constrained given the 2.5 hour
collapse time if the surface magnetic field strength of the post-merger
compact star is known[ravi14]. The combination of short GRB X-ray plateau
observations and theory suggests a collapse time ranging from minutes to
hours[lasky14, gao16, 46]. The observed 2.5 hours is consistent with this
range. For our case, if the magnetic field is very low, the increase of spin
period would be insignificant in 2.5 hours, in which case a $M_{\rm TOV}$
close to the lower limit is required to ensure the remnant collapse after the
star is slightly spun down; If the surface magnetic field is very high, most
of the angular momentum would have been lost in a short time. Then a $M_{\rm
TOV}$ close to the upper limit is required to prevent the supramassive compact
object from collapsing before 2.5 hours. In reality, the surface magnetic
field of the merger remnant is not well constrained. Therefore, the allowed
$M_{\rm TOV}$ range remains broad, as seen in the dark grey region of Figure
4.
### 1.9 Energetics
The estimated fluence of FRB 20190425A is $31.6\pm 4.2$ Jy ms[2]. This is a
lower limit, and is estimated assuming it occurs at CHIME’s beam centre. Using
the CHIME bandwidth of $400$ MHz, and assuming a luminosity distance of
$156\,\mathrm{Mpc}$, we calculate an isotropic equivalent energy of
$E_{\mathrm{FRB}}=3.72\cdot 10^{38}\,\mathrm{erg}$. The emitted FRB power is a
small/negligible fraction of the mass-energy of the remnant in the optimistic
scenario. In contrast, the GW mass-energy emitted in the merger that forms the
remnant is $\sim 7\%$ of the total mass of the merger product, i.e. a BNS
merger event is clearly capable of producing such a burst. If the emission is
beamed, $E_{\mathrm{FRB}}$ will be lower by the beaming factor, while if the
FRB was detected far from the CHIME beam centre, $E_{\mathrm{FRB}}$ will be
much higher.
Figure 7: Flux-DM distribution of CHIME FRBs. The majority of CHIME FRBs have
flux densities below $5\rm\ Jy$. FRB 20190425A (orange square) resides in the
low end of the DM spectrum, but has a somewhat exceptional flux density. Note
that these flux densities are lower limits, as CHIME flux measurements are
derived under the assumption that each burst is detected in the center of the
primary beam.
Here, we also wish to point out the curious brightness of FRB 20190425A.
Figure 7 highlights the flux density distribution of CHIME Catalog 1 FRBs. FRB
20190425A (orange square) exhibits a curiously high flux density for its DM,
with only four more notable outliers.
### 1.10 GRB association
Automated electromagnetic follow-up of the real-time detection of GW190425
(S190425z[GCN190425z]) resulted in the reports of a detection of a marginally
significant excess of gamma-raysINTEGRALGCN3 by the INTEGRAL telescope’s SPI-
ACS systemSavchenko2012. No significant signal was found by other subsystems
on-board INTEGRAL, including IBIS/PICsIT, ruling out a localization for the
source within the IBIS field of view (FoV). Subsequent analysis, correcting
for the local background variance at the time of the event resulted in the
updated report of an event detected by SPI-ACSINTEGRALGCN1 with a fluence
$F=2.9\cdot 10^{-10}$ – $2\cdot 10^{-9}\,\mathrm{erg\,cm^{2}}$ in the
$75-2000\,\mathrm{keV}$ range, six seconds after the GW event, assuming a
duration of $1\,\mathrm{s}$. The false alarm probability[36] is less than
$3\sigma$ however it cannot be ruled out that the event is physical. We find
that the localization provided does not exclude the location of FRB20190425A
nor a significant portion of the localization uncertainty for GW190425,
however it is noted[36] that the south-west arc of the localization contour is
slightly disfavored due to an absence of signal in the IBIS/Veto shield
instruments on-board INTEGRAL. A subsequent independent analysis of
INTEGRAL/SPI-ACS data by [37] reports the same sGRB event with a fluence of
$8\cdot 10^{-8}-2.4\cdot 10^{-6}\,\mathrm{erg\,cm^{-2}}$ with a significance
of $5.5\sigma$, wherein the authors utilize a different calibration model and
find an sGRB of significantly longer duration than in the initial prompt
analysisINTEGRALGCN1[36]INTEGRALGCN3.
We note that this work is complementary to the on-going international
LIGO–Virgo–KAGRA Collaboration effort to detect sub-threshold prompt GWs
associated with CHIME FRBs LVK_CHIME_2022.
### 1.11 Further Discussion
Capturing the FRB counterparts of a GW source is challenging given the fact
that the CHIME primary beamwidth is $1.3^{\circ}$–$2.5^{\circ}$ in RA, with
200 $\deg^{2}$ in FoV [2], covering only around 0.5% of the sky. What is the
chance that an FRB emitted from the post-merger remnant of GW190425 would
occur at a time in which it could be detected by CHIME?. The chance of an
event emitted at a random time, i.e. a random RA, at a declination of
$\delta=30^{\circ}$ occurring in this beamwidth is thus
($1.3^{\circ}$–$2.5^{\circ}$)/($360^{\circ}\cos\delta)=$0.4%–0.8%. On the
other hand, the chance of coincident detection could be improved by several
scenarios. A bright FRB could be detected far from beam centre, as was the
case with a radio flare from SGR 1935+2154, which was observed by CHIME
$22^{\circ}$ from the meridian [4]. This would increase the chance to capture
the FRB to $44^{\circ}/(360^{\circ}\cos\delta)$=14% ($44^{\circ}$ reflects
$22^{\circ}$ both sides of the beam). Furthermore, there is a slight
preference for GWs to be detected over a large sky area around the zenith (and
the nadir) of the LIGO detectors. In our case the GW event was detected
significantly only by the LIGO-Livingston detector whose zenith aligns well
with that of CHIME due to their geographical proximity. Constraining the time
interval to 2.5 hr, while assuming the FRB arises from the same location on
the sky, results in a chance detection probability of
($1.3^{\circ}$–$2.5^{\circ}$)/($37.5^{\circ}\cos\delta)=$4%–8% for in-beam
events, and $22^{\circ}/(37.5^{\circ}\cos\delta)$=68% for sidelobe events
(only one side of the beam counts, since only detections earlier than 2.5 hr
are considered).
Our favoured interpretation of FRB 20190425A is a blitzar event. Another
possibility is that the post-merger magnetar generates multiple repeating
bursts and FRB 20190425A is one of them. If this is the case, the likelihood
of detecting an FRB associated with the GW source would also have increased.
However, we disfavour this possibility for the following reasons: 1.
Observationally, FRB 20190425A is bright and carries the key observational
signatures of non-repeating FRBspleunis2021, e.g. single peak, short duration,
and broad spectrum. These are very different from the bursts detected from
repeating FRB sources, which typically have long widths, narrow spectra, and
very often multiple peaks. 2. Even within the non-repeater population, this
burst is brighter than others and is an outlier of the flux-DM relation of
most CHIME FRBs (Figure 7). This suggests that FRB 20190425A may have a
distinct physical origin from other FRBs. 3. Theoretically, interpreting
FRB20190425A within the blitzar framework already requires a large $M_{\rm
TOV}$ that is marginally consistent with other constraints on the NS EoS. The
repeater scenario requires that the post-merger product survive even longer
than 2.5 hours, which would further escalate the EoS tension with known
results.
The estimated merger rate of BNS (100–1700 Gpc-3 yr-1 LVK_merger_rates_2021)
is far lower than the estimated FRB rate $\sim 10^{5}$ Gpc-3 yr-1 [19,
20]james2022L. Thus blitzar FRBs initiated by BNS mergers can account for at
most 5% of the FRBs exhibiting broadband, single-peaked morphology observed by
CHIME, which themselves account for 30% of the population. If the association
between GW190425 and FRB 20190425A is indeed astrophysical and can be
attributed to the blitzar model, it implies the existence of “subpopulations
amongst subpopulations” of FRBs.
naturemag methods
We acknowledge the custodians of the land this research was conducted on, the
Whadjuk (Perth region) Noongar people and pay our respects to elders past,
present and emerging. This research has made use of data, software and/or web
tools obtained from the Gravitational Wave Open Science Center
(https://www.gw-openscience.org/), a service of LIGO Laboratory, the LIGO
Scientific Collaboration and the Virgo Collaboration. LIGO Laboratory and
Advanced LIGO are funded by the United States National Science Foundation
(NSF) as well as the Science and Technology Facilities Council (STFC) of the
United Kingdom, the Max-Planck-Society (MPS), and the State of
Niedersachsen/Germany for support of the construction of Advanced LIGO and
construction and operation of the GEO600 detector. Virgo is funded, through
the European Gravitational Observatory (EGO), by the French Centre National de
Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica
Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from
Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal,
Spain. This research has made use of the NASA/IPAC Extragalactic Database,
which is funded by the National Aeronautics and Space Administration and
operated by the California Institute of Technology; NASA’s Astrophysics Data
System Bibliographic Services; and the Python libraries
Matplotlib[Matplotlib2007], NumPy[Numpy2011], SciPy[SciPy2019] and
Pandas[pandas, reback2020pandas]. This research has made use of the DSS-2
based on photographic data obtained using The UK Schmidt Telescope. The UK
Schmidt Telescope was operated by the Royal Observatory Edinburgh, with
funding from the UK Science and Engineering Research Council, until 1988 June,
and thereafter by the Anglo-Australian Observatory. The DSS was produced at
the Space Telescope Science Institute under US Government grant NAG W-2166.
AM, FHP and MK utilized the OzSTAR national facility at Swinburne University
of Technology. The OzSTAR program receives funding in part from the Astronomy
National Collaborative Research Infrastructure Strategy (NCRIS) allocation
provided by the Australian Government. LW, FHP and MK acknowledge funding
support from Australian Research Council Centre of Excellence for
Gravitational Wave Discovery (OzGrav) under grant CE170100004. MK acknowledges
the SIRF postgraduate scholarship from the University of Western Australia.
CWJ acknowledges support from the Australian Government through the Australian
Research Council’s Discovery Projects funding scheme (project DP210102103). SA
and BZ acknowledges a Top Tier Doctoral Graduate Research Assistantship
(TTDGRA) at University of Nevada, Las Vegas.
We acknowledge Volodymyr Savchenko and Simon Driver for useful correspondence
regarding the sGRB coincidence and galaxy luminosity function respectively,
Dr. Qi Chu for her knowledge sharing of GW signal extraction, Teresa Slaven-
Blair, Tara Murphy, Dougal Dobie, and Hao Qiu for initial discussions relevant
to this research, Patrick Sutton for valuable comments regarding the
calculation of $P_{S}$, and Vivek Gupta for information regarding the UTMOST
detection of the Crab pulsar.
AM led the GW-FRB coincidence search, GW190425/FRB 20190425A follow-up, chance
probability (temporal and spatial) and significance analysis, drafted the
initial paper and brought it to completion. LW conceived the original idea for
the work, designed the research framework, built the collaboration team,
supervised all aspects of the analysis, and contributed to the writing and
completion of the paper. CWJ jointly conceived the original idea for the work,
contributed to supervision of students in the project, performed the
dispersion measure analysis, assisted with significance analysis, and
contributed to writing the paper. FHP contributed the host galaxy search, GW
parameter estimation, FRB energetics, sGRB context and paper writing and
figure showing FRB-host galaxy coincidences. SA and BZ proposed the
theoretical interpretation to the data, performed constraints on $M_{\rm TOV}$
for neutron stars and quark stars, and contributed to the writing of the
theory part of the paper. MK generated the approximated GW waveform, whitened
the public GW strain data and performed matched filtering to construct the SNR
time series.
The authors declare that they have no competing financial interests.
The CHIME FRB data is publicly available at https://www.chime-frb.ca/catalog.
The public GW event data is available at
https://gracedb.ligo.org/superevents/public/O3/ (general information),
https://dcc.ligo.org/LIGO-T1900685/public (strain data),
https://dcc.ligo.org/LIGO-P2000223/public (GWTC-2 parameter estimation) and
https://www.gw-openscience.org/eventapi/html/GWTC-2/GW190425/v2 (GWOSC event
portal).
Processed data is presented in the tables and figures of the paper. Code used
for processing data is available upon reasonable request to the corresponding
authors.
# Realization of Dirac quantization in loop quantum gravity
Xiangdong Zhang <EMAIL_ADDRESS> Department of Physics, South China University of Technology, Guangzhou 510641, China
Yongge Ma <EMAIL_ADDRESS> Department of Physics, Beijing Normal University, Beijing 100875, China
###### Abstract
The system of gravity coupled to the non-rotational dust field is studied at
both classical and quantum levels. The scalar constraint of the system can be
written in the form of a true physical Hamiltonian with respect to the dust
time. In the framework of loop quantum gravity, the scalar constraint is
promoted to a well-defined operator in a suitable Hilbert space of the coupled
system, such that the physical Hamiltonian becomes a symmetric operator. By
the deparametrized form, a general expression of the solutions to the quantum
scalar constraint is obtained, and the observables on the space of solutions
can be constructed. Moreover, the Dirac quantization procedure can be fully
carried out in loop quantum gravity by this system.
non-rotational dust field, loop quantum gravity, physical Hamiltonian
It is well known that due to the singularities of the big bang and the black
hole interior, classical general relativity (GR) is no longer valid at the
regime where the spacetime curvature becomes divergent. Finding a consistent
theory of quantum gravity has served as one of the main driving forces in
theoretical physics over the past decades QG1 ; QG2 ; QG3 , and various
approaches have been pursued, including string/M-Theory string1 ; string2 ;
string3 ; string4 and loop quantum gravity (LQG) Ro04 ; Th07 ; As04 ; Ma07 .
LQG is notable for its background independence. This background
independent quantization method has been successfully generalized to a few
modified theories of gravity Zh11 ; Zh11b ; Ma18 ; Zhang20 .
The notion of time plays an important role in any quantum gravity theory,
since in diffeomorphism-invariant theories one only has the Hamiltonian
constraint rather than a true Hamiltonian which represents the evolution of
the system Isham94 . To overcome the time problem in quantum gravity Isham94 ,
one usually takes the viewpoint of relational evolution and employs the
deparametrization technique Lewandowski10 ; Lewandowski15 ; RS94 ; Kuchar91 .
This allows one to map the totally constrained theory of canonical GR into a
theory with a true nonvanishing Hamiltonian with respect to some chosen
dynamical (emergent) time variable. The deparametrization formalism has been
realized to a certain extent in a few systems, including the massless scalar
field RS94 ; Lewandowski10 ; Lewandowski15 and dust fields Kuchar91 ; Husain15 ;
Thiemann15 . The combination of LQG with the deparametrization framework makes
it possible to solve the quantum Hamiltonian constraint. In the literature,
there exist two different strategies to solve the coupled Hamiltonian
constraint. The first strategy is to impose some gauge conditions or solve
some constraint (usually the diffeomorphism constraint) at the classical level
to simplify the model, and then to deparametrize the Hamiltonian constraint
and quantize the model Husain15 ; Thiemann10 . In Husain15 , a particular time
gauge $t=T$ is adopted, where $T$ is the configuration variable of the non-
rotational dust while $t$ represents the time of the system. This means that
the time reparametrization is no longer a gauge symmetry Pawlowski12 ;
Lewandowski17 . Also, the authors in Lewandowski10 ; Thiemann15 consider
another possibility of using the diffeomorphism constraint to re-express the
Hamiltonian constraint and then quantizing the deparametrized Hamiltonian.
This treatment usually requires solving the diffeomorphism constraint at the
classical level or restricting all discussions to diffeomorphism-invariant
quantum states. However, how to express the gravitational diffeomorphism
constraint as an operator is still unclear in this treatment.
In the second strategy proposed in Lewandowski15 , one quantizes the coupled
system of gravity and a massless scalar field in the usual way of LQG, and
then deparametrizes the system after quantization and tries to obtain
solutions of the quantum constraints. The merit of this strategy is that the
full Hamiltonian constraint is realized at quantum level, and it is possible
to define a true Hamiltonian operator to represent evolution and to find the
physical solutions. However, in this model, the commutator of two Hamiltonian
operators does not vanish and hence no nontrivial solutions could be obtained.
In the present letter, we will extend this favorable strategy to the coupled
system of gravity and the non-rotational dust field which was regarded as a
realistic matter field to deparametrize GR Pawlowski12 ; Thiemann15 ;
Lewandowski17 ; Husain15 ; Thiemann10 . Our purpose is to quantize the coupled
system without imposing any gauge fixing or solving any constraint before
quantization. We will show that the scalar constraint can be promoted to a
well-defined operator in a suitable Hilbert space of the coupled system, such
that the physical Hamiltonian becomes a symmetric operator. By the
deparametrized form, a general expression of the solutions to the quantum
scalar constraint is obtained for the first time, and the observables on the
space of solutions can be constructed.
The action for the non-rotational dust coupled to gravity reads Kuchar95 ;
Pawlowski12 ; Thiemann15 ; Lewandowski17 ; Husain15 ; Thiemann10
$\displaystyle S$ $\displaystyle=$ $\displaystyle\frac{1}{2}\int
d^{4}x\sqrt{-g}\left[\frac{1}{\kappa}R+M\left(g^{ab}(\partial_{a}T)\partial_{b}T+1\right)\right],$
(1)
where $g$ denotes the determinant of the spacetime metric $g_{ab}$, $M$ is the
rest mass density of the dust field and $\kappa=8\pi G$ with $G$ being the
Newton’s gravitational constant. In the Hamiltonian formalism, the
diffeomorphism and Hamiltonian constraints of the coupled system read
respectively
$\displaystyle C_{a}(x)$ $\displaystyle=C^{gr}_{a}(x)+\pi(x)T_{,a}(x)=0,$ (2)
$\displaystyle C^{tot}$
$\displaystyle=C^{gr}(x)+\frac{1}{2}\left(\frac{\pi^{2}}{M\sqrt{q}}+M\sqrt{q}\left(1+q^{ab}T_{,a}T_{,b}\right)\right)$
$\displaystyle=0,$ (3)
where $T_{,a}\equiv\partial_{a}T(x)$, $q$ denotes the determinant of the
spatial metric $q_{ab}$, $C^{gr}_{a}(x)$ and $C^{gr}(x)$ are the gravitational
diffeomorphism constraint and Hamiltonian constraint of GR respectively,
$\pi(x)$ is the conjugate momentum of $T(x)$ satisfying Pawlowski12 ;
Lewandowski17 ; Husain15
$\displaystyle\pi(x)=\pm M\sqrt{q}\sqrt{1+q^{ab}T_{,a}(x)T_{,b}(x)}.$ (4)
Following the same convention in Thiemann15 ; Husain15 and substituting Eq.
(4) into (3), one obtains
$\displaystyle C^{tot}$
$\displaystyle={\left|{C^{gr}(x)}\right|}+\pi(x)\sqrt{1+q^{ab}T_{,a}(x)T_{,b}(x)}=0.$
(5)
Since the variable $T$ represents the proper time of the dust particles, one
has $1+q^{ab}T_{,a}T_{,b}\neq 0$. Hence the constraint (5) is equivalent to
Thiemann15
$\displaystyle
C^{tot}=\pi(x)+\frac{{\left|{C^{gr}(x)}\right|}}{\sqrt{1+q^{ab}T_{,a}T_{,b}}}=0.$
(6)
While the term $\mathscr{T}\equiv\frac{1}{\sqrt{1+q^{ab}T_{,a}T_{,b}}}$ looks like
an obstacle to the quantization of (6), our key observation is that it can be
re-expressed in a form suitable for the loop quantization.
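Before dealing with the term $\mathscr{T}$, the algebra leading from (3) to (5) can be spot-checked symbolically. The sketch below is our own verification (not code from the paper), writing $X$ for $q^{ab}T_{,a}T_{,b}$ and picking the negative branch of Eq. (4); it confirms that the dust part of (3) collapses to $M\sqrt{q}(1+X)=-\pi\sqrt{1+X}=|\pi|\sqrt{1+X}$, so that $C^{tot}=0$ forces $C^{gr}\leqslant 0$ and (5) follows.

```python
import sympy as sp

# X stands in for q^{ab} T_{,a} T_{,b}; M, q > 0 and X >= 0 are assumed here.
M, q, X = sp.symbols('M q X', positive=True)
pi = -M*sp.sqrt(q)*sp.sqrt(1 + X)   # negative branch of Eq. (4)

# Dust part of the Hamiltonian constraint (3) after substituting Eq. (4):
dust = sp.Rational(1, 2)*(pi**2/(M*sp.sqrt(q)) + M*sp.sqrt(q)*(1 + X))

assert sp.simplify(dust - M*sp.sqrt(q)*(1 + X)) == 0   # dust term = M*sqrt(q)*(1+X)
assert sp.simplify(dust + pi*sp.sqrt(1 + X)) == 0      # i.e. dust = -pi*sqrt(1+X), as in Eq. (5)
```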
In the connection-dynamical formalism of GR, the basic canonical pair for
gravity is the $su(2)$-valued connection $A^{i}_{a}$ and the densitized triad
$E^{a}_{i}$ with basic Poisson bracket Th07
$\displaystyle\\{A^{i}_{a}(x),E^{b}_{j}(y)\\}=\kappa\beta\delta^{b}_{a}\delta^{i}_{j}\delta(x,y),$
(7)
where $\beta$ is the Barbero-Immirzi parameter. Based on the connection
formalism of GR, the kinematical structure of LQG has been constructed
rigorously As04 ; Ma07 . However, the quantum dynamics of LQG encoded in the
Hamiltonian constraint remains an open issue. The Hamiltonian constraint
operators proposed in Lewandowski15 ; Lewandowski15b do not generate new
vertices on the graphs of the cylindrical functions and hence are symmetric in
the Hilbert space consisting of the states that are diffeomorphism invariant
up to the vertices of their graphs. Another symmetric Hamiltonian constraint
operator has also been proposed in Ma15 , which does generate new vertices and
is well defined in the Hilbert space of the states that are diffeomorphism
invariant up to the non-planar vertices with valence higher than three. Thus
there are consistent ways to define the Hamiltonian constraint operator
corresponding to $C^{gr}$ in Eq.(6). To overcome the remaining obstacle of the
term $\mathscr{T}$, we notice the following classical identity
$\displaystyle\mathscr{T}$ $\displaystyle=$
$\displaystyle\frac{\sqrt{q}}{\sqrt{q+E_{i}^{a}E_{i}^{b}T_{,a}T_{,b}}}=\frac{2}{\kappa\beta\sqrt{q}}E_{i}^{a}\\{A_{a}^{i},S[1]\\}-\frac{2S}{\sqrt{q}},$
(8)
where
$q=\frac{1}{3!}{\left|{\varepsilon_{abc}\varepsilon^{ijk}E_{i}^{a}E_{j}^{b}E_{k}^{c}}\right|}$
and $S[f]\equiv\int d^{3}xf(x)S(x):=\int
d^{3}xf(x)\sqrt{q(x)+E_{i}^{a}E_{i}^{b}T_{,a}T_{,b}}$. Therefore, the smeared
version of Hamiltonian constraint (6) reads
$\displaystyle C(N)$ $\displaystyle=$
$\displaystyle{\int_{\Sigma}}d^{3}xN(\pi(x)+h(x))=0$ (9)
where
$\displaystyle h(x)$ $\displaystyle\equiv$
$\displaystyle{\left|{C^{gr}(x)}\right|}\left(\frac{2}{\kappa\beta\sqrt{q}}E_{i}^{a}\\{A_{a}^{i},S[1]\\}-\frac{2S(x)}{\sqrt{q}}\right)$
(10) $\displaystyle=:$ $\displaystyle h_{1}(x)+h_{2}(x).$
Eq.(9) implies that one can define a physical Hamiltonian $h(x)$ which
generates the evolution of the system with respect to the dynamical dust
"time" $T$.
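The identity (8) admits a quick formal check. With the bracket (7), $\\{A_{a}^{i}(x),S[1]\\}=\kappa\beta\,\delta S[1]/\delta E^{a}_{i}(x)$, so the prefactor $\kappa\beta$ cancels, and since $q$ is cubic while $E_{i}^{a}E_{i}^{b}T_{,a}T_{,b}$ is quadratic in the triad, Euler's theorem for homogeneous functions determines $E_{i}^{a}\,\delta S/\delta E^{a}_{i}$ pointwise. The sympy sketch below (our own toy parametrization, with $q_{0}$ and $Y_{0}$ standing for the two terms under the square root) verifies the resulting algebraic identity:

```python
import sympy as sp

lam, q0, Y0 = sp.symbols('lam q0 Y0', positive=True)

# Scale the triad E -> lam*E: q (cubic in E) -> lam**3*q0 and
# Y := E_i^a E_i^b T_{,a} T_{,b} (quadratic in E) -> lam**2*Y0, so by Euler's
# theorem E . dS/dE = (lam * dS/dlam)|_{lam=1} for S = sqrt(q + Y).
S = sp.sqrt(lam**3*q0 + lam**2*Y0)
E_dS_dE = (lam*sp.diff(S, lam)).subs(lam, 1)

# Eq. (8), with kappa*beta already cancelled against the bracket (7):
lhs = 2/sp.sqrt(q0)*E_dS_dE - 2*sp.sqrt(q0 + Y0)/sp.sqrt(q0)
rhs = sp.sqrt(q0)/sp.sqrt(q0 + Y0)
assert sp.simplify(lhs - rhs) == 0
```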
To quantize the non-rotational dust, we will employ the polymer quantization
such that the "exponential" of the integrated momentum of $T$ is well represented
Lewandowski15 , which is suitable to construct an operator corresponding to
(8). Thus the basic variables are quantized as Lewandowski15
$\displaystyle\hat{T}(x)|{\phi}\rangle=\phi(x)|{\phi}\rangle,$ (11)
$\displaystyle\exp{(-\frac{i}{\hbar}\int
d^{3}xp(x)\hat{\pi}(x))}|{\phi}\rangle=|{\phi+p}\rangle,$ (12)
where $|{\phi}\rangle$ represents a normalized basis in the Hilbert space
$\mathcal{H}_{T}$ of the dust field, such that
$\displaystyle\phi[\pi]:={\langle{\pi}|{\phi}\rangle}=\exp{\left(-\frac{i}{\hbar}\int
d^{3}x\phi(x)\pi(x)\right)}.$ (13)
The diffeomorphisms $\varphi$ of $\Sigma$ act unitarily in $\mathcal{H}_{T}$
by
$\displaystyle\hat{U}_{\varphi}\phi[\pi]=\phi[\varphi^{*}\pi]=(\phi\circ\varphi^{-1})[\pi].$
(14)
Moreover, since $|{\phi}\rangle$ are eigenstates of the self-adjoint operator
$\hat{T}(x)$, the operator $\widehat{T,_{a}}(x)$ corresponding to
$\partial_{a}T(x)$ can be defined as
$\displaystyle\widehat{T,_{a}}(x)|{\phi}\rangle=\left(\partial_{a}\phi(x)\right)|{\phi}\rangle.$
(15)
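The relations (11), (12) and (15) can be illustrated with a small finite toy model. The snippet below (our own illustration; the names `U` and `T_hat` are not from the paper) models a basis ket $|{\phi}\rangle$ over two vertices and checks the Weyl-type relation implied by (11)-(12), namely that conjugating $\hat{T}(x)$ by the shift operator adds $p(x)$ to its eigenvalue:

```python
# Toy polymer representation on a finite vertex set.  A basis ket |phi> is
# modeled as (phi, amplitude) with phi a dict vertex -> value.

def U(p, ket):
    """Shift operator of Eq. (12): |phi> -> |phi + p>."""
    phi, amp = ket
    return ({v: phi[v] + p[v] for v in phi}, amp)

def T_hat(x, ket):
    """Eq. (11): T-hat(x) acts diagonally with eigenvalue phi(x)."""
    phi, amp = ket
    return (phi, amp*phi[x])

phi = {'v1': 0.25, 'v2': -1.25}
p = {'v1': 0.5, 'v2': 2.0}
ket = (phi, 1.0)

# Weyl-type relation implied by (11)-(12): U(-p) T-hat(x) U(p) = T-hat(x) + p(x)
lhs = U({v: -p[v] for v in p}, T_hat('v1', U(p, ket)))
rhs = (phi, (phi['v1'] + p['v1'])*1.0)
assert lhs == rhs
```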
The kinematical Hilbert space is a direct product
$\mathcal{H}_{kin}=\mathcal{H}_{T}\otimes\mathcal{H}_{gr}$ of the dust field
part $\mathcal{H}_{T}$ and the geometry part $\mathcal{H}_{gr}$ with the
orthonormal basis $|{\phi}\rangle\otimes|{\gamma,j,i}\rangle$, where $\gamma$
denotes a given finite graph, $j$ labels the $SU(2)$ representations
associated to the edges of $\gamma$, and $i$ labels the intertwiners assigned
to the vertices linking the edges Th07 ; Lewandowski15 . We note that the
Gaussian constraint can be easily solved by the gauge invariant spin-network
states as in LQG, so that the gauge invariant kinematical Hilbert space
$\mathcal{H}^{G}$ for the coupled system can be obtained. The orthonormal
basis in $\mathcal{H}^{G}$ will be still denoted as
$|{\phi}\rangle\otimes|{\gamma,j,i}\rangle$. To define a quantum scalar
constraint operator corresponding to (9), as a first step, we will try to
define an operator $\hat{h}(x)$ in $\mathcal{H}^{G}$. By the point-splitting
method, the smeared term of $h_{2}$ in (10) can be regularized as
$\displaystyle\int d^{3}xN(x)h_{2}(x)$ $\displaystyle=$
$\displaystyle-\lim_{\varepsilon\rightarrow
0}{\int_{\Sigma}}d^{3}x\frac{2}{V_{U_{x}^{\varepsilon}}}N(x){\left|{C^{gr}(x)}\right|}{\int_{\Sigma}}d^{3}y\chi_{\varepsilon}(x-y)S(y),$
(16)
where $\chi_{\varepsilon}(x-y)$ is the characteristic function with width
$\varepsilon$, $V_{U_{x}^{\varepsilon}}$ denotes the volume of an arbitrary
neighborhood $U_{x}^{\varepsilon}$ of size $\varepsilon$ containing the point
$x$. To deal with the second integral on the right-hand side of Eq.
(16), we first notice that the classical expression $\int
d^{3}xf(x)\sqrt{E^{a}_{i}E^{b}_{i}T_{,a}T_{,b}}$ can be defined as an operator
in $\mathcal{H}^{G}$ as Ma05 ; Lewandowski15
$\displaystyle\int
d^{3}xf(x)\sqrt{\widehat{E^{a}_{i}E^{b}_{i}T_{,a}T_{,b}}}\cdot|{\phi}\rangle\otimes|{\gamma,j,i}\rangle$
$\displaystyle=$ $\displaystyle
8\pi\beta\ell^{2}_{p}\left(\sum_{I}\sqrt{j_{I}(j_{I}+1)}\sum_{s=1}^{m_{I}}\int_{e_{I}^{(s)}}fd\phi\right)\cdot|{\phi}\rangle\otimes|{\gamma,j,i}\rangle,$
(17)
where $I$ ranges over the labels of the edges of $\gamma$, and
$e^{(1)}_{I},...,e^{(k)}_{I},...,e^{(m_{I})}_{I}$ are segments of $e_{I}$ such
that $\phi|_{e^{(k)}_{I}}$ is monotonic, and they are oriented in such a way
that the scalar field $\phi$ is increasing along each of them. Then, to
regularize this integral denoted by $S[\chi_{\varepsilon}]$, we introduce a
family of partitions of $\Sigma$, which is parameterized by some scale
$\delta$ such that
$\displaystyle\Sigma=\bigcup_{r}\Sigma_{r}^{\delta}.$ (18)
Then in the limit of $\delta\rightarrow 0$, the integral in
$S[\chi_{\varepsilon}]$ can be taken into each of the two terms before
taking the square root Lewandowski15 . Moreover, one can suitably choose the
intertwiners $i$ such that the basis
$|{\phi}\rangle\otimes|{\gamma,j,i}\rangle$ consists of the eigenstates of the
volume operator as well as the operator in Eq.(17). Then one obtains
$\displaystyle\int_{\Sigma}d^{3}y\chi_{\varepsilon}(x-y)\sqrt{\widehat{q}+\widehat{E^{a}_{i}E^{b}_{i}T_{,a}T_{,b}}}\cdot|{\phi}\rangle\otimes|{\gamma,j,i}\rangle$
$\displaystyle=\left[8\pi\beta\ell^{2}_{p}\left(\sum_{I}\sqrt{j_{I}(j_{I}+1)}\sum_{s=1}^{m_{I}}\int_{e_{I}^{(s)}}\chi_{\varepsilon}(x-y)d\phi\right)+\sum_{\alpha^{\prime}}V_{v_{\alpha^{\prime}}}\chi_{\varepsilon}(x-v_{\alpha^{\prime}})\right]|{\phi}\rangle\otimes|{\gamma,j,i}\rangle,$
(19)
where $V_{v_{\alpha^{\prime}}}$ denotes the eigenvalue of the volume operator
at the vertex $v_{\alpha^{\prime}}$ of the graph $\gamma$. Since the regulated
gravitational Hamiltonian constraint operator $\hat{C}^{gr}_{\delta}$ acts
only on the vertices, Eq. (16) can be promoted to the following operator
$\displaystyle\int_{\Sigma}d^{3}xN(x)\hat{h}^{\delta}_{2}(x)\cdot|{\phi}\rangle\otimes|{\gamma,j,i}\rangle$
$\displaystyle=-2\lim_{\varepsilon\rightarrow
0}\sum_{\alpha}\left(N(v_{\alpha})\hat{V}^{-1}_{U^{\varepsilon}_{v_{\alpha}}}{\left|{\hat{C}^{gr}_{\delta}(v_{\alpha})}\right|}\right)\left[8\pi\beta\ell^{2}_{p}\left(\sum_{I}\sqrt{j_{I}(j_{I}+1)}\sum_{s=1}^{m_{I}}\int_{e_{I}^{(s)}}\chi_{\varepsilon}(v_{\alpha}-y)d\phi\right)+V_{v_{\alpha}}\right]|{\phi}\rangle\otimes|{\gamma,j,i}\rangle$
$\displaystyle=-2\sum_{\alpha}\left(N(v_{\alpha})\hat{V}^{-1}_{v_{\alpha}}{\left|{\hat{C}^{gr}_{\delta}(v_{\alpha})}\right|}V_{v_{\alpha}}\right)|{\phi}\rangle\otimes|{\gamma,j,i}\rangle,$
(20)
where $\hat{V}^{-1}_{v_{\alpha}}$ denotes the inverse volume operator at the
vertex $v_{\alpha}$ Ma15 ; YangMa16 . Note that, in order to have a well-
defined adjoint operator $\hat{C}^{gr}_{\delta}{}^{\dagger}$, we have used the
freedom of choosing the spin representations attached to each newly added loop
by $\hat{C}^{gr}_{\delta}$ to ensure that the valence of any vertex of the
spin network state would not be changed by its action Ma15 ;
ZhangLewandowskiMa18 , and then
${\left|{\hat{C}^{gr}_{\delta}(v_{\alpha})}\right|}$ should be understood as
${\left|{\hat{C}^{gr}_{\delta}(v_{\alpha})}\right|}=\sqrt{\hat{C}^{gr}_{\delta}(v_{\alpha})\hat{C}^{gr}_{\delta}{}^{\dagger}(v_{\alpha})}$
Sahlmann15 . Note also that the last step in Eq. (20) is taken because of
$\lim_{\varepsilon\rightarrow
0}\int_{e_{I}^{(s)}}\chi_{\varepsilon}(v_{\alpha}-y)d\phi=0$. It is surprising
that the dust part dropped out of the final action of (20) due to its coupling
to the gravitational part. Now we consider the quantization of the term
$h_{1}$ in Eq.(9). The densitized triad can be expressed as
$E^{a}_{i}=\frac{1}{2}\varepsilon_{ijk}\varepsilon^{abc}e^{j}_{b}e^{k}_{c}$
with $e^{j}_{b}=\frac{2}{\beta}\\{A^{j}_{b}(x),V(x)\\}$, and by point-
splitting the smeared term of $h_{1}$ can be regularized as
$\displaystyle{\int_{\Sigma}}d^{3}xN(x)h_{1}(x)$ $\displaystyle=$
$\displaystyle\frac{2}{\kappa\beta}\lim_{\varepsilon\rightarrow
0}{\int_{\Sigma}}d^{3}y\frac{E^{a}_{i}\\{A_{a}^{i},S[1]\\}(y)}{V_{U_{y}^{\varepsilon}}}\chi_{\varepsilon}(x-y){\int_{\Sigma}}d^{3}xN(x){\left|{C^{gr}(x)}\right|}.$
(21)
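The passage from the classical expression (21) to its quantized form below relies on the standard holonomy approximation (a sketch of the well-known Thiemann trick, written here in our own notation, up to sign and $su(2)$ conventions): for a segment $s_{a}$ of parameter length $\delta$ starting at $v_{\alpha}$, one has

$\displaystyle h^{\delta}_{s_{a}}\\{(h^{\delta}_{s_{a}})^{-1},V\\}=-\delta\\{A_{a}^{i},V\\}\tau_{i}+O(\delta^{2}),$

so the co-triad $e^{i}_{a}=\frac{2}{\beta}\\{A^{i}_{a},V\\}$ can be recovered from holonomies and the volume, and the replacement $\\{\cdot,\cdot\\}\rightarrow[\cdot,\cdot]/(i\hbar)$ accounts for the factors of $i\hbar$ and the commutators with the volume operator appearing in (22).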
Again, due to its coupling to the gravitational Hamiltonian $C^{gr}$, the dust
part in the expression of $S[1]$ will drop out of the final action of the
operator. Thus, the quantum analogue of (21) acts on the basis states as
$\displaystyle\int_{\Sigma}d^{3}xN(x)\hat{h}^{\delta}_{1}(x)\cdot|{\phi}\rangle\otimes|{\gamma,j,i}\rangle$
(22) $\displaystyle=$
$\displaystyle\frac{2}{\kappa\beta}\sum_{\alpha}\frac{8N(v_{\alpha})}{\beta^{2}(i\hbar)^{2}}\varepsilon_{ijk}\varepsilon^{abc}\hat{h}^{\delta}_{s_{j}}[(\hat{h}^{\delta}_{s_{j}})^{-1},\sqrt{\hat{V}_{v_{\alpha}}}]\hat{h}^{\delta}_{s_{k}}[(\hat{h}^{\delta}_{s_{k}})^{-1},\sqrt{\hat{V}_{v_{\alpha}}}]\hat{h}^{\delta}_{s_{i}}[(\hat{h}^{\delta}_{s_{i}})^{-1},\hat{V}_{v_{\alpha}}]{\left|{\hat{C}^{gr}_{\delta}(v_{\alpha})}\right|}\cdot|{\phi}\rangle\otimes|{\gamma,j,i}\rangle,$
where $\hat{h}^{\delta}_{s_{j}}$ is the holonomy along the segment $s_{j}$
starting at $v_{\alpha}$ with parameter length $\delta$. Now the physical
Hamiltonian $h(N)\equiv{\int_{\Sigma}}d^{3}xN(x)h(x)$ has been promoted as an
operator $\hat{h}^{\delta}(N)$ in $\mathcal{H}^{G}$. The subtleties here are
how to take the limit $\delta\rightarrow 0$ and whether the final Hamiltonian
operator could be symmetric. It turns out that the limit can be taken in the
gravitational Hilbert space $\mathcal{H}^{g}_{np4}$ of the spin-network states
$({\gamma,j,i}|$ that are diffeomorphism invariant up to the non-planar
vertices with valence higher than three Ma15 . Given a graph $\gamma$
underlying a gauge invariant state $|{\gamma,j,i}\rangle\in\mathcal{H}_{gr}$,
one denotes its non-planar vertices with valence higher than 3 by
$V_{np4}(\gamma)$, the group of all diffeomorphisms preserving
$V_{np4}(\gamma)$ by Diff$(\Sigma)_{V_{np4}(\gamma)}$, and the diffeomorphisms
acting trivially on $\gamma$ by TDiff$(\Sigma)_{\gamma}$. Then the states
$({\gamma,j,i}|$ are defined as
$\displaystyle({\gamma,j,i}|\equiv\eta(|{\gamma,j,i}\rangle):=\frac{1}{N_{\gamma}}\sum_{\varphi}\hat{U}_{\varphi}\cdot|{\gamma,j,i}\rangle,$
(23)
where $\varphi\in
{\rm Diff}(\Sigma)_{V_{np4}(\gamma)}/{\rm TDiff}(\Sigma)_{\gamma}$, $\hat{U}_{\varphi}$
is the unitary representation of $\varphi$ in $\mathcal{H}_{gr}$, and
$N_{\gamma}$ is a normalization factor. Since the states $({\gamma,j,i}|$
belong to the dual space $\mathscr{D}^{*}_{g}$ of a dense subset of
$\mathcal{H}_{gr}$, the inner product between the states in
$\mathcal{H}^{g}_{np4}$ is defined by
${\langle{\eta(f)}|{\eta(g)}\rangle}:=\eta(f)(g)$. Thus, we could naturally
define a symmetric physical Hamiltonian operator
$\hat{h}^{sym}(N)=\frac{1}{2}(\hat{h}(N)+\hat{h}^{{\dagger}}(N))$ therein. Now
we consider the quantization of the term $\int_{\Sigma}d^{3}xN(x)\pi(x)$ in
Eq.(9). Although one cannot define an operator corresponding to this term in
$\mathcal{H}_{T}$, such an operator does exist in the dual space
$\mathscr{D}^{*}_{T}$ of a dense subset of $\mathcal{H}_{T}$ with state
$({\varphi}|$. The action of this operator reads Lewandowski15
$\displaystyle\left[\left(\int
d^{3}x\hat{\pi}(x)N(x)\right)^{*}({\varphi}|\right]|{\phi^{\prime}}\rangle$
$\displaystyle:=i\frac{d}{d\epsilon}\left[\exp\left(-i\epsilon\int
d^{3}x\hat{\pi}(x)N(x)\right)^{*}\right]\sum_{\phi}\varphi[\phi]\langle{\phi}||{\phi^{\prime}}\rangle$
(24) $\displaystyle=i\frac{d}{d\epsilon}\varphi[\phi^{\prime}+\epsilon
N]=:i\langle{\delta_{N}\varphi}||{\phi^{\prime}}\rangle.$
Hence, one has
$\displaystyle\hat{\pi}[N]\cdot({\varphi}|\equiv\left(\int
d^{3}x\hat{\pi}(x)N(x)\right)^{*}({\varphi}|=i({\delta_{N}\varphi}|.$ (25)
To be compatible with the gravitational Hilbert space $\mathcal{H}^{g}_{np4}$,
we consider only the states in $\mathscr{D}^{*}_{T}$ with the following form
with respect to certain graphs $\gamma$,
$\displaystyle({\Psi,V_{np4}(\gamma)}|=\sum_{\phi}\Psi(\phi(v_{1}),...,\phi(v_{m}))\langle{\phi}|,$
(26)
where the function $\Psi$ depends only on values of $\phi$ taking on the non-
planar vertices of $\gamma$ with valence higher than 3. It is obvious that
$({\Psi,V_{np4}(\gamma)}|$ are invariant under the action of
Diff$(\Sigma)_{V_{np4}(\gamma)}$. A natural inner product can be defined for
these states by
$\displaystyle({\Psi,V_{np4}(\gamma)}|\Psi^{\prime},V_{np4}(\gamma^{\prime})):=\sum_{\phi,\phi^{\prime}}\bar{\Psi}(\phi(v_{1}),...,\phi(v_{m}))\Psi^{\prime}(\phi^{\prime}(v_{1}),...,\phi^{\prime}(v_{m}))\delta_{\gamma,\gamma^{\prime}}{\langle{\phi}|{\phi^{\prime}}\rangle}.$
(27)
It turns out that the operator in (25) keeps the space of the states
$({\Psi,V_{np4}(\gamma)}|$ invariant and hence is well defined in the Hilbert
space $\mathcal{H}^{T}_{np4}$ as the completion of $\mathscr{D}^{*}_{T}$ with
respect to the inner product (27). Thus the whole scalar constraint (9) has
been promoted as a well defined operator in
$\mathcal{H}^{T}_{np4}\otimes\mathcal{H}^{g}_{np4}$.
We denote the states in $\mathcal{H}^{T}_{np4}\otimes\mathcal{H}^{g}_{np4}$ by
$({\Psi,\gamma,j,i}|\equiv({\Psi,V_{np4}(\gamma)}|\otimes({\gamma,j,i}|$ and
consider how to obtain the solutions to the quantum constraint
$\displaystyle\left(\hat{\pi}[N]+\hat{h}^{sym}(N)\right)\cdot({\Psi,\gamma,j,i}|=0.$
(28)
Taking into account the fact that $\hat{h}^{sym}(N)$ commutes with
$\hat{\pi}[N]$ and the following commutator
$\displaystyle\left[\hat{\pi}[N(x)],\hat{T}(y)^{*}\right]=iN(y),$ (29)
where $\hat{T}(y)^{*}$ is the dual of $\hat{T}(y)$ in $\mathcal{H}^{T}_{np4}$,
we have the following identity,
$\displaystyle\left(\hat{\pi}[N]+\hat{h}^{sym}(N)\right)({\Psi,\gamma,j,i}|=e^{i\int
d^{3}x\hat{T}^{*}\hat{h}^{sym}}\hat{\pi}[N]e^{-i\int
d^{3}x\hat{T}^{*}\hat{h}^{sym}}({\Psi,\gamma,j,i}|.$ (30)
Therefore, the general solutions to the constraint equation (28) can be
written as
$\displaystyle({\Psi,\gamma,j,i}|=e^{i\int
d^{3}x\hat{T}^{*}\hat{h}^{sym}}({\Psi_{0},\gamma,j,i}|,$ (31)
where the functional $({\Psi_{0},V_{np4}(\gamma)}|$ satisfies
$\displaystyle({\delta_{N}\Psi_{0},V_{np4}(\gamma)}|=0.$ (32)
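The mechanism behind Eqs. (28)-(32) can be reproduced in a single-mode toy model: replace the dust field by one variable $\phi$, let $\hat{\pi}[N]$ act as $iN\,d/d\phi$ in line with (25), let $\hat{T}^{*}$ act by multiplication with $\phi$, and take $\hat{h}^{sym}$ to be a constant $h$ that commutes with $\hat{\pi}[N]$. The sympy sketch below (our own reduction, not the field-theoretic statement) checks the commutator (29) and verifies that the exponentiated ansatz of (31) solves the constraint (28):

```python
import sympy as sp

phi, N, h, c = sp.symbols('phi N h c')
Psi = sp.Function('Psi')(phi)

def pi_N(f):
    """pi-hat[N] in the dual representation, cf. Eq. (25): i times the N-derivative."""
    return sp.I*N*sp.diff(f, phi)

# Eq. (29): [pi-hat[N], T-hat*] = i*N  (here T-hat* is multiplication by phi)
comm = pi_N(phi*Psi) - phi*pi_N(Psi)
assert sp.simplify(comm - sp.I*N*Psi) == 0

# Eqs. (28) and (31): Psi = exp(i*phi*h)*Psi0 with Psi0 = c constant
# (so that delta_N Psi0 = 0) annihilates pi-hat[N] + h-hat(N), with h-hat(N) = N*h here.
Psi_sol = sp.exp(sp.I*phi*h)*c
assert sp.simplify(pi_N(Psi_sol) + N*h*Psi_sol) == 0
```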
Following the ideas of the relational framework for observables Lewandowski10
, we can construct a large family of observables. Let $\hat{L}$ be a linear
operator in $\mathcal{H}_{gr}$, which is invariant with respect to the action
of Diff($\Sigma$)${}_{V_{np4}(\gamma)}$. Then its dual operator $\hat{L}^{*}$
naturally acts on $({\gamma,j,i}|\in\mathcal{H}^{g}_{np4}$. Consider an operator
$\displaystyle\hat{\mathscr{O}}(L):=e^{i\int
d^{3}x\hat{T}^{*}\hat{h}^{sym}}\hat{L}^{*}e^{-i\int
d^{3}x\hat{T}^{*}\hat{h}^{sym}}.$ (33)
It is obvious that $\hat{\mathscr{O}}(L)$ commutes with the scalar constraint
operator $\hat{C}(N)$ in the weak sense as
$\displaystyle[\hat{\mathscr{O}}(L),\hat{C}(N)]({\Psi,\gamma,j,i}|=0.$ (34)
Also, each such operator $\hat{\mathscr{O}}(L)$ preserves the
space of solutions to the scalar constraint, since
$\displaystyle\hat{\mathscr{O}}(L)({\Psi,\gamma,j,i}|$ $\displaystyle=$
$\displaystyle e^{i\int
d^{3}x\hat{T}^{*}\hat{h}^{sym}}({\Psi_{0}}|\otimes\hat{L}^{*}({\gamma,j,i}|$
(35) $\displaystyle=$
$\displaystyle({\Psi,\gamma^{\prime},j^{\prime},i^{\prime}}|.$
Note that the solutions (31) are not fully diffeomorphism invariant, though
they are invariant with respect to Diff($\Sigma$)${}_{V_{np4}(\gamma)}$.
However, as proposed in Sahlmann15 , one can average them with respect to the
remaining diffeomorphisms to obtain the physical solutions to all the
constraints as
$\displaystyle(\tilde{\eta})[({\Psi,\gamma,j,i}|]:=\sum_{[f]}U_{f}\cdot({\Psi,\gamma,j,i}|,$
(36)
where the sum ranges over $[f]\in$Diff$(\Sigma)/$Diff$(\Sigma)_{V_{np4}(\gamma)}$.
Since the states (36) are in the dual space of a dense subset of
$\mathcal{H}^{g}_{np4}\otimes\mathcal{H}^{T}_{np4}$, we can further introduce
an inner product in their space by
${\langle{\tilde{\eta}(\Psi)}|{\tilde{\eta}(\Phi)}\rangle}:=\tilde{\eta}(\Psi)(\Phi)$. Moreover,
the fully diffeomorphism invariant operators in $\mathcal{H}_{gr}$ can be
promoted to operators in the space of physical solutions by Eq. (33).
Therefore, the Dirac quantization can be fully realized for this model.
To summarize, the Dirac quantization procedure of the gravity model coupled
with a non-rotational dust field has been carried out in the framework of LQG
without imposing any gauge fixing or solving any constraint before
quantization. The main results are the following. First, the term
$\mathscr{T}$ in the scalar constraint (6) has been regularized into the form
(8) suitable for the loop quantization. Second, by employing the standard loop
quantization of the gravitational part and the polymer quantization of the
integrated momentum of the dust field as proposed in Ref. Lewandowski15 , the
scalar constraint (9) has been successfully quantized. The resulting operator
is well defined in the product Hilbert space
$\mathcal{H}^{T}_{np4}\otimes\mathcal{H}^{g}_{np4}$, which is invariant under
the action of Diff($\Sigma$)${}_{V_{np4}(\gamma)}$ for any suitable graph
$\gamma$. Third, the deparametrized form of the scalar constraint enables us
to find a general expression of the solutions to the quantum constraint, and
the observables on the space of solutions can be constructed. Finally, by
averaging the solutions of the scalar constraint with respect to the remaining
diffeomorphisms, the Dirac quantization procedure is fully realized in LQG by
this model. In comparison with the previous treatments of this model, our
treatment leads to a significant difference. In the time-gauge quantization
approach Husain15 ; Pawlowski12 ; Lewandowski17 , the resulting physical
Hamiltonian was solely $\hat{C}^{gr}$, while in the approach of employing the
diffeomorphism constraint Thiemann15 ; Thiemann10 , the physical Hamiltonian
was $\sqrt{(\hat{C}^{gr})^{2}-\widehat{q^{ab}C_{a}C_{b}}}$. However, in our
full Dirac quantization approach, the term $\mathscr{T}$ comes into play so
that additional terms appear in the expression of the physical Hamiltonian
$\hat{h}^{sym}$ by Eqs. (20) and (22). In contrast to the model coupled to a
scalar field Lewandowski15 , in our model the dust field drops out of the
expression of the physical Hamiltonian upon regularization. This subtle change
makes the construction of the solutions (31) possible. It is desirable to
further understand the low energy effective theory of the fully quantized
model and derive its physical predictions. These issues are left for our
future study.
_Acknowledgements_ – This work is supported by NSFC with grant No. 12275087,
No. 11775082, No. 11961131013, No. 11875006, No. 12275022, and “the
Fundamental Research Funds for the Central Universities”.
## References
* (1) S. Weinberg, in General Relativity, An Einstein Centenary Survey, edited by S.W. Hawking and W. Israel (Cambridge University Press, Cambridge, 1980).
* (2) C. Kiefer, Quantum Gravity (Oxford University Press, Oxford, 2007).
* (3) H. K. Hamber, Quantum Gravitation, the Feynman Path Integral Approach (Springer-Verlag, Berlin, 2009).
* (4) M. B. Green, J. H. Schwarz, and E. Witten, Superstring Theory, Cambridge Monographs on Mathematical Physics Vol. 1 and 2 (Cambridge University Press, Cambridge, 1999).
* (5) J. Polchinski, String Theory (Cambridge University Press, Cambridge, 2001), Vol. 1 and 2.
* (6) C. V. Johnson, D-Branes, Cambridge Monographs on Mathematical Physics (Cambridge University Press, Cambridge, 2003).
* (7) K. Becker, M. Becker, and J. H. Schwarz, String Theory and M-Theory (Cambridge University Press, Cambridge, 2007).
* (8) C. Rovelli, Quantum Gravity, (Cambridge University Press, 2004).
* (9) T. Thiemann, Modern Canonical Quantum General Relativity, (Cambridge University Press, 2007).
* (10) A. Ashtekar and J. Lewandowski, Background independent quantum gravity: a status report, Class. Quant. Grav. 21, R53 (2004).
* (11) M. Han, Y. Ma and W. Huang, Fundamental Structure of loop quantum gravity, Int. J. Mod. Phys. D 16, 1397 (2007).
* (12) X. Zhang and Y. Ma, Extension of Loop Quantum Gravity to f( R) Theories, Phys. Rev. Lett. 106, 171301 (2011).
* (13) X. Zhang and Y. Ma, Loop quantum f(R) theories, Phys. Rev. D 84, 064040 (2011).
* (14) Q. Chen and Y. Ma, Hamiltonian structure and connection-dynamics of Weyl gravity, Phys. Rev. D 98, 086014 (2018).
* (15) X. Zhang, J. Yang, and Y. Ma, Canonical loop quantization of the lowest-order projectable Horava gravity, Phys. Rev. D 102, 124060 (2020).
* (16) C. Isham, Prima facie questions in quantum gravity, Lect. Notes Phys. 434, 1 (1994).
* (17) M. Domagala, K. Giesel, W. Kaminski, J. Lewandowski, Gravity quantized: Loop quantum gravity with a scalar field, Phys. Rev. D 82, 104038 (2010).
* (18) J. Lewandowski, H. Sahlmann, Loop quantum gravity coupled to a scalar field, Phys. Rev. D 93, 024042 (2016).
* (19) C. Rovelli and L. Smolin, The physical Hamiltonian in nonperturbative quantum gravity, Phys. Rev. Lett. 72 (1994).
* (20) K. V. Kuchar, C. G. Torre, Gaussian reference fluid and interpretation of quantum geometrodynamics. Phys. Rev. D 43 (1991) 419-441.
* (21) K. Giesel, T. Thiemann, Scalar Material Reference Systems and Loop Quantum Gravity, Class. Quant. Grav. 32, 13 (2015).
* (22) V. Husain, J. Ziprick, 3D gravity with dust: Classical and quantum theory, Phys. Rev. D 91, 124074 (2015).
* (23) K. Giesel, T. Thiemann, Algebraic Quantum Gravity (AQG) IV. Reduced Phase Space Quantisation of Loop Quantum Gravity Class. Quant. Grav. 27, 175009 (2010).
* (24) V. Husain, T. Pawlowski, Time and a physical Hamiltonian for quantum gravity, Phys. Rev. Lett. 108, 141301, (2012).
* (25) M. Assanioussi, J. Lewandowski, I. Makinen, Time evolution in deparametrized models of loop quantum gravity, Phys. Rev. D 96, 024043 (2017).
* (26) J. D. Brown, K. V. Kuchar, Dust as a standard of space and time in canonical quantum gravity. Phys. Rev. D 51, 5600 (1995).
* (27) E. Alesci, M. Assanioussi, J. Lewandowski, I. Makinen, Hamiltonian operator for loop quantum gravity coupled to a scalar field, Phys. Rev. D 91, 124067 (2015).
* (28) J. Yang, Y. Ma, New Hamiltonian constraint operator for loop quantum gravity, Phys. Lett. B 751:343-347, (2015).
* (29) Y. Ma, Y. Ling, Q operator for canonical quantum gravity, Phys. Rev. D 62, 104021 (2000).
* (30) J. Yang and Y. Ma, New volume and inverse volume operators for loop quantum gravity, Phys. Rev. D 94, 044003 (2016).
* (31) C. Zhang, J. Lewandowski, Y. Ma, Towards the self-adjointness of a Hamiltonian operator in loop quantum gravity, Phys. Rev. D 98, 086014 (2018).
* (32) J. Lewandowski, H. Sahlmann, Symmetric scalar constraint for loop quantum gravity, Phys. Rev. D 91, 044022 (2015).
# Risk-Averse Self-Scheduling of Storage in Decentralized Markets ††thanks:
Mr. Yurdakul gratefully acknowledges the support of the German Federal
Ministry of Education and Research and the Software Campus program under Grant
01IS17052. Mr. Billimoria’s Doctor of Philosophy at the University of Oxford
is supported by the Commonwealth PhD Scholarship, awarded by the Commonwealth
Scholarship Commission of the UK (CSCUK).
Ogun Yurdakul12 and Farhad Billimoria3 1Energy Systems and Infrastructure
Analysis Division, Argonne National Laboratory, Lemont, IL 60439, USA.
2Department of Electrical Engineering and Computer Science, Technical
University of Berlin 10623 Berlin, Germany 3Department of Engineering
Science, University of Oxford, Oxford OX1 2JD, United Kingdom
###### Abstract
Storage is expected to be a critical source of firming in low-carbon grids. A
common concern raised from ex-post assessments is that storage resources can
fail to respond to strong price signals during times of scarcity. While
commonly attributed to forecast error or failures in operations, we posit that
this behavior can be explained from the perspective of risk-averse scheduling.
Using a stochastic self-scheduling framework and real-world data harvested
from the Australian National Electricity Market, we demonstrate that risk-
averse storage resources tend to have a myopic operational perspective, that
is, they typically engage in near-term price arbitrage and chase only a few
extreme price spikes and troughs, thus remaining idle in several time periods
with markedly high and low prices. This has important policy implications
given the non-transparency of unit risk aversion and apparent risk in
intervention decision-making.
###### Index Terms:
electricity markets, electricity storage, self-scheduling, risk-aversion.
## I Introduction
In renewable-dominated power systems, electricity storage is expected to play
an important role in maintaining system balance and resource adequacy. In
decentralized markets, system operators (SOs) do not issue commitment
instructions, instead relying upon full-strength price formation and iterative
pre-dispatch systems to signal the need for additional or reduced resource
commitment. During periods of scarcity, understanding the projected available
capacity of resources to service load at intraday system stress points is
critical to the SO’s decision-making on whether to intervene or apply
emergency protocols, such as load shedding.
Storage resources introduce new complexity to the problem because while the
resource may be operationally available, whether it can actually charge or
discharge at a point in time depends upon its state of charge (driven by prior
decisions) and technical constraints. As a case in point, the lack of
transparency on the availability of energy-limited plants was one of the key
drivers of the Australian Energy Market Operator’s (AEMO) decision to suspend
the National Electricity Market (NEM) in June 2022 during a period of extreme
scarcity. In certain cases, storage resources were observed to not discharge
during the highest prices of the day (while discharging at more muted prices
earlier in the day).111Our online companion [1] provides an empirical record
of many such observations of battery dispatch and energy prices for the NEM.
This seemingly non-intuitive behavior is commonly attributed to error or
failure (e.g., in forecasting or operational management) of storage
operations. Our work however proposes a second explanation: that of risk-
averse self-scheduling under uncertainty, a concomitant of which is that such
behavior could well be rational and utility-maximizing, and thus persist in
energy markets.
In the literature, risk-averse self-scheduling of thermal resources has been
extensively studied. Risk-constrained self-scheduling is explored in [2, 3]
which internalize the non-convex attributes of thermal resources.
In [4], the conditional Value-at-Risk measure is used to develop bidding
curves derived from an optimal self-schedule. For storage and in relation to
decentralized markets, inter-temporal tradeoffs are considered from the
perspective of look-ahead bidding to optimize state-of-charge and maximize
profits [5]. In [6] a risk-constrained problem is formulated for a pumped
storage plant to trade off expected profit against risks in energy and
ancillary service markets.
In this paper, we work out a novel risk-averse stochastic optimization
framework for the self-scheduling of storage in decentralized markets under
uncertain prices. In Section II, we develop the mathematical background and
market architecture, leading to the precise mathematical formulation of the
risk-averse scheduling problem in Section III. In Section IV, we conduct
several experiments using real-world data from the NEM and storage
resources bidding therein.
Using the insights we draw from the experiments, we lay out in Section IV two
key contributions of this work. First, we provide a novel explanation for the
seemingly non-intuitive behavior of storage. Specifically, we demonstrate how
the increasing uncertainty of prices with longer horizons can lead a risk-
averse decision-maker to adopt a rational but myopic approach to dispatch. We
illustrate how such a decision-maker can favor low-risk near-term dispatch
decisions at more moderate prices rather than higher-reward but more uncertain
peaks and troughs. Second, we present valuable insights into the sensitivity
of the expected profits to the duration and the degree of risk-aversion of a
storage resource. We observe that while increasing the capacity of a storage
resource can significantly boost profits for a risk-neutral decision-maker, it
barely makes a dent for relatively more risk-averse decision-makers. We set
out concluding remarks and policy implications in Section V.
## II Mathematical Background
We consider a storage resource trading in a decentralized electricity market.
The adopted market setting does not include a centrally organized short-term
forward market for energy and relies solely on the real-time market (RTM) for
the spot trading of energy, settled under a marginal pricing scheme.222In this
paper we are focused upon energy prices and do not consider, at this stage,
ancillary service or other spot markets for which storage is eligible. We
assume that the resource acts as a price-taker, that is, it has no capability
to exercise market power or alter market prices.
We denote by $k$ the index of each time period for which the RTM is cleared
and by $K$ the total number of time periods in the problem horizon. We
represent by $\kappa$ the duration of each time period $k$ in minutes, which
is the smallest indecomposable unit of time in our analysis, during which we
assume the system conditions hold constant. Using our notation, an RTM cleared
for a day at half-hourly intervals would correspond to $\kappa=30$ min and
$K=48$. Define the set $\mathscr{K}\coloneqq\\{k\colon k=1,\ldots,K\\}$.
We denote by $\tilde{\lambda}_{k},k\in\mathscr{K}$, the uncertain energy
price in time period $k$ at the transmission system bus or zone at which the
storage resource is located. We construct the vector
$\bm{\tilde{\lambda}}\coloneqq[\tilde{\lambda}_{1}\cdots\tilde{\lambda}_{k}\cdots\tilde{\lambda}_{K}]^{\mathsf{T}}$.
We assume that the SO determines and publishes pre-dispatch energy prices over
the $K$ time periods in order to provide market participants and itself with
advance information necessary to plan the physical operation of the power
system. We write the relation
$\bm{\tilde{\lambda}}\coloneqq\bm{\bar{\lambda}}+\bm{\tilde{\epsilon}}$, where
$\bm{\bar{\lambda}}$ denotes the vector of pre-dispatch energy prices and
$\bm{\tilde{\epsilon}}$ is a random vector of forecast errors representing the
difference between the pre-dispatch and market-clearing prices.
Typically, the uncertain forecast error exhibits a greater degree of variance
with an extending horizon [1]. While the forecast error between pre-dispatch
and market-clearing prices is likely small for the periods close to the
dispatch interval, the market-clearing prices that eventuate may significantly
deviate from the pre-dispatch prices for forecast horizons well away from the
actual dispatch
time. We use a set of scenarios to model $\bm{\tilde{\epsilon}}$, where
$\bm{{\epsilon}}^{\omega}$ denotes the vector of forecast errors in scenario
$\omega\in\Omega$ and ${\pi}^{\omega}$ denotes its associated probability of
realization. We define the vector of locational marginal prices (LMPs) for each scenario by
$\bm{{\lambda}}^{\omega}\coloneqq\bm{\bar{\lambda}}+\bm{{\epsilon}}^{\omega}$
and write
$\bm{{\lambda}}^{\omega}\coloneqq[{\lambda}_{1}^{\omega}\cdots{\lambda}_{k}^{\omega}\cdots{\lambda}_{K}^{\omega}]$,
where ${\lambda}_{k}^{\omega}$ denotes the energy price in time period $k$ of
scenario $\omega$.
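The scenario construction above can be sketched in a few lines. This is a minimal sketch, not the paper's code; the function names are ours, and equal scenario probabilities are assumed (as with the randomly sampled historical error vectors used in the case studies):

```python
def build_price_scenarios(lam_bar, eps_scenarios):
    # lambda^omega_k = lambda_bar_k + eps^omega_k for every scenario omega
    return [[lb + e for lb, e in zip(lam_bar, eps)] for eps in eps_scenarios]

def uniform_probabilities(n_scenarios):
    # equal weights pi^omega for randomly sampled historical error vectors
    return [1.0 / n_scenarios] * n_scenarios
```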
We next turn to the mathematical modeling of storage. We denote by $p^{c}_{k}$
and $p^{d}_{k}$ the charging and discharging power of the storage resource in
time period $k$ with maximum values $p^{c}_{M}$ and $p^{d}_{M}$, respectively.
To ensure the mutual exclusivity of the charging and discharging modes, and to
reduce the computational burden of the problem, we draw upon the single binary
variable storage formulation presented in [7] and enforce the following
constraints:
$\displaystyle p^{d}_{k}-p_{M}^{d}u_{k}\leq 0,$ $\displaystyle
p^{c}_{k}-p_{M}^{c}(1-u_{k})\leq 0$ $\displaystyle\forall k\in\mathscr{K}.$
(1) $\displaystyle u_{k}\in\\{0,1\\},$ $\displaystyle p^{c}_{k},p^{d}_{k}\geq
0$ $\displaystyle\forall k\in\mathscr{K}.$ (2)
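Constraints (1)-(2) can be checked mechanically for a candidate schedule. The sketch below (function name and signature are ours, not the paper's) verifies that the single binary variable $u_k$ indeed rules out simultaneous charging and discharging:

```python
def check_mode_constraints(p_c, p_d, u, p_c_max, p_d_max):
    """Check constraints (1)-(2) period by period: u_k = 1 permits
    discharging, u_k = 0 permits charging, so the two modes can never
    be active simultaneously."""
    for pc, pd, uk in zip(p_c, p_d, u):
        if uk not in (0, 1):
            return False
        if pc < 0 or pd < 0:
            return False
        if pd > p_d_max * uk:          # p^d_k - p^d_M u_k <= 0
            return False
        if pc > p_c_max * (1 - uk):    # p^c_k - p^c_M (1 - u_k) <= 0
            return False
    return True
```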
We denote the efficiency of the storage unit by $\eta$ (assuming symmetry
between charging and discharging efficiencies). We represent by $E_{k}$ the
stored energy level at the end of the time period $k$ and write the
intertemporal operational constraints:
$\displaystyle E_{k}=E_{k-1}-\frac{1}{\eta}p_{k}^{d}\kappa\frac{1\text{
h}}{60\text{ min}}+{\eta}p_{k}^{c}\kappa\frac{1\text{ h}}{60\text{ min}}$
$\displaystyle\forall k\in\mathscr{K}\setminus\\{1\\},$ (3) $\displaystyle
E_{k}=E_{o}-\frac{1}{\eta}p_{k}^{d}\kappa\frac{1\text{ h}}{60\text{
min}}+{\eta}p_{k}^{c}\kappa\frac{1\text{ h}}{60\text{ min}}$
$\displaystyle\forall k\in\\{1\\},$ (4)
where $E_{o}$ denotes the initial stored energy level at the beginning of the
problem horizon. The stored energy level in each period $k$ is bounded from
above and below by $E_{M}$ and $E_{m}$, respectively:
$\displaystyle E_{m}\leq E_{k}\leq E_{M}\hskip 8.5359pt\forall
k\in\mathscr{K}.$ (5)
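The recursion (3)-(4) with bound (5) can be rolled forward directly to validate a schedule. A minimal sketch, with our own naming; the tolerance is an implementation detail, not part of the model:

```python
def simulate_soc(E0, p_c, p_d, eta, kappa_min, E_min, E_max):
    """Roll the storage balance (3)-(4) forward and enforce bound (5):
    E_k = E_{k-1} - (1/eta) p^d_k kappa/60 + eta p^c_k kappa/60.
    Raises ValueError if any E_k leaves [E_min, E_max]."""
    h = kappa_min / 60.0          # convert period length to hours
    E, traj = E0, []
    for pc, pd in zip(p_c, p_d):
        E = E - (1.0 / eta) * pd * h + eta * pc * h
        if not (E_min - 1e-9 <= E <= E_max + 1e-9):
            raise ValueError("stored energy %.2f MWh violates bounds" % E)
        traj.append(E)
    return traj
```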
In the next section, we harness the mathematical models laid out in this
section to work out a framework for the risk-averse self-scheduling problem of
a storage resource.
## III Risk-Averse Self-Scheduling
Absent a short-term forward market, the storage resource is directly exposed
to the uncertainty in the RTM prices. Many studies in the literature consider
a risk-neutral market participant, for whom the storage resource may incur
large losses in some scenarios as long as those losses are offset by even
greater gains in other scenarios. Such studies, however, do not cater to
risk-averse decision-makers, who constitute the focus of this work and who
prefer to ward off large losses independent of potential profits.
A widely used measure for incorporating risk into decision-making is the
Value-at-Risk (VaR). For a specified risk confidence level $\alpha$ in
$(0,1)$, $\alpha$-VaR is an upper estimate of losses that is exceeded with
probability $1-\alpha$. We denote the $\alpha$-VaR of the loss associated with
a decision $\bm{x}$ by $\zeta_{\alpha}(\bm{x})$. Despite presenting an
intuitive representation of losses, VaR exhibits several undesirable
properties, including not taking account of the losses suffered beyond
$\zeta_{\alpha}(\bm{x})$, non-coherence, and non-convexity when computed using
scenarios. Instead, we draw upon the conditional Value-at-Risk (CVaR) measure
to manage risk in our framework.
For continuous distributions, the $\alpha$-CVaR of the loss for a decision
$\bm{x}$, which we denote by $\phi_{\alpha}(\bm{x})$, is the expected loss
given that the loss is greater than or equal to $\zeta_{\alpha}(\bm{x})$. The
definition of CVaR for discrete distributions, which is the case in our
framework as we rely on scenarios to represent uncertain prices, is more
subtle.
Rockafellar and Uryasev [8] define $\phi_{\alpha}(\bm{x})$ for general
distributions as the weighted average of $\zeta_{\alpha}(\bm{x})$ and the
expected loss strictly exceeding $\zeta_{\alpha}(\bm{x})$.
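For scenario data, this weighted-average definition can be computed directly. The sketch below (our naming, with a small numerical tolerance of our choosing) returns the $\alpha$-CVaR of a discrete loss distribution:

```python
def cvar_discrete(losses, probs, alpha):
    """Alpha-CVaR for a scenario (discrete) loss distribution, following
    the Rockafellar-Uryasev general-distribution definition: a convex mix
    of the alpha-VaR and the expected loss strictly exceeding it."""
    pairs = sorted(zip(losses, probs))
    # alpha-VaR: smallest loss whose cumulative probability reaches alpha
    cum, var = 0.0, pairs[-1][0]
    for loss, p in pairs:
        cum += p
        if cum >= alpha - 1e-12:
            var = loss
            break
    psi = sum(p for l, p in pairs if l <= var)     # P(loss <= VaR)
    tail = sum(p * l for l, p in pairs if l > var) # strict-excess mass
    return ((psi - alpha) * var + tail) / (1.0 - alpha)
```

With ten equally likely losses $1,\ldots,10$ and $\alpha=0.9$, the VaR is 9 and the CVaR is 10, the expected loss in the worst 10% of scenarios.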
From a mathematical standpoint, CVaR presents several appealing features.
Pflug [9] shows that it is a coherent risk measure. Most notably,
$\phi_{\alpha}(\bm{x})$ can be efficiently computed by minimizing a piecewise
linear and convex function [8, Theorem 10], which can be cast as a linear
programming (LP) problem by introducing an additional variable.
A key thrust of our framework is to hedge the decision-maker against the risk
of incurring high charging costs. For an associated confidence level $\alpha$,
the $\alpha$-CVaR of the uncertain charging cost over the problem horizon can
be evaluated by solving the following LP problem:
$\displaystyle\underset{z^{\omega},\zeta}{\text{minimize}}$
$\displaystyle\mathcal{R}_{\alpha}(\bm{x})\coloneqq\zeta+\frac{1}{1-\alpha}\sum_{\omega\in\Omega}\pi^{\omega}z^{\omega},$
(6) subject to $\displaystyle
z^{\omega}\geq\sum_{k\in\mathscr{K}}{\lambda}^{\omega}_{k}p_{k}^{c}-\zeta,\;z^{\omega}\geq
0$ (7)
where the decision vector $\bm{x}$ succinctly represents the storage variables
$p_{k}^{c}$, $p_{k}^{d}$, and $E_{k}$ for all $k$ in $\mathscr{K}$, along with
$\zeta$ and the auxiliary variables $z^{\omega}$ introduced for each $\omega$
in $\Omega$.
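For a fixed charging schedule, the LP (6)-(7) reduces to a one-dimensional minimization over $\zeta$ of $F(\zeta)=\zeta+\tfrac{1}{1-\alpha}\,\mathbb{E}[(\text{loss}-\zeta)^{+}]$, whose minimum is attained at one of the scenario losses, so scanning those candidates solves the LP exactly. A minimal sketch with our own helper names (no $\kappa$ energy scaling, matching the cost expression in (7)):

```python
def charging_cost(prices, p_c):
    # scenario loss in (7): sum_k lambda^omega_k p^c_k
    return sum(lam * pc for lam, pc in zip(prices, p_c))

def cvar_via_minimization(losses, probs, alpha):
    """Minimize the Rockafellar-Uryasev objective (6) over zeta; for
    scenario data the minimizer is one of the scenario losses."""
    def F(zeta):
        return zeta + sum(p * max(l - zeta, 0.0)
                          for l, p in zip(losses, probs)) / (1.0 - alpha)
    return min(F(z) for z in losses)
```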
In conjunction with minimizing the risk of incurring high charging costs,333We
would be remiss if we did not set forth that the resource faces a risk while
discharging as well, that of recording low revenues if the RTM prices
materialize at significantly lower levels than their pre-dispatch
counterparts. We choose to omit this risk in this paper and defer it to our
future work. the decision-maker seeks to maximize the expected profits of the
resource over the $K$ time periods:
$\displaystyle\mathcal{P}(\bm{x})\coloneqq\sum_{\omega\in\Omega}\pi^{\omega}\Big{[}\sum_{k=1}^{K}\lambda^{\omega}_{k}\big{(}p^{d}_{k}-p^{c}_{k}\big{)}\Big{]}.$
(8)
We adjust the trade-off between these seemingly conflicting objectives by
minimizing the weighted combination of $\mathcal{R}_{\alpha}(\bm{x})$ and
$\mathcal{P}(\bm{x})$ subject to the storage constraints laid out in Section
II and the CVaR constraints presented in this section. The risk-averse self-
scheduling (RASS) problem is expressed as:
$\displaystyle\text{RASS}:$ minimize $\displaystyle\hskip
14.22636pt-\mathcal{P}(\bm{x})+\beta\mathcal{R}_{\alpha}(\bm{x}),$ subject to
$\displaystyle\hskip
14.22636pt\text{(1)--(5), (7)}.$
The weight parameter $\beta\in[0,\infty)$ tunes the decision-maker’s degree of
risk-aversion. While increasing values of $\beta$ underscore the decision-
maker’s desire to mitigate risk, driving $\beta$ down toward zero represents a
more risk-neutral decision-maker. Setting $\beta=0$ implies that the sole
objective of the decision-maker is to maximize her profits—independent of the
risk that her decisions entail.
The RASS problem is solved on a rolling-window basis with a shrinking horizon.
At the outset, it is solved for the time periods $k=1\cdots K$ before the
uncertain price for any of the time periods is revealed, yet the optimal
decisions for only the first time period are implemented. After the RTM price
for $k=1$ is observed, the RASS problem is solved again for $k=2\cdots K$,
this time implementing the optimal decision for only $k=2$, and so forth. The
process repeats until the RASS problem is solved for $k=K$ with a single-
period horizon. The binding decisions of each rolling window are passed along
to the subsequent window by explicitly enforcing that the stored energy level
at the end of the first, binding time period of each window be equal to the
initial stored energy level at the beginning of the ensuing window.
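The shrinking-horizon loop can be sketched as follows. Here `solve_rass` is a hypothetical stand-in for the actual MILP solve; names and signature are ours:

```python
def rolling_self_schedule(K, solve_rass, E0):
    """Shrinking-horizon rolling-window loop described in the text.

    solve_rass(first_period, horizon_len, E_init) stands in for the RASS
    solve and must return (p_c, p_d, E) lists over the remaining horizon.
    Only the first-period decision of each window is implemented, and its
    end-of-period stored energy seeds the next window.
    """
    schedule, E_init = [], E0
    for k in range(1, K + 1):
        p_c, p_d, E = solve_rass(k, K - k + 1, E_init)
        schedule.append((p_c[0], p_d[0]))  # binding decision only
        E_init = E[0]                      # SoC continuity across windows
    return schedule
```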
## IV Case Study and Results
In this section, we conduct two case studies to gain insights into the self-
scheduling of a storage resource under different pre-dispatch price signals.
The price data of both case studies are from the NEM. As spelled out below, we
use the actual pre-dispatch prices observed in the NEM in two representative
days to form the vector of pre-dispatch prices in our experiments. To
construct the scenarios $\bm{{\lambda}}^{\omega},\,\omega\in\Omega$, we draw
upon the historical differences between the pre-dispatch and RTM prices that
eventuated in the NEM across 2019, yielding 11,680 observations, from which we
randomly select 100 observations to form the scenarios of each case study. The
price data have a temporal granularity of 30 min, that is, $\kappa=30$ min and
$K=48$. Commensurate with the price data, we harness the data of the storage
resources currently bidding in the NEM [10] in our experiments. Unless
otherwise stated, we pick the risk confidence level as $\alpha=0.95$ and the
storage efficiency as $\eta=0.85$ in all experiments. The data and the source
code of all experiments are furnished in the online companion [1].
### IV-A Case Study I
The storage data of Case Study I are from that of the Victorian Big Battery,
which is a grid-connected storage resource in the Australian state of Victoria
with a charging/discharging power limit of 300.0 MW and an energy storage
capacity of 450.0 MWh. We use the pre-dispatch prices in Victoria for June 12,
2022, which constitutes the last day before the cumulative price threshold was
exceeded in Victoria, triggering the onset of a series of market interventions
that culminated in AEMO suspending the NEM on June 15, 2022. We start out by
solving the RASS problem for $\beta=0$ and $\beta=0.4$. We observe from Fig. 1
that the net discharging power (i.e., discharging power less charging power)
under $\beta=0$, by and large, closely follows the pre-dispatch prices,
effectively exploiting price differences across three time windows, with the
first being between $k=11$ and $k=19$, the second between around $k=24$ and
$k=36$, and the third around the price spike at $k=46$.
Figure 1: Case Study I optimal storage dispatch decisions for $\beta=0$. Each
time period $k$ has a duration of 30 minutes.
We note from Fig. 2 that whereas the risk-neutral decision-maker ($\beta=0$)
waits until the price reaches the early-hour minimum at $k=11$ to start
charging, the risk-averse decision-maker ($\beta=0.4$) charges at its maximum
limit right at the first two time periods, during which the pre-dispatch price
is higher than that at $k=11$. We attribute this seemingly counter-intuitive
behavior to the fact that when the RASS problem is solved at the beginning of
the horizon, charging at $k=1$ entails a lower risk compared to $k=11$, as the
market-clearing price for the initial time periods is expected to hover closely
around the pre-dispatch price. Since the uncertainty in forecast error
increases with the length of the look-ahead period, the risk-averse decision-
maker is driven to store more energy at the beginning of the horizon vis-à-vis
the risk-neutral counterpart.
Figure 2: Case Study I optimal storage dispatch decisions for $\beta=0.4$.
In this light, the risk-averse decision-maker incurs a greater lost
opportunity cost by storing a higher level of energy at $k=1$ and $k=2$ and
discharging very little before $k=11$. This is because, by doing so, it
reaches its maximum capacity at $k=12$ and foregoes its ability to store
energy when the prices are around their lowest ebb of the day at $k=13$ and
$k=14$. In contrast, the risk-neutral decision-maker manages to store 195 MWh
of energy at $k=13$ and $k=14$. Tellingly, in order to store energy when the
prices reach the minimum of the PM hours at $k=27$, the risk-averse decision-
maker winds up discharging at $k=25$. Although it could well have discharged
between $k=15$ and $k=22$ at higher prices and could have recorded greater
revenues, it prefers to discharge at a time period closer to that during which
it charges again. Indeed, the dispatch decisions under $\beta=0.4$ are overall
marred by a myopic focus. The risk-averse decision-maker exploits price
differences primarily at extremely short time windows, such as between $k=10$
and $k=11$ or between $k=25$ and $k=27$, unless the price significantly rises
as around $k=46$. The prevailing myopic focus under $\beta=0.4$ can be
ascribed to the increasing uncertainty in forecast error with longer horizons,
driving the risk-averse decision-maker to arbitrage primarily between shorter
windows. A ramification of this behavior is that the resource fails to respond
to pre-dispatch price signals at time periods during which the system is in
dire need. For instance, after the pre-dispatch price reaches A$481.1/MWh at
$k=33$ (more than a sevenfold increase from that at $k=27$) the risk-averse
decision-maker does not discharge any energy. Similarly, when the pre-dispatch
price precipitously falls around $k=41$, the risk-neutral decision-maker
stores 352.9 MWh, whereas the risk-averse decision-maker does not charge at
all, failing to respond to the available 65.5% price drop. As a result, after
the price spikes at $k=46$, the risk-neutral decision-maker discharges 176.5
MWh more energy compared to the risk-averse decision-maker, greatly aiding the
SO during a period of extreme scarcity.
Figure 3: Case Study I optimal net discharging levels (i.e., discharging less
charging power) under different values of the weight parameter $\beta$.
Fig. 3 makes evident that the so-called myopic focus of the decision-maker
becomes more conspicuous under increasing values of $\beta$. Around the price
spike at $k=19$, the storage resource discharges the highest level of energy
under $\beta=0$, followed by $\beta=0.1$ and $\beta=0.2$, whereas no energy is
discharged under $\beta=0.3$ and $\beta=0.4$. These decisions impart to the
relatively less conservative decision-makers ($\beta=0$, $\beta=0.1$, and
$\beta=0.2$) the capability to store more energy during the price drop around
$k=24$ compared to those under $\beta=0.3$ and $\beta=0.4$. These observations
are echoed in the dispatch decisions throughout the rest of the day. For
instance, as $\beta$ is reduced, the storage resource manages to more closely
follow the pre-dispatch price signal before and around the sharp rise at
$k=46$. Under decreasing values of $\beta$, the storage resource discharges a
higher level of energy around $k=38$, gaining in turn the capability to store
more energy during the dip in prices after $k=41$, which it can discharge when
the pre-dispatch price abruptly climbs at $k=46$.
### IV-B Case Study II
We next examine the battery dispatch decisions for a day in which the pre-
dispatch prices are highly volatile. For this purpose, we draw upon the pre-
dispatch prices in South Australia for January 16, 2019. The storage data are
taken from that of the ESCRI storage resource, which is connected to the grid
in South Australia and has a charging/discharging power limit of 30 MW and a
capacity of 8 MWh.
Figure 4: Case Study II optimal net discharging levels under different values
of the weight parameter $\beta$ for $\alpha=0.95$.
We begin by solving the RASS problem by increasing $\beta$ from $0$ to $0.5$
in $0.1$ increments. The results in Fig. 4 show that the relatively more risk-
neutral cases ($\beta=0$ and $\beta=0.1$) closely follow the pre-dispatch
price signals, charging/discharging at all of the seven price troughs/peaks of
the day. Under $\beta=0.2$ and $\beta=0.3$, however, the storage resource
prefers to arbitrage between only six and four time windows, respectively,
whereas under $\beta=0.4$, it capitalizes on only the widest price spread,
charging and discharging once throughout a highly volatile day. Most notably,
under $\beta=0.4$, the resource relinquishes the chance to take advantage of a
$56.7\%$ price difference between $k=22$ and $k=23$ when the price falls
precipitously, even though the price changes course three periods later and
records a $126.7\%$ rise between $k=23$ and $k=26$.
Figure 5: Total expected profits under different values of $\beta$ and
$E_{M}$.
We observe that the storage fails to reach its maximum charging and
discharging power in the simulations, because if it were to sustain its
maximum charging power over 30 minutes, it would exceed its total energy
capacity. As such, we explore how the recorded profits would have evolved had
the storage resource had a different capacity. To this end, we repeat the
experiments by varying $E_{M}$ from $4.0$ MWh to $24.0$ MWh with $4.0$ MWh
increments. We note from Fig. 5 that, across most values of $\beta$, the total
expected profits rise under growing values of $E_{M}$, which can be ascribed
to an increased ability to leverage price differences under a larger storage
capacity. As $\beta$ increases, however, the profits seem to be plagued by
diminishing returns, and indeed plateau at a certain $E_{M}$ level as $\beta$
increases beyond $0.2$. These observations bring home that risk attitudes (via
increasing $\beta$ values) can create higher hurdles to notching greater
profits, which are not necessarily mitigated by a larger storage capacity.
Figure 6: Total expected profits under different values of $\alpha$ and
$\beta$.
In Fig. 6, we examine the influence of the risk confidence level $\alpha$ as
well as $\beta$ on the total expected profits over the $K$ time periods. Note
that driving $\alpha$ down toward zero brings $\mathcal{R}_{\alpha}(\bm{x})$
toward the expected value of the charging costs, signifying a more risk-
neutral decision-maker. In contrast, increasing $\alpha$ toward one drives
$\mathcal{R}_{\alpha}(\bm{x})$ towards the highest value the costs can take,
representing a more conservative decision-maker. Fig. 6 bears out the
diminished capability of a risk-averse decision-maker to respond to pre-dispatch
price signals and to arbitrage, manifesting itself through dwindling profits
under increasing values of $\beta$ and $\alpha$. These results reaffirm the
well-known relationship between risk and profit, whereby expected profits
climb under less conservative decisions, characterized by the values of
$\alpha$ and $\beta$ approaching zero.
## V Conclusion and Policy Implications
We model the self-scheduling problem of a risk-averse storage resource in
decentralized markets. Four core results are gleaned from the analysis.
First, risk aversion tends to lead to a myopic approach to dispatch, most
notably evident in seeking to arbitrage between near-term intervals rather
than taking charging positions in the expectation of more profitable but risky
revenues much later in the look-ahead period. Second, risk aversion tends to
lead to reduced profits because of the conservative operating stance adopted
by the resource. Third, this conservatism can mean that the resource forgoes
opportunities to dispatch at peak spot prices, which are times of highest
system scarcity. Finally, and importantly, while a higher energy storage
capacity can mitigate low profits for more risk-neutral operators, increasing
the storage capacity has virtually no impact on profitability at higher
risk-aversion levels.
There are important policy implications that flow from this study. First, SOs
need to be acutely aware of the role of risk aversion in storage resource
dispatch and bidding. In particular, higher levels of risk aversion may mean
that storage resources are not available for dispatch at times when
scarcity price signals are most evident. As risk aversion itself is a non-
observable quantity, this introduces uncertainty into an SO’s short-term and
medium term reliability forecasts, and risk into its decision-making on market
intervention. This points to an increasing need for system transparency on key
storage parameters including state-of-charge so that SOs have a better view on
available storage capacity. Finally, it points to the need for the SO itself
to develop a set of tools that quantify such risks and allow for better risk-
adjusted decisions on intervention and other system operations during
scarcity.
## References
* [1] O. Yurdakul and F. Billimoria. Online companion to risk-averse self-scheduling of storage in decentralized markets. [Online]. Available: https://github.com/oyurdakul/pesgm23
* [2] A. Conejo, F. Nogales, J. Arroyo, and R. Garcia-Bertrand, “Risk-constrained self-scheduling of a thermal power producer,” _IEEE Transactions on Power Systems_ , vol. 19, no. 3, pp. 1569–1574, 2004.
* [3] A. Papavasiliou, Y. He, and A. Svoboda, “Self-Commitment of Combined Cycle Units Under Electricity Price Uncertainty,” _IEEE Transactions on Power Systems_ , vol. 30, no. 4, pp. 1690–1701, 2015.
* [4] R. A. Jabr, “Robust self-scheduling under price uncertainty using conditional value-at-risk,” _IEEE Transactions on Power Systems_ , vol. 20, no. 4, pp. 1852–1858, 2005.
* [5] Y. Wang, Y. Dvorkin, R. Fernandez-Blanco, B. Xu, T. Qiu, and D. S. Kirschen, “Look-ahead bidding strategy for energy storage,” _IEEE Transactions on Sustainable Energy_ , vol. 8, no. 3, pp. 1106–1117, 2017.
* [6] J. Kazempour, M. Moghaddam, M. Haghifam, and G. R. Yousefi, “Risk-constrained dynamic self-scheduling of a pumped-storage plant in the energy and ancillary service markets,” _Energy Conversion and Management_ , vol. 50, no. 5, pp. 1368–1375, 2009.
* [7] Y. Chen and R. Baldick, “Battery storage formulation and impact on day ahead security constrained unit commitment,” _IEEE Transactions on Power Systems_ , 2022.
* [8] R. T. Rockafellar and S. Uryasev, “Conditional value-at-risk for general loss distributions,” _Journal of Banking & Finance_, vol. 26, no. 7, pp. 1443–1471, 2002.
* [9] G. C. Pflug, “Some remarks on the value-at-risk and the conditional value-at-risk,” in _Probabilistic constrained optimization_. Springer, 2000, pp. 272–281.
* [10] AEMO. (2022) Market Data NEMWeb - Information Registry. [Online]. Available: https://aemo.com.au/en/energy-systems/electricity/national-electricity-market-nem/data-nem/market-data-nemweb
# On the bright-end of the UV luminosity functions of galaxies at $z\sim
0.6-1.2$
M. Sharma1,2, M. J. Page1, I. Ferreras3,4,5, A. A. Breeveld1
1Mullard Space Science Laboratory, University College London, Holmbury St
Mary, Dorking, Surrey, RH5 6NT, UK
2Isaac Newton Group of Telescopes, C. Álvarez Abreu, 70, E38700 Santa Cruz de
La Palma, La Palma, Spain
3Instituto de Astrofísica de Canarias, Calle Vía Láctea s/n, E38205, La
Laguna, Tenerife, Spain
4Departamento de Astrofísica, Universidad de La Laguna, E38206, La Laguna,
Tenerife, Spain
5Department of Physics and Astronomy, University College London, Gower Street,
London WC1E 6BT, UK E-mail<EMAIL_ADDRESS>(MS)
(Accepted XXX; Received YYY; in original form ZZZ)
###### Abstract
We derive the Ultra-Violet (UV) luminosity function (LF) of star forming
galaxies falling in the redshift range $z=0.6-1.2$, in the rest-frame far-UV
(1500 Å) wavelength. For this work we are in particular interested in the
bright end of the UV LF in this redshift range. The data from XMM-Newton
Optical Monitor (XMM-OM), near-ultraviolet ($1600-4000$ Å) observations over
1.5 deg$^{2}$ of the COSMOS field are employed for this purpose. We compile a
source-list of 879 sources with $\mathrm{UVW1}_{AB}$ extending to $\sim 21$
mag from the wide area UVW1 image of the COSMOS field in the two bins $0.6\leq
z\leq 0.8$ and $0.8\leq z\leq 1.2$. We use maximum likelihood to fit a
Schechter function model to the un-binned data to estimate the parameters
(faint-end slope, characteristic magnitude and normalisation) of the Schechter
function. We find that the shape of the LF is consistent with the Schechter
model and the parameters are in fair agreement with other studies conducted
using direct measurements of the 1500 Å flux. We see a brightening of the
characteristic magnitude as we move from lower (0.7) to higher (1.0) redshift.
The measures for luminosity density are within the error margins of past
studies. We examine the brightest sources in our sample for AGN contribution.
These sources are characterised through their spectral energy distributions
(SEDs), integrated infrared luminosities and morphologies. We also explore
their overlap with the brightest IR galaxies in a similar redshift range.
###### keywords:
galaxies: evolution - ultraviolet: galaxies - ultraviolet: luminosity function
- galaxies: luminosity function
pubyear: 2022. pagerange: On the bright-end of the UV luminosity functions of
galaxies at $z\sim 0.6-1.2$–References
## 1 Introduction
The luminosity function (LF) is one of the most powerful statistical tools
employed to study the distribution of large scale properties (e.g. galaxy
masses, luminosities, star formation rates) of a galaxy population under
consideration. Galaxies can be ranked by how much light they emit, and
counting them in luminosity bins per unit comoving volume gives a number
density traditionally called the luminosity function. In other words, a
luminosity function describes the number density of galaxies per luminosity
interval.
The estimation of galaxy LFs is one of the most important problems in extra-
galactic astronomy and observational cosmology, for it has multiple
applications. An integral of the luminosity function can be used to derive the
luminosity density. This luminosity density scales directly with the star-
formation rate for UV band measurements (e.g. Lilly et al., 1996; Madau et
al., 1998). The LF is widely used to de-project the angular correlation
function using Limber’s equation (Limber, 1953) to estimate the three
dimensional spatial distribution of galaxies, and study the large scale
properties like the correlation length, halo mass, galaxy bias, and halo
occupation (e.g. Hildebrandt et al., 2009a). There are studies proposing to
constrain the primordial non-Gaussianities at the small scales (Sabti et al.,
2021), and to study the primordial power spectrum (Yoshiura et al., 2020),
using the UV LFs at high redshifts. A precise estimate of the faint-end slope
is required to estimate the magnification bias in gravitational lensing
studies (e.g. Narayan & Wallington, 1993), which can be used as an independent
probe to study cosmological parameters such as the total matter density, the
dark matter power spectrum normalisation and also the galaxy bias (e.g.
Scranton et al., 2005; Hildebrandt et al., 2009b).
The theoretically predicted shape from the $\Lambda$CDM model comes in the
form of the dark matter halo mass function (e.g. Shirasaki et al., 2021).
Assuming a baryon fraction and star formation efficiency, the halo mass
function can be compared to an observed stellar mass function (Bullock &
Boylan-Kolchin, 2017). The comparison leads to a mismatch between the two,
which becomes severe at the two extremes (high/low stellar masses or
bright/faint luminosities; Cole et al., 2001; Yang et al., 2003; Behroozi et
al., 2010). Due to this mismatch, much effort is being devoted to
understanding the physical processes that might cause the deviations of the
observed LF shape from the theoretical predictions. This makes
accurate estimates of the LF even more important as these can provide
important insights into the additional physical processes occurring at the
small scales and help us understand the baryon cycle at those scales
(Somerville & Davé, 2015). Variation in the shape of the LF as a function of
redshift can be used as a proxy for the evolution of galaxies between
different epochs in the history of the Universe.
The stellar evolution in a normal galaxy is believed to dictate many radiative
processes occurring in the galaxy, and control the amount of light coming out
of it. UV luminosity of galaxies is of particular interest as it is produced
mainly by massive stars with short lifetimes (Madau & Dickinson, 2014), and
can be used to trace the underlying star formation activity in the galaxies
over a timescale of 100 Myr. The rest frame 1500 Å emission in the UV is used
extensively in the literature as it is one of the most important tracers of
star formation in galaxies (Kennicutt & Evans, 2012).
Using both ground-based and space-borne observatories, many studies have
calculated the UV LF at redshift $>1.2$ (e.g. Yoshida et al., 2006; Hathi
et al., 2010; Parsa et al., 2016; Moutard et al., 2020). In the redshift range
$z<1.2$ however, there have been only a handful of studies so far because 1500
Å emission can only be accessed from space-borne instruments. The first
results on the galaxy UV LF in the redshift range $\left(0.2\leq z\leq
1.2\right)$ were obtained by Arnouts et al. (2005) using the NUV data from the
Galaxy Evolution Explorer satellite (GALEX; Martin et al., 2005). These
results were followed up by Oesch et al. (2010) who used data from Hubble
Space Telescope (HST) WFC3/UVIS instrument to explore the redshift range
$0.5<z<1$, and by Hagen et al. (2015) who used the UV/Optical Telescope (UVOT)
on the Neil Gehrels Swift observatory to calculate the LF in the redshift
region from 0.2 to 1.2. Using ground-based observations from CFHT and the
Subaru Telescope, Moutard et al. (2020), and from the VLT, Cucciati et al.
(2012), have calculated the galaxy LF. Moutard et al. (2020) have re-analysed the GALEX
data from Arnouts et al. (2005) to extend their luminosity function to
redshifts less than 0.9. Very recently Page et al. (2021) have published their
UV LF results in the redshift range of $0.6<z<1.2$ using the observations of
the 13 Hr field taken through the UVW1 filter of the Optical Monitor telescope
onboard the XMM-Newton observatory (XMM-OM, Optical Monitor; Mason et al.,
2001). In Sharma et al. (2022), hereafter paper I, we have used data from
observations of the Chandra Deep Field South (CDFS) taken by the same
instrument (i.e. XMM-OM) to estimate the LF from redshift 0.6 to 1.2. This
work is a follow up study to paper I and we utilise the UVW1 imaging obtained
with the XMM-OM in the wide area Cosmic evolution survey (COSMOS; Scoville et
al., 2007) field. With the wide area of this field we expect to find many
luminous sources in our survey to extend our LF to brighter absolute
magnitudes. We take a look at the properties of the brightest UVW1 sources in
COSMOS, and try to establish a connection between the bright UV galaxies and
their infra-red (IR) counterparts.
The rest of this paper is structured as follows. All the data and processing
(i.e. the observations, UVW1 data, the image processing and the ancillary data
used for redshift information and to identify the stars and active galactic
nuclei), are explained in section 2. The completeness simulations are also
explained in this section. The corrections for Galactic extinction and the
K-corrections are discussed in section 3, before final analysis. The methods
used to compute the binned LF, fit the Schechter function parameters and the
luminosity density are described in section 4. This section also includes the
description of the expected effects of cosmic variance. In addition, we give
more emphasis to the bright-end of the LF in this section. We fit spectral
energy distributions to the sources in the brightest bins of the LF and
perform checks on the possibilities for these sources to be AGN. We present
our results in section 5, and discuss them in 6. Finally, we conclude this
paper in section 7. For this paper we have assumed a flat cosmology with
$\Omega_{\Lambda}=0.7$, $\Omega_{M}=0.3$ and Hubble’s constant $H_{0}=70$ km
s-1 Mpc-1. The distances are calculated in comoving co-ordinates in Mpc. The
AB system of magnitudes (Oke & Gunn, 1983) is adopted throughout this study.
## 2 Observations & Data
### 2.1 XMM-OM observations of the COSMOS field
Figure 1: COSMOS vs CDFS : Here we show the footprint of the COSMOS UVW1 image
(this work) in black colour and over plot the CDFS (paper I) footprint in the
bottom left of the plot in orange. The images are plotted on the same spatial
scale to show the contrast in the sky-area of the surveys. The little black
squares/rectangles in the COSMOS area indicate gaps between the different
pointings, where there is no OM data.
The COSMOS field, with its wide 2 deg2 survey area and multi-wavelength
coverage from X-ray to radio, has been serving studies of galaxy
evolution and large scale structure over a wide range of redshifts. In this
study we use data from XMM-COSMOS (Hasinger et al., 2007; Cappelluti et al.,
2007), a wide field survey of the COSMOS field with the XMM-Newton
observatory. In particular the data taken by the UVW1 filter of the XMM-OM at
a central wavelength of 2910 Å is utilised. The UVW1 data are most relevant to
derive the 1500 Å LF at the redshifts $0.6-1.2$. For this filter XMM-OM
provides a PSF (FWHM) of 2 arcsec. Our catalogue of the XMM COSMOS Survey
consists of imaging from 55 observations between December 2003 and May 2007,
targeting 25 slightly overlapping pointings spanning the COSMOS field. These
25 pointings are arranged in a $5\times 5$ grid. The final image also contains
a $4\times 4$ grid of holes (no data) with typical sizes ranging from 4 to 7.5
arcmin2 (Fig. 1).
All the imaging is produced in the ‘default’ (also known as ‘rudi-5’)
configuration, where a setup of 5 consecutive exposures is used to cover 92
per cent of the $17\times 17$ arcmin2 FOV of the XMM-OM. For details about the
image configurations see Mason et al. (2001). More details about the different
image configurations can also be found in the XMM-Newton User’s
Handbook111https://xmm-
tools.cosmos.esa.int/external/xmm_user_support/documentation/uhb/omdef.html.
### 2.2 XMM-OM data reduction and processing
The processing of the COSMOS data follows the same procedure as in
paper I (i.e. the XMM-OM image processing pipeline with some tweaks). We give a
brief summary of the process here, and refer the reader to paper I and Page et
al. (2012, 2021) for details. The raw data were obtained from the XMM-Newton
Science Archive222https://www.cosmos.esa.int/web/xmm-newton/xsa/ (XSA). The
standard XMM-Newton Science Analysis
System333https://www.cosmos.esa.int/web/xmm-newton/sas task omichain is used
for primary pipeline processing of the data. After the mod-8 pattern
correction, the data products are taken out of the pipeline for additional
processing to remove the background scattered-light feature at the centre of
the images, as well as some other artefacts. The artefacts and their corrections
are explained in section 2.2 of paper I. After correcting for all the cosmetic
effects, the images were then distortion corrected for any non-linear offsets
in pixel positions, aligned with the equatorial coordinate frame of SDSS DR12
(Alam et al., 2015) then rotated and re-binned into sky coordinates using the
SAS task omatt. Finally, the images were co-added using the SAS task ommosaic.
The spatial extent of the final co-added COSMOS image can be seen in Fig. 1 as
compared to the smaller but deeper CDFS image used in the paper I.
The final image is then fed to the SAS task omdetect, which performs the
photometry and source detection, using different algorithms for identifying
point and extended sources. For point-like sources, the default photometric
aperture used by omdetect depends on signal to noise ratio, with brighter
sources measured in a 5.7 arcsec radius aperture, and fainter sources measured
in a 2.8 arcsec aperture (Page et al., 2012). Most of the UVW1-detected
galaxies with redshifts between 0.6 and 1.2 appear point-like to XMM-OM, and
the brightest of these would by default be measured in a 6 arcsec aperture.
Inspection of deep optical images of these luminous galaxies reveals that
photometry in such a large aperture will be contaminated by the light from
other surrounding galaxies. Therefore we have forced omdetect to adopt the
smaller 3 arcsecond radius aperture for bright, as well as faint galaxies,
within our sample. In total $7027$ sources are detected in the UVW1 image
above a signal to noise threshold of 4.
The edges of the images are areas of low signal to noise, resulting in
spurious sources entering the catalogue. To mitigate this problem we mask
the outer 10 pixels of the COSMOS UVW1 image. Due to this masking we lost
0.046 deg2 (2.9 per cent) of sky area and 102 (1.4 per cent) of the sources.
### 2.3 Completeness
Figure 2: Completeness of the source detection as a function of UVW1
magnitude, as determined from the simulations described in Section 2.3. The
black data points represent the fraction of recovered simulated galaxies at
each input UVW1 mag. The confusion limit is represented by the red shaded area
at the bottom of the plot. Figure 3: The distributions of the magnitude errors
in the detection process at each simulated UVW1 magnitude, where the x-axis
shows the magnitude offsets between the inserted and recovered sources and the
y-axis represents the fraction of sources at each offset. Each colour, as
indicated by the discrete colourbar, represents the input magnitude in the
completeness simulations. The inset window shows a part of the plot
stretched along the y-axis for clarity.
The completeness of a survey is affected by various factors, primarily signal
to noise ratio, but also the blending of two or more faint sources into one
i.e. source confusion. In order to quantify how complete (or incomplete) our
galaxy sample is as a function of magnitude, we use the same technique as in
paper I. We briefly discuss the process here and refer the reader to paper I
and Page et al. (2021) for details.
We simulate artificial point sources of varying magnitudes, add them to the
real image and detect them using omdetect (section 2.2). If a source is
detected within a circular region of radius 1 arcsec of the position of the
inserted source, we consider the source as recovered. At least 1000 sources
are used at each input magnitude ranging from UVW1 mag 20 to 25 in steps of
0.25 mag, with smaller step sizes of 0.05, 0.10 and 0.15 between magnitudes
22.25 to 23.50, where completeness drops quickly. Before we calculate the
quantities required for our further analysis, the simulations are corrected
for the reddening and the edge-mask (section 2.2) applied on our final image.
The recovery fraction at a particular magnitude defines the completeness. This
fraction is plotted as a function of UVW1 AB magnitude in Fig. 2 and is used
in section 4.1 to calculate the binned LF. The distributions of the magnitude
offsets between the inserted and recovered sources are used as the error
distributions, as they contain information of the unknown systematics unique
to the XMM-OM, the UVW1 filter and the detection process. These distributions
are plotted as a function of simulated UVW1 magnitudes in Fig. 3. Each colour
represents a simulated UVW1 magnitude ranging from UVW1 mag 20 to 24.75. These
distributions are forward folded with the LF models in order to obtain the
observed distributions, before the maximum likelihood fitting is performed
(section 4.2) to obtain the LF parameters.
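The recovery-fraction computation at the heart of these simulations can be sketched as follows. This is a minimal sketch, not the actual pipeline; the array names and the brute-force nearest-neighbour match are assumptions.

```python
import numpy as np

def recovery_fraction(inserted_xy, recovered_xy, match_radius=1.0):
    """Completeness at one input magnitude: the fraction of inserted
    artificial sources with a detection within `match_radius` (arcsec)
    of their position.  Positions are (N, 2) arrays in arcsec."""
    n_recovered = 0
    for x, y in inserted_xy:
        sep = np.hypot(recovered_xy[:, 0] - x, recovered_xy[:, 1] - y)
        if sep.min() <= match_radius:
            n_recovered += 1
    return n_recovered / len(inserted_xy)
```

Running this over each input magnitude yields the completeness curve of Fig. 2, and the per-match magnitude offsets yield the error distributions of Fig. 3.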
As can be seen in Fig. 2, our catalogue is > 98 per cent
complete for UVW1 < 21 mag, 75 per cent complete at UVW1 $\leq$ 22.5 mag, and
falls below 10 per cent as the UVW1 magnitude goes beyond $23.25$. At UVW1 24 mag the
recovery fraction goes down to $<1$ per cent. From Fig. 3 (see inset), it can
be seen that the magnitude offset distribution lies mostly at negative values
for UVW1 mag 23.75, and entirely so for UVW1 mag > 24, implying that the
recovered sources are brighter than the ones inserted.
This means that sources fainter than UVW1 mag 24 are only detected due to flux
boosting - faint sources blended with the noise spikes. We notice that the
faintest output magnitude of a detected source is 24.25, which is also
apparent from the completeness plot, where beyond magnitude 24.25 the
completeness curve is essentially asymptotic to the magnitude axis.
We conclude that at around this magnitude (UVW1 24.25 mag), we hit the
sensitivity limit for the survey (i.e. at fainter magnitudes than this, we can
not get any detections) and we are not limited by confusion. In order to be on
the safe side, we take the conservative approach similar to paper I and apply
a magnitude limit brighter than the faint detection limit. We choose UVW1
23.11 mag (= 23.02 mag after reddening correction) as the magnitude limit for
our survey. The completeness level at this magnitude is $47$ times the
residual level at the sensitivity limit.
### 2.4 Ancillary data
Once we have our source-list, with UVW1 magnitudes, we combine it with other
catalogues from the literature to collect additional information about our
sources. We use this additional information to identify the stars and AGNs in
our sample, and to assign redshifts to our galaxies. Similar to paper I, we
perform a test to find out the appropriate search radius to cross correlate
our source-list with other catalogues. As we are matching a large catalogue of
UVW1 sources to a very deep multi-wavelength catalogue with a much higher sky
density, a naive positional match has the potential to generate a large number
of spurious matches. Therefore we have explored carefully the matching
criteria.
There should be a limit to how blue the UVW1$-u$ colours of star-forming
galaxies can be. So, a UVW1$-u$ colour cut can be used with a large matching
radius to achieve a high matching rate while keeping the spurious matches to a
minimum. We find this blue limit in Appendix A, by comparing the UVW1 and $u$
colours of stars, QSOs and galaxies. We also check what fraction of the total
sources are spuriously matched in catalogues compiled using different
matching radii. From Figs. 15 and 16, we obtain a blue limit of $-0.5$. Since
we do not consider reddening effects, we place a conservative limit of
UVW1$-u>-1$ on our sources and adopt a matching radius of 1.5 arcsec.
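As an illustration, the colour-constrained cross-match described above can be sketched as follows. This is a minimal small-angle sketch, not the actual matching code; the array names and the simple nearest-neighbour logic are assumptions.

```python
import numpy as np

def crossmatch(uvw1_radec, uvw1_mag, cat_radec, cat_umag,
               radius_arcsec=1.5, colour_floor=-1.0):
    """For each UVW1 source, find the nearest catalogue counterpart within
    `radius_arcsec` and keep the match only if UVW1 - u exceeds
    `colour_floor`.  RA/Dec in degrees; returns the matched catalogue
    index per source, or -1 for no acceptable match."""
    matches = []
    for (ra, dec), m in zip(uvw1_radec, uvw1_mag):
        # small-angle separation in arcsec
        dra = (cat_radec[:, 0] - ra) * np.cos(np.radians(dec))
        ddec = cat_radec[:, 1] - dec
        sep = 3600.0 * np.hypot(dra, ddec)
        j = int(np.argmin(sep))
        if sep[j] <= radius_arcsec and (m - cat_umag[j]) > colour_floor:
            matches.append(j)
        else:
            matches.append(-1)
    return matches
```

The colour floor rejects close-but-implausibly-blue pairings that a pure positional match with a large radius would otherwise admit.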
Figure 4: A joint distribution of the UVW1 AB magnitudes of the sources and
the angular offsets with their respective counterparts after matching them
with Laigle et al. (2016) catalogue using 1.5 arcsec as the maximum offset
radius. The histograms represent the marginal distributions.
#### 2.4.1 Stars
We have used several different sources to identify the stars in our source-
list. The first is the catalogue from Leauthaud et al. (2007), who use two
alternative methods to differentiate stars from other sources. We also use
spectroscopic identifications of stars from Prescott et al. (2006); Trump et
al. (2009); SDSS - III (Alam et al., 2015) and Hasinger et al. (2018).
In addition, we employ GAIA DR2 (Gaia Collaboration et al., 2018) data and
remove sources with significant proper motions. We define significant proper
motions as those whose values exceed three times their standard errors. We
also use Marchesi et al. (2016), who use both
spectroscopic identification and photometric SEDs to identify stars. In total
we identified 1523 stars in our catalogue, which constitute $\sim$ 23 per cent
of sources in our edge-masked COSMOS UVW1 source-list. For the rest of the
analysis, we remove these stars from our source-list.
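The proper-motion cut can be sketched as below. This is a hypothetical helper, assuming the significance test is applied per coordinate (the text does not specify whether the cut is per component or on the total motion); the column values follow Gaia DR2 naming conventions.

```python
import numpy as np

def significant_proper_motion(pmra, pmra_err, pmdec, pmdec_err, nsigma=3.0):
    """Flag sources whose proper motion in either coordinate exceeds
    `nsigma` times its standard error (sketch of the star-removal cut)."""
    pmra, pmra_err = np.asarray(pmra), np.asarray(pmra_err)
    pmdec, pmdec_err = np.asarray(pmdec), np.asarray(pmdec_err)
    return (np.abs(pmra) > nsigma * pmra_err) | (np.abs(pmdec) > nsigma * pmdec_err)
```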
#### 2.4.2 Redshifts
We merge the remainder of our sources with catalogues containing spectroscopic
and photometric redshifts. Brief details of these catalogues follow.
We use redshifts from the hCOSMOS (Damjanov et al., 2018), which surveyed the
COSMOS field with the MMT Hectospec multi-fiber spectrograph. We prioritise
their data from the central $\sim$ 1 deg2 of the field (hCOS20.6), which is
90 per cent complete to a limiting $r$ magnitude of 20.6. We used a set of
spectroscopic redshifts compiled by Alarcon et al. (2021) released as part of
their PAUS photometric catalogue. Then we add data from the zCOSMOS-bright
survey (Lilly et al., 2007; Knobel et al., 2012), which provides high-quality
spectroscopic redshifts up to redshift 1.2 for sources with
$I_{\mathrm{AB}}\leq 22.5$. Next we use the catalogue from Hasinger et al.
(2018), who used the Deep Imaging Multi-Object Spectrograph (DEIMOS) on the
Keck II telescope. We further add spectroscopic redshifts from the following :
Fiber Multi-Object Spectrograph (FMOS) - COSMOS survey (Kashino et al., 2019)
which uses medium resolution spectroscopy to obtain > 5000 redshifts, the
PRism MUlti-object Survey (PRIMUS; Coil et al., 2011) targeting faint galaxies
with the Inamori Magellan Areal Camera and Spectrograph (IMACS) spectrograph
on the Magellan I Baade 6.5 m telescope and the Large Early Galaxy Census
(LEGA-C; van der Wel et al., 2016) Survey of K-band selected galaxies using
the VIMOS spectrograph mounted on ESO’s Very Large Telescope. The last few
spectroscopic redshifts are from Paulino-Afonso, Ana et al. (2020), who used
VIMOS, Prescott et al. (2006) and Trump et al. (2009) who used Hectospec
multiobject spectrograph and IMACS respectively on the MMT 6.5 m telescope.
For photometric redshifts we primarily use the photometric part of the
PAUS (Alarcon et al., 2021) catalogue and the $K_{\mathrm{s}}$-selected
photometric catalogue of Muzzin et al. (2013), which contains photometry
from $0.15-24\,\mu$m. Then we add the Galaxy And Mass Assembly 10h region (G10;
Davies et al., 2015), which contains the best photometric redshifts from
PRIMUS and Ilbert et al. (2009). We also use photometric redshifts from the
following works : the near-infrared selected photometric catalogue created
using imaging from various telescopes covering more than 30 photometric bands
(COSMOS2015; Laigle et al., 2016); the CANDELS-COSMOS survey (Nayyeri et al.,
2017), which is conducted with 42 photometric bands ranging from 0.3 to 8
$\mu$m; a volume limited and mass complete sample of COSMOS2015 (Darvish et
al., 2017) and the updated edition of the Chandra COSMOS-Legacy (C-COSMOS)
Survey (Marchesi et al., 2016). In addition to the above-mentioned catalogues, we
utilise data from the SDSS-III DR12 (Alam et al., 2015) to obtain the
photometric redshifts for our UVW1 sources.
We tabulate all these catalogues in Table 1 with the number of redshifts taken
from each catalogue and the quality flags used to constrain each catalogue to
the best possible (most secure) redshifts. In total we get 4578 redshifts out
of which 3658 ($\sim$ 80 per cent) are spectroscopic. The distribution of the
UVW1 magnitudes of the sources and their distances from counterparts in the
COSMOS2015 catalogue are plotted in Fig. 4.
Table 1: Catalogues used for spectroscopic and photometric
redshifts, along with the number of redshifts and quality flags (QF) used for
each catalogue. $z68$ represents the 68 per cent confidence interval around
each photometric redshift in the photometric catalogues.
Source Catalogue | Numbera | QF
---|---|---
Spectroscopic | |
Damjanov et al. (2018) | 1805 | e_Z $<0.0002$
Alarcon et al. (2021) | 983 | 2
Knobel et al. (2012) | 317 | multa
Hasinger et al. (2018) | 67 | 2, $\geq$ 1.5b
Kashino et al. (2019) | 13 | > 1
Coil et al. (2011) | 459 | 1c
van der Wel et al. (2016) | 11 | multd
Paulino-Afonso, Ana et al. (2020) | 1 | none
Trump et al. (2009) | 1 | q_z $>2$
Prescott et al. (2006) | 1 | none
Photometric | |
Alarcon et al. (2021) | 559 | none
Davies et al. (2015) | 35 | $\leq 2$
Muzzin et al. (2013) | 198 | 1
Laigle et al. (2016) | 2 | z68$<0.1$
Alam et al. (2015) | 122 | e_zph$<0.1^{e}$
Darvish et al. (2017) | 1 | 0
Marchesi et al. (2016) | 3 | none
$a$Multiple constraints : cc $=$ x.5, where x = 3, 4, 9, 13, 14, 18 on the
first selection and then, cc $=$ y.2$-$y.4, where y = 3, 4, 9, 13, 14, 18, 23,
24, 29 and cc $=$ z.5, where z = 2, 12, 22.
$b$We use the second quality Flag for the second run.
$c$We choose this quality flag for the first run, and then on the second run
we remove this condition and use secondary targets as well.
$d$f_z$=0$ and f_spec$=0$ and f_use$=1$.
$e$e_zph$<0.1$ and q_mode$=+$ and mode$=1$ and Class$=3$.
Figure 5: The distribution of the COSMOS UVW1 sources as a function of their
UVW1 magnitudes. The solid histogram shows all sources with redshifts and the
dashed histogram represents the sources with spectroscopic redshifts. Figure
6: Redshift distribution of the COSMOS UVW1 sample. The spectroscopic
redshifts are represented by the dashed line and the sum of spectroscopic and
photometric redshifts is represented by the solid line.
#### 2.4.3 AGNs
In a UV survey, the UV contribution from the central super-massive black
holes in active galaxies may dominate the emission coming from the
star-forming regions. It is the latter emission that we are concerned with in
this study, so the inclusion of UV-bright AGN in the sample can lead to an
overestimation in the UV LF calculations; in particular, they affect the
bright end of the UV LF. We use the same method as in paper I, i.e. an X-ray
luminosity cut
to handle the quasar contamination. The X-ray catalogues from Marchesi et al.
(2016); Trump et al. (2009) and Prescott et al. (2006) are used to identify
any X-ray sources in the sample. Any sources cross-correlated with Marchesi et
al. (2016) and having a luminosity greater than $10^{42}$ erg s$^{-1}$ in the
$0.5-10$ keV, $0.5-2$ keV or $2-10$ keV X-ray bands, were removed. In addition
to this, sources identified as quasars by Prescott et al. (2006) or identified
as having broad-line features by Trump et al. (2009) were also removed. The
above criteria identify 97 sources in the UVW1 source-list as quasars. We
remove all these sources from further analysis. It is important to remark
about the possibility of some AGN making their way into the final catalogue,
despite the luminosity cut described here, because the X-ray observations are
not sufficiently deep to detect all AGNs with X-ray luminosities $>10^{42}$
erg s$^{-1}$ in our redshift range. We perform more tests to characterise these
bright UV sources in section 4.5.
After this step, we have 4481 sources in our source-list. The UVW1 magnitude
distribution of this source-list is plotted in Fig. 5. Finally, a sub-sample
of 879 galaxies has been selected with redshifts of the highest quality within
a range $0.6-1.2$, 637 of which are spectroscopic. The final redshift
distribution of this sub-sample is shown in Fig. 6.
## 3 Corrections
Before the UVW1 magnitudes are converted into 1500 Å absolute magnitudes and
used for calculating the LFs, we need to calculate two important corrections
that we will apply to the absolute magnitudes.
### 3.1 Galactic extinction
All extra-galactic surveys, especially in the blue part of the galaxy spectral
energy distributions (SEDs) are affected by extinction from Galactic
foreground dust. We calculated the extinction correction for the COSMOS
source-list in a similar way to paper I and Page et al. (2021). We use an
extinction calibration together with a dust map to calculate the Galactic
extinction of the UVW1 sources in the direction of the COSMOS field, using
Schlegel et al. (1998) and Schlafly & Finkbeiner (2011). For this study we
find a value of 0.09 magnitudes for the Galactic extinction in UVW1. This
correction is included in the calculation of the absolute magnitude of each
source. The sample is made
available as a supplementary table with the online version of the paper. The
first five rows of the table are shown in Table 2.
Table 2: The first five rows of the UVW1 source catalogue used in this work. The columns list the positions, redshifts (z) and the apparent UVW1 magnitudes. The full table is available in the machine readable form with the online version of the paper. RA (J2000) | DEC (J2000) | z | UVW1 mag
---|---|---|---
deg | deg | |
150.423 | 2.583 | 0.82 | 21.06
149.889 | 2.735 | 0.71 | 21.07
150.666 | 2.025 | 0.80 | 21.26
149.815 | 2.830 | 0.85 | 21.45
150.758 | 2.238 | 0.60 | 22.46
### 3.2 K-correction
The observed SED of a galaxy appears different from the rest-frame SED as it
gets redshifted due to the expansion of the Universe. So, depending upon the
redshift of a source, a single waveband may sample totally different parts of
the galaxy SED. This affects the calculation of the absolute magnitude of a
source in that waveband and can cause erroneous results. To
avoid this effect we have to add a compensating term to the absolute
magnitudes of the galaxies in our sample. This compensating term is called the
K-correction (Hogg et al., 2002), which is defined as a ratio of fluxes in the
rest-frame and the observed (redshifted) band. Page et al. (2021) describe
the methodology used to calculate the K-correction for each UVW1 source. The
calculated corrections for each source are plotted in Fig. 7. As in paper I,
the K-corrections $K\left(z\right)$ are added to the expression calculating
the 1500 Å absolute magnitudes from the apparent UVW1 magnitudes,
$M_{1500}\left(z\right)=m-5\log{\left(\frac{d_{L}\left(z\right)}{\mathrm{Mpc}}\right)}-25-K\left(z\right)-X_{\mathrm{UVW1}},$
(1)
where $X_{\mathrm{UVW1}}$ is the extinction correction from section 3.1 and
$d_{L}\left(z\right)$ is the luminosity distance.
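Equation 1 translates directly into code. The sketch below takes the luminosity distance, K-correction and extinction as inputs rather than computing them; the function name and argument names are illustrative.

```python
import numpy as np

def m1500_absolute(m_uvw1, d_l_mpc, k_corr, x_uvw1=0.09):
    """1500 A absolute magnitude from the apparent UVW1 magnitude
    (equation 1).  d_l_mpc is the luminosity distance in Mpc; x_uvw1
    defaults to the Galactic extinction value found for the COSMOS field."""
    return m_uvw1 - 5.0 * np.log10(d_l_mpc) - 25.0 - k_corr - x_uvw1
```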
## 4 Luminosity function and Luminosity density
Once all the corrections are applied to the sample, we divide the sample into
two redshift bins covering $0.6<z<0.8$ and $0.8<z<1.2$. After this we
calculate the binned LF, estimate the Schechter function parameters and, in
the end, determine the UV luminosity density. This section outlines the
formalism for all these calculations. We also outline the method we employ to
construct the spectral energy distributions (SEDs) for the brightest sources.
### 4.1 Binned luminosity function estimate
Table 3: The COSMOS UVW1 survey has different levels of completeness at different magnitudes, hence the effective sky area changes with UVW1 magnitude. Using the completeness simulations we associate each limiting magnitude with a completeness fraction (second column). The effective area ($\mathcal{A}_{\mathrm{eff}}$) is the product of the completeness fraction and the geometric sky area of the survey and is given in the third column. UVW1 magnitude | Completeness | Effective Area
---|---|---
(AB mag) | (per cent) | (deg2)
19.93 | 98.83 | 1.5088
21.66 | 95.30 | 1.4550
22.16 | 88.04 | 1.3441
22.32 | 80.36 | 1.2269
22.41 | 73.24 | 1.1182
22.56 | 61.17 | 0.9339
22.66 | 52.44 | 0.8007
22.76 | 39.90 | 0.6094
22.86 | 30.65 | 0.4679
22.93 | 25.26 | 0.3855
23.02 | 17.76 | 0.2711
In the volume-magnitude space, we define the binned luminosity function as the
number of galaxies $N_{\mathrm{bin}}$ inside the bin bound by redshift
interval $z_{\mathrm{min}}<z<z_{\mathrm{max}}$ and magnitude interval
$M_{\mathrm{min}}<M<M_{\mathrm{max}}$, divided by the effective survey volume
enclosed by that bin (Page & Carrera, 2000),
$\phi\left(M,z\right)\equiv\phi=\frac{N_{\mathrm{bin}}}{V_{\mathrm{bin}}}.$
(2)
The effective survey volume inside each bin $V_{\mathrm{bin}}$ is a 4-volume
in the volume-magnitude space given by,
$V_{\mathrm{bin}}=\int_{M_{\mathrm{min}}}^{M_{\mathrm{max}}}\,\int_{z_{\mathrm{min}}}^{z_{\mathrm{max}}}\frac{\mathrm{d}V\left(z\right)}{\mathrm{d}z}\,\mathrm{d}z\,\mathrm{d}M,$
(3)
where $z_{\mathrm{min},\,i}$ and $z_{\mathrm{max},\,i}$, the lower and upper
extremes of the redshift interval, are calculated as
$z_{\mathrm{min},\,i}=\mathrm{max}\left(z_{\mathrm{min}},\,z\left(M_{i},\,m_{l}\right)\right),$
(4)
and
$z_{\mathrm{max},\,i}=\mathrm{min}\left(z_{\mathrm{max}},\,z\left(M_{i},\,m_{u}\right)\right),$
(5)
where $m_{l}$ and $m_{u}$ are the lower (bright) and upper (faint) magnitude
limits of the survey, and $z\left(M_{i},\,m_{l}\right)$ and
$z\left(M_{i},\,m_{u}\right)$ are the minimum and maximum redshifts at which
object $i$ can be found and still be within the magnitude limits of the
survey. The term
$\mathrm{d}V\left(z\right)/\mathrm{d}z$ in the integrand of equation 3 is
obtained as a product of effective area and the differential comoving volume
element,
$\frac{\mathrm{d}V\left(z\right)}{\mathrm{d}z}=\left(\frac{\pi}{180}\right)^{2}\,\int_{\Omega}\,\mathcal{A}_{\mathrm{eff}}\left(M\right)\,\frac{\mathrm{d}V\left(z\right)}{\mathrm{d}z\,\mathrm{d}\Omega}\,\mathrm{d}\Omega\,$
(6)
where $\mathcal{A}_{\mathrm{eff}}=\mathcal{A}\cdot C\left(m\right)$ is the
effective area, obtained by multiplying the sky-area ($\mathcal{A}$) by the
completeness function $C\left(m\right)$. We tabulate the effective areas along
with the completeness as a function of the UVW1 magnitudes in Table 3.
$\frac{\mathrm{d}V\left(z\right)}{\mathrm{d}z\,\mathrm{d}\Omega}=\frac{c\,H_{0}^{-1}\,d_{L}^{2}}{(1+z)^{2}\,[\Omega_{\Lambda}+\Omega_{m}(1+z)^{3}]^{1/2}}$
(7)
is the differential comoving volume element (Hogg, 1999).
From Poisson statistics (Gehrels, 1986), we calculate the uncertainty on the
number of objects $N$ in each bin, and hence the statistical uncertainty in
the LF. The resulting luminosity function $\phi$ has units of
$\mathrm{Mpc}^{-3}\,\mathrm{mag}^{-1}$.
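The bin-volume construction of equations (2)–(7) can be sketched numerically. The cosmological parameters, sky area and galaxy count below are illustrative assumptions (the completeness weighting of equation 6 is replaced by a constant sky area for brevity):

```python
import math

# Illustrative flat Lambda-CDM parameters (assumed, not the paper's values)
H0 = 70.0                  # km s^-1 Mpc^-1
OM, OL = 0.3, 0.7
D_H = 299792.458 / H0      # Hubble distance c/H0, Mpc

def E(z):
    """Dimensionless Hubble parameter [Omega_L + Omega_m (1+z)^3]^(1/2)."""
    return math.sqrt(OL + OM * (1.0 + z) ** 3)

def comoving_distance(z, n=400):
    """d_C = D_H * integral of dz'/E(z'), by the midpoint rule (Mpc)."""
    dz = z / n
    return D_H * sum(dz / E((i + 0.5) * dz) for i in range(n))

def dv_dz_domega(z):
    """Differential comoving volume element of eq. (7), in Mpc^3 sr^-1.
    Since d_L = (1+z) d_C, the factor c H0^-1 d_L^2 / [(1+z)^2 E(z)]
    equals D_H d_C^2 / E(z)."""
    return D_H * comoving_distance(z) ** 2 / E(z)

def v_bin(z_lo, z_hi, m_lo, m_hi, area_deg2, nz=50):
    """Effective 4-volume of a (z, M) bin, eqs. (3) and (6), taking the
    completeness C(m) = 1 so that A_eff is just the sky area."""
    omega = area_deg2 * (math.pi / 180.0) ** 2      # deg^2 -> sr
    dz = (z_hi - z_lo) / nz
    v_z = sum(dv_dz_domega(z_lo + (i + 0.5) * dz) * dz for i in range(nz))
    return omega * v_z * (m_hi - m_lo)              # Mpc^3 mag

# Binned LF, eq. (2): phi = N_bin / V_bin, in Mpc^-3 mag^-1
v = v_bin(0.6, 0.8, -21.0, -20.0, area_deg2=1.0)
phi = 25 / v    # hypothetical count of 25 galaxies in the bin
```

In a full implementation the per-object limits of equations (4)–(5) would truncate the redshift integral for each magnitude, and $\mathcal{A}_{\mathrm{eff}}(M)$ would carry the tabulated completeness.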
Figure 7: K-corrections made to the absolute magnitude of each source in our
UVW1 source-list as a function of their redshifts.
### 4.2 Schechter function parameters
We analyse the galaxy distribution in the redshift-magnitude space by
comparing it to a galaxy LF model using maximum likelihood. We use the
Schechter function (Schechter, 1976) to model the galaxy LF in each redshift
bin. It is parametrised as,
$\phi(M)=0.4\,\ln{10}\,\phi^{*}\,10^{-0.4(M-M^{*})(1+\alpha)}\,\mathrm{exp}\left({-10^{-0.4(M-M^{*})}}\right),$
(8)
This is the product of an exponential function and a power law, which dictate
the shape at bright and faint magnitudes respectively. The shape transitions
from the power law of slope $\alpha$ to the exponential form at the knee of
the LF, described by the characteristic magnitude $M^{*}$. The LF is
normalised by the characteristic number density $\phi^{*}$.
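As a concrete check of equation (8), the magnitude-form Schechter function can be coded directly; the parameter values below are the paper's $z\sim 0.7$ best fits from Table 4, used purely as example inputs:

```python
import math

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter LF in absolute magnitude, eq. (8), in Mpc^-3 mag^-1.
    x = 10^{-0.4 (M - M*)} is the luminosity ratio L/L*."""
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * math.log(10.0) * phi_star * x ** (1.0 + alpha) * math.exp(-x)

# At the knee M = M*, x = 1, so phi(M*) = 0.4 ln(10) phi* / e
phi_star, M_star, alpha = 5.05e-3, -19.12, -1.37   # Table 4, z ~ 0.7
knee = schechter_mag(M_star, phi_star, M_star, alpha)
```

Fainter than the knee the function tends to the power law $\propto 10^{-0.4(1+\alpha)(M-M^{*})}$; brighter than it, the double-exponential cutoff dominates.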
We convolve the Schechter function model with the error distributions (see
Fig. 3) obtained from the completeness simulations in section 2.3. These
histograms are normalised by the number of sources for each magnitude that are
inserted into the image.
Given the LF parameters $\theta=\left(\phi^{*},M^{*},\alpha\right)$, the
probability that a galaxy of magnitude $M^{\prime}_{i}$ is observed at a
redshift $z_{i}$ can be expressed as,
$p\left(M^{\prime}_{i},z_{i}\,|\,\theta\right)\propto\frac{p\left(\theta\,|\,M^{\prime}_{i},z_{i}\right)}{p\left(\theta\right)},$
(9)
and the likelihood function for $N_{G}$ galaxies can be written as,
$\mathcal{L}=\prod_{i}^{N_{G}}p\left(M^{\prime}_{i},z_{i}\,|\,\theta\right).$
(10)
This likelihood function can be written in the more convenient form
$S=-2\ln\mathcal{L}=-2\sum_{i=1}^{N_{\mathrm{G}}}\ln
p\left(M^{\prime}_{i},z_{i}\,|\,\theta\right),$ (11)
so that minimising $S$ maximises $\mathcal{L}$.
We obtain the posterior probability distributions for the Schechter function
parameters $\left(\phi^{*},\,M^{*},\,\alpha\right)$ using the Markov Chain
Monte Carlo (MCMC) method, assuming a uniform uninformative prior
$p\left(\theta\right)$. To implement the MCMC, we use the Python module emcee
(Foreman-Mackey et al., 2013).
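A minimal sketch of the estimator of equations (10)–(11): here $p$ is taken simply as the Schechter density normalised over an assumed observable magnitude window, which drops the volume and completeness weighting of the full likelihood, and the magnitudes are hypothetical. Note that the normalisation makes $S$ independent of $\phi^{*}$, which is why $\phi^{*}$ must be constrained separately (e.g. by the total number counts):

```python
import math

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter LF in magnitudes, eq. (8)."""
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * math.log(10.0) * phi_star * x ** (1.0 + alpha) * math.exp(-x)

def s_statistic(mags, theta, M_bright=-24.0, M_faint=-17.0, n=400):
    """S = -2 ln L (eq. 11), with p(M | theta) = phi(M) / int phi dM over
    the observable window -- a deliberate simplification of eq. (9)."""
    dM = (M_faint - M_bright) / n
    norm = sum(schechter_mag(M_bright + (i + 0.5) * dM, *theta) * dM
               for i in range(n))
    return -2.0 * sum(math.log(schechter_mag(M, *theta) / norm) for M in mags)

mags = [-18.2, -18.9, -19.5, -20.4]                # hypothetical sample
s1 = s_statistic(mags, (5.05e-3, -19.12, -1.37))
s2 = s_statistic(mags, (1.0,     -19.12, -1.37))   # phi* cancels: s2 == s1
```

An MCMC sampler such as emcee then explores $-S/2$ as the log-probability, with the uniform prior applied to $\theta$.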
### 4.3 UV Luminosity density
Once the Schechter function parameters are determined for our survey, the
luminosity density can be derived by integrating the product of luminosity
with the luminosity function,
$j=\int_{0}^{\infty}L\,\phi\left(L\right)\,\mathrm{d}L,$ (12)
Using equation 8 with $L/L^{*}=\mathrm{exp}[-0.4\ln{10}\cdot(M-M^{*})]$, we
can write
$j=\int_{0}^{\infty}L\,\phi^{*}\left(L/L^{*}\right)^{\alpha}\,\mathrm{exp}\left(-L/L^{*}\right)\,\mathrm{d}\left(L/L^{*}\right).$
(13)
This gives us a more robust quantity than the normalisation ($\phi^{*}$) of
the LF with which to compare our results to past works, and avoids the
degeneracy between $\phi^{*}$ and $M^{*}$. We integrate from
$M_{1500}=-10$ ($L_{1500}=4.3\times 10^{24}\,\mathrm{erg\,s^{-1}\,Hz^{-1}}$) to
$M_{1500}=-24$ ($L_{1500}=1.7\times 10^{30}\,\mathrm{erg\,s^{-1}\,Hz^{-1}}$) to
get the luminosity density. The errors on the normalisation due to cosmic
variance (section 4.4) are included in addition to the statistical errors in
our calculation of the luminosity density.
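Equation (13) with the quoted limits can be evaluated numerically. The AB zero-point used below to convert $M_{1500}$ to $L_{1500}$ (in erg s$^{-1}$ Hz$^{-1}$), $M=-2.5\log_{10}L+51.60$, is an assumed convention that reproduces the quoted $L_{1500}\simeq 4.3\times 10^{24}$ at $M_{1500}=-10$; the parameters are the $z\sim 0.7$ point estimates from Table 4, so the result is indicative of the order of magnitude only:

```python
import math

def lum_ab(M):
    """Monochromatic luminosity in erg s^-1 Hz^-1 from an AB absolute
    magnitude, using the zero-point M = -2.5 log10(L) + 51.60 (assumed)."""
    return 10.0 ** ((51.60 - M) / 2.5)

def lum_density(phi_star, M_star, alpha, M_faint=-10.0, M_bright=-24.0,
                n=20000):
    """j = phi* L* int x^{1+alpha} e^{-x} dx (eq. 13), with x = L/L*,
    integrated by the midpoint rule in u = ln x for stability as x -> 0."""
    L_star = lum_ab(M_star)
    u_lo = math.log(lum_ab(M_faint) / L_star)
    u_hi = math.log(lum_ab(M_bright) / L_star)
    du = (u_hi - u_lo) / n
    total = 0.0
    for i in range(n):
        x = math.exp(u_lo + (i + 0.5) * du)
        total += x ** (2.0 + alpha) * math.exp(-x) * du   # dx = x du
    return phi_star * L_star * total

j = lum_density(5.05e-3, -19.12, -1.37)   # of order 1e26 erg s^-1 Hz^-1 Mpc^-3
```

The quoted values in Table 6 are derived from the full posterior, so a single point estimate like this is only a ballpark cross-check.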
Table 4: Derived Schechter function parameters of the galaxy UV LF from their respective posterior distributions at both redshift bins. Errors indicate $1\sigma$ (68.26 per cent) uncertainties.
$z$ | $\phi^{*}/10^{-3}$ | $M^{*}$ | $\alpha$
---|---|---|---
| (Mpc-3) | |
$0.6-0.8$ | $5.05^{+0.75}_{-1.07}$ | $-19.12_{-0.22}^{+0.20}$ | $-1.37_{-0.43}^{+0.48}$
$0.8-1.2$ | $1.76^{+0.43}_{-0.64}$ | $-19.83_{-0.29}^{+0.26}$ | $-1.66_{-0.55}^{+0.59}$
Figure 8: UV luminosity function of galaxies in the redshift intervals
$0.6\leq z<0.8$ in the left panel and $0.8\leq z<1.2$ in the right panel as a
function of the 1500 Å magnitude. The data points show the binned number
densities measured using the Page & Carrera (2000) method. The black solid
line is our best-fitting Schechter function derived from the CDFS field as
described in Section 4.2. We obtain this curve from the median value of the
posterior distribution of Schechter function parameters. The grey shaded area
around the best-fit Schechter function represents the $1\sigma$ (68.26 per
cent) uncertainties. The blue, red and purple solid lines are the Schechter
functions obtained by Arnouts et al. (2005), Hagen et al. (2015) and Page et
al. (2021) respectively.
### 4.4 Cosmic Variance
Table 5: Cosmic variance errors on the normalisation calculated using the two alternative methods explained in section 4.4. We tabulate the average stellar masses (second column) in both redshift bins (column 1). The last column shows the $1\sigma$ fractional errors in normalisation calculated using Trenti & Stiavelli (2008) (I) and Moster et al. (2010) (II).
$z$ | $M_{*}/10^{10}$ | $\Delta\phi^{*}/\phi^{*}(1\sigma)$
---|---|---
| (${M_{\odot}}$) | I | II
$0.7$ | 1.43 | 0.103 | 0.111
$1.0$ | 1.73 | 0.083 | 0.069
The LF estimates are prone to errors due to the large-scale matter
distribution in the Universe. Due to the variation in the matter density, the
number counts of the galaxies fluctuate from one part of the universe to
another. This effect is most severe for surveys with small sky areas
(Somerville et al., 2004; Moster et al., 2010). In this section we calculate
the effects of this so-called cosmic variance on our estimates. We use two
independent methods, proposed by Trenti & Stiavelli (2008) and Moster et al.
(2010) to obtain cosmic variance induced errors on the characteristic number
density (or the normalisation) $\phi^{*}$ of the Schechter function form of
the LF. The first method, introduced by Trenti & Stiavelli (2008), calculates
cosmic variance using an approach in which N-body simulations are used to
produce mock surveys, which in turn are used to calculate the average bias of
the sample. The other method, suggested by Moster et al. (2010), estimates the
cosmic variance using N-body simulated mocks, given the sky-area of the
survey, the mean and size of the redshift bin, and the stellar masses of the
galaxies probed.
In the web tool provided by Trenti & Stiavelli (2008), we assume a value of
0.8 for $\sigma_{8}$ and an average halo occupation fraction of 0.5, along
with the bias formalism of Sheth & Tormen (1999). This gives us $1\sigma$
fractional errors on the normalisation of 0.103 and 0.083 for the lower and
higher redshift bins respectively.
As mentioned earlier, the method from Moster et al. (2010) requires stellar
masses, so we match our final UVW1 source-list to the COSMOS/UltraVISTA
$K_{s}$-selected catalogue (Muzzin et al., 2013) with a matching radius of 1.5
arcsec to get the stellar masses of our galaxies. The matching provides 495
stellar masses (96.5 per cent of our sources), which average to
$1.51\times 10^{10}\,\mathrm{M}_{\odot}$ in the redshift bin
$0.6-0.8$. For these stellar masses we get a relative $1\sigma$ error of 0.111
from the Moster et al. (2010) code. Following the same procedure for the
redshift bin $0.8-1.2$, we get 349 (95.4 per cent) counterparts with stellar
masses, giving an average stellar mass of
$1.87\times 10^{10}\,\mathrm{M}_{\odot}$ and a relative
$1\sigma$ error on the normalisation of 0.069 due to cosmic variance.
The cosmic variance errors on the parameters, calculated using tools from both
Trenti & Stiavelli (2008) and Moster et al. (2010) are tabulated in Table 5.
The error bars on the normalisation due to cosmic variance are 67 and 43 per
cent smaller than the $1\sigma$ statistical uncertainties. We expect this
result because of the large area that the COSMOS field covers.
### 4.5 Spectral energy distributions
Figure 9: This figure represents the marginalised one-dimensional (along the
diagonal) and two-dimensional (off-diagonal) posterior distributions of the
Schechter function parameters $\alpha$, $M^{*}$ and $\phi^{*}$. The
redshift bin $0.6\leq z<0.8$ is shown in blue and the redshift bin $0.8\leq
z<1.2$ in red. The dark and light shaded areas in the off-diagonal panels
correspond respectively to the 68 and 95 per cent confidence intervals for the
LF parameters. The black ‘+’ symbols represent the median values for
$\alpha$, $M^{*}$ and $\phi^{*}$. The shaded regions in the diagonal panels
represent the one-dimensional 68 per cent confidence regions.
We fit spectral energy distributions (SEDs) to the galaxies in the brightest
magnitude bins in both redshift ranges to examine their nature. We obtain the
rest frame photometry for our sources in different filters from UV to mid-IR,
by matching the positions to the COSMOS 2015 (Laigle et al., 2016) catalogue.
For sources that do not have photometry in the far-IR bands, we use the
deblended catalogue from Jin et al. (2018) and the HerMES catalogue (Herschel
Multi-tiered Extragalactic Survey; Oliver et al., 2012).
An SED model comprising two components is fitted to the photometry. The first
component is the stellar emission from the galaxies and the other is the dust
emission coming from the star forming regions. The stellar emission templates
are created using the stellar population synthesis models of Bruzual & Charlot
(2003), assuming a Chabrier (2003) initial mass function for solar
metallicity. The sources we have at hand are very bright UV galaxies, so we
need stellar emission models for a very young population of stars. We chose
models of a single stellar population (SSP) with ages varying from 0.01 to
13.18 Gyr, in 30 steps of size $\Delta\log(\mathrm{age})\sim 0.1$. These
templates are reddened by assuming the Calzetti et al. (2000) dust extinction
model, with $E(B-V)$ values ranging from 0.0 to 4.0 in steps of 0.14. In total
we have 900 stellar emission models. We complement these with the mid- to
far-infrared models created by Draine & Li (2007). The cold dust component of
our SED constitutes a linear combination of models with constant and variable
radiation fields, along with varying amounts of the PAH fraction. Since we are
dealing here with bright, large galaxies, the SMC and LMC models from Draine
& Li (2007) are not included in our library. From the SED fits, we calculate
the total far-infrared luminosities by integrating the total dust model from
8 to 1000 $\micron$. The bolometric luminosity is obtained by integrating the
total SED fit within $0.01-1000\,\micron$.
Figure 10: The comparison of the UV galaxies luminosity function of the CDFS
(yellow) and COSMOS (black/gray) in the redshift interval $0.6-0.8$ (left) and
$0.8-1.2$ (right). Upper left and right panels : The data points show the
binned number densities and the solid lines are maximum likelihood fitted
Schechter functions. The shaded regions represent the $1\sigma$ uncertainties
respectively for redshift bins $0.6-0.8$ and $0.8-1.2$. Lower left and right
panels : The one and two dimensional marginalised distributions in the
$M^{*}-\alpha$ space for the COSMOS (gray) and CDFS (yellow), for lower and
higher redshift bins. The dark and light shaded regions represent the
$1\sigma$ and $2\sigma$ confidence regions, and the ‘+’ symbols denote the
median values of the parameters.
## 5 Results
The galaxy rest-frame UV LF in the redshift range $0.6<z<1.2$ is derived using
the method developed by Page & Carrera (2000), explained in section 4.1. We
produce our results by dividing the sample in two redshift bins $0.6<z<0.8$
and $0.8<z<1.2$. The results for both the bins are plotted in Fig. 8, along
with the best-fit Schechter function models from Arnouts et al. (2005); Hagen
et al. (2015); Page et al. (2021) and paper I at the same redshifts.
We show in Fig. 9, the one and two dimensional posterior distributions for the
LF parameters, obtained by MCMC simulations. The dark and light shaded regions
show the 68 and 95 per cent confidence regions for the Schechter function
parameters. The best-fit values obtained using the maximum likelihood method
presented in section 4.2 are listed in Table 4.
In Fig. 10, we show the comparison of results obtained by this study and from
paper I, for redshift bins centered at $z=0.7$ and $z=1.0$ respectively. The
top panels compare the binned and model LF whereas the bottom panels compare
the parameter space, for both redshift bins.
We compare our Schechter function parameters to estimates obtained by past
works in the same redshift range in Fig. 12.
Our estimates for the luminosity density are plotted with values from the
literature in Fig. 13, and tabulated in Table 6.
## 6 Discussion
Figure 11: We plot here the SEDs and HST ACS stamps of the brightest star
forming galaxies in our sample. The top row shows the most luminous (UVW1 AB
mag = 21.15) source in the redshift bin $0.8-1.2$; the middle and the bottom
rows show two (UVW1 AB mag = 21.16, 21.35) of the most luminous objects in the
redshift range $0.6-0.8$. These sources constitute the brightest magnitude bin
of their respective redshift bins. The panels on the left show the HST ACS
stamps of these sources and their redshifts; the size of the stamps is
$10\times 10$ square arcseconds, and the circular aperture (3 arcsec radius)
used for UVW1 photometry on the COSMOS image is shown in white. The panels on
the right show the SEDs. The total SEDs, shown in black solid lines, represent
the linear sum of the stellar models generated using Bruzual & Charlot (2003)
and the two infrared models from Draine & Li (2007), one for a constant
(orange) and another for a variable (maroon) radiation field. The blue line
represents the stellar model with no reddening (i.e. $E(B-V)=0$). In each SED
panel we also show the fitted age of the simple stellar population (in Gyr),
the colour excess and the bolometric luminosity. Figure 12: The parameters
$\alpha$, $M^{*}$ and $\phi^{*}$ of the Schechter function as defined in
equation 8 and tabulated in Table 4. The values estimated from this paper are
in black. Other colours are used for values estimated by the studies of
Arnouts et al. (2005); Hagen et al. (2015); Oesch et al. (2010); Moutard et
al. (2020); Page et al. (2021) and paper I. The different panels from top to
bottom represent the normalisation $\phi^{*}$, characteristic magnitude
$M^{*}$ and faint-end slope $\alpha$ as a function of redshift. The horizontal
and vertical error bars represent the width of the redshift bin and the
$1\sigma$ (68.26 per cent) uncertainties respectively. For clarity, the values
of parameters at the same redshifts are slightly shifted in redshift.
We calculate the UV LF of galaxies using UVW1 data from the wide area COSMOS
field. This work complements paper I, in which LFs were calculated from deep
UVW1 imaging in the CDFS, in that the larger sky area of COSMOS provides
access to a larger sample of the most luminous galaxies, and so we can
construct LFs to brighter absolute magnitudes. We can see in Fig. 10 (upper
panels), that the wide area COSMOS survey extends the bright end of the LF by
almost a magnitude in the redshift bin $0.6-0.8$ and half a magnitude at
$0.8-1.2$ compared to the CDFS LFs. As a consequence of this, we can probe
space densities an order of magnitude lower in COSMOS than in the CDFS, of
order $10^{-6}\,\mathrm{Mpc^{-3}\,mag^{-1}}$.
Our measurements are fairly consistent with the Schechter function shape at
both redshift bins. We can see this in the top right panel of Fig. 10, where
the observed binned LF follows the shape of the modelled Schechter function. A
similar behaviour is observed in the redshift bin $0.6-0.8$ (top left of Fig.
10), except at the brightest absolute magnitude bin, in which the binned LF
appears to exceed considerably the Schechter function model. We therefore
examine this bin in more detail to determine whether the data in this bin pose
a serious challenge to the Schechter function model.
In this brightest absolute magnitude bin, we calculate from the Schechter
function model that the expected number of sources in our survey is 0.12,
whereas the number of galaxies observed is 2. From Poisson statistics, the
probability of observing two or more galaxies when the expected number is 0.12
is $7.1\times 10^{-3}$, a discrepancy that is a little less than 3$\sigma$,
but still somewhat uncomfortable.
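The quoted probability can be checked directly from the Poisson distribution: $P(N\geq 2)=1-e^{-\mu}(1+\mu)$ for $\mu=0.12$ gives $\approx 6.6\times 10^{-3}$, consistent with the quoted $7.1\times 10^{-3}$ (the small difference presumably reflecting rounding of the expectation value):

```python
import math

def poisson_at_least(n, mu):
    """P(N >= n) for a Poisson variable with mean mu."""
    return 1.0 - sum(mu ** k * math.exp(-mu) / math.factorial(k)
                     for k in range(n))

p = poisson_at_least(2, 0.12)   # P(N >= 2 | mu = 0.12), ~ 7e-3
```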
Given this observed discrepancy, we have looked in more detail at the sources
in the brightest absolute magnitude bins in both redshift ranges, and in
particular at the possibility that they are contaminated by AGN.
Some potential AGN candidates might be missed by the X-ray detection because
the X-ray observations do not reach a luminosity limit of
$10^{42}\,\mathrm{erg\,s}^{-1}$ in all three bands
($0.5-10$, $0.5-2$ and $2-10$ keV) out to redshift 1.2 (refer to section
2.4.3). Therefore it is possible that there are AGN which are hiding below the
X-ray detection limit and are present in our final UVW1 source-list. In the
construction of the LF, there is a significant potential for AGN contamination
at the high luminosities where the number densities are very low. So, caution
must be taken especially at the bright-end of the LF.
The first step we take to address this issue is to examine the mid-IR
properties of these brightest sources. There are two sources in the brightest
absolute magnitude bin in the $0.6-0.8$ redshift range and a single source in
the brightest absolute magnitude bin between redshifts 0.8 and 1.2. For these
sources, we obtained the magnitudes in the W1 ($3.4\micron$) and W2
($4.6\micron$) passbands of the Wide-field Infrared Survey Explorer (WISE;
Wright et al., 2010) and the 3.6 and 4.5 $\micron$ passbands of the Spitzer
Infrared Array Camera (IRAC; Fazio et al., 2004). We checked the sources
against the mid-IR identification criteria set out for WISE colours by Stern
et al. (2012) and Assef et al. (2013). Similarly, for the IRAC colours we used
the prescription from İkiz et al. (2020). None of these sources satisfy the
conditions outlined for the WISE and IRAC colours by the above-mentioned
works. So, none of the sources populating the highest luminosity bins in our
LFs show evidence for a substantial energetic contribution from an AGN.
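For reference, the Stern et al. (2012) selection is a simple colour threshold; a minimal sketch follows (the Assef et al. 2013 and IRAC-based İkiz et al. 2020 criteria used above are separate cuts and are not reproduced here, and the example colour is a hypothetical value):

```python
def stern_wise_agn(w1_mag, w2_mag):
    """Mid-IR AGN candidate flag per Stern et al. (2012):
    W1 - W2 >= 0.8 in WISE Vega magnitudes."""
    return (w1_mag - w2_mag) >= 0.8

# A source with W1 - W2 = 0.3 (hypothetical, typical of a star-forming
# galaxy) would not be flagged as an AGN candidate:
flagged = stern_wise_agn(15.0, 14.7)   # False
```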
To look further at these three sources we examine their SEDs. We also examined
their morphology using the COSMOS image from the Hubble Space Telescope (HST)
Advanced Camera for Surveys (ACS) which employed the F814W filter (Koekemoer
et al., 2007). The SEDs along with the 10$\times$10 arcsecond postage stamp
images are shown in Fig. 11.
As seen in Fig. 11 these luminous galaxies are detected from the far-UV to the
sub-mm wavelengths. We fit SEDs containing stellar and dust components
(section 4.5). The fits suggest that all three objects have young stellar
populations and significant amounts of dust. We obtain their total IR
luminosity by integrating the dust components. These sources cover a range in
IR luminosities from $4.39\times 10^{11}\,\mathrm{L}_{\odot}$
to $1.69\times 10^{12}\,\mathrm{L}_{\odot}$. So, these most
powerful UV-selected galaxies in the COSMOS field correspond to (U)LIRGs.
These systems are defined by their total infrared luminosity: LIRGs have
$10^{11}\,{\mathrm{L}_{\odot}}<L_{\mathrm{IR}}(8-1000\,\micron)<10^{12}\,{\mathrm{L}_{\odot}}$
and ULIRGs have
$10^{12}\,{\mathrm{L}_{\odot}}<L_{\mathrm{IR}}(8-1000\,\micron)<10^{13}\,{\mathrm{L}_{\odot}}$
(Sanders & Mirabel, 1996; Genzel et al., 1998). However, none of these systems
are as luminous as the most powerful IR galaxies in the redshift range under
consideration. Kartaltepe et al. (2010) in their 70 $\micron$ selected
catalogue find sources as bright as
$\sim 8\times 10^{12}\,\mathrm{L}_{\odot}$ in a redshift range
similar to ours. One of the reasons we do not find any such sources could be
that the most powerful IR galaxies are bright AGNs (Sanders & Mirabel, 1996;
Kartaltepe et al., 2010; Goto et al., 2011; Symeonidis & Page, 2021), which we
remove from our analysis. The other reason could be the UV selection, as the
brightest IR galaxies could be too obscured to be seen strongly at the UV
wavelengths.
The examination of the HST images suggests that at least two of these three
sources are mergers in various stages. One of the
sources in the 0.6-0.8 redshift bin appears to be an early stage merger with
two discrete luminous galaxies within the XMM-OM UVW1 photometry aperture. The
identification of this UVW1 source as two discrete galaxies offers a solution
to the discrepancy between the observed and model-predicted number of galaxies
in the brightest absolute magnitude bin at $0.6<z<0.8$. If the UV emission of
the galaxies were measured separately, their individual UV absolute magnitudes
may well place them in a different bin of the luminosity function. Thus the
number of observed galaxies in this most-luminous bin may only be one, rather
than two, in which case the discrepancy is no longer significant, and we no
longer observe a deviation from the Schechter-function shape at the bright
end.
We move on to compare our LFs to those found in previous studies and begin by
seeing how our results compare to paper I. In terms of $M^{*}$ and $\alpha$
the bottom of Fig. 10 shows that the contours from our study and those from
paper I overlap. The best-fit values of the faint-end slope and the
characteristic magnitude are within $1\sigma$ and $2\sigma$ of each other
respectively for redshift bin centered at $z=0.7$ and $1\sigma$ each for
$z=1.0$. The contours for the CDFS are smaller, which shows that the LF
parameters are better constrained there than in the COSMOS field. We
attribute this to the depth of the CDFS UVW1 survey, which probes fainter
absolute magnitudes and hence covers the transition between exponential and
power law parts of the Schechter function as well as the faint end slope.
Overall our values of the UV LF parameters are in accordance with findings
reported by Arnouts et al. (2005). The faint-end slope estimates of
$\alpha=-1.60\pm 0.26$ and $-1.63\pm 0.45$ from their study are within
$1\sigma$ from our estimate of $\alpha=-1.37_{-0.43}^{+0.48}$ and
$-1.66_{-0.55}^{+0.59}$, for redshift bins at $z=0.7$ and $z=1.0$
respectively. We do see a slight deviation in their LF, however, in the
redshift range $0.6<z<0.8$ (left panel of Fig. 8). At bright absolute
magnitudes ($-21.5<M_{1500}<-20.5$) their model LF curve (pink) lies
significantly above our measurements. In parameter terms, this discrepancy
corresponds to the fainter best-fit characteristic magnitude we obtain in this
redshift range ($M^{*}=-19.12_{-0.22}^{+0.20}$) compared to that found by
Arnouts et al. (2005) ($M^{*}=-19.84\pm 0.40$). In the other redshift bin
(i.e. $0.8<z<1.2$), the best fit model curve from Arnouts et al. (2005) lies
close to our LF measurements, and our measurement of $M^{*}$ agrees with that
obtained by Arnouts et al. (2005) within 1$\sigma$. On comparison of the
$M^{*}$ with Hagen et al. (2015), we find that our values are fainter by at
least $3\sigma$ in both redshift bins. Their model LF curve (blue) in the
lower redshift bin is above our measurements throughout the magnitude range
considered in this study. The differences in the higher redshift bin become
more severe as the absolute magnitude brightens.
Figure 13: The luminosity density calculated using equation 13. The gray data
points are the observed luminosity densities from different past studies. Our
estimates for present work (COSMOS) are shown in black colour and those for
paper I (CDFS) in yellow. The vertical error bars represent the $1\sigma$
(68.26 per cent) uncertainties. The horizontal ones are redshift bin sizes.
The data points at the same redshifts are slightly shifted for clarity.
Table 6: Luminosity density as a function of redshift. Errors indicate
$1\sigma$ uncertainties, which include the $1\sigma$ relative error from
cosmic variance.
$z$ | $\rho/10^{26}$
---|---
| $(\mathrm{erg}\,\mathrm{s}^{-1}\mathrm{Hz}^{-1}\mathrm{Mpc}^{-3})$
| This work | paper Ia
$0.6-0.8$ | $1.34_{-0.49}^{+1.48}$ | $2.02_{-0.27}^{+0.33}$
$0.8-1.2$ | $1.10_{-0.51}^{+1.66}$ | $2.62_{-0.58}^{+1.06}$
$^{a}$The error bars in Table 5 of paper I do not include the cosmic variance.
Comparing to other recent works, we find our results for $\alpha$ to be in
agreement with Page et al. (2021). Oesch et al. (2010) (for $z=0.75$), who
used the deepest data to date to calculate the LF, obtain parameters which are
in good agreement with our results. Their results for $\alpha$ and the break
magnitude $M^{*}$ agree with our values with deviations well within
$1\sigma$.
The two important studies calculating the galaxy LF in the redshift range
$0.6-1.2$ using ground based instruments are Cucciati et al. (2012) and
Moutard et al. (2020). They derived their estimates by extrapolating their
observations taken at longer wavelengths to the UV regime using SED templates.
For completeness we do compare our results to these works and add the data
points from these studies to the luminosity density plot, but we note that
comparison of these works with studies dealing with direct UV surveys
should be treated with caution. As compared to Cucciati et al. (2012), we find
that our estimates of the faint-end slope at redshift 0.7 are $1\sigma$ away
from theirs. At the higher redshift bin, our value for $\alpha$ is well within
the $2\sigma$ level of their values in redshift bins centered at 0.9 and 1.1.
However, we do find a bigger discrepancy, when we look at the characteristic
magnitudes. As compared to our estimates at redshift 0.7, their value is
fainter by at least $4\sigma$. For their higher redshift bins at 0.9 and 1.1
we see the difference in the $M^{*}$ values is $4\sigma$ and $2\sigma$
significant, as compared to our estimate at redshift 1.0. With regards to
Moutard et al. (2020) ($z=0.75$), our values are in excellent agreement.
We do not put much emphasis on the comparison of the values of the
normalisation $\phi^{*}$ obtained in this work with any of the previous works
and/or paper I. These differences in $\phi^{*}$ are expected due to cosmic
variance (Trenti & Stiavelli, 2008; Moster et al., 2010), between different
parts of the Universe explored by different studies. Nevertheless, for
completeness, we plot the estimates for $\phi^{*}$ in Fig. 12 (top panel) from
this work and some other studies. We would like to remind the reader of the
additional errors due to cosmic variance (section 4.4) that should be
considered while comparing the resulting value to other works at similar
redshifts. Among
previous works based on direct observation of the 1500 Å radiation only paper
I, Page et al. (2021) and Hagen et al. (2015) estimate the uncertainties in
their measurements due to cosmic variance, so we compare our results for
normalisation only with these studies. Our estimates for $\phi^{*}$ are in
very good agreement with Hagen et al. (2015) in both redshift bins. With
regards to Page et al. (2021), our values agree at $2\sigma$ and $1\sigma$ at
redshifts 0.7 and 1.0. It should be mentioned, however, that due to the small
sample size, the values obtained by them have large statistical errors. As
compared to paper I, we notice smaller values of $\phi^{*}$ in both redshift
bins. These differences, which are significant at $\sim 2\sigma$, can be
attributed to known galaxy clusters in the CDFS explored in paper I. Due to
the larger sky-area of the COSMOS image, here we have managed to get better
constraints on both the normalisation and the cosmic variance. We want to
remark here that more independent UV surveys can further help with properly
constraining the normalisation of the galaxy LF.
Between the two redshift bins we do not observe any evolution of the faint-end
slope. This is in line with paper I and other previous works. However, we do
see $\sim 2\sigma$ variation in the characteristic magnitude. As we move from
redshift 0.7 to 1.0, $M^{*}$ brightens by 0.7 mag. This is very close to the
0.8 mag evolution seen in paper I, and a similar change in $M^{*}$ is observed
by Hagen et al. (2015). This brightening of the characteristic magnitude
supports the currently accepted picture of the star formation history of the
Universe, i.e. the star formation rate decreases as we move from redshifts of
around 2 to smaller values. As mentioned earlier, differences in the
normalisation are expected due to cosmic variance. We note here that not only
for comparison with other studies, but for looking at the potential evolution
of normalisation with redshift, these errors need to be considered. As
mentioned earlier, due to the large area of the COSMOS UVW1 image, the
uncertainties in normalisation due to cosmic variance
($\sim 5.4\times 10^{-4}\,\mathrm{Mpc}^{-3}$ and
$\sim 1.4\times 10^{-4}\,\mathrm{Mpc}^{-3}$ for $z=0.7$
and $z=1.0$ respectively) are smaller than the statistical uncertainties.
Nevertheless, we include these errors due to cosmic variance in quadrature to
statistical errors. We see an evolution in normalisation of the UV LF as it
goes down with more than $2\sigma$ significance as we move from lower to the
higher redshift bin.
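The quadrature combination mentioned above amounts to the following. The numbers, in units of $10^{-3}\,\mathrm{Mpc^{-3}}$, are the $z\sim 0.7$ statistical upper error on $\phi^{*}$ from Table 4 and the $\sim 10$ per cent fractional cosmic-variance error from Table 5, used here purely as an illustration:

```python
import math

phi_star = 5.05               # phi* in 10^-3 Mpc^-3 (Table 4, z ~ 0.7)
sigma_stat = 0.75             # statistical upper error on phi*
sigma_cv = 0.103 * phi_star   # absolute cosmic-variance error (method I)

# Statistical and cosmic-variance terms added in quadrature
sigma_tot = math.sqrt(sigma_stat ** 2 + sigma_cv ** 2)
```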
The luminosity density in the two redshift bins is calculated as shown in
section 4.3. Our values fall within the error margins of previous works (Fig.
13). Due to the large area of the COSMOS UVW1 image, the errors are dominated
by statistical uncertainties. The errors on the luminosity density values of
the COSMOS field are larger than those from CDFS, and the values of luminosity
density in the CDFS are within the $1\sigma$ error bars of the COSMOS values
at redshift 0.7 and within $2\sigma$ at redshift 1.0.
## 7 Conclusion
We use the wide area COSMOS imaging survey taken through the UVW1 filter on
XMM-OM to calculate the UV LF of galaxies in a redshift range $0.6-1.2$. Using
a wide area survey we attempt to extend the analysis to brighter magnitudes,
which complements paper I where we calculated the UV LF using a deep survey in
the CDFS to constrain the faint-end of the LF.
The binned UV galaxy LF is estimated using the Page & Carrera (2000) method
in the two redshift bins $0.6<z<0.8$ and $0.8<z<1.2$. The COSMOS imaging
pushed the rest-frame $M_{1500}$ magnitudes brighter than $-21.5$ for the
redshift bin $0.6<z<0.8$ and $\simeq-22$ for the redshift bin $0.8<z<1.2$,
helping us to put better constraints on the value of the characteristic
magnitude $M^{*}$.
We fit the Schechter function to the data using maximum likelihood to estimate
the LF parameters (the faint-end slope, characteristic magnitude and the
normalisation). We compare the binned LF shape to the Schechter model. The
luminosity function seems to be well described by the Schechter function
shape. There is no evolution of $\alpha$ between the two redshift bins. This
is also expected from previous works that find an almost constant faint-end
slope within the redshift range of our interest. For individual redshift bins,
$\alpha$ falls within $1\sigma$ margins of values obtained in paper I and all
other studies in this redshift range using direct UV measurements. For the
characteristic magnitude $M_{*}$, we see our derived values are 0.5 to 1 mag
fainter than some previous studies. Between the redshift bins under
consideration, $M_{*}$ evolves by $\simeq 0.7$ mag with a $\sim 2\sigma$
significance. An evolution of the $M_{*}$ by $\simeq 0.8$ mag between the
redshift bins was also reported in paper I. With regards to the comparison of
values at a particular redshift to the previous works (again using direct UV
surveys), our estimates fall within $2\sigma$ margins.
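For reference, the Schechter form in absolute magnitudes that underlies these fits can be sketched in Python; the parameter values in the example are illustrative only, not our fitted values:

```python
import numpy as np

def schechter_mag(M, M_star, alpha, phi_star):
    """Schechter (1976) luminosity function expressed in absolute
    magnitudes, phi(M) dM, with characteristic magnitude M_star,
    faint-end slope alpha and normalisation phi_star."""
    x = 10.0 ** (0.4 * (M_star - M))  # luminosity in units of L*
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

# illustrative evaluation (made-up parameters)
phi = schechter_mag(-20.0, M_star=-19.5, alpha=-1.3, phi_star=3e-3)
```

For a faint-end slope $\alpha<0$ the number density rises towards fainter magnitudes and is cut off exponentially brightward of $M^{*}$, which is why wide-area coverage of the bright end constrains $M^{*}$ so effectively.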
As expected, the luminosity density values we obtain are somewhat smaller than
those obtained in the CDFS; the differences, however, are below $2\sigma$ at
both redshifts. Between the two redshift bins considered here, the change in
luminosity density is not significant enough to infer evolution.
We put a special focus on the bright sources in this study. First we test the
most luminous sources, found in the brightest magnitude bins of the LF at both
redshifts, for AGN contamination. Then we characterise these objects through
an SED analysis and by examining their morphologies. At least some of these
most powerful UV sources in the COSMOS field seem to be mergers at different
stages. Their SEDs and their integrated IR luminosities suggest that these
galaxies belong to the (U)LIRG classes, but not the most powerful ones. We
think it is mainly because we have removed the bright AGNs from this analysis
or because the most luminous galaxies are more heavily obscured than our UV-
selected galaxies.
## 8 Acknowledgements
This research makes use of observations taken with the XMM-Newton telescope,
an ESA science mission with instruments and contributions directly funded by
ESA Member States and NASA. MJP acknowledges support from the UK Science and
Technology Facility Council (STFC) grant number ST/S000216/1. MS would like to
extend their gratitude towards Vladimir Yershov and Eduardo Ojero for their
outstanding support with the XMM-OM software. MS would like to thank Michele
Trenti for sharing the source code for their cosmic variance calculator.
## 9 Data Availability
The data used in this article can be obtained from the XMM-Newton Science
archive (XSA) at https://www.cosmos.esa.int/web/xmm-newton/xsa. We provide the
source list used in this paper as a supplementary table with the online
version of the paper. Other supporting material related to this article is
available on a reasonable request to the corresponding author.
## Appendix A Cross-correlating the Catalogues
Figure 14: The angular offset distribution histograms. The solid and dashed
black histograms represent the angular separations corresponding to all and
best (closest) matches between the UVW1 source-list and the Laigle et al.
(2016) catalogue, within a given matching radius. The black solid line shows
the composite model (Rayleigh + linear) fitted to the distribution of all
matches; the $2\sigma$ uncertainty in the fit is shown as a gray shaded area
around it. The components of the composite model, i.e. the Rayleigh and the
linear models, representing the true and spurious matches respectively, are
plotted in dashed orange and blue.
Figure 15: UVW1$-u$ colour as a function of the photospheric temperature of
the stars.
Figure 16: UVW1$-u$ colour tracks for the extended starburst and the two SDSS
average quasar templates as a function of redshift.
Here we calculate the appropriate matching radius for cross-matching our
source-list with the ancillary catalogues.
### A.1 Modeling the source-distribution
The UVW1 source-list is matched to the Laigle et al. (2016) catalogue with a
matching radius of 10 arcsec. We plot the distribution of the offsets in Fig.
14. The yellow and blue histograms represent the ‘all’ and ‘best’ matches
between the Laigle et al. (2016) catalogue and our source-list, as a function
of matching radius. Following paper I, we fit a distribution of the form
$D(x)=\,A\,\frac{x}{\sigma^{2}}\,\mathrm{exp}\left({-\frac{x^{2}}{2\sigma^{2}}}\right)+m\,x,$
(14)
to the distribution of all matches, keeping $A,\,\sigma,\,$ and $m$ as free
parameters. The first part of equation 14 is the Rayleigh distribution
predicted for the offsets of the actual counterparts (Page et al., 2012); the
second part is a straight line representing the distribution of spurious
counterparts, which grows linearly with the matching radius. The fit results
are $5251\pm 121$ sources, $0.308\pm 0.005$ arcsec and $994\pm 21$ sources per
square arcsec for $A,\,\sigma,\,$ and $m$ respectively. Fig. 14 shows the two
components of the fit: the linear distribution with slope $m=994$ sources per
square arcsec (broken green) and the Rayleigh distribution with amplitude
$A=5251$ and width $\sigma=0.308$ arcsec (broken red). The black solid curve
shows the total model distribution for all matches.
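A fit of this form can be sketched with SciPy's `curve_fit`; the synthetic offset histogram below is a placeholder for illustration, not our measured distribution:

```python
import numpy as np
from scipy.optimize import curve_fit

def composite(x, A, sigma, m):
    """Equation 14: Rayleigh term for true counterparts plus a
    linear term for spurious matches."""
    return A * (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2)) + m * x

# synthetic data, loosely mimicking the fitted values quoted in the text
rng = np.random.default_rng(0)
x = np.linspace(0.05, 3.0, 60)
y = composite(x, 5000.0, 0.3, 1000.0) + rng.normal(0.0, 50.0, x.size)

popt, pcov = curve_fit(composite, x, y, p0=[1000.0, 0.5, 100.0])
A_fit, sigma_fit, m_fit = popt
```

The diagonal of `pcov` gives the parameter variances from which error bars of the kind quoted in the text can be read off.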
The distributions in Fig. 14 can be used to estimate the optimum matching
radius that keeps the number of spurious matches to a minimum. As Fig. 14
shows, the modelled distribution of true matches (red curve) drops to very
small values beyond a 1 arcsec matching radius, while the number of spurious
matches keeps increasing; 1 arcsec therefore seems a good compromise between
having enough matches to build the LF and keeping the number of spurious
sources low. To check this, we can estimate how many sources will be
spuriously matched to our UVW1 sources within 1 arcsec. One way to obtain the
number of spuriously matched sources is to count the number of sources under
the linear distribution in Fig. 14 up to a given matching radius. A total of
497 (9 per cent) and 1118 (17.5 per cent) sources are found to be spurious for
matching radii of 1 and 1.5 arcsec respectively. These numbers, however, are
upper limits, especially for smaller
offset radii, where the linear distribution also has a contribution from the
distribution of actual matches (i.e. the Rayleigh component). In the following
section we describe further measures, namely additional constraints on the
colours of the sources, used to obtain a reliable source-list while extending
the cross-matching radius to 1.5 arcsec.
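As a quick check, the quoted spurious counts follow directly from integrating the linear term of equation 14 out to a radius $r$:

```python
# integral of the linear (spurious) term of equation 14:
# N_spurious(r) = integral of m * x dx from 0 to r = m * r**2 / 2
m = 994  # sources per square arcsec, from the fit

def n_spurious(r):
    """Expected number of spurious matches within radius r (arcsec)."""
    return m * r**2 / 2.0

print(n_spurious(1.0))  # 497.0
print(n_spurious(1.5))  # 1118.25, i.e. the 1118 quoted above
```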
### A.2 UVW1-u colour-cut
Figure 17: The spectral templates used to calculate the UVW1$-u$ colours of
the extended starburst galaxy and the two average SDSS quasar templates.
Figure 18: The UVW1$-u$ colours of the COSMOS UVW1 sources as a function of
the matching offset for matches with the COSMOS 2015 catalogue. The density of
the sources increases from lighter to darker shades. The top and bottom panels
separately show the distributions for all and best (closest) matches with the
COSMOS 2015 catalogue. The black dashed line at UVW1$-u$ $=-1$ shows the
colour cut applied to our source-list.
Figure 19: The distribution of matches from the COSMOS 2015 catalogue for our
UVW1 sources as a function of offset radius. The solid black histogram
represents all matches within a given matching radius; the dashed black
histogram represents the best matches. The solid and dashed red histograms
show the distributions of all and best matches after applying the colour cut
UVW1$-u$ $>-1$.
In order to avoid missing out on genuine counterparts we need to increase the
matching radius while making sure that there are not a lot of spurious
matches. To achieve this, we put additional constraints on the colours of the
sources. We examine how blue a source can physically be by calculating the
UVW1$-u$ colours for stars, quasars and galaxies. To obtain the $u$ colours,
we use the CFHT $u$-band filter.
For stars we use the synthesized stellar spectra from the ATLAS9 project
(Castelli & Kurucz, 2003) created using the stellar abundances from Grevesse &
Sauval (1998). We calculate the UVW1$-u$ colours for these spectra and plot
these as a function of their photospheric temperatures in Fig. 15. The
UVW1$-u$ colours become bluer as the temperature rises, levelling off close to
a value of $-0.5$ at the hottest photospheric temperatures.
We calculate the colours for quasars from the average SDSS spectral templates
from Vanden Berk et al. (2001) and Harris et al. (2016). For the case of
galaxies we use the starburst galaxy template (SB1) from Kinney et al. (1996).
This template is extended beyond 1250 Å using the spectrum of Mrk 66 from
González Delgado et al. (1998). The UVW1$-u$ colour tracks for these galaxies
are plotted in Fig. 16. We show the model spectra for the extended starburst
and the quasar templates (labelled SDSS QSO 1 and SDSS QSO 2 for Vanden Berk
et al. (2001) and Harris et al. (2016) respectively) in Fig. 17.
From the analysis so far we see that the UVW1$-u$ colours cannot theoretically
be bluer than $-0.5$. We note that we do not consider the effects of reddening
in the above calculations, so we apply a rather conservative colour-cut of
UVW1$-u$ $>-1$ to our COSMOS UVW1 source-list. Fig. 18 shows the
distribution of UVW1$-u$ colours of the UVW1 sources as a function of offset
radius for matching with COSMOS 2015 catalogue.
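The synthetic colours above are computed by folding each spectrum through the UVW1 and $u$ filter curves. A minimal sketch of the AB-magnitude synthetic photometry involved is given below; the filter curve and spectrum are assumed inputs, and a flat-in-$f_{\nu}$ spectrum is used purely as a sanity check:

```python
import numpy as np

FNU0 = 3631e-23   # AB zero-point flux density, erg s^-1 cm^-2 Hz^-1
C_ANG = 2.998e18  # speed of light in Angstrom / s

def _trapz(y, x):
    """Trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def ab_mag(wave, flux_lambda, filt_wave, filt_trans):
    """Synthetic AB magnitude of a spectrum f_lambda (erg s^-1 cm^-2 A^-1)
    through a filter transmission curve, with photon-counting weighting."""
    T = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    num = _trapz(flux_lambda * T * wave, wave)
    # AB reference spectrum, converted from f_nu to f_lambda
    ref = FNU0 * C_ANG / wave**2
    den = _trapz(ref * T * wave, wave)
    return -2.5 * np.log10(num / den)

# a UVW1 - u colour is then ab_mag through the UVW1 curve minus ab_mag
# through the CFHT u curve, both evaluated on the same (redshifted) spectrum.
# sanity check: a spectrum flat in f_nu has AB magnitude 0 in any band
wave = np.linspace(2000.0, 4000.0, 2001)
flat_fnu = FNU0 * C_ANG / wave**2
box = ((wave > 2500.0) & (wave < 3500.0)).astype(float)
m_check = ab_mag(wave, flat_fnu, wave, box)  # -> 0 by construction
```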
To test the effect of the colour cut we plot the distributions of the COSMOS
2015 counterparts of our UVW1 sources before and after the colour cut as a
function of matching radius in Fig. 19. The distribution of all matches after
the colour cut (red solid histogram) approaches the distribution of best
matches (red and black dashed histograms), and this behaviour holds up to 2.0
arcsec. We could therefore use this matching radius for cross-matching our
UVW1 source-list with other ancillary catalogues, provided we apply the colour
cut UVW1$-u$ $>-1$. However, to be on the safe side, we use a conservative
value of 1.5 arcsec as the matching radius for our cross-matching.
## References
* Alam et al. (2015) Alam S., et al., 2015, ApJS, 219, 12
* Alarcon et al. (2021) Alarcon A., et al., 2021, MNRAS, 501, 6103
* Arnouts et al. (2005) Arnouts S., et al., 2005, ApJ, 619, L43
* Assef et al. (2013) Assef R. J., et al., 2013, ApJ, 772, 26
* Behroozi et al. (2010) Behroozi P. S., Conroy C., Wechsler R. H., 2010, ApJ, 717, 379
* Bruzual & Charlot (2003) Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
* Bullock & Boylan-Kolchin (2017) Bullock J. S., Boylan-Kolchin M., 2017, ARA&A, 55, 343
* Calzetti et al. (2000) Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682
* Cappelluti et al. (2007) Cappelluti N., et al., 2007, ApJS, 172, 341
* Castelli & Kurucz (2003) Castelli F., Kurucz R. L., 2003, in Modelling of Stellar Atmospheres. (arXiv:astro-ph/0405087)
* Chabrier (2003) Chabrier G., 2003, PASP, 115, 763
* Coil et al. (2011) Coil A. L., et al., 2011, ApJ, 741, 8
* Cole et al. (2001) Cole S., et al., 2001, MNRAS, 326, 255
* Cucciati et al. (2012) Cucciati et al., 2012, A&A, 539, A31
* Damjanov et al. (2018) Damjanov I., Zahid H. J., Geller M. J., Fabricant D. G., Hwang H. S., 2018, ApJS, 234, 21
* Darvish et al. (2017) Darvish B., Mobasher B., Martin D. C., Sobral D., Scoville N., Stroe A., Hemmati S., Kartaltepe J., 2017, ApJ, 837, 16
* Davies et al. (2015) Davies L. J. M., et al., 2015, MNRAS, 447, 1014
* Draine & Li (2007) Draine B. T., Li A., 2007, ApJ, 657, 810
* Fazio et al. (2004) Fazio G. G., et al., 2004, ApJS, 154, 10
* Foreman-Mackey et al. (2013) Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306
* Gaia Collaboration et al. (2018) Gaia Collaboration et al., 2018, A&A, 616, A1
* Gehrels (1986) Gehrels N., 1986, ApJ, 303, 336
* Genzel et al. (1998) Genzel R., et al., 1998, ApJ, 498, 579
* González Delgado et al. (1998) González Delgado R. M., Leitherer C., Heckman T., Lowenthal J. D., Ferguson H. C., Robert C., 1998, ApJ, 495, 698
* Goto et al. (2011) Goto T., et al., 2011, MNRAS, 414, 1903
* Grevesse & Sauval (1998) Grevesse N., Sauval A. J., 1998, Space Sci. Rev., 85, 161
* Hagen et al. (2015) Hagen L. M. Z., Hoversten E. A., Gronwall C., Wolf C., Siegel M. H., Page M., Hagen A., 2015, ApJ, 808, 178
* Harris et al. (2016) Harris D. W., et al., 2016, AJ, 151, 155
* Hasinger et al. (2007) Hasinger G., et al., 2007, ApJS, 172, 29
* Hasinger et al. (2018) Hasinger G., et al., 2018, ApJ, 858, 77
* Hathi et al. (2010) Hathi N. P., et al., 2010, ApJ, 720, 1708
* Hildebrandt et al. (2009a) Hildebrandt H., Pielorz J., Erben T., van Waerbeke L., Simon P., Capak P., 2009a, A&A, 498, 725
* Hildebrandt et al. (2009b) Hildebrandt H., van Waerbeke L., Erben T., 2009b, A&A, 507, 683
* Hogg (1999) Hogg D. W., 1999, arXiv e-prints, pp astro–ph/9905116
* Hogg et al. (2002) Hogg D. W., Baldry I. K., Blanton M. R., Eisenstein D. J., 2002, arXiv e-prints, pp astro–ph/0210394
* İkiz et al. (2020) İkiz T., Peletier R. F., Barthel P. D., Yeşilyaprak C., 2020, A&A, 640, A68
* Ilbert et al. (2009) Ilbert O., et al., 2009, ApJ, 690, 1236
* Jin et al. (2018) Jin S., et al., 2018, ApJ, 864, 56
* Kartaltepe et al. (2010) Kartaltepe J. S., et al., 2010, ApJ, 709, 572
* Kashino et al. (2019) Kashino D., et al., 2019, ApJS, 241, 10
* Kennicutt & Evans (2012) Kennicutt R. C., Evans N. J., 2012, ARA&A, 50, 531
* Kinney et al. (1996) Kinney A. L., Calzetti D., Bohlin R. C., McQuade K., Storchi-Bergmann T., Schmitt H. R., 1996, ApJ, 467, 38
* Knobel et al. (2012) Knobel C., et al., 2012, ApJ, 753, 121
* Koekemoer et al. (2007) Koekemoer A. M., et al., 2007, ApJS, 172, 196
* Laigle et al. (2016) Laigle C., et al., 2016, ApJS, 224, 24
* Leauthaud et al. (2007) Leauthaud A., et al., 2007, ApJS, 172, 219
* Lilly et al. (1996) Lilly S. J., Le Fevre O., Hammer F., Crampton D., 1996, ApJ, 460, L1
* Lilly et al. (2007) Lilly S. J., et al., 2007, ApJS, 172, 70
* Limber (1953) Limber D. N., 1953, ApJ, 117, 134
* Madau & Dickinson (2014) Madau P., Dickinson M., 2014, ARA&A, 52, 415
* Madau et al. (1998) Madau P., Pozzetti L., Dickinson M., 1998, ApJ, 498, 106
* Marchesi et al. (2016) Marchesi S., et al., 2016, ApJ, 817, 34
* Martin et al. (2005) Martin D. C., et al., 2005, ApJ, 619, L1
* Mason et al. (2001) Mason K. O., et al., 2001, A&A, 365, L36
* Moster et al. (2010) Moster B. P., Somerville R. S., Maulbetsch C., van den Bosch F. C., Macciò A. V., Naab T., Oser L., 2010, ApJ, 710, 903
* Moutard et al. (2020) Moutard T., Sawicki M., Arnouts S., Golob A., Coupon J., Ilbert O., Yang X., Gwyn S., 2020, MNRAS, 494, 1894
* Muzzin et al. (2013) Muzzin A., et al., 2013, The Astrophysical Journal Supplement Series, 206, 8
* Narayan & Wallington (1993) Narayan R., Wallington S., 1993, in Surdej J., Fraipont-Caro D., Gosset E., Refsdal S., Remy M., eds, Liege International Astrophysical Colloquia Vol. 31, Liege International Astrophysical Colloquia. p. 217
* Nayyeri et al. (2017) Nayyeri H., et al., 2017, ApJS, 228, 7
* Oesch et al. (2010) Oesch P. A., et al., 2010, ApJ, 725, L150
* Oke & Gunn (1983) Oke J. B., Gunn J. E., 1983, ApJ, 266, 713
* Oliver et al. (2012) Oliver S. J., et al., 2012, MNRAS, 424, 1614
* Page & Carrera (2000) Page M. J., Carrera F. J., 2000, MNRAS, 311, 433
* Page et al. (2012) Page M. J., et al., 2012, MNRAS, 426, 903
* Page et al. (2021) Page M. J., et al., 2021, MNRAS, 506, 473
* Parsa et al. (2016) Parsa S., Dunlop J. S., McLure R. J., Mortlock A., 2016, MNRAS, 456, 3194
  * Paulino-Afonso et al. (2020) Paulino-Afonso A., Sobral D., Darvish B., Ribeiro B., Smail I., Best P., Stroe A., Cairns J., 2020, A&A, 633, A70
* Prescott et al. (2006) Prescott M. K. M., Impey C. D., Cool R. J., Scoville N. Z., 2006, ApJ, 644, 100
* Sabti et al. (2021) Sabti N., Muñoz J. B., Blas D., 2021, J. Cosmology Astropart. Phys., 2021, 010
* Sanders & Mirabel (1996) Sanders D. B., Mirabel I. F., 1996, ARA&A, 34, 749
* Schechter (1976) Schechter P., 1976, ApJ, 203, 297
* Schlafly & Finkbeiner (2011) Schlafly E. F., Finkbeiner D. P., 2011, ApJ, 737, 103
* Schlegel et al. (1998) Schlegel D. J., Finkbeiner D. P., Davis M., 1998, ApJ, 500, 525
* Scoville et al. (2007) Scoville N., et al., 2007, ApJS, 172, 1
* Scranton et al. (2005) Scranton R., et al., 2005, The Astrophysical Journal, 633, 589
* Sharma et al. (2022) Sharma M., Page M. J., Breeveld A. A., 2022, MNRAS, 511, 4882
* Sheth & Tormen (1999) Sheth R. K., Tormen G., 1999, MNRAS, 308, 119
* Shirasaki et al. (2021) Shirasaki M., Ishiyama T., Ando S., 2021, ApJ, 922, 89
* Somerville & Davé (2015) Somerville R. S., Davé R., 2015, ARA&A, 53, 51
* Somerville et al. (2004) Somerville R. S., Lee K., Ferguson H. C., Gardner J. P., Moustakas L. A., Giavalisco M., 2004, ApJ, 600, L171
* Stern et al. (2012) Stern D., et al., 2012, ApJ, 753, 30
* Symeonidis & Page (2021) Symeonidis M., Page M. J., 2021, MNRAS, 503, 3992
* Trenti & Stiavelli (2008) Trenti M., Stiavelli M., 2008, ApJ, 676, 767
* Trump et al. (2009) Trump J. R., et al., 2009, ApJ, 696, 1195
* Vanden Berk et al. (2001) Vanden Berk D. E., et al., 2001, AJ, 122, 549
* Wright et al. (2010) Wright E. L., et al., 2010, AJ, 140, 1868
* Yang et al. (2003) Yang X., Mo H. J., van den Bosch F. C., 2003, MNRAS, 339, 1057
* Yoshida et al. (2006) Yoshida M., et al., 2006, ApJ, 653, 988
* Yoshiura et al. (2020) Yoshiura S., Oguri M., Takahashi K., Takahashi T., 2020, Phys. Rev. D, 102, 083515
* van der Wel et al. (2016) van der Wel A., et al., 2016, ApJS, 223, 29
# Spatio-Temporal Deep Learning Models of 3D Turbulence with Physics Informed
Diagnostics
Arvind T. Mohan$^{a,d}$, Dima Tretiak$^{b,d}$, Misha Chertkov$^{c}$ and Daniel Livescu$^{d}$
CONTACT Author: A.T.M<EMAIL_ADDRESS>
$^{a}$Center for Nonlinear Studies, Los Alamos National Laboratory, Los
Alamos, United States; $^{b}$Department of Mechanical Engineering, Georgia
Institute of Technology, Atlanta, United States; $^{c}$Program in Applied
Mathematics, University of Arizona, Tucson, United States; $^{d}$Computational
Physics and Methods Group, Los Alamos National Laboratory, Los Alamos, United
States
###### Abstract
Direct Numerical Simulations (DNS) of high Reynolds number turbulent flows,
encountered in engineering, earth sciences, and astrophysics, are not
tractable because of the curse of dimensionality associated with the number of
degrees of freedom required to resolve all the dynamically significant spatio-
temporal scales. Designing efficient and accurate Machine Learning (ML) based
reduced models of fluid turbulence has emerged recently as a promising
approach to overcoming the curse of dimensionality challenge. However, to make
the ML approaches reliable one needs to test their efficiency and accuracy,
which is recognized as an important but so far incomplete task. Aiming to
improve this missing component of the promising approach, we design and
evaluate two reduced models of 3D homogeneous isotropic turbulence and scalar
turbulence based on state-of-the-art ML algorithms of the Deep Learning (DL)
type: the Convolutional Generative Adversarial Network (C-GAN) and the
Compressed Convolutional Long-Short-Term-Memory (CC-LSTM) Network. The quality
and computational efficiency of the emulated velocity and scalar distributions
are compared with the ground-truth DNS via physics-rich statistical tests. The
reported results allow us to uncover and classify weak and strong aspects of
C-GAN and CC-LSTM. The reported results, as well as the physics-informed
methodology developed to test the ML-based solutions, are expected to play a
significant role in the future for making the DL schemes trustworthy through
injecting and controlling missing physical information in computationally
tractable ways.
###### keywords:
3D turbulence, deep learning, neural networks, Convolutional LSTM,
Autoencoders, Generative Adversarial Networks
††articletype: ARTICLE TEMPLATE
## 1 Introduction
Several research problems in seemingly disparate fields such as socio-
economics, infrastructure networks, physical and natural sciences etc., have a
common thread: The data from these systems consist of multiple features
varying in both space and time exhibiting strong spatio-temporal dynamics. In
addition, many of these systems are high dimensional with millions or billions
of degrees of freedom, making them exceptionally complex to study
theoretically by means of mathematical and statistical analysis. Such systems
are often modeled through numerical computations producing vast amounts of
data. However, many practical high dimensional cases arising in engineering,
earth sciences and climate modeling, make reliable numerical computations
virtually impossible because of the sheer amount of the spatio-temporal
resolution required to simulate the governing fluid-mechanics equations with
high fidelity. One naturally asks if data science approaches, improved
dramatically in recent years, can help to resolve the challenge.
Deep Learning (DL), and specifically Deep Neural Networks (NNs), have
established themselves as the state-of-the art for data driven models, with
successes in myriad applications. Not surprisingly, there has also been a
surge of interest in DL applications to fluid mechanics, specifically to
computational fluid dynamics (CFD) of turbulent flows. Several recent
advancements in applications of DL and classical machine learning techniques
to CFD have focused on improving Reynolds Averaged Navier-Stokes (RANS) and
Large Eddy Simulation (LES) techniques. In these approaches, the turbulent
scales are intelligently parameterized for a class of flows through learning
from the ground truth provided by Direct Numerical Simulations (DNS). Some of
these approaches have augmented existing turbulence models with traditional ML
approaches [wu2018physics, wang2017comprehensive, tracey2015machine,
singh2017machine] while others have utilized NNs to learn Reynolds-stress
closures [maulik2019subgrid, ling2016reynolds], thereby reducing computational
costs of RANS/LES and increasing accuracy.
While we acknowledge that physics parameterization of turbulence is a valuable
body of work in itself, we also remark that there are many applications of
interest sensitive to accurate resolution of boundary/initial conditions as in
[klein2003digital, di2006synthetic] and/or generating complex synthetic flows
[juneja1994synthetic, jarrin2006synthetic], that require much deeper insight
into modeling underlying spatio-temporal phenomena. Efforts in these areas are
focused on constructing physics-trustworthy and implementation-efficient
Reduced Order Models (ROMs).
The challenge then boils down to learning the behavior of the underlying
dynamical system (turbulence) in a ROM, which can then be used to generate
spatio-temporal samples consistent with the correlations expected in
turbulence. Traditional approaches to resolving these challenges
rely on computationally efficient projection methods of the Galerkin type –
see e.g. [rempfer2000low, noack2005need, carlberg2011efficient, qian2020lift].
Other methods, such as based on sparse coding [deshmukh2016model,
sargsyan2015nonlinear], cluster expansion [kaiser2014cluster] and also
networks of vortices [nair2015network], were also utilized to construct ROMs.
Simultaneously, several advances in the recent decade have occurred in using
DL, which have made spectacular progress in extracting valuable patterns from
large data-sets. Most of these successes have been in areas of image
classification (i.e. spatial complexity) and language modeling (sequential
complexity), which are associated with the focused problems of high-priority
in the information industry. As a result, DL for modeling spatio-temporal
complexity in high dimensions has not yet progressed at the same rate.
However, the ubiquity of complex spatio-temporal structures in advanced
technological applications has caught the attention of the DL community in recent
years, and significant advances have been made in developing generative models
[goodfellow2014generative, xingjian2015convolutional] for image generation and
video classification/generation. Much of this interest had originated from the
computer graphics and animation community, with several recent works focusing
on realistic fluid animations [chu2017data], splash animation [um2018liquid]
and droplet dynamics [mukherjee2018neuraldrop]. These recent efforts have
demonstrated that DL methods, such as Generative Adversarial Networks (GANs),
have a tremendous potential to handle large and spatio-temporally complex
datasets. However, these impressive recent results - primarily from the
animation and graphics communities - remain rudimentary in exploring and
understanding the underlying multi-scale physical phenomena. In a parallel
development, driven largely by the dynamical-systems oriented physics
community, interesting advances were made in reservoir computing
[zimmermann2018observing, lu2017reservoir, pathak2018hybrid], allowing, in
particular, advanced modeling of relatively simple but chaotic systems, such
as those governed by the one-dimensional Kuramoto-Sivashinsky equation
[pathak2018model]. However, this line of work has not yet progressed
sufficiently far to describe truly multi-scale complex systems. In particular,
we are yet to properly understand the capability of DL methods to represent
complex turbulence phenomena, specifically for methods based on GAN
architectures [wu2019enforcing, king2018deep, yang2019enforcing,
chen2019aerodynamic].
This manuscript advances this cause and focuses on exploring the capability of
various DL algorithms to model the dynamics of turbulence. Specifically, we
focus on the analysis of algorithms/methods which are split into the following
two categories: Dynamic-map and Static-map. Dynamic-map methods model the
dynamics of the flow in time, whereas Static-map methods model samples of the
flow, without accounting for any temporal dynamics that might be present.
Essentially, Dynamic-map methods are formulated as an input-output
(supervised) learning problem, such that, given a sequence of flow
realizations, a NN is tasked with predicting the realizations at subsequent
time instants. We adapt algorithms from the DL literature which include both
dynamic and static maps, for the aforementioned turbulence application. The
core focus of this work is on physically-sound analysis of predictions
provided by the NNs. We hope that the insights provided by this analysis will
help to demystify reported successes of these black-boxes and pave a road for
their further use as “gray” (if not completely transparent) boxes for multi-
scale and physics-rich applications, in particular by incorporating more
physics into the NN’s design.
The remainder of the manuscript is organized as follows. Section 2 outlines
the static and dynamic map neural network architectures studied in this work.
In Section 3 we present the DNS dataset which is utilized as ground truth
throughout this work. Section 4 outlines the turbulence diagnostic metrics we
employ to assess the validity of the machine learning models. The results for
the static-map architecture are presented in Section 5. Our approach for the
dynamic-map architecture via dimensionality reduction is presented in Section
2.2, followed by results in Section 7. Finally, we discuss our findings and
scope for future efforts in Section 8.
## 2 Deep Learning Algorithms
In this section, we describe two DL approaches discussed in the manuscript:
Static-map and Dynamic-map.
### 2.1 Static-map: Generative Adversarial Networks (GANs)
Generative Adversarial Networks (GANs), proposed by Goodfellow et al.
[goodfellow2014generative, radford2015unsupervised], are built from two NNs,
called the generator and the discriminator. In this architecture, the
layers within the generator typically consist of transpose convolution, batch
normalization, and ReLU activation, respectively. The discriminator mirrors
the generator’s architecture by using standard convolutional layers, and
notably, contains no fully connected layers. The discriminator’s final
activation function is sigmoid; therefore, its output is a probability of the
sample being real or fake. In summary, the generator up-samples a latent
vector z to generate snapshots while the discriminator down-samples to
classify. Batch normalization modules are present in both networks and address
the issue of changing data distributions between layers in the models. Ioffe
et al.[Ioffe] named this issue internal covariate shift and addressed it in
detail in their paper. Our architecture differs from this vanilla GAN by
employing convolutional layers (CGANs) to account for the high dimensional
data, and by several modifications to improve training stability and
performance, as detailed below.
Figure 1: Schematic of Generative Adversarial Networks (GANs) with
Convolutional Generators and Discriminators for 3D turbulence
Figure 1 illustrates the CGAN architecture, including the input and output
sizes. There are two phases to a training cycle, one for each network. First,
the discriminator is trained to differentiate between real samples and
generated samples. The real images are labeled as a 0 and the generated images
as a 1. Predicted labels are compared to the target labels, and then the loss
gradients are propagated through the network. The generator is trained using
the following dynamic loss function.
$\mathrm{Loss}_{G}=\mathrm{BCE}(D(G(z)),0)$
We take the Binary Cross Entropy between the discriminator’s label for the
generated snapshots, denoted $D(G(z))$, and the target label. The target for
the generator’s loss function is 0, i.e. the generator tries to produce
samples indistinguishable from the real samples from the perspective of the
discriminator. While one network trains, the other’s weights are frozen and
not updated. As the two networks train together, the gradients from the
discriminator allow the generator to learn the distribution of the training
data as it tries to replicate it. Another key consideration when training
CGANs is the balance between the discriminator and the generator. In general,
the discriminator should perform better than the generator and correctly
identify whether the samples it receives are fake or real. If the
discriminator is too weak, it will not be able to process the details of each
sample and differentiate between DNS and generated data. However, if the
discriminator is far stronger than the generator, it no longer provides
meaningful gradients to the generator, preventing further training. We combat
this through a combination of label smoothing, architecture changes, variable
learning rates, and variable optimizers over the vanilla GANs architecture.
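With this label convention (real $=0$), the generator loss can be sketched numerically; the discriminator outputs below are made-up probabilities for illustration:

```python
import numpy as np

def bce(p, target):
    """Binary cross entropy of predicted probabilities p
    against a scalar target label."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    return float(-np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)))

# Loss_G = BCE(D(G(z)), 0): with real samples labelled 0, the generator
# does well when the discriminator assigns its outputs values near 0.
d_fooled = np.array([0.05, 0.10, 0.02])  # D believes the fakes are real
d_caught = np.array([0.95, 0.90, 0.98])  # D flags the fakes as fake
loss_fooled = bce(d_fooled, 0.0)  # small loss
loss_caught = bce(d_caught, 0.0)  # large loss
```

A dominant discriminator keeps `loss_caught`-like gradients flowing early in training, which is the balance discussed in the text.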
The CGANs employed in this work consist of an 8-layer discriminator and a
5-layer generator. The generator takes a uniform random vector $z$ as input,
and produces a cubic snapshot (of the same dimensions as the training
snapshots) as output. For a discussion of sampling $z$, please see Appendix
B.2. The discriminator and
generator do not have the mirrored structure typically found in vanilla GANs
literature. Instead, we found that adding further depth to the discriminator
allows it to better discern between the generated data and the real data.
Essentially, our deeper discriminator serves as a thorough accuracy check. A
larger kernel size ($7^{3}$) was found to perform well; we observe that, for
the accuracy of the predictions, the kernel size is especially important in
the generator’s transpose convolutional layers and less so in the
discriminator. Intuitively, we can see that the generator
performs regression compared to the discriminator, which has a possibly less
challenging objective of binary classification. For this reason, we decided to
break convention in designing the CGAN architecture, with non-symmetrical
generator and discriminator networks. We did not use the loss functions for
either network as a metric to determine training progress. Instead, we used
the physical diagnostics detailed in Section 4. We determined that the model
converged when our diagnostics stopped improving. Furthermore, we noticed that
the trained discriminator became a useful tool for selecting those generated
snapshots that it labeled real, which in turn fared better on our diagnostic
tests.
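The up-sampling in the generator follows the standard transpose-convolution output-size arithmetic; the layer parameters below are illustrative assumptions, not the exact configuration of our networks:

```python
def conv_transpose_out(n, kernel, stride, padding):
    """Output size along one dimension of a transpose-convolution layer."""
    return (n - 1) * stride - 2 * padding + kernel

# illustrative 5-layer stack growing a 1x1x1 latent input into a cube
size = 1
for kernel, stride, padding in [(4, 1, 0)] + [(4, 2, 1)] * 4:
    size = conv_transpose_out(size, kernel, stride, padding)
print(size)  # 64 along each spatial dimension
```

Each stride-2 layer roughly doubles the spatial extent, which is why a handful of layers suffice to reach snapshot-sized cubes from a small latent vector.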
An important issue observed during training was the tendency of the network to
“memorize” a subset of samples rather than learning the entire data
distribution. This is known as mode collapse [che2016mode, Salimans]. An
analogous, illustrative example of the same problem with the popular MNIST
[deng2012mnist] dataset would be if the CGANs were only able to reproduce the
number 4 and nothing else. This occurs when the generator reproduces a sample
which “fools” the discriminator. After doing so, it learns to continue
reproducing similar samples until it converges on what it presumes is an
optimal output which - in reality - is a collapsed output containing only a
small set of classes. This subset of classes is generally determined by the
initialized weights of both networks. Since the output minimizes the loss
function, there is nothing to push the generator into creating any other
sample. As this happens, changing the latent vector $z$ no longer induces a
change in the output image $y$. In the case of our CGANs, mode collapse was
readily apparent every time it occurred: the discriminator’s loss quickly
approached 0 while the generator’s loss exploded. Furthermore, different
snapshots were visually indistinguishable when cut into slices and shown as an
image (accomplished through a method similar to Figure 22 in the Appendix). To
rectify this issue, we included multiple dropout layers in both networks. By
using random dropout to nullify some of the networks’ nodes, we force the
generator into creating different types of samples by introducing extra noise
into the network [Salimans]. Hence, even after training, the same input $z$
will still produce different outputs $y$ due to the dropout layers present
in the generator network. This prevents it from converging to a single sample
subset. While mode collapse is one of the more important issues in training
CGANs, there are other practical considerations crucial to training
CGANs for complex datasets such as turbulence, and these are outlined in
Appendix B.
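The role of the always-on dropout layers can be illustrated with a minimal, purely schematic sketch: the "generator" below is a hypothetical stand-in (one fixed affine layer plus dropout), not the actual convolutional network used in this work, but it shows how the same input $z$ can yield different outputs when dropout remains active after training.

```python
import random

def dropout(vector, rate, rng):
    # Randomly zero each element with probability `rate` (inverted
    # rescaling is omitted for clarity). This is the stochastic element
    # that stays active at inference time, as described above.
    return [0.0 if rng.random() < rate else x for x in vector]

def generator_forward(z, rng, drop_rate=0.5):
    # Hypothetical stand-in for the generator: a fixed "layer" followed
    # by an always-on dropout layer. The real layers are convolutional.
    hidden = [2.0 * x + 1.0 for x in z]
    return dropout(hidden, drop_rate, rng)

rng = random.Random(0)
z = [0.1 * i for i in range(8)]
y1 = generator_forward(z, rng)
y2 = generator_forward(z, rng)
# The same z generally yields different outputs across calls because the
# dropout mask is resampled each time.
```

Each surviving element is the deterministic layer output, so only the random mask differs between the two calls.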
### 2.2 Dimensionality Reduction of Large Datasets with Convolutional
Autoencoder Neural Networks
The major challenge of data-driven modeling of large complex systems is that
time varying dynamics are fundamentally high dimensional in nature. Over the
years, several strong arguments have been made that in spite of its high
dimensional nature, the practically relevant large scale dynamics of many
systems of interest are typically low dimensional [holmes1997low]. It is
therefore argued that one can study the system reliably by modeling its low
dimensional representation (LDR), while ignoring other features.
Another important idea in dynamical systems theory is that the spatio-temporal
realizations of the system state contain information about the LDR, in form of
its observables [mezic2013analysis, bagheri2013koopman, rowley2009spectral].
Therefore, several studies have focused on estimating/approximating the LDR,
directly from observations of the actual system. This is a popular strategy
since there are several cases where it is difficult to analytically derive a
model for the LDR from the governing equations. Common examples are turbulence
(due to the complexity of the Navier-Stokes operator) and various earth
sciences problems where a theoretical description of the system is itself an
area of active research.
The LDR is often only the first step in building ROMs for system modeling,
with the next step being to model the temporal evolution of the LDR dynamics.
For turbulent flows, a popular strategy is to compute the LDR with Proper
Orthogonal Decomposition (POD) of the flow, whose modes contain dominant
dynamics in a smaller, low-dimensional sub-space compared to that of the
entire flow. These dominant modes are then evolved via Galerkin projection
[noack2005need], which projects the modal dynamics on the Navier-Stokes
equations, with the goal of approximating the evolution of the flow’s
intrinsic low dimensional attractor. A more recent innovation has been to
utilize Koopman operator theory to model the LDR by directly learning the
eigenpairs of the system [lusch2018deep, yeung2017learning]. However, Galerkin
projection based approaches require that the projected dynamics be
analytically represented, and maintaining temporal stability is a topic of
research [sirisup2004spectral]. Deep learning approaches demonstrated in Ref.
[mohan2018deep] use POD modes that were evolved with LSTM neural networks
instead of Galerkin projection. The results showed promise in the ability of
LSTM networks to capture non-linear, non-stationary dynamics in temporal
evolution. However, much like the POD-Galerkin approach, the efforts in Ref.
[mohan2018deep] did not account for variations in the spatial POD modes of the
LDR, and hence were limited in application. The CC-LSTM deep learning
architecture proposed in the present work significantly extends that
capability to include 3D spatio-temporal dynamics in a compute efficient
manner, thereby opening up the idea to larger datasets.
As mentioned previously, we construct an LDR with a Convolutional Autoencoder
NN, an architecture that has become increasingly popular in the deep learning
community [theis2017lossy]. A Convolutional Autoencoder (CAE) consists of
multi-layered
deep CNNs, which utilize the convolutional operators to successively reduce
the dimensionality of the data. The CAE learns compressed, low dimensional
“latent space” representations for each snapshot of the flow. The CAE has two
main components - the encoder and the decoder. The representational
information to transform the snapshot from its original high-dimensional state
to the latent space is stored in the encoder. Similarly, the reconstruction
from the latent to original state is learned by the decoder. Both the encoder
and decoder are tensors which are learned by standard neural network
backpropagation and optimization techniques. It is important to note that this
is a convolutional autoencoder, such that the spatial information is learned
by translating filters throughout the domain, as in a convolutional neural
network. These convolving filters capture various spatial correlations and
drastically reduce the number of weights we need to learn due to parameter-
sharing [goodfellow2016deep]. This makes the training considerably cost
effective and faster than using a standard fully-connected autoencoder. The
reader is referred to Ref. [goodfellow2016deep] for more details.
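The parameter-sharing argument above can be made concrete with a back-of-the-envelope count. The layer widths below (5 input features, 25 output features, a $3^{3}$ kernel) are illustrative choices consistent with the numbers quoted later in this paper, not the exact layer specification:

```python
# Parameter-count sketch contrasting a convolutional layer (globally
# shared kernel weights) with a fully-connected layer acting on a
# flattened 128^3 input with 5 features.
voxels = 128 ** 3
in_features, out_features = 5, 25
kernel = 3 ** 3  # a 3x3x3 convolutional kernel

conv_params = kernel * in_features * out_features     # shared everywhere
dense_params = (voxels * in_features) * out_features  # one weight per pair

ratio = dense_params // conv_params  # tens of thousands of times fewer weights
```

Even for this single layer, the convolutional variant needs only a few thousand weights, which is why the CAE is far cheaper to train than a fully-connected autoencoder.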
### 2.3 Dynamic-map: Compressed Convolutional LSTM (CC-LSTM)
#### 2.3.1 Convolutional LSTM: Potential And Challenges
Since turbulence datasets exhibit strong spatio-temporal dynamics, dynamic-map
networks can be a viable choice to learn these variations. The Convolutional
Neural Network (CNN) architecture is ideal for learning patterns in spatial
datasets, like images or volumetric datasets [qi2016volumetric]. More details
on the CNN architecture can be found in Appendix A.1. On the other hand, Long
Short Term Memory (LSTM) NNs have been found to be powerful for sequence
modeling, in applications ranging from language translation
[luong2015stanford] to financial forecasting applications [nelson2017stock].
The details of the LSTM architecture are presented in Appendix A.2. By
contrast, vanilla LSTMs are generally restricted to one-dimensional datasets
and cannot handle cases where the data exhibits spatial dynamics in addition
to temporal ones. In this architecture, an LSTM cell consists
of input and hidden states that are one-dimensional vectors. Therefore a two
or three-dimensional input (such as an image or a volumetric data field) has
to be resized to a single dimension. The “removal” of this dimensional
information fails to capture spatial correlations that may exist in such data,
leading to increased prediction errors, as reported by Xingjian et al.
[xingjian2015convolutional].
While deep learning literature on addressing this dual spatial/temporal
modeling challenge is scarce, a notable algorithm by Xingjian et al.
[xingjian2015convolutional] is the Convolutional LSTM (ConvLSTM). ConvLSTM
consists of a simple but powerful idea - to embed Convolutional kernels (used
in CNNs) in a LSTM to learn both spatial and sequential dynamics
simultaneously. As a direct consequence of this embedding, the LSTM cell can
now process hidden and input states in higher dimensions, as opposed to
strictly one-dimensional sequences in traditional LSTM. With this abstraction,
the same equations for the LSTM in Appendix A.2 can be used for the ConvLSTM cell,
with the only difference being that the input vector and the cell gates have
the same dimensionality. This enables us to provide a 2D/3D input and obtain
2D/3D vectors $C_{t}$ and $h_{t}$ as outputs from the ConvLSTM cell, thereby
retaining spatial information in the data. ConvLSTM has been successfully
demonstrated for several sequential image prediction/classification tasks
[wu2017convolutional, zhu2017multimodal, zhao2017learning].
In spite of its strengths, a major limitation of using ConvLSTM for large 2D
and 3D datasets has been its huge memory cost. The primary reason is the
complexity of embedding a convolutional kernel in a LSTM and unrolling the
network, which drastically increases the number of trainable parameters for
even moderate sized datasets. Consequently, existing literature on ConvLSTM
has primarily focused on 2D datasets, instead of 3D and higher dimensional
datasets, which are ubiquitous in scientific problems. As a result, there is a
clear need to adapt and rigorously evaluate ConvLSTMs for high dimensional
datasets like those encountered in turbulent flows, and compare the results
with popular methods like GANs. This is the focus of this paper.
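The memory argument above can be quantified roughly: the ConvLSTM's unrolled hidden and cell states scale with the spatial volume times the sequence length. The latent dimensions $15^{3}$ come from the CAE configuration described in Section 6; the sequence length below is an illustrative value, not one taken from this paper:

```python
# Back-of-the-envelope sketch of why ConvLSTM on the full 128^3 field is
# memory-hungry: unrolled state storage scales with spatial volume times
# sequence length, independent of the (shared) kernel parameter count.
full_state = 128 ** 3    # state elements per feature on the full grid
latent_state = 15 ** 3   # state elements per feature in latent space
seq_len = 10             # illustrative unrolling length (assumption)

full_activations = full_state * seq_len
latent_activations = latent_state * seq_len
reduction = full_state / latent_state  # ~621x fewer state elements per step
```

This roughly 600-fold reduction per feature per timestep is what makes the compressed approach of the next section tractable.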
#### 2.3.2 Compressed Convolutional LSTMs
Figure 2: Schematic of the Compressed ConvLSTM (CC-LSTM) Architecture with
Pre-trained Convolutional Autoencoder layers for dimensionality reduction of
Spatio-temporal 3D Flow Dataset
In order to reduce the computational/memory costs while also leveraging the
strengths of ConvLSTM, we propose a modified architecture where the high
dimensional flow snapshot (i.e. at any given time instant) is first
“compressed” to a low dimensional latent space, which is then used as a
training data for the ConvLSTM. The trained ConvLSTM predicts future instances
of the flow, also in latent space, which is subsequently “decompressed” to
recover the original dimensions of the flow. This compression and
decompression are accomplished using a Convolutional Autoencoder neural
network (CAE), and we call the combined CAE + ConvLSTM architecture the
Compressed Convolutional LSTM (CC-LSTM). This approach makes the ConvLSTM
approach more computationally tractable. A schematic detailing this
architecture is shown in Fig. 2. Further information about CAE is presented in
Section 2.2.
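The compress-predict-decompress data flow can be sketched with toy stand-ins. Here the encoder and decoder are simple 1D block-averaging and repetition maps, and `latent_step` is a persistence placeholder for the trained ConvLSTM; none of these are the actual trained networks:

```python
# Minimal sketch of the CC-LSTM data flow of Fig. 2: encode each snapshot
# to a latent vector, advance the latent sequence, then decode the
# prediction back to the original dimensions.

def encode(snapshot, factor=4):
    # Toy stand-in for the CAE encoder: block-average by `factor`.
    return [sum(snapshot[i:i + factor]) / factor
            for i in range(0, len(snapshot), factor)]

def decode(latent, factor=4):
    # Toy stand-in for the CAE decoder: nearest-neighbour upsampling.
    return [v for v in latent for _ in range(factor)]

def latent_step(latent_sequence):
    # Placeholder for the trained ConvLSTM: here a persistence forecast
    # that simply repeats the last latent state.
    return latent_sequence[-1]

# Three toy 1D "snapshots" standing in for the 3D flow sequence.
snapshots = [[float(i + t) for i in range(16)] for t in range(3)]
latents = [encode(s) for s in snapshots]
next_latent = latent_step(latents)
prediction = decode(next_latent)  # back at the original dimension
```

The essential point is that the expensive sequence model only ever sees the low dimensional latent vectors; the full-resolution field is touched only by the encoder and decoder.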
## 3 Dataset
The dataset consists of a 3D Direct Numerical Simulation (DNS) of homogeneous,
isotropic turbulence with passive scalars advected with the flow, in a box of
size $128^{3}$. Two passive scalars with different Probability Density
Functions (PDF) are considered here in order to provide more complexity to the
test cases, as explained below. We denote this dataset as ScalarHIT for the
remainder of this work. We provide a brief overview of the simulation and its
physics in this section. See [daniel2018reaction] for details. The ScalarHIT
dataset is obtained using the pseudo-spectral version of the CFDNS code, as
described in [daniel2018reaction]. We solve the incompressible Navier-Stokes
equations:
$\partial_{x_{i}}v_{i}=0,\qquad\partial_{t}v_{i}+v_{j}\partial_{x_{j}}v_{i}=-\frac{{1}}{\rho}\partial_{x_{i}}p+\nu\Delta
v_{i}+f_{i}^{v},$ (1)
where $f^{v}$ is a low band forcing, restricted to small wavenumbers $k<1.5$.
The $128^{3}$ pseudo-spectral simulations are dealiased using a combination of
phase-shifting and truncation to achieve a maximum resolved wave-number of
$k_{max}=\sqrt{{2}}/3\times 128\sim 60$.
Figure 3: Instantaneous turbulent kinetic energy from the Homogeneous
Isotropic Turbulence with Passive Scalars (ScalarHIT) dataset: (a) 3D view (b)
Cross-sectional views.
The spectral resolution used is $\eta k_{max}\sim 1.5$.
The scalar field $\phi$ evolves according to
$\displaystyle\partial_{t}\phi+v_{j}\partial_{x_{j}}\phi=\mathcal{{D}}\Delta\phi+f^{\phi},$
(2)
where the form of $f^{\phi}$ is designed such that the scalar PDF at
stationarity can be controlled. $\nu$ and $\mathcal{{D}}$ in Eqs. (1)-(2) are
viscosity and diffusion coefficients respectively. Two relevant parameters of
the flow are the Schmidt number ($\nu/\mathcal{{D}}$) and Reynolds number
($Re$). Simulations considered here are performed for a constant $Sc=1$. In
homogeneous isotropic turbulence, it is standard to associate $Re$ with the
Taylor microscale, as:
$\displaystyle
Re_{\lambda}=\sqrt{{\frac{{20}}{3}\frac{{\mathrm{{TKE^{2}}}}}{\nu\epsilon}}},$
(3)
where $\mathrm{{TKE}}$ is the turbulent kinetic energy.
Figure 4: 3D snapshots of the two passive scalar fields, with quasi-Gaussian
(left) and flat (right) PDFs (Ref. [daniel2018reaction]).
In this work, we use the novel scalar forcing approach based on a chemical
reaction analogy (RA), proposed in Ref. [daniel2018reaction]. This method can
produce more general scalar PDFs, for instance quasi-double-$\delta$ PDF,
compared to forcing methods that are limited to producing Gaussian or near-
Gaussian scalar PDFs. It also ensures the boundedness of the scalar field, in
contrast to previous methods that can violate naturally existing bounds. For
completeness, here we briefly describe the method and refer the reader to Ref.
[daniel2018reaction] for details. The RA method uses a hypothetical chemical
reaction to convert the mixed fluid back into unmixed pure states. Reactants
are identified based on a RA similar to that proposed in [cook2001transition]
to quantify the width of the Rayleigh-Taylor mixing layer and further
generalized in [livescu2008variable]. Thus, any partially mixed fluid state
can be considered as being composed of fully mixed fluid, M, where the scalar
has the value of its average, and excess pure fluid, E, i.e. fluid where the
scalar has the value of one of its bounds. Using standard reaction kinetics
formulas between M and E, Ref. [daniel2018reaction] arrived at a formula for
the forcing term, $f^{\phi}$ in Eqn. 2. If the scalar bounds are $\phi_{l}=-1$
and $\phi_{u}=+1$, then $f^{\phi}$ can be written in a compact form as
$f^{\phi}=sign(\phi)f_{c}|\phi|^{n}\left(1-|\phi|\right)^{m}$ (4)
where $m$, $n$ are the stoichiometric coefficients and $f_{c}$, which is related
to the reaction rate constant, defines the strength of the forcing. All 3
parameters influence the shape of the scalar PDF at stationarity.
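Eq. (4) can be transcribed directly; the sketch below uses the paper's stationary-case choice $m=n=1$ as the default and an illustrative value for $f_{c}$ (the paper does not quote its numerical values):

```python
import math

def scalar_forcing(phi, f_c=1.0, n=1, m=1):
    # Eq. (4): f^phi = sign(phi) * f_c * |phi|^n * (1 - |phi|)^m,
    # for a scalar bounded in [-1, 1]; f_c, n, m shape the scalar PDF.
    return math.copysign(1.0, phi) * f_c * abs(phi) ** n * (1 - abs(phi)) ** m
```

Note that the forcing vanishes at $\phi=0$ and at both bounds $\phi=\pm 1$, and pushes partially mixed fluid back toward the pure states, consistent with the reaction analogy described above.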
The forcing terms ensure that velocity and scalar fields attain stationary
states. The level of turbulence attained in the simulation translates to
$Re_{\lambda}\sim 91$ in the statistically steady regime. The scalar forcing
parameters are chosen such that scalar $\phi_{1}$ exhibits quasi-Gaussian
characteristics with kurtosis value of approximately 3, while scalar
$\phi_{2}$ has a much lower kurtosis value of 2.2. In both cases, $m=n=1$, but
$f_{c}$ has different values. We expect that the two NNs considered here would
be able to capture the quasi-Gaussian scalar PDF. The ability to capture the
scalar bounds is a novel test for both the static and dynamic maps.
Both networks studied in this work are trained on the ScalarHIT dataset. A
static-map network is agnostic to the sequential order in the snapshots, and
only seeks to learn the statistics of the flow in individual snapshots. We use
DNS training snapshots from $\tau\,=\,0-3$. Here, $\tau$ is the normalized
large eddy turnover time, corresponding to a single cycle in the statistically
stationary flow. The test data to validate the trained model predictions
consists of snapshots from $\tau\,=\,3-4.5$. Dynamic-map networks can also use
the same train/test data split as above. However, since the model aims to
capture the temporal dynamics of the flow, the sequential information in the
train/test data is retained throughout training.
## 4 Diagnostic Tests for Turbulence
In this section we review basic statistical concepts commonly used in the
modern literature to analyze results of theoretical, computational and
experimental studies of homogeneous isotropic incompressible turbulence in
three dimensions. Combinations of these concepts are used in the main part of
the manuscript as metrics to compare the results of the two (static-map and
dynamic-map) DL methods.
We assume that a $3d$ snapshot, or its $2d$ slice, or a temporal sequence of
snapshots of the velocity field, ${\bm v}=(v^{(i)}({\bm r})|i=1,2,3)$, is
investigated. Here, we focus on analyses of static correlations within the
snapshots. The remainder of this section contains classical material described
in many books on the theory of turbulence (see e.g. [frisch_1995]). We
describe the main turbulence concepts mentioned in the results section one by
one, starting from simpler ones and advancing towards more complex concepts. A
key expectation from any generative machine learning model is the
ability to predict non-Gaussian statistics.
### 4.1 $4/5$ Kolmogorov Law and the Energy Spectra
A main statement of the Kolmogorov theory of turbulence is that asymptotically
in the inertial range, i.e. at $L\gg r\gg\eta$, where $L$ is the largest (so-
called energy-containing) scale of turbulence and $\eta$ is the smallest (so-
called Kolmogorov or viscous) scale of turbulence, the statistics of motion
have a universal form that is uniquely dependent on the kinetic energy
dissipation,
$\varepsilon=\nu\langle\left(\nabla^{(i)}v^{(j)}\right)\left(\nabla^{(i)}v^{(j)}\right)\rangle/2$,
and does not depend on viscosity, $\nu$. A consequence of the existence of the
inertial range is that, within this range, the transfer term,
$F(r)\doteq\langle v^{(j)}({\bm 0})v^{(i)}({\bm r})\nabla^{(i)}v^{(j)}({\bm
r})\rangle,$
does not depend on $r$. Moreover, (in fact, the only formally proven statement
of the theory) the so-called $4/5$-law states that for the third-order moment
of the longitudinal velocity increment, $S_{3}^{(i,j,k)}({\bm
r})\doteq\langle\left(v^{(i)}({\bm r})-v^{(i)}({\bm
0})\right)\left(v^{(j)}({\bm r})-v^{(j)}({\bm 0})\right)\left(v^{(k)}({\bm
r})-v^{(k)}({\bm 0})\right)\rangle$:
$\displaystyle L\gg r\gg\eta:\quad
S_{3}^{(i,j,k)}\frac{r^{i}r^{j}r^{k}}{r^{3}}=-\frac{4}{5}\varepsilon r.$ (5)
The Kolmogorov self-similarity hypothesis applied to the second moment
velocity increment results in the expectation that within the inertial range,
this scales as $S_{2}(r)\sim C_{2}(\epsilon r)^{2/3}$. This feature is
typically tested by plotting the energy spectrum of turbulence in the wave
vector domain, where it is restated as a $-5/3$ power law dependence of the
energy spectrum with respect to the wavenumber, and will be addressed in the
forthcoming sections.
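The spectral diagnostic mentioned above can be sketched as a shell-averaged spectrum of a single velocity component. This is a minimal illustration of the binning procedure, not the exact post-processing code used for the figures in this paper:

```python
import numpy as np

def energy_spectrum(u):
    # Shell-averaged kinetic energy spectrum E(k) of a single velocity
    # component on a periodic N^3 grid: bin 0.5*|u_hat(k)|^2 by the
    # integer shell round(|k|).
    n = u.shape[0]
    u_hat = np.fft.fftn(u) / n ** 3
    energy = 0.5 * np.abs(u_hat) ** 2
    k = np.fft.fftfreq(n) * n  # integer wavenumbers -n/2..n/2-1
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    shells = np.rint(np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)).astype(int)
    return np.bincount(shells.ravel(), weights=energy.ravel())

rng = np.random.default_rng(1)
u = rng.standard_normal((16, 16, 16))  # toy field, not turbulence data
spec = energy_spectrum(u)
# Parseval: summing E(k) over all shells recovers the mean kinetic energy
# of the component, 0.5 * <u^2>.
```

For an actual turbulent field, plotting `spec` against the shell index on log-log axes is the standard way to test for the $-5/3$ inertial-range scaling discussed above.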
### 4.2 PDF of Longitudinal Velocity Gradient
Consistently with Eq. (5), estimation of the moments of order $n$ of the
longitudinal velocity gradient results in
$\displaystyle
D_{n}\doteq\Biggl{\langle}\left|\left(\nabla^{(i)}v^{(j)}\right)\left(\nabla^{(i)}v^{(j)}\right)\right|^{n/2}\Biggr{\rangle}\sim\frac{S_{n}(\eta)}{\eta^{n}},$
(6)
where $S_{n}(r)\doteq\langle\prod_{i=1}^{n}\left(v^{(i)}({\bm r})-v^{(i)}({\bm
0})\right){\bm r}^{(i)}/|{\bm r}^{(i)}|\rangle$. Intermittency (extreme non-Gaussianity) of turbulence is more strongly expressed at larger $n$ in Eq. (6).
### 4.3 Statistics of coarse-grained velocity gradients: $Q-R$ plane.
The properties of the velocity gradient tensor are related to a wide variety
of turbulence characteristics, such as the flow topology, deformation of
material volume, energy cascade, and intermittency. One of the hallmarks of 3D
turbulence is the tear-drop shape of the joint-PDF of the second (usually
denoted by Q) and third (usually denoted by R) invariants of the velocity
gradient tensor. This form can be related to the vortex stretching mechanism
and shows that certain local flow configurations are preferred in 3D
turbulence. A useful extension of this analysis was proposed in Ref.
[chertkov1999lagrangian] to velocity gradient coarse-grained over an inertial-
range scale. Following the notations from Ref. [chertkov1999lagrangian], the
coarse-grained velocity gradient tensor M is constructed by interpolating the
velocity at Lagrangian points, $i$, at the center of mass of the associated
tetrahedron of volume $\Gamma$ as
$M_{ab}=\left(\rho^{-1}\right)_{i}^{a}v_{i}^{b}\,-\frac{\delta_{ab}}{3}tr\left(\mathbf{\rho^{-1}_{i}}\mathbf{v}_{i}\right),$
(7)
where $a$ and $b$ are spatial coordinates and $\rho_{i}^{a}$ is the vector
resulting from the vertex positions after the elimination of the center of
mass. The invariants $Q$ and $R$ are then defined such that
$Q\,=\,-(1/2)tr\mathbf{M}^{2}$ and $R\,=\,-(1/3)tr\mathbf{M}^{3}$. Note that
the trace of M (i.e. the first invariant) is zero due to incompressibility.
Then the $Q-R$ joint-PDF indicates the turbulence structure at scale
$r=|\rho|$. Different parts of the $Q-R$ plane are associated with different
structures of the flow. Thus, the lower right corner (negative $Q$, positive
$R$), which has higher probability than other regions, corresponds to a
pancake type of structure (two expanding directions, one contracting) with the
direction of rotation (vorticity) aligned with the second eigenvector of the
stress. This tear-drop shape of the probability isolines becomes more
prominent as the coarse-graining scale decreases.
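The invariants defined above can be computed directly from any traceless tensor. A minimal sketch, using a purely straining example with two expanding directions and one contracting:

```python
import numpy as np

def qr_invariants(M):
    # Q and R invariants of a traceless (coarse-grained) velocity
    # gradient tensor M, as defined above:
    #   Q = -(1/2) tr(M^2),  R = -(1/3) tr(M^3).
    assert abs(np.trace(M)) < 1e-12, "M must be traceless (incompressibility)"
    Q = -0.5 * np.trace(M @ M)
    R = -(1.0 / 3.0) * np.trace(M @ M @ M)
    return Q, R

# Axisymmetric straining example: two expanding directions, one contracting.
M = np.diag([1.0, 1.0, -2.0])
Q, R = qr_invariants(M)  # Q = -3, R = 2
```

In practice $M$ is assembled from Eq. (7) at many sample points, and the joint histogram of the resulting $(R,Q)$ pairs gives the tear-drop-shaped PDF discussed above.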
## 5 Results using Convolutional Generative Adversarial Networks (CGANs) for
3D Turbulence
The network is trained as described in Section 2.1, with the objective to
model the statistics of the ScalarHIT data. We now present the results where
we attempt to generate samples of the ScalarHIT flow and compare it with the
real flow. It is important to note that in the CGAN architecture the predicted
samples are static rather than temporal, i.e. the predictions are not
correlated in time, unlike exotic variants such as RNN-GANs [mogren2016c].
Figure 5 shows 3 randomly chosen samples out of the hundreds generated by the
CGANs, followed by their average. The diagnostic metrics used are those described in
Section 4. The first is the energy spectra, on the left in Fig. 5. We can see
that the spectra captured by the CGANs match the DNS very closely at low and
mid-range wavenumbers, which correspond to the large and inertial scales of
turbulence. Discrepancies occur at higher wavenumbers in the inertial scales
and all the viscous scales. The next metric (in the center panel of Fig. 5)
is the probability density function (PDF) of the velocity gradient. The
objective is to test how the network captures intermittent events in the flow,
which are associated with the tails of the PDF. The intermittent events are
seen in the strongly non-Gaussian shape of the PDF, characterized by extended
tails, and the CGANs come close to reproducing this trend, with discrepancies
occurring at the tails. This behavior is seen in all samples that the CGANs
generate, as the network has learned the statistics of the stationary flow
dataset; $3$ examples are shown in Fig. 5.
on the right is the $Q-R$ joint PDF, since it captures the 3D morphology of
the flow. The $Q-R$ joint PDF at $r=0$ corresponds to small scale behavior,
$r=8$ for inertial range scales and $r=32$ for large scale behavior, as
explained in Chertkov et al. [chertkov1999lagrangian]. Even though the kernel
sizes for all networks in the CGANs were $\leq 7$, i.e. significantly larger
than the commonly used kernels of size 3, we notice that this does not improve
the large scale resolution. A clearer picture emerges from the $Q-R$ joint PDF, where we
notice that CGANs neglect the smaller scales as seen in the energy spectra,
while the inertial range scales are modeled reasonably well. Finally, the
CGANs seem to model the qualitative statistics of the stretching and
compression of the large scale flow morphology, with discrepancies occurring
in some of the quadrants. This finding also illustrates the value of $Q-R$
joint PDF in assessing any ML turbulence model, since such subtle deviations
in large scale structures are not noticed in the widely used Kolmogorov
spectrum and PDFs of velocity gradient magnitude.
Figure 5: Energy spectra (left), PDF of the longitudinal velocity gradient
magnitude (middle), and joint PDFs of the Q and R invariants of the coarse-
grained velocity gradient tensor (right) for randomly chosen static snapshot
predictions produced by CGANs.
Figure 6: PDFs of passive scalar $\phi_{1}$ predicted by CGANs and comparison
with DNS. The CGANs fail to capture the narrow-band statistics and the
predicted scalar PDF is unbounded.
Figure 7: PDFs of passive scalar $\phi_{2}$ predicted by CGANs and comparison
with DNS. The CGANs fail to capture the wide-band statistics seen in the DNS
and the predicted scalar PDF is unbounded.
We now turn our attention to the two passive scalars $\phi_{1}$ and $\phi_{2}$
which are advected with the velocity field. Figures 6 and 7 compare the CGANs
scalar PDFs predictions against the DNS results for $\phi_{1}$ and $\phi_{2}$,
respectively. The passive scalars were introduced with specific, hard physical
bounds $(-1.0,+1.0)$, as encountered for many physical scalars (e.g. mass
fractions). Since scalar $\phi_{1}$ has a quasi-Gaussian PDF, its values are
well within the bounds. Scalar $\phi_{2}$ has a much flatter PDF and attains
values close to the specified bounds. We see that the CGANs predictions for
both scalars are considerably worse compared to the velocity predictions, with
only $\phi_{1}$ prediction capturing the general trend of the DNS PDF. The
broadband flatter PDFs seen for $\phi_{2}$ are missed by the CGANs.
In summary, the CGAN models overshoot the amplitude (y-axis) significantly and
lose a lot of fine detail. Therefore, even though the convolutional generator
can sufficiently learn trends of large-scale behavior in the velocity fields,
it appears to have severe difficulties learning the quantities advected by
those same velocity fields, especially for highly non-Gaussian PDFs (as seen
in Fig. 7). This points to a topic worthy of further research, given the
popularity of GANs in modeling turbulent velocity fields and the fact that
passive scalars have not been previously explored.
## 6 Analysis of 3D Turbulence Dimensionality Reduction with Convolutional
Autoencoders
Figure 8: Variable Striding in Convolutional Kernels with kernel size $\alpha$
and stride length $\beta$: $\beta\,=\,1$ corresponds to cell by cell striding,
while $\beta\,=\,3$ skips over 2 cells for every stride, thereby producing a
convolved domain of lower dimension Figure 9: Schematic of a Convolutional
Autoencoder NN Architecture with kernel size $\alpha$ and kernel stride
$\beta$ for dimensionality reduction of input data to latent space, and
reconstruction from reduced latent space to original dimensions
A schematic of the CAE architecture used for the ScalarHIT dataset is shown in
Fig. 9. The CAE greatly reduces the memory utilization since the same $n$
weights in a convolutional kernel are translated throughout the domain of size
$m\times m$, where $m\gg n$. These $n$ weights are global, hence learned for all
regions of the domain. In contrast, the standard, fully-connected autoencoder
architecture would need $m^{2}$ weights which are local, leading to
prohibitive memory consumption and extremely high training cost. In addition
to computational benefits, the design of the convolutional kernel offers
flexibility in tuning the number of shared weights and mode of translation
through the domain, as will be explained in this section.
Another important aspect is the number of features i.e. trainable parameters
in the CAE. In the case of ScalarHIT dataset, there are $5$ features
corresponding to the $3$ components of velocity and $2$ passive scalars.
Increasing the number of features in the latent space allows it to encode more
information at a minimal increase in computing cost, while also compressing
the high dimensional dataset. We thus define the compression ratio $z$ as
$z\,=\,\frac{(\mathrm{original\ dimensions\times number\ of\ input\
features})}{(\mathrm{latent\ dimensions\times number\ of\ latent\ features})}$
(8)
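Eq. (8) can be evaluated directly for the configuration used in this work (a $128^{3}$ input with $5$ features and a $15^{3}$ latent space with $25$ features):

```python
# Compression ratio z from Eq. (8) for the quoted configuration.
input_size = 128 ** 3 * 5   # original dimensions x number of input features
latent_size = 15 ** 3 * 25  # latent dimensions x number of latent features
z = input_size / latent_size  # ~124, i.e. the roughly 125-fold reduction
```

Note that multiplying the latent feature count only divides $z$ linearly, whereas the latent spatial dimensions enter cubically; this is why the latent dimensions dominate the compression ratio, as discussed below.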
From Equation 8, it follows that for input dimensions of size $128^{3}$ with
$5$ features and a latent space of dimensions $15^{3}$ with $25$ features,
increasing the number of latent space features has a considerably smaller
impact on $z$ than the latent space dimensions. As such, the most significant
impact comes from the latent space dimensions, giving us the liberty to
increase the feature space. In fact, the
CAE in this work uses $25$ features to obtain a compression ratio of $\approx
125$ i.e. a 125-fold decrease in size for every snapshot of the flow. This
leads to tremendous gains in efficiency and makes a ROM computationally
efficient, since the original dimensions were prohibitive from a memory
standpoint. Mathematically, we can say that the subspace spanned by the input
features is mapped by a neural network onto a latent subspace spanned by a
different set of learned features. Typically, an increased number of features
in the latent space has a direct effect on accuracy of compression, but with a
decrease in the compression ratio and increase in the computational cost. The
optimal number of features is therefore a user choice, based on compute
resources and level of compression required. In any CAE, two key design
choices have to be made: the kernel size, $\alpha$ and kernel stride, $\beta$.
The kernel size indicates the spatial extent of a single kernel. For instance,
a kernel size of $3\times 3\times 3$ contains $27$ shared, trainable weights.
The next choice is to decide how the shared weights (i.e. the kernel)
translate across the domain. An illustration is shown in Fig. 8, where a
kernel $\alpha=3$ can be translated by a distance $\beta$ of our choosing,
known as stride. The figure shows kernel positions after $\beta\,=\,1$ and
$\beta\,=\,3$ strides on the domain, and the strides are repeated until the
entire domain has been traversed by the kernel. By increasing the stride, the
kernel needs fewer convolutions to cover the entire domain, resulting in a
much smaller convolved output, as will be explained in the following section.
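The effect of the stride on the convolved output size follows the standard no-padding convolution formula, sketched here for one axis of the $128^{3}$ ScalarHIT snapshots:

```python
def conv_output_size(m, alpha, beta):
    # Number of kernel positions along one axis for input size m, kernel
    # size alpha, and stride beta (no padding): floor((m - alpha)/beta) + 1.
    return (m - alpha) // beta + 1

# For a 128-wide axis with an alpha = 3 kernel:
dense = conv_output_size(128, 3, 1)   # overlapping strides
coarse = conv_output_size(128, 3, 3)  # non-overlapping strides
```

A single layer with $\beta=3$ thus already shrinks each axis roughly threefold, which is how a few stacked encoder layers reach a very low dimensional latent space.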
At the core of any CNN (and therefore a CAE) is the convolution operation. In
the CAE encoder network (i.e. layers to the left of the latent space in the
schematics), the kernel convolves with the data to reduce its dimensionality
for every time instant $t_{i}$. For example, an $\alpha=3$ kernel has
dimensions $(3\times 3\times 3)$ and therefore downsamples a spatial field of
size $3^{3}$ - known as the receptive field (Ref. [goodfellow2016deep]) - to a
single point. The decoder network kernel (i.e. layers to the right of the
latent space in the schematics) then upsamples each point in the latent space
back to the size of the receptive field through a deconvolution operation.
Downsampling in the case of NNs can be explained as a weighted averaging
operation, where the averaging weights are learned. Similarly, the upsampling
kernel weights are also learned to perform the inverse operation. By stacking
multiple CNN layers in the encoder, the input is downsampled in every layer
and the resulting domain - the latent space - can be extremely low
dimensional. Likewise, upsampling can be performed by a suitable number of
decoding layers to recover the original dimension.
It is important to note that dimensionality reduction to obtain a LDR is
accompanied by loss of some information. For instance, popular approaches like
POD can represent dominant energetic dynamics in the first few eigenpairs.
These eigenpairs can be used for further analysis or modeling tasks, such as
Galerkin projection, while the eigenpairs having very low energy contribution
to the overall dataset are truncated, thereby leading to information loss.
While the CAE is no exception, it distinguishes itself from the POD in two
major ways: First, the POD bases compress the dataset as a linear map, whereas
autoencoders with multiple layers and non-linear activation functions are
inherently non-linear maps [gonzalez2018deep]. Consequently, autoencoders can
provide very high compression ratios for the same dataset. Second, POD
computation results in several global modes with the same dimensionality of
the datasets, with ROMs primarily emulating only the temporal coefficients of
the modes, i.e. the spatial structures captured by the POD modes are still
high dimensional. In contrast, CAE can directly learn local LDRs for each
snapshot that have degrees of freedom several orders of magnitude lower than
the training dataset. From a computing standpoint, this leads to significant
reduction in memory resources and ROMs can now emulate both spatial and
temporal dynamics with the low dimensional latent space.
Since it is derived from a CNN, the information content learned by a CAE is
dominated by $\alpha$ and $\beta$. For a fixed kernel size $\alpha$, the
striding of the convolutional kernel has a direct effect on the dimensionality
of the convolved output after each layer. From Fig. 8, it is clear that
increasing the stride diminishes the coverage of kernel over the domain,
making the convolved output sparser. For a fixed $\alpha=3$, $\beta=1$ leads
to an overlap with the receptive field at the previous stride, while $\beta=3$
removes any overlap. Higher values of $\beta$ create gaps in the domain which
are not seen by the kernel, but allow the kernel to traverse the entire domain
in fewer steps than $\beta=1$. These choices significantly influence the
accuracy, degree of compression and computational cost of the ROM. We now
present some physical insight about $\alpha$ and $\beta$ in the next section.
### 6.1 Physical Interpretation of $\alpha$ and $\beta$
It is now worthwhile to discuss the implications of these choices in
dimensionality reduction of complex, spatio-temporal and multiscale datasets
like turbulence. From the discussion above, it is apparent that there are two
competing strategies for dimensionality reduction in a CAE. The first strategy
relies on a large $\alpha$ to increase the receptive field. A larger receptive
field would decompose several adjacent data-points into a single data-point.
Therefore, for a desired dimension of the latent space, a suitable value of
$\alpha$ can be computed. The second strategy is to retain a constant, small
$\alpha$, but increase $\beta$ to traverse the domain in as few steps as
possible. The optimum $\beta$ can be estimated from the desired latent space
dimension and the number of stacked layers we are willing to allow, due to
computational cost involved in training deep networks.
There are caveats to both these strategies: a larger receptive field, in the
limit of $\alpha\rightarrow\infty$ (where $\infty$ refers to the
dimensionality of the dataset), increases the number of trainable weights, with
their number approaching the number of data-points in the domain. As mentioned
before, this is computationally prohibitive for 3D datasets of even small
sizes and is hence not feasible. This leads to the second strategy of
increasing $\beta$, while retaining a relatively small $\alpha$. This also has
pitfalls due to large discontinuities created between adjacent receptive
fields. A $\beta=1$ leads to smooth transitions in convolution operations
between subsequent layers, but requires a large number of layers to achieve any
meaningful dimensionality reduction. In contrast, $\beta>1$ skips over some
features in the domain, leading to some information loss in the smaller
scales. However, it also leads to significant dimensionality reduction with
fewer layers, which reduces computational costs. Fig. 8 illustrates the effect
of these parameters on the convolution kernel. A $\beta=3$ for $\alpha=3$ can
quickly traverse the domain in fewer steps, while a $\beta=1$ for $\alpha=3$
ensures maximum overlap between adjacent receptive fields, at the cost of more
traversal steps.
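These traversal counts follow from the output-size formula for a valid (unpadded) convolution along one axis, $\lfloor(n-\alpha)/\beta\rfloor+1$; a quick sketch (the grid size below is illustrative, not the actual dataset dimensions):

```python
def conv_out(n, alpha, beta):
    """Output length of a valid (unpadded) convolution along one axis."""
    return (n - alpha) // beta + 1

n = 81  # illustrative number of points along one axis
print(conv_out(n, alpha=3, beta=1))  # 79: maximum overlap, slow reduction
print(conv_out(n, alpha=3, beta=3))  # 27: no overlap, fast reduction

# Stacking three alpha=3, beta=3 layers: 81 -> 27 -> 9 -> 3
m = n
for _ in range(3):
    m = conv_out(m, 3, 3)
print(m)  # 3
```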
At this juncture, it is useful to develop some intuition on $\alpha$ and
$\beta$ in terms of numerical solution of partial differential equations in
CFD. The convolutional kernel used in CAE has direct connections to the
numerical stencils used in finite difference/finite volume approaches
[dong2017image, long2017pde]. Consider the standard $2^{nd}$ order central
difference scheme in 1D for a quantity $\phi$
$\frac{\phi_{i-1}-2\phi_{i}+\phi_{i+1}}{\delta^{2}}$ (9)
This can be represented as a 1D convolutional kernel of $\alpha=3$ with three
constant weights $\frac{1}{\delta^{2}}$, $\frac{-2}{\delta^{2}}$ and
$\frac{1}{\delta^{2}}$. In the CAE, the kernel has the same structure, but
all the constant weights in the convolutional kernels are replaced with
learnable weights. Therefore, the output of a trained kernel is analogous to a
weighted combination of adjacent points, akin to numerical solution of PDEs.
In fact, there are deeper connections between convolutional kernels and
stencils of numerical schemes that have been uncovered recently for developing
efficient neural network based PDE solvers, and the reader is directed to
Long et al. [long2017pde] and Dong et al. [dong2017image].
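To make the stencil analogy concrete, the central-difference weights of Eq. 9 can be applied as a fixed-weight 1D convolution (sketched here with numpy, on a quadratic profile whose second derivative is known exactly):

```python
import numpy as np

delta = 0.1
x = np.arange(0.0, 1.0, delta)
phi = x**2                          # second derivative is exactly 2

# alpha=3 kernel carrying the central-difference weights of Eq. 9
kernel = np.array([1.0, -2.0, 1.0]) / delta**2

# a 'valid' convolution applies the stencil at every interior point
d2phi = np.convolve(phi, kernel, mode="valid")
print(np.allclose(d2phi, 2.0))      # True: the stencil recovers phi''
```

In a CAE these three weights are not fixed but learned, so each kernel acts as a data-fitted stencil rather than a prescribed one.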
In numerical solutions of PDEs, the kernel size corresponds to the order of
the numerical scheme, which is typically constant, with the stencil applied at
every point in the domain. By analogy, larger stencils may represent higher order numerical
schemes, as seen by an increased number of trainable weights in networks.
Extending the CNN terminology to PDE solvers for comparison, a PDE solver has
a constant $\alpha$ and $\beta=1$ which completes its operation in a single
“layer”. In contrast, the CAE has multiple layers with flexibility to have
different $\alpha,\beta$ in each layer. Thus, each layer of the CAE encoder
consists of a customized numerical stencil specific to the dataset. In light of
these close connections, the practical differences between PDE solvers and
CAEs boil down to the treatment of boundaries, stride and the mapping of input
features into a different subspace. In CAE, only the first layer in the deep
neural network encoder treats the boundaries, while the increasing $\beta$ at
successive layers decreases dimensionality of the data. In summary, CAE
encoders map the high dimensional input features into a low dimensional latent
space with an intelligent choice of kernel weights, kernel sizes and stride
lengths. The CAE decoder is essentially an inverse operation of the encoder,
but not in an explicit, mathematically exact fashion [ardizzone2018analyzing].
Instead the decoder weights and strides are trained with the encoder to
estimate the inverse map from latent space to original data. These connections
can be exploited to build CNNs with hard physics constraints based on
numerical methods, and the reader is referred to Mohan et al.
[mohan2020embedding] for details.
### 6.2 Convolutional Autoencoders: Influence of kernel size and sequence
length
The discussion thus far has emphasized the role of kernel size $\alpha$, in a
CNN (and therefore, a CAE) as a hyper-parameter with important consequences on
the accuracy of our learned model. In this work, each batch trained consists
of snapshots that retain their temporal order and are not shuffled. This means
the CAE has to extract a low dimensional latent space from the dynamics of a
temporal sequence, as opposed to learning from each snapshot as an independent
sample. Consequently, the temporal gap between subsequent snapshots, i.e. the
sampling rate $\omega$, becomes a factor in building a DL-based ROM.
We now seek to study if the accuracy of the learned latent space is sensitive
to this relationship. To understand the sensitivity of our model to $\omega$,
we increase the sampling-rate for a constant batch size, to account for many
real-world applications where data collection frequency is not ideal. Since
the kernel performs a convolution operation over a numerical grid, its
receptive field is intimately connected to the turbulence scales it captures.
Intuitively, we would expect larger kernel sizes to capture spatial
correlations of larger scales. Likewise, it follows that these kernels would
also capture dynamics over longer time scales, due to the relationship between
length and time scales in turbulence. By decreasing the sampling-rate of the
snapshots, we can account for these longer temporal scales, while keeping the
batch size the same. This ensures that any differences we observe in model
accuracy are not from the batch size, but rather from its sampling frequency.
We intend to experimentally quantify the influence of $\alpha$ on the accuracy
of the compression, for future applications in 3D turbulence. The goal is to
observe whether turbulent features over a range of scales orders of magnitude
apart show any preferential dependence on learning by various kernel sizes
and sampling rates. We choose $\omega$ to be 3, 6 and 9 samples apart, which
corresponds to $\omega\,=\,0.09\tau$, $0.18\tau$ and $0.27\tau$, where $\tau$
is the eddy turnover time for this flow. Finally, such a hyper-parameter sweep
would seek to establish that the results are consistent, and not due to chance
numerical artifacts that may have occurred during optimization. To this end,
several experiments are performed with two families of parameters:
1. 1.
With a small kernel size $\alpha=3$, vary sampling rate as $\omega=\
0.09\tau,0.18\tau,0.27\tau$.
2. 2.
With large kernel size $\alpha=9$, vary sampling rate as $\omega=\
0.09\tau,0.18\tau,0.27\tau$.
For consistency, we ensure that the number of layers and the striding $\beta$
in the encoder and decoder are constant for all experiments. The $\alpha=9$
kernel creates a higher compression ratio than $\alpha=3$, for the same number
of layers in encoder and decoder. As a result, the only variables in the
experiments are $\alpha$ and $\omega$. All experiments above are also trained
with three commonly used optimizers - Adam [kingma2014adam], Adadelta
[zeiler2012adadelta] and RMSProp [tieleman2012lecture] and the best model is
used for analysis, to ensure the final trends are not a consequence of an
arbitrary choice of optimizer, but instead an outcome of the $\alpha$ and
$\omega$ choices. We now present the results, and the trained model is
assessed using the same diagnostic metrics mentioned in Section 4. All the
diagnostics compare the statistics generated from a) DNS snapshots, and b) CAE
reconstructed models of their corresponding latent states. The previous
discussion in Section 6.1 points to information loss in small scale behavior
due to $\beta>1$. Quantifying the accuracy of the latent space with these
physics-based diagnostics will shed light on whether this indeed holds true.
#### 6.2.1 Convolutional Autoencoder: $\alpha\,=\,3$
Figure 10: Energy spectra (left), PDFs of the longitudinal velocity gradient
magnitude (middle), and joint PDFs of the Q and R invariants of the coarse-
grained velocity gradient tensor (right) for randomly chosen samples from CAE-
NN dimensionality reduction with $\alpha\,=\,3$ and $\omega=\ 0.09\tau$.
Figure 11: Energy spectra (left), PDFs of the longitudinal velocity gradient
magnitude (middle), and joint PDFs of the Q and R invariants of the coarse-
grained velocity gradient tensor (right) for randomly chosen samples from CAE-
NN dimensionality reduction with $\alpha\,=\,3$ and $\omega=\ 0.18\tau$.
Figure 12: Energy spectra (left), PDFs of the longitudinal velocity gradient
magnitude (middle), and joint PDFs of the Q and R invariants of the coarse-
grained velocity gradient tensor (right) for randomly chosen samples from CAE-
NN dimensionality reduction with $\alpha\,=\,3$ and $\omega=\ 0.27\tau$.
The statistical diagnostics for the small kernel $\alpha\,=\,3$ with $\omega=\
0.09\tau$ are shown in Fig. 10. To indicate the quality of the model, we show
diagnostics at $3$ randomly chosen samples, followed by the averaged
diagnostics for several samples. We adopt this style for all CAE results in
this work. From the energy spectra, it is clear that large and inertial range
frequencies are retained accurately, while there is a marked discrepancy in
the small scale frequencies. This behavior is also observed in the velocity
gradient PDFs, where the large scale events around $Z\,=\,0$ are well
resolved, while discrepancies corresponding to small scales exist at the
tails. Finally, the most stringent test is the $Q-R$ plane PDF, since it
captures the 3D morphology of the flow. The $Q-R$ PDF at $r=0$ corresponds
to small scale behavior, $r=8$ to inertial range scales and $r=32$ to large
scale behavior. The PDFs show excellent agreement at large scales, thereby
corroborating the results from the energy spectra. The structure of the
inertial range is also accurately captured, with very minor discrepancies in
stretching behavior. Finally, we see that the small scale behavior is almost
entirely neglected by the kernel. The symmetric nature of the PDFs indicates that
the network may be generating some random noise to compensate for information
loss in the small scales. Interestingly, the discussion about $\beta$ and its
relationship to turbulence scales in the previous section indicates this
outcome, which we have now verified. We now discuss the sensitivity of
training with $\omega$. The sampling rate is progressively decreased to
$\omega=\ 0.18\tau$ and $0.27\tau$, and the diagnostics are shown in Figs. 11 and 12
respectively. The diagnostics show that the quality of the results is extremely
robust despite a decrease in temporal sampling rate of the data. We caution
that this may likely be true only for cases like stationary, homogeneous
turbulence, whereas accuracy for flows with strong transients and non-
stationarity can be affected by $\omega$.
#### 6.2.2 Convolutional Autoencoder: $\alpha\,=\,9$
Figure 13: Energy Spectra (left), PDF of the longitudinal velocity gradient
magnitude (middle), and joint PDFs of the Q and R invariants of the coarse-
grained velocity gradient tensor (right) for randomly chosen samples from CAE-
NN dimensionality reduction with $\alpha\,=\,9$ and $\omega=\ 0.09\tau$.
Figure 14: Energy spectra (left), PDFs of the longitudinal velocity gradient
magnitude (middle), and joint PDFs of the Q and R invariants of the coarse-
grained velocity gradient tensor (right) for randomly chosen samples from CAE-
NN dimensionality reduction with $\alpha\,=\,9$ and $\omega=\ 0.18\tau$.
Figure 15: Energy spectra (left), PDFs of the longitudinal velocity gradient
magnitude (middle), and joint PDFs of the Q and R invariants of the coarse-
grained velocity gradient tensor (right) for randomly chosen samples from CAE-
NN dimensionality reduction with $\alpha\,=\,9$ and $\omega=\ 0.27\tau$.
We now turn our attention to the large kernel $\alpha\,=\,9$ with $\omega=\
0.09\tau$. The diagnostics in Fig. 13 paint a somewhat different picture in
comparison with the small kernel. The energy spectra show good agreement in
the low wavenumbers, but gets progressively worse with increasing wavenumber.
Finally, the high wavenumbers show major discrepancies with oscillatory
behavior not present in the DNS dataset. On the other hand, the velocity
gradient PDFs show a much better agreement with the DNS than the small kernel.
This seemingly counter-intuitive behavior likely happens due to the random
high wavenumber oscillations (seen in the energy spectra) fortuitously
replicating averaged small scale intermittent fluctuations in DNS. Finally, we
get a clear understanding of the large kernel performance looking at the $Q-R$
PDF statistics. The statistics show good large scale reconstruction, but
significant discrepancies in the inertial range, with the somewhat symmetric
stretching in the $R$ axis implying addition of random noise to the lower
quadrant of the $Q-R$ plane. The noise effect is further accentuated in the
small scale statistics, with appreciable deviations from the DNS statistics.
Similar to the small kernel, experiments are also performed for $\omega=\
0.18\tau$ and $0.27\tau$, and the diagnostics are shown in Figs. 14 and 15
respectively. For $\omega=\ 0.18\tau$ we see similar behavior as $\omega=\
0.09\tau$, except for minor discrepancies in the large scales. These trends
are repeated in $\omega=\ 0.27\tau$. Overall, the quality of reconstruction
does not seem to change with decreasing sampling frequency, as seen for
$\alpha\,=\,3$. Furthermore, all the results show consistent addition of
random noise to high wavenumbers and several inertial-range wavenumbers. The
presence of noise in the large kernel happens to be the most significant
difference from the small kernel, which consequently leads to deterioration in
reconstruction. It bears mentioning that the large kernel contains more
parameters than the small kernel, and as such needs significantly longer
training time to obtain convergence. In this work, the training time for
$\alpha\,=\,9$ was twice that of $\alpha\,=\,3$, and the memory requirements
were considerably higher.
From these experiments we can conclude that, at least for the case of
isotropic turbulence, the kernel size appears to be a more important parameter
affecting model accuracy than the sampling rate of the data. We note that
while a large kernel is capable of higher compression ratios than a small
kernel for the same layers, it comes at the price of accuracy, computational
time and memory. While both large and small kernels capture large scale
behavior well, the small kernel also reconstructs the inertial scales
reasonably well.
## 7 Results using Compressed Convolutional LSTM (CC-LSTM)
As discussed previously, the ConvLSTM network (Fig. 2) necessitates some form
of data compression to efficiently learn the spatio-temporal dynamics of the
flow with tractable computational effort. The CAEs described above are seen to
learn efficient latent space representations of the flow with excellent
compression in data size, and we denote the combined approach as CC-LSTM. As
mentioned in Section 3, we use as training data the time-varying latent space
for $\tau\,=\,3$ snapshots. After the parametric study with different $\alpha$
and $\omega$, it is observed that $\alpha\,=\,3$ and $\omega=\ 0.09\tau$ learn
sufficiently accurate models with the lowest computational cost. Therefore, we
use the latent space models from this configuration as the ConvLSTM training
data.
Since a ConvLSTM network can model spatio-temporal dynamics, we evaluate it by
making continuous predictions in time. We give a batch of temporal flow
snapshots compressed into CAE latent spaces as input and the network predicts
the next batch of latent spaces evolved in time. These predicted latent spaces
are then used to recover the true dimensions of the flow through the CAE decoder.
The model is autoregressive, since the predictions are fed back into the
network as a new input. We repeat this autoregressive process for several time
instants, to study both the accuracy of the predicted snapshots, and how far
in time the network is able to generate stable snapshots without significant
deterioration in accuracy. The diagnostic tests outlined in Section 4 are used
to evaluate CC-LSTM generated snapshots. The velocity diagnostics are shown in
Fig. 16 for predicted snapshots at 1.5 eddy times from $\tau\,=\,3-4.5$ in the
DNS dataset. We make autoregressive predictions in
$\tau^{*}\,=\tau-3=\,0\rightarrow 1.5$, and the diagnostics are shown for
$\tau^{*}\,=\,0.1,1.0,1.5$ such that we are evaluating temporally correlated
snapshots across the predicted range. The ConvLSTM network has $3$ layers with
constant kernel size $\alpha\,=\,3$, with each hidden cell having $40$
features and RMSProp optimizer used to train the network. The approach was
implemented using the Pytorch [paszke2019pytorch] framework and trained in a
distributed multi-GPU batch-parallel fashion.
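The autoregressive rollout described above can be sketched as follows, with `encoder`, `decoder` and `lstm` as stand-ins for the trained CAE and ConvLSTM networks (illustrative placeholders, not the actual trained models):

```python
def rollout(encoder, decoder, lstm, snapshots, n_steps):
    """Autoregressive CC-LSTM prediction sketch: compress a batch of
    snapshots, predict future latent batches, decode each prediction."""
    z = [encoder(s) for s in snapshots]        # compress to latent space
    predictions = []
    for _ in range(n_steps):
        z_next = lstm(z)                       # predict the next latent batch
        predictions.append([decoder(zi) for zi in z_next])
        z = z_next                             # feed predictions back as input
    return predictions

# Toy stand-ins: identity encoder/decoder, "dynamics" that adds 1 per step
preds = rollout(lambda s: s, lambda z: z, lambda zs: [z + 1 for z in zs],
                snapshots=[0, 1, 2], n_steps=2)
print(preds)  # [[1, 2, 3], [2, 3, 4]]
```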
We see from the energy spectra that the large scale and inertial range spectra
are predicted extremely well, with discrepancies only in the small scale
range. Interestingly, the velocity gradient PDFs show near-perfect resolution
across all the scales, including the small scale behavior at the tails. This
likely indicates the ConvLSTM network is adding some artifacts to the
predictions which accurately mimics the tail behavior of the PDF, since this
was not a condition we enforced on the network. A more rigorous evaluation is
performed with the $Q-R$ PDFs, where we see that the statistical trends of the
small scales are neglected by the network as expected. Furthermore, we see
that the large scale trends are predicted quite well, followed by inertial
range scales with some discrepancies. Typically, most temporal modeling
techniques are accompanied by a significant loss in accuracy as the prediction
horizon $\tau^{*}$ increases. In this case, we see only marginal deterioration
in large scale statistics at $\tau^{*}>1$. The loss of accuracy is somewhat
more significant in the inertial scales, while the small scales do not see
much change. From these diagnostics, it is apparent that the CC-LSTM is able to
consistently model large scale velocity dynamics of ScalarHIT over extended
time ranges, even while the accuracy in other scales might suffer. This is
quite promising, since modeling large scale dynamics at high fidelity is a
requirement for several practical applications.
We now turn our attention to the passive scalars. Figure 17 compares PDFs of
the DNS and CC-LSTM predictions at $\tau^{*}\,=\,0.1,1.0,1.5$ for the scalar
$Y1$. We observe that the predicted scalar is well bounded in $(-1,1)$.
However, the CC-LSTM significantly underpredicts the peak amplitude at all
$\tau^{*}$. Compared to the velocity profiles where large scale PDFs are
accurately captured, we notice that the network loses considerably more
information in $Y1$. This observation is more pronounced in the predictions of
scalar $Y2$ in Fig. 18. The network fails to capture both the peak amplitude
and the tails of the PDF. These results show that while the CAE may be
appropriate for resolving large/inertial scales in velocity, the passive
scalar dynamics are more susceptible to changes in compression ratio.
Interestingly, both the CC-LSTM scalar predictions appear to retain
boundedness, despite loss of information, compared to the CGANs.
Finally, a key factor to evaluate these architectures is the computational
resources required to train a ROM. Since the CC-LSTM primarily learns dynamics
on latent space rather than the high dimensional raw data, it requires orders
of magnitude fewer parameters than CGANs, which has generally been the more
popular approach in the turbulence community. The details of the computational
costs are outlined in Appendix C, and show significant advantages of CC-LSTM
over CGANs when scaling these approaches to large, realistic flows.
Furthermore, we note that training GANs/CGANs in a stable manner involves
several modifications over hyper-parameters in both the network design and
optimization, and has been well documented elsewhere in the broader machine
learning literature [thanh2019improving, radford2015unsupervised,
arora2017generalization, arora2018gans]. In this work, the authors had to
implement several strategies outlined in Appendix B to obtain reliable
predictions using CGANs. In contrast, CC-LSTM training was markedly more
stable and resilient to variations in hyper-parameter choices across different
kernel sizes and sequence lengths, further reducing compute cost.
Figure 16: Energy spectra (left), PDFs of the longitudinal velocity gradient
magnitude (middle), and joint PDFs of the Q and R invariants of the coarse-
grained velocity gradient tensor (right). CC-LSTM predictions are more
accurate than CGANs and temporally stable, with errors concentrated in the
small scales.
Figure 17: Instantaneous $\phi_{1}$ scalar PDFs bounded $[-0.5,+0.5]$ from DNS
and NN at different time instances: $\tau^{*}\,=\,0.1$ (top),
$\tau^{*}\,=\,1.0$ (middle), $\tau^{*}\,=\,1.5$ (bottom). CC-LSTM predictions
are bounded and more accurate than CGANs, with over-prediction at the tails.
Figure 18: Instantaneous $\phi_{2}$ scalar PDFs bounded $[-1.0,+1.0]$ from DNS
and NN at different time instances: $\tau^{*}\,=\,0.1$ (top),
$\tau^{*}\,=\,1.0$ (middle), $\tau^{*}\,=\,1.5$ (bottom). CC-LSTM predictions
are bounded and more accurate than CGANs, with over-prediction of the flat PDF
profile.
## 8 Conclusions
In this work, we report a first systematic study of deep learning strategies
for generation of fully developed three-dimensional turbulence. We evaluate
neural network architectures representing two different approaches to high-
dimensional data modeling. The quality of the deep learning predictive models
are tested with physics-based metrics which identify the statistical
characteristics of 3D turbulence. The first architecture is a 3D convolutional
variant of the popular approach known as Generative Adversarial Networks (GANs).
In this work, Convolutional GANs (CGANs) are demonstrated to have acceptable
accuracy in modeling large and inertial scale velocity features of individual
snapshots of the flow, albeit without capability for temporal predictions.
However, we also notice that CGANs have difficulties in modeling the probability density
functions (PDFs) of the passive scalars advected with the velocity, with the
predictions being frequently unbounded. Since CGANs lack temporal dynamics, we
propose an alternative neural network approach to perform spatio-temporal
prediction. This novel strategy utilizes a convolutional autoencoder (CAE)
neural network followed by a Convolutional LSTM (ConvLSTM) network. The CAE
learns a projection of the high-dimensional spatial data to a low dimensional
latent space, such that the latent space can be used as an input for temporal
predictions. We then employ the ConvLSTM network to predict the latent space
at future time instants. This two-tier prediction model, coined Compressed
Convolutional LSTM (CC-LSTM), is able to predict dynamics of the flow.
Furthermore, the CC-LSTM allows accurate reproduction of the large and
inertial scale statistics making it very attractive for many
practical/engineering applications. In case of the passive scalars, while CC-
LSTM struggles to capture the PDFs accurately, it is still able to bound the
scalar PDFs within its theoretical limits, as opposed to CGANs. From a
practical standpoint, one of the major observations of this investigation is
the significant disparity in computational efficiency between CC-LSTM and
popular, state-of-the-art approaches like CGANs, in the context
of 3D turbulence. Due to the large number of parameters that ConvLSTM networks
need even for modestly sized datasets, we show that performing model reduction
with CAEs is a valuable first step in computationally efficient learning
models of 3D turbulence. This modified CC-LSTM approach needs orders of
magnitude fewer trainable parameters than CGANs, while showing superior
spatio-temporal predictive accuracy. While the networks shown in this work do
not have explicit physics constraints, versions of autoencoders with hard
constraints demonstrated by the authors in Ref. [mohan2020embedding] can be
easily adapted to the CC-LSTM framework, providing considerable flexibility in
learning.
## 9 Acknowledgements
The authors thank Don Daniel (LANL) for the dataset and valuable discussions.
This work has been authored by employees of Triad National Security, LLC which
operates Los Alamos National Laboratory (LANL) under Contract No.
89233218CNA000001 with the U.S. Department of Energy/National Nuclear Security
Administration. A.T.M. and D.L. have been supported by LANL’s LDRD program,
project number 20190058DR. A.T.M also thanks the Center for Nonlinear Studies
at LANL for support and acknowledges the ASC/LANL Darwin cluster for GPU
computing infrastructure.
## Appendix A Overview of Neural Network Architectures
Artificial Neural networks (ANNs) can be broadly defined as a class of
biologically-inspired statistical representations which capture patterns and
connections in a dataset. The elementary unit of an ANN is the artificial
neuron, and a layer of an ANN consists of multiple neurons. Subsequently,
“Deep” ANNs are built by stacking multiple such layers one after the other.
Mathematically, numerous neurons stacked in a connected manner (often ranging
from hundreds to millions) are able to represent complex nonlinear functions,
i.e. ANNs can universally approximate any function, which is the paradigm
behind the rise of the modern “deep learning” revolution. The mathematical
representation of a neuron is shown in Eqn 10
$y\,=\,\phi\left(\sum_{j=0}^{m}w_{kj}x_{j}\right)$ (10)
where $x$ is the vector of inputs and $w$ is the series of “weights” that
produce the output $y$. The right side of the equation is operated upon by an
activation function $\phi$, which can be nonlinear. The key idea behind ANNs
is that the weights $w$ can be learned or estimated, given $x$ and $y$. This
is typically accomplished by the backpropagation algorithm which iteratively
computes $w$ for any $x-y$ pair via optimization (usually gradient descent)
based methods. This process is broadly termed as training an ANN. While the
core strategy of having learnable weights estimated with backpropagation and
optimization methods have been the mainstay of deep learning, the actual
architecture of the ANN has greatly evolved. Specifically, these variations
have focused on the structure and layout of $w$, such that they are tailor-
made for specific applications. Most applications can be broadly grouped into
two classes of prediction problems: a) classification, and b) regression.
Classification problems are often found in large image datasets; arising from
satellite imagery, consumer devices and even scientific observations.
Likewise, regression problems are ubiquitous in financial markets, consumer
demand forecasting, weather forecasting and numerous scientific applications.
Thus, most of the modern architectures in deep learning have adapted the
standard ANN layout of $w$ to account for these classes, which we will now
outline.
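Before doing so, note that Eq. 10 transcribes directly to code; the weights and the choice of a sigmoid activation below are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, phi=sigmoid):
    """Eq. 10: a weighted sum of inputs passed through an activation phi."""
    return phi(np.dot(w, x))

x = np.array([1.0, 2.0, 3.0])      # inputs
w = np.array([0.5, -0.25, 0.0])    # weights (illustrative; normally learned)
y = neuron(x, w)
print(y)  # sigmoid(0.0) = 0.5
```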
### A.1 Convolutional Neural Networks
Classification problems for image and other spatially varying datasets are
difficult due to the large number of degrees of freedom. For instance, a
$256\times 256$ image has $65,536$ datapoints. An ANN with every neuron
connected to every other neuron is known as a fully-connected NN or FCNN. It is
immediately apparent that training a FCNN for $65,536$ points requires at
least as many parameters, if not more. Therefore, FCNNs for images can be
computationally prohibitive. However, we know that images often have spatial
correlations, so treating each datapoint in isolation may not be very accurate
(or efficient). As such, Convolutional Neural Networks (CNNs) are a class of
ANNs developed to exploit the fact that each point in an image is likely
related to its neighboring points. The $w$ are structured as a kernel which
translates throughout the image. The kernel is essentially a function that
performs a convolution operation over the image data, and the result is
trained using backpropagation. Therefore, instead of learning $w$ for each
individual point in the image, we learn $w$ in a kernel with a predetermined
size. This kernel can be used to extract patterns in other images. CNNs are
often the driving force behind several state-of-the-art image and pattern
recognition applications, and the idea of training a single kernel instead of a
FCNN leads to drastic reductions in computational cost.
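A quick parameter count makes the savings concrete (one dense layer mapping the image to an equally sized output, versus a single shared kernel; sizes are illustrative):

```python
def fcnn_params(n_in, n_out):
    """Weights in one fully connected layer (bias terms omitted)."""
    return n_in * n_out

def kernel_params(size, channels_in=1, channels_out=1):
    """Weights in one shared 2D convolutional kernel (bias omitted)."""
    return size * size * channels_in * channels_out

n = 256 * 256                     # 65,536 datapoints in a 256x256 image
print(fcnn_params(n, n))          # 4294967296 weights for one dense layer
print(kernel_params(3))           # 9 weights for one shared 3x3 kernel
```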
### A.2 Recurrent Neural Networks
Sequence prediction is different from other types of learning problems, since
it imposes an order on the observations that must be preserved when training
models and making predictions. Recurrent Neural Networks (RNNs) are a class of
neural networks specially designed for such problems, which preserve this
sequential information in the function being learned. A key assertion behind
recurrent networks is that sequential processes have “memory”, i.e. the value
of the process is a function of the value at previous instants. Recurrent
networks attempt to capture this sequential relationship and learn the memory
effects. The Long Short-Term Memory (LSTM) neural network is a special variant
of RNN, which overcomes stability bottlenecks encountered in traditional RNNs
(like the Vanishing Gradient problem [hochreiter1998vanishing]), enabling its
practical application. LSTMs can also learn and harness sequential dependence
from the data, such that the predictions are conditional on the recent context
in the input sequence. For instance, to predict the realization at time
$t_{i}$, LSTMs can learn from the data at $t_{i-1}$ and also at times
$t_{i-k}$, where $k$ can be any number signifying the length of the prior
sequence. In effect, $k$ represents the “memory” in the system i.e. the extent
to which the outcome of the system depends on its previous realizations.
The basic architecture of the LSTM NN is now outlined. The LSTM networks are
different from other deep learning architectures like convolutional neural
networks (CNNs), in that the typical LSTM cell contains three gates: The input
gate, output gate and the forget gate. The LSTM regulates the flow of training
information through these gates by selectively adding information (input
gate), removing (forget gate) or letting it through to the next cell (output
gate). A schematic of the cells connected in a recurrent form is shown in Fig.
19.
The input gate is represented by $i$, output gate by $o$ and forget gate by
$f$. The cell state is represented as $C$ and the cell output is given by $h$,
while the cell input is denoted as $x$. The equations by which an LSTM cell
computes its gates and states are given in Eqn 11, and a schematic of its
structure is shown in Fig. 20.
$\displaystyle f_{t}\,$
$\displaystyle=\,\sigma\left(W_{f}\cdot\left[h_{t-1},x_{t}\right]+b_{f}\right)$
$\displaystyle i_{t}\,$
$\displaystyle=\,\sigma\left(W_{i}\cdot\left[h_{t-1},x_{t}\right]+b_{i}\right)$
$\displaystyle\tilde{C}_{t}\,$
$\displaystyle=\,\tanh\left(W_{C}\cdot\left[h_{t-1},x_{t}\right]+b_{C}\right)$
$\displaystyle C_{t}\,$ $\displaystyle=\,f_{t}*C_{t-1}+i_{t}*\tilde{C}_{t}$
$\displaystyle o_{t}\,$
$\displaystyle=\,\sigma\left(W_{o}\cdot\left[h_{t-1},x_{t}\right]+b_{o}\right)$
$\displaystyle h_{t}\,$ $\displaystyle=\,o_{t}*\tanh\left(C_{t}\right)$ (11)
Figure 19: LSTM Layout with Cell Connections
Figure 20: Architecture of a LSTM Cell with Various Gates
$W$ are the weights for each of the gates and $\tilde{C}$ is the updated cell
state. These states are propagated ahead through the network, as shown in Fig.
19, and the weights are updated by backpropagation through time. The forget gate
plays a crucial role in reducing over-fitting by not retaining all information
from the previous time steps. This arrangement of gates and selective
information control is also the key reason why LSTMs do not suffer from the
vanishing gradient problem which plagued traditional RNNs
[hochreiter1998vanishing]. As a result, LSTMs are a powerful tool to model
sequential datasets. A more detailed introduction to LSTMs can be found in
Ref. [hochreiter1997long].
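The gate equations in Eqn 11 can be sketched directly. Below is a minimal NumPy forward pass for a single cell; the weight shapes, initialization, and sequence length are illustrative assumptions, not this work's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, C_prev, W, b):
    """One forward step of Eqn (11); W and b hold the four gate parameters.

    Each W[k] has shape (hidden, hidden + input) and acts on [h_{t-1}, x_t].
    """
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f = sigmoid(W["f"] @ z + b["f"])         # forget gate
    i = sigmoid(W["i"] @ z + b["i"])         # input gate
    C_tilde = np.tanh(W["C"] @ z + b["C"])   # candidate cell state
    C = f * C_prev + i * C_tilde             # new cell state
    o = sigmoid(W["o"] @ z + b["o"])         # output gate
    h = o * np.tanh(C)                       # cell output
    return h, C

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = {k: rng.normal(scale=0.1, size=(n_hid, n_hid + n_in)) for k in "fiCo"}
b = {k: np.zeros(n_hid) for k in "fiCo"}
h, C = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):                           # unroll over a short sequence
    h, C = lstm_cell(rng.normal(size=n_in), h, C, W, b)
```

Note that the cell state $C$ is carried between steps and updated only through the element-wise forget/input gating, which is what lets gradients flow across long sequences.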
## Appendix B CGANs: Training and Implementational Details
The CGANs were trained with $96$ feature maps each in both the generator and
discriminator, with a batch size of $12$. The noise vector to initialize the
training was of size $100\times 1$. A binary cross entropy loss with ADAM
optimizer was used for training both the networks in GAN. The learning rate
was set at $2\times 10^{-5}$, with $\beta_{1}\,=\,0.5$ and
$\beta_{2}\,=\,0.999$ being the optimization parameters for ADAM.
### B.1 Transpose Convolution and Resize Convolution
Transpose convolutions are the traditional approach to upsampling used in CNN
based GANs. This operation can be thought of as the reverse of a standard
convolution: Instead of sliding a kernel across a group of pixels to learn a
mapping to fewer pixels, the kernel is trained to extrapolate individual
pixels to a larger pixel group. The distance that the kernel slides each time
is known as the stride. The term deconvolution is sometimes used for this
operation, even though the two operations are not the same (deconvolution is,
strictly, the inverse of convolution). Since we are dealing with volumetric
data, we utilize 3D transpose convolutions which use a cubic kernel.
Furthermore, the padding determines how many zero-value pixels are added to
the input. See
Figure 21 for a comparison of 3D and 2D transpose convolution. The use of
transpose convolutions in the generator results in a common issue of GANs
called checkerboard artifacting [Odena]. The artifacts are the result of
overlapping transpose convolutions when upsampling the data. This typically
occurs when the stride is less than the kernel size, especially when the
kernel size is not divisible by the stride.
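A minimal sketch of the bookkeeping above: the standard transpose-convolution output-size formula along one axis, plus the kernel/stride divisibility condition associated with checkerboard artifacts (function names are our own):

```python
def transpose_conv_out_size(in_size, kernel, stride, padding=0):
    """Output length of a transpose convolution along one axis
    (the standard formula, with no output padding or dilation)."""
    return (in_size - 1) * stride - 2 * padding + kernel

def may_checkerboard(kernel, stride):
    """Overlapping kernel writes are uneven when the kernel size is not
    divisible by the stride -- the condition linked to checkerboard
    artifacts."""
    return kernel % stride != 0

# The upsampling of Figure 21: input 2, kernel 2, stride 1 -> output 3.
print(transpose_conv_out_size(2, kernel=2, stride=1))  # 3
print(may_checkerboard(kernel=3, stride=2))            # True
print(may_checkerboard(kernel=4, stride=2))            # False
```

The same per-axis formula applies to each dimension of the cubic kernels used for the volumetric data here.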
Figure 21: Representation of transpose convolution. In this case, an input
size of 2 is up sampled to an output size of 3 with a kernel size of 2 (square
kernel for 2D and cubic for 3D) and a stride of 1. 2D transpose Convolution is
shown on the left and volumetric transpose convolution is shown on the right.
Odena et al. [Odena] provide an interesting solution to the checkerboard
artifact problem, known as resize convolution (RC). RC involves interpolation
followed by standard convolution. We found that trilinear interpolation worked
best for our application whereas nearest-neighbor interpolation continued to
result in some line artifacts. Trilinear interpolation consists of inserting
zero padding in between values in the input tensor (to resize the sample to
the desired dimensions) followed by averaging the values close to the padding
to determine the new value of those indices. We also found that the generator
does not learn when solely using RC. To determine if the generator network was
learning, we suspend updates to the discriminator, and continue training the
generator. With the discriminator no longer learning, the generator would have
no competition and therefore should begin to learn to output samples that the
discriminator will classify as “real”. However, if the discriminator still
continued to identify the generated images as “fake”, then we can conclude that
the generator was not learning. This was indeed the case with the RC-only
generator above.
Instead of choosing between the two approaches, we employ a hybrid strategy
with transpose convolution in the first few layers of the generator and RC in
the rest of the layers. This scheme proved successful as the transpose
convolutional layers learned the underlying distribution of the data, while
the RC layers learned to smooth out and eliminate the checkerboard artifacts.
Figure 22 illustrates the results of using only one method of upsampling
followed by combining both methods. The exact details of our implementation
are described in the appendix.
Figure 22: An illustration of using only transpose convolution (left), only
resize convolution (middle), and the transpose-RC hybrid (right) on a 2D slice
of the flow
### B.2 Sampling from Random Distributions in GANs
GANs learn a probability distribution from a random noise vector that is
provided as input. There are two instances where random sampling is important
to consider in GANs. First, the input to the generator is a noise vector $z$
of length 100. Traditionally, $z$ would be sampled from the standard Gaussian
distribution $\mathcal{N}(0,1)$. However, Goodfellow’s GANs
tutorial[goodfellow2016tutorial] mentions that if $\textbf{z}^{(2)}$ is
Gaussian, prediction $x$ is also conditionally Gaussian given
$\textbf{z}^{(1)}$. Given that the training data are non-Gaussian, $z$ is
sampled from a uniform distribution in order to avoid Gaussian behavior in the
generated data. A uniform distribution has constant probability density
$P(x)=1/(b-a)$ (the bounds $a$ and $b$ were 0 and 1, respectively). Figure
23 graphically shows the differences in all three distributions.
Figure 23: CGANs velocity gradient magnitude PDF comparison for different
random vector initializations
In GANs, the discriminator has a tendency to become overconfident and outputs
labels very close to 1 or 0 (as opposed to labels of 0.9 or 0.1). When this
happens, the generator’s gradients begin to vanish and it can no longer
meaningfully update its weights. One solution to this is to attempt to balance
the networks by making the generator stronger or the discriminator weaker.
Although this solution can be effective, we found it ultimately limited the
potential of the GAN. Therefore, we implement a solution first mentioned by
Salimans et al. [Salimans]: one-sided label smoothing. Instead of using 1 and 0 as
target labels, we added noise to the labels so that the real label would fall
in the range $[0.875,1]$, and the fake label would fall in the range
$[0,0.125]$. This effectively decreased the discriminator’s overconfidence so
that it could still accurately classify the samples without compromising on
the gradients it provides to the generator.
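A minimal sketch of the label smoothing described above (the ranges match those quoted; the function name and the use of uniform noise within each range are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_labels(batch_size, real, rng):
    """Noisy discriminator targets: real labels fall in [0.875, 1],
    fake labels in [0, 0.125], instead of hard 1s and 0s."""
    if real:
        return rng.uniform(0.875, 1.0, size=batch_size)
    return rng.uniform(0.0, 0.125, size=batch_size)

real_targets = smooth_labels(12, real=True, rng=rng)   # batch size 12
fake_targets = smooth_labels(12, real=False, rng=rng)
```

The softened targets cap the discriminator's loss on confident predictions, so its gradients to the generator do not collapse to zero.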
### B.3 Cyclic Learning Rates
In order to improve training efficiency, we employed the technique of cyclic
learning rates by Smith [smith2017cyclical]. Varying the learning rate as the
models train allows them to converge faster and reach a lower loss value.
Figure 24 shows how our learning rate varied as the models trained. We
employed a triangular update policy, changing the learning rate in a piece-
wise linear fashion. Hence, the learning rate fluctuates between minimum and
maximum values at a rate determined by the number of steps it takes to
complete one full cycle. In this work, the min and max rates were set as
$2\times 10^{-7}$ and $2\times 10^{-5}$ respectively. Changing the learning
rate every iteration allowed us to forsake finding a perfect value for a
constant learning rate. Although Ref. [smith2017cyclical] detailed an
excellent way to find the minimum and maximum values for a given
classification model, GANs converge in a different manner that is not entirely
clear from losses alone. Therefore, to select these values we found the
minimum and maximum values at which the losses did not diverge, but also
learned at an acceptable speed. Furthermore, the discriminator’s learning rate
is an order of magnitude lower than the generator’s, in order to balance the
networks’ relative strengths.
Figure 24: Cyclic Learning Rate: Triangular Update Policy
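The triangular update policy can be sketched as follows (a standard formulation of Smith's triangular schedule; the `step_size` value is an illustrative assumption, while the rate bounds are those quoted above):

```python
def triangular_lr(step, lr_min=2e-7, lr_max=2e-5, step_size=1000):
    """Triangular cyclic learning rate (Smith, 2017): piece-wise linear
    oscillation between lr_min and lr_max; step_size is the number of
    iterations per half-cycle."""
    cycle = step // (2 * step_size)
    x = abs(step / step_size - 2 * cycle - 1)   # goes 1 -> 0 -> 1 per cycle
    return lr_min + (lr_max - lr_min) * (1.0 - x)

# lr starts at lr_min, peaks at lr_max mid-cycle, returns to lr_min.
print(triangular_lr(0))       # 2e-07
print(triangular_lr(1000))    # 2e-05
print(triangular_lr(2000))    # 2e-07
```

Calling this once per iteration and assigning the result to the optimizer's learning rate reproduces the piece-wise linear profile of Figure 24.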
## Appendix C Computational Costs: CCLSTM vs GANs
An important metric of comparison between the two approaches is the
computational cost of training the 3D turbulence models. The
computational cost and memory requirements for neural networks are typically
governed by the total number of trainable parameters required to “learn” the
dynamics. For GANs, the generator network needed $130,002,821$ parameters and
the discriminator $108,372,769$ parameters, i.e. a total of $\approx 238$
million parameters. In contrast, the CC-LSTM approach proposed in this work
consists of two networks trained separately: the convolutional autoencoder
(CAE) and the convolutional LSTM (CLSTM). The CAE needs a total of only
$74,380$ parameters to learn the compressed latent space for the flow, and the
CLSTM network that trains on the latent space needs $307,985$ parameters, for a
kernel size $\alpha\,=\,3$ and sequence length $\omega\,=\,3$. Therefore, the
CC-LSTM approach needs only a combined $382,365$ parameters for spatial and
temporal predictions, whereas the GANs do not account for temporal dynamics at
all. We emphasize that the CC-LSTM needs $\approx 600$ times fewer parameters
than the GANs for the same flow, while predicting large-scale dynamics better.
This substantial gain in efficiency opens up CC-LSTM to larger datasets than
GANs.
# Are you using test log-likelihood correctly?
Sameer K. Deshpande111Contributed equally. Correspondence to:
<EMAIL_ADDRESS>, <EMAIL_ADDRESS>, and <EMAIL_ADDRESS>. University of
Wisconsin–Madison; Soumya Ghosh11footnotemark: 1 MIT-IBM Watson AI Lab, IBM
Research; Tin D. Nguyen11footnotemark: 1 MIT-IBM Watson AI Lab, Massachusetts
Institute of Technology; Tamara Broderick, MIT-IBM Watson AI Lab, Massachusetts
Institute of Technology
###### Abstract
Test log-likelihood is commonly used to compare different models of the same
data or different approximate inference algorithms for fitting the same
probabilistic model. We present simple examples demonstrating how comparisons
based on test log-likelihood can contradict comparisons according to other
objectives. Specifically, our examples show that (i) approximate Bayesian
inference algorithms that attain higher test log-likelihoods need not also
yield more accurate posterior approximations and (ii) conclusions about
forecast accuracy based on test log-likelihood comparisons may not agree with
conclusions based on root mean squared error.
## 1 Introduction
Test log-likelihood, also known as predictive log-likelihood or test log-
predictive, is computed as the log-predictive density averaged over a set of
held-out data. It is often used to compare different models of the same data
or to compare different algorithms used to fit the same probabilistic model.
Although there are compelling reasons for this practice (Section 2.1), we
provide examples that falsify the following, usually implicit, claims:
* •
Claim: The higher the test log-likelihood, the more accurately an approximate
inference algorithm recovers the Bayesian posterior distribution of latent
model parameters (Section 3).
* •
Claim: The higher the test log-likelihood, the better the predictive
performance on held-out data according to other measurements, like root mean
squared error (Section 4).
Our examples demonstrate that test log-likelihood is not always a good proxy
for posterior approximation error. They further demonstrate that forecast
evaluations based on test log-likelihood may not agree with forecast
evaluations based on root mean squared error.
We are not the first to highlight discrepancies between test log-likelihood
and other analysis objectives. For instance, Quiñonero-Candela et al., (2005)
and Kohonen and Suomela, (2005) showed that when predicting discrete data
with continuous distributions, test log-likelihood can be made arbitrarily
large by concentrating probability into vanishingly small intervals. Chang et
al., (2009) observed that topic models with larger test log-predictive
densities can be less interpretable. Yao et al., (2019) highlighted the
disconnect between test log-likelihood and posterior approximation error in
the context of Bayesian neural networks. Our examples, however, reveal more
fundamental discrepancies between test log-likelihood and other evaluation
metrics. In particular, we show how comparisons based on test log-likelihood
can contradict comparisons based on other objectives even in simple models
like linear regression.
After introducing our notation, we precisely define test log-likelihood and
review arguments for its use in Section 2. In Section 3, we show that over a
range of posterior approximations provided by a recent method, those with
higher test log-likelihood provide worse posterior approximation quality; in
additional examples, we recover similar results even when using different
approximations and even when there is little or no model misspecification. In
Section 4, we show examples in both complex and simple models where test log-
likelihood is higher but root mean squared error on held-out data is worse.
Our examples in Section 4 do depend on model misspecification, but we note
that model misspecification is unavoidable in practice. We conclude in Section
5 with a reflection on when we should use test log-likelihood in practice.
## 2 Background
We assume we have access to data independently and identically distributed
(i.i.d.) from an unknown probability distribution $\mathcal{P}$. Let
$\mathcal{D}=\{y_{n}\}_{n=1}^{N}$ denote the training data. In many standard
analyses, practitioners will have access to a predictive density of a future
data point $y^{\star}$ given the observed $\mathcal{D}$:
$\pi(y^{\star}|\mathcal{D})$. For instance, consider the following three
cases.
* •
Case A: Practitioners often model the observed data by introducing a parameter
$\theta$ and specifying that the data are i.i.d. from a conditional
distribution $\Pi(Y|\theta)$ with density $\pi(y|\theta).$ In a non-Bayesian
analysis, one usually computes a point estimate $\hat{\theta}$ of the unknown
parameter (e.g. by maximum likelihood). Given a point estimate $\hat{\theta},$
the predictive density $\pi(y^{\star}|\mathcal{D})$ is just
$\pi(y^{\star}|\hat{\theta}).$
* •
Case B: A Bayesian analysis elaborates the conditional model from Case A by
specifying a prior distribution $\Pi(\theta)$ and formally computes the
density $\pi(\theta|\mathcal{D})$ of the posterior distribution
$\Pi(\theta|\mathcal{D})$ from the assumed joint distribution
$\Pi(\mathcal{D},\theta).$ The Bayesian posterior predictive density is given
by
$\pi(y^{\star}|\mathcal{D})=\int{\pi(y^{\star}|\theta)\pi(\theta|\mathcal{D})d\theta.}$
(1)
* •
Case C: An approximate Bayesian analysis proceeds as in Case B but uses an
approximation in place of the exact posterior. If we let
$\Pi(\theta|\mathcal{D})$ represent an approximation to the exact posterior,
Equation 1 yields the approximate Bayesian posterior predictive density
$\pi(y^{\star}|\mathcal{D})$. Sometimes, due to the difficulty of the integral in
Equation 1, a further approximation may be used to yield a predictive density
$\pi(y^{\star}|\mathcal{D})$.
In all of these cases, we will refer to the practitioner as having access to a
model $\Pi$ that determines the predictive distribution
$\pi(y^{\star}|\mathcal{D})$; in particular, we allow “model” henceforth to
encompass fitted models and posterior approximations. One can ask how well the
resulting $\pi(y^{\star}|\mathcal{D})$ predicts new data generated from
$\mathcal{P}.$ Practitioners commonly assess how well their model predicts
out-of-sample using a held-out set of testing data
$\mathcal{D}^{\star}=\{y^{\star}_{n}\}_{n=1}^{N^{\star}},$ which was not
used to train the model. To compute test log-likelihood, they average
evaluations of the log-predictive density function over the testing set:
$\textrm{TLL}(\mathcal{D}^{\star};\Pi):=\frac{1}{N^{\star}}\sum_{n=1}^{N^{\star}}{\log\pi(y^{\star}_{n}|\mathcal{D})},$
(2)
where our notation makes explicit the dependence of the test log-likelihood
(TLL) on testing data $\mathcal{D}^{\star}$ and the chosen model $\Pi.$ In
particular, researchers commonly use test log-likelihood to select between two
models of the data, say $\Pi$ and $\tilde{\Pi};$ that is, they select model
$\Pi$ over $\tilde{\Pi}$ whenever $\textrm{TLL}(\mathcal{D}^{\star};\Pi)$ is
higher than $\textrm{TLL}(\mathcal{D}^{\star};\tilde{\Pi}).$ Note that the
abbreviation NLPD (negative log predictive density) is also commonly used in
the literature for the negative TLL (Quiñonero-Candela et al., , 2005; Kohonen
and Suomela, , 2005).
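Eqn 2 amounts to a one-line average of per-point log predictive densities; a minimal sketch (the Gaussian predictive densities below are purely illustrative stand-ins for $\pi(y^{\star}|\mathcal{D})$):

```python
import numpy as np

def tll(log_pred_density, y_test):
    """Test log-likelihood (Eqn 2): average log predictive density
    over the held-out set."""
    return np.mean([log_pred_density(y) for y in y_test])

def gaussian_logpdf(y, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (y - mu)**2 / (2 * sigma**2)

rng = np.random.default_rng(0)
y_star = rng.normal(0.0, 1.0, size=10_000)   # test draws from P = N(0, 1)

# Two candidate predictive distributions: well-matched vs shifted.
tll_good = tll(lambda y: gaussian_logpdf(y, 0.0, 1.0), y_star)
tll_bad = tll(lambda y: gaussian_logpdf(y, 2.0, 1.0), y_star)
print(tll_good > tll_bad)  # True: the better-matched predictive wins here
```

In this well-behaved illustration the higher TLL correctly identifies the predictive closer to $\mathcal{P}$; the examples in Sections 3 and 4 show where such intuition breaks down.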
### 2.1 The case for test log-likelihood
In what follows, we first observe that, if we wanted to choose a model whose
predictive distribution is closer to the true data distribution in a certain
KL sense, then it is equivalent to choose a model with higher _expected log-
predictive density_ (elpd). Second, we observe that TLL is a natural estimator
of elpd when we have access to a finite dataset.
The unrealistic case where the true data-generating distribution is known. The
expected log-predictive density is defined as
$\textrm{elpd}(\Pi):=\int{\log\pi(y^{\star}|\mathcal{D})d\mathcal{P}(y^{\star})}.$
Our use of the abbreviation elpd follows the example of Gelman et al., (2014,
Equation 1). If we ignore an additive constant not depending on $\Pi$,
$\textrm{elpd}(\Pi)$ is equal to the negative Kullback–Leibler divergence from
the predictive distribution $\Pi(y^{\star}|\mathcal{D})$ to the true data
distribution $\mathcal{P}(y^{\star})$. Specifically, if we assume
$\mathcal{P}$ has density $p(y^{\star}),$ we have
$\textrm{KL}\left(\mathcal{P}(y^{\star})\mathbin{\|}\Pi(y^{\star}|\mathcal{D})\right)=\int{p(y^{\star})\log
p(y^{\star})dy^{\star}}-\textrm{elpd}(\Pi).$
Thus, $\textrm{elpd}(\Pi)>\textrm{elpd}(\tilde{\Pi})$ if and only if the
predictive distribution $\Pi(y^{\star}|\mathcal{D})$ is closer, in a specific
KL sense, to the true data distribution than the predictive distribution
$\tilde{\Pi}(y^{\star}|\mathcal{D}).$
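The identity above can be checked numerically in a case where everything is available in closed form (a sketch with illustrative Gaussian choices for $\mathcal{P}$ and the predictive; the Monte Carlo sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# True data distribution P = N(0, 1); predictive Q = N(mu_q, 1).
mu_q = 0.5
y = rng.normal(0.0, 1.0, size=200_000)

def logpdf(y, mu):
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - mu) ** 2

neg_entropy = np.mean(logpdf(y, 0.0))   # Monte Carlo E_P[log p]
elpd = np.mean(logpdf(y, mu_q))         # Monte Carlo elpd(Q)
kl_analytic = 0.5 * mu_q**2             # KL(N(0,1) || N(mu_q, 1))

# KL(P || Q) = E_P[log p] - elpd(Q), up to Monte Carlo error.
print(abs((neg_entropy - elpd) - kl_analytic) < 0.01)  # True
```

The Monte Carlo estimate of $\int p\log p - \textrm{elpd}$ matches the analytic KL divergence, confirming that ranking models by elpd is ranking them by KL closeness to $\mathcal{P}$.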
Test log-likelihood as an estimator. Since we generally do not know the true
generating distribution $\mathcal{P}$, computing $\textrm{elpd}(\Pi)$ exactly
is not possible. By assumption, though, the test data are i.i.d. draws from
$\mathcal{P}$. So $\textrm{TLL}(\mathcal{D}^{\star};\Pi)$ is a computable
Monte Carlo estimate of $\textrm{elpd}(\Pi).$ If we assume
$\textrm{elpd}(\Pi)$ is finite, it follows that a Strong Law of Large Numbers
applies: as $N^{\star}\rightarrow\infty,$
$\textrm{TLL}(\mathcal{D}^{\star};\Pi)$ converges almost surely to
$\textrm{elpd}(\Pi).$ Therefore, with a sufficiently high amount of testing
data, we might compare the estimates $\textrm{TLL}(\mathcal{D}^{\star};\Pi)$
and $\textrm{TLL}(\mathcal{D}^{\star};\tilde{\Pi})$ in place of the desired
comparison of $\textrm{elpd}(\Pi)$ and $\textrm{elpd}(\tilde{\Pi})$.
### 2.2 Practical concerns
Since $\textrm{TLL}(\mathcal{D}^{\star};\Pi)$ is an estimate of
$\textrm{elpd}(\Pi),$ it is subject to sampling variability, and a careful
comparison would ideally take this sampling variability into account. We first
elaborate on the problem and then describe one option for estimating and using
the sampling variability in practice; we take this approach in our experiments
below.
To start, suppose we had another set of $N^{\star}$ testing data points,
$\tilde{\mathcal{D}}^{\star}$. Then generally
$\textrm{TLL}(\mathcal{D}^{\star};\Pi)\neq\textrm{TLL}(\tilde{\mathcal{D}}^{\star};\Pi).$
So it is possible, in principle, to draw different conclusions using the TLL
based on different testing datasets. We can more reasonably express confidence
that $\textrm{elpd}(\Pi)$ is larger than $\textrm{elpd}(\tilde{\Pi})$ if the
lower bound of a confidence interval for $\textrm{elpd}(\Pi)$ exceeds the
upper bound of a confidence interval for $\textrm{elpd}(\tilde{\Pi})$.
We next describe one way to estimate useful confidence intervals. To do so, we
make the additional (mild) assumption that
$\sigma^{2}_{\textrm{TLL}}(\Pi):=\int{\left[\log\pi(y^{\star}|\mathcal{D})-\textrm{elpd}(\Pi)\right]^{2}d\mathcal{P}(y^{\star})}<\infty.$
Then a Central Limit Theorem applies: as $N^{\star}\rightarrow\infty,$
$\sqrt{N^{\star}}\left(\textrm{TLL}(\mathcal{D}^{\star};\Pi)-\textrm{elpd}(\Pi)\right)\stackrel{{\scriptstyle\text{d}}}{{\rightarrow}}\mathcal{N}(0,\sigma^{2}_{\textrm{TLL}}(\Pi)).$
Although we cannot generally compute $\sigma_{\textrm{TLL}}(\Pi),$ we can
estimate it with the sample standard deviation
$\hat{\sigma}_{\textrm{TLL}}(\Pi)$ of the evaluations
$\{\log\pi(y^{\star}_{n}|\mathcal{D})\}_{n=1}^{N^{\star}}$. The resulting
approximate 95% confidence interval for $\textrm{elpd}(\Pi)$ is
$\textrm{TLL}(\mathcal{D}^{\star};\Pi)\pm
2\hat{\sigma}_{\textrm{TLL}}/\sqrt{N^{\star}}.$ In what follows, then, we will
conclude $\textrm{elpd}(\Pi)>\textrm{elpd}(\tilde{\Pi})$ if
$\textrm{TLL}(\mathcal{D}^{\star};\Pi)-2\hat{\sigma}_{\textrm{TLL}}(\Pi)/\sqrt{N^{\star}}>\textrm{TLL}(\mathcal{D}^{\star};\tilde{\Pi})+2\hat{\sigma}_{\textrm{TLL}}(\tilde{\Pi})/\sqrt{N^{\star}}.$
(3)
For the sake of brevity, we will still write
$\textrm{TLL}(\mathcal{D}^{\star};\Pi)>\textrm{TLL}(\mathcal{D}^{\star};\tilde{\Pi})$
in place of Equation 3 below.
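The comparison rule in Equation 3 can be sketched as follows (the per-point log-density arrays here are synthetic stand-ins for real model evaluations):

```python
import numpy as np

def tll_with_se(log_evals):
    """TLL and its Monte Carlo standard error from per-point log densities."""
    log_evals = np.asarray(log_evals)
    return log_evals.mean(), log_evals.std(ddof=1) / np.sqrt(log_evals.size)

def confidently_higher(log_evals_a, log_evals_b):
    """Equation (3): declare elpd(A) > elpd(B) only when the 2-standard-error
    intervals around the two TLL estimates are disjoint."""
    m_a, s_a = tll_with_se(log_evals_a)
    m_b, s_b = tll_with_se(log_evals_b)
    return m_a - 2 * s_a > m_b + 2 * s_b

rng = np.random.default_rng(0)
a = rng.normal(-1.40, 0.1, size=10_000)   # per-point log densities, model A
b = rng.normal(-1.45, 0.1, size=10_000)   # per-point log densities, model B
print(confidently_higher(a, b))  # True: the gap far exceeds the standard errors
```

With a small test set or a small gap in means, `confidently_higher` returns False in both directions, and no elpd conclusion should be drawn.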
To summarize: for a sufficiently large test dataset $\mathcal{D}^{\star}$, we
expect predictions made from a model with larger TLL to be closer (in the KL
sense above) to realizations from the true data-generating process. In our
experiments below, we choose large test datasets so that we expect TLL
comparisons to reflect elpd comparisons. Our experiments instead illustrate
that closeness between $\Pi(y^{\star}|\mathcal{D})$ and $\mathcal{P}$ (in the
KL sense above) often does not align with a different stated objective.
## 3 Claim: higher test log-likelihood corresponds to better posterior
approximation
In this section, we give examples where test log-likelihood is higher though
the quality of an approximate posterior mean, variance, or other common
summary is lower. We start with examples in mis-specified models and then give
a correctly specified example. We conclude with a discussion of the source of
the discrepancy: even in the well-specified case, the Bayesian posterior
predictive need not be close to the true data-generating distribution.
Practitioners often use posterior expectations to summarize the relationship
between a covariate and a response. For instance, the posterior mean serves as
a point estimate, and the posterior standard deviation quantifies uncertainty.
However, as the posterior density $\pi(\theta|\mathcal{D})$ is analytically
intractable, practitioners must instead rely on approximate posterior
computations. There are myriad approximate inference algorithms – e.g. Laplace
approximation, Hamiltonian Monte Carlo (HMC), mean-field variational
inference, to name just a few. All these algorithms aim to approximate the
same posterior $\Pi(\theta|\mathcal{D}).$ The log predictive density is often used
to compare the quality of different approximations, with higher TLL values
assumed to reflect more accurate approximations, e.g. in the context of
variational inference (see, e.g., Hoffman et al., , 2013; Ranganath et al., ,
2014; Hernández-Lobato et al., , 2016; Liu and Wang, , 2016; Shi et al., ,
2018) or Bayesian deep learning (see, e.g., Hernández-Lobato and Adams, ,
2015; Gan et al., , 2016; Li et al., , 2016; Louizos and Welling, , 2016; Sun
et al., , 2017; Ghosh et al., , 2018; Mishkin et al., , 2018; Wu et al., ,
2019; Izmailov et al., , 2020, 2021; Ober and Aitchison, , 2021).
Formally, suppose that our exact posterior is $\Pi(\theta|\mathcal{D})$ and
that we have two approximate inference algorithms that produce two approximate
posteriors, respectively $\hat{\Pi}_{1}(\theta|\mathcal{D})$ and
$\hat{\Pi}_{2}(\theta|\mathcal{D}).$ The exact posterior and its
approximations respectively induce predictive distributions
$\Pi(y^{\star}|\mathcal{D}),\hat{\Pi}_{1}(y^{\star}|\mathcal{D}),$ and
$\hat{\Pi}_{2}(y^{\star}|\mathcal{D}).$ For instance,
$\hat{\Pi}_{1}(\theta|\mathcal{D})$ could be the empirical distribution of
samples drawn using HMC and $\hat{\Pi}_{2}(\theta|\mathcal{D})$ could be a
mean-field variational approximation. Our first example demonstrates that it
is possible that (i)
$\textrm{TLL}(\mathcal{D}^{\star};\hat{\Pi}_{1})>\textrm{TLL}(\mathcal{D}^{\star};\Pi)$
but (ii) using $\hat{\Pi}_{1}$ could lead to different inference about model
parameters than using the exact posterior $\Pi.$ Our second example
demonstrates that it is possible that (i)
$\textrm{TLL}(\mathcal{D}^{\star};\hat{\Pi}_{1})>\textrm{TLL}(\mathcal{D}^{\star};\hat{\Pi}_{2})$
but (ii) $\hat{\Pi}_{1}(\theta|\mathcal{D})$ is a worse approximation to the
exact posterior $\Pi(\theta|\mathcal{D})$ than
$\hat{\Pi}_{2}(\theta|\mathcal{D}).$
Figure 1: _(Left)_. Predictive distributions under the Bayesian posterior and
mean field variational approximations. The two numbers in the title of each
plot are the 2-Wasserstein distance to the exact posterior and test log-
likelihood computed on $10^{4}$ test set observations. Two standard errors in
the test log-likelihood estimate are (A) 0.03, (B) 0.03, (C) 0.02, (D) 0.02,
(E) 0.02, (F) 0.02. _(Right)_. The relationship between 2-Wasserstein distance
to the posterior and test log-likelihood.
### 3.1 TLL and downstream posterior inference
Relying on TLL for model selection can lead to different inferences than we
would find by using the exact posterior. To illustrate, suppose we observe
$\mathcal{D}_{100}=\{(x_{n},y_{n})\}_{n=1}^{100}$ drawn from the following
heteroscedastic model:
$x_{n}\sim\mathcal{N}(0,1),\quad y_{n}\mid
x_{n}\sim\mathcal{N}(x_{n},1+\log(1+\exp(x_{n}))).$ (4)
Further suppose we model these data with a mis-specified homoscedastic model:
$\theta\sim\mathcal{N}([0,0]^{\top},[1,0;0,1]),\quad
y_{n}\mid\theta,\phi_{n}\sim\mathcal{N}(\theta^{\top}\phi_{n},1),$ (5)
where $\phi_{n}=[x_{n},1]^{\top}$, and $\theta=[\theta_{1},\theta_{2}]$.
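For this conjugate setting the exact posterior is available in closed form: with prior $\mathcal{N}(0,I)$ and unit noise variance, $\theta\mid\mathcal{D}\sim\mathcal{N}(\Sigma\Phi^{\top}y,\,\Sigma)$ with $\Sigma=(I+\Phi^{\top}\Phi)^{-1}$. A sketch (data regenerated with our own seed, so the numbers depend on the particular random draw and will not exactly match the paper's figures):

```python
import numpy as np

rng = np.random.default_rng(0)

# Data from the heteroscedastic model, Eqn (4).
n = 100
x = rng.normal(size=n)
y = rng.normal(x, np.sqrt(1 + np.log1p(np.exp(x))))

# Exact posterior of the mis-specified homoscedastic model, Eqn (5):
#   Sigma = (I + Phi^T Phi)^{-1},  mu = Sigma Phi^T y.
Phi = np.column_stack([x, np.ones(n)])
Sigma = np.linalg.inv(np.eye(2) + Phi.T @ Phi)
mu = Sigma @ Phi.T @ y

# 95% interval for the slope theta_1 (Gaussian posterior quantiles).
sd1 = np.sqrt(Sigma[0, 0])
lo, hi = mu[0] - 1.96 * sd1, mu[0] + 1.96 * sd1
print(f"95% credible interval for theta_1: [{lo:.2f}, {hi:.2f}]")
```

Because the true slope is clearly positive, the exact posterior interval excludes zero, matching the inference drawn from panel (A).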
Figure 1 shows the posterior mean and the $95\%$ predictive interval of the
mis-specified regression line $\theta^{\top}\phi$ from (A) the exact Bayesian
posterior; (B) the mean field variational approximation restricted to
isotropic Gaussians; and (C)–(F) variational approximations with re-scaled
marginal variances. Each panel includes a scatter plot of the observed data,
$\mathcal{D}_{100}$. We also report the 2-Wasserstein distance between the
exact posterior and each approximation and the TLL averaged over
$N^{*}=10^{4}$ test data points drawn from Equation 4; note that the
2-Wasserstein distance can be used to bound differences in means and variances
(Huggins et al., , 2020). The variational approximation (panel (B) of Figure
1) is quite accurate: the 2-Wasserstein distance between the approximation and
the exact posterior is ${\sim}10^{-4}.$ See also Figure 2, which shows the
contours of the exact and approximate posterior distributions. As we scale up
the variance of this approximation, we move away from the exact posterior over
the parameters but the posterior predictive distribution covers more data,
yielding higher TLL. The left panel of Figure 11 in Section B.3 shows the same
pattern using the KL divergence instead of the 2-Wasserstein distance.
#### TLL and a discrepancy in inferences.
Researchers are often interested in understanding whether there is a
relationship between a covariate and response; a Bayesian analysis will often
conclude that there is no relationship if the posterior on the corresponding
effect-size parameter places substantial probability on an interval not
containing zero. In our example, we wish to check whether $\theta_{1}=0$.
Notice that the exact posterior distribution (panel (A) in Figures 1 and 2) is
concentrated on positive $\theta_{1}$ values. The $95\%$ credible interval of
the exact posterior222Throughout we used symmetric credible intervals formed
by computing quantiles: the $95\%$ interval is equal to the $2.5\%$–$97.5\%$
interquantile range. is $[0.63,1.07].$ Since the interval does not contain
zero, we would infer that $\theta_{1}\neq 0$. On the other hand, as the
approximations become more diffuse (panels (B)–(F)), TLL increases, and the
approximations begin to place non-negligible probability mass on negative
$\theta_{1}$ values. In fact, the approximation with highest TLL (panel (F) in
Figures 1 and 2) yields an approximate 95% credible interval of [-0.29,1.99],
which covers zero. Had we used this approximate interval, we would have failed
to conclude $\theta_{1}\neq 0.$ That is, in this case, we would reach a
different substantive conclusion about the effect $\theta_{1}$ if we (i) use
the exact posterior or (ii) use the approximation selected by highest TLL.
Figure 2: Contours of (A) the exact posterior, (B) the mean field variational
approximation restricted to isotropic Gaussians, and (C)–(F) re-scaled mean
field approximations. The line $\theta_{1}=0$ is highlighted in red.
### 3.2 TLL in the wild
Figure 3: _(Left)_. Predictive distributions under the Bayesian posterior (A)
and the SWAG posterior with SWAG learning rate of (B) $10^{-3}$, (C)
$10^{-2}$, (D) $10^{-1}$, (E) $1$, and (F) $10$. The two numbers in the title
of each plot are the 2-Wasserstein distance to the exact posterior and test
log-likelihood computed on $10^{4}$ test set observations. Two standard errors
in the test log-likelihood estimates are (A) 0.16, (B) 0.15, (C) 0.14, (D)
0.13, (E) 0.05, (F) 0.01. _(Right)_. Contours of the (A) exact posterior, and
(B)–(F) SWAG approximations with different learning rates. The line
$\theta_{1}=0$ is highlighted in red.
Next, we examine a more realistic scenario in which the disconnect between the
quality of the posterior approximation and the TLL arises naturally, without
the need to artificially increase the marginal variance of the variational
approximations. To explore this situation, we will
first introduce another example of mis-specification and repeat the type of
analysis described in Section 3.1.
Consider the following case: we observe 500 observations
$\mathcal{D}_{500}=\\{(x_{n},y_{n})\\}_{n=1}^{500}$ drawn from a non-linear
model:
$\begin{split}\theta_{*}=[-2,-1]^{\top},\quad x_{n}\sim\mathcal{N}(0,1),\quad
y_{n}\mid\theta_{*},\phi_{n}\sim\mathcal{N}(\theta_{*}^{\top}\phi_{n}+x_{n}^{2},0.5),\end{split}$
(6)
where $\phi_{n}=[x_{n},1]^{\top}$. Further suppose we modeled these data with
a mis-specified linear model
$\theta\sim\mathcal{N}([0,0]^{\top},[1,0;0,1]),\quad
y_{n}\mid\theta,\phi_{n}\sim\mathcal{N}(\theta^{\top}\phi_{n},0.5).$ (7)
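Because the prior and likelihood in Equation 7 are both Gaussian, the misspecified model's exact posterior is available in closed form. The sketch below simulates Equation 6 and computes that conjugate posterior; reading the noise value $0.5$ as a variance is an assumption about the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 500, 0.5
theta_star = np.array([-2.0, -1.0])

# Data from the non-linear truth (Equation 6): y = theta*^T phi + x^2 + noise.
x = rng.normal(0.0, 1.0, size=n)
Phi = np.column_stack([x, np.ones(n)])          # phi_n = [x_n, 1]^T
y = Phi @ theta_star + x**2 + rng.normal(0.0, np.sqrt(sigma2), size=n)

# Conjugate posterior under the misspecified linear model (Equation 7),
# with prior theta ~ N(0, I):
#   Sigma_post = (I + Phi^T Phi / sigma2)^{-1},  mu_post = Sigma_post Phi^T y / sigma2.
Sigma_post = np.linalg.inv(np.eye(2) + Phi.T @ Phi / sigma2)
mu_post = Sigma_post @ (Phi.T @ y / sigma2)

# The x^2 term is uncorrelated with x, so the posterior slope stays near -2,
# while the intercept absorbs E[x^2] = 1 and lands near 0.
print(mu_post)
```

This closed-form posterior is what the figures' "exact posterior" panels refer to in these linear-Gaussian examples.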
While the misspecification here might appear egregious, linear models are
widely used in practice for modeling non-linear phenomena when one is
primarily interested in inferring whether the covariates are positively
correlated, negatively correlated, or are uncorrelated with the responses
(Berk et al., 2014, 2018; Blanca et al., 2018; Vowels, 2023). Next, we
use SWAG (Maddox et al., 2019), an off-the-shelf approximate inference
algorithm, to approximate the posterior $\Pi(\theta|\mathcal{D}_{500})$. We
also repeat the re-scaled variational inference experiment from Section 3.1
with this set of data and models (Equations 6 and 7); see Section B.2. SWAG
uses a gradient-based optimizer with a learning rate schedule that encourages
the optimizer to oscillate around the optimal solution instead of converging
to it. Then, a Gaussian distribution is fit to the set of solutions explored
by the optimizer around the optimum using moment matching. In general, one
must select the learning rate schedule in a heuristic fashion. One might be
tempted to use TLL to tune the learning rate schedule. We use this heuristic
and run SWAG for a thousand epochs, annealing the learning rate down to a
different constant value after $750$ epochs. Although used pedagogically here,
similar heuristics have been used in practice (di Langosco et al., 2022),
where the learning rate is tuned based on the accuracy achieved on held-out
data. We vary this constant value over the set
$\\{10^{-3},10^{-2},10^{-1},1,10\\}$. In Figure 3, we show the resulting
posterior mean and the $95\%$ predictive interval of the misspecified
regression line $\theta^{\top}\phi$ from (A) the Bayesian posterior; (B)–(F)
the SWAG posteriors using different learning rate schedules. In each plot, we
overlay the observed data $\mathcal{D}_{500}$ (black dots) with the true data
generating function in dashed black. We also report the 2-Wasserstein distance
between the exact posterior and each approximation and the TLL averaged over
$N^{*}=10^{4}$ test data points drawn from Equation 6. In all cases, SWAG
overestimates the posterior variance, with predictive distributions that
better cover the data and consequently lead to a higher $\textrm{TLL}.$
However, these SWAG posterior approximations are _farther_ from the exact
posterior. In fact, we found that a learning rate of $10$ (Figure 3, _Left_ ,
panel (F)) maximized TLL but led to the worst approximation of the exact
posterior.
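The core of SWAG, as described above, is to fit a Gaussian to the optimizer's post-burn-in trajectory by moment matching. Below is a minimal diagonal-SWAG sketch on a toy quadratic loss, not the paper's experiment; the loss, noise level, and learning rate are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(theta):
    # Gradient of the toy quadratic loss 0.5 * ||theta - [-2, -1]||^2,
    # plus noise standing in for minibatch stochasticity.
    return (theta - np.array([-2.0, -1.0])) + rng.normal(0.0, 0.5, size=2)

# Run SGD with a constant learning rate so it oscillates around the optimum
# instead of converging to it.
theta = np.zeros(2)
iterates = []
for t in range(2000):
    theta -= 0.1 * grad(theta)
    if t >= 1000:                 # keep post-burn-in iterates
        iterates.append(theta.copy())
iterates = np.array(iterates)

# SWAG-style moment matching: a Gaussian with the empirical mean and
# (here, diagonal) covariance of the iterates. A larger learning rate
# spreads the iterates out, inflating the fitted covariance.
swag_mean = iterates.mean(axis=0)
swag_var = iterates.var(axis=0)
print(swag_mean, swag_var)
```

The learning-rate dependence visible here is exactly the knob varied in panels (B)–(F): larger rates yield wider fitted Gaussians.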
As in the previous section, next suppose we fit this misspecified linear model
to understand whether there is a relationship between the covariates and the
responses, i.e., whether $\theta_{1}=0$. Notice that the exact posterior
distribution (Figure 3, _Right_ , panel (A)) is concentrated on negative
$\theta_{1}$ values, with the $95\%$ posterior credible interval being
$[-1.96,-1.79].$ Since the interval is to the left of zero, we would infer
that $\theta_{1}<0$ and that the covariate and the response are negatively
correlated. In contrast, if we select the SWAG approximation with the highest
TLL, we select the posterior approximation in panel (F) on the right side of
Figure 3. The corresponding $95\%$ posterior credible interval is
$[-4.46,0.74]$, which places non-negligible probability mass on
$\theta_{1}>0$. In this case, we would not conclude that the response and the
covariate are negatively correlated, in contrast to the conclusion drawn from
the exact posterior.
Figure 4: _(Left)_. Contours of (A) the exact posterior, (B) the mean field
variational approximation restricted to isotropic Gaussians, and (C)–(F) re-
scaled mean field approximations. The two numbers in the title of each plot
are the 2-Wasserstein distance to the exact posterior and test log-likelihoods
computed on $10^{4}$ test set observations. Two standard errors in the test
log-likelihood estimates are (A) 0.019, (B) 0.020, (C) 0.014, (D) 0.013, (E)
0.011, (F) 0.009. _(Right)_. The non-monotonic relationship between distance
to posterior and test log-likelihood. Observe that the exact posterior does
not achieve highest test log-likelihood.
### 3.3 TLL and well-specified models
The examples above demonstrated that TLL is not a reliable proxy to posterior
approximation quality when the model is mis-specified. Though mis-specified
models are the norm in practice, we now demonstrate that a distribution with
higher TLL may not provide a more accurate posterior approximation even when
the model is correctly specified.
To this end, consider the following Bayesian linear model:
$\begin{split}\theta\sim\mathcal{N}([0,0]^{\top},[1,0.9;0.9,1]),\quad
y_{n}\mid\theta,\phi_{n}\sim\mathcal{N}(\theta^{\top}\phi_{n},0.25^{2}),\end{split}$
(8)
where $\phi_{n}=[x_{n},1]^{\top}$. Now, suppose we observe ten data points
$\mathcal{D}_{10}=\\{(x_{n},y_{n})\\}_{n=1}^{10}$ sampled as
$\begin{split}\theta_{*}=[-2,-1]^{\top},\quad x_{n}\sim\mathcal{N}(0,1),\quad
y_{n}\mid\theta_{*},\phi_{n}\sim\mathcal{N}(\theta_{*}^{\top}\phi_{n},0.25^{2}).\end{split}$
(9)
The left panel of Figure 4 plots the contours of (A) the exact posterior
distribution $\Pi(\theta|\mathcal{D}_{10})$; (B) the mean field variational
approximation constrained to the isotropic Gaussian family; and (C)–(F)
variational approximations with re-scaled marginal variances. In each panel,
we report the 2-Wasserstein distance between the approximate and exact
posterior and the test log-predictive averaged over $N^{\star}=10^{4}$ test
data points drawn from Equation 9.
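The 2-Wasserstein distances reported here (and in Figure 3) have a closed form for Gaussians, $W_2^2=\|\mu_1-\mu_2\|^2+\mathrm{tr}\big(\Sigma_1+\Sigma_2-2(\Sigma_2^{1/2}\Sigma_1\Sigma_2^{1/2})^{1/2}\big)$. A sketch, with the example covariance borrowed from the prior in Equation 8 for illustration:

```python
import numpy as np

def psd_sqrt(S):
    """Symmetric square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def w2_gaussians(mu1, S1, mu2, S2):
    """Closed-form 2-Wasserstein distance between N(mu1, S1) and N(mu2, S2)."""
    S2h = psd_sqrt(S2)
    cross = psd_sqrt(S2h @ S1 @ S2h)
    w2sq = np.sum((np.asarray(mu1) - np.asarray(mu2)) ** 2) \
        + np.trace(S1 + S2 - 2.0 * cross)
    return np.sqrt(max(w2sq, 0.0))

# Matching the mean but inflating an isotropic approximation's variance
# moves it monotonically away from the exact posterior in W2.
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
dists = [w2_gaussians(mu, Sigma, mu, s * np.eye(2)) for s in (1.0, 4.0, 9.0)]
print(dists)
```

This monotone growth in $W_2$ under variance re-scaling is what drives the horizontal axis of the right panel of Figure 4.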
Although we have correctly specified the conditional model of
$y|(\theta,\phi),$ the exact posterior has a lower TLL than some of the
approximate posteriors; in particular, the 95% confidence intervals for (C)
and (D) are disjoint from the 95% confidence interval for the exact posterior,
shown in (A). The left panel of Figure 4 suggests that the more probability
mass an approximate posterior places around the true data-generating
parameter, the higher the $\textrm{TLL}.$ Eventually, as the approximation
becomes more diffuse, TLL begins to decrease (Figure 4 (right)). The non-
monotonicity demonstrates that an approximate posterior with larger implied
TLL can in fact be further away from the exact posterior in a 2-Wasserstein
sense than an approximate posterior with smaller implied $\textrm{TLL}.$ The
right panel of Figure 11 in Section B.3 demonstrates the same pattern using
the KL divergence instead of the 2-Wasserstein distance. And Figure 9 in
Section B.3 shows that, in the well-specified case, a distribution with larger
TLL can provide a worse approximation of the posterior standard deviation than
a distribution with smaller $\textrm{TLL}.$
### 3.4 What is going on?
We next discuss why we should not expect TLL to closely track posterior
approximation quality or posterior-predictive approximation quality.
Essentially, the issue is that, even in the well-specified case, the Bayesian
posterior predictive distribution need not be close to the true data-
generating distribution.
We illustrate these distinctions in Figure 5. The lower surface represents the
space of distributions over a latent parameter $\theta$. The upper surface
represents the space of distributions over an observable data point
$y^{\star}$. Each dot in the figure represents a distribution. The two dots in
the lower surface are the exact posterior $\Pi(\theta|\mathcal{D})$ (left,
green dot) and an approximate posterior $\hat{\Pi}(\theta|\mathcal{D})$
(right, red dot). The three dots in the upper surface are the posterior
predictive distribution $\Pi(y^{\star}|\mathcal{D})$ (left, green dot), the
approximate posterior predictive $\hat{\Pi}(y^{\star}|\mathcal{D})$ (lower
right, red dot), and the true data-generating distribution
$\mathcal{P}(y^{\star})$ (upper right, black dot). The gray lines on the left
and right indicate that the distribution in the upper surface can be obtained
from the corresponding (connected) distribution in the lower surface via
Equation 1.
Figure 5: Cartoon illustration highlighting the difference between three
different discrepancies explored in Section 3.4. The surfaces are spaces of
distributions over a latent parameter (lower surface) or an observable data
point $y^{\star}$ (upper surface). The pink line indicates that
$\textrm{TLL}(\mathcal{D}^{\star};\hat{\Pi})$ estimates a discrepancy between
the approximate posterior predictive $\hat{\Pi}(y^{\star}|\mathcal{D})$ (upper
surface, lower right, red dot) and the true data-generating distribution
$\mathcal{P}(y^{\star})$ (upper surface, upper right, black dot). The blue
line represents a different discrepancy between the exact posterior predictive
(upper surface, left, green dot) and the approximate posterior predictive
(upper surface, lower right, red dot). The yellow line represents another
different discrepancy between the exact posterior (lower surface, left, green
dot) and the approximate posterior (lower surface, right, red dot). Gray lines
connect distributions over parameters with their corresponding predictive
distributions.
The remaining three (non-gray) lines represent three different discrepancies.
Recall from Section 2 that $\textrm{TLL}(\mathcal{D}^{\star};\hat{\Pi})$
captures how close the approximate posterior predictive
$\hat{\Pi}(y^{\star}|\mathcal{D})$ is to the true data-generating process
$\mathcal{P}(y^{\star})$ in a particular KL sense:
$\textrm{TLL}(\mathcal{D}^{\star};\hat{\Pi})\approx-\textrm{KL}\left(\mathcal{P}(y^{\star})\mathbin{\|}\hat{\Pi}(y^{\star}|\mathcal{D})\right)+\textrm{constant}.$
To illustrate this notion of closeness, or equivalently discrepancy, in Figure
5, we draw a pink line between $\mathcal{P}(y^{\star})$ and
$\hat{\Pi}(y^{\star}|\mathcal{D})$. We observe that the TLL importantly does
_not_ approximate (even up to a constant) the analogous discrepancy from the
approximate posterior predictive $\hat{\Pi}(y^{\star}|\mathcal{D})$ to the
exact posterior predictive $\Pi(y^{\star}|\mathcal{D})$ (blue line in the
upper surface); that is, it does not capture how close the posterior
predictive approximation is to the exact posterior predictive. The TLL
likewise does not approximate (even up to a constant) the corresponding
discrepancy from the approximate posterior $\hat{\Pi}(\theta|\mathcal{D})$ to
the exact posterior $\Pi(\theta|\mathcal{D})$ (yellow line in the lower
surface); that is, it does not capture how close the posterior approximation
is to the exact posterior.
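The approximation $\textrm{TLL}\approx-\textrm{KL}(\mathcal{P}\mathbin{\|}\hat{\Pi})+\textrm{constant}$ can be checked numerically: the average log-density of samples from $\mathcal{P}$ under $\hat{\Pi}$ estimates $-H(\mathcal{P})-\textrm{KL}(\mathcal{P}\mathbin{\|}\hat{\Pi})$, so the "constant" is the negative entropy of the data-generating distribution. A 1-D Gaussian sketch with distributions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# True data-generating distribution P and an approximate predictive.
mu_p, s_p = 0.0, 1.0          # P = N(0, 1)
mu_q, s_q = 0.5, 1.0          # approximate predictive = N(0.5, 1)

y = rng.normal(mu_p, s_p, size=n)

def log_pdf(y, mu, s):
    return -0.5 * np.log(2 * np.pi * s**2) - (y - mu)**2 / (2 * s**2)

tll = log_pdf(y, mu_q, s_q).mean()             # Monte Carlo TLL

# Closed forms for equal variances: KL(P || Q) = (mu_p - mu_q)^2 / (2 s_q^2),
# and the entropy of P is 0.5 * log(2 * pi * e * s_p^2).
kl = (mu_p - mu_q)**2 / (2 * s_q**2)
entropy = 0.5 * np.log(2 * np.pi * np.e * s_p**2)
print(tll, -(entropy + kl))                    # the two agree up to MC error
```

Note the constant depends only on $\mathcal{P}$, so it cancels when comparing two approximations on the same test set, which is why TLL ranks approximations by their KL to $\mathcal{P}$ rather than to the posterior predictive.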
The pink and blue lines would (nearly) align if the posterior predictive were
very close to the true data-generating distribution. For a mis-specified
model, the posterior predictive need not be close to the true data-generating
distribution. For a well-specified model, the posterior predictive and the true
data-generating distribution may still be far apart for a finite dataset. On that
view, as suggested by an anonymous referee, we might expect the observed
phenomenon to disappear asymptotically in the well-specified setting if
sufficient regularity conditions hold. The argument, essentially, is that (i)
the actual posterior $\Pi(\theta|\mathcal{D})$ converges to a point-mass at
the true data-generating parameter; this convergence implies (ii) that the
actual posterior predictive $\Pi(y^{\star}|\mathcal{D})$ converges to the true
data distribution $\mathcal{P}(y^{\star}),$ from which it follows that for
large enough training datasets (iii)
$\textrm{KL}\left(\Pi(y^{\star}|\mathcal{D})\mathbin{\|}\hat{\Pi}(y^{\star}|\mathcal{D})\right)\approx\textrm{KL}\left(\mathcal{P}(y^{\star})\mathbin{\|}\hat{\Pi}(y^{\star}|\mathcal{D})\right).$
However, we emphasize first that essentially every real data analysis is mis-
specified. And second, if a practitioner is in a setting where they are
confident there is no uncertainty in the unknown parameter value, there may be
little reason to take a Bayesian approach or go to the sometimes-considerable
computational burden of approximating the Bayesian posterior.
## 4 Claim: higher test log-likelihood corresponds to lower predictive error
As noted in Sections 2.1 and 3.4, TLL estimates how close a predictive
distribution is to the true data-generating process in a specific KL sense.
On that view and analogous to Section 3, we would not expect conclusions made
by TLL to match conclusions made by comparing other predictive losses. Rather
than focus on more esoteric losses in our experiments, we note that TLL and
RMSE are often reported as default measures of model fit quality in papers. If
conclusions made between TLL and RMSE do not always agree (as we expect and
reinforce experimentally next), we should not expect TLL to always reflect
performance according to other predictive losses beyond RMSE. If the TLL is of
fundamental interest, this observation is of little consequence; if TLL is a
convenient stand-in for a potential future loss of interest, this observation
may be meaningful.
Misspecified Gaussian process regression. We next construct two models $\Pi$
and $\tilde{\Pi}$ such that
$\textrm{TLL}(\mathcal{D}^{\star};\Pi)<\textrm{TLL}(\mathcal{D}^{\star};\tilde{\Pi})$
but $\tilde{\Pi}$ yields larger predictive RMSE. Suppose we observe
$\mathcal{D}_{100}=\\{(x_{n},y_{n})\\}_{n=1}^{100}$ from the following data
generating process:
$\begin{split}x_{n}\sim\mathcal{U}(-5,+5),\quad
y_{n}|x_{n}\sim\mathcal{N}(\sin(2x_{n}),0.1).\end{split}$ (10)
Further suppose we model this data using a zero-mean Gaussian process (GP)
with Gaussian noise,
$\begin{split}f\sim\text{GP}(\mathbf{0},k(x,x^{\prime})),\quad
y_{n}|f_{n}\sim\mathcal{N}(f_{n},\sigma^{2}),\end{split}$ (11)
where $f_{n}$ is shorthand for $f(x_{n})$. First consider the case where we
employ a periodic kernel (PeriodicMatern32 in
https://github.com/SheffieldML/GPy), constrain the noise nugget $\sigma^{2}$ to
$1.6$, and fit all other hyper-parameters by maximizing the marginal
likelihood. The resulting fit is shown in Figure 6 (A). Next, consider an
alternate model where we use a squared-exponential kernel and fit all hyper-
parameters including the noise nugget via maximum marginal likelihood. The
resulting fit is displayed in Figure 6 (B). The squared exponential model
fails to recover the predictive mean and reverts to the prior mean
($\textrm{RMSE}=0.737$, 95% confidence interval $[0.729,0.745]$), while the
periodic model recovers the predictive mean accurately, as measured by
$\textrm{RMSE}=0.355$ ($95\%$ confidence interval $[0.351,0.360]$). Despite
the poor mean estimate provided by the squared exponential model, it scores a
substantially higher TLL.
Figure 6: The plots display two Gaussian processes trained on the same set of
data (represented by black plus symbols). The dashed red line shows the mean
of the posterior Gaussian process, while the red highlighted region represents
the $95\%$ predictive interval. The subplot titles display the TLL ($\pm 2$
standard error) attained by each Gaussian process. Although the Gaussian
process in panel (A) achieves a better mean fit compared to panel (B), it has
a worse TLL when evaluated on $10^{4}$ test instances (represented by black
dots).
In this example, we see that, even with an accurate point estimate, TLL can be
reduced by, for instance, inflating the predictive uncertainty. And this
discrepancy between TLL and RMSE is not necessarily removed by optimizing the
parameters of a model.
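The mechanism can be reproduced with two hand-built Gaussian predictives, without fitting GPs: predictive A has the correct mean but inflated variance (mimicking the nugget constrained to $1.6$), while predictive B ignores the mean and fits only the marginal spread. All constants are illustrative stand-ins for the paper's fits, and reading the noise value $0.1$ in Equation 10 as a variance is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Test data from the truth in Equation 10.
x = rng.uniform(-5, 5, size=n)
y = np.sin(2 * x) + rng.normal(0.0, np.sqrt(0.1), size=n)

def avg_log_pdf(y, mean, var):
    return np.mean(-0.5 * np.log(2 * np.pi * var) - (y - mean)**2 / (2 * var))

# Predictive A: accurate mean, variance inflated to 0.1 + 1.6.
tll_a = avg_log_pdf(y, np.sin(2 * x), 0.1 + 1.6)
rmse_a = np.sqrt(np.mean((y - np.sin(2 * x))**2))

# Predictive B: mean reverted to zero, variance fit to the marginal spread.
tll_b = avg_log_pdf(y, 0.0, np.var(y))
rmse_b = np.sqrt(np.mean(y**2))

print(tll_a, tll_b)    # B scores the higher TLL ...
print(rmse_a, rmse_b)  # ... despite much worse point predictions
```

Predictive A pays a log-variance penalty at every test point, while predictive B's moderate, well-calibrated marginal variance is rewarded even though its mean is useless for point prediction.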
Misspecified linear regression. Our next example illustrates that even when
all parameters in a model are fit with maximum likelihood, a comparison based
on TLL may still disagree with a comparison based on RMSE. It also illustrates
that the discrepancy between TLL and RMSE can arise even in very simple and
low-dimensional models and even when the training dataset is very large.
Specifically, suppose that we observe
$\mathcal{D}=\\{(x_{n},y_{n})\\}_{n=1}^{100,000}$ generated according to
$x_{n}\sim\mathcal{U}(0,25),\quad
y_{n}|x_{n}\sim\text{Laplace}(x_{n},1/\sqrt{2}),$ (12)
which we model using one of the following mis-specified conditional linear
models:
$\begin{split}\Pi:y_{n}|x_{n}&\sim\mathcal{N}(\theta x_{n},\sigma^{2})\\\
&\text{or}\\\ \tilde{\Pi}:y_{n}|x_{n}&\sim\text{Laplace}(0.45+\theta
x_{n},\lambda).\end{split}$ (13)
Both $\Pi$ and $\tilde{\Pi}$ depend on two unknown parameters. $\Pi$ depends
on a slope $\theta$ and a residual variance $\sigma^{2}$ and $\tilde{\Pi}$
depends on a slope $\theta$ and a residual scale $\lambda$. The kind of mis-
specification is different across models; while $\Pi$ has the correct mean
specification but incorrect noise specification, $\tilde{\Pi}$ has incorrect
mean specification but correct noise specification.
We computed the maximum likelihood estimates (MLEs)
$(\hat{\theta}_{\Pi},\hat{\sigma}_{\Pi})$ and
$(\hat{\theta}_{\tilde{\Pi}},\hat{\lambda}_{\tilde{\Pi}})$ for both models.
The two fitted models induce the following predictive distributions of
$y^{\star}|x^{\star}$:
$\begin{split}\Pi(y^{\star}|x^{\star},\mathcal{D}):y^{\star}|x^{\star}&\sim\mathcal{N}(\hat{\theta}_{\Pi}x^{\star},\hat{\sigma}_{\Pi}^{2})\\\
&\text{and}\\\
\tilde{\Pi}(y^{\star}|x^{\star},\mathcal{D}):y^{\star}|x^{\star}&\sim\text{Laplace}(0.45+\hat{\theta}_{\tilde{\Pi}}x^{\star},\hat{\lambda}_{\tilde{\Pi}}).\end{split}$
(14)
The means of these predictive distributions are natural point estimates of the
output $y^{\star}$ at input $x^{\star}.$
Using a test set of size $N^{\star}=395{,}000,$ we observed
$\textrm{TLL}(\mathcal{D}^{\star};\Pi)=-1.420<-1.389=\textrm{TLL}(\mathcal{D}^{\star};\tilde{\Pi}).$
The standard error of either TLL estimate is only $0.002$. Hence, based on
sample mean and standard error, we conclude that $\tilde{\Pi}$ has better elpd
than $\Pi$. These values suggest that on average over inputs $x^{\star},$
$\tilde{\Pi}(y^{\star}|x^{\star},\mathcal{D})$ is closer to
$\mathcal{P}(y^{\star}|x^{\star})$ than $\Pi(y^{\star}|x^{\star},\mathcal{D})$
in a KL sense. However, using the same test set, we found that $\Pi$ yielded
more accurate point forecasts, as measured by root mean square error (RMSE):
$\begin{split}\left(\frac{1}{N^{\star}}\sum_{n=1}^{N^{\star}}{(y_{n}^{\star}-\hat{\theta}_{\Pi}x^{\star}_{n})^{2}}\right)^{1/2}=1.000<1.025=\left(\frac{1}{N^{\star}}\sum_{n=1}^{N^{\star}}{(y_{n}^{\star}-0.45-\hat{\theta}_{\tilde{\Pi}}x^{\star}_{n})^{2}}\right)^{1/2}.\end{split}$
(15)
In addition, the $95\%$ confidence intervals for the RMSE do not overlap: the
interval for $\Pi$’s RMSE is $[0.997,1.005]$ and that for $\tilde{\Pi}$’s RMSE
is $[1.022,1.029]$. The comparison of RMSEs suggests that on average over
inputs $x^{\star},$ the predictive mean of
$\Pi(y^{\star}|x^{\star},\mathcal{D})$ is closer to the mean of
$\mathcal{P}(y^{\star}|x^{\star})$ than the predictive mean of
$\tilde{\Pi}(y^{\star}|x^{\star},\mathcal{D}).$ In other words, the model with
larger TLL, whose predictive distribution is ostensibly closer to
$\mathcal{P}$, makes worse point predictions than the model with smaller
$\textrm{TLL}.$
## 5 Discussion
Our paper is neither a blanket indictment nor recommendation of test log-
likelihood. Rather, we hope to encourage researchers to explicitly state and
commit to a particular data-analysis goal, and to recognize that different
methods may perform better under different goals. For instance, when the
stated goal is to approximate (summary statistics of) a Bayesian posterior, we
argue that it is inappropriate to rely on test log-likelihood to compare
different approximation methods. We have produced examples where a model can
provide a better test log-likelihood but yield a (much) poorer approximation
to the Bayesian posterior, in particular leading to fundamentally different
inferences and decisions. We have described why this phenomenon occurs: test
log-likelihood tracks closeness of approximate posterior predictive
distributions to the data-generating process and not to the posterior (or
posterior predictive) distribution. At the same time, we recognize that
evaluating posterior approximation quality is a fundamentally difficult
problem and will generally necessitate the use of a proxy. It may be useful to
consider several of the available options; a full accounting is beyond the
scope of this paper, but they include using conjugate models where exact
posterior summary statistics are available; comparing to established MCMC
methods on models where a sufficiently large compute budget might be expected
to yield a reliable approximation; simulation-based calibration (Talts et al.,
2018); sample-quality diagnostics (Gorham and Mackey, 2015; Chwialkowski
et al., 2016; Liu et al., 2016); and a host of visual diagnostics (Gabry
et al., 2019). A careful investigation to understand how a particular method
struggles or succeeds may be especially illuminating.
On the other hand, in many data analyses, the goal is to make accurate
predictions about future observables or identify whether a treatment will help
people who receive it. In these cases and many others, using a Bayesian
approach is just one possible means to an end. And many of the arguments for
using the exact Bayesian posterior in decision making assume correct model
specification, which we cannot rely upon in practice. In predictive settings
in particular, test log-likelihood may provide a compelling way to assess
performance. In addition to being essentially the only strictly proper local
scoring rule (Bernardo and Smith, 2000, Proposition 3.13), TLL is sometimes
advertised as a “non-informative” choice of loss function (Robert, 1996).
Importantly, however, non-informative does not mean all-encompassing: as our
examples in Section 4 show, test log-likelihood does not necessarily track
with other notions of predictive loss. As we discuss in Section 2.1, test log-
likelihood quantifies a predictive discrepancy only in a particular
Kullback–Leibler sense. It is important to note, however, that just because
two distributions are close in KL, their means and variances need not be
close; in fact, Propositions 3.1 and 3.2 of Huggins et al. (2020) show that
the means and variances of distributions that are close in KL can be
arbitrarily far apart. So even in settings where prediction is of interest, we
recommend users clearly specify their analytic goals and use evaluation
metrics tailored to those goals. If there is a quantity of particular interest
in the data-generating process, such as a moment or a quantile, a good choice
of evaluation metric may be an appropriate scoring rule. Namely, one might
choose a scoring rule whose associated divergence function is known to
quantify the distance between the forecast’s quantity of interest and that of
the data-generating process. For instance, when comparing the quality of mean
estimates, one option is using the squared-error scoring rule, whose
divergence function is the integrated squared difference between the
forecast’s mean estimate and the mean of the data-generating process. Another
option is the Dawid–Sebastiani score (Dawid and Sebastiani, 1999), which
prioritizes accurately estimating predictive means and variances. See Gneiting
and Raftery (2007) for a list of commonly used scoring rules and their
associated divergences.
## Acknowledgements
We are grateful to Will Stephenson for helping us find examples of
discrepancies between posterior approximation quality and $\textrm{TLL}.$ This
work was supported in part by the MIT-IBM Watson AI Lab, an NSF Career Award,
an ONR Early Career Grant, the DARPA I2O LwLL program, an ARPA-E project with
program director David Tew, and the Wisconsin Alumni Research Foundation.
## References
* Berk et al., (2014) Berk, R., Brown, L., Buja, A., George, E., Pitkin, E., Zhang, K., and Zhao, L. (2014). Misspecified mean function regression: Making good use of regression models that are wrong. Sociological Methods & Research, 43(3):422–451.
* Berk et al., (2018) Berk, R., Brown, L., Buja, A., George, E., and Zhao, L. (2018). Working with misspecified regression models. Journal of Quantitative Criminology, 34:633–655.
* Bernardo and Smith, (2000) Bernardo, J. M. and Smith, A. F. (2000). Bayesian Theory. Wiley.
* Blanca et al., (2018) Blanca, M. J., Alarcón, R., and Bono, R. (2018). Current practices in data analysis procedures in psychology: What has changed? Frontiers in Psychology, 9:2558.
* Chang et al., (2009) Chang, J., Gerrish, S., Wang, C., Boyd-Graber, J., and Blei, D. (2009). Reading tea leaves: How humans interpret topic models. Advances in Neural Information Processing Systems, 22.
* Chwialkowski et al., (2016) Chwialkowski, K., Strathmann, H., and Gretton, A. (2016). A kernel test of goodness of fit. In International conference on machine learning, pages 2606–2615. PMLR.
* Dawid and Sebastiani, (1999) Dawid, A. P. and Sebastiani, P. (1999). Coherent dispersion criteria for optimal experimental design. Annals of Statistics, 27(1):65–81.
* di Langosco et al., (2022) di Langosco, L. L., Fortuin, V., and Strathmann, H. (2022). Neural variational gradient descent. In Fourth Symposium on Advances in Approximate Bayesian Inference.
* Gabry et al., (2019) Gabry, J., Simpson, D., Vehtari, A., Betancourt, M., and Gelman, A. (2019). Visualization in Bayesian workflow. Journal of the Royal Statistical Society Series A, 182(2):389–402.
* Gan et al., (2016) Gan, Z., Li, C., Chen, C., Pu, Y., Su, Q., and Carin, L. (2016). Scalable Bayesian learning of recurrent neural networks for language modeling. arXiv pre-print arXiv:1611.08034.
* Gelman et al., (2014) Gelman, A., Hwang, J., and Vehtari, A. (2014). Understanding predictive information criteria for Bayesian models. Statistics and Computing, 24:997–1016.
* Ghosh et al., (2018) Ghosh, S., Yao, J., and Doshi-Velez, F. (2018). Structured variational learning of Bayesian neural networks with horseshoe priors. In Proceedings of the $35^{th}$ International Conference on Machine Learning.
* Gneiting and Raftery, (2007) Gneiting, T. and Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102:359–378.
* Gorham and Mackey, (2015) Gorham, J. and Mackey, L. (2015). Measuring sample quality with Stein’s method. Advances in Neural Information Processing Systems, 28.
* Hernández-Lobato and Adams, (2015) Hernández-Lobato, J. M. and Adams, R. (2015). Probabilistic backpropagation for scalable learning of Bayesian neural networks. In Proceedings of the $23^{\text{rd}}$ International Conference on Machine Learning.
* Hernández-Lobato et al., (2016) Hernández-Lobato, J. M., Li, Y., Rowland, M., Hernández-Lobato, D., and Turner, R. (2016). Black-box $\alpha$-divergence minimization. In Proceedings of the $33^{rd}$ International Conference on Machine Learning.
* Hoffman et al., (2013) Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. (2013). Stochastic variational inference. Journal of Machine Learning Research, 14:1303–1347.
* Huggins et al., (2020) Huggins, J. H., Kasprzak, M., Campbell, T., and Broderick, T. (2020). Validated variational inference via practical posterior error bounds. In Proceedings of the $23^{\text{rd}}$ International Conference on Artificial Intelligence and Statistics.
* Izmailov et al., (2020) Izmailov, P., Maddox, W. J., Kirichenko, P., Garipov, T., Vetrov, D., and Wilson, A. G. (2020). Subspace inference for Bayesian deep learning. In Uncertainty in Artificial Intelligence.
* Izmailov et al., (2021) Izmailov, P., Vikram, S., Hoffman, M. D., and Wilson, A. G. (2021). What are Bayesian neural network posteriors really like? In Proceedings of the $38^{th}$ International Conference on Machine Learning.
* Kohonen and Suomela, (2005) Kohonen, J. and Suomela, J. (2005). Lessons learned in the challenge: making predictions and scoring them. In Machine Learning Challenges Workshop, pages 95–116. Springer.
* Li et al., (2016) Li, C., Chen, C., Fan, K., and Carin, L. (2016). High-order stochastic gradient thermostats for Bayesian learning of deep models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence.
* Liu et al., (2016) Liu, Q., Lee, J., and Jordan, M. (2016). A kernelized Stein discrepancy for goodness-of-fit tests. In International conference on machine learning, pages 276–284.
* Liu and Wang, (2016) Liu, Q. and Wang, D. (2016). Stein variational gradient descent: A general purpose Bayesian inference algorithm. In Advances in Neural Information Processing Systems.
* Louizos and Welling, (2016) Louizos, C. and Welling, M. (2016). Structured and efficient variational deep learning with matrix Gaussian posteriors. In Proceedings of the $33^{rd}$ International Conference on Machine Learning.
* Maddox et al., (2019) Maddox, W. J., Izmailov, P., Garipov, T., Vetrov, D. P., and Wilson, A. G. (2019). A simple baseline for Bayesian uncertainty in deep learning. Advances in Neural Information Processing Systems, 32.
* Mishkin et al., (2018) Mishkin, A., Kunstner, F., Nielsen, D., Schmidt, M., and Khan, M. E. (2018). SLANG: Fast structured covariance approximations for Bayesian deep learning with natural gradient. In Advances in Neural Information Processing Systems.
* Ober and Aitchison, (2021) Ober, S. W. and Aitchison, L. (2021). Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes. In Proceedings of the $38^{th}$ International Conference on Machine Learning.
* Quiñonero-Candela et al., (2005) Quiñonero-Candela, J., Rasmussen, C. E., Sinz, F., Bousquet, O., and Schölkopf, B. (2005). Evaluating predictive uncertainty challenge. In Machine Learning Challenges Workshop, pages 1–27. Springer.
* Ranganath et al., (2014) Ranganath, R., Gerrish, S., and Blei, D. M. (2014). Black box variational inference. In Proceedings of the $17^{\text{th}}$ International Conference on Artificial Intelligence and Statistics.
* Robert, (1996) Robert, C. P. (1996). Intrinsic losses. Theory and Decision, 40:191–214.
* Shi et al., (2018) Shi, J., Sun, S., and Zhu, J. (2018). Kernel implicit variational inference. In International Conference on Learning Representations.
* Sun et al., (2017) Sun, S., Chen, C., and Carin, L. (2017). Learning structured weight uncertainty in Bayesian neural networks. In Proceedings of the $20^{th}$ International Conference on Artificial Intelligence and Statistics.
* Talts et al., (2018) Talts, S., Betancourt, M., Simpson, D., Vehtari, A., and Gelman, A. (2018). Validating Bayesian inference algorithms with simulation-based calibration. arXiv preprint arXiv:1804.06788.
* Vowels, (2023) Vowels, M. J. (2023). Misspecification and unreliable interpretations in psychology and social science. Psychological Methods, 28(3):507.
* Wu et al., (2019) Wu, A., Nowozin, S., Meeds, E., Turner, R. E., Hernández-Lobato, J. M., and Gaunt, A. L. (2019). Deterministic variational inference for robust Bayesian neural networks. In International Conference on Learning Representations.
* Yao et al., (2019) Yao, J., Pan, W., Ghosh, S., and Doshi-Velez, F. (2019). Quality of uncertainty quantification for Bayesian neural network inference. arXiv:1906.09686.
## Appendix A Variational Approximations
In Section 3 we formed isotropic Gaussian approximations to the exact
posterior. In our illustrative examples, the exact posterior itself is a
Gaussian distribution, $\mathcal{N}(\mu,\Sigma)$. In Sections 3.1 and 3.3 we
use variational approximations that share the same mean as the exact posterior
and are isotropic, $\mathcal{N}(\mu,\rho\mathrm{I})$, where $\mathrm{I}$ is a
two-dimensional identity matrix and $\rho>0$ is a scalar. In this family of
distributions, the optimal variational approximation is
$\mathcal{N}(\mu,\rho^{*}\mathrm{I})$, where,
$\begin{split}\rho^{*}&=\underset{\rho}{\operatorname{argmin}}\;D_{\mathrm{KL}}(\mathcal{N}(\mu,\rho\mathrm{I})\,\|\,\mathcal{N}(\mu,\Sigma))\\ &=\frac{2}{\operatorname{tr}(\Sigma^{-1})}.\end{split}$ (16)
The result follows from setting the gradient
$\nabla_{\rho}D_{\mathrm{KL}}(\mathcal{N}(\mu,\rho\mathrm{I})||\mathcal{N}(\mu,\Sigma))$
to zero and rearranging terms,
$\begin{split}\nabla_{\rho}D_{\mathrm{KL}}(\mathcal{N}(\mu,\rho\mathrm{I})\,\|\,\mathcal{N}(\mu,\Sigma))=0&\implies\nabla_{\rho}\frac{\operatorname{tr}(\rho\Sigma^{-1})}{2}-\nabla_{\rho}\ln\rho=0\\ &\implies\frac{1}{\rho}=\frac{\operatorname{tr}(\Sigma^{-1})}{2}\implies\rho=\frac{2}{\operatorname{tr}(\Sigma^{-1})}.\end{split}$ (17)
Note that $\rho^{*}$ is guaranteed to be positive since $\Sigma^{-1}$ is
positive definite and thus $\text{tr}(\Sigma^{-1})>0$. This optimal
variational approximation, $\mathcal{N}(\mu,\rho^{*}\mathrm{I})$ is used in
Panel (B) of Figure 1, Figure 2, and Figure 4. The other panels use
$\mathcal{N}(\mu,\lambda\rho^{*}\mathrm{I})$, with $\lambda\in\{1,5,10,15,30\}$
for Figure 1 and Figure 2. For Figure 4 (Left), $\lambda$ takes values in
$\{4,5,7,9\}$, and in $\{1,2,3,4,5,6,7,8,9,10,11\}$ for Figure 4 (Right).
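As a quick numerical sanity check of Equation (16), the closed-form $\rho^{*}$ can be compared against a direct grid minimization of the KL divergence. The mean and covariance below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

def kl_gaussians(mu_q, Sigma_q, mu_p, Sigma_p):
    """KL( N(mu_q, Sigma_q) || N(mu_p, Sigma_p) ) for d-dimensional Gaussians."""
    d = len(mu_q)
    Sp_inv = np.linalg.inv(Sigma_p)
    diff = mu_p - mu_q
    return 0.5 * (np.trace(Sp_inv @ Sigma_q) - d + diff @ Sp_inv @ diff
                  + np.log(np.linalg.det(Sigma_p) / np.linalg.det(Sigma_q)))

mu = np.array([0.3, -1.2])                       # illustrative posterior mean
Sigma = np.array([[2.0, 0.8], [0.8, 1.0]])       # illustrative posterior covariance

rho_star = 2.0 / np.trace(np.linalg.inv(Sigma))  # closed form, Equation (16)

# a grid search over rho recovers the same minimizer of the (convex) KL
rhos = np.linspace(0.05, 3.0, 2000)
kls = [kl_gaussians(mu, r * np.eye(2), mu, Sigma) for r in rhos]
assert abs(rhos[int(np.argmin(kls))] - rho_star) < 1e-2
```

Since the KL is convex in $\rho$ for $\rho>0$, the grid argmin lands within one grid spacing of the analytic optimum.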
## Appendix B Experimental details and additional experiments
### B.1 Confidence Intervals
An additional note on confidence intervals for TLL. Suppose we are comparing
two models $\Pi$ and $\tilde{\Pi}$. Although
$\textrm{TLL}(\mathcal{D}^{\star};\Pi)$ (respectively,
$\hat{\sigma}_{\textrm{TLL}}(\Pi)$) will generally be correlated with
$\textrm{TLL}(\mathcal{D}^{\star};\tilde{\Pi})$ (respectively,
$\hat{\sigma}_{\textrm{TLL}}(\tilde{\Pi})$), we do not expect a more careful
treatment of that correlation to change our substantive conclusions.
Confidence intervals for RMSE. To compute the RMSE confidence interval, we
first compute the mean of the squared errors (MSE, $m$) and its associated
standard error of the mean ($s$). Since we have a large number of data points
and the MSE takes the form of a mean, we assume the sampling distribution of
the MSE is well-approximated by a normal distribution. We use $[m-2s,m+2s]$ as
the $95\%$ confidence interval for the MSE. We use $[\sqrt{m-2s},\sqrt{m+2s}]$
as the $95\%$ confidence interval for the RMSE. Note that the resulting RMSE
confidence interval will generally not be symmetric.
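The recipe above can be sketched in a few lines; the residuals here are synthetic stand-ins for actual test-set errors:

```python
import numpy as np

rng = np.random.default_rng(0)
errors = rng.normal(0.0, 1.0, size=10_000)   # hypothetical test-set residuals

sq = errors ** 2
m = sq.mean()                                # MSE
s = sq.std(ddof=1) / np.sqrt(len(sq))        # standard error of the MSE

mse_ci = (m - 2 * s, m + 2 * s)              # ~95% normal CI for the MSE
rmse = np.sqrt(m)
rmse_ci = (np.sqrt(mse_ci[0]), np.sqrt(mse_ci[1]))
# concavity of sqrt makes the RMSE interval asymmetric about the RMSE
```

Because the square root is concave, the lower half of the RMSE interval is slightly wider than the upper half, which is why the interval is not symmetric.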
### B.2 Additional TLL in the wild experiments
#### SWAG with higher learning rates.
In Figure 7 we continue the experiment described in Section 3.2, now using
higher learning rates of $12$, $15$, and $20$. Despite moving further from the
exact posterior, the test log-likelihood remains higher than the values
achieved by SWAG approximations with lower learning rates (panels (B) through
(E) of Figure 3).
Figure 7: _(Left)_. Predictive distributions under the SWAG posterior with
SWAG learning rate of (G) $12$, (H) $15$, (I) $20$. The two numbers in the
title of each plot are the 2-Wasserstein distance to the exact posterior and
test log-likelihood computed on $10^{4}$ test set observations. Two standard
errors in the test log-likelihood estimates are (G) 0.01, (H) 0.009, (I) 0.08.
_(Right)_. Contours of the SWAG approximations with different learning rates.
The line $\theta_{1}=0$ is highlighted in red.
#### Mean field variational inference.
Figure 8: _(Left)_. Predictive distributions under the Bayesian posterior and
mean field variational approximations. The two numbers in the title of each
plot are the 2-Wasserstein distance to the true posterior and test log-
likelihoods computed on $10^{4}$ test set observations. Two standard errors in
the test log-likelihood estimates are (A) 0.16, (B) 0.16, (C) 0.03, (D) 0.02,
(E) 0.02, (F) 0.01. _(Right)_. The relationship between distance to posterior
and test log-predictive density. Observe the log scale of the horizontal axis
and the non-monotonic relationship between test log-predictive density and
2-Wasserstein distance to the Bayesian posterior.
Next, we reproduce the experimental setup described in Section 3.2, but
instead of using SWAG to approximate the posterior, we use mean field
variational inference and examine the relationship between TLL and posterior
approximation quality under different re-scalings of the marginal variance of
the optimal variational approximation. Figure 8 shows the posterior mean and
the $95\%$ predictive interval of the mis-specified regression line
$\theta^{\top}\phi$ from (A) the Bayesian posterior; (B) the mean field
variational approximation restricted to isotropic Gaussians; and (C)–(F)
several re-scaled variational approximations. In each plot, we overlaid the
observed data $\mathcal{D}_{500}$, the true data generating function in dashed
black, and also report the 2-Wasserstein distance between the true posterior
and each approximation and the TLL averaged over $N^{*}=10^{4}$ test data
points drawn from Equation 4. As in our previous example, the mean field
approximation (panel (B) of Figure 8) is very close to the exact posterior.
Further, as we scale up the marginal variance of the approximate posteriors,
the posterior predictive distributions cover more data, yielding higher
$\textrm{TLL}$, while simultaneously moving away from the exact posterior over
the model parameters in a 2-Wasserstein sense. Interestingly, when the
approximation is diffuse enough, TLL decreases, again highlighting its non-
monotonic relationship with posterior approximation quality. In this example
of a mis-specified model, the non-monotonic relationship between TLL and
2-Wasserstein distance means that TLL is, at best, a poor proxy of posterior
approximation quality.
### B.3 The highest TLL does not match the best estimate of a posterior
summary statistic or the lowest KL
We first reproduce the experimental setup that produced Figure 4, but now in
Figure 9, we plot TLL against the error in estimating the posterior standard
deviation. In particular, the horizontal axis shows the absolute value of the
difference between (a) the marginal standard deviation of the parameters of
interest under the approximation and (b) the marginal standard deviation under
the exact posterior. As in the right panel of Figure 4, we observe that the
highest (best) TLL does not correspond to the lowest (best) error in
estimating the posterior standard deviation.
Figure 9: The non-monotonic relationship between difference in marginal
standard deviations and TLL in a well-specified case. _(Left)_ The horizontal
axis reports the absolute difference in the standard deviation of the weight
$\theta_{1}$ between an approximation and the posterior. _(Right)_ The
horizontal axis reports the absolute difference in the standard deviation of
the bias $\theta_{2}$ between an approximation and the posterior.
To create Figure 10, we reproduce the experimental setup from Figure 8.
Relative to the right panel of Figure 8, we change only what is plotted on the
horizontal axis; for Figure 10, we plot the log of the absolute value of the
difference between marginal standard deviations. Again, we see that the
highest (best) TLL does not correspond to the lowest (best) error in
estimating the posterior standard deviation.
Figure 10: The non-monotonic relationship between difference in marginal
standard deviations and TLL in a mis-specified case. The meaning of horizontal
axis is similar to that of Figure 9.
Finally, Figure 11 reproduces analyses from the main text but uses KL
divergence instead of 2-Wasserstein distance to measure posterior
approximation quality. In particular, the left panel of Figure 11 recreates
the right panel of Figure 1; as in Figure 1, we see that the highest (best)
TLL does not correspond to the lowest (best) divergence value. Likewise, the
right panel of Figure 11 recreates the right panel of Figure 4; as in Figure
4, we see the highest TLL again does not correspond to the lowest divergence
value.
Figure 11: The smallest KL divergence does not correspond to the largest TLL,
in a mis-specified case (_left_) and a well-specified case (_right_). The left
panel reproduces the experimental results presented in the right panel of
Figure 1, but uses the reverse KL divergence to measure discrepancy with the
exact posterior instead of the 2-Wasserstein distance. The right panel
reproduces the results in the right panel of Figure 4.
# Experimental Observations of the Topology of Convolutional Neural Network Activations

(A version including the technical appendix can be found on arXiv.)

Emilie Purvine,1 Davis Brown,1 Brett Jefferson,1 Cliff Joslyn,1 Brenda Praggastis,1 Archit Rathore,2 Madelyn Shapiro,1 Bei Wang,2 Youjia Zhou2

(Purvine is the primary author; all other authors are listed in alphabetical order.)
###### Abstract
Topological data analysis (TDA) is a branch of computational mathematics,
bridging algebraic topology and data science, that provides compact, noise-
robust representations of complex structures. Deep neural networks (DNNs)
learn millions of parameters associated with a series of transformations
defined by the model architecture, resulting in high-dimensional, difficult-
to-interpret internal representations of input data. As DNNs become more
ubiquitous across multiple sectors of our society, there is increasing
recognition that mathematical methods are needed to aid analysts, researchers,
and practitioners in understanding and interpreting how these models’ internal
representations relate to the final classification. In this paper, we apply
cutting edge techniques from TDA with the goal of gaining insight into the
interpretability of convolutional neural networks used for image
classification. We use two common TDA approaches to explore several methods
for modeling hidden-layer activations as high-dimensional point clouds, and
provide experimental evidence that these point clouds capture valuable
structural information about the model’s process. First, we demonstrate that a
distance metric based on persistent homology can be used to quantify
meaningful differences between layers, and we discuss these distances in the
broader context of existing representational similarity metrics for neural
network interpretability. Second, we show that a mapper graph can provide
semantic insight into how these models organize hierarchical class knowledge
at each layer. These observations demonstrate that TDA is a useful tool to
help deep learning practitioners unlock the hidden structures of their models.
## Introduction
Convolutional neural networks (CNNs) are a class of deep learning (DL) models
that have been widely used for image classification tasks with great success,
but the reasoning behind their decisions is often difficult to determine.
Recent work has established an active field of explainable DL to tackle this
problem. There are tools that highlight areas of the images most influential
to the classification (Selvaraju et al. 2017), or reconstruct idealized input
images for each output class (Mahendran and Vedaldi 2015; Wei et al. 2015).
There are even tools that try to impose human concepts on the DL model (Kim et
al. 2018). The complexity and dependencies present within these trained models
demand methods in explainable DL that can summarize complex data without
losing critical structures, producing features of internal representations
that are stable and persistent with respect to changing inputs and noise, and
significant with respect to representing meaningful features of the input
data.
Topological data analysis (TDA) is an emerging field that bridges algebraic
topology and computational data science. One of the hallmarks of TDA is its
ability to provide compact, noise-robust representations of complex structures
within data. These are exactly the kind of representations that are needed in
the DL space where different training runs or noisy input data may result in
slightly different hidden activations but in no change in the ultimate
classification. In other well-documented cases, slight changes in input,
perhaps unseen to the human eye, result in misclassifications. We believe TDA
can help us understand these cases as well by recognizing changes in the
compact representations of the complex structures of hidden activation layers.
In this paper, we build upon others’ recent work in using TDA to understand
various aspects of machine learning (ML) and DL models. We provide
experimental results that show how a topological viewpoint of hidden-layer
activations can summarize and compare the complex structures within them and
how the conclusions align with our human understanding of the image
classification task. We begin by providing some preliminaries on CNNs and TDA
and summarize related work. We then show our experiments, which use two tools
from TDA: persistent homology and mapper. Finally, we conclude with a
discussion and our directions for future work.
## Preliminaries
### Convolutional Neural Networks
CNNs are a type of deep neural network that respects the spatial information
existing in the input data. They use shared weights to provide translation
invariant measures of correlation across an input, which makes them ideal for
image classification tasks, where objects requiring identification might be
found anywhere in an image.
Mathematically, a trained neural network used for classification is best
described as the composition of linear and non-linear _tensor maps_ called
_layers_ , where a _tensor_ is a multi-dimensional real-valued array. The
input to a neural network is a tensor, and the output of the network is a
probability vector indicating the likelihood the input belongs to each class.
The intermediate outputs from each layer of the composition are called feature
maps or _activation tensors_. Linear layers use tensor maps that respect
element-wise addition and scalar multiplication, and can be either fully
connected or convolutional.
Convolutional layers use cross correlation, also known as a sliding dot
product, to map 3D tensors to 3D tensors. If the activation tensor from a
convolutional layer has dimensions $c\times n\times m$, we say the tensor has
$c$ _channels_ and $nm$ _spatial dimensions_. Activation tensors may be sliced
into spatial and channel activations, as shown in Figure 1, and then reshaped
to obtain vector representations of their values.
Figure 1: Visualization of spatial activation (middle) and channel activation
(right) within an activation tensor.
### Persistent Homology
One of the two topological tools that we use in our work is persistent
homology (PH). At a high level, PH is a method for understanding the
topological structure of a space that data are sampled from. We typically have
access only to the sample, in the form of a point cloud, and use PH to infer
large-scale structures of the unknown underlying space. Here, we provide a
brief overview of PH and point readers to Edelsbrunner and Harer (2008);
Ghrist (2008) for more details.
The theoretical basis for persistent homology lies in the concept of
_homology_ from algebraic topology. Given a topological object, e.g., a
surface or the geometric realization of a _simplicial complex_ (a collection
of finite sets, $\Sigma$, such that if $\tau\subset\sigma$ and
$\sigma\in\Sigma$ then $\tau\in\Sigma$), its homology is an algebraic
representation of its cycles in all dimensions. In dimensions 0, 1, and 2, the
cycles have simple interpretations as connected components, loops, and
bubbles, respectively. Higher dimensional interpretations exist but are less
intuitive.
Given a single point cloud, $S\subset\operatorname{\mathbb{R}}^{k}$, we can
construct a family of associated simplicial complexes on which to compute
homology. In this paper, we use the Vietoris-Rips (VR) complex given a scale
parameter $\epsilon$, $VR(S,\epsilon)$. In short, $VR(S,\epsilon)$ is a
simplicial complex where each collection of points in $S$ whose pairwise
distances are all at most $\epsilon$ is a set in $VR(S,\epsilon)$. We show
examples of two VR complexes (just the 1-skeleton, the pairwise edges) of the
same point cloud at two scale parameters in Figure 2.
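A brute-force construction of the VR complex can make the definition concrete. This is a sketch for tiny point clouds only; production TDA libraries use far more efficient algorithms:

```python
import numpy as np
from itertools import combinations

def vietoris_rips(S, eps, max_dim=2):
    """Return VR(S, eps) up to simplices of dimension max_dim:
    every subset of points whose pairwise distances are all <= eps.
    Brute force, for illustration only."""
    n = len(S)
    dist = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
    simplices = []
    for k in range(1, max_dim + 2):          # k vertices = a (k-1)-simplex
        for sigma in combinations(range(n), k):
            if all(dist[i, j] <= eps for i, j in combinations(sigma, 2)):
                simplices.append(sigma)
    return simplices

# three points with all pairwise distances <= 1.5:
# 3 vertices + 3 edges + 1 filled triangle = 7 simplices
S = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
complex_ = vietoris_rips(S, eps=1.5)
```

Shrinking $\epsilon$ below $\sqrt{2}$ removes the longest edge and, with it, the triangle, since a simplex enters the complex only when all of its pairwise distances are within scale.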
Finally, we can describe the motivation and concept of PH. A single point
cloud technically is a simplicial complex, but it is not interesting
homologically. Whereas constructing a VR complex at a single scale parameter
does provide an interesting topological object, it does not capture the
multiscale phenomena of the data. PH is a method that considers all VR scale
parameters together to identify at which $\epsilon$ a cycle is first seen (is
“born”) and at which $\epsilon^{\prime}$ the cycle is fully triangulated
(“dies”). This set of birth and death values for a sequence of simplicial
complexes of a given point cloud provides a topological fingerprint for a
point cloud often summarized in a _persistence diagram_ (PD) as a set of
$(b,d)$ coordinates. Figure 2 also shows the point cloud’s PD from the full
sequence of $\epsilon$ thresholds.
Figure 2: VR complexes at two $\epsilon$ values and the PD of the point cloud.
In the PD, orange (resp. blue) points represent 1D (resp. 0D) persistent
features. Points on the horizontal dotted line are those that persist through
the entire filtration and have no death threshold.
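In dimension 0 the birth–death pairs have a particularly simple description: every point is a component born at scale 0, and a component dies at the edge length of the minimum spanning tree that first merges it into another. The sketch below computes only these 0-dimensional bars via Kruskal's algorithm; higher-dimensional PH requires full boundary-matrix reduction, as implemented in libraries such as Ripser or GUDHI:

```python
import numpy as np
from itertools import combinations

def h0_persistence(points):
    """0-dimensional PH of a Vietoris-Rips filtration: components merge
    (and the younger one 'dies') at the MST edge lengths."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edges = sorted((float(np.linalg.norm(points[i] - points[j])), i, j)
                   for i, j in combinations(range(n), 2))
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # two components merge at scale d
            parent[ri] = rj
            bars.append((0.0, d))
    bars.append((0.0, np.inf))       # one component persists forever
    return bars

# two well-separated clusters: exactly one long finite bar (the gap)
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
                 rng.normal(5.0, 0.1, (10, 2))])
dgm = h0_persistence(pts)
```

The one finite bar that dies far later than the rest is the topological signature of the two-cluster structure.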
PDs form a metric space under a variety of distance metrics. In this paper, we
will use _sliced Wasserstein (SW) distance_ introduced by Carrière, Cuturi,
and Oudot (2017). Given two PDs, the SW distance is computed by integrating
the Wasserstein distances for all projections of the PD onto lines through the
origin at different angles.
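A minimal approximation of this distance, restricted to finite diagram points and following the scheme of Carrière, Cuturi, and Oudot of augmenting each diagram with the other's orthogonal projections onto the diagonal, might look like the following (the angle discretization is an illustrative choice):

```python
import numpy as np

def sliced_wasserstein(dgm1, dgm2, n_dirs=64):
    """Approximate SW distance between two persistence diagrams, each an
    array of (birth, death) pairs with finite deaths. Both diagrams are
    padded with the diagonal projections of the other so they have equal
    size; 1D Wasserstein distances of projections onto lines through the
    origin are then averaged over angles."""
    D1 = np.asarray(dgm1, dtype=float)
    D2 = np.asarray(dgm2, dtype=float)
    proj = lambda D: np.column_stack([(D[:, 0] + D[:, 1]) / 2] * 2)
    A = np.vstack([D1, proj(D2)])
    B = np.vstack([D2, proj(D1)])
    total = 0.0
    for theta in np.linspace(-np.pi / 2, np.pi / 2, n_dirs, endpoint=False):
        u = np.array([np.cos(theta), np.sin(theta)])
        # matched sorted projections give the 1D Wasserstein-1 distance
        total += np.sum(np.abs(np.sort(A @ u) - np.sort(B @ u)))
    return total / n_dirs

d1 = [(0.0, 1.0), (0.2, 0.5)]
d2 = [(0.0, 1.1)]
```

The diagonal augmentation is what lets diagrams of different cardinalities be compared: unmatched points pay the cost of being transported to the diagonal.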
### Mapper
The mapper algorithm was first introduced by Singh, Memoli, and Carlsson
(2007). It is rooted in the idea of “partial clustering of the data guided by
a set of functions defined on the data” (2007). On a high level, the mapper
graph captures the global structure of the data.
Let $S\subset\operatorname{\mathbb{R}}^{k}$ be a high-dimensional point cloud.
A _cover_ of $S$ is a set of open sets in $\operatorname{\mathbb{R}}^{k}$,
$\operatorname{\mathcal{U}}=\\{U_{i}\\}$ such that $S\subset\cup_{i}U_{i}$. In
the classic mapper construction, obtaining a cover of $S$ is guided by a set
of scalar functions defined on $S$, referred to as _filter functions_. For
simplicity, we describe the mapper construction using a single filter function
$f:S\to\operatorname{\mathbb{R}}$. Given a cover
$\operatorname{\mathcal{V}}=\\{V_{\ell}\\}$ of
$f(S)\subset\operatorname{\mathbb{R}}$ where
$f(S)\subseteq\cup_{\ell}V_{\ell}$, we can obtain a cover
$\operatorname{\mathcal{U}}$ of $S$ by considering as cover elements the
clusters (for a choice of clustering algorithm) induced by $f^{-1}(V_{\ell})$
for each $V_{\ell}$.
Then, the 1D _nerve_ of any cover $\operatorname{\mathcal{U}}$ is a graph and
is denoted as $\operatorname{\mathcal{N}}_{1}(\operatorname{\mathcal{U}})$.
Each node $i$ in $\operatorname{\mathcal{N}}_{1}(\operatorname{\mathcal{U}})$
represents a cover element $U_{i}$, and there is an edge between nodes $i$ and
$j$ if $U_{i}\cap U_{j}$ is non-empty. If $\operatorname{\mathcal{U}}$ is
constructed as above, from a clustering of preimages of a filter function $f$,
then its 1D nerve, denoted as
$\operatorname{\mathcal{M}}=\operatorname{\mathcal{M}}(S,f):=\operatorname{\mathcal{N}}_{1}(\operatorname{\mathcal{U}})$,
is the _mapper graph_ of $(S,f)$.
Consider the point cloud in Figure 3 as an example containing two nested
circles. It is equipped with a height function
$f:S\to\operatorname{\mathbb{R}}$. A cover
$\operatorname{\mathcal{V}}=\\{V_{1},\cdots,V_{5}\\}$ of $f(S)$ is formed by
five intervals (see Figure 3 middle). For each $\ell$ ($1\leq\ell\leq 5$),
$f^{-1}(V_{\ell})$ induces a number of clusters that are subsets of $S$. Such
clusters form the elements of a cover $\operatorname{\mathcal{U}}$ of $S$. As
shown in Figure 3 (left), the cover elements of $\operatorname{\mathcal{U}}$
are contained within the 12 rectangles on the plane. The mapper graph of $S$
is shown in Figure 3 (right). For instance, the preimage $f^{-1}(V_{1})$
induces a single cover element $U_{1}$ of $S$, which becomes node 1 in the
mapper graph; $f^{-1}(V_{2})$ induces 3 cover elements $U_{2}$, $U_{3}$, and
$U_{4}$, which become nodes 2, 3, and 4. Since $U_{1}\cap
U_{2}\neq\emptyset$, an edge exists between node 1 and node 2. The two
circular structures in Figure 3 (left) are captured by the mapper graph in
Figure 3 (right).
Figure 3: A mapper graph of a point cloud containing two nested circles.
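The construction just illustrated can be sketched end to end. Here a circle filtered by height is covered by overlapping intervals, each preimage is clustered by single linkage at a fixed scale, and overlapping clusters are joined by edges; the interval count, overlap fraction, and clustering scale are illustrative choices, not settings from any particular library:

```python
import numpy as np
from itertools import combinations

def mapper_graph(S, f, n_intervals=5, overlap=0.25, eps=0.4):
    """Toy mapper: cover the filter range with overlapping intervals,
    single-linkage-cluster each preimage at scale eps, and connect
    clusters that share points (the 1D nerve)."""
    lo, hi = f.min(), f.max()
    length = (hi - lo) / n_intervals
    nodes = []  # each node is a set of point indices (one cover element)
    for k in range(n_intervals):
        a = lo + k * length - overlap * length
        b = lo + (k + 1) * length + overlap * length
        idx = np.where((f >= a) & (f <= b))[0]
        parent = {int(i): int(i) for i in idx}  # union-find clustering
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for i, j in combinations(parent, 2):
            if np.linalg.norm(S[i] - S[j]) <= eps:
                parent[find(i)] = find(j)
        clusters = {}
        for i in parent:
            clusters.setdefault(find(i), set()).add(i)
        nodes.extend(clusters.values())
    edges = [(i, j) for i, j in combinations(range(len(nodes)), 2)
             if nodes[i] & nodes[j]]
    return nodes, edges

# a circle filtered by height: the mapper graph should itself be a cycle
t = np.linspace(0, 2 * np.pi, 120, endpoint=False)
S = np.column_stack([np.cos(t), np.sin(t)])
nodes, edges = mapper_graph(S, f=S[:, 1])
```

For these settings the top and bottom intervals each contribute one node and the three middle intervals contribute two (a left and a right arc), so the output is a cycle graph, echoing the loop in the input point cloud.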
### Related Work
The value of TDA to organize, understand, and interpret various aspects of ML
and DL models has been recognized in several current research directions. Much
of this research has focused on model parameters, structure, and weights. Guss
and Salakhutdinov (2018) examine model architecture selection by defining the
“topological capacity” of networks, or the ability for the network to capture
the true topological complexity of the data. They explore the learnability of
model architectures in the face of increasing topological complexity of data.
Gabrielsson and Carlsson (2019) build the mapper graph of a point cloud of
learned weights from convolutional layers within a simple CNN and find that
the weights of different CNN model architectures trained on the same data set
have topological similarities. “Neural persistence”, developed by Rieck et al.
(2019), is a topological measure of complexity of a fully connected deep
neural network that depends on learned weights and network connectivity. They
find that networks using best practices such as dropout and batch
normalization have statistically higher neural persistence, and they define a
stopping criterion to speed up the training of such networks.
Other studies, like that of Wheeler, Bouza, and Bubenik (2021) use TDA to
study activation tensors of simple multi-layer perceptron networks to discover
how the topological complexity, as measured by a property of persistence
landscapes, changes through the layers. Gebhart, Schrater, and Hylton (2019);
Lacombe, Ike, and Umeda (2021) investigate the topology of neural networks via
“activation graphs,” which model the natural graphical structure of the
network. Finally, most closely related to our work is that of Rathore et al.
(2021), which describes TopoAct, a visual platform to explore the
organizational principle behind neuron activations. TopoAct displays the
mapper graph of activation vectors for a single layer at a time in a CNN to
show how the model organizes its knowledge via the branching structures. The
authors consider a point cloud formed by randomly sampling a single spatial
activation in a given layer for each image in a corpus. We extend this work by
using a larger and more data-driven sample of spatial activations to build our
mapper graphs, quantifying the intuition of “pure” and “mixed” mapper nodes,
considering the effect of noisy input on the resulting graph, and showing how
our results generalize to multiple common model architectures.
## Point Cloud Summaries of Activations
Following the approach of Rathore et al. (2021), we model each convolutional
layer of a CNN as an $Np\times c$ point cloud by sampling $p$ spatial
activation vectors from the $c\times n\times m$ activation tensors produced by
$N$ images in a dataset. This gives us a collection of point clouds that can
be used to study the evolution of the activation space (i.e., the space of
spatial activations), as the complexity of features learned by each layer
increases as we move deeper into the model (Zhou et al. 2015; Olah et al.
2020). We introduce several data-driven sampling methods with the goal of
improving upon the quality of the sampled point cloud representation.
#### Random and full activations.
In our mapper experiments, for a fixed layer, we construct a high-dimensional
point cloud by _randomly sampling_ a single ($p=1$) spatial activation from
each input image, as in Rathore et al. (2021). We additionally experiment with
_full activation sampling_ ($p=nm$) by including all spatial activations of a
given layer for each image in the point cloud construction.
#### Top $l^{2}$-norm activations.
In our PH experiments, for a fixed layer we construct a point cloud with _top
$l^{2}$-norm sampling_ ($p=1$) by selecting the spatial activation with the
strongest $l^{2}$-norm from each image.
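This sampling step reduces each image's $c\times n\times m$ activation tensor to a single length-$c$ vector. A sketch with random tensors standing in for real activations (the shapes and names are illustrative):

```python
import numpy as np

def top_l2_spatial_activation(act):
    """act: one (c, n, m) activation tensor. Return the spatial
    activation vector (length c) with the largest l2-norm."""
    c = act.shape[0]
    spatial = act.reshape(c, -1).T           # one row per spatial position
    return spatial[np.argmax(np.linalg.norm(spatial, axis=1))]

# build the N x c point cloud for a hypothetical layer
rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 64, 8, 8))      # N=100 images, c=64, n=m=8
cloud = np.stack([top_l2_spatial_activation(a) for a in acts])
assert cloud.shape == (100, 64)
```

The resulting $N\times c$ point cloud is what the PH experiments operate on, one cloud per convolutional layer.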
#### Foreground and background activations.
For a fixed convolutional layer, each spatial position in the activation
tensor can be traced back to its _effective receptive field_ , which is the
region of the input image that the network has “seen” via contributions from
previous layers. Naturally, each spatial activation corresponds to the subset
of the foreground and background pixels in its effective receptive field. To
investigate how foreground and background information of an input image
manifests in the activation space, we first use cv2.grabCut from the OpenCV
library (Bradski 2000) to perform image segmentation and identify the
foreground and background pixels in the images. We then assign a weight to
each spatial activation according to the number of foreground or background
pixels in its effective receptive field, as illustrated in Figure 4. The
spatial activations with the greatest weight are selected to represent each
image in the point cloud construction, referred to as _foreground_ or
_background sampling_. In our mapper experiments, we study the “top $p$”
foreground and background activations for $p=1$ and $p=5$.
### Reproducibility Details
The following two sections outline our experiments using PH and mapper graphs
to study the standard benchmark dataset CIFAR-10 (Krizhevsky and Hinton 2009)
on a ResNet-18 architecture (He et al. 2016). We perform standard
preprocessing to normalize the images by the mean and variance from the full
training set. Code for the models and additional details regarding the
dataset, as well as the parameters and computing infrastructure specific to
each set of experiments, are provided in the arXiv technical appendix.
Figure 4: Spatial positions whose effective receptive field contains primarily
foreground pixels are highly weighted in foreground sampling.
## Experiments with PH
Using the top $l^{2}$-norm sampling method, we construct point cloud summaries
of activations from the CIFAR-10 dataset on a ResNet-18 model to study the PH
of the activation space. The SW distance between PDs of these point cloud
summaries — which we will refer to from now on as the _SW distance between
layers_ — proves to be an interesting topological metric for capturing
similarity between layers; it exhibits some of the fundamental qualities of
strong representation similarity metrics for neural networks but fails to be
sensitive to others (Ding, Denain, and Steinhardt 2021).
### Relationships Between Layers
In Figure 5, we observe a grid-like pattern in the SW distances between layers
of ResNet-18 similar to the results found in Kornblith et al. (2019), which
the authors attribute to the residual architecture. This observation supports
our belief that meaningful qualities of the model and its architecture can be
uncovered by studying the topology of the activation space with PH.
Figure 5: SW distances between convolutional layers of ResNet-18; results
averaged over 10 random batches of 1000 CIFAR-10 test set images
($\text{CV}<0.17$).
### Representation Similarity Metrics & Intuitive Tests
Metrics such as canonical correlation analysis (CCA) (Morcos, Raghu, and
Bengio 2018; Raghu et al. 2017), centered kernel alignment (CKA) (Kornblith et
al. 2019), and orthogonal Procrustes distance (Ding, Denain, and Steinhardt
2021) provide dissimilarity measures that can be used to compare layers of
neural networks. Recent work has demonstrated the value of topological
approaches to representation similarity such as Representation Topology
Divergence (Barannikov et al. 2022). These methods operate on an $N\times cnm$
matrix representation of a convolutional layer, where the $c\times n\times m$
activation tensors produced by each of the $N$ inputs from the dataset are
normalized and unfolded into vectors in $\operatorname{\mathbb{R}}^{cnm}$.
Here we note this as a key difference from our $N\times c$ point cloud
representation obtained through top $l^{2}$-norm sampling but leave a more
thorough comparison to future work.
We apply the intuitive specificity and sensitivity tests outlined by Ding,
Denain, and Steinhardt (2021) to probe the utility of the SW distance between
layers as a representation similarity metric for neural networks. In
comparison to the intuitive test results shown for CCA, CKA, and orthogonal
Procrustes distance from Ding, Denain, and Steinhardt (2021), this metric
exhibits some non-standard behavior, for which we provide some speculative
explanations but further work is needed to fully understand such a metric.
#### Specificity.
To measure the impact of model initialization seed on the SW distance between
layers, we trained 100 ResNet-18 models with different initialization seeds on
CIFAR-10, and constructed top $l^{2}$-norm point cloud representations of the
layers of each model from $N=1000$ test set images. Figure 6 shows SW
distances for two of the models “A” and “B”, comparing pairs of layers in
Model A (left) as well as pairs of layers between Model A and Model B (right).
We find that variation in model seed has almost no impact on the SW distances,
as shown by the near-identical heatmaps and highlighted for layer 9 (bottom
row). The internal and cross-model SW distances relative to layer 9 of Model
A are highly correlated, with $\rho\approx 0.907$, computed by fixing Model A
and averaging the correlation over the 99 remaining randomly initialized
models in the role of Model B. Averaging internal and cross-model correlation relative to
each layer of Model A, we find $\rho\approx 0.910$. We conclude that SW
distance between layers is highly specific and robust to variation in
initialization seed.
Figure 6: Intuitive specificity test of SW distance between convolutional
layers of two ResNet-18 models initialized with different random seeds, for
1000 CIFAR-10 test set images.
#### Sensitivity.
A representation similarity metric should be robust to noise without losing
sensitivity to significant alterations. We apply the intuitive sensitivity
test of Ding, Denain, and Steinhardt (2021) by taking the SW distance between
each layer and its low-rank approximations as we delete principal components
from the $N\times c$ point cloud. The SW distance to the corresponding layer
in another model is averaged over the remaining 99 randomly initialized models
to compute a baseline SW distance for each layer. This baseline defines a
threshold of _detectable_ SW distance, above which distance cannot be solely
attributed to different initialization. In Figure 7, we see the sensitivity of
this metric is heavily dependent on layer depth.
Figure 7: Intuitive sensitivity test of SW distance for the first (0), middle
(8), and last (16) convolutional layers of ResNet-18, for 1000 CIFAR-10 test
set images.
## Experiments with Mapper Graphs
Figure 8: Mapper graphs from random (top) and full (bottom) activations from
ResNet-18 using the CIFAR-10 dataset.
In this section, we explore how the topology of the activation space changes
across layers by constructing mapper graphs from spatial activations from
$N=50\text{k}$ CIFAR-10 training images on a ResNet-18 model. The mapper graph
filter function is the $l^{2}$-norm of each spatial activation. We employ and
extend _MapperInteractive_ (Zhou et al. 2021), an open-source web-based
toolbox for analyzing and visualizing high-dimensional point cloud data via
its mapper graph. Because of the visual nature of mapper graphs, our
experiments will largely be evaluated by exploring and comparing the
_qualitative_ properties of the visualizations rather than quantitative
comparisons of structures. The exception will be our purity measures,
introduced in a later subsection.
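The mapper construction used here (an $l^2$-norm lens, a one-dimensional interval cover, and DBSCAN clustering, run through MapperInteractive) can be sketched from scratch; the interval-expansion rule and parameter values below are illustrative, not the tool's exact implementation:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def mapper_graph(X, n_intervals=40, overlap=0.25, eps=0.8, min_samples=5):
    """Minimal mapper sketch: l2-norm lens, 1-D interval cover, DBSCAN
    clustering, and nerve edges between clusters that share points."""
    lens = np.linalg.norm(X, axis=1)
    lo, hi = lens.min(), lens.max()
    width = (hi - lo) / n_intervals
    nodes = []
    for i in range(n_intervals):
        a = lo + i * width - overlap * width        # expanded interval ends
        b = lo + (i + 1) * width + overlap * width
        idx = np.where((lens >= a) & (lens <= b))[0]
        if len(idx) < min_samples:
            continue
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X[idx])
        for lab in set(labels) - {-1}:              # -1 is DBSCAN noise
            nodes.append(set(idx[labels == lab]))
    edges = {(i, j) for i in range(len(nodes))
             for j in range(i + 1, len(nodes)) if nodes[i] & nodes[j]}
    return nodes, edges

rng = np.random.default_rng(0)
X_demo = rng.standard_normal((500, 2))              # hypothetical point cloud
nodes, edges = mapper_graph(X_demo, n_intervals=10)
```

The actual parameter choices per layer (40 intervals, 25% overlap, min_samples 5, and per-layer eps) are listed in the reproducibility checklist.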
### Random and Full Activations
In Figure 8, we compare the mapper graphs generated from a point cloud of
random activations ($50\text{k}\times c$) against those generated from the
full activations ($50\text{k}\cdot nm\times c$) across different convolutional
layers, where $c$ is the number of dimensions of each activation, and $nm$ is
the total number of spatial activation vectors per image. The glyph for each
node of the mapper graph is a pie chart showing the composition of class
labels in that node. It can be seen that at layer 16, the mapper graphs of the
random and full activations clearly capture the separation among class labels;
there is a central region in the graph where nodes with mixed labels (with
lower $l^{2}$-norm) separate out into branches with single labels (with higher
$l^{2}$-norm). As we move toward earlier layers, the ability of the mapper
graphs to show class separation gradually deteriorates. In addition, both
random and full activations show similar bifurcation patterns, indicating
robustness with respect to the sampled activations.
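The two point-cloud constructions can be sketched as tensor reshapes; the function names are ours:

```python
import numpy as np

def full_activations(A):
    """Flatten an (N, c, n, m) activation tensor into an (N*n*m, c)
    point cloud of spatial activation vectors."""
    N, c, n, m = A.shape
    return A.transpose(0, 2, 3, 1).reshape(N * n * m, c)

def random_activations(A, rng):
    """Sample one spatial activation per image, giving an (N, c) cloud."""
    N, c, n, m = A.shape
    i = rng.integers(0, n, size=N)
    j = rng.integers(0, m, size=N)
    return A[np.arange(N), :, i, j]

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3, 4, 4))   # hypothetical small activation tensor
full = full_activations(A)
rand = random_activations(A, rng)
```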
### Foreground and Background Activations
Next, we study whether branching structures emerge at earlier layers if we use
top foreground or background activations. Figure 9 shows the evolution of
mapper graphs using the foreground and background activations across layers.
We observe that the mapper graph of foreground activations at layer 15 already
shows notable class bifurcations. Such early separations are less obvious for
random and full activations. The mapper graphs of background activations also
show clear class separations at layer 15 and 16, indicating that background
pixels likely play an important role in class separation as well. Mapper
graphs for the top 5 foreground and background activations, along with
similar observations, are provided in the technical appendix.
Figure 9: Mapper graphs generated from the foreground (top) and background
(bottom) activations with the largest weights.
### Activations with Gaussian Noise
Figure 10: Examples of CIFAR-10 images with perturbations. Column 1 contains
the original, and columns 2-4 contain images perturbed with different standard
deviations.
To explore the stability of mapper graphs to noise in the input data, we
injected pixel-wise Gaussian noise to all 50k images with different standard
deviations ($\sigma$). Examples of how the images change as the standard
deviation increases are shown in Figure 10, and the corresponding mapper
graphs at layer 16 are shown in Figure 11. It can be seen that the mapper
graphs are stable for small perturbations ($\sigma=0.1$). As $\sigma$
increases, mapper graphs illustrate that the model’s ability to differentiate
different classes decreases. This observation aligns with the intuition that
increasing the noise level will decrease prediction accuracy.
Figure 11: Perturbed mapper graphs generated from the full activations (top)
and the foreground activations (bottom) at the last convolutional layer.
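The perturbation step can be sketched as follows; clipping back to $[0,1]$ after adding noise is our assumption, since the paper does not state how out-of-range pixel values are handled:

```python
import numpy as np

def perturb(images, sigma, rng):
    """Add pixel-wise Gaussian noise with standard deviation sigma to
    images with values in [0, 1], clipping back to the valid range."""
    noisy = images + rng.normal(0.0, sigma, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
imgs = rng.random((4, 32, 32, 3))    # hypothetical batch of images
noisy = perturb(imgs, 0.1, rng)
```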
### Mapper Graph Purity Measures
For an image classification task, each point (i.e., a spatial activation)
$x\in S$ is assigned a class label (inherited from the class label of its
corresponding input image). We introduce three quantitative measures to
quantify how well a mapper graph of the activation space separates the points
from different classes.
#### Node-wise purity.
Given a mapper graph $\operatorname{\mathcal{M}}$, the node-wise purity of a
node $i$ is defined as $\alpha_{i}=\frac{1}{c_{i}},$ where $c_{i}$ is the
number of class labels in node $i$: the more classes in node $i$, the less
pure node $i$ is. Figure 12 (bottom) shows the node-wise purity of mapper
graphs for foreground (top 1 and 5), random, and full activations at a variety
of layers (aligning with the layers seen in Figures 8 and 9). We observe that
node-wise purity is larger in deeper layers, indicating that the underlying
model gets better at separating the classes the deeper we go. However, the
type of sampling seems not to influence the purity as much. Top 5 foreground
sampling tends to have slightly higher purity, whereas random sampling has
lower purity.
#### Point-wise purity.
For a point $x\in S$, the point-wise purity is defined as
$\beta_{x}=\frac{\sum_{i=1}^{n_{x}}\alpha_{i}}{n_{x}},$
where $n_{x}$ is the number of nodes containing point $x$. It is the average
node-wise purity of all nodes containing $x$.
#### Class-wise purity.
For a class $k$ with points $x_{1},\dots,x_{N_{k}}$, the class-wise purity is
defined as
$\gamma_{k}=\frac{\sum_{i=1}^{N_{k}}\beta_{x_{i}}}{N_{k}},$
where $N_{k}$ is the number of points in class $k$. It is the average value of
point-wise purity for all points in class $k$. Figure 12 (top) shows the
class-wise purity of the deer class for foreground (top 1 and 5), random, and
full activations at the same set of layers as node-wise purity. As was the
case for the node-wise purity, we observe a general trend of increased class-
wise purity of mapper graphs in deeper layers of the neural network.
Figure 12: Top: class-wise purity of the deer class for random, full
activations, and foreground (top 1 and 5) at a variety of layers; bottom:
node-wise purity for random, full activations, and foreground (top 1 and 5) at
a variety of layers, and the legend is the same as that of the top plot.
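The three purity measures can be computed directly from their definitions; `nodes` and `labels` below are hypothetical inputs (a list of node membership sets and a point-to-class map):

```python
def purity_measures(nodes, labels):
    """Node-, point-, and class-wise purity for a mapper graph.
    `nodes` is a list of sets of point indices; `labels[x]` is the class
    of point x. Points absent from every node are ignored."""
    # node-wise: reciprocal of the number of distinct classes in the node
    alpha = [1.0 / len({labels[x] for x in node}) for node in nodes]
    covered = set().union(*nodes)
    # point-wise: average purity of all nodes containing the point
    beta = {}
    for x in covered:
        a = [alpha[i] for i, node in enumerate(nodes) if x in node]
        beta[x] = sum(a) / len(a)
    # class-wise: average point-wise purity over the class's points
    gamma = {}
    for k in {labels[x] for x in covered}:
        pts = [beta[x] for x in covered if labels[x] == k]
        gamma[k] = sum(pts) / len(pts)
    return alpha, beta, gamma

nodes = [{0, 1}, {1, 2}, {2, 3}]
labels = {0: "a", 1: "a", 2: "b", 3: "b"}
alpha, beta, gamma = purity_measures(nodes, labels)
```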
### Generalization of Mapper Experiments to Additional Models
Figure 13: Mapper graphs of random (top) and foreground (bottom) activations
for models trained on the ImageNet dataset.
In order to show that our mapper graph observations are not dependent on the
ResNet-18 architecture or CIFAR-10 data set we also perform these experiments
using a different model-data pair. To compare with the prior experiments which
use the lower resolution CIFAR-10 data set, the experiments in this section
use a subset of 10 classes from the ImageNet dataset (Deng et al. 2009), as
shown in the legend of Figure 13. There are 1300 images per class, resulting
in a set of $N=13\text{k}$ images. The images have varying resolutions with an
average resolution of $469\times 378$. The data is pre-processed by first
resizing each image to 256 pixels and center cropping to a patch of size
$224\times 224$, followed by a normalization with mean and variance of the
original ImageNet training set images. For foreground extraction, we apply a
different strategy than previously used since cv2.grabCut does not work as
well with the ImageNet dataset due to the large amount of high frequency
details in the image backgrounds. Instead we use a pre-trained DeepLabV3
semantic segmentation model (Chen et al. 2017) to obtain the foreground mask
which is then applied to the images to get the foreground pixels.
The models that we use for the generalization experiments include ResNet-18,
Inception_v1 (Szegedy et al. 2015), Inception_v3 (Szegedy et al. 2016) and
AlexNet (Krizhevsky, Sutskever, and Hinton 2012). The number of parameters of
each model is 11.6M, 6.6M, 27.2M and 61.1M respectively.
Figure 13 shows the resulting mapper graphs generated from the last layer of
each model. Through these experiments, we demonstrate that the structures and
insights we observe on ResNet-18 applied to CIFAR-10 are applicable to a wide
range of other image recognition models as well.
## Discussion and Future Work
Our experiments using PH and mapper to study activation tensors of CNNs add to
the growing body of literature to suggest that TDA provides useful summaries
of DL models and hidden representations. The ability of mapper graphs to
summarize point clouds from activation tensors and identify branching
structures was previously shown in (Rathore et al. 2021). In our paper, we go
beyond the random activations of that prior work to build mapper graphs of
foreground, background, and full activation point clouds. These mapper graphs
exhibit branching structures at earlier layers and show robustness with
respect to image noise. Our new purity measures further quantify the
observation that mapper graphs’ branching structures align with class
separations, and improve as we go deeper into the layers. Moreover, we also
show that the mapper graph branching structures are present not only in
ResNet-18 applied to CIFAR-10 but also in ImageNet studied using ResNet-18,
Inception V1 and V3, and AlexNet.
Although the mapper graphs we study come from a single trained model, our PH
experiments show that the topological structures of the point clouds from
which the mapper graphs are built are independent of the training run. Work
has yet to be done to characterize those topological structures for CNNs
beyond mapper graphs, but the fact that the distances are training-invariant
indicates that such structures are indeed present and thus likely relevant to
model interpretation. Although SW distance does pass the specificity test, we
observed that, like the widely-cited CKA, it does not pass the sensitivity
test of Ding, Denain, and Steinhardt (2021). We expect this is in part due to
the previously noted differences between the standard representation and our
sampled point cloud; however, our sampling approach is needed to mitigate the
computational costs of PH, which scale with dimensionality of the underlying
space.
In future work, we plan to further characterize the types of topological
structures present in hidden layers of CNNs, explore theoretical
justifications for the success of our experiments, and complete a more
thorough analysis of the sensitivity of the SW distance via principal
component removal. Finally, in order to aid DL practitioners in unlocking the
hidden structures of their models, we plan to implement our methods into user-
friendly tools.
## Appendix A Technical Appendix
### Additional Figures for Mapper Experiments
Here we include some additional figures that further strengthen our
observations on foreground and background activation point clouds and their
robustness to image noise.
Figure 14: Mapper graphs generated from the foreground (top) and background
(bottom) activations with top 5 largest weights.
In the main body of the paper we showed mapper graphs generated from the top 1
foreground and background activations. In Figure 14 we show additional mapper
graphs from the top 5 foreground and background activations. The branching
properties and conclusions are similar to those for the top 1 foreground and
background mapper graphs.
For our image perturbation experiment we showed mapper graphs generated from
the full and foreground activations in the last layer for different levels of
injected noise in the input images. In Figure 15 we show additional mapper
graphs for the top 5 full activations, again for only the last convolutional
layer. We observe that even at noise level $\sigma=0.3$ there is clear
branching structure in the last layer, whereas the mapper graphs in Figure 11
at the same noise level show less distinct branching. This may indicate that
top 5 activations are more robust to input noise than full and top 1
foreground activations.
Figure 15: Perturbed mapper graphs generated from the top 5 foreground
activations at the last convolutional layer. Figure 16: Class-wise purity of
the airplane class for random, full, and foreground (top 1 and 5) at a variety
of layers. Figure 17: Class-wise purity of the automobile class for random,
full, and foreground (top 1 and 5) at a variety of layers. Figure 18: Class-
wise purity of the bird class for random, full, and foreground (top 1 and 5)
at a variety of layers. Figure 19: Class-wise purity of the cat class for
random, full, and foreground (top 1 and 5) at a variety of layers. Figure 20:
Class-wise purity of the deer class for random, full, and foreground (top 1
and 5) at a variety of layers. Figure 21: Class-wise purity of the dog class
for random, full, and foreground (top 1 and 5) at a variety of layers. Figure
22: Class-wise purity of the frog class for random, full, and foreground (top
1 and 5) at a variety of layers. Figure 23: Class-wise purity of the horse
class for random, full, and foreground (top 1 and 5) at a variety of layers.
Figure 24: Class-wise purity of the ship class for random, full, and
foreground (top 1 and 5) at a variety of layers. Figure 25: Class-wise purity
of the truck class for random, full, and foreground (top 1 and 5) at a variety
of layers.
Finally, we provide figures for the class-wise purity measures across all
classes. In the main body of the paper we showed class-wise purity for the
deer class across a variety of layers and for random, full, and foreground
(top 1 and top 5) spatial activation sampling. Figures 16-25 show the same
class-wise purity plot for all 10 classes across all sampling methods with the
same scale. The value range for the class-wise purity is $[0,1]$. However, to
avoid excessive white space in the visualizations, we set the plot scale to
$[0,0.6]$, slightly above the maximum purity observed.
### Reproducibility Checklist
Many of the items in the full reproducibility checklist are addressed in the
main body of the paper. Here we provide more details on model parameters,
preprocessing and randomization code, and mapper graph parameters for the
purposes of reproducibility of our results.
##### 6.1:
Code for preprocessing the data is contained in the following supplementary
code files:
* •
ResNet-18 models implemented and trained from scratch
* –
PH experiments:
* *
cifar_resnet.py
* *
ffcv_cifar10_train.py
* –
Mapper experiments:
* *
cifar_train.py
* •
Injecting Gaussian noise to input data:
* –
cifar_extract_full_activations_noises.py
* •
All types of point cloud creation for both PH and mapper experiments:
* –
Pulling top $l^{2}$-norm activation vectors from each layer for the
Experiments with PH section:
* *
topL2_pointclouds.py
* –
Pulling activation vectors for the Experiments with Mapper section
* *
Functions shared with all following scripts:
cifar_extract.py
* *
Full activations:
cifar_extract_full_activations.py
* *
Random activations:
cifar_extract_sampled_activations.py
* *
Foreground (top 1, top 5) activations:
cifar_extract_full_activations_foreground.py
* *
Background (top 1, top 5) activations:
cifar_extract_full_activations_background.py
* *
Full, foreground activations with Gaussian noises:
cifar_extract_full_activations_noises.py
cifar_extract_full_activations_foreground_noises.py
* *
Activations of ImageNet from additional models: model-forward-pass.ipynb
##### 6.2:
Code for conducting and analyzing experiments uses the following custom
scripts, openly available packages, and open source tools:
* •
To compute PD given a point cloud we use the ripser Python package (function
ripser.ripser) using default parameter choices (Tralie, Saul, and Bar-On 2018)
* •
To compute the SW distance we use the persim Python package (function
persim.sliced_wasserstein) using default parameter choices
* •
Code for Experiments with PH:
* –
SW distances between layers, for a single model or across differently
initialized models for specificity test:
* *
SW_distances.py
* –
Removing principal components of point clouds for sensitivity test:
* *
sensitivity_testing.py
* •
Computing mapper graphs
* –
Mapper Interactive command line API:
* *
cover.py
* *
kmapper.py
* *
mapper_CLI.py
* *
nerve.py
* *
visuals.py
* –
The “elbow” approach to get the eps parameter value:
* *
get_knn.py
* –
Full activations:
* *
get_mapper_full_batches.py
* –
Random activations:
* *
get_mapper_random_batches.py
* –
Foreground activations:
* *
top 1: get_mapper_full_fg_1.py
* *
top 5: get_mapper_full_fg_5.py
* –
Background activations:
* *
top 1: get_mapper_full_bg_1.py
* *
top 5: get_mapper_full_bg_5.py
* –
Full activations with Gaussian noises:
* *
get_mapper_full_batches_with_noises.py
* –
Foreground activations with Gaussian noises:
* *
top 1: get_mapper_full_fg_1_noises.py
* *
top 5: get_mapper_full_fg_5_noises.py
* –
Activations from additional models of ImageNet:
* *
get_mapper_additional_models.py
* •
To compute the mapper graphs using MapperInteractive there are four important
parameters to be tuned in the interactive interface: the number of intervals
and the overlap rate to create the cover; the DBSCAN clustering parameter eps,
which sets the maximum distance between two points for one to be considered
in the neighborhood of the other; and min_samples, the minimum number of
points in a neighborhood for a point to be considered a core point.
To create the mapper graphs we use MapperInteractive with the following
parameter choices:
* –
num_intervals=40
* –
overlap_rate=25%
* –
min_samples=5
* –
For the mapper graphs generated from the random, full, foreground and
background activations of the CIFAR-10 images at layers 16, 15, 13, 12, 8, and
4, the choices of eps are listed in Table 1.
* –
For the mapper graphs generated from the perturbed images, the eps values are
the same as those generated from the original images for comparison purpose.
* –
For the mapper graphs generated from the random and foreground activations of
the ImageNet dataset at the last layers of models ResNet-18, Inception V1,
Inception V3 and AlexNet, the choices of eps are listed in Table 2.
* •
Computing purity measures from a mapper graph and a labeling of points
* –
Node-wise purity: get_nodewise_purity.py
* –
Class-wise purity: get_classwise_purity.py
Layer | 16 | 15 | 13 | 12 | 8 | 4
---|---|---|---|---|---|---
random | 8.71 | 4.22 | 5.04 | 7.69 | 6.80 | 4.50
full | 8.52 | 2.50 | 3.50 | 5.41 | 4.50 | 3.50
fg (top 1) | 10.65 | 8.50 | 9.29 | 11.87 | 8.51 | 4.99
fg (top 5) | 11.00 | 7.50 | 9.52 | 10.05 | 7.00 | 4.00
bg (top 1) | 12.09 | 8.19 | 9.20 | 12.41 | 8.20 | 4.85
bg (top 5) | 10.07 | 8.50 | 9.55 | 11.02 | 7.57 | 4.52
Table 1: The eps values for the mapper graphs generated from the random, full, top 1 and top 5 foreground and background activations of the CIFAR-10 images at layers 16, 15, 13, 12, 8 and 4.
Model | random | foreground
---|---|---
ResNet-18 | 51.5 | 55.0
Inception V1 | 25.0 | 38.5
Inception V3 | 45.0 | 56.0
AlexNet | 37.0 | 35.0
Table 2: The eps values for the mapper graphs generated from the random and
foreground activations of the ImageNet at the last layer of the models
ResNet-18, Inception V1, Inception V3 and AlexNet.
##### 6.3:
All scripts and code outlined in this reproducibility checklist will be
publicly released with a permissive license upon publication of this paper.
##### 6.4:
The only code implementing new methods is for the purity measures which was
already noted in reproducibility checklist item 6.2.
##### 6.5:
Where randomness is employed in applying Gaussian noise we use
numpy.random.randint() to generate random seeds. When selecting random batches
of $N=1000$ test images in the PH section we use a PyTorch DataLoader with
shuffle=True and a random seed of 0. All seeds are applied using
torch.manual_seed().
##### 6.6:
For our experiments we used the following computing infrastructures:
* •
PH experiments:
* –
GPU models (training): NVIDIA DGX-A100 with 8 A100 GPUs
* –
GPU models (experiments): Dual NVIDIA P100 12GB PCI-e based GPU
* –
CPU models: 16 Dual Intel Broadwell E5-2620 v4 @ 2.10GHz CPUs
* –
Amount of memory: 64 GB 2133Mhz DDR4
* –
Operating system: Centos 7.8 based operating system (ROCKS 7)
* –
Relevant software libraries and frameworks:
* *
FFCV (Leclerc et al. 2022)
* *
PyTorch (Paszke et al. 2019)
* *
Torchvision (Marcel and Rodriguez 2010)
* *
Ripser (Tralie, Saul, and Bar-On 2018)
* *
Persim (Saul and Tralie 2019)
* •
Mapper experiments using ResNet-18 on CIFAR-10:
* –
GPU models: NVIDIA 4x TITAN V with CUDA 11.2
* –
CPU models: 32 Intel Xeon Silver 4108 CPU @ 1.80GHz cores (HT)
* –
Amount of memory: 132GB of RAM
* –
Operating system: OpenSUSE Leap 15.3 (x86_64)
* –
Relevant software libraries and frameworks:
* *
Python (v3.6.15)
* *
MapperInteractive
* *
PyTorch (v1.9.0)
* *
sklearn (v0.24.2)
* •
Mapper experiments using ResNet-18, Inception V1, Inception V3 and AlexNet on
ImageNet:
* –
GPU models: NVIDIA GTX 1060
* –
CPU models: Intel Xeon CPU E5-2630 v3 @ 2.40GHz
* –
Amount of memory: 32GB of RAM
* –
Operating system: OpenSUSE Leap 15.1
* –
Relevant software libraries and frameworks:
* *
Python (v3.7.5)
* *
MapperInteractive
* *
Pytorch (v1.4.0)
* *
sklearn (v0.23.2)
##### 6.10:
Our comparison against other methods (e.g., SW distance vs. CCA and CKA) is
not a head-to-head performance comparison and so comparison metrics are not
applicable. Instead we discuss the similarities and differences between our
observations on SW distance and trends observed by Ding, Denain, and
Steinhardt (2021) on CCA and CKA.
##### 6.11:
All architectures and hyper-parameters for the models are standard choices.
For all models in the PH section we use the standard ResNet-18 architecture
outlined by He et al. (2016) in their analysis of CIFAR-10. For the mapper
experiments on the CIFAR-10 dataset, we trained a ResNet-18 model that we
implement from scratch. For the mapper experiments on the ImageNet dataset,
all the models (ResNet-18, Inception V1, Inception V3 and AlexNet) used are
the pre-trained models from the PyTorch built-in model library without any
modifications.
Parameters for creating mapper graphs were explained under item 6.2 above.
To create our mapper graphs the final parameter choices were a result of the
following process. The num_intervals, overlap_rate and min_samples were
determined by manual tuning, and the eps values were determined by sorting the
distances of the $k$-th nearest neighbor for all points and finding the elbow
point, where $k=\texttt{min\\_samples}$.
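The elbow procedure can be sketched as follows; the paper only says the elbow point of the sorted $k$-th nearest-neighbor distances is found, so the maximum-distance-to-chord rule below is one common heuristic for locating it, not necessarily the authors' exact method:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def elbow_eps(X, k=5):
    """Estimate DBSCAN eps: sort each point's distance to its k-th nearest
    neighbor, then take the curve point farthest from the straight line
    (chord) joining the curve's endpoints."""
    # n_neighbors = k + 1 because the nearest neighbor of a point is itself
    d, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    y = np.sort(d[:, -1])
    x = np.arange(len(y), dtype=float)
    p1 = np.array([x[0], y[0]])
    p2 = np.array([x[-1], y[-1]])
    seg = p2 - p1
    # perpendicular distance from each (x, y) to the line through p1, p2
    dist = np.abs(seg[0] * (p1[1] - y) - (p1[0] - x) * seg[1]) / np.linalg.norm(seg)
    return y[np.argmax(dist)]

rng = np.random.default_rng(0)
X_demo = rng.standard_normal((200, 2))   # hypothetical point cloud
eps_est = elbow_eps(X_demo, k=5)         # k = min_samples, as in the paper
```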
## Acknowledgements
MS, BW, and YZ are key contributors to this work. BW was partially funded by
NSF DMS 2134223 and IIS 2205418.
## References
* Barannikov et al. (2022) Barannikov, S.; Trofimov, I.; Balabin, N.; and Burnaev, E. 2022. Representation Topology Divergence: A Method for Comparing Neural Network Representations. In Chaudhuri, K.; Jegelka, S.; Song, L.; Szepesvari, C.; Niu, G.; and Sabato, S., eds., _Proceedings of the 39th International Conference on Machine Learning_ , volume 162 of _Proceedings of Machine Learning Research_ , 1607–1626. PMLR.
* Bradski (2000) Bradski, G. 2000. The OpenCV Library. _Dr. Dobb’s Journal of Software Tools_.
* Carrière, Cuturi, and Oudot (2017) Carrière, M.; Cuturi, M.; and Oudot, S. 2017. Sliced Wasserstein Kernel for Persistence Diagrams. In Precup, D.; and Teh, Y. W., eds., _Proceedings of the 34th International Conference on Machine Learning_ , volume 70 of _Proceedings of Machine Learning Research_ , 664–673. PMLR.
* Chen et al. (2017) Chen, L.-C.; Papandreou, G.; Schroff, F.; and Adam, H. 2017. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv preprint arXiv:1706.05587.
* Deng et al. (2009) Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database. In _2009 IEEE Conference on Computer Vision and Pattern Recognition_ , 248–255.
* Ding, Denain, and Steinhardt (2021) Ding, F.; Denain, J.-S.; and Steinhardt, J. 2021. Grounding Representation Similarity Through Statistical Testing. In Ranzato, M.; Beygelzimer, A.; Dauphin, Y.; Liang, P.; and Vaughan, J. W., eds., _Advances in Neural Information Processing Systems_ , volume 34, 1556–1568.
* Edelsbrunner and Harer (2008) Edelsbrunner, H.; and Harer, J. 2008. Persistent homology—a survey. In _Surveys on Discrete and Computational Geometry_ , volume 453, 257–282. American Mathematical Society.
* Gabrielsson and Carlsson (2019) Gabrielsson, R. B.; and Carlsson, G. 2019. Exposition and Interpretation of the Topology of Neural Networks. In _2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA)_ , 1069–1076.
* Gebhart, Schrater, and Hylton (2019) Gebhart, T.; Schrater, P.; and Hylton, A. 2019. Characterizing the Shape of Activation Space in Deep Neural Networks. _2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)_ , 1537–1542.
* Ghrist (2008) Ghrist, R. 2008. Barcodes: the persistent topology of data. _Bulletin of the American Mathematical Society (New Series)_ , 45(1): 61–75.
* Guss and Salakhutdinov (2018) Guss, W. H.; and Salakhutdinov, R. 2018. On Characterizing the Capacity of Neural Networks using Algebraic Topology. arXiv preprint arXiv:1802.04443.
* He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Kim et al. (2018) Kim, B.; Wattenberg, M.; Gilmer, J.; Cai, C.; Wexler, J.; Viegas, F.; and sayres, R. 2018. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). In Dy, J.; and Krause, A., eds., _Proceedings of the 35th International Conference on Machine Learning_ , volume 80 of _Proceedings of Machine Learning Research_ , 2668–2677. PMLR.
* Kornblith et al. (2019) Kornblith, S.; Norouzi, M.; Lee, H.; and Hinton, G. 2019. Similarity of Neural Network Representations Revisited. In Chaudhuri, K.; and Salakhutdinov, R., eds., _Proceedings of the 36th International Conference on Machine Learning_ , volume 97 of _Proceedings of Machine Learning Research_ , 3519–3529. PMLR.
* Krizhevsky and Hinton (2009) Krizhevsky, A.; and Hinton, G. 2009. Learning multiple layers of features from tiny images. Technical Report TR-2009, University of Toronto, Toronto, Ontario.
* Krizhevsky, Sutskever, and Hinton (2012) Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Pereira, F.; Burges, C.; Bottou, L.; and Weinberger, K., eds., _Advances in Neural Information Processing Systems_ , volume 25.
* Lacombe, Ike, and Umeda (2021) Lacombe, T.; Ike, Y.; and Umeda, Y. 2021. Topological Uncertainty: Monitoring trained neural networks through persistence of activation graphs. In _Proceedings of the 30th International Joint Conference on Artificial Intelligence_ , 2666–2672.
* Leclerc et al. (2022) Leclerc, G.; Ilyas, A.; Engstrom, L.; Park, S. M.; Salman, H.; and Madry, A. 2022\. FFCV: Accelerating Training by Removing Data Bottlenecks. https://github.com/libffcv/ffcv/. Commit f253865.
* Mahendran and Vedaldi (2015) Mahendran, A.; and Vedaldi, A. 2015. Understanding Deep Image Representations by Inverting Them. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Marcel and Rodriguez (2010) Marcel, S.; and Rodriguez, Y. 2010. Torchvision the Machine-Vision Package of Torch. In _Proceedings of the 18th ACM International Conference on Multimedia_ , MM ’10, 1485–1488. New York, NY, USA: Association for Computing Machinery.
* Morcos, Raghu, and Bengio (2018) Morcos, A.; Raghu, M.; and Bengio, S. 2018. Insights on representational similarity in neural networks with canonical correlation. In Bengio, S.; Wallach, H.; Larochelle, H.; Grauman, K.; Cesa-Bianchi, N.; and Garnett, R., eds., _Advances in Neural Information Processing Systems_ , volume 31.
* Olah et al. (2020) Olah, C.; Cammarata, N.; Schubert, L.; Goh, G.; Petrov, M.; and Carter, S. 2020\. Zoom In: An Introduction to Circuits. _Distill_. Https://distill.pub/2020/circuits/zoom-in.
* Paszke et al. (2019) Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; Desmaison, A.; Kopf, A.; Yang, E.; DeVito, Z.; Raison, M.; Tejani, A.; Chilamkurthy, S.; Steiner, B.; Fang, L.; Bai, J.; and Chintala, S. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Wallach, H.; Larochelle, H.; Beygelzimer, A.; dAlché Buc, F.; Fox, E.; and Garnett, R., eds., _Advances in Neural Information Processing Systems 32_ , 8024–8035. Curran Associates, Inc.
* Raghu et al. (2017) Raghu, M.; Gilmer, J.; Yosinski, J.; and Sohl-Dickstein, J. 2017. SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. In Guyon, I.; Luxburg, U. V.; Bengio, S.; Wallach, H.; Fergus, R.; Vishwanathan, S.; and Garnett, R., eds., _Advances in Neural Information Processing Systems_ , volume 30.
* Rathore et al. (2021) Rathore, A.; Chalapathi, N.; Palande, S.; and Wang, B. 2021. TopoAct: Visually Exploring the Shape of Activations in Deep Learning. _Computer Graphics Forum_ , 40(1): 382–397.
* Rieck et al. (2019) Rieck, B.; Togninalli, M.; Bock, C.; Moor, M.; Horn, M.; Gumbsch, T.; and Borgwardt, K. 2019. Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology. In _International Conference on Learning Representations (ICLR)_.
* Saul and Tralie (2019) Saul, N.; and Tralie, C. 2019. Scikit-TDA: Topological Data Analysis for Python.
* Selvaraju et al. (2017) Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra, D. 2017. Grad-CAM: Visual Explanations From Deep Networks via Gradient-Based Localization. In _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_.
* Singh, Memoli, and Carlsson (2007) Singh, G.; Memoli, F.; and Carlsson, G. 2007. Topological Methods for the Analysis of High Dimensional Data Sets and 3D Object Recognition. In Botsch, M.; Pajarola, R.; Chen, B.; and Zwicker, M., eds., _Eurographics Symposium on Point-Based Graphics_.
* Szegedy et al. (2015) Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; and Rabinovich, A. 2015. Going deeper with convolutions. In _2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 1–9.
* Szegedy et al. (2016) Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the Inception Architecture for Computer Vision. In _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2818–2826.
* Tralie, Saul, and Bar-On (2018) Tralie, C.; Saul, N.; and Bar-On, R. 2018. Ripser.py: A Lean Persistent Homology Library for Python. _The Journal of Open Source Software_ , 3(29): 925.
* Wei et al. (2015) Wei, D.; Zhou, B.; Torrabla, A.; and Freeman, W. 2015. Understanding intra-class knowledge inside CNN. arXiv preprint arXiv:1507.02379.
* Wheeler, Bouza, and Bubenik (2021) Wheeler, M.; Bouza, J.; and Bubenik, P. 2021. Activation Landscapes as a Topological Summary of Neural Network Performance. In _2021 IEEE International Conference on Big Data (Big Data)_ , 3865–3870. IEEE.
* Zhou et al. (2015) Zhou, B.; Khosla, A.; Lapedriza, À.; Oliva, A.; and Torralba, A. 2015. Object Detectors Emerge in Deep Scene CNNs. In Bengio, Y.; and LeCun, Y., eds., _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_.
* Zhou et al. (2021) Zhou, Y.; Chalapathi, N.; Rathore, A.; Zhao, Y.; and Wang, B. 2021. Mapper Interactive: A Scalable, Extendable, and Interactive Toolbox for the Visual Exploration of High-Dimensional Data. In _2021 IEEE 14th Pacific Visualization Symposium (PacificVis)_ , 101–110.
# Nonreciprocal Phonon Propagation in a Metallic Chiral Magnet
T. Nomura, Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan; Tokyo Denki University, Adachi, Tokyo 120-8551, Japan
X.-X. Zhang, RIKEN Center for Emergent Matter Science (CEMS), Wako 351-0198, Japan
R. Takagi, Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan; PRESTO, Japan Science and Technology Agency (JST), Kawaguchi 332-0012, Japan
K. Karube, RIKEN Center for Emergent Matter Science (CEMS), Wako 351-0198, Japan
A. Kikkawa, RIKEN Center for Emergent Matter Science (CEMS), Wako 351-0198, Japan
Y. Taguchi, RIKEN Center for Emergent Matter Science (CEMS), Wako 351-0198, Japan
Y. Tokura, RIKEN Center for Emergent Matter Science (CEMS), Wako 351-0198, Japan; Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan; Tokyo College, University of Tokyo, Tokyo 113-8656, Japan
S. Zherlitsyn, Hochfeld-Magnetlabor Dresden (HLD-EMFL), Helmholtz-Zentrum Dresden-Rossendorf, 01328 Dresden, Germany
Y. Kohama, Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan
S. Seki, Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan; PRESTO, Japan Science and Technology Agency (JST), Kawaguchi 332-0012, Japan
###### Abstract
The phonon magnetochiral effect (MChE) is the nonreciprocal acoustic and
thermal transports of phonons caused by the simultaneous breaking of the
mirror and time-reversal symmetries. So far, the phonon MChE has been observed
only in the ferrimagnetic insulator Cu2OSeO3, where the nonreciprocal response
disappears above the Curie temperature of 58 K. Here, we study the
nonreciprocal acoustic properties of the room-temperature ferromagnet Co9Zn9Mn2
to unveil the phonon MChE close to room temperature. Surprisingly, the
nonreciprocity in this metallic compound is enhanced at higher temperatures
and observed up to 250 K. This clear contrast between insulating Cu2OSeO3 and
metallic Co9Zn9Mn2 suggests that metallic magnets have a mechanism to enhance
the nonreciprocity at higher temperatures. From the ultrasound and microwave-
spectroscopy experiments, we conclude that the magnitude of the phonon MChE of
Co9Zn9Mn2 mostly depends on the Gilbert damping, which increases at low
temperatures and hinders the magnon-phonon hybridization. Our results suggest
that the phonon nonreciprocity could be further enhanced by engineering the
magnon band of materials.
When a chiral material is placed in a magnetic field, nonreciprocal properties
arise from the simultaneous breaking of the mirror and time-reversal
symmetries. Such nonreciprocal properties are due to magnetochiral effect
(MChE) [1, 2] observed for various (quasi)particle propagations, including
photons [3, 4, 5, 6, 7, 8], electrons [9, 10, 11, 12, 13, 14], magnons [17,
15, 16, 18, 19], and phonons [20]. For the case of phonons, the sound velocity
in a chiral material becomes different for parallel and antiparallel
propagations with respect to the magnetic field $\bf H$ [20]. Because of the
symmetry origin, the MChE is expected for any chiral material, although the
microscopic mechanism and magnitude depend on the system. Since
nonreciprocal properties are closely related to functionalities, the MChE is
an attractive mechanism for novel devices (e.g., single-phase diodes and
circulators).
So far, the phonon MChE has been reported only for the ferrimagnetic insulator
Cu2OSeO3 [20]. The phonon MChE is explained by a magnon-phonon hybridization
[20, 21]. Because of the Dzyaloshinskii-Moriya (DM) interaction, the magnon
dispersion in Cu2OSeO3 is asymmetric for wave vector $\bf k$ parallel and
antiparallel to $\bf H$ [15, 22]. When the magnon-phonon hybridization is
allowed by symmetry, the phonon dispersion is asymmetrically deformed by the
band repulsion, leading to the nonreciprocal sound velocity for $\pm\bf k$
[Fig. 1(a)]. In this case, the MChE is observed only below the Curie
temperature $T_{\mathrm{C}}\sim 58$ K [23, 24]. For realizing the MChE at room
temperature, experimental exploration on higher-$T_{\mathrm{C}}$ magnets is
important. Moreover, the investigation of metallic magnets allows for
exploring the phonon nonreciprocity in the presence of conduction electrons.
Figure 1: Phonon MChE caused by the magnon-phonon hybridization. (a)
Dispersion relations of the magnon and acoustic-phonon bands near the $\Gamma$
point under magnetic fields ${\bf H}||{\bf k}$. (b) Effect of the magnon-
dispersion broadening. Note that only one of the circularly polarized phonon
modes hybridizes with magnons, since only the right-handed polarization exists for
ferromagnetic-type spin waves with respect to the magnetization. The
ultrasound frequency in this study was always much smaller than the magnon gap
$\Delta_{0}/2\pi$.
Our target material Co9Zn9Mn2 has a $\beta$-Mn-type structure, which belongs
to the chiral space group $P4_{1}32$ ($P4_{3}32$) [25, 26]. The magnetic
properties of the series of (Co0.5Zn0.5)20-xMnx alloys have been
systematically studied, and various exciting properties related to their
chiral magnetism have been reported [16, 27, 28, 29, 30, 31, 32, 33, 34, 35,
36, 37, 38, 39]. Among this series, Co9Zn9Mn2 has a high Curie temperature
$T_{\mathrm{C}}\sim 400$ K. Co9Zn9Mn2 has a helimagnetic ground state, while
other Mn-rich compounds become spin glasses at low temperature due to
geometrical frustration [32]. The spin-glass state could complicate the
analysis of the magnon dispersion and its temperature dependence. In addition,
the magnon MChE and the magnon dispersion of Co9Zn9Mn2 have already been
studied [16]. Therefore, Co9Zn9Mn2 is a promising system to explore the room-
temperature phonon MChE.
Single crystals of Co9Zn9Mn2 were grown by the Bridgman method as described in
our previous papers [28, 29, 32]. For the ultrasound measurements, we used a
right-handed single crystal ($P4_{1}32$) with the sample length of 1.9 mm
along the [110] axis. We employed the ultrasound pulse-echo technique with a
phase-sensitive detection for the sound-velocity measurement [40]. In this
study, we investigated three acoustic modes
$c_{\mathrm{L}}=(c_{11}+c_{12}+2c_{44})/2$ (${\bf k}||{\bf u}||[110]$, $v=4.0$
km/s), $c_{\mathrm{T}}=(c_{11}-c_{12})/2$ (${\bf k}||[110]$, ${\bf
u}||[1\overline{1}0]$, $v=2.1$ km/s), and $c_{44}$ (${\bf k}||[110]$, ${\bf
u}||[001]$, $v=2.4$ km/s), where $\bf k$ and $\bf u$ are the propagation and
displacement vectors, respectively. We used the ultrasound frequency up to 530
MHz by high-harmonic generation of the LiNbO3 resonance transducers. At higher
frequency, the acoustic attenuation became too strong to analyze the
nonreciprocity. In this study, the experiments were performed in the Faraday
geometry with ${\bf H}||{\bf k}||[110]$. In addition, we performed the
microwave absorption spectroscopy on a crystal from the same batch. The
microwave absorption ($\Delta S_{11}$) caused by magnetic resonance was
monitored by using a coplanar waveguide and a vector-network analyzer. For
details, see Refs. [16, 41].
Figure 2(a) shows the magnetic-field dependence of the relative change of the
sound velocity $\Delta v/v_{0}$ of the $c_{44}$ acoustic mode at 4 and 250 K.
The magnetic transition from a conical to a collinear state is observed at
$H_{\mathrm{c}}=0.21$ T (4 K) and 0.14 T (250 K). The sound velocities for
$-\bf k$ and $+\bf k$ at $H_{\mathrm{c}}$ are slightly different, indicating
the nonreciprocal sound propagation. The sign of the difference becomes
opposite for $-H_{\mathrm{c}}$ and $+H_{\mathrm{c}}$, which is the
characteristic feature of the phonon MChE. We note that the other acoustic
modes ($c_{\mathrm{L}}$ and $c_{\mathrm{T}}$) show weaker $\Delta v/v_{0}$ and
nonreciprocal responses, indicating the weaker magnetoelastic coupling. These
results are presented in the Supplemental Material (SM) [42].
Figure 2(b) plots the results for $-\bf k$ as a function of the absolute value
of the magnetic field. The magnitude of the phonon magnetochiral effect
$g_{\mathrm{MCh}}$ defined as [20]
$g_{\mathrm{MCh}}(|{\bf H}|)=\frac{\Delta v(+{\bf H})}{v_{0}}-\frac{\Delta
v(-{\bf H})}{v_{0}}=\frac{v(+{\bf H})-v(-{\bf H})}{v_{0}}$ (1)
is plotted in the lower panel. The nonreciprocal response takes its maximum at
$H_{\mathrm{c}}$, and then rapidly decreases when the field is further
increased. The nonreciprocal magnitude at the transition field
$g_{\mathrm{MCh}}(H_{\mathrm{c}})$ as a function of the ultrasound frequency
is plotted in Fig. 2(c). The nonreciprocity is nonlinearly enhanced at higher
frequencies. $g_{\mathrm{MCh}}(H_{\mathrm{c}})$ at 250 K is larger than that
at 4 K. The contour plot of $g_{\mathrm{MCh}}$ at 440 MHz is mapped on the
$T$-$H$ phase diagram [Fig. 3(a)].
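As a minimal numerical sketch, Eq. (1) can be evaluated point by point from the two field-sweep traces; the function name and the velocity-shift values below are illustrative, not measured data from this work.

```python
def g_mch(dv_plus, dv_minus):
    """Magnetochiral coefficient from Eq. (1):
    g_MCh(|H|) = dv(+H)/v0 - dv(-H)/v0, evaluated point by point
    from the relative sound-velocity shifts for +H and -H sweeps."""
    return [p - m for p, m in zip(dv_plus, dv_minus)]

# Illustrative (not measured) relative velocity shifts at a few |H| values:
dv_plus = [0.0, -1.0e-4, -2.0e-5]   # dv(+H)/v0
dv_minus = [0.0, -1.5e-4, -2.0e-5]  # dv(-H)/v0
print(g_mch(dv_plus, dv_minus))     # nonzero only near the transition field
```

The sign of the middle entry flips if the field (or the propagation direction) is reversed, which is the defining signature of the MChE described above.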
Figure 2: Phonon MChE in Co9Zn9Mn2. (a) Relative change of the sound velocity
as a function of magnetic field, $\Delta v/v_{0}(H)$, of the $c_{44}$ acoustic
mode at the ultrasound frequency of 440 MHz. The results at 250 K (4 K) are
shown by the orange and green (light orange and light green) curves.
Schematics of the experimental configuration and the nonreciprocity are shown.
(b) $\Delta v/v_{0}(H)$ and the magnitude of the magnetochiral effect
$g_{\mathrm{MCh}}$ as a function of the absolute value of the magnetic field.
The data for $-k$, 440 MHz at 250 K from Fig. 2(a) are used. (c) Ultrasound
frequency dependence of the magnitude of the magnetochiral effect at the
transition field $g_{\mathrm{MCh}}(H_{\mathrm{c}})$. The results for the
$c_{44}$ mode at 4 and 250 K are fitted by Eq. (2) (dashed lines). Figure 3:
(a) Contour plot of $g_{\mathrm{MCh}}$ mapped on the $T$-$H$ phase diagram.
The conical-collinear phase boundary (green curve) is determined by the
anomalies in $\Delta v/v_{0}$. Here, the results at 440 MHz are used. (b)
Temperature dependence of the magnitude of the phonon MChE. The coefficient
$\Gamma$ is obtained by fitting the frequency dependence of
$g_{\mathrm{MCh}}(H_{\mathrm{c}})$ [Fig. 2(c)]. The results of Cu2OSeO3 [20]
are shown for comparison.
The experimental results show that the magnetochiral effect of Co9Zn9Mn2 is
enhanced at $H_{\mathrm{c}}$ and at higher ultrasound frequency. Similar
features are observed in Cu2OSeO3 and explained by the magnon-phonon
hybridization mechanism [20]. The magnon dispersion of Co9Zn9Mn2 is also
asymmetric for $\pm\bf k$ because of the DM interaction [16]. When such magnon
band hybridizes with a phonon band, the anticrossing asymmetrically deforms
the phonon band [Fig. 1(a)], leading to the nonreciprocal acoustic properties.
At the conical-collinear transition, the magnon gap becomes small, and the
hybridization occurs close to the $\Gamma$ point (Fig. 1). However, the magnon
frequency is more than one order of magnitude higher than the ultrasound
frequency in this study, and only the effect due to the band repulsion is
observed in our experiments. Here, we emphasize that the electron-phonon
hybridization cannot explain the characteristic enhancement of the phonon MChE
at $H_{\mathrm{c}}$ and the field dependence in the collinear phase. The
energy scale of the magnetic field (0.3 T) is too small to change the
electronic band structure of metals. We, thus, conclude that the rapid
decrease of the phonon MChE above $H_{\mathrm{c}}$ is related to the magnon
band gap which also rapidly opens above $H_{\mathrm{c}}$.
The magnon-phonon hybridization is mediated by the DM interactions or magnetic
anisotropy modulated by shear strains [20]. Since the DM interactions in
chiral magnets are strongly modulated by shear strains [43, 44], we treat the
former as the leading term. The magnitude of the MChE is expressed as [20],
$g_{\mathrm{MCh}}=\frac{4\gamma^{2}\braket{S^{z}}^{2}S^{2}D^{3}k^{3}}{c\Delta_{0}^{2}}=\Gamma
f^{3}.$ (2)
Here, $\gamma,S,D,c,\Delta_{0}$ are the magnetoelastic coupling constant, the
total spin moment, the DM-interaction coefficient, the elastic constant, and
the magnon gap, respectively. Since the wave vector $k$ is proportional to the
ultrasound frequency $f$, the equation is rewritten as the empirical form with
the phonon-MChE coefficient $\Gamma$.
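The empirical form $g_{\mathrm{MCh}}=\Gamma f^{3}$ can be fitted by ordinary least squares; since the model is linear in the single parameter $\Gamma$, a closed-form solution exists. The sketch below uses synthetic data with an illustrative $\Gamma$, not the fitted values of Fig. 2(c).

```python
def fit_gamma(freqs, g_vals):
    """Least-squares estimate of Gamma in g_MCh = Gamma * f^3 (Eq. (2)).
    For a one-parameter linear model y = Gamma * x with x = f^3, the
    closed-form solution is Gamma = sum(x*y) / sum(x^2)."""
    num = sum(f ** 3 * g for f, g in zip(freqs, g_vals))
    den = sum(f ** 6 for f in freqs)
    return num / den

# Synthetic data generated with Gamma = 2e-3 (illustrative units):
freqs = [0.1, 0.2, 0.3, 0.44]               # GHz
g_vals = [2e-3 * f ** 3 for f in freqs]
print(fit_gamma(freqs, g_vals))             # recovers ~2e-3
```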
Based on Eq. (2), the frequency dependence of
$g_{\mathrm{MCh}}(H_{\mathrm{c}})$ [Fig. 2(c)] is fitted and the temperature
dependence of $\Gamma$ is obtained as Fig. 3(b). The MChE is observed up to
170 K and 250 K for the $c_{\mathrm{T}}$ and $c_{44}$ modes, respectively.
$|\Gamma|$ is always larger for the $c_{44}$ than that for the
$c_{\mathrm{T}}$ mode, which again indicates the stronger magnetoelastic
coupling of the $c_{44}$ mode. $|\Gamma|$ of the $c_{44}$ mode increases
towards higher temperatures and suddenly becomes undetectable at 260 K. This
is because of the sudden decrease of $\Delta v/v_{0}(H_{\mathrm{c}})$ and
increase of the acoustic attenuation around this temperature [42]. This
characteristic temperature might be due to the decreased anisotropy at high
temperatures [37]. Another extrinsic origin might be the glass transition of
the bond connecting the transducers and the sample. Above the glass transition
temperature, the transverse sound propagation is strongly attenuated. In this
case, the thermal transport experiments might be an alternative technique to
detect the phonon MChE [45]. However, the simultaneous decrease of the
acoustic anomaly $\Delta v/v_{0}(H_{\mathrm{c}})$ and the nonreciprocity
$g_{\mathrm{MCh}}(H_{\mathrm{c}})$ (Fig. S1 in SM [42]) indicates that the
observations come from the intrinsic effect. We note that the bonding
conditions usually affect only the amplitude of the transmitted sound and do
not affect the obtained $\Delta v/v_{0}$.
For a quantitative comparison, the results of Cu2OSeO3
[$c_{\mathrm{T}}=(c_{11}-c_{12})/2$ mode] [20] are plotted by the black lines.
The maximum value of $\Gamma$ in Co9Zn9Mn2 is larger than that in Cu2OSeO3 by
a factor of $\sim$1.5. $|\Gamma|$ tends to increase with temperature in
Co9Zn9Mn2, whereas it decreases in Cu2OSeO3 [Fig. 3(b)]. In the following,
we discuss the reason for the different temperature dependences by comparing
the insulating (Cu2OSeO3) and metallic (Co9Zn9Mn2) cases.
The negative-$T$ coefficient of $|\Gamma|$ for Cu2OSeO3 is due to the
temperature dependence of the ordered moment $\braket{S^{z}}$ [20]. Near the
magnetic phase transition, $\braket{S^{z}}$ scales as
$\sqrt{|T-T_{\mathrm{C}}|}$, leading to the $T$-linear dependence of
$g_{\mathrm{MCh}}$ [Eq. (2)]. Similarly, in the case of Co9Zn9Mn2, the
temperature dependence of $\braket{S^{z}}$ also approximately scales as
$\sqrt{|T-T_{\mathrm{C}}|}$ [33]; however, this can only produce a negative-$T$
coefficient of $|\Gamma|$. Therefore, the positive-$T$ coefficient of
$|\Gamma|$ (high-temperature enhancement of the nonreciprocity) originates
from another property, which is strongly dependent on temperature and
overcomes the contribution from $\braket{S^{z}}$.
Another $T$-dependent parameter in Eq. (2) is the magnon gap $\Delta_{0}$.
Figures 4(a) and 4(b) show the microwave-absorption spectra as a function of
the magnetic field at 10 K and 200 K, respectively. With this technique, the
obtained spectra reflect the magnon gap at $k=0$. The overall features at 10 K
and 200 K are similar. In the conical phase, two branches of spin waves soften
towards $H_{\mathrm{c}}$. In the collinear phase, a single absorption peak
shifts linearly as a function of the magnetic field (dashed line). A slight
temperature dependence is observed in the magnon gap and width of the
absorption peak. Figure 4(c) plots the temperature dependence of the magnon
gap at the transition field $\Delta_{0}(H_{\mathrm{c}})$ determined from the
crossing point of the dashed line and the dotted line (the conical-collinear
boundary). For comparison, the results of Cu2OSeO3 are also shown [46]. The
gap of Co9Zn9Mn2 increases towards lower temperatures by a factor of $\sim
1.2$, which would lead to a decrease of $g_{\mathrm{MCh}}(H_{\mathrm{c}})$
by a factor of 1.4 [Eq. (2)]. This parameter cannot account for the observed
factor of $\Gamma(\mathrm{200\ K})/\Gamma(\mathrm{10\ K})\sim 10$ [Fig. 3(b)].
Figure 4: Magnetic-field dependence of the microwave-absorption spectra ($\Delta S_{11}$) at (a) 10 K and (b) 200 K. The ferromagnetic resonance lines are shown by the dashed lines as a guide. The conical-collinear phase boundary is shown by the dotted line. (c) Temperature dependence of the magnon-gap frequency at the transition field. (d) Temperature dependence of the Gilbert damping coefficient $\alpha$ (left) and the FWHM at the transition field (right). The results for Cu2OSeO3 are taken from Ref. [46] and shown for comparison. Square (circle) symbols represent the result for ${\bf H}||[111]$ (${\bf H}||[100]$). $\alpha$ of Co9Zn9Mn2 is taken from Ref. [37].

Table 1: Summary of the magnetic and elastic parameters. The unit cells of Co9Zn9Mn2 and Cu2OSeO3 are taken per Co ion and per 4Cu cluster, respectively. The last column presents the calculated coefficient of the phonon MChE based on Eq. (2).

| | $S$ | $a_{0}$ (Å) | $D/a_{0}^{2}$ (J m$^{-2}$) | $\Delta_{0}(H_{\mathrm{c}})/2\pi$ (GHz) | FWHM$(H_{\mathrm{c}})$ (GHz) | $c$ (GPa) | $g_{\mathrm{MCh}}/\gamma^{2}k^{3}$ (m$^{3}$) |
|---|---|---|---|---|---|---|---|
| Co9Zn9Mn2 (200 K) [16, 29] | 0.75 | 3.1 | 1.2$\times 10^{-3}$ | 8.9 | 2.4 | $c_{44}=45$ | 43$\times 10^{-30}$ |
| Co9Zn9Mn2 (10 K) | | | | 10.9 | 4.6 | $c_{44}=47$ | |
| Cu2OSeO3 (30 K) [15] | 0.44 | 4.5 | 3.4$\times 10^{-4}$ | 3.7 | 0.4 | $c_{\mathrm{T}}=27$ | 11$\times 10^{-30}$ |
Next, the temperature dependence of the full width at half maximum (FWHM) of
the magnetic resonance at $H_{\mathrm{c}}$ is plotted on the right axis of Fig. 4(d)
with orange triangles. The broadening of the absorption peak is observed below
100 K. The width of the magnon absorption ($\Delta f$) is related to the
Gilbert damping parameter $\alpha$ as [47, 46]
$\Delta f=2\alpha f_{\mathrm{r}}+\Delta f_{0},$ (3)
where $f_{\mathrm{r}}$ is the resonance frequency and $\Delta f_{0}$ is the
extrinsic broadening. The Gilbert damping parameters of Co9Zn9Mn2 [37] and
Cu2OSeO3 [46] are plotted on the left axis of Fig. 4(d). The temperature dependences
of $\alpha$ and of the width of the magnon absorption are in a proportional
relation, as suggested by Eq. (3). The magnon damping of Co9Zn9Mn2 increases
towards lower temperatures. This is a typical behavior for magnetic alloys,
where magnons are scattered by conduction electrons [47, 48, 49]. At lower
temperature, the electron scattering time increases, and the larger momentum
transfer at the intraband relaxation results in the enhanced damping. For the
case of Co9Zn9Mn2, disorder of Mn spins at low temperatures might also be the
reason of the increased $\alpha$ [37]. On the other hand, in the case of
magnetic insulators, the magnon-phonon and magnon-magnon scatterings are
suppressed at lower temperatures (less populated magnons and phonons), leading
to the smaller $\alpha$. Therefore, the opposite temperature dependence of
$\alpha$ is a clear contrast between magnetic insulators and metals, which is
a key to understand the temperature dependence of the phonon MChE of
Co9Zn9Mn2.
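Extracting $\alpha$ from linewidth data via Eq. (3) amounts to a linear fit of $\Delta f$ against $f_{\mathrm{r}}$, with slope $2\alpha$ and intercept $\Delta f_{0}$. The sketch below uses synthetic resonance data with illustrative values of $\alpha$ and $\Delta f_{0}$, not the measured spectra of Fig. 4.

```python
def fit_damping(f_res, widths):
    """Ordinary least-squares fit of df = 2*alpha*f_r + df0 (Eq. (3)).
    Returns (alpha, df0): half the fitted slope, and the extrinsic
    broadening given by the intercept."""
    n = len(f_res)
    sx, sy = sum(f_res), sum(widths)
    sxx = sum(f * f for f in f_res)
    sxy = sum(f * w for f, w in zip(f_res, widths))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope / 2.0, intercept

# Synthetic linewidths with alpha = 0.1 and df0 = 0.5 GHz (illustrative):
f_res = [5.0, 8.0, 11.0]                      # resonance frequencies (GHz)
widths = [2 * 0.1 * f + 0.5 for f in f_res]   # FWHM (GHz)
alpha, df0 = fit_damping(f_res, widths)
print(alpha, df0)                             # recovers ~0.1 and ~0.5
```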
A hybridized state of a magnon and a phonon is called a magnon polaron [50,
51, 52]. When the magnon scattering time $\tau$ is too short to fully form the
hybridized state (namely, when $\alpha$ is too large), the anticrossing gap
$\Delta\omega$ is effectively reduced. For instance, when
$1/\tau\gg\Delta\omega$, the magnon-polaron bands become more smeared and
short-lived in this weak-coupling case [50]. Substituting
$\tau=1/\alpha\omega$, this condition reads $\alpha\gg\Delta\omega/\omega$. In
other words, the magnon polaron cannot be formed when the magnon line width is
too large compared to the anticrossing gap [Fig. 1(b)]. The anticrossing gap
is estimated as (see SM for details [42]),
$\begin{split}\Delta\omega=\frac{\gamma\braket{S^{z}}\sqrt{S}D}{a_{0}^{2}\sqrt{\rho
s}v_{0}^{2}}{\omega^{*}}^{\frac{3}{2}},\end{split}$ (4)
where $a_{0}$, $\rho$, $s=\hbar/V_{0}$, and $\omega^{*}$ are the lattice
constant, the mass density, the reduced Planck constant normalized by the
unit-cell volume, and the angular frequency at the hybridization point.
Substituting the parameters of Co9Zn9Mn2, $\Delta\omega/2\pi$ is estimated to
be of the order of 10 MHz, using $\gamma\sim 10$. The estimated
$\Delta\omega/\omega$ is of the order of $10^{-3}$, which is consistent with
$\Delta v/v_{0}\sim 10^{-4}$ because one can roughly use $\omega=vk$ for small
$k$. Therefore, the damping factor $\alpha\sim 0.1$ is much larger than
$\Delta\omega/\omega$, indicating that the magnon-phonon hybridization occurs
in a weak coupling regime. In this case, the magnon linewidth (and the magnon
scattering time) can considerably affect the hybridization. Indeed, the
anomaly at the phase transition $\Delta v(H_{c})/v_{0}$ decreases below 100 K
where $\alpha$ also increases (Fig. S1(b) [42]), indicating the weaker magnon-
phonon hybridization. Such an effectively weaker magnon-phonon coupling
accordingly leads to a smaller anticrossing gap and also reduces the nonreciprocity
$g_{\mathrm{MCh}}$. A quantitative description of how $\alpha$ affects
$g_{\mathrm{MCh}}$ is beyond the present theory, which assumes sharp magnon-polaron
bands, and is left for future investigation.
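The coupling criterion above reduces to a single comparison between $\alpha$ and $\Delta\omega/\omega$. A trivial sketch, using only the order-of-magnitude values quoted in the text ($\alpha\sim 0.1$, $\Delta\omega/\omega\sim 10^{-3}$; the function name is ours):

```python
def coupling_regime(alpha, gap_ratio):
    """Classify the magnon-phonon coupling by the criterion in the text:
    strong coupling requires alpha << gap_ratio (= anticrossing gap / frequency);
    here any alpha exceeding gap_ratio is flagged as weak coupling."""
    return "weak" if alpha > gap_ratio else "strong"

# Order-of-magnitude values quoted for Co9Zn9Mn2:
print(coupling_regime(0.1, 1e-3))   # weak coupling, as concluded in the text
```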
The above discussion suggests that the phonon MChE can be further enhanced by
reducing the magnon linewidth, which is related to impurities or defects in
the crystal. In particular, for the case of (Co0.5Zn0.5)20-xMnx alloys, the end
material Co10Zn10 has relatively small $\alpha$ [37] which might be
advantageous to realize a larger nonreciprocity. In this respect, magnetic
insulators typically have smaller $\alpha$ than metals where the magnon-
electron scattering is inevitable. In fact, even for the case of Cu2OSeO3,
which shows rather small $\alpha$ comparable to yttrium iron garnet [46],
the strong-coupling condition $\alpha\ll\Delta\omega/\omega$ is not satisfied.
This indicates that the magnon linewidth is an important factor even for
discussing the phonon MChE in magnetic insulators.
Nevertheless, the phonon-MChE coefficient $\Gamma$ of Co9Zn9Mn2 is 1.5 times
larger than that of Cu2OSeO3. This is because of the strong DM interaction in
Co9Zn9Mn2. In fact, by using the parameters summarized in Table 1 and Eq. (2),
the expected $g_{\mathrm{MCh}}$ for Co9Zn9Mn2 is four times larger than that of
Cu2OSeO3, assuming similar $\gamma$. Therefore, this system has great
potential to realize larger phonon MChE at room temperature, if the magnon
linewidth could be controlled.
In conclusion, we demonstrated the phonon MChE, i.e., nonreciprocal sound
propagation, in the chiral-lattice magnetic metal Co9Zn9Mn2 up to 250 K. In
contrast to the case in Cu2OSeO3, the phonon MChE of Co9Zn9Mn2 is enhanced at
higher temperatures. This temperature dependence is due to the magnon
linewidth. The magnon linewidth of Co9Zn9Mn2 rapidly increases below 100 K and
results in the weak coupling of the magnon-phonon hybridization, leading to
the decreased gap and nonreciprocity. Controlling the magnon dispersion could
further enhance the nonreciprocal properties of these chiral crystals.
We thank N. Nagaosa for fruitful discussions. We acknowledge the support of
the HLD at HZDR, member of the European Magnetic Field Laboratory (EMFL), and
the DFG through the Collaborative Research Center SFB 1143 (project-id
247310070). This work was partly supported by JSPS KAKENHI, Grant-in-Aid for
Scientific Research (JP19K23421, JP20K14403, JP20H00349, JP20K15164,
JP21H04440, JP21H04990, JP21K13876, JP21K18595, JP22H04965), JSPS Bilateral
Joint Research Projects (JPJSBP120193507), JST PRESTO (JPMJPR20B4), JST CREST
(JPMJCR1874, JPMJCR20T), and Katsu Research Encouragement Award of the
University of Tokyo, Asahi Glass Foundation and Murata Science Foundation.
## References
* [1] Y. Tokura and N. Nagaosa, Nat. Commun. 9, 3740 (2018).
* [2] M. Atzori, C. Train, E. A. Hillard, N. Avarvari, and G. L. J. A. Rikken, Chirality 33, 844–857 (2021).
* [3] G. L. J. A. Rikken and E. Raupach, Nature (London) 390, 493-494 (1997).
* [4] M. Vallet, R. Ghosh, A. Le Floch, T. Ruchon, F. Bretenaker, and J. Y. Thépot, Phys. Rev. Lett. 87, 183003 (2001).
* [5] C. Koerdt, G. Düchs, and G. L. J. A. Rikken, Phys. Rev. Lett. 91, 073902 (2003).
* [6] C. Train, R. Gheorghe, V. Krstic, L. M. Chamoreau, N. S. Ovanesyan, G. L. J. A. Rikken, M. Gruselle, and M. Verdaguer, Nat. Mater. 7, 729-734 (2008).
* [7] Y. Okamura, F. Kagawa, S. Seki, M. Kubota, M. Kawasaki, and Y. Tokura, Phys. Rev. Lett. 114, 197202 (2015).
* [8] M. Atzori, H. D. Ludowieg, Á. Valentín-Pérez, M. Cortijo, I. Breslavetz, K. Paillot, P. Rosa, C. Train, J. Autschbach, E. A. Hillard, and G. L. J. A. Rikken, Sci. Adv. 7, eabg2859 (2021).
* [9] G. L. J. A. Rikken, J. Fölling, and P. Wyder, Phys. Rev. Lett. 87, 236602 (2001).
* [10] V. Krstic, S. Roth, M. Burghard, K. Kern, and G. L. J. A. Rikken, J. Chem. Phys. 117, 11315-11319 (2002).
* [11] F. Pop, P. Auban-Senzier, E. Canadell, G. L. J. A. Rikken, and N. Avarvari, Nat. Commun. 5, 3757 (2014).
* [12] T. Yokouchi, N. Kanazawa, A. Kikkawa, D. Morikawa, K. Shibata, T. Arima, Y. Taguchi, F. Kagawa, and Y. Tokura, Nat. Commun. 8, 4757 (2017).
* [13] R. Aoki, Y. Kousaka, and Y. Togawa, Phys. Rev. Lett. 122, 057206 (2019).
* [14] A. Kitaori, N. Kanazawa, H. Ishizuka, T. Yokouchi, N. Nagaosa, and Y. Tokura, Phys. Rev. B 103, L220410 (2021).
* [15] S. Seki, Y. Okamura, K. Kondou, K. Shibata, M. Kubota, R. Takagi, F. Kagawa, M. Kawasaki, G. Tatara, Y. Otani, and Y. Tokura, Phys. Rev. B 93, 235131 (2016).
* [16] R. Takagi, D. Morikawa, K. Karube, N. Kanazawa, K. Shibata, G. Tatara, Y. Tokunaga, T. Arima, Y. Taguchi, Y. Tokura, and S. Seki, Phys. Rev. B 95, 220406(R) (2017).
* [17] Y. Iguchi, S. Uemura, K. Ueno, and Y. Onose, Phys. Rev. B 92, 184419 (2015).
* [18] S. Seki, M. Garst, J. Waizner, R. Takagi, N. D. Khanh, Y. Okamura, K. Kondou, F. Kagawa, Y. Otani, and Y. Tokura, Nat. Commun. 11, 256 (2020).
* [19] N. Ogawa, L. Köhler, M. Garst, S. Toyoda, S. Seki, and Y. Tokura, Proc. Natl. Acad. Sci. U.S.A. 118, e2022927118 (2021).
* [20] T. Nomura, X.-X. Zhang, S. Zherlitsyn, J. Wosnitza, Y. Tokura, N. Nagaosa, and S. Seki, Phys. Rev. Lett. 122, 145901 (2019).
* [21] A. A. Tereshchenko, A. S. Ovchinnikov, I. Proskurin, E. V. Sinitsyn, and J. Kishine, Phys. Rev. B 97, 184303 (2018).
* [22] M. Kataoka, J. Phys. Soc. Jpn. 56, 3635-3647 (1987).
* [23] J.-W. G. Bos, C. V. Colin, and T. T. M. Palstra, Phys. Rev. B 78, 094416 (2008).
* [24] S. Seki, X. Z. Yu, S. Ishiwata, and Y. Tokura, Science 336, 198-201 (2012).
* [25] T. Hori, H. Shiraisha, and Y. Ishii, J. Magn. Magn. Mater. 310, 1820–1822 (2007).
* [26] W. Xie, S. Thimmaiah, J. Lamsal, J. Liu, T. W. Heitmann, D. Quirinale, A. I. Goldman, V. Pecharsky, and G. J. Millera, Inorg. Chem. 52, 9399–9408 (2013).
* [27] Y. Tokunaga, X. Z. Yu, J. S. White, H. M. Rønnow, D. Morikawa, Y. Taguchi, and Y. Tokura, Nat. Commun. 6, 7638–7644 (2015).
* [28] K. Karube, J. S. White, N. Reynolds, J. L. Gavilano, H. Oike, A. Kikkawa, F. Kagawa, Y. Tokunaga, H. M. Rønnow, Y. Tokura, and Y. Taguchi, Nat. Mater. 15, 1237–1243 (2016).
* [29] K. Karube, J. S. White, D. Morikawa, M. Bartkowiak, A. Kikkawa, Y. Tokunaga, T. Arima, H. M. Rønnow, Y. Tokura, and Y. Taguchi, Phys. Rev. Mater. 1, 074405 (2017).
* [30] D. Morikawa, X. Yu, K. Karube, Y. Tokunaga, Y. Taguchi, T. Arima, and Y. Tokura, Nano Lett. 17, 1637–1641 (2017).
* [31] X. Z. Yu, W. Koshibae, Y. Tokunaga, K. Shibata, Y. Taguchi, N. Nagaosa, and Y. Tokura, Nature (London) 564, 95 (2018).
* [32] K. Karube, J. S. White, D. Morikawa, C. D. Dewhurst, R. Cubitt, A. Kikkawa, X. Yu, Y. Tokunaga, T. Arima, H. M. Rønnow, Y. Tokura, and Y. Taguchi, Sci. Adv. 4: eaar7043 (2018).
* [33] J. D. Bocarsly, C. Heikes, C. M. Brown, S. D. Wilson, and R. Seshadri, Phys. Rev. Materials 3, 014402 (2019).
* [34] T. Nagase, M. Komatsu, Y. G. So, T. Ishida, H. Yoshida, Y. Kawaguchi, Y. Tanaka, K. Saitoh, N. Ikarashi, M. Kuwahara, and M. Nagao, Phys. Rev. Lett. 123,137203 (2019).
* [35] K. Karube, J. S. White, V. Ukleev, C. D. Dewhurst, R. Cubitt, A. Kikkawa, Y. Tokunaga, H. M. Rønnow, Y. Tokura, and Y. Taguchi, Phys. Rev. B 102, 064408 (2020).
* [36] T. Nagase, Y. G. So, H. Yasui, T. Ishida, H. K. Yoshida, Y. Tanaka, K. Saitoh, N. Ikarashi, Y. Kawaguchi, M. Kuwahara, and M. Nagao, Nat. Commun. 12, 3490 (2021).
* [37] M. Preißinger, K. Karube, D. Ehlers, B. Szigeti, H.-A. Krug von Nidda, J. S. White, V. Ukleev, H. M. Rønnow, Y. Tokunaga, A. Kikkawa, Y. Tokura, Y. Taguchi, and I. Kézsmárki, npj Quantum Materials 6:65 (2021).
* [38] L. C. Peng, K. Karube, Y. Taguchi, N. Nagaosa, Y. Tokura, and X. Z. Yu, Nat. Commun. 12, 6797 (2021).
* [39] T. Shimojima, A. Nakamura, X. Z. Yu, K. Karube, Y. Taguchi, Y. Tokura, and K. Ishizaka, Sci. Adv. 7, eabg1322 (2021).
* [40] S. Zherlitsyn, S. Yasin, J. Wosnitza, A. A. Zvyagin, A. V. Andreev, and V. Tsurkan, Low Temp. Phys. 40, 123-133 (2014).
* [41] R. Takagi, M. Garst, J. Sahliger, C. H. Back, Y. Tokura, and S. Seki, Phys. Rev. B 104, 144410 (2021).
* [42] See Supplemental Material at http:// for the experimental details and theoretical derivation of the anticrossing gap at the magnon-phonon hybridization.
* [43] T. Koretsune, N. Nagaosa, and R. Arita, Sci. Rep. 5, 13302 (2015).
* [44] K. Shibata et al., Nat. Nanotech. 10, 589-592 (2015).
* [45] Y. Hirokane, Y. Nii, H. Masuda, and Y. Onose, Sci. Adv. 6, eabd3703 (2020).
* [46] I. Stasinopoulos, S. Weichselbaumer, A. Bauer, J. Waizner, H. Berger, S. Maendl, M. Garst, C. Pfleiderer, and D. Grundler, Appl. Phys. Lett. 111, 032408 (2017).
* [47] H. Yu, R. Huber, T. Schwarze, F. Brandl, T. Rapp, P. Berberich, G. Duerr, and D. Grundler, Appl. Phys. Lett. 100, 262412 (2012).
* [48] K. Gilmore, M. D. Stiles, J. Seib, D. Steiauf, and M. Fähnle, Phys. Rev. B 81, 174414 (2010).
* [49] M. Fähnle and C. Illg, J. Phys.: Condens. Matter 23, 493201 (2011).
* [50] T. Kikkawa, K. Shen, B. Flebus, R. A. Duine, K. Uchida, Z. Qiu, G. E. W. Bauer, and E. Saitoh, Phys. Rev. Lett. 117, 207203 (2016).
* [51] B. Flebus, K. Shen, T. Kikkawa, K. Uchida, Z. Qiu, E. Saitoh, R. A. Duine, and G. E. W. Bauer, Phys. Rev. B 95, 144420 (2017).
* [52] C. Kittel, Phys. Rev. 110, 836–841 (1958).
# Wireless Image Transmission with Semantic and Security Awareness
Maojun Zhang, Yang Li, Zezhong Zhang, Guangxu Zhu, Caijun Zhong

M. Zhang and C. Zhong are with the College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China (Email: <EMAIL_ADDRESS>; caijunzhong@zju.edu.cn). Y. Li is with the China Academy of Information and Communications Technology, Beijing, China (Email: liyang3@caict.ac.cn). Z. Zhang is with The Chinese University of Hong Kong (Shenzhen), Shenzhen, China (Email: zhangzezhong@cuhk.edu.cn). G. Zhu is with the Shenzhen Research Institute of Big Data, Shenzhen, China (Email: gxzhu@sribd.cn).
###### Abstract
Semantic communication is an increasingly popular framework for wireless image
transmission due to its high communication efficiency. With the aid of the
_joint-source-and-channel_ (JSC) encoder implemented by neural network,
semantic communication directly maps original images into symbol sequences
containing semantic information. Compared with the traditional separate source
and channel coding design used in bit-level communication systems, semantic
communication systems are known to be more efficient and accurate especially
in the low signal-to-the-noise ratio (SNR) regime. This thus prompts a
critical while yet to be tackled issue of security in semantic communication:
it makes the eavesdropper much easier to crack the semantic information as it
can be retrieved even in a highly noisy channel. In this letter, we develop a
semantic communication framework that accounts for both semantic meaning
decoding efficiency and its risk of privacy leakage. To this end, targeting
wireless image transmission, we propose a JSC autoencoder featuring a residual
structure for efficient semantic meaning extraction and transmission, and the
training of which is guided by a well-designed loss function that can flexibly
regulate the efficiency-privacy trade-off. Extensive experimental results are
provided to show the effectiveness and robustness of the proposed scheme.
## I introduction
The wide success of artificial intelligence (AI) in every aspect of our
society has also driven the rapid advancement of wireless communications [1].
Recently, as a consequence of the fusion of AI and communication, a novel
paradigm, called semantic communication, has received great attention.
Building on the deep learning based end-to-end communication system [2],
semantic communication further introduces efficient semantic encoder network,
so that the essential semantic information instead of the raw data can be
extracted, encoded and delivered to the receiver, which is believed to be a
more efficient and effective way to convey information in the next generation
wireless networks [3].
In particular, semantic communication has shown promising gains in image
transmission tasks. In classic bit-level communication, the images are first
compressed into binary sequences by source coding algorithms (e.g., JPEG,
JPEG2000, BPG), followed by channel coding schemes (e.g., Turbo, LDPC) that
add redundancy to combat random channel perturbations; the codewords are then
modulated into symbol sequences for reliable transmission. Such a separate
source and channel coding scheme makes it hard to guarantee the optimality of
the whole system in terms of the rate-distortion trade-off. Prompted by this,
the authors in [4] first proposed a _joint-source-and-channel-coding_ (JSCC)
method for wireless image transmission, where the images are directly mapped
into complex-valued symbols through a well-trained neural network. To further
improve the quality of the reconstructed image, feedback and multi-layer
bandwidth-agile designs were subsequently proposed in [5, 6]. In addition,
since semantic communication aims to deliver semantic meaning rather than
perfectly reconstruct the original source at the receiver, the commonly-used
pixel similarity (e.g., PSNR [4]) is no longer appropriate to describe the
goodness of semantic communication for image transmission tasks. Given this,
new reconstruction performance metrics customized for semantic communication
were proposed in [7, 8], where semantic communication exhibits remarkably
higher efficiency and accuracy than bit-level communication, especially in the
low signal-to-noise ratio (SNR) regime.
As every coin has two sides, accompanying the good performance in the low SNR
regime is a higher risk of privacy leakage, as it implies that eavesdroppers
can crack the semantic information more easily, even through a highly noisy
channel. This prompts a critical issue regarding secure semantic
communications. To design security-aware semantic communication systems, one
needs to balance the trade-off between the transmission efficiency at the
destination user (Bob) and the information leakage to the eavesdropper (Eve).
In classic bit-level communication systems, the secrecy capacity, rather than
the channel capacity, serves as the main performance metric of interest to
ensure security. The theoretical analysis of the secrecy capacity targeting
bit-level communication was presented in [9, 10]. Building on it, the secrecy
capacity region can be derived, and secure transmission can be achieved by
proper transmission power control and specific channel coding designs
[11, 12, 13]. Nevertheless, in semantic communication, the “black-box” nature
of the JSCC block implemented by neural networks makes the derivation of the
secrecy capacity intractable, if not impossible. Therefore, existing secure
communication schemes built for bit-level communication systems cannot be
directly applied to semantic communication systems, leaving secure semantic
communication a largely uncharted area.
As discussed above, there are two basic objectives in secure semantic
communication systems: one concerns efficiency, i.e., the semantic recovery
quality at Bob; the other concerns privacy, i.e., the semantic leakage to Eve.
This gives rise to a fundamental trade-off between efficiency and privacy. In
this letter, we develop a secure semantic communication framework that
accounts for both objectives. First, we propose an efficient
_joint-source-channel_ (JSC) autoencoder featuring the cascading of residual
blocks with convolution layers for efficient semantic meaning extraction and
transmission, whose training is guided by a well-designed loss function that
can flexibly balance the efficiency-privacy trade-off. Extensive experiments
show that the proposed JSCC scheme significantly outperforms the traditional
separate source and channel coding scheme in the low SNR regime while
preventing privacy leakage at the semantic level, thus achieving the desired
efficient and secure semantic communication.
## II System Model
In this section, we present the downlink semantic communication system for
image transmission, and put forth the privacy issue caused by Eve.
### II-A Semantic Transmitter
Figure 1: Illustration of the semantic communication system for image
transmission
As shown in Fig. 1, the _base station_ (BS) aims to confidentially transmit the
image $\mathbf{s}$ to the legitimate user (Bob) in the presence of a passive
eavesdropper (Eve). Different from the conventional separate source coding
(e.g., JPEG, BPG) and channel coding (e.g., Turbo, LDPC code) design,
compression and error protection are implemented by the _joint-source-and-channel_
(JSC) encoder composed of deep neural networks (DNNs). The encoding process is
given as follows:
$\displaystyle\mathbf{x}=f\left(\mathbf{s};\boldsymbol{\theta}\right),$ (1)
where $\boldsymbol{\theta}$ and $\mathbf{x}\in\mathbb{R}^{M\times 1}$ denote
the trainable parameters of the JSC encoder and the latent semantic
representation of the image source $\mathbf{s}$, respectively.
Considering the transmit power limitation, we have the following power
constraint on the transmitted signal, i.e.,
$\frac{1}{M}\mathbb{E}\left\\{x_{i}^{2}\right\\}\leq p$.
### II-B Wireless Transmission
We consider the _additive white Gaussian noise_ (AWGN) channel. The received
signal of Bob through the legitimate AWGN channel is given by
$\displaystyle\mathbf{y}_{b}=\mathbf{x}+\mathbf{n}_{b},$ (2)
where $\mathbf{n}_{b}\sim\mathcal{N}\left(0,\sigma_{b}^{2}\mathbf{I}\right)$,
$\sigma_{b}^{2}$ is the average noise power.
Similarly, Eve can receive the information through the wiretap channel as
follows:
$\displaystyle\mathbf{y}_{e}=\mathbf{x}+\mathbf{n}_{e},$ (3)
where $\mathbf{n}_{e}\sim\mathcal{N}\left(0,\sigma_{e}^{2}\mathbf{I}\right)$,
$\sigma_{e}^{2}$ is the average noise power. Generally, as in [10, 11], we
assume that the wiretap channel between BS and Eve is worse than the channel
between BS and Bob, i.e., $P=\frac{\sigma_{e}^{2}}{\sigma_{b}^{2}}\gg 1$.
### II-C Semantic Receiver
On the receiver side, both Bob and Eve try to decode the image as follows:
$\displaystyle\widehat{\mathbf{s}}_{b}=g\left(\mathbf{y}_{b};\boldsymbol{\Theta}_{b}\right),\qquad\widehat{\mathbf{s}}_{e}=g\left(\mathbf{y}_{e};\boldsymbol{\Theta}_{e}\right).$
(4)
We note that both the JSC encoder and decoder can only be deployed after
sufficient training, at the cost of considerable communication overhead.
Moreover, there is a strong demand for serving multiple users in a semantic
communication system. Given these, sharing the JSC decoder publicly is a
natural solution for alleviating the training burden, i.e.,
$\boldsymbol{\Theta}_{e}=\boldsymbol{\Theta}_{b}$, as users in the cell can
collaborate to improve the JSC decoder through federated learning. However,
the shared JSC decoder raises a critical privacy issue: it makes it easy for
Eve to crack the semantic information, as it can be retrieved even through a
quite noisy channel. We tackle this issue in the subsequent section.
## III Proposed Method
In this section, we first propose a JSC autoencoder featuring the cascading of
residual block with convolution layer to extract the semantic information
efficiently. Given the potential privacy leakage, we then propose a data-
driven scheme that balances the efficiency-privacy trade-off.
### III-A JSC Autoencoder Design
Figure 2: Network architecture of the JSC autoencoder
The network architecture of the JSC autoencoder is shown in Fig. 2. As in
[14], residual blocks are added to improve model performance and training
stability. In the encoder, we alternately cascade residual blocks with
convolution layers, downsampling the input image three times through the
residual blocks. In addition, all intermediate results are normalized by
_generalized divisive normalization_ (GDN) [15], which is widely used in image
compression. The network structure of the decoder is similar to that of the
encoder, except that sub-pixel convolution layers [16] are adopted to
reconstruct the image. Compared with transposed convolution layers, they can
improve the resolution of the obtained image through learning, thus improving
the reconstruction performance.
Note that, unlike in traditional communication systems where perfect recovery
of $\mathbf{x}$ from $\mathbf{y}_{b}$ is pursued, we train the autoencoders in
an end-to-end way; as such, image compression and channel adaptation are
jointly achieved by using the following loss function,
$\displaystyle\mathcal{L}_{1}=\frac{1}{B}\sum_{i=1}^{B}d\left(\mathbf{s}_{i},\widehat{\mathbf{s}}_{i}\right),$
(5)
where $B$ denotes the batch size,
$d\left(\mathbf{s}_{i},\widehat{\mathbf{s}}_{i}\right)=\left\|\mathbf{s}_{i}-\widehat{\mathbf{s}}_{i}\right\|^{2}$
is the _mean squared-error_ (MSE) distortion between the reconstructed image
and the raw image.
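A minimal NumPy sketch of the batch loss (5), assuming images are flattened to vectors; the shapes and values below are illustrative only.

```python
import numpy as np

def mse_loss(s_batch, s_hat_batch):
    """Eq. (5): average over the batch of the squared error ||s_i - s_hat_i||^2."""
    diff = (s_batch - s_hat_batch).reshape(len(s_batch), -1)
    return float(np.mean(np.sum(diff ** 2, axis=1)))

# Toy batch of B = 4 "images" with 8 pixels each
s = np.ones((4, 8))
s_hat = np.zeros((4, 8))
# every per-image distortion is ||1 - 0||^2 over 8 pixels, i.e. 8
```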
### III-B Privacy-aware Design
(a) Original image
(b) BPG-Turbo-64QAM
(c) JSCC with (5)
Figure 3: The reconstructed image produced by Eve (${\rm SNR}_{\rm Eve}=0{\rm
dB}$)
As reported in [3, 4], one of the advantages of semantic communication is that
satisfactory performance can be achieved even in the low SNR regime. As shown
in Fig. 3 (the detailed settings are given in Section IV), over the poor
wiretap channel, Eve in the conventional image transmission system cannot
decode anything, while with the powerful JSC decoder, Eve in the semantic
communication system can still crack the semantic information. This shows that
the semantic communication system indeed has a higher risk of privacy leakage
than bit-level communication.
To address this issue, similar to the secrecy capacity, an intuitive way is to
take Eve's reconstruction quality into account in the training objective. The
privacy-aware loss function is given by
$\displaystyle\mathcal{L}_{2}=\frac{1}{B}\sum_{i=1}^{B}\left[d\left(\mathbf{s}_{i},\widehat{\mathbf{s}}_{b,i}\right)-\lambda\cdot
d^{\prime}\left(\mathbf{s}_{i},\widehat{\mathbf{s}}_{e,i}\right)\right],$ (6)
where $\lambda$ is the weighting factor and $d^{\prime}(\cdot)$ characterizes
the privacy leakage to Eve.
The main challenge is then to properly design $d^{\prime}(\cdot)$. There are
two principles for it. First, $d^{\prime}(\cdot)$ does not have to take the
same form as $d$, since privacy information may have various definitions.
Second, there is a trade-off between Bob's reconstruction quality and the
privacy leakage to Eve: we should minimize Bob's reconstruction distortion
while protecting privacy to a certain degree. Considering these, we propose
the following criterion of privacy leakage,
$\displaystyle
d^{\prime}\left(\mathbf{s}_{i},\widehat{\mathbf{s}}_{e,i}\right)=\left\\{\begin{array}[]{ll}-d\left(\mathbf{0},\widehat{\mathbf{s}}_{e,i}\right)&{\rm if}~{}d\left(\mathbf{0},\widehat{\mathbf{s}}_{e,i}\right)>\epsilon\\\
0&{\rm otherwise,}\end{array}\right.$ (9)
where $\epsilon$ is the predefined threshold for privacy protection and
$\mathbf{0}$ denotes the all-black image with the same shape as $\mathbf{s}$.
###### Remark 1.
_Generally, the degree of privacy leakage can be characterized by the mutual
information ${\rm I}(\mathbf{s}_{i};\widehat{\mathbf{s}}_{e,i})={\rm
H}(\widehat{\mathbf{s}}_{e,i})-{\rm
H}(\widehat{\mathbf{s}}_{e,i}|\mathbf{s}_{i})$. However, ${\rm
I}(\mathbf{s}_{i};\widehat{\mathbf{s}}_{e,i})$ is not tractable. We instead
minimize ${\rm H}(\widehat{\mathbf{s}}_{e,i})$, an upper bound of ${\rm
I}(\mathbf{s}_{i};\widehat{\mathbf{s}}_{e,i})$, which can be achieved by
forcing the image decoded by Eve to converge to a constant image, i.e., the
all-black image. In addition, $d^{\prime}\left(\cdot\right)$ serves as a
penalty term in the training objective: we set a threshold, and if the privacy
leakage exceeds $\epsilon$, $d^{\prime}\left(\cdot\right)$ corrects the
training direction towards privacy protection._
Combining (6) with (9) yields the novel privacy-aware loss function, referred
to as _secure mean squared-error_ (SecureMSE).
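Putting (6) and (9) together, SecureMSE can be sketched as follows. The weighting factor `lam` and threshold `eps` are illustrative choices, not values from the letter.

```python
import numpy as np

def sq_dist(a, b):
    """Squared-error distortion d(a, b) = ||a - b||^2."""
    return float(np.sum((a - b) ** 2))

def secure_mse(s, s_hat_b, s_hat_e, lam=0.5, eps=1.0):
    """Eq. (6) with the leakage penalty d' of eq. (9)."""
    total = 0.0
    for si, sbi, sei in zip(s, s_hat_b, s_hat_e):
        d_bob = sq_dist(si, sbi)                       # Bob's distortion
        d_black = sq_dist(np.zeros_like(si), sei)      # Eve's output vs. all-black
        d_prime = -d_black if d_black > eps else 0.0   # eq. (9): active only past eps
        total += d_bob - lam * d_prime
    return total / len(s)

s = np.ones((2, 4))
# Eve reconstructs the image well -> the penalty is active and inflates the loss
loss_leaky = secure_mse(s, s, s)
# Eve decodes all-black -> the penalty vanishes
loss_safe = secure_mse(s, s, 0 * s)
```

The penalty only fires once Eve's reconstruction drifts more than `eps` away from the all-black image, so training is free to prioritize Bob's distortion whenever leakage is already suppressed.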
## IV Simulation Results
In this section, we conduct a set of experiments evaluating the performance of
the proposed scheme, including the reconstruction quality at Bob and the
privacy leakage to Eve. The AWGN system in Section II-B is first considered,
where $P=\frac{\sigma_{e}^{2}}{\sigma_{b}^{2}}$ is set to 15 dB. In addition, a
more practical _multiple-input-single-output_ (MISO) system is also
considered, where precoding techniques can be exploited to relax the noise
power requirement of the AWGN system. Specifically, $\mathbf{x}$ is normalized
to $\sqrt{Mp}\mathbf{x}/\left\|\mathbf{x}\right\|_{2}$ to satisfy the average
power constraint, with $p=1$. In all the presented figures, ${\rm SNR}$
denotes the transmission signal-to-noise ratio at the Bob side.
Dataset: We use the Linnaeus 5 dataset for training and
testing.222chaladze.com/l5 The images have dimension $128\times 128\times 3$.
The dataset comprises 5 classes: berry, bird, dog, flower, and other, with
1200 training images and 400 testing images per class.
Performance Metric: To measure the performance of the proposed scheme and the
baseline schemes, we use the structural similarity index measure (SSIM) as the
performance metric, which is given below:
$\displaystyle{\rm
SSIM}(\mathbf{s},\widehat{\mathbf{s}})=\frac{\left(2\mu_{\mathbf{s}}\mu_{\widehat{\mathbf{s}}}+c_{1}\right)\left(2\sigma_{\mathbf{s}\widehat{\mathbf{s}}}+c_{2}\right)}{\left(\mu_{\mathbf{s}}^{2}+\mu_{\widehat{\mathbf{s}}}^{2}+c_{1}\right)\left(\sigma_{\mathbf{s}}^{2}+\sigma_{\widehat{\mathbf{s}}}^{2}+c_{2}\right)}$
(10)
where $\mu_{\mathbf{s}}$ and $\sigma_{\mathbf{s}}^{2}$ ($\mu_{\widehat{\mathbf{s}}}$
and $\sigma_{\widehat{\mathbf{s}}}^{2}$) are the mean and variance of
$\mathbf{s}$ (resp. $\widehat{\mathbf{s}}$),
$\sigma_{\mathbf{s}\widehat{\mathbf{s}}}$ is the covariance between
$\mathbf{s}$ and $\widehat{\mathbf{s}}$, and $c_{1}$ and $c_{2}$ are constants
for numerical stability.
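A direct, whole-image implementation of (10) (practical SSIM implementations evaluate the statistics over sliding local windows, which we omit here); the stability constants below are illustrative.

```python
import numpy as np

def ssim_global(s, s_hat, c1=1e-4, c2=9e-4):
    """Eq. (10) evaluated over the full images (no sliding window)."""
    mu_s, mu_h = s.mean(), s_hat.mean()
    var_s, var_h = s.var(), s_hat.var()
    cov = ((s - mu_s) * (s_hat - mu_h)).mean()
    num = (2 * mu_s * mu_h + c1) * (2 * cov + c2)
    den = (mu_s ** 2 + mu_h ** 2 + c1) * (var_s + var_h + c2)
    return num / den

rng = np.random.default_rng(0)
img = rng.uniform(size=(16, 16))
# A perfect reconstruction scores 1; an anticorrelated one scores strictly less.
```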
Training Setting: We adopt the Adam optimizer, with the learning rate of
0.0001. the pretrained model is first obtained by using the ImageNet dataset
with the loss function of MSE and the assumption of ideal transmission (i.e.,
the receiver can obtain $\mathbf{x}$ without noise.). Then the final model is
obtained through training under specific channel and loss setting. All the
number of filters in residual blocks and convolution layers are $128$. The
experiments are implemented by PyTorch and Python 3.8 on a Linux server with 2
NVIDIA RTX 3090 GPUs.
### IV-A Reconstruction Evaluation
Figure 4: Performance comparison among different transmission schemes
In this subsection, we compare the reconstruction quality of the proposed
scheme, the JSCC scheme in [4], and the conventional schemes with separate
source and channel coding. For the conventional schemes, two source coding
schemes, JPEG2000 and BPG, are considered. To ensure fairness, the same number
of transmitted symbols is guaranteed. The results are presented in Fig. 4. For
the two traditional schemes, the reconstruction quality degrades severely when
the channel quality is poor (i.e., ${\rm SNR}\leq 5{\rm dB}$). This is because
the traditional schemes represent the original picture as bit sequences, and a
poor channel leads to a high bit error rate; for the BPG and JPEG2000
compression standards, the accumulation of bit errors causes decoding failure.
The JSC coding scheme, in contrast, transmits the most important semantic
information in the form of symbols: although symbol errors still occur, they
only cause a semantic offset. It achieves relatively satisfactory performance
in the low SNR regime and maintains performance similar to the traditional
schemes in the high SNR regime. Moreover, the alternating residual and
convolutional structure outperforms the fully convolutional structure, which
verifies the effectiveness of the proposed scheme.
### IV-B Security Evaluation
Figure 5: Performance comparison of Bob and Eve in AWGN system Figure 6:
Performance comparison of Bob and Eve in MISO system
(a) Original image
(b) MSE Bob
(c) MSE Eve
(d) SecureMSE Bob
(e) SecureMSE Eve
Figure 7: Examples of reconstructed images produced by Bob and Eve targeting
different objective in AWGN system (${\rm SNR}=10{\rm dB}$)
#### IV-B1 AWGN System
Under the AWGN system, the reconstruction performance of the models trained
with the two objectives (i.e., MSE in (5) and SecureMSE in (6)) is shown in
Fig. 5. The MSE scheme achieves the best reconstruction performance at the Bob
side. However, as the SNR increases, especially when ${\rm SNR\geq 10dB}$,
Eve's reconstruction performance improves considerably as well. Fig. 7
presents examples of reconstructed images at ${\rm SNR=10dB}$: the baseline
model without privacy awareness allows Eve to roughly reconstruct the image,
verifying the privacy leakage in current semantic communication systems. For
the proposed SecureMSE scheme, Eve's reconstruction quality barely improves as
the SNR grows, thanks to the privacy awareness embedded in the well-designed
loss function. From Fig. 7, it can be seen that Eve can no longer obtain any
private information under the SecureMSE model. In addition, comparing Bob's
reconstructions under the two objectives, SecureMSE only causes a slight
performance loss, which is negligible to human perception. The validity of the
proposed algorithm is thus verified.
(a) Original image
(b) MSE Bob
(c) MSE Eve
(d) SecureMSE Bob
(e) SecureMSE Eve
Figure 8: Examples of reconstructed images produced by Bob and Eve targeting
different objective in MISO system (${\rm SNR}=10{\rm dB}$)
#### IV-B2 MISO System
For a typical MISO system, $\mathbf{y}_{b}$ and $\mathbf{y}_{e}$ are
respectively given by
$\displaystyle\mathbf{y}_{b}=\left(\mathbf{h}_{b}^{H}\mathbf{v}\right)\otimes\mathbf{x}+\mathbf{n}_{b},~{}~{}\mathbf{y}_{e}=\left(\mathbf{h}_{e}^{H}\mathbf{v}\right)\otimes\mathbf{x}+\mathbf{n}_{e},$
(11)
where $\mathbf{h}_{b}\in\mathbb{C}^{N\times 1}$ and
$\mathbf{h}_{e}\in\mathbb{C}^{N\times 1}$ denote the channel between BS and
Bob, BS and Eve, respectively. The _maximum ratio transmission_ (MRT)
precoding scheme is adopted, that is,
$\mathbf{v}=\frac{\mathbf{h}_{b}}{\left\|\mathbf{h}_{b}\right\|_{2}^{2}}$.
Then, $\mathbf{y}_{b}$ and $\mathbf{y}_{e}$ can be rewritten as
$\displaystyle\mathbf{y}_{b}=\mathbf{x}+\mathbf{n}_{b},\qquad\mathbf{y}_{e}=\alpha_{e}\mathbf{x}+\mathbf{n}_{e},$
(12)
where
$\alpha_{e}=\frac{\mathbf{h}_{e}^{H}\mathbf{h}_{b}}{\left\|\mathbf{h}_{b}\right\|_{2}^{2}}$.
In the following experiment, we set $N=8$ and
$P=\frac{\sigma_{e}^{2}}{\sigma_{b}^{2}}=1$.
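A quick numerical check of the MRT reduction from (11) to (12): with $\mathbf{v}=\mathbf{h}_{b}/\|\mathbf{h}_{b}\|_{2}^{2}$, Bob's effective gain $\mathbf{h}_{b}^{H}\mathbf{v}$ collapses to $1$, while Eve is left with the fading coefficient $\mathbf{h}_{e}^{H}\mathbf{v}$. The Gaussian channel draws are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
h_b = rng.normal(size=N) + 1j * rng.normal(size=N)   # BS-Bob channel
h_e = rng.normal(size=N) + 1j * rng.normal(size=N)   # BS-Eve channel

v = h_b / np.linalg.norm(h_b) ** 2                   # MRT precoder
gain_b = np.vdot(h_b, v)                             # h_b^H v, equals 1
alpha_e = np.vdot(h_e, v)                            # Eve's fading coefficient in (12)
```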
The performance comparison between the proposed scheme and the baseline under
the MISO system is shown in Fig. 6. In the low SNR regime (i.e.,
${\rm-5dB<SNR<5dB}$), the reconstruction performance of SecureMSE at Bob
decreases to a certain extent. This is because in the MISO system, as shown in
(12), the main difference between Bob and Eve is the fading coefficient; at
high noise levels, the JSC decoder cannot distinguish Bob from Eve through the
heterogeneity of their channels, which makes it more difficult to maximize the
reconstruction quality at Bob while suppressing the privacy leakage to Eve. As
the SNR increases (i.e., ${\rm SNR\geq 10dB}$), both the reconstruction
performance at Bob and the privacy preservation against Eve improve
considerably. In addition, example reconstructed images at $\rm SNR=10dB$ are
shown in Fig. 8. The model based on the conventional MSE loss still suffers
from privacy leakage, while the model trained with the proposed SecureMSE loss
once again prevents it, verifying the robustness of the proposed algorithm in
various scenarios.
## V Conclusion
In this letter, we studied the semantic communication system for wireless
image transmission and developed an efficient JSC framework. In addition, we
discussed the privacy issue in current semantic communication systems and
revealed the potential privacy leakage. Prompted by this, we proposed a
data-driven privacy protection scheme called SecureMSE, featuring a
well-designed loss function with privacy awareness. Experimental results
verify the effectiveness and robustness of the proposed scheme.
## References
* [1] D. Gunduz, Z. Qin, I. E. Aguerri, H. S. Dhillon, Z. Yang, A. Yener, K. K. Wong, and C.-B. Chae, “Beyond transmitting bits: Context, semantics, and task-oriented communications,” [Online]. Available: https://arxiv.org/abs/2207.09353, 2022.
* [2] H. Ye, L. Liang, G. Y. Li, and B.-H. Juang, “Deep learning-based end-to-end wireless communication systems with conditional GANs as unknown channels,” IEEE Trans. Wireless Commun., vol. 19, no. 5, pp. 3133–3143, 2020.
* [3] H. Xie, Z. Qin, G. Y. Li, and B.-H. Juang, “Deep learning enabled semantic communication systems,” IEEE Trans. Signal Process., vol. 69, pp. 2663–2675, 2021.
* [4] E. Bourtsoulatze, D. B. Kurka, and D. Gündüz, “Deep joint source-channel coding for wireless image transmission,” IEEE Trans. Cogn. Commun. Netw., vol. 5, no. 3, pp. 567–579, 2019.
* [5] D. B. Kurka and D. Gündüz, “DeepJSCC-f: Deep joint source-channel coding of images with feedback,” IEEE J. Sel. Areas Inf. Theory, vol. 1, no. 1, pp. 178–193, 2020.
* [6] D. B. Kurka and D. Gündüz, “Bandwidth-agile image transmission with deep joint source-channel coding,” IEEE Trans. Wireless Commun., vol. 20, no. 12, pp. 8081–8095, 2021.
* [7] J. Wang, S. Wang, J. Dai, Z. Si, D. Zhou, and K. Niu, “Perceptual learned source-channel coding for high-fidelity image semantic transmission,” [Online]. Available: https://arxiv.org/abs/2205.13120, 2022.
* [8] D. Huang, F. Gao, X. Tao, Q. Du, and J. Lu, “Towards semantic communications: Deep learning-based image semantic coding,” [Online]. Available: https://arxiv.org/abs/2208.04094, 2022.
* [9] P. K. Gopala, L. Lai, and H. El Gamal, “On the secrecy capacity of fading channels,” IEEE Trans. Inf. Theory, vol. 54, no. 10, pp. 4687–4698, 2008.
* [10] Y. Liang, H. V. Poor, and S. Shamai, “Secure communication over fading channels,” IEEE Trans. Inf. Theory, vol. 54, no. 6, pp. 2470–2492, 2008.
* [11] K.-L. Besser, P.-H. Lin, C. R. Janda, and E. A. Jorswieck, “Wiretap code design by neural network autoencoders,” IEEE Trans. Inf. Forensics Security, vol. 15, pp. 3374–3386, 2019.
* [12] R. Fritschek, R. F. Schaefer, and G. Wunder, “Deep learning for the Gaussian wiretap channel,” in Proc. IEEE International Conference on Communications (ICC), pp. 1–6, Shanghai, China, 2019.
* [13] T.-Y. Tung and D. Gunduz, “Deep joint source-channel and encryption coding: Secure semantic communications,” [Online]. Available: https://arxiv.org/abs/2208.09245, 2022.
* [14] Z. Cheng, H. Sun, M. Takeuchi, and J. Katto, “Deep residual learning for image compression,” in Proc. of the IEEE conference on computer vision and pattern recognition (CVPR), Long Beach, USA, 2019.
* [15] J. Ballé, V. Laparra, and E. P. Simoncelli, “Density modeling of images using a generalized normalization transformation,” [Online]. Available: https://arxiv.org/abs/1511.06281, 2015.
* [16] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in Proc. of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 1874–1883, Las Vegas, USA, 2016.
We consider the problem of determining the top-$k$ largest measurements from a dataset distributed among a network of $n$ agents with noisy communication links. We show that this scenario can be cast as a distributed convex optimization problem called sample quantile inference, which we solve using a two-time-scale stochastic approximation algorithm. Herein, we prove the algorithm's convergence in the almost sure sense to an optimal solution. Moreover, our algorithm handles noise and empirically converges to the correct answer within a small number of iterations.
Networks, quantile regression, stochastic approximation, convex optimization
§ INTRODUCTION
The selection of the $k$ largest quantities among a set of $n$ real numbers is of great importance in many applications of interest including machine learning [Rejwan and Mansour, 2020], data mining [Tang et al., 2017], and information retrieval [Shi et al., 2019]. These are known as top-$k$ strategies. Top-$k$ strategies also play a role in remote sensing [Zhang et al., 2022] and selection of the most informative neighbors for distributed optimization [Verma et al., 2021]. While this problem is trivial in the centralized case, where all the data is available at a single agent (server), it is a non-trivial problem in applications where the data is distributed over a network of clients. Here, we study the design of a distributed top-$k$ strategy over a network of many agents.
Consider the decentralized system in <ref>, where the agents can only communicate with their neighbors over noisy channels. Each agent has its own measurement, and our goal is to select the $k$ largest ones by learning an optimal threshold. Once this threshold is properly computed, each agent independently declares whether its measurement is above the threshold, and therefore whether it holds one of the top-$k$ measurements. One possible strategy is to relate the threshold to a sample quantile, which can be cast as a quantile inference problem. Remarkably, this problem admits a natural decomposition as a distributed nonsmooth convex optimization problem. Herein, we provide an algorithm based on a two-time-scale subgradient method and carry out its convergence analysis.
There exists a significant literature on the design of top-$k$ strategies in different settings. To the best of our knowledge, [Babcock and Olston, 2003] proposed the first distributed algorithm to find the $k$ largest values in a federated setting, using a technique called range caching. [Ustebay et al., 2011] provided an algorithm to compute the top-$k$ entries based on selective gossip. [Rajagopal et al., 2006] studied the distributed estimation of an arbitrary quantile in a communication-constrained federated scenario. Compared with the above papers, [Haeupler et al., 2018] proposed faster optimal gossip algorithms for quantile computation by designing two types of tournament mechanisms. However, their algorithms are very sensitive to communication noise and do not converge to the exact quantile over noisy channels.
Our work is closely related to that of [Lee et al., 2020], where a two-time-scale algorithm to estimate a sample quantile in a decentralized way is claimed to converge in the mean squared sense. However, the proof of convergence in [Lee et al., 2020] fails to hold due to a technicality: the sequence $\|\mathbf{E}[\bar{w}(t)] - \theta_p \mathbf{1}\|$ is not monotone. We avoid this difficulty by considering convergence in the almost sure sense, combining the classical result of [Robbins and Siegmund, 1971] with a modern result on nonsmooth optimization by [Yi and Li, 2021].
§ PROBLEM SETUP
System model – A dataset is distributed across multiple clients. Two clients can communicate if there is a link between them. The communication links are noisy.
We begin our analysis by relating the problem of computing the top-$k$ observations with quantile inference, which is a convex optimization problem. Consider a collection of $n$ agents, $[n]=\{1,\cdots,n\}$, interconnected by a time-invariant, undirected communication graph $\mathcal{G}=([n],\mathcal{E})$. Each agent holds a non-negative real number, which may represent a sensor measurement, a belief, or an opinion; throughout this paper, we will simply refer to these as data. Let $z_i \in \mathbb{R}$ be the data of the $i$-th agent. The goal of the team of agents is to determine, in a distributed fashion and efficiently, the $k$ agents holding the top-$k$ largest data. Moreover, the communication among agents occurs in the presence of additive noise.
At first, one may be inclined to consider the following strategy: each agent keeps a list of $k$ entries in its memory. At time $t$, each agent sends this list to its neighbors. At time $t+1$, every agent updates its list by selecting the top-$k$ received data and discarding the rest. Each agent sorts its list and repeats. While this simple scheme converges to the top-$k$ result in finite time, it has two main drawbacks. First, it requires noiseless communication channels carrying $k$ real numbers per channel use; even the slightest amount of noise will cause the algorithm to diverge. Second, it requires memory of size $k$. If $k\sim \mathcal{O}(n)$, the communication and storage requirements quickly make the problem of finding the top-$k$ observations across the network prohibitive.
On the other hand, this problem can be conveniently cast into the framework of distributed convex optimization, and admits an implementation where a single real number is exchanged and a single unit of memory is updated at each time step. Furthermore, the resulting algorithm is robust to the presence of noise. Consider the problem of inferring the sample quantile from the dataset containing all of the agents' individual data points $\mathcal{D}\Equaldef\{z_i\}_{i=1}^n$. Let $\widehat{F}(\xi ; \mathcal{D})$ denote the empirical cumulative distribution function of the data set $\mathcal{D}$, defined as:
\begin{equation}
\widehat{F}(\xi ; \mathcal{D})\Equaldef \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}( z_i \leq \xi).
\end{equation}
Let $p\in(0,1)$. The (sample) $p$-quantile is defined as
\begin{equation}\label{eq:quantile} \theta_{p}\Equaldef \inf \Big\{\xi \ \Big| \ \widehat{F}(\xi ; \mathcal{D}) \geq p \Big\}.
\end{equation}
Empirical CDF and aggregate score function for $k=3$.
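The definition of $\theta_p$ can be checked on a toy dataset (the data values below are illustrative):

```python
import numpy as np

def empirical_cdf(xi, data):
    """F_hat(xi; D) = (1/n) * #{ z_i <= xi }."""
    return float(np.mean(data <= xi))

def sample_quantile(p, data):
    """theta_p = inf{ xi : F_hat(xi; D) >= p }."""
    for z in np.sort(data):
        if empirical_cdf(z, data) >= p:
            return float(z)

data = np.array([3.0, 1.0, 7.0, 5.0, 9.0])   # n = 5 measurements, one per agent
theta = sample_quantile(0.5, data)            # the 0.5-quantile of the dataset
```

Since the empirical CDF is a right-continuous step function jumping only at the data points, the infimum is always attained at one of the $z_i$.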
A classic result in quantile regression relates <ref> to the solution of the following optimization problem <cit.>:
\begin{equation} \label{eq:quantile_cvx}
\theta_p = \arg \min_{\xi\in \mathbb{R}} \sum_{i=1}^n \rho_p \big(z_i-\xi \big),
\ \ \text{where} \ \
\rho_p(x) \Equaldef \begin{cases}
(p-1)x & \text{if} \ \ x < 0 \\
p x & \text{if} \ \ x \geq 0 . \\
\end{cases}
\end{equation}
Score function
Let the local (private) functions, called the score functions (see <ref>), be defined as
$f_i(\xi)\Equaldef \rho_p(z_i-\xi)$,
and let the objective be the aggregate score function
$f(\xi)\Equaldef\sum_{i=1}^{n}f_i(\xi)$.
Then the sample quantile is the solution of the following distributed optimization problem:
\begin{equation}\label{eq:main_problem}
\theta_p = \arg \min_{\xi\in \mathbb{R}} \ \ f(\xi)=\sum_{i=1}^{n}f_i(\xi).
\end{equation}
A few noteworthy aspects of <ref> are: (1) This is a convex optimization problem;
(2) The objective function is nonsmooth; (3) The local functions have bounded subgradients:
\begin{equation}
|g_i(\xi)| \leq \max\{p,1-p\} \leq 1, \ \ g_i \in \partial f_i;
\end{equation}
and (4) the $p$-quantile $\theta_p$ belongs to the dataset $\mathcal{D}$, for any parameter $p \in \mathcal{P}$, where
\begin{equation}\label{eq:set_of_p's}
\mathcal{P} \Equaldef (0,1)\backslash\left\{ \frac{1}{n},\cdots,\frac{n-1}{n}\right\}.
\end{equation}
This framework can be used to compute many statistics of interest. For example, to compute the maximum $(k=1)$, let $p\in (1-1/n,1)$. To compute the minimum ($k=n$), let $p\in (0,1/n)$. Provided the number of samples in $\mathcal{D}$ is odd, to compute the median, set $p\in \big((n-1)/2n,(n+1)/2n\big)$. In general, if we would like to find the $k$-th largest element of $\mathcal{D}$, then
\begin{equation}
p \in \left(\frac{n-k}{n},\frac{n-k+1}{n} \right).
\end{equation}
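This mapping from $k$ to $p$ can be sketched as follows (illustrative helper names; the dataset is the one used in the numerical section below):

```python
import numpy as np

def p_for_kth_largest(n, k):
    # Midpoint of the interval ((n-k)/n, (n-k+1)/n) from the text.
    return (n - k + 0.5) / n

def kth_largest_via_quantile(z, k):
    # Recover the k-th largest sample as the p-quantile, with p chosen above;
    # the quantile is computed by minimizing the aggregate pinball loss.
    z = np.asarray(z, dtype=float)
    p = p_for_kth_largest(len(z), k)
    rho = lambda x: np.where(x < 0, (p - 1) * x, p * x)
    return z[int(np.argmin([rho(z - xi).sum() for xi in z]))]

z = [45, 8, 22, 91, 15, 82, 53, 7, 44, 99]
print([float(kth_largest_via_quantile(z, k)) for k in (1, 3, 10)])  # [99.0, 82.0, 7.0]
```

With $n=10$, $k=1$ gives $p=0.95$ (the maximum) and $k=10$ gives $p=0.05$ (the minimum), matching the special cases above.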
§ DISTRIBUTED ALGORITHM AND MAIN RESULT
Herein, we propose and analyze the convergence of a two-time-scale distributed sample quantile estimation algorithm in the presence of noise in the communication links. In particular, given the number of sensors $n$ and any probability $p \in\mathcal{P}$, we prove that the algorithm converges to the sample quantile $\theta_p$. The non-smoothness of the empirical cumulative distribution function leads to oscillation around the optimal solution during convergence, a difficulty that must be addressed. A novel analysis technique is introduced to tackle this problem.
Consider a connected undirected graph $\mathcal{G}=([n],\mathcal{E})$ with $n$ nodes, each node observing a real number $z_i$, $i\in[n]$. Here, $[n]=\{1, \ldots, n\}$ denotes the set of nodes and $\mathcal{E} \subset [n] \times [n]$ denotes the set of edges between nodes.
Let $\mathcal{N}_i$ denote the set of neighbors of the $i$-th node, and $d_{\max}=\max_{i} |\mathcal{N}_i|$.
Let $L$ denote the graph Laplacian matrix and $\lambda_1,\lambda_2,\cdots,\lambda_n$ denote the eigenvalues of $L$, which satisfy $0=\lambda_1<\lambda_2\le\cdots\le\lambda_n$. Let $w_i(t)$ denote the local estimate of $\theta_p$ computed by node $i$ at the $t$-th iteration, $m_i(t)$ denote the message sent by node $i$ to its neighbors $\mathcal{N}_i$, and $v_{ij}(t)$ denote the communication noise on the link between nodes $i$ and $j \in \mathcal{N}_i$.
We assume that the noise sequences of random variables $\{v_{ij}(t)\}_{t=0}^{\infty}$ are independent and identically distributed (i.i.d.) satisfying the following two properties:
\begin{equation} \label{assump: E_v}
\mathbf{E}\big[v_{ij}(t)\big] = 0 ~~ \text{and}~~
\mathbf{E}\big[v^2_{ij}(t)\big] < \infty, \ \ (i,j)\in \mathcal{E}. %\label{assump: E_v_variance}
\end{equation}
Let $\alpha(t)$ and $\beta(t)$ be two deterministic step-size sequences. We initialize $w_i(0)=z_i, \ \ i\in [n].$ On the $t$-th round of local communication we perform the following steps:
\begin{align}
m_i(t)&=w_i(t)-\alpha(t) \big(\mathbf{1}\big(w_i(t)\ge z_i\big)-p\big),\\
w_i(t+1) &= m_i(t) - \beta(t)\sum_{j\in \mathcal{N}_{i}}\big(m_i(t)-y_j(t)\big),
\end{align}
where $y_j(t) = m_j(t)+v_{ji}(t)$.
For a given set of samples $\mathcal{D}=\{z_i\}_{i=1}^n$ distributed over $n$ agents connected by a static undirected network $\mathcal{G}$, and
any $p\in\mathcal{P}$, there exist step-size sequences $\alpha(t)$ and $\beta(t)$ satisfying
\begin{align} \label{ineq: finite_sum_aa}
\sum_{t=0}^{\infty} \alpha(t) &=\infty, ~~ \sum_{t=0}^{\infty} \alpha^2(t)<\infty,\\
\label{ineq: finite_sum_bb}
\sum_{t=0}^{\infty} \beta(t)&=\infty, ~~ \sum_{t=0}^{\infty} \beta^2(t)<\infty,\\
\label{ineq: finite_sum_aab}
& \sum_{t=0}^{\infty} \frac{\alpha^2(t)}{\beta(t)}<\infty,
\end{align}
such that the sequence $w_i(t)$ computed according to <ref> satisfies
$w_i(t) \overset{\textup{a.s.\ }}{\longrightarrow}\theta_p, \ \ i \in [n]$.
Algorithm: Distributed two-time-scale sample quantile estimation under communication noise.
Input: measurements $\{z_i\}_{i=1}^n$, initialization $w_i(0)=z_i$, $i\in [n]$, two step-size sequences $\{\alpha(t)\},\{\beta(t)\}$, quantile parameter $p\in(0,1)$.
For $t=0,1,2,\dots$, each node $i\in[n]$ performs:
1. Local subgradient step: $m_i(t)=w_i(t)-\alpha(t) \big(\mathbf{1}\left(w_i(t)\ge z_i\right)-p\big)$.
2. Send message $m_i(t)$ to neighbors $j\in \mathcal{N}_i$.
3. Receive messages $y_j(t)=m_j(t)+v_{ji}(t)$, $j\in \mathcal{N}_i$.
4. Estimate update step: $w_i(t+1) = m_i(t) - \beta(t)\sum_{j\in \mathcal{N}_{i}}\big(m_i(t)-y_j(t)\big)$.
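A compact centralized simulation of these updates can be sketched as follows. The update equations and the step-size constants ($\alpha_0=80$, $\tau_1=1$, $\tau_2=0.505$, $\beta_0=2/(\lambda_2+\lambda_n)$) come from the paper; the complete communication graph is an illustrative assumption (the paper's experiments use a different 10-node graph):

```python
import numpy as np

def simulate(z, p, A, T, a0=80.0, tau1=1.0, tau2=0.505, sigma=0.0, seed=0):
    """Run the two-time-scale updates; A is the adjacency matrix of a
    connected undirected graph, sigma the std of the link noise."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z, dtype=float)
    n = len(z)
    L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
    lam = np.sort(np.linalg.eigvalsh(L))
    b0 = 2.0 / (lam[1] + lam[-1])                  # beta_0 = 2/(lambda_2 + lambda_n)
    w = z.copy()                                   # w_i(0) = z_i
    for t in range(T):
        alpha = a0 / (t + 1) ** tau1
        beta = b0 / (t + 1) ** tau2
        m = w - alpha * ((w >= z).astype(float) - p)    # local subgradient step
        noise = sigma * rng.standard_normal((n, n))     # noise[i, j] = v_{ji}(t)
        y = m[None, :] + noise                          # y[i, j]: message from j at node i
        w = m - beta * (A.sum(axis=1) * m - (A * y).sum(axis=1))
    return w

z = [45, 8, 22, 91, 15, 82, 53, 7, 44, 99]
A = np.ones((10, 10)) - np.eye(10)                 # complete graph (illustrative choice)
w = simulate(z, p=0.75, A=A, T=50000)
print(w.round(2))                                  # entries settle near 82, the 0.75-quantile
```

Setting `sigma > 0` adds the i.i.d. link noise $v_{ji}(t)$ of the model; with the diminishing step sizes the iterates still converge, only more slowly.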
§ ALMOST SURE CONVERGENCE ANALYSIS
Our proof relies on a two-time-scale stochastic approximation algorithm. Define the deterministic step-size sequences
\begin{equation}
\alpha(t)\Equaldef \frac{\alpha_{0}}{(t+1)^{\tau_1}} \ \ \text{and} \ \ \beta(t)\Equaldef \frac{\beta_{0}}{(t+1)^{\tau_2}},
\ \ \text{where} \ \
\beta_{0} \le \frac{2}{\lambda_2+\lambda_n},\ \ \alpha_0 \geq 1,
\end{equation}
and $\tau_1,\tau_2$ satisfy $
0.5< \tau_2 < \tau_1 \le 1$ and $2\tau_1-\tau_2>1$.
<ref> can be expressed in vector form as
\begin{align} \label{alg: quantile_vector}
w(t+1) &\Equaldef \big(I - \beta(t)L\big)\big(w(t)- \alpha(t)g(t)\big) +\beta(t)v(t).
\end{align}
where
\begin{multline}
w(t)\Equaldef \big[w_1(t),\cdots,w_n(t)\big]^T, \ \ v(t)\Equaldef\Big[\sum_{j\in \mathcal{N}_{1}}v_{1j}(t),\cdots,\sum_{j\in \mathcal{N}_{n}}v_{nj}(t)\Big]^T, \\ g_i(t) \Equaldef \mathbf{1} \big(w_i(t) \ge z_i\big) - p, \ \ \text{and} \ \ g(t)\Equaldef\big[g_1(t),\cdots,g_n(t)\big]^T.
\end{multline}
Finally, define the following averages
\begin{align*}
\bar{w}(t)\Equaldef\frac{1}{n}\sum_{i=1}^n w_i(t), \ \
\bar{v}(t)\Equaldef\frac{1}{n}\sum_{i=1}^n v_i(t), \ \ \text{and} \ \
\bar{g}(t)\Equaldef\frac{1}{n}\sum_{i=1}^n g_i(t).
\end{align*}
Next, we prove the almost sure convergence of $w_i(t)$ to the quantile $\theta_p$ for the dynamical system in <ref>, where $p\in \mathcal{P}.$
Start by rewriting $w_i(t) \overset{\textup{a.s.\ }}{\longrightarrow}\theta_p, \ \ i \in [n]$ as
\begin{equation} \label{eq: QuantileCovergence_2}
\lim_{t \to +\infty} \|w(t) - \theta_p 1\|=0 ~~ \textup{a.s.}
\end{equation}
From the triangle inequality, we have
\begin{equation}
\|w(t) - {\theta_p}1 \|
\le \|w(t) - \bar{w}(t)1 \| +\| \bar{w}(t)1-{\theta_p}1\|.
\end{equation}
We will show Eq. (<ref>) holds by first proving that
\begin{equation}
\label{eq: converge_to mean}
\lim_{t \to +\infty} \|w(t) - \bar{w}(t)1 \|=0 ~~ \textup{a.s.}, \ \ \text{and} \ \
\lim_{t \to +\infty} \| \bar{w}(t)-{\theta_p}\| = 0 ~~ \textup{a.s.} %\label{eq:mean_to_optimal}
\end{equation}
Part 1:
Recall that the Laplacian matrix $L$ satisfies $L 1 = 0$. Multiplying both sides of <ref> by $G\Equaldef \frac{1}{n} 1 1^T$ yields
\begin{equation} \label{eq: secondterm}
\bar{w} (t+1) 1 = G \big(w(t)- \alpha(t)g(t)\big)+ \beta(t) Gv(t).
\end{equation}
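This averaging step can be verified numerically: multiplying the recursion by $G$ annihilates the Laplacian term because $GL=0$. A minimal sketch with arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# A random undirected graph; only L @ 1 = 0 (hence G @ L = 0) is needed here.
A = np.triu(rng.integers(0, 2, (n, n)), 1)
A = (A + A.T).astype(float)
L = np.diag(A.sum(axis=1)) - A
G = np.ones((n, n)) / n

w = rng.normal(size=n); g = rng.normal(size=n); v = rng.normal(size=n)
alpha, beta = 0.3, 0.1

w_next = (np.eye(n) - beta * L) @ (w - alpha * g) + beta * v
# Multiplying by G kills the Laplacian term, leaving the average recursion
# w_bar(t+1) = w_bar(t) - alpha * g_bar(t) + beta * v_bar(t).
assert np.allclose(G @ w_next,
                   (w.mean() - alpha * g.mean() + beta * v.mean()) * np.ones(n))
```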
Let $B(s)\Equaldef\big(I - \beta(s)L-G\big)$. Since $B(s) 1=0$, by combining Eqs. (<ref>) and (<ref>) we get
\begin{equation} \label{eq:thetap_unbiasedness}
w(t+1)-\bar{w} (t+1) 1
= B(t)\big(w(t)-\bar{w}(t)1- \alpha(t)g(t)\big)+\beta(t)(I-G)v(t).
\end{equation}
Therefore, after taking the norm and squaring both sides of Eq. (<ref>), we obtain
\begin{multline} \label{eq:thetap_unbiasedness_2}
\norm{w(t+1)-\bar{w} (t+1) 1}^2
= \norm{B(t)\big(w(t)-\bar{w}(t)1- \alpha(t)g(t)\big)}^2
+\norm{\beta(t)(I-G)v(t)}^2 \\
+2\ip{B(t)\big(w(t)-\bar{w}(t)1- \alpha(t)g(t)\big)}{\beta(t)(I-G)v(t)}.
\end{multline}
Let $\mathcal{F}_t$ denote the $\sigma$-algebra generated by $\{v(\ell)\}_{\ell=1}^{t-1}$.
Taking the conditional expectation of <ref> with respect to $\mathcal{F}_t$, we obtain
\begin{equation} \label{eq:E_thetap_unbiasedness_1}
\E[\norm{w(t+1)-\bar{w} (t+1) 1}^2 \mid \mathcal{F}_t]
=\norm{B(t)\big(w(t)-\bar{w}(t)1- \alpha(t)g(t)\big)}^2 + \E\[\norm{\beta(t)(I-G)v(t)}^2\],
\end{equation}
where we used the fact that $\E\big[v(t)\mid\mathcal{F}_t\big]=0$. For any $\eta>0$, using Young's inequality[$(a+b)^2\le (1+\eta)a^2+(1+1/\eta)b^2, ~\eta>0$.], we have
\begin{multline} \label{eq:E_thetap_unbiasedness_2}
\E\big[\norm{w(t+1)-\bar{w} (t+1) 1}^2\mid \mathcal{F}_t\big]
\leq (1+\eta)\norm{B(t)\big(w(t)-\bar{w}(t)1\big)}^2 \\
+ \(1+1/ \eta\)\norm{\alpha(t)B(t)g(t)}^2+\E\[\norm{\beta(t)(I-G)v(t)}^2\].
\end{multline}
Using the inequality $\|Ab\| \le \|A\|\|b\|$, we obtain
\begin{multline} \label{eq:E_thetap_unbiasedness_3}
\E\big[\norm{w(t+1)-\bar{w} (t+1) 1}^2\mid \mathcal{F}_t\big]
\le(1+\eta)\norm{B(t)}^2\norm{w(t)-\bar{w}(t)1}^2 \\
+\(1+1/ \eta\)\alpha^2(t)\norm{B(t)}^2\norm{g(t)}^2 +\beta^2(t)\norm{I-G}^2\E\[\norm{v(t)}^2\].
\end{multline}
We can also show that
\begin{equation}\label{eq:norm}
\norm{B(s)}=\max\big\{|1-\lambda_2\beta(s)|,|\lambda_n\beta(s)-1|\big\} = 1-\lambda_2\beta(s).
\end{equation}
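This spectral identity is easy to confirm numerically; the sketch below uses a path graph on five nodes (an arbitrary choice, any connected graph works) and checks both equalities for step sizes satisfying $\beta(s)\le 2/(\lambda_2+\lambda_n)$:

```python
import numpy as np

n = 5
A = np.zeros((n, n))
for i in range(n - 1):                        # path graph: 0-1-2-3-4
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
lam = np.sort(np.linalg.eigvalsh(L))          # 0 = lam[0] < lam[1] <= ... <= lam[-1]
G = np.ones((n, n)) / n

for beta in (0.1, 2 / (lam[1] + lam[-1])):
    B = np.eye(n) - beta * L - G
    spec = np.linalg.norm(B, 2)               # spectral norm of the symmetric matrix B
    pred = max(abs(1 - lam[1] * beta), abs(lam[-1] * beta - 1))
    assert np.isclose(spec, pred)
    # With beta <= 2/(lam_2 + lam_n), the maximum is attained at 1 - lam_2 * beta:
    assert np.isclose(pred, 1 - lam[1] * beta)
```

The identity holds because $B(s)$ annihilates $1$ and acts as $1-\beta(s)\lambda_i$ on the remaining Laplacian eigenvectors.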
Incorporating <ref> and $ \norm{I-G}=1$ into Eq. (<ref>), the following inequality holds for any $\eta>0$:
\begin{multline} \label{eq:E_thetap_unbiasedness_4}
\E\big[\norm{w(t+1)-\bar{w} (t+1) 1}^2 \mid \mathcal{F}_t\big]
\le(1+\eta)\big(1-\lambda_2 \beta(t)\big)^2\norm{w(t)-\bar{w}(t)1}^2 \\
+\(1+1/\eta\)\alpha^2(t)\big(1-\lambda_2 \beta(t)\big)^2\norm{g(t)}^2+\beta^2(t)\E\[\norm{v(t)}^2\].
\end{multline}
Choosing $\eta=\lambda_2 \beta(t)$, we get
\begin{multline} \label{eq:thetap_unbiasedness_6}
\E\big[\norm{w(t+1)-\bar{w} (t+1) 1}^2 \mid \mathcal{F}_t\big]
\le
\norm{w(t)-\bar{w}(t)1}^2-\lambda_2 \beta(t)\norm{w(t)-\bar{w}(t)1}^2 \\+\frac{\alpha^2(t)}{\lambda_2 \beta(t)}\norm{g(t)}^2+\beta^2(t)\E\[\norm{v(t)}^2\].
\end{multline}
The subgradients of the local functions for the distributed quantile computation problem satisfy $|g_i(t)|\le \max\{p,1-p\}< 1$. Thus, we have $\norm{g(t)}\le \sqrt{n}$.
Together with Eqs. (<ref>), (<ref>) and (<ref>), we obtain
\begin{align}
\E\[\sum_{t=1}^{\infty} \(\frac{\alpha^2(t)}{\lambda_2 \beta(t)}\norm{g(t)}^2+\beta^2(t)\norm{v(t)}^2\)\]< \infty.
\end{align}
Therefore, from Lemma <ref> (Robbins-Siegmund Theorem [Robbins and Siegmund, 1971]),
we conclude that $\norm{w(t)-\bar{w} (t) 1}$ converges almost surely to zero, and $\sum_{t=1}^{\infty}\beta(t)\norm{w(t)-\bar{w}(t)1}^2<\infty.$
Since $\tau_2\leq 1$, the sequence $\beta(t)$ is not summable. Therefore,
\begin{align}
\lim_{t\to +\infty} \norm{w(t)-\bar{w}(t)1}=0 ~~ \textup{a.s.}
\end{align}
Part 2:
Multiplying both sides of Eq. (<ref>) by $\frac{1}{n} 1^T$ and subtracting ${\theta_p}$, we have
\begin{align} \label{eq: w_avg_2}
\bar{w} (t+1)-{\theta_p} = \bar{w}(t) -{\theta_p} - \alpha(t) \bar{g}(t) + \beta(t) \bar{v}(t).
\end{align}
Squaring both sides of <ref> yields
\begin{equation} \label{eq: w_avg_3}
\big|\bar{w} (t+1)-{\theta_p}\big|^2 = \big|\bar{w}(t) -{\theta_p} - \alpha(t) \bar{g}(t)\big|^2 +\beta^2(t) \big|\bar{v}(t)\big|^2
+ 2\beta(t) \big[\bar{w}(t) -{\theta_p} - \alpha(t) \bar{g}(t)\big]\bar{v}(t).
\end{equation}
Taking conditional expectation with respect to $\mathcal{F}_t$ and using the fact that $\E\big[\bar{v}(t) \mid \mathcal{F}_t\big]=0$, yields
\begin{equation} \label{eq: w_expecatation_1}
\E\big[\big|\bar{w} (t+1)-{\theta_p}\big|^2 \mid \mathcal{F}_t\big] = \big|\bar{w}(t) -{\theta_p} - \alpha(t) \bar{g}(t)\big|^2
+\beta^2(t) \E\big[\bar{v}^2(t)\big].
\end{equation}
Defining $\varphi(t)\Equaldef \big|\bar{w}(t) -{\theta_p} - \alpha(t) \bar{g}(t)\big|^2$, we have
\begin{equation} \label{eq: mt}
\varphi(t)=\big|\bar{w}(t) -{\theta_p}\big|^2 +\alpha^2(t)\big|\bar{g}(t)\big|^2
-2\alpha(t) \big(\bar{w}(t) -{\theta_p}\big)\bar{g}(t).
\end{equation}
For each $g_i(t)$, the following chain of inequalities holds:
\begin{align*}
\big(\bar{w}(t) -\theta_p\big)g_i(t)
&= \big(w_i(t)-\theta_p\big)g_i(t)+\big(\bar{w}(t) -w_i(t)\big)g_i(t)\\
&\overset{(a)}{\geq} f_i\big(w_i(t)\big) -f_i(\theta_p) +\big(\bar{w}(t) -w_i(t)\big)g_i(t)\\
&\overset{(b)}{=} f_i\big(w_i(t)\big)-f_i\big(\bar{w}(t)\big)+f_i\big(\bar{w}(t)\big)-f_i(\theta_p) +\big(\bar{w}(t) -w_i(t)\big)g_i(t)\\
&\overset{(c)}{\geq} -2\big|\bar{w}(t) -w_i(t)\big|+f_i\big(\bar{w}(t)\big)-f_i(\theta_p),
\end{align*}
where $(a)$ follows from the convexity of $f_i(x)$, $(b)$ follows from adding and subtracting $f_i\big(\bar{w}(t)\big)$, and $(c)$ follows from the $1$-Lipschitz condition, i.e., $|g_i(t)|<1$ and $\big|f_i(x)-f_i(y)\big| \le |x-y|$. Therefore,
\begin{equation}
\big(\bar{w}(t) -{\theta_p}\big)\bar{g}(t)
= \frac{1}{n} \sum_{i=1}^n \big(\bar{w}(t) -{\theta_p}\big)g_i(t)
\ge -\frac{2}{n} \sum_{i=1}^n \big|\bar{w}(t) -w_i(t)\big| + f\big(\bar{w}(t)\big)-f(\theta_p),
\end{equation}
and consequently
\begin{equation}
\varphi(t) \le \big|\bar{w}(t) -{\theta_p}\big|^2- 2\alpha(t) \Big[f\big(\bar{w}(t)\big)-f(\theta_p)\Big] +\frac{4\alpha(t) }{n} \sum_{i=1}^n \big|\bar{w}(t) -w_i(t)\big| +\alpha^2(t)\big|\bar{g}(t)\big|^2. \label{eq: mt2}
\end{equation}
Incorporating Eq. (<ref>) into Eq. (<ref>), we get
\begin{multline}
\E\Big[\big|\bar{w} (t+1)-{\theta_p}\big|^2 \ \big| \ \mathcal{F}_t\Big]
\le \big|\bar{w}(t) -{\theta_p}\big|^2- 2\alpha(t) \Big(f\big(\bar{w}(t)\big)-f(\theta_p)\Big) \\
+\frac{4\alpha(t) }{n} \sum_{i=1}^n \big|\bar{w}(t) -w_i(t)\big| +\alpha^2(t)\big|\bar{g}(t)\big|^2 +\beta^2(t) \E\big[\bar{v}^2(t)\big].
\label{eq: w_Expectation2}
\end{multline}
Since $\theta_{p}$ is the minimizer of $f(\xi)$, we have $f\big(\bar{w}(t)\big)-f(\theta_p)\ge 0$. Moreover, since $|\bar{g}(t)|\le 1$ and the noise variance is bounded, the step-size conditions imply
\begin{equation}
\E \bigg[\sum_{t=1}^\infty\alpha^2(t)\big|\bar{g}(t)\big|^2\bigg]< \infty \ \ \ \textup{and} \ \ \
\E \bigg[\sum_{t=1}^\infty\beta^2(t) \big|\bar{v}(t)\big|^2\bigg]< \infty.
\end{equation}
To apply Lemma <ref>, we must show that
\begin{equation}
\E\bigg[\sum_{t=0}^\infty\alpha(t+1) \big|\bar{w}(t+1) -w_i(t+1)\big|\bigg]< \infty.
\end{equation}
Since $\big|\bar{w}(s) -w_i(s)\big| \le \big\|w(s)-\bar{w}(s) 1\big\|$, using <ref> the following inequalities hold
\begin{multline}
\sum_{t=0}^\infty\alpha(t+1) \big|\bar{w}(t+1) -w_i(t+1)\big|
\le \sum_{t=0}^\infty\alpha(t+1) \big\|w(t+1)-\bar{w}(t+1) 1\big\|\\
\le \sum_{t=0}^\infty\alpha(t+1)\prod_{s=0}^{t}\big\|B(s)\big\| \big\|w(0)\big\| +\sum_{t=0}^\infty\alpha(t+1)\sum_{l=0}^{t}\alpha(l)\prod_{s=l}^{t}\big\|B(s)\big\| \big\|g(l)\big\| \\
+\sum_{t=0}^\infty\alpha(t+1)\beta(t)\|I-G\|\big\|v(t)\big\| +\sum_{t=0}^\infty\alpha(t+1) \sum_{l=0}^{t} \beta(l) \prod_{s=l+1}^{t} \big\|B(s)\big\| \|I-G\|\big\|v(l)\big\|.
\end{multline}
By using Lemma <ref> and $\norm{B(s)}= 1-\beta(s)\lambda_2<\exp\big(-\beta(s)\lambda_2\big)$,
we get
\begin{align}
\sum_{t=0}^\infty\alpha(t+1)\prod_{s=0}^{t}\big\|B(s)\big\|&<\infty, ~~
\sum_{t=0}^\infty\alpha(t+1)\sum_{l=0}^{t}\alpha(l)\prod_{s=l}^{t}\big\|B(s)\big\| <\infty,\\
\sum_{t=0}^\infty\alpha(t+1) &\sum_{l=0}^{t} \beta(l) \prod_{s=l+1}^{t} \big\|B(s)\big\| <\infty.
\end{align}
From <ref>, using Jensen's inequality we get $\E\big[\|v(l)\| \big] \le \sqrt{\E\big[\|v(l)\|^2 \big]}<\infty$. Combining these bounds with the conditions on the step-size sequences $\alpha(t)$ and $\beta(t)$, we conclude that
\begin{equation}
\E \bigg[\sum_{t=1}^\infty\alpha(t) \big|\bar{w}(t) -w_i(t)\big|\bigg]< \infty.
\end{equation}
Thus, Lemma <ref> implies that $\big|\bar{w}(t) -{\theta_p}\big|$ converges almost surely and
\begin{equation}
\sum_{t=1}^\infty\alpha(t) \Big(f\big(\bar{w}(t)\big)-f(\theta_p)\Big)< \infty.
\end{equation}
Since $\alpha(t)$ is not summable, we have
$ \liminf_{t \to \infty} f\big(\bar{w}(t)\big)=f(\theta_p)$ almost surely.
Thus, there exists a subsequence $\big\{\bar{w}(t_j)\big\}$ such that
\begin{equation}
\lim_{j \to \infty} f\big(\bar{w}(t_j)\big)=\liminf_{t \to \infty} f\big(\bar{w}(t)\big)=f(\theta_p)~~ \text{a.s.}
\end{equation}
From <cit.>, since $np$ is not an integer, $\theta_{p}$ is the unique minimum of <ref>. Furthermore, since $f(\xi)$ is continuous, we have $\lim_{j \to \infty} \big|\bar{w}(t_j)-\theta_p\big|=0$ almost surely.
Together with the fact that $|\bar{w}(t) -{\theta_p}|$ converges almost surely, we have
\begin{equation} \label{neq: convergence}
\lim_{t \to \infty} \big|\bar{w}(t)-\theta_p\big|=0~~\text{a.s.}
\end{equation}
[Figure: aggregate score functions (top left); noiseless distributed sample quantile estimation for $k=1,3,5,8,10$ (top right); convergence of distributed sample quantile inference under communication noise with $\sigma^2=1,10,20$ and $k=1$ (bottom left); number of agents with measurements greater than their corresponding thresholds as a function of iterations, for $k=1,3,5$ and $\sigma^2=10$ (bottom right).]
§ NUMERICAL RESULTS
In this section, we provide some numerical results that display the convergence of the distributed sample quantile inference algorithm and its application to top-$k$ data selection. Consider a system with $n=10$ agents interconnected by the graph in Fig. <ref>. The data samples $\mathcal{D}=$ $(45$, $8$, $22$, $91$, $15$, $82$, $53$, $7$, $44$, $99)$ were drawn from an i.i.d. uniform distribution on $\{1,\cdots,100\}$, whose empirical CDF is shown in <ref> (left). The communication noises are zero mean i.i.d. Gaussian random variables with variance $\sigma^2$. The parameters of two time-scale sequences are set to $\alpha_0=80$, $\beta_0=2/(\lambda_2+\lambda_n)$, $\tau_1=1$ and $\tau_2=0.505$.
The panel shown in <ref> displays several aspects that support our theoretical analysis. First, <ref> (top left) shows the aggregate score functions for different values of $k$, and we can see how the curvature varies with the value of $k$ leading to some values of $\theta_p$ being faster to compute than others.
The convergence of distributed sample quantile inference with noiseless links for different $k$ is shown in Fig. <ref> (top right), which shows that the asymptotic convergence rate (slope of the error curves) are approximately constant. However, the offset among curves depends on $k$, the dataset, and the graph connectivity and is difficult to characterize.
The convergence of distributed sample quantile estimation and top-$k$ selection under noisy links is shown in Fig. <ref> (bottom left). The variance of the noise is set to $\sigma^2=1, 10, 20$, and $k=1$. We have generated 100 sample paths for each value of $\sigma^2$, and the figure shows the asymptotic convergence of the average of the sample paths. Finally, <ref> (bottom right) shows the convergence of the number of agents with data samples greater than their corresponding thresholds, for $k=1,3,5$ with $\sigma^2=10$. The threshold at each agent is set to $\tau_i(t)=w_i(t)-0.5$. This correction is needed to avoid oscillation at the agent holding the $k$-th largest data sample.
When the number of iterations is larger than $55$, $61$ and $250$, the system can always find the top-$5$, top-$3$, and top-$1$ data points, respectively. Therefore, even though the convergence time in the simulations is on the order of $10^4$ or $10^5$, convergence to the right decision of whether an agent is holding one of the top-$k$ data points happens within a much smaller number of iterations, which is a desirable feature[All the code used to obtain the numerical results contained herein can be found at <https://github.com/mullervasconcelos/L4DC23.git>].
§ CONCLUSION
In this work, we have studied networked top-$k$ data selection by using distributed sample quantile inference over noisy channels. We have provided the analysis for the almost sure convergence of a two-time-scale distributed subgradient method. Numerical results have demonstrated the convergence of the distributed algorithm and the corresponding data selection.
There are two interesting directions for future work. One is to analyze the convergence rate for the distributed sample quantile estimation. The other is to provide new algorithms to speed up the convergence.
§ AUXILIARY LEMMAS
Let $\big\{v(t), \mathcal{F}_{t}\big\},\big\{d(t), \mathcal{F}_{t}\big\},$ and $\big\{a(t), \mathcal{F}_{t}\big\}$ be three nonnegative adapted sequences. If
\begin{equation}
\mathbf{E}\big[v(t+1) \mid \mathcal{F}_{t}\big] \leq v(t)+a(t)-d(t) \ \ \ \ \text{and} \ \ \ \ \mathbf{E}\bigg[\sum_{t=1}^{\infty} a(t)\bigg]<\infty
\end{equation}
then $\sum_{t=1}^{\infty} d(t)<\infty$ almost surely, and $v(t)$ converges
almost surely.
Suppose that $\alpha(t)$ and $\beta(t)$ satisfy <ref>. Then for any $c>0$, we have
\begin{equation}
\sum_{t=0}^\infty\alpha(t+1) \exp\(-c\sum_{s=0}^{t}\beta(s)\)< \infty, \ \
\text{and} \ \
\sum_{t=0}^\infty\alpha(t+1) \sum_{l=0}^{t}\alpha(l) \exp\(-c\sum_{s=l}^{t}\beta(s)\)< \infty.
\end{equation}
[Babcock and Olston, 2003]
Brian Babcock and Chris Olston.
Distributed top-k monitoring.
In Proceedings of the 2003 ACM SIGMOD international conference
on Management of data, pages 28–39, 2003.
[Haeupler et al., 2018]
Bernhard Haeupler, Jeet Mohapatra, and Hsin-Hao Su.
Optimal gossip algorithms for exact and approximate quantile computation.
In Proceedings of the 2018 ACM Symposium on Principles of
Distributed Computing, pages 179–188, 2018.
[Koenker, 2005]
Roger Koenker.
Quantile Regression.
Econometric Society Monographs. Cambridge University Press, 2005.
[Lee et al., 2020]
Jongmin Lee, Cihan Tepedelenlioglu, and Andreas Spanias.
Distributed quantiles estimation of sensor network measurements.
International Journal of Smart Security Technologies,
20 (7), 2020.
[Rajagopal et al., 2006]
Ram Rajagopal, Martin J Wainwright, and Pravin Varaiya.
Universal quantile estimation with feedback in the
communication-constrained setting.
In IEEE International Symposium on Information Theory, pages
836–840, 2006.
[Rejwan and Mansour, 2020]
Idan Rejwan and Yishay Mansour.
Top-$ k $ combinatorial bandits with full-bandit feedback.
In Algorithmic Learning Theory, pages 752–776. PMLR, 2020.
[Robbins and Siegmund, 1971]
Herbert Robbins and David Siegmund.
A convergence theorem for non negative almost supermartingales and
some applications.
In Optimizing methods in statistics, pages 233–257. Elsevier, 1971.
[Shi et al., 2019]
Shaohuai Shi, Qiang Wang, Kaiyong Zhao, Zhenheng Tang, Yuxin Wang, Xiang Huang,
and Xiaowen Chu.
A distributed synchronous SGD algorithm with global top-$k$
sparsification for low bandwidth networks.
In IEEE International Conference on Distributed Computing
Systems, pages 2238–2247, 2019.
[Tang et al., 2017]
Bo Tang, Shi Han, Man Lung Yiu, Rui Ding, and Dongmei Zhang.
Extracting top-$k$ insights from multi-dimensional data.
In Proceedings of the ACM International Conference on
Management of Data, pages 1509–1524, 2017.
[Ustebay et al., 2011]
Deniz Ustebay, Rui Castro, and Michael Rabbat.
Efficient decentralized approximation via selective gossip.
IEEE Journal of Selected Topics in Signal Processing,
5(4): 805–816, 2011.
[Verma et al., 2021]
Ashwin Verma, Marcos M. Vasconcelos, Urbashi Mitra, and Behrouz Touri.
Max-gossip subgradient method for distributed optimization.
In IEEE Conference on Decision and Control, pages 3130–3136, 2021.
[Yi and Li, 2021]
Peng Yi and Li Li.
Distributed nonsmooth convex optimization over markovian switching
random networks with two step-sizes.
Journal of Systems Science and Complexity, 34(4): 1324–1344, 2021.
[Zhang et al., 2022]
Xu Zhang, Marcos M. Vasconcelos, Wei Cui, and Urbashi Mitra.
Distributed remote estimation over the collision channel with and
without local communication.
IEEE Transactions on Control of Network Systems, 9(1): 282–294, 2022.
# On fake subfields of number fields
Joachim König Korea National University of Education, Department of
Mathematics Education, 28173 Cheongju, South Korea
###### Abstract.
We investigate the failure of a local-global principle with regard to
“containment of number fields”; i.e., we are interested in pairs of number
fields $(K_{1},K_{2})$ such that $K_{2}$ is not a subfield of any algebraic
conjugate $K_{1}^{\sigma}$ of $K_{1}$, but the splitting type of any single
rational prime $p$ unramified in $K_{1}$ and in $K_{2}$ is such that it cannot
rule out the containment $K_{2}\subseteq K_{1}^{\sigma}$. Examples of such
situations arise naturally, but not exclusively, via the well-studied concept
of arithmetically equivalent number fields. We give some systematic
constructions yielding “fake subfields” (in the above sense) which are not
induced by arithmetic equivalence. This may also be interpreted as a failure
of a certain local-global principle related to zeta functions of number
fields.
###### Key words and phrases:
Number fields; arithmetic equivalence; local-global principles; permutation
groups
## 1\. Introduction and main results
Two number fields $K_{1}$ and $K_{2}$ are called arithmetically equivalent if
they have the same zeta function. This is equivalent to the property that all
primes of $\mathbb{Q}$ which are unramified in $K_{1}$ and $K_{2}$ have the
same splitting pattern in those two fields, i.e., their Frobenius has the same
cycle type in the two induced permutation actions. In other words, $K_{1}$ and
$K_{2}$ are in some sense indistinguishable by purely local investigation.
Such number fields have featured prominently in the context of several
arithmetic problems, such as the Davenport-Lewis-Schinzel problem on
reducibility of variable separated equations, and the investigation of pairs
of Kronecker conjugate polynomials, most notably by Fried (e.g., [8]) and
Müller ([17]). See, e.g., [19] for an introduction as well as some systematic
nontrivial examples of arithmetically equivalent number fields. Recently,
stronger versions of arithmetic equivalence (e.g., [20], [22]) have also been
studied, and links to other kinds of “equivalence notions” in field arithmetic
were explored e.g. in [18] and [14]. In this paper, we consider a natural
generalization of this notion: two fields $K_{1}$ and $K_{2}$ for which purely
local investigations (namely, of the splitting types of unramified primes) are
insufficient to rule out the possibility that $K_{2}$ is a subfield of
$K_{1}$. Concretely, we make the following definitions.
###### Definition 1.1.
Let $K_{1}$ and $K_{2}$ be number fields of degrees $n:=[K_{1}:\mathbb{Q}]$
and $m:=[K_{2}:\mathbb{Q}]$, such that $m$ divides $n$. We say that $K_{2}$ is
locally sub-$K_{1}$ if the following holds for all primes $p$ of $\mathbb{Q}$
which are unramified in $K_{1}$ and $K_{2}$: if $c_{1},\dots,c_{d}$ are the
residue degrees of the primes extending $p$ in $K_{2}$, then there exist
positive integers $e_{i,1},\dots,e_{i,r_{i}}$ such that
$\sum_{j=1}^{r_{i}}e_{i,j}=\frac{n}{m}$ for all $i\in\{1,\dots,d\}$ and the
residue degrees of primes extending $p$ in $K_{1}$ are exactly the $c_{i}\cdot
e_{i,j}$ ($i=1,\dots,d$, $j=1,\dots,r_{i}$). When $K_{2}$ is locally
sub-$K_{1}$, but not contained in any conjugate $K_{1}^{\sigma}$ of $K_{1}$
($\sigma\in\textrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$), we shall also
call $K_{2}$ a fake subfield of $K_{1}$.
Obviously, if $K_{2}\subseteq K_{1}$, then $K_{2}$ is locally sub-$K_{1}$ by
the multiplicativity of residue degrees in towers of extensions. The name
“fake subfield” is inspired by a recent article of Corvaja [5] on “fake
values” of polynomials, dealing with a failure of local-global principles
similar in spirit to the one considered here, but with regard to value sets of
polynomials.
The notion of $K_{2}$ being locally sub-$K_{1}$ is furthermore indeed a
natural generalization of arithmetic equivalence, the latter being exactly the
case of two fields being mutually locally subfields of each other; in other
words, our notion yields an order relation on the set equivalence classes of
number fields modulo arithmetic equivalence. Since any field $K_{2}$ arising
from $K_{1}$ via arbitrary iteration of “taking subfields” and “taking
arithmetically equivalent fields” will automatically be locally sub-$K_{1}$, a
natural question is whether all examples of fake subfields arise from
arithmetic equivalence. The answer will quickly be seen to be “no”. After
reviewing the situation in small degrees in Section 3, the main purpose of
this article is to provide explicit infinite families of examples. In
particular, we show the following:
###### Theorem 1.1.
For all odd primes $p$, the following hold:
* a)
Every degree $p+2$ number field $K$ with symmetric Galois group $S_{p+2}$
occurs as the fake subfield of some number field $F$.
* b)
There exist infinitely many solvable number fields of degree $2p$ occurring as
the fake subfield of some number field $F$.
Furthermore, all these examples may be chosen such that they are not induced
by arithmetic equivalence.
We will prove the two parts of Theorem 1.1 separately as Theorem 4.2 and
Theorem 5.2. In Section 6, we consider fake subfields of fields having no
nontrivial subfields. For most of our results, translations of our main notions
into permutation group-theoretical properties are crucial. These are
contained, together with some other basic observations, in Section 2. Towards
the end of the paper, we will consider a strengthened version of our main
definition (Section 7), whose treatment requires some additional number
theoretical ideas. We conclude with some open problems in Section 8.
A few arguments require a moderate amount of computer calculations. These have
been performed using Magma [2], and the corresponding code is included in an
ancillary file.
Acknowledgements: I thank Pietro Corvaja, as well as the two anonymous
referees, for helpful suggestions. I am also indebted to Daniele Garzoni for
pointing out Lemma 6.2.
## 2\. Some preparations
The following famous result, due to Gassmann ([9]), turns the problem of
arithmetic equivalence into a purely group-theoretical problem.
###### Proposition 2.1.
Two number fields $K_{1}$ and $K_{2}$ are arithmetically equivalent if and
only if the following hold: 1) the Galois closure of $K_{1}/\mathbb{Q}$ and of
$K_{2}/\mathbb{Q}$ is the same, say $\Omega$; and 2) the subgroups
$U:=\textrm{Gal}(\Omega/K_{1})$ and $V:=\textrm{Gal}(\Omega/K_{2})$ of
$G:=\textrm{Gal}(\Omega/\mathbb{Q})$ fulfill that each conjugacy class of $G$
intersects $U$ and $V$ in the same number of elements, i.e., for all $g\in G$
one has $|g^{G}\cap U|=|g^{G}\cap V|$. (In this case, $U$ and $V$ are also
said to be Gassmann-equivalent in $G$.)
###### Corollary 2.2.
Let $K_{1}$ and $K_{2}$ be arithmetically equivalent number fields, with joint
Galois closure $\Omega\supseteq\mathbb{Q}$. Let
$G:=\textrm{Gal}(\Omega/\mathbb{Q})$, and consider any transitive permutation
action of $G$. Then $U_{1}:=\textrm{Gal}(\Omega/K_{1})$ and
$U_{2}:=\textrm{Gal}(\Omega/K_{2})$ have the same number of orbits in this
action.
###### Proof.
By Gassmann’s criterion (Proposition 2.1), $U_{1}$ and $U_{2}$ in particular
contain the same number of elements having exactly $d$ fixed points in the
given action, for each $d\geq 0$. By the Cauchy-Frobenius formula, this means
that $U_{1}$ and $U_{2}$ have the same number of orbits. ∎
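The Cauchy–Frobenius (Burnside) step can be illustrated concretely. The sketch below (illustrative, not from the paper; permutations are tuples `p` with `p[i]` the image of `i`) checks that the average number of fixed points of a small permutation group equals its number of orbits:

```python
from fractions import Fraction

def orbit_count(group, n):
    # Count orbits of a permutation group on {0, ..., n-1};
    # `group` is assumed closed under composition.
    seen, count = set(), 0
    for x in range(n):
        if x not in seen:
            count += 1
            seen |= {p[x] for p in group}   # the full orbit of x, by closure
    return count

def average_fixed_points(group, n):
    # Cauchy-Frobenius: this average equals the number of orbits.
    total = sum(sum(1 for i in range(n) if p[i] == i) for p in group)
    return Fraction(total, len(group))

# U = <(0 1), (2 3)> acting on {0, 1, 2, 3}: two orbits, {0,1} and {2,3}.
U = [(0, 1, 2, 3), (1, 0, 2, 3), (0, 1, 3, 2), (1, 0, 3, 2)]
assert average_fixed_points(U, 4) == orbit_count(U, 4) == 2
```

In the setting of the corollary, Gassmann equivalence forces $U_1$ and $U_2$ to have identical fixed-point distributions in the given action, so the two averages, and hence the two orbit counts, coincide.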
We now turn to some first observations around our main notion of fake
subfields. This notion has several implications on the relation of the two
fields involved; notably, it is already part of the definition that the degree
of a number field must be divisible by the degrees of all its fake subfields.
The following lemma gives a further noteworthy relation.
###### Lemma 2.3.
If $K_{2}$ is locally sub-$K_{1}$, then the Galois closure of
$K_{2}/\mathbb{Q}$ is contained in the one of $K_{1}/\mathbb{Q}$. In
particular, any number field can have only finitely many fake subfields.
###### Proof.
Let $p$ be a prime which is totally split in the Galois closure of
$K_{1}/\mathbb{Q}$. Since $K_{2}$ is locally sub-$K_{1}$, it follows that $p$
must also be totally split in the Galois closure of $K_{2}/\mathbb{Q}$. A
well-known theorem by Bauer ([1]) then asserts that the latter Galois closure
is contained in the former one. ∎
For the following, some extra terminology will be useful: given a permutation
$\sigma\in S_{n}$ consisting of exactly $d$ disjoint cycles of length
$c_{1}\geq\dots\geq c_{d}$ (for short: cycle type
$\lambda:=(c_{1},\dots,c_{d})$), and a total of $d$ cycle types
$\mu_{1}=(e_{1,1},\dots,e_{1,r_{1}}),\dots,\mu_{d}=(e_{d,1},\dots,e_{d,r_{d}})$
of permutations in $S_{m}$, define the concatenation of $\lambda$ and
$(\mu_{1},\dots,\mu_{d})$ to be the cycle type (in $S_{mn}$) of the form
$(c_{1}\cdot e_{1,1},\dots,c_{1}\cdot e_{1,r_{1}},\dots,c_{d}\cdot
e_{d,1},\dots,c_{d}\cdot e_{d,r_{d}})$. This corresponds exactly to the
following situation in the context of splitting of prime ideals in number
fields: If $K_{1}\supseteq K_{2}\supseteq\mathbb{Q}$ are number fields with
$m=[K_{2}:\mathbb{Q}]$ and $n=[K_{1}:K_{2}]$, such that the rational prime $p$
decomposes into $d$ primes $\mathfrak{p}_{1},\dots,\mathfrak{p}_{d}$ of
degrees $c_{1},\dots,c_{d}$ in $K_{2}$, and each $\mathfrak{p}_{i}$ decomposes
into $r_{i}$ primes of degree $e_{i,1},\dots,e_{i,r_{i}}$ in $K_{1}$, then the
cycle type of Frobenius at $p$ in $K_{1}$ is exactly the concatenation defined
above. In particular, a field $K$ being locally sub-$F$ is equivalent to the
following: for each prime $p$ unramified in $F/\mathbb{Q}$, let $\lambda_{1}$
and $\lambda_{2}$ be the cycle types of Frobenius at $p$ in the Galois group
of (the Galois closure of) $K/\mathbb{Q}$ and of $F/\mathbb{Q}$,
respectively; then there exists a tuple
$\underline{\mu}:=(\mu_{1},\dots,\mu_{d})$ of cycle types such that the
concatenation of $\lambda_{1}$ with $\underline{\mu}$ equals $\lambda_{2}$.
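The concatenation operation defined above is purely combinatorial, and a direct implementation may help to fix it. A minimal Python sketch (our own naming):

```python
def concatenate(lam, mus):
    """Concatenate cycle type lam = (c_1 >= ... >= c_d) with a tuple
    of d cycle types mus = (mu_1, ..., mu_d): each cycle of length c_i
    is replaced by cycles of lengths c_i * e for e in mu_i."""
    assert len(lam) == len(mus)
    out = [c * e for c, mu in zip(lam, mus) for e in mu]
    return tuple(sorted(out, reverse=True))

# A prime with residue degrees (2, 1) in a cubic field K_2, whose two
# primes split in K_1 with relative residue degrees (2, 1) and (3)
# respectively ([K_1 : K_2] = 3), yields cycle type (4, 3, 2) in degree 9:
print(concatenate((2, 1), ((2, 1), (3,))))  # (4, 3, 2)
```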
Using this, we can reword our main notions group-theoretically, namely in
terms of cycle structures in the Galois groups.
###### Lemma 2.4.
Let $K_{1}$ and $K_{2}$ be number fields of degrees $n:=[K_{1}:\mathbb{Q}]$
and $m:=[K_{2}:\mathbb{Q}]$, such that $m$ divides $n$ and the Galois closure
of $K_{2}$ is contained in the one of $K_{1}$. For $i\in\\{1,2\\}$, let
$G_{i}$ be the Galois group of the Galois closure of $K_{i}/\mathbb{Q}$
(viewed in its induced degree-$[K_{i}:\mathbb{Q}]$ action), and let
$\pi:G_{1}\to G_{2}$ be the restriction to the Galois closure of $K_{2}$. Then
$K_{2}$ is locally sub-$K_{1}$ if and only if the following holds: For all
$g\in G_{1}$, there exists an element $\sigma$ in the imprimitive wreath
product $S_{k}\wr G_{2}\leq S_{n}$ (with $k:=n/m$) whose cycle structure is
the same as that of $g$, and whose projection onto $G_{2}$ (via the action of
the wreath product on blocks) has the same cycle structure as $\pi(g)$.
###### Proof.
Note that the cycle structures in the wreath product $S_{k}\wr G_{2}$ are
exactly the concatenations of $\lambda$ and $(\mu_{1},\dots,\mu_{d})$ where
$\lambda$ is a cycle type (consisting of $d\geq 1$ cycles) in $G_{2}$ and
$\mu_{1},\dots,\mu_{d}$ are cycle types in $S_{k}$. Thus the implication
“$\Leftarrow$” is immediate from the preceding. The implication
“$\Rightarrow$” follows in the same way, up to noting that, by the Frobenius
density theorem (a weaker version of the Chebotarev density theorem), every
cycle type in the Galois group occurs as the cycle type of Frobenius at
infinitely many primes. ∎
One reason for the relevance of the notion of arithmetic equivalence is that
arithmetically equivalent fields have the same zeta function. Below, we
translate the more general notion of being “locally sub-$K_{1}$” into a
property of zeta functions. Recall that the (Dedekind) zeta function of a
number field $K$ has an Euler product
$\zeta_{K}(s)=\prod_{\mathfrak{p}}\frac{1}{1-{N_{K/\mathbb{Q}}(\mathfrak{p})^{-s}}}$,
with the product being over all prime ideals $\mathfrak{p}$ of $O_{K}$. For a
finite set $\mathcal{S}$ of rational primes, by the contribution at
$\mathcal{S}$ to $\zeta_{K}$, we mean the product of all terms corresponding
to primes extending some prime in $\mathcal{S}$. Of course, the contribution
at unramified rational primes is completely determined by the cycle structure
of their Frobenius.
###### Lemma 2.5.
The following are equivalent for number fields $K_{1},K_{2}$:
* 1)
$K_{2}$ is locally sub-$K_{1}$.
* 2)
For every finite set $\mathcal{S}$ of prime numbers unramified in $K_{1}$ and
in $K_{2}$, there exists an extension $F:=F_{\mathcal{S}}$ of $K_{2}$ such
that the contributions at $\mathcal{S}$ to the zeta functions of $F$ and of
$K_{1}$ are the same.
###### Proof.
2)$\Rightarrow$ 1) is obvious. For the converse, one may use the following:
since the symmetric groups have generic Galois extensions over all number
fields, it is possible (e.g., as a special case of a result by Saltman [21,
Theorem 5.9]), given any finite collection of primes $\mathfrak{p}$ of $K_{2}$
and for each such $\mathfrak{p}$ a cycle type in $S_{n}$ with
$n:=\frac{[K_{1}:\mathbb{Q}]}{[K_{2}:\mathbb{Q}]}$, to find an
$S_{n}$-extension $F\supseteq K_{2}$ having Frobenius class given by the
prescribed cycle type at all those primes $\mathfrak{p}$. Concretely, choose
as the finite set of primes $\mathfrak{p}$ of $K_{2}$ exactly those extending
the given rational primes $p\in\mathcal{S}$, and choose the cycle
types such that, for each such $p\in\mathcal{S}$, the cycle type of the
Frobenius at $p$ in $K_{1}/\mathbb{Q}$ and in $F/\mathbb{Q}$ is the same (this
is possible via suitable concatenation of cycle types, by Lemma 2.4). Then $F$
achieves the claim of 2) for the given finite set of prime numbers. ∎
In general, it is indeed necessary to exclude the ramified primes from the
characterizing condition 2) above; for this, we refer to Example 7.1.
Of course, if in Lemma 2.5, the fields $F$ did not depend on the chosen set
$\mathcal{S}$ of primes, one would indeed have arithmetic equivalence between
$F$ and $K_{1}$. It is therefore natural to search for examples of fake
subfields that do not arise from arithmetic equivalence; in the context of
Lemma 2.5, this may then be seen as the failure of a local-global principle
(namely, for the zeta functions). The following is an elementary, but useful
criterion to find or exclude candidates for this.
###### Lemma 2.6.
Assume that $K$ is a fake subfield of some number field $F$. Let $L$ be the
Galois closure of $K/\mathbb{Q}$ and $G=\textrm{Gal}(L/\mathbb{Q})$ (viewed in
its induced degree $[K:\mathbb{Q}]$ action). Finally, let
$U:=\textrm{Gal}(L/L\cap F)\leq G$. Then the following hold.
* 1)
$U$ does not fix a point, but
* 2)
every element of $U$ has at least one fixed point.
###### Proof.
Let $\Omega$ be the Galois closure of $F/\mathbb{Q}$,
$\Gamma:=\textrm{Gal}(\Omega/\mathbb{Q})$ and $V:=\textrm{Gal}(\Omega/F)$. Due
to Lemma 2.3, we have $L\subseteq\Omega$, and hence $U$ equals the restriction
of $V$ to $L$. Since every $\sigma\in V$ has a fixed point in the
degree-$[F:\mathbb{Q}]$ action of $\Gamma$, the fact that $K$ is locally
sub-$F$ forces $\sigma$ to also fix a conjugate of $K$, i.e., every element of
$U\leq G$ has a fixed point. On the other hand, $U$ itself cannot fix a point,
or otherwise, up to algebraic conjugates, the fixed field of $U$, and hence a
fortiori $F$, would contain $K$, contradicting the notion of “fake subfield”.
∎
Note that permutation groups $U\leq S_{n}$ possessing no fixed point, but in
which every element has a fixed point, have been considered in the context of
“intersective polynomials”, i.e., polynomials having a root in (almost) every
$\mathbb{Q}_{p}$, but not in $\mathbb{Q}$ itself. For essentially the same
reason, they feature prominently in the context of “fake values” of morphisms
studied in [5].
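Conditions 1) and 2) of Lemma 2.6 are easy to test by machine for a concretely given subgroup. As an illustration, the following Python sketch (our own code) checks them for $U=\langle(1,2,3),(2,3)(4,5)\rangle\leq S_{5}$, a subgroup which reappears in the proof of Theorem 3.2 below: every element of $U$ has a fixed point, yet $U$ fixes no point.

```python
def compose(p, q):
    # p∘q on points 0..n-1 (apply q first)
    return tuple(p[q[i]] for i in range(len(p)))

def generate(gens):
    group = {tuple(range(len(gens[0])))}
    frontier = set(group)
    while frontier:
        new = {compose(s, g) for s in gens for g in frontier} - group
        group |= new
        frontier = new
    return group

def fixed_points(g):
    return {i for i in range(len(g)) if g[i] == i}

# U = <(1,2,3), (2,3)(4,5)> <= S_5, written 0-indexed:
U = generate([(1, 2, 0, 3, 4), (0, 2, 1, 4, 3)])

every_element_fixes_a_point = all(fixed_points(g) for g in U)
common_fixed_point = set.intersection(*(fixed_points(g) for g in U))
print(len(U), every_element_fixes_a_point, common_fixed_point)
# order 6; every element has a fixed point, but the common fixed-point
# set is empty — exactly the situation of Lemma 2.6
```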
###### Corollary 2.7.
If $K/\mathbb{Q}$ is a Galois extension, then $K$ is not a fake subfield of
any number field.
###### Proof.
This follows directly from the previous lemma, upon noting that no
non-identity element in the regular permutation action of a group has a fixed
point. ∎
## 3. Extensions of small degree
We begin our investigation of concrete examples of fake subfields by
considering number fields of small degree.
###### Corollary 3.1.
If $[K:\mathbb{Q}]\leq 4$, then $K$ cannot be a fake subfield of any number
field.
###### Proof.
One verifies directly that no transitive group of degree $\leq 4$ has a fixed
point free subgroup in which every element fixes a point (this also reflects
the well-known fact that there are no non-trivially intersective polynomials
of degree less than $5$). The assertion thus follows from Lemma 2.6. ∎
### 3.1. Quintic fields
For quintic fields $K/\mathbb{Q}$, the question of whether $K$ is a fake
subfield of some number field is completely resolved by the solvability or
non-solvability of the Galois group.
###### Theorem 3.2.
* a)
Every $S_{5}$ and every $A_{5}$-quintic field occurs as the fake subfield of
some number field, in a way not induced by arithmetic equivalence.
* b)
No solvable quintic field occurs as the fake subfield of a number field.
###### Proof.
Since b) can again be derived straightaway by inspection using Lemma 2.6 (or
alternatively, from Lemma 5.1), it suffices to prove a).
First, let $K$ be an $S_{5}$ quintic number field, $\Omega$ the Galois closure
of $K/\mathbb{Q}$, and let $U:=\langle(1,2,3),(2,3)(4,5)\rangle(\cong
S_{3})\leq S_{5}$. We claim that (i) $K$ is a fake subfield of the fixed
field $\Omega^{U}$ of $U$, and (ii) this relation is not induced by arithmetic
equivalence. Indeed, the pairs of cycle structures of non-identity elements of
$S_{5}$ in the two coset actions are exactly $((2.1^{3}),(2^{10}))$,
$((2^{2}.1),(2^{8}.1^{4}))$, $((3.1^{2}),(3^{6}.1^{2}))$,
$((4.1),(4^{4}.2^{2}))$, $((5),(5^{4}))$ and $((3.2),(6^{3}.2))$, all of which
are compatible with $K$ being a fake subfield of $\Omega^{U}$.
Furthermore, the overgroups of $U$ of order dividing
$24=|\textrm{Gal}(\Omega/K)|$ are exactly $U$ itself and $S_{3}\times S_{2}$
(the full 2-set stabilizer). None of these have Gassmann equivalent subgroups
inside $S_{5}$; for example, the only subgroup (up to conjugacy) of
order $12$ other than $S_{3}\times S_{2}$ is $A_{4}$, which cannot be
equivalent to the $2$-set stabilizer, since it contains no transposition; and
the only subgroups of order $6$ other than $U$ are
$\langle(1,2,3),(1,2)\rangle\cong S_{3}$ and $\langle(1,2,3)(4,5)\rangle\cong
C_{6}$; neither can be equivalent to $U$, since they contain no double
transposition. In other words, there is no way to “jump” from $U$ to
$\textrm{Gal}(\Omega/K)\cong S_{4}$ via containment and arithmetic
equivalence.
Next, let $K$ be an $A_{5}$-quintic field. Now the analog of the above
construction with $U=\langle(1,2,3),(2,3)(4,5)\rangle\leq A_{5}$ does not
quite work; notably, in the (degree-$10$) action of $A_{5}$ on cosets of $U$,
the 3-cycle has cycle structure $(3^{3}.1)$, which is not compatible with
$(3.1^{2})$. Instead, the following succeeds.
Let $\Gamma=A_{5}\times C_{3}$, and consider the order $6$ subgroup
$U=\langle(1,2,3),(2,3)(4,5)\rangle\leq A_{5}$ (i.e., $[\Gamma:U]=30$). This
is clearly not contained in an index-$5$ subgroup $V\cong A_{4}\times C_{3}$
of $\Gamma$. Since elements of cycle structure $(3.1^{2})$ did indeed
constitute the only obstruction to the fixed field of $V$ being a fake
subfield of the fixed field of $U\times C_{3}$, it suffices to verify that all
preimages of such elements in $\Gamma$ no longer constitute an obstruction to
the fixed field of $V$ being a fake subfield of the fixed field of $U$. Those
preimages either have cycle structure $(3^{9}.1^{3})$ (namely, elements of
order $3$ inside $A_{5}$, and hence contained in some conjugate of $U$) or
$(3^{10})$ (namely, elements of the form $(\sigma,\tau)$, where $\sigma\in
A_{5}$ and $\tau\in C_{3}$ are each of order $3$; these are not contained in
any conjugate of $U$). Clearly, there is no obstruction arising from either of
these cycle structures together with the “small” cycle structure $(3.1^{2})$.
Also, once again, it is easy to verify that the “fake subfield” relation is
not induced by arithmetic equivalence (in fact, the only non-trivially
Gassmann equivalent subgroups inside $\Gamma$ are pairs of subgroups of order
$12$ both projecting to a point stabilizer $A_{4}$ inside $A_{5}$, i.e.,
arithmetic equivalence cannot be used to jump from an overfield of a
degree-$5$ field to a field containing no such subfield). ∎
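The pairs of cycle structures used in the $S_{5}$ part of the proof above can be reproduced by brute force. The following Python sketch (our own code; the paper’s computations were carried out in Magma) builds the $20$ left cosets of $U=\langle(1,2,3),(2,3)(4,5)\rangle$ in $S_{5}$ and computes cycle types in the induced coset action, e.g. for a transposition and a double transposition.

```python
from itertools import permutations

def compose(p, q):
    # p∘q on 0..n-1 (apply q first)
    return tuple(p[q[i]] for i in range(len(p)))

def cycle_type(point_map):
    """Cycle type of a permutation given as a dict point -> point."""
    seen, lengths = set(), []
    for start in point_map:
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = point_map[x]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

G = set(permutations(range(5)))  # S_5, order 120

# U = <(1,2,3), (2,3)(4,5)>, 0-indexed; close under composition:
U = {(0, 1, 2, 3, 4)}
while True:
    new = {compose(s, u) for s in [(1, 2, 0, 3, 4), (0, 2, 1, 4, 3)]
           for u in U} - U
    if not new:
        break
    U |= new

cosets = {frozenset(compose(x, u) for u in U) for x in G}  # 20 left cosets

def coset_cycle_type(g):
    # left multiplication g.(xU) = (gx)U on the 20 cosets
    action = {c: frozenset(compose(g, x) for x in c) for c in cosets}
    return cycle_type(action)

t = (1, 0, 2, 3, 4)  # transposition, natural cycle type (2.1^3)
d = (1, 0, 3, 2, 4)  # double transposition, natural cycle type (2^2.1)
print(coset_cycle_type(t))  # ten 2-cycles, matching (2^10)
print(coset_cycle_type(d))  # eight 2-cycles and four fixed cosets, (2^8.1^4)
```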
### 3.2. Sextic fields
###### Theorem 3.3.
* a)
Every sextic number field whose Galois closure has Galois group $S_{6}$ or
$A_{6}$ is a fake subfield of some number field, but this relation is always
induced by arithmetic equivalence.
* b)
Every sextic number field whose Galois closure has Galois group isomorphic to
$A_{4}$ occurs as a fake subfield of some degree-$12$ number field, in a way
not induced by arithmetic equivalence.
###### Proof.
Regarding a), one can verify that the only subgroups $U$ of $S_{6}$ fulfilling
the conditions of Lemma 2.6 are of order $4$, and up to conjugacy equal to
$\langle(1,2)(3,4),(3,4)(5,6)\rangle$ (i.e., containing three double
transpositions, and having three orbits of length $2$). But then, if $F$ is a
field having a sextic $S_{6}$- or $A_{6}$ number field $K$ as a fake
subfield, as in Lemma 2.6, $F$ has to contain the fixed field of $U$. This,
however, is arithmetically equivalent to the fixed field of
$\langle(1,2)(3,4),(1,3)(2,4)\rangle$ (by Gassmann’s criterion, since this
group also contains three double transpositions), but since the latter group
fixes two points, its fixed field contains some conjugate of $K$.
Regarding b), denote by $V_{4}\leq S_{4}$ the Klein $4$-group acting
transitively on $4$ points, and by $a,b,c\in V_{4}$ the double transpositions.
Set $G:=\\{((x_{1},x_{2},x_{3}),y)\in V_{4}\wr C_{3}=(V_{4})^{3}\rtimes
C_{3}\mid x_{i}\in V_{4};y\in C_{3};x_{1}x_{2}x_{3}=1\\}$, acting as a
transitive subgroup of the wreath product $V_{4}\wr C_{3}\leq S_{12}$. (In
Magma’s database of transitive groups, cf. [11], $G$ is the group
TransitiveGroup$(12,32)$.) Let $U_{1}\leq G$ be a point stabilizer in this
degree-$12$ action, i.e., without loss of generality
$U_{1}=\\{(x,x,1)\in G\cap(V_{4})^{3}\mid x\in V_{4}\\}$. Let
$N=\\{(1,1,1),(a,b,c),(b,c,a),(c,a,b)\\}\subset G\cap(V_{4})^{3}$ be an
order-$4$ normal subgroup of $G$ containing three fixed point free
involutions. Then $G/N\cong A_{4}$, and we let $U_{2}\leq G$ be a preimage of
an order-$2$ subgroup of this quotient $A_{4}$ (i.e., $[G:U_{2}]=6$). One
verifies quickly that the only pairs of cycle structures of non-identity
elements of $G$ in the action on cosets of $U_{2}$ and $U_{1}$ respectively
are $((1^{6}),(2^{6}))$, $((2^{2}.1^{2}),(2^{4}.1^{4}))$,
$((2^{2}.1^{2}),(2^{6}))$ and $((3^{2}),(3^{4}))$. From this, it is evident
that, in any Galois extension of $\mathbb{Q}$ with group $G$, the fixed field
of $U_{2}$ is locally a subfield of the fixed field of $U_{1}$, and even a
fake subfield since $U_{2}$ contains no conjugate of $U_{1}$; indeed,
otherwise $N$ would have to intersect a stabilizer in the degree-$12$ action
in at least a subgroup of order $2$, which is clearly not the case by its
definition via fixed point free involutions. Furthermore, this “fake subfield”
relation is not induced by arithmetic equivalence, notably because $U_{1}\cong
V_{4}$ injects into $G/N$ and thus has three orbits (of length $2$) in the
degree-$6$ action, whereas $U_{2}$ maps to an order-$2$ subgroup of $G/N$ and
hence has four orbits, whence Corollary 2.2 is applicable. We have therefore
obtained that the assertion of b) holds for every sextic $A_{4}$ number field
which embeds into a $G$-extension of $\mathbb{Q}$. That this holds indeed for
every $A_{4}$ number field follows from classical results on embedding
problems (e.g., [16, Chapter I, Theorem 2.4]), since $G$ is a semidirect
product of $A_{4}$ and an abelian normal subgroup $C_{2}\times C_{2}$. This
concludes the proof. ∎
From the observations so far, one also immediately obtains:
###### Corollary 3.4.
The smallest degree $[K:\mathbb{Q}]$ of a number field $K$ possessing a fake
subfield not induced by arithmetic equivalence is $12$.
## 4. Extensions with symmetric Galois group
We now progress to more systematic examples of fake subfields. In view of our
definition of fake subfields, the following notion is useful: Let $K_{1}$ and
$K_{2}$ be two number fields, $\Omega$ the compositum of the Galois closures
of $K_{1}$ and of $K_{2}$, and $\sigma\in G:=\textrm{Gal}(\Omega/\mathbb{Q})$.
Let $\lambda_{i}$ be the cycle structure of $\sigma$ in the action induced by
$K_{i}/\mathbb{Q}$ ($i=1,2$). If there exists a tuple $\underline{\mu}$ of
cycle types such that the concatenation of $\lambda_{2}$ with
$\underline{\mu}$ equals $\lambda_{1}$, then we say that $\sigma$ does not
pose an obstruction to $K_{2}$ being locally sub-$K_{1}$ (resp., to $K_{2}$
being a fake subfield of $K_{1}$, if additionally no conjugate of $K_{2}$ is
contained in $K_{1}$). If $\sigma$ poses an obstruction neither to
$K_{1}$ being locally sub-$K_{2}$ nor to $K_{2}$ being locally sub-$K_{1}$ (in
which case, of course, the cycle structures of $\sigma$ in the two actions
must be the same), then we say that $\sigma$ does not pose an obstruction to
$K_{1}$ and $K_{2}$ being arithmetically equivalent.
The following elementary observation was already known to Gassmann (and is, in
fact, crucial to his criterion).
###### Lemma 4.1.
Assume $K_{0}$, $K_{1}$, $K_{2}$ are number fields with $K_{2}$ contained in
$K_{0}$, but not in any conjugate of $K_{1}$. Let $\Omega/\mathbb{Q}$ be the
compositum of the Galois closures of $K_{0}$ and of $K_{1}$ over $\mathbb{Q}$,
and let $G:=\textrm{Gal}(\Omega/\mathbb{Q})$ and
$U_{i}:=\textrm{Gal}(\Omega/K_{i})$ for $i=0,1,2$. Assume that $\sigma\in G$
is such that the following holds: for all $k\in\mathbb{N}$, the number of
fixed points of $\sigma^{k}$ in the action on cosets of $U_{0}$ and of $U_{1}$
is the same. Then $\sigma$ does not pose an obstruction to $K_{0}$ and $K_{1}$
being arithmetically equivalent. In particular, $\sigma$ does not pose an
obstruction to $K_{2}$ being a fake subfield of $K_{1}$.
We now extend the above results about quintic $S_{5}$ number fields to
infinite families of Galois groups.
###### Theorem 4.2.
Let $p\geq 3$ be a prime number and $K$ a (degree-$p+2$) $S_{p+2}$-number
field. Then $K$ is a fake subfield of some number field, in a way not induced
by arithmetic equivalence. In particular, there are infinitely many such
number fields $K$ for each $p$.
###### Proof.
We begin by noting that the existence of infinitely many $S_{n}$-extensions of
$\mathbb{Q}$ for each $n\in\mathbb{N}$ was already shown by Hilbert in [10].
Now choose subgroups $U_{0},U_{1},U_{2}\leq S_{p+2}$ as follows: $U_{2}$ is
the stabilizer of a point (in the natural action of $S_{p+2}$), $U_{0}\cong
D_{p}$ is a dihedral group of order $2p$ contained in the two-point stabilizer
of $S_{p+2}$ (in particular, up to conjugates, contained in $U_{2}$), and
$U_{1}\cong D_{p}$ is a dihedral group with orbit lengths $p$ and $2$ (in
particular, not being contained in $U_{2}$, even up to conjugates). This means
that all non-identity elements of $U_{0}$ are $p$-cycles or involutions with
three fixed points, and all non-identity elements of $U_{1}$ are $p$-cycles or
involutions with one fixed point. Letting $\Omega/\mathbb{Q}$ be any Galois
extension with Galois group $S_{p+2}$, we will verify that there exists no
element $\sigma\in S_{p+2}$ posing an obstruction to $K_{2}$ being a fake
subfield of $K_{1}$, where $K_{i}$ denotes the fixed field of $U_{i}$ for
$i=0,1,2$. We distinguish the following cases:
* i)
$\sigma$ powers to a $p$-cycle. In this case $\sigma$ is necessarily either
itself a $p$-cycle, or has cycles of length $p$ and $2$. Since $U_{0}$ and
$U_{1}$ are of the same order containing the same number of $p$-cycles, every
$p$-cycle is contained in the same number of conjugates of $U_{0}$ and of
$U_{1}$, meaning that $p$-cycles have the same number of fixed points in the
two coset actions. Also, elements of cycle type $(p.2)$ are contained in no
conjugate of $U_{0}$ and $U_{1}$, thus have no fixed point in either action.
By Lemma 4.1, the elements $\sigma$ considered here pose no local obstruction
to $K_{2}$ being a fake subfield of $K_{1}$.
* ii)
$\sigma$ powers to an involution with exactly one fixed point. We claim that,
in the action on cosets of $U_{1}$, $\sigma$ has only cycles of length $d$ and
$d/2$, where $d:=ord(\sigma)$. Indeed, $\sigma^{d/2}$ and $\sigma^{d}=id$ are
necessarily the only powers of $\sigma$ contained in some conjugate of
$U_{1}$, readily yielding the claim. We next determine the proportion of
$d/2$-cycles of $\sigma$ in this coset action.
A well-known formula gives the number of fixed points of $\tau\in G$ in the
action on cosets of $U$ as $\frac{[G:U]\cdot|\tau^{G}\cap U|}{|\tau^{G}|}$.
Evaluating with $G=S_{p+2}$, $U=U_{1}\cong D_{p}$ as above, and $\tau$ an
involution with one fixed point, yields $2^{a-1}\cdot a!$ with
$a:=\frac{p+1}{2}$. This is bounded from above by
$\frac{1}{p+2}[G:U]=\frac{(p+1)!}{2p}$, for all $p\geq 3$. Since the number of
these fixed points is exactly the number of elements contained in $d/2$-cycles
of $\sigma$, we have obtained that at most a proportion of $\frac{1}{p+2}$ of
all elements are contained in such “short” cycles. We now form a cycle type in
the symmetric group on $|U_{2}|/|U_{1}|=\frac{(p+1)!}{2p}$ letters comprising
as many $d/2$-cycles as contained in the above coset representation of
$\sigma$, and only $d$-cycles otherwise; we have only verified that the
proportion of $d/2$-cycles is small enough for this, and furthermore the cycle
lengths thus formed do indeed add up to the required permutation degree
$|U_{2}|/|U_{1}|$: indeed, not only the large permutation degree
$[G:U_{1}]=\frac{(p+2)!}{2p}$, but also the quotient
$\frac{(p+2)!/(2p)}{p+2}=\frac{(p+1)!}{2p}$ of the two permutation degrees is
necessarily divisible by $d$; this follows readily from the fact that $\sigma$
is a permutation in $S_{p+1}$ of order coprime to $p$.
Then, concatenating the fixed point of $\sigma$ with the thus constructed
cycle type, as well as concatenating all other cycles of $\sigma$ (say, of
length $m$) in the natural action by a partition consisting only of cycles of
length $d/m$, yields exactly the cycle structure of $\sigma$ in the action on
cosets of $U_{1}$, showing that there is no obstruction coming from $\sigma$.
* iii)
$\sigma$ powers to an involution with exactly three fixed points. In this
case, since we may already assume $p>3$ due to Theorem 3.2, no non-identity
power of $\sigma$ fixes a point in the action on cosets of $U_{1}$, i.e., all
cycle lengths of $\sigma$ in this action are identical (hence, equal to
$d:=ord(\sigma)$). Clearly, $\sigma$ then does not pose an obstruction either
(just concatenate any given cycle of length $e|d$ in the natural action by an
element of $S_{|U_{2}|/|U_{1}|}$ with suitably many cycles of length $d/e$.
This is possible since $d$ necessarily divides the quotient of the two
permutation degrees, as in Case ii).)
* iv)
$\sigma$ is none of the above. In this case, no non-identity power of $\sigma$
fixes any point in the action on cosets of either $U_{0}$ or $U_{1}$, meaning
that Lemma 4.1 can be applied again.
Finally, assume that the above relation is induced by (containment of fields
and) arithmetic equivalence. By Corollary 2.2, together with the fact that
$U_{2}$ and $U_{1}$ have orbit lengths $(p+1,1)$ and $(p,2)$ respectively,
this can only happen if $U_{1}$ has some intransitive overgroup $V_{1}$ (i.e.,
still with orbits of length $p$ and $2$) which is Gassmann-equivalent to a
subgroup $V_{2}$ with orbit lengths $(p+1,1)$. Gassmann equivalence means in
particular that every element of $V_{1}$ has a fixed point (since every
element of $V_{2}$ does). Clearly, $V_{1}$ acts faithfully on its orbit of
length $p$ (otherwise it would contain a $p$-cycle and a transposition with
disjoint support, the product of which would yield a fixed point free
element). In this situation, Theorem 3.3 of [6] yields the existence of a
normal subgroup $N\triangleleft V_{1}$ of index $2$ such that every element in
$V_{1}\setminus N$ has exactly one fixed point on the length $p$ orbit. In
particular, every $2$-point stabilizer of $V_{1}$ on this orbit is contained
in $N$, which in particular means that $N$ cannot act $2$-transitively on this
orbit. So $N$ is a transitive, but not $2$-transitive group of prime degree
$p$. By a classical result due to Burnside ([3]), $N$ – and hence also $V_{1}$
– is then solvable, i.e., more concretely, is isomorphic to a subgroup of
$AGL_{1}(p)\cong C_{p}\rtimes C_{p-1}$. On the other hand, $V_{2}$ acts
transitively on its orbit of length $p+1$ and contains a $p$-cycle, i.e., is
$2$-transitive. Hence $|V_{2}|\geq p(p+1)$, whereas $|V_{1}|\leq p(p-1)$,
contradicting their Gassmann equivalence. This completes the proof. ∎
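The inequality $2^{a-1}\cdot a!\leq\frac{(p+1)!}{2p}$ invoked in case ii) of the proof can be checked numerically for small primes; a minimal Python sketch (ours, with hypothetical naming):

```python
from math import factorial

def fixed_and_bound(p):
    """Fixed-point count 2^(a-1)*a! of a one-fixed-point involution on
    the cosets of U_1 = D_p (with a = (p+1)/2), and the upper bound
    [G:U_1]/(p+2) = (p+1)!/(2p) required in case ii)."""
    a = (p + 1) // 2
    return 2 ** (a - 1) * factorial(a), factorial(p + 1) // (2 * p)

for p in [3, 5, 7, 11, 13, 17, 19]:
    fixed, bound = fixed_and_bound(p)
    assert fixed <= bound, p
print("inequality holds for all tested p")
```

Note that for $p=3$ the bound is attained with equality ($4\leq 4$), consistent with the claim “for all $p\geq 3$”.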
###### Remark 1.
The degree restriction in Theorem 4.2 should not be expected to be necessary
at all for the conclusion to hold; in fact, it may be reasonable to conjecture
the same result for all $S_{n}$-extensions of sufficiently large degree $n$,
although a proof might be combinatorially and group-theoretically intricate.
## 5. Solvable number fields
We now consider the phenomenon of fake subfields among solvable number fields.
###### Lemma 5.1.
If the Galois group of the Galois closure of $K/\mathbb{Q}$ acts as a
Frobenius group (Frobenius groups are often, but not always, solvable), then
$K$ is not a fake subfield of any number field. In particular, no solvable
number field of prime degree occurs as a fake subfield of any number field.
###### Proof.
Let $G$ be a Frobenius group with Frobenius kernel $N$, and assume that a
subgroup $U\leq G$ as in Lemma 2.6 exists. Since all elements of
$N\setminus\\{1\\}$ are fixed point free, $U\cap N=\\{1\\}$, and $UN$ is again
a Frobenius group. But it is well-known that in such a group, all complements
to $N$ are conjugate to each other, meaning that $U$ is in fact a point
stabilizer of $UN\leq G$, contradicting its definition. The second assertion
is immediate from the first, since solvable groups of prime degree are cyclic
or Frobenius groups. ∎
###### Theorem 5.2.
Let $p\geq 5$ be a prime, and let $G=C_{2}\wr C_{p}\leq S_{2p}$ be the
imprimitive wreath product of cyclic groups of order $2$ and $p$. Then every
degree-$2p$ number field with Galois group $G$ occurs as the fake subfield of
some number field, in a way not induced by arithmetic equivalence.
###### Proof.
Upon relabelling the elements of $\\{1,\dots,2p\\}$ suitably, $G$ is generated
by the transposition $(1,2)$ together with the double-$p$-cycle
$(1,3,5,\dots,2p-1)(2,4,6,\dots,2p)$. Consider now the subgroup $U$ of $G$
generated by the two involutions $(1,2)(3,4)\dots(p,p+1)$ and
$(p,p+1)(p+2,p+3)\dots(2p-1,2p)$. We will show that the fixed field $K_{2}$ of
a point stabilizer in $G\leq S_{2p}$ is a fake subfield of the fixed field
$K_{1}$ of $U$ (of course, there are then infinitely many such fields, since
the solvable group $G$ is well-known to occur as the Galois group of
infinitely many Galois extensions of $\mathbb{Q}$). To find out about the
cycle structures in the action of $G$ on cosets of $U$, note first that the
only cycle structures in $G(\leq S_{2p})$ are those of powers of the
$2p$-cycle, as well as involutions with any number of transpositions between
$1$ and $p$. Elements $x\in G$ such that no non-identity power of $x$ is
contained in a conjugate of $U$ clearly consist only of cycles of length
$\textrm{ord}(x)$ in that coset action, and then it is obvious that such an
element does not pose an obstruction to $K_{2}$ being a fake subfield of
$K_{1}$. On the other hand, the only elements which do have nontrivial powers
inside a conjugate of $U$ (and, in fact, are themselves contained), are
involutions with $\frac{p+1}{2}$ transpositions (namely two of them inside
$U$, and clearly conjugate to each other in $G$ via a suitable power of the
double-$p$-cycle) and with $p-1$ transpositions (namely, one of them inside
$U$). We show that none of these pose an obstruction either; since
$[K_{1}:\mathbb{Q}]/[K_{2}:\mathbb{Q}]$ is a $2$-power, it is sufficient to
show that the proportion of fixed points of these elements in the action on
cosets of $U$ is no more than the proportion of fixed points in the
degree-$2p$ action (as indeed, one can then pass from the cycle type in the
“small” action to the one in the “large” one, simply by concatenating each
fixed point of the given involution $x\in G$ with an involution with the right
amount of fixed points).
Using again the expression $\frac{[G:U]\cdot|x^{G}\cap U|}{|x^{G}|}$ for the
fixed point number of $x$ in the action on cosets of $U$, the fixed point
proportion obviously becomes $\frac{|x^{G}\cap U|}{|x^{G}|}$. In our case, we
are reduced to $|x^{G}\cap U|\in\\{1,2\\}$, with the case $|x^{G}\cap U|=1$
obviously not posing an obstruction to the required proportion; in the other
case (namely, $x$ an involution with $\frac{p+1}{2}$ transpositions), it
suffices to note that the point stabilizer in the degree-$2p$ action also
contains at least two conjugates of $x$; to see this, note that this point
stabilizer is, up to conjugacy, equal to
$\langle(3,4),(5,6),\dots,(2p-1,2p)\rangle$, which has $\binom{p-1}{(p+1)/2}$
involutions of the required cycle type; this is $\geq
p-1$ since $p>3$.
Moreover, due to Corollary 2.2, if the above relation were induced by
(containment and) arithmetic equivalence, then $U$ would necessarily have at
least as many orbits as the point stabilizer in the degree-$2p$ action.
However, $U$ has $p$ orbits (all of length $2$), whereas the point stabilizer
has two fixed points and orbit lengths $2$ otherwise, i.e., has $p+1$ orbits.
∎
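The counting step at the end of this proof — the elementary abelian point stabilizer $\langle(3,4),(5,6),\dots,(2p-1,2p)\rangle$, being generated by $p-1$ disjoint transpositions, contains exactly $\binom{p-1}{(p+1)/2}$ involutions with $\frac{p+1}{2}$ transpositions, and this number is at least $p-1$ — can be verified directly; a short Python sketch (ours):

```python
from itertools import combinations
from math import comb

# Elements of <(3,4),(5,6),...,(2p-1,2p)> are products of subsets of the
# p-1 generating transpositions; those with (p+1)/2 transpositions
# correspond to (p+1)/2-element subsets.  Direct count for p = 5:
p = 5
transpositions = [(2 * i + 1, 2 * i + 2) for i in range(1, p)]  # (3,4)..(9,10)
k = (p + 1) // 2
count = sum(1 for _ in combinations(transpositions, k))
print(count, comb(p - 1, k))  # both equal C(4, 3) = 4, and 4 >= p - 1

# The bound C(p-1, (p+1)/2) >= p - 1 used in the proof, for p > 3:
for q in [5, 7, 11, 13]:
    assert comb(q - 1, (q + 1) // 2) >= q - 1
```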
Theorem 1.1b) is now an immediate consequence of Theorem 5.2 together with
Theorem 3.3b) (covering the case $p=3$) and Shafarevich’s theorem asserting
the existence of infinitely many Galois extensions of $\mathbb{Q}$ with any
prescribed solvable Galois group.
## 6. Fake subfields of fields with primitive Galois group
So far, we have mainly focused on the question of whether certain fields can occur as
fake subfields of other fields. In this section, we take a moment to consider
the opposite viewpoint, i.e., we ask whether certain fields can have fake
subfields. A particularly interesting case seems to be the one where a field
$K_{1}$ has no non-trivial subfields $\mathbb{Q}\subsetneq F\subsetneq K_{1}$.
In terms of the Galois group, the latter is equivalent to saying that
$U:=\textrm{Gal}(\Omega/K_{1})$ is a maximal subgroup of
$G:=\textrm{Gal}(\Omega/\mathbb{Q})$, where $\Omega$ denotes the Galois
closure of $K_{1}/\mathbb{Q}$; equivalently, $G$ acts primitively on cosets of
$U$. It is well-known that such a field $K_{1}$ can nevertheless have
arithmetically equivalent fields not conjugate to $K_{1}$ (notably, for $d\geq
3$, the group $PGL_{d}(q)$ has two Gassmann equivalent classes of maximal
subgroups of index $\frac{q^{d}-1}{q-1}$, corresponding to the stabilizer of a
line and a hyperplane in $GL_{d}(q)$). Below we give an example where $K_{1}$
has no nontrivial arithmetically equivalent fields, but nevertheless possesses
fake subfields; in other words, while $K_{1}/\mathbb{Q}$ has no nontrivial
intermediate fields, purely local observations would suggest the existence of
such subfields.
###### Lemma 6.1.
There exist infinitely many number fields of degree $234$ possessing no
nontrivial subfields and no nontrivial arithmetically equivalent fields, but
possessing fake subfields of degree $13$. More precisely, every Galois
extension of $\mathbb{Q}$ with Galois group isomorphic to $PSL_{3}(3)$
contains such fields.
###### Proof.
Let $G=PSL_{3}(3)$. This group is known to occur as the Galois group of
infinitely many Galois extensions of $\mathbb{Q}$, cf. [15]. Furthermore, $G$
possesses maximal subgroups $U\cong S_{4}(\cong PGL_{2}(3))$, and there is no
subgroup of $G$ Gassmann-equivalent to $U$ other than the conjugates of $U$
(e.g., the only other class of order-$24$ subgroups of $PSL_{3}(3)$ has
elements of order $6$, which $S_{4}$ does not). Nevertheless, comparing the
natural (degree $13$) permutation action of $PSL_{3}(3)$ with the
degree-$234(=13\cdot 18)$ action on cosets of $U$ yields that the fixed fields
of the index-$13$ subgroups are fake subfields of the fixed field of $U$ (and
since the latter fields have neither nontrivial subfields nor nontrivially
arithmetically equivalent fields, this relation cannot be induced by
arithmetic equivalence). Indeed, computation with Magma yields that the pairs
of cycle structures of non-identity elements of $G$ in both actions are as
follows: $((2^{4}.1^{5}),(2^{108}.1^{18}))$, $((3^{3}.1^{4}),(3^{78}))$,
$((3^{4}.1),(3^{77}.1^{3}))$, $((4^{2}.2^{2}.1),(4^{54}.2^{8}.1^{2}))$,
$((6.3.2.1^{2}),(6^{36}.3^{6}))$, $((8.4.1),(8^{27}.4^{4}.2))$ and
$((13),(13^{18}))$; now it is an easy exercise to compose the “small” cycle
structures with suitable cycle structures in $S_{18}$ to obtain all the
“large” cycle structures. ∎
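The final "easy exercise" can be checked mechanically. The sketch below (Python, not part of the original argument) assumes the standard rule for imprimitive actions: a $c$-cycle of blocks, combined with a fiber permutation of cycle type $\mu$ on the $18$ points of a block, contributes one $(cd)$-cycle for every $d$-cycle of $\mu$. The particular fiber types chosen below are our own witnesses, not taken from the text.

```python
from collections import Counter

def compose_cycle_types(block_type, fiber_types):
    """Cycle type of an imprimitive permutation on 13*18 points: a
    c-cycle of blocks paired with a fiber permutation of cycle type mu
    (a partition of the block size 18) contributes one (c*d)-cycle for
    every d-cycle of mu."""
    large = Counter()
    for c, mu in zip(block_type, fiber_types):
        for d in mu:
            large[c * d] += 1
    return large

# A witness for the pair ((2^4.1^5), (2^108.1^18)): pair each block
# 2-cycle with the identity fiber 1^18, four of the fixed blocks with
# fiber type 2^9, and the last fixed block with 1^18.
small = [2] * 4 + [1] * 5
fibers = [[1] * 18] * 4 + [[2] * 9] * 4 + [[1] * 18]
assert compose_cycle_types(small, fibers) == Counter({2: 108, 1: 18})

# Likewise for the pair ((3^3.1^4), (3^78)).
assert compose_cycle_types([3] * 3 + [1] * 4,
                           [[1] * 18] * 3 + [[3] * 6] * 4) == Counter({3: 78})
```

The remaining pairs listed in the proof can be verified with analogous choices of fiber types.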
###### Remark 2.
* a)
An exhaustive search of the Magma database of primitive groups confirms that
the examples in Lemma 6.1 are indeed the smallest possible, both with regard
to the degree of the “larger” and of the “smaller” field involved. For such
computations, it is useful to note that, under the given assumptions, both
fields must have the same Galois closure. Indeed, if the Galois closure of the
“small” field were strictly smaller, the point stabilizer in the large degree
action, having no nontrivial overgroups, would have to surject onto the whole
group under projection to the smaller Galois group, which is incompatible with
Lemma 2.6.
* b)
It would be interesting to exhibit not only an infinite family of fields, but
an infinite family of groups leading to examples as in Lemma 6.1. They seem to
be rare among primitive groups, although I have computationally verified
$PSL_{3}(p)$ to yield examples for $p=3,5,7$. Verifying whether this
generalizes to all odd primes $p$ might be feasible, although beyond the scope
of this article.
We conclude this section by noting that examples such as the above can no
longer occur if the assumption of primitivity of the Galois group is
strengthened to $2$-transitivity.
###### Lemma 6.2.
Let $K$ be a number field, denote by $\Omega$ the Galois closure of
$K/\mathbb{Q}$ and assume that $\textrm{Gal}(\Omega/\mathbb{Q})$ acts as a
doubly transitive group. Then $K$ has no fake subfields of degree strictly
less than $[K:\mathbb{Q}]$.
###### Proof.
Let $G:=\textrm{Gal}(\Omega/\mathbb{Q})$ and $H<G$ any subgroup of index
smaller than $[K:\mathbb{Q}]$. We claim that, since $G$ is doubly transitive,
$H$ must necessarily be transitive. The assertion then follows easily, since
the point stabilizer $U:=\textrm{Gal}(\Omega/K)$ in $G$ is then conversely
transitive in the action on cosets of $H$ and so possesses a fixed point free
element in this action, whence the fixed field of $H$ cannot be a fake
subfield of $K$ by Lemma 2.6. The claim itself seems to be reasonably well
known (e.g., Exercise 2.12.3 in [4]), but we give a proof for completeness.
Let $\pi$ be the permutation character of the action of $G$ on conjugates of
$K$. Since this action is doubly transitive, $\pi=1_{G}+\chi$ for an
irreducible character $\chi$ (with $1_{G}$ the principal character). The
number of orbits of $H$ in this action is then
$\langle(1_{G}+\chi)_{|H},1_{H}\rangle$, which by Frobenius reciprocity equals
$\langle 1_{G}+\chi,\psi\rangle$ where $\psi$ is the permutation character of
$G$ in the action on cosets of $H$. Here $\langle 1_{G},\psi\rangle=1$, and
since $\deg(\psi)<\deg(\pi)=1+\deg(\chi)$, the virtual character $\psi-1_{G}$
must be a sum of irreducible characters of degree $<\deg\chi$, so $\chi$ does
not occur in $\psi$. Hence, the scalar product equals $1$, i.e., $H$ is
transitive, completing the proof. ∎
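The group-theoretic claim can also be verified by brute force for a small doubly transitive group. The following sketch (Python, illustrative only) enumerates the subgroups of $S_{4}$ generated by pairs of elements — which, for $S_{4}$, yields all subgroups — and checks that every subgroup of index below the degree is transitive:

```python
from itertools import permutations, product

def compose(p, q):
    # permutation product: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def generated(gens, n):
    # subgroup generated by gens, by closing under right multiplication
    identity = tuple(range(n))
    elems, frontier = {identity}, {identity}
    while frontier:
        new = {compose(a, g) for a in frontier for g in gens} - elems
        elems |= new
        frontier = new
    return frozenset(elems)

n = 4
G = [tuple(p) for p in permutations(range(n))]   # S_4 is doubly transitive
subgroups = {generated((a, b), n) for a, b in product(G, repeat=2)}
for H in subgroups:
    if len(G) // len(H) < n:                     # index below the degree
        orbit = {h[0] for h in H}                # orbit of the point 0
        assert orbit == set(range(n)), "small-index subgroup must be transitive"
```

Here the only subgroups of index below $4$ are $S_{4}$, $A_{4}$ and the three Sylow $2$-subgroups, all of which are transitive, as the lemma predicts.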
## 7\. Taking into account the ramified primes
While the exclusion of ramified primes in our main definitions is convenient
due to the arising reductions to group theory, it is of course natural (in
particular, in the context of local-global principles and their failure) to
strengthen the definitions to include also the ramified primes. The natural
way to do this is as follows:
###### Definition 7.1.
Say that $K_{2}$ is strongly locally sub-$K_{1}$, if for each rational prime
$p$, the following holds: if $K_{{2,\mathfrak{p}_{1}}}$, $\dots$,
$K_{2,\mathfrak{p}_{r}}$ are the completions of $K_{2}$ at all the primes
$\mathfrak{p}_{i}$ extending $p$, and $K_{{1,\mathfrak{q}_{1}}}$, $\dots$,
$K_{1,\mathfrak{q}_{s}}$ are the completions of $K_{1}$ at all the primes
$\mathfrak{q}_{j}$ extending $p$, then there exists a partition of
$\\{1,\dots,s\\}$ into $r$ subsets $M_{i}$ ($i=1,\dots,r$) such that for all
$j\in M_{i}$, the field $K_{1,\mathfrak{q}_{j}}$ contains (some conjugate of)
$K_{2,\mathfrak{p}_{i}}$, and additionally $\sum_{j\in
M_{i}}[K_{1,\mathfrak{q}_{j}}:K_{2,\mathfrak{p}_{i}}]$ equals the same value
(namely, automatically $[K_{1}:\mathbb{Q}]/[K_{2}:\mathbb{Q}]$) for all
$i=1,\dots,r$.
Clearly, when restricting to unramified primes, this definition becomes equal
to our previous definition of being locally sub-$K_{1}$, due to unramified
extensions of $\mathbb{Q}_{p}$ being uniquely identified by their degree. The
analogous strengthening has also been considered in the setting of arithmetic
equivalence, the strengthened notion being denoted as “locally equivalent”,
e.g., in [13]. In both cases, it is not difficult to find examples
demonstrating that the “weak” version in general does not imply the “strong”
one (and hence, systematically finding examples for the strong version may
require not only investigation of the structure of the Galois group, but also
solution of additional arithmetic problems).
We give one example (not induced by arithmetic equivalence) of number fields
$K_{1}$ and $K_{2}$ such that $K_{2}$ is locally, but not strongly locally
sub-$K_{1}$.
###### Example 7.1.
We use the notation of the proof of Theorem 3.3b). Let
$V_{4}=\\{1,a,b,c\\}\leq S_{4}$ be the Klein 4-group, and let
$G:=\\{((x_{1},x_{2},x_{3}),y)\in V_{4}\wr C_{3}=(V_{4})^{3}\rtimes C_{3}\mid
x_{i}\in V_{4};y\in C_{3};x_{1}x_{2}x_{3}=1\\}$. Now, additionally define
$U_{3}:=\langle(a,b,c),(a,a,1)\rangle\leq G\cap(V_{4})^{3}$. I.e., $U_{3}\cong
C_{2}\times C_{2}$. Choose any odd prime $p$. Then it follows from very
general existence results on Galois extensions with prescribed local behavior
that there exists a Galois extension of $\mathbb{Q}$ with Galois group $G$,
ramified at $p$ with inertia group generated by the fixed point free
involution $\sigma:=(a,b,c)$ and with decomposition group conjugate to
$U_{3}$. Indeed, since $G$ is a semidirect product of two elementary-abelian
groups of coprime order, it possesses a generic Galois extension over
$\mathbb{Q}$ by [21, Theorem 3.5], which by Theorem 5.9 of the same paper
implies realizability of $G$ as a Galois group with prescribed local behavior
at any finitely many given primes. Recall now from the proof of Theorem 3.3b)
that any such Galois extension gives rise to sextic number fields $K_{2}$
being fake subfields of certain degree-$12$ number fields $K_{1}$, and we now
analyze the residue degrees of primes extending $p$ in these fields $K_{1}$
and $K_{2}$. These are given by the numbers of $\langle\sigma\rangle$-orbits
joined into a common orbit of $U_{3}$ in the respective actions of degree $12$
and $6$. In the degree-$12$ action, only one pair of transpositions of
$\sigma$ (namely, coming from the component entry “$b$”) is joined together,
meaning that in $K_{1}$, $p$ is extended by one prime of residue degree $2$
and primes of degree $1$ otherwise. On the other hand, $\sigma$ is in the
kernel of the degree-$6$ action while $U_{3}$ maps to a cyclic group generated
by a double transposition. This means that two pairs of fixed points of
$\sigma$ are joined to two transpositions, i.e., in $K_{2}$, $p$ is extended
by two primes of residue degree $2$ (and primes of degree $1$ otherwise).
Therefore, $p$ poses an obstruction to $K_{2}$ being strongly locally
sub-$K_{1}$; in fact, in view of Lemma 2.5, it is worth noting that due to the
above, no degree-$12$ number field containing $K_{2}$ can even have the same
contribution to the zeta function at $p$ as $K_{1}$.
At least, we can show the following:
###### Theorem 7.1.
Assume that $K_{2}$ is locally sub-$K_{1}$, and $K_{1}$ is “locally cyclic”,
i.e., all primes ramifying in the Galois closure of $K_{1}/\mathbb{Q}$ have
cyclic decomposition group. Then $K_{2}$ is strongly locally sub-$K_{1}$.
###### Proof.
Let $p$ be a prime ramifying in $K_{1}$, let $\sigma$ be a generator of the
decomposition group at any prime extending $p$ in the Galois closure $\Omega$
of $K_{1}/\mathbb{Q}$, and let $\lambda_{i}$ be the cycle structure of
$\sigma$ in the action on cosets of $\textrm{Gal}(\Omega/K_{i})$ ($i=1,2$).
Cycles of $\lambda_{i}$ are then in one-to-one correspondence with primes
extending $p$ in $K_{i}$, with the cycle length equaling the degree over
$\mathbb{Q}_{p}$ of the respective completion. Since $K_{2}$ is locally
sub-$K_{1}$, $\lambda_{1}$ can be written as a concatenation of $\lambda_{2}$
with some $\underline{\mu}=(\mu_{1},\dots,\mu_{r})$ (where $r$ is the number
of primes $\mathfrak{p}_{i}$ extending $p$ in $K_{2}$). In this way, each
$\mathfrak{p}_{i}$ is assigned a finite set of primes $\mathfrak{q}_{i,j}$ of
$K_{1}$ with the property that the degree
$[(K_{1})_{\mathfrak{q}_{i,j}}:\mathbb{Q}_{p}]$ of each completion is a
multiple of $[(K_{2})_{\mathfrak{p}_{i}}:\mathbb{Q}_{p}]$. But since the
completion at any prime extending $p$ in all of $\Omega$ is a cyclic extension
of $\mathbb{Q}_{p}$, subextensions are uniquely identified by their degree,
meaning that divisibility of degrees automatically implies containment of
fields. This proves the assertion. ∎
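Under the locally cyclic hypothesis, the partition condition of Definition 7.1 thus reduces to arithmetic on local degrees: containment of local fields becomes equivalent to divisibility of their degrees. A small backtracking check of this reduced condition (a Python sketch; the degree data in the assertions are hypothetical examples, not taken from the paper):

```python
def strongly_sub_at_p(small, large, ratio):
    """small/large: degrees over Q_p of the completions of K_2 and K_1
    at the primes above p; ratio = [K_1:Q]/[K_2:Q].  Returns True if the
    partition required by Definition 7.1 exists, assuming containment of
    local fields is equivalent to divisibility of their degrees (valid
    when the decomposition group is cyclic, as in Theorem 7.1)."""
    if not small:
        return not large
    f, rest = small[0], small[1:]

    def pick(idx, used, total):
        # assign a subset of K_1's primes to the first prime of K_2 so
        # that the relative degrees sum to `ratio`
        if total == ratio:
            remaining = [g for i, g in enumerate(large) if i not in used]
            if strongly_sub_at_p(rest, remaining, ratio):
                return True
        if total >= ratio or idx == len(large):
            return False
        take = (large[idx] % f == 0 and
                pick(idx + 1, used | {idx}, total + large[idx] // f))
        return take or pick(idx + 1, used, total)

    return pick(0, frozenset(), 0)

# hypothetical local-degree data, purely for illustration
assert strongly_sub_at_p([2, 1], [2, 2, 1, 1], 2)          # partition exists
assert not strongly_sub_at_p([2, 1], [2, 1, 1, 1, 1], 2)   # no valid partition
```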
Certain heuristics in inverse Galois theory suggest that locally cyclic Galois
extensions of $\mathbb{Q}$ should exist for every finite group, but this is of
course a hard problem. Among the groups we have considered so far, we can
conclude the following.
###### Corollary 7.2.
There are infinitely many degree-$n$ $S_{n}$-number fields for
$n\in\\{5,7,9\\}$, and infinitely many solvable number fields of degree $2p$
for every prime $p\geq 3$, which are fake subfields in the strong sense of
some number field.
###### Proof.
As seen in the proof of Theorems 4.2 and 5.2, these number fields all occur as
fake subfields of some number field with Galois group $S_{p+2}$
($p\in\\{3,5,7\\}$) or a solvable group, respectively. The assertion now
follows from Theorem 7.1, together with the observation that all these groups
are realizable as the Galois group of infinitely many locally cyclic
extensions of $\mathbb{Q}$ (Theorems 5.1 and 5.4 in [12]; in fact, the claim
about solvable groups is already known due to Shafarevich’s method for
realizing solvable groups as Galois groups). ∎
## 8\. Open questions
Due to Gassmann’s criterion, the question whether a given number field $K$ has
any non-trivially arithmetically equivalent fields translates to a pure
question about the Galois group of its Galois closure over $\mathbb{Q}$. Is
this also true for the question whether $K$ occurs as a fake subfield of some
number field? This does not seem automatic, even though I do not know of any
counterexamples. Indeed, we have an elementary necessary group-theoretical
condition for a number field $K$ to occur as the fake subfield of some number
field $F$ (Lemma 2.6), but it remains unclear whether this condition is also
sufficient. We have at least seen (e.g., in the case $G=A_{5}$ in Theorem 3.2,
and in the case $G=A_{4}$ acting on six points, in Theorem 3.3b)) that it may
of course in general be necessary to look for fields $F$ outside of the Galois
closure of $K/\mathbb{Q}$. Looking a bit closer at such examples, one sees
that, in the case $G=A_{5}$, the obstruction for finding a fake overfield
inside the same Galois closure was relatively “harmless”: it came from the
cycle type $(3.1^{2})$, which had cycle structure $(3^{3}.1)$ in the only
possible (namely, degree-$10$) candidate action of $A_{5}$. The obstruction
arises essentially from the two degrees being too close to each other: it
vanishes once the cycles in the second cycle type are simply repeated
sufficiently many times (for example, three times, leading to the cycle
structure $(3^{9}.1^{3})$, which is exactly what happens when passing from the
above degree-$10$ action of $A_{5}$ to the degree-$30$ action of $A_{5}\times
C_{3}$, as we did in the proof of Theorem 3.2). Other cases require more
intricate solutions. Notably, if there is an element $\sigma\in G$ whose fixed
point proportion in the “small” (i.e., candidate for a fake subfield)
permutation action is smaller than in the action on cosets of $U$ (in the
notation of Lemma 2.6), then this will remain an obstruction even after
arbitrarily repeating all the cycles from the second permutation action; in
other words, passing to a direct product cannot get rid of the obstruction in
such a case, nor can passing to a full wreath product. There are, of course,
many other possibilities to embed $G$ as a quotient into an imprimitive group
whose block action equals the action on cosets of $U$, and it may well be
possible to give some inductive construction to successively get rid of
obstructions, but the group theory should become rather delicate in the
process.
On a related note, in the case where one has to go beyond the Galois closure
of $K/\mathbb{Q}$ in search of fields containing $K$ as a fake subfield, one
has to solve certain embedding problems, corresponding to group extensions
$1\to N\to\Gamma\to G=\textrm{Gal}(K/\mathbb{Q})\to 1$. It seems worth
wondering whether it could ever happen that all group extensions leading to
fields containing $K$ as a fake subfield have to be non-split extensions of
the Galois group of $K/\mathbb{Q}$. In such a case, the solvability of the
embedding problem would in general depend on the behavior of primes in
$K/\mathbb{Q}$, i.e., it could happen that, among such fields with the same
Galois group, some occur as fake subfields whereas others don’t. Whether this
phenomenon can indeed occur, I do not know.
## References
* [1] M. Bauer, Zur Theorie der algebraischen Zahlkörper. Math. Annalen 77 (1916), 353–356.
* [2] W. Bosma, J. Cannon, C. Playoust, The Magma algebra system. I. The user language. J. Symb. Comput. 24 (1997), 235–265.
* [3] W. Burnside, On the properties of groups of odd order. Proc. London Math. Soc. 33 (1900), 162–185.
* [4] P.J. Cameron, Permutation groups. London Math. Soc. Student Texts 45, Cambridge Univ. Press, 1999.
* [5] P. Corvaja, On the local-to-global principle for value sets. Riv. Mat. Univ. Parma 13 (2022), 47–72.
* [6] C. Elsholtz, B. Klahn, M. Technau, On polynomials with roots modulo almost all primes. Acta Arithm. 205 (2022), 251–263.
* [7] B. Fein, W. M. Kantor, M. Schacher, Relative Brauer groups II. J. Reine Angew. Math. 328 (1981), 39–57.
* [8] M. D. Fried, The field of definition of function fields and a problem in the reducibility of polynomials in two variables. Illinois J. Math. 17 (1973), 128–146.
* [9] F. Gassmann, Bemerkungen zur vorstehenden Arbeit von Hurwitz (Über Beziehungen zwischen den Primidealen eines algebraischen Körpers und den Substitutionen seiner Gruppe). Math. Z. 25 (1926), 665–675.
* [10] D. Hilbert, Über die Irreducibilität ganzer rationaler Functionen mit ganzzahligen Coefficienten. J. Reine Angew. Math. 110 (1892), 104–129.
* [11] A. Hulpke, Constructing transitive permutation groups. J. Symb. Comput. 39 (1) (2005), 1–30.
* [12] K.-S. Kim, J. König, On Galois extensions with prescribed decomposition groups. J. Number Theory 220 (2021), 266–294.
* [13] B. Linowitz, D. B. MacReynolds, N. Miller, Locally equivalent correspondences. Ann. Inst. Fourier 67(2) (2017), 451–482.
* [14] A. Lubotzky, D. Neftin, Sylow-conjugate number fields. Preprint (2021). https://arxiv.org/pdf/2201.04103.
* [15] G. Malle, Polynomials for primitive nonsolvable permutation groups of degree $d\leq 15$. J. Symb. Comput. 4 (1987), 83–92.
* [16] G. Malle, B.H. Matzat, Inverse Galois Theory. 2nd edition. Springer, Berlin, 2018.
* [17] P. Müller, Kronecker conjugacy of polynomials. Trans. Amer. Math. Soc. 350 (5) (1998), 1823–1850.
* [18] D. Neftin, Admissibility and field relations. Isr. J. Math 191 (2012), 559–584.
* [19] R. Perlis, On the equation $\zeta_{k}(s)=\zeta_{k^{\prime}}(s)$. J. Number Theory 9 (3) (1977), 342–360.
* [20] D. Prasad, A refined notion of arithmetically equivalent number fields, and curves with isomorphic Jacobians. Adv. Math. 312 (2017), 198–208.
* [21] D. J. Saltman, Generic Galois extensions and problems in field theory. Adv. Math. 43 (1982), 250–283.
* [22] A. V. Sutherland, Stronger arithmetic equivalence. Discrete Analysis 2021:23 (2021), 23 pp.
# Microdomains and Stress Distributions in Bacterial Monolayers on Curved
Interfaces
Blake Langeslay1 and Gabriel Juarez2<EMAIL_ADDRESS>
1Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
2Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA
###### Abstract
Monolayers of growing non-motile rod-shaped bacteria act as active nematic
materials composed of hard particles rather than the flexible components of
other commonly studied active nematics. The organization of these granular
monolayers has been studied on flat surfaces but not on curved surfaces, which
are known to change the behavior of other active nematics. We use molecular
dynamics simulations to track alignment and stress in growing monolayers fixed
to curved surfaces, and investigate how these vary with changing surface
curvature and cell aspect ratio. We find that the length scale of alignment
(measured by average microdomain size) increases with cell aspect ratio and
decreases with curvature. Additionally, we find that alignment controls the
distribution of extensile stresses in the monolayer by concentrating stress in
low-order regions. These results connect active nematic physics to bacterial
monolayers and can be applied to model bacteria growing on droplets, such as
marine oil-degrading bacteria.
## I Introduction
The role of mechanical forces in bacterial growth is of increasing interest to
both biologists and physicists Persat _et al._ (2015); You _et al._ (2018);
Duvernoy _et al._ (2018); Boyer _et al._ (2011); Grant _et al._ (2014);
Volfson _et al._ (2008). Bacteria colonize a wide variety of interfaces
(liquid-solid, liquid-air, liquid-liquid) with vastly different properties
Marshall (1986); Krajnc _et al._ (2022); Conrad (2020); Niepa _et al._
(2017). To thrive under these diverse conditions, cells must contend with
physical forces such as surface tension and hydrodynamic interactions to
stably adhere to the interface over many generations. Bacteria growing on a
flat interface as a monolayer of cells have been successfully modeled as an
active nematic material. This is a widely studied class of active liquid
crystal whose components align parallel to one another and have end-to-end
symmetry You _et al._ (2018); Dell’Arciprete _et al._ (2018). Rod-shaped
bacteria cells have the required symmetry and, when in a dense monolayer,
align due to steric interactions between cells. Extensile activity can be
produced by motility Copenhagen _et al._ (2021) or by the growth and division
of the cells Dell’Arciprete _et al._ (2018). In either case, forces in the
monolayer are strongly coupled to alignment, as both motility and growth exert
forces along the axis of orientation.
The active nematic model of bacterial monolayers has proven powerful in
predicting the internal forces and behavior of real systems such as monolayers
of gliding Myxobacteria and chaining biofilms Copenhagen _et al._ (2021);
Yaman _et al._ (2019). In particular, the behavior of cells near topological
defects has been successfully tied to the behavior of other well-studied
active nematic systems. Topological defects are singularities in the director
field of liquid crystal alignment, points around which there is a net rotation
of the director; this rotation is defined as the defect’s charge. In active
nematic materials, defects are almost always limited to charges of $\pm 1/2$
Giomi _et al._ (2013). These singularities drive much of the unique behavior
of active nematics. Comet-shaped +1/2 defects move as motile particles that
generate complex flows Giomi _et al._ (2013); Thampi _et al._ (2013), and
material accumulates at positively charged defects and depletes at negatively
charged defects, allowing 2D materials to escape into the third dimension by
multilayering or buckling Guillamat _et al._ (2022); Saw _et al._ (2017);
Endresen _et al._ (2021); Turiv _et al._ (2020); Kawaguchi _et al._ (2017).
This last effect has been observed to drive transitions from monolayers to
multilayered 3D structures Copenhagen _et al._ (2021); Yaman _et al._
(2019).
Previous work on monolayers of rod-shaped bacteria has indicated that when the
cells are hard rods (rather than the flexible rods observed in species such as
Myxobacteria), their alignment behavior changes. In this case, working with
bacteria such as E. coli, cells have been observed to segregate into
microdomains, regions of near-parallel local alignment analogous to grains in
crystalline or granular materials You _et al._ (2018). Rather than having a
continuous, gradually changing alignment field, these systems exhibit sharp
changes in alignment across the boundaries between microdomains. This
represents a fundamentally different type of liquid crystal behavior from that
observed in active nematics composed of microtubules or flexible cells. In
these systems, microdomains and their boundaries can replace topological
defects as a way of mapping alignment You _et al._ (2018).
The alignment behavior of simulated and microtubule-based active nematic
materials is well known to change based on the curvature of their substrate.
Topological defects respond to curvature, with $+1/2$ ($-1/2$) defects
accumulating at regions of positive (negative) Gaussian curvature Alaimo _et
al._ (2017); Ellis _et al._ (2018); Nestler and Voigt (2022). However, it is
currently unknown what effect curvature might have on the alignment of a more
granular hard-cell system. To understand how stresses behave in a curved
growing monolayer (for example, one growing on a droplet of oil in water), it
is crucial to first understand the alignment behavior of its cells.
To investigate this issue, we simulated the growth of rod-shaped cells on
spherical surfaces. This enabled an analysis of how alignment responds to
curvature and how stresses in turn respond to alignment. We find that both
cell aspect ratio and surface curvature play a role in controlling the length
scale of alignment, with higher curvature substrates limiting alignment.
Additionally, we find that regions of high stress are predicted by the
orientational order, a general measure of low local alignment. These results
enable predictions of how granular monolayer behavior might vary on
differently curved surfaces.
## II Methods
### II.1 Experiments of bacterial growth at flat liquid interfaces
Cell cultures of _A. borkumensis_ were grown for 24 hours in ATCC medium 2698
at 30 °C in an orbital shaker at 180 RPM. Cells were non-motile and rod-shaped
with an average length of 2.7 µm and width of 0.7 µm. To observe bacterial
growth at oil-water interfaces, a custom microfluidic device was used. A flat
oil-water interface was pinned to a microscope slide by a thin copper TEM grid
(18 µm, SPI Supplies, 2010C-XA) with square apertures 205 µm wide. The cell
culture was injected above this, allowing cells to adsorb on the interface.
Then, a microfluidic chamber was constructed around the grid to house the
interface and allow for constant flow of growth media (diluted 10:1 with
artificial seawater) at 2 µL min-1. This flow prevented additional cells from
settling on the grid during the experiment.
Time-lapse phase contrast microscopy was used to image the growing cell colony
for 8 hours using a $60\times$ objective (NA $=0.6$, depth of field 2 µm).
A single square aperture in the TEM grid was imaged in an experiment, selected
for clarity and lack of visible contaminants. Images were recorded with a 50
ms exposure time at 2-minute intervals for 24 hours.
### II.2 Simulations of bacterial growth on flat and curved substrates
Molecular dynamics simulations were conducted to obtain precise quantitative
data on the physical characteristics of growing bacterial monolayers on flat
and curved surfaces. Each cell was modeled as a spherocylinder with a diameter
$d_{0}$ and length $l$ between the endcaps. To simulate cell growth, the
length of the cylinder $l$ increased linearly with time up to a maximum length
$l_{0}$ while the diameter remained fixed at $d_{0}$. The changes in position
$\vec{x}$ and orientation $\theta$ of each cell were modeled by the overdamped
equations of motion:
$\frac{d\vec{x}}{dt}=\frac{1}{l\zeta}\vec{F}$ (1)
$\frac{d\theta}{dt}=\frac{1}{l^{3}\zeta}\tau$ (2)
where $\zeta$ is the effective viscosity of the interface based on the
interaction between the bacterium and its surroundings and $\vec{F}$ and
$\tau$ are the total force and torque on the cell due to cell interaction
forces. The interactions between cells were modeled as Hertzian forces, with
the force on cell $i$ due to cell $j$ calculated as follows:
$\vec{F_{ij}}=Yd_{0}^{1/2}h_{ij}^{3/2}\vec{N_{ij}}$ (3)
where $Y$ is proportional to the Young’s modulus of a cell, $h_{ij}$ is the
overlap distance between the two cell bodies, and $\vec{N_{ij}}$ is the vector
normal to cell $j$ at the point of contact Orozco-Fuentes and Boyer (2013);
You _et al._ (2018); Hertz (1882).
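The force law of Eq. (3) and the overdamped updates of Eqs. (1) and (2) can be sketched in code. In the Python sketch below, the segment-closest-point helper and the demonstration geometry are our own additions (a standard geometric routine, not part of the paper's description); the parameter defaults follow the values quoted later in this section.

```python
import numpy as np

def closest_points(p1, q1, p2, q2):
    """Closest points between segments p1-q1 and p2-q2 (standard
    geometric routine; assumes nondegenerate segments)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e = d1 @ d1, d2 @ d2
    b, c, f = d1 @ d2, d1 @ r, d2 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return p1 + s * d1, p2 + t * d2

def hertz_force(xi, ti, li, xj, tj, lj, d0=0.7, Y=4.0):
    """Force on cell i due to cell j, Eq. (3): F = Y d0^(1/2) h^(3/2) N,
    with the overlap h measured between the spherocylinder axes."""
    ui = np.array([np.cos(ti), np.sin(ti)])
    uj = np.array([np.cos(tj), np.sin(tj)])
    ci, cj = closest_points(xi - ui * li / 2, xi + ui * li / 2,
                            xj - uj * lj / 2, xj + uj * lj / 2)
    sep = ci - cj
    dist = np.linalg.norm(sep)
    h = d0 - dist                       # overlap of the two cell bodies
    if h <= 0.0:
        return np.zeros(2), ci          # no contact
    N = sep / dist                      # normal to cell j at the contact
    return Y * np.sqrt(d0) * h ** 1.5 * N, ci

def step(x, theta, l, F, tau, dt, zeta=200.0):
    """Overdamped position/orientation update, Eqs. (1)-(2)."""
    return x + dt * F / (l * zeta), theta + dt * tau / (l ** 3 * zeta)

# demo: two parallel cells whose axes are 0.5 um apart (overlap h = 0.2 um)
xi, xj = np.array([0.0, 0.0]), np.array([0.0, 0.5])
F, contact = hertz_force(xi, 0.0, 2.0, xj, 0.0, 2.0)
arm = contact - xi
tau = arm[0] * F[1] - arm[1] * F[0]     # scalar torque in 2D
x_new, th_new = step(xi, 0.0, 2.0, F, tau, dt=5e-6)
assert F[1] < 0.0                       # cell i is pushed away from cell j
```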
Cell growth was modeled using a time-independent rate $g_{0}$. To prevent
synchronized division, each cell was assigned a random growth rate between
$g_{0}/2$ and $3g_{0}/2$. Whenever a cell's length $l$ exceeded the maximum
length $l_{0}$, the cell divided into two identical cells, each with length
$(l_{0}-d_{0})/2$. The new cell was initialized with the same orientation
$\theta$ as the original cell but was assigned a different randomized growth
rate.
Simulations of cell growth on flat substrates were initialized with a single
parent cell, whereas simulations of growth on spherical substrates were
initialized with two parent cells, one located at the north pole and the other
at the south pole of the sphere. This resulted in two hemispherical colonies
growing until contact was made at the equator, after which, the distribution
of cells rapidly became homogeneous across the entire surface. Data was
collected on fully-covered spheres once they had reached a packing fraction of
$\phi=1.05$, where $\phi$ was defined as the area fraction of the surface
covered by cells. In all simulations, the center of volume of each cell was
constrained to lie on the substrate, whether flat or spherical, and the cell
orientation was constrained parallel to the surface (or to the tangent plane
at the point of contact, for spherical substrates). Therefore, no out-of-plane
motion was allowed.
Simulation model parameter values were chosen to be representative of a
generic gram-negative, rod-shaped bacterium, including those for the _A.
borkumensis_ cells used in experiments at flat liquid interfaces You _et al._
(2018). Therefore, the values were set to the following: cell diameter $d_{0}$
to 0.7 µm, Young’s modulus $Y$ to 4 MPa, drag per length $\zeta$ to 200 Pa h,
and growth rate $g_{0}$ to 2 µm h-1. A simulation time step of $5\times
10^{-6}$ hours was used. To study the effect of different cell aspect ratios,
the maximum growth length allowed was varied between $2<l_{0}<5$ µm. Cell
elongation was parametrized with the aspect ratio $\alpha$, defined here as
$\alpha=(l_{0}+d_{0})/d_{0}$, therefore, ranging from $3.9<\alpha<8.1$. To
study the effect of varying substrate curvature $\kappa$, spherical substrates
with radius $R=10,12,15,20,\ \text{and}\ 30$ µm were used. Here, substrate
curvature is defined as the Gaussian curvature, or $\kappa=R^{-2}$, for a
spherical surface.
## III Results
Figure 1: Formation of a bacterial monolayer due to growth at flat and curved
substrates. (a) Micrographs of _A. borkumensis_ cells growing on a flat liquid
interface at (top) early times and (bottom) up to full surface coverage. (b,
c) Simulations of cell growth on (b) flat and (c) spherical substrates with
$R=20$ µm at (top) early times and (bottom) up to full surface coverage.
Colors correspond to the cell orientation angle. All scale bars shown
represent 20 µm. See Supplementary videos SV1, SV2, and SV3.
In both experiments and simulations, bacteria grow and divide at the surface,
forming a monolayer that eventually covers the entire available surface area,
shown in Fig. 1. At early times, single cells grow and divide to form
colonies, shown in Fig. 1 (top row). At later times, as the cells continue to
grow and divide, their contact forces and torques cause them to align with
their neighbors to form a liquid crystal with nematic symmetry, shown in Fig 1
(bottom row). As cells grow, the area covered by the colony increases
exponentially with time until the available surface area is fully covered.
These behaviors are consistently observed in experiments of growth on flat
interfaces and in simulations of growth on flat and spherical substrates.
To establish correspondence between experiments and simulations, the
distribution of topological defects in cell monolayers on flat interfaces and
substrates, respectively, are compared. First, to identify topological
defects, a director field was established. In experiments, the director field
was determined using a custom image-processing algorithm based on the
brightness gradient of the phase contrast images. In simulations, the director
field was generated directly from the position and orientation of each cell.
Then, to locate topological defects within the director field, each point on
the grid (or each cell in simulations) was tested for a net rotation of the
surrounding director field (with net rotations of $\pm\pi/2$ corresponding to
defect charges of $\pm 1/2$) DeCamp _et al._ (2015). Nearby points with
similar net rotations were then separated into clusters, with the centroid of
each cluster corresponding to a defect of the associated charge.
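The net-rotation test can be written down concisely. A Python sketch (the sampling loop and the angle-wrapping convention below are illustrative choices, not the authors' code) that returns the charge from director angles sampled counterclockwise on a small loop around a grid point:

```python
import numpy as np

def defect_charge(thetas):
    """Winding number of a nematic director (angles defined mod pi)
    sampled counterclockwise around a closed loop; net rotations of
    +-pi correspond to defect charges of +-1/2."""
    total = 0.0
    for a, b in zip(thetas, np.roll(thetas, -1)):
        # smallest rotation identifying theta with theta + pi,
        # wrapped into [-pi/2, pi/2)
        total += (b - a + np.pi / 2) % np.pi - np.pi / 2
    return total / (2 * np.pi)

phis = np.linspace(0, 2 * np.pi, 8, endpoint=False)
assert abs(defect_charge(phis / 2) - 0.5) < 1e-9    # comet-shaped +1/2
assert abs(defect_charge(-phis / 2) + 0.5) < 1e-9   # -1/2 defect
assert abs(defect_charge(np.full(8, 0.3))) < 1e-9   # uniform field
```

The ambiguous case of an exact half-turn between consecutive samples is resolved arbitrarily by the wrapping convention; finer sampling avoids it in practice.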
The distribution of topological defects in experiments at flat interfaces
corresponds to that in simulations of growing cells with the same dimensions
as _A. borkumensis_ on flat substrates, shown in Fig. 2 (top row, flat) and
Supplementary Fig. S1. This is determined using the mean defect separation,
defined as the average over the three nearest-neighbor distances for all
defects. In experiments, the mean defect separation is $8.9\pm 3.1$ µm, while
in simulations, the mean defect separation is $8.17\pm 2.9$ µm.
Based on the good qualitative and quantitative agreement between experiments
and simulations, the effect of cell aspect ratio $\alpha$ and substrate
curvature $\kappa$ on orientational order, the degree of alignment, and stress
within the cell monolayer was investigated using simulations. First, the
orientational order $S$ was evaluated to measure the degree of local alignment
in the monolayer. This was calculated at each individual cell $i$ using the
following equation:
$S_{i}=\frac{1}{N_{i}}\sum_{j}\frac{1}{2}(3\cos^{2}(\theta_{i}-\theta_{j})-1)$ (4)
where the sum runs over the $N_{i}$ cells $j$ within a search radius of cell
$i$ and $\theta_{j}$ is the orientation of cell $j$. For this analysis, the
search radius was set equal to the division length $l_{0}$ of the cells. Here,
$S\rightarrow 1$ indicates ordered regions while $S\rightarrow 0$ indicates
disordered regions.
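For concreteness, the per-cell order parameter can be computed as below (a Python sketch; the average over the $N_i$ neighbors in the search radius is made explicit so that $S\to 1$ for perfect local alignment, and the fallback value for isolated cells is our own convention):

```python
import numpy as np

def local_order(positions, thetas, r):
    """Per-cell orientational order: Eq. (4) averaged over the N_i
    neighbors within search radius r (isolated cells are assigned
    S = 1 by convention here)."""
    positions = np.asarray(positions, dtype=float)
    thetas = np.asarray(thetas, dtype=float)
    S = np.empty(len(thetas))
    for i in range(len(thetas)):
        near = np.linalg.norm(positions - positions[i], axis=1) <= r
        near[i] = False                 # exclude the cell itself
        if not near.any():
            S[i] = 1.0
            continue
        S[i] = np.mean(0.5 * (3 * np.cos(thetas[near] - thetas[i]) ** 2 - 1))
    return S

# a perfectly aligned patch has S = 1 everywhere
S = local_order([[0, 0], [0.5, 0], [1, 0]], [0.2, 0.2, 0.2], r=2.0)
assert np.allclose(S, 1.0)
```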
Simulations reveal regions of high order separated by regions of low order,
shown in Fig. 2 (top row). These regions of high cell alignment emerge for all
combinations of substrate curvature and cell aspect ratio, including flat
substrates. For a given curvature, increasing the cell aspect ratio from
$\alpha=4.9$ to $\alpha=6.7$ increases the size of these regions, shown in
Fig. 2 ($R=20$ µm). Additionally, topological defects tend to coincide with
areas of low order, shown in Fig. 2 (top row). This follows from the
definition of defects, as the net rotation of the director field around the
defect requires imperfect alignment. Therefore, increasing the cell aspect
ratio also increases the mean separation between $\pm 1/2$ defects, as shown
in Supplementary Fig. S2.
Figure 2: Orientational order, topological defects, and microdomains in
simulations of monolayers with varying cell aspect ratio $\alpha$ and
substrate radii $R$. (Top row) Orientational order S and topological defects
with charge $\pm 1/2$ shown in red/blue, respectively. Topological defects
tend to occur in regions of low orientational order for all aspect ratios and
substrate radii. (Bottom row) Microdomain representation of the cell
monolayers shown in the top row. Microdomain boundaries tend to coincide with
regions of low orientational order for all aspect ratios and substrate radii.
For visualization purposes, spherical substrates ($R=10$ µm and $R=20$ µm) are
not shown to scale while the field of view for the flat simulations (leftmost
column) is $50\times 50$ µm.
Next, the mean microdomain area $\langle A\rangle$ was calculated to
characterize the degree of cell alignment in the monolayer. Boundaries between
microdomains are one-dimensional discontinuities in the alignment field rather
than point defects, separating regions of near-parallel alignment. Cells were
sorted into microdomains using the following two criteria: (i) cells were in
contact with one another and (ii) their orientation differed by less than 0.2
radians.
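The two-criterion sorting above is a connected-components problem and can be sketched with a union-find pass over contact pairs. The contact list is assumed to come from the simulation's own contact detection, and folding the angle difference into $[0,\pi/2]$ (nematic head-tail symmetry) is our reading of "orientation differed by less than 0.2 radians":

```python
import numpy as np

def microdomains(contacts, thetas, tol=0.2):
    """Label cells by microdomain: cells i, j are merged if they
    are in contact and their (nematic) orientation difference is
    below `tol` radians.  `contacts` is an iterable of (i, j)
    index pairs supplied by the simulation (assumed given)."""
    parent = list(range(len(thetas)))

    def find(i):
        # path-halving union-find lookup
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in contacts:
        dth = abs(thetas[i] - thetas[j]) % np.pi
        dth = min(dth, np.pi - dth)   # nematic symmetry (assumption)
        if dth < tol:
            parent[find(i)] = find(j)

    return [find(i) for i in range(len(thetas))]
```

Cells sharing a label form one microdomain; domain areas then follow by summing member cell areas.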
Microdomains, represented as differently colored regions, correspond to
regions of highly aligned cells, as shown in Fig. 2 (bottom row). Borders between
large microdomains correspond to regions of low order, reflecting the
discontinuity in alignment between microdomains. Similarly, for a given
curvature, increasing the cell aspect ratio increases the size of a single
microdomain, shown in Fig. 2 ($R=20$ µm). The area of a microdomain is given
as $A=\phi\sum{A_{i}}$, where $A_{i}$ are the areas of each cell component
within the microdomain. For all substrate curvatures and aspect ratios
investigated here, the distribution of microdomain areas within the monolayer
is described by $P(A)\propto\exp(-A/\langle A\rangle)$, where $\langle A\rangle$ is the mean microdomain area for that monolayer, shown in Supplementary Fig. S3, consistent with You _et al._ (2018). The mean microdomain area for a system is therefore used to characterize the area scale of its cell alignment.
The mean microdomain area increased with increasing cell aspect ratio, shown
in Fig. 3(a). For the lowest curvature ($\kappa=0.001$, $R=30$ µm), the effect of cell aspect ratio was most dramatic, with a seven-fold increase in the mean domain area. For the highest curvature ($\kappa=0.01$, $R=10$ µm), however, the effect of cell aspect ratio was less pronounced, producing only a two-fold increase in the domain area. In fact, the dependence of
mean domain area on aspect ratio could be described with a power law relation
in the range of aspect ratios investigated, with scaling exponents increasing
from 1.2 for $\kappa=0.01$ up to 2.9 for $\kappa=0.001$. The increase in
alignment at higher $\alpha$ is consistent with previous work on bacterial
monolayers, which has shown that more elongated (higher aspect ratio) cells
produce stronger alignment in flat monolayers You _et al._ (2018).
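The reported scaling exponents correspond to a standard linear fit in log-log coordinates; a minimal sketch (with illustrative values, not the paper's data):

```python
import numpy as np

def powerlaw_exponent(alpha, mean_area):
    """Scaling exponent b in <A> ~ alpha^b, from a degree-1
    least-squares fit of log<A> against log(alpha)."""
    b, _intercept = np.polyfit(np.log(alpha), np.log(mean_area), 1)
    return b
```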
Figure 3: The effect of cell aspect ratio and substrate curvature on
microdomain area. (a) Log-log plot of mean microdomain area $\langle A\rangle$
for different cell aspect ratios $\alpha$ on five different substrate
curvatures. Microdomain area increases with the aspect ratio for all
curvatures. (b) Semilog plot of mean microdomain area $\langle A\rangle$ for
different substrate curvatures $\kappa$ and four different cell aspect ratios.
Microdomain area decreases with substrate curvature for all aspect ratios. (c)
Summary of the mean microdomain area $\langle A\rangle$ across the simulated
parameter space of aspect ratio and substrate curvature.
The mean microdomain area decreased with increasing substrate curvature, shown
in Fig. 3(b). For low aspect ratios ($\alpha\leq 5$), an inverse exponential
form with $\langle A\rangle\propto$ exp($-\kappa/\kappa_{0}$) accurately
describes the relation between area and curvature. Specifically, for $\alpha$
of $3.9$ and $4.9$, the value of $\kappa_{0}$ was $0.039$ µm$^{-2}$ and $0.020$ µm$^{-2}$, respectively. For high aspect ratios ($\alpha>5$), the inverse
exponential does not accurately describe the relation between area and
curvature, demonstrating a qualitative change in the system’s alignment
behavior. In all cases, however, more curved substrates produced consistently
lower microdomain areas.
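The decay constant $\kappa_{0}$ in $\langle A\rangle\propto\exp(-\kappa/\kappa_{0})$ follows from a semilog linear fit; a sketch with illustrative values:

```python
import numpy as np

def kappa0_from_fit(kappa, mean_area):
    """Decay constant kappa_0 in <A> ~ exp(-kappa/kappa_0), from a
    degree-1 fit of log<A> against kappa (slope = -1/kappa_0)."""
    slope, _intercept = np.polyfit(np.asarray(kappa, dtype=float),
                                   np.log(mean_area), 1)
    return -1.0 / slope
```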
Measurements of mean domain area across a range of aspect ratios and
curvatures were combined to show the system’s response over the
$\alpha-\kappa$ parameter space, shown in Fig. 3(c). The combined effect of
cell aspect ratio and substrate curvature on microdomain area is evident. The
largest domain areas are observed at high cell aspect ratios and low substrate
curvatures. The smallest domain areas, however, are observed at low cell
aspect ratios and high substrate curvatures.
Lastly, the parallel component of the Virial stress $\sigma_{\parallel}$ on
each cell was measured to determine the force distributions in the monolayer.
The Virial stress $\bm{\sigma}_{i}$ on a cell is given as follows:
$\bm{\sigma}_{i}=\frac{\phi}{a_{i}}\sum_{j}\bm{r}_{ij}\bm{F}_{ij}$ (5)
where $a_{i}$ is the area of cell $i$, $\bm{r}_{ij}$ is the vector from the
center of cell $i$ to the point of contact with cell $j$, and $\bm{F}_{ij}$ is
the force from cell $j$ on cell $i$. When $\bm{\sigma}$ is calculated in the
basis of vectors parallel and perpendicular to the cell’s orientation, it can
be decomposed into parallel ($\sigma_{\parallel}$), perpendicular
($\sigma_{\perp}$), and shear ($\tau$) components:
$\bm{\sigma}=\begin{bmatrix}\sigma_{\parallel}&\tau/2\\ \tau/2&\sigma_{\perp}\end{bmatrix}$
The parallel stress $\sigma_{\parallel}$ corresponds to the force in the
direction of the cells’ growth. Because of the extensile nature of the system,
$\sigma_{\parallel}$ is always negative (corresponding to a compressive force)
You _et al._ (2018).
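Eq. (5) and the decomposition above can be sketched per cell as follows. Reading the shear as $\tau/2$ on each off-diagonal entry, we report $\tau$ as the symmetrized off-diagonal sum; function and argument names are illustrative:

```python
import numpy as np

def cell_stress(phi, area, r_vecs, f_vecs, theta):
    """Virial stress tensor of one cell (Eq. 5), returned as its
    components in the frame aligned with the cell orientation theta.
    r_vecs[j]: vector from the cell center to contact point j;
    f_vecs[j]: force exerted at contact j."""
    sigma = (phi / area) * sum(np.outer(r, f)
                               for r, f in zip(r_vecs, f_vecs))
    # basis vectors parallel / perpendicular to the cell axis
    e_par = np.array([np.cos(theta), np.sin(theta)])
    e_perp = np.array([-np.sin(theta), np.cos(theta)])
    R = np.stack([e_par, e_perp])
    s = R @ sigma @ R.T          # rotate tensor into the cell frame
    sigma_par, sigma_perp = s[0, 0], s[1, 1]
    tau = s[0, 1] + s[1, 0]      # off-diagonals are tau/2 each
    return sigma_par, sigma_perp, tau
```

A head-on compressive contact along the cell axis yields a negative $\sigma_{\parallel}$ and zero shear, matching the extensile/compressive sign convention in the text.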
Figure 4: Parallel component of the Virial stress $\sigma_{\parallel}$ in
growing cell monolayers on curved surfaces. (a) Visualization of the
normalized parallel stress in cell monolayers on a curved surface with $R=20$
µm and for cell aspect ratios of $\alpha=4.9$ and $6.7$. (b) Scatter plot of
the normalized parallel stress of individual cells and their local
orientational order for the case of $R=20$ µm and $\alpha=4.9$. The trendline
shows the data binned by orientational order, and the error bars represent the standard error. (c) Relation between the normalized parallel stress magnitude
and the orientational order for differing aspect ratios. Average parallel
stress magnitude increases by up to 30% in the lowest-order bin ($S<0.1$). (d)
Relation between the parallel stress magnitude and the orientational order for
differing surface curvatures. (e) Deviation from mean parallel stress at
topological defects ($+1/2$ in red, $-1/2$ in blue) and in low-order regions
($S<0.1$ in magenta). Low-order regions have a greater deviation from the mean
stress than topological defects for all curvatures and aspect ratios.
The parallel component of the Virial stress, normalized by the mean stress,
for each cell in the monolayer on a curved substrate with $R=20$ µm is
visualized and shown in Fig. 4(a). At the cell level, the normalized stress
varied over a wide range of values. For example, for $\alpha=4.9$ and $R=20$ µm, the mean stress was $\langle\sigma_{\parallel}\rangle=-0.043$ N m$^{-1}$ and the normalized stress varied from a minimum of zero up to a maximum of $\sim
3.8\langle\sigma_{\parallel}\rangle$. For an aspect ratio of $\alpha=6.7$ and
the same surface curvature, the mean stress was
$\langle\sigma_{\parallel}\rangle=-0.035$ N m$^{-1}$ and the normalized stress varied
from a minimum of zero up to a maximum of $\sim
4.4\langle\sigma_{\parallel}\rangle$. Stress in growing monolayers with
varying values of $\alpha$ and $\kappa$ can be seen in Supplementary videos
SV4-SV7.
A scatter plot of the normalized parallel component of Virial stress and the
orientational order for a representative simulation ($R=20$ µm, $\alpha=4.9$) is
shown in Fig. 4(b), grey points. It is evident that, at the individual cell
level, stress values can vary over a wide range with respect to the average
value of stress. While no trend is immediately visible in the scattered data,
binning the data by the orientational order $S$ reveals a relationship between
the normalized stress $\sigma_{\parallel}/\langle\sigma_{\parallel}\rangle$
and orientational order $S$. First, five evenly spaced bins were determined by
identifying the minimum $S_{min}$ and maximum $S_{max}$ values of the
orientational order for each simulation. Then, the mean orientational order
and mean parallel stress were computed for each bin. The result is shown in
Fig. 4(b), red line. For this example ($R=20$ µm, $\alpha=4.9$), the binned data show that cells in low-order regions experience a higher magnitude of parallel stress than cells in high-order regions.
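The binning procedure described above (five evenly spaced bins between the per-simulation minimum and maximum of $S$, then per-bin means) can be sketched as:

```python
import numpy as np

def bin_by_order(S, sigma_norm, nbins=5):
    """Mean orientational order and mean normalized parallel stress
    in `nbins` evenly spaced bins between min(S) and max(S)."""
    S = np.asarray(S, dtype=float)
    sig = np.asarray(sigma_norm, dtype=float)
    edges = np.linspace(S.min(), S.max(), nbins + 1)
    # digitize returns 1..nbins+1; shift and clip so the maximum
    # value falls in the last bin
    idx = np.clip(np.digitize(S, edges) - 1, 0, nbins - 1)
    S_mean = np.array([S[idx == b].mean() if (idx == b).any()
                       else np.nan for b in range(nbins)])
    sig_mean = np.array([sig[idx == b].mean() if (idx == b).any()
                         else np.nan for b in range(nbins)])
    return S_mean, sig_mean
```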
The trend of higher-magnitude stress at lower-order regions is a robust
observation for all simulations with varying cell aspect ratio and substrate
curvature. Plots of normalized parallel stress binned by order and averaged
across all simulations with the same aspect ratio or surface curvature are
shown in Fig. 4(c) and 4(d), respectively. For all cases, the average
normalized stress decreases with increasing orientational order. Specifically,
in regions of high orientational order ($S>0.5$), the local average of
normalized stress approaches unity, or
$\sigma_{\parallel}\approx\langle\sigma_{\parallel}\rangle$. In regions of low
orientational order, however, ($S<0.1$), the local average of normalized
stress is 15% to 35% larger than the global average stress, or
$\sigma_{\parallel}\geq 1.15\langle\sigma_{\parallel}\rangle$. The large standard error for $\alpha=8.1$ and $S<0.1$ arises because high-aspect-ratio cells are more aligned on average, leaving few cells in low-order regions to analyze.
The stress in other active nematics such as those composed of epithelial cells
is higher near $+1/2$ topological defects, which themselves are low-order
regions by definition Guillamat _et al._ (2022). To investigate whether this
effect was responsible for the correlation of high stress and low order in
bacterial monolayers, stress near topological defects was compared to the
previously calculated stress in low-order regions ($S<0.1$). The mean value of
the parallel component of the Virial stress ($\sigma_{\parallel}$) was
computed for all cells within close proximity ($r=l_{d}/2$) of a $\pm 1/2$
topological defect within the monolayer. Then, this value was compared to the
average normalized parallel stress of the bin with the lowest orientational
order ($S<0.1$) and is shown in Fig. 4(e). For all cases, the average
deviation from the mean stress near $\pm 1/2$ defects is less than 10%, or
$\sigma_{\parallel}\leq 1.1\langle\sigma_{\parallel}\rangle$. In contrast, the
average deviation from the mean stress in regions of low orientational order
($S<0.1$) is always greater than 15%, or $\sigma_{\parallel}\geq
1.15\langle\sigma_{\parallel}\rangle$, for all cell aspect ratios and
substrate curvatures. This shows that the observed high stress-low order
correlation in this system is not caused by stress concentration near
topological defects, and is instead a distinct effect.
## IV Discussion
Previous work on monolayers of growing hard-rod cells has shown that, in the
case of a flat substrate, microdomain size increases with increasing cell
aspect ratio You _et al._ (2018). Our work is consistent with this result,
and further confirms that the trend holds for monolayers growing on curved
substrates as well. Additionally, while previous work demonstrated the
relation between microdomain size and aspect ratio for an unconfined growing
colony You _et al._ (2018), our results show that the same relation is true
for a growing colony confined to a finite substrate area (the surface of a
sphere). Together, these show that microdomain formation and its dependence on
cell aspect ratio are robust collective behaviors in growing hard-rod
monolayers under a variety of conditions.
We also find a new dependence of microdomain area on the curvature of the
surface. Regardless of cell aspect ratio, higher curvature substrates resulted
in smaller microdomains. This decrease in alignment is attributable to the
surface’s curvature restricting cells from lying parallel to each other.
Microdomains can only maintain close cell alignment over a small enough area
to be approximated as locally flat, and a microdomain spanning a much larger
area on a curved surface will eventually be geometrically required to
fracture. As curvature increases, the maximum area that can be treated as
locally flat decreases, and accordingly the domain area decreases as well.
In a continuous (non-granular) active nematic composed of flexible epithelial
cells, extensile stress is highest at positively charged topological defects
Guillamat _et al._ (2022). This causes the cell layer to deform at these
points, producing “mounds” localized near the defects Guillamat _et al._ (2022). Similar deformations are seen in other continuous active nematics at
deformable 2D interfaces, such as microtubules on the surface of a vesicle or
actin fibers in the morphogenesis of multicellular organisms Keber _et al._
(2014); Maroudas-Sacks _et al._ (2021), and this behavior has been further
confirmed and studied in numerous simulations and theoretical works Metselaar
_et al._ (2019); Ruske and Yeomans (2021); Santiago (2018); Vafa and Mahadevan
(2008); Hoffmann _et al._ (2022); Alert (2022). In a granular active nematic,
our simulations show that extensile stress concentrates at all low-order
regions rather than at the locations of point $+1/2$ topological defects. This
can be expected to result in different forms of deformation under growth
stress.
Our results are relevant for systems of bacteria growing at liquid-liquid
interfaces, such as at the oil-water interface of a droplet. For example,
previous work has shown that cell growth confined to the surface of a droplet
produces tube-like protrusions similar to those produced by continuous active
nematics Hickl and Juarez (2022); Prasad _et al._ (2022). However, a complete
theoretical description of droplet deformation by cell growth is still
lacking. Our results suggest that these protrusions should nucleate at low-
order sites such as boundaries between microdomains, rather than nucleating
exclusively at $+1/2$ defects. By extension, this allows us to predict how
interfacial curvature influences deformations due to the underlying
microstructure. For example, higher curvature (smaller) droplets will produce
more closely spaced protrusions since the characteristic size of their
microdomains decreases.
Our results also emphasize the importance of constituent particle properties
on collective behavior. While a hard-rod monolayer exhibits many of the same
properties as a nematic composed of microtubules or other flexible components
Dell’Arciprete _et al._ (2018), its internal forces are not well predicted by
topological defects. Furthermore, a growing self-organized monolayer produces
microdomains that are distinct from the continuous alignment fields of other
active nematic systems. The partially granular nature of a bacterial monolayer
is clearly of great importance to understanding its collective behavior.
In conclusion, we have shown that stress distributions in a hard-rod bacterial
monolayer vary predictably based on cell aspect ratio and substrate curvature.
Specifically, our experimentally validated simulations show that stress in a
hard-rod monolayer concentrates at low-order regions, which occur at the
boundaries of microdomains. The length scale of these microdomains increases
with cell aspect ratio and decreases with substrate curvature. These results
demonstrate that while a bacterial monolayer can often be modeled effectively as a continuum active nematic, when its cells act as hard rods the alignment and stress distributions can behave in distinctly different ways.
## V Conflicts of Interest
There are no conflicts of interest to declare.
## VI Acknowledgements
This work used the eXtreme Science and Engineering Discovery Environment
(XSEDE), which is supported by National Science Foundation grant number
#ACI-1548562. In particular, we used the Pittsburgh Supercomputing Center’s
Bridges-2 resources under allocation ID PHY210132. We thank Vincent Hickl for
assistance with experiments on growing bacterial monolayers at liquid
interfaces.
## References
* Persat _et al._ (2015) A. Persat, C. Nadell, M. Kim, F. Ingremeau, A. Siryaporn, K. Drescher, N. Wingreen, B. Bassler, Z. Gitai, and H. Stone, Cell 161, 988 (2015).
* You _et al._ (2018) Z. You, D. J. G. Pearce, A. Sengupta, and L. Giomi, Phys. Rev. X 8, 031065 (2018).
* Duvernoy _et al._ (2018) M.-C. Duvernoy, T. Mora, M. Ardré, V. Croquette, D. Bensimon, C. Quilliet, J.-M. Ghigo, M. Balland, C. Beloin, S. Lecuyer, and N. Desprat, Nature Communications 9, 1120 (2018).
* Boyer _et al._ (2011) D. Boyer, W. Mather, O. Mondragón-Palomino, S. Orozco-Fuentes, T. Danino, J. Hasty, and L. S. Tsimring, Physical Biology 8, 026008 (2011).
* Grant _et al._ (2014) M. A. A. Grant, B. Wacław, R. J. Allen, and P. Cicuta, Journal of The Royal Society Interface 11, 20140400 (2014).
* Volfson _et al._ (2008) D. Volfson, S. Cookson, J. Hasty, and L. S. Tsimring, Proceedings of the National Academy of Sciences 105, 15346 (2008).
* Marshall (1986) K. Marshall, Advances in Colloid and Interface Science 25, 59 (1986).
* Krajnc _et al._ (2022) M. Krajnc, P. Stefanic, R. Kostanjšek, I. Mandic-Mulec, I. Dogsa, and D. Stopar, npj Biofilms and Microbiomes 8, 25 (2022).
* Conrad (2020) J. C. Conrad, Journal of Industrial Microbiology and Biotechnology 47, 725 (2020).
* Niepa _et al._ (2017) T. H. R. Niepa, L. Vaccari, R. L. Leheny, M. Goulian, D. Lee, and K. J. Stebe, Scientific Reports 7, 17864 (2017).
* Dell’Arciprete _et al._ (2018) D. Dell’Arciprete, M. L. Blow, A. T. Brown, F. D. C. Farrell, J. S. Lintuvuori, A. F. McVey, D. Marenduzzo, and W. C. K. Poon, Nature Communications 9, 4190 (2018).
* Copenhagen _et al._ (2021) K. Copenhagen, R. Alert, N. S. Wingreen, and J. W. Shaevitz, Nature Physics 17, 211 (2021).
* Yaman _et al._ (2019) Y. I. Yaman, E. Demir, R. Vetter, and A. Kocabas, Nature Communications 10, 2285 (2019).
* Giomi _et al._ (2013) L. Giomi, M. J. Bowick, X. Ma, and M. C. Marchetti, Phys. Rev. Lett. 110, 228101 (2013).
* Thampi _et al._ (2013) S. P. Thampi, R. Golestanian, and J. M. Yeomans, Phys. Rev. Lett. 111, 118101 (2013).
* Guillamat _et al._ (2022) P. Guillamat, C. Blanch-Mercader, G. Pernollet, K. Kruse, and A. Roux, Nature Materials 21, 588 (2022).
* Saw _et al._ (2017) T. B. Saw, A. Doostmohammadi, V. Nier, L. Kocgozlu, S. Thampi, Y. Toyama, P. Marcq, C. T. Lim, J. M. Yeomans, and B. Ladoux, Nature 544, 212 (2017).
* Endresen _et al._ (2021) K. D. Endresen, M. Kim, M. Pittman, Y. Chen, and F. Serra, Soft Matter 17, 5878 (2021).
* Turiv _et al._ (2020) T. Turiv, J. Krieger, G. Babakhanova, H. Yu, S. V. Shiyanovskii, Q.-H. Wei, M.-H. Kim, and O. D. Lavrentovich, Science Advances 6, eaaz6485 (2020).
* Kawaguchi _et al._ (2017) K. Kawaguchi, R. Kageyama, and M. Sano, Nature 545, 327 (2017).
* Alaimo _et al._ (2017) F. Alaimo, C. Köhler, and A. Voigt, Scientific Reports 7, 5211 (2017).
* Ellis _et al._ (2018) P. W. Ellis, D. J. G. Pearce, Y.-W. Chang, G. Goldsztein, L. Giomi, and A. Fernandez-Nieves, Nature Physics 14, 85 (2018).
* Nestler and Voigt (2022) M. Nestler and A. Voigt, Communications in Computational Physics 31, 947 (2022).
* Orozco-Fuentes and Boyer (2013) S. Orozco-Fuentes and D. Boyer, Phys. Rev. E 88, 012715 (2013).
* Hertz (1882) H. Hertz, _On the Contact of Rigid Elastic Solids and on Hardness_ (MacMillan, 1882).
* DeCamp _et al._ (2015) S. J. DeCamp, G. S. Redner, A. Baskaran, M. F. Hagan, and Z. Dogic, Nature Materials 14, 1110 (2015).
* Keber _et al._ (2014) F. C. Keber, E. Loiseau, T. Sanchez, S. J. DeCamp, L. Giomi, M. J. Bowick, M. C. Marchetti, Z. Dogic, and A. R. Bausch, Science 345, 1135 (2014).
* Maroudas-Sacks _et al._ (2021) Y. Maroudas-Sacks, L. Garion, L. Shani-Zerbib, A. Livshits, E. Braun, and K. Keren, Nature Physics 17, 251 (2021).
* Metselaar _et al._ (2019) L. Metselaar, J. M. Yeomans, and A. Doostmohammadi, Phys. Rev. Lett. 123, 208001 (2019).
* Ruske and Yeomans (2021) L. J. Ruske and J. M. Yeomans, Phys. Rev. X 11, 021001 (2021).
* Santiago (2018) J. A. Santiago, Phys. Rev. E 97, 052706 (2018).
* Vafa and Mahadevan (2008) F. Vafa and L. Mahadevan, Physical Review Letters 129, 098102 (2008).
* Hoffmann _et al._ (2022) L. A. Hoffmann, L. N. Carenza, J. Eckert, and L. Giomi, Science Advances 8, eabk2712 (2022).
* Alert (2022) R. Alert, Journal of Physics A: Mathematical and Theoretical 55, 234009 (2022).
* Hickl and Juarez (2022) V. Hickl and G. Juarez, Soft Matter 18, 7217 (2022).
* Prasad _et al._ (2022) M. Prasad, N. Obana, S.-Z. Lin, K. Sakai, C. Blanch-Mercader, J. Prost, N. Nomura, J.-F. Rupprecht, J. Fattaccioli, and A. S. Utada, bioRxiv (2022).
# Hyperfiniteness of boundary actions of relatively hyperbolic groups
Chris Karpinski111McGill University. Email:
<EMAIL_ADDRESS>
###### Abstract
We show that if $G$ is a finitely generated group hyperbolic relative to a
finite collection of subgroups $\mathcal{P}$, then the natural action of $G$
on the geodesic boundary of the associated relative Cayley graph induces a
hyperfinite equivalence relation. As a corollary of this, we obtain that the
natural action of $G$ on its Bowditch boundary $\partial(G,\mathcal{P})$ also
induces a hyperfinite equivalence relation. This strengthens a result of Ozawa
obtained for $\mathcal{P}$ consisting of amenable subgroups and uses a recent
work of Marquis and Sabok.
## 1 Introduction
This paper studies equivalence relations induced by boundary actions of
relatively hyperbolic groups. The study of boundary actions began with the
work of Connes, Feldman and Weiss in [5] and Vershik in [21] who studied the
actions of free groups on their boundaries. They showed that for a free group,
its action on the Gromov boundary is $\mu$-hyperfinite for every Borel quasi-
invariant probability measure $\mu$ on the boundary. Adams [1] later
generalized this result to all hyperbolic groups.
Relatively hyperbolic groups were introduced by Gromov [10]; see also the
monograph of Osin [17]. Given a relatively hyperbolic group $G$ with a
collection of parabolic subgroups $\mathcal{P}$ there is a natural boundary
called the Bowditch boundary, denoted $\partial(G,\mathcal{P})$, which is a
compact metrizable space on which $G$ acts naturally by homeomorphisms.
In [18], Ozawa generalized the work of Adams [1] to actions of relatively hyperbolic groups on their Bowditch boundaries. When the parabolic subgroups of $G$ in $\mathcal{P}$ are amenable, Ozawa [18] proved that the action of $G$ on $\partial(G,\mathcal{P})$ is topologically amenable, and, more generally, when the parabolic subgroups are exact, he proved that the group $G$ is exact. Alternative proofs of the exactness of the group were given by Osin
[16] who worked with parabolic subgroups with finite asymptotic dimension and
by Dadarlat and Guentner [6] who worked with parabolic subgroups that are
uniformly embeddable into a Hilbert space.
In [22], Zimmer introduced the notion of amenability of equivalence relations;
see also the work of Connes, Feldman and Weiss [5]. By [2, Theorem 5.1], a
measurable action of a countable group $G$ on a standard probability space
$(X,\mu)$ is $\mu$-amenable if and only if $\mu$-almost all stabilizers are
amenable and the orbit equivalence relation is $\mu$-amenable.
In this paper we generalize the result of Ozawa and work with relatively
hyperbolic groups without any assumptions on the parabolic subgroups. In fact,
we consider boundary actions from the Borel perspective. A countable Borel
equivalence relation is called _hyperfinite_ if it is a countable increasing
union of finite Borel sub-equivalence relations. Dougherty, Jackson and
Kechris showed in [8, Corollary 8.2] that the boundary action of any free
group induces a hyperfinite orbit equivalence relation. The result of
Dougherty, Jackson and Kechris was generalized to cubulated hyperbolic groups
by Huang, Sabok and Shinko in [12], and later to all hyperbolic groups by
Marquis and Sabok in [15]. In this paper, we prove the following:
###### Theorem A.
Let $G$ be a finitely generated group hyperbolic relative to a finite
collection of subgroups $\mathcal{P}$ and let $\hat{\Gamma}$ be the associated
relative Cayley graph. Then the natural action of $G$ on the geodesic boundary
$\partial\hat{\Gamma}$ induces a hyperfinite orbit equivalence relation.
###### Corollary B.
Let $G$ be a finitely generated group hyperbolic relative to a finite
collection of subgroups $\mathcal{P}$. Then the natural action of $G$ on the
Bowditch boundary $\partial(G,\mathcal{P})$ induces a hyperfinite orbit
equivalence relation.
Corollary B in particular strengthens the result of Ozawa [18] in case the
parabolic subgroups are amenable. Indeed, hyperfiniteness implies
$\mu$-amenability for every invariant Borel probability measure $\mu$ and by
[3, Theorem 3.3.7], an action of a countable group on a locally compact space
by homeomorphisms is topologically amenable if and only if it is
$\mu$-amenable for every invariant Borel probability measure $\mu$.
We proceed by following a similar approach to [12] and [15], studying geodesic
ray bundles $\text{Geo}(x,\eta)$ in relative Cayley graphs (Definition 2.2).
For the case of a cubulating hyperbolic group $G$ studied in [12], the crucial
property from which the hyperfiniteness of the boundary action of $G$ follows is
the finite symmetric difference of geodesic ray bundles: for any $x,y\in G$
and any $\eta\in\partial G$, $\text{Geo}(x,\eta)\triangle\text{Geo}(y,\eta)$
is finite (see [12, Theorem 1.4]). In [20], Touikan showed that this symmetric
difference need not be finite in Cayley graphs of general hyperbolic groups,
although in [14], Marquis provides many examples of groups acting
geometrically on locally finite hyperbolic graphs where this finite symmetric
difference property does hold. In [15], Marquis and Sabok define a modified
version of the geodesic ray bundle, denoted $\text{Geo}_{1}(x,\eta)$ for $x\in
G$ and $\eta\in\partial G$ (see [15, Definition 5.5] and Definition 2.6 in our
paper) and show ([15, Theorem 5.9]) that these modified geodesic ray bundles
satisfy a finite symmetric difference property:
$|\text{Geo}_{1}(x,\eta)\triangle\text{Geo}_{1}(y,\eta)|<\infty$ for each
$x,y\in G$ and for each $\eta\in\partial G$. Marquis and Sabok then deduce
hyperfiniteness of the boundary action as a consequence of this finite
symmetric difference property of the modified bundles (see [15, Section 6]).
Local finiteness of the Cayley graph plays a crucial role in establishing the
finite symmetric difference property of the $\text{Geo}_{1}$ bundles in [15].
However, relative Cayley graphs of relatively hyperbolic groups are not
locally finite. To make up for this loss of local finiteness, we rely on
finiteness results about relative Cayley graphs of relatively hyperbolic
groups from [17] (namely, [17, Theorem 3.26]).
We note also that the hyperfiniteness of boundary actions has been studied
beyond relatively hyperbolic groups. Przytycki and Sabok have recently
established the hyperfiniteness of the actions of a mapping class group of an
oriented surface of finite type on the boundaries of the arc graph ([19,
Theorem 1.1]) and the curve graph ([19, Corollary 1.2]) of the surface.
Acknowledgement: I owe great thanks to my advisor Marcin Sabok for his
continuous support, patience and guidance throughout the production of this
work.
## 2 Preliminaries
In this paper, for a hyperbolic metric space $X$, $\partial X$ will denote the
geodesic boundary of $X$. We will also denote $C_{hb}(X)$ the _horoboundary_
of $X$ (see [15, Section 2.4] for a definition of the horoboundary).
### 2.1 Relatively hyperbolic groups
Relatively hyperbolic groups were first introduced by Gromov in his seminal
paper [10] as a generalization of hyperbolic groups. The following definitions
can be found in [17].
Let $G$ be a group generated by a finite set $X$, let
$\mathcal{P}=\\{H_{1},...,H_{n}\\}$ be a collection of subgroups of $G$ and
let $\mathcal{H}=\bigcup\mathcal{P}$. The relative Cayley graph associated to
$X$ and $\mathcal{P}$ is the Cayley graph $\hat{\Gamma}$ with respect to the
generating set $X\cup\mathcal{H}$. This graph can be identified with the
coned-off Cayley graph obtained by starting with the Cayley graph $\Gamma$ of
$G$ with respect to $X$, adjoining to $\Gamma$ a vertex $v_{gH_{i}}$ for each
left coset $gH_{i}$ and connecting each vertex of $gH_{i}$ in $\Gamma$ to
$v_{gH_{i}}$ by an edge of length $\frac{1}{2}$. The notation $d_{X}$ and $d$
refer to the word metrics with respect to the generating sets $X$ and
$X\cup\mathcal{H}$, respectively. We will use the notation $B_{r}^{X}(x)$ to
denote the closed ball of radius $r$ in the metric $d_{X}$ about the point
$x\in G$.
A finitely generated group $G$ is hyperbolic relative to a collection of
subgroups $\mathcal{P}=\\{H_{1},...,H_{n}\\}$ if there exists a finite
generating set $X$ of $G$ such that the associated relative Cayley graph is
hyperbolic and satisfies the bounded coset penetration property (BCP) (see
[17, Definition 6.5] for the definition of the BCP; we will not need to use
the definition of BCP, so we do not define it here). Relative hyperbolicity is
invariant under change of finite generating set by [17, Proposition 2.8].
For a finitely generated group $G$ hyperbolic relative to a finite collection
$\mathcal{P}$ of subgroups, there is a natural compact metrizable space on
which $G$ acts naturally by homeomorphisms, denoted $\partial(G,\mathcal{P})$
and called the Bowditch boundary (see [4, Section 4] for the construction of
the Bowditch boundary). The following theorem is the main ingredient in
establishing Corollary B as a result of Theorem A.
###### Theorem 2.1.
Let $G$ be hyperbolic relative to a finite collection of subgroups
$\mathcal{P}$, with relative Cayley graph $\hat{\Gamma}$. Then
$\partial\hat{\Gamma}$ embeds $G$-equivariantly and homeomorphically into
$\partial(G,\mathcal{P})$ with countable complement.
###### Proof.
In [7, Proposition 1, Section A.2], it is shown that the coned-off Cayley
graph $\hat{\Gamma}$ witnesses the relative hyperbolicity of $G$ with respect
to $\mathcal{P}$ according to Definition 2 of relative hyperbolicity from [4].
Therefore, by [4, Proposition 8.5] and [4, Proposition 9.1],
$\partial\hat{\Gamma}$ embeds $G$-equivariantly and homeomorphically into
$\partial(G,\mathcal{P})$ and $\partial\hat{\Gamma}$ has countable complement
in $\partial(G,\mathcal{P})$.
∎
### 2.2 Combinatorial Geodesic Ray Bundles
Let $X$ be a hyperbolic graph equipped with its natural combinatorial metric
(assigning edges length 1), and denote the vertex set of $X$ by $X^{(0)}$. We
present some definitions and terminology used in [15] that we will use in our
paper. We refer the reader to Sections 3 and 4 of [15] for a further study of
the objects we define in this section.
###### Definition 2.2.
For $x\in X^{(0)}$ and $\eta\in\partial X$, define $\text{CGR}(x,\eta)$ to be the set of all combinatorial geodesic rays (CGRs) based at $x$ and converging to $\eta$, and define the combinatorial geodesic ray bundle $\text{Geo}(x,\eta)=\bigcup\text{CGR}(x,\eta)$ to be the set of all vertices on CGRs in $\text{CGR}(x,\eta)$.
By [15, Lemma 3.2], every CGR $\gamma=(x_{n})_{n}$ converges to some $\xi\in
C_{hb}(X)$. We denote this limit by $\xi_{\gamma}$.
###### Definition 2.3.
Fixing a basepoint $z\in X^{(0)}$, for $\eta\in\partial X$ define the limit
set $\Xi(\eta)=\\{\xi_{\gamma}:\gamma\in\text{CGR}(z,\eta)\\}$.
By [15, Lemma 3.1] (which says that we can move the basepoint of any geodesic
ray to any other basepoint to obtain a geodesic with the same tail), the
definition of $\Xi(\eta)$ is independent of the basepoint (i.e. for any
$z_{1},z_{2}\in X^{(0)}$ and $\xi\in\Xi(\eta)$, we have $\xi=\xi_{\gamma}$ for
some $\gamma\in\text{CGR}(z_{1},\eta)$ if and only if
$\xi=\xi_{\gamma^{\prime}}$ for some
$\gamma^{\prime}\in\text{CGR}(z_{2},\eta)$).
###### Definition 2.4.
For $x\in X^{(0)},\eta\in\partial X$ and $\xi\in\Xi(\eta)$, define the
combinatorial sector $Q(x,\xi)=\\{y\in X^{(0)}:y\in\gamma\text{ for some
}\gamma\in\text{CGR}(x,\eta)\text{ with }\xi_{\gamma}=\xi\\}$.
###### Definition 2.5.
For $\eta\in\partial X$, a vertex $x\in X^{(0)}$ is $\mathbf{\eta}$-special if
$\bigcap_{\xi\in\Xi(\eta)}Q(x,\xi)$ contains a CGR $\gamma$. The set of all
$\mathbf{\eta}$-special vertices is denoted $X_{s,\eta}$.
By [15, Lemma 4.7], if $x\in X_{s,\eta}$, then there exists a unique
$\xi^{\prime}\in\Xi(\eta)$ such that
$\bigcap_{\xi\in\Xi(\eta)}Q(x,\xi)=Q(x,\xi^{\prime})$. We denote such
$\xi^{\prime}$ by $\xi^{\prime}=\xi_{x,\eta}$.
Our main objects of interest will be the following modified geodesic ray
bundles, first defined in [15, Definition 5.5].
###### Definition 2.6.
Let $x\in X^{(0)}$ and $\eta\in\partial X$. For $\xi\in\Xi(\eta)$, let
$Y(x,\xi)$ be the set of $\eta$-special vertices $y\in\text{Geo}(x,\eta)$ with
$\xi_{y,\eta}=\xi$ at minimal distance from $x$. Put
$\text{Geo}_{1}(x,\eta)=\bigcup_{\xi\in\Xi(\eta)}\bigcup_{y\in Y(x,\xi)}Q(y,\xi).$
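To make the union structure of this definition concrete, here is a minimal sketch over a toy finite model (the names `Xi_eta`, `Y`, `Q` and all data are hypothetical; in the graph itself these sets are typically infinite):

```python
def geo1(Xi_eta, Y, Q):
    """Toy finite model of Definition 2.6: Xi_eta plays the role of the
    limit set, Y[xi] the set Y(x, xi), and Q[(y, xi)] the combinatorial
    sector Q(y, xi).  Geo_1(x, eta) is the union of the sectors Q(y, xi)
    over all xi in Xi(eta) and all y in Y(x, xi)."""
    return set().union(*(Q[(y, xi)] for xi in Xi_eta for y in Y[xi]))

# A two-point limit set with three minimal special vertices (all hypothetical).
Xi_eta = {"xi1", "xi2"}
Y = {"xi1": {"a"}, "xi2": {"b", "c"}}
Q = {("a", "xi1"): {"a", "d"}, ("b", "xi2"): {"b"}, ("c", "xi2"): {"c", "e"}}
print(geo1(Xi_eta, Y, Q))  # the set {'a', 'b', 'c', 'd', 'e'} (print order may vary)
```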
## 3 Geodesic Ray Bundles in Relatively Hyperbolic Groups
In this section, we examine modified geodesic ray bundles in the relative
Cayley graph $\hat{\Gamma}$ and prove that these modified bundles have finite
symmetric difference for a fixed boundary point. This section generalizes [15,
Theorem 5.9].
We begin by showing that $|\\{\gamma(i):\gamma\in\text{CGR}(x,\eta)\\}|$ is
uniformly bounded for each $i$, each $x\in G$ and each
$\eta\in\partial\hat{\Gamma}$, which is a well-known property in any uniformly
locally finite hyperbolic graph.
We will make use of the following result, which states that geodesic triangles
in the relative Cayley graph are slim with respect to the metric $d_{X}$ for
some finite generating set $X$.
###### Theorem 3.1.
Let $G$ be a finitely generated group hyperbolic relative to a collection of
subgroups $\\{H_{1},...,H_{n}\\}$. There exists a finite generating set $X$ of
$G$ such that the following holds. There exists a constant $\nu$ such that for
any geodesic triangle $pqr$ in the relative Cayley graph $\hat{\Gamma}$ and
any vertex $u$ on $p$, there exists a vertex $v$ on $q\cup r$ such that
$d_{X}(u,v)\leq\nu$.
###### Proof.
The finite generating set $X$ is constructed in the proof of [17, Lemma 3.1]
and it is shown in the proof of [17, Theorem 3.26] that $X$ satisfies the
stated property. ∎
Here is the main result of this section.
###### Theorem 3.2.
Let $G$ be a finitely generated group hyperbolic relative to a collection of
subgroups $\\{H_{1},...,H_{n}\\}$. There exists a finite generating set $X$ of
$G$ such that the following holds. Let $\hat{\Gamma}$ be the associated
relative Cayley graph. Then there is a constant $B$ such that for any $x\in
G$, any $\eta\in\partial\hat{\Gamma}$, and each $i\in\mathbb{N}$, we have
$|\\{\gamma(i):\gamma\in\text{CGR}(x,\eta)\\}|\leq B.$
###### Proof.
Take the finite generating set $X$ to be as in Theorem 3.1. Let
$i\in\mathbb{N}$. Let $\nu$ be the constant from Theorem 3.1. Note that
$\hat{\Gamma}$ is $\nu$-hyperbolic. Fix any $\gamma_{0}\in CGR(x,\eta)$ and
let $k=i+3\nu+1$. We will show that for each $\gamma\in\text{CGR}(x,\eta)$,
there exists a vertex $v$ on $\gamma_{0}$ with $d(v,\gamma_{0}(i))\leq 3\nu$
and such that $d_{X}(\gamma(i),v)\leq\nu$.
Let $\gamma\in CGR(x,\eta)$ be arbitrary. Begin by joining $\gamma(k)$ and
$\gamma_{0}(k)$ with a geodesic $\alpha$ (see Figure 1). By
$\nu$-hyperbolicity of $\hat{\Gamma}$, we have that
$d(\gamma(k),\gamma_{0}(k))\leq 2\nu$, so $\alpha$ has length $\ell(\alpha)$
at most $2\nu$.
Letting $|_{k}$ denote the restriction of a geodesic to $\\{0,1,...,k\\}$, we
apply Theorem 3.1 to the geodesic triangle with sides $\gamma_{0}|_{k}$,
$\alpha$ and $\gamma|_{k}$. This yields a vertex $v$ on
$\gamma_{0}|_{k}$ or on $\alpha$ such that $d_{X}(\gamma(i),v)\leq\nu$. We
cannot have $v$ on $\alpha$ because then we would have $d(\gamma(i),v)\leq\nu$
(since $d\leq d_{X}$), which would imply by the triangle inequality that
$k-i=d(\gamma(i),\gamma(k))\leq d(\gamma(i),v)+d(v,\gamma(k))\leq
d(\gamma(i),v)+\ell(\alpha)\leq\nu+2\nu=3\nu,$
contradicting our choice of $k$. Therefore, we must have that $v$ is on
$\gamma_{0}|_{k}$.
Lastly, let us show that $d(v,\gamma_{0}(i))\leq 3\nu$. By
$\nu$-hyperbolicity, we have $d(\gamma(i),\gamma_{0}(i))\leq 2\nu$, and note
that $d_{X}(\gamma(i),v)\leq\nu$ implies $d(\gamma(i),v)\leq\nu$, so by the
triangle inequality,
$d(v,\gamma_{0}(i))\leq
d(v,\gamma(i))+d(\gamma(i),\gamma_{0}(i))\leq\nu+2\nu=3\nu.$
We conclude that for each $i\in\mathbb{N}$ and each
$\gamma\in\text{CGR}(x,\eta)$, $\gamma(i)$ must be $\nu$-close in $d_{X}$ to a
vertex $v$ on $\gamma_{0}$ with $d(v,\gamma_{0}(i))\leq 3\nu$. There are at
most $6\nu+1$ such vertices on $\gamma_{0}$, so we obtain that
$|\\{\gamma(i):\gamma\in\text{CGR}(x,\eta)\\}|\leq(6\nu+1)|B_{X}^{\nu}(1)|$.
Thus, we set $B=(6\nu+1)|B_{X}^{\nu}(1)|$.
Figure 1: The arrangement of geodesics in the proof of Theorem 3.2.
∎
As a corollary of Theorem 3.2, we obtain the following:
###### Theorem 3.3.
Let $G$ be a finitely generated group hyperbolic relative to a collection of
subgroups $\\{H_{1},...,H_{n}\\}$. There exists a finite generating set $X$ of
$G$ such that the following holds. If $\hat{\Gamma}$ is the associated
relative Cayley graph, then
$\text{Geo}_{1}(x,\eta)\triangle\text{Geo}_{1}(y,\eta)$ is finite for each
$x,y\in G$ and each $\eta\in\partial\hat{\Gamma}$.
###### Proof.
Let $X$ be as in Theorem 3.2. By Theorem 3.2, we have that
$\text{Geo}(x,\eta)$ is uniformly locally finite for each $x\in G$ and
$\eta\in\partial\hat{\Gamma}$. In [15, Theorem 5.9], it is proved that if a
hyperbolic graph $\Gamma$ has the property that $\text{Geo}(x,\eta)$ is
uniformly locally finite for each vertex $x$ and each $\eta\in\partial\Gamma$,
then $\text{Geo}_{1}(x,\eta)\triangle\text{Geo}_{1}(y,\eta)$ is finite for
each pair of vertices $x,y$ and each $\eta\in\partial\Gamma$. Therefore,
$\text{Geo}_{1}(x,\eta)\triangle\text{Geo}_{1}(y,\eta)$ is finite for each
$x,y\in G$ and each $\eta\in\partial\hat{\Gamma}$. ∎
###### Remark 3.4.
Note that Theorem 3.2 implies that if a relatively hyperbolic group $G$ is
generated by a finite set $X$ as in Theorem 3.3, and if the set of ends of the
associated relative Cayley graph is the same as $\partial\hat{\Gamma}$, then
the ends of $\hat{\Gamma}$ have uniformly bounded degree (see [11, Section 2]
for the definition of ends and the degree of an end). This appears not to
have been known for relative Cayley graphs of relatively hyperbolic groups.
## 4 Hyperfiniteness of the boundary action
In this section, we establish the hyperfiniteness of the boundary actions of
relatively hyperbolic groups as a consequence of Theorem 3.3. Our arguments
follow [15, Section 6]. The main difference here is in our coding of labels of
geodesics. In this section, we fix a finite generating set $X$ for $G$ as in
Theorem 3.3 and let $\hat{\Gamma}$ denote the associated relative Cayley graph
of $G$ with respect to $\\{H_{1},...,H_{n}\\}$ and $X$.
First, we give a binary coding to the symmetrized generating set
$S:=(X\cup\mathcal{H})^{\pm}$. Using that $S$ is countably infinite, we fix a
bijection $f:S\to 2^{<\mathbb{N}}:=\bigcup_{n\in\mathbb{N}}2^{n}$ from $S$ to
the set $2^{<\mathbb{N}}$ of all finite binary sequences (which we can
identify with the set of all finitely supported, infinite binary strings). The
label of a geodesic ray is then coded as an element of
$(2^{<\mathbb{N}})^{\mathbb{N}}$, the set of all infinite sequences of finite
binary strings.
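One standard way to realize the bijection $f$ is to enumerate $2^{<\mathbb{N}}$ in length-then-lexicographic order and pair it with any fixed enumeration of $S$. A minimal sketch (the helper name is ours, not from the text):

```python
def nth_binary_string(i):
    """The i-th finite binary string in length-then-lexicographic order:
    '', '0', '1', '00', '01', '10', '11', '000', ...  Writing i + 1 in
    binary always gives '1' followed by exactly the desired string, so
    this map is a bijection from the natural numbers onto 2^{<N}."""
    return bin(i + 1)[3:]

# Composing with an enumeration (s_0, s_1, ...) of the countable set S
# yields a bijection f : S -> 2^{<N}, as in the text.
print([nth_binary_string(i) for i in range(7)])  # ['', '0', '1', '00', '01', '10', '11']
```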
We will need to order elements of $(2^{n})^{n}$ (i.e. the set of length $n$
sequences of length $n$ binary strings) for each $n$. Following [8, Section
7], for each $m_{1},m_{2}\in\mathbb{N}$, each
$w=(w_{0},w_{1},...,w_{m_{2}-1})\in(2^{m_{1}})^{m_{2}}$ and for each
$n\in\mathbb{N}$ with $n\leq m_{1},m_{2}$, we put
$w|_{n}=((w_{0})|_{n},(w_{1})|_{n},...,(w_{n-1})|_{n})$, where $(w_{j})|_{n}$
is the restriction of the length $m_{1}$ binary sequence $w_{j}$ to the first
$n$ entries. Similarly, if $w\in(2^{\mathbb{N}})^{\mathbb{N}}$, we put
$w|_{n}=((w_{0})|_{n},(w_{1})|_{n},...,(w_{n-1})|_{n})$. If we visualize
$w\in(2^{n})^{n}$ as an $n\times n$ matrix, then $w|_{i}$ is an $i\times i$
submatrix of the $n\times n$ matrix $w$, starting at the top left corner of
$w$.
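The restriction operation can be sketched directly; representing $w$ as a tuple of equal-length bit strings, $w|_{n}$ is the top-left $n\times n$ corner (a small illustration with hypothetical data):

```python
def restrict(w, n):
    """w|_n: keep the first n entries of w and the first n bits of each,
    i.e. the top-left n-by-n submatrix of w viewed as a bit matrix."""
    return tuple(s[:n] for s in w[:n])

w = ("110", "011", "101")      # an element of (2^3)^3
print(restrict(w, 2))          # ('11', '01')
```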
For each $n\in\mathbb{N}$, we fix a total order $<_{n}$ on $(2^{n})^{n}$ as in
[8, Section 7] such that for all $w,v\in(2^{n+1})^{n+1}$,
$w|_{n}<_{n}v|_{n}\implies w<_{n+1}v$. Given $\gamma\in\text{CGR}(g,\eta)$, we
define $\text{lab}(\gamma)\in(2^{<\mathbb{N}})^{\mathbb{N}}$ to be its coded
label. By the above, $\text{lab}(\gamma)|_{n}\in(2^{n})^{n}$ then denotes its
restricted label. Now, analogously to [15, Definition 6.1], we
put:
###### Definition 4.1.
For $\eta\in\partial\hat{\Gamma}$, define:
$C^{\eta}=\\{(g,\text{lab}(\gamma)|_{n})\in G\times(2^{n})^{n}:g\in
Geo_{1}(e,\eta),\gamma\in CGR(g,\eta),n\in\mathbb{N}\\}$
###### Definition 4.2.
An $s$ in $(2^{n})^{n}$ occurs in $C^{\eta}$ if $(g,s)\in C^{\eta}$ for some
$g\in Geo_{1}(e,\eta)$. An $s$ in $(2^{n})^{n}$ occurs infinitely often in
$C^{\eta}$ if $(g,s)\in C^{\eta}$ for infinitely many $g\in Geo_{1}(e,\eta)$.
Note that for each $n\in\mathbb{N}$, there exists $s\in(2^{n})^{n}$ which
occurs infinitely often in $C^{\eta}$. Indeed, take any
$\gamma\in\text{CGR}(e,\eta)$; by [15, Proposition 5.8],
$\gamma\setminus\text{Geo}_{1}(e,\eta)$ is finite, so there exists some $N$
such that $\gamma(k)\in\text{Geo}_{1}(e,\eta)$ for all $k\geq N$. Then
$(\gamma(k),\text{lab}((\gamma(i))_{i\geq k})|_{n})\in C^{\eta}$ and
$\text{lab}((\gamma(i))_{i\geq k})|_{n}\in(2^{n})^{n}$ for each $k\geq N$.
Since $(2^{n})^{n}$ is finite, by the Pigeonhole Principle, some
$s\in(2^{n})^{n}$ must repeat infinitely often in $C^{\eta}$, that is,
$(\gamma(k),s)\in C^{\eta}$ for infinitely many $k\geq N$. For each
$n\in\mathbb{N}$, we can therefore choose the minimal (in the order $<_{n}$
defined above) such $s\in(2^{n})^{n}$ occurring infinitely often in $C^{\eta}$.
We shall denote this element by $s_{n}^{\eta}$.
###### Proposition 4.3.
For each $n\in\mathbb{N}$, we have that $(s_{n+1}^{\eta})|_{n}=s_{n}^{\eta}$.
###### Proof.
Since $s_{n+1}^{\eta}$ appears infinitely often in $C^{\eta}$, so does
$(s_{n+1}^{\eta})|_{n}$, so $s_{n}^{\eta}<_{n}(s_{n+1}^{\eta})|_{n}$ or
$s_{n}^{\eta}=(s_{n+1}^{\eta})|_{n}$. If
$s_{n}^{\eta}<_{n}(s_{n+1}^{\eta})|_{n}$, then since there are only finitely
many extensions of $s_{n}^{\eta}$ to an element of $(2^{n+1})^{n+1}$ and since
$s_{n}^{\eta}$ appears infinitely often in $C^{\eta}$, there would exist
$s\in(2^{n+1})^{n+1}$ such that $s|_{n}=s_{n}^{\eta}$ and $s$ appears
infinitely often in $C^{\eta}$. Since $s|_{n}<_{n}(s_{n+1}^{\eta})|_{n}$, we
obtain that $s<_{n+1}s_{n+1}^{\eta}$, contradicting the minimality of
$s_{n+1}^{\eta}$. Therefore, $s_{n}^{\eta}=(s_{n+1}^{\eta})|_{n}$. ∎
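One concrete family of orders with the coherence property $w|_{n}<_{n}v|_{n}\implies w<_{n+1}v$ (a hypothetical choice of ours; [8, Section 7] fixes its own) compares the "shells" a matrix gains as $w|_{k}$ grows to $w|_{k+1}$, lexicographically:

```python
def shells(w):
    """Flatten w in (2^n)^n by listing, for k = 0, ..., n-1, the cells
    (i, j) with max(i, j) = k -- exactly the cells w|_(k+1) adds to w|_k.
    The first n shells of w depend only on the restriction w|_n."""
    n = len(w)
    out = []
    for k in range(n):
        out.extend(w[i][k] for i in range(k + 1))  # column k, rows 0..k
        out.extend(w[k][j] for j in range(k))      # row k, cols 0..k-1
    return tuple(out)

def less(w, v):
    """Lexicographic order on the shell flattening: a total order on
    (2^n)^n with the coherence property."""
    return shells(w) < shells(v)
```

Since the first $n$ shells of $w\in(2^{n+1})^{n+1}$ are precisely the cells of $w|_{n}$, a strict comparison of restrictions is decided before any new cell is read, which is the coherence property exploited in Proposition 4.3.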
We now fix a total order $\leq$ on the group $G$ such that $g\leq h\implies
d(e,g)\leq d(e,h)$ (for instance, fixing a total order on $S$, we can
represent each element of $G$ by the lexicographically least geodesic word
over $S$ representing it, and define $\leq$ to be the shortlex order on these
words: shorter words come first, and words of equal length are compared
lexicographically). Using the same notation as in [15, Section 6], we put:
###### Definition 4.4.
For each $n\in\mathbb{N}$ and $\eta\in\partial\hat{\Gamma}$, put
$T_{n}^{\eta}=\\{g\in Geo_{1}(e,\eta):(g,s_{n}^{\eta})\in C^{\eta}\\}$ and put
$g_{n}^{\eta}=\min T_{n}^{\eta}$ (where the minimum is with respect to the
above total order on $G$). Put $k_{n}^{\eta}=d(e,g_{n}^{\eta})$ for each
$n\in\mathbb{N}$.
Note that $\min T_{n}^{\eta}$ exists because
$T_{n}^{\eta}\subseteq\text{Geo}(e,\eta)$ and $\text{Geo}(e,\eta)$ is locally
finite by Theorem 3.2. By definition of $\leq$ and since
$s_{n}^{\eta}=(s_{n+1}^{\eta})|_{n}$ for each $n$, we have that
$(T_{n}^{\eta})_{n}$ is a non-increasing sequence of sets and therefore the
sequence $(k_{n}^{\eta})_{n\in\mathbb{N}}$ is a non-decreasing sequence of
natural numbers.
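A length-first order realizing the requirement $g\leq h\implies d(e,g)\leq d(e,h)$ can be sketched by keying each element on its chosen geodesic word over $S$ (a toy illustration; the words below are hypothetical):

```python
def shortlex_key(word):
    """Compare word length first, then lexicographically; with geodesic
    representatives, length equals d(e, g), so the induced order on
    group elements satisfies g <= h  =>  d(e, g) <= d(e, h)."""
    return (len(word), word)

# Plain lexicographic order would put 'ab' before 'b'; shortlex does not.
words = ["ba", "a", "ab", "b"]
print(sorted(words, key=shortlex_key))  # ['a', 'b', 'ab', 'ba']
```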
We shall now generalize the results of [15, Section 6], which were stated for
Cayley graphs of hyperbolic groups. Recall that we fix our hyperbolicity
constant to be $\nu$ from Theorem 3.1. We recall that the topology on $G$ is
the discrete topology induced by the relative metric $d$; that the topology on
$\partial\hat{\Gamma}$ is the canonical topology on the geodesic boundary,
with countable neighbourhood base
$V(\eta,m)^{g}=\\{\mu\in\partial\hat{\Gamma}:\exists\gamma\in\text{CGR}(g,\mu)\text{
and }\lambda\in\text{CGR}(g,\eta)\text{ with }d(\gamma(t),\lambda(t))\leq
2\nu\text{ for each }t\leq m\\}$ for each $m\in\mathbb{N}$, each
$\eta\in\partial\hat{\Gamma}$ and each basepoint $g\in G$; that $G^{\mathbb{N}}$
carries the product topology; and that $C_{hb}(\hat{\Gamma})$ carries the
topology of pointwise convergence.
Let us establish a link between the topology of $\partial\hat{\Gamma}$ and
sequences of CGRs in $\hat{\Gamma}$. The condition in the following
proposition is often used as the definition of the topology on $\partial X$
when $X$ is a proper hyperbolic space, but in general does not give the same
topology on $\partial X$ that we work with here.
###### Proposition 4.5.
Suppose that $\eta_{n}\to\eta$ in $\partial\hat{\Gamma}$. Then for any $g\in
G$, there exists a sequence of CGRs $(\gamma_{n})_{n}$ such that
$\gamma_{n}\in CGR(g,\eta_{n})$ for each $n$ and such that every subsequence
of $(\gamma_{n})_{n}$ itself has a subsequence which converges to some CGR
$\gamma\in CGR(g,\eta)$.
###### Proof.
Since $\eta_{n}\to\eta$, by definition of the topology on
$\partial\hat{\Gamma}$, we have that for each $m\in\mathbb{N}$, there exists a
CGR $\gamma_{m}\in\text{CGR}(g,\eta_{m})$ and
$\lambda_{m}\in\text{CGR}(g,\eta)$ such that
$d(\gamma_{m}(t),\lambda_{m}(t))\leq 2\nu$ for every $t\leq m$. Fixing any
$\lambda\in\text{CGR}(g,\eta)$, we obtain that
$d(\gamma_{m}(t),\lambda(t))\leq 4\nu$ for every $t\leq m$ and every $m$,
since $\lambda_{m},\lambda\in\text{CGR}(g,\eta)$ for all $m$ and hence are
$2\nu$-close for each $m$. We claim that every subsequence of
$(\gamma_{n})_{n}$ has a convergent subsequence. First, let us argue as in the
proof of Theorem 3.2 to show that for each $i$,
$|\\{\gamma_{n}(i):n\in\mathbb{N}\\}|$ is finite.
Given $i\in\mathbb{N}$, set $k=i+5\nu+1$. For each $n\geq k$, we have
$d(\gamma_{n}(k),\lambda(k))\leq 4\nu$. Let $u$ denote a geodesic between
$\gamma_{n}(k)$ and $\lambda(k)$ (see Figure 2).
Figure 2: The geometry of the geodesics $\gamma_{n}$, $\lambda$.
Then arguing as in the proof of Theorem 3.2, there exists a vertex $v$ on
$\lambda$ with $d_{X}(\gamma_{n}(i),v)\leq\nu$. It follows that
$|\\{\gamma_{n}(i):n\geq k\\}|$ is finite, and therefore that
$|\\{\gamma_{n}(i):n\in\mathbb{N}\\}|$ is finite. Therefore,
$\bigcup_{n}\gamma_{n}\cup\lambda$ is locally finite, and so by Kőnig’s lemma
every subsequence of $(\gamma_{n})_{n}$ has a convergent
subsequence. The limit CGR $\gamma$ of this subsequence is in
$\text{CGR}(g,\eta)$ because for each $t$, $d(\gamma_{k}(t),\lambda(t))\leq
4\nu$ for all but finitely many $k$, so $d(\gamma(t),\lambda(t))\leq 4\nu$ for
all $t$.
∎
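The Kőnig-type extraction used above can be illustrated at a finite stage: repeatedly pigeonhole on each coordinate and keep the rays realizing the most frequent value, so the survivors share an ever-longer common prefix. (In the proof the family is infinite and "most frequent" is replaced by "occurs infinitely often".) A minimal sketch with hypothetical data:

```python
from collections import Counter

def pigeonhole_prefix(rays, depth):
    """Finite-stage analogue of the Koenig's lemma argument: `rays` is a
    family of sequences with finitely many values at each coordinate.
    At coordinate i, keep only the rays showing the most frequent value
    there; the surviving sub-family agrees on a prefix of length depth."""
    pool = list(rays)
    prefix = []
    for i in range(depth):
        value, _ = Counter(r[i] for r in pool).most_common(1)[0]
        pool = [r for r in pool if r[i] == value]
        prefix.append(value)
    return prefix, pool

rays = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 1, 1), (0, 0, 2)]
print(pigeonhole_prefix(rays, 2))  # ([0, 0], [(0, 0, 0), (0, 0, 1), (0, 0, 2)])
```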
We now generalize the claims of [15, Section 6] to relatively hyperbolic
groups. We begin by generalizing Claim 1 of [15]. In Claim 1 in [15], the set
$C$ below is proved to be compact, while here it is only closed.
###### Claim 4.6.
The set $C=\\{\gamma\in G^{\mathbb{N}}:\gamma\text{ is a CGR}\\}$ is closed.
Furthermore, for any $g\in G$ and any $\eta\in\partial\hat{\Gamma}$, the set
$CGR(g,\eta)\subseteq G^{\mathbb{N}}$ is compact.
###### Proof.
Let $(\gamma_{n})_{n}$ be a sequence of elements of $C$ converging to some
$\gamma\in G^{\mathbb{N}}$. We claim that $\gamma$ is a geodesic. Indeed,
since $\gamma_{n}\to\gamma$, for each $m\in\mathbb{N}$, there exists
$N\in\mathbb{N}$ such that for all $n\geq N$, we have
$\gamma_{n}|_{m}=\gamma|_{m}$. In particular, it follows that $\gamma|_{m}$ is
a geodesic, since $\gamma_{n}|_{m}$ is a geodesic for each $n$. Thus, $\gamma$
is a geodesic ray based at $\lim_{n}\gamma_{n}(0)$ and is hence a CGR, so
$\gamma\in C$. Therefore, $C$ is closed.
The "furthermore" statement follows immediately from Kőnig’s lemma, since
$\text{Geo}(g,\eta)$ is locally finite (by Theorem 3.2).
∎
The next claims are the exact relatively hyperbolic analogues of claims from
[15] and their proofs are almost identical (most proofs are completely
identical), however, we present all proofs for completeness.
###### Claim 4.7.
The set $R=\\{(\eta,g,\gamma)\in\partial\hat{\Gamma}\times G\times
G^{\mathbb{N}}:\gamma\in CGR(g,\eta)\\}$ is closed in
$\partial\hat{\Gamma}\times G\times G^{\mathbb{N}}$.
###### Proof.
Suppose that $(\eta_{n},g_{n},\gamma_{n})\in R$ for all $n$ and that
$(\eta_{n},g_{n},\gamma_{n})\to(\eta,g,\gamma)$. Then
$\eta_{n}\to\eta\in\partial\hat{\Gamma}$, $g_{n}\to g$ in $G$ (so that
$(g_{n})$ is eventually equal to $g$, by discreteness of $G$) and
$\gamma_{n}\to\gamma$ in $G^{\mathbb{N}}$, so that
$\gamma\in\text{CGR}(g,\eta^{\prime})$ for some
$\eta^{\prime}\in\partial\hat{\Gamma}$ (by Claim 4.6). We will show that
$\eta=\eta^{\prime}$.
As $\eta_{n}\to\eta$, by Proposition 4.5, there exists a sequence
$(\gamma^{\prime}_{n})_{n}$ with
$\gamma^{\prime}_{n}\in\text{CGR}(g,\eta_{n})$ which has a subsequence
$(\gamma_{n_{k}}^{\prime})_{k}$ that converges to some $\gamma^{\prime}\in
CGR(g,\eta)$. Choose $k$ large enough such that all $g_{n_{k}}$ equal $g$, so
that $\gamma_{n_{k}},\gamma^{\prime}_{n_{k}}\in CGR(g,\eta_{n_{k}})$. We then
have that $d(\gamma_{n_{k}}(m),\gamma^{\prime}_{n_{k}}(m))\leq 2\nu$ for each
$m$. Taking $k\to\infty$, we obtain that $d(\gamma(m),\gamma^{\prime}(m))\leq
2\nu$ for all $m$, and therefore that $\eta=\eta^{\prime}$. Thus,
$(\eta_{n},g_{n},\gamma_{n})\to(\eta,g,\gamma)$ with
$\gamma\in\text{CGR}(g,\eta)$, so $(\eta,g,\gamma)\in R$ and so $R$ is closed.
∎
###### Claim 4.8.
The set
$F=\\{(\eta,g,(\gamma(0),\gamma(1),...,\gamma(n)))\in\partial\hat{\Gamma}\times
G\times G^{<\mathbb{N}}:\gamma\in CGR(g,\eta)\\}$ is Borel in
$\partial\hat{\Gamma}\times G\times G^{<\mathbb{N}}$.
###### Proof.
Let
$F^{\prime}=\\{(\eta,g,(\gamma(0),\gamma(1),...,\gamma(n)),\gamma^{\prime})\in\partial\hat{\Gamma}\times
G\times G^{<\mathbb{N}}\times G^{\mathbb{N}}:(\eta,g,\gamma^{\prime})\in
R\text{ and }\gamma^{\prime}(i)=\gamma(i)\text{ for each }0\leq i\leq n\\}$.
By Claim 4.7, $F^{\prime}$ is closed in $\partial\hat{\Gamma}\times G\times
G^{<\mathbb{N}}\times G^{\mathbb{N}}$. Note that $F$ is the projection of
$F^{\prime}$ to the first 3 components $\partial\hat{\Gamma}\times G\times
G^{<\mathbb{N}}$. Note also that the section
$F^{\prime}_{(\eta,g,(\gamma(0),\gamma(1),...,\gamma(n)))}$ is compact for
every
$(\eta,g,(\gamma(0),\gamma(1),...,\gamma(n)))\in\partial\hat{\Gamma}\times
G\times G^{<\mathbb{N}}$. Indeed,
$F^{\prime}_{(\eta,g,(\gamma(0),\gamma(1),...,\gamma(n)))}=\\{\gamma^{\prime}\in
CGR(g,\eta):\gamma^{\prime}(i)=\gamma(i)\text{ for all }0\leq i\leq n\\}$,
which is a closed subset of the compact set $\text{CGR}(g,\eta)$, hence it is
compact. By [13, Theorem 18.18], it follows that $F$ is Borel in
$\partial\hat{\Gamma}\times G\times G^{<\mathbb{N}}$. ∎
###### Claim 4.9.
The set $M=\\{(\eta,\xi)\in\partial\hat{\Gamma}\times
C_{hb}(\hat{\Gamma}):\xi\in\Xi(\eta)\\}$ is Borel in
$\partial\hat{\Gamma}\times C_{hb}(\hat{\Gamma})$.
###### Proof.
We follow a similar proof to the proof of [15, Claim 4]. We will show that $M$
is both analytic and coanalytic, hence Borel by [13, Theorem 14.11]. By
definition of $\Xi(\eta)$, we have that $(\eta,\xi)\in M$ if and only if
$\exists\gamma\in G^{\mathbb{N}}:(\eta,\gamma(0),\gamma)\in R\text{ and
}\xi_{\gamma}=\xi$
We also have that
$\xi_{\gamma}=\xi\iff\forall g\in G\text{ }\exists n\in\mathbb{N}\text{
}\forall m\geq n\text{ }f_{\gamma(m)}(g)=\xi(g)$
which gives a Borel definition of the set $\\{(\xi,\gamma)\in
C_{hb}(\hat{\Gamma})\times C:\xi_{\gamma}=\xi\\}$. Thus, from Claim 4.7 we
have that $M$ is analytic. To show that $M$ is coanalytic, we will show the
following, denoting by $N_{k}(A)$ the $k$-neighbourhood of a subset $A$ of $G$:
$\displaystyle(\eta,\xi)\in M\text{ if and only if }\forall\lambda\in
G^{\mathbb{N}}\text{ if }(\eta,e,\lambda)\in R,\text{ then }\forall
k\in\mathbb{N},\exists\gamma^{k}\in G^{k+1}\text{ a geodesic path with }$
$\displaystyle\gamma^{k}(0)=e\text{ such that }\gamma^{k}\subseteq
N_{2\nu}(\lambda)\text{ and such that }\forall g\in G,\exists
n_{g}\in\mathbb{N}\text{ such that }\forall
i,j>n_{g},f_{\gamma^{j}(i)}(g)=\xi(g)$
This formula defines a coanalytic set since only the single universal
quantifier over $\lambda$ ranges over an uncountable standard Borel space
(namely $G^{\mathbb{N}}$); all remaining quantifiers range over countable sets.
For the forward direction, if $(\eta,\xi)\in M$, then there exists
$\gamma\in\text{CGR}(e,\eta)$ converging to $\xi$. We simply take
$\gamma^{k}=\gamma|_{k}$ (the restriction of $\gamma$ from 0 to $k$) for each
$k\in\mathbb{N}$. Then for each $\lambda\in\text{CGR}(e,\eta)$, we have
$d(\gamma(n),\lambda(n))\leq 2\nu$ for each $n\in\mathbb{N}$, so
$\gamma^{k}\subseteq N_{2\nu}(\lambda)$ for each $k$. Furthermore, since
$\gamma$ converges to $\xi$, we have that for all $g\in G$, there
exists $n_{g}$ such that for all $i,j>n_{g}$, we have
$f_{\gamma^{j}(i)}(g)=\xi(g)$.
For the reverse direction, let $\lambda\in\text{CGR}(e,\eta)$. Then there
exists a sequence $\gamma^{k}\in G^{k+1}$ of geodesic paths starting at $e$,
each contained in $N_{2\nu}(\lambda)$ and such that
$f_{\gamma^{i}(j)}(g)\to\xi(g)$. For each $i$, fix $k=i+3\nu+1$ and, using
$\nu$-hyperbolicity, choose $N$ sufficiently large such that for all $n\geq
N$, we have
$d(\gamma^{n}(t),\lambda(t))\leq 2\nu$
for all $t\leq k$. Arguing as in the proof of Theorem 3.2, we have that
$\\{\gamma^{j}(i):j\geq N\\}$ is finite, so that
$\\{\gamma^{j}(i):j\in\mathbb{N}\\}$ is finite for each $i$. Therefore, by
Kőnig’s lemma, $(\gamma^{k})_{k}$ has a subsequence converging to some CGR
$\gamma$ based at $e$, and $\gamma\subseteq N_{2\nu}(\lambda)$, so
$\gamma\in\text{CGR}(e,\eta)$. From $f_{\gamma^{i}(j)}(g)\to\xi(g)$ as
$i,j\to\infty$, we have that $\xi_{\gamma}=\xi$. Since
$\gamma\in\text{CGR}(e,\eta)$, we conclude that $(\eta,\xi)\in M$.
∎
By [15, Proposition 5.2], for each $\eta\in\partial\hat{\Gamma}$, we have that
the section $M_{\eta}=\Xi(\eta)$ is finite, having cardinality bounded above
by the constant $B$ from Theorem 3.2. Since $M$ is Borel and has finite
sections of size at most $B$, by the Lusin-Novikov theorem we have Borel
functions $\xi_{1},...,\xi_{B}:\partial\hat{\Gamma}\to C_{hb}(\hat{\Gamma})$
such that $M$ is the union of the graphs
$G_{\xi_{i}}=\\{(\eta,\xi_{i}(\eta)):\eta\in\partial\hat{\Gamma}\\}$ of the
$\xi_{i}$.
###### Claim 4.10.
For each $i=1,...,B$, $Q_{i}=\\{(\eta,g,h)\in\partial\hat{\Gamma}\times
G^{2}:h\in Q(g,\xi_{i}(\eta))\\}$ is Borel in $\partial\hat{\Gamma}\times
G^{2}$.
###### Proof.
By [15, Lemma 4.2], for $x,y\in G$, denoting $\gamma(x,y)$ the union of all
geodesic paths in $\hat{\Gamma}$ from $x$ to $y$, we have that
$Q(g,\xi_{i}(\eta))=\bigcup_{n\in\mathbb{N}}\gamma(g,x_{n})$ for some,
equivalently any, $(x_{n})_{n}\in\text{CGR}(g,\eta)$ converging to
$\xi_{i}(\eta)$. From this, we obtain that:
$h\in Q(g,\xi_{i}(\eta))\iff\exists\lambda\in C\text{ (resp. $\forall\lambda\in
C$) : $\lambda(0)=g$ and $\xi_{\lambda}=\xi_{i}(\eta)$ and $\exists
n\in\mathbb{N}$ : $h\in\gamma(g,\lambda(n))$}$
This yields the analyticity (from the $\exists$ above) and coanalyticity (from
the $\forall$ above) of $Q_{i}$, hence Borelness of $Q_{i}$.
∎
###### Claim 4.11.
The set $P=\\{(\eta,h)\in\partial\hat{\Gamma}\times
G:h\in\hat{\Gamma}_{s,\eta}\\}$ is Borel in $\partial\hat{\Gamma}\times G$.
###### Proof.
We have that $h\in\hat{\Gamma}_{s,\eta}$ if and only if:
$\forall n\in\mathbb{N},\exists\gamma^{n}\in G^{n+1}:(\eta,h,\gamma^{n})\in
F\text{ and }\forall i\leq B\text{ }\forall k<n,(\eta,h,\gamma^{n}(k))\in
Q_{i}$
Indeed, if $h\in\hat{\Gamma}_{s,\eta}$, then
$\bigcap_{\xi\in\Xi(\eta)}Q(h,\xi)$ contains a CGR
$\gamma\in\text{CGR}(h,\eta)$, so we can take $\gamma^{n}=\gamma|_{n}$ (the
restriction from 0 to $n$) for all $n\in\mathbb{N}$ to satisfy the above
condition.
Conversely, if the above condition holds, then by local finiteness of
$\text{Geo}(h,\eta)$, the sequence $(\gamma^{n})_{n\in\mathbb{N}}$ with
$(\eta,h,\gamma^{n})\in F$ will have a subsequence converging to some
$\gamma\in\text{CGR}(h,\eta)$ and the above condition yields that
$\gamma\subseteq\bigcap_{\xi\in\Xi(\eta)}Q(h,\xi)$, so that
$h\in\hat{\Gamma}_{s,\eta}$.
Since $F$ and $Q_{i}$ are Borel, we conclude that $P$ is Borel.
∎
###### Claim 4.12.
The set $P_{1}=\\{(\xi,\eta,h)\in
C_{hb}(\hat{\Gamma})\times\partial\hat{\Gamma}\times
G:h\in\hat{\Gamma}_{s,\eta}\text{ and }\xi=\xi_{h,\eta}\\}$ is Borel in
$C_{hb}(\hat{\Gamma})\times\partial\hat{\Gamma}\times G$.
###### Proof.
We have $(\xi,\eta,h)\in P_{1}$ if and only if:
$(\eta,h)\in P\text{ and $\exists i\leq B$ : $(\eta,\xi)\in G_{\xi_{i}}$ and
$\forall j\leq B$, $Q(h,\xi_{i}(\eta))\subseteq Q(h,\xi_{j}(\eta))$}$
Since $P$ is Borel (Claim 4.11), $G_{\xi_{i}}$ is Borel (as $\xi_{i}$ is
Borel), and $Q_{i}$ is Borel (Claim 4.10), the above yields that $P_{1}$ is
Borel.
∎
###### Claim 4.13.
The set $L=\\{(h,\xi,\eta)\in G\times
C_{hb}(\hat{\Gamma})\times\partial\hat{\Gamma}:h\in
Y(e,\xi),\xi\in\Xi(\eta)\\}$ is Borel in $G\times
C_{hb}(\hat{\Gamma})\times\partial\hat{\Gamma}$.
###### Proof.
We have that $(h,\xi,\eta)\in L$ if and only if $(\eta,\xi)\in M$ and $h$ is
at minimal distance from $e$ (in the metric $d$) among elements satisfying
$h\in\text{Geo}(e,\eta)$ and $(\xi,\eta,h)\in P_{1}$. Thus, by Claims 4.9,
4.10, 4.12, $L$ is Borel (note that $h\in\text{Geo}(e,\eta)\iff(\eta,e,h)\in
Q_{i}$ for some $i\leq B$, so $\\{(h,\eta)\in
G\times\partial\hat{\Gamma}:h\in\text{Geo}(e,\eta)\\}$ is Borel in
$G\times\partial\hat{\Gamma}$ by Claim 4.10).
∎
###### Claim 4.14.
The set $B=\\{(g,h,\xi,\eta)\in G^{2}\times
C_{hb}(\hat{\Gamma})\times\partial\hat{\Gamma}:g\in Q(h,\xi),h\in
Y(e,\xi),\xi\in\Xi(\eta)\\}$ is Borel in $G^{2}\times
C_{hb}(\hat{\Gamma})\times\partial\hat{\Gamma}$.
###### Proof.
We have that $(g,h,\xi,\eta)\in B$ if and only if $\exists i\leq B$ such that
$\xi=\xi_{i}(\eta)$ and $(\eta,h,g)\in Q_{i}$ and $(h,\xi,\eta)\in L$. Since
$L,Q_{i}$ and $\xi_{i}$ are Borel, it follows that $B$ is Borel.
∎
###### Claim 4.15.
The set $A=\\{(\eta,g)\in\partial\hat{\Gamma}\times
G:g\in\text{Geo}_{1}(e,\eta)\\}$ is Borel in $\partial\hat{\Gamma}\times G$.
###### Proof.
Since $\text{Geo}_{1}(e,\eta)=\bigcup_{\xi\in\Xi(\eta)}\bigcup_{h\in
Y(e,\xi)}Q(h,\xi)$, we have:
$(\eta,g)\in A\iff\exists\xi\in\Xi(\eta),\exists h\in Y(e,\xi):g\in
Q(h,\xi)\iff\exists\xi\in\Xi(\eta),\exists h\in Y(e,\xi):(g,h,\xi,\eta)\in B$
Therefore, $A$ is the projection $(g,h,\xi,\eta)\mapsto(\eta,g)$ of $B$ onto
$\partial\hat{\Gamma}\times G$. By Claim 4.14, $B$ is Borel. Also, the
sections $\\{(h,\xi)\in G\times C_{hb}(\hat{\Gamma}):h\in
Y(e,\xi),\xi\in\Xi(\eta)\\}$ of $B$ are finite by Theorem 3.2 and [15,
Proposition 5.2]. Therefore, by the Lusin-Novikov theorem, $A$ is Borel.
∎
###### Claim 4.16.
The set
$D=\\{(\eta,(\gamma(0),\gamma(1),...,\gamma(n)))\in\partial\hat{\Gamma}\times
G^{<\mathbb{N}}:\gamma(0)\in Geo_{1}(e,\eta)\text{ and }\gamma\in
CGR(\gamma(0),\eta)\\}$ is Borel in $\partial\hat{\Gamma}\times
G^{<\mathbb{N}}$.
###### Proof.
We have that $(\eta,(\gamma(0),\gamma(1),...,\gamma(n)))\in D$ if and only if
$(\eta,\gamma(0),(\gamma(0),\gamma(1),...,\gamma(n)))\in F$ and
$(\eta,\gamma(0))\in A$. By Claim 4.15, $A$ is Borel in
$\partial\hat{\Gamma}\times G$. Also, $F$ is Borel by Claim 4.8. Therefore,
$D$ is Borel.
∎
###### Claim 4.17.
For each $n$, the set
$S_{n}:=\\{(\eta,s^{n})\in\partial\hat{\Gamma}\times(2^{n})^{n}:s^{n}=s_{n}^{\eta}\\}$
is Borel in $\partial\hat{\Gamma}\times(2^{n})^{n}$.
###### Proof.
We have that $(\eta,s^{n})\in S_{n}$ if and only if $s^{n}$ is the
$<_{n}$-minimal element in $(2^{n})^{n}$ for which the following holds:
$\forall m\in\mathbb{N},\exists(\gamma(0),\gamma(1),...,\gamma(n))\in
G^{n+1}:d(\gamma(0),e)\geq m,(\eta,(\gamma(0),\gamma(1),...,\gamma(n)))\in
D\text{ and $\text{lab}(\gamma)|_{n}=s^{n}$}$
Note that the "only if" holds by local finiteness of $\text{Geo}(e,\eta)$.
Thus, $S_{n}$ is Borel by Claim 4.16.
∎
Now let $E$ denote the orbit equivalence relation of the action of $G$ on
$\partial\hat{\Gamma}$.
###### Definition 4.18.
Let $Z=\\{\eta\in\partial\hat{\Gamma}:k_{n}^{\eta}\nrightarrow\infty\\}$.
Since $\text{Geo}_{1}(e,\eta)$ is locally finite (as $\text{Geo}(e,\eta)$ is
locally finite and $\text{Geo}_{1}(e,\eta)\subseteq\text{Geo}(e,\eta)$), we
have that $Z$ is the set of all $\eta$ such that there exists $g_{\eta}$
belonging to $T_{n}^{\eta}$ for all $n$, i.e. for which there exists
$\gamma^{\eta}\in CGR(g_{\eta},\eta)$ with label
$s^{\eta}\in(2^{<\mathbb{N}})^{\mathbb{N}}$.
###### Lemma 4.19.
The map $\alpha:(Z,E|_{Z})\to(\partial\hat{\Gamma},=)$ given by $\eta\mapsto
g_{\eta}^{-1}\eta$ is a Borel reduction.
###### Proof.
We argue as in [15]. First, let us show that $s_{n}^{\eta}=s_{n}^{g\eta}$ for
each $g\in G$, each $\eta\in\partial\hat{\Gamma}$ and each $n\in\mathbb{N}$.
If there are infinitely many pairs $(h,s_{n}^{\eta})\in C^{\eta}$, then since
the left action of $G$ on $\hat{\Gamma}$ preserves labels of geodesics, there
are infinitely many pairs $(gh,s_{n}^{\eta})$, where
$s_{n}^{\eta}=\text{lab}(\gamma)|_{n}$ for some
$\gamma\in\text{CGR}(\gamma(0),g\eta)$ and where $\gamma(0)\in
g\text{Geo}_{1}(e,\eta)=\text{Geo}_{1}(g,g\eta)$ (using [15, Lemma 5.10] in
the last line).
By Theorem 3.3, the symmetric difference between
$\text{Geo}_{1}(g,g\eta)$ and $\text{Geo}_{1}(e,g\eta)$ is finite, and so there
are infinitely many pairs $(gh,s_{n}^{\eta})$ with $gh\in\text{Geo}_{1}(e,g\eta)$.
Hence, there are infinitely many pairs $(gh,s_{n}^{\eta})\in C^{g\eta}$. Thus,
as $s_{n}^{\eta}$ is least in the order $<_{n}$ that appears infinitely often
in $C^{\eta}$, we have that $s_{n}^{\eta}=s_{n}^{g\eta}$. As
$s_{n}^{\eta}=s_{n}^{g\eta}$ for each $n$, we have $s^{\eta}=s^{g\eta}$.
This implies that $\alpha$ is constant on $G$-orbits. Indeed, suppose
$\theta=g\eta$ for some $g\in G$, $\eta,\theta\in Z$. We have that $\alpha$
maps the boundary point $[\gamma^{\theta}]$ to the boundary point
$[g_{\theta}^{-1}\gamma^{\theta}]$. Note that
$g_{\theta}^{-1}\gamma^{\theta}\in\text{CGR}(e,g_{\theta}^{-1}\theta)$ and
$\text{lab}(g_{\theta}^{-1}\gamma^{\theta})=s^{\theta}$, because
$\gamma^{\theta}$ has label $s^{\theta}$ and left multiplication preserves
labels of geodesics. On the other hand, $\alpha$ maps $\eta=[\gamma^{\eta}]$
to $g_{\eta}^{-1}\eta=[g_{\eta}^{-1}\gamma^{\eta}]$. We have that
$g_{\eta}^{-1}\gamma^{\eta}\in\text{CGR}(e,g_{\eta}^{-1}\eta)$ and
$\text{lab}(g_{\eta}^{-1}\gamma^{\eta})=s^{\eta}$. But by above,
$s^{\eta}=s^{g\eta}=s^{\theta}$. Therefore, $g_{\eta}^{-1}\gamma^{\eta}$ and
$g_{\theta}^{-1}\gamma^{\theta}$ both start at $e$ and have the same label.
Therefore, they are the same geodesic. Hence,
$g_{\theta}^{-1}\theta=g_{\eta}^{-1}\eta$ i.e. $\alpha(\theta)=\alpha(\eta)$.
It follows that $\alpha$ is a reduction to $=$ on $\partial\hat{\Gamma}$.
Indeed, the above shows that $\theta
E\eta\implies\alpha(\theta)=\alpha(\eta)$. Conversely, if
$\alpha(\theta)=\alpha(\eta)$, then $g_{\eta}^{-1}\eta=g_{\theta}^{-1}\theta$,
so $\theta=g_{\theta}g_{\eta}^{-1}\eta$, and therefore $\theta E\eta$.
It remains to show that $\alpha$ is Borel. To show this, let us first show
that the set $U:=\\{(\eta,s)\in
Z\times(2^{\mathbb{N}})^{\mathbb{N}}:s=s^{\eta}\\}$ is Borel. We have
$s=s^{\eta}$ if and only if $(\eta,s|_{n})\in S_{n}$ for each
$n\in\mathbb{N}$, so $U$ is Borel by Claim 4.17
(note that the map $(\eta,s)\mapsto(\eta,s|_{n})$ is continuous, hence Borel,
for each $n\in\mathbb{N}$).
Now the Borelness of $U$ implies the Borelness of the graph of $\alpha$.
Indeed, note that for $\eta\in Z$ and $\theta\in\partial\hat{\Gamma}$,
denoting by $\text{lab}:Z\times C\to Z\times(2^{\mathbb{N}})^{\mathbb{N}}$ the
continuous map $(\eta,\gamma)\mapsto(\eta,\text{lab}(\gamma))$, we have:
$\displaystyle\theta=g_{\eta}^{-1}\eta$ $\displaystyle\iff\exists\gamma\in
C:\gamma\in\text{CGR}(e,\theta)\text{ and }\text{lab}(\gamma)=s^{\eta}$
$\displaystyle\iff\exists\gamma\in C:(\theta,e,\gamma)\in R\text{ and
}(\eta,\gamma)\in\text{lab}^{-1}(U)$
Putting $T=\\{(\eta,\theta,\gamma)\in Z\times\partial\hat{\Gamma}\times
C:(\theta,e,\gamma)\in R\text{ and }(\eta,\gamma)\in\text{lab}^{-1}(U)\\}$, we
have that $T$ is Borel because $R$ and $U$ are Borel (see Claim 4.7 for the
Borelness of $R$). By above, the graph of $\alpha$ is the projection
$\text{proj}_{Z\times\partial\hat{\Gamma}}(T)$ of $T$ onto the first two
coordinates $(\eta,\theta)$. For each $(\eta,\theta)\in
Z\times\partial\hat{\Gamma}$, the section $T_{(\eta,\theta)}=\\{\gamma\in
C:(\eta,\theta,\gamma)\in T\\}=\\{\gamma\in
C:\gamma\in\text{CGR}(e,\theta)\text{ and }\text{lab}(\gamma)=s^{\eta}\\}$ is
finite, being either a singleton or the empty set (because a geodesic ray is
uniquely determined by its basepoint and label). Therefore, by the
Lusin–Novikov theorem, we have that $\text{proj}_{Z\times\partial\hat{\Gamma}}(T)$
is Borel. Thus, the graph of $\alpha$ is Borel, so $\alpha$ is Borel.
∎
###### Lemma 4.20.
$E$ is smooth on the saturation
$[Z]_{E}=\\{\eta\in\partial\hat{\Gamma}:\exists\theta\in Z\text{ such that
}\theta E\eta\\}$.
###### Proof.
By Lemma 4.19, $E$ is smooth on $Z$, yielding the claim. ∎
###### Definition 4.21.
Let $Y=\partial\hat{\Gamma}\setminus[Z]_{E}$. For each $n\in\mathbb{N}$,
define $H_{n}:\partial\hat{\Gamma}\to 2^{G}$ by
$H_{n}(\eta)=(g_{n}^{\eta})^{-1}T_{n}^{\eta}$. Let $F_{n}$ be the equivalence
relation on $\mathrm{im}H_{n}$ which is the restriction of the shift action of
$G$ on $2^{G}$ to $\mathrm{im}H_{n}$.
The following lemma is a generalization of [15, Lemma 6.7].
###### Lemma 4.22.
There exists a constant $K$ such that for each $n\in\mathbb{N}$, each
equivalence class of $F_{n}$ has size at most $K$.
###### Proof.
Note that by Theorem 3.2, we have that each closed ball of radius $r$ in
$\text{Geo}(x,\eta)$ has cardinality at most $(2(r+2\nu)+1)B$, where $B$ is
the constant from Theorem 3.2. We will show that we can take $K=(20\nu+1)B$.
Let $\eta,\theta\in\partial\hat{\Gamma}$ and suppose that
$H_{n}(\eta)=gH_{n}(\theta)$. By the proof of [15, Lemma 6.7] (which only
relies on the hyperbolicity of the Cayley graph and local finiteness of
geodesic ray bundles and so holds in our context when applied to
$\hat{\Gamma}$), we have $d(e,g)\leq 8\nu$. For completeness, let us reproduce
this proof.
By definition, $T_{n}^{\eta}$ (resp. $T_{n}^{\theta}$) is an infinite subset
of $\text{Geo}(e,\eta)$ (resp. $\text{Geo}(e,\theta)$). Since
$\text{Geo}(e,\eta)$ is locally finite, this means that $T_{n}^{\eta}$ (resp.
$T_{n}^{\theta}$) uniquely determines $\eta$ (resp. $\theta$). From
$H_{n}(\eta)=gH_{n}(\theta)$, we have
$(g_{n}^{\eta})^{-1}T_{n}^{\eta}=g(g_{n}^{\theta})^{-1}T_{n}^{\theta}$ and
since $T_{n}^{\eta}$ and $T_{n}^{\theta}$ determine their boundary points,
this implies that $(g_{n}^{\eta})^{-1}\eta=g(g_{n}^{\theta})^{-1}\theta$. Let
us denote by $\sigma$ the common boundary point
$(g_{n}^{\eta})^{-1}\eta=g(g_{n}^{\theta})^{-1}\theta$.
Figure 3: The geometry of the proof of Lemma 4.22.
We have that
$g,e\in(g_{n}^{\eta})^{-1}T_{n}^{\eta}=g(g_{n}^{\theta})^{-1}T_{n}^{\theta}\subseteq\text{Geo}(g(g_{n}^{\theta})^{-1},\sigma)$,
so there exists $\lambda\in\text{CGR}(g(g_{n}^{\theta})^{-1},\sigma)$ passing
through $g$ and $\lambda^{\prime}\in\text{CGR}(g(g_{n}^{\theta})^{-1},\sigma)$
passing through $e$. Write $g=\lambda(m_{1})$ and $e=\lambda^{\prime}(m_{2})$
for some $m_{1},m_{2}\in\mathbb{N}$. Note that by $\nu$-hyperbolicity, we have
$d(e,\lambda(m_{2}))\leq 2\nu$. Also, we have $m_{2}\geq m_{1}$. Indeed, since
$g_{n}^{\theta}g^{-1}\in
g_{n}^{\theta}g^{-1}(g_{n}^{\eta})^{-1}T_{n}^{\eta}=T_{n}^{\theta}$, we have:
$m_{2}=d(e,g(g_{n}^{\theta})^{-1})=d(e,g_{n}^{\theta}g^{-1})\geq
d(e,g_{n}^{\theta})=d(e,(g_{n}^{\theta})^{-1})=d(g,g(g_{n}^{\theta})^{-1})=m_{1}$
where $d(e,g_{n}^{\theta}g^{-1})\geq d(e,g_{n}^{\theta})$ holds by
$\leq$-minimality of $g_{n}^{\theta}$ in $T_{n}^{\theta}$.
Similarly, from $g,e\in(g_{n}^{\eta})^{-1}T_{n}^{\eta}$, we have
$g,e\in\text{Geo}((g_{n}^{\eta})^{-1},\sigma)$, and so there exists
$\gamma\in\text{CGR}((g_{n}^{\eta})^{-1},\sigma)$ passing through $g$ and
$\gamma^{\prime}\in\text{CGR}((g_{n}^{\eta})^{-1},\sigma)$ passing through
$e$. Write $g=\gamma(m_{3})$ and $e=\gamma^{\prime}(m_{4})$ for some
$m_{3},m_{4}\in\mathbb{N}$ (see Figure 3). By $\nu$-hyperbolicity, we have
$d(e,\gamma(m_{4}))\leq 2\nu$ and $m_{4}\leq m_{3}$ because $g_{n}^{\eta}g\in
g_{n}^{\eta}g(g_{n}^{\theta})^{-1}T_{n}^{\theta}=T_{n}^{\eta}$ and so by the
$\leq$-minimality of $g_{n}^{\eta}$ in $T_{n}^{\eta}$, we have that:
$m_{3}=d((g_{n}^{\eta})^{-1},g)=d(e,g_{n}^{\eta}g)\geq
d(e,g_{n}^{\eta})=d(e,(g_{n}^{\eta})^{-1})=m_{4}$
Let us now consider the sub-CGRs of $\lambda$ and $\gamma$ starting at $g$.
Using $\nu$-hyperbolicity, since $m_{2}\geq m_{1}$, there exists $m_{5}\geq
m_{3}$ such that $d(\lambda(m_{2}),\gamma(m_{5}))\leq 2\nu$. Then by the
triangle inequality and our above estimates, we have:
$d(\gamma(m_{4}),\gamma(m_{5}))\leq
d(\gamma(m_{4}),e)+d(e,\lambda(m_{2}))+d(\lambda(m_{2}),\gamma(m_{5}))\leq
6\nu$
Therefore,
$d(e,g)=d(e,\gamma(m_{3}))\leq
d(e,\gamma(m_{4}))+d(\gamma(m_{4}),\gamma(m_{3}))\leq
2\nu+d(\gamma(m_{4}),\gamma(m_{5}))\leq 8\nu$
where we have $d(\gamma(m_{4}),\gamma(m_{3}))\leq
d(\gamma(m_{4}),\gamma(m_{5}))$ because $m_{5}\geq m_{3}\geq m_{4}$.
Thus, $H_{n}(\eta)=gH_{n}(\theta)$ implies that $g$ is in the ball of radius
$8\nu$ about $e$ in $\text{Geo}((g_{n}^{\eta})^{-1},\sigma)$, which has
cardinality at most $(2(8\nu+2\nu)+1)B=(20\nu+1)B=K$. Thus, $F_{n}$-classes
have cardinality at most $K$.
∎
The following remaining results have the same proof as in [15].
###### Lemma 4.23.
Let $n\in\mathbb{N}$. Then the map $H_{n}$ is Borel and so $\mathrm{im}H_{n}$
is analytic.
###### Proof.
The sets $\\{(\eta,g_{n}^{\eta}):\eta\in\partial\hat{\Gamma}\\}\subseteq\partial\hat{\Gamma}\times
G$, $\\{(\eta,T_{n}^{\eta}):\eta\in\partial\hat{\Gamma}\\}\subseteq\partial\hat{\Gamma}\times 2^{G}$ and
$G_{H_{n}}=\\{(\eta,H_{n}(\eta)):\eta\in\partial\hat{\Gamma}\\}\subseteq\partial\hat{\Gamma}\times 2^{G}$ are all
definable using formulas with countable quantifiers and references to the
Borel sets $D$ and $S_{n}$ (see Claims 4.16 and 4.17), so these sets are all
Borel. Since $G_{H_{n}}$, the graph of $H_{n}$, is Borel, $H_{n}$ is
Borel and hence $\text{im}H_{n}$ is analytic. ∎
Using [15, Lemma 2.3], there exists a finite Borel equivalence relation
$F_{n}^{\prime}$ on $2^{G}$ with $F_{n}\subseteq F_{n}^{\prime}$. Since
$F_{n}^{\prime}$ is a finite Borel equivalence relation, there exists a Borel reduction
$f_{n}:2^{G}\to 2^{\mathbb{N}}$ from $F_{n}^{\prime}$ to $E_{0}$ for each
$n\in\mathbb{N}$, using which we define
$f:\partial\hat{\Gamma}\to(2^{\mathbb{N}})^{\mathbb{N}}$ by
$f(\eta)=(f_{n}(H_{n}(\eta)))_{n}$. Put $E^{\prime}=f^{-1}(E_{1})$, i.e.
$\theta E^{\prime}\eta\iff f(\theta)E_{1}f(\eta)$.
###### Lemma 4.24.
The equivalence relation $E^{\prime}$ is a hyperfinite countable Borel
equivalence relation.
###### Proof.
Since $H_{n}$ is Borel, we have that $E^{\prime}$ is Borel. We also have that
$E^{\prime}$ is hypersmooth by definition, and so it is hyperfinite by [9,
Theorem 8.1.5]. We follow the proof of [15, Lemma 6.9] to show that
$E^{\prime}$ is countable.
For each $n\in\mathbb{N}$, define the relation $E_{n}^{\prime}$ on
$\partial\hat{\Gamma}$ by $\eta E_{n}^{\prime}\theta$ if
$f_{m}(H_{m}(\eta))=f_{m}(H_{m}(\theta))$ for all $m\geq n$. Each
$E_{n}^{\prime}$ is countable: if $\eta E_{n}^{\prime}\theta$, then
$f_{n}(H_{n}(\eta))=f_{n}(H_{n}(\theta))$, and $f_{n}\circ H_{n}$ is
countable-to-one. Indeed, $H_{n}$ is countable-to-one (if
$H_{n}(\eta)=H_{n}(\theta)$, then $\eta E\theta$, and $E$ is countable) and
$f_{n}$ is finite-to-one (since $F_{n}^{\prime}$ is finite). Therefore, once
$\theta$ is fixed, there are only countably many $\eta$ with $\eta
E_{n}^{\prime}\theta$, so $E_{n}^{\prime}$ is countable. Noting that
$E^{\prime}=\bigcup_{n\in\mathbb{N}}E_{n}^{\prime}$, we obtain that
$E^{\prime}$ is countable.
∎
###### Lemma 4.25.
$f$ is a homomorphism from $E|_{Y}$ to $E_{1}$.
###### Proof.
Suppose $\eta,\theta\in Y$ are $E$-related, as witnessed by $g\in G$ (so
$g\eta=\theta$). By [15, Theorem 5.9] and [15, Lemma 3.10], we have that
$g\text{Geo}_{1}(e,\eta)$ and $\text{Geo}_{1}(e,\theta)$ differ by a finite
set. By local finiteness of $\text{Geo}(e,\eta)$ and since $\eta,\theta\in Y$,
we have that there exists $N\in\mathbb{N}$ such that
$gT_{n}^{\eta}\subseteq\text{Geo}_{1}(e,\theta)$ for all $n\geq N$. By the
proof of Lemma 4.19, we have $s_{n}^{\eta}=s_{n}^{\theta}$, which gives,
together with $gT_{n}^{\eta}\subseteq\text{Geo}_{1}(e,\theta)$, that
$gT_{n}^{\eta}=T_{n}^{\theta}$. This then yields
$(g_{n}^{\theta})^{-1}gg_{n}^{\eta}H_{n}(\eta)=H_{n}(\theta)$ for all $n\geq
N$. Thus, we have $H_{n}(\eta)F_{n}H_{n}(\theta)$ and so
$H_{n}(\eta)F_{n}^{\prime}H_{n}(\theta)$ for all $n\geq N$ since
$F_{n}\subseteq F_{n}^{\prime}$. Therefore,
$f_{n}(H_{n}(\eta))=f_{n}(H_{n}(\theta))$ for all $n\geq N$ and so
$f(\eta)E_{1}f(\theta)$.
∎
Let us now establish Theorem A on the hyperfiniteness of $E$, following the
proof of [15, Theorem A].
Proof of Theorem A.
Note that $E|_{Y}$ is a sub-relation of $E^{\prime}$. Indeed, if
$\theta,\eta\in Y$ and $\theta E\eta$, then by Lemma 4.25, we have that
$f(\theta)E_{1}f(\eta)$, which implies $\theta E^{\prime}\eta$. By Lemma 4.24,
we have that $E^{\prime}$ is hyperfinite, so $E|_{Y}$ is hyperfinite, since a
sub-relation of a hyperfinite equivalence relation is hyperfinite. On
$\partial\hat{\Gamma}\setminus Y=[Z]_{E}$, $E$ is smooth by Lemma 4.20, and
hence hyperfinite. Therefore, $E$ is hyperfinite on $\partial\hat{\Gamma}$.
Recall that we worked with a fixed finite generating set $X$ in this
section. If we use a different finite generating set $X^{\prime}$ for $G$,
then the relative Cayley graph $\hat{\Gamma}^{\prime}$ corresponding to
$X^{\prime}$ is $G$-equivariantly quasi-isometric to $\hat{\Gamma}$ (via the
identity map on $G$), so $\partial\hat{\Gamma}^{\prime}$ is $G$-equivariantly
homeomorphic to $\partial\hat{\Gamma}$. It follows that the orbit equivalence
relation of $G$ on $\partial\hat{\Gamma}^{\prime}$ is also hyperfinite.
$\square$
As a corollary, we obtain Corollary B on the hyperfiniteness of the action of
$G$ on $\partial(G,\mathcal{P})$, where $\mathcal{P}$ is the collection of
parabolic subgroups.
Proof of Corollary B.
By Theorem 2.1, $\partial\hat{\Gamma}$ embeds $G$-equivariantly and
topologically into $\partial(G,\mathcal{P})$ with countable complement.
Therefore, the orbit equivalence relation of $G$ on $\partial\hat{\Gamma}$ is
a subrelation of the orbit equivalence relation of $G$ on
$\partial(G,\mathcal{P})$. Since the orbit equivalence relation of $G$ on
$\partial\hat{\Gamma}$ is hyperfinite (by Theorem A) and since
$\partial(G,\mathcal{P})\setminus\partial\hat{\Gamma}$ is countable, it
follows that the orbit equivalence relation of $G$ on
$\partial(G,\mathcal{P})$ is also hyperfinite. $\square$
## References
* [1] S. Adams. Boundary amenability for word hyperbolic groups and an application to smooth dynamics of simple groups. Topology, 33(4):765–783, 1994.
* [2] S. Adams, G. Elliott, and T. Giordano. Amenable actions of groups. Trans. Amer. Math. Soc., 344(2):803–822, 1994.
* [3] Claire Anantharaman-Delaroche and Jean Renault. Amenable groupoids. Monographies de L’Enseignement Mathematique [Monographs of L’Enseignement Mathematique], 36, 2000.
* [4] B. Bowditch. Relatively hyperbolic groups. International Journal of Algebra and Computation, 22(03):1250016, 2012.
* [5] A. Connes, J. Feldman, and B. Weiss. An amenable equivalence relation is generated by a single transformation. Ergodic Theory and Dynamical Systems, 1(4):431–450, 1981.
* [6] M. Dadarlat and E. Guentner. Uniform embeddability of relatively hyperbolic groups. Journal für die reine und angewandte Mathematik, 612(2007):1–15, 2007.
* [7] François Dahmani. Les groupes relativement hyperboliques et leurs bords. Prépublication de l’Institut de Recherche Mathématique Avancée [Prepublication of the Institute of Advanced Mathematical Research], 2003.
* [8] R. Dougherty, S. Jackson, and A. S. Kechris. The structure of hyperfinite Borel equivalence relations. Trans. Amer. Math. Soc., 341(1), 1994.
* [9] Su Gao. Invariant Descriptive Set Theory, volume 293 of Pure and Applied Mathematics. CRC Press, 2009.
* [10] M. Gromov. Hyperbolic groups. Essays in group theory, Math. Sci. Res. Inst. Publ., 8:75–263, 1987.
* [11] Matthias Hamann, Florian Lehner, Babak Miraftab, and Tim Ruhmann. A Stallings’ type theorem for quasi-transitive graphs. J. Combin. Theory, Series B, 157:40–69, 2022.
* [12] Jingyin Huang, Marcin Sabok, and Forte Shinko. Hyperfiniteness of boundary actions of cubulated hyperbolic groups. Ergodic Theory and Dynamical Systems, 40(9):2453–2466, Mar 2019.
* [13] Alexander Kechris. Classical Descriptive Set Theory, volume 156 of Graduate Texts in Mathematics. Springer-Verlag, 1995.
* [14] Timothée Marquis. On geodesic ray bundles in buildings. Geometriae Dedicata, 202(1):27–43, Oct 2018.
* [15] Timothée Marquis and Marcin Sabok. Hyperfiniteness of boundary actions of hyperbolic groups. Mathematische Annalen, 377(3-4):1129–1153, Jun 2020.
* [16] Denis Osin. Asymptotic dimension of relatively hyperbolic groups. International Mathematics Research Notices, 35(2005):2143–2161, 2005.
* [17] Denis Osin. Relatively hyperbolic groups: Intrinsic geometry, algebraic properties, and algorithmic problems, volume 179. Memoirs American Mathematical Society, 2006.
* [18] Narutaka Ozawa. Boundary amenability of relatively hyperbolic groups. Topology and its Applications, 153:2624–2630, 2006.
* [19] Piotr Przytycki and Marcin Sabok. Unicorn paths and hyperfiniteness for the mapping class group. Forum of Mathematics, Sigma, 9:e36, 2021.
* [20] Nicholas W. M. Touikan. On geodesic ray bundles in hyperbolic groups. Proceedings of the American Mathematical Society, 2018.
* [21] A. Vershik. The action of $PSL(2,\mathbb{Z})$ in $\mathbb{R}^{1}$ is approximable. Uspehi Mat. Nauk, 199(1):209–210, 1978.
* [22] R. Zimmer. Amenable ergodic group actions and an application to poisson boundaries of random walks. J. Functional Analysis, 27(3):350–372, 1978.
# Ellipsoidal and hyperbolic Radon transforms; microlocal properties and
injectivity
James W. Webber$\dagger$ , Sean Holman$\ddagger$ and Eric Todd Quinto
Department of Oncology and Gynecology, Brigham and Women’s Hospital, 221
Longwood Ave. Boston, MA 02115 Department of Mathematics, The University of
Manchester, Alan Turing Building, Oxford Road, Manchester M13 9PY Department
of Mathematics, Tufts University, 177 College Ave, Medford, MA 02155
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
We present novel microlocal and injectivity analyses of ellipsoid and
hyperboloid Radon transforms. We introduce a new Radon transform, $R$, which
defines the integrals of a compactly supported $L^{2}$ function, $f$, over
ellipsoids and hyperboloids with centers on a smooth connected surface, $S$.
$R$ is shown to be a Fourier Integral Operator (FIO) and in our main theorem
we prove that $R$ satisfies the Bolker condition if the support of $f$ is
connected and not intersected by any plane tangent to $S$. Under certain
conditions, this is an equivalence. We give examples where our theory can be
applied. Focusing specifically on a cylindrical geometry of interest in
Ultrasound Reflection Tomography (URT), we prove injectivity results and
investigate the visible singularities. In addition, we present example
reconstructions of image phantoms in two-dimensions, and validate our
microlocal theory.
###### Key words and phrases:
ellipsoids, hyperboloids, Radon transforms, microlocal analysis, stability,
injectivity
## 1\. Introduction
In this paper, we introduce a novel Radon transform, $R$, which defines the
integrals of compactly supported $L^{2}$ functions in $\mathbb{R}^{n}$ over
ellipsoid, two-sheeted hyperboloid, and elliptic hyperboloid surfaces, with
centers on a smooth, $(n-1)$-dimensional hypersurface, which we denote by $S$.
$R$ has applications in many imaging fields, such as Ultrasound Reflection
Tomography (URT), Photoacoustic Tomography (PAT), ground penetrating radar,
and Synthetic Aperture Radar (SAR). We present a novel microlocal and
injectivity analysis of $R$, and determine the singularities (image edges)
detected by $R$ in examples of interest in URT.
The literature considers microlocal and injectivity analysis of spherical and
ellipsoidal Radon transforms [13, 22, 32, 5, 30, 6, 2, 17, 25, 7, 16, 29, 23,
14, 24, 4]. Analytic uniqueness is considered in [17]. In [25], the authors
consider a Radon transform, $\mathcal{R}$, which defines the integrals of an
$n$-D function over $(n-1)$-dimensional spheres with centers on a smooth,
strongly convex hypersurface, denoted by $\mathcal{S}$ (using the notation of
[25]). The authors show that $\mathcal{R}$ is a Fourier Integral Operator
(FIO) with left projection that drops rank on planes tangent to $\mathcal{S}$.
More precisely, the left and right projections of $\mathcal{R}$ are shown to
be Whitney folds. This means that there are artifacts in filtered
backprojection type reconstructions from $\mathcal{R}f$ data which are
reflections in hyperplanes tangent to $\mathcal{S}$.
In [2], the authors present a microlocal analysis of an elliptic Radon
transform, $\mathcal{R}$ (to adopt the notation of [2]), of interest in two-
dimensional URT. The authors consider a scanning modality, whereby a single
emitter-receiver pair, kept a fixed distance apart, are rotated about the
origin on lines tangent to the unit circle. The reflectivity function, which
is the reconstruction target in URT, is supported on the interior of the unit
circle. $\mathcal{R}f$ has two degrees of freedom, which are the major
diameter of the ellipse, and the position of the ellipse center, which lies on the
unit circle and follows the emitter-receiver rotation. The authors prove that
$\mathcal{R}$ is an elliptic FIO with canonical relation, $\mathcal{C}$, which
satisfies the Bolker condition. After which, it is shown that the normal
operator of $\mathcal{R}$ is an elliptic Pseudodifferential Operator (PDO),
order $-1$, and thus the inverse of $\mathcal{R}$ is stable on Sobolev scale
$\frac{1}{2}$.
In [16], the authors consider a spherical Radon transform. The spheres of
integration have centers restricted to cylindrical hypersurfaces of the form
$\Gamma\times\mathbb{R}^{m}$, where $\Gamma$ is a hypersurface in
$\mathbb{R}^{n}$. The authors present a general methodology for inverting
spherical Radon transforms with center set $\Gamma\times\mathbb{R}^{m}$.
Specifically, the authors show that if an inversion formula is known for the
center set $\Gamma$, then this can be extended to
$\Gamma\times\mathbb{R}^{m}$. They apply the theory of [4], which provides
inversion formulae for the spherical Radon transform with a flat plane center
set, to derive inversion formulae for elliptic and circular cylinder center
sets. Numerical results are also provided when the set of sphere centers is an
elliptic cylinder, and the authors present simulated reconstructions of image
phantoms from spherical integral data using the proposed formulae. A blurring
effect is observed near sharp discontinuities in the image reconstructions,
indicating that not all singularities are well resolved when the center set is
an elliptic cylinder.
In our work, we introduce a novel Radon transform, denoted by $R$, which
defines the integrals over ellipsoids and hyperboloids with centers on a
smooth, connected surface, $S$. We show that $R$ is an FIO. Our central
theorem proves that $R$ satisfies the Bolker condition if
$\text{supp}(f)$ is connected and not intersected by any hyperplane tangent to $S$; under certain conditions, this is an equivalence. The
Bolker condition is important as it relates to image artifacts in filtered
backprojection type reconstructions from Radon transform data, specifically to
artifacts which are additional (unwanted) singularities in the reconstruction
that are not in the object. Such artifacts are also often observed using
iterative solvers and algebraic reconstruction techniques [35]. If the Bolker
condition is satisfied, this implies reconstruction stability, and unwanted
microlocal singularities are eliminated. Conversely, if the Bolker condition
fails, the capacity for artifacts is amplified.
The calculations that determine whether the Bolker condition holds also shed
light on the nature of the image artifacts when it fails, and can be used to
predict artifact locations and to help suppress artifacts [10, 34].
In a similar vein to [25], the left projection of $R$ is shown to drop rank on
planes which are tangent to $S$ and we discover “mirror point” type artifacts
which occur on opposite sides of planes tangent to $S$. Specifically, if the
tangent planes to $S$ do not intersect $\text{supp}(f)$, then we show that the
artifacts are constrained to lie outside of $\text{supp}(f)$, and thus the
Bolker condition holds. This is one of the central ideas of our main theorem.
In [25], the surfaces of integration are spheres, which are symmetric about
any plane through their center. This causes the reflection artifacts
discovered in [25]. However, ellipsoids and hyperboloids do not share such
symmetries, and thus the artifacts we discover are not reflections through
planes tangent to $S$, as in [25], but can be understood as a “perturbed” or
“distorted” reflection. See Section 3.1 for a more detailed discussion of
mirror point artifacts. The microlocal theory we present here is a
generalization of the work of [25], to ellipsoid and hyperboloid integration
surfaces.
After establishing our central microlocal theorems, we present a number of
examples where our theory can be applied, some of which are relevant to URT.
We focus on a cylindrical scanning geometry in $\mathbb{R}^{3}$, of interest
in URT, and prove injectivity results. Specifically, we prove that any $L^{2}$
function, $f$, compactly supported on the interior of a unit cylinder in
$\mathbb{R}^{3}$, can be reconstructed uniquely from its integrals over
spheroids with centers on the unit cylinder. A unit cylinder in
$\mathbb{R}^{3}$ is a special case of the more general cylindrical
hypersurfaces considered in [16]. The authors of [16] consider spherical
integral surfaces, whereas we consider, more general, spheroid integral
surfaces. Our injectivity results hold for compactly supported $L^{2}$
functions, which advances the theory of [16], as their inversion formulae
apply only to smooth functions of compact support. In addition, we show, using
Volterra integral equation theory [28], that, with limited spheroid radii, one
can reconstruct $f$ on cylindrical tubes (or “layers”) which are subsets of
the unit cylinder interior. Limited sphere radii are not considered in [16].
We aim to address limited spheroid and sphere radii in this work.
The remainder of this paper is organized as follows. In section 2, we give
some definitions from microlocal analysis that will be used in our theorems.
In section 3, we define our generalized Radon transform and prove our main
microlocal theorems, and follow up with some examples in section 3.2. In
section 4, we investigate a cylindrical scanning geometry with applications in
URT, and prove our main injectivity theorems. We also discuss in detail the
visible singularities and show how the wavefront coverage varies with
emitter/receiver discretization. To finish, in section 5, we present some
example image reconstructions in two-dimensions and verify our microlocal
theory.
## 2\. Definitions from microlocal analysis
We next provide some notation and definitions. Let $X$ and $Y$ be open subsets
of $\mathbb{R}^{n_{X}}$ and $\mathbb{R}^{n_{Y}}$, respectively. Let
$\mathcal{D}(X)$ be the space of smooth functions compactly supported on $X$
with the standard topology and let $\mathcal{D}^{\prime}(X)$ denote its dual
space, the vector space of distributions on $X$. Let $\mathcal{E}(X)$ be the
space of all smooth functions on $X$ with the standard topology and let
$\mathcal{E}^{\prime}(X)$ denote its dual space, the vector space of
distributions with compact support contained in $X$. Finally, let
$\mathcal{S}({{\mathbb{R}}^{n}})$ be the space of Schwartz functions, i.e., smooth
functions that, along with all their derivatives, decay rapidly at $\infty$. See [31] for more
information.
For a function $f$ in the Schwartz space $\mathcal{S}(\mathbb{R}^{n_{X}})$ or
in $L^{2}({{\mathbb{R}}^{n}})$, we use $\mathcal{F}f$ and $\mathcal{F}^{-1}f$
to denote the Fourier transform and inverse Fourier transform of $f$,
respectively (see [18, Definition 7.1.1]). Note that
$\mathcal{F}^{-1}\mathcal{F}f({\mathbf{x}})=\frac{1}{(2\pi)^{n_{X}}}\int_{{\mathbf{y}}\in\mathbb{R}^{n_{X}}}\int_{{\mathbf{z}}\in\mathbb{R}^{n_{X}}}\exp(i({\mathbf{x}}-{\mathbf{z}})\cdot{\mathbf{y}})\,f({\mathbf{z}})\,\mathrm{d}{\mathbf{z}}\,\mathrm{d}{\mathbf{y}}$.
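For reference, the Fourier transform convention of [18, Definition 7.1.1] used here can be written out explicitly (the $(2\pi)^{-n_{X}}$ factor sits entirely on the inverse transform):

```latex
\mathcal{F}f(\boldsymbol{\xi})
=\int_{\mathbb{R}^{n_{X}}}e^{-i\,\mathbf{x}\cdot\boldsymbol{\xi}}\,f(\mathbf{x})\,\mathrm{d}\mathbf{x},
\qquad
\mathcal{F}^{-1}g(\mathbf{x})
=\frac{1}{(2\pi)^{n_{X}}}\int_{\mathbb{R}^{n_{X}}}e^{i\,\mathbf{x}\cdot\boldsymbol{\xi}}\,g(\boldsymbol{\xi})\,\mathrm{d}\boldsymbol{\xi}.
```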
We use the standard multi-index notation: if
$\alpha=(\alpha_{1},\alpha_{2},\dots,\alpha_{n_{X}})\in\left\\{0,1,2,\dots\right\\}^{n_{X}}$
is a multi-index and $f$ is a function on $\mathbb{R}^{n_{X}}$, then
$\partial^{\alpha}f=\left(\frac{\partial}{\partial
x_{1}}\right)^{\alpha_{1}}\left(\frac{\partial}{\partial
x_{2}}\right)^{\alpha_{2}}\cdots\left(\frac{\partial}{\partial
x_{n_{X}}}\right)^{\alpha_{n_{X}}}f.$
If $f$ is a function of $({\mathbf{y}},{\mathbf{x}},\mathbf{s})$ then
$\partial^{\alpha}_{\mathbf{y}}f$ and $\partial^{\alpha}_{\mathbf{s}}f$ are
defined similarly.
We identify cotangent spaces on Euclidean spaces with the underlying Euclidean
spaces, so we identify $T^{*}(X)$ with $X\times\mathbb{R}^{n_{X}}$. If $\Phi$
is a function of $({\mathbf{y}},{\mathbf{x}},\mathbf{s})\in Y\times
X\times{{\mathbb{R}}}^{N}$ then we define
$\mathrm{d}_{{\mathbf{y}}}\Phi=\left(\frac{\partial\Phi}{\partial
y_{1}},\frac{\partial\Phi}{\partial y_{2}},\cdots,\frac{\partial\Phi}{\partial
y_{{n_{X}}}}\right)$, and $\mathrm{d}_{\mathbf{x}}\Phi$ and
$\mathrm{d}_{\mathbf{s}}\Phi$ are defined similarly. Identifying the cotangent
space with the Euclidean space as mentioned above, we let
$\mathrm{d}\Phi=\left(\mathrm{d}_{{\mathbf{y}}}\Phi,\mathrm{d}_{{\mathbf{x}}}\Phi,\mathrm{d}_{\mathbf{s}}\Phi\right)$.
We use the convenient notation that if $A\subset{{\mathbb{R}}}^{m}$, then
$\dot{A}=A\setminus\mathbf{0}$.
The singularities of a function and the directions in which they occur are
described by the wavefront set [9, page 16]:
###### Definition 2.1.
Let $X$ be an open subset of ${{\mathbb{R}}^{n}}$ and let $f$ be a
distribution in $\mathcal{D}^{\prime}(X)$. Let
$({\mathbf{x}}_{0},{\boldsymbol{\xi}}_{0})\in
X\times{\dot{{\mathbb{R}}^{n}}}$. Then $f$ is _smooth at ${\mathbf{x}}_{0}$ in
direction ${\boldsymbol{\xi}_{0}}$_ if there exists a neighborhood $U$ of
${\mathbf{x}}_{0}$ and $V$ of ${\boldsymbol{\xi}}_{0}$ such that for every
$\Phi\in\mathcal{D}(U)$ and $N\in\mathbb{R}$ there exists a constant $C_{N}$
such that for all ${\boldsymbol{\xi}}\in V$ and $\lambda>0$,
(2.1) $\left|\mathcal{F}(\Phi f)(\lambda{\boldsymbol{\xi}})\right|\leq
C_{N}(1+\left|\lambda\right|)^{-N}.$
The pair $({\mathbf{x}}_{0},{\boldsymbol{\xi}_{0}})$ is in the _wavefront
set,_ $\mathrm{WF}(f)$, if $f$ is not smooth at ${\mathbf{x}}_{0}$ in
direction ${\boldsymbol{\xi}_{0}}$.
This definition follows the intuitive idea that the elements of
$\mathrm{WF}(f)$ are the point–normal vector pairs above points of $X$ at
which $f$ has singularities. For example, if $f$ is the characteristic
function of the unit ball in $\mathbb{R}^{3}$, then its wavefront set is
$\mathrm{WF}(f)=\\{({\mathbf{x}},t{\mathbf{x}}):{\mathbf{x}}\in S^{2},t\neq
0\\}$, the set of points on a sphere paired with the corresponding normal
vectors to the sphere.
The wavefront set of a distribution on $X$ is normally defined as a subset of the
cotangent bundle $T^{*}(X)$ so it is invariant under diffeomorphisms, but we
do not need this invariance, so we will continue to identify
$T^{*}(X)=X\times{{\mathbb{R}}^{n}}$ and consider $\mathrm{WF}(f)$ as a subset
of $X\times{\dot{{\mathbb{R}}^{n}}}$.
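As a second standard illustration (a routine computation, not taken from the text), consider the Dirac distribution $\delta_{0}$ on ${{\mathbb{R}}^{n}}$. For any cutoff $\Phi\in\mathcal{D}(U)$ with $\Phi(\mathbf{0})\neq 0$,

```latex
\mathcal{F}(\Phi\,\delta_{0})(\lambda\boldsymbol{\xi})
=\big\langle\delta_{0},\,\Phi\,e^{-i\,\lambda\boldsymbol{\xi}\cdot(\,\cdot\,)}\big\rangle
=\Phi(\mathbf{0}),
```

which is constant in $\lambda$ and so never satisfies the decay estimate (2.1). Hence $\mathrm{WF}(\delta_{0})=\\{(\mathbf{0},\boldsymbol{\xi}):\boldsymbol{\xi}\neq\mathbf{0}\\}$: the singularity at the origin is detected in every direction.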
###### Definition 2.2 ([18, Definition 7.8.1]).
We define $S^{m}(Y\times X,\mathbb{R}^{N})$ to be the set of
$a\in\mathcal{E}(Y\times X\times\mathbb{R}^{N})$ such that for every compact
set $K\subset Y\times X$ and all multi–indices $\alpha,\beta,\gamma$ the bound
$\left|\partial^{\gamma}_{{\mathbf{y}}}\partial^{\beta}_{{\mathbf{x}}}\partial^{\alpha}_{{\boldsymbol{\sigma}}}a({\mathbf{y}},{\mathbf{x}},{\boldsymbol{\sigma}})\right|\leq
C_{K,\alpha,\beta,\gamma}(1+\left\lVert{\boldsymbol{\sigma}}\right\rVert)^{m-|\alpha|},\
\ \ ({\mathbf{y}},{\mathbf{x}})\in K,\
{\boldsymbol{\sigma}}\in\mathbb{R}^{N},$
holds for some constant $C_{K,\alpha,\beta,\gamma}>0$.
The elements of $S^{m}$ are called _symbols_ of order $m$. Note that these
symbols are sometimes denoted $S^{m}_{1,0}$. The symbol $a\in S^{m}(Y\times
X,{{\mathbb{R}}}^{N})$ is _elliptic_ if for each compact set $K\subset Y\times
X$, there is a $C_{K}>0$ and $M>0$ such that
(2.2) $\left|a({\mathbf{y}},{\mathbf{x}},{\boldsymbol{\sigma}})\right|\geq
C_{K}(1+\left\lVert{\boldsymbol{\sigma}}\right\rVert)^{m},\ \ \
({\mathbf{y}},{\mathbf{x}})\in K,\
\left\lVert{\boldsymbol{\sigma}}\right\rVert\geq M.$
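A simple concrete example (standard, and not taken from the text): the function

```latex
a(\mathbf{y},\mathbf{x},\boldsymbol{\sigma})
=\left(1+\left\lVert\boldsymbol{\sigma}\right\rVert^{2}\right)^{m/2}
```

belongs to $S^{m}(Y\times X,\mathbb{R}^{N})$, since each $\boldsymbol{\sigma}$-derivative lowers the growth by one power of $(1+\left\lVert\boldsymbol{\sigma}\right\rVert)$, and it is elliptic of order $m$: from $(1+\left\lVert\boldsymbol{\sigma}\right\rVert)^{2}\leq 2(1+\left\lVert\boldsymbol{\sigma}\right\rVert^{2})$ one obtains $|a(\mathbf{y},\mathbf{x},\boldsymbol{\sigma})|\geq 2^{-|m|/2}(1+\left\lVert\boldsymbol{\sigma}\right\rVert)^{m}$.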
###### Definition 2.3 ([19, Definition 21.2.15]).
A function
$\Phi=\Phi({\mathbf{y}},{\mathbf{x}},{\boldsymbol{\sigma}})\in\mathcal{E}(Y\times
X\times\dot{\mathbb{R}^{N}})$ is a _phase function_ if
$\Phi({\mathbf{y}},{\mathbf{x}},\lambda{\boldsymbol{\sigma}})=\lambda\Phi({\mathbf{y}},{\mathbf{x}},{\boldsymbol{\sigma}})$,
$\forall\lambda>0$ and $\mathrm{d}\Phi$ is nowhere zero. The _critical set of
$\Phi$_ is
$\Sigma_{\Phi}=\\{({\mathbf{y}},{\mathbf{x}},{\boldsymbol{\sigma}})\in Y\times
X\times\dot{\mathbb{R}^{N}}:\mathrm{d}_{{\boldsymbol{\sigma}}}\Phi=0\\}.$
A phase function is _clean_ if the critical set
$\Sigma_{\Phi}=\\{({\mathbf{y}},{\mathbf{x}},{\boldsymbol{\sigma}})\ :\
\mathrm{d}_{\boldsymbol{\sigma}}\Phi({\mathbf{y}},{\mathbf{x}},{\boldsymbol{\sigma}})=0\\}$
is a smooth manifold with tangent space defined by the kernel of
$\mathrm{d}\,(\mathrm{d}_{\sigma}\Phi)$ on $\Sigma_{\Phi}$. Here, the
derivative $\mathrm{d}$ is applied component-wise to the vector-valued
function $\mathrm{d}_{\sigma}\Phi$. So,
$\mathrm{d}\,(\mathrm{d}_{\sigma}\Phi)$ is treated as a Jacobian matrix of
dimensions $N\times(2n+N)$.
By the Constant Rank Theorem the requirement for a phase function to be clean
is satisfied if $\mathrm{d}\left(\mathrm{d}_{\boldsymbol{\sigma}}\Phi\right)$
has constant rank.
###### Definition 2.4 ([19, Definition 21.2.15] and [20, section 25.2]).
Let $X$ and $Y$ be open subsets of ${{\mathbb{R}}^{n}}$. Let
$\Phi\in\mathcal{E}\left(Y\times X\times{{{\mathbb{R}}}}^{N}\right)$ be a
clean phase function. In addition, we assume that $\Phi$ is _nondegenerate_ in
the following sense:
$\mathrm{d}_{{\mathbf{y}}}\Phi$ and $\mathrm{d}_{{\mathbf{x}}}\Phi$ are never
zero on $\Sigma_{\Phi}$.
The _canonical relation parametrized by $\Phi$_ is defined as
(2.3) $\displaystyle\mathcal{C}=$
$\displaystyle\left\\{\left(\left({\mathbf{y}},\mathrm{d}_{{\mathbf{y}}}\Phi({\mathbf{y}},{\mathbf{x}},{\boldsymbol{\sigma}})\right);\left({\mathbf{x}},-\mathrm{d}_{{\mathbf{x}}}\Phi({\mathbf{y}},{\mathbf{x}},{\boldsymbol{\sigma}})\right)\right):({\mathbf{y}},{\mathbf{x}},{\boldsymbol{\sigma}})\in\Sigma_{\Phi}\right\\}.$
###### Definition 2.5.
Let $X$ and $Y$ be open subsets of ${{\mathbb{R}}^{n}}$ and
$\mathbb{R}^{n_{Y}}$, respectively. Let an operator
$A:\mathcal{D}(X)\to\mathcal{D}^{\prime}(Y)$ be defined by the distribution
kernel $K_{A}\in\mathcal{D}^{\prime}(Y\times X)$, in the sense that
$Af({\mathbf{y}})=\int_{X}K_{A}({\mathbf{y}},{\mathbf{x}})f({\mathbf{x}})\mathrm{d}{\mathbf{x}}$.
Then we call $K_{A}$ the _Schwartz kernel_ of $A$. A _Fourier integral
operator (FIO)_ of order $m+N/2-(n_{X}+n_{Y})/2$ is an operator
$A:\mathcal{D}(X)\to\mathcal{D}^{\prime}(Y)$ with Schwartz kernel given by an
oscillatory integral of the form
(2.4)
$K_{A}({\mathbf{y}},{\mathbf{x}})=\int_{\mathbb{R}^{N}}e^{i\Phi({\mathbf{y}},{\mathbf{x}},{\boldsymbol{\sigma}})}a({\mathbf{y}},{\mathbf{x}},{\boldsymbol{\sigma}})\mathrm{d}{\boldsymbol{\sigma}},$
where $\Phi$ is a clean nondegenerate phase function and $a$ is a symbol in
$S^{m}(Y\times X,\mathbb{R}^{N})$. The _canonical relation of $A$_ is the
canonical relation of $\Phi$ defined in (2.3). The FIO $A$ is _elliptic_ if
its symbol is elliptic.
This is a simplified version of the definition of FIOs in [8, section 2.4] or
[20, section 25.2] that is suitable when there are global coordinates and a
global phase function. In general, an FIO must be defined using a partition of
unity, local coordinates, and phase functions corresponding to local regions
of the same, globally defined, canonical relation; for details see [8, section
2.4] or [20, section 25.2]. Because we assume phase functions are
nondegenerate, our FIOs can be defined as maps from $\mathcal{E}^{\prime}(X)$
to $\mathcal{D}^{\prime}(Y)$ and sometimes on larger domains. For general
information about FIOs, see [8, 20, 19]. For information about the Schwartz
kernel, see [18, Theorem 5.1.9].
Pseudodifferential operators, defined next, are a special class of FIOs that
includes linear differential operators.
###### Definition 2.6.
An FIO is a pseudodifferential operator if its canonical relation
$\mathcal{C}$ is contained in the diagonal
$\Delta:=\\{({\mathbf{x}},{\boldsymbol{\xi}};{\mathbf{x}},{\boldsymbol{\xi}})\\}.$
Let $X$ and $Y$ be sets and let $\Omega_{1}\subset X$ and $\Omega_{2}\subset
Y\times X$. The composition $\Omega_{2}\circ\Omega_{1}$ and transpose
$\Omega_{2}^{t}$ of $\Omega_{2}$ are defined
$\displaystyle\Omega_{2}\circ\Omega_{1}$
$\displaystyle=\left\\{{\mathbf{y}}\in Y\ :\
\exists{\mathbf{x}}\in\Omega_{1},\
({\mathbf{y}},{\mathbf{x}})\in\Omega_{2}\right\\}$
$\displaystyle\Omega_{2}^{t}$
$\displaystyle=\left\\{({\mathbf{x}},{\mathbf{y}})\ :\
({\mathbf{y}},{\mathbf{x}})\in\Omega_{2}\right\\}.$
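As a small computational illustration (ours, not taken from the references), the composition and transpose above can be modelled directly on finite sets; the two functions below mirror the displayed formulas.

```python
# Finite-set model (our illustration) of the composition and transpose above.
# Omega2 is a relation, a set of pairs (y, x); Omega1 is a set of points x.
def compose(Omega2, Omega1):
    """Omega2 o Omega1 = {y in Y : there is x in Omega1 with (y, x) in Omega2}."""
    return {y for (y, x) in Omega2 if x in Omega1}

def transpose(Omega2):
    """Omega2^t = {(x, y) : (y, x) in Omega2}."""
    return {(x, y) for (y, x) in Omega2}

# Tiny example with Y = {0, 1} and X = {'a', 'b'}:
Omega2 = {(0, 'a'), (1, 'b')}
assert compose(Omega2, {'a'}) == {0}
assert transpose(Omega2) == {('a', 0), ('b', 1)}
```

The same pattern applies verbatim to the compositions of canonical relations with wavefront sets used below, with covectors in place of points.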
The Hörmander-Sato Lemma relates the wavefront set of a distribution to the
wavefront set of its image under an FIO.
###### Theorem 2.7 ([18, Theorem 8.2.13]).
Let $f\in\mathcal{E}^{\prime}(X)$ and let
${A}:\mathcal{E}^{\prime}(X)\to\mathcal{D}^{\prime}(Y)$ be an FIO with
canonical relation $\mathcal{C}$. Then,
$\mathrm{WF}({A}f)\subset\mathcal{C}\circ\mathrm{WF}(f)$.
Let $A$ be an FIO with adjoint $A^{*}$. Then if $\mathcal{C}$ is the canonical
relation of $A$, the canonical relation of $A^{*}$ is $\mathcal{C}^{t}$. Many
imaging techniques are based on application of the adjoint operator $A^{*}$
and so to understand artifacts we consider $A^{*}A$ (or, if $A$ does not map
to $\mathcal{E}^{\prime}(Y)$, then $A^{*}\psi A$ for an appropriate cutoff
$\psi$). Because of Theorem 2.7,
$\mathrm{WF}(A^{*}\psi
Af)\subset\mathcal{C}^{t}\circ\mathcal{C}\circ\mathrm{WF}(f).$
The next two definitions provide tools, which we will apply in the next
section, to analyze this composition.
###### Definition 2.8.
Let $\mathcal{C}\subset T^{*}(Y\times X)$ be the canonical relation associated
to the FIO ${A}:\mathcal{E}^{\prime}(X)\to\mathcal{D}^{\prime}(Y)$. We let
$\Pi_{L}$ and $\Pi_{R}$ denote the natural left- and right-projections of
$\mathcal{C}$, projecting onto the appropriate coordinates:
$\Pi_{L}:\mathcal{C}\to T^{*}(Y)$ and $\Pi_{R}:\mathcal{C}\to T^{*}(X)$.
Because $\Phi$ is nondegenerate, the projections do not map to the zero
section. If $A$ satisfies our next definition, then $A^{*}A$ (or $A^{*}\psi
A$) is a pseudodifferential operator [15, 27].
###### Definition 2.9.
Let ${A}:\mathcal{E}^{\prime}(X)\to\mathcal{D}^{\prime}(Y)$ be an FIO with
canonical relation $\mathcal{C}$. Then $A$ (or $\mathcal{C}$) satisfies the
_Bolker Condition_ if the natural projection $\Pi_{L}:\mathcal{C}\to T^{*}(Y)$
is an embedding (injective immersion).
## 3\. Ellipsoid and hyperboloid Radon transforms
In this section we show under fairly weak assumptions that a general Radon
transform integrating over ellipsoids, hyperboloids, or elliptic hyperboloids
with centers on a surface satisfies the Bolker condition. Then, we investigate
several special cases.
Let $\mathrm{Sym}(n)$ denote the set of invertible symmetric $n\times n$
matrices with real entries, which is an $n(n+1)/2$ dimensional smooth
manifold, and suppose
$A\in\mathrm{Sym}(n)$. Let $S$ be a smooth connected hypersurface in
${{\mathbb{R}}^{n}}$. For $(\mathbf{s},A,t)\in
S\times\mathrm{Sym}(n)\times\mathbb{R}=:Y$, let
(3.1)
$\Psi(\mathbf{s},A,t;{\mathbf{x}})=t-{\mathbf{x}}_{T}^{T}A{\mathbf{x}}_{T}\ \
\text{where}\ \ {\mathbf{x}}_{T}={\mathbf{x}}-\mathbf{s}.$
If $A$ is positive definite and $t>0$, then
$\Psi(\mathbf{s},A,t;{\mathbf{x}})=0$ is the defining equation of an ellipsoid
with center at $\mathbf{s}$. In other cases,
$\Psi(\mathbf{s},A,t;{\mathbf{x}})=0$ can be a hyperboloid or elliptic
hyperboloid. Note that if $t=0$, the surface
$\Psi(\mathbf{s},A,0;{\mathbf{x}})=0$ is singular. Therefore, we will exclude
$t=0$ from our analysis.
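As a quick numerical sanity check (ours, with made-up matrices), one can classify the level set $\Psi=0$ by the eigenvalue signs of $A$ and verify the defining equation at a sample point:

```python
import numpy as np

# Classify the quadric Psi(s, A, t; x) = t - (x - s)^T A (x - s) = 0 by the
# eigenvalue signs of A (our sketch; only the cases discussed above).
def classify(A, t):
    eig = np.linalg.eigvalsh(A)
    if np.all(eig > 0) and t > 0:
        return "ellipsoid"
    if np.any(eig > 0) and np.any(eig < 0):
        return "hyperboloid"
    return "other"

s0 = np.array([1.0, 0.0, 0.0])                 # a center s on the surface S
A_ell = np.diag([1.0, 4.0, 1.0])               # positive definite
A_hyp = np.diag([1.0, -1.0, 1.0])              # indefinite
assert classify(A_ell, 2.0) == "ellipsoid"
assert classify(A_hyp, 2.0) == "hyperboloid"

# A point with Psi = 0 for t = 2: take x = s0 + v with v^T A v = 2.
x = s0 + np.array([np.sqrt(2.0), 0.0, 0.0])
Psi = 2.0 - (x - s0) @ A_ell @ (x - s0)
assert abs(Psi) < 1e-12
```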
Our Radon transform can be written
(3.2)
$\begin{split}Rf(\mathbf{s},A,t)&=\int_{\mathbb{R}^{n}}\left|\nabla_{{\mathbf{x}}}\Psi\right|\delta\left(\Psi(\mathbf{s},A,t;{\mathbf{x}})\right)f({\mathbf{x}})\mathrm{d}{\mathbf{x}}\\\
&=\int_{-\infty}^{\infty}\int_{\mathbb{R}^{n}}\left|\nabla_{{\mathbf{x}}}\Psi\right|f({\mathbf{x}})e^{i\sigma\Psi(\mathbf{s},A,t;{\mathbf{x}})}\mathrm{d}{\mathbf{x}}\mathrm{d}\sigma\end{split}$
for $f\in L^{2}_{c}(D)$, where $D$ is an open, connected subset of
${{\mathbb{R}}^{n}}$. The most general case we will consider is when $A$ is
restricted to be in an embedded submanifold $M\subset\mathrm{Sym}(n)$. Note
that this includes the case when $M=\\{A\\}$ is a single matrix and thus a
zero dimensional submanifold. With this in mind, we define
$Y_{M}=S\times M\times\dot{\mathbb{R}}$
and the operator $R_{M}$ is given by (3.2) but with $A$ restricted to $M$.
We now state our main theorem.
###### Theorem 3.1.
Let $S\subset{{\mathbb{R}}^{n}}$ be a smooth connected hypersurface. Let $D$
be an open connected subset of ${{\mathbb{R}}^{n}}$, and let $M$ be a
submanifold of $\mathrm{Sym}(n)$, possibly of dimension zero. Then,
$R_{M}:\mathcal{E}^{\prime}(D)\to\mathcal{D}^{\prime}(Y_{M})$ is an FIO
satisfying the Bolker condition if $D$ is disjoint from every tangent plane to
$S$. That is,
(3.3) $D\bigcap\left(\bigcup_{\mathbf{s}\in
S}P_{\mathbf{s}}\right)=\emptyset,$
where $P_{\mathbf{s}}$ is the tangent plane to $S$ at $\mathbf{s}\in S$. If
additionally $\mathrm{dim}(M)=0$, then the Bolker condition will fail if any
tangent plane to $S$ intersects $D$.
We should point out that Theorem 3.1 will apply to the Radon transform $R_{M}$
with any smooth weight, not just the weight in (3.2), since the proof uses
only microlocal results and the symbol of $R_{M}$ will still be smooth.
In the proof of Theorem 3.1 and throughout the article, we use the following
notation: $\mathbf{0}_{m\times n}$ is the $m\times n$ zero matrix; and $I_{m}$
is the $m\times m$ identity matrix. If
${\mathbf{x}}=(x_{1},x_{2},\dots,x_{n-1},x_{n})\in{{\mathbb{R}}^{n}}$, then
${\mathbf{x}}^{\prime}=(x_{1},x_{2},\dots,x_{n-1})$.
###### Proof of Theorem 3.1.
Referring to the second line in (3.2), $R_{M}$ will be an FIO provided that
(3.4)
$\Phi(\mathbf{s},A,t;{\mathbf{x}};\sigma)=\sigma\left(t-{\mathbf{x}}_{T}^{T}A{\mathbf{x}}_{T}\right)$
is a nondegenerate phase function. This is true because
$\frac{\partial\Phi}{\partial t}=\sigma\neq 0$ and
$\nabla_{\mathbf{x}}\Phi=-2\sigma(A{\mathbf{x}}_{T})^{T}\neq\mathbf{0}$, as
$D$ is disjoint from $S$ and $A$ is invertible.
Our proof is in two parts. First we consider the case when $S$ is the graph of
a smooth function, and then we use this result locally for the general case.
Indeed, let $\Omega$ be an open connected subset of ${\mathbb{R}^{n-1}}$ and
let $q:\Omega\to{{\mathbb{R}}}$ be a smooth function. Now, let
$S=\left\\{({\mathbf{y}},q({\mathbf{y}}))\hskip 0.85358pt:\hskip
0.85358pt{\mathbf{y}}\in\Omega\right\\}.$
To simplify notation when ${\mathbf{y}}\in\Omega$ is fixed, we will let
$q=q\left(y_{1},\ldots,y_{n-1}\right),\qquad q_{j}=\frac{\partial q}{\partial
y_{j}}$
and so for ${\mathbf{x}}\in\mathbb{R}^{n}$, ${\mathbf{y}}\in\Omega$
${\mathbf{x}}_{T}=(x_{1}-y_{1},x_{2}-y_{2},\dots,x_{n-1}-y_{n-1},x_{n}-q)^{T}.$
Note that, with this notation, ${\mathbf{x}}$ is in the tangent plane
$P_{({\mathbf{y}},q({\mathbf{y}}))}$ if and only if
(3.5) $(\nabla q^{T},-1)\cdot{\mathbf{x}}_{T}=(q_{1},\ q_{2},\ \dots\ ,\
q_{n-1},\ -1)\cdot{\mathbf{x}}_{T}=0.$
When $\mathrm{dim}(M)>0$, a calculation using (3.4) shows that the canonical
relation for $R_{M}$ is
(3.6) $\displaystyle\mathcal{C}_{M}$
$\displaystyle=\big{\\{}\left(({\mathbf{y}},q),A,{\mathbf{x}}_{T}^{T}A{\mathbf{x}}_{T};\nabla_{\mathbf{y}}\Phi\cdot\mathrm{d}{\mathbf{y}}+\nabla_{A}\Phi\cdot\mathrm{d}A+\sigma\mathrm{d}t;{\mathbf{x}};-\nabla_{\mathbf{x}}\Phi\cdot\mathrm{d}{\mathbf{x}}\right)$
$\displaystyle\hskip 170.71652pt\hskip 0.85358pt:\hskip
0.85358pt({\mathbf{y}},A;{\mathbf{x}};\sigma)\in\Omega\times M\times
D\times\dot{\mathbb{R}}\big{\\}}$
The only difference when $\mathrm{dim}(M)=0$ is that the $\mathrm{d}A$ term is
removed. For the remainder of the proof we assume $\mathrm{dim}(M)>0$ but
minor modifications allow the same arguments to work for the case
$\mathrm{dim}(M)=0$.
Note that $({\mathbf{y}},A;{\mathbf{x}};\sigma)\in\Omega\times M\times
D\times\dot{\mathbb{R}}$ provide a global parametrization of $\mathcal{C}_{M}$
since $t$ is determined by ${\mathbf{y}}$, $q({\mathbf{y}})$, ${\mathbf{x}}$
and $A$. The left projection of $R_{M}$ is
(3.7)
$\begin{split}\Pi^{M}_{L}({\mathbf{y}},A;{\mathbf{x}};\sigma)&=\Bigg{(}({\mathbf{y}},q),A,{\mathbf{x}}_{T}^{T}A{\mathbf{x}}_{T};2\sigma{\mathbf{x}}_{T}^{T}AB^{T}\mathrm{d}{\mathbf{y}}-\sigma{\mathbf{x}}_{T}^{T}\mathrm{d}A{\mathbf{x}}_{T}+\sigma\mathrm{d}t\Bigg{)},\end{split}$
where
$B=\left[I_{n-1},\nabla q\right]=\begin{pmatrix}1&0&\cdots&0&q_{1}\\\
0&1&\cdots&0&q_{2}\\\ \vdots&\vdots&\vdots&\vdots&\vdots\\\
0&0&\cdots&1&q_{n-1}\end{pmatrix}.$
Using the natural coordinates on $T^{*}(\Omega\times
M\times\dot{\mathbb{R}})$, the differential of $\Pi^{M}_{L}$ is represented by
(3.8) $D\Pi^{M}_{L}=\begin{pmatrix}I_{(2n-1)}&\mathbf{0}_{(2n-1)\times n}\\\
\cdot&2\sigma BA\\\ \cdot&\cdot\\\ \cdot&2{\mathbf{x}}_{T}^{T}A\end{pmatrix},$
where the columns correspond to the derivatives
$\nabla_{{\mathbf{y}}},\nabla_{A},\frac{\partial}{\partial\sigma}$ and
$\nabla_{\mathbf{x}}$, and the rows to the components
$({\mathbf{y}},A,\mathrm{d}t)$, $\mathrm{d}{\mathbf{y}}$, $\mathrm{d}A$, and
$t$ of $\Pi^{M}_{L}$.
Thus, $\Pi^{M}_{L}$ will be an immersion if the $n\times n$ submatrix
(3.9) $C=\begin{pmatrix}2\sigma BA\\\
2{\mathbf{x}}_{T}^{T}A\end{pmatrix}=\begin{pmatrix}2\sigma B\\\
2{\mathbf{x}}_{T}^{T}\end{pmatrix}A$
is invertible. Note that $CA^{-1}$ is row equivalent to
(3.10) $(CA^{-1})^{\prime}=\begin{pmatrix}I_{n-1}&\nabla q\\\
\mathbf{0}_{1\times(n-1)}&(\nabla
q^{T},-1)\cdot{\mathbf{x}}_{T}\end{pmatrix}.$
Therefore, by (3.5) $D\Pi^{M}_{L}$ is injective if ${\mathbf{x}}$ is not in
the tangent plane to $S$ at $({\mathbf{y}},q)$. If $\mathrm{dim}(M)=0$, then
the rows of (3.8) corresponding to $\mathrm{d}A$ are absent and so
$D\Pi^{M}_{L}$ is injective if and only if ${\mathbf{x}}$ is not in the
tangent plane to $S$ at $({\mathbf{y}},q)$. This proves the final statement of
the theorem about the Bolker condition failing. On the other hand, if no
tangent plane to $S$ intersects $D$ then this shows that $D\Pi^{M}_{L}$ is
always injective which proves that $\Pi^{M}_{L}$ is an immersion in that case.
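The dichotomy in (3.9)–(3.10) is easy to verify numerically. The following sketch (our made-up values, with $n=3$) checks that $C$ is invertible off the tangent plane and singular on it.

```python
import numpy as np

# Check (our illustration) of the invertibility criterion after (3.10):
# C = [[2*sigma*B*A], [2*x_T^T*A]] is singular exactly when
# (grad q^T, -1) . x_T = 0, i.e. when x lies in the tangent plane (3.5).
grad_q = np.array([0.5, -0.25])                  # gradient of q at a fixed y
A = np.diag([1.0, 2.0, 3.0])                     # symmetric, invertible
sigma = 2.0
B = np.hstack([np.eye(2), grad_q[:, None]])      # B = [I_{n-1}, grad q]
w = np.append(grad_q, -1.0)                      # the vector (grad q^T, -1)

def C_matrix(xT):
    return np.vstack([2 * sigma * B @ A, 2 * (A @ xT)[None, :]])

xT = np.array([1.0, 1.0, 1.0])                   # w . xT = -0.75: off the plane
assert abs(np.linalg.det(C_matrix(xT))) > 1.0    # invertible

xT_tan = xT - (w @ xT) / (w @ w) * w             # projected into the plane
assert abs(w @ xT_tan) < 1e-12
assert abs(np.linalg.det(C_matrix(xT_tan))) < 1e-10
```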
Now we will prove the injectivity part of the Bolker condition when $S$ is a
graph assuming that (3.3) holds. Without loss of generality, we assume $D$ is
above every tangent plane to $S$, i.e., if $\mathbf{s}\in S$, ${\mathbf{x}}\in
D$, and $({\mathbf{x}}^{\prime},t)\in P_{\mathbf{s}}$, then $x_{n}>t$. To see
this is possible, one assumes some tangent plane $P_{\mathbf{s}_{1}}$ is below
$D$ and another, $P_{\mathbf{s}_{2}}$, is above. Then, one uses the following
Intermediate Value Theorem argument to show some tangent plane intersects $D$.
Let ${\mathbf{x}}\in D$ and let $\ell=\left\\{({\mathbf{x}}^{\prime},w)\hskip
0.85358pt:\hskip 0.85358ptw\in{{\mathbb{R}}}\right\\}$ be the vertical line
containing ${\mathbf{x}}$. The point of intersection
$P_{\mathbf{s}_{1}}\cap\ell$ is below $D\cap\ell$ and
$P_{\mathbf{s}_{2}}\cap\ell$ is above. Therefore, there is an $\mathbf{s}\in
S$ where $P_{\mathbf{s}}\cap\ell$ is in $D$ since both $S$ and $D$ are
connected and the map from $S$ to the point of intersection of $\ell$ and
$P_{\mathbf{s}}$ is continuous. This assumption implies that
(3.11) $(-q_{1},-q_{2},\dots,-q_{n-1},1)\cdot{\mathbf{x}}_{T}>0$
whenever ${\mathbf{x}}\in D$ and ${\mathbf{y}}\in\Omega$.
Seeking to establish injectivity, let us suppose that
${\mathbf{u}},{\mathbf{v}}\in D$, $A\in M$ and $\sigma\in\dot{\mathbb{R}}$ are
such that
(3.12)
$\Pi^{M}_{L}({\mathbf{y}},A;{\mathbf{u}};\sigma)=\Pi^{M}_{L}({\mathbf{y}},A;{\mathbf{v}};\sigma).$
Then, using (3.7), we see
(3.13) $BA{\mathbf{u}}_{T}=BA{\mathbf{v}}_{T}.$
Note that $\text{Null}\left(BA\right)=\text{span}\left(A^{-1}(-\nabla
q^{T},1)^{T}\right)$ and because of (3.13)
(3.14) ${\mathbf{u}}_{T}={\mathbf{v}}_{T}+sA^{-1}(-\nabla q^{T},1)^{T}$
for some $s\in\mathbb{R}$. On the other hand, by setting the $t$ components in
(3.7) equal we have
(3.15)
${\mathbf{u}}^{T}_{T}A{\mathbf{u}}_{T}={\mathbf{v}}^{T}_{T}A{\mathbf{v}}_{T}.$
By taking the inner product of (3.14) with $A{\mathbf{u}}_{T}$ and using
(3.15), we see that
(3.16) $-s{\mathbf{v}}_{T}\cdot(-\nabla
q^{T},1)^{T}=s{\mathbf{u}}_{T}\cdot(-\nabla q^{T},1)^{T}.$
Therefore, either $s=0$ and ${\mathbf{u}}={\mathbf{v}}$, or ${\mathbf{v}}$ and
${\mathbf{u}}$ are on opposite sides of the tangent plane
$P_{({\mathbf{y}},q)}$ to $S$ at $({\mathbf{y}},q)$. This second case is not
allowed since $D$ is above every tangent plane to $S$.
Now we consider the general case when $S$ is a smooth connected hypersurface,
not necessarily a graph, and $D$ an open connected set.
For the last statement of the theorem concerning when the Bolker condition
fails, if there is a point in $D$ which is in a tangent plane
$P_{\mathbf{s}_{0}}$ to $S$, then after translating and rotating $S$ will be
represented by a graph in a neighborhood of $\mathbf{s}_{0}$ and the argument
after (3.10) shows the immersion part of the Bolker condition will fail if
$\mathrm{dim}(M)=0$.
With this case handled, we now assume that no tangent plane to $S$ intersects
$D$ and suppose $\mathbf{s}_{0}\in S$. We will show that $\Pi_{L}^{M}$ is an
injective immersion locally above $S$ near $\mathbf{s}_{0}$ (i.e., on the
canonical relation $\mathcal{C}_{M}$ and above points $(\mathbf{s},A,t)\in
Y_{M}$ for $\mathbf{s}$ in a neighborhood in $S$ of $\mathbf{s}_{0}$). We do
this by reducing the problem to the case we just considered, when the
hypersurface $S$ is a graph.
This will show $R_{M}:\mathcal{E}^{\prime}(D)\to\mathcal{D}^{\prime}(Y_{M})$
satisfies the Bolker condition globally for the following reasons. First,
being an immersion is a local condition. To check injectivity, note that if
$\Pi_{L}^{M}(\nu_{0})=(\mathbf{s}_{0},A_{0},t_{0};\eta_{0})=\Pi_{L}^{M}(\nu_{1})$
then, since the basepoints of the image are the same, to show
$\nu_{0}=\nu_{1}$, one just needs to know $\Pi_{L}^{M}$ is injective on
$\left(\Pi_{L}^{M}\right)^{-1}\left\\{(\mathbf{s}_{0},A_{0},t_{0};\eta_{0})\right\\}$.
Using a translation $T$ of ${{\mathbb{R}}^{n}}$ followed by a rotation $R$ we
map $\mathbf{s}_{0}$ into $\mathbf{0}\in{{\mathbb{R}}^{n}}$ and $S$ into a
connected submanifold $S^{\prime}$ such that the hyperplane
$P_{\mathbf{0}}=\left\\{{\mathbf{x}}\in{{\mathbb{R}}^{n}}\hskip
0.85358pt:\hskip 0.85358ptx_{n}=0\right\\}$ is tangent to $S^{\prime}$ at
$\mathbf{s}=\mathbf{0}$. We let $D^{\prime}$ be the image of $D$ under this
rigid motion, $RT$. Without loss of generality, we assume that $D^{\prime}$ is
above $P_{\mathbf{0}}$. The rotation $R$ also conjugates $M$ to another
embedded manifold in $\mathrm{Sym}(n)$, which we denote by $M^{\prime}$.
Let $\Omega$ be an open connected neighborhood of
$\mathbf{0}\in{\mathbb{R}^{n-1}}$ that is so small that there is a smooth
function $q:\Omega\to[0,\infty)$ such that
$\Omega\ni{\mathbf{y}}\mapsto({\mathbf{y}},q({\mathbf{y}}))$
give local coordinates on $S^{\prime}$ near $\mathbf{0}$. Let
$S^{\prime}_{0}=\left\\{({\mathbf{y}},q({\mathbf{y}}))\hskip 0.85358pt:\hskip
0.85358pt{\mathbf{y}}\in\Omega\right\\}.$
Since $D^{\prime}$ is above $P_{\mathbf{0}}$, it is above every tangent plane
to $S^{\prime}_{0}$ by the Intermediate Value Theorem argument given earlier.
Let $Y^{\prime}_{0}=\left\\{({\mathbf{y}},q({\mathbf{y}})),A,t)\hskip
0.85358pt:\hskip 0.85358pt{\mathbf{y}}\in\Omega,A\in
M^{\prime},t\in\dot{\mathbb{R}}\right\\}$. Since $D^{\prime}$ is above every
tangent plane to $S^{\prime}_{0}$, the first part of this proof implies that
$R_{M^{\prime}}:\mathcal{E}^{\prime}(D^{\prime})\to\mathcal{D}^{\prime}(Y^{\prime}_{0})$
satisfies the Bolker condition. Let $S_{0}$ be the image of $S^{\prime}_{0}$
under $(RT)^{-1}$ and let $Y_{0}=S_{0}\times M\times\dot{\mathbb{R}}$. This
proof implies that
$R_{M}:\mathcal{E}^{\prime}(D)\to\mathcal{D}^{\prime}(Y_{0})$ satisfies the
Bolker condition. Since the Bolker condition is local above $Y_{M}$ and these
coordinate patches cover $S$, the Bolker condition holds for
$R_{M}:\mathcal{E}^{\prime}(D)\to\mathcal{D}^{\prime}(Y_{M})$. This finishes
the proof.∎
### 3.1. Visible singularities and artifacts
The normal operator for $R_{M}$ is $\mathcal{N}=R_{M}^{*}\varphi R_{M}$ where
$\varphi$ is a cutoff on $Y_{M}$ that guarantees one can compose these
operators (sometimes $R_{M}^{*}$ is replaced by a weighted dual operator, and
if $R_{M}$ is a proper map, the cutoff $\varphi$ is not needed).
_Visible singularities of $f$_ are those that are singularities of
$\mathcal{N}f$, i.e., singularities in
$\mathrm{WF}(\mathcal{N}f)\cap\mathrm{WF}(f)$. Other singularities of $f$ are
called _invisible singularities_. _Artifacts_ are singularities of
$\mathcal{N}f$ that are not in $f$, i.e., singularities in
$\mathrm{WF}(\mathcal{N}f)\setminus\mathrm{WF}(f)$.
To understand visible singularities, note that the set of visible
singularities of $f$ is contained in $\Pi^{M}_{R}(\mathcal{C}_{M})$. This is
true because
(3.17)
$\mathrm{WF}(\mathcal{N}f)\subset\mathcal{C}_{M}^{t}\circ\mathcal{C}_{M}\circ\mathrm{WF}(f)=\Pi_{R}^{M}\circ\left(\Pi_{L}^{M}\right)^{-1}\circ\Pi_{L}^{M}\circ\left(\Pi_{R}^{M}\right)^{-1}(\mathrm{WF}(f)),$
by the Hörmander-Sato Lemma [18, Theorem 8.2.13], and so the only
singularities that can come from this composition are those in
$\Pi_{R}^{M}(\mathcal{C}_{M})$.
Now, we consider visible singularities for the spherical transform, so
$M=\left\\{I_{n}\right\\}$. We claim that more singularities are visible the
more $S$ “wraps around” the scanning region. To see this, one first observes
that a singularity $({\mathbf{x}},\xi)\in\mathrm{WF}(f)$ is detected by
spherical integrals only if $\\{{\mathbf{x}}+\alpha\xi\ :\
\alpha\in\mathbb{R}\\}\cap S\neq\emptyset$. This follows by a calculation of
$\mathcal{C}_{M}$ for this case (either use the expression for
$\mathcal{C}_{M}$ in (3.6) for the spherical case and then find the image
$\Pi_{R}^{M}(\mathcal{C}_{M})$, or see, e.g., the discussion of visible
singularities around (4.6) and (4.13) in [12]). Therefore, if $S$ is as in
Figure 3(b), then $R_{M}f$ detects all singularities in $D$ since $S$
surrounds $D$. If $S$ is as in Figure 3(c), then singularities in a
horizontal direction at points above $S$ are invisible since no horizontal
line through such points intersects $S$.
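The line-intersection criterion above is straightforward to implement. Below is a sketch (our illustration, not part of the paper's analysis) that tests whether the line $\\{{\mathbf{x}}+\alpha\xi\\}$ meets a center set, here the unit cylinder of Section 4.

```python
import numpy as np

# Sketch (our illustration) of the detectability test: a singularity (x, xi)
# can be visible only if the line {x + alpha*xi} meets the center set S.  Here
# S is the unit cylinder x1^2 + x3^2 = 1 used in Section 4.
def line_hits_cylinder(x, xi):
    # Project to the (x1, x3) plane and solve |p + alpha*v|^2 = 1 for alpha.
    p = np.array([x[0], x[2]])
    v = np.array([xi[0], xi[2]])
    a, b, c = v @ v, 2 * p @ v, p @ p - 1.0
    if a == 0.0:                         # xi parallel to the cylinder axis
        return bool(c == 0.0)
    return bool(b * b - 4 * a * c >= 0)  # a real root alpha exists

x = np.array([0.2, 5.0, 0.1])                                # inside the cylinder
assert line_hits_cylinder(x, np.array([1.0, 0.0, 0.0]))      # radial: detectable
assert not line_hits_cylinder(x, np.array([0.0, 1.0, 0.0]))  # axial: invisible
```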
We can also use the proof of Theorem 3.1 to understand how one can get
artifacts in reconstructions using $\mathcal{N}$ if $S$ is not globally convex
or $D$ is not on the convex side of $S$. For simplicity, we consider the case
when $S$ is the graph of a function, $q:\Omega\to{{\mathbb{R}}}$ where
$\Omega$ is an open subset of ${\mathbb{R}^{n-1}}$.
The injectivity part of the Bolker condition fails if points below and above
$S$ are in the domain $D$ or if $S$ is not globally convex. Let
$({\mathbf{y}},A,t)\in Y_{M}$ and let
$E=E({\mathbf{y}},A,t)=\left\\{{\mathbf{x}}\hskip 0.85358pt:\hskip
0.85358pt{\mathbf{x}}_{T}^{T}A{\mathbf{x}}_{T}=t\right\\}.$
Then $E$ is the manifold of integration for $R_{M}$ at $({\mathbf{y}},A,t)$.
Let ${\mathbf{x}}\in E$ and assume $\xi$ is conormal to $E$ at ${\mathbf{x}}$.
Then $\tau=({\mathbf{x}},\xi)\in\Pi_{R}^{M}(\mathcal{C}_{M})$ by (3.6) and
$\nu=({\mathbf{y}},A,t,\eta)\in\mathcal{C}_{M}\circ\left\\{\tau\right\\}$ for
some $\eta\in T^{*}_{({\mathbf{y}},A,t)}(Y_{M})$.
By the injectivity calculation for $\Pi_{L}^{M}$, there is a second preimage
of $\nu$, and its basepoint is calculated using equations (3.14), (3.15), and
(3.16). This “mirror point” ${\mathbf{x}}_{m}$ to ${\mathbf{x}}$ is the other
point in the intersection of $E$ with the line through ${\mathbf{x}}$ parallel
to $A^{-1}(-\nabla q^{T},1)^{T}$; by (3.16), it lies on the other side of the
tangent plane to $S$ at $({\mathbf{y}},q)$.
Note that ${\mathbf{x}}_{m}$ is the basepoint of a second preimage,
$\tau^{\prime}=({\mathbf{x}}_{m},\xi^{\prime})$, of $\nu$ under composition
with $\mathcal{C}_{M}$, i.e.,
$\nu\in\mathcal{C}_{M}\circ\left\\{\tau\right\\}$ and
$\nu\in\mathcal{C}_{M}\circ\left\\{\tau^{\prime}\right\\}$ by (3.12).
If ${\mathbf{x}}$ is on the tangent plane to $S$ at $({\mathbf{y}},q)$, then
${\mathbf{x}}_{m}={\mathbf{x}}$ (see (3.16)), $\Pi_{L}^{M}$ is not an
immersion at $(\nu,\tau)$, and the immersion part of the Bolker condition
breaks down here.
This explains why artifacts can occur if $S$ is not globally convex or if $D$
is on both sides of $S$. Given any point ${\mathbf{x}}\in
E({\mathbf{y}},A,t)$, there is a mirror point ${\mathbf{x}}_{m}$ on the other
side of the tangent plane to $S$ at $({\mathbf{y}},q)$ such that both
covectors introduced in this section, $\tau$ and $\tau^{\prime}$, are in
$\mathcal{C}_{M}^{t}\circ\left\\{\nu\right\\}$. Therefore, if
$\tau\in\mathrm{WF}(f)$, by (3.17), $\tau^{\prime}$ can be an added
singularity in $\mathrm{WF}(\mathcal{N}f)$, and there can be an artifact at
${\mathbf{x}}_{m}$.
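The mirror-point construction can be checked numerically. The sketch below (our made-up data, not from the paper) computes the mirror of a point from (3.14)–(3.16) and verifies that both points produce the same $\Pi_{L}^{M}$ data while lying on opposite sides of the tangent plane.

```python
import numpy as np

# Numerical illustration (our made-up data) of the mirror point: starting from
# x_T = vT, the mirror u_T lies on the line through vT with direction
# A^{-1}(-grad q^T, 1)^T and on the same level set u_T^T A u_T = v_T^T A v_T.
grad_q = np.array([0.3, -0.2])
A = np.diag([1.0, 2.0, 1.5])                     # symmetric positive definite
w = np.append(-grad_q, 1.0)                      # the vector (-grad q^T, 1)
d = np.linalg.solve(A, w)                        # line direction A^{-1} w

vT = np.array([0.4, -1.0, 2.0])
# u_T = v_T + s*d satisfies (3.15) iff s*(2*w.vT + s*w.d) = 0, so:
s_mirror = -2 * (w @ vT) / (w @ d)
uT = vT + s_mirror * d

B = np.hstack([np.eye(2), grad_q[:, None]])
assert np.allclose(B @ A @ uT, B @ A @ vT)       # same dy-data, cf. (3.13)
assert abs(uT @ A @ uT - vT @ A @ vT) < 1e-10    # same t, cf. (3.15)
assert abs(w @ uT + w @ vT) < 1e-12              # opposite sides, cf. (3.16)
```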
### 3.2. Examples
In this section, we apply Theorem 3.1 to several interesting special cases.
###### Corollary 3.2.
Let $C$ be an open convex set with a smooth boundary, $S$, and let $M$ be a
submanifold of $\mathrm{Sym}(n)$, possibly of dimension zero. Then,
$R_{M}:\mathcal{E}^{\prime}(C)\to\mathcal{D}^{\prime}(Y_{M})$ is an FIO
satisfying the Bolker condition.
The corollary follows because condition (3.3) in Theorem 3.1 holds as $C$ is
convex and $S$ is smooth.
###### Example 3.3 ($S$ with gradient zero along an axis).
In this example, we consider measurement surfaces $S$ which are flat along one
directional axis, and the special case when the integral surfaces are
ellipsoids of revolution (or spheroids). Without loss of generality, in this
example, $S$ is assumed to be flat in the $x_{n-1}$ direction. In ultrasound
reflection tomography (URT), the integration surfaces are spheroids, and the
foci of the spheroids represent the sound wave emitter/receiver positions [2].
If we were to construct a measurement surface in URT, which is flat in the
$x_{n-1}$ direction, such that there is an emitter/receiver at every point on
$S$ (i.e., we have an $(n-1)$-D surface of emitters), then the URT data can be
modeled by $Rf$, where $f$, in URT, denotes the acoustic reflectivity
function. Specifically, we set $A=\text{diag}(1,\ldots,1,r_{n-1},1)$ in
equation (3.1), with $r_{n-1}\in(0,1]$, and constrain $S$ to have gradient
zero in the $x_{n-1}$ direction. Then the defining function, $\Psi$ (see
section 3, equation (3.1)), describes a spheroid surface with foci on $S$.
Thus, Theorem 3.1 has direct applications to measurement surfaces in URT.
(a) cylindrical $S$
(b) parabolic $S$.
Figure 1. Example $S$ in $\mathbb{R}^{3}$ with practical application to URT.
The surfaces above are flat (gradient zero) in the $x_{2}$ direction and are
globally convex.
In Figure 1, we have illustrated some example $S\subset\mathbb{R}^{3}$, which
are flat in the $x_{2}$ direction, and for which the Bolker condition holds.
In Figure 1(a), the function support lies in the cylinder interior, and in
Figure 1(b), the function support is assumed to be contained in
$\\{x_{3}>x_{1}^{2}\\}$.
###### Example 3.4 (Centers on a hyperplane).
Integral transforms over spheres or ellipsoids centered on a plane have been
studied for application to radar [32, 5, 22, 6], sonar [4, 21], seismic [14],
and ultrasound imaging [2, 13]. Theorem 3.1 holds if $S$ is a hyperplane and
$D$ is on one side of $S$. Note that this does not cover the common-offset
geometry, where the integrals are taken over ellipsoids centered on a plane
with foci a fixed distance apart, nor the common-midpoint geometry, where the
integrals are taken over ellipsoids with foci symmetric about a line. These
cases, considered in [11], involve ellipsoids whose foci are a fixed distance
apart and oriented for each ${\mathbf{y}}\in S$, whereas the foci of our
ellipsoids change with $t$.
(a) ellipsoid centered on flat plane
(b) two-sheeted hyperboloid centered on flat plane
Figure 2. Flat plane measurement surface. Left - ellipsoid integral surface.
Right - two-sheeted hyperboloid surface.
###### Example 3.5 (Centers on a spheroid, exponential, and sinusoid
surface).
In this example, we discuss additional example measurement surfaces in cases
when the Bolker condition is satisfied, and others when Bolker is not
satisfied. Specifically, we consider the spheroid and exponential surfaces
illustrated in Figures 3(a) and 3(b). In Figure 3(a), the function support is
assumed to be contained within the spheroid interior, and in Figure 3(b),
$S=\\{{\mathbf{x}}\in\mathbb{R}^{3}:x_{3}-e^{x_{1}^{2}+x_{2}^{2}}=0\\}$, and
$\text{supp}(f)\subset\\{x_{3}>e^{x_{1}^{2}+x_{2}^{2}}\\}$. In both cases, no
plane tangent to $S$ intersects $\text{supp}(f)$. Therefore, Theorem 3.1
holds.
In Figure 3(c), we give an example “sinusoidal” measurement surface, defined
by $S=\\{{\mathbf{x}}\in\mathbb{R}^{3}:x_{3}-\sin x_{1}+\sin x_{2}=0\\}$, with
$\text{supp}(f)\subset\\{{\mathbf{x}}\in\mathbb{R}^{3}:x_{3}-\sin x_{1}+\sin
x_{2}>0\\}$. In this case, there exist planes tangent to $S$ which intersect
$\text{supp}(f)$, and thus, if $\text{dim}(M)=0$, the Bolker condition is not
satisfied by Theorem 3.1, and we would expect to see mirror-point type
artifacts through planes tangent to $S$, as described in section 3.1.
(a) spheroid
(b) exponential
(c) sinusoid
Figure 3. Two-sheeted hyperboloid and ellipsoid surfaces of integration
centered on convex and non-convex scanning surfaces. In (3(c)), tangent planes
to $S$ intersect $\text{supp}(f)$, so Bolker is not satisfied (see section
3.1).
## 4\. Cylindrical measurement surface in $\mathbb{R}^{3}$
In this section, we investigate in more detail the cylindrical scanning
surface introduced in Example 3.3 and Figure 1(a). Specifically, we show that
any $L^{2}$ function, with compact support on a cylinder interior, can be
recovered uniquely from its integrals over spheroids with foci on a cylinder.
We also discuss in more detail the visible singularities and how the stability
varies with the discretization of emitters/receivers on the cylinder surface.
Our center set will be the cylinder $C$ of radius one with axis parallel to
the second coordinate axis, and we will consider spheroids with rotation axis
on $C$, centers
(4.1)
$\mathbf{s}=\mathbf{s}(\phi_{0},y_{0})=(\cos\phi_{0},y_{0},\sin\phi_{0})^{T}\in
C,$
and fixed aspect ratio $s\in(0,1]$. This gives the Radon transform
(4.2)
$R_{s}f(p,\phi_{0},y_{0})=Rf\left(\mathbf{s}(\phi_{0},y_{0}),\text{diag}(1,s^{2},1),p^{2}\right),$
where $t$ is replaced by $p^{2}=t$ in (3.1).
In the following subsections, we address the injectivity and microlocal
stability properties of $R_{s}$.
###### Remark 4.1.
Injectivity results are proven in [17], for a class of generalized Radon
transforms for real-analytic submanifolds in a compact real-analytic manifold
with boundary. This important work does not imply injectivity for $R_{s}$ for
reasons which we now explain. The key point is that the spheroids $R_{s}$
integrates over are not parameterized as in [17]. After a reduction, the
authors in [17] parameterize their manifolds of integration by
$(s,\theta)\in{{\mathbb{R}}}\times S^{n-1}$ [17, p. 1518], but our spheroids
are parameterized by $(p,\mathbf{s})\in\dot{{{\mathbb{R}}}}\times C$. The
cylinder $C$ is neither compact nor simply connected, while $S^{n-1}$ is both.
Thus, the theory of [17] does not apply to $R_{s}$. To prove injectivity for
$R_{s}$, we apply linear Volterra equation theory. We also provide an
inversion method based on Neumann series.
### 4.1. Injectivity
We first introduce notation we will use in the proofs and define the auxiliary
variables
(4.3)
$\hat{x}_{1}=\hat{x}_{1}(\phi_{0},x_{1},x_{3})=\sqrt{(x_{1}-\cos\phi_{0})^{2}+(x_{3}-\sin\phi_{0})^{2}}\qquad\hat{x}_{2}=\hat{x}_{2}(x_{2},y_{0})=x_{2}-y_{0},$
Then $\hat{x}_{1}$ represents the radius of the circle in Figure 4. In this
notation, we can describe the spheroid defined by the matrix
$A=\text{diag}(1,{s^{2}},1)$, center $\mathbf{s}(\phi_{0},y_{0})$, and
parameter $p$ as
(4.4) $\hat{x}_{1}^{2}+s^{2}\hat{x}_{2}^{2}=p^{2}.$
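As a consistency check (ours, with arbitrary sample values), the reduction from the defining function (3.1) to (4.4) can be verified numerically for $A=\text{diag}(1,s^{2},1)$:

```python
import numpy as np

# Consistency check (ours): with A = diag(1, s^2, 1), center
# s0 = (cos phi0, y0, sin phi0) and t = p^2, the defining function (3.1)
# reduces to p^2 - (x1hat^2 + s^2 * x2hat^2), i.e. Psi = 0 is exactly (4.4).
s_asp, phi0, y0, p = 0.7, 0.9, -1.3, 2.0         # aspect ratio s and center data
A = np.diag([1.0, s_asp**2, 1.0])
s0 = np.array([np.cos(phi0), y0, np.sin(phi0)])

x = np.array([0.3, -0.8, 0.5])                   # an arbitrary test point
xT = x - s0
x1hat = np.hypot(x[0] - np.cos(phi0), x[2] - np.sin(phi0))
x2hat = x[1] - y0

Psi = p**2 - xT @ A @ xT
assert abs(Psi - (p**2 - (x1hat**2 + s_asp**2 * x2hat**2))) < 1e-12
```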
We use standard cylindrical coordinates for points inside $C$:
${\mathbf{x}}=(x_{1},x_{2},x_{3})=(r\cos\phi,x_{2},r\sin\phi),\ \
\text{where}\ \ r>0,\ \ \phi\in[0,2\pi].$
In this notation the polar radius for ${\mathbf{x}}$ is
(4.5) $r=\sqrt{\hat{x}_{1}^{2}+1-2\hat{x}_{1}\cos\theta},\ \ \
\hat{\phi}=\phi-\phi_{0},\ \ \
\frac{\hat{x}_{1}}{\sin\hat{\phi}}=\frac{r}{\sin\theta},$
where $\hat{\phi}$ is the angle between $(x_{1},x_{3})$ and
$(\cos\phi_{0},\sin\phi_{0})$. See Figure 4. Note that Figure 4 shows the
picture for $\phi_{0}=0$ and, in general, the picture is rotated and
$\hat{\phi}$ is measured from the vector $(\cos\phi_{0},\sin\phi_{0})$.
We use Figures 4 and 5 in the proofs to explain the geometry behind our
integrals. They show two cross-sections of the spheroid (4.4) with center
$\mathbf{s}(0,y_{0})=(1,y_{0},0)$: first with a plane perpendicular to the
$x_{2}$ axis in Figure 4 and second with a plane containing the axis of the
cylinder and $\mathbf{s}$ in Figure 5.
Figure 4. A cross section of the spheroid with center $(1,0,0)=\mathbf{s}(0,0)$ perpendicular to the spheroid axis at $\hat{x}_{2}=x_{2}$. The axes are $(x_{1},x_{3})$ because $\phi_{0}=0$, but the picture would be rotated for general $\phi_{0}$, and then the angle $\hat{\phi}$ would be measured from the ray containing $(\cos\phi_{0},\sin\phi_{0})$. The cylinder has unit radius.
Figure 5. Cylindrical geometry: $(\hat{x}_{1},\hat{x}_{2})$-plane cross section. The $(x_{1},x_{3})$ plane of Figure 4 is drawn as a dashed line.
The following proposition is the first step in writing $R_{s}f$ in terms of
Volterra equations for the Fourier coefficients of $f$.
###### Proposition 4.2.
Let $f\in L^{2}_{c}(\mathbb{R}^{3})$. Then
(4.6)
$R_{s}f(p,\phi_{0},y_{0})=\int_{-\frac{p}{s}}^{\frac{p}{s}}\sqrt{p^{2}-s^{2}\hat{x}_{2}^{2}+s^{4}\hat{x}_{2}^{2}}\int_{-\pi}^{\pi}f\left(r,\sin^{-1}\left(\frac{\hat{x}_{1}}{r}\sin\theta\right)+\phi_{0},\hat{x}_{2}+y_{0}\right)\mathrm{d}\theta\mathrm{d}\hat{x}_{2}.$
###### Proof.
Let
$\mathrm{d}l=\sqrt{1+\left(\frac{\mathrm{d}\hat{x}_{1}}{\mathrm{d}\hat{x}_{2}}\right)^{2}}\mathrm{d}\hat{x}_{2}$
be the arc measure on the ellipse in Figure 5. Then the surface element on the
spheroid of revolution is
(4.7)
$\mathrm{d}A=\hat{x}_{1}\mathrm{d}l\mathrm{d}\theta=\sqrt{p^{2}-s^{2}\hat{x}_{2}^{2}+s^{4}\hat{x}_{2}^{2}}\mathrm{d}\theta\mathrm{d}\hat{x}_{2}.$
Thus, using equations (4.3) and (4.5) to rewrite $\phi=\hat{\phi}+\phi_{0}$ in
terms of $\theta$, it follows that
(4.8)
$R_{s}f(p,\phi_{0},y_{0})=\int_{-\frac{p}{s}}^{\frac{p}{s}}\sqrt{p^{2}-s^{2}\hat{{x}}_{2}^{2}+s^{4}\hat{{x}}_{2}^{2}}\int_{-\pi}^{\pi}f\left(r,\sin^{-1}\left(\frac{\hat{x}_{1}}{r}\sin\theta\right)+\phi_{0},\hat{{x}}_{2}+y_{0}\right)\mathrm{d}\theta\mathrm{d}\hat{{x}}_{2},$
which completes the proof.∎
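A simple numerical sanity check of the surface element (4.7): for $s=1$ the spheroid is the sphere of radius $p$, whose area is $4\pi p^{2}$. The sketch below (function name and quadrature are ours) integrates $\mathrm{d}A$ by the midpoint rule:

```python
import math

def spheroid_area(p, s, n=100_000):
    """Numerically integrate the surface element (4.7),
    dA = sqrt(p^2 - s^2*x2^2 + s^4*x2^2) dtheta dx2,
    over theta in [-pi, pi] and x2 in [-p/s, p/s]."""
    h = (2 * p / s) / n
    total = 0.0
    for i in range(n):
        x2 = -p / s + (i + 0.5) * h          # midpoint rule in x2
        total += math.sqrt(p**2 - s**2 * x2**2 + s**4 * x2**2) * h
    return 2 * math.pi * total               # theta integral contributes 2*pi

# For s = 1 the integrand is constantly p, so the result is 4*pi*p^2.
print(spheroid_area(2.0, 1.0))  # 16*pi, about 50.2655
```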
We now have our main injectivity result.
###### Theorem 4.3.
For any fixed $s\in(0,1]$, $R_{s}$ is injective on the domain $L^{2}_{c}(C)$,
where $C$ is the open unit cylinder.
###### Remark 4.4.
The following proof uses intuitions similar to those of [3], which treats
circular Radon transforms. We extend these ideas to three dimensions and to
spheroid surfaces.
###### Proof.
Let $\epsilon>0$. We first prove the theorem if $f\in
L^{2}_{c}(C_{\epsilon})$, where
(4.9) $C_{\epsilon}=\left\\{{\mathbf{x}}\in\mathbb{R}^{3}:\sqrt{x_{1}^{2}+x_{3}^{2}}<1-\epsilon\right\\}.$
Let $f\in L^{2}_{c}(C_{\epsilon})$. Taking the Fourier transform in $y_{0}$ on
both sides of (4.8) yields
(4.10) $\begin{split}&\widehat{R_{s}f}(p,\phi_{0},\eta)=\\\
&2\int_{0}^{\frac{p}{s}}\cos(\eta\hat{x}_{2})\sqrt{p^{2}-s^{2}\hat{x}_{2}^{2}+s^{4}\hat{x}_{2}^{2}}\int_{-\pi}^{\pi}\hat{f}\left(r,\sin^{-1}\left(\frac{\hat{x}_{1}}{r}\sin\theta\right)+\phi_{0},\eta\right)\mathrm{d}\theta\mathrm{d}\hat{x}_{2},\end{split}$
where $\eta$ is dual to $y_{0}$. We now calculate the Fourier components in
$\phi_{0}$ on both sides of (4.10), where
$\hat{f}_{n}\left(r,\eta\right)=\frac{1}{2\pi}\int_{0}^{2\pi}\hat{f}(r,\phi,\eta)e^{-in\phi}\mathrm{d}\phi$:
(4.11)
$\widehat{R_{s}f}_{n}(p,\eta)=4\int_{0}^{\frac{p}{s}}\cos(\eta\hat{x}_{2})\sqrt{p^{2}-s^{2}\hat{x}_{2}^{2}+s^{4}\hat{x}_{2}^{2}}\int_{0}^{\pi}\cos(n\hat{\phi})\hat{f}_{n}\left(r,\eta\right)\mathrm{d}\theta\mathrm{d}\hat{x}_{2}.$
Note that $r$ and $\hat{\phi}$ depend on $\theta$ and $\hat{x}_{1}$ as given
in (4.5), with $r$ even as a function of $\theta$, and $\hat{\phi}$ odd as a
function of $\theta$.
Making the change of variables $\hat{x}_{1}=\sqrt{p^{2}-s^{2}\hat{x}_{2}^{2}}$
in the $\hat{x}_{2}$ integral yields
(4.12) $\begin{split}&\widehat{R_{s}f}_{n}(p,\eta)=\\\
&\frac{4}{s}\int_{0}^{p}\frac{\hat{x}_{1}\sqrt{\hat{x}_{1}^{2}+s^{2}(p^{2}-\hat{x}_{1}^{2})}}{\sqrt{p^{2}-\hat{x}_{1}^{2}}}\cos\left(\frac{\eta}{s}\sqrt{p^{2}-\hat{x}_{1}^{2}}\right)\int_{0}^{\pi}\cos(n\hat{\phi})\hat{f}_{n}\left(r,\eta\right)\mathrm{d}\theta\mathrm{d}\hat{x}_{1}.\end{split}$
Let us now do a change of variables in the $\theta$ integral. Using Figure 4
and (4.5), we see $r=\sqrt{\hat{x}_{1}^{2}+1-2\hat{x}_{1}\cos\theta}$, and
$\sin\hat{\phi}=\frac{\hat{x}_{1}}{r}\sin\theta$. We have
$\frac{\mathrm{d}r}{\mathrm{d}\theta}=\frac{\hat{x}_{1}}{r}\sin\theta=\sin\hat{\phi},\
\ \ \cos\hat{\phi}=\frac{r^{2}+1-\hat{x}_{1}^{2}}{2r}.$
Then
(4.13) $\begin{split}&\widehat{R_{s}f}_{n}(p,\eta)=\\\
&\frac{4}{s}\int_{0}^{p}\frac{\hat{x}_{1}\sqrt{\hat{x}_{1}^{2}+s^{2}(p^{2}-\hat{x}_{1}^{2})}}{\sqrt{p^{2}-\hat{x}_{1}^{2}}}\cos\left(\frac{\eta}{s}\sqrt{p^{2}-\hat{x}_{1}^{2}}\right)\int_{1-\hat{x}_{1}}^{1-\epsilon}\frac{T_{|n|}\left(\frac{r^{2}+1-\hat{x}_{1}^{2}}{2r}\right)}{\sqrt{1-\left(\frac{r^{2}+1-\hat{x}_{1}^{2}}{2r}\right)^{2}}}\hat{f}_{n}\left(r,\eta\right)\mathrm{d}r\mathrm{d}\hat{x}_{1}\end{split}$
where $T_{|n|}$ is the Chebyshev polynomial of the first kind of degree $|n|$, $n\in\mathbb{Z}$; here we used the identity $\cos(n\hat{\phi})=T_{|n|}(\cos\hat{\phi})$ together with the change of variables $\mathrm{d}r=\sin\hat{\phi}\,\mathrm{d}\theta$.
Because $f$ is supported in $C_{\epsilon}$, the upper limit in the inner
integral in (4.13) can be $r=1-\epsilon$, rather than $r=1+\hat{x}_{1}$. Now,
using Fubini’s theorem, we see
(4.14) $\begin{split}&\widehat{R_{s}f}_{n}(p,\eta)=\\\
&\frac{4}{s}\int_{1-p}^{1-\epsilon}\int_{1-r}^{p}\frac{\hat{x}_{1}\sqrt{\hat{x}_{1}^{2}+s^{2}(p^{2}-\hat{x}_{1}^{2})}\cos\left(\frac{\eta}{s}\sqrt{p^{2}-\hat{x}_{1}^{2}}\right)T_{|n|}\left(\frac{r^{2}+1-\hat{x}_{1}^{2}}{2r}\right)}{\sqrt{p^{2}-\hat{x}_{1}^{2}}\sqrt{1-\left(\frac{r^{2}+1-\hat{x}_{1}^{2}}{2r}\right)^{2}}}\hat{f}_{n}\left(r,\eta\right)\mathrm{d}\hat{x}_{1}\mathrm{d}r.\end{split}$
Substituting $u=1-r$ yields
(4.15)
$\begin{split}\widehat{R_{s}f}_{n}(p,\eta)&=\frac{4}{s}\int_{0}^{p}K_{n}(\eta;p,u)\tilde{\hat{f}}_{n}\left(u,\eta\right)\mathrm{d}u,\end{split}$
a Volterra equation of the first kind, where
(4.16)
$K_{n}(\eta;p,u)=\int_{u}^{p}\frac{\hat{x}_{1}\sqrt{\hat{x}_{1}^{2}+s^{2}(p^{2}-\hat{x}_{1}^{2})}\cos\left(\frac{\eta}{s}\sqrt{p^{2}-\hat{x}_{1}^{2}}\right)T_{|n|}\left(\frac{(1-u)^{2}+1-\hat{x}_{1}^{2}}{2(1-u)}\right)}{\sqrt{p^{2}-\hat{x}_{1}^{2}}\sqrt{1-\left(\frac{(1-u)^{2}+1-\hat{x}_{1}^{2}}{2(1-u)}\right)^{2}}}\mathrm{d}\hat{x}_{1},$
and $\tilde{\hat{f}}_{n}\left(u,\eta\right)=\hat{f}_{n}\left(1-u,\eta\right)$.
To show injectivity, we let $\epsilon_{1}\in(0,1/2)$ and first bound the
kernel $K_{n}$ and its derivative for each fixed $\eta$ on the set
(4.17) $D_{\epsilon_{1}}=\left\\{(p,u):p\in[\epsilon,1-\epsilon_{1}],\ u\in[\epsilon,p]\right\\}.$
We have
(4.18)
$\begin{split}\sqrt{1-\left(\frac{(1-u)^{2}+1-\hat{x}_{1}^{2}}{2(1-u)}\right)^{2}}&=\sqrt{1-\left(1+\frac{u^{2}-\hat{x}_{1}^{2}}{2(1-u)}\right)^{2}}\\\
&=\sqrt{\frac{\hat{x}_{1}^{2}-u^{2}}{1-u}+\frac{(\hat{x}_{1}^{2}-u^{2})(u^{2}-\hat{x}_{1}^{2})}{4(1-u)^{2}}}\\\
&=\frac{\sqrt{\hat{x}_{1}^{2}-u^{2}}}{\sqrt{1-u}}\sqrt{1+\frac{u^{2}-\hat{x}_{1}^{2}}{4(1-u)}}.\end{split}$
Thus
(4.19)
$\begin{split}&K_{n}(\eta;p,u)=\sqrt{1-u}\int_{u}^{p}\frac{\hat{x}_{1}\sqrt{\hat{x}_{1}^{2}+s^{2}(p^{2}-\hat{x}_{1}^{2})}\cos\left(\frac{\eta}{s}\sqrt{p^{2}-\hat{x}_{1}^{2}}\right)T_{|n|}\left(\frac{(1-u)^{2}+1-\hat{x}_{1}^{2}}{2(1-u)}\right)}{\sqrt{p^{2}-\hat{x}_{1}^{2}}\sqrt{\hat{x}_{1}^{2}-u^{2}}\sqrt{1+\frac{u^{2}-\hat{x}_{1}^{2}}{4(1-u)}}}\mathrm{d}\hat{x}_{1}\\\
&=\sqrt{1-u}\int_{0}^{1}\frac{\hat{x}_{1}\sqrt{\hat{x}_{1}^{2}+s^{2}(p^{2}-\hat{x}_{1}^{2})}\cos\left(\frac{\eta}{s}\sqrt{p^{2}-\hat{x}_{1}^{2}}\right)T_{|n|}\left(\frac{(1-u)^{2}+1-\hat{x}_{1}^{2}}{2(1-u)}\right)}{\sqrt{v}\sqrt{1-v}\sqrt{p+\hat{x}_{1}}\sqrt{\hat{x}_{1}+u}\sqrt{1+\frac{u^{2}-\hat{x}_{1}^{2}}{4(1-u)}}}\mid_{\hat{x}_{1}=u+v(p-u)}\mathrm{d}v,\end{split}$
after substituting
(4.20) $\hat{x}_{1}=u+v(p-u)$
in the last step. We have
(4.21)
$\begin{split}K_{n}(\eta;p,p)&=\frac{p\sqrt{1-p}}{2}\int_{0}^{1}\frac{1}{\sqrt{v}\sqrt{1-v}}\mathrm{d}v\\\
&=\frac{\pi p\sqrt{1-p}}{2},\end{split}$
and thus $K_{n}(\eta;\cdot,\cdot)$ is nonzero on the diagonal for all
$\eta\in\mathbb{R}$ and $n\in\mathbb{Z}$, except when $p=0$ or $p=1$. Since
the support of $f$ is bounded away from the cylinder surface and we consider
$p\in[\epsilon,1-\epsilon_{1}]$, the cases $p=0,1$ do not arise.
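The value in (4.21) comes from the Beta integral $\int_{0}^{1}v^{-1/2}(1-v)^{-1/2}\,\mathrm{d}v=B(1/2,1/2)=\pi$, which can be checked numerically (the midpoint-rule discretization below is our own):

```python
import math

# The integral in (4.21) is the Beta function B(1/2, 1/2) = pi, so
# K_n(eta; p, p) = pi * p * sqrt(1 - p) / 2.  Midpoint points avoid the
# integrable endpoint singularities at v = 0 and v = 1.
n = 200_000
total = sum(1.0 / math.sqrt(v * (1.0 - v))
            for v in ((i + 0.5) / n for i in range(n))) / n
print(total)  # close to pi
```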
We will now show that $K_{n}(\eta;p,u)$ and
$\frac{\mathrm{d}}{\mathrm{d}p}K_{n}(\eta;p,u)$ are bounded for each $\eta$
and all $(p,u)\in D_{\epsilon_{1}}$. To do this, we show that all the terms
dependent on $p$ under the integral on the second line of (4.19) are bounded
and have bounded first order derivative with respect to $p$. First we have
$|\hat{x}_{1}|\leq 1$ and from the change of variable (4.20),
$\frac{\mathrm{d}}{\mathrm{d}p}\hat{x}_{1}=v\leq 1$. Now
$\sqrt{\hat{x}_{1}^{2}+s^{2}(p^{2}-\hat{x}_{1}^{2})}=\sqrt{(1-s^{2})\hat{x}_{1}^{2}+s^{2}p^{2}}\leq\sqrt{(1-s^{2})+s^{2}}=1,$
and
$\frac{\mathrm{d}}{\mathrm{d}p}\sqrt{(1-s^{2})\hat{x}_{1}^{2}+s^{2}p^{2}}=\frac{ps^{2}+v(1-s^{2})\hat{x}_{1}}{\sqrt{(1-s^{2})\hat{x}_{1}^{2}+s^{2}p^{2}}}\leq\frac{1}{\sqrt{(1-s^{2})\epsilon^{2}+s^{2}\epsilon^{2}}}=\frac{1}{\epsilon},$
noting $\hat{x}_{1}\geq u\geq\epsilon$ and $p\geq\epsilon$. We have
$\left|\cos\left(\frac{\eta}{s}\sqrt{p^{2}-\hat{x}_{1}^{2}}\right)\right|\leq
1$, and
(4.22)
$\begin{split}\frac{\mathrm{d}}{\mathrm{d}p}\cos\left(\frac{\eta}{s}\sqrt{p^{2}-\hat{x}_{1}^{2}}\right)&=-\frac{\frac{\eta}{s}(p-v\hat{x}_{1})}{\sqrt{p^{2}-\hat{x}_{1}^{2}}}\sin\left(\frac{\eta}{s}\sqrt{p^{2}-\hat{x}_{1}^{2}}\right)\\\
&=-\left(\frac{\eta}{s}\right)^{2}(p-v\hat{x}_{1})\operatorname{sinc}\left(\frac{\eta}{s}\sqrt{p^{2}-\hat{x}_{1}^{2}}\right),\end{split}$
and thus
$\left|\frac{\mathrm{d}}{\mathrm{d}p}\cos\left(\frac{\eta}{s}\sqrt{p^{2}-\hat{x}_{1}^{2}}\right)\right|\leq\left(\frac{\eta}{s}\right)^{2}$.
For $n\neq 0$, we have
$\left|T_{|n|}\left(\frac{(1-u)^{2}+1-\hat{x}_{1}^{2}}{2(1-u)}\right)\right|\leq
1$ and
(4.23)
$\begin{split}\frac{\mathrm{d}}{\mathrm{d}p}T_{|n|}\left(\frac{(1-u)^{2}+1-\hat{x}_{1}^{2}}{2(1-u)}\right)&=-\frac{v\hat{x}_{1}}{1-u}T^{\prime}_{|n|}\left(\frac{(1-u)^{2}+1-\hat{x}_{1}^{2}}{2(1-u)}\right)\\\
&=-\frac{|n|v\hat{x}_{1}}{1-u}U_{|n|-1}\left(\frac{(1-u)^{2}+1-\hat{x}_{1}^{2}}{2(1-u)}\right),\end{split}$
where $U_{n}$ is a Chebyshev polynomial of the second kind. It follows that
$\left|\frac{\mathrm{d}}{\mathrm{d}p}T_{|n|}\left(\frac{(1-u)^{2}+1-\hat{x}_{1}^{2}}{2(1-u)}\right)\right|\leq\frac{|n|^{2}}{\epsilon_{1}},$
using Figure 4 and the Law of Cosines to see that the argument of $U_{|n|-1}$
lies in $[-1,1]$, the bound $|U_{|n|}(x)|\leq|n|+1$ for
$\left|x\right|\leq 1$, and $\frac{1}{1-u}\leq\frac{1}{\epsilon_{1}}$. The
$n=0$ case is trivial since $T_{0}=1$.
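The two Chebyshev facts used above, $\cos(n\hat{\phi})=T_{n}(\cos\hat{\phi})$ and $|U_{n}(x)|\leq n+1$ on $[-1,1]$, can be verified numerically from the three-term recurrence (the implementations are our own sketch):

```python
import math

def cheb_T(n, x):
    """T_n(x) via the recurrence T_{k+1} = 2x*T_k - T_{k-1}, T_0 = 1, T_1 = x."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

def cheb_U(n, x):
    """U_n(x) via the same recurrence, seeded with U_0 = 1, U_1 = 2x."""
    u_prev, u = 1.0, 2 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2 * x * u - u_prev
    return u

# Check T_n(cos t) = cos(n t) (used between (4.12) and (4.13)) and
# |U_n(x)| <= n + 1 on [-1, 1] (used to bound the derivative of T_{|n|}).
for n in range(6):
    for k in range(25):
        t = k * math.pi / 24
        assert abs(cheb_T(n, math.cos(t)) - math.cos(n * t)) < 1e-9
        assert abs(cheb_U(n, math.cos(t))) <= n + 1 + 1e-9
```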
Now, we have
$\frac{1}{\sqrt{p+\hat{x}_{1}}\sqrt{\hat{x}_{1}+u}}\leq\frac{1}{2\epsilon}$
and
$\left|\frac{\mathrm{d}}{\mathrm{d}p}\left(\frac{1}{\sqrt{p+\hat{x}_{1}}\sqrt{\hat{x}_{1}+u}}\right)\right|=\frac{1}{2}\left(\frac{1+v}{p+\hat{x}_{1}}+\frac{v}{\hat{x}_{1}+u}\right)\frac{1}{\sqrt{p+\hat{x}_{1}}\sqrt{\hat{x}_{1}+u}}\leq\frac{3}{8\epsilon^{2}}.$
Finally, we have
$\frac{u^{2}-\hat{x}_{1}^{2}}{4(1-u)}\geq\frac{u^{2}-1}{4(1-u)}=-\frac{(u+1)}{4}\geq-\frac{1}{2}.$
Thus,
$\left(1+\frac{u^{2}-\hat{x}_{1}^{2}}{4(1-u)}\right)^{-\frac{1}{2}}\leq\sqrt{2}$,
and
$\frac{\mathrm{d}}{\mathrm{d}p}\left(1+\frac{u^{2}-\hat{x}_{1}^{2}}{4(1-u)}\right)^{-\frac{1}{2}}=\frac{v\hat{x}_{1}}{4(1-u)}\left(1+\frac{u^{2}-\hat{x}_{1}^{2}}{4(1-u)}\right)^{-\frac{3}{2}}\leq\frac{1}{\epsilon_{1}\sqrt{2}}.$
After putting all this together, we can convert (4.15) into a Volterra
equation of the second kind with bounded kernel for $(p,u)\in
D_{\epsilon_{1}}$ and invert it by successive approximations using classical
Volterra integral equation theory [33, 28]. Therefore, if $R_{s}f=0$, then
(4.15) implies that $\hat{f}_{n}(r,\eta)=0$ for all $\eta$ and for
$r\in[\epsilon_{1},1-\epsilon]$. The lower limit $\epsilon_{1}$ was used
only to show that $K_{n}$ is bounded; since $\epsilon_{1}\in(0,1/2)$ was
arbitrary, $\hat{f}_{n}(r,\eta)=0$ for all
$r\in[0,1-\epsilon]$.
Finally, since any $f\in L^{2}_{c}(C)$ is supported in $C_{\epsilon}$ for some
$\epsilon>0$, any $f\in L^{2}_{c}(C)$ is uniquely determined by $R_{s}f$, for
any fixed $s\in(0,1]$. This implies that $R_{s}$ is injective on $L_{c}^{2}(C)$. ∎
###### Remark 4.5.
Note that the proof requires $f$ to be zero near the boundary of the cylinder.
This is needed so that the inner integral in (4.13) can have upper limit
$1-\epsilon$ instead of $1$. This allows us to prove that $K$ satisfies the
hypotheses to make (4.15) an invertible Volterra equation.
The standard inversion result for Volterra equations would not apply to
$K_{n}$ if that upper limit were $1$ since $K_{n}$ would no longer be bounded.
This suggests there could be a null space for $R_{s}$ on $L^{2}(C)$.
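The successive-approximation (Neumann series) inversion used in the proof can be illustrated on a toy Volterra equation of the second kind; the solver below, with its trapezoid quadrature and test kernel, is our own sketch and not the kernel $K_{n}$:

```python
import math

def solve_volterra2(kernel, g, grid, iters=60):
    """Successive approximations for f(p) = g(p) + int_0^p K(p, u) f(u) du,
    using the trapezoid rule on a uniform grid (a toy Neumann-series solver)."""
    h = grid[1] - grid[0]
    f = [g(p) for p in grid]
    for _ in range(iters):
        new = []
        for i, p in enumerate(grid):
            integral = sum(0.5 * h * (kernel(p, grid[j]) * f[j]
                                      + kernel(p, grid[j + 1]) * f[j + 1])
                           for j in range(i))
            new.append(g(p) + integral)
        f = new
    return f

# Known solution: K = 1, g = 1 gives f(p) = exp(p).
grid = [0.01 * i for i in range(101)]
f = solve_volterra2(lambda p, u: 1.0, lambda p: 1.0, grid)
print(abs(f[-1] - math.e))  # small (quadrature error only)
```

As for the continuous equation, the iteration always converges for a bounded kernel; the remaining error here is just the trapezoid-rule discretization.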
We now discuss the visible singularities.
### 4.2. Visible singularities
In this section, we investigate the singularity coverage (or edge detection)
using spheroid and spherical integral data when the surface of sources and
receivers is a unit cylinder with central axis $x_{2}$, as considered in the
previous section on injectivity. Let $\partial C$ denote the cylinder of
source and receiver positions. We consider $\phi$ and $x_{2}$ in the range
$\phi\in[0,2\pi]$ and $x_{2}\in[-1,1]$, and parameterize $\partial C$ using
cylindrical coordinates
$\mathbf{s}(\phi,x_{2})=(\cos\phi,x_{2},\sin\phi)\in\partial C$, as in (4.1).
For every ${\mathbf{x}}\in\\{\sqrt{x_{1}^{2}+x_{3}^{2}}<1\\}=C_{I}$ (i.e., for
every ${\mathbf{x}}$ in the interior of $\partial C$), we can calculate the
proportion of wavefront directions that are detected by spherical and spheroid
integral data. The spherical data is three-dimensional, and the degrees of
freedom are $(p,\phi_{0},y_{0})$. The spheroid data is four-dimensional, with
degrees of freedom $(s,p,\phi_{0},y_{0})$. We wish to investigate whether the
additional scaling factor, $s$, offers any improvement in terms of edge
detection. We discretize $\partial C$ with $\phi$ at $1^{\circ}$ intervals,
$\phi\in\\{\frac{j\pi}{180}:0\leq j\leq 179\\}=\Phi$, and
$x_{2}\in\\{-1+\frac{2j}{N-1}:0\leq j\leq N-1\\}=X_{2}$, where $N\geq 1$
controls the level of discretization along the $x_{2}$ axis. For the spherical
data, we consider all spheres with centers
$c\in\\{(\cos\phi,x_{2},\sin\phi):\phi\in\Phi,x_{2}\in X_{2}\\}=\partial
C_{0}$. We consider all spheroids with axis of revolution parallel to $x_{2}$,
whose foci $c_{1},c_{2}\in\partial C_{0}$. For every given $c$ and
${\mathbf{x}}\in C_{I}$, we calculate the wavefront direction,
$\xi=\frac{{\mathbf{x}}-c}{\|{\mathbf{x}}-c\|}$, detected. For every
${\mathbf{x}}\in C_{I}$, we calculate all $180\times N$ wavefront directions
detected at ${\mathbf{x}}$, and use this information to build a 3-D map of the
total wavefront detection on $C_{I}$. Similarly, we can calculate a 3-D map of
the directional coverage for the spheroid integral data, and compare against
the spherical map.
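A simplified version of this coverage computation might look as follows; the direction binning (uniform in the two angles, so not area-uniform on the sphere) and all names are our own choices:

```python
import math

def coverage_fraction(x, N, n_bins=32):
    """Fraction of direction bins hit at the point x by wavefront directions
    xi = (x - c)/|x - c| over the discretized center set (a simplified
    version of the coverage maps behind Figures 6 and 7; N >= 2)."""
    hit = set()
    for j in range(180):                       # phi at 1-degree steps
        phi = j * math.pi / 180
        for k in range(N):                     # x2 in [-1, 1]
            x2 = -1 + 2 * k / (N - 1)
            c = (math.cos(phi), x2, math.sin(phi))
            d = tuple(x[i] - c[i] for i in range(3))
            norm = math.sqrt(sum(t * t for t in d))
            xi = tuple(t / norm for t in d)
            theta = math.acos(max(-1.0, min(1.0, xi[1])))   # polar angle
            az = math.atan2(xi[2], xi[0]) % (2 * math.pi)   # azimuth
            hit.add((min(n_bins - 1, int(theta / math.pi * n_bins)),
                     min(n_bins - 1, int(az / (2 * math.pi) * n_bins))))
    return len(hit) / (n_bins * n_bins)
```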
(a) 4D spheroid data
(b) 3D sphere data
(c) 3D sphere data - cropped color bar
Figure 6. Detectable singularities when $N=50$. Top row - $(x_{1},x_{2})$
plane. Bottom row - $(x_{1},x_{3})$ plane. Left - 4D spheroid data. Middle -
3D sphere data. Right - 3D sphere data with cropped color bar to better show
the details.
In Figure 6, we present $(x_{1},x_{3})$ and $(x_{1},x_{2})$ plane cross-
sections showing the directional coverage using spherical and spheroid data
when $N=50$. The left-hand and middle columns of Figure 6 compare the
directional coverage of spheroid and spherical data on the same scale. The
right-hand column of Figure 6 shows the spherical wavefront detection with the
color bar cropped so that the reader can better see the details. We can see
that the wavefront coverage is significantly stronger using spheroid integral
data, when compared to spherical, and thus the additional degree of freedom,
$s$, has proven beneficial.
To show what happens as the number of emitters, and level of $x_{2}$
discretization ($N$), varies, we plot curves of the average directional
coverage over the $(x_{2},x_{3})$ and $(x_{1},x_{3})$ planes for varying $N$
in Figure 7. For all $N\leq 100$, spheroid data offers greater average
wavefront detection than spherical data, although
the difference becomes less pronounced with increasing $N$.
(a) $(x_{1},x_{2})$ plane
(b) $(x_{1},x_{3})$ plane
Figure 7. Mean directional coverage percentage over $(x_{1},x_{2})$ and
$(x_{1},x_{3})$ plane cylinder cross-sections for varying $N$.
Let $R_{E}(s,p,\phi_{0},y_{0})=R_{s}(p,\phi_{0},y_{0})$. If one were to design
a URT scanner with a cylindrical set of emitters/receivers, as described in
this section, it would be beneficial to measure spheroid integral data,
$R_{E}f$, over $R_{1}f$ (spherical) data, as $R_{E}$ offers greater edge
detection, especially with more limited emitter/receiver discretization.
$R_{E}$ and $R_{1}$ both have the theoretical guarantees of injectivity and
satisfaction of the Bolker condition, as proven by our microlocal and injectivity theorems.
## 5\. Example image reconstructions in two-dimensions
In this section, we present two-dimensional image reconstructions from
spherical (circular) integral data. We consider two scanning curves, $S$, one
which is non-convex, and one which is convex. We verify Corollary 3.2 by
comparing the artifacts in (unfiltered) backprojection reconstructions of
delta functions to artifacts predicted by our theory. In addition, we also
investigate techniques to suppress the image artifacts in the non-convex
curves case. Specifically, we apply discrete solvers and a Total Variation
(TV) regularizer with smoothing parameter chosen by cross validation.
We define the Radon transform
(5.1) $R_{c}f(r,y_{1})=Rf\left((y_{1},q(y_{1}))^{T},I_{2\times
2},r^{2}\right),$
where $q$ defines the set of circle centers, $S=q(\mathbb{R})$, and $r$ is the
circle radius. In this section, we present reconstructions of $f$ from
$R_{c}f$ data. We consider two example $q$, one non-convex with
$q(y_{1})=a(y_{1}+100)(y_{1}-100)y_{1}^{2}+100$ and $a=5/10^{6}$, and one
convex, with $q(y_{1})=\frac{y_{1}^{2}}{50}-100$. See Figure 8 for an illustration
of both curves. In both cases,
$\text{supp}(f)\subset\\{(x_{1},x_{2})\in\mathbb{R}^{2}:x_{2}>q(x_{1}),{\left|x_{1}\right|<100}\\}$.
Figure 8. Convex and non-convex measurement curve examples.
These example curves and function supports are chosen for two reasons. First,
for $f\in L^{2}_{c}(D)$, $R_{c}f$ uniquely determines $f$ for both $q$
considered, and hence there are no artifacts due to a null space. This is true
since $S$, for both $q$ in Figure 8, is not the union of a finite set and a
Coxeter system of straight lines [1]. Second, $R_{c}f$ detects all
singularities in $D$. Thus, the only artifacts are due to noise and the Bolker
condition, which is our focus. We will also encounter, due to
discretization limitations, streaking artifacts which occur along circles at
the boundary of the data set. Similar boundary artifacts have been discussed
previously in the literature, in regard to photoacoustic tomography and
sonar [12].
(a) predicted artifacts
(b) $A^{T}A\delta$ (observed artifacts)
Figure 9. Predicted and observed artifacts in delta function reconstructions. Top
row - non-convex curve. Bottom row - convex curve.
### 5.1. Delta function reconstructions
We now present unfiltered backprojection image reconstructions of delta
functions to validate the results of Theorem 3.1. A delta function is
supported at a single point and has singularities (edges) in all directions.
Thus, in a reconstruction of a delta function, $\delta$, from circular
integral data with centers on $S=q(\mathbb{R})$, we would expect to see
artifacts which are the reflections of $\delta$ in planes tangent to $S$. Let
$A$ denote the discretized form of $R_{c}$. We sample circle centers
$c_{i}=\left(-100+\frac{i-1}{2},q\left(-100+\frac{i-1}{2}\right)\right)$ for
$1\leq i\leq 401$, and radii $r_{j}=1+j$ for $1\leq j\leq 199$. See the right
column of Figure 9 for example backprojection reconstructions of delta
functions when $S$ is convex and non-convex. In the left column of Figure 9,
we show the artifacts due to Bolker as predicted by our theory. The predicted
and observed artifacts match up well, and all artifact locations in the convex
case lie outside the function support, which is in line with Theorem 3.1. In
the bottom-right of Figure 9, there are additional circular shaped streaking
artifacts which pass through the delta function. The circular streaks have
centers at the end points of the red curve in the bottom-left of Figure 9.
These occur due to the sharp cutoff at the boundary of the sinogram, since the
circle centers are only finitely sampled. The boundary artifacts are less
noticeable in the non-convex curve case, in the top-right of Figure 9, and the
artifacts due to Bolker appear more strongly.
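The predicted artifact locations are mirror images of the delta's position across lines tangent to $S$; a small helper computing such reflections (this function is our own illustration):

```python
import math

def reflect_in_tangent(x, c, t):
    """Reflect the 2-D point x across the line through c with direction t;
    the theory predicts artifacts at such mirror images of the delta's
    location across lines tangent to S."""
    n = math.hypot(t[0], t[1])
    tx, ty = t[0] / n, t[1] / n
    dx, dy = x[0] - c[0], x[1] - c[1]
    proj = dx * tx + dy * ty
    fx, fy = c[0] + proj * tx, c[1] + proj * ty   # foot of the perpendicular
    return (2 * fx - x[0], 2 * fy - x[1])

# For the convex curve q(y1) = y1**2/50 - 100, the tangent direction at y1
# is (1, q'(y1)) = (1, y1/25).
print(reflect_in_tangent((0.0, 1.0), (0.0, 0.0), (1.0, 0.0)))  # (0.0, -1.0)
```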
### 5.2. Phantom reconstructions
Here, we present algebraic reconstructions of image phantoms from circular
integral data. We consider two phantoms, one simple and one complex. The
simple phantom consists of two rectangles with density 1, and the complex
phantom is made up of a thin cross, a square, a hollow ellipse, and two
circular phantoms, all of varying densities. The nonzero densities are
arranged to fit within $D$, for both the convex and non-convex measurement
curves considered. We present reconstructions using the Landweber method and
TV regularization. Specifically, to implement TV, we find
(5.2) $\operatorname*{arg\,min}_{{\mathbf{x}}}\
(A{\mathbf{x}}-\mathbf{b})^{T}(A{\mathbf{x}}-\mathbf{b})+\alpha\sqrt{\|\nabla{\mathbf{x}}\|^{2}_{2}+\beta^{2}},$
where $\alpha,\beta>0$ are regularization parameters.
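A minimal discretization of objective (5.2) for a one-dimensional signal, with a forward-difference gradient (this toy version, including the dense matrix-vector product, is our own simplification):

```python
def tv_objective(A, x, b, alpha, beta):
    """Evaluate objective (5.2) for a 1-D signal x: data misfit plus the
    smoothed total-variation penalty (forward-difference gradient; the
    discretization is a toy version, not the paper's 2-D implementation)."""
    residual = [sum(A[i][j] * x[j] for j in range(len(x))) - b[i]
                for i in range(len(b))]
    misfit = sum(r * r for r in residual)
    grad_sq = sum((x[j + 1] - x[j]) ** 2 for j in range(len(x) - 1))
    return misfit + alpha * (grad_sq + beta * beta) ** 0.5

# Identity A, one unit jump in x: misfit 1 plus penalty alpha * sqrt(1 + beta^2).
print(tv_objective([[1, 0], [0, 1]], [1.0, 2.0], [1.0, 1.0], 1.0, 0.0))  # 2.0
```

The $\beta$ term smooths the non-differentiable TV penalty so gradient-based solvers apply.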
(a) phantom
(b) Landweber
(c) TV
Figure 10. Reconstructions of image phantoms - non-convex curve.
The data was simulated by $\mathbf{b}=A_{\epsilon}{\mathbf{x}}+\eta$, where
$A_{\epsilon}$ is a perturbed $A$. Specifically, to generate $A_{\epsilon}$,
the non-zero values of $A$ (i.e., the circular integral weights) were
multiplied by $1+u$, where $u\sim U(-0.5,0.5)$, that is, $u$ is drawn from a
uniform distribution on $[-0.5,0.5]$. We use $A_{\epsilon}$ to generate data
to avoid inverse crime. The added noise is white Gaussian noise,
$\eta\sim\mathcal{N}(0,\sigma)$, where $\sigma$ controls the noise
level. The hyperparameters $\alpha,\beta$ were chosen using cross-validation,
so there is no optimism in the results with respect to the selection of
$\alpha,\beta$.
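The data simulation step can be sketched as follows (a toy dense-matrix version; the function and parameter names are our own):

```python
import random

def simulate_data(A, x, rel=0.5, sigma=0.0, seed=0):
    """b = A_eps @ x + eta: each nonzero entry of A is scaled by (1 + u) with
    u ~ U(-rel, rel) (the inverse-crime guard described above), then white
    Gaussian noise of standard deviation sigma is added."""
    rng = random.Random(seed)
    A_eps = [[a * (1 + rng.uniform(-rel, rel)) if a != 0 else 0.0 for a in row]
             for row in A]
    return [sum(a * xj for a, xj in zip(row, x)) + rng.gauss(0.0, sigma)
            for row in A_eps]

# With rel = 0 and sigma = 0 this reduces to the exact forward model A @ x.
```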
(a) phantom
(b) Landweber
(c) TV
Figure 11. Reconstructions of image phantoms - convex curve.
See Figure 10, where we show reconstructions of the test phantoms using the
Landweber method and TV in the non-convex curve case. We see severe artifacts
in the Landweber reconstructions, which is not surprising given the inversion
instabilities of $R$ when $S$ does not satisfy global convexity. The artifacts
are largely suppressed in the TV reconstructions. TV enforces sparse
gradients, and thus TV can combat additional, unwanted singularities in the
reconstruction due to Bolker. In this example, TV is effective in removing the
artifacts.
In Figure 11, we present image reconstructions of the simple and complex
phantoms in the convex curve case. The artifacts in the Landweber
reconstruction of the simple phantom are minimal and we see only a background
noise effect. In the Landweber reconstruction of the complex phantom, there
are mild artifacts which appear as a streaking effect near the boundary of the
hollow ellipse. In the TV reconstructions of both phantoms, the noise effects
and streaking artifacts are suppressed.
## 6\. Conclusions and further work
In this paper, we presented novel microlocal and injectivity results for a new
generalized Radon transform, $R$, which defines the integrals of square
integrable compactly supported functions over ellipsoids, hyperboloids, and
elliptic hyperboloids and generalizations with centers on a smooth surface,
$S$. We showed that $R$ was an FIO and proved that $R$ satisfied the Bolker
condition if condition (3.3) holds, and this is if and only if when $M$ is
dimension zero. We applied our theory to some examples in section 3.2, and
provided a more in-depth analysis of a cylindrical scanning geometry of
interest in URT, where we proved injectivity results. Specifically, we showed
that any $f\in L^{2}_{c}$, with support in a cylindrical tube, could be
reconstructed uniquely from its integrals over spheroids with centers on a
cylinder which encapsulates the support of $f$. We also investigated the
visible singularities for the cylindrical geometry in section 4.2. In section
5, to validate our microlocal theory and show the image artifacts, we
presented image reconstructions of delta functions and image phantoms. We also
tested a method of artifact reduction, using TV regularization and cross-
validation, which proved successful in the simulations conducted.
In further work, we aim to investigate the potential practical applicability
of the hyperboloid and elliptic hyperboloid Radon transform, as, in this work,
we considered only applications of $R$ in fields such as URT, where the
integral surfaces are spheroids. There is indication that the hyperboloid case
may be of interest in proton therapy through the measurement of multiple gamma
rays emitted as a cascade [26]. We aim also to pursue three-dimensional image
reconstruction methods for the cylindrical scanning geometry introduced in
section 3.2. As evidenced in section 4.2, the inverse of $R$ in the
cylindrical case is severely unstable, as there are invisible singularities.
Thus, it is likely that we will require strong regularization (e.g., TV or
machine learning) to solve this problem.
## Acknowledgements:
The authors thank Plamen Stefanov for helpful comments on this work. The third
author’s research was partially supported by NSF grant 1712207 and Simons
grant 708556. The first author wishes to acknowledge funding support from
Brigham Ovarian Cancer Research Fund, The V Foundation, Abcam Inc., and Aspira
Women’s Health. Sean Holman was supported by the Engineering and Physical
Sciences Research Council grant number EP/V007742/1.
## References
* [1] M. L. Agranovsky and E. T. Quinto. Injectivity sets for the Radon transform over circles and complete systems of radial functions. Journal of Functional Analysis, 139(2):383–414, 1996.
* [2] G. Ambartsoumian, J. Boman, V. P. Krishnan, and E. T. Quinto. Microlocal analysis of an ultrasound transform with circular source and receiver trajectories. In Geometric analysis and integral geometry, volume 598 of Contemp. Math., pages 45–58. Amer. Math. Soc., Providence, RI, 2013.
* [3] G. Ambartsoumian, R. Gouia-Zarrad, and M. A. Lewis. Inversion of the circular Radon transform on an annulus. Inverse Problems, 26(10):105015, 2010.
* [4] L.-E. Andersson. On the determination of a function from spherical averages. SIAM Journal on Mathematical Analysis, 19(1):214–232, 1988.
* [5] P. Caday. Cancellation of singularities for synthetic aperture radar. Inverse Problems, 31(1):015002, 22, 2015.
* [6] J. D. Coker and A. H. Tewfik. Multistatic SAR image reconstruction based on an elliptical-geometry Radon transform. In 2007 International Waveform Diversity and Design Conference, pages 204–208. IEEE, 2007.
* [7] A. M. Cormack. Representation of a function by its line integrals with some radiological applications. J. Appl. Physics, 34(9):2722–2727, 1963.
* [8] J. J. Duistermaat. Fourier integral operators, volume 130 of Progress in Mathematics. Birkhäuser, Inc., Boston, MA, 1996.
* [9] J. J. Duistermaat and L. Hörmander. Fourier integral operators, volume 2. Springer, 1996.
* [10] R. Felea, R. Gaburro, and C. J. Nolan. Microlocal analysis of SAR imaging of a dynamic reflectivity function. SIAM Journal on Mathematical Analysis, 45(5):2767–2789, 2013.
* [11] R. Felea, V. P. Krishnan, C. J. Nolan, and E. T. Quinto. Common midpoint versus common offset acquisition geometry in seismic imaging. Inverse Probl. Imaging, 10(1):87–102, 2016.
* [12] J. Frikel and E. T. Quinto. Artifacts in incomplete data tomography with applications to photoacoustic tomography and sonar. SIAM J. Appl. Math., 75(2):703–725, 2015.
* [13] R. Gouia-Zarrad and G. Ambartsoumian. Approximate inversion algorithm of the elliptical Radon transform. In 2012 8th International Symposium on Mechatronics and its Applications, pages 1–4. IEEE, 2012.
* [14] C. Grathwohl, P. C. Kunstmann, E. T. Quinto, and A. Rieder. Imaging with the elliptic Radon transform in three dimensions from an analytical and numerical perspective. SIAM Journal on Imaging Sciences, 13(4):2250–2280, 2020.
* [15] V. Guillemin and S. Sternberg. Geometric Asymptotics. American Mathematical Society, Providence, RI, 1977.
* [16] M. Haltmeier and S. Moon. The spherical Radon transform with centers on cylindrical surfaces. Journal of Mathematical Analysis and Applications, 448(1):567–579, 2017.
* [17] A. Homan and H. Zhou. Injectivity and Stability for a Generic Class of Generalized Radon Transforms. J Geom Anal, 27:1515–1529, 2017.
* [18] L. Hörmander. The analysis of linear partial differential operators. I. Classics in Mathematics. Springer-Verlag, Berlin, 2003. Distribution theory and Fourier analysis, Reprint of the second (1990) edition [Springer, Berlin].
* [19] L. Hörmander. The analysis of linear partial differential operators. III. Classics in Mathematics. Springer, Berlin, 2007. Pseudo-differential operators, Reprint of the 1994 edition.
* [20] L. Hörmander. The analysis of linear partial differential operators. IV. Classics in Mathematics. Springer-Verlag, Berlin, 2009. Fourier integral operators, Reprint of the 1994 edition.
* [21] J. Klein. Inverting the spherical Radon transform for physically meaningful functions. arXiv preprint math/0307348, 2003.
* [22] V. P. Krishnan, H. Levinson, and E. T. Quinto. Microlocal analysis of elliptical Radon transforms with foci on a line. In The mathematical legacy of Leon Ehrenpreis, pages 163–182. Springer, 2012.
* [23] L. A. Kunyansky. Explicit inversion formulae for the spherical mean Radon transform. Inverse Problems, 23(1):373, 2007.
* [24] S. Moon and J. Heo. Inversion of the elliptical Radon transform arising in migration imaging using the regular Radon transform. Journal of Mathematical Analysis and Applications, 436(1):138–148, 2016.
* [25] L. V. Nguyen and T. A. Pham. Microlocal analysis for spherical Radon transform: two nonstandard problems. Inverse Problems, 35(7):074001, 15, 2019.
* [26] C. M. Panaino, R. I. Mackay, M. Sotiropoulos, K. J. Kirkby, and M. J. Taylor. Full 3D position reconstruction of a radioactive source based on a novel hyperbolic geometrical algorithm. Computer Physics Communications, 252, 2020.
* [27] E. T. Quinto. The dependence of the generalized Radon transform on defining measures. Trans. Amer. Math. Soc., 257:331–346, 1980.
* [28] E. T. Quinto. The invertibility of rotation invariant Radon transforms. J. Math. Anal. Appl., 94:602–603, 1983.
* [29] B. Rubin. Inversion formulas for the spherical Radon transform and the generalized cosine transform. Advances in Applied Mathematics, 29(3):471–497, 2002.
* [30] B. Rubin. A note on the sonar transform and related Radon transforms. arXiv:2206.05854 [math.FA], page 13, 2022.
* [31] W. Rudin. Functional analysis. McGraw-Hill Book Co., New York, 1973. McGraw-Hill Series in Higher Mathematics.
* [32] P. Stefanov and G. Uhlmann. Is a curved flight path in SAR better than a straight one? SIAM J. Appl. Math., 73(4):1596–1612, 2013.
* [33] F. G. Tricomi. Integral equations. Dover Publications, Inc., New York, 1985. Reprint of the 1957 original.
* [34] J. W. Webber and S. Holman. Microlocal analysis of a spindle transform. Inverse Problems & Imaging, 13(2):231–261, 2019.
* [35] J. W. Webber and E. T. Quinto. Microlocal analysis of a compton tomography problem. SIAM Journal on Imaging Sciences, 13(2):746–774, 2020.
|
# Phenomenology of Strong Interactions — Towards an Effective Theory for Low
Energy QCD
Adamu Issifu<EMAIL_ADDRESS>
Departamento de Física, CFM—Universidade Federal de Santa Catarina, Caixa Postal 476, 88040-900 Florianópolis, SC, Brazil
Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5008, 58051-970 João Pessoa, Paraíba, Brazil

Francisco A. Brito<EMAIL_ADDRESS>
Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5008, 58051-970 João Pessoa, Paraíba, Brazil
Departamento de Física, Universidade Federal de Campina Grande, Caixa Postal 10071, 58429-900 Campina Grande, Paraíba, Brazil
###### Abstract
In this paper, we develop models applicable to phenomenological particle
physics by using the string analogy of particles. These theories can be used
to investigate the phenomenology of confinement, deconfinement, chiral
condensate, QGP phase transitions, and even the evolution of the early
universe. Other confining properties such as scalar glueball mass, gluon mass,
glueball-meson mixing states, QCD vacuum, and color superconductivity can also
be investigated within these model frameworks. At the end of the paper, we use
one of the models to describe the phenomenon of color confinement among
glueballs. The models are built on the Dirac-Born-Infeld (DBI) action,
modified for open strings with their endpoints on a D$p$-brane or brane-anti-
brane at a tachyonic vacuum.
## I Introduction
String theory was conceived in the late 1960s to explain the behavior of
nuclear matter such as protons and neutrons Schwarz ; Schwarz1 . Even though
the theory was not successful in explaining the characteristics of quarks and
gluons at its inception, it promised interesting interventions in other areas
of physics Green ; Polchinski2 ; Johnson ; Zwiebach1 ; Becker . It offered
insight into cosmology, astrophysics, and the unification of the fundamental
forces of nature, a goal that has been on the table of physicists for some
time now. Subsequently, the theory of Quantum Chromodynamics (QCD) was
developed in the early 1970s to give a comprehensive explanation of nuclear
matter Gross ; Politzer . The QCD theory is now accepted as the standard
theory of strong interactions. Interestingly, recent developments show that
string theory and QCD describe the same physics.
The formulations of Quantum Electrodynamics (QED) and QCD are almost the same.
Both are gauge field theories, and together they form the foundation of the
widely accepted Standard Model (SM). The fundamental particles of these
theories are studied based on the gauge bosons they exchange: photons for the
electromagnetic force, $W^{\pm}$ and $Z^{0}$ bosons for the electroweak force,
and gluons for the strong nuclear force Quigg ; Collins1 ; Shaw . The
gravitational force does not fall into this category, so it is investigated
under Einstein's general theory of relativity. Physicists have pinned their
hopes on string theory to unify all the fundamental forces; this falls under
'physics beyond the SM' Aharony ; Maldacena ; Grana ; Polchinski3 ; Haro .
Despite the similarities, quarks and gluons are color particles classified
under three conventional colors (red, blue, and green), whilst photons are
color-neutral bosons that mediate interactions between electrically charged
leptons. Also, gluons self-interact due to their color charges but photons do
not Quigg ; Collins1 .
Additionally, QCD falls short in explaining color-neutral particles such as
bound states of gluons (glueballs) and quarks (hadrons and mesons), so string
theory can be resorted to for further description. The string-like description
of hadrons arising from the quark model Amsler is an important phenomenon in
applying string theory. Under this picture, when a quark and an antiquark are
pulled apart, they behave as if they are connected by a rubber band (gluon)
that becomes increasingly difficult to stretch as the separation distance
between them increases. This analogy gradually fails when the particles are
brought closer and closer together; in this regime string theory fails and QCD
becomes viable. Now, looking at the particles in terms of fundamental strings,
string theory describes hadrons quite well and provides the background for the
unification of the fundamental forces. In string theory, the strings are
treated as particles, where different particles are associated with different
string oscillations. The masses of the particles are associated with the
energy of the oscillating string. The intrinsic spin of a particle is
associated with the two perpendicular oscillations of the string with its
endpoints fixed on D-branes, similar to the directions of the electric and
magnetic fields in a photon. Hence, photons and gluons can be identified with
open strings as spin-1 particles. The clockwise and anti-clockwise movement of
closed strings makes it possible to classify them among spin-2 bosons such as
the graviton. Since QCD involves color fields, to study it under string theory
we assume that the endpoints of the open strings on a D-brane serve as the
source and sink of the color charge Bigazzi .
It has been conjectured that open bosonic strings studied at a tachyonic
vacuum behave as if they are closed strings with no D-branes. However, soliton
solutions in this region point to the presence of lower dimensional branes
Sent2 ; Bardakci . This projection is also corroborated in superstring
theories Sent1 ; Sen1 and evident in the first Recknagel ; Sen2 ; Harvey and
second Kostelecky ; Sen ; Zwiebach2 ; Kostelecky1 ; Berkovits ; Harvey1 ;
Gopakumar ; Gerasimov quantizations in string theories Sent4 . At the
tachyonic vacuum, the negative energy density $V(T_{0})$ of the tachyons in
the vacuum exactly cancels Witten1 the energy density of the D-branes
$\varepsilon_{p}$, i.e. $V(T_{0})+\varepsilon_{p}=0$. In non-BPS D-branes
$\varepsilon_{p}=\tau_{p}$, where $\tau_{p}$ is the D-brane tension Lindstrom ;
Gustafsson ; Sent3 . It should be noted that at $|T|=T_{0}$ the total energy
density of the tachyons and the brane tension vanishes identically, signifying
the absence of open strings at that point. In this view, at a tachyonic
vacuum there will be no physical open string excitations because there are no
D-branes. On the contrary, the 'usual' field theory gives an alternative
explanation: shifting the vacuum and expanding around its 'true' minimum can
change the negative square mass of the tachyons to a positive one, even though
it does not completely remove all the states. Since it costs no energy to
adjust the fundamental strings on the worldvolume of the D-branes at a
tachyonic vacuum, it is difficult to notice their presence. Hence,
fluctuations around the vacuum represent the presence of lower dimensional
D-branes Sent2 ; Bardakci . These phenomena have been investigated in
references Sen ; Zwiebach2 ; Kostelecky using open string field theory
Witten2 .
In this paper, we modify the Dirac-Born-Infeld (DBI) action so that the
associated open strings with their endpoints on the remaining lower-
dimensional branes at the tachyonic vacuum can be analyzed. The objective is
to develop models that mimic QCD theory in both the UV and IR regions with
UV safety. The models can be applied in developing potential models such as
the linear confining and Cornell potential models. In the analysis, we
consider that the string worldsheet falls inside the D-brane worldvolume,
making it easy for the endpoints of the strings to be connected by the flux
line on the worldvolume to form a closed string suitable for modeling color
confinement in a flux-tube-like picture. The dynamics of the strings
tangential to the D-brane worldvolume is represented by the gauge field
$F_{\mu\nu}$ and the component transverse to the worldvolume is represented by
a massless scalar field $X^{a}$. The net flux involved is determined by the
source and sink of the flux carried by the endpoints of the string on the
lower dimensional remnants of the original D-brane in the vacuum. Also, the
condition for minimum energy warrants that the flux does not spread, because
the source and sink of the flux emanate from a pointlike object on the D-brane
worldvolume.
The paper is organized as follows: In Sec. II we review Tachyons and D-branes
in two subsections: in Sec. II.1 we review Tachyons, and in Sec. II.2 we
review D-branes. In Sec. III we present the Modification of the Dirac-Born-
Infeld Action, which serves as the basis for the study. In subsection III.1 we
review the dimensionally reduced $\text{U}(1)$ Yang-Mills theory and its
intended consequences in relation to the Standard Model (SM) of Particle
Physics. Section IV and subsequent sections contain our original contributions
to the subject. We present the Bosonic String at the Tachyonic Vacuum in Sec.
IV, where we give the details of the dimensional reduction from the $10D$
Dirac-Born-Infeld action to $4D$, conducive for describing SM particles. We
present Gauge Theories Modified with $G(\phi)$ in Sec. V, which is divided
into two parts: we study Fermions Coupled to Fundamental Strings at the
Tachyonic Vacuum in Sec. V.1 and Non-Abelian Gauge Theory in Sec. V.2. Section
VI contains the Phenomenon of Gluon Confinement, divided into four
subsections: we present The Model in Sec. VI.1, Confining Potentials in Sec.
VI.2, Gluon Condensation in Sec. VI.3, and the strong Running Coupling and QCD
$\beta$-function in Sec. VI.4. We present our findings and Conclusions in Sec.
VII.
## II Review of Tachyons and D$p$-branes
### II.1 Tachyons
Generally, tachyons are classified as particles that travel faster than light,
or weakly interacting superluminal particles. Relativistically, the single
particle energy is expressed as $E^{2}=p^{2}c^{2}+m^{2}c^{4}$, where $p$ is
the spatial momentum and $m$ is the mass of the particle. For a particle to be
faster than light, the relativistic velocity must satisfy $\beta=pc/E>1$.
Hence, for tachyons with real $p$, $m$ must necessarily be imaginary Bilaniuk .
However, this analogy does not make a strong and convincing case for the
tachyons. Rather, Quantum Field Theory (QFT) provides a satisfactory
explanation of the dynamics of tachyons. QFT suggests that particles that
travel faster than light do not exist in nature, so tachyons are simply
unstable particles that decay. Based on this understanding, we consider a
scalar field, say $\phi$, with the usual kinetic term and a potential
$V(\phi)$ whose extremum is at the origin. If one carries out perturbative
quantization about $\phi=0$ up to the quadratic term and ignores the higher
order terms in the action, we obtain a particle-like state with square mass
$V^{\prime\prime}(0)$. These results have two interesting interpretations: if
$V^{\prime\prime}(0)>0$ we have a particle of square mass $m^{2}_{\phi}$, but
for $V^{\prime\prime}(0)<0$ we have a tachyonic state with a negative
$m^{2}_{\phi}$. In this case, tachyons can be given a physical meaning. So
far, we know that the tachyons have negative $m^{2}_{\phi}$ and a potential
whose maximum is at the origin; thus a small displacement of $\phi$ from the
origin will cause it to grow exponentially with time towards the true minimum.
By the above description, tachyons can be represented by a potential such as
$V(\phi)=-\dfrac{1}{2}m_{\phi}^{2}\phi^{2}+c_{3}\phi^{3}+c_{4}\phi^{4}+\cdots,$
(1)
where $c_{3}\,\text{and}\,c_{4}$ are constants. Hence, tachyonic fields are
associated with the 'usual' Higgs field, whose particle has a negative square
mass when the potential is expanded about the origin rather than about the
true minimum. Accordingly, tachyons in QFT are related to an instability that
breaks down the perturbation theory of normal field theory. The usual
quantization, where the cubic and higher order terms are considered small
corrections to the quadratic term, is no longer tenable. Since $V(\phi)$ has
its maximum at $\phi=0$, it is classically unstable at that point, so one
cannot guarantee that the fluctuation at that point is small. This behavior
comes with an inbuilt solution: one can expand the potential about the true
minimum $\phi_{0}$ up to the quadratic term and proceed with the perturbative
quantization about that point. Thus, the cubic and higher-order terms in the
expansion can be discarded. This leads to a particle with a positive square
mass $m^{2}_{\phi}$ in the spectrum and removes the tachyonic modes.
Additionally, in D$p$-brane systems the theory must be invariant under the
$Z_{2}$ symmetry $\phi\rightarrow-\phi$, and in the presence of a brane-anti-
brane pair the theory must necessarily be invariant under the phase symmetry
$\phi\rightarrow e^{i\alpha}\phi$. Instead of working with the tachyonic mode,
we can define a new field, $\eta=\phi-\phi_{0}$, and express the potential in
terms of the new field,
$V(\eta)=V(\eta+\phi_{0}).$ (2)
Working with this potential from the outset removes the tachyonic mode from
the spectrum because $V^{\prime\prime}(\eta=0)$ will be positive. However,
there are some benefits to working with tachyonic fields; for instance, they
possess high symmetry. This symmetry might not be as explicit in $V(\eta)$ as
it is in $V(\phi)$. The high symmetry of tachyonic fields leads to the
phenomenon of spontaneous symmetry breaking, where the potential has more than
one minimum, i.e. $V(\phi)=V(-\phi)$ with minima at $\pm\phi_{0}$. This
phenomenon is well known in elementary particle physics Sent3 .
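The mass-shift mechanism described above is easy to check symbolically. The
short sketch below, assuming an illustrative $Z_{2}$-symmetric quartic
truncation of Eq. (1) with $c_{3}=0$, locates the true minimum $\phi_{0}$ and
confirms that the expansion around it carries a positive square mass:

```python
import sympy as sp

phi, m, c4 = sp.symbols('phi m c4', positive=True)

# Z2-symmetric truncation of the tachyonic potential (c3 = 0):
# V''(0) = -m^2 < 0, so phi = 0 is a tachyonic (unstable) extremum.
V = -sp.Rational(1, 2) * m**2 * phi**2 + c4 * phi**4

# Locate the true minimum phi_0 > 0.
candidates = sp.solve(sp.diff(V, phi), phi)
phi0 = [s for s in candidates if s.is_positive][0]
assert sp.simplify(phi0 - m / (2 * sp.sqrt(c4))) == 0

# Shifted field eta = phi - phi_0: expand V(eta + phi_0).
eta = sp.symbols('eta')
V_eta = sp.expand(V.subs(phi, eta + phi0))

# The quadratic coefficient (square mass) is now positive: m_eta^2 = 2 m^2.
m_eta_sq = sp.diff(V_eta, eta, 2).subs(eta, 0)
assert sp.simplify(m_eta_sq - 2 * m**2) == 0
print(m_eta_sq)  # 2*m**2
```

The same expansion with $c_{3}\neq 0$ works identically; only the location of
$\phi_{0}$ changes.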
### II.2 D$p$-branes
Recent advances in string theory have provided some explanations regarding
nonperturbative features of QCD theory, thanks to the discovery of
D$p$-branes. The study of D$p$-branes gives insight into physically relevant
systems such as black holes, supersymmetric gauge theories, and the connection
between Yang-Mills theories and quantum gravity. Despite the numerous
advances, a consistent nonperturbative background-independent construction of
the theory is yet to be developed. This poses a challenge in directly
addressing cosmological problems. There are five known ways in which
supersymmetric closed strings can be quantized, i.e. the IIA, IIB, I,
heterotic $\text{SO}(32)$ and heterotic $E_{8}\times E_{8}$ superstring
theories. Individually, they give a perturbative description of quantum
gravity. However, they are connected through duality symmetries Hull ; Witten .
Indeed, some of these dualities are nonperturbative in nature because the
string coupling $g_{s}$ in one theory may have an inverse relation $1/g_{s}$
with that of another. The superstring theories, together with M-theory (a
theory that unifies the superstring theories), contain extended objects of
higher dimensions, called D$p$-branes. Each of these theories contains branes,
but in different complements. Particularly, IIA/IIB superstring theories
contain even/odd dimensional D$p$-brane configurations. Using the appropriate
duality transformations, one brane can be mapped onto another, even with the
strings connecting them. Thus, none of the branes is seen to be more
fundamental than the others. This is commonly referred to as 'brane
democracy'.
Before moving to the specific kind of strings set out for this study, we shed
light on the features of open and closed strings. Closed strings are
topologically the same as a circle $S^{1}$, i.e. $[0,2\pi]$ with endpoints
identified. Upon quantization, they give rise to a set of massless spacetime
fields, identified as the graviton $g_{\mu\nu}$, the dilaton $\varphi$ and the
antisymmetric two-form $B_{\mu\nu}$, together with an infinite set of massive
fields. Supersymmetric closed strings are also related to massless fields
identified with graviton supermultiplets, such as the Ramond-Ramond $p$-form
fields $A^{(p)}_{\mu,\cdots,\mu_{p}}$ and the gravitino $\psi_{\mu\alpha}$.
Consequently, the quantum theory of closed strings is naturally related to the
theory of gravity in spacetime. On the other hand, open strings are
topologically similar to the interval $[0,\pi]$. They give rise to a massless
gauge field $A_{\mu}$ in spacetime upon quantization. Supersymmetric open
strings are also associated with the massless gaugino $\psi_{\alpha}$.
Accordingly, open strings have their ends on Dirichlet $p$-branes
(D$p$-branes) and the gauge field lives on the worldvolume of the D$p$-brane.
Despite these differences, the physics of closed and open strings is related
at the quantum level. Closed strings were first observed through the one-loop
process of an open string, where they appeared as poles in nonplanar one-loop
open string diagrams Gross1 ; Lovelace . Open strings have their endpoints on
the D$p$-branes, whilst closed strings have no endpoints. The degrees of
freedom of the open strings are associated with standing wave modes of the
fields, while the closed strings correspond to left-moving and right-moving
waves. The boundary conditions for open strings on the bosonic field $X^{M}$
are the Neumann (freely moving endpoints) and Dirichlet (fixed endpoints)
boundary conditions. Closed strings, on the other hand, correspond to periodic
and anti-periodic boundary conditions.
Some D$p$-branes have unstable configurations, in both the supersymmetric and
the nonsupersymmetric string theories. The instability is attributed to
tachyonic modes with negative square mass $m^{2}_{\phi}\,<\,0$ in the open
string spectrum; we are interested in investigating the effects of these
tachyonic modes Sen ; Zwiebach . Some D$p$-brane configurations with open
strings containing tachyons are:
Brane-antibrane: a configuration of type IIA or IIB string theory with
parallel D$p$-branes separated by $d\,<\,l_{s}$ ($l_{s}$ is the string length
scale). They also carry tachyons in their open string spectrum. The difference
in their orientation leads to opposite Ramond-Ramond (R-R) charges Hughes . So
the brane and anti-brane pair can annihilate, leaving a neutral vacuum,
because the net R-R charge is zero.
Wrong-dimension branes: D$p$-branes of the wrong dimension for type IIA/IIB,
with odd/even spatial dimension $p$ instead of even/odd, carry no charges
under the classical IIA/IIB supergravity fields. They have tachyons in their
open string spectrum. Such branes can annihilate to form a vacuum without
violating charge conservation.
Bosonic D$p$-branes: just as the wrong-dimension branes of type IIA/IIB string
theory, D$p$-branes of any particular dimension in the bosonic string theory
have no conserved charge and have tachyons in their open string spectrum. They
too can annihilate to form a neutral vacuum without violating charge
conservation.
Again, even though the non-BPS D$p$-branes of type IIA/IIB string theory are
unstable, owing to the presence of tachyonic modes on their worldvolume Sen1 ,
we can obtain stable non-BPS branes by taking orientifolds/orbifolds of the
theory, which project out the tachyonic modes Sen1 ; Bergman ; Sen2 .
## III Modification of Dirac-Born-Infeld Action
We begin the study with the Born-Infeld (BI) action Born , sometimes referred
to as the Dirac-Born-Infeld (DBI) action Gibbons ; Dirac . The focus will be
on D$p$-branes Polchinski ; Taylora , which are nonperturbative states on
which open strings live. They are equally coupled to closed strings,
Ramond-Ramond states and other massive fields. The nonperturbative nature of
the action makes it possible to describe the low energy degrees of freedom of
the D$p$-branes Leigh , allowing application to low energy QCD. The
distinction between the DBI action and other $p$-brane and supermembrane
theories Hughes ; Bergshoeff ; Townsend is the presence of a gauge field on
the DBI worldvolume. The gauge field is associated with virtual open string
states. Consequently, we obtain confinement of the fundamental strings with
their endpoints fixed on the branes. Generally, the action can be expressed as
$S=-T_{p}\int d^{p+1}\xi
e^{-\varphi}\sqrt{-\text{det}\left(G_{\mu\nu}+B_{\mu\nu}+(2\pi\alpha^{\prime})F_{\mu\nu}\right)}+S_{CS}+\text{fermions},$
(3)
where $G_{\mu\nu}$, $B_{\mu\nu}$ and $\varphi$ are the metric tensor,
antisymmetric tensor, and dilaton field induced on the D$p$-brane worldvolume,
respectively. Also, $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$
is the worldvolume electromagnetic field strength of $A_{\mu}$ and $S_{CS}$ is
a set of Chern-Simons terms, while
$\tau_{p}=\dfrac{T_{p}}{g_{s}}=\dfrac{1}{g_{s}\sqrt{\alpha^{\prime}}(2\pi\sqrt{\alpha^{\prime}})^{p}},$
(4)
is the brane tension and $g_{s}=e^{\langle\varphi\rangle}$ is the string
coupling, with associated string tension
$T_{\text{string}}=\dfrac{1}{2\pi\alpha^{\prime}}.$ (5)
In type IIA/IIB string theory, D$p$-branes with $p$ even/odd are associated
with quantized open strings containing the massless fields
$A_{\mu},\,\mu=0,1,\cdots,p$ and $X^{M},\,M=p+1,\cdots,9$. These fields are
the consequence of the gauge field living on the hypersurface and of the
transverse excitations $X^{\mu}(\xi)$. The geometry of a D$p$-brane is not
flat, so we generally define the embedding $X^{\mu}(\xi)$, where $\xi^{\alpha}$
represent the $p+1$ coordinates on the D$p$-brane worldvolume $\sum_{(p+1)}$,
and $X^{\mu}$ are the ten functions mapping $\sum_{(p+1)}$ into the spacetime
manifold $\mathbb{R}^{9,1}$. Introducing the scalar fields into the action, it
becomes invariant under diffeomorphisms and Abelian gauge transformations. So,
a way of fixing the freedom of the former is to adopt a 'static gauge' such
that
$\displaystyle X^{\mu}\equiv\xi^{\mu}\qquad\text{for}\qquad 0\,\leq\,\mu\leq
p.$ (6)
The remaining fields are
$X^{\mu}\equiv X^{M}\qquad\text{for}\qquad(p+1)\,\leq\,M\,\leq d,$ (7)
which are the coordinates transverse to the worldvolume Tseytlin ; Taylor ;
Taylor1 , where $d$ is the number of spatial dimensions. Thus, one can choose
$d=9$ for superstring theory and $d=25$ for bosonic string theory. Under this
gauge, the originally $d+1$-dimensional global Poincaré symmetry spontaneously
breaks down to the product of the $p+1$ dimensional Poincaré group with the
$d-p$ dimensional rotational symmetry group, i.e.
$\text{SO}(1,d)\rightarrow\text{SO}(1,p)\times\text{SO}(d-p)$. A D$p$-brane
with $p<d$ extends over a $p$-dimensional subspace of the $d$-dimensional
space. The focus will be on D$p$-branes that are $p$-dimensional hyperplanes
in $d$-dimensional space. Again, according to the static gauge fixed above,
there are two possible consistent truncations:
* •
$X^{M}=0$; corresponds to pure BI theory Born in $\mathbb{E}^{p,1}$ and
* •
$F_{\mu\nu}=0$; also corresponds to Dirac’s theory Dirac of minimal timelike
submanifolds of $\mathbb{E}^{d,1}$.
We can introduce the transverse scalar fluctuations by defining the induced
metric as
$G_{\mu\nu}\approx\eta_{\mu\nu}+\eta_{MN}\partial_{\mu}X^{M}\partial_{\nu}X^{N},$
(8)
which approximates the D$p$-brane worldvolume as nearly flat. In this study,
we are interested in understanding the dynamics of bosonic open strings with
tachyonic modes in their spectrum, so we will proceed systematically toward
that objective. The worldvolume theory of a non-BPS D$p$-brane in IIA/IIB
string theory corresponds to a massless $\text{U}(1)$ vector field, a
transversely oscillating scalar field $X^{M}$ and a tachyonic field $T$ Sent .
Accordingly, the leading order of the action is the dimensional reduction of
$10$-dimensional $\text{U}(1)$ Yang-Mills theory. Besides, higher-order
corrections in $\alpha^{\prime}=l_{s}^{2}$, of the order of the string scale,
are also possible. We proceed with the assumption that the massless fields are
slowly varying compared to the string length $l_{s}$, i.e. we discard the
higher order derivatives and write the DBI action Leigh ; Garousi ; Callan in
a simple form without the explicit presence of the tachyons. Therefore, Eq.(3)
takes the form
$S=-T_{p}\int d^{p+1}\xi
e^{-\varphi}\sqrt{-\text{det}\left(\eta_{\mu\nu}+\eta_{MN}\partial_{\mu}X^{M}\partial_{\nu}X^{N}+B_{\mu\nu}+(2\pi\alpha^{\prime})F_{\mu\nu}\right)}.$
(9)
Now we will express the extended form of the DBI action by including the
dynamics of the tachyons, as studied in Garousi . Accordingly,
$S=-T_{p}\int d^{p+1}\xi
e^{-\varphi}V(T)\sqrt{-\text{det}\left(\eta_{\mu\nu}+\partial_{\mu}T\partial_{\nu}T+\eta_{MN}\partial_{\mu}X^{M}\partial_{\nu}X^{N}+B_{\mu\nu}+(2\pi\alpha^{\prime})F_{\mu\nu}\right)},$
(10)
where $V(T)$ is the tachyon potential and $\partial_{\mu}T\partial_{\nu}T$ is,
as usual, the kinetic term of the tachyons Garousi1 . Under this conjecture,
the action vanishes at the minimum of the tachyon potential, $V(T_{0})=0$
Sent2 ; Sent1 ; Recknagel . We will now continue the discussion by
approximating the D$p$-branes as nearly flat, with a constant dilaton and a
vanishing antisymmetric two-form $B_{\mu\nu}$, to remove the closed string
quanta from the system.
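As a concrete check of the pullback formula behind Eq. (8), the sketch below
computes the exact induced metric
$G_{\mu\nu}=\eta_{MN}\partial_{\mu}X^{M}\partial_{\nu}X^{N}$ for a toy
static-gauge embedding of a $(1+1)$-dimensional worldvolume in flat
$(1+2)$-dimensional spacetime with a single transverse fluctuation. The
ambient dimension and the fluctuation $f$ are illustrative choices, not the
paper's setup; with a flat ambient metric the identity is exact, and Eq. (8)'s
"$\approx$" refers to the near-flatness of the brane itself:

```python
import sympy as sp

t, x = sp.symbols('t x')
xi = [t, x]

# Ambient flat metric in 1+2 dimensions, signature (+,-,-).
eta = sp.diag(1, -1, -1)

# Static gauge: X^0 = t, X^1 = x, plus one transverse fluctuation
# X^2 = f(t, x) (an illustrative choice).
f = sp.Function('f')(t, x)
X = sp.Matrix([t, x, f])

# Exact pullback (induced) metric: G_{mu nu} = eta_{MN} d_mu X^M d_nu X^N.
J = X.jacobian(xi)              # 3x2 matrix of partial derivatives
G = sp.simplify(J.T * eta * J)

# Eq. (8): G = eta_{(1+1)} + eta_{22} dX dX, with eta_22 = -1 here.
eta2 = sp.diag(1, -1)
grad = sp.Matrix([sp.diff(f, t), sp.diff(f, x)])
assert sp.simplify(G - (eta2 - grad * grad.T)) == sp.zeros(2, 2)
```

Setting $f=0$ recovers the flat worldvolume metric, matching the truncation
$X^{M}=0$ listed above.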
### III.1 Dimensional Reduction of $\text{U}(1)$ Yang-Mills Theory
Studying D$p$-branes under Yang-Mills theory enables us to understand the
physics of D$p$-branes without necessarily applying any complex string theory
machinery. Detailed analyses show there is enough evidence that super Yang-
Mills theory carries more information concerning string theory than one might
imagine Banks . Besides, recent developments in high-energy physics have shown
that string theory gives insight into low-energy field theories in the
nonperturbative region Taylor . This has been conjectured to be equivalent to
QCD, where confinement can be realized in a color flux-tube picture. We start
from Eq.(9) in the limit of vanishing $B_{\mu\nu}$, further assuming that the
D$p$-branes are almost flat and close to the hypersurface, i.e. $X^{M}=0$ for
$M>p$. Again, we suppose that the fields are slowly varying, such that
$\partial_{\mu}X^{M}\partial^{\mu}X_{M}$ and $2\pi\alpha^{\prime}F_{\mu\nu}$
are of the same order. Therefore, the action can be expanded as
$S=-\tau_{p}V_{p}-\dfrac{1}{4g^{2}_{YM}}\int
d^{p+1}\xi\left(F_{\mu\nu}F^{\mu\nu}+\dfrac{2}{(2\pi\alpha^{\prime})^{2}}\partial_{\mu}X^{M}\partial^{\mu}X_{M}\right)+{{\cal
O}((\partial_{\mu}X^{M})^{4},F^{4})}.$ (11)
Here, $V_{p}$ is the $p$-brane worldvolume and $g_{YM}$ is the Yang-Mills
coupling given by
$g_{YM}^{2}=\dfrac{1}{(2\pi\alpha^{\prime})^{2}\tau_{p}}=\dfrac{g_{s}}{\sqrt{\alpha^{\prime}}}\left(2\pi\sqrt{\alpha^{\prime}}\right)^{p-2}.$
(12)
The second term in Eq.(11) corresponds to a $\text{U}(1)$ gauge theory in
$p+1$ dimensions coupled to $9-p$ scalar fields. Introducing fermion fields,
as mentioned below Eq.(3), into the action, we recover the supersymmetric
$\text{U}(1)$ Yang-Mills theory, the $10\,d$ $\mathcal{N}=1$ super Yang-Mills
action,
$S=\dfrac{1}{g_{YM}^{2}}\int
d^{10}\xi\left(-\dfrac{1}{4}F_{\mu\nu}F^{\mu\nu}+\dfrac{i}{2}\bar{\psi}\Gamma^{\mu}\partial_{\mu}\psi\right).$
(13)
In the case of N parallel D$p$-branes, the $p$-dimensional branes must be
distinctly labeled from $1\,\text{to}\,\text{N}$. The massless fields living
on the individual D$p$-brane worldvolumes are then associated with a
$\text{U}(1)^{N}$ gauge group. The fields arising from the strings stretching
from one brane to another are labeled $A^{\mu}_{i,j}$, where $i,j$ specify the
individual branes that carry the endpoints of the strings. The strings are
oriented, so they comprise $N(N-1)$ fields, corresponding to the individual
fields $A_{\alpha}=(A_{\mu},\,X^{M})$. The mass of a string is proportional to
the separation distance between the branes, so the strings become massless
Witten1 ; Taylora when the D$p$-branes get very close to each other. Hence,
the open strings transform under the adjoint representation of U(N). Thus, the
corresponding fields can be decomposed similarly to an adjoint supersymmetric
gauge field of $\text{U(N)}=\text{SU}(N)\times\text{U}(1)$ in $p+1$
dimensions. With stacks of D$p$-branes crossing each other at some angles, one
can also break the U(N) symmetry down to the SM gauge group of particle
physics, i.e. $\text{SU}(3)\times\text{SU}(2)\times\text{U}(1)$ Lust . In sum,
the DBI action can be generalized to a non-Abelian gauge group by considering
stacks of D$p$-branes instead of a single D$p$-brane.
A major motivation for a string dual of QCD derives from the 't Hooft
large-$N_{c}$ limit Hoofta2 ; Witten4 . Although $\text{SU}(N_{c})$ and
$\text{U}(N_{c})$ have different representations, in the limit
$N_{c}\rightarrow\infty$ the difference can be overlooked. Thus, the number
of gluons can be approximated as $\sim N_{c}^{2}$. This exceeds the quark
degrees of freedom $N_{f}N_{c}$; therefore, we expect the dynamics of the
gluons to dominate in this regime Mateos .
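The counting behind this dominance is elementary. The sketch below, with
illustrative values of $N_{c}$ and $N_{f}$, shows the gluon-to-quark
degree-of-freedom ratio growing like $N_{c}/N_{f}$:

```python
# 't Hooft counting (illustrative): SU(N_c) has N_c^2 - 1 gluons,
# approximated as N_c^2 at large N_c; quarks carry N_f * N_c color-flavor
# states.
def gluon_dof(Nc):
    return Nc**2 - 1

def quark_dof(Nc, Nf):
    return Nc * Nf

Nf = 3  # illustrative number of light flavors
for Nc in (3, 30, 300):
    ratio = gluon_dof(Nc) / quark_dof(Nc, Nf)
    print(Nc, round(ratio, 2))  # ratio approaches Nc / Nf at large Nc
```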
## IV Bosonic Strings at Tachyonic Vacuum
Generally, there are two known boundary conditions associated with open
strings. The Dirichlet (fixed) boundary condition applies to the coordinates
normal to the brane, and the Neumann (free) boundary condition applies to the
coordinates parallel to the brane Dai ; Leigh ; Polchinski1 . It has been
established Callan1 that a small disturbance normal to the string and the
brane is likely to be reflected back with a phase shift of $\pi$,
corresponding to a Dirichlet boundary condition. Studies in this regard have
been carried out in Rey ; Lee using the Nambu-Goto action for strings with
their endpoints on a supergravity background of D$3$-branes. Since the strings
attached to the $3$-branes manifest themselves as electric charges, the
Neumann boundary condition, where the endpoints of the strings oscillate
freely, leads to the production of electromagnetic dipole radiation in the
asymptotic outer area of the brane.
In this section, we consider Eq.(10), which includes the dynamics of tachyons
on open strings. Accordingly, we keep all the arguments made for the
$10$-dimensional DBI action, but we reduce the spacetime dimension from $10$
to $4$. One of the major differences between superstring theory and particle
theory is that the former lives in $10$-dimensional spacetime and the latter
in $4$-dimensional spacetime. Nonetheless, this discrepancy can be dealt with
using a compactification scheme Gell-Mann ; Guendelman ; Randall , where the
spacetime is divided into an external non-compact spacetime
$\mathcal{M}_{10-d}$ and an internal compact spacetime $\mathcal{M}_{d}$. We
can combine these two factors into a single expression as
$\mathcal{M}_{10}=\mathcal{M}_{10-d}\times\mathcal{M}_{d}.$ (14)
We adopt the physically realistic case $d=6$. Additionally, we can set the
compactification scale $M_{c}=1/R$, where $R$ is the radius of the internal
compact spacetime, smaller than the string mass $M_{s}=1/l_{s}$ Grana ;
Shiraishi . As a result, the energies $E$ considered in this work must lie in
the range $E\ll M_{c}\ll M_{s}$. So, the worldvolume coordinates become
$\xi=(x^{\mu},x^{6})$, where $\mu=0,1,2,3$. Consequently, we reduce the theory
to $1+3$ dimensions with the metric signature
$\eta_{\mu\nu}=\text{diag}(+,-,-,-)$. We also decouple the $6$ available
transverse fluctuating scalar fields and the antisymmetric tensor, i.e.
$X^{M}=B_{\mu\nu}=0$. Keeping the dilaton field constant, we get
$\displaystyle S$ $\displaystyle=-\tau_{p}\int
d^{4}xV(T)\sqrt{-\text{det}\left(\eta_{\mu\nu}+\partial_{\mu}T\partial_{\nu}T+(2\pi\alpha^{\prime})F_{\mu\nu}\right)}$
$\displaystyle=-\tau_{p}\int
d^{4}xV(T)\sqrt{1-\eta^{\mu\nu}\partial_{\mu}T\partial_{\nu}T+\dfrac{1}{2}(2\pi\alpha^{\prime})^{2}F_{\mu\nu}F^{\mu\nu}+\cdots}$
$\displaystyle=-\tau_{p}\int
d^{4}xV(T)\left[1-\dfrac{1}{2}\partial_{\mu}T\partial^{\mu}T+\dfrac{1}{4}(2\pi\alpha^{\prime})^{2}F_{\mu\nu}F^{\mu\nu}+\cdots\right].$
(15)
In the above expression we expand the determinant up to second order in
derivatives, discarding higher-order corrections. In the field theory at the
tachyonic vacuum, the D-branes do not vanish completely; rather, lower
dimensional D-branes remain. The endpoints of the strings sitting
on the lower dimensional D-branes behave like point-like particles which serve
as the source and sink of the flux carried by the color particles. For instance,
expanding around the ’true’ minimum of the potential removes the tachyons,
leaving a particle with a positive squared mass.
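The square-root expansion used in Eq.(15) can be checked symbolically. The following sketch (not part of the original derivation) uses sympy; $k$ stands for $\eta^{\mu\nu}\partial_{\mu}T\partial_{\nu}T$, $f$ for $F_{\mu\nu}F^{\mu\nu}$, $c$ for $2\pi\alpha^{\prime}$, and $\epsilon$ is a formal bookkeeping parameter we introduce to count powers of the fluctuations:

```python
import sympy as sp

# Check the expansion step in Eq.(15): sqrt(1 + x) ~ 1 + x/2 with
# x = -dT.dT + (1/2)(2 pi alpha')^2 F.F, keeping terms to second order.
k, f, c, eps = sp.symbols('k f c epsilon', positive=True)

inside = 1 - eps*k + sp.Rational(1, 2)*c**2*eps*f
expansion = sp.sqrt(inside).series(eps, 0, 2).removeO()

# expected quadratic Lagrangian density (up to the -tau_p V(T) prefactor)
expected = 1 - sp.Rational(1, 2)*eps*k + sp.Rational(1, 4)*c**2*eps*f
assert sp.simplify(expansion - expected) == 0
```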
We consider a field configuration such that $T(r)=f(\phi(r))$; in this case,
the potential can be expressed as
$V(T(\phi))=\left(\dfrac{\partial\phi}{\partial T}\right)^{2},$ (16)
so
$\displaystyle\dfrac{1}{2}V(T)\partial_{\mu}T\partial^{\mu}T=\dfrac{1}{2}\left(\dfrac{\partial\phi}{\partial
T}\dfrac{\partial T}{\partial
x}\right)^{2}=\dfrac{1}{2}\partial^{\mu}\phi\partial_{\mu}\phi.$ (17)
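The reduction in Eq.(17) is a chain-rule identity; a quick symbolic sketch (one spatial variable for brevity):

```python
import sympy as sp

# With V(T) = (dphi/dT)^2, Eq.(16), the kinetic term (1/2) V(T) (dT/dx)^2
# collapses to the canonical (1/2) (dphi/dx)^2, as in Eq.(17).
x = sp.symbols('x')
T = sp.Function('T')(x)
phi = sp.Function('phi')(T)            # phi = phi(T(x))

V = sp.diff(phi, T)**2                 # Eq.(16)
lhs = sp.Rational(1, 2) * V * sp.diff(T, x)**2
rhs = sp.Rational(1, 2) * sp.diff(phi, x)**2   # chain rule applied by sympy
assert sp.simplify(lhs - rhs) == 0
```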
As a result, the Lagrangian of the system becomes,
$\tau^{-1}_{p}\mathcal{L}=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-V(\phi)-\frac{1}{4}G(\phi)F_{\mu\nu}F^{\mu\nu},$
(18)
here, we have introduced the dimensionless quantity
$G(\phi)=(2\pi\alpha^{\prime})^{2}V(\phi)$, which will subsequently be referred
to as the color dielectric function. If we set $(2\pi\alpha^{\prime})=1$
the string tension becomes $T_{string}=1$ and $G(\phi)=V(\phi)$ Brito ; Adamu
; Issifu ; Issifu1 ; Issifu2 . It is important to mention at this point that
the potential of the field, $\phi$, follows the same discussion as contained
in Sec. II.1. It has its minimum at $V(\phi=\langle\phi\rangle_{0})=0$, where
$\langle\phi\rangle_{0}$ is the ’true’ vacuum of the potential. To apply this
theory to asymptotically free systems, the potential must satisfy additional
conditions,
$V(\phi=\langle\phi\rangle_{0})=0,\qquad\dfrac{\partial
V}{\partial\phi}\bigg{|}_{\phi=\langle\phi\rangle_{0}}=0\qquad{\text{and}}\qquad\dfrac{\partial
V}{\partial\phi}\bigg{|}_{\phi=0}=0,$ (19)
necessary for stabilizing its vacuum Rosina ; Adamu ; Kharzeev . These
restrictions make it possible to apply Eq.(18) in investigating asymptotically
free particles such as gluons in a QCD-like fashion. Here, the modified
Abelian gauge mimics the role of the non-Abelian gauge in the ’usual’ QCD
theory. We treat the endpoints of the open string on the lower dimensional
branes as the source of quark and an anti-quark (if fermions are present) or
valence gluons and the string connecting them as gluons that mediate the
interactions. Also, the potential $V(\phi)$ plays a similar role as a quantum
correction in gluodynamics theory that breaks the conformal symmetry to bring
about gluon condensation Kharzeev ; Gaete ; Issifu1 . This model has been used
to study the color confinement of glueballs at a finite temperature in
Ref.Issifu .
Considering brane-anti-brane systems, we note that the model must be invariant
under the global rotation $\phi\rightarrow e^{i\alpha}\phi$. This warrants the
introduction of a complex scalar field $\phi$ with potential $V(|\phi|)$ Sen3
. So, we can redefine the scalar field in the form,
$\phi=\dfrac{\phi_{1}+i\phi_{2}}{\sqrt{2}},$ (20)
to incorporate the D$p$-brane and anti-D$p$-brane dynamics. Supposing that the
original gauge field $F_{\mu\nu}$ is on the worldvolume of the D$p$-brane
represented by a complex scalar field $\phi$, we will have a dual gauge field
$\tilde{F}_{\mu\nu}$ also on the worldvolume of the anti-D$p$-brane represented
by the conjugate field $\phi^{*}$. Therefore, the string here has its
endpoints on remnants of D$p$-brane and the anti-D$p$-brane parallel to each
other. Imposing gauge invariance on the scalar sector, we can define an
Abelian covariant derivative
$\partial_{\mu}\rightarrow D_{\mu}\equiv\partial_{\mu}-iq\tilde{A}_{\mu},$
(21)
where $\tilde{A}_{\mu}$ is a gauge field which is dual to $A_{\mu}$.
Accordingly, the Lagrangian in Eq.(18), can be extended as
$\tau_{p}^{-1}\mathcal{L}=D_{\mu}\phi
D^{\mu}\phi^{*}-V(|\phi|)-\frac{1}{4}G(|\phi|)F_{\mu\nu}F^{\mu\nu}-\dfrac{1}{4}\tilde{F}_{\mu\nu}\tilde{F}^{\mu\nu},$
(22)
where $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ and
$\tilde{F}_{\mu\nu}=\partial_{\mu}\tilde{A}_{\nu}-\partial_{\nu}\tilde{A}_{\mu}$
are two independent Abelian field strengths. Aside from the original
$\text{U}(1)$ gauge invariance of the Lagrangian, it is also invariant under
the $\tilde{\text{U}}(1)$ gauge transformation,
$\phi(x)\rightarrow\phi^{\prime}(x)=e^{-iq\alpha(x)}\phi\qquad\text{and}\qquad\tilde{A}_{\mu}(x)\rightarrow\tilde{A}^{\prime}_{\mu}(x)=\tilde{A}_{\mu}(x)-\partial_{\mu}\alpha(x).$
(23)
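As a sanity check of Eqs.(21)-(23), one can verify symbolically (here in one dimension; a sketch, not part of the derivation) that the covariant derivative transforms covariantly, so $D_{\mu}\phi D^{\mu}\phi^{*}$ in Eq.(22) is gauge invariant:

```python
import sympy as sp

# Under phi -> exp(-i q alpha) phi and A -> A - d(alpha), Eq.(23),
# the covariant derivative of Eq.(21) obeys D'phi' = exp(-i q alpha) D phi.
x, q = sp.symbols('x q', real=True)
phi = sp.Function('phi')(x)
A = sp.Function('A')(x)                # stands for the dual gauge field
alpha = sp.Function('alpha')(x)

def D(field, gauge):
    return sp.diff(field, x) - sp.I*q*gauge*field

phi_p = sp.exp(-sp.I*q*alpha)*phi
A_p = A - sp.diff(alpha, x)

assert sp.simplify(D(phi_p, A_p) - sp.exp(-sp.I*q*alpha)*D(phi, A)) == 0
```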
This Lagrangian can undergo a spontaneous symmetry breaking (SSB) process
(when we choose an appropriate potential) similar to the usual Abelian Higgs
mechanism Quigg ; Collins1 . In this case, $\phi$ plays a role similar to the
’usual’ Higgs field in the standard model of particle physics. This process
will also lead to the observation of Goldstone bosons (most likely $\pi^{0}$ due
to the $\tilde{\text{U}}(1)$) corresponding to the number of broken
symmetries, signaling confinement as well Issifu ; Nielsen . This model has
been exploited in detail to study glueballs and color superconductivity in
Issifu2 . It is important to mention that Eq.(22) is valid when we consider a
D$p$-brane and an anti-D$p$-brane system in the framework of the DBI action
Erkal ; Senn . It is suitable for describing massless particles such as
glueballs, or gluon confinement.
## V Gauge Theories Modified with $G(\phi)$
### V.1 Fermions Coupled To Fundamental Strings at Tachyonic Vacuum
In this section, we combine Dirac’s Lagrangian for free particles with the
Maxwell sector that unifies the electric and magnetic field interactions. We
reintroduce the dynamics of the fermions, which were dropped in the previous
discussions as stated in Sec. III. They will be introduced here
through Dirac’s equation modified by $G(\phi)$, while taking into
consideration all gauge invariance properties. We start with the well-known
Dirac Lagrangian for free particles
$\mathcal{L}_{0}=\bar{\psi}\left(i\gamma^{\mu}\partial_{\mu}-m\right)\psi.$
(24)
Even though this Lagrangian is well known in QED, it is also used to describe
free nucleons in terms of their composite fermions, protons and neutrons, in
strong interactions. It is invariant under the global phase rotation,
$\psi\rightarrow\psi^{\prime}=e^{i\alpha}\psi.$ (25)
Since the fields in this Lagrangian are color neutral, applying them to
colored particles such as the ones considered here requires modifying the
fields. Hence, we dress the bispinors with $G(\phi)$ in order to give them
color features. Thus, we perform the transformations
$\psi\rightarrow\psi^{\prime}\equiv
G^{1/2}\psi\qquad\text{and}\qquad\bar{\psi}\rightarrow\bar{\psi}^{\prime}\equiv\bar{\psi}G^{1/2},$
(26)
Since $G$ is a function of position, the gradient in Dirac’s equation will
transform as
$\partial_{\mu}\psi\rightarrow\partial_{\mu}\psi^{\prime}=\left[(\partial_{\mu}G^{1/2})+G^{1/2}\partial_{\mu}\right]\psi,$
(27)
and Lagrangian (24) becomes
$\displaystyle\mathcal{L}$
$\displaystyle=\bar{\psi}^{\prime}\left[i\gamma^{\mu}\partial_{\mu}-m\right]\psi^{\prime}$
$\displaystyle=\bar{\psi}\left[i\gamma^{\mu}G\partial_{\mu}+i\gamma^{\mu}G^{1/2}(\partial_{\mu}G^{1/2})-mG\right]\psi.$
(28)
Local gauge invariance is violated by $G(\phi)$ and the gradient term
$(\partial_{\mu}G^{1/2})$. To restore it, we modify all variables involving
derivatives, rescale color-neutral quantities such as the Dirac
$\gamma$-matrices with $G(\phi)$, and introduce the electromagnetic field
$A_{\mu}(x)$ in order to make the equation gauge invariant. Consequently, we
adopt the transformations,
$\gamma^{\mu}\rightarrow\gamma^{\prime\mu}=G^{-1}\gamma^{\mu}\qquad{\text{and}}\qquad\partial_{\mu}\rightarrow
D_{\mu}\equiv\partial_{\mu}-iqA_{\mu},$ (29)
where $D_{\mu}$ is the gauge-covariant derivative and $q$ is the electric
charge. This type of derivative corresponds to the momentum transformation,
$p_{\mu}\rightarrow p_{\mu}-qA_{\mu}$. By this transformation, the gauge field
also gets modified and enters the Lagrangian as $GA_{\mu}$. It also introduces
a coupling between the electromagnetic field and matter in the form
$D_{\mu}\psi$. So Eq.(V.1) becomes
$\displaystyle\mathcal{L}$
$\displaystyle=\bar{\psi}\left[i\gamma^{\mu}\partial_{\mu}+i\gamma^{\mu}\partial_{\mu}(\ln
G^{1/2})+qA_{\mu}\gamma^{\mu}-mG\right]\psi$
$\displaystyle=\bar{\psi}\left[i\gamma^{\mu}\partial_{\mu}+q\gamma^{\mu}A_{\mu}-mG\right]\psi,$
(30)
consequently, the electromagnetic field transforms as
$A_{\mu}\rightarrow A^{\prime}_{\mu}\equiv
A_{\mu}+\dfrac{i}{q}\partial_{\mu}(\ln G^{1/2}).$ (31)
Equation (V.1) looks similar to Dirac’s Lagrangian with an interaction term
that couples the gauge field $A_{\mu}(x)$ to the conserved current
$j^{\mu}=q\bar{\psi}\gamma^{\mu}\psi$, with a modified mass term
$M(r)=mG(\phi)$. Also, $\bar{\psi}D_{\mu}\psi$ becomes invariant under local
phase rotations, granting an interaction between $\bar{\psi}$, $\psi$ and
$A_{\mu}$ with momentum $p_{\mu}\rightarrow i\partial_{\mu}$. In this way, the
Lagrangian has been modified while ensuring that all conservation laws are
duly respected.
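The bookkeeping behind Eqs.(30)-(31) can be checked with a short symbolic sketch (scalar, one-dimensional; $G$ is an arbitrary function here): the leftover term $i\partial_{\mu}(\ln G^{1/2})$ from Eq.(28) is exactly what the shift of $A_{\mu}$ in Eq.(31) supplies.

```python
import sympy as sp

# After gamma -> G^{-1} gamma, the extra piece in Eq.(28) becomes
# i (dG^{1/2})/G^{1/2} = i d(ln G^{1/2}); the redefinition
# A -> A + (i/q) d(ln G^{1/2}) of Eq.(31) absorbs it into q*A.
x, q = sp.symbols('x q', nonzero=True)
G = sp.Function('G')(x)
A = sp.Function('A')(x)

log_term = sp.diff(sp.log(sp.sqrt(G)), x)
# G^{-1/2} dG^{1/2} equals d(ln G^{1/2})
assert sp.simplify(sp.diff(sp.sqrt(G), x)/sp.sqrt(G) - log_term) == 0

A_prime = A + (sp.I/q)*log_term
# q*A' = q*A + i d(ln G^{1/2}): the leftover term is absorbed
assert sp.simplify(q*A_prime - (q*A + sp.I*log_term)) == 0
```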
Now, to derive a complete Lagrangian that can mimic QCD theory, we include the
kinetic energy term of the gauge field which has been derived in Eq.(18),
thus,
$\mathcal{L}=\bar{\psi}\left(i\gamma^{\mu}\partial_{\mu}+q\gamma^{\mu}A_{\mu}-mG(\phi)\right)\psi+\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-V(\phi)-\frac{1}{4}G(\phi)F_{\mu\nu}F^{\mu\nu}.$
(32)
This expression looks similar to the usual QED Lagrangian with $\phi$ as
intermediate field/particle and a mass term modified by color dielectric
function, $G(\phi)$. As a result, this expression approximates the non-Abelian
QCD theory with an Abelian one. This was motivated by the projection that the
confining regime of QCD is mostly Abelian dominated Hoofta1 ; Ezawa ; Shiba ;
Suzuki ; Sakumichi . The spinor fields $\psi$ and $\bar{\psi}$ represent the
quarks and antiquarks and the modified gauge field $A_{\mu}$ also describes
gluons. All the long-distance behavior of the gluons is absorbed in $G(\phi)$.
The observed mass of the system $M(r)=mG(\phi)$ is expected to have a fixed
value at the beginning of the interaction $r\rightarrow r_{0}$ and at the end
of the interaction $r\rightarrow r_{*}$, so it can be measured precisely. That
is, the dielectric function should be such that $G(\phi(r\rightarrow
r_{0}))=G(\phi(r\rightarrow r_{*}))=\text{constant}$, where $1/r_{0}$ is the
energy at the beginning of the interaction and $1/r_{*}$ is the energy at the
end of the interaction. Thus, $M(r)=mG(\phi)$ is the constituent quark mass
function of the system. We have presented a detailed study of this theory in
Refs. Issifu1 ; Adamu .
Since renormalization theory remains the systematic approach by which UV
divergences are resolved Neubert , we will compare the result with the
renormalized QED Lagrangian to properly identify the nature of the dielectric
function in terms of the renormalization factor $Z(\mu)$, where $\mu$ is a
scale that comes from the dimensional regularization scheme Bollini ; Hoofta .
The objective is to ensure that the result obtained in Eq.(32) does not pose
any UV divergences. When the real QED Lagrangian is written in terms of the
renormalized factors, it takes the form,
$\mathcal{L}_{\text{QED}}=Z_{\psi}\bar{\psi}(i\gamma_{\mu}\partial^{\mu}-m_{0})\psi-\dfrac{Z_{A}}{4}F^{\mu\nu}F_{\mu\nu}-Z_{1}e\bar{\psi}\gamma_{\mu}A^{\mu}\psi,$
(33)
where
$\psi_{0}=Z_{\psi}^{1/2}\psi\quad\text{,}\quad
A_{0}^{\mu}=Z_{A}^{1/2}A^{\mu}\quad\text{,}\quad
m_{0}=\dfrac{Z^{\prime}_{\psi}}{Z_{\psi}}m\quad\text{and}\quad
e=\dfrac{Z_{\psi}}{Z_{1}}Z_{A}^{1/2}e_{0},$ (34)
The two equations bear some resemblance, so we can compare them. Here, $Z_{3}$
is the gluon propagator renormalization factor, $Z_{1}$ the quark-quark-gluon
vertex renormalization factor, and $Z_{2}$ the quark self-energy
renormalization factor. Additionally, the covariant derivative can be
expressed in terms of the renormalization factors as
$D^{ren}_{\mu}\,=\,\partial_{\mu}-ie(Z_{1}/Z_{\psi})A_{\mu}$, and gauge
invariance requires that $Z_{1}\,=\,Z_{\psi}$. We have substituted the
’conventional’ renormalization factors $Z_{2}$ and $Z_{3}$ with $Z_{\psi}$ and
$Z_{A}$ respectively, with $Z_{1}\,=\,Z_{\psi}Z_{A}^{1/2}$, to make the
distinction between the fermions and the gauge field more obvious. The
renormalized fields carry no subscript ’0’ Itzykson ; Pascual ; Peskin ;
Collins ; Weinberg ; Weinberg1 ; Schwartz . Comparing the results in Eqs.(32)
and (33), we identify $Z_{A}\,=\,Z_{\psi}\,=\,G$, and gauge invariance
warrants that $Z_{A}\,=\,1$. Thus, in addition to the color properties carried
by the color dielectric function, it also absorbs the UV divergences.
Consequently, the dielectric function obeys the restrictions,
$G\left(\phi(r)\right)\rightarrow\begin{cases}1&\text{for}\,\;r\,\rightarrow\,0,\;\text{deconfinement/Coulombian
regime}\\\ 0&\text{for}\,\;r\,\rightarrow\,r_{*},\;\text{confinement
regime}\end{cases}.$ (35)
From the gauge conditions adopted above, a gluon mass term,
$\mathcal{L}_{\gamma}=\dfrac{1}{2}M^{2}(r)A_{\mu}A^{\mu},$ (36)
will not be invariant under the local gauge transformation in Eq.(31) because
$A_{\mu}A^{\mu}\rightarrow
A^{\prime}_{\mu}A^{\prime\mu}\equiv\left(A_{\mu}+\dfrac{i}{q}\partial_{\mu}(\ln
G^{1/2})\right)\left(A^{\mu}+\dfrac{i}{q}\partial^{\mu}(\ln
G^{1/2})\right)\neq A_{\mu}A^{\mu}.$ (37)
Consequently, local gauge invariance accounts for the existence of massless
photons Quigg .
### V.2 Non-Abelian Gauge Theory
In this section, we will focus on constructing a non-Abelian gauge theory
traditionally used for describing the strong nuclear force. Additionally, we
adopt the observation that the proton and neutron have nearly the same mass
and that the strong nuclear force is charge independent. Therefore, it is
agreed that isospin is conserved in strong interactions. As in the case of QED
theory, we will base the discussion on the $\text{SU}(2)$-isospin gauge theory
introduced by Yang and Mills Yang and elaborated by Shaw Shaw . Here, we will
consider Eq.(24) as the Lagrangian for free nucleons with composite fermions,
protons (p) and neutrons (n),
$\psi\equiv\begin{pmatrix}p\\\ n\end{pmatrix}.$ (38)
The Lagrangian is invariant under the global isospin rotation
$\psi\rightarrow\psi^{\prime}=e^{i\vec{\tau}\cdot\vec{\alpha}/2}\psi,$ (39)
also, the isospin current, $j^{\mu}=\bar{\psi}\gamma^{\mu}({\tau}/{2})\psi$, is
conserved; therefore, proton and neutron can be treated symmetrically in the
absence of electromagnetic interactions. In that case, the distinction between
proton and neutron is arbitrary and conventional under this representation.
To maintain the differences between the Abelian theory treated in Sec. V.1 and
the non-Abelian theory intended for this section, we will replace $A_{\mu}$
from the Abelian covariant derivative, $D_{\mu}$, expressed in Eq.(29) with
its non-Abelian counterpart $B_{\mu}$ i.e.,
$D_{\mu}\equiv\partial_{\mu}-igB_{\mu}.$ (40)
We have also replaced $q$ with $g$, the strong coupling constant, to make the
analysis distinct from Sec. V.1 and fitting for describing strong
interactions. Despite the similar global gauge invariance satisfied by both
theories, they also exhibit some major differences: the algebra of the non-
Abelian group is more complex, and the associated gauge bosons self-interact
due to the structure of the group. Accordingly, the field can be
expressed as,
$B_{\mu}=\dfrac{1}{2}\vec{\tau}\cdot\vec{b}_{\mu}=\dfrac{1}{2}\tau^{a}b^{a}_{\mu}=\dfrac{1}{2}\left({\begin{array}[]{cc}b_{\mu}^{3}&b^{1}_{\mu}-ib^{2}_{\mu}\\\
b^{1}_{\mu}+ib^{2}_{\mu}&-b^{3}_{\mu}\\\
\end{array}}\right)=\dfrac{1}{2}\left({\begin{array}[]{cc}b_{\mu}^{3}&\sqrt{2}\,b^{+}_{\mu}\\\
\sqrt{2}\,b^{-}_{\mu}&-b^{3}_{\mu}\\\ \end{array}}\right),$ (41)
where the three non-Abelian gauge fields are
$\vec{b}_{\mu}=(b_{\mu}^{1},b_{\mu}^{2},b_{\mu}^{3})$. The generators
$\tau^{a}$ $(a=1,\,2$ and $3$) are the Pauli matrices,
$b_{\mu}^{\pm}=(b_{\mu}^{1}\mp ib_{\mu}^{2})/\sqrt{2}$ are the charged gauge
bosons, and the isospin step-up and step-down operators $1/2(\tau_{1}\pm
i\tau_{2})$ exchange $p\leftrightarrow n$ following the absorption of a
$b_{\mu}^{\pm}$ boson. The gradient in Eq.(24) will then transform as,
$\displaystyle D_{\mu}\psi\rightarrow D_{\mu}\psi^{\prime}$
$\displaystyle=G^{1/2}\partial_{\mu}\psi+(\partial_{\mu}G^{1/2})\psi-
igB_{\mu}(G^{1/2}\psi)$
$\displaystyle=G^{1/2}\left(\partial_{\mu}-igB^{\prime}_{\mu}\right)\psi,$
(42)
thus,
$\displaystyle-igG^{1/2}B^{\prime}_{\mu}$
$\displaystyle=-igB_{\mu}G^{1/2}+(\partial_{\mu}G^{1/2})\rightarrow$
$\displaystyle B^{\prime}_{\mu}$
$\displaystyle=G^{1/2}B_{\mu}G^{-1/2}+\dfrac{i}{g}(\partial_{\mu}G^{1/2})G^{-1/2}$
$\displaystyle=G^{1/2}\left(B_{\mu}+\dfrac{i}{g}G^{-1/2}(\partial_{\mu}G^{1/2})\right)G^{-1/2}.$
(43)
Again, following a similar procedure to that adopted for Eq.(V.1), using the
necessary transformations, we obtain
$\displaystyle\mathcal{L}^{\prime}=\bar{\psi}\left[i\gamma^{\mu}\partial_{\mu}+gB_{\mu}\gamma^{\mu}-mG\right]\psi,$
(44)
with gauge invariant transformation similar to Eq.(31),
$B_{\mu}\rightarrow B^{\prime}_{\mu}\equiv
B_{\mu}+\dfrac{i}{g}(\partial_{\mu}\ln G^{1/2}).$ (45)
By comparing Eq.(V.2) and Eq.(45) we can deduce,
$B_{\mu}\rightarrow B^{\prime}_{\mu}=G^{1/2}B_{\mu}G^{-1/2}.$ (46)
The $\text{SU}(2)$-isospin generators can be expressed in terms of commutator
relation,
$\left[\tau^{j},\tau^{k}\right]=2i\varepsilon_{jkl}\tau^{l}.$ (47)
Conventionally, the $\tau_{i}$ do not commute with one another, so only one
component can be measured at a time; this is taken to be the third component,
$T_{3}=1/2\,\tau_{3}$, generally along the $z$-direction.
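The matrix form in Eq.(41) and the algebra in Eq.(47) are easy to verify numerically with the Pauli matrices (a sketch; the components $b^{a}_{\mu}$ below are arbitrary numbers):

```python
import numpy as np

tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# Eq.(41): B = (1/2) tau.b equals the explicit 2x2 matrix with
# b_pm = (b^1 -/+ i b^2)/sqrt(2).
b = [0.3, -1.2, 0.7]                              # arbitrary real b^a_mu
B = 0.5*sum(t*c for t, c in zip(tau, b))
b_p, b_m = (b[0] - 1j*b[1])/np.sqrt(2), (b[0] + 1j*b[1])/np.sqrt(2)
B_explicit = 0.5*np.array([[b[2], np.sqrt(2)*b_p],
                           [np.sqrt(2)*b_m, -b[2]]])
assert np.allclose(B, B_explicit)

# Eq.(47): [tau^j, tau^k] = 2i eps_{jkl} tau^l.
def eps(j, k, l):
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((j, k, l), 0)

for j in range(3):
    for k in range(3):
        comm = tau[j] @ tau[k] - tau[k] @ tau[j]
        rhs = sum(2*1j*eps(j, k, l)*tau[l] for l in range(3))
        assert np.allclose(comm, rhs)
```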
We will now develop the field strength tensor that will form the kinetic term
of the gauge field. Starting from the electromagnetic gauge structure, we can
construct,
$F_{\mu\nu}=\dfrac{1}{2}{\bf
F}_{\mu\nu}\cdot{\bf\tau}=\dfrac{1}{2}F^{a}_{\mu\nu}\tau^{a}\qquad{\text{where}}\qquad
tr\left(\tau^{a}\tau^{b}\right)=2\delta^{ab}.$ (48)
From these transformations, we can construct a gauge-invariant Lagrangian,
$\mathcal{L}^{\prime}_{gauge}=-\dfrac{1}{4}{\bf F}_{\mu\nu}\cdot{\bf
F}^{\mu\nu}=-\dfrac{1}{2}tr\left(F_{\mu\nu}F^{\mu\nu}\right),$ (49)
where the field strength tensor transforms under the dielectric function as
$F_{\mu\nu}\rightarrow F^{\prime}_{\mu\nu}\equiv G^{1/2}F_{\mu\nu}G^{-1/2}.$
(50)
We know from the QED relation constructed in Sec. V.1 that the field strength
can be expressed as
$\displaystyle F^{\prime}_{\mu\nu}$
$\displaystyle=\partial_{\nu}B^{\prime}_{\mu}-\partial_{\mu}B^{\prime}_{\nu}=\partial_{\nu}\left[G^{1/2}B_{\mu}G^{-1/2}+\dfrac{i}{g}(\partial_{\mu}G^{1/2})G^{-1/2}\right]-\partial_{\mu}\left[G^{1/2}B_{\nu}G^{-1/2}+\dfrac{i}{g}(\partial_{\nu}G^{1/2})G^{-1/2}\right]$
$\displaystyle=\left\\{(\partial_{\nu}G^{1/2})B_{\mu}G^{-1/2}+G^{1/2}(\partial_{\nu}B_{\mu})G^{-1/2}+G^{1/2}B_{\mu}(\partial_{\nu}G^{-1/2})+\dfrac{i}{g}\left[\partial_{\nu}(\partial_{\mu}G^{1/2})G^{-1/2}+(\partial_{\mu}G^{1/2})(\partial_{\nu}G^{-1/2})\right]\right\\}$
$\displaystyle-\left\\{(\partial_{\mu}G^{1/2})B_{\nu}G^{-1/2}+G^{1/2}(\partial_{\mu}B_{\nu})G^{-1/2}+G^{1/2}B_{\nu}(\partial_{\mu}G^{-1/2})+\dfrac{i}{g}\left[\partial_{\mu}(\partial_{\nu}G^{1/2})G^{-1/2}+(\partial_{\nu}G^{1/2})(\partial_{\mu}G^{-1/2})\right]\right\\}.$
(51)
We adopted the expression for $B^{\prime}_{\mu}$ defined in Eq.(V.2);
regrouping the terms in the above equation yields,
$\displaystyle F^{\prime}_{\mu\nu}$
$\displaystyle=G^{1/2}\left[\partial_{\nu}B_{\mu}-\partial_{\mu}B_{\nu}\right]G^{-1/2}+\left[(\partial_{\nu}G^{1/2})B_{\mu}-(\partial_{\mu}G^{1/2})B_{\nu}\right]G^{-1/2}+G^{1/2}\left[B_{\mu}(\partial_{\nu}G^{-1/2})-B_{\nu}(\partial_{\mu}G^{-1/2})\right]$
$\displaystyle+\dfrac{i}{g}\left[(\partial_{\mu}G^{1/2})(\partial_{\nu}G^{-1/2})-(\partial_{\nu}G^{1/2})(\partial_{\mu}G^{-1/2})\right]$
$\displaystyle\neq
G^{1/2}\left[\partial_{\nu}B_{\mu}-\partial_{\mu}B_{\nu}\right]G^{-1/2},$ (52)
where higher-derivative terms were discarded. We can cast this result in a
more symmetric form by using the identity $G^{1/2}G^{-1/2}=\mathbb{I}$, so
$\displaystyle\partial_{\mu}(G^{1/2}G^{-1/2})$
$\displaystyle=(\partial_{\mu}G^{1/2})G^{-1/2}+G^{1/2}(\partial_{\mu}G^{-1/2})=0\rightarrow$
$\displaystyle G^{1/2}(\partial_{\mu}G^{-1/2})$
$\displaystyle=-(\partial_{\mu}G^{1/2})G^{-1/2}.$ (53)
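Since $G^{1/2}$ is matrix-valued here, this identity deserves a check that does not assume commutativity; a numerical finite-difference sketch (the matrix function $S(x)$ below is an arbitrary assumption standing for $G^{1/2}(x)$):

```python
import numpy as np

# Write S(x) for G^{1/2}(x). Differentiating S S^{-1} = 1 gives
# S (d S^{-1}) = -(d S) S^{-1}, as in Eq.(53), even for non-commuting S(x).
def S(x):
    # arbitrary smooth, invertible matrix-valued function (assumption)
    return np.array([[np.cosh(x), np.sin(2*x)],
                     [0.3*x, 2.0 + np.cos(x)]])

x0, h = 0.7, 1e-6
dS = (S(x0 + h) - S(x0 - h))/(2*h)
dSinv = (np.linalg.inv(S(x0 + h)) - np.linalg.inv(S(x0 - h)))/(2*h)

lhs = S(x0) @ dSinv                      # G^{1/2} (d G^{-1/2})
rhs = -dS @ np.linalg.inv(S(x0))         # -(d G^{1/2}) G^{-1/2}
assert np.allclose(lhs, rhs, atol=1e-6)
```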
Using this identity appropriately leads to,
$\displaystyle F^{\prime}_{\mu\nu}$
$\displaystyle=G^{1/2}\left(\partial_{\nu}B_{\mu}-\partial_{\mu}B_{\nu}\right)G^{-1/2}+\left((\partial_{\nu}G^{1/2})B_{\mu}-B_{\mu}(\partial_{\nu}G^{1/2})\right)G^{-1/2}+\left(B_{\nu}(\partial_{\mu}G^{1/2})-(\partial_{\mu}G^{1/2})B_{\nu}\right)G^{-1/2}$
$\displaystyle+\dfrac{i}{g}\left((\partial_{\nu}G^{1/2})G^{-1/2}(\partial_{\mu}G^{1/2})-(\partial_{\mu}G^{1/2})G^{-1/2}(\partial_{\nu}G^{1/2})\right)G^{-1/2}$
$\displaystyle=G^{1/2}\left(\partial_{\nu}B_{\mu}-\partial_{\mu}B_{\nu}\right)G^{-1/2}+G^{1/2}\left\\{\left[G^{-1/2}(\partial_{\nu}G^{1/2}),B_{\mu}\right]-\left[G^{-1/2}(\partial_{\mu}G^{1/2}),B_{\nu}\right]\right\\}G^{-1/2}$
$\displaystyle+\dfrac{i}{g}G^{1/2}\left[G^{-1/2}(\partial_{\nu}G^{1/2}),G^{-1/2}(\partial_{\mu}G^{1/2})\right]G^{-1/2}.$
(54)
This equation shows the additional terms that come from the non-vanishing
commutators due to the non-Abelian group structure. From this expression, we
deduce that a term can be added to
$\partial_{\nu}B_{\mu}-\partial_{\mu}B_{\nu}$ to modify $F_{\mu\nu}$ so that
it achieves the transformation property we require. With this inspiration,
the observed electromagnetic field strength tensor can be modified to read,
$F_{\mu\nu}=\dfrac{1}{iq}\left[D_{\nu},D_{\mu}\right].$ (55)
Substituting the definition for $D_{\mu}$ in Eq.(29) into the above
expression, we obtain
$F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+iq\left[A_{\nu},A_{\mu}\right].$
(56)
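Equation (55) can be checked symbolically in the Abelian case by acting with the commutator on a test field (a two-dimensional sketch):

```python
import sympy as sp

# For D_mu = d_mu - i q A_mu acting on a test field psi,
# [D_y, D_x] psi = i q (d_x A_y - d_y A_x) psi, i.e. F = (1/iq)[D_nu, D_mu].
x, y, q = sp.symbols('x y q')
psi = sp.Function('psi')(x, y)
Ax = sp.Function('A_x')(x, y)
Ay = sp.Function('A_y')(x, y)

def D(field, direction, gauge):
    return sp.diff(field, direction) - sp.I*q*gauge*field

comm = D(D(psi, x, Ax), y, Ay) - D(D(psi, y, Ay), x, Ax)
F_xy = sp.diff(Ay, x) - sp.diff(Ax, y)
assert sp.simplify(comm - sp.I*q*F_xy*psi) == 0
```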
The commutator vanishes for Abelian theories. Applying this transformation to
Eq.(50) yields,
$F^{\prime}_{\mu\nu}=\partial_{\nu}B^{\prime}_{\mu}-\partial_{\mu}B^{\prime}_{\nu}+ig\left[B^{\prime}_{\nu},B^{\prime}_{\mu}\right].$
(57)
Expanding the commutator using the transformation in Eq.(V.2), so that we can
compare the outcome with the nonvanishing commutator relations in Eq.(V.2),
leads to,
$\displaystyle ig\left[B^{\prime}_{\nu},B^{\prime}_{\mu}\right]$
$\displaystyle=ig\left[\left(G^{1/2}B_{\nu}G^{-1/2}+\dfrac{i}{g}(\partial_{\nu}G^{1/2})G^{-1/2}\right),\left(G^{1/2}B_{\mu}G^{-1/2}+\dfrac{i}{g}(\partial_{\mu}G^{1/2})G^{-1/2}\right)\right]$
$\displaystyle=igG^{1/2}\left[B_{\nu},B_{\mu}\right]G^{-1/2}-G^{1/2}\left\\{\left[G^{-1/2}(\partial_{\nu}G^{1/2}),B_{\mu}\right]-\left[G^{-1/2}(\partial_{\mu}G^{1/2}),B_{\nu}\right]\right\\}G^{-1/2}$
$\displaystyle-\dfrac{i}{g}G^{1/2}\left[G^{-1/2}(\partial_{\nu}G^{1/2}),G^{-1/2}(\partial_{\mu}G^{1/2})\right]G^{-1/2}.$
(58)
The commutator relations that come after the first term in the last step are
the exact terms required to cancel the extra terms in Eq.(V.2). Therefore, the
field strength tensor expressed in Eq.(57) has the required structure under
local gauge transformations. Combining Eqs.(44) and (49) gives rise to the
modified Yang-Mills Lagrangian Quigg ; Yang ,
$\mathcal{L}_{\text{YM}}=\bar{\psi}\left(i\gamma^{\mu}\partial_{\mu}+g\gamma^{\mu}B_{\mu}-mG\right)\psi-\dfrac{1}{2}tr\left(F_{\mu\nu}F^{\mu\nu}\right),$
(59)
where $M(r)=mG(\phi)$ is the modified nucleon mass. This Lagrangian is also
invariant under local gauge transformations and does not permit the existence
of a mass term $M^{2}(r)B_{\mu}B^{\mu}$. It should also be noted that
introducing the non-Abelian gauge leads to the automatic cancellation of the
color dielectric function. Hence, we can infer that the color dielectric
function attached to the Abelian gauge induces strong interaction properties.
Using the expression in Eq.(41) and the commutator relation in Eq.(47), we can
rewrite the transformed version of Eq.(57) as
$\displaystyle F_{\mu\nu}$
$\displaystyle=\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}+ig\left[B_{\mu},B_{\nu}\right]\rightarrow$
$\displaystyle F^{l}_{\mu\nu}$
$\displaystyle=\dfrac{1}{2}\left(\partial_{\mu}(\tau^{l}b^{l}_{\nu})-\partial_{\nu}(\tau^{l}b^{l}_{\mu})\right)+\dfrac{ig}{4}\left(-2i\varepsilon_{jkl}(b^{j}_{\nu}b^{k}_{\mu}\tau^{l})\right)$
$\displaystyle=\partial_{\mu}b^{l}_{\nu}-\partial_{\nu}b^{l}_{\mu}+g\varepsilon_{jkl}b^{j}_{\nu}b^{k}_{\mu},$
(60)
in the last step we have dropped the three isospin generators $\tau^{l}$ of
the gauge field because they are linearly independent. Generally, for non-
Abelian gauge groups other than $\text{SU}(2)$, the Levi-Civita symbol
$\varepsilon_{jkl}$ is replaced with the antisymmetric structure constant
$f_{jkl}$.
The introduction of the $\text{SU}(2)$-isospin symmetry by Heisenberg
Heisenberg preceded the development of the quark model. That notwithstanding,
the only known fundamental constituents of the nucleon relevant to strong
interactions are the up-quark ($u$) and the down-quark ($d$). While the proton
is composed of two $u$-quarks and a $d$-quark, the neutron is composed of two
$d$-quarks and a $u$-quark. Indeed, these particles remain the only known
constituents of the proton and neutron, with almost the same mass and coupling
strength. Hence, Eq.(59) can be used to study the behavior of quarks and
gluons inside hadrons. In that case, the up/down quarks are treated as having
the same mass and zero charge, so they are seen as similar particles with
different isospin states, $I_{3u/d}=\pm 1/2$, the same as the nucleon.
Interestingly, the spin addition of the quark constituents of the proton and
neutron agrees with $I_{3}=1/2$ and $I_{3}=-1/2$ respectively. Similar to the
isospin
representation of the nucleon field $\psi$ in Eq.(38) the quark field can be
represented as
$\psi_{q}\equiv\begin{pmatrix}u\\\ d\end{pmatrix}.$ (61)
Nevertheless, the mathematical structure of this theory is the same as that of
QCD, the theory that describes the characteristics of quarks and gluons inside
hadrons.
Again, the strong interaction is well known for its invariance under quark
color permutations, so we can express the quark wave function in the color
triplet representation as
$\psi_{q}\equiv\begin{pmatrix}R\\\ G\\\ B\end{pmatrix},$ (62)
where $\text{R}\equiv\text{red}$, $\text{G}\equiv\text{green}$ and
$\text{B}\equiv\text{blue}$. This is invariant under $\text{SU}(3)$
transformation of the form
$\psi\rightarrow\psi^{\prime}\equiv\exp\left(i\dfrac{1}{2}\lambda_{j}\alpha_{j}\right)\psi,$
(63)
where $\alpha_{j},\,j=1,\,2,\,3,\cdots,8$ are the eight phase angles and the
$\lambda_{j}$ are $3\times 3$ matrices representing the eight independent
traceless Hermitian generators of the group. Ignoring the differences in quark
masses, it can also be applied in studying $u$, $d$ and $s$ quark systems,
$\psi_{q}\equiv\begin{pmatrix}u\\\ d\\\ s\end{pmatrix}.$ (64)
The generators are the $\text{SU}(3)$ analogues of the Pauli matrices of the
$\text{SU}(2)$ representation, and they satisfy the Lie algebra,
$\left[\lambda_{i},\lambda_{j}\right]=2if_{ijk}\lambda_{k},$ (65)
similar to Eq.(47); here $f_{ijk}$ is the structure constant. Consequently, to
generalize the relation in Eq.(V.2) for QCD under the $\text{SU}(3)$ color
symmetry, we replace the antisymmetric tensor $\varepsilon_{ijk}$ with the
antisymmetric structure constant $f_{ijk}$ Collins1 .
Finally, the models exhibit the expected asymptotically free Gross ; Politzer
behavior at high energies, while at low energies color confinement and
hadronization set in. Here, $M(r)$ represents the constituent quark mass
function Atkinson while $m$ is the bare quark mass. As analyzed in Sec. V.1,
the
constituent mass of the quarks in the asymptotically free region can be
determined as $M(r\rightarrow r_{0})=M_{0}$, while the constituent mass at the
nonperturbative (low energy) region where color confinement is expected, will
be $M(r\rightarrow r_{*})=M_{*}$. Hence, it is possible to have the same
constituent quark mass in both the UV ($r\rightarrow r_{0}$) and the IR
($r\rightarrow r_{*}$) regimes if the bare quark mass $m$ in both regimes is
the same Adamu ; Eichten , because $G(\phi)$ behaves as
$G(\phi(r\rightarrow r_{0}))=G(\phi(r\rightarrow r_{*}))=\text{constant}$.
However, there is evidence that $m$ is small in the IR regime and large in the
UV regime Adamu ; Issifu1 . Accordingly, if we require color confinement in
both regions, $M_{0}\,>\,M_{*}$, because a higher bare quark mass $m$ is
required to obtain confinement in the UV region than in the IR region.
## VI Phenomenology of Glueball Confinement
In this section we will use one of the models built in Sec. IV, specifically
Eq.(18), to build a model that describes the features of confining glueballs.
Apart from the insight into the behavior of the glueballs in both the IR and
the UV regions, it will serve as a test of the models developed earlier. The
model will be based on electric field confinement, commonly referred to as the
chromoelectric flux confinement. In that light, we will define the indices of
the gauge field such that the chromomagnetic flux is eliminated from the
system (i.e. $j^{\nu}=(\rho,\,\vec{0})$) and only the static sector of the
scalar field, i.e. $\mu=j$, is available for analysis. This ensures that the
color particles that generate the gluons are static. This section will enable
us to see how the Abelian gauge can be used to approximate a non-Abelian
theory. Additionally, the color dielectric function $G(\phi)$ absorbs the
long-distance dynamics of the gluons such that the photon propagator emanating
from the Abelian gauge field does not decouple at longer wavelengths. We will
further demonstrate that $G(\phi)$ is directly related to the QCD
$\beta$-function and the strong running coupling constant.
### VI.1 The Model
The equations of motion for Eq.(18) are,
$\partial_{\mu}\partial^{\mu}\phi+\dfrac{1}{4}\dfrac{\partial
G(\phi)}{\partial\phi}F_{\mu\nu}F^{\mu\nu}+\dfrac{\partial
V}{\partial\phi}=0,$ (66)
and
$\partial_{\mu}[G(\phi)F^{\mu\nu}]=0.$ (67)
Expressing the above equations in spherical coordinates,
$\dfrac{1}{r^{2}}\dfrac{d}{dr}\left(r^{2}\dfrac{d\phi}{dr}\right)=-\dfrac{1}{2}\dfrac{\partial
G(\phi)}{\partial\phi}E^{2}+\dfrac{\partial V(\phi)}{\partial\phi},$ (68)
where $F^{\mu\nu}F_{\mu\nu}=F^{0j}F_{0j}+F^{i0}F_{i0}=-2E^{2}$ (only electric
field components are considered) and $V=G$ as discussed in Sec. IV.
Accordingly,
$\displaystyle\dfrac{1}{r^{2}}\dfrac{d}{dr}\left(r^{2}\dfrac{d\phi}{dr}\right)$
$\displaystyle=\dfrac{\partial}{\partial\phi}\left[\dfrac{\lambda^{2}}{2}\dfrac{1}{V(\phi)}\dfrac{1}{r^{4}}+V\right]\rightarrow$
$\displaystyle\nabla^{2}\phi$
$\displaystyle=\dfrac{\partial}{\partial\phi}\left[\dfrac{\lambda^{2}}{2}\dfrac{1}{V(\phi)}\dfrac{1}{r^{4}}+V\right],$
(69)
where
$E=\dfrac{\lambda}{r^{2}G(\phi)},$ (70)
and
$\lambda=\dfrac{q}{4\pi},$ (71)
where $\lambda$ is an integration constant; we have also set
$\varepsilon_{0}=1$ for convenience. Now we choose a potential that satisfies
all the conditions expressed in Sec. IV; thus,
$V(\phi)=\dfrac{\rho}{4}[(\alpha\phi)^{2}-a^{2}]^{2},$ (72)
where $\rho$ and $\alpha=1/f_{\alpha}$ are dimensionless constants and
$f_{\alpha}$ is the tachyon decay constant. This potential contains a
tachyonic mode at the origin, where $V^{\prime\prime}(0)<0$, so the field
cannot be quantized around this point — see Sec. II.1 for detailed
discussions. However, we can remove the tachyonic mode by shifting the vacuum,
$\phi\rightarrow\phi_{0}+\eta$, and quantizing around the true minimum,
$\phi_{0}=\pm a/\alpha$. Consequently,
$m_{\phi}^{2}=\dfrac{\partial^{2}V}{\partial\phi^{2}}\bigg{|}_{\phi=\phi_{0}}=2\rho\alpha^{2}a^{2},$
(73)
and the potential can be expanded for the small perturbation $\eta$, which
will be referred to as the glueball field, with a real squared mass,
$m_{\phi}^{2}$; hence,
$\displaystyle V(\eta)=G(\eta)$
$\displaystyle=V(\phi)\bigg{|}_{\phi_{0}}+\dfrac{\partial
V}{\partial\phi}\bigg{|}_{\phi_{0}}\eta+\dfrac{1}{2}\dfrac{\partial^{2}V}{\partial\phi^{2}}\bigg{|}_{\phi_{0}}\eta^{2}$
$\displaystyle=\dfrac{1}{2}m_{\phi}^{2}\eta^{2}.$ (74)
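As a cross-check that is not part of the original derivation, the glueball mass in Eq. (73) and the tachyonic curvature at the origin follow directly from the potential (72); a minimal sympy sketch (symbol names are ours):

```python
import sympy as sp

phi, rho, alpha, a = sp.symbols('phi rho alpha a', positive=True)

# Potential of Eq. (72)
V = sp.Rational(1, 4) * rho * ((alpha * phi)**2 - a**2)**2

# Mass at the true minimum phi0 = a/alpha reproduces Eq. (73)
m_phi_sq = sp.diff(V, phi, 2).subs(phi, a / alpha)
assert sp.simplify(m_phi_sq - 2 * rho * alpha**2 * a**2) == 0

# Tachyonic mode at the origin: negative curvature V''(0) < 0
curv0 = sp.diff(V, phi, 2).subs(phi, 0)
assert sp.simplify(curv0 + rho * alpha**2 * a**2) == 0
```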
Considering that the particles are sufficiently separated such that color
confinement can be observed, we ignore the $1/r^{4}$ term in Eq.(VI.1) and
simplify it as
$\displaystyle\nabla^{2}(\eta)$ $\displaystyle=\dfrac{\partial
V}{\partial\phi}\bigg{|}_{\phi_{0}}+\dfrac{\partial^{2}V}{\partial\phi^{2}}\bigg{|}_{\phi_{0}}\eta\rightarrow$
$\displaystyle\nabla^{2}\eta$
$\displaystyle=\dfrac{\partial^{2}V}{\partial\phi^{2}}\bigg{|}_{\phi_{0}}\eta,$
(75)
leading to
$\eta^{\prime\prime}(r)+\dfrac{2}{r}\eta^{\prime}(r)-m^{2}_{\phi}\eta=0.$ (76)
This equation has several solutions, but we choose the two that are suitable
for the analysis:
$\eta(r)=\dfrac{a\cosh(m_{\phi}r)}{\alpha
m_{\phi}r}\qquad\text{and}\qquad\eta(r)=\dfrac{a\sin(m_{\phi}r)}{\alpha
m_{\phi}r}.$ (77)
Each solution captures the characteristics of the particle in a particular
regime, the IR and the UV regime respectively. The solution in the IR regime
will give rise to a linear confining potential, and the UV solution will lead
to a Cornell-like potential.
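These radial profiles can be verified symbolically. The sketch below (sympy; notation ours) checks the IR profile against Eq. (76) directly; note that the oscillatory UV profile instead satisfies the companion equation with the sign of the mass term reversed:

```python
import sympy as sp

r, m, a, alpha = sp.symbols('r m_phi a alpha', positive=True)

# IR and UV profiles of Eq. (77)
eta_ir = a * sp.cosh(m * r) / (alpha * m * r)
eta_uv = a * sp.sin(m * r) / (alpha * m * r)

# Radial Laplacian of Eq. (76): eta'' + (2/r) eta'
lap = lambda f: sp.diff(f, r, 2) + 2 / r * sp.diff(f, r)

# cosh profile solves eta'' + (2/r) eta' - m^2 eta = 0 (Eq. 76)
assert sp.simplify(lap(eta_ir) - m**2 * eta_ir) == 0
# sin profile solves the oscillatory counterpart with +m^2 eta
assert sp.simplify(lap(eta_uv) + m**2 * eta_uv) == 0
```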
### VI.2 Confining Potentials
In this section we will present the confining potentials derived from the
model by considering the electrodynamic potential,
$V=\mp\int Edr.$ (78)
Substituting the first (IR) solution of Eq.(77), together with Eq.(70), into
Eq.(78) leads to
$\displaystyle V_{c}(r)$
$\displaystyle=\dfrac{2\lambda\alpha^{2}\tanh(m_{\phi}r)}{a^{2}m_{\phi}}+c$
$\displaystyle=m_{\phi}\lambda\tanh(m_{\phi}r)\qquad\text{for}\qquad
a^{4}\rho=1$ (79)
where $c$ is an integration constant, set to zero in the last step.
Considering $m_{\phi}r\ll 1$, $c=0$ and $\lambda=1$, corresponding to the
positive branch of the potential $V$, we can deduce the QCD string tension
$\sigma$ to be,
$\sigma_{L}={m_{\phi}^{2}}.$ (80)
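The integration leading to Eq. (79) can be reproduced symbolically. A minimal sympy sketch (notation ours), assuming the IR solution of Eq. (77), $G=V$ from Eq. (74), and the field of Eq. (70):

```python
import sympy as sp

r, m, a, alpha, rho = sp.symbols('r m_phi a alpha rho', positive=True)
lam = sp.Symbol('lambda', positive=True)

eta = a * sp.cosh(m * r) / (alpha * m * r)   # IR solution, Eq. (77)
G = sp.Rational(1, 2) * m**2 * eta**2        # G(eta) = V(eta), Eq. (74)
E = lam / (r**2 * G)                         # electric field, Eq. (70)

# Candidate potential of Eq. (79) with c = 0: its derivative must equal E
Vc = 2 * lam * alpha**2 * sp.tanh(m * r) / (a**2 * m)
assert sp.simplify((sp.diff(Vc, r) - E).rewrite(sp.exp)) == 0

# With m_phi^2 = 2 rho alpha^2 a^2 (Eq. 73) and rho a^4 = 1,
# the prefactor collapses: Vc = lam * m_phi * tanh(m_phi r)
Vc_red = Vc.subs(alpha**2, m**2 / (2 * rho * a**2)).subs(rho, a**-4)
assert sp.simplify(Vc_red - lam * m * sp.tanh(m * r)) == 0
```

Expanding $\tanh(m_{\phi}r)\approx m_{\phi}r$ for $m_{\phi}r\ll 1$ then gives the linear potential with slope $\sigma_{L}=m_{\phi}^{2}$ at $\lambda=1$.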
Here we can infer that confinement is governed by the magnitude of the
glueball mass: in the limit of vanishing glueball mass there will be no
confinement. Furthermore, this potential grows linearly in $r$, and at some
critical distance $r_{*}=1/\sqrt{\sigma_{L}}$ Bali the potential begins to
flatten out, leading to hadronization. It is known from flux tube models for
confining color particles that $r\gg r_{*}$ Bali .
Figure 1: Flux tube-like potential. Increasing the distance $r$ increases the
strength of confinement until $r\geq r_{*}$, where the curve is expected to
start flattening, signaling hadronization.
On the other hand, taking the second (UV) solution of Eq.(77) and following
the same procedure as above, we get
$\displaystyle V_{s}(r)$
$\displaystyle=-\dfrac{2\lambda\alpha^{2}\cot(m_{\phi}r)}{a^{2}m_{\phi}}+c$
$\displaystyle\simeq-\dfrac{2\lambda\alpha^{2}}{a^{2}m_{\phi}}\left[\dfrac{1}{m_{\phi}r}-\dfrac{m_{\phi}r}{3}-{\cal
O}(r^{3})\right]+c$
$\displaystyle\simeq\dfrac{2\lambda\rho\alpha^{2}a^{2}}{\rho
a^{4}m^{2}_{\phi}}\left[-\dfrac{1}{r}+\dfrac{m_{\phi}^{2}r}{3}\right],$ (81)
where in the last step we set the integration constant $c=0$. Choosing the
positive branch of the potential, corresponding to $\lambda=1$, and setting
$\rho a^{4}=1$, we arrive at the Cornell-like potential for confining heavy
quarks, i.e.,
$V_{s}(r)=-\dfrac{1}{r}+\dfrac{m^{2}_{\phi}r}{3},$ (82)
corresponding to a string tension
$\sigma_{s}=\dfrac{m_{\phi}^{2}}{3}.$ (83)
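The expansion in Eq. (81) and the resulting Cornell-like form can be checked with a short sympy sketch (notation ours; the prefactor $2\lambda\alpha^{2}/(a^{2}m_{\phi})$ has been collapsed to $\lambda m_{\phi}$ using $m_{\phi}^{2}=2\rho\alpha^{2}a^{2}$ and $\rho a^{4}=1$):

```python
import sympy as sp

r, m = sp.symbols('r m_phi', positive=True)
lam = sp.Symbol('lambda', positive=True)

# UV potential before expansion: V_s = -lam * m_phi * cot(m_phi r)
Vs_exact = -lam * m * sp.cot(m * r)

# Laurent expansion around r = 0 keeps the Coulomb and linear terms
Vs_series = sp.series(Vs_exact, r, 0, 2).removeO()
assert sp.simplify(Vs_series - lam * (-1 / r + m**2 * r / 3)) == 0
```

At $\lambda=1$ this is Eq. (82), with string tension $\sigma_{s}=m_{\phi}^{2}/3$.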
It is important to recognize that the critical distance
$r_{*s}=1/\sqrt{\sigma_{s}}$ in this regime marks the transition from the
asymptotically free region to the confining region.
Figure 2: A graph of the Cornell-like potential. An increase in distance also
leads to an increase in the strength of confinement.
We know from Sec. III that the string tension
$T_{\text{string}}\sim\sigma_{L}\sim\sigma_{s}\sim 1\,\text{GeV}/\text{fm}$;
hence the glueball mass in the IR regime becomes $m_{L}\approx 1\,\text{GeV}$,
corresponding to the glueball mass of the isoscalar resonance $f_{0}(980)$ Tanabashi .
The well-known ratio $m(0^{++})/\sqrt{\sigma_{L}}$ of QCD theory in the
$\text{SU}(\infty)$ limit Albanese ; Bacilieri ; Teper can, in this regime, be
determined as $m_{L}/\sqrt{\sigma_{L}}\approx 1$. Likewise, in the UV regime
we have a glueball of mass $m_{s}\approx 1.73\,\text{GeV}$, corresponding to
the lightest scalar glueball resonance $f_{0}(1710)$. This result is precisely
the same as that obtained from lattice QCD calculations Tanabashi ; Morningstar ; Loan ; Chen ; Lee1 ; Bali1 ;
also, $m_{s}/\sqrt{\sigma_{s}}\approx 1.73$. The critical distances become
$r_{*s}=r_{*}=1\,\text{fm}$ in both the IR and the UV regimes. While the
critical distance $r_{*}$ in the IR regime marks the transition from the
confinement to the hadronization region, the critical distance $r_{*s}$ in the
UV regime marks the transition from the asymptotically free region to the
confining region.
### VI.3 Gluon Condensation
The classical theory of gluodynamics is invariant under the scale transformation
$x\rightarrow\lambda x$; this leads to a scale current $s_{\mu}$ which is
related to the trace of the energy-momentum tensor, $\theta^{\mu}_{\mu}(x)$, by
$\partial^{\mu}s_{\mu}=\theta^{\mu}_{\mu}.$ (84)
In the absence of quantum corrections, $\theta^{\mu}_{\mu}=0$ and the theory
remains conformally invariant, leading to a vanishing gluon condensate
$\langle F_{\mu\nu}^{a}F^{a\mu\nu}\rangle=0$. On the other hand, when the
quantum correction $-|\varepsilon_{v}|$ is introduced, the conformal symmetry
is broken, leading to a non-vanishing gluon condensate $\langle
F_{\mu\nu}^{a}F^{a\mu\nu}\rangle\neq 0$, and the energy-momentum trace anomaly
comes into play:
$\theta^{\mu}_{\mu}=\dfrac{\beta(g)}{2g}F^{a}_{\mu\nu}F^{a\mu\nu},$ (85)
with vacuum expectation,
$\langle\theta^{\mu}_{\mu}\rangle=-4|\varepsilon_{v}|.$ (86)
The leading term of the QCD $\beta$-function is known to be,
$\beta=-\dfrac{11g^{3}}{(4\pi)^{2}}.$ (87)
Now, calculating the energy-momentum tensor trace of Eq.(18) for the glueball
field $\eta$ using the relation,
$\theta^{\mu}_{\mu}=4V(\eta)+\eta\square\eta,$ (88)
we get,
$\displaystyle\theta^{\mu}_{\mu}$ $\displaystyle=4V(\eta)-\eta\dfrac{\partial
G}{\partial\eta}F^{\mu\nu}F_{\mu\nu}-\eta\dfrac{\partial V}{\partial\eta}$
$\displaystyle=4\tilde{V}-\eta G^{\prime}(\eta)F^{\mu\nu}F_{\mu\nu},$ (89)
where $G^{\prime}$ and $V^{\prime}$ denote first derivatives with respect to
$\eta$ and $\tilde{V}(\eta)=V(\eta)-\eta V^{\prime}(\eta)/4$. Rescaling
$\tilde{V}(\eta)$ with the energy density $-|\varepsilon_{v}|$, i.e.
$\tilde{V}(\eta)\rightarrow-|\varepsilon_{v}|\tilde{V}(\eta)$, and using
the vacuum expectation value in Eq.(86), we get
$\left\langle\eta
G^{\prime}(\eta)F_{\mu\nu}F^{\mu\nu}\right\rangle=4|\varepsilon_{v}|\langle
1-\tilde{V}\rangle,$ (90)
with which we recover the classical result
$\langle F_{\mu\nu}F^{\mu\nu}\rangle=0$ in the limit
$|\varepsilon_{v}|\rightarrow 0$. Using the potential expressed in Eq.(VI.1)
we can determine
$\displaystyle\tilde{V}$ $\displaystyle=V-\dfrac{\eta V^{\prime}}{4}$
$\displaystyle=\dfrac{m_{\phi}^{2}\eta^{2}}{4},$ (91)
as a result, Eq.(90) can be expressed as
$\left\langle
2G(\eta)F_{\mu\nu}F^{\mu\nu}\right\rangle=4|\varepsilon_{v}|\left\langle
1-\dfrac{m^{2}_{\phi}\eta^{2}}{4}\right\rangle,$ (92)
from which we can identify the gluon mass Issifu2 ,
$m_{A}^{2}=\dfrac{m_{\phi}^{2}}{4}.$ (93)
Furthermore, taking the expectation value of Eq.(66) in terms of the glueball
field $\eta$, we can express
$\displaystyle\left\langle\dfrac{m^{2}_{\phi}\eta}{4}F^{\mu\nu}F_{\mu\nu}\right\rangle-\left\langle
m^{2}_{\phi}\eta\right\rangle=0,$ (94)
consequently, the mean glueball field $\bar{\eta}$ has two possible solutions,
$\bar{\eta}=0$ and $\bar{\eta}=1$. A higher gluon condensate corresponds
to $\bar{\eta}=0$, whilst a lower gluon condensate corresponds to
$\bar{\eta}=1$ Carter .
Figure 3: Gluon condensation. An increase in the mean glueball field
$\bar{\eta}$ decreases the gluon condensate until it vanishes at the maximum
of $\bar{\eta}$.
### VI.4 Strong Running Coupling $\alpha_{s}$ and QCD $\beta$-Function
Comparing Eqs.(85) and (VI.3), we can relate
$\displaystyle\dfrac{\beta(g)}{2g}=-\eta G^{\prime}(\eta)\rightarrow$
$\displaystyle\beta(1/r^{2})=-2g\eta G^{\prime}(\eta).$ (95)
We can also extract the strong running coupling $\alpha_{s}$ using the
renormalization group theory Deur ,
$\displaystyle\beta(Q^{2})=Q^{2}\dfrac{d\alpha_{s}}{dQ^{2}},$ (96)
comparatively,
$\beta(\eta)\simeq-\eta\dfrac{d(G)}{d\eta}=-m_{\phi}^{2}\eta^{2}(r)=\beta(1/r^{2})\qquad\text{we
set}\qquad g=1.$ (97)
Therefore, the strong coupling can be identified as
$\alpha_{s}(\eta)=G(\eta)=\alpha_{s}(1/r^{2})$. The QCD $\beta$-function is
naturally a negative quantity, reflecting the asymptotic freedom of the
strong coupling. It also reveals the anti-screening behavior of the theory at
higher energies. The strong running coupling, on the other hand, gives
insight into the growing precision of hadron scattering experiments in the
high energy limit. At low energies, within the scale of the hadron mass, it
enhances the understanding of hadron structure, color confinement, and
hadronization. Now, substituting the solution for the glueball field $\eta$ and
expanding it for $r\rightarrow 0$, we obtain
$\displaystyle\alpha_{s}(1/r^{2})=\left[1-\dfrac{m_{\phi}^{2}r^{2}}{3}\right].$
(98)
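Eq. (98) follows from expanding $\alpha_{s}=G(\eta)$ with the UV solution; with $\rho a^{4}=1$ the prefactor collapses and $\alpha_{s}=\sin^{2}(m_{\phi}r)/(m_{\phi}r)^{2}$. A brief sympy check (notation ours):

```python
import sympy as sp

r, m = sp.symbols('r m_phi', positive=True)

# alpha_s = G(eta) with eta = a sin(m_phi r)/(alpha m_phi r) and rho a^4 = 1
alpha_s = sp.sin(m * r)**2 / (m * r)**2

# Small-r expansion reproduces Eq. (98)
expansion = sp.series(alpha_s, r, 0, 3).removeO()
assert sp.simplify(expansion - (1 - m**2 * r**2 / 3)) == 0
```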
Also, associating the spacelike momentum $Q$ with $Q\equiv 1/r$, we have
$\alpha_{s}(Q^{2})=\left[1-\dfrac{m_{\phi}^{2}}{3Q^{2}}\right].$ (99)
In terms of the four-vector momentum i.e. $Q^{2}\equiv-q^{2}$, the strong
coupling becomes
$\alpha_{s}(q^{2})=\left[1+\dfrac{m_{\phi}^{2}}{3q^{2}}\right],$ (100)
and the $\beta$-function becomes,
$\beta(q^{2})=-2\left[1+\dfrac{m_{\phi}^{2}}{3q^{2}}\right].$ (101)
We observe that in the limit $q^{2}\rightarrow 0$ the strong coupling develops
a singularity, generally referred to as the Landau singularity, which marks
the failure of perturbative QCD. The singularity is attributed to
self-interacting gluons with hadronic degrees of freedom in the IR regime,
leading to color confinement Olive . In that case, the gluons dynamically
acquire mass at $q^{2}\rightarrow 0$, i.e. $q^{2}\cong m_{A}^{2}$ Badalian ; Yu ,
which increases the coupling without bound. Hence, the singularity can be
removed by fixing a freezing point Badalian ; Badalian1 for the strong running
coupling at $q^{2}\cong m_{A}^{2}$, i.e.,
$\alpha_{s}(q^{2})=\left[1+\dfrac{m_{\phi}^{2}}{3(q^{2}+m^{2}_{A})}\right],$
(102)
and
$\beta(q^{2})=-2\left[1+\dfrac{m_{\phi}^{2}}{3(q^{2}+m^{2}_{A})}\right].$
(103)
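The frozen values quoted below follow numerically from Eqs. (102) and (103) together with $m_{A}^{2}=m_{\phi}^{2}/4$ from Eq. (93); the dependence on $m_{\phi}$ cancels at $q^{2}=0$. A minimal sketch (function names are ours):

```python
def alpha_s_frozen(q2, m_phi2):
    """Frozen running coupling of Eq. (102), with m_A^2 = m_phi^2 / 4 (Eq. 93)."""
    m_A2 = m_phi2 / 4.0
    return 1.0 + m_phi2 / (3.0 * (q2 + m_A2))

def beta_frozen(q2, m_phi2):
    """beta-function of Eq. (103)."""
    return -2.0 * alpha_s_frozen(q2, m_phi2)

# At q^2 -> 0: alpha_s(0) = 1 + 4/3 = 7/3 ~ 2.33 and beta(0) = -14/3 ~ -4.67,
# matching the values read off the freezing-point plots
a0 = alpha_s_frozen(0.0, 1.0)
b0 = beta_frozen(0.0, 1.0)
```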
Thus, the so-called gluon mass is more pronounced at $q^{2}\rightarrow 0$,
and its effect gradually fades in the limit $q^{2}\rightarrow\infty$
Cornwall1 . A recent analysis of this subject is contained in Deur .
(a) Left Panel (b) Right Panel
Figure 4: Strong running coupling $\alpha_{s}$ (left) and $\beta$-function
(right) against $q$, with a Landau ghost pole. The graphs show unphysical
behavior at $q\rightarrow 0$, due to the presence of a dynamically generated
gluon mass. The self-interacting gluons and the strong force between them are
capable of creating bound states with hadronic degrees of freedom. These
graphs depict the behavior of $\alpha_{s}$ and $\beta$ observed in pQCD.
(a) Left Panel (b) Right Panel
Figure 5: Strong running coupling $\alpha_{s}$ (left) and $\beta$-function
(right) against $q$, with a freezing point at $q\rightarrow 0$. Here, the
Landau singularity has been removed by introducing the gluon mass $m_{A}$.
The presence of the gluon mass is more pronounced at $q\rightarrow 0$ and
gradually vanishes in the limit $q\rightarrow\infty$. Consequently, from the
graphs, $\alpha_{s}(0)\simeq 2.3$ and $\beta(0)\simeq-4.7$.
## VII Conclusion
We modified the DBI action to develop models capable of mimicking the
phenomenology of QCD theory using an Abelian gauge field. The models were
based on the behavior of an open string with its endpoints on a D$p$-brane. In
studying color particles, the endpoints of the string serve as the source and
sink of the color charges. Additionally, the models are efficient for
investigating glueballs when the tachyons condense and transform into
glueballs with real squared masses that keep them confined. Without fermions,
the models are suitable for studying the bound states of gluons and the
dynamics of glueballs. To study the dynamics of quarks, we showed how the
model can be coupled systematically to standard model fermions; here, the
particles involved are a glueball-fermion mix in a confined state. Moreover, we
demonstrated that the color dielectric function coupled with the Abelian gauge
field, which is capable of causing color confinement, vanishes automatically
when we introduce the non-Abelian gauge field in Sec. V.2. Consequently, the
role of $G(\phi)$ coupled with the Abelian gauge field is to induce
non-Abelian characteristics.
We also developed one of the models to demonstrate its ability to explain some
basic characteristics of strong interactions. We derived the linear and
Cornell-like potentials that are used to describe color particles in
phenomenological QCD. The linear potential is motivated by the string model of
hadrons, whilst the Cornell potential is motivated by lattice QCD calculations.
The Cornell potential is particularly important in QCD because it shows both
the asymptotic freedom and the color-confining behavior exhibited by the model.
We also calculated the strong running coupling $\alpha_{s}$ and the QCD
$\beta$-function and compared their behavior with that of traditional QCD
theory. Within the model framework, we are able to remove the unphysical
Landau ghost pole that occurs in the low energy region by assuming the
existence of a gluon mass $m_{A}$ in the low momentum region, $q^{2}\sim m^{2}_{A}$.
The model also leads to the determination of the gluon condensate and of how
the glueball field contributes to it.
Furthermore, the models can be discretized using the path integral formalism
and investigated in lattice field theory, given the required computational
resources. As observed in the model developed in Sec. VI, the linear and
Cornell-like potentials can be used to study hadron and quarkonia spectra.
Other hadron properties can also be studied from these models when the
appropriate spin contributions to the potential are added. Extending the
models to study the characteristics of particles at finite temperature will
pave the way for understanding chiral symmetry breaking and restoration,
confinement/deconfinement, and the quark-gluon-plasma phase transition.
Finally, the models can be applied to investigating physical systems such as
pions.
###### Acknowledgements.
This work was supported by Conselho Nacional de Desenvolvimento Científico e
Tecnológico (CNPq) project No.: 168546/2021-3, Brazil. F.A.B. would like to
thank CNPq, CAPES and CNPq/PRONEX/FAPESQ-PB (Grant No. 165/2018), for partial
financial support. F.A.B. also acknowledges support from CNPq (Grant No.
312104/2018-9), Brazil.
## References
* (1) J. H. Schwarz, _The Early Years of String Theory: A Personal Perspective_ , arXiv:0708.1917 [hep-th].
* (2) J. H. Schwarz, _Superstring theory_ , Phys. Rep. 89, 223 322 (1982).
* (3) M. B. Green, J. H. Schwarz and E. Witten, _Superstring Theory. Vol. I & II_, Cambridge University Press, Cambridge 1987.
* (4) J. Polchinski, _An Introduction to the Bosonic String_ , Cambridge University Press, Cambridge 2011.
* (5) C. V. Johnson, _D-Brane Primer_ , arXiv:hep-th/0007170.
* (6) B. Zwiebach, _A First Course in String Theory_ , Cambridge University Press, Cambridge 2004.
* (7) K. Becker, Melanie Becker and J. H. Schwarz, _String Theory and M-Theory: A Modern Introduction_ , Cambridge University Press, Cambridge 2006.
* (8) D. J. Gross and F. Wilczek, _Ultraviolet behavior of non-abelian gauge theories_ , Phys. Rev. Lett. 30, 1343 1346 (1973).
* (9) H. D. Politzer, _Reliable perturbative results for strong interactions?_ Phys. Rev. Lett. 30, 1346 1349 (1973).
* (10) C. Quigg, _Gauge Theories of the Strong, Weak, and Electromagnetic Interactions_ , Princeton University Press, USA, 2013.
* (11) L. O’Raifeartaigh, _The dawning of gauge theory_ , Princeton Univ. Press, Princeton, NJ, USA, 1997.
* (12) P. D. B. Collins, A. D. Martin and E. J. Squires, _Particle Physics and Cosmology_ , A John Wiley & Sons, Inc., (1989) Canada.
* (13) O. Aharony, S. S. Gubser, J. Maldacena, H. Ooguri and Y. Oz, _Large N Field Theories, String Theory and Gravity_ , Phys. Rept. 323, 183 386 (2000) arXiv:hep-th/9905111.
* (14) J. M. Maldacena, _The Large N Limit of Superconformal Field Theories and Supergravity_ , Adv. Theor. Math. Phys. 2, 231 252 (1998) arXiv:hep-th/9711200.
* (15) M. Graña and H. Triendl, _String Theory Compactifications_ , SpringerBriefs in Physics (2017); M. Graña, _Flux compactifications in string theory: a comprehensive review_ , Phys. Rept. 423 91 158 (2006) arXiv:hep-th/0509003.
* (16) J. Polchinski, _Introduction to Gauge/Gravity Duality_ , arXiv:1010.6134 [hep-th].
* (17) S. De Haro, D. R. Mayerson and J. N. Butterfield, _Conceptual Aspects of Gauge/Gravity Duality_ , Foundations of Physics, 46, 1381 1425 (2016) arXiv:1509.09231 [physics.hist-ph].
* (18) C. Amsler, _The Quark Structure of Hadrons_ , Springer Nature Switzerland AG, 2018.
* (19) A. Sen, _Descent Relations Among Bosonic D-branes_ , Int. J. Mod. Phys. A 14, 4061 4078 (1999) arXiv:hep-th/9902105.
* (20) F. Bigazzi and A. L. Cotrone, _String theory meets QCD_ , Frascati Phys. Ser. 54, 378 385 (2012).
* (21) K. Bardakci, _Dual models and spontaneous symmetry breaking_ , Nucl. Phys. B 68, 331 348 (1974); K. Bardakci and M. B. Halpern, _Explicit Spontaneous Breakdown in a Dual Model_ , Phys. Rev. D 10, 4230 (1974); K. Bardakci and M.B. Halpern, _Explicit Spontaneous Breakdown in a Dual Model. 2. N Point Functions_ , Nucl. Phys. B 96, 285 306 (1975); K. Bardakci, _Spontaneous Symmetry Breakdown in the Standard Dual String Model_ , Nucl. Phys. B 133, 297 314 (1978).
* (22) A. Sen, _Stable Non-BPS Bound States of BPS D-branes_ , JHEP 9808, 010 (1998) arXiv:hep-th/9805019.
* (23) A. Sen, _BPS D-branes on Non-supersymmetric Cycles_ , JHEP 9812 021 (1998); _Tachyon Condensation on the Brane Antibrane System_ , JHEP 9808, 012 (1998) arXiv:hep-th/9805170; _Type I D-particle and its Interactions_ , JHEP 9810, 021 (1998).
* (24) A. Sen, _$\text{SO}(32)$ Spinors of Type I and Other Solitons on Brane-Antibrane Pair_, JHEP 9809 023(1998).
* (25) A. Recknagel and V. Schomerus, _Boundary Deformation Theory and Moduli Spaces of D-Branes_ , Nucl. Phys. B 545, 233 282 (1999) arXiv:hep-th/9811237; C. G. Callan, I. R. Klebanov, A. W. W. Ludwig and J. M. Maldacena, _Exact Solution of a Boundary Conformal Field Theory_ , Nucl. Phys. B 422, 417 448 (1994) arXiv:hep-th/9402113; J. Polchinski and L. Thorlacius, _Free Fermion Representation of a Boundary Conformal Field Theory_ , Phys. Rev. D 50, 622 626 (1994) arXiv:hep-th/9404008.
* (26) J. A. Harvey, D. Kutasov and E. J. Martinec, _On the relevance of tachyons_ , arXiv:hep-th/0003101; P. Fendley, H. Saleur and N.P. Warner, _Exact solution of a massless scalar field with a relevant boundary interaction_ , Nucl. Phys. B 430, 577 596 (1994) arXiv:hep-th/9406125; J. Majumder and A. Sen, _Vortex Pair Creation on Brane-Antibrane Pair via Marginal Deformation_ , JHEP 0006, 010 (2000) arXiv:hep-th/0003124.
* (27) A. Sen, _Universality of the Tachyon Potential_ , JHEP 9912, 027 (1999), arXiv:hep-th/9911116.
* (28) A. Sen and B. Zwiebach, _Tachyon condensation in string field theory_ , JHEP 0003, 002 (2000) arXiv:hep-th/9912249.
* (29) A. Kostelecky and R. Potting, _Expectation Values, Lorentz Invariance, and CPT in the Open Bosonic String_ , Phys. Lett. B 381, 89 96 (1996) arXiv:hep-th/9605088; N. Berkovits, _The Tachyon Potential in Open Neveu-Schwarz String Field Theory_ , JHEP 0004, 022 (2000) arXiv:hep-th/0001084; W. Taylor, _D-brane effective field theory from string field theory_ , Nucl. Phys. B 585, 171 192 (2000) arXiv:hep-th/0001201; N. Moeller and W. Taylor, _Level truncation and the tachyon in open bosonic string field theory_ , Nucl. Phys. B 583, 105 144 (2000) arXiv:hep-th/0002237; J. A. Harvey and P. Kraus, _D-Branes as Unstable Lumps in Bosonic Open String Field Theory_ , JHEP 0004, 012 (2000) arXiv:hep-th/0002117; R. de Mello Koch, A. Jevicki, M. Mihailescu and R. Tatar, _Lumps and P-branes in Open String Field Theory_ ; Phys. Lett. B 482, 249 254 (2000) arXiv:hep-th/0003031; N. Moeller, A. Sen and B. Zwiebach, _D-branes as Tachyon Lumps in String Field Theory_ ; JHEP 0008, 039 (2000) arXiv:hep-th/0005036.
* (30) V. A. Kostelecky and S. Samuel, _The Static Tachyon Potential in the Open Bosonic String Theory_ , Phys. Lett. B 207, 169 173 (1988).
* (31) N. Berkovits, A. Sen and B. Zwiebach, _Tachyon condensation in superstring field theory_ , Nuc. Phys. B 587, 147 178 (2000).
* (32) J. A. Harvey, Per Kraus, F. Larsen and E. J. Martinec, _D-branes and Strings as Non-commutative Solitons_ , JHEP 0007, 042 (2000) arXiv:hep-th/0005031.
* (33) R. Gopakumar, S. Minwalla and A. Strominger, _Noncommutative Solitons_ , JHEP 0005, 020 (2000) arXiv:hep-th/0003160; K. Dasgupta, S. Mukhi and G. Rajesh, _Noncommutative Tachyons_ , JHEP 0006, 022 (2000) arXiv:hep-th/0005006; J. A. Harvey, P. Kraus and F. Larsen, _Exact Noncommutative Solitons_ , JHEP 0012, 024 (2000) arXiv:hep-th/0010060.
* (34) A. A. Gerasimov and S. L. Shatashvili, _On Exact Tachyon Potential in Open String Field Theory_ , JHEP 0010, 034 (2000) arXiv:hep-th/0009103; D. Kutasov, M. Marino and G. Moore, _Some Exact Results on Tachyon Condensation in String Field Theory_ , JHEP 0010, 045 (2000) arXiv:hep-th/0009148; D. Ghoshal and A. Sen, _Normalization of the Background Independent Open String Field Theory Action_ , JHEP 0011, 021 (2000) arXiv:hep-th/0009191; J. A. Minahan and B. Zwiebach, _Field theory models for tachyon and gauge field string dynamics_ , JHEP 0009, 029 (2000) arXiv:hep-th/0008231; _Effective Tachyon Dynamics in Superstring Theory_ , JHEP 0103, 038 (2001) arXiv:hep-th/0009246.
* (35) A. Sen, _Fundamental Strings in Open String Theory at the Tachyonic Vacuum_ , J. Math. Phys. 42, 2844 2853 (2001) arXiv:hep-th/0010240.
* (36) E. Witten, _Bound States Of Strings And p-Branes_ , Nucl. Phys. B 460,335 350 (1996) arXiv:hep-th/9510135.
* (37) U. Lindström and R. von Unge, _A Picture of D-branes at Strong Coupling_ , Phys. Lett. B 403, 233 238 (1997) arXiv:hep-th/9704051; U. Lindström, M. Zabzine and A. Zheltukhin, _Limits of the D-brane action_ , JHEP 9912, 016 (1999) arXiv:hep-th/9910159; U. Lindström and M. Zabzine, _Strings at the Tachyonic Vacuum_ , JHEP 0103, 014 (2001) arXiv:hep-th/0101213.
* (38) H. Gustafsson and U. Lindstrom, _A Picture of D-branes at Strong Coupling II. Spinning Partons_ , Phys. Lett. B 440 43 49 (1998) arXiv:hep-th/9807064.
* (39) A. Sen, _Tachyon Dynamics in Open String Theory_ , Int. J. Mod. Phys. A 20,5513 5656 (2005) arXiv:hep-th/0410103; _Tachyons in String Theory_ , Ann. Henri Poincaré 4, Suppl. 1, S31 S42 (2003).
* (40) E. Witten, _Non-commutative geometry and string field theory_ , Nuc. Phys. B 268, 253 294 (1986); N. Seiberg and E. Witten, _String Theory and Noncommutative Geometry_ , JHEP 9909, 032, (1999) arXiv:hep-th/9908142.
* (41) O. M. P. Bilaniuk, V. K. Deshpande, and E. C. G. Sudarshan, _“Meta” Relativity_ , Am. J. Phys. 30, 718 (1962); O. M. P. Bilaniuk and E.C.G. Sudarshan, _Particles beyond the light barrier_ , Phys. Today 22N5, 43 51 (1969).
* (42) C. M. Hull and P. K. Townsend, _Unity of Superstring Dualities_ , Nucl. Phys. B 438, 109 137 (1995) arXiv:hep-th/9410167.
* (43) E. Witten, _String Theory Dynamics In Various Dimensions_ , Nucl. Phys. B 443, 85 126 (1995) arXiv:hep-th/9503124.
* (44) D. J. Gross, A. Neveu, J. Scherk, and J. H. Schwarz, _Renormalization and Unitarity in the Dual-Resonance Model_ , Phys. Rev. D 2, 697 (1970).
* (45) C. Lovelace, _Pomeron form-factors and dual Regge cuts_ , Phys. Lett. B 34, 500 506 (1971).
* (46) B. Zwiebach, _Oriented Open-Closed String Theory Revisited_ , Ann. Phys. 267 193 248 (1998), arXiv:hep-th/9705241.
* (47) J. Hughes, J. Liu and J. Polchinski, _Supermembranes_ , Phys. Lett. B 180, 370 (1986).
* (48) M. Born and L. Infeld, _Foundations of the new field theory_ , Proc. Roy. Soc. Lond. A A 144, 425 451 (1934).
* (49) G. W. Gibbons, _Born-Infeld particles and Dirichlet p-branes_ , Nucl. Phys. B 514, 603 639 (1998) arXiv:hep-th/9709027.
* (50) P. A. M. Dirac, _An Extensible model of the electron_ , Proc. Roy. Soc. Lond. A 268, 57 67 (1962).
* (51) J. Polchinski, S. Chaudhuri and C. V. Johnson, _Notes on D-Branes_ , arXiv:hep-th/9602052; J. Polchinski, _TASI Lectures on D-Branes_ , arXiv:hep-th/9611050.
* (52) W. Taylor, _Lectures on D-branes, Gauge Theory and M(atrices)_ , arXiv:hep-th/9801182.
* (53) R. G. Leigh, _Dirac-Born-Infeld Action from Dirichlet Sigma Model_ , Mod. Phys. Lett. A 4, 2767 (1989).
* (54) E. Bergshoeff, E. Sezgin, P.K. Townsend, _Supermembranes and Eleven-Dimensional Supergravity_ , Phys. Lett. B 189, 75 78 (1987).
* (55) P. K. Townsend, _The eleven-dimensional supermembrane revisited_ , Phys. Lett. B 350, 184 187 (1995) arXiv:hep-th/9501068.
* (56) M. R. Garousi, _Tachyon couplings on non-BPS D-branes and Dirac-Born-Infeld action_ , Nucl. Phys. B 584, 284 299 (2000) arXiv:hep-th/0003122.
* (57) J. Kluson, _Proposal for non-BPS D-brane action_ , Phys. Rev. D 62 126003 (2000) arXiv:hep-th/0004106.
* (58) A. A. Tseytlin, _Born-Infeld action, supersymmetry and string theory_ , hep-th/9908105 [hep-th].
* (59) W. Taylor, _Lectures on D-branes, tachyon condensation, and string field theory_ , arXiv:hep-th/0301094.
* (60) W. Taylor and B. Zwiebach, _D-Branes, Tachyons, and String Field Theory_ , arXiv:hep-th/0311017.
* (61) A. Sen, _Non-BPS States and Branes in String Theory_ , arXiv:hep-th/9904207.
* (62) C. G. Callan, C. Lovelace, C. R. Nappi, and S. A. Yost, _Loop corrections to superstring equations of motion_ , Nucl. Phys. B 308, 221 284 (1988). A. Abouelsaood, C. G. Callan, C. R. Nappi and S. A. Yost, _Open strings in background gauge fields_ , Nucl. Phys. B 280, 599 624 (1987).
* (63) M. R. Garousi, _Slowly varying tachyon and tachyon potential_ , JHEP 0305 05 (2003) arXiv:hep-th/0304145.
* (64) T. Banks, W. Fischler, S. H. Shenker and L. Susskind, _M Theory As A Matrix Model: A Conjecture_ , Phys. Rev. D 55, 5112 5128 (1997) arXiv:hep-th/9610043.
* (65) D. Lust, _Intersecting Brane Worlds — A Path to the Standard Model?_ , Class. Quant. Grav. 21, S1399 1424 (2004) arXiv:hep-th/0401156; I. Antoniadis, E. Kiritsis, J. Rizos and T. N. Tomaras, _D-branes and the Standard Model_ , Nucl. Phys. B 660, 81 115 (2003) arXiv:hep-th/0210263; L. E. Ibanez, F. Marchesano and R. Rabadan, _Getting just the Standard Model at Intersecting Branes_ , JHEP 0111, 002 (2001) arXiv:hep-th/0105155; A. Chatzistavrakidis, H. Steinacker and G. Zoupanos, _Intersecting branes and a standard model realization in matrix models_ , arXiv:1107.0265 [hep-th].
* (66) G. ’t Hooft, _A Planar Diagram Theory for Strong Interactions_ , Nucl. Phys. B 72, 461 (1974).
* (67) E. Witten, _Baryons in the $1/N_{c}$ Expansion_, Nucl. Phys. B 160, 57 115 (1979).
* (68) D. Mateos, _String Theory and Quantum Chromodynamics_ , Class. Quant. Grav. 24, S713 S740 (2007) arXiv:0709.1523 [hep-th].
* (69) J. Dai, R. G. Leigh and J. Polchinski, _New Connections Between String Theories_ , Mod. Phys. Lett. A 4, 2073 2083 (1989).
* (70) J. Polchinski, _Dirichlet-Branes and Ramond-Ramond Charges_ , Phys. Rev. Lett. 75, 4724 4727 (1995) arXiv:hep-th/9510017.
* (71) C. G. Callan Jr. and J. M. Maldacena, _Brane Dynamics From the Born-Infeld Action_ , Nucl. Phys. B 513, 198 212 (1998) arXiv:hep-th/9708147.
* (72) S. -J. Rey, J.-T. Yee, _Macroscopic strings as heavy quarks: Large-N gauge theory and anti-de Sitter supergravity_ , Eur. Phys. J. C 22, 379 394 (2001) arXiv:hep-th/9803001.
* (73) S. Lee, A. Peet and L. Thorlacius, _Brane-Waves and Strings_ , Nucl. Phys. B 514, 161 176 (1998) arXiv:hep-th/9710097.
* (74) E. Guendelman, A. Kaganovich, E. Nissimov and S. Pacheva, _Space-Time Compactification/Decompactification Transitions Via Lightlike Branes_ , Gen. Rel. Grav. 43, 1487 1513 (2011) arXiv:1007.4893 [hep-th].
* (75) M. Gell-Mann and B. Zwiebach, _Spacetime compactification induced by scalars_. Phys. Lett. B 141, 333 (1984).
* (76) L. Randall and R. Sundrum, _An Alternative to Compactification_ , Phys. Rev. Lett. 83, 4690 4693 (1999) arXiv:hep-th/9906064.
* (77) K. Shiraishi, _Compactification of spacetime in $SU(\infty)$ Yang-Mills theory_, Classical and Quantum Gravity 6, 2029 2034 (1989) arXiv:1301.6213 [hep-th].
* (78) F. A. Brito, M.L.F. Freire, W. Serafim, _Confinement and screening in tachyonic matter_ , Eur.Phys.J. C 74, 12 3202 (2014).
* (79) A. Issifu and F. A. Brito, _The (De)confinement Transition in Tachyonic Matter at Finite Temperature_ , Adv. Hig. Ener. Phys. 2019, 9450367 (2019).
* (80) Adamu Issifu, Julio C.M. Rocha, Francisco A. Brito, _Confinement of Fermions in Tachyon Matter at Finite Temperature_ , Adv. High. Ener. Phys. 2021, 6645678 (2021), arXiv:2012.15102 [hep-ph].
* (81) A. Issifu and F. A. Brito, _Confinement of Fermions in Tachyon Matter_ , Adv. High. Ener. Phys. 2020, 1852841 (2020).
* (82) A. Issifu and F. A. Brito, _An Effective Model for Glueballs and Dual Superconductivity at Finite Temperature_ , Adv. High. Ener. Phys. 2021, 5658568 (2021), arXiv:2105.01013 [hep-ph].
* (83) M. Rosina, A. Schuh and H. J. Pirner, _Lattice QCD and the soliton bag model_ , Nucl. Phys. A 448, 557 566 (1986).
* (84) D. Kharzeev, E. Levin, and K. Tuchin, _Classical gluodynamics in curved space–time and the soft pomeron_ , Phys. lett. B 547, 21 30 (2002).
* (85) R. Dick, _Vector and scalar confinement in gauge theory with a dilaton_ , Phys. Lett. B 409, 321 324 (1997).
# Optimal control from inverse scattering via single-sided focusing
Michael D. Schneider<EMAIL_ADDRESS>Caleb Miller, George F. Chapline,
Jane Pratt, and Dan Merl
Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA 94550, USA.
###### Abstract
We describe an algorithm to solve Bellman optimization that replaces a sum
over paths determining the optimal cost-to-go by an analytic method localized
in state space. Our approach follows from the established relation between
stochastic control problems in the class of linear Markov decision processes
and quantum inverse scattering. We introduce a practical online computational
method to solve for a potential function that informs optimal agent actions.
This approach suggests that optimal control problems, including those with
many degrees of freedom, can be solved with parallel computations.
††preprint: LLNL-JRNL-841953
## Introduction
Optimal control of noisy systems [1, 2] arises in numerous applications
including robotics [3], production planning [4], power systems [5], traffic
management [6], and financial investments [7]. Indeed the introduction of
noise to intrinsically noiseless physical systems allows for efficient
solution by optimal control methods. In addition, optimal control is closely
related to reinforcement learning (RL) [8], which has seen a surge of research
activity and applications in the past decade, spurred in part by computational
advances and new more scalable solution algorithms. New methods to solve
optimal control problems might thus have wide ranging impacts in both
traditional control applications and RL-based machine learning problems.
In the dynamic programming approach to solving optimal control problems [9],
one seeks to optimize the expected accumulated ‘cost’ along state space
trajectories that reach a target state. The immediate costs incurred can
include terms for both state and action costs [e.g., 10]. The choice of cost
functions has often been determined by mathematical tractability or by
heuristic assumptions about the desired performance of the control solutions.
In robotics applications with high-dimensional state spaces, cost function
specification has been adapted to the problem of ‘trajectory planning’ where
the control solution is modeled as deviations from an underlying dynamical
model [11, 12]. A dynamical systems view of optimally controlled trajectories
can also be related to a free energy principle for the state cost [13]. In
this letter, we take a similar dynamical systems perspective and show that the
immediate state cost function in the optimal control cost can be associated
with a known least-action principle for a class of stochastic optimal control
problems. The derived dynamical model immediately yields optimally controlled
trajectories, obviating the need for iterative solutions as in dynamic
programming.
We consider control problems in the class of Linear Markov Decision Processes
(LMDPs) [14, 15, 16]. LMDPs get their name because the Hamilton-Jacobi-Bellman
equation for the optimal cost-to-go becomes a linear differential operator
after a suitable exponential transform of the cost-to-go function. In the
continuous time case, LMDP models can be solved using path integral Monte
Carlo techniques [17, 18, 19], while discrete time formulations can be solved
using a variety of methods including eigenvalue problems and temporal
difference reinforcement learning [20]. In this work, we exploit the
relationship of LMDP models to the Schrödinger equation [21, 17, 22, 23, 18,
24] to reinterpret the control cost function in terms of a dynamic potential.
Similar approaches linking integrable systems to stochastic optimal control of
mean field games was presented in Swiecicki _et al._ [25] and control of
ensembles of non-interacting entities in Bakshi _et al._ [26]. One potential
advantage of introducing a Schrödinger representation of optimal control is
that the Schrödinger solutions automatically explore all possible control
paths, which can be seen in the path integral formulation of Schrödinger
equation solutions [27, 18].
Quantum inverse scattering defines a one-to-one relationship between
asymptotic ‘scattering data’ and the potential [e.g., 28]. Rose [29, 30, 31]
showed that Schrödinger scattering solutions can be focused such that the
transmitted scattering wave is a Dirac $\delta$-function at a specified later
time. These ‘single-sided focusing’ methods give a scattering interpretation
to the topic of Schrödinger bridges [32], which derive from a problem
originally introduced by Schrödinger [33] to provide a probabilistic
derivation of his wave equation. Carroll [34] derived a similar result by
considering the relationships between spectral representations of related
families of second-order differential operators. Dyson [35, 36] discovered a
relationship between quantum inverse scattering methods in two-dimensions and
optimal feedback control of a noisy temporal signal. The single-sided focusing
method of Rose considered a known potential and sought solutions for the
incident wave that would be focused. In contrast, here we consider a known
incident wave and seek the potential, and thus implicitly the state cost, that
causes the transmitted wave to be focused to a narrow peak.
## Schrödinger control problems
LMDP control problems are described by the following process model for a
$p$-dimensional state $\mathbf{x}$,
$d\mathbf{x}=\mathbf{b}(t,\mathbf{x}(t))\,dt+\mathsf{C}\mathbf{a}(t,\mathbf{x}(t))\,dt+d\xi,$
(1)
where $\mathbf{b}$ is a $p$-dimensional dynamical drift, $\mathsf{C}$ is a
$p\times n$ control projection matrix, $\mathbf{a}$ is an $n$-dimensional
control variable, and $d\xi$ describes a $p$-dimensional Wiener process with
$\mathbb{E}(d\xi\,d\xi^{\top})=\nu\,dt$ for a $p\times p$ covariance $\nu$. We seek to
find a control $\mathbf{a}(t,\mathbf{x}(t))$, $t_{0}=0<t<t_{f}$, such that the following
cost function is minimized [18],
$L(t_{0},\mathbf{x}_{f},\mathbf{a}(\cdot))\equiv\mathbb{E}_{P(t_{0},\mathbf{x}_{0})}\Biggl{[}q_{f}(\mathbf{x}(t_{f}))+\\\
\int_{t_{0}}^{t_{f}}dt\,\left(\frac{1}{2}\mathbf{a}^{\top}(t,\mathbf{x}(t))\mathsf{m}^{-1}\mathbf{a}(t,\mathbf{x}(t))+q(t,\mathbf{x}(t))\right)\Biggr{]},$
(2)
where $q$ is an arbitrary state cost function, $q_{f}$ is an asserted final
time state cost, $\mathsf{m}$ is a matrix of control cost weights, and
$P(t_{0},\mathbf{x}_{0})$ denotes the distribution of paths under the dynamics
of Equation (1) that start at state $\mathbf{x}_{0}$ at time $t_{0}$. The
action cost is a quadratic function of $\mathbf{a}$, which can be interpreted
as the continuous state space limit of a Kullback–Leibler divergence action
cost that appears in the discrete state space formulation of LMDPs [37].
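As a concrete check of Equations (1) and (2), the following sketch (a hypothetical helper `simulate_cost`, not from the paper) estimates the expected cost by Euler-Maruyama simulation of the controlled SDE; the drift $\mathbf{b}$, control $\mathbf{a}$, and costs $q$, $q_{f}$ are user-supplied callables.

```python
import numpy as np

def simulate_cost(b, a, C, m_inv, q, q_f, x0, nu, t_f,
                  dt=1e-3, n_paths=200, seed=0):
    """Monte Carlo estimate of the LMDP cost of Eq. (2) under the
    controlled dynamics of Eq. (1), discretized by Euler-Maruyama.
    b(t, x), a(t, x): drift (p-dim) and control (n-dim) callables;
    C: p x n projection; m_inv: n x n inverse control-cost weights;
    q(t, x), q_f(x): running and final state costs; nu: noise covariance."""
    rng = np.random.default_rng(seed)
    p = x0.shape[0]
    L_noise = np.linalg.cholesky(nu)        # sample d(xi) ~ N(0, nu dt)
    n_steps = int(round(t_f / dt))
    x = np.tile(x0, (n_paths, 1))
    cost = np.zeros(n_paths)
    for k in range(n_steps):
        t = k * dt
        act = np.array([a(t, xi) for xi in x])       # (n_paths, n)
        drift = np.array([b(t, xi) for xi in x])     # (n_paths, p)
        # Accumulate quadratic action cost and running state cost.
        cost += dt * (0.5 * np.einsum('ij,jk,ik->i', act, m_inv, act)
                      + np.array([q(t, xi) for xi in x]))
        dxi = rng.standard_normal((n_paths, p)) @ L_noise.T * np.sqrt(dt)
        x = x + drift * dt + act @ C.T * dt + dxi
    cost += np.array([q_f(xi) for xi in x])          # final-time cost
    return cost.mean()
```

With zero drift, zero control, and $q_{f}(x)=x^{2}$ in one dimension, the estimate approaches $x_{0}^{2}+\nu t_{f}$, the variance accumulated by the Wiener process.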
Minimizing the expected cost, $L$, with respect to the choice of actions
defines the optimal cost-to-go function, which is the primary objective of the
optimal control calculation,
$J(t,\mathbf{x})\equiv\underset{\mathbf{a}(t\rightarrow t_{f})}{\rm min}\
L(t,\mathbf{x},\mathbf{a}).$ (3)
Then, substituting the optimal control variable $\mathbf{a}(t,\mathbf{x}(t))$
leads to the Hamilton-Jacobi-Bellman (HJB) equation [16] that can be used to
determine the optimal cost-to-go function,
$-\partial_{t}J=-\frac{1}{2}\left(\nabla J\right)^{\top}\mathsf{C}\mathsf{m}^{-1}\mathsf{C}^{\top}\nabla J+\mathbf{b}\cdot\nabla J+{\rm Tr}\left(\frac{\nu}{2}\Delta J\right)+q,$ (4)
with boundary condition $J(t_{f},\mathbf{x})=q_{f}(\mathbf{x})$. By applying a
Cole-Hopf transform [38, 39], we obtain an expression for the desirability
function [15], which represents a partition function over optimally controlled
stochastic paths [18],
$z(t,\mathbf{x})\equiv e^{-J(t,\mathbf{x})/\lambda}.$ (5)
With this transform, the HJB equation becomes linear in $z$,
$\partial_{t}z+\frac{1}{2}{\rm Tr}\left(\nu\Delta
z\right)+\mathbf{b}\cdot\nabla z-\frac{q}{\lambda}z=0.$ (6)
Here we have taken the noise covariance
$\nu=\lambda\mathsf{C}\mathsf{m}^{-1}\mathsf{C}^{\top}$, which determines the
scalar parameter $\lambda$ in equation (5) from $\nu$, the control cost
weights $\mathsf{m}$ and control projection matrix $\mathsf{C}$ [18].
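The practical payoff of the transform is that $J$ and its gradient, and hence the optimal control $\mathbf{a}^{*}=-\mathsf{m}^{-1}\mathsf{C}^{\top}\nabla J$ (up to the sign conventions above), can be read directly off the desirability function. A minimal numerical illustration, assuming $\lambda=1$ and a hand-picked sample $J$:

```python
import numpy as np

lam = 1.0                          # lambda, the noise/cost scale
x = np.linspace(-3.0, 3.0, 601)
J = 0.5 * x**2                     # a hand-picked sample cost-to-go
z = np.exp(-J / lam)               # desirability, Eq. (5)

# grad J = -lam * grad z / z, so the optimal control
# a* = -m^{-1} C^T grad J can be read off z directly.
dz = np.gradient(z, x)
dJ = np.gradient(J, x)
```

The round trip $J=-\lambda\log z$ and the gradient identity hold to finite-difference accuracy on the interior of the grid.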
As the next step in identifying the Schrödinger equation related to this
control problem, define a new transform [21, 27, 40],
$z(t,\mathbf{x})=\exp\left(R(t,\mathbf{x})-S(t,\mathbf{x})/\lambda\right).$
(7)
Substituting Equation (7) back into Equation (6) we find the sum of two
expressions,
$0=-\frac{1}{\lambda}\Biggl{[}\partial_{t}S+\mathbf{b}\cdot\nabla
S-\frac{1}{2}\left(\nabla
S\right)^{\top}\mathsf{C}\mathsf{m}^{-1}\mathsf{C}^{\top}\nabla S\\\
+\frac{1}{2}{\rm Tr}\left(\nu\Delta S\right)+\tilde{q}\Biggr{]}\\\
+\Biggl{[}\partial_{t}R+(\mathbf{b}-\mathbf{v})\cdot\nabla R\Biggr{]},$ (8)
where $\tilde{q}\equiv q+\mathcal{B}$,
$\mathcal{B}\equiv-\frac{\lambda^{2}}{2}\left[{\rm
Tr}\left(\mathsf{m}^{-1}\Delta R\right)+\left(\nabla
R\right)^{\top}\mathsf{m}^{-1}\nabla R\right],$ (9)
is familiar as the Bohm potential in quantum mechanics [41], and
$\mathbf{v}\equiv\mathsf{m}^{-1}\nabla S$ is a current velocity, familiar from
Nelson stochastic mechanics [42]. The expression in the first set of brackets
of Equation (8) is thus one side of an HJB equation for $S$ but with a cost
function that is modified from that in the HJB equation for $J$ by the
addition of the Bohm potential. The expression in the second set of brackets
of Equation (8) is one side of a continuity equation, which we interpret in
terms of a state probability density $\mu\equiv\exp(2R)$. Equation (8) is
satisfied if the expressions in each line are separately equal to zero, giving
an HJB equation for $S$ and a continuity equation for $\mu$. But, in general
we only require the sum to be zero so that lack of conservation of state
probability can be compensated for by proportional deviations from HJB
optimality for the cost-to-go function $S$.
Because the Bohm potential is equal to the Fisher information for the density
$\mu$, we can see that high Fisher information, corresponding to a narrow
density, incurs more control cost than low information, corresponding to a
broad density. This trend is consistent with the theory of the linear Bellman
equation [18] where control becomes more expensive in regions of state space
less likely to be visited by the process noise.
With Equation (7) and separation of the two lines of Equation (8), we have
arrived at an interpretation of the desirability function $z$ in terms of a
diffusion process with state density $\mu$ that is controlled with optimal
cost-to-go $S$. Nagasawa [21] shows that just such a diffusion process can be
equivalently described by the Schrödinger equation upon making the
identifications,
$\displaystyle z=\exp(R-S/\lambda)$
$\displaystyle\leftrightarrow\psi=\exp(R-iS/\lambda),$ (10)
$\displaystyle\hat{z}=\exp(R+S/\lambda)$
$\displaystyle\leftrightarrow\psi^{\dagger}=\exp(R+iS/\lambda),$
$\displaystyle z\hat{z}$ $\displaystyle=\mu=\psi\psi^{\dagger}.$ (11)
The wavefunction $\psi$ thus defined satisfies the Schrödinger equation for a
particle in a magnetic field,
$i\hbar\partial_{t}\psi(t,\mathbf{x})=-\frac{\hbar}{2}{\rm
Tr}\left(\nu\Delta\psi\right)\\\
+i\hbar\mathbf{b}(t,\mathbf{x})\cdot\nabla\psi+V(t,\mathbf{x})\psi,$ (12)
where $\mathbf{b}$ is now interpreted as a magnetic vector potential and $V$
is a new scalar potential that is related to the state cost $\tilde{q}$ [21,
equation 4.14],
$V=\tilde{q}-2\partial_{t}S-\nu(\nabla S)^{2}-2\mathbf{b}\cdot\nabla S.$ (13)
By solving the Schrödinger equation with this potential $V$, we obtain both
functions $R$ and $S$, which yield the solution to the optimal control problem
in the class of linear Markov decision processes solved by the linear Bellman
equation.
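Concretely, given a wavefunction on a grid, $R$ and $S$ are recovered from the identification (10) as $R=\log|\psi|$ and $S=-\lambda\arg\psi$ (valid as long as $|S|/\lambda$ stays within one branch of the argument). A small round-trip check with made-up $R$ and $S$:

```python
import numpy as np

lam = 1.0
x = np.linspace(-2.0, 2.0, 101)
R0 = -0.25 * x**2                  # made-up amplitude exponent
S0 = 0.3 * x                       # made-up phase; |S0/lam| < pi avoids branch cuts
psi = np.exp(R0 - 1j * S0 / lam)   # psi = exp(R - i S / lambda), Eq. (10)

R = np.log(np.abs(psi))            # recover R from the modulus
S = -lam * np.angle(psi)           # recover S from the phase
```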
## Focusing principle
The relation in Equation (13), if interpreted directly, makes the Schrödinger
equation nonlinear and thus difficult to solve. However, if the potential $V$
is known, then we can more easily solve the linear Schrödinger equation.
The theory of quantum inverse scattering describes the problem of determining
an unknown potential from spatially asymptotic information about the
wavefunction, called the ‘scattering data’ [43]. We will identify the
scattering data with the starting and final target state distributions for a
finite horizon optimal control problem.
The unique potential that corresponds to given scattering data can be
determined from a least-action principle where the action is defined as the
integrated squared amplitude of the ‘tail’ of the transmitted incident wave
[29, 31],
$A\equiv\int d\mathbf{x}\,\left[\phi(t_{f},\mathbf{x})-\phi_{\rm
in}(\mathbf{x})\right]^{2},$ (14)
where $\phi_{\rm in}$ denotes an incident wavefront and $\phi(t,\mathbf{x})$
is the transmitted wave. By minimizing this action, an incident wave is
focused to a Dirac $\delta$-function upon passing over the potential, which is
called single-sided focusing [30].
The Euler-Lagrange equation for the action in Equation (14) is the Marčenko
integral equation [31], which in one dimension is,
$\Omega(-\tau;x_{f})+\Gamma(\tau+x_{f})+\int_{-\infty}^{\infty}d\tau^{\prime}\,\Gamma(\tau+\tau^{\prime})\Omega(-\tau^{\prime};x_{f})=0,$
(15)
for $\tau<x_{f}$ and $\Omega(-\tau)=0$ for $\tau>x_{f}$ and where $\Gamma$ is
the inverse Fourier transform of the reflection coefficient (assuming no bound
states) and $\Omega$ is a causal kernel that is related to the potential as,
$V(x)=2\partial_{x}\Omega(x,x).$ (16)
Focusing in two dimensions can be derived from an analogous two-dimensional
Marčenko equation [44, 45, 46].
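To make the inversion step concrete, here is a schematic discretization of a 1-D Marčenko solve, written in the standard GLM form $K(x,y)+\Gamma(x+y)+\int_{x}^{\infty}K(x,s)\,\Gamma(s+y)\,ds=0$ with $V(x)=-2\,\partial_{x}K(x,x)$; sign and normalization conventions differ from Equation (15), and the function name is ours, not the paper's.

```python
import numpy as np

def marchenko_potential(gamma, x):
    """Schematic 1-D Marchenko (Gel'fand-Levitan-Marchenko) inversion:
    for each grid point x_i, solve the discretized integral equation
        K(x_i, y) + Gamma(x_i + y) + h * sum_s K(x_i, s) Gamma(s + y) = 0
    for y >= x_i (simple rectangle rule), then recover
        V(x) = -2 d/dx K(x, x).
    `gamma` maps an array of arguments to kernel values; conventions
    (signs, bound-state terms) vary between references."""
    n = len(x)
    h = x[1] - x[0]
    K_diag = np.zeros(n)
    for i in range(n):
        s = x[i:]                                  # integration variable s >= x_i
        A = np.eye(len(s)) + h * gamma(s[:, None] + s[None, :])
        rhs = -gamma(x[i] + s)
        K_diag[i] = np.linalg.solve(A, rhs)[0]     # K(x_i, x_i)
    return -2.0 * np.gradient(K_diag, x)
```

A zero reflection kernel yields a zero potential, the expected free-space limit.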
In [30], it is shown that maximum focusing corresponds to focusing of the real
part of the wavefunction $\psi$. Focusing the real part of the wavefunction
implies focusing also of the probability current density, $j$, so that,
$j(t_{f},x)\equiv e^{2R(t_{f},x)}\partial_{x}S(t_{f},x)\approx
m\delta(x-x_{f}).$ (17)
Because the cost-to-go $S(t_{f},x)$ at the final time is asserted as a
boundary condition, the function $\partial_{x}S$ can be arbitrary. The only
way to ensure the current density $j$ is focused is if the state probability
density is focused,
$\mu(t_{f},x)=e^{2R(t_{f},x)}=\delta(x-x_{f}).$ (18)
By using the focusing principle to derive $V$ from given boundary conditions
for $\psi$, we can obtain $\tilde{q}$ from Equation (13) and then get the
original state cost $q=\tilde{q}-\mathcal{B}$. Thus, for the stochastic
finite-horizon optimal control problem defined in Equations (1) and (2) with
convex state cost $q_{f}(x)$ at final time $t_{f}$, there exists a unique
$q(x,t)$ that minimizes the variance in the final state at $t=t_{f}$ relative
to an asserted target value at $x=x_{f}$.
## Numerical algorithm
Inspired by the action in Equation (14), we define a ‘focusing metric’ for
numerical optimization of the potential $V$,
$\mathcal{F}(V)\equiv\sum_{\mathbf{x}}\left[{\rm Re}\left(\psi(t_{f},\mathbf{x};V)\right)-{\rm target}(\mathbf{x})\right]^{2},$
(19)
where $\psi$ is required to be a solution of the Schrödinger equation (12) and
the ‘target’ is an asserted function that is square-normalizable.
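A minimal evaluation of $\mathcal{F}$ might look as follows (a sketch with an assumed helper name `focusing_metric`, not from the paper; it takes $\mathbf{b}=0$, Dirichlet boundaries, and truncated-eigenbasis propagation as in the numerical example):

```python
import numpy as np

def focusing_metric(V, psi0, target, x, nu=1.0, hbar=1.0, t_f=0.6, n_modes=15):
    """Evaluate the focusing metric F(V) of Eq. (19) for a 1-D potential
    sampled on the grid x. The Schrodinger equation (with b = 0) is
    solved by expanding psi0 in the n_modes lowest eigenfunctions of
    H = -(hbar * nu / 2) d^2/dx^2 + V with Dirichlet boundaries."""
    h = x[1] - x[0]
    n = len(x)
    # Second-order finite-difference Laplacian (Dirichlet boundaries).
    lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
           + np.diag(np.full(n - 1, 1.0), 1)) / h**2
    H = -(hbar * nu / 2.0) * lap + np.diag(V)
    E, phi = np.linalg.eigh(H)                 # ascending eigenvalues
    E, phi = E[:n_modes], phi[:, :n_modes]
    c = phi.T @ psi0                           # expansion coefficients
    psi_tf = phi @ (np.exp(-1j * E * t_f / hbar) * c)   # propagate to t_f
    return float(np.sum((psi_tf.real - target) ** 2))
```

With the full eigenbasis and $t_{f}=0$ the wavefunction is reproduced exactly, so the metric vanishes when the target equals the initial state.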
The optimization of the metric $\mathcal{F}$ with respect to $V$ can be
accomplished by solving the inverse scattering problem via numerical
integration of the Marčenko equation. However, obtaining the reflection
coefficient (and any bound states) from the initial state distribution can be
numerically challenging. Instead, we use gradient descent to optimize the
metric $\mathcal{F}$, where the gradient operates through the numerical
eigensolver for the time-independent Schrödinger equation using the Jax
software library [47].
## Numerical example
To demonstrate the focusing algorithm, we adopt the common test problem of an
agent navigating a simple maze in two dimensions. The maze is composed of both
exterior walls and interior barriers. The agent is positioned in the top left
corner of the maze at $t_{0}=0$ and must reach the bottom right corner of the
maze by $t_{f}=0.6$. The agent path is terminated if the agent touches a wall
or barrier.
We solve the control problem on a small test system consisting of a
discretized spatial grid of $51\times 51$ cells over an extent $-1\leq
x_{1},x_{2}\leq 1$. We take $\hbar=\lambda=1$ and $m=0.5$. We solve the
Schrödinger equation by first finding the energy eigenfunctions and
eigenvalues for a given potential $V$. We then expand the wavefunction in
eigenfunctions, truncating to the 15 eigenfunctions with the smallest
eigenvalues. We initialized $V$ to a quadratic function of distance on the
grid measured from the starting location in the top left. We then performed
gradient descent with a learning rate of 0.02 until the slope of the learning
curve flattened out.
Figure 1: The potential that approximately minimizes the focusing metric for
a two-dimensional gridworld maze. The agent begins in the top left of the grid
and must navigate to the bottom right without hitting any of the exterior or
interior walls.
We show the resulting potential $V$ that approximately minimizes $\mathcal{F}$
in Figure (1) and the state probability distribution over time for the
optimally-trained agent in Figure (2). We see that the probability density
focuses to a narrow target distribution at the final time as desired. In
addition, the probability density at all times navigates the barriers
effectively to ensure zero probability of early termination of the agent maze
navigation episode.
Figure 2: The agent state probability density as a function of time for the
optimally controlled solution to navigating a gridworld maze. Time evolves in
each panel from top left to bottom right.
## Conclusions
We have shown that we can solve stochastic optimal control problems through
computation of a potential function with an inverse scattering method. We
assume we are given knowledge about the environment at the current time step
and an asserted final time goal that the probability distribution of the agent
state is ‘focused’ on specified state space locations. The control problem is
then solved by computing the potential for all states and times that serves to
drive an agent towards the goal. The potential is computed by matching the
wavefunction solution of the Schrödinger equation to a desired target, which
is a computation that can be executed in parallel across states and times.
The agent in this formulation is viewed as a dynamical system that follows
‘forces’ equal to the gradients of the potential function. Equivalently, the
HJB optimal cost-to-go for the agent can be obtained from the phase of the
wavefunction. While not demonstrated here, the approach admits a trivial
generalization to multi-agent systems, where each agent learns its own
potential.
Single-sided focusing minimizes the path length (in phase units) to reach a
desired end state. Various paths considered in the variational context define
a surface embedded in state space, whose area is the loss of information
obtained by the agent relative to the optimal path [27]. For small deviations
from the optimal path, the state cost function is determined by the
probability distribution of the agent state, giving a localized description of
the entire control solution that may be easily parallelized in future
computational applications and can be contrasted with non-local Euclidean path
integral methods [19] or episode-based training in RL [8].
The focusing action in Equation (14) is only one possible choice for
determining the potential. Indeed, other actions are known to produce Euler-Lagrange equations that describe integrable systems, such as the Korteweg-de
Vries (KdV) equation [48]. The optimal cost functions for control problems
might then be derived as solutions to a broader class of integrable models. In
the most general case of a time-dependent potential, the Schrödinger equation
can be interpreted as one of the Lax operators [49] for a completely
integrable dynamical model for the potential. This is another way that
integrability can appear as a defining feature for solutions of stochastic
optimal control problems.
###### Acknowledgements.
This work was performed under the auspices of the U.S. Department of Energy by
Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Funding for this work was provided by LLNL Laboratory Directed Research and
Development grant 22-SI-001.
## References
* Stengel [1994] R. F. Stengel, _Optimal control and estimation_ (Courier Corporation, 1994).
* Alekseev [2013] V. M. Alekseev, _Optimal control_ (Springer Science & Business Media, 2013).
* Abdallah _et al._ [1991] C. Abdallah, D. M. Dawson, P. Dorato, and M. Jamshidi, Survey of robust control for rigid robots, IEEE Control Systems Magazine 11, 24 (1991).
* Ivanov _et al._ [2012] D. Ivanov, A. Dolgui, and B. Sokolov, Applicability of optimal control theory to adaptive supply chain planning and scheduling, Annual Reviews in control 36, 73 (2012).
* Christensen _et al._ [2013] G. S. Christensen, M. E. El-Hawary, and S. Soliman, _Optimal control applications in electric power systems_ , Vol. 35 (Springer Science & Business Media, 2013).
* Gugat _et al._ [2005] M. Gugat, M. Herty, A. Klar, and G. Leugering, Optimal control for traffic flow networks, Journal of optimization theory and applications 126, 589 (2005).
* Bertsimas and Lo [1998] D. Bertsimas and A. W. Lo, Optimal control of execution costs, Journal of financial markets 1, 1 (1998).
* Sutton and Barto [2018] R. S. Sutton and A. G. Barto, _Reinforcement learning: An introduction_ (MIT press, 2018).
* Bertsekas [2012] D. Bertsekas, _Dynamic programming and optimal control: Volume I_ , Vol. 1 (Athena scientific, 2012).
* Berkovitz [2013] L. D. Berkovitz, _Optimal control theory_ , Vol. 12 (Springer Science & Business Media, 2013).
* Ijspeert _et al._ [2001] A. J. Ijspeert, J. Nakanishi, and S. Schaal, Trajectory formation for imitation with nonlinear dynamical systems, in _Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the the Next Millennium (Cat. No. 01CH37180)_ , Vol. 2 (IEEE, 2001) pp. 752–757.
* Schaal _et al._ [2007] S. Schaal, P. Mohajerian, and A. Ijspeert, Dynamics systems vs. optimal control—a unifying view, Progress in brain research 165, 425 (2007).
* Friston [2010] K. Friston, The free-energy principle: a unified brain theory?, Nature reviews neuroscience 11, 127 (2010).
* Fleming and Mitter [1982] W. H. Fleming and S. K. Mitter, Optimal control and nonlinear filtering for nondegenerate diffusion processes, Stochastics: An International Journal of Probability and Stochastic Processes 8, 63 (1982).
* Todorov [2009] E. Todorov, Efficient computation of optimal actions, Proceedings of the National Academy of Sciences 106, 11478 (2009), https://www.pnas.org/content/106/28/11478.full.pdf .
* Kappen [2005a] H. J. Kappen, Linear theory for control of nonlinear stochastic systems, Physical review letters 95, 200201 (2005a).
* Pra and Pavon [1990] P. D. Pra and M. Pavon, On the Markov processes of Schrödinger, the Feynman-Kac formula and stochastic control, in _Realization and Modelling in System Theory_ (Springer, 1990) pp. 497–504.
* Kappen [2005b] H. J. Kappen, Path integrals and symmetry breaking for optimal control theory, Journal of Statistical Mechanics: Theory and Experiment 2005, P11011 (2005b).
* Kappen and Ruiz [2016] H. J. Kappen and H. C. Ruiz, Adaptive importance sampling for control and inference, Journal of Statistical Physics 162, 1244 (2016).
* Todorov [2006] E. Todorov, Linearly-solvable Markov decision problems, Advances in neural information processing systems 19 (2006).
* Nagasawa [1989] M. Nagasawa, Transformations of diffusion and schrödinger processes, Probability theory and related fields 82, 109 (1989).
* Dai Pra [1991] P. Dai Pra, A stochastic control approach to reciprocal diffusion processes, Applied mathematics and Optimization 23, 313 (1991).
* Pavon and Wakolbinger [1991] M. Pavon and A. Wakolbinger, On free energy, stochastic control, and schrödinger processes, in _Modeling, Estimation and Control of Systems with Uncertainty_ (Springer, 1991) pp. 334–348.
* Ohsumi [2019] A. Ohsumi, An interpretation of the Schrödinger equation in quantum mechanics from the control-theoretic point of view, Automatica 99, 181 (2019).
* Swiecicki _et al._ [2016] I. Swiecicki, T. Gobron, and D. Ullmo, Schrödinger approach to mean field games, Physical review letters 116, 128701 (2016).
* Bakshi _et al._ [2020] K. Bakshi, D. D. Fan, and E. A. Theodorou, Schrödinger approach to optimal control of large-size populations, IEEE Transactions on Automatic Control 66, 2372 (2020).
* Chapline [2001] G. Chapline, Quantum mechanics as self-organized information fusion, Philosophical Magazine 81, 541 (2001).
* Newton [1989] R. G. Newton, _Inverse Schrödinger Scattering in Three Dimensions_ (Springer-Verlag, 1989).
* Rose [1996] J. H. Rose, Global minimum principle for Schrödinger equation inverse scattering, Physical review letters 77, 4126 (1996).
* Rose [2001] J. H. Rose, “Single-sided” focusing of the time-dependent Schrödinger equation, Physical Review A 65, 012707 (2001).
* Rose [2003] J. H. Rose, Single-sided focusing and the minimum principle of inverse scattering theory, Inverse Problems 20, 243 (2003).
* Chen _et al._ [2016] Y. Chen, T. T. Georgiou, and M. Pavon, On the relation between optimal transport and Schrödinger bridges: A stochastic control viewpoint, Journal of Optimization Theory and Applications 169, 671 (2016).
* Schrödinger [1931] E. Schrödinger, _Über die umkehrung der naturgesetze_ (Verlag der Akademie der Wissenschaften in Kommission bei Walter De Gruyter u …, 1931).
* Carroll [1990] R. Carroll, On the ubiquitous Gelfand-Levitan-Marčenko (GLM) equation, Acta Applicandae Mathematica 18, 99 (1990).
* Dyson [1975] F. J. Dyson, Photon noise and atmospheric noise in active optical systems, J. Opt. Soc. Am. 65, 551 (1975).
* Dyson [1976] F. J. Dyson, Old and New Approaches to the Inverse Scattering Problem (1976).
* Todorov [2008] E. Todorov, General duality between optimal control and estimation, in _2008 47th IEEE Conference on Decision and Control_ (IEEE, 2008) pp. 4286–4292.
* Hopf [1950] E. Hopf, The partial differential equation $u_{t}+uu_{x}=\mu u_{xx}$, Communications on Pure and Applied Mathematics 3, 201 (1950).
* Cole [1951] J. D. Cole, On a quasi-linear parabolic equation occurring in aerodynamics, Quarterly of Applied Mathematics 9, 225 (1951).
* Chapline [2004] G. Chapline, Quantum mechanics and pattern recognition, International Journal of Quantum Information 2, 295 (2004).
* Bohm [1952] D. Bohm, A suggested interpretation of the quantum theory in terms of “hidden” variables. I, Physical review 85, 166 (1952).
* Nelson [2020] E. Nelson, _Dynamical theories of Brownian motion_ (Princeton university press, 2020).
* Morse and Feshbach [1954] P. M. Morse and H. Feshbach, Methods of theoretical physics, American Journal of Physics 22, 410 (1954).
* Cheney [1984] M. Cheney, Inverse scattering in dimension two, Journal of mathematical physics 25, 94 (1984).
* Cheney [1985] M. Cheney, Two-dimensional inverse scattering: Compactness of the generalized Marchenko operator, Journal of mathematical physics 26, 743 (1985).
* Yagle [1998] A. E. Yagle, Discrete Gel'fand-Levitan and Marchenko matrix equations and layer stripping algorithms for the discrete two-dimensional Schrödinger equation inverse scattering problem with a nonlocal potential, Inverse Problems 14, 763 (1998).
* Bradbury _et al._ [2018] J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang, JAX: composable transformations of Python+NumPy programs (2018).
* Zakharov and Faddeev [1971] V. E. Zakharov and L. D. Faddeev, Korteweg-de vries equation: A completely integrable hamiltonian system, Functional Analysis and Its Applications 5, 280 (1971).
* Lax [1968] P. D. Lax, Integrals of nonlinear equations of evolution and solitary waves, Communications on pure and applied mathematics 21, 467 (1968).
|
# Split Learning without Local Weight Sharing
To Enhance Client-side Data Privacy
Ngoc Duy Pham, Tran Khoa Phan, Alsharif Abuadbba, Yansong Gao, Doan Nguyen,
Naveen Chilamkurti
Pham, Phan, Nguyen, and Chilamkurti are with the School of Computing,
Engineering, and Mathematical Sciences, La Trobe University, Victoria,
Australia. Email: {ngocduy.pham, k.phan, o.nguyen, <EMAIL_ADDRESS>.
Abuadbba and Gao are with CSIRO’s Data61 & Cybersecurity CRC, Australia.
Email: {sharif.abuadbba, <EMAIL_ADDRESS>.
Corresponding authors: T. K. Phan and N. Chilamkurti
###### Abstract
Split learning (SL) aims to protect user data privacy by distributing deep
models between client-server and keeping private data locally. In SL training
with multiple clients, the local model weights are shared among the clients
for local model updates. This paper first reveals data privacy leakage
exacerbated by local weight sharing among the clients in SL through model
inversion attacks. Then, to mitigate this leakage, we propose
and analyze privacy-enhanced SL (P-SL), or SL without local weight sharing.
We further propose parallelized P-SL to expedite the training process by
duplicating multiple server-side model instances without compromising
accuracy. Finally, we explore P-SL with late participating clients and devise
a server-side cache-based training method to address the forgetting phenomenon
in SL when late clients join. Experimental results demonstrate that P-SL helps
reduce up to $50\%$ of client-side data leakage, which essentially achieves a
better privacy-accuracy trade-off than the current trend of using differential
privacy mechanisms. Moreover, P-SL and its cache-based version achieve
accuracy comparable to baseline SL under various data distributions, while
incurring lower computation and communication costs. Additionally, cache-based training
in P-SL mitigates the negative effect of forgetting, stabilizes the learning,
and enables practical and low-complexity training in a dynamic environment
with late-arriving clients.
###### Index Terms:
Split learning, privacy preservation, privacy leakage, honest-but-curious,
CNN.
## I Introduction
Deep learning (DL), influenced by the rapid growth of data, is becoming
increasingly important in our daily lives. However, the privacy of data used
in the model needs to be protected as required by various privacy regulations
[1]. Split learning (SL) [2, 3, 4, 5] is one popular collaborative learning
technique that aims to protect user privacy by enabling model training without
exposing users’ raw private data. In a simple vanilla setting, SL divides a
deep model into two parts deployed between a client (data owner) and a server
(computing service), where only smashed data (local model part’s output after
feeding raw data) is exposed for collaborative training with the server part
[2]. Compared to federated learning (FL) [6], SL is suitable for DL
applications on resource-constrained devices (e.g., IoT, mobile) because the
clients only need to run the first few layers of the deep model, while the
server handles the rest, which involves the most costly computations. With the
growing availability of different sources of data, SL has been extended to
process learning on multiple clients [2, 7, 8, 9]. In [10, 11], the authors
conduct a comprehensive evaluation of SL across various scenarios, ranging
from a low to a high number of clients, balanced to imbalanced and extreme
data distributions, etc., to provide a thorough insight into SL.
Figure 1: Demonstration of data leakage at the client side of SL: The raw
private image (left) is reconstructed (right) by a malicious client through
the model inversion attack.
Regarding SL on multiple data sources, clients typically share their local
weights with others to aggregate the learned knowledge from different data
sources. This can be done by sequentially passing weights to the next client
[2] or by averaging all local weights at the client side [9]. In these
settings, it is assumed that only the server is semi-trustworthy (honest-but-
curious [12]) while all clients trust each other. However, if a client is
malicious and colludes with the server, sharing local weights can lead to
potential data leakage. Fig. 1 demonstrates an example of data leakage in the
original SL [2] with two clients, $C_{1}$ and $C_{2}$. In this scenario,
$C_{1}$ acts as the victim, while $C_{2}$ serves as an adversary capable of
colluding with the server. $C_{2}$ employs the model inversion attack [13] to
train a decoder [13, 14] using $C_{1}$’s shared local weights. This decoder is
then utilized by $C_{2}$ to reconstruct raw data given $C_{1}$’s smashed data,
which becomes exposed during training or inference. The smashed data can be
acquired by $C_{2}$ through collusion with the server or eavesdropping on the
communication of $C_{1}$. Furthermore, a decoder trained on $C_{2}$’s local
model could be utilized to attack the subsequent client, which receives
$C_{2}$’s local weights for model updates (e.g., $C_{3}$ if available). This
situation exemplifies the white-box setting of the model inversion attack in
[13], where the target (local) model is publicly accessible to the
nearby/adjacent adversaries due to local weight sharing. Further details about
the white-box model inversion attack and its high efficiency can be found in
[13, 15]. To address these privacy concerns, we raise the research question
(RQ): How to develop novel effective SL-based training methods to minimize
data leakage in multi-client SL? In response to this question, we propose a
privacy-enhanced SL (P-SL) scheme, which fundamentally obviates the sharing of local
weights among clients during training. The proposed P-SL not only preserves
client-server privacy of the original SL but also enhances data privacy at the
client side. To the best of our knowledge, this work is the first to identify
data leakage in SL exacerbated by the default local weight sharing and
investigate P-SL performance under various data distributions.
Furthermore, in SL, apart from the issue of data leakage among clients,
ensuring the commitment of all clients to participate in the training process
simultaneously poses a significant challenge [11]. Due to various network,
energy, and resource constraints, some devices may not be active throughout
the entire training process or may join the training process at a later stage,
after collaborative training has already concluded. Handling the training of
new clients who join after the initial training, referred to as newcomer
training, presents a challenge. This raises another RQ: How to ensure stable,
low complexity, and high accuracy P-SL in dynamic environments where
additional clients join later? As the first training cycle has already been
completed, the learning of new clients can deteriorate the knowledge learned
by existing clients, leading to the phenomenon of forgetting [16] as
recognized in [11]. To overcome this challenge, we devise a cache-based
approach to address the forgetting phenomenon, thus enhancing the learning
performance of P-SL. In summary, the contributions of this paper are:
* •
We identify a new threat model, where participants (both clients and the
server) are assumed to be honest-but-curious. Based on this threat model, we
reveal the issue of client-side data leakage in the original SL and its
variants through the lens of model inversion attacks.
* •
To address the privacy concerns under the defined threat model, we propose
P-SL, which significantly reduces data leakage by up to $50\%$ compared to the
original SL. In P-SL, clients no longer share their local weights but can
still collaborate with the server to leverage their local knowledge and
improve training effectiveness.
* •
We conduct a comprehensive empirical evaluation using various datasets with
different data distributions to demonstrate that the learning performance of
P-SL is comparable to that of the original SL. Additionally, we propose a
parallelized version of P-SL that enables simultaneous learning by clients
(client training is performed sequentially in the original SL), reducing the
training time without sacrificing accuracy.
* •
To tackle the forgetting phenomenon experienced by existing clients when new
clients join the training process, we propose a server-side caching approach
for P-SL. The approach allows the training of newly arriving clients without
the need for retraining existing clients, thereby reducing training overhead.
Our experimental results highlight the advantages of caching in SL,
particularly in dynamic training environments.
The rest of this paper is structured as follows: Section II provides
background information on SL and its variants, different local data
distributions, and the current state of research on privacy preservation in
SL. Section III presents the identified threat model underlying our proposed
P-SL approach. In Section IV, we explore the parallelization of P-SL and
propose the cache-based approach, which handles newly arriving clients to
ensure the reliability of P-SL. Section V presents the experimental results in
terms of accuracy and privacy of the proposal, followed by the conclusion and
future directions for research in Section VI.
## II Background
This section presents the background information on SL with its variants for
multiple clients, distribution of user data, and current research on privacy
preservation for SL.
### II-A Vanilla split learning
A deep model, denoted as $h_{\theta}:\mathcal{X}\mapsto\mathcal{Y}$, is a
hypothesis that maps an input $x\in\mathcal{X}$ to an output
$y\in\mathcal{Y}$. Model training involves finding the parameters (weights)
$\theta$ that accurately capture the relationship between $\mathcal{X}$ and
$\mathcal{Y}$. In order to ensure user data privacy during model training, SL
[2] divides the layers of the deep model into multiple parts. In a simple
vanilla setting, the model is split into two components:
$h_{\theta}=f_{u}\cdot g_{w}$. The localized part $f_{u}$ contains the initial
layers, while the remaining part $g_{w}$ is deployed on the server, which is
typically the most computationally intensive component. During training, the
client performs forward propagation on its local data batch and sends the
resulting output (referred to as intermediate data or smashed data) along with
the corresponding ground-truth labels
$\left(f_{u}(x^{batch}),y^{batch}\right)$ to the server. The server continues
forward propagation on the received smashed data to compute the loss between
$y^{batch}$ and $g_{w}(f_{u}(x^{batch}))$. The gradients of the loss function
are then back-propagated at the server until the split layer, at which point
the deep model is cut/split. The split layer’s gradients are sent back to the
client, where the remaining back-propagation is performed locally, all the way
to the first layer. Based on the computed gradients, both the client and the
server update their respective weights, $u$ and $w$. This process, known as
simple vanilla SL, serves as the core mechanism for many other variants,
including SL with multiple clients and our proposed approach.
### II-B SL with multiple clients
SL can be extended to train a deep model on $N\geq 2$ clients. The deep model
is also split into two parts, $h_{\theta}=f_{u}\cdot g_{w}$, where $f_{u}$ is
distributed to all clients ($f_{u_{i}}$ to client $C_{i}$), and $g_{w}$ is
deployed on the central server. The training procedure involves utilizing data
from multiple clients in a round-robin fashion. In this setting, when the
training process of client $C_{i-1}$ is completed, client $C_{i}$ receives the
weights $u_{i-1}$ of $C_{i-1}$ to initialize its own weights $u_{i}$. Then,
client $C_{i}$ continues training on its own data, collaborating with the
server following the vanilla setting. Once the training is finished, client
$C_{i}$ shares its trained weights, $u_{i}$, with the next client $C_{i+1}$
[2]. This process continues until the last client $C_{N}$ completes training,
and the weights $u_{N}$ trained by client $C_{N}$ are the model weights that
are passed back to all clients for inference.
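The round-robin weight passing described above can be sketched as follows; `train_client` is a hypothetical stand-in for the full client-server training of the vanilla setting.

```python
def train_client(u, data):
    # Stand-in for the vanilla SL procedure run with the server; here we
    # merely perturb the weights to mark that training happened (assumption).
    return [p + 0.01 for p in u]

clients_data = ["D1", "D2", "D3"]   # each client's private dataset (toy)
u = [0.0, 0.0]                      # initial client-side weights
for data in clients_data:           # sequential round-robin training
    u = train_client(u, data)       # C_i starts from C_{i-1}'s weights
# u now plays the role of u_N, passed back to all clients for inference
```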
The model training process in SL is typically performed sequentially among the
clients, which can render significant latency. To address this issue, several
studies have focused on improving the training speed. In [8], the authors set
up the mini-batch of each client proportional to its local data size, allowing
for parallel processing of the training model. All clients are initialized
with the exact weights, and after each iteration, the gradients are averaged
before being updated on the clients. This synchronization strategy ensures
that all clients will have the same model weights during training.
SplitFed learning (SFL) [9] is an innovative approach that combines the
strengths of FL and SL. In SFL, clients perform forward propagation in
parallel on their respective data and send the smashed data to a central
server. Upon receiving the gradients from the server, the clients execute the
back-propagation step and then send the updated weights to a Fed server. The
Fed server aggregates the weight updates using an averaging function
($Avg(\cdot)$) and disseminates a single update to all clients. Similar to
[8], in SFL, after each global epoch, clients synchronize their models with
identical weights, which also renders them susceptible to model inversion
attacks in a white-box setting, since an adversarial client possesses the
same model weights as the victims.
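The Fed-server aggregation step of SFL can be sketched as a plain element-wise average of the clients' local weights (a minimal illustration, not the full SFL protocol):

```python
import numpy as np

def fed_avg(local_weights):
    """Fed-server aggregation Avg(.): element-wise mean of client weights."""
    return np.mean(np.stack(local_weights), axis=0)

u1 = np.array([[1.0, 2.0]])
u2 = np.array([[3.0, 4.0]])
u_sync = fed_avg([u1, u2])  # every client is re-initialized with u_sync
```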
### II-C Privacy-enhancing SL approaches
Critical privacy vulnerabilities of SL are based on the fact that a neural
network is naturally predisposed to be functionally inverted [17]. That is,
the smashed data exposed by clients may be exploited to recover the raw input
data. Therefore, privacy protection techniques in SL typically aim to minimize
data leakage from the smashed data. For example, the noise defense approach
[18, 19] applies additive Laplacian noise to the smashed data before sending
it to the server. By introducing noise, the target model is no longer a one-
to-one function, making it harder for an attacker to learn the mapping from
smashed to raw data. Another method involves adding latent noise through
binarization [20] to reduce the correlation between smashed and input data.
However, these mechanisms require efforts to mitigate the impact of noise
perturbation on model accuracy [21].
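A minimal sketch of the additive-noise defense on smashed data; the Laplacian scale is an illustrative choice, not a value from [18, 19]:

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_smashed(smashed, scale=0.5):
    """Add zero-mean Laplacian noise to the smashed data before it leaves
    the client; the scale is an illustrative privacy/utility knob."""
    return smashed + rng.laplace(loc=0.0, scale=scale, size=smashed.shape)

s = np.ones((2, 4))          # toy smashed-data batch
s_noisy = noisy_smashed(s)   # what the server actually receives
```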
The work [22] aims to reduce raw data leakage by adding an additional distance
correlation-based loss term to the loss function. This distance correlation
loss minimizes the correlation between the raw and smashed data, ensuring that
the smashed data contains minimal information for reconstructing the raw data
while still being valuable for achieving model utility. In [20], the authors
extend this idea by suggesting that the additional loss term can be any
leakage metric, not limited to distance correlation. However, the application
of an extra loss term may still result in privacy leakage because the smashed
data exposes too much information to be adequately protected by a single
leakage metric in the loss function [17]. To overcome this limitation, the
authors in [23] propose client-based privacy protection, which employs two
different loss functions computed on the client and server sides. In line with
this approach, [24] designs a framework consisting of two steps: a pre-
training step that establishes a feature extractor with strong model-inversion
resistance, and a resistance transfer step that initializes the client-side
models using the feature extractor. This framework requires sufficient
computational resources for pre-training on a source task and may be
vulnerable during the early training epochs. To preserve both data privacy and
label information, [25] employs sample diversity to mix the smashed data of
client-side models and create obfuscated labels before transmitting them from
clients to the server. The mixed smashed data maintains a low distance
correlation with the raw data, thereby preventing private data from being
individually reconstructed. However, it should be noted that this mixing
technique does not effectively reduce data leakage as intended when performing
inference on a single data sample.
In multi-head SL (MHSL) [26, 27], the authors explore the feasibility of SFL
without requiring client-side synchronization. The objective is to reduce the
extra communication and computation overhead at the client side due to the
synchronization. The study is extended to the case in which the server gains
information about raw data through the clients’ smashed data, but the evaluation
does not determine which approach, MHSL or SFL, results in less information
leakage. Moreover, the analysis solely focuses on the leakage from visualizing
smashed data, which exhibits limited quality as demonstrated in [20]. In
contrast, our approach prioritizes privacy and considers a more comprehensive
attack within an extended threat model. Additionally, we evaluate our scheme
across multiple scenarios and data distributions.
### II-D SL under diverse data distributions
In general, data is often distributed among clients in an imbalanced manner.
For example, some sensors may be more active than others, resulting in more
data from certain sources. Similarly, in domains like healthcare, larger
institutions tend to have more patient data available [11]. In a
classification task, under balanced data distribution, each client holds
samples from all classes in similar quantities. However, in an imbalanced data
distribution, each client still possesses samples from all classes, but the
total number of samples for each class is imbalanced. It is important to note
that the ratio of samples between classes at each client remains similar to
the overall dataset ratio. In the study from [11], the authors investigate
three different data distributions for user data: balanced, imbalanced, and
non-IID (non-independent identically distributed). Their findings reveal that
SL performs well (compared to FL) under both balanced and imbalanced data
while being highly sensitive to non-IID data. Therefore, this study mainly
focuses on investigating and evaluating SL specifically under balanced and
imbalanced data settings.
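For intuition, the balanced and imbalanced settings described above can be generated by splitting each class's samples across clients with equal or unequal fractions; a toy sketch (the fractions and dataset are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 100)   # toy dataset: 10 classes x 100 samples

def split(labels, client_fracs):
    """Give each client a fraction of every class, so each client's class
    ratio matches the overall dataset ratio (as in the imbalanced setting)."""
    clients = [[] for _ in client_fracs]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        bounds = np.cumsum([0] + [int(f * len(idx)) for f in client_fracs])
        for i in range(len(client_fracs)):
            clients[i].extend(idx[bounds[i]:bounds[i + 1]])
    return clients

balanced = split(labels, [0.25] * 4)              # equal shares per client
imbalanced = split(labels, [0.4, 0.3, 0.2, 0.1])  # unequal totals, same ratios
```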
## III Privacy-enhanced split learning
We define a threat model as the underlying context for the proposed P-SL and
the analysis of data leakage.
### III-A Threat model
In traditional SL, the server is assumed to be honest-but-curious [28],
meaning it follows the training procedure but may have a curiosity about the
raw data from clients. This threat model is assumed in the aforementioned
works on SL privacy protection techniques. In our study, we extend this
assumption to include honest-but-curious clients as well. To the best of our
knowledge, our work is the first to consider both honest-but-curious clients
and server in SL.
Model Inversion Attack. Under this new threat model, any participant in
collaborative learning can utilize model inversion attacks to reveal the
private data of other users (clients). The model inversion attack consists of
three phases: 1) gathering/generating a training set; 2) training a decoder
(inverse network [15]) with the data; and 3) recovering raw data from smashed
data using the decoder. The attacker can be any adversarial client or the
server, as outlined in the following scenarios, with the assumption of client-
server collusion for sharing smashed data or querying the local model. Note
that, due to local weight sharing, all clients in SL have similar local models.
* •
Attacker as a client: An adversarial client trains the decoder on data
generated using its raw data and its local model. Subsequently, the raw data
of victim clients can be reconstructed from the smashed data received from the
server.
* •
Attacker as the server: The curious server generates training data by (black-
box) querying the adversarial client with a bag of samples (of the same type
as raw data [15]). This data is then used to train the decoder, which can be
employed to reconstruct raw data from any client’s smashed data.
We define the leakage of user data as the disparity between the reconstructed
data and the original private raw data of a client. The quality of the
reconstruction can be evaluated using various metrics such as mean squared
error (MSE), structural similarity index measure (SSIM), peak signal-to-noise
ratio (PSNR), Kullback–Leibler divergence, and so on [13, 29, 20, 30]. In our
work, we utilize SSIM as the main metric for measuring data leakage.
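As an illustration of the leakage metric, the SSIM between a raw image and its reconstruction can be computed from the standard formula; this single-window (global) version is a simplification of the usual locally windowed SSIM:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Single-window SSIM between two images with pixel range [0, L]; a
    simplification of the standard locally windowed SSIM."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

img = np.random.default_rng(1).random((8, 8))
perfect = ssim_global(img, img)                  # identical images -> 1.0
poor = ssim_global(img, np.zeros_like(img))      # uninformative reconstruction
```

Higher SSIM between the reconstruction and the raw image thus means more leakage.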
### III-B Privacy-enhanced SL (P-SL) algorithm
Figure 2: P-SL architecture with differences from original SL [2] and SFL [9].
In order to lessen the adverse effects of model inversion attacks, we propose
a non-local-weight-sharing method at the client side for SL. The proposed P-SL
algorithm is presented in Alg. 1, followed by a computation analysis.
Fig. 2 illustrates the architecture of P-SL, highlighting its differences from
SL and SFL. The proposed P-SL is based on SL, where multiple clients connect
to a central server without communication among the clients (such as sharing
snapshots [2], i.e., local weights) or the use of a Fed server (for local model
aggregation [9]). Alg. 1 presents the collaborative training procedure between
clients and the server in the proposed P-SL. In the initial phase, clients and
the server receive their corresponding parts, $C_{i}\leftarrow f_{u}$ and
$S\leftarrow g_{w}$, from a split model, $h_{\theta}=f_{u}\cdot g_{w}$. Then,
they initialize their model weights, $u_{i}$ and $w$. During a global epoch,
following a round-robin manner, each client $C_{i}$ starts its training with
the server, following the simple vanilla SL procedure, which is demonstrated
by the inner while loop (lines $2-10$). Note that the box (lines $5-8$)
represents the operations executed at the server, and the transmission of data
between clients and the server (e.g., smashed data, labels, gradients, etc.)
is done via a network connection. Once the training of $N$ clients is
completed, we have $N$ different local models combined with the server model
to form $N$ different deep models (i.e. $h_{\theta_{i}}=f_{u_{i}}\cdot g_{w}$
where $1\leq i\leq N$). After training, each client performs inference on its
live data using its local private model in combination with the shared server
model. In contrast to SL and SFL, P-SL maintains the client-server
collaboration but prohibits weight exchanges among clients, thereby reducing
local computation, which will be examined in the following.
Algorithm 1 Procedure for one global epoch of P-SL.
Initialize:
$Clients$ and $Server$ receive their model parts
$Clients$ and $Server$ initialize their model weights
1: for each $Client_{i}$ among all the $Clients$ do
2:  while $Client_{i}$ has data to train with $Server$ do
3:   $Client_{i}$ does forward propagation on its data
4:   $Client_{i}$ sends smashed data and labels to $Server$
5:   $Server$ propagates incoming data on its layers
6:   $Server$ computes errors based on the labels
7:   $Server$ back-propagates gradients until its first layer
8:   $Server$ sends gradients of split layer to $Client_{i}$
9:   $Client_{i}$ back-propagates the received gradients
10:   $Client_{i}$ and $Server$ update their model weights
  end while
end for
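A compact sketch of one global epoch of Alg. 1; `train_pair` is a hypothetical stand-in for the per-client vanilla SL rounds, and the point is only that each $u_{i}$ stays private while $w$ is shared:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_pair(u, w, batch):
    """Hypothetical stand-in for one client-server vanilla SL round; a real
    implementation would run the forward/backward passes of Alg. 1."""
    return u + 0.01 * rng.normal(size=u.shape), w + 0.01 * rng.normal(size=w.shape)

N = 3
us = [rng.normal(size=(8, 4)) for _ in range(N)]  # N distinct local models u_i
w = rng.normal(size=(4, 3))                       # one shared server model g_w

for i in range(N):        # one global epoch: round-robin over the clients
    us[i], w = train_pair(us[i], w, batch=None)   # u_i is never shared
# After the epoch there are N deep models h_i = g_w(f_{u_i}(.)).
```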
### III-C Computation analysis
The convergence guarantee of pure SL on homogeneous data is straightforward
and can be reduced to the case of SGD [31]. However, the non-local-weight-
sharing of P-SL introduces additional challenges due to the presence of
multiple different local models that can be individually combined with the
server model, denoted as $h_{\theta_{i}}=f_{u_{i}}\cdot g_{w}$ for training.
Let’s consider $g_{w}$ as the primary deep model being trained on the smashed
data of all clients. If the training (smashed) data remains stable (with
minimal changes), the convergence of P-SL can be simplified to the SGD case.
Therefore, the convergence of P-SL relies on the convergence of each client.
After a few training rounds, when a client’s local model has converged,
indicating small local weight updates, the corresponding smashed data from
that client also stabilizes. Stable training data facilitates the convergence
of the server-side model’s training. Additionally, the learning of the server-
side model is influenced by the size of the training data, specifically the
size of the smashed data. A larger smashed data size leads to a more extensive
training dataset, resulting in a more complex server-side model (e.g., more
layers) required to learn and memorize. This insight aligns with the
experimental findings in [26, 27], where the authors evaluate model accuracy
with different cutting points (which determine the division of the deep model
for deployment on clients and the server, respectively). Performance tends to
degrade when the client model is thicker (possessing more local layers)
because it requires more time to converge, and the corresponding smashed data
size also increases. However, for low-end devices, it is preferable to have
thin client models, which will facilitate the learning performance of P-SL, as
discussed above.
TABLE I: Computation and communication costs at a client of SL, SFL, and P-SL during one global epoch.
Scheme | Computation | Communication
---|---|---
SL | $\frac{|\mathcal{X}|}{N}C^{P}+C^{U}$ | $2\frac{|\mathcal{X}|}{N}S+2|U|$
SFL | $\frac{|\mathcal{X}|}{N}C^{P}+C^{U}$ | $2\frac{|\mathcal{X}|}{N}S+2|U|$
P-SL | $\frac{|\mathcal{X}|}{N}C^{P}$ | $2\frac{|\mathcal{X}|}{N}S$
In order to analyze total computation and communication costs of P-SL and
provide a comparison to SL and SFL, we assume a balanced data distribution for
simplicity. Let’s consider the following variables: $N$ as the number of
clients, $|\mathcal{X}|$ as the total number of dataset items, $S$ as the size
of the split layer (the last layer of $f_{u}$), $C^{P}$ as the computation
cost for processing one forward and backward propagation on $f_{u}$ with one
data item, $C^{U}$ as the cost for updating a client’s local weights from the
received weights from the previous client or the Fed server, and $|U|$ as the
size of the local model $f_{u}$. Table I demonstrates that the formulated
computation and communication costs at the client side in P-SL are lower than
SL and SFL, respectively, due to the absence of local weight sharing. The
reduction in costs depends on the size of the local model and is independent
of the data distribution. The factor of $2$ in the communication costs represents
the uploading of smashed data and the downloading of corresponding gradients
($2\frac{|\mathcal{X}|}{N}S$), or the uploading and downloading of local
models ($2|U|$) at the client side.
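The per-client costs in Table I can be evaluated directly; the numeric inputs below are illustrative, not measurements from the paper:

```python
def client_costs(scheme, X, N, S, C_P, C_U, U):
    """Per-client computation and communication for one global epoch, per
    Table I. X: dataset size, N: number of clients, S: split-layer size,
    C_P: per-item forward+backward cost, C_U: weight-update cost,
    U: local model size."""
    comp = (X / N) * C_P
    comm = 2 * (X / N) * S          # smashed data up + gradients down
    if scheme in ("SL", "SFL"):     # local weight sharing adds C_U and 2|U|
        comp += C_U
        comm += 2 * U
    return comp, comm

# Illustrative numbers (not measurements from the paper):
sl = client_costs("SL", X=60000, N=10, S=1024, C_P=1.0, C_U=5.0, U=4096)
psl = client_costs("P-SL", X=60000, N=10, S=1024, C_P=1.0, C_U=5.0, U=4096)
```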
## IV P-SL for scalable and dynamic environments
In this section, we further investigate and propose approaches to enhance the
performance of P-SL in a dynamic environment where multiple server instances
exist or newly participating clients arrive.
### IV-A Parallelizing P-SL with multiple server instances
Figure 3: Parallelizing P-SL with two server instances.
In the proposed P-SL, each client conducts collaborative training with the
server separately. This results in high latency and large idle time at the
client side, as only one client is active during training. Therefore, we can
process clients’ training simultaneously if multiple server instances are
available. In Fig. 3, parallelized P-SL follows the steps below in each round:
1. 1.
Setup phase: During this phase, all clients receive the same model $f_{u}$,
and the server starts with model $g_{w}$. The server sets up a pool of $m$
instances (in this example, there are two instances).
2. 2.
Client computation: Clients connect to the server and are associated with
available instances. Then, they perform forward propagation on their local
models using their local data in parallel and independently. Afterwards, they
send their smashed data to the server.
3. 3.
Server computation: The corresponding server instances perform forward-
backward operations on the received smashed data from the clients and send
back the computed gradients.
4. 4.
Client-server collaboration: The collaborative training between a client and
the corresponding server instance is indicated by label ①. Upon completion of
the training, which yields a paired client-server model, $f_{u_{i}}\cdot g_{w_{j}}$,
the server instance becomes available and waits in the pool for the next
client to connect (label ③).
5. 5.
Server model aggregation: When a server instance becomes available after
training, a snapshot of the server model weights, $w_{j}$, is recorded (label
②). After a certain period of time or a predetermined number of snapshots is
recorded, the server aggregates (using the $Avg(.)$ function) all snapshots to
form a new version of the server model weights, $w^{*}$. Then all server
instances, $g_{w_{j}}$, update their weights to the new aggregated weights,
$w^{*}$, for the next round of training.
It should be noted that the aggregation of the server models, $g_{w_{j}}$, is
performed asynchronously, and the degree of parallelization depends on the
number of server instances. The parallelization of P-SL differs from the
client-side parallelization in SFL, as it does not require the Fed server for
local model aggregation.
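The asynchronous snapshot averaging in step 5 might be organized as follows; the snapshot count `k` and the class structure are assumptions for illustration:

```python
import numpy as np

class SnapshotAggregator:
    """Server-side aggregation for parallelized P-SL: record a snapshot w_j
    whenever an instance finishes a round; once k snapshots are recorded,
    average them into the new shared weights w* (k is illustrative)."""

    def __init__(self, k=2):
        self.k = k
        self.snapshots = []

    def record(self, w_j):
        self.snapshots.append(np.asarray(w_j, dtype=float))
        if len(self.snapshots) < self.k:
            return None                      # keep waiting for more snapshots
        w_star = np.mean(self.snapshots, axis=0)
        self.snapshots.clear()
        return w_star                        # broadcast to all server instances

agg = SnapshotAggregator(k=2)
first = agg.record([1.0, 3.0])               # not enough snapshots yet
w_star = agg.record([3.0, 5.0])              # average of the two snapshots
```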
### IV-B P-SL with newly participating clients
#### IV-B1 A case study
In practice, setting up training when all clients simultaneously participate
presents challenges due to the unstable nature of IoT/mobile devices. While
previous studies, such as [11], have examined offline clients’ participation
during training, there is limited research on the scenario where a new client
with its data wants to join the training to benefit from the knowledge
acquired by existing clients. In order to address this real-world situation,
we conduct experiments involving $6$ clients, where we initially allow $4$
clients ($C_{1},C_{3},C_{4},C_{6}$) to collaboratively learn their models
using P-SL, referred to as the first training phase. Subsequently, $C_{2}$ and
$C_{5}$ join the training at a later stage, which we refer to as the second
training phase. Both $C_{2}$ and $C_{5}$ possess their own data and aim to learn
their models while leveraging the knowledge from other clients’ data. Two
possible solutions can be considered for the second training phase: 1.
Training all clients, which would impose additional overhead on the existing
clients; and 2. Training only the new arriving clients, reducing training
complexity. While a hybrid approach, involving training new clients for a few
epochs and then training all clients together, is also possible, we focus on
the extreme cases (train all or train new) to study the impact of introducing
new information to existing knowledge.
TABLE II: Accuracy ($\%$) results of $6$ clients, with $2$ joining late, on the Fashion dataset.
Client | $\boldsymbol{C_{1}}$ | $\boldsymbol{C_{2}}$ | $\boldsymbol{C_{3}}$ | $\boldsymbol{C_{4}}$ | $\boldsymbol{C_{5}}$ | $\boldsymbol{C_{6}}$
---|---|---|---|---|---|---
Training stage | Balanced data distribution
$1^{st}$ w. 4 clients | $91.4$ | | $91.5$ | $91.6$ | | $91.4$
$2^{nd}$ w. ALL clients | $92.6$ | $92.6$ | $92.6$ | $92.4$ | $92.3$ | $92.3$
$2^{nd}$ w. NEW clients | $91.0$ | $91.4$ | $90.9$ | $90.6$ | $91.6$ | $91.2$
Training stage | Imbalanced data distribution
$1^{st}$ w. 4 clients | $88.7$ | | $91.0$ | $91.6$ | | $92.1$
$2^{nd}$ w. ALL clients | $90.5$ | $90.2$ | $92.3$ | $92.6$ | $92.6$ | $93.2$
$2^{nd}$ w. NEW clients | $87.5$ | $89.8$ | $91.0$ | $91.2$ | $92.1$ | $92.1$
We conduct experiments on both the Fashion and CIFAR10 datasets to investigate
the scenarios involving new clients joining the training process. The detailed
settings are deferred to the experimental evaluation section. Table II
presents the accuracy results of the first training phase (without $C_{2}$ and
$C_{5}$), the second training phase with solution one (training all clients),
and solution two (training new clients only). Please note that all training
is performed using P-SL. From the obtained results, we observe that training
all clients helps new clients learn their deep models while slightly improving
the accuracy of existing clients (e.g., $C_{1}$ and $C_{6}$), since the server
is reinforced by learning from the newcomers’ data. Conversely, training only the
new clients leads to the server forgetting the knowledge learned from existing
clients, thereby reducing the learning performance of the new joining clients
($C_{2}$ and $C_{5}$). Additionally, the accuracy of the existing clients also
decreases due to the updating of the server model during training with the new
clients. A similar effect can be observed in Fig. 4, which visualizes the
results on CIFAR10. After the second training phase with the new clients only,
the accuracy of the existing clients dropped by $10\%-20\%$ with both balanced
and imbalanced data, indicating the phenomenon of forgetting in deep learning.
Therefore, training all clients when newcomers join is a suitable approach to
maintain the benefits of collaborative learning. However, retraining the
existing clients incurs network and computation overhead, which is a
limitation for low-end devices.
(a) Balanced data
(b) Imbalanced data
Figure 4: Accuracy ($\%$) results of $6$ clients, with $2$ joining late on
CIFAR10 dataset.
In summary, our experiments on P-SL involving clients joining after the
initial learning phase demonstrate that retraining the entire network is
beneficial for newcomers and enhances the performance of existing clients.
However, this approach also incurs additional costs for the existing clients,
which can be a disadvantage, particularly for low-end devices. In this
context, it would be preferable for existing clients not to retrain, though
skipping retraining in turn reduces accuracy due to the forgetting phenomenon.
#### IV-B2 Cache-based P-SL algorithm
Algorithm 2 Server executes in cache-based P-SL.
Input: Smashed data and labels from $Client_{i}$
Output: Gradients of the split layer to $Client_{i}$
1:$Server$ caches $Client_{i}$’s smashed data and labels
2:if the cache pool is not empty then
3:$Server$ randomly selects some smashed data and labels from the cache pool
4:$Server$ concatenates the cached smashed data with $Client_{i}$’s smashed data
5:$Server$ concatenates the cached labels with $Client_{i}$’s labels
6:$Server$ propagates the concatenated data through its layers
7:$Server$ computes errors based on the concatenated labels
8:$Server$ back-propagates the gradients to its first layer
9:$Server$ slices the gradients based on the split layer’s size
10:$Server$ sends the sliced gradients of the split layer to $Client_{i}$
In order to address the issue of forgetting, we propose an enhanced method for
training only newcomers to reduce the additional cost of retraining while
preserving the knowledge acquired by existing clients. Our approach involves
caching the smashed data sent from clients to the server during training,
which enhances the learning process of the server model. By caching the data,
the server can review knowledge while incorporating new information,
mitigating the catastrophic forgetting phenomenon that can occur when a model
is serially trained among clients. To incorporate caching into the server part
of P-SL, we modify the execution as depicted in the box from line $5$ to line
$8$ in Alg. 1. This modification enables the server to cache smashed data from
all clients. Subsequently, this cached data can be combined with incoming
smashed data during the training of the next client, allowing the server to
‘review’ previous knowledge. The specific details of this modification are
presented in Alg. 2.
For each iteration of client $C_{i}$’s training, upon receiving the smashed
data, $z_{i}=f_{u_{i}}(x_{i}^{train})$, and the corresponding labels,
$y_{i}^{train}$, the server stores them in a cache pool (line $1$). Before
performing forward propagation, the server randomly selects cached data
$(z^{cache},y^{cache})$ from the cache pool and concatenates it with the
incoming smashed data and labels from client $C_{i}$ to form
$([z_{i},z^{cache}],[y_{i}^{train},y^{cache}])$ as shown in lines $3-5$.
Subsequently, the server proceeds with the forward and backward passes using
the concatenated data as usual (lines $6-9$). Let $\mathcal{L}$ denote the
loss function used to measure the discrepancy between the ground-truth labels
and the model’s predicted outputs. The gradients at the server’s last layer
are computed as follows:
$\displaystyle\nabla\mathcal{L}(\mathrm{outputs},\mathrm{labels})=\underset{u_{i},w}{\nabla}\left[\mathcal{L}\left(g_{w}([z_{i},z^{cache}]),[y_{i}^{train},y^{cache}]\right)\right]$
It is important to note that the computed gradients for the split layer have
the size of the concatenated data instead of the size of $z_{i}$. Therefore,
the server needs to slice the gradients to fit the size of $z_{i}$ before
sending them to the client (line $9$). The execution in clients remains the
same as in P-SL, as shown in Alg. 1. From the above equation, the gradients
are computed not only based on the errors from training with $C_{i}$’s data
but also from other clients’ data (cached smashed data and labels).
Consequently, by updating $g_{w}$ using these gradients, the server can
simultaneously learn new knowledge from $C_{i}$’s data and review knowledge
previously acquired from other clients’ data.
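As a rough sketch of the server step in Alg. 2, the fragment below caches incoming smashed data, mixes in a random cached batch, and slices the resulting gradients back to the client's batch size. The linear server model, MSE loss, and cache-sampling cap are illustrative assumptions standing in for the paper's CNN server part $g_{w}$ and its hyperparameters.

```python
import numpy as np

class CacheServer:
    """Sketch of the cache-based server step (Alg. 2). The linear model and
    MSE loss are illustrative assumptions, not the paper's CNN server part."""

    def __init__(self, feat_dim, n_cache=32, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.normal(scale=0.1, size=(feat_dim, 1))  # server weights
        self.cache = []          # pool of (smashed sample, label) pairs
        self.n_cache = n_cache   # assumed cap on cached samples mixed per step

    def train_step(self, z_i, y_i, lr=0.01):
        # cache the incoming smashed data and labels (Alg. 2, line 1)
        self.cache.extend(zip(z_i, y_i))
        # randomly select cached samples and concatenate (lines 2-5)
        k = min(self.n_cache, len(self.cache))
        idx = self.rng.choice(len(self.cache), size=k, replace=False)
        z = np.concatenate([z_i, np.stack([self.cache[j][0] for j in idx])])
        y = np.concatenate([y_i, np.array([self.cache[j][1] for j in idx])])
        # forward pass, error, and backward pass on the concatenation (lines 6-8)
        err = z @ self.W - y.reshape(-1, 1)
        grad_z = (2.0 / len(z)) * err @ self.W.T     # gradient w.r.t. smashed data
        self.W -= lr * (2.0 / len(z)) * z.T @ err    # update server weights
        # slice gradients to the client's batch size before replying (lines 9-10)
        return grad_z[: len(z_i)]

server = CacheServer(feat_dim=8)
z1, y1 = np.ones((4, 8)), np.zeros(4)
grads = server.train_step(z1, y1)
print(grads.shape)   # (4, 8): sliced to the client's own batch
```

The returned gradient matches the client's own batch size, mirroring the slicing step before the server replies to $Client_{i}$.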
#### IV-B3 Computation and privacy analysis
In cache-based P-SL, we have made modifications only to the server’s
procedure, ensuring that the cost at the client side remains the same as in
P-SL. The additional costs incurred at the server, such as storing cached data
and processing concatenation, are considered acceptable because the server is
assumed to possess sufficient computing resources to serve multiple clients.
Moreover, we have the flexibility to control the size of the cached data,
allowing us to adjust the server’s performance accordingly. As a result,
cache-based P-SL does not increase the cost at the client side, thereby
preserving the benefits when applied to IoT/mobile environments.
In terms of data privacy, P-SL already safeguards against sharing local
weights among clients, thereby reducing the risk of model inversion attacks by
a malicious client. Additionally, the caching approach in cache-based P-SL
does not violate any privacy concerns, as the cached data is public by default
in SL, and clients willingly share it with the server to derive the utility
of the learning process. In summary, cache-based P-SL does not increase the
cost at the client side or compromise the privacy of local private data.
However, there is an additional overhead in terms of computing and storage
resources at the server, which is more feasible to handle compared to low-end
devices at the client side. To comprehensively evaluate the learning
performance of the proposed scheme, we conduct experiments and present the
results in the following section.
## V Experimental evaluation
In order to conduct experiments for evaluating the performance of the proposed
P-SL, we consider classification tasks on small-scale image datasets using
deep models based on 2D-CNN. Table III provides a summary of the selected
datasets along with the corresponding Very Deep Convolutional Networks (VGG)
[32] based deep models. The deep models are divided into two parts: the first
two convolutions are deployed at the clients, while the rest of the model
resides on the server.
TABLE III: Datasets and corresponding deep learning models
Dataset | Input size | Samples | Deep model architecture
---|---|---|---
| | | Client side | Server side
Fashion | $1\times 28\times 28$ | $60,000$ | 2conv | 4conv+1dense
CIFAR10 | $3\times 32\times 32$ | $60,000$ | 2conv | 8conv+1dense
In these evaluations, we select two datasets: Fashion [33] and CIFAR10 [34],
both of which consist of 10 classes and have separate train and test sets. We
distribute a total of $60k$ samples from the train set among $N$ clients and
evaluate the learning performance of each client using the same test set. To
simulate an imbalanced data distribution, we assign a varying number of
samples to each client following a half bell curve of the standard normal
distribution. Table IV provides the details of the total number of data
samples allocated to each client for the case of $N=6$ clients.
TABLE IV: Imbalanced data distribution for $6$ clients
Client index | $\boldsymbol{C_{1}}$ | $\boldsymbol{C_{2}}$ | $\boldsymbol{C_{3}}$ | $\boldsymbol{C_{4}}$ | $\boldsymbol{C_{5}}$ | $\boldsymbol{C_{6}}$
---|---|---|---|---|---|---
Splitting ratio | $1\%$ | $3\%$ | $9\%$ | $19\%$ | $30\%$ | $38\%$
No. of samples | $600$ | $1.8k$ | $5.4k$ | $11.4k$ | $18k$ | $22.8k$
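The sample counts of Table IV follow directly from the half-bell-curve splitting ratios; a minimal check:

```python
# Reproduce Table IV: 60k training samples split among 6 clients
# following the half-bell-curve ratios.
total = 60_000
ratios = [0.01, 0.03, 0.09, 0.19, 0.30, 0.38]       # Table IV splitting ratios
samples = [round(total * r) for r in ratios]
print(samples)       # [600, 1800, 5400, 11400, 18000, 22800]
print(sum(samples))  # 60000 -- the whole train set is distributed
```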
### V-A P-SL training accuracy
TABLE V: Benchmarking accuracy ($\%$) of SL and SFL
Dataset | Balanced data | Imbalanced data
---|---|---
| SL | SFL | SL | SFL
Fashion | $93.7$ | $93.1$ | $93.7$ | $93.4$
CIFAR10 | $85.6$ | $84.2$ | $85.4$ | $84.6$
Using the selected datasets and local data distributions described in the
previous section, we implement P-SL with $N=6$ clients and a central server.
After training, we measure the learning performance of each client when
performing inference on a test set collaboratively with the server
($h_{\theta_{i}}=f_{u_{i}}\cdot g_{w}$). We compare the results with multiple
SL, referred to as $m$SL, where we set up $N$ different SL processes between
$N$ client-server pairs ($h_{\theta_{i}}=f_{u_{i}}\cdot g_{w_{i}}$). Note
that, with $m$SL we have $6$ different server instances, while P-SL uses a
single shared instance of the server model. We also include the results of SL
and SFL for benchmarking reference in Table V.
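The comparison can be sketched with placeholder callables: P-SL composes every client part $f_{u_{i}}$ with one shared server part $g_{w}$, whereas $m$SL pairs each client with its own server instance $g_{w_{i}}$. The toy affine models below are assumptions for illustration, not the paper's CNNs.

```python
def make_inference(client_parts, server_parts):
    # one server part shared by all -> P-SL; one per client -> mSL
    return [lambda x, f=f, g=g: g(f(x))
            for f, g in zip(client_parts, server_parts)]

f = [lambda x, i=i: x + i for i in range(3)]         # local models f_{u_i}
g_shared = lambda z: 2 * z                           # single shared g_w (P-SL)
psl = make_inference(f, [g_shared] * 3)
msl = make_inference(f, [lambda z, i=i: 2 * z + i for i in range(3)])
print([h(1) for h in psl])   # [2, 4, 6]: all clients share one server model
print([h(1) for h in msl])   # [2, 5, 8]: each client has its own server model
```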
TABLE VI: Accuracy ($\%$) results with Fashion dataset
Client | $\boldsymbol{C_{1}}$ | $\boldsymbol{C_{2}}$ | $\boldsymbol{C_{3}}$ | $\boldsymbol{C_{4}}$ | $\boldsymbol{C_{5}}$ | $\boldsymbol{C_{6}}$
---|---|---|---|---|---|---
Scheme | With balanced data
$m$SL | $89.5$ | $90.1$ | $89.8$ | $90.0$ | $89.9$ | $90.2$
P-SL | $92.5$ | $92.5$ | $92.4$ | $92.6$ | $92.6$ | $92.6$
Scheme | With imbalanced data
$m$SL | $78.6$ | $84.9$ | $88.2$ | $90.4$ | $91.1$ | $92.1$
P-SL | $88.8$ | $91.1$ | $92.1$ | $92.8$ | $92.9$ | $92.9$
The training accuracy of each client with the Fashion dataset is presented in
Table VI, while the results with the CIFAR10 dataset are visualized in Fig. 5.
With $m$SL, the accuracy of each client depends on the amount of data samples
held by that client, as expected. Therefore, under balanced data, these
clients have similar accuracy (around $90\%$ with Fashion), while their
accuracy ranges from lower ($C_{1}$ with $78.6\%$) to higher ($C_{6}$ with
$92.1\%$) values with imbalanced data (see Table VI). We can visually observe
similar results in Fig. 5, which shows the results using CIFAR10, a more
complex and difficult dataset that leads to a higher accuracy difference
between clients with fewer and more data samples.
(a) Balanced data
(b) Imbalanced data
Figure 5: Accuracy ($\%$) results with CIFAR10 dataset.
In the proposed P-SL, despite the separate training of local models, the
learning performance is better than that of $m$SL, owing to the shared server
model, which aggregates knowledge from all clients. P-SL achieves a $3\%$
higher accuracy with the Fashion dataset and a $12\%$ higher accuracy with the
CIFAR10 dataset compared to $m$SL under a balanced data distribution. Under an
imbalanced distribution, the results are even more impressive as we observe
significant accuracy improvements for clients with fewer data, such as
$C_{1}$, $C_{2}$, etc. (see Fig. 5b). This demonstrates the benefit of P-SL in
collaborative learning, even without weight sharing among clients. We also
compare our results with SL and SFL, which achieve state-of-the-art
collaborative learning performance. By sharing local models among clients,
knowledge is aggregated at both the client and server sides, resulting in
higher accuracy for SL and SFL compared to P-SL, which only aggregates
knowledge at the server. In summary, our experiments demonstrate that P-SL,
without local weight sharing, still benefits collaborative learning between
multiple clients and a central server. Under imbalanced data distribution,
clients with fewer data can learn more by training with clients having more
data.
In our experiments, we follow a fixed training order for the clients, starting
with $C_{1}$, followed by $C_{2}$, and so on up to $C_{6}$. This fixed
training order may have an impact on the accuracy performance since the
learning process can be influenced by the presence of more or fewer data
samples during training, especially under imbalanced data distribution.
However, a detailed investigation of the training order and its effects is
deferred to the next section for further analysis and discussion.
### V-B Privacy preservation at client side with P-SL
We conduct experiments to evaluate the privacy preservation of P-SL. During
the training process, we employ model inversion attacks to reconstruct the raw
data of all clients using the smashed data that clients send to the server.
Specifically, these experiments are conducted with $6$ clients under a
balanced data distribution. While all clients are engaged in training, we
train a decoder using the local weights and data of a client, which in our
case is client $C_{1}$ (representing a malicious client who is curious about
the data of other clients). The trained decoder is used to reconstruct raw
data from the smashed data that any client sent to the server. Based on this
experiment setup, we evaluate the privacy preservation by measuring the amount
of data leakage from clients’ raw data. Data leakage is quantified using the
SSIM [35], which is a perceptual metric that assesses image quality
degradation. SSIM provides a measure of similarity between the raw and
reconstructed images and has been commonly used in previous works to evaluate
data leakage, such as in [13, 30, 36]. Unlike other metrics like MSE or PSNR,
SSIM offers a more intuitive and interpretable metric, where values range
between $0$ and $1$. A value of $0$ indicates the least similarity, while a
value of $1$ represents the highest similarity, indicating the most leakage.
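For illustration, a simplified single-window SSIM (the full metric slides a Gaussian window over local patches) shows how reconstruction quality maps onto the leakage scores reported below:

```python
import numpy as np

def global_ssim(x, y, L=1.0):
    """Single-window SSIM over whole images in [0, L] (no sliding window)."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stability constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(0)
raw = rng.random((28, 28))                      # stand-in for a private image
perfect = raw.copy()                            # perfect reconstruction: full leak
noisy = 0.5 * raw + 0.5 * rng.random((28, 28))  # degraded reconstruction
print(round(global_ssim(raw, perfect), 2))      # 1.0
print(global_ssim(raw, noisy) < 0.9)            # True: less similarity, less leakage
```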
Figure 6: Data leakage at client side in P-SL: raw private image (leftmost)
and the reconstructed ones using smashed data of $C_{1},C_{2},$ and $C_{3}$,
respectively.
TABLE VII: Data leakage (SSIM) comparison between P-SL, SL, and SFL when $C_{1}$ is the attacker.
Scheme | $\boldsymbol{C_{1}}$ | $\boldsymbol{C_{2}}$ | $\boldsymbol{C_{3}}$ | $\boldsymbol{C_{4}}$ | $\boldsymbol{C_{5}}$ | $\boldsymbol{C_{6}}$
---|---|---|---|---|---|---
SL | $0.97$ | $0.96$ | $0.95$ | $0.95$ | $0.95$ | $0.95$
SFL | $0.97$ | $0.97$ | $0.97$ | $0.97$ | $0.97$ | $0.97$
P-SL | $0.97$ | $0.50$ | $0.51$ | $0.52$ | $0.53$ | $0.50$
P-SL* | 4$e$-4 | 1$e$-2 | 1$e$-2 | 2$e$-2 | 1$e$-2 | 2$e$-2
*: Data leakage is measured using the MSE metric.
Fig. 6 demonstrates the reconstructed images from other clients assuming that
$C_{1}$ is malicious. The decoder is trained using $C_{1}$’s local model and
its raw data, resulting in clear reconstructions from $C_{1}$’s smashed data.
However, the quality of reconstruction significantly drops when applying the
decoder to the smashed data of other clients. The reconstructed images from
$C_{2}$ and $C_{3}$ in Fig. 6 are vague and contain high noise compared to the
raw images. The numerical results presented in Table VII reveal that the
reconstruction quality (SSIM value) of all clients (except $C_{1}$) in P-SL is
only around $0.5$. On the other hand, with SL and SFL, as partially visualized
in Fig. 1, $C_{1}$ can almost entirely reconstruct the raw data of all other
clients, with SSIM values exceeding $0.95$. This is similar to $C_{1}$ self-
reconstructing its own data.
The last row of Table VII presents the leakage of P-SL measured by MSE between
the raw and reconstructed data. The results show that the errors in
reconstructing data for clients $C_{2}$ to $C_{6}$ are more than $100\times$
higher than the reconstruction error for $C_{1}$. While a value of $0$ in the
MSE metric indicates full leakage (no reconstruction error), there is no upper
bound for non-leak or partial leak scenarios. Additionally, the correlation of
leakage among clients does not exhibit a clear and meaningful trend when using
MSE, unlike when using SSIM. Therefore, we primarily present the leakage
measure using the SSIM metric. In [30], the authors made a similar observation
and suggested using SSIM to measure leakage instead of MSE and its related
PSNR.
TABLE VIII: Data leakage of P-SL under imbalanced data
with different attackers.
Attacker | $\boldsymbol{C_{1}}$ | $\boldsymbol{C_{2}}$ | $\boldsymbol{C_{3}}$ | $\boldsymbol{C_{4}}$ | $\boldsymbol{C_{5}}$ | $\boldsymbol{C_{6}}$
---|---|---|---|---|---|---
$\boldsymbol{C_{1}}$ | $0.95$ | $0.32$ | $0.51$ | $0.53$ | $0.48$ | $0.39$
$\boldsymbol{C_{6}}$ | $0.42$ | $0.45$ | $0.60$ | $0.71$ | $0.73$ | $0.97$
We also conduct experiments with imbalanced data and obtain similar results,
as presented in Table VIII. Based on the experimental results, we can conclude
that P-SL outperforms SL and SFL in preserving data privacy at the client
side. However, it is important to note that the SSIM values between
reconstructed and raw images in P-SL are still significantly different from
$0$, indicating the presence of leakage. This leakage can be attributed to the
query-free attack described in [13], where the attacker does not require
knowledge of the target model or the ability to query it. The only assumption
for this type of attack is that the attacker and the victim share the same
data distribution. In our experiments, the data is uniformly distributed among
all clients, regardless of the number of samples they have. Therefore, the
local model of the attacker acts as the shadow model for the query-free
attack, leading to partial data leakage. Furthermore, an interesting
observation from Table VIII is that attackers with more data (e.g. $C_{6}$)
can reconstruct higher quality images compared to the ones with fewer data
(e.g., $C_{1}$).
Comparison to differential privacy. Noise-based protection techniques, such as
differential privacy (DP) [37], are commonly used to provide privacy guarantees
for users’ private data. Recently, various approaches [18, 38, 30, 20] have
been proposed to apply local DP for protecting user data privacy in SL. This
makes DP a competitive approach when considering the defined threat model
mentioned above. In order to compare DP to P-SL, we utilize the Laplace
mechanism, as described in [30]. However, it is important to note that there
exists a trade-off between accuracy and privacy. Increasing the amount of
added noise for higher privacy leads to a reduction in model accuracy. In this
context, we define privacy as the dissimilarity between the reconstructed and
raw data, represented by $(1-$SSIM$)$. The results are shown in Fig. 7, where
we also plot the results of P-SL. The experiments are conducted on the Fashion
dataset with varying levels of noise. From the figure, it is evident that DP
(line) exhibits a trade-off, wherein sacrificing accuracy results in higher
privacy. On the other hand, the result of P-SL (dot) stays above the line,
indicating that P-SL preserves more privacy (greater dissimilarity in
reconstruction) with less sacrifice in accuracy. According to the experiments,
P-SL achieves privacy similar to applying DP with $\epsilon=1$, while
maintaining an accuracy comparable to utilizing DP with $\epsilon=3$.
Therefore, P-SL is a more efficient method than DP in defending against model
inversion attacks.
Figure 7: Accuracy - privacy trade-off comparison between P-SL and DP with
various noise levels.
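A minimal sketch of the Laplace mechanism used as the DP baseline, assuming a unit clipping bound (sensitivity) on the smashed data; the actual sensitivity used in [30] may differ:

```python
import numpy as np

def laplace_protect(smashed, epsilon, sensitivity=1.0, rng=None):
    """Perturb smashed data with Laplace noise before sending it to the server.
    `sensitivity` is an assumed clipping bound, not a value from the paper."""
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon            # smaller epsilon -> stronger noise
    return smashed + rng.laplace(0.0, scale, size=smashed.shape)

rng = np.random.default_rng(0)
z = np.zeros((100, 100))                 # stand-in for a batch of smashed data
strong = laplace_protect(z, epsilon=1.0, rng=rng)  # more privacy, lower accuracy
weak = laplace_protect(z, epsilon=3.0, rng=rng)    # less noise, higher accuracy
print(strong.std() > weak.std())         # True: eps=1 perturbs more than eps=3
```

The smaller the privacy budget $\epsilon$, the larger the noise scale, which reproduces the accuracy-privacy trade-off plotted in Fig. 7.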
### V-C P-SL in dynamic environments
#### V-C1 With multiple server instances
Table IX presents the experimental results of parallelizing P-SL with $6$
clients and $2$ server instances. Each client is randomly associated with a
server instance to perform the training. With two servers available, two
groups of clients can process training in parallel, theoretically speeding up
the training by a factor of two. Based on the reported results, it is evident
that parallelized P-SL achieves similar results to sequential P-SL, where we
sequentially train each client with a single server. Therefore, parallelized
P-SL can be considered ‘scalable’, as it speeds up the training without
compromising the model’s accuracy.
TABLE IX: Accuracy ($\%$) results when parallelizing P-SL
with two server instances.
Client | $\boldsymbol{C_{1}}$ | $\boldsymbol{C_{2}}$ | $\boldsymbol{C_{3}}$ | $\boldsymbol{C_{4}}$ | $\boldsymbol{C_{5}}$ | $\boldsymbol{C_{6}}$
---|---|---|---|---|---|---
Data dist. | Fashion dataset
Balance | $92.1$ | $92.3$ | $92.4$ | $92.2$ | $92.1$ | $92.1$
Imbalance | $90.0$ | $91.0$ | $92.2$ | $92.6$ | $92.8$ | $92.7$
Data dist. | CIFAR10 dataset
Balance | $81.0$ | $81.6$ | $81.8$ | $81.4$ | $81.4$ | $81.5$
Imbalance | $59.1$ | $74.6$ | $81.1$ | $83.1$ | $84.1$ | $83.8$
#### V-C2 Training newcomers
By leveraging cached data during the training of new clients, cache-based P-SL
facilitates the review of previous knowledge, resulting in more stable
performance and higher accuracy for both new and existing clients. Fig. 8
illustrates the learning performance of newcomers ($C_{2}$ and $C_{5}$) and
existing clients ($C_{1}$, $C_{3}$, $C_{4}$, and $C_{6}$) using P-SL (left
column) and cache-based P-SL (right column) on the Fashion and CIFAR10
datasets. Through the use of caching, learning with only newcomers in P-SL
becomes more stable, and the accuracy achieved is comparable to that of
retraining the entire network. Therefore, we can train newcomers exclusively
using cache-based P-SL, thereby saving the additional cost associated with
full retraining while experiencing only a slight reduction in the accuracy of
existing clients.
---
(a) Fashion dataset
---
(b) CIFAR10 dataset
Figure 8: Learning performance comparison between non-cached (left column) and
cache-based (right column) P-SL with Fashion and CIFAR10 datasets when
training newcomers, $C_{2}$ and $C_{5}$ (second training).
(a) Without caching
(b) With caching
Figure 9: Learning performance of P-SL on imbalanced CIFAR10 where the order
of clients participating in the training each epoch is random.
#### V-C3 Order of clients in training
In the previous section, we have mentioned the impact of the order in which
clients participate in the training on the final accuracy, particularly in
scenarios with imbalanced data distribution where some clients have more data
than others. We conduct experiments with P-SL, where each epoch, we randomly
select the order of clients to participate in the training with the server. We
compare the learning performance to the fixed order, which starts from $C_{1}$
and ends at $C_{6}$ each epoch, to assess the effect of client order. The
experimental results reveal no significant difference between training with a
fixed or random order under balanced data distribution. Due to the similar
quantity and distribution of data, the learning performance of the server with
$C_{1}$ is also similar to that of any other client. However, under imbalanced
data distribution, the learning performance of the server with clients having
more data would differ from those with fewer data. We plot the learning
performance of P-SL in Fig. 9a, where it can be observed that the achieved
accuracy is not stable. However, when using cache-based P-SL (shown in Fig.
9b), the caching approach demonstrates its effectiveness in stabilizing the
learning curve.
TABLE X: Accuracy ($\%$) results of P-SL, SL, and SFL w/ and w/o caching under non-IID data distribution.
| Fashion dataset | CIFAR10 dataset
---|---|---
Scheme | P-SL | SL | SFL | P-SL | SL | SFL
Without caching | $54.6$ | $54.8$ | $58.2$ | $45.5$ | $52.8$ | $53.8$
With caching | $56.1$ | $92.8$ | $91.2$ | $47.2$ | $83.2$ | $80.0$
#### V-C4 Training with Non-IID data
As the training data is collected by individual clients based on their local
environments and usage patterns, it is practical to assume that it is non-IID
distributed, e.g., only data collected from a single person can be obtained
[39]. Going forward, we continue to evaluate the learning performance of our
proposed method in a non-IID setting, which simulates the worst-case
statistical heterogeneity of local data. Following the non-IID setting
described in [11], we distribute data from only half of the classes ($5$ out
of $10$) to each client. The accuracy results obtained after the full training
process of SL, SFL, and P-SL with and without caching are reported in Table X.
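The non-IID partition (each client holding $5$ of the $10$ classes) can be sketched as below; the rule assigning consecutive class labels is an illustrative assumption, as [11] may assign classes differently:

```python
# Each of the 6 clients receives samples from only 5 of the 10 classes.
def noniid_classes(n_clients=6, n_classes=10, per_client=5):
    # assumed rule: client i holds classes i, i+1, ..., i+4 (mod 10)
    return [[(i + k) % n_classes for k in range(per_client)]
            for i in range(n_clients)]

assignment = noniid_classes()
print(assignment[0])                              # [0, 1, 2, 3, 4]: C1's classes
print(all(len(set(c)) == 5 for c in assignment))  # every client sees 5 classes
```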
The results demonstrate that all the evaluated schemes are sensitive to non-
IID data, as the learning objectives of each client diverge when the training
data is heterogeneous [11]. These findings align with those in [10, 11], where
SL performs worse than SFL in non-IID settings. Our proposed P-SL is
particularly sensitive to non-IID data due to the absence of client-side
synchronization. Unfortunately, although caching helps stabilize the training
loss, it only slightly improves learning accuracy, limiting the applicability
of P-SL in non-IID settings. However, caching supports the learning of SL and
SFL in non-IID data scenarios, enabling them to achieve comparable results to
those obtained in IID data settings (both balanced and imbalanced). This
advancement promotes the development of SL in non-IID data environments, an
area that has received limited study thus far.
Based on the above experiments, we can conclude that caching plays a crucial
role in stabilizing and maintaining the learning performance of P-SL. As for
parallelization to speed up training, this caching approach can be extended to
the server with multiple instances that share a cache pool. Furthermore, the
strategy for caching, such as determining which and how much data to cache,
will be a topic for future research.
## VI Conclusion
This paper aims to address the issue of data leakage in traditional SL and its
variants that arise from the sharing of local weights during client training.
We propose and analyze a variant called SL without local weight sharing, P-SL,
to enhance privacy preservation for user data. Experimental results across
various data distributions demonstrate that P-SL enables collaborative
learning from distributed clients while reducing data leakage at the client
side by half and maintaining comparable accuracy to SL and SFL.
Furthermore, P-SL can be parallelized to expedite client training without
sacrificing model accuracy. We also investigate P-SL in a dynamic environment
where new clients join the training, which can impact the existing clients due
to the forgetting phenomenon. To address this, we propose a server-caching
mechanism for P-SL, which facilitates the review of learned knowledge during
training with newcomers. Experimental results show that cache-based P-SL
stabilizes the learning performance and allows for training only the late-
arriving clients, reducing client-side overhead while mitigating the server-
side forgetting issue. In conclusion, this paper presents P-SL as an effective
approach for preserving user data privacy in collaboratively distributed
learning, particularly for IoT/mobile devices in real-world dynamic
environments. Future research directions include exploring caching strategies
to enhance the proposal further.
## References
* [1] X. Liu, L. Xie, Y. Wang, J. Zou, J. Xiong, Z. Ying, and A. V. Vasilakos, “Privacy and Security Issues in Deep Learning: A Survey,” _IEEE Access_ , vol. 9, pp. 4566–4593, 2021.
* [2] O. Gupta and R. Raskar, “Distributed learning of deep neural network over multiple agents,” _Journal of Network and Computer Applications_ , vol. 116, pp. 1–8, 2018.
* [3] P. Vepakomma, O. Gupta, T. Swedish, and R. Raskar, “Split learning for health: Distributed deep learning without sharing raw patient data,” 2018. [Online]. Available: https://arxiv.org/abs/1812.00564
* [4] M. G. Poirot, P. Vepakomma, K. Chang, J. Kalpathy-Cramer, R. Gupta, and R. Raskar, “Split Learning for collaborative deep learning in healthcare,” 2019.
* [5] P. Vepakomma and R. Raskar, _Split Learning: A Resource Efficient Model and Data Parallel Approach for Distributed Deep Learning_. Cham: Springer International Publishing, 2022, pp. 439–451.
* [6] Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated Machine Learning: Concept and Applications,” _ACM Trans. Intell. Syst. Technol._ , vol. 10, no. 2, 2019.
* [7] V. Turina, Z. Zhang, F. Esposito, and I. Matta, _Combining Split and Federated Architectures for Efficiency and Privacy in Deep Learning_ , 2020, pp. 562–563.
* [8] J. Jeon and J. Kim, “Privacy-Sensitive Parallel Split Learning,” in _2020 International Conference on Information Networking_ , 2020, pp. 7–9.
* [9] C. Thapa, P. C. Mahawaga Arachchige, S. Camtepe, and L. Sun, “SplitFed: When Federated Learning Meets Split Learning,” _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 36, no. 8, pp. 8485–8493, 2022.
* [10] Y. Gao, M. Kim, S. Abuadbba, Y. Kim, C. Thapa, K. Kim, S. A. Camtep, H. Kim, and S. Nepal, “End-to-End Evaluation of Federated Learning and Split Learning for Internet of Things,” in _2020 International Symposium on Reliable Distributed Systems_ , 2020, pp. 91–100.
* [11] Y. Gao, M. Kim, C. Thapa, A. Abuadbba, Z. Zhang, S. Camtepe, H. Kim, and S. Nepal, “Evaluation and Optimization of Distributed Machine Learning Techniques for Internet of Things,” _IEEE Transactions on Computers_ , vol. 71, no. 10, pp. 2538–2552, 2022.
* [12] A. J. Paverd and A. C. Martin, “Modelling and automatically analysing privacy properties for honest-but-curious adversaries,” Tech. Rep., 2014.
* [13] Z. He, T. Zhang, and R. B. Lee, “Model Inversion Attacks against Collaborative Inference,” in _Proceedings of the 35th Annual Computer Security Applications Conference_ , 2019, pp. 148–162.
* [14] D. P. Kingma and M. Welling, _An Introduction to Variational Autoencoders_ , 2019, vol. 12.
* [15] Z. He, T. Zhang, and R. B. Lee, “Attacking and Protecting Data Privacy in Edge-Cloud Collaborative Inference Systems,” _IEEE Internet of Things Journal_ , vol. 8, no. 12, pp. 9706–9716, 2021.
* [16] R. M. French, “Catastrophic forgetting in connectionist networks,” _Trends in Cognitive Sciences_ , vol. 3, no. 4, pp. 128–135, 1999.
* [17] Q. Duan, S. Hu, R. Deng, and Z. Lu, “Combined Federated and Split Learning in Edge Computing for Ubiquitous Intelligence in Internet of Things: State-of-the-Art and Future Directions,” _Sensors_ , vol. 22, no. 16, 2022.
* [18] S. Abuadbba, K. Kim, M. Kim, C. Thapa, S. A. Camtepe, Y. Gao, H. Kim, and S. Nepal, “Can We Use Split Learning on 1D CNN Models for Privacy Preserving Training?” in _Proceedings of the 15th ACM Asia Conference on Computer and Communications Security_ , 2020, pp. 305–318.
* [19] T. Titcombe, A. J. Hall, P. Papadopoulos, and D. Romanini, “Practical Defences Against Model Inversion Attacks for Split Neural Networks,” 2021.
* [20] N. D. Pham, A. Abuadbba, Y. Gao, K. T. Phan, and N. Chilamkurti, “Binarizing Split Learning for Data Privacy Enhancement and Computation Reduction,” _Trans. Info. For. Sec._ , vol. 18, pp. 3088–3100, 2023.
* [21] J. Wang, J. Zhang, W. Bao, X. Zhu, B. Cao, and P. S. Yu, “Not Just Privacy: Improving Performance of Private Deep Learning in Mobile Cloud,” in _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , 2018, pp. 2407–2416.
* [22] P. Vepakomma, A. Singh, O. Gupta, and R. Raskar, “NoPeek: Information leakage reduction to share activations in distributed deep learning,” in _2020 International Conference on Data Mining Workshops_ , 2020, pp. 933–942.
* [23] V. Turina, Z. Zhang, F. Esposito, and I. Matta, “Federated or Split? A Performance and Privacy Analysis of Hybrid Split and Federated Learning Architectures,” in _2021 IEEE 14th International Conference on Cloud Computing_ , 2021, pp. 250–260.
* [24] J. Li, A. S. Rakin, X. Chen, Z. He, D. Fan, and C. Chakrabarti, “ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning,” in _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2022, pp. 10 184–10 192.
* [25] D. Xiao, C. Yang, and W. Wu, “Mixing Activations and Labels in Distributed Training for Split Learning,” _IEEE Transactions on Parallel and Distributed Systems_ , vol. 33, no. 11, pp. 3165–3177, 2022.
* [26] P. Joshi, C. Thapa, S. Camtepe, M. Hasanuzzamana, T. Scully, and H. Afli, “Splitfed learning without client-side synchronization: Analyzing client-side split network portion size to overall performance,” in _Proceedings of the 7th Collaborative European Research Conference_ , vol. 2348, 2021, pp. 1–7.
* [27] P. Joshi, C. Thapa, S. Camtepe, M. Hasanuzzaman, T. Scully, and H. Afli, “Performance and Information Leakage in Splitfed Learning and Multi-Head Split Learning in Healthcare Data and Beyond,” _Methods and Protocols_ , vol. 5, no. 4, 2022.
* [28] A. J. Paverd and A. C. Martin, “Modelling and Automatically Analysing Privacy Properties for Honest-but-Curious Adversaries,” Tech. Rep., 2014.
* [29] H. Dong, C. Wu, Z. Wei, and Y. Guo, “Dropping Activation Outputs With Localized First-Layer Deep Network for Enhancing User Privacy and Data Security,” _IEEE Transactions on Information Forensics and Security_ , vol. 13, no. 3, pp. 662–670, 2018.
* [30] J. Ryu, Y. Zheng, Y. Gao, A. Abuadbba, J. Kim, D. Won, S. Nepal, H. Kim, and C. Wang, “Can differential privacy practically protect collaborative deep learning inference for IoT?” _Wireless Networks_ , 2022.
* [31] Y. Li and X. Lyu, “Convergence Analysis of Sequencial Split Learning on Heterogeneous Data,” 2023.
* [32] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” 2014.
* [33] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms,” 2017.
* [34] A. Krizhevsky, “Learning multiple layers of features from tiny images,” Tech. Rep., 2009.
* [35] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” _IEEE Transactions on Image Processing_ , vol. 13, no. 4, pp. 600–612, 2004.
* [36] H. R. Roth, A. Hatamizadeh, Z. Xu, C. Zhao, W. Li, A. Myronenko, and D. Xu, _Split-U-Net: Preventing Data Leakage in Split Learning for Collaborative Multi-modal Brain Tumor Segmentation_. Springer Nature Switzerland, 2022, pp. 47–57.
* [37] C. Dwork, _Differential Privacy_. Boston, MA: Springer US, 2011, pp. 338–340.
* [38] P. C. Mahawaga Arachchige, P. Bertok, I. Khalil, D. Liu, S. Camtepe, and M. Atiquzzaman, “Local Differential Privacy for Deep Learning,” _IEEE Internet of Things Journal_ , vol. 7, no. 7, pp. 5827–5842, 2020.
* [39] Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, “Federated Learning with Non-IID Data,” 2018.
|
# Thermally-driven scintillator flow in the SNO+ neutrino detector
J.D. Wilson for the SNO+ Collaboration
###### Abstract
The SNO+ neutrino detector is an acrylic sphere of radius 6 m filled with
liquid scintillator, immersed in a water-filled underground cavern, with a
thin vertical neck (radius 0.75 m) extending upwards about 7 m from the sphere
to a purified nitrogen cover gas. To explain a period of unexpected motion of
the scintillator, time-dependent flow simulations have been performed using
OpenFoam. It appears that the motion, inferred from subsequent 24 h-averaged
patterns of transient radon (${}^{222}\mathrm{Rn}$) contamination introduced
during earlier recirculation of scintillator, can be explained by heat
transfer through the detector wall that induced buoyant flow in a thin
wall boundary layer. This mechanism can result in transport of contaminant,
should it be introduced, down the neck to the sphere on a time scale of
several hours. If the scintillator happens to be thermally stratified, the
same forcing produces internal gravity waves in the spherical flow domain, at
the Brunt-Väisälä frequency. Nevertheless, oscillatory motion being by its
nature non-diffusive, simulations confirm that imposing strong thermal
stratification over the depth of the neck can mitigate mixing due to transient
heat fluxes.
###### keywords:
neutrino detector, spherical cavity, internal convection, scintillator motion
Department of Earth & Atmospheric Sciences, University of Alberta, Edmonton,
Alberta, Canada
Highlights:
* Simulation of thermally-driven scintillator motion in a spherical neutrino detector
* Fluid ascends (descends) in a laminar boundary layer along a relatively warm (cool) wall
* Volume conservation demands compensating descent (ascent) of interior fluid
* This mechanism explains the observed motion of transient radon contamination
* $\mathcal{O}[0.1\,\mathrm{W\,m^{-2}}]$ heat flux (or $\mathcal{O}[0.1\,\mathrm{K}]$ thermal inhomogeneity) $\rightarrow$ speeds $\mathcal{O}[\mathrm{mm\,s^{-1}}]$
## 1 Introduction
This paper will report numerical simulations of laminar convective flow within
a spherical container, carried out using OpenFoam to investigate a flow
phenomenon observed within the SNO+ liquid scintillator particle detector. It
complements several earlier papers [1, 2, 3] addressing a similar subject, and
although the novelty of the work lies in its context rather than its
methodology, the simplicity of the problem (in terms of domain geometry, fluid
homogeneity and thermal forcing) implies that the results we present should
have some generality, supplementing what is a surprisingly sparse literature
[4] on a common and important type of flow [5, 6] occurring in laboratory
beakers, storage tanks and so forth.
For present purposes there is no need to delve deeply into the SNO+ science
goals and methodology (for details see [7, 8, 9]). The SNO+ detector is
located in a rock cavity some 2 km underground near Sudbury, Ontario. The
scintillator fluid, a linear alkylbenzene (LAB), is contained within an
acrylic sphere of 6 m radius (Fig. 1), and rises up a cylindrical ‘neck’
(radius 0.75 m) to the interface with a ‘cover gas’ of ultrapure N2 within the
Universal Interface (UI), about 13 m above the centre of the sphere. The
detector sphere (hereafter ‘AV’ for ‘acrylic vessel’) and neck ‘float’ within
a surrounding mass of purified water that fills the rock cavity to a level
about 1 m below the top of the neck, hold-down ropes compensating for the
buoyancy of the AV (the scintillator’s density is nominally about
$860\,\mathrm{kg\,m^{-3}}$). Within the water, approximately concentric with
the AV and at a radius of about 8.5 m from its centre, some 9400
inward-looking photomultiplier tubes (PMTs), mounted on a frame referred to as the
PSUP (PMT Support), respond to photons emerging from the detector — and carry
the information from which physics data are deduced.
It is characteristic of this type of detector that one seeks to discriminate a
relatively small number of target events, e.g. scattering of an electron by a
solar neutrino, or a hypothetical neutrinoless double beta decay, from a
‘background’ of vastly more numerous events that stem mostly from natural
radioactivity within and near the detector. Accordingly every effort is made
to minimize radioactive contamination, by optimally selecting material parts,
and by filtration and purification of the scintillator and external water.
Nevertheless in practice the scintillator will carry a low level of activity,
particularly from short-lived progeny of the ${}^{238}\mathrm{U}$ and
${}^{232}\mathrm{Th}$ chains during and after filling and recirculation
operations, and this is monitored by quantifying the associated event rates.
The contaminant of primary interest in this paper is radon
(${}^{222}\mathrm{Rn}$, half-life 3.8 days), which is present in the
laboratory air at a level of about $10^{2}\;\mathrm{Bq\,m^{-3}}$ [7]. Within
the AV the concentration of ${}^{222}\mathrm{Rn}$ can be quantified from the
decay rates of its daughter radionuclides, i.e. the transformation of the
daughter bismuth (${}^{214}\mathrm{Bi}$) by beta decay to polonium
(${}^{214}\mathrm{Po}$) and the subsequent alpha decay of the
${}^{214}\mathrm{Po}$, a sequence producing a delayed coincidence signal that
is easily identified. It is critical for SNO+ that the radon level in the
detector be many orders of magnitude below that in ambient air or water, which
is achieved by isolating the scintillator from room air to the maximum
possible degree. (At present the ratio of the radon concentration in the cover
gas to that in the air is smaller than $2\times 10^{-4}$ [7], and further
improvement is anticipated. Please note that the level of BiPo-214 activity
evident in Figs. (2,5) below was seen just after the completion of
scintillator operations to mix in the fluor (2,5-diphenyloxazole or ‘PPO’,
incorporated at a concentration of 2.2 g/L as a wavelength-shifter), and is
much higher than, and not characteristic of, the level now achieved.)
Evidently then, in the context of the purity of liquid scintillator particle
detectors — SNO+ being one of several in operation — one is going to be
concerned to understand and perhaps control scintillator motion, which may
transport a contaminant into the “fiducial volume”, i.e. a subregion of
scintillator sufficiently distant from the wall as to exclude background
events occurring in or near the AV walls. Absent any pumping or intentional
agitation, scintillator motion can be driven only by thermal gradients or
temporal trends, and in practice such convective motion fields are very weak.
In this regard one may take the example of the Borexino (liquid scintillator)
underground neutrino detector, which is similar in size and type to SNO+, and
whose scintillator motion patterns have been investigated in a sequence of
papers [10, 1, 2]. Early measurements gave proof of thermally-driven motion
within the detector, driven largely by seasonal variation of laboratory air
temperature, and modifications were made to reduce the amplitude of that
motion (insulation of the outer wall of the detector, and provision of active
temperature controls) with the result that lateral inhomogeneity of the
detector’s boundary temperature was reduced to a level of order 0.1 K. Even
so, the scientific goals of the Borexino experiment necessitate quantification
of the weak convective motion, as reported by [2]. (These authors note the
paucity of existing studies of the “natural convection problem under
consideration, represented by fluid flow inside [a] closed, stratified,
near-equilibrium system”.)
Returning to the SNO+ experiment, it had been envisaged that other than during
intervals of ‘AV recirculation’ (i.e. extraction, purification and
reintroduction of LAB), stable thermal stratification of scintillator in the
neck would mitigate convective transport from the gas/liquid interface at the
top of the neck down into the fiducial volume. In that context, and motivated
by observation of an unanticipated pattern of scintillator motion, it is the
purpose of this paper to present numerical simulations that illustrate and
typify the types of flow that may occur in the detector in response to
idealized but plausible thermal inhomogeneities.
The outline of the paper is as follows. Section 2 will detail an actual
contamination event, a quiescent period (i.e. no pumping) during which a
transient and decaying radon layer was observed to slowly sink within the AV.
Section 3 will briefly cover OpenFoam (‘OF’, the open source software used to
simulate flow in the detector) and its method of application. Direct
scintillator velocity measurements are unavailable, and so in Section 4
simulations of a closely analogous flow, viz. that within a 3 L spherical
flask of water subjected to controlled warming from its boundary, are compared
with corresponding measurements [11]. Confidence in the methodology having
been established, Section 5 will show that modest wall heat fluxes (of order
$0.1\,\mathrm{W\,m^{-2}}$) induce buoyancy-driven motion of scintillator
within the spherical AV that explains (semi-quantitatively) what has been
observed, and that if such forcing combines with stable temperature
stratification of the scintillator fluid a high frequency wave motion ensues.
Finally Section 6 focuses on mixing down the neck, as driven by distinct types
of thermal disturbance, and examines the impact of bulk thermal stratification
upon that mixing.
## 2 Observed motion of a contamination layer
In early June 2021, just after the LAB fill had been completed, an interesting
event was observed in the pattern of BiPo-214 decays within the SNO+ detector.
After some 5 h of AV recirculation on 31 May, a radon-contaminated layer had
formed, rather uniform in its activity (as seen in the pattern formed by
daily summation of BiPo-214 decays) and spanning approximately the layer
$2.5\leq z\leq 5$ m (the coordinate origin used here is the centre of the AV
sphere, and $z$ is the vertical coordinate). The ${}^{222}\mathrm{Rn}$ had
entered with the AV recirculation operations, as expected and as previously
seen by other scintillator experiments. During ‘normal’ recirculation,
scintillator is extracted from the bottom of the AV and re-injected at the
base of the neck, and experience has shown that the behaviour of the
re-introduced scintillator depends on its temperature relative to that within
the AV: newly returned fluid may promptly undergo mixing, or may form a
transient blob or layer at the top of the AV sphere that later sinks. During
the subsequent two weeks, until the next interval of AV recirculation on 14
June, the AV was subjected to no disturbance other than the nitrogen
($\mathrm{N}_{2}$) bubblers (one bubbler emits at the base of the AV, and
another at the base of the neck; the bubble stream consists of small, circa
1 cm diameter, single bubbles, with a cumulative volume flow rate at
atmospheric pressure of about 150 L in two weeks), and any unmeasured thermal
inhomogeneity or trend.
It was observed that over those two weeks the contaminated layer slowly sank,
decaying as expected owing to the short (3.8 day) half-life of
${}^{222}\mathrm{Rn}$, but more or less retaining its layered form. Fig. (2)
plots that slow descent, as captured by daily integrations of the Bi-214 and
Po-214 decays plotted in $(z,\rho^{2})$ coordinates, where
$\rho=\sqrt{x^{2}+y^{2}}$.
The radon ‘blob’ of 31 May - 14 June 2021 was not the first such to be
observed, but it stands out because of its occurrence during a long interval
without external disturbance (the normal situation was for recirculation
operations to be paused only on weekends). The need to understand the origin
of these events and their evolution prompted the present work, a numerical
study of thermally-driven circulation in the AV. What is sought is an initial
thermal state, and a choice of thermal forcing on the AV wall, that results in
the (assumed) static initial state developing a convective circulation that
achieves the observed ‘translation’ of the blob.
### 2.1 Further features of ‘the blob’
The volume of scintillator recirculated on 31 May was $\sim 17\;\mathrm{m^{3}}$,
more than the volume of the entire neck, which is
$V_{\mathrm{neck}}\sim\pi(0.75)^{2}\times 6.8\sim 12\;\mathrm{m^{3}}$. However
the initial volume of the contaminated layer, estimated as
$V_{\mathrm{blob}}\approx\int\limits_{z=2.5}^{5}\;\pi\rho^{2}(z)\;dz\;,$ (1)
is $\sim 170\;\mathrm{m^{3}}$. This implies that the ‘blob’ of 31 May was not
an undiluted mass of recently recirculated scintillator — mixing had already
occurred.
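The two volume estimates above can be reproduced with a short numerical sketch (illustrative only; the values for $R$, the neck radius and the neck length are taken from the text, and the blob is assumed to fill the full horizontal cross-section $\pi\rho^{2}(z)=\pi(R^{2}-z^{2})$ of the sphere):

```python
import math

R = 6.0          # AV sphere radius (m)
r_neck = 0.75    # neck radius (m)
h_neck = 6.8     # neck length used in the text (m)

# Neck volume: cylinder approximation from the text
V_neck = math.pi * r_neck**2 * h_neck

# Blob volume, Eq. (1): integrate the sphere cross-section pi*(R^2 - z^2)
# over 2.5 <= z <= 5 m with a simple midpoint rule
n = 10000
dz = (5.0 - 2.5) / n
V_blob = sum(math.pi * (R**2 - (2.5 + (i + 0.5) * dz)**2) * dz
             for i in range(n))

print(round(V_neck, 1), round(V_blob, 1))
```

Both results land near the quoted $12\;\mathrm{m^{3}}$ and $170\;\mathrm{m^{3}}$, confirming that the blob was roughly ten times the recirculated volume.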
By inspection of Fig. (2) a descent rate of the order of
$1\,\mathrm{cm\,hr^{-1}}$ (or $1/4\;\mathrm{m\,day^{-1}}$) can be inferred for
the contaminated layer, decreasing as time increases. If Fig. (2) is
interpreted as showing a layer that simply sinks, undistorted, in the AV,
there must be a mechanism to move uncontaminated fluid from beneath to above
the blob. Letting $z_{m}$ be the mean height of the blob and supposing that
the volumetric flow rate of uncontaminated fluid across that plane is
$Q(z_{m})\;\mathrm{m^{3}\,s^{-1}}$, then in the simplest, purely geometrical
picture the descent rate is
$\frac{dz_{m}}{dt}\approx\frac{Q(z_{m})}{\pi}\;\frac{1}{z_{m}^{2}-R^{2}}\,.$
(2)
For example if $z_{m}=4$ m and $dz_{m}/dt=-0.25/(24\times 3600)$, i.e. 0.25 m
sink over one day, then $Q\sim\tfrac{1}{2}\,\mathrm{m^{3}\,hr^{-1}}$.
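Rearranging Eq. (2) gives the compensating volume flow rate implied by the observed descent; a minimal sketch of that arithmetic, with the values as quoted in the text:

```python
import math

R = 6.0                      # AV radius (m)
z_m = 4.0                    # mean blob height (m)
dzdt = -0.25 / (24 * 3600)   # observed descent: 0.25 m per day (m/s)

# Eq. (2) rearranged: Q = pi * (z_m^2 - R^2) * dz_m/dt
# (z_m^2 - R^2 < 0 below the top of the sphere, so descent requires Q > 0)
Q = math.pi * (z_m**2 - R**2) * dzdt     # m^3/s
Q_per_hr = Q * 3600                      # of order 0.5 m^3/hr

print(round(Q_per_hr, 2))
```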
Unfortunately the thermal stratification of the scintillator, relevant to any
understanding of its motion, is not measured. A vertical profile of
temperature is however measured in the cavity water, some metres outside the
PSUP. As of 3 June 2021 that profile had been rather steady over the preceding
weeks, and was height-invariant to within about $\pm 0.5$ K from the base of
the rock cavity to a height just a few metres below the base of the neck. If
one assumes the temperature profile within the AV was qualitatively similar,
then the descent of the blob within the AV occurred in a thermal environment
that did not militate firmly against vertical motion, except near the top of
the neck, a region that should be irrelevant to motion within the AV sphere.
## 3 OpenFoam
Calculations were performed using OpenFoam (‘OF’) v2006 (from OpenCFD,
www.openfoam.com, an affiliate of Engineering Systems International, ESI),
running under Linux (OpenSuSe Leap 15.2 in single-processor mode, or Ubuntu
20.04 in multi-processor mode). The surface shell for each chosen flow domain
was generated using Salome (from Open Cascade, www.salome-platform.org) to
produce ‘stl’ (surface triangle language) files, from which OF’s blockMesh and
snappyHexMesh utilities generated the three-dimensional computational mesh
(the mesh cells are polyhedra, the majority of them hexahedra). Far from
domain boundaries the cell size was uniform, and controlled by a setting in
OF’s ‘blockMeshDict’ file. Near the walls, layers of cells were added
(following a prescription set in ‘snappyHexMeshDict’) to enhance resolution.
The simulations described here used OF’s solver for time-dependent
compressible flows, ‘buoyantPimpleFoam’, with an added field variable $C$
representing ${}^{222}\mathrm{Rn}$ and obeying the conservation equation
$\frac{\partial\rho C}{\partial
t}=\,-\,\nabla\cdot\left(\rho\mathbf{U}C\,-\,\rho\mathcal{D}_{c}\,\nabla
C\right)\,-\,\rho\,\frac{C}{\tau}$ (3)
where $\mathbf{U}$ is the velocity vector; $\rho$ is the fluid density,
modelled (in an approximation in which density is independent of pressure;
some simulations, not reported here, used the ‘incompressible’ solver
‘buoyantBoussinesqPimpleFoam’ of OF v2006, albeit with an advection term
added to the energy equation to parameterize the influence of an implicit
uniform and unvarying ‘background’ temperature stratification) as
$\rho=\rho_{0}\,\left[1\,+\,\beta_{T}\,\left(T-T_{0}\right)\right]$ (4)
where $\beta_{T}$ is the thermal expansion coefficient; $\mathcal{D}_{c}$ is
the molecular diffusivity of $C$ (prescribed as
$\mathcal{D}_{c}=\nu/\mathrm{Sc}$, Sc being the Schmidt number); and $\tau$ is
the decay lifetime, in the case of ${}^{222}\mathrm{Rn}$ equal to $4.76\times
10^{5}$ s. Tested in an unforced simulation whose strongly stable
stratification militated against evolution away from a static initial state,
the Rn layer, without moving, simply decayed with the imposed half-life.
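In the static limit ($\mathbf{U}=0$, no diffusion) Eq. (3) reduces to $dC/dt=-C/\tau$, so an unmoving layer should simply decay as $C(t)=C_{0}\,e^{-t/\tau}$; the quoted lifetime can be checked against the 3.8-day half-life of ${}^{222}\mathrm{Rn}$:

```python
import math

tau = 4.76e5                 # 222Rn mean (decay) lifetime, s, as quoted
t_half = 3.8 * 24 * 3600     # 222Rn half-life, s

# Static limit of Eq. (3): dC/dt = -C/tau  =>  C(t) = C0 * exp(-t/tau)
C_at_half_life = math.exp(-t_half / tau)   # should be close to 0.5

# Equivalently, tau should equal t_half / ln(2)
tau_from_half = t_half / math.log(2)

print(round(C_at_half_life, 3), round(tau_from_half))
```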
Apart from the added field variable $C$, the only other modification made to
the solver was to incorporate the Coriolis effect. All simulations assumed
laminar flow, which amounts to an assumption that the flows were adequately
resolved, both spatially (on the given mesh) and temporally (with the time
step applied), obviating any necessity to account for subgrid motion. Further
details will be provided below, where they are specific to a given simulation
and/or not necessarily obvious even to readers who may be familiar with
OpenFoam.
## 4 Verification against a measured ‘analog’ flow
Chow and Akins [11] reported an experiment in which water, contained in a 3 L
spherical flask, was subjected to controlled heating. (The neck of the flask
was inclined at $45^{\circ}$ to the vertical; for the calculations here, the
neck was neglected and the radius $R$ of the sphere was computed, from the 3 L
volume, to be $R=8.95$ cm.) The flask was immersed within a continuously-stirred water
bath, the entire system initially being isothermal at $5^{\circ}$ C. The
temperature of the water bath was then progressively increased in such a
manner as to sustain a time-invariant temperature difference
$T_{\mathrm{b}}-T_{\mathrm{c}}$ between the water bath and the centre of the
flask. Suspended in the flask water were glass spheres, whose displacements
were measured by a photographic system arranged to provide information on a
vertical plane through the centre of the sphere. The authors reported that
this fluid system evolves into a pseudo-steady state (i.e. the motion is
steady, though the temperature field steadily warms), and presented a single
transect of the vertical velocity of the water along an equatorial radius
corresponding to the condition that $\Delta
T=\overline{T}_{\mathrm{w}}-T_{\mathrm{c}}=2.5$ K, where
$\overline{T}_{\mathrm{w}}$ denotes the mean temperature of the inner wall,
computed by the authors from measurements and known physical properties. (It
is possible that the $2.5^{\circ}\mathrm{C}$ temperature difference had been
intended by the authors to refer to the control variable
$T_{\mathrm{b}}-T_{\mathrm{c}}$; however, certainly they (Chow & Akins) define
$\Delta T$ as “fluid temperature difference between inside wall and center of
sphere”, and they state in the caption for their Fig. 8 that $\Delta
T=2.5^{\circ}\mathrm{C}$.) The value of the Rayleigh number
$\mathrm{Ra}=\frac{g\,\beta_{T}\,R^{3}\,\Delta T}{\nu\,D_{T}}$ (5)
corresponding to the measured velocity transect was $\mathrm{Ra}=6.5\times
10^{6}$ ($D_{T}$ being the thermal diffusivity; other variables as defined
above), and from variations of their experimental configuration Chow and Akins
reported that the flow in the flask was laminar provided $\mathrm{Ra}<10^{7}$.
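As a rough check on the quoted value $\mathrm{Ra}=6.5\times 10^{6}$, Eq. (5) can be evaluated with nominal water properties near the experimental temperatures (the property values below are assumed round numbers, not taken from Table (1)):

```python
g = 9.81         # m/s^2
beta_T = 7.5e-5  # 1/K, assumed water expansivity near ~8 C
R = 0.0895       # m, flask radius from the 3 L volume
dT = 2.5         # K, wall-to-centre temperature difference
nu = 1.5e-6      # m^2/s, assumed kinematic viscosity of cold water
D_T = 1.4e-7     # m^2/s, assumed thermal diffusivity of water

# Eq. (5): Ra = g * beta_T * R^3 * dT / (nu * D_T)
Ra = g * beta_T * R**3 * dT / (nu * D_T)
print(f"Ra = {Ra:.2e}")
```

With these assumed properties the result is of order $10^{6}$, inside the laminar regime $\mathrm{Ra}<10^{7}$ reported by Chow and Akins.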
Chow & Akins interpreted (and perhaps designed) their experiment in the
context of a prior numerical study [12] of free convection in a sphere, and
their measurement provides here the opportunity to gain confidence in the
application of OF, in the context of a flow similar in its geometry and
forcing to the case of the SNO+ detector: simulations of the laboratory beaker
experiment and of the SNO+ neutrino detector differ only as regards the length
scale of the mesh, and the physical properties of the fluid. Regarding the
latter, Table (1) specifies the OF thermophysical model used in all the
simulations of this paper. Temperature-dependent properties (density, specific
heat, viscosity, Prandtl no.) are represented by polynomials, and for the
laboratory beaker simulations, the medium being water, polynomials provided by
[13] (HolzmannCFD) were used.
The domain boundary for simulations of the Chow-Akins flow consisted of a
single spherical ‘patch’ (arbitrarily named ‘shell’), and the desired initial
and boundary conditions for temperature were imposed in the OF file
‘caseFolder/0/T’ per the specification of Table (2). By way of explanation,
folder ‘0’ at top level within the ‘case folder’ contains a file specific to
each dependent variable, and that file prescribes the initial and boundary
conditions in standard OF parlance. It is worth noting that, because the
simulation holds the entire surface of the domain boundary at
$T_{\mathrm{c}}+2.5^{\circ}\mathrm{C}$, it cannot exactly parallel the
experiment, for in the latter the flow itself (or rather, the experimental
setup in its entirety) determined the inner wall temperature, which could not
have been exactly isothermal owing to the anisotropy imposed by gravity.
Fig. (3) compares the measured Chow-Akins velocity transect with two
simulations:
* 1.
Simulation ‘lores’ was performed on a mesh having a total of 309,290 cells,
including those within three layers of cells impressed (by the OF utility
snappyHexMesh) to represent the near wall region. A 1st-order accurate
discretization (‘Gauss upwind’) was used for convective terms, i.e. terms of
form $\nabla\cdot(\mathbf{U}\phi)$ where $\mathbf{U}$ is the velocity, with
components (UX,UY,UZ) along axes (X,Y,Z) in OF notation, and $\phi$ stands for
the convected property.
* 2.
Simulation ‘hires’ used a mesh with 779,506 cells, there being 10 cell layers
impressed to cover the wall region, the outermost of which had a depth about
0.01 mm (i.e. $\sim R/10^{4}$). For this simulation a 2nd-order accurate
discretization was used for convective terms.
Overall Fig. (3) shows that the agreement of the two solutions with each other
and with the measurements is very good. On their graph, Chow & Akins
extrapolated from their outermost (largest $R$) measurement back towards zero,
implying they had not measured velocities closer to the wall than that
outermost datum. Thus it need not be thought that there is a discrepancy
between the simulations and the data between the outermost measured velocity
and the wall. As is evident from Fig. (4), showing the simulated velocity
profile near the domain wall, certainly the higher-resolution OF run provided
good resolution of the near wall flow, and it compares nicely with a solution
(for this same problem) given by [5], who numerically integrated the
non-dimensional vorticity and temperature equations in polar coordinates,
presupposing azimuthal symmetry. From Fig. (4) one sees that the boundary-layer
Reynolds number $\mathrm{Re}=Ud/\nu$ (where $d\sim 0.5$ mm is the wall
boundary layer depth, $U\sim 2\;\mathrm{mm\,s^{-1}}$ the ‘free stream’
velocity and $\nu\sim 10^{-6}\;\mathrm{m^{2}\,s^{-1}}$ the kinematic
viscosity) is $\mathrm{O}[1]$, compatible with laminar boundary-layer motion.
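The boundary-layer Reynolds number estimate can be spelled out with the values read off Fig. (4), as quoted in the text:

```python
U = 2e-3     # m/s, 'free stream' speed near the wall
d = 5e-4     # m, wall boundary-layer depth (~0.5 mm)
nu = 1e-6    # m^2/s, kinematic viscosity of water

Re = U * d / nu   # O(1): consistent with laminar boundary-layer motion
print(Re)
```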
Given the near equivalence of the OF model and mesh configurations needed to
simulate this small scale experiment and the SNO+ detector, the results of
this section provide reassurance that OF should be a satisfactory tool to
examine thermally-driven SNO+ flows. (Two provisos must be listed here: (i)
that an adequate computational mesh be provided, and (ii) that the distinction
in length scale between SNO+ and the Chow-Akins experiment, viz. a factor of
$6/0.09=67$, is not so large as to imply that the regimes of flow in the two
cases, under the given type of forcing, are qualitatively dissimilar.
According to the simulations, boundary-layer Reynolds numbers are of the order
of unity for both systems.) In that regard, and anticipating later results, it
experiment) the warm wall boundary layer does not ‘detach’ from the wall over
the lower hemisphere, and similarly in computations of a ‘reverse Chow-Akins
flow’ (i.e. with the wall being held colder rather than warmer than the centre
of the flask) the cool wall boundary layer was found to remain attached to the
wall over the upper hemisphere.
## 5 Simulation of a sinking radon layer
Simulations of this section relate to the sinking radon layer of Sec. (2),
with the domain of the calculations encompassing the entire AV, including the
neck. The rate of sink (of the 24-h averaged pattern) was of the order of 0.25
m/day, and the mechanism seems likely to have been the transfer of fluid
volume within the wall boundary layer from beneath to above the contamination
layer. It is of interest, then, to determine an initial state and forcing that
will replicate what had been observed. As no direct measurements of motion in
the SNO+ AV exist, and the true initial state and thermal boundary conditions
are inaccessible, the purpose of the following simulations is not to gain a
highly accurate solution for a specific (but unmeasured) case, but rather, to
outline what qualitative patterns of motion exist – and their qualitative
effect as demonstrated by the displacement and mixing of a contamination layer
from its initial position.
The puzzle presented by the sinking radon layer (Fig. 2) was the absence of
any apparent transport mechanism, i.e. in particular the level base of the
sinking layer seemed to suggest either that the 24h-averaged pattern was
hiding an eddy motion that was accomplishing the transport, or, that fluid
volume was being invisibly transferred across/through the contaminant layer
along the AV axis or walls. A clue as to the nature of the flow forcing was
provided by the observation that in early June 2021, i.e. during the
phenomenon, the LAB/cover-gas interface was rising. This could be explained
only by thermal expansion of the LAB and demanded an effective heat addition
rate of about 300 W, a figure that implies a mean heat flux density over the
AV wall of the order of $1\;\mathrm{W\,m^{-2}}$.
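Spreading the inferred 300 W over the surface of the AV sphere (neck area neglected, a simplifying assumption) gives the quoted order of magnitude:

```python
import math

P = 300.0                # W, inferred effective heat addition rate
R = 6.0                  # m, AV radius
A = 4 * math.pi * R**2   # sphere surface area, neck neglected (~452 m^2)

q = P / A                # mean heat-flux density, of order 1 W/m^2
print(round(q, 2))
```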
Table (3) lists the values adopted for the material properties of LAB.
Assuming the motion patterns of interest will primarily be thermally-driven,
the property that stands out as being of most importance is the thermal
expansion coefficient ($\beta_{T}$). The most uncertain of the properties
listed is the Schmidt number (ratio of kinematic viscosity to mass
diffusivity), used to evaluate the mass diffusivity $\mathcal{D}_{\mathrm{c}}$
of a species $C$ whose transport equation (Eq. 3) was added to the solver to
represent movement and decay of ${}^{222}\mathrm{Rn}$. This however has no
bearing on the computed pattern of flow, and unless the Peclet number
$U\ell/\mathcal{D}_{\mathrm{c}}$ were spectacularly small, convective
transport by the bulk velocity field will be overwhelmingly more important than
diffusion. The value adopted for the kinematic viscosity $\nu$ is nominal
(note: $U$ and $\ell$ are characteristic velocity and length scales).
Within the folder ‘casefolder/constant’ a file (typically named
‘thermophysicalProperties’, but for some solvers named ‘transportProperties’)
provides the numerical values of these material properties, along with the
user’s choices for the equation of state and the thermodynamic energy variable
to be adopted (enthalpy or sensible energy). Table (4) gives the ‘mixture’
block of the OF ‘thermophysicalProperties’ dictionary, for the SNO+
simulations to be shown.
Unless stated otherwise the simulations to follow were made on what will be
termed the ‘medium mesh,’ comprising a total of 661,536 cells, and including
three layers of cells adjacent to the walls. The depth of the outermost layer
of cells was 2.7 cm, and gridlengths based on minimum, mean and maximum cell
volume ($V$) were $h=V^{1/3}=(0.009,0.11,0.28)\;\mathrm{m}$. The radon
contamination layer initially spanned $2.5\leq z\leq 5$ m, with $C=1$ within
the layer and $C=0$ elsewhere. The wall boundary condition for $C$,
‘zeroGradient’, assured there would be no flux to or from walls. (Note: $C$ is
named ‘Conc’ on some of the figures to follow.)
### 5.1 Initial state isothermal
It was quickly found that if the wall heat flux were specified to be as large
as $q=1\;\mathrm{W\,m^{-2}}$ the rate of sink of the radon layer was
excessively large. However when the forcing heat flux was reduced to
$q=+0.1\;\mathrm{W\,m^{-2}}$ and applied over the upper hemisphere of the AV
(with $q=0$ elsewhere), qualitative agreement with the observed behaviour of
the radon layer was obtained. This can be seen on Fig. (5), where the observed
(daily-average) contaminant field on several days is compared with the
instantaneous contaminant field from the simulation at a comparable time since
cessation of recirculation. (For comparison with the detector data the
simulated concentration fields have been numerically integrated in azimuth,
with resolution $dz\times d(x^{2}+y^{2})=0.1\,\mathrm{m}\times
0.1\,\mathrm{m^{2}}$. Resolution of the simulated field is limited by the
OpenFoam mesh length, which away from the AV wall is approximately $0.25$ m.)
Although the correlation of the respective times is inexact, the strong
similarity of observation and simulation suggests the latter has captured the
essence of the transport mechanism.
Fig. (6) summarizes the main qualitative elements of the motion at $t=12$ h
after initialization. Buoyancy of the wall boundary layer drives the motion,
and away from the wall a compensating counter-current is set up, with the
result of displacing the contamination layer downward (Fig. 7). (This
counter-current is obvious in the lower neck on Fig. (6), but difficult to
distinguish in a view of the bulk of the spherical AV, where even in the upper
hemisphere it is weak and non-uniform.) Adopting from Fig. (6)
approximate values for maximum speed and boundary-layer depth, the boundary-
layer Reynolds number $\mathrm{Re}=Ud/\nu$ evaluates to $\mathrm{Re}\sim 1$ or
smaller – implying laminar motion.
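This order-of-magnitude check is easy to reproduce. The speed and layer depth below are illustrative values of the kind read off Fig. (6), not figures quoted in the text (they are assumptions for the sketch); the viscosity is from Table 3.

```python
# Boundary-layer Reynolds number Re = U*d/nu, as quoted in the text.
# U and d are ASSUMED illustrative values (read off a figure, not stated
# numerically in the text); nu is the LAB kinematic viscosity from Table 3.
nu = 1.0e-5    # kinematic viscosity [m^2/s]
U = 5.0e-4     # assumed peak boundary-layer speed [m/s]
d = 0.03       # assumed boundary-layer depth [m]

Re = U * d / nu
print(f"Re = {Re:.2f}")   # order 1, consistent with laminar motion
```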
Fig. (8) plots the time evolution to $t=14$ days of three diagnostics of fluid
and contaminant motion. Total kinetic energy is still increasing as of 14
days, albeit at a falling rate, while a measure $\langle w\rangle$ of mean
vertical velocity near the upper hemisphere wall (proportional to the
ascending volume flux in the wall boundary layer) approaches a steady value.
The rate of change of the contaminant-mass weighted mean height
$\overline{z}=\frac{\sum_{i}\,z_{i}\,C_{i}\,dV_{i}}{\sum_{i}\,C_{i}\,dV_{i}}$
(6)
(where $i$ indexes the cell height $z_{i}$, concentration $C_{i}$ and volume
$dV_{i}$) also relaxes with increasing time, as expected. It is worth noting
here that due to the specified thermal boundary condition, the simulated flow
cannot attain a true steady state: mean temperature must continue increasing,
and this impacts the density and consequently (though perhaps to a minor
extent) the motion field.
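Eq. (6) is straightforward to evaluate from exported cell data; a minimal sketch over a toy set of cells (the cell values here are illustrative, not from the OpenFoam case):

```python
# Contaminant-mass-weighted mean height, Eq. (6):
#   z_bar = sum_i(z_i * C_i * dV_i) / sum_i(C_i * dV_i)
# Toy cell data: a layer of C=1 spanning heights 3-4 m.
z = [1.0, 3.0, 4.0, 6.0]     # cell-centre heights z_i [m]
C = [0.0, 1.0, 1.0, 0.0]     # concentrations C_i
dV = [0.5, 0.5, 0.5, 0.5]    # cell volumes dV_i [m^3]

num = sum(zi * Ci * dVi for zi, Ci, dVi in zip(z, C, dV))
den = sum(Ci * dVi for Ci, dVi in zip(C, dV))
z_bar = num / den
print(f"z_bar = {z_bar} m")  # 3.5 m, the centre of the toy layer
```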
Finally, the adequacy of the mesh resolution needs to be addressed. Fig. (9)
compares the rate of descent $d\overline{z}/dt$ of the contamination layer as
computed on four different meshes, and suggests that the medium mesh chosen
for the simulations of this section is adequate, given the exploratory nature
of the investigation.
### 5.2 Influence of stratification
The previous section established a plausible mechanism for the phenomenon that
prompted the study, i.e. the observed slow sink (without apparent mixing) of a
contaminant layer within the SNO+ neutrino detector. Given the impossibility
of knowing the true initial state and forcing for the June 2021 contamination
layer, there is no logical basis for demanding a better accord with the data
than has been shown. It remains interesting, however, to determine (again,
qualitatively) what effect thermal stratification of the detector might have.
To that end Fig. (10) examines the effect of stratification on the contaminant
field in the wall boundary layer at time $t=8$ h, comparing two simulations
that are both forced by $q=0.1\,\mathrm{W\,m^{-2}}$ on the upper hemisphere
wall. In one case, the initial state is isothermal ($T=290$ K), while in the
other a uniform initial temperature gradient (of wall and fluid) was imposed,
$T=290+0.1(z-5)$. The interesting point is that the stable stratification
appears to have ‘immobilized’ the wall boundary layer, since (contrary to the
unstratified case) clean fluid has not moved upward in the wall layer to
replace contaminated fluid. On this evidence alone, one might expect of the
stratified case a lower rate of sink of the contamination layer.
Fig. (11) compares the fields of vertical motion on $y=0$ for the same two
simulations, and shows that stratification of the initial state has both
reduced exchange between the neck and the spherical AV, and enhanced the
organisation of flow in the latter, with the formation of a vertically
coherent pattern of ascent and descent. This pattern, evident also on Fig.
(12), is roughly axially symmetric. For this case the Brunt-Väisälä
(angular) frequency
$\omega_{\mathrm{BV}}=\sqrt{g\,\beta_{T}\,\frac{dT_{0}}{dz}}\;,$ (7)
based on the initial stratification (which essentially still prevails) gives a
natural frequency $N_{\mathrm{BV}}=\omega_{\mathrm{BV}}/2\pi=0.0047$ Hz, and a
corresponding period $T_{\mathrm{BV}}=214$ s. Fig. (13) gives a sequence of
snapshots of vertical velocity at 17 s intervals, and establishes that the
vertical motion pattern is oscillatory at the Brunt-Väisälä frequency. Seen in
animation, one can observe the axial columns migrate outward from the axis
towards the AV wall. (The horizontal velocity components, not shown, exhibit
no obvious columnar organization, i.e. the wave motion does not feature
organized axial rotation.) A purely oscillatory motion can accomplish
no mixing, which explains how it can be that the temperature field is little
changed relative to the initial state (see panel on Fig. 13) despite the
vertical coherence of the velocity field.
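The quoted frequency and period follow directly from Eq. (7), with $\beta_{T}$ from Table 3 and the imposed gradient $dT_{0}/dz=0.1\;\mathrm{K\,m^{-1}}$:

```python
import math

# Brunt-Vaisala frequency, Eq. (7): omega_BV = sqrt(g * beta_T * dT0/dz)
g = 9.81          # gravitational acceleration [m/s^2]
beta_T = 8.8e-4   # thermal expansion coefficient [1/K] (Table 3)
dT0dz = 0.1       # initial temperature gradient [K/m]

omega_BV = math.sqrt(g * beta_T * dT0dz)   # angular frequency [rad/s]
N_BV = omega_BV / (2.0 * math.pi)          # natural frequency [Hz]
T_BV = 1.0 / N_BV                          # period [s]
print(f"N_BV = {N_BV:.4f} Hz, T_BV = {T_BV:.0f} s")  # 0.0047 Hz, 214 s
```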
Returning to the effect of thermal stability on an idealized contamination
layer (again, initially $C=1$ over $2.5\leq z\leq 5$ m and $C=0$ elsewhere),
Fig. (14) conveys an ambiguous picture. When sink rate is defined in terms of
mass-mean contaminant height $\overline{z}$, weak and moderate stratification
reduce the sink rate – although the relationship between sink rate
$d\overline{z}(t)/dt$ and initial stratification $dT_{0}/dz$ is not monotonic.
Perhaps the complexity here relates to the fact that contaminant is also to
some degree mixed upward: if bulk thermal stratification militates against
ascent of the warm wall layer as suggested by Fig. (10), then perhaps some
fluid-mechanical feedback, spontaneously arising in such a manner as to limit
excessive heating of the near-wall fluid, accentuates upward mixing of the
contaminant. But if we define a ‘base’ height ($\overline{z}_{\mathrm{base}}$)
for the contamination layer, as being the mean height of cells that (i) are
below the initial base of the uniform contamination layer ($z<2.5$ m) and (ii)
carry ‘low’ concentration (arbitrarily $0.05f(t)\leq C\leq 0.1f(t)$, where
$f(t)=\exp(-t/\tau)$ is the radioactive decay factor), Fig. (14) shows that
firm stratification enhances descent of the base of the contamination layer.
Admittedly this measure of base height is subjective and on the evidence of
the figure somewhat erratic; but given that stable thermal stratification
results (under the chosen forcing) in organized vertical motion in the bulk of
the detector (Figs. 11–13), then an accelerated downward spread of the
contamination layer within the spherical domain does not seem implausible.
## 6 Contaminant transport down the neck
The AV neck being the only path, aside from scintillator operations, by which
radon is observed to enter the AV sphere, it is of interest to focus on flow
and transport over that limited volume of the detector, i.e. a cylinder of
radius 0.75 m and spanning $6\leq z\leq 12.75$ m. Here it is relevant to
clarify the thermal conditions prevailing outside the neck, as governed by its
immersion in water over approximately the lower 6 m of its length but in cover
gas over an uppermost section of about 1 m in length. The cover gas
temperature is approximately that of the laboratory air, about
$17-20^{\circ}$C, whereas the water is controlled at a considerably lower
temperature of about $12^{\circ}$C by “cavity recirculation”. The latter
process extracts water from the top of the column and returns it, after
chilling, through two streams that respectively re-enter within and outside
the PSUP structure, cooling water bodies referred to as “PSUP water” and
“cavity water”. It had been expected that this warm uppermost layer lying atop
much cooler fluid would suppress convective motion in the neck, thereby
limiting the rate of contaminant ingress and ensuring there would be at least
a factor of 50 reduction in the ${}^{222}\mathrm{Rn}$ level in the
scintillator at the bottom of the neck relative to the top.
Simulations of this section cover two simple and plausible forms of
destabilizing thermal non-uniformity that induce convective mixing down the
neck; more complex forms of thermal forcing are easily imagined, but those
covered below suffice. The liquid/gas interface at $z=12.75$ m is treated as a
‘free slip/no leak’ surface, and the sidewall and bottom boundary as ‘no
slip/no leak’ (i.e. $\mathbf{U}=0$).
### 6.1 Steady wall heat flux
Simulations of this sub-section were forced by a prescribed heat flux $q=\pm
0.1\,\mathrm{W\,m^{-2}}$ on the side wall of the neck, disturbing an initial
state that was either unstratified ($dT_{0}/dz=0$) or uniformly stratified
($dT_{0}/dz=0.025\,\mathrm{K\,m^{-1}}$ or $0.25\,\mathrm{K\,m^{-1}}$). The
properties of this system imply a velocity scale
$w_{*}=\left(\frac{\nu\,g\,\beta_{T}\,|q|}{\rho\,c_{p}}\right)^{1/4}\;,$ (8)
which for these simulations evaluates to $2.6\times
10^{-4}\,\mathrm{m\,s^{-1}}$. Initially the contaminant concentration was
$C=1$ for $z\geq 12.5$ m and $C=0$ elsewhere, although owing to mesh
irregularity there was a degree of smearing across that nominal $C=1/0$
interface.
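Numerically, with the Table 3 properties and the wall heat flux density converted to a kinematic buoyancy flux (dividing by $\rho c_{p}$), the quoted value is recovered:

```python
# Convective velocity scale w* = (nu * B)^(1/4), where the buoyancy flux
# B = g * beta_T * |q| / (rho * cp) puts the wall heat flux density into
# kinematic units [m^2/s^3]. Properties are the LAB values of Table 3.
nu = 1.0e-5      # kinematic viscosity [m^2/s]
g = 9.81         # [m/s^2]
beta_T = 8.8e-4  # thermal expansion coefficient [1/K]
q = 0.1          # wall heat flux density [W/m^2]
rho = 858.0      # density [kg/m^3]
cp = 2300.0      # specific heat capacity [J/(kg K)]

B = g * beta_T * q / (rho * cp)   # kinematic buoyancy flux [m^2/s^3]
w_star = (nu * B) ** 0.25
print(f"w* = {w_star:.2e} m/s")   # ~2.6e-4 m/s, as quoted
```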
Fig. (15) shows the flow at $t=2$ h from initialization, when an isothermal
initial state is forced by a cooling wall heat flux
$q=-0.1\,\mathrm{W\,m^{-2}}$. Sink in the wall boundary layer is necessarily
compensated by weak ascent away from the walls, with the result that the
contamination layer is drawn down along the wall, but elsewhere is compressed
upwards. At $t=2$ h the standard deviation of the vertical velocity over the
whole domain was $\sigma_{UZ}=1.1\times 10^{-4}\,\mathrm{m\,s^{-1}}$, so that
$\sigma_{UZ}/w_{*}=0.4$. The largest magnitudes for vertical velocity were of
order $5\times 10^{-4}\,\mathrm{m\,s^{-1}}$ (0.5 mm/s), two orders of
magnitude larger than was the case in an unforced simulation at $t=2$ h.
Switching the sign of the wall heat flux, Fig. (16) shows a very different
flow pattern at $t=2$ h, with the ascending volume flux of the wall boundary
layer necessitating sink away from the wall. The contamination layer is as a
result pushed away from the wall, and moves downward. The vertical velocity
full scale range is very comparable with the previous case, and the statistic
$\sigma_{UZ}/w_{*}$ is identical.
When the initial state is thermally stratified, even weakly so, the
simulations give evidence of wave motion, with the mesh size introducing a
non-physical length scale on that pattern (see Fig. 17). Whereas for the
neutral cases a single closed $\mathrm{UZ}=0$ contour more or less partitioned
the flow into ascent/descent regions, when initial stratification is imposed
that contour degenerates into a maze of bubbles, and so is not shown. The
stratification ($0.25\;\mathrm{K\,m^{-1}}$) does not, however, impede the
descent of the cool wall layer.
Fig. (18) plots, for each of the simulations of this section, the height
$z_{\mathrm{min}}(t)$ of the lowest cell bearing non-zero contaminant
concentration, ‘non-zero’ being arbitrarily taken to mean $C\geq 5\times
10^{-4}$. The general effect of stratification is to retard the descent of
fluid elements originating in the contamination layer. For the case
$(q=-0.1,dT_{0}/dz=0)$ contamination reaches the base of the neck within about
4 hours, although the bulk of the contaminant remains trapped high in the neck
in the weakly ascending flow away from the wall. For the situation
$(q=+0.1,dT_{0}/dz=0)$ a longer time will be required before contamination
first reaches the base of the neck, a point that must be qualified by noting
that when the contaminant does first arrive, it will do so in greater volume,
i.e. not only within the wall layer but in the bulk of the neck.
### 6.2 Steady wall temperature anomaly
In Section 6.1 motion in the neck was forced by a steady lateral flux of heat
into the neck, which implies a continuous source of buoyancy — that is,
whatever the thermal response of the fluid adjacent to the wall, it continues
to undergo warming. In reality, and excepting unusual circumstances such as
a failure of SNO+ water chillers, long intervals of steady heat addition or
extraction from the SNO+ detector are not realistic. In this section
simulations examine two instances wherein the motion is forced, instead, by a
wall temperature anomaly relative to the initial fluid temperature, i.e. a
transient wall temperature excess or deficit and thus a transient flux of
buoyancy. The resulting adjustment of the fluid temperature to the
(prescribed, constant) wall temperature implies that eventually a static
steady (equilibrium) state can be expected.
In this section the origin for the $z$-axis is the base of the neck. The
initial contaminant profile, irrelevant to the eventual steady state, was
prescribed as linear ($C=z/7$). The upper and lower boundary conditions for
tracer (i.e. radon) concentration were
$C=\begin{cases}1\;,&z=7\;\mathrm{m}\;,\\ 0\;,&z=0\;\mathrm{m}\;,\end{cases}$
and the sidewall boundary condition was specified as ‘zeroGradient’ (i.e. no
flux). At steady state this scenario must result in a downward flux of $C$
that, integrated over horizontal planes, must be independent of height.
Furthermore $C_{\mathrm{avg}}(z)$, representing radon concentration averaged
over the horizontal plane, serves as an indicative measure of the degree of
‘isolation’ of any point in the neck relative to the contaminant load at the
top of the neck.
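The horizontal-plane average $C_{\mathrm{avg}}(z)$ can be computed from exported cell data by volume-weighted binning in height; a minimal sketch, with illustrative cell arrays and bin width standing in for the OpenFoam field (these values are assumptions for the example, not taken from the simulations):

```python
# Volume-weighted horizontal-plane average of concentration, binned in z.
# The cell arrays below are illustrative stand-ins for exported field data.
def c_avg_profile(z, C, dV, dz_bin=0.5):
    """Return {bin base height: volume-weighted mean C in that height bin}."""
    sums, vols = {}, {}
    for zi, Ci, dVi in zip(z, C, dV):
        k = int(zi // dz_bin)                   # height-bin index
        sums[k] = sums.get(k, 0.0) + Ci * dVi
        vols[k] = vols.get(k, 0.0) + dVi
    return {k * dz_bin: sums[k] / vols[k] for k in sorted(sums)}

profile = c_avg_profile(z=[0.1, 0.2, 0.6, 0.7],
                        C=[0.0, 0.2, 1.0, 1.0],
                        dV=[1.0, 1.0, 1.0, 1.0])
print(profile)
```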
#### 6.2.1 Transient negative buoyancy
A simulation was performed with the neck idealized into three sections from
base to top, with lengths $(L_{1},L_{2},L_{3})=(3,3,1)$ m, and with the wall
temperatures held fixed at respectively
$(T_{1w},T_{2w},T_{3w})=(12,11.9,20)\;^{\circ}$C. This prescription was
motivated by the fact of the uppermost section being surrounded by cover gas
and the lower two sections by the much cooler cavity and PSUP water.
Furthermore because the water masses (within and outside the PSUP sphere) are
of non-uniform and time-varying temperature, destabilizing the system by
holding the neck wall temperature of the middle section slightly colder than
that of the lowest section seems within the range of possible circumstances.
Initial fluid temperature was prescribed as
$T=\begin{cases}20^{\circ}\mathrm{C}\;,&6<z\leq 7\;\mathrm{m}\;,\\
12^{\circ}\mathrm{C}\;,&z\leq 6\;\mathrm{m}\;.\end{cases}$
Fig. (19) summarizes the evolution of the concentration and vertical velocity
fields to $t=10$ hr. A negatively buoyant wall boundary layer develops
rapidly, and draws contaminant downwards along the neck, while the
compensating vertical motion outside the boundary layer lifts tracer upwards.
By 10 hr the contaminant field has evolved (roughly speaking) into two well
mixed layers, with the boundary between the upper and lower layers of
respectively high and low concentration standing about 0.5 m below the base of
the warm wall segment. The profile of horizontally-averaged concentration
($C_{\mathrm{avg}}$, see Fig. 20a) captures this ‘step’ between high
concentration above and somewhat lower concentration below the base of the
warm layer, and the cause of the step is explicit in the profile of the
standard deviation of vertical velocity ($\sigma_{\mathrm{UZ}}$, see Fig.
20b): vertical motion is nulled not only at the top and bottom boundaries
(courtesy of the imposed boundary condition), but also greatly suppressed near
the gravitationally stable temperature step. If $C_{\mathrm{avg}}$ is taken as
an indication of the extent to which contaminant concentration is buffered
relative to its value at the top of the neck, then this simulation indicates
that the thermal stability owing to existence of a much warmer fluid layer at
the top of the neck cannot suppress the vertical exchange that owes to a wall
temperature inhomogeneity even so small as of $\mathcal{O}[0.1\,\mathrm{K}]$,
for $C_{\mathrm{avg}}$ remains $\mathcal{O}[1]$ in the lower layers, excepting
at the base (where it is artificially held to $C=0$ by the boundary condition).
#### 6.2.2 Transient positive buoyancy
The final simulations were motivated by the intuition that strong (stable)
stratification of the detector ought to suppress or at least limit the mixing
induced by transient and weak thermal disturbances. The initial fluid
temperature profile was specified as
$T=285+\gamma\,z$
with $\gamma=1\;\mathrm{K\,m^{-1}}$, and the (steady) wall temperature as
$T_{w}=285+\gamma\,z+\Delta
T\;\exp\left[-\frac{(z-1)^{2}}{2\sigma^{2}}\right]$
with (for the case to be shown) $\Delta T=4$ K and $\sigma=0.4$ m.
Fig. (21) summarizes the simulation, which reaches a steady state by about
$t=18$ h. The consequence of the ring of warm wall temperature is a “mixed
layer” within which the scalar contaminant ($C_{\mathrm{avg}}$) is uniform. As
intuition would suggest, the forcing results in mixing that is restricted to a
layer over which heated parcels rise due to their buoyancy relative to the
cooler fluid about them. (Having attained upward momentum, parcels are liable
to “overshoot” and rise above their layer of neutral buoyancy, decelerate,
then sink again.) Also evident on Fig. (21) is a degree of ‘false’
or ‘numeric’ motion and mixing, most clearly seen around $z=6.5$ m.
Characterized by $\sigma_{\mathrm{UZ}}$ the level of unforced motion quickly
comes to a steady state, and is certainly an artifact of the numerical
solution: in unforced simulations $\sigma_{\mathrm{UZ}}$ diminishes with
increasing mesh refinement. It owes to discretization errors which are larger
in regions where the mesh cells (mostly hexahedra) feature non-orthogonal
boundaries, such as near intersecting domain boundaries that are endowed
with ‘added’ cell layers.
Equivalent calculations with a weaker wall temperature excess produce a
correspondingly narrower mixed layer, and one can conclude (without surprise)
that in combination with uniform neck stratification
$\gamma\,[\mathrm{K\,m^{-1}}]$ a modest wall temperature anomaly of magnitude
$T^{\prime}$ will induce (transient) mixing over a depth of order
$\gamma^{-1}\,T^{\prime}$. Presumably this simple argument has its limits –
for example if the perturbation were to induce a motion field sufficiently
strong as to erase the initial stratification.
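The closing estimate is a one-line calculation; for the case shown ($\Delta T=4$ K against $\gamma=1\;\mathrm{K\,m^{-1}}$):

```python
# Mixing-depth scaling: a wall temperature anomaly T' against a uniform
# stratification gamma mixes over a depth of order T'/gamma -- the height
# interval over which the ambient temperature rises by T'.
gamma = 1.0      # initial stratification [K/m]
T_prime = 4.0    # wall temperature anomaly Delta T [K], the case shown

depth = T_prime / gamma
print(f"mixed-layer depth ~ {depth} m")  # ~4 m
```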
## 7 Discussion and Conclusion
Apart from numerous geophysical and planetary studies, the literature on
convective flow of a materially homogeneous fluid within a sphere is
surprisingly sparse. The present study of motion in the SNO+ liquid
scintillator neutrino detector has been motivated by what had been a puzzling
phenomenon, and was initiated with the goal of providing a plausible
explanation for that motion. The numerical method (OF’s standard solver
‘buoyantPimpleFoam’) being well tried, we have relied on the demonstrated
consistency of simulations with measurements in a geometrically congruent
small scale experiment (Section 4) as testimony to the realism – qualitative
or better – of the full scale (SNO+) calculations.
The simulations suggest that the phenomenon prompting the study, i.e. the slow
descent within the detector of a radon contamination layer, was driven by
volume exchange from beneath to above that layer, the needed upward volume
flux ($Q_{\mathrm{vol}}^{+}$) taking place due to and within a thin convective
boundary layer on the AV wall. Consistency of the simulation with the observed
multi-day phenomenon hinges to some extent on the absence of any firm
constraint in specifying the forcing for the simulated flow, but our choice,
viz. a weak heat flux density $q=0.1\,\mathrm{W\,m^{-2}}$ into the fluid from
the wall of the upper hemisphere, is not dissimilar from what is implied by
(sporadically observed) thermal expansion of the scintillator fluid.
Interestingly too, short episodes of contamination rising and spreading
laterally from the base of the detector, seen after intervals of so-called
‘reverse’ circulation of the scintillator fluid – ‘reverse’ meaning the LAB is
reintroduced at the base of the detector, rather than the base of the neck –
are consistent with OF simulations that impose a cooling heat flux around the
lower hemisphere of the detector.
The azimuthal symmetry of the SNO+ detector may suggest to readers that the
full 3-dimensional (3D) treatment adopted here was uneconomic, and certainly
theoretical studies of convection within a sphere (e.g. [5, 12, 14]) have
presupposed azimuthal symmetry. Over the course of the work some early
simulations adopted as the domain just a single quadrant of the detector, but
the additional planar boundaries made mesh generation a more complex task; and
as to reducing the problem to 2D, although this would be fine in the context
of steady state treatment, in 2D one loses temporal fidelity. In that regard,
it appears that there are two or more timescales for the development of this
flow. The shortest timescale $\tau_{\mathrm{bl}}$ ($\sim 1\;\mathrm{hr}$, or
smaller) relates to the establishment of motion in the wall boundary layer,
and as soon as total time $t\gg\tau_{\mathrm{bl}}$ (i.e. a few hours after initiation)
the sink rate of the blob is established, albeit decreasing with increasing
$t$ as the blob sinks, with associated increase of its planar surface area. On
a longer time scale ($\tau_{\mathrm{bulk}}$) gross thermal stratification (in
response to the wall heat flux) and quasi-horizontal swirl (presumably
originating in the Coriolis term) are established in the bulk of the AV.
Overall this work indicates that even what might seem to be ‘small’ thermal
inhomogeneities or trends can suffice to bring about significant motion (up to
$\mathcal{O}[\mathrm{mm/sec}]$) in this type of fluid ‘body’, and the fact of
these simulations being for a spherical geometry probably does not limit this
suggestion to that particular shape. The motion engendered by some forms of
thermal disturbance can be mitigated by ensuring that a stabilizing vertical
thermal gradient is sustained, but in this regard the duration and strength of
the forcing disturbance are important – a sustained buoyancy flux is liable,
eventually, to induce complete mixing. In short, if scintillator motion is to
be avoided then (horizontal) thermal homogeneity must be assured to a very
fine level.
### Acknowledgements
The author thanks the SNO+ collaboration for access to the data that spurred
the questions addressed here, and in particular Drs. A. Wright, V. Lozza, A.
Hallin and C. Krauss, who provided regular suggestions and feedback that
guided the progression of this work. Dr. J.-S. Wang produced the SNO+ data
plots shown in Figs. (2,5). Funding for the author’s participation in SNO+
derives from a Natural Sciences and Engineering Research Council of Canada
(NSERC) Discovery Grant.
## References
* [1] V. Di Marcello, et al., Fluid-dynamics and transport of 210Po in the scintillator Borexino detector: A numerical analysis, Nucl. Instrum. Methods A. 964 (2020) 163801. doi:10.1016/j.nima.2020.163801.
* [2] V. Di Marcello, et al., Natural convection and transport of background contamination in the Borexino neutrino detector, J. Fluids Eng. 144 (8) (2022) 081210. doi:10.1115/1.4053895.
* [3] S. Zhang, J. Li, Y. Su, et al., A method for sharing dynamic geometry information in studies on liquid-based detectors, Nucl. Sci. Tech. 32 (21) (2021). doi:10.1007/s41365-021-00852-8.
* [4] D. Pepper, J. Heinrich, et al., Transient Natural Convection within a Sphere Using a 3-D Finite Element Method, in: Finite Elements in Fluids, CIMNE/Pineridge Press, 1993, pp. 369–378.
* [5] J. Hutchins, E. Marschall, Pseudosteady-state natural convection transfer inside spheres, Int. J. Heat Mass Transf. 32 (11) (1989) 2047–2053.
* [6] E. Arquis, et al., Mixed Convection in a Spherical Enclosure, J. Heat Transf., Trans. ASME 115 (1993) 1066–1069.
* [7] SNO+ Collaboration _et al_., The SNO+ Experiment, JINST 16 (2021) P08059. doi:10.1088/1748-0221/16/08/P08059.
* [8] The SNO+ Collaboration _et al_., Development, characterisation, and deployment of the SNO+ liquid scintillator, JINST 16 (2021) P05009. doi:10.1088/1748-0221/16/05/P05009.
* [9] The SNO+ Collaboration _et al_., Current Status and Future Prospects of the SNO+ Experiment, Advances in High Energy Physics (2016) 6194250. doi:10.1155/2016/6194250.
* [10] D. Bravo-Berguño, et al., The Borexino Thermal Monitoring & Management System and simulations of the fluid-dynamics of the Borexino detector under asymmetrical, changing boundary conditions, Nucl. Instrum. Methods A. 885 (2018) 38–53. doi:10.1016/j.nima.2017.12.047.
* [11] M. Chow, R. Akins, Pseudosteady-State Natural Convection Inside Spheres, J. Heat Transf., Trans. ASME 97 (1975) 54–59.
* [12] H. Whitley, R. Vachon, Transient Laminar Free Convection in Closed Spherical Containers, J. Heat Transf., Trans. ASME 94 (1972) 360–366.
* [13] T. Holzmann, Mathematics, Numerics, Derivations and OpenFOAM, 4th Edition, Holzmann CFD, Leoben, 2016.
* [14] G. Anguiano-Orozco, R. Avila, Vortex Ring Formation within a Spherical Container with Natural Convection, CMES. Computer Modeling in Engineering & Sciences 49 (09 2009).
* [15] X. Zhou, Q. Zhang, Q. Liu, et al., Densities, isobaric thermal expansion coefficients and isothermal compressibilities of linear alkylbenzene, Phys. Scr. 90 (2015) 055701. doi:10.1088/0031-8949/90/5/055701.
* [16] Wenjie Wu, et al., Thermal diffusivity and specific heat capacity of linear alkylbenzene, Physica Scripta 94 (10) (2019). doi:10.1088/1402-4896/ab1cea.
## TABLES
Table 1: The ‘thermoType’ block in the OpenFoam dictionary (file)
“thermophysicalProperties”, as used for simulations of the Chow-Akins
experiment and of the SNO+ detector.
thermoType
{
    type            heRhoThermo;
    mixture         pureMixture;
    transport       polynomial;
    thermo          hPolynomial;
    equationOfState icoPolynomial;
    specie          specie;
    energy          sensibleEnthalpy;
}
Table 2: Specification (in standard OF file ‘casefolder/0/T’) of the initial
and boundary conditions on temperature, for simulation of the Chow-Akins
experiment. An integer identifier (arbitrarily named ‘origin_cellID’) is
associated with the coordinate origin (i.e. centre of the sphere). The
arbitrarily named variable ‘T00_plus’ is set equal to the temperature at cell
‘origin_cellID’ incremented by 2.5 K, and imposed as the temperature
everywhere on the domain boundary (whose patch name is ‘shell’). Thus the
domain boundary is sustained at a temperature 2.5 K warmer than the temperature
at the origin.
internalField   uniform 278.0;

boundaryField
{
    shell
    {
        type    codedFixedValue;
        value   uniform 278;
        name    T00_plus;
        code
        #{
            const volScalarField& T =
                db().lookupObject<volScalarField>("T");
            const fvMesh& mesh = T.mesh();
            scalar origin_cellID = mesh.findCell(point(0.0, 0.0, 0.0));
            scalar value = T[origin_cellID] + 2.5; // wall temperature excess
            operator==( value );
        #};
    }
}
Table 3: Values used for the material properties of LAB. Note: Trials with a wide range of alternative values for $\mathrm{Sc}$ suggested the uncertainty in its value is (for present purposes) inconsequential.

Property | Value | Source
---|---|---
Density, $\rho_{0}$ | $858\,\mathrm{kg\,m^{-3}}$ | Zhou et al. [15]
Kinematic viscosity, $\nu$ | $1\times 10^{-5}\,\mathrm{m^{2}\,s^{-1}}$ | Material Safety Data Sheet (“5-10 cps”)
Thermal expansion coefft., $\beta_{T}$ | $8.8\times 10^{-4}\,\mathrm{K^{-1}}$ | Zhou et al. [15]
Thermal diffusivity, $\kappa$ | $7.2\times 10^{-8}\,\mathrm{m^{2}\,s^{-1}}$ | Wu et al. [16]
Specific heat capacity, $c_{p}$ | $2300\,\mathrm{J\,kg^{-1}\,K^{-1}}$ | Wu et al. [16]
Prandtl no., $\mathrm{Pr}$ ($=\nu/\kappa$) | 140 | (rounded)
Schmidt no., $\mathrm{Sc}$ | 140 | (surmised)
Table 4: The ‘mixture’ block in the OpenFoam dictionary (file)
“thermophysicalProperties”, for SNO+ simulations. Note: ‘mu’ ($\mu$) is the
dynamic viscosity ($\mu=\rho\nu$) and ‘kappa’ is the thermal conductivity
($\kappa,\,\mathrm{W\,m^{-1}\,K^{-1}}$). The reference temperature $T_{0}$ for
the polynomial giving the density, $\rho=\rho_{0}\left[1-\beta_{T}\,(T-T_{0})\right]$, has
been taken as 290 K.
mixture
{
    specie
    {
        molWeight 240.0;
    }
    thermodynamics
    {
        CpCoeffs<8> (2300.0 0 0 0 0 0 0 0);
        Sf 0;
        Hf 0;
    }
    equationOfState
    {
        rhoCoeffs<8> (1077 -0.755 0 0 0 0 0 0);
    }
    transport
    {
        muCoeffs<8>    (0.00858 0 0 0 0 0 0 0);
        kappaCoeffs<8> (0.143 0 0 0 0 0 0 0);
        Pr 140;
    }
}
## FIGURES
Figure 1: Schematic diagram of the SNO+ neutrino detector, the “flow domain”
of interest [7]. Floating in water in the rock cavity, an acrylic sphere
(radius 6 m) is surrounded by the PSUP (PMT support frame) bearing
photomultiplier tubes (PMTs). The sphere is connected by the ‘neck’ (radius
0.75 m) to the ‘deck’ some 7 m above the sphere. High in the neck is the
liquid/gas interface, where the scintillator meets the purified
$\mathrm{N}_{2}$ ‘cover gas’. Figure 2: The sinking ${}^{222}\mathrm{Rn}$
layer of 31 May - 14 June, 2021, as seen in daily-summed event distributions
(sum of ${}^{214}\mathrm{Bi}$ and ${}^{214}\mathrm{Po}$ decays) plotted in
$(z,\rho^{2})$ coordinates ($\rho=\sqrt{x^{2}+y^{2}}$). The colour code gives
the hourly event count per $z-\rho^{2}$ bin, averaged over 24 hr (bin widths
$\Delta z,\Delta\rho^{2}$ respectively $0.1\,\mathrm{m}$ and
$0.1\,\mathrm{m^{2}}$). Note: the high level of contamination indicated here
is not typical of the SNO+ detector, and represents an anomaly. Figure 3:
Pseudo-steady convective vertical motion of water undergoing heating in a
spherical container (radius $R=0.08947$ m). Lineplot: vertical velocity
[$\mathrm{mm\,s^{-1}}$] along a radius at the equator, comparing the
measurements by Chow & Akins [11] (Fig. 8) against lower- and higher-
resolution OpenFoam simulations. The multiplicity (in the case of the higher
resolution simulation) results from the availability of multiple mesh cells in
close proximity to the chosen radius ($y=0$, $0\leq x\leq R$). Inset: slice
$y=0$ of the higher-resolution OF solution at $t=600$ s, with auto-ranged
colour scale quantified in $\mathrm{m\,s^{-1}}$. Figure 4: A clip of the UZ
(vertical velocity) field on $y=0$ at $t=600$ s, from the higher-resolution OF
simulation of the Chow & Akins experiment. The ruler (black) sits along
$y=z=0$ with its (left) end at the wall ($x=-0.0895$ m), its gradation
interval being 0.1 mm. The ten ‘added layers’ of the mesh are visible, the
depth of the outermost layer being about 0.01 mm ($\sim 10^{-4}\times R$). The
colour scale range is autoscaled to the visible clip, and gives UZ in
$\mathrm{m\,s^{-1}}$. Figure 5: Comparison of observed (left) and simulated
positions of the radon layer of 31 May – 14 June 2021. Axis ranges are $-6\leq
z\leq 6$ m (vertical) and $0\leq x^{2}+y^{2}\leq 36\;\mathrm{m^{2}}$ (horizontal). Initial
state isothermal and static, with concentration $C=1$ over $2.5\leq z\leq 5$ m
and $C=0$ elsewhere. Evolution forced by heat flux density
$q=+0.1\;\mathrm{W\,m^{-2}}$ on the upper hemisphere. If radon evolution were
due solely to radioactive decay, for the times shown (at right) the normalised
concentration time sequence would be: $C(t)/C(0)=(1,0.83,0.58,0.23,0.14)$.
Figure 6: Views of the motion in the AV at time $t=12$ h, an isothermal
initial state being disturbed by heating ($q=0.1\;\mathrm{W\,m^{-2}}$) over
the upper hemisphere (excluding the neck). All images are on the $y=0$
slice. Upper-left panel ($T$) shows the warm wall boundary layer (vertical
ruler of length 1 m with 0.2 m increments). Upper-middle panel shows vertical
velocity (UZ) near the AV wall at $z=3$ m, with (only) 6 colour-values
permitted and the scale chosen to distinguish positive from negative UZ (in
reality, UZ is not uniform outside the wall layer). Lower-left panel (velocity
magnitude $\mathrm{U}$, $\mathrm{m\,s^{-1}}$) is a blow-up of the wall layer
near $z=4$ m (ruler is 0.1 m long). The panel on the right (vertical velocity)
shows that the warm wall layer penetrates up along the wall of the neck,
inducing sink in the interior of the neck. Figure 7: The contaminant
distribution on $y=0$ at $t=(0,12)$ h. Initial state isothermal, driven by
$q=0.1\;\mathrm{W\,m^{-2}}$ on the upper hemisphere. Ruler is 2.5 m long, with
its base at the origin and with 0.25 m increments. Cell sidelength in the
interior is about 0.25 m. The ascending warm wall boundary layer is
transporting uncontaminated fluid upwards from beneath the contamination
layer, and the accumulation of that fluid volume above the original
contaminant layer forces the latter downward. Figure 8: Time evolution of
total kinetic energy (KE), of the mean vertical velocity ($\langle w\rangle$)
computed over a thin outer layer of the upper hemisphere ($0\leq z\leq 5.94$
m, $\sqrt{x^{2}+y^{2}+z^{2}}\geq 5.9$ m), and of the contaminant mass weighted
mean height of the radon contamination layer (computed over all cells).
Initial state isothermal, driven by $q=0.1\;\mathrm{W\,m^{-2}}$ on the upper
hemisphere. Figure 9: Impact of the computational mesh on the rate of descent
of the radon contamination layer, for four different choices of mesh (cell
counts respectively: $\circ\;379738$, $\bullet\;661536$, $\oplus\;1003934$,
$\otimes\;1088629$). The minimum and maximum cell sizes
($h^{\mathrm{min}},h^{\mathrm{max}}$) are computed from cell volumes as
$V^{1/3}$, and cited in centimeters (rounded). The property plotted is the
deviation ($\Delta\overline{z}$) of the contaminant-mass-weighted mean height
of the radon layer from its initial value (the deviation is plotted because
the initial value $\overline{z}(0)$ differs slightly from mesh to mesh, owing
to the spatial irregularity of the mesh cells produced by snappyHexMesh). In
every case an isothermal initial state is disturbed by a heat flux
$q=+0.1\;\mathrm{W\,m^{-2}}$ over the upper hemisphere of the AV, and the OF
solver is the same for all cases (i.e. a slight modification of
‘buoyantPimpleFoam’). The solid circles (and line) correspond to the
‘standard’ mesh used for most ‘full detector’ calculations. Figure 10:
Comparison of contaminant concentration fields on $y=0$ at $t=8$ hr for two
runs, both forced by heat flux $q=+0.1\;\mathrm{W\,m^{-2}}$ on the upper
hemisphere of the AV. Upper: initial state uniformly stratified,
$T=290+0.1(z-5)$. Lower: initial state isothermal. In the isothermal case the
wall boundary layer features uncontaminated fluid, brought up from beneath the
radon layer by the buoyant wall layer flow – this is not seen, however, in the
stratified case. Figure 11: Comparison of fields of vertical velocity UZ on
$y=0$ at $t=8$ hr for two runs, both forced by heat flux
$q=+0.1\;\mathrm{W\,m^{-2}}$ on the upper hemisphere of the AV. Left: initial
state isothermal. Right: initial state uniformly stratified, $T=290+0.1(z-5)$.
Stratification has diminished volume flux between the spherical AV and its
neck, but resulted in an organised pattern of ascent/descent in the bulk of
the AV. The colour scale on the right embraces the range in UZ (on $y=0$) for
that case, whereas the range on the left has been fixed so as to match (the
full range on the left was $-5.2\times 10^{-4}\leq\mathrm{UZ}\leq 8.9\times
10^{-4}\;\mathrm{m\,s^{-1}}$). Figure 12: Vertical velocity UZ on slices at
$z=(0,3.75)$ m at $t=8$ hr, for the simulation with stratified initial state
and forced by heat flux $q=+0.1\;\mathrm{W\,m^{-2}}$ on the upper hemisphere
of the AV. Columns of ascent/descent cut these slices, showing vertical
continuity (as also seen on Fig. 11) and a far from perfect axial symmetry.
Figure 13: Oscillatory pattern of vertical velocity UZ [$\mathrm{m\,s^{-1}}$],
in stably-stratified detector forced by heat flux $q=+0.1\;\mathrm{W\,m^{-2}}$
on the upper hemisphere of the AV. Vertical velocity sequence showing one
half-cycle of oscillation, as viewed on a vertical slice ($y=0$) and an
equatorial ($z=0$) slice. The interval between images (to be viewed clockwise,
$0...6$) is $\Delta t=17$ s, and time ‘0’ is $t=$7 hr + 17 s. The full period
of the oscillation is approximately twelve steps (i.e. $204$ s), matching the
Brunt-Väisälä period (214 s). In the upper-left corner a spherical clip
(radius 6 m) on $y=0$ shows temperature deviation [K] from the initial state
($T=290+0.1(z-5)$) at $t=8$ hr. The colour scale for UZ is fixed across all
plots. Figure 14: Effect of initial thermal stratification $dT_{0}/dz$ on the
rate of sink of a contamination layer, initially confined to $2.5\leq z\leq 5$
m where $C=1$. $\overline{z}$ is the contaminant-mass-mean height, and
$\overline{z}_{\mathrm{base}}$ is the mean height over a subset of cells for
which $z\leq 2.5$ m and $0.05f(t)\leq C\leq 0.1f(t)$, where
$f(t)=\exp(-t/\tau)$ is the radioactive decay factor, $\tau=(\ln
2)^{-1}\,T_{1/2}$ with $T_{1/2}=3.8$ days. Simulations forced by heat flux
$q=+0.1\;\mathrm{W\,m^{-2}}$ on the upper hemisphere of the AV. Figure 15:
Contaminant motion down the neck of the SNO+ AV. Slices on $y=0$, at $t=2$ h.
Initial state isothermal, driven by $q=-0.1\;\mathrm{W\,m^{-2}}$ on the neck
sidewall. Each panel shows a segment (or all) of the $y=0$ plane, and in each
case the colour scale range is adapted to the (whole or partial) view shown.
The cool, sinking wall boundary layer (seen on left and right panels) is
carrying a contaminant (‘Conc’, initially present with concentration $C=1$ at
$z\geq 12.5$ m) downwards. The panel on the right gives a close-up view of the
wall boundary layer, with the ruler (length 0.1 m, height $z=9.375$ m)
indicating that at that point the boundary layer depth was about 0.05 m (and
spanning 4-5 layers of cells). The single contour shown on the other two
panels denotes $\mathrm{UZ}=0$, delineating regions of sinking and ascending
fluid. Figure 16: Contaminant motion down the neck of the SNO+ AV. Slices on
$y=0$, at $t=2$ h. Initial state isothermal, driven by
$q=+0.1\;\mathrm{W\,m^{-2}}$ on the neck sidewall. The single contour shown on
the left and middle panels is $\mathrm{UZ}=0$. On the rightmost panel the
ruler at $z=12.35$ m has length 0.1 m, and the vectors are proportional to
$\mathbf{U}$ in magnitude and direction. Figure 17: Contaminant motion down
the neck of the SNO+ AV. Slices on $y=0$, at $t=2$ h. Stably-stratified
initial state $T=290+0.25(z-6)$, driven by $q=-0.1\;\mathrm{W\,m^{-2}}$ on the
neck sidewall. On the rightmost panel the ruler has length 0.1 m, and vectors
are proportional to $\mathbf{U}$ in magnitude and direction. Though not
evident from the figure, the cool wall boundary layer shows little variation
in speed or depth over almost the entire length of the neck. Figure 18:
Descent of a contamination layer (initially, $C=1$ at $z\geq 12.5$ m) down the
neck of the SNO+ neutrino detector. Time evolution of the height
($z_{\mathrm{min}}$) of the lowest scalar-bearing cell ($C\geq 0.0005$) for
various combinations of wall heat flux ($q=\pm 0.1\;\mathrm{W\,m^{-2}}$) and
initial stratification ($dT_{0}/dz,\;\mathrm{K\,m^{-1}}$). Note:
$z_{\mathrm{min}}(0)$, not shown, is not exactly equal to 12.5 m (i.e. the
nominal base of the initial contamination layer) due to the irregular shape
and finite size of the mesh cells. Figure 19: Evolution of the contaminant
field (‘Conc’ or $C$) and vertical velocity (UZ) in an idealization of the
neck of the SNO+ neutrino detector, shown on $y=0$ slices. The motion is due
to inhomogeneity of wall temperature, which is fixed at
$T_{w}=(12,11.9,20)^{\circ}$ C in respectively the lowest 3 m, the central 3
m, and the uppermost 1 m of the neck. Initially (left panel) $C=z/7$, but that
distribution (shown with same colour scale at $t=(30,600)$ min) is quickly
disturbed by the motion (though held by the boundary conditions to be 0/1 at
base/top of the domain). By $t=30$ min the negatively-buoyant wall boundary
layer is well developed, drawing contaminant downward. At $t=600$ min the
contaminant distribution shows a sharp drop below the base of the uppermost
(and sharply warmer) section. Figure 20: Profiles of (a) horizontally-averaged
concentration $C_{\mathrm{avg}}(z)$ and (b) the standard deviation
$\sigma_{\mathrm{UZ}}(z)$ of vertical velocity. Properties defined over
horizontal layers (of depth 0.14 m), in the simulation corresponding to Fig.
(19). Motion is the consequence of wall temperature in the layer $3<z<6$ m
being slightly cooler than wall temperature at $z\leq 3$ m. Figure 21:
Profiles of (a) horizontally-averaged concentration $C_{\mathrm{avg}}(z)$, (b)
standard deviation $\sigma_{\mathrm{UZ}}(z)$ of vertical velocity and (c)
temperature rise relative to the initial state. Properties for each time are
defined over horizontal layers (of depth 0.14 m). Motion is the consequence of
a ‘Gaussian’ warm ring of wall temperature centred at $z=1$ m, where the
excess temperature is 4 K. The ‘unforced’ profile of $\sigma_{\mathrm{UZ}}$,
quantifying the level of false (or ‘numeric’) mixing, is for $t=9$ hr from a
simulation on the same mesh and differing only in that there is no forcing
warm wall ring (the level of unforced motion is to all intents and purposes
time-independent).
# Selective Reverse PAC Coding for Sphere Decoding
Xinyi Gu, Mohammad Rowshan, Member, IEEE, and Jinhong Yuan, Fellow, IEEE The
authors are with the School of Electrical Eng. and Telecom., University of New
South Wales (UNSW), Sydney, Australia (e-mail: <EMAIL_ADDRESS>; {m.rowshan, <EMAIL_ADDRESS>). This work was supported in part by the Australian Research Council (ARC) Discovery Project under Grant DP220103596.
###### Abstract
Convolutional precoding in polarization-adjusted convolutional (PAC) codes can
reduce the number of minimum weight codewords (a.k.a error coefficient) of
polar codes. This can result in improving the error correction performance of
(near) maximum likelihood (ML) decoders such as sequential decoders and sphere
decoders. However, PAC codes cannot be decoded by sphere decoding, for two reasons: 1) sphere decoding of polar codes is performed from the last bit, relying on the lower-triangular shape of the polar transform, whereas the generator matrix of PAC codes is no longer triangular; 2) one may modify the precoding matrix to get a lower-triangular shape, but this may reduce the minimum distance of the code due to the formation of unwanted cosets. This
work proposes a selective convolutional precoding scheme with transposed
precoding matrix to reduce the error coefficient while avoiding the reduction
in the minimum distance. The numerical results show the improvement of block
error rate by 0.2-0.6 dB, depending on the code rate, in medium and high SNR
regimes.
###### Index Terms:
Polar codes, PAC codes, sphere decoding, error coefficient, precoding, code
construction.
## I INTRODUCTION
Polar codes [1] form a class of capacity-achieving codes. However, they do not
provide a satisfactory error correction performance with a relatively low
complexity successive cancellation (SC) decoding in finite block length. To
address this drawback, SC list (SCL) decoding was introduced in [2] that
provides a near maximum likelihood (ML) block error rate (BLER) at the cost of
high computational complexity. Furthermore, it was recently proposed to
concatenate convolutional codes and polar codes resulting in a family of codes
known as polarization-adjusted convolutional (PAC) codes [3]. PAC codes can
reduce the error coefficient of the underlying polar codes due to the impact
of convolutional precoding on the formation of minimum weight codewords [4].
This reduction is expected to improve the performance of PAC codes under ML
decoders such as sequential decoders [5] and sphere decoder.
It is known that an ML decoder based on an exhaustive search has a complexity
of $O(2^{K})$, where $K$ is the number of information bits. Sphere decoding
(SD) based on the depth-first approach [7] reduces the scope of search
effectively. It first finds a close candidate solution in terms of Euclidean
distance (ED), not necessarily the closest one, to the received sequence.
Then, it searches for a closer candidate to the received sequence by tree pruning, if one exists. Sphere decoding was adapted to polar codes in [8], and then several complexity reduction techniques and approaches, such as bounded metrics for SD and list SD (LSD), were proposed in [9, 10, 11, 12] to explore the decoding tree structure in search of the optimal or near-optimal path.
In this work, we propose a selective convolutional precoding scheme in the
reverse direction for sphere decoding of polar codes. This scheme
significantly reduces the error coefficient of polar codes while avoiding the
challenging problem of the reduction in the minimum distance of the underlying
polar code. We also analyze the impact of this precoding scheme on the
generation of minimum weight codewords in the cosets with respect to the
employed convolutional polynomials. The numerical results show that the
proposed techniques can improve the block error rate of polar codes under
sphere decoding by 0.2 - 0.6 dB. The power gain tends to be larger at high
rates and in high SNR regimes. Note that this performance-improving scheme can be employed alongside the complexity-reduction techniques available in the literature, though they are beyond the scope of this work.
## II PRELIMINARIES
We denote by $\mathbb{F}_{2}$ the finite field with two elements. The
cardinality of a set is denoted by $|\cdot|$. The _weight_ of a vector
$\mathbf{e}\in\mathbb{F}_{2}^{n}$ is
$w(\mathbf{e})\triangleq|\operatorname{supp}(\mathbf{e})|$. The all-one vector
$\mathbf{1}$ and all-zero vector $\mathbf{0}$ are defined as vectors with all
identical elements of 1 or 0, respectively. The summation in $\mathbb{F}_{2}$
is denoted by $\oplus$. Let $[\ell,u]$ denote the range
$\\{\ell,\ell+1,\ldots,u\\}$ and bold letters denote vectors. The (binary)
representation of $i\in[0,2^{n}-1]$ in $\mathbb{F}_{2}$ is defined as
$\operatorname{bin}(i)=i_{n-1}...i_{1}i_{0}$, where $i_{0}$ is the least
significant bit, that is $i=\sum_{a=0}^{n-1}i_{a}2^{a}$. A $K$-dimensional
subspace $\mathcal{C}$ of $\mathbb{F}_{2}^{N}$ is called a linear $(N,K,d)$
_code_ over $\mathbb{F}_{2}$ if the minimum distance of $\mathcal{C}$,
$\operatorname{d}(\mathcal{C})\triangleq\min_{\mathbf{c}\in\mathcal{C},\mathbf{c}^{\prime}\in\mathcal{C},\mathbf{c}\neq\mathbf{c}^{\prime}}\operatorname{d}(\mathbf{c},\mathbf{c}^{\prime})=w_{\min},$
(1)
is equal to $d$. We refer to $N$ and $K$ as the _length_ and the _dimension_
of the code. The vectors in $\mathcal{C}$ are called _codewords_. Note that
the minimum weight of a nonzero codeword $w_{\min}$ in a linear code
$\mathcal{C}$ is equal to its minimum distance
$\operatorname{d}(\mathcal{C})$.
For a binary input additive white Gaussian noise (BI-AWGN) channel at high
signal-to-noise ratio (SNR), according to [6, Sect. 10.1], the block error
rate (BLER) of linear codes under soft-decision maximum likelihood (ML)
decoding can be approximated by
$P_{e}^{ML}\approx A_{w_{\min}}Q(\sqrt{2\operatorname{d}(\mathcal{C})\cdot
R\cdot E_{b}/N_{0}}),$ (2)
where $A_{w_{\min}}$ denotes the number of minimum-weight codewords, a.k.a
error coefficient because it plays the role of a coefficient for calculating
the BLER bound, $Q(\cdot)$ is the tail probability of the normal distribution
$\mathcal{N}(0,1)$, and $R$ is the code rate. Since the approximation (2) is directly proportional to $A_{w_{\min}}$, the error coefficient can be used as a measure to anticipate the direction of change in the block error rate.
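As a quick numerical sketch of (2) (our own illustration, not taken from the paper; the function names are ours), the dominant union-bound term can be evaluated as:

```python
import math

def q_func(x: float) -> float:
    """Tail probability Q(x) of the standard normal distribution N(0,1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bler_ml_approx(a_wmin: int, d_min: int, rate: float, ebn0_db: float) -> float:
    """Approximate ML BLER via the dominant term of eq. (2)."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return a_wmin * q_func(math.sqrt(2.0 * d_min * rate * ebn0))
```

For fixed $d_{min}$ and rate, halving $A_{w_{\min}}$ halves the approximate BLER, which is why reducing the error coefficient improves performance in the high-SNR regime.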
### II-A Polar Codes and PAC Codes
Polar codes of length $N=2^{n}$ are constructed based on the $n$-th Kronecker power of the binary kernel matrix $\mathbf{G}_{2}={\footnotesize\begin{bmatrix}1&0\\\ 1&1\end{bmatrix}}$, that is, $\boldsymbol{G}_{N}=\mathbf{G}_{2}^{\otimes n}$, which we call the polar transform throughout this paper. We denote the rows of the polar transform by $\mathbf{g}_{i},i\in[0,N-1]$, and its elements by $g_{i,j},i,j\in[0,N-1]$. A
generator matrix of the polar code is formed by selecting a set of rows of
$\boldsymbol{G}_{N}$. We use $\mathcal{I}$ to denote the set of indices of
these rows and $\mathcal{C}(\mathcal{I})$ to denote the linear code generated
by the set of rows of $\boldsymbol{G}_{N}$ indexed by $\mathcal{I}$. Note that
$\mathcal{I}\subseteq[0,N-1]=[0,2^{n}-1]$. The characterization of the
information set $\mathcal{I}$ for polar codes relies on the channel
polarization theorem [1] and the concept of bit-channel reliability. A polar
code of length $N=2^{n}$ is constructed by selecting a set $\mathcal{I}$ of
indices $i\in[0,N-1]$ with high reliability [1]. The indices in $\mathcal{I}$
are dedicated to information bits, while the rest of the bit-channels with
indices in $\mathcal{I}^{c}\triangleq[0,N-1]\setminus\mathcal{I}$ are used to
transmit a known value, ’0’ by default; these are called _frozen bits_.
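The Kronecker-power construction of $\boldsymbol{G}_{N}$ can be sketched in a few lines of Python (an illustrative helper of our own, not the authors' code; the function names are ours):

```python
def kron_f2(a, b):
    """Kronecker product of two binary matrices; entries stay in {0, 1}."""
    return [[x & y for x in row_a for y in row_b]
            for row_a in a for row_b in b]

def polar_transform(n):
    """G_N = G_2^{(tensor) n} with the kernel G_2 = [[1, 0], [1, 1]]."""
    g2 = [[1, 0], [1, 1]]
    g = [[1]]  # G_1, the 1x1 identity
    for _ in range(n):
        g = kron_f2(g, g2)
    return g
```

Row $\mathbf{g}_{i}$ is then `polar_transform(n)[i]`; for example, for $N=8$, $\mathbf{g}_{5}=[1,1,0,0,1,1,0,0]$ with weight 4, and the whole matrix is lower triangular.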
To improve the distance properties of polar codes, it was recently suggested
in [3] to obtain the input vector $\mathbf{u}=[u_{0},\ldots,u_{N-1}]$ by a
convolutional transformation using the binary generator polynomial of degree
$m$, with coefficients $\mathbf{p}=[p_{0},\ldots,p_{m}]$ as follows:
$u_{i}=\sum_{j=0}^{m}p_{j}v_{i-j}.$ (3)
This coding scheme is called polarization-adjusted convolutional (PAC) coding.
The convolution operation can be represented by the upper-triangular matrix shown in [4], where the rows of the pre-transformation matrix $\mathbf{P}$ are formed by shifting the vector $\mathbf{p}=(p_{0},p_{1},\ldots,p_{m})$ one element per row. Note that $p_{0}=p_{m}=1$ by convention. Then,
we can obtain $\mathbf{u}$ by matrix multiplication as
$\mathbf{u}=\mathbf{v}\mathbf{P}$.
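The forward convolution (3) can be sketched as follows (our own illustration; the function name is ours):

```python
def pac_precode(v, p):
    """Forward convolutional precoding over F_2: u_i = XOR_j p_j * v_{i-j}, eq. (3)."""
    return [sum(p[j] & v[i - j] for j in range(len(p)) if i - j >= 0) % 2
            for i in range(len(v))]
```

With $\mathbf{p}=[1]$ (no memory) the precoding is the identity; in general $u_i$ mixes the current bit $v_i$ with up to $m$ earlier bits of $\mathbf{v}$.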
As a result of this pre-transformation, $u_{i}$ for $i\in\mathcal{I}^{c}$ may
no longer be frozen (i.e., $u_{i}\in\\{0,1\\}$) unlike in polar codes where
$u_{i}=0$ always holds. Then, vector $\mathbf{u}$ is mapped to codeword vector
$\mathbf{x}=\mathbf{u}\mathbf{G}_{N}$. Overall, we have
$\mathbf{x}=\mathbf{v}\mathbf{P}\mathbf{G}_{N}$.
It was analytically shown in [4, 13] that under conventional (forward) convolutional precoding, the number of minimum-weight codewords generated in the cosets, denoted by $A_{i,w_{\min}}(\mathcal{I})$ for coset $\mathcal{C}_{i}(\mathcal{I})$, is reduced relative to polar codes (with no precoding).
### II-B Channel Model and Modulation
A binary code $\mathcal{C}$ of length $N$ and dimension $K$ maps a message of
$K$ bits into a codeword $\mathbf{c}$ of $N$ bits to be transmitted over a
noisy channel. The channel alters the transmitted codeword such that the
receiver obtains an $N$-symbol vector $\mathbf{y}$. An ML decoder supposedly
compares $\mathbf{y}$ with all the $2^{K}$ modulated codewords in the codebook
and selects the one closest to $\mathbf{y}$. In other words, the ML decoder
finds a modulated codeword $\textup{x}(\mathbf{c})$ such that
$\hat{\mathbf{c}}=\underset{\mathbf{c}\in\mathcal{C}}{\text{arg max
}}p\big{(}\mathbf{y}|\textup{x}(\mathbf{c})\big{)}.$ (4)
For additive white Gaussian noise (AWGN) channel with noise power of
$\sigma^{2}_{n}=N_{0}/2$, the conditional probability
$p\big{(}\mathbf{y}|\textup{x}(\mathbf{c})\big{)}$ is given by
$p\big{(}\mathbf{y}|\textup{x}(\mathbf{c})\big{)}=\frac{1}{(\sqrt{\pi
N_{0}})^{N}}\text{exp}\left(-\sum_{i=0}^{N-1}\left(y_{i}-\textup{x}(c_{i})\right)^{2}/N_{0}\right).$
(5)
Observe that maximizing $p(\mathbf{y}|\textup{x}(\mathbf{c}))$ under binary
phase-shift keying (BPSK) modulation that maps $c_{i}\in\\{0,1\\}$ to
$s_{i}\in\\{1,-1\\}$, i.e., $s_{i}=1-2c_{i}$, is equivalent to minimizing
$d^{2}_{E}=\sum_{i=0}^{N-1}(y_{i}-\textup{x}(c_{i}))^{2}=\sum_{i=0}^{N-1}\left(y_{i}-(1-2\bigoplus_{j=i}^{N-1}u_{j}g_{j,i})\right)^{2},$
which is called squared Euclidean distance (SED). Therefore, for
$\mathcal{U}=\\{\boldsymbol{u}:u_{i}=0,i\in\mathcal{I}^{c}\text{ and
}u_{i}\in\\{0,1\\},i\in\mathcal{I}\\}$ where $|\mathcal{U}|=2^{K}$, we have
$\hat{\boldsymbol{u}}=\underset{\boldsymbol{u}\in\mathcal{U}}{\text{arg min
}}\big{(}\mathbf{y}-(\mathbf{1}-2\boldsymbol{u}\boldsymbol{G}_{N})\big{)}^{2}.$
(6)
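The equivalence between maximizing the likelihood (5) and minimizing the SED can be checked numerically (a small sketch of our own; the normalizing constant in (5) is common to all candidates and therefore does not affect the ordering):

```python
import math

def awgn_likelihood(y, s, n0):
    """Conditional density p(y | s) for a BI-AWGN channel, eq. (5)."""
    coeff = 1.0 / (math.sqrt(math.pi * n0)) ** len(y)
    return coeff * math.exp(-sum((yi - si) ** 2 for yi, si in zip(y, s)) / n0)

def sed(y, s):
    """Squared Euclidean distance between received vector and BPSK codeword."""
    return sum((yi - si) ** 2 for yi, si in zip(y, s))
```

For any two BPSK candidates, the one with the larger likelihood always has the smaller SED, so the ML search can operate on distances alone.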
### II-C Sphere Decoding (SD)
Sphere decoding is based on a simple idea: we search over integer
lattice points $\mathbf{1}-2\boldsymbol{u}\boldsymbol{G}_{N}$ for valid
$\boldsymbol{u}$’s that lie in a certain sphere with radius $r$ around a
received noisy vector $\mathbf{y}$ in $N$-dimensional space. In the first step
of the algorithm, the sphere decoder performs a depth-first search with a very
large initial radius $r_{0}$, which may shrink later as the decoder
explores the decoding tree looking for alternative candidates. By finding the
candidate that allows the sphere to have a minimum radius, SD achieves the ML
estimation for the transmitted data $\boldsymbol{u}$ with respect to the
received vector $\mathbf{y}$.
We adopt the squared Euclidean distance introduced in (6) to determine the
likelihood of the followed path. Hence, for the estimated bits $u_{i}^{N-1}$,
the branch metric $m_{i}(u_{i}^{N-1})$ is
$m_{i}(u_{i}^{N-1})\triangleq\left(y_{i}-\big{(}1-2\bigoplus_{j=i}^{N-1}u_{j}g_{j,i}\big{)}\right)^{2}.$
(7)
Based on the lower-triangular form of $\mathbf{G}_{N}$, SD starts estimating the
bit values from level $l=N-1$. The radius $r_{0}$ then works as a constraint
on the path metrics for levels $l=N-1,...,0$ so that for candidates within the
sphere, we have
$\sum_{i=l}^{N-1}m_{i}\left(u_{i}^{N-1}\right)\leq r_{0}^{2}.$ (8)
The first path candidate is found when the search reaches $l=0$ for the first time. Then, the radius is updated to the path metric of the current path, i.e., $r_{0}^{2}=\sum_{i=0}^{N-1}m_{i}(u_{i}^{N-1})$. Only one path is followed at a time throughout the decoding. To search for candidates with a smaller SED, the level is moved backward (i.e., $l=l+1$) to explore an alternative solution for $u_{l}^{N-1}$.
The branching condition in (8) is examined by the updated $r_{0}$ during the
path expansion. The paths that fail to meet the condition are located outside
the sphere. Regarded as invalid candidates, they are pruned and will not be
revisited. Recall that for $l\in\mathcal{I}^{c}$, $u_{l}$ is set to 0. Hence,
the path is only split when $l\in\mathcal{I}$.
To reduce the computational complexity, lower-bounded path metrics provide a stricter branching condition. The bound $\Lambda_{i}=\underset{\boldsymbol{u}\in\mathcal{U}}{\text{min}}\big{(}m_{i}(u_{i}^{N-1})\big{)}$ is the minimum branch metric over the two possible values of the estimated bit at level $i$. Consequently, the bounded path metrics for branching evaluation
can be computed as
$\displaystyle\sum_{i=l}^{N-1}m_{i}(u_{i}^{N-1})+\sum_{i=0}^{l-1}\Lambda_{i}\leq
r_{0}^{2}.$ (9)
The radius $r_{0}$ is updated once a new valid path is developed over the decoding tree, provided the new path has a metric smaller than the current $r_{0}^{2}$. The tree search completes once all feasible paths have been attempted while the radius of the sphere shrinks. The surviving path, which has the minimum SED, is the desired ML estimate of $\boldsymbol{u}$.
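The depth-first search described above can be sketched as follows. This is a simplified illustration of our own (function names are ours): it implements the branch metric (7), the sphere condition (8), and the radius updates, but omits the lower-bound pruning of (9).

```python
import math

def sphere_decode(y, g, info_set):
    """Depth-first sphere decoding of a polar code over a lower-triangular g.

    y: received real vector; g: N x N binary polar transform;
    info_set: indices of information bits (frozen bits are fixed to 0).
    Returns the ML estimate of the input vector u.
    """
    n = len(y)
    best = {"r2": math.inf, "u": None}
    u = [0] * n

    def branch_metric(i):
        # m_i = (y_i - (1 - 2 * XOR_{j>=i} u_j g_{j,i}))^2, eq. (7)
        bit = 0
        for j in range(i, n):
            bit ^= u[j] & g[j][i]
        return (y[i] - (1 - 2 * bit)) ** 2

    def search(level, pm):
        if pm > best["r2"]:          # outside the sphere: prune, eq. (8)
            return
        if level < 0:                # reached l = 0: candidate found, shrink radius
            best["r2"], best["u"] = pm, u.copy()
            return
        for b in ((0, 1) if level in info_set else (0,)):
            u[level] = b
            search(level - 1, pm + branch_metric(level))
        u[level] = 0

    search(n - 1, 0.0)
    return best["u"]
```

On a noiseless channel the decoder recovers the transmitted $\boldsymbol{u}$ exactly; with noise, it returns the input whose BPSK image has minimum SED to $\mathbf{y}$.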
## III Reverse and Selective Convolutional Precoding
Decoding with SD requires the estimation to start from the last bit $u_{N-1}$, which in turn requires the generator matrix $\mathbf{G}$ to be lower triangular so that $x_{N-1}=u_{N-1}$. The convolutional precoding performed by $\mathbf{P}\mathbf{G}_{N}$ does not give a lower-triangular generator matrix. As a result, we cannot evaluate $x_{N-1}=p_{m}v_{N-1-m}\oplus\cdots\oplus p_{0}v_{N-1}$ in the first step of sphere decoding. In this section, a different approach to convolutional precoding of polar codes for sphere decoding is proposed.
To preserve the lower triangular matrix after convolutional precoding, the
convolution operation is conducted in a reverse direction rather than as in
(3). That is,
$u_{i}=\sum_{j=0}^{m}p_{j}v_{i+j}.$ (10)
Observe the difference between (10) and (3) in the subscript of $v$. This
converts the upper Toeplitz matrix to a lower one. Matrix $\mathbf{P_{r}}$ for
the reverse convolution can be represented as the transpose of $\mathbf{P}$,
i.e., $\mathbf{P_{r}}=\mathbf{P^{T}}$.
$\mathbf{P_{r}}=\begin{bmatrix}p_{0}&0&\cdots&&&&0\\\ p_{1}&p_{0}&&&&&\\\ \vdots&\ddots&\ddots&&&&\vdots\\\ p_{m}&\cdots&p_{1}&p_{0}&&&\\\ 0&p_{m}&\cdots&p_{1}&p_{0}&&\\\ \vdots&&\ddots&\ddots&\ddots&\ddots&\vdots\\\ 0&\cdots&0&p_{m}&\cdots&p_{1}&p_{0}\end{bmatrix}$ (11)
The mapping of input vector $\mathbf{v}$ then becomes
$\mathbf{x}=\mathbf{v}\mathbf{P_{r}}\mathbf{G}_{N}$, making SD a possible
solution for decoding PAC codes. To distinguish these codes from PAC codes, we
call them _reverse PAC_ (R-PAC) codes in the rest of the paper.
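As a sketch (our own, not the authors' code; the function names are ours), the Toeplitz matrix $\mathbf{P}$ and its transpose $\mathbf{P_{r}}$ can be built and checked against (3) and (10):

```python
def precoding_matrices(p, n):
    """Build the upper-triangular Toeplitz P (rows are shifts of p) and P_r = P^T."""
    P = [[0] * n for _ in range(n)]
    for i in range(n):
        for j, pj in enumerate(p):
            if i + j < n:
                P[i][i + j] = pj
    Pr = [[P[j][i] for j in range(n)] for i in range(n)]  # transpose
    return P, Pr

def matvec_f2(v, M):
    """Row-vector-times-matrix product over F_2: (vM)_i = XOR_k v_k M_{k,i}."""
    n = len(v)
    return [sum(v[k] & M[k][i] for k in range(n)) % 2 for i in range(n)]
```

Here `matvec_f2(v, P)` reproduces the forward convolution (3), while `matvec_f2(v, Pr)` reproduces the reverse convolution (10); `Pr` is lower triangular, which is exactly what sphere decoding needs.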
However, the minimum distance $d_{min}$ of the code might be reduced by this
reverse precoding scheme, leading to degradation of the error correction performance. To explain the reason, let us define cosets as follows:
###### Definition 1.
Given a set $\mathcal{I}\subseteq[0,N-1]$ for a polar code, we define the set
of codewords $\mathcal{C}_{i}(\mathcal{I})\subseteq\mathcal{C}(\mathcal{I})$
for each $i\in\mathcal{I}$ in a coset of the subcode
$\mathcal{C}(\mathcal{I}\setminus[0,i])$ of $\mathcal{C}(\mathcal{I})$ as
$\mathcal{C}_{i}(\mathcal{I})\triangleq\left\\{\mathbf{g}_{i}\oplus\bigoplus_{h\in\mathcal{H}}\mathbf{g}_{h}\colon\mathcal{H}\subseteq\mathcal{I}\setminus[0,i]\right\\}\subseteq\mathcal{C}(\mathcal{I}),$
(12)
where the $i$-th row of the polar transform $\boldsymbol{G}_{N}$, i.e.,
$\mathbf{g}_{i}$, is the coset leader.
We know that the minimum distance of a polar code equals the minimum weight
of the rows of the generator matrix or the non-frozen rows in the polar
transform. That is,
$d_{min}=w_{min}=\min(\\{w(\mathbf{g}_{i}):i\in\mathcal{I}\\}).$ (13)
Hence, we use $d_{min}$ and $w_{min}$ interchangeably throughout this paper.
On the other hand, according to Corollary 3 in [13], we have
$w(\mathbf{g}_{i}\oplus\bigoplus_{h\in\mathcal{H}}\mathbf{g}_{h})\geq w(\mathbf{g}_{i}),$ (14)
where $\mathcal{H}\subseteq[i+1,2^{n}-1]$. Hence, minimum-weight codewords, whose number is denoted by $A_{i,w_{\min}}(\mathcal{I})$, can only be formed in the cosets $\mathcal{C}_{i}(\mathcal{I})$ whose coset leader has weight $w(\mathbf{g}_{i})=w_{min}$.
In the forward convolution used in conventional PAC codes, every coset $\mathcal{C}_{i}(\mathcal{I})$ has $i\in\mathcal{I}$, so the minimum distance of PAC codes is preserved. However, the backward direction of the convolution in the reverse precoding (10) might form some coset $\mathcal{C}_{i}(\mathcal{I})$ where $i\in\mathcal{I}^{c}$ and $w(\mathbf{g}_{i})<w_{min}$. This would reduce the minimum distance $w_{min}$ of the code as per (14).
###### Example 1.
Suppose we have the polar code (8,4) with $\mathcal{I}=\\{3,5,6,7\\}$,
$\mathbf{p}=[1\;0\;1\;1]$ and $\mathbf{v}=[0\;0\;0\;1\;0\;1\;0\;0]$. The
minimum distance of this code is
$w_{min}=4=w(\mathbf{g}_{i}),i\in\\{3,5,6\\}$. Now, if the reverse precoding
process forms a coset where the coset leader is row
$\mathbf{g}_{i},i\in\mathcal{I}^{c}=\\{0,1,2,4\\}$, then according to (14),
codeword(s) with weight $w(\mathbf{c})<4$ might be generated, because we have
$w(\mathbf{g}_{i})<w_{min}$ for $i\in\mathcal{I}^{c}$. For instance, when
precoding the frozen bits with indices $i=2,1,0$, we get $u_{2}=1\cdot
0+0\cdot 1+1\cdot 0+1\cdot 1=1,u_{1}=1,$ and $u_{0}=1$, respectively. Observe
that $\mathbf{c}=[0\;0\;1\;0\;1\;1\;0\;0]$ and the weight of the code is
$w(\mathbf{c})=3$, which is less than $w_{min}$ of the polar code. This occurs
as a result of having $\mathbf{g}_{0}$ as a coset leader and hence according
to (14), the weight can be $w(\mathbf{g}_{0})=1$ or larger (here, it is 3).
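The arithmetic of Example 1 can be reproduced end to end (our own check, not the authors' code; `polar_g` rebuilds $\boldsymbol{G}_{8}$ from the kernel):

```python
def polar_g(n):
    """G_N = G_2^{(tensor) n} with kernel [[1, 0], [1, 1]]."""
    g2 = [[1, 0], [1, 1]]
    g = [[1]]
    for _ in range(n):
        g = [[a & b for a in ra for b in rb] for ra in g for rb in g2]
    return g

def reverse_precode(v, p):
    """Reverse convolution over F_2: u_i = XOR_j p_j v_{i+j}, eq. (10)."""
    n = len(v)
    return [sum(p[j] & v[i + j] for j in range(len(p)) if i + j < n) % 2
            for i in range(n)]

def encode(u, g):
    """c = u G over F_2."""
    n = len(u)
    return [sum(u[j] & g[j][i] for j in range(n)) % 2 for i in range(n)]
```

For $\mathbf{v}=[0\,0\,0\,1\,0\,1\,0\,0]$ and $\mathbf{p}=[1\,0\,1\,1]$ this yields $\mathbf{u}=[1\,1\,1\,0\,0\,1\,0\,0]$ and $\mathbf{c}=[0\,0\,1\,0\,1\,1\,0\,0]$ of weight 3, below $w_{min}=4$, exactly as in the example.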
To avoid the reduction in $w_{min}$, we propose a selective precoding scheme
as follows:
$u_{i}=\begin{cases}\sum_{j=0}^{m}p_{j}v_{i+j}&\text{if }w(\mathbf{g}_{i})\geq w_{min}\\\ v_{i}&\text{otherwise}\end{cases}$ (15)
The input vector $\mathbf{v}$ in Example 1 is then encoded to the codeword
$\mathbf{c}=[1\;1\;0\;0\;1\;1\;0\;0]$, where $w(\mathbf{c})=4=w_{min}$. With
selective precoding, the bits $u_{i}$ with $w(\mathbf{g}_{i})<w_{\min}$ remain frozen, so the cosets are always led by some row $\mathbf{g}_{i}$ with $w(\mathbf{g}_{i})\geq w_{\min}$. Hence, for all possible codewords, $w(\mathbf{c})\geq w_{min}$ is achieved.
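The selective rule (15) can be sketched as follows (our own illustration; the row weights of $\boldsymbol{G}_{8}$ are hard-coded for brevity, and the function name is ours):

```python
def selective_reverse_precode(v, p, row_weights, w_min):
    """SR-PAC precoding, eq. (15): precode bit i only if w(g_i) >= w_min."""
    n = len(v)
    u = []
    for i in range(n):
        if row_weights[i] >= w_min:
            u.append(sum(p[j] & v[i + j] for j in range(len(p)) if i + j < n) % 2)
        else:
            u.append(v[i])  # position stays as-is; frozen positions remain 0
    return u
```

Repeating Example 1 with the row weights of $\boldsymbol{G}_{8}$, $(1,2,2,4,2,4,4,8)$, and $w_{min}=4$ gives $\mathbf{u}=[0\,0\,0\,0\,0\,1\,0\,0]$, so $\mathbf{c}=\mathbf{g}_{5}$ has weight $4=w_{min}$, matching the text.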
We use the abbreviation SR-PAC for the selective reverse PAC (R-PAC) in the
rest of the paper. We use the notation R-PAC($m+1$) and SR-PAC($m+1$) to denote the constraint length, $m+1$, of the convolutional precoding in R-PAC and SR-PAC coding.
## IV Analysis: Factors Impacting Error coefficient
In this section, we analyze the error coefficient $A_{w_{\min}}$ of some codes
and how (selective) reverse convolutional precoding of polar codes (in short,
R-PAC or SR-PAC coding) affects $A_{w_{\min}}$. Here, we use a method based on
the list sphere decoder (LSD) for enumeration.
To obtain the number of minimum weight codewords, the all-zero codeword
$\mathbf{0}$ is transmitted at a very high SNR (e.g., 20 dB). The LSD is
employed to find the $L$ most likely paths for the received sequence. In this
setup, we are assuming that the $L$ survived paths contain all or most of the
minimum weight non-zero codewords given that we choose a large enough list
size $L$. Hence, by counting the total codewords with minimum non-zero weight
in the list, we can obtain $A_{w_{\min}}$. Out of all codewords counted as
$A_{w_{\min}}$, we need to find $A_{i,w_{\min}}(\mathcal{I})$ for every $i$
where $w(\mathbf{g}_{i})=w_{min}$. This can simply be performed by classifying
the minimum weight codewords in the list based on the index of the first non-
zero bit.
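For short codes, the error coefficient can also be obtained by exhaustive enumeration instead of the LSD-based method used here (a brute-force sketch of our own, feasible only for small $K$; the function name is ours):

```python
from itertools import product

def min_weight_spectrum(g, info_set):
    """Enumerate all 2^K - 1 nonzero codewords of a small polar code and return
    (w_min, A_wmin, per-coset counts keyed by the first nonzero input index)."""
    n = len(g)
    info = sorted(info_set)
    best_w, count, by_coset = None, 0, {}
    for bits in product((0, 1), repeat=len(info)):
        if not any(bits):
            continue
        u = [0] * n
        for idx, b in zip(info, bits):
            u[idx] = b
        c = [sum(u[j] & g[j][i] for j in range(n)) % 2 for i in range(n)]
        w = sum(c)
        if best_w is None or w < best_w:
            best_w, count, by_coset = w, 0, {}
        if w == best_w:
            count += 1
            lead = next(i for i in range(n) if u[i])
            by_coset[lead] = by_coset.get(lead, 0) + 1
    return best_w, count, by_coset
```

For the (8,4) polar code with $\mathcal{I}=\\{3,5,6,7\\}$ this yields $w_{min}=4$ and $A_{4}=14$, split over the cosets as $A_{3,4}=8$, $A_{5,4}=4$, $A_{6,4}=2$.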
As illustrated in the last section, reverse precoding might introduce codewords with weight smaller than the minimum distance, which weakens the code. By analyzing codes with different rates and various precoding polynomials, we find that for low-rate codes, because of the sparse distribution of information bits, frozen rows with weight smaller than $w_{min}$ have higher chances of being combined with other high-weight rows and becoming coset leaders. This results in a reduction of the minimum distance of the codes. This is where SR-PAC coding needs to be employed instead, to avoid a reduction in the minimum distance of the R-PAC codes.
For high-rate codes, in contrast, the dense distribution of the information bits usually enables reverse precoding to preserve $w_{min}$. In such conditions, applying the reverse convolution selectively does not yield an $A_{w_{\min}}$ as small as in R-PAC codes: by limiting the row combinations, SR-PAC coding cannot reduce the error coefficient as much as R-PAC coding. Hence, R-PAC codes outperform SR-PAC codes in this case.
Tables I and II list $A_{i,w_{\min}}(\mathcal{I})$ for code (64,14) with
selective reverse precoding and reverse precoding, where
$\mathbf{p}_{7}=[1\;1\;0\;1\;1\;0\;1]$ is used for R-PAC(7) and SR-PAC(7)
whereas $\mathbf{p}_{10}=[1\;1\;0\;1\;1\;0\;1\;1\;0\;1]$ is used for R-PAC(10)
and SR-PAC(10). Note that although rows $i=27,39,43,45$ have $w(\mathbf{g}_{i})=16$, they correspond to frozen bits in polar coding, hence $A_{i,w_{\min}}(\mathcal{I})$ is undefined for them in Table I.
TABLE I: The number of minimum-weight codewords in coset $\mathcal{C}_{i}$ for the SR-PAC(64,14) code with $w_{min}=16$.

| $i$ | $w(g_{i})$ | Polar $A_{i,16}$ | SR-PAC(7) $A_{i,16}$ | SR-PAC(10) $A_{i,16}$ |
|---|---|---|---|---|
| 27 | 16 | – | 16 | 4 |
| 39 | 16 | – | 0 | 30 |
| 43 | 16 | – | 32 | 18 |
| 45 | 16 | – | 16 | 4 |
| 46 | 16 | 32 | 0 | 4 |
| 51 | 16 | 64 | 36 | 6 |
| 53 | 16 | 32 | 16 | 2 |
| 54 | 16 | 16 | 8 | 1 |
| 57 | 16 | 16 | 8 | 3 |
| 58 | 16 | 8 | 4 | 0 |
| 60 | 16 | 4 | 1 | 1 |
| Total | | 172 | 137 | 73 |
TABLE II: The number of minimum-weight codewords in coset $\mathcal{C}_{i}$ for R-PAC(64,14) code with $w_{min}=12$.

| $i$ | $w(\mathbf{g}_{i})$ | R-PAC(7) $A_{i,12}$ | R-PAC(10) $A_{i,12}$ |
|---|---|---|---|
| 37 | 8 | 0 | 1 |
| 40 | 4 | 3 | 0 |
| 41 | 8 | 4 | 0 |
| 48 | 4 | 5 | 3 |
| Total | | 12 | 4 |
According to (14), the minimum distance of the proposed codes is determined
by the minimum $w(\mathbf{g}_{i})$ among the rows that play the role of
coset leaders. Reverse precoding brings the main range of leading cosets
ahead of the first information bit of polar codes (in natural ascending
order). That is, a coset may be led by a frozen row with an index smaller
than that of the first information bit; this is a direct consequence of
reverse precoding. In the proposed SR-PAC coding, all the coset leaders have
$w(\mathbf{g}_{i})=16$, so they maintain $w_{min}=16$ as in polar codes. In
R-PAC coding, by contrast, we would have cosets led by frozen rows with
$w(\mathbf{g}_{i})=8$, which reduces the weight of the minimum-weight
codewords down to $w_{min}=12$ for both R-PAC(7) and R-PAC(10). Note that it
is not easy to derive general rules for the cases in which R-PAC codes
reduce the minimum distance, because this depends strongly on the rate
profile and the convolutional polynomial. We leave this for future research.
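For small blocklengths, $w_{min}$ and $A_{w_{\min}}$ can be checked by brute-force enumeration of the code spanned by the information rows. The sketch below is our own illustration on a toy $(8,4)$ polar code with a hypothetical information set $\{3,5,6,7\}$ (which spans the Reed-Muller code RM(1,3)), not the $(64,14)$ code of the tables:

```python
from itertools import product
import numpy as np

def polar_transform(n):
    """N x N polar transform F^{(x)n} with F = [[1,0],[1,1]]."""
    F = np.array([[1, 0], [1, 1]], dtype=int)
    G = np.array([[1]], dtype=int)
    for _ in range(n):
        G = np.kron(G, F)
    return G

def weight_profile(G, info_set):
    """Brute-force (w_min, A_wmin) over all nonzero messages on info_set."""
    rows = G[sorted(info_set)]
    counts = {}
    for bits in product([0, 1], repeat=len(rows)):
        if not any(bits):
            continue  # skip the all-zero codeword
        c = np.bitwise_xor.reduce([r for b, r in zip(bits, rows) if b])
        w = int(c.sum())
        counts[w] = counts.get(w, 0) + 1
    wmin = min(counts)
    return wmin, counts[wmin]

# Toy (8,4) polar code; info set {3,5,6,7} spans RM(1,3): d = 4, A_4 = 14
print(weight_profile(polar_transform(3), {3, 5, 6, 7}))  # (4, 14)
```

The same enumeration, applied to the precoded generator rows, reproduces the coset counts $A_{i,w_{\min}}$ reported in Tables I and II for feasible code sizes.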
## V Numerical Results and Discussions
We consider several sample codes for the numerical evaluation of the proposed
scheme. The block error rates (BLER) of the (selective) reverse PAC codes of
(64, 50) and (64, 14) are shown in Figs. 1 and 3, whereas Fig. 2 shows the
BLER of the code (128, 110). Error correction performance of R-PAC or SR-PAC
codes is compared against SC and SCL decoders. Let SCLD($L$) denote the SCL
decoder with list size $L$. The polynomials mentioned in Section IV are used
for the precoding of R-PAC and SR-PAC (except for the constraint length
$m+1=4$ where $\mathbf{p}_{4}=[1\;1\;0\;1]$). Table III gives the minimum
distance with the corresponding error coefficient for the underlying codes.
Note that concatenating these high-rate short codes with a short CRC can
result in significant performance degradation due to a large rate loss.
TABLE III: Minimum weight $w_{min}$ and the corresponding error coefficient $A_{w_{\min}}$ of polar codes (∗ refers to SR-PAC).

| code | (64, 50): $w_{min}$, $A_{w_{\min}}$ | (64, 14): $w_{min}$, $A_{w_{\min}}$ | (128, 110): $w_{min}$, $A_{w_{\min}}$ |
|---|---|---|---|
| Polar | 4, 944 | 16, 172 | 4, 4099 |
| SR/R-PAC(4) | 4, 435 | 16, 220∗ | 4, 1621 |
| SR/R-PAC(7) | 4, 98 | 16, 137∗ | 4, 240 |
| SR/R-PAC(10) | 4, 70∗ | 16, 73∗ | 4, 99∗ |
Fig. 1: Performance comparison for SR/R-PAC(64,50).
It can be seen in Fig. 1 that the SD and SCL decoders have the same error
performance for the high-rate polar code (64,50), while with SD, the
proposed R-PAC and SR-PAC codes outperform polar codes. From Table III, we
know that $A_{w_{\min}}$ of R-PAC(4) is reduced by almost half compared to
the polar code. Relating the error coefficient to the curves, a 0.2 dB power
gain in the low-SNR regime and a larger gain in the high-SNR regime are
expected from this reduction. As discussed in the last section, a longer
polynomial yields a smaller $A_{w_{\min}}$ for R-PAC and SR-PAC. Since
R-PAC(10) decreases $w_{min}$ of this code, we use SR-PAC(10) instead. This
code eliminates more than 92.5$\%$ of the minimum-weight codewords, bringing
the curve closer to the dispersion bound [14] and achieving up to 0.6 dB
power gain.
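The connection between a smaller $A_{w_{\min}}$ and the observed power gain can be made quantitative with the leading term of the union bound for BPSK over AWGN. This is our own back-of-the-envelope illustration using the Table III values, not a computation from the paper:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound_bler(wmin, A_wmin, rate, ebno_db):
    """Leading union-bound term on BLER for BPSK over AWGN:
    BLER ~ A_wmin * Q(sqrt(2 * R * wmin * Eb/N0))."""
    ebno = 10 ** (ebno_db / 10)
    return A_wmin * q_func(math.sqrt(2 * rate * wmin * ebno))

# Table III, code (64,50): wmin = 4 for both, A = 944 (polar) vs 70 (SR-PAC(10))
b_polar = union_bound_bler(4, 944, 50 / 64, 4.0)
b_srpac = union_bound_bler(4, 70, 50 / 64, 4.0)
print(b_polar / b_srpac)  # ratio 944/70, independent of SNR
```

Since $w_{min}$ is unchanged, the bound on BLER improves exactly by the ratio of error coefficients, consistent with the growing gap between the curves at high SNR.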
Fig. 2: Performance comparison for SR-PAC(128,110).
Similar conclusions can be drawn for longer codes at high rates, e.g.,
(128,110). As illustrated in Table III and Fig. 2, the considerable decrease
in $A_{w_{\min}}$ for SR-PAC(10) enables the code to outperform its
counterpart, polar code.
Fig. 3: Performance comparison for SR-PAC(64,14).
For the low-rate code (64,14), limited by a relatively small reduction in
$A_{w_{\min}}$, the performance gain is not as significant as for high-rate
codes. As shown in Fig. 3, at least 0.2 dB gain can still be obtained by the
selective reverse precoding scheme.
## VI CONCLUSION
In this paper, we propose an approach to concatenating convolutional codes
with polar codes for sphere decoding. The proposed R-PAC and SR-PAC codes
have a remarkably smaller error coefficient than polar codes. This reduction
in the error coefficient results in a considerable improvement of the BLER
under sphere decoding, in particular for high-rate codes and in high-SNR
regimes. Since this work focuses on improving code performance by precoding,
techniques to reduce decoding complexity and code constructions tailored to
each precoding scheme are natural directions for future work.
## References
* [1] E. Arıkan, “Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,” IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051-3073, Jul. 2009.
* [2] I. Tal and A. Vardy, “List Decoding of Polar Codes,” in IEEE Transactions on Information Theory, vol. 61, no. 5, pp. 2213-2226, May 2015.
* [3] E. Arıkan, “From sequential decoding to channel polarization and back again,” arXiv preprint arXiv:1908.09594 (2019).
* [4] M. Rowshan and E. Viterbo, “On Convolutional Precoding in PAC Codes,” 2021 IEEE Globecom Workshops, Madrid, Spain, 2021, pp. 1-6.
* [5] M. Rowshan, A. Burg and E. Viterbo, “Polarization-adjusted Convolutional (PAC) Codes: Fano Decoding vs List Decoding,” in _IEEE Trans. on Vehicular Tech._ , vol. 70, no. 2, pp. 1434-1447, Feb. 2021.
* [6] S. Lin and D. J. Costello, “Error Control Coding,” 2nd Edition, Pearson Prentice Hall, Upper Saddle River, 2004, pp. 395-400.
* [7] M. Pohst, “On the computation of lattice vectors of minimal length successive minima and reduced bases with applications,” Proc. ACM SIGSAM, vol. 15, 1981.
* [8] S. Kahraman and M. E. Celebi, “Code based efficient maximum-likelihood decoding of short polar codes,” Proc. IEEE Int. Symp. Inf. Theory, July 2012.
* [9] J. Guo and A. Guillén i Fàbregas, “Efficient sphere decoding of polar codes,” 2015 IEEE Intl Symp. on Inf. Theory (ISIT), 2015, pp. 236-240.
* [10] S. A. Hashemi, C. Condo and W. J. Gross, “List sphere decoding of polar codes,” Asilomar Conf. Signals Syst. Comput., pp. 1346-1350, 2015.
* [11] H. Zhou, W. J. Gross, Z. Zhang, X. You and C. Zhang, “Efficient Sphere Polar Decoding via Synchronous Determination,” in IEEE Transactions on Vehicular Technology, vol. 69, no. 6, pp. 6777-6781, June 2020.
* [12] C. Husmann, P. C. Nikolaou and K. Nikitopoulos, “Reduced latency ML polar decoding via multiple sphere-decoding tree searches,” IEEE Trans. Veh. Technol., vol. 67, no. 2, pp. 1835-1839, Oct. 2018.
* [13] M. Rowshan, S.H. Dau, and E. Viterbo, “Error Coefficient-reduced Polar/PAC Codes,” arXiv preprint arXiv:2111.088435 (2021).
* [14] Y. Polyanskiy, H. V. Poor and S. Verdu, “Channel Coding Rate in the Finite Blocklength Regime,” IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307-2359, May 2010.
Revisiting the Shadow Stress Tensor
in Celestial CFT
Shamik Banerjee1,2 and Sabrina Pasterski3
1 National Institute of Science Education and Research (NISER), Bhubaneswar
752050, Odisha, India
2 Homi Bhabha National Institute, Anushakti Nagar, Mumbai, India-400085
3 Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5, Canada
We revisit the standard construction of the celestial stress tensor as a
shadow of the subleading conformally soft graviton. In its original
formulation there is an obstruction to reproducing the expected $TT$ OPE in
the double soft limit. We propose a modification to the definition which
circumvents this obstruction and then extend this change of basis beyond the
conformally soft and single helicity sectors. In the process we investigate
how (non)-commutativity of double soft limits is tied to the decoupling of
primary descendants, and how our choice of celestial basis determines which
symmetries are manifest at the level of the OPE beyond the MHV sector.
###### Contents
1. Introduction
2. Symmetries of the Subleading Soft Graviton
3. A Modified ‘Shadow Basis’
4. Mixed Helicity Amplitudes: A Proposal
5. Discussion
A. Lessons from Gelfand
    A.1 Generalized Homogeneous Functions
    A.2 Representations at Integer Points
B. Complexifying the Celestial Sphere
C. OPE between modified stress tensors: $\bar{T}_{mod}\bar{T}_{mod}$
## 1 Introduction
Over the past few years celestial holographers have made progress towards
constructing a dual description for gravitational scattering in asymptotically
flat spacetimes [1]. Under this holographic map $\mathcal{S}$-matrix elements
are mapped to correlators in a conformal theory living on the celestial
sphere. The main advantage of this program is that it effectively reorganizes
amplitudes in terms of the available symmetries [2].
Lorentz invariance guarantees that a basis of boost eigenstates will transform
as quasi-primaries. In the simplest case of massless particles, we can reach
boost eigenstates from energy eigenstates by performing a Mellin transform in
the energies. The saddle point approximation tells us that when we push our
Cauchy slice to null infinity to prepare the in and out states, this procedure
picks out operators which are local on the celestial sphere but smeared along
the generators of $\mathcal{I}^{\pm}$. However, from the bulk perspective
there is some flexibility in our dictionary [3]. Namely, intertwiners like the
shadow transform map quasi-primaries to quasi-primaries.
A major motivation for pursuing the Celestial CFT (CCFT) construction comes
from the fact that the Lorentz symmetry is enhanced when we couple to gravity.
Intriguingly, a universal coupling to the subleading soft graviton [4] is
equivalent to the Ward identity for a Virasoro symmetry [5] and gives rise to
a candidate stress tensor [6]. The fact that we have such a stress tensor
hints that the dual might behave more like a local CFT than we would otherwise
have grounds to assert. Moreover, this would naïvely help fix the ambiguity in
our basis: the standard $T{\cal O}$ OPE is reproduced by the single subleading
soft graviton insertion, when the ${\cal O}$ operator is a ‘Mellin basis’
operator. However, $T$ itself is constructed as a shadow transform of the
$\Delta=0$ conformally soft graviton.
So which basis should we be using? When trying to phrase the soft physics in a
standard 2D CFT language, it appears that the natural dictionary for currents
[7, 8, 9, 10] involves shadow transforms of the conformally soft modes [11,
12, 13, 14, 15]. However, we see that the Mellin basis is the natural
dictionary for local operators capturing the finite energy radiative
scattering states, with a continuous spectrum on the principal series. It has
also been shown [16, 17] that in the standard Mellin basis the subleading soft
graviton theorem is equivalent to the Ward identity of
$\widehat{\overline{sl}}_{2}$ current algebra. This has been checked
explicitly by computing the celestial OPE of two positive helicity soft
gravitons in the MHV sector and this computation shows that the OPE can be
written as a linear combination of supertranslation and
$\widehat{\overline{sl}}_{2}$ current algebra descendants of positive helicity
gravitons. Meanwhile, the construction of $w_{1+\infty}$ generators from the
residues of the Mellin transformed amplitudes involves another intertwiner
that appears in split signature: the light transform [18, 19].
While the saddle point approximation is strictly only valid for the radiative
states, the standard procedure is to analytically continue off the principal
series [11, 20] to study behavior of celestial amplitudes in the
complex-$\Delta$ plane [21, 22]. In particular, we would like to be able to
continuously take the conformally soft limits of the $T{\cal O}$ OPE. In doing
so we encounter a variety of puzzles. Firstly, applying a shadow
transformation only in this limit appears ad hoc. Secondly, we run into
subtleties with double soft limits in either basis [23, 24, 8, 9, 25, 26],
which will be at the center of our explorations here. Thirdly, even after
resolving these issues, different basis choices make different symmetries
manifest at the level of the OPE.
More succinctly: in a 2D CFT an operator and its shadow cannot both be treated
as local operators. In the celestial context we have an additional constraint:
because the currents arise as limits of the hard operators, it seems that
either all of the operators in our dictionary should be shadowed or none of
them should be.
This presents a tension between our dictionary of local operators in the 2D
theory, the existence of symmetry generators, and consistency of their
algebra.111It would be interesting to explore how this generalizes to higher
dimensions. Note that for $d>2$ we only need to demand that the operators are
quasi-primaries which will still be the case in the shadow basis [7, 10].
Focusing on the subleading conformally soft graviton sector, we can phrase the
puzzle as follows: We know that a positive helicity subleading soft graviton,
after shadow transformation, becomes the antiholomorphic stress tensor of a
$2$D (celestial) CFT. So we expect that we should be able to write the OPE in
the MHV sector as a linear combination of $\overline{Vir}$ primary and
descendants. But, this is not what we find. Rather, the OPE is written in
terms of supertranslation and $\widehat{\overline{sl_{2}}}$ current algebra
descendants. So a natural question is, what happens to the antiholomorphic
stress tensor? In this paper, we provide an answer to this question.
We begin by revisiting the standard construction of the celestial stress
tensor from [6], defined as a shadow of the subleading soft graviton [4, 5]. In
its original formulation, there is an obstruction to reproducing the expected
$TT$ OPE in the double soft limit considered in [8]. With the insights of the
celestial diamond and nested primary descendants [27, 28, 29, 30] this can be
cleanly phrased in terms of the weight $\Delta=-1$ reparameterization mode
from which both the celestial stress tensor and the subleading soft graviton
descend. Here, we propose a modification to the standard definition [6] which
circumvents this obstruction and then extend this change of basis beyond the
conformally soft and single helicity sector. This procedure is reminiscent of
monodromy projections familiar from standard CFT in the context of extracting
contributions from local operator exchanges to the conformal blocks [31, 32].
In the process, we show how the (non)-commutativity of double soft limits
signals obstructions to the decoupling of primary descendants in correlators,
as well as how our choice of basis determines which symmetries are manifest at
the level of the OPE.
This paper is organized as follows. In section 2 we revisit double soft limits
of the subleading soft graviton in the positive helicity sector, where we
observe an obstruction to the standard stress tensor Ward identity. We can
avoid this by introducing a modified shadow stress tensor. We then extend this
modification beyond the conformally soft sector in section 3 and to mixed
helicity amplitudes in section 4. Finally we conclude with a summary in
section 5. Some mathematical background is included in Appendix A, while we
make further contact with other incarnations of the celestial stress tensor in
Appendix B, before showing more explicitly how our modified stress tensor
avoids the aforementioned obstruction in Appendix C.
Before proceeding let us set up some notation. Unless otherwise specified, we
will consider massless scattering in (1,3) signature where the external
momenta take the form
$p_{i}=\epsilon_{i}\omega_{i}(1+z_{i}{\bar{z}}_{i},z_{i}+{\bar{z}}_{i},i({\bar{z}}_{i}-z_{i}),1-z_{i}{\bar{z}}_{i}),$
(1.1)
where $\epsilon_{i}=\pm 1$ and $\omega_{i}\geq 0$. Upon performing a Mellin
transform in the frequency variables $\omega_{i}$ the $\mathcal{S}$-matrix
gets mapped to a correlator of operators
$\prod_{i=1}^{n}\int
d\omega_{i}\omega_{i}^{\Delta_{i}-1}A(p_{i})=\langle\phi^{\epsilon_{1}}_{h_{1},{\bar{h}}_{1}}(z_{1},{\bar{z}}_{1})...\phi^{\epsilon_{n}}_{h_{n},{\bar{h}}_{n}}(z_{n},{\bar{z}}_{n})\rangle,$
(1.2)
which transform as quasi-primaries of weight
$h_{i}=\frac{1}{2}(\Delta_{i}+J_{i}),{\bar{h}}_{i}=\frac{1}{2}(\Delta_{i}-J_{i})$
under the Lorentz group. Here $J_{i}$ matches the helicity of the $i^{th}$
particle in an ‘all out’ convention. In Euclidean CFTs, the shadow transform
acts as an intertwiner between representations with Weyl reflected weights
$h_{i}\mapsto 1-h_{i},~{}~{}\bar{h}_{i}\mapsto 1-\bar{h}_{i}$. We will use
$\tilde{\phi}^{\epsilon_{i}}_{h_{i},{\bar{h}}_{i}}$ to denote the operators
reached by composing the Mellin transform (1.2) with the 2D shadow transform.
Unless necessary we will suppress the label $\epsilon_{i}$ on the external
operators.
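For concreteness, the 2D shadow transform takes the schematic form below. This is the standard construction written in conventions consistent with the normalization used in section 2 (the overall constant $k_{h,\bar{h}}$ depends on conventions; for the subleading soft graviton, with $(h,\bar{h})=(1,-1)$, the holomorphic kernel trivializes and $k=3$ reproduces (2.3)). One checks from the scaling of the kernel that the result transforms with the Weyl-reflected weights $(1-h,1-\bar{h})$:

```latex
% 2D shadow transform of a quasi-primary of weight (h, \bar h):
\widetilde{\phi}_{h,\bar h}(z,\bar z)
  \;=\; \frac{k_{h,\bar h}}{\pi}\int d^{2}w\,
        (z-w)^{2h-2}\,(\bar z-\bar w)^{2\bar h-2}\,
        \phi_{h,\bar h}(w,\bar w),
% which carries weights (1-h, 1-\bar h).
```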
## 2 Symmetries of the Subleading Soft Graviton
One feature of the celestial map (1.2) is that it sends powers of $\omega$ in
the soft expansion of gauge bosons to poles in the conformal dimension at
integer values of $\Delta$ when we analytically continue from the principal
series capturing radiative states to the complex plane. We will focus on soft
limits of the positive helicity graviton in this section. The subleading soft
graviton is picked out as the residue at $\Delta=0$ of the spin $h-\bar{h}=2$
operator $\phi_{h,\bar{h}}$ corresponding to the positive helicity gravitons
$S(z,{\bar{z}})=\mathrm{Res}_{\Delta=0}\phi_{h,\bar{h}}(z,{\bar{z}}).$ (2.1)
The soft theorem [4] tells us that this limit has a universal form in
celestial correlators
$\langle{S(w,\bar{w})\prod_{i=1}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle=-\sum_{k=1}^{n}\frac{\left(\bar{z}_{k}-\bar{w}\right)^{2}\bar{\partial}_{k}+2\bar{h}_{k}\left(\bar{z}_{k}-\bar{w}\right)}{w-z_{k}}\langle{\prod_{i=1}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle.$
(2.2)
The operator $S$ has weights $(1,-1)$. If we take its shadow we land on a
$(0,2)$ operator
$\bar{T}(z,\bar{z})=\frac{3}{\pi}\int
d^{2}w\frac{1}{\left(\bar{z}-\bar{w}\right)^{4}}S(w,\bar{w}),$ (2.3)
which was proposed as a candidate stress tensor in [6]. Indeed, so long as all
of the other operators are hard one can show that
$\langle{\bar{T}(z,\bar{z})\prod_{i=1}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle=\sum_{k}\left(\frac{\bar{h}_{k}}{\left(\bar{z}-\bar{z}_{k}\right)^{2}}+\frac{1}{\bar{z}-\bar{z}_{k}}\frac{\partial}{\partial\bar{z}_{k}}\right)\langle{\prod_{i=1}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle$
(2.4)
matching the expected conformal Ward identity. However, we must revisit our
construction if we want to be able to consistently take a second particle
soft.
As discussed in [33, 34] one expects the double soft limits of amplitudes in
the single helicity sector to commute, while this is not the case for
particles of opposite helicities.222This can happen for spin-0 excitations
when the conformal manifold has nontrivial curvature. See [35] for a nice
recent exploration of marginal deformations of celestial CFTs. The issue that
we run into here arises from the appearance of the shadow transformation in
our definition of $\bar{T}$. We can phrase this problem in an intrinsically 2D
language, and its resolution will be closely related to the monodromy
projections one needs to perform to select the stress tensor vs shadow blocks
[32] in ordinary, non-celestial, CFTs. Namely, equation (2.3) implies the
following relationship between primary333Here we are talking about primaries
of the global conformal group. descendants of $S$ and $\bar{T}$
$\partial\bar{T}(z,\bar{z})=-\frac{1}{2}\bar{\partial}^{3}S(z,\bar{z}).$ (2.5)
From the single soft theorem we see that these descendants vanish away from
other hard operator insertions, namely
$\langle{\partial\bar{T}(z,\bar{z})\cdots}\rangle=\text{Contact Terms}$ (2.6)
and
$\langle{\bar{\partial}^{3}S(z,\bar{z})\cdots}\rangle=\text{Contact Terms}.$
(2.7)
We want to emphasize that (2.5) holds at the level of the wavefunctions [36,
3, 28, 29] so whenever expressions (2.6) and (2.7) are valid, the contact
terms in (2.6) and (2.7) are related by the factor of $-2$ in (2.5). What will
break down, however, is whether the operator equations (2.6) and (2.7)
continue to hold in the presence of other soft insertions.
#### A symmetry-based argument
Before going into further detail, let us give a simple symmetry-based argument
for why (2.6) and (2.7) cannot hold simultaneously as operator equations. In
other words, why we expect $\bar{\partial}^{3}S(z,\bar{z})=0$ to be violated
in a 2D CFT with an antiholomorphic stress tensor. The argument goes as
follows: Assume that we can consistently make one of the positive helicity
hard gravitons in (2.4) subleading soft. This then implies that $S$ is a
$\overline{Vir}$ primary of weight $\bar{h}=-1$. While $\bar{\partial}^{3}S$
is a primary of the global conformal group, it is not a $\overline{Vir}$
primary. Therefore the equation $\bar{\partial}^{3}S=0$ is not
$\overline{Vir}$ invariant and cannot hold in our 2D CFT.444We can restore the
condition of being a Virasoro primary at the expense of introducing the
superrotation Goldstone mode via the Weyl covariant derivatives in [37, 38].
This vacuum structure becomes important at loop level [39, 40] but should not
appear in our discussion of the amplitudes at tree level, where the $\Delta=2$
mode is observed to decouple.
This observation has implications for the symmetries of scattering amplitudes
and the celestial CFT. For example, one can interpret the subleading soft
graviton theorem (2.2) as the Ward identity for $\widehat{\overline{sl}}_{2}$
current algebra [16] and $S(z,\bar{z})$ as the generating function for the
corresponding $\widehat{\overline{sl}}_{2}$ currents. Our observation,
together with (2.5), leads to the conclusion that
If we admit both $S(z,\bar{z})$ and its shadow $\bar{T}(z,\bar{z})$ as local
operators in the theory then both the $\widehat{\overline{sl}}_{2}$ current
algebra and the $\overline{Vir}$ symmetries will be violated.
As a result, the whole $w_{1+\infty}$ tower of symmetries [19] will also be
absent from the celestial theory. We will discuss this and other consequences
later in the paper.
#### Shadows and Double Soft Limits
Let us now explicitly show that (2.5) implies an inconsistency of combining
consecutive soft limits with the shadow operation and expecting a conservation
law for tree level amplitudes. Starting from the stress tensor insertion
(2.4), if we now send the conformal dimension of one of the hard particles to
$0$ we see that the form of the prefactor on the right hand side of (2.4)
implies there will be non-contact terms in the following level-3 descendant
$\displaystyle\langle{\bar{T}(z,\bar{z})\bar{\partial}^{3}S(w,{\bar{w}})\prod_{i=2}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle$
(2.8)
$\displaystyle=\left(-\frac{24}{\left(\bar{z}-\bar{w}\right)^{5}}-\frac{12}{\left(\bar{z}-\bar{w}\right)^{4}}\frac{\partial}{\partial\bar{w}}\right)\langle
S(w,{\bar{w}}){\prod_{i=2}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle+\text{Contact
Terms},$
where the prefactor comes from computing
$\left[\partial_{\bar{w}}^{3},\left(\frac{-1}{\left(\bar{z}-\bar{w}\right)^{2}}+\frac{1}{\bar{z}-\bar{w}}\frac{\partial}{\partial\bar{w}}\right)\right]=\left(-\frac{24}{\left(\bar{z}-\bar{w}\right)^{5}}-\frac{12}{\left(\bar{z}-\bar{w}\right)^{4}}\frac{\partial}{\partial\bar{w}}\right)$
(2.9)
and we have used $\bar{h}=-1$ for the operator $S$. We know from (2.2) that
this correlator has support for $(w,{\bar{w}})$ away from other operator
insertions. Namely the primary descendant $\bar{\partial}^{3}S$ no longer
vanishes as an operator in the presence of a celestial stress tensor
insertion. Using our descendancy identity (2.5) we see that
$\displaystyle\langle{\bar{T}(z,\bar{z})\partial\bar{T}(w,{\bar{w}})\prod_{i=2}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle$
(2.10)
$\displaystyle=\left(\frac{12}{\left(\bar{z}-\bar{w}\right)^{5}}+\frac{6}{\left(\bar{z}-\bar{w}\right)^{4}}\frac{\partial}{\partial\bar{w}}\right)\langle
S(w,{\bar{w}}){\prod_{i=2}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle+\text{Contact
Terms}.$
The presence of this non-contact term serves as an obstruction to the OPE for
$\bar{T}$ that would be expected from the BPZ construction. By contrast,
repeating the same manipulations starting from (2.4) we would run into no such
obstruction to
$\displaystyle\langle{S(z,\bar{z})\bar{\partial}^{3}S(w,{\bar{w}})\prod_{i=2}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle=\text{Contact
Terms}$ (2.11)
and similarly
$\displaystyle\langle{S(z,\bar{z})\partial\bar{T}(w,{\bar{w}})\prod_{i=2}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle=\text{Contact
Terms}$ (2.12)
consistent with what we expect from the fact that soft limits of same helicity
gravitons commute and the descendancy relation (2.5), though this will no
longer be the case if we also include the opposite helicity soft sector.
Equation (2.11) tells us we can construct a consistent
$\widehat{\overline{sl}}_{2}$ current algebra from the positive helicity
subleading soft graviton, while equation (2.10) tells us that the shadow
stress tensor $\bar{T}$ does not generate a consistent $\overline{Vir}$
symmetry as written, due to an obstruction that arises when more than one
operator goes soft.
#### A Modified Stress Tensor
In the remainder of this section we will show that we can provide a
modification to our definition for the stress tensor so that we get a
consistent $\overline{Vir}$ symmetry. To do so we will take inspiration from
the ‘celestial diamond’ framework of [30, 29] and discussion of gauge
redundancies amongst nested primary descendants in [27, 28]. Namely, we will
start by lifting both $S$ and $\bar{T}$ to a weight $(0,-1)$ operator
$\epsilon$
$S=\partial\epsilon,~{}~{}~{}\bar{T}=-\frac{1}{2}\bar{\partial}^{3}\epsilon$
(2.13)
whose correlators take the form
$\displaystyle\langle{\epsilon(z,\bar{z})\prod_{i=1}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle$
(2.14)
$\displaystyle=-\sum_{k=1}^{n}\left(\ln\mu^{2}|z-z_{k}|^{2}\right)\\{\left(\bar{z}_{k}-\bar{z}\right)^{2}\bar{\partial}_{k}+2\bar{h}_{k}\left(\bar{z}_{k}-\bar{z}\right)\\}\langle{\prod_{i=1}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle,$
where $\mu$ is an IR regulator of the 2D model. Note that the $\mu$-dependence
drops out because of the global Ward identity
$\sum_{k=1}^{n}\\{\left(\bar{z}_{k}-\bar{z}\right)^{2}\bar{\partial}_{k}+2\bar{h}_{k}\left(\bar{z}_{k}-\bar{z}\right)\\}\langle{\prod_{i=1}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle=0.$
(2.15)
One can see that (2.14) reproduces both (2.2) and (2.4) upon taking the
appropriate derivatives (2.13). Moreover (2.5) follows automatically from
(2.13). The operator $\epsilon$ has the same weights as a reparameterization
mode. While we will postpone this interpretation for the time being, the
manipulations surrounding monodromy projections and distinguishing between
local operator versus shadow exchanges, as encountered in [32], are relevant
to this story.555In the celestial context there are naturally two
symplectically paired ‘memory’ and ‘Goldstone’ modes. The diamond descending
from $\epsilon$ measures the spin memory effect [41], while the paired
superrotation Goldstone mode is the more natural ‘reparameterization’ operator
[42, 43] and descends to its own diamond of SL$(2,\mathbb{C})$ primaries.
Focusing on the memory modes, if we consider splitting the correlator (2.14)
into two parts by replacing
$\log\mu^{2}|z-z_{k}|^{2}=\log\mu(z-z_{k})+\log\mu({\bar{z}}-{\bar{z}}_{k}),$
(2.16)
we see that the presence of both terms guarantees that $S$ and its shadow
$\bar{T}$ both descend from this top corner of the diamond. With either the
first or the second term on its own, we can essentially kill either the left
or the right corner of the celestial diamond. These different projections can
be phrased as gauging one of two transformations [32]
$[\epsilon(z,{\bar{z}})]_{S}=\epsilon(z,{\bar{z}})+\Lambda({\bar{z}}),~{}~{}~{}~{}[\epsilon(z,{\bar{z}})]_{\bar{T}}=\epsilon(z,{\bar{z}})+\Lambda_{0}(z)+\Lambda_{1}(z){\bar{z}}+\Lambda_{2}(z){\bar{z}}^{2}.$
(2.17)
However, the additional subtlety we encounter here is that doing such a
monodromy projection of the correlator not only impacts our prescription for
both the $\epsilon$ operator explicitly shown in (2.14) but also the hard
operators whose conformally soft limits are related to $\epsilon$ by (2.13).
In the celestial context we are also interested in understanding the correct
external operators rather than just how to project their correlators onto
‘physical’ exchanges when we decompose them into lower point amplitudes. This
lends itself to a slightly different phrasing of the problem. As we will
discuss in the next section, being able to consistently take the conformally
soft limit implies a similar restriction on what hard operators we can insert.
We can choose a different Green’s function so that
$S=\partial\epsilon_{S},~{}~{}~{}\langle\bar{\partial}^{3}\epsilon_{S}(w,{\bar{w}})\prod_{i=1}^{n}\phi_{h_{i},{\bar{h}}_{i}}(z_{i},{\bar{z}}_{i})\rangle=\text{Contact
Terms},$ (2.18)
where the left hand equation holds away from other operator insertions and the
right hand equation assumes all other operator insertions are hard. More
concretely, assuming that we can analytically continue our expression for the
soft theorem (2.2) off of the celestial sphere $z^{*}={\bar{z}}$, we can
define the following non-local operator
$\epsilon_{S}=\int_{z_{0}}^{z}dwS(w,{\bar{z}}),$ (2.19)
where the open contour runs between a reference point, $z_{0}$, and $z$. Since
$S$ has weight $h=1$ the integrand has the appropriate left-handed weight. The
correlation function of $\epsilon_{S}$ is given by
$\displaystyle\langle{\epsilon_{S}(z,\bar{z})\prod_{i=1}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle$
(2.20)
$\displaystyle=-\sum_{k=1}^{n}\ln\mu\left(z-z_{k}\right)\\{\left(\bar{z}_{k}-\bar{z}\right)^{2}\bar{\partial}_{k}+2\bar{h}_{k}\left(\bar{z}_{k}-\bar{z}\right)\\}\langle{\prod_{i=1}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle.$
Now defining $\bar{T}$ via the usual shadow definition (2.3), we can construct
the following candidate for a modified stress tensor666Note that it is
compatible to view our correction term as removing a contribution from the
hard charge, a weight $\Delta=3$, $J=-1$ operator constructed from the matter
fields
$\partial\bar{T}=j_{3,-1}~{}~{}\Leftrightarrow~{}~{}\partial(\bar{T}-\partial^{-1}j_{3,-1})=0,$
(2.21)
so that the Ward identity enforces conservation of the modified stress
tensor, identically [44].
$\bar{T}_{\rm mod}=\bar{T}+\frac{1}{2}\bar{\partial}^{3}\epsilon_{S},~{}~{}~{}\partial\bar{T}_{\rm mod}=0.$
(2.22)
Its conservation follows from (2.5) and the first equation in (2.18), while
the fact that it reduces to $\bar{T}$ in the absence of other soft insertions
follows from the second equation in (2.18). If we repeat the same
manipulations that led to the obstruction (2.10) with $\bar{T}$
replaced by $\bar{T}_{\rm mod}$, we indeed find that the non-contact terms
vanish, as expected from the operator equation (2.18).777See appendix C for
the details. However, the analog of (2.11) implies that
$\bar{\partial}^{3}S$ is non-vanishing in the presence of a $\bar{T}_{\rm
mod}$ insertion, so, much like in the monodromy projection description, one
gives up the $\widehat{\overline{sl}}_{2}$ symmetry to get a manifest
$\overline{Vir}$ symmetry.
## 3 A Modified ‘Shadow Basis’
The essence of our construction of $\bar{T}_{\rm mod}$ boiled down to exploiting an
ambiguity in the Green’s functions lifting us from the subleading soft
graviton to the reparameterization mode. This nested structure of primary
descendants is known to persist for other conformally soft operators with
(negative) integer weights. While we now have a prescription for the stress
tensor that reproduces the correct (centerless) Virasoro Ward identity for
both single and double soft insertions, there is still a tension with the fact
that all of the other operators are un-shadowed. Even at the level of matching
the single soft insertions, we are not shadow transforming all operators
simultaneously, but constructing a highly non-local object only for
conformally soft modes. In this section we discuss how to generalize our
construction of $T_{mod}$ beyond the subleading conformally soft limit to
general complex weights.
Let’s start with the integer point case. We will focus on the holomorphic
sector here, though an analogous discussion holds for the anti-holomorphic
case. See also the discussion in Appendix A. Suppose
$\phi_{h}(z)$ is an SL$(2,\mathbb{C})$ primary. Namely,
$L_{0}\phi_{h}=h\phi_{h},\ L_{1}\phi_{h}=0.$ (3.1)
The representation is spanned by the descendants $L_{-1}^{n}\phi_{h}$. A
‘primary descendant’ will be a state in this module such that
$\psi_{n}=L_{-1}^{n}\phi_{h},~{}~{}~{}L_{1}\psi_{n}=0.$ (3.2)
Applying the standard commutation relations we get
$L_{1}\psi_{n}=n\left(n+2h-1\right)L_{-1}^{n-1}\phi_{h}.$ (3.3)
Therefore a non-trivial ‘primary descendant’ exists for the unique value
$n=1-2h.$ (3.4)
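For completeness, (3.3) follows from iterating the standard commutators
$[L_{1},L_{-1}]=2L_{0}$ and $[L_{0},L_{-1}]=L_{-1}$, together with (3.1):
$L_{1}\psi_{n}=[L_{1},L_{-1}^{n}]\phi_{h}=2\sum_{m=0}^{n-1}L_{-1}^{m}L_{0}L_{-1}^{n-1-m}\phi_{h}=2\sum_{m=0}^{n-1}(h+n-1-m)L_{-1}^{n-1}\phi_{h}=n(n+2h-1)L_{-1}^{n-1}\phi_{h},$
using $L_{0}L_{-1}^{k}\phi_{h}=(h+k)L_{-1}^{k}\phi_{h}$. The coefficient vanishes
precisely at $n=1-2h$, which is (3.4).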
Now, in terms of the fields, we can write
$\psi_{n}(z)=\partial^{1-2h}\phi_{h}(z).$ (3.5)
Since $h$ takes on the special (half) integer value (3.4), this is an ordinary
derivative. However, we can try to analytically continue this to an arbitrary
complex number $h$.
#### An Integral Representation
Let us consider the differential equation
$\frac{d^{n}}{dz^{n}}g_{n}(z)=g(z).$ (3.6)
One solution to (3.6) can be written as
$g_{n}(z)=\frac{1}{\Gamma(n)}\int_{a}^{z}dw(z-w)^{n-1}g(w).$ (3.7)
Here $g_{n}(z)$ represents the $n$-fold integral of the function $g(z)$.
Formally we can invert this by taking $n\mapsto-n$ in which case the $n$-fold
derivative of the function $g(z)$ is given by
$\frac{d^{n}}{dz^{n}}g(z)=\lim\limits_{\epsilon\rightarrow
0}\frac{1}{\Gamma(\epsilon-n)}\int_{a}^{z}dw(z-w)^{\epsilon-n-1}g(w).$ (3.8)
Here $n$ is still an integer. Analytically continuing this to general $n=1-2h$
we can formally write
$\partial^{1-2h}\phi_{h}(z)=\frac{1}{\Gamma(2h-1)}\int_{z_{0}}^{z}\frac{dw}{(z-w)^{2-2h}}\phi_{h}(w)\equiv\phi^{\prime}_{1-h}(z_{0},z).$
(3.9)
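As a sanity check on this continuation, the kernel in (3.7) can be evaluated for non-integer $n$ as well: acting on a monomial $g(w)=w^{m}$ with base point $a=0$, a Beta-function integral gives $\frac{\Gamma(m+1)}{\Gamma(m+n+1)}z^{m+n}$ for integer and fractional $n$ alike. A minimal numerical sketch (the helper names and discretization are ours, not from the text):

```python
from math import gamma

def repeated_integral(g, z, n, a=0.0, N=20000):
    """(1/Gamma(n)) * int_a^z (z-w)**(n-1) g(w) dw, eq. (3.7), any real n > 0.
    For n < 1 substitute s = (z-w)**n to remove the endpoint singularity,
    then apply composite Simpson's rule."""
    if n >= 1.0:
        f = lambda w: (z - w) ** (n - 1) * g(w)
        lo, hi, norm = a, z, gamma(n)
    else:
        f = lambda s: g(z - s ** (1.0 / n))
        lo, hi, norm = 0.0, (z - a) ** n, gamma(n + 1)
    h = (hi - lo) / N
    total = sum((1 if i in (0, N) else 4 if i % 2 else 2) * f(lo + i * h)
                for i in range(N + 1))
    return total * h / 3.0 / norm

def closed_form(z, n, m):
    # n-fold integral of w**m from 0: Gamma(m+1)/Gamma(m+n+1) * z**(m+n)
    return gamma(m + 1) / gamma(m + n + 1) * z ** (m + n)

z, m = 1.5, 3
for n in (2.0, 3.0, 0.5, 1.7):  # integer and fractional orders
    assert abs(repeated_integral(lambda w: w ** m, z, n) - closed_form(z, n, m)) < 1e-6
```

For $n<1$ the substitution tames the integrable singularity at $w=z$, the same endpoint behavior that produces the branch cut discussed around (3.9).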
The choice of a reference point $z_{0}$ explicitly breaks the expected global
conformal covariance, since this is clearly no longer a local operator. It
will be convenient to take our reference point $z_{0}\rightarrow\infty$. While
the point at infinity is not preserved by the global conformal group, we do
have some control over the behavior of the scattering amplitudes at large
angular separation. Indeed, taking $z_{0}\rightarrow\infty$ keeping $z$ fixed,
we expect $\phi_{h}(z_{0})\rightarrow\frac{1}{z_{0}^{2h}}$. Meanwhile,
$\frac{\partial}{\partial
z_{0}}\phi^{\prime}_{1-h}(z_{0},z)=-\frac{1}{\Gamma(2h-1)}\frac{\phi_{h}(z_{0})}{(z-z_{0})^{2-2h}}$
(3.10)
so the leading term in $z_{0}$ is given by
$\frac{\partial}{\partial
z_{0}}\phi^{\prime}_{1-h}(z_{0},z)\sim\frac{1}{z_{0}^{2}}\rightarrow 0,\
z_{0}\rightarrow\infty.$ (3.11)
It is natural to call (3.9) an ‘incomplete light ray operator.’ The integrand
is the same as for the light ray operators appearing in [45, 46, 47] (and is
the holomorphic ‘half’ of the shadow kernel), but the integral extends over
only half the range.
#### Celestial Diamond Redux
Now let us see how these appear in our construction of a modified shadow
basis. We will need to restore the dependence on ${\bar{z}}$. Consider a
primary field $\phi_{h,\bar{h}}$ and its shadow
$\tilde{\phi}_{1-h,1-\bar{h}}$. For generic weights we will define our shadow
transform as follows
$\displaystyle\widetilde{\mathcal{O}_{h,\bar{h}}}(w,\bar{w})=\frac{K_{h,\bar{h}}}{2\pi}\int
d^{2}w^{\prime}\frac{\mathcal{O}_{h,\bar{h}}(w^{\prime},{\bar{w}}^{\prime})}{(w-w^{\prime})^{2-2h}({\bar{w}}-{\bar{w}}^{\prime})^{2-2\bar{h}}}\,.$
(3.12)
The normalization constant $K_{h,\bar{h}}$ was chosen to be
$K_{h,\bar{h}}=h+\bar{h}+|h-\bar{h}|-1$ in [3, 29] so that
$\widetilde{\widetilde{\mathcal{O}_{h,\bar{h}}}}=(-1)^{2(h-\bar{h})}\mathcal{O}_{h,\bar{h}}$,
matching the conventions of [48]. Meanwhile
$K_{h,\bar{h}}=\frac{\Gamma(2-2\bar{h})}{\Gamma(2h-1)}$ in [49]. We can leave
it unspecified for the time being, so as to keep track of which statements are
independent of this convention.
To connect to our discussion above, we need to understand the limiting
behavior near integer points. The identity
$\partial_{z}\frac{1}{\bar{z}}=\pi\delta^{(2)}(z)$ (matching the conventions
of [49]) tells us that for special integer weights the shadow kernel becomes a
Green’s function [29], namely
$\partial_{{\bar{w}}^{\prime}}^{\bar{k}}\frac{({\bar{w}}^{\prime}-{\bar{w}})^{\bar{k}-1}}{(w^{\prime}-w)^{k+1}}=\pi(\bar{k}-1)!\frac{(-1)^{k}}{k!}\partial_{w^{\prime}}^{k}\delta^{(2)}(w^{\prime}-w)\,,$
(3.13)
and similarly for the holomorphic sector for $k$ and $\bar{k}$ flipped.
Meanwhile from Gelfand [50] we know that
$\left[\frac{z^{\lambda}{\bar{z}}^{\mu}}{\Gamma\left(\frac{1}{2}(\mu+\lambda)+\frac{1}{2}|\mu-\lambda|+1\right)}\right]_{\overset{\lambda=-k-1}{\mu=-l-1}}=\pi\frac{(-1)^{k+l+j}j!}{k!l!}\delta^{(k,l)}(z,{\bar{z}}),$
(3.14)
where $j=\frac{1}{2}(\mu+\lambda)-\frac{1}{2}|\mu-\lambda|=\min\\{k,l\\}$. We
will explore these integer point representations in more detail in Appendix A;
however, the essence of the celestial diamond story we need here is that the
shadow kernel can be interpreted as either a Green’s function or a differential
operator taking us between conformal primaries with Weyl reflected weights
[29, 30] (see also [20]). We can carry these observations away from the
integer points by treating the shadow and light transforms as
pseudo-differential operators.
The pseudo-differential operators introduced above are such that as we limit
to integer points we have the following descendancy relation
$(-1)^{2h-1}\Gamma(2-2h)\partial^{2h-1}\tilde{\phi}_{1-h,1-\bar{h}}=K_{h,\bar{h}}\Gamma(2\bar{h}-1)\bar{\partial}^{1-2\bar{h}}\phi_{h,\bar{h}}$
(3.15)
generalizing (2.5). The analog of (2.18) is to introduce a field
$f_{1-h,\bar{h}}$ such that
$\displaystyle\phi_{h,\bar{h}}=(-1)^{2h-1}\Gamma(2-2h)\partial^{2h-1}f_{1-h,\bar{h}},~{}~{}~{}\langle\bar{\partial}^{1-2\bar{h}}f_{1-h,\bar{h}}\prod_{i=1}^{n}\phi_{h_{i},{\bar{h}}_{i}}(z_{i},{\bar{z}}_{i})\rangle=\text{Contact
Terms}$
(3.16)
so that the analog of (2.22) is
$\tilde{\phi}_{1-h,1-\bar{h};mod}=\tilde{\phi}_{1-h,1-\bar{h}}-K_{h,\bar{h}}\Gamma(2\bar{h}-1)\bar{\partial}^{1-2\bar{h}}f_{1-h,\bar{h}},$
(3.17)
while (3.15) guarantees the analog of the conservation law
$\partial\bar{T}_{mod}=0$, implying the primary descendants that appear at
integer points decouple. In terms of our explicit integral expressions for
these pseudo-differential operators we have
$\displaystyle\tilde{\phi}_{1-h,1-\bar{h};mod}=\frac{K_{h,\bar{h}}}{2\pi}\left[\int_{\hat{\mathbb{C}}}d^{2}w+\frac{i}{2}(1-e^{2\pi
i(2-2h)})\int_{-\infty}^{z}dw\int_{-\infty}^{\bar{z}}d{\bar{w}}\right]\frac{{\phi}_{h,\bar{h}}(w,{\bar{w}})}{(z-w)^{2-2h}({\bar{z}}-{\bar{w}})^{2-2{\bar{h}}}}\,,$
(3.18)
where
$K_{h,\bar{h}}=\frac{\Gamma(2-2\bar{h})}{\Gamma(2h-1)}=\frac{\Gamma(2-2h)}{\Gamma(2\bar{h}-1)},~{}~{}~{}h-\bar{h}=\pm
2.$ (3.19)
The formula is the same for both positive and negative helicity
gravitons.888For $\Delta=1-|J|-n$ for $n\in\mathbb{Z}_{>}$ we expect both
holomorphic and antiholomorphic primary descendants from the representation
theory, however we see that this would appear to involve an additional
subtraction as compared to (3.18).
We see that if we complexify the shadow integral kernel, for generic weights
there is a branch cut that extends from $w=z$ to $w=\infty$ in the complex $w$
plane and similarly, from ${\bar{w}}={\bar{z}}$ to ${\bar{w}}=\infty$ in the
complex ${\bar{w}}$ plane. The monodromy around $z=w$ is the phase $e^{2\pi
i(2h-2)}$. Taking this into account, we can recognize the normalization as
arising from the discontinuity across the branch cut in the complex $w$ plane
for a particular orientation of the contour. We won’t pursue this
interpretation further here, but will revisit the complexification of the
celestial sphere to $\mathbb{CP}^{1}\times\mathbb{CP}^{1}$ in appendix B in
the context of comparing our subtraction to other constructions in the
literature.
Now we’ve seen that by construction $\partial^{2h-1}\tilde{\phi}_{mod}$
vanishes in correlators. Because of (3.18) and (3.15), this is true for the
contact terms as well, so we are free to smear our correlators. Rather than
restrict to the other operators being of $\phi_{h,{\bar{h}}}$ type, they can
also be shadowed or modified shadow operators
$\langle\partial^{2h-1}\tilde{\phi}_{1-h,1-\bar{h};mod}\tilde{\phi}_{1-h_{i},1-\bar{h}_{i};mod}\ldots\rangle={\rm
Contact~{}Terms}.$ (3.20)
In particular, for $h=1$ and $\bar{h}=-1$ this reduces to the conservation law
for $\bar{T}_{mod}$ in these modified correlators
$\langle\partial\bar{T}_{mod}\tilde{\phi}_{1-h_{i},1-\bar{h}_{i};mod}\ldots\rangle={\rm
Contact~{}Terms}.$ (3.21)
In these expressions we’ve assumed that the $...$ include no other soft
limits of the opposite helicity sector. We will now turn to the generic mixed
helicity case.
## 4 Mixed Helicity Amplitudes: A Proposal
Let us now go beyond the single helicity sector. The conventional double soft
theorems for positive and negative helicity gravitons are known to have
ambiguities [34]. Here we will see that sequential conformally soft limits
present the same type of obstruction to the decoupling of primary descendants.
Again we will start by focusing on the subleading soft graviton. Let $\bar{S}$
denote the $\Delta=0,J=-2$ soft graviton extracted as in (2.1) above, but for
the negative helicity sector. If we consider a mixed helicity correlator and
take a sequential conformal soft limit where the $+$ helicity operator is
taken to $\Delta=0$ first, we find
$\begin{gathered}\langle{S(w_{1},\bar{w}_{1})\bar{S}(w_{2},\bar{w}_{2})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle_{-+}\\\
=-\frac{\left(\bar{w}_{2}-\bar{w}_{1}\right)^{2}\bar{\partial}_{w_{2}}+2\left(\bar{w}_{2}-\bar{w}_{1}\right)}{w_{1}-w_{2}}\langle{\bar{S}(w_{2},\bar{w}_{2})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle\\\
-\sum_{i}\frac{\left(\bar{z}_{i}-\bar{w}_{1}\right)^{2}\bar{\partial}_{i}+2\bar{h}_{i}\left(\bar{z}_{i}-\bar{w}_{1}\right)}{w_{1}-z_{i}}\langle{\bar{S}(w_{2},\bar{w}_{2})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle,\end{gathered}$
(4.1)
while taking the opposite order gives
$\begin{gathered}\langle{S(w_{1},\bar{w}_{1})\bar{S}(w_{2},\bar{w}_{2})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle_{+-}\\\
=-\frac{\left(w_{1}-w_{2}\right)^{2}\partial_{w_{1}}+2\left(w_{1}-w_{2}\right)}{\bar{w}_{2}-\bar{w}_{1}}\langle{S(w_{1},\bar{w}_{1})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle\\\
$-\sum_{i}\frac{\left(z_{i}-w_{2}\right)^{2}\partial_{i}+2h_{i}\left(z_{i}-w_{2}\right)}{\bar{w}_{2}-\bar{z}_{i}}\langle{S(w_{1},\bar{w}_{1})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle.\end{gathered}$
(4.2)
While one can further plug in the form of the soft theorem to evaluate the
remaining correlators in these expressions and see that $(4.1)\neq(4.2)$,
we’ve paused here because we already encounter an obstruction to the null
state decoupling we’d expect. Namely,
from the single soft limits we expect the primary descendants
$\bar{\partial}^{3}S$ and $\partial^{3}\bar{S}$ to decouple. However we see
that in (4.1) we have $\bar{\partial}^{3}S(z,\bar{z})=0$ but
$\partial^{3}\bar{S}(z,\bar{z})\neq 0$, while the opposite is true for (4.2).
Any attempt to try to evade the ambiguity by specifying a priori which order
of soft limits to take still comes at a cost. In the $(-+)$ case we preserve
the $\widehat{\overline{sl}}_{2}$ current algebra but lose the
$\widehat{sl}_{2}$ current algebra. Similarly in the $(+-)$ case the
$\widehat{sl}_{2}$ current algebra still holds but we lose the
$\widehat{\overline{sl}}_{2}$ symmetry. The fact that we are unable to
preserve both decoupling conditions at the same time is perhaps not as
surprising given that, taken together, the two current algebras do not form a
closed algebra. Meanwhile we don’t have obvious candidates for the additional
generators of Diff$(S^{2})$ (see also [51]). Phrased in this manner, we
recognize this as the same type of problem that we started with in the single
helicity sector. Namely, just as we saw from the double soft theorems that we
couldn’t consistently realize the $\overline{Vir}$ and the
$\widehat{\overline{sl}}_{2}$ current algebra symmetries at the same time, we
similarly need to look for additional generators since the
$\widehat{\overline{sl}}_{2}$ current algebra and $\overline{Vir}$ do not form
a closed algebra.
By contrast the $\overline{Vir}$ and $\widehat{sl}_{2}$ generators form a
closed algebra. The same is true for $Vir$ and $\widehat{\overline{sl}}_{2}$
(and also for $Vir\times\overline{Vir}$). Can we use this to augment our soft
algebras beyond the single helicity sector? Let’s return for a moment to the
expression for the $(-+)$ limit given by (4.1), and now do a shadow
transformation of $S(w_{1},\bar{w}_{1})$
$\begin{gathered}\langle{\bar{T}(w_{1},\bar{w}_{1})\bar{S}(w_{2},\bar{w}_{2})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle_{-+}\\\
=\left(\frac{1}{\left(\bar{w}_{1}-\bar{w}_{2}\right)^{2}}+\frac{1}{\bar{w}_{1}-\bar{w}_{2}}\frac{\partial}{\partial\bar{w}_{2}}\right)\langle{\bar{S}(w_{2},\bar{w}_{2})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle\\\
+\sum_{i}\left(\frac{\bar{h}_{i}}{\left(\bar{w}_{1}-\bar{z}_{i}\right)^{2}}+\frac{1}{\bar{w}_{1}-\bar{z}_{i}}\frac{\partial}{\partial\bar{z}_{i}}\right)\langle{\bar{S}(w_{2},\bar{w}_{2})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle.\end{gathered}$
(4.3)
We see that the problem we encountered above goes away, namely
$\partial^{3}\bar{S}=0$ and $\partial\bar{T}=0$. One can check that the same
is true if we started with the opposite choice $(+-)$ of conformally soft
limits. Now of course from the previous section we know that we want to
promote $\bar{T}\mapsto\bar{T}_{mod}$ to get a consistent $\overline{Vir}$
algebra but those statements were within that single helicity sector.
Taking a step back, we have seen that consecutive soft limits in the mixed
helicity sector run into a problem because of the fact that we have both
holomorphic and antiholomorphic poles. Equation (4.3) suggests that one way to
resolve this issue will be to consider, instead of the standard [52] celestial
amplitudes, amplitudes of the form
$\langle{+++\ldots\widetilde{-}_{mod}\widetilde{-}_{mod}\widetilde{-}_{mod}\ldots}\rangle$
(4.4)
and
$\langle{---\ldots\widetilde{+}_{mod}\widetilde{+}_{mod}\widetilde{+}_{mod}\ldots}\rangle$
(4.5)
where one helicity sector has been mod-shadowed before taking any soft limit.
In the first case, we expect that the symmetry that is realized locally is at
least as big as the semi-direct product of ${Vir}$ and $w_{1+\infty}$ and,
similarly, in the second case it is the semi-direct product of
${\overline{Vir}}$ and $\overline{w}_{1+\infty}$. In this paper, while we will
not try to give a proof of this proposal, we end this section by pointing out
some of its novel features.
The main point of our proposal is that the symmetry that is locally realized
at the level of $\cal S$-matrix depends on the choice of basis for the
asymptotic states. This may not be very surprising given that the basis
transformation which takes us from (4.4) to (4.5) is non-local. In one
description we declare the mod-shadowed operators $\widetilde{+}_{mod}$ as
local and in the other description, the conventional $+$ operators are local.
But, there is no useful description in which both the $+$ and
$\widetilde{+}_{mod}$ coexist as local operators. This is consistent with our
observation in section 2 that if we admit both $S(z,\bar{z})$ and
$\bar{T}(z,\bar{z})=\tilde{S}(z,\bar{z})$ as local operators in the theory
then we lose both the $\widehat{\overline{sl_{2}}}$ and $\overline{Vir}$
symmetries.
A relatively simple check of our proposal can be made in the MHV sector. First
of all, it was shown in [53] that the MHV sector has a symmetry group which is
the semidirect product of $Vir$ and $w_{1+\infty}$. This is consistent with
our proposal here. Furthermore, we know that when the MHV sector is described
by the standard amplitudes $\langle{--+++\cdots+}\rangle$, the celestial OPE
of two gravitons can be written [16] in terms of supertranslation and
$\widehat{\overline{sl}}_{2}$ current algebra descendants. According to our
proposal if instead, we describe the MHV sector by amplitudes of the form
$\langle{--\widetilde{+}_{mod}\widetilde{+}_{mod}\cdots\widetilde{+}_{mod}}\rangle$
then one should be able to write the celestial OPE between two gravitons as a
linear combination of supertranslation and $\overline{Vir}$ descendants. This
will be a nontrivial check of our proposal and we hope to return to some of
these problems in the near future.
## 5 Discussion
Since the early days of celestial CFT, there has been an underlying debate
about which scattering basis in the bulk should correspond to the local
operators in the celestial dictionary. From the point of view of the global
Lorentz symmetry any intertwiner between primaries should give an equivalent
basis. However, the extrapolate dictionary [30] seems to prefer the Mellin
transformed basis because of the way that the momentum space and position
space celestial spheres get identified under the large-$r$ saddle point near
null infinity [54]. The enhancement from global SL$(2,\mathbb{C})$ to a
Virasoro symmetry would seem to settle this debate if it were not for the fact
that the symmetry generators are constructed from a non-local shadow
transform, while their Ward identities transforming hard operators demand that
those are defined in terms of the ordinary Mellin transformed basis.
Here we’ve shown that there is a tension with the definition of the stress
tensor as the shadow operator and shown how to remedy it. The underlying
reason for this tension is more general than our CCFT application: an operator
and its shadow can’t both be local. However the fact that we are necessarily
confronted with it is unique to the celestial case. We cannot separately
prescribe different dictionaries for the hard and soft modes without being
able to interpolate between the two when we continuously deform our hard
operator dimensions to various conformally soft limits. In particular, the
Virasoro symmetry picks out local operators for hard particles but wants to
have its cake and eat it too, with the soft graviton mode defining the stress
tensor.
Our modification involves a certain projection of one half of the celestial
diamonds so that there is only one radiative soft mode. We then showed how to
extend this to operators of any weight, so that we can go to a basis where the
corrected shadow transform is again a conformally soft limit. In the end,
we’ve seen that different bases make different symmetries manifest at the
level of the celestial OPE and explored how certain helicity-asymmetric
representations of the amplitudes can be better suited to taking multi-soft
limits. These explorations expand upon a variety of exciting approaches to
elucidate the symmetries and structure of the 4D $\mathcal{S}$-matrix [55, 16,
8, 18, 19, 46, 38, 20, 51, 56].
### Acknowledgements
We would like to thank Yangrui Hu, Ashoke Sen, Atul Sharma, Andrew Strominger,
Tomasz Taylor and Herman Verlinde for useful conversations. The work of SB is
partially supported by the Swarnajayanti Fellowship (File No-
SB/SJF/2021-22/14) of the Department of Science and Technology and SERB, India
and by SERB grant MTR/2019/000937 (Soft-Theorems, S-matrix and Flat-Space
Holography). SP is supported by the Celestial Holography Initiative at the
Perimeter Institute for Theoretical Physics and has been supported by the Sam
B. Treiman Fellowship at the Princeton Center for Theoretical Science.
Research at the Perimeter Institute is supported by the Government of Canada
through the Department of Innovation, Science and Industry Canada and by the
Province of Ontario through the Ministry of Colleges and Universities.
## Appendix A Lessons from Gelfand
In this appendix we briefly summarize some results from volumes 1 and 5 of
Gelfand’s series on generalized functions [50, 57]. In particular, we will
want to review his construction of homogeneous generalized functions and the
behavior of our SL$(2,\mathbb{C})$ primaries near integer conformal dimensions
so that we can understand various contact terms that appear and how to handle
their analytic continuations to other signatures.
### A.1 Generalized Homogeneous Functions
Appendix B of Gelfand Volume 1 [50] discusses generalized functions of a
complex variable. These are relevant for our understanding of the
distributions that appear when we take descendants at special values of the
conformal dimension. In particular our mode expansion of the conformal
primaries involves so-called homogeneous generalized functions, whose
definitions and properties we review here.
Following [50], a function $F(z,{\bar{z}})$ is a homogeneous function of
degree $(\lambda,\mu)$ if for $a\in\mathbb{C}-\\{0\\}$
$F(az,\bar{a}{\bar{z}})=a^{\lambda}\bar{a}^{\mu}F(z,{\bar{z}}).$ (A.1)
This can be promoted to a generalized function by considering the following
functional
$(F,\varphi)=\int d^{2}zF(z,{\bar{z}})\varphi(z,{\bar{z}})$ (A.2)
for an appropriate set of test functions $\varphi$ with bounded support so
that we are free to integrate by parts
$(f,\varphi^{(j,k)})=(-1)^{j+k}(f^{(j,k)},\varphi).$ (A.3)
In particular, via a change of variables we see that for our homogeneous
functions we have
$(F,\varphi(z/a,{\bar{z}}/\bar{a}))=a^{\lambda+1}\bar{a}^{\mu+1}(F,\varphi).$
(A.4)
For ${\rm Re}(\mu+\lambda)>-2$ we can define the generalized function
$z^{\lambda}{\bar{z}}^{\mu}$ via the functional
$(z^{\lambda}{\bar{z}}^{\mu},\varphi)$ which converges for $\varphi$
infinitely differentiable with bounded support. To extend this to ${\rm
Re}(\mu+\lambda)<-2$ we need to regulate the integral. Gelfand identifies a
prescription such that the answer away from integer points (see below) is
defined by analytic continuation in the scaling dimensions from the region
${\rm Re}(\mu+\lambda)>-2$ where the integral converges. This function is
analytic away from $\lambda,\mu=-k-1$ for $k\in\mathbb{Z}_{>}$ where it has
simple poles
${\rm
res}_{\overset{\lambda=-k-1}{\mu=-l-1}}(z^{\lambda}{\bar{z}}^{\mu},\varphi)=\frac{2\pi}{k!l!}\varphi^{(k,l)}(0,0).$
(A.5)
Here the residue is in the single complex variable $s=\mu+\lambda$ with
$n=\mu-\lambda\in\mathbb{Z}$ fixed. Since the difference in scaling dimensions
is integer valued both $\mu$ and $\lambda$ will be integer valued at these
points, and are naturally labeled by $k,l\in\mathbb{Z}_{>}$ as above.
We can formally attach this residue of the integral to the generalized
function $z^{\lambda}{\bar{z}}^{\mu}$
${\rm
res}_{\overset{\lambda=-k-1}{\mu=-l-1}}z^{\lambda}{\bar{z}}^{\mu}=\frac{2\pi}{k!l!}(-1)^{k+l}\delta^{(k,l)}(z,{\bar{z}}).$
(A.6)
By introducing a gamma function with the same pole locations we can define an
object whose limit at integer values is the distribution
$\left[\frac{z^{\lambda}{\bar{z}}^{\mu}}{\Gamma\left(\frac{1}{2}(\mu+\lambda)+\frac{1}{2}|\mu-\lambda|+1\right)}\right]_{\overset{\lambda=-k-1}{\mu=-l-1}}=\pi\frac{(-1)^{k+l+j}j!}{k!l!}\delta^{(k,l)}(z,{\bar{z}}),$
(A.7)
where $j=\frac{1}{2}(\mu+\lambda)-\frac{1}{2}|\mu-\lambda|=\min\\{k,l\\}$. In
particular, this implies that
$\partial_{\bar{z}}z^{-1+\alpha}{\bar{z}}^{\alpha}=\alpha
z^{-1+\alpha}{\bar{z}}^{-1+\alpha}~{}~{}\Rightarrow~{}~{}\lim\limits_{\alpha\rightarrow
0}\partial_{\bar{z}}z^{-1+\alpha}{\bar{z}}^{\alpha}=\pi\delta(z,{\bar{z}}).$
(A.8)
This replaces the familiar relation
$\partial_{\bar{z}}z^{-1}=\pi\delta(z,{\bar{z}})$ with a form that we can more
readily analytically continue between signatures. Specifically, we see that
the definitions of these generalized functions are implicitly tied to our
integration contour in (A.2) (here the Riemann sphere).
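To illustrate (A.8), pair the left-hand side with a Gaussian test function $\varphi=e^{-|z|^{2}}$: using $\partial_{\bar{z}}(z^{-1+\alpha}{\bar{z}}^{\alpha})=\alpha|z|^{2(\alpha-1)}$, the radial integral gives $\pi\Gamma(1+\alpha)\rightarrow\pi=\pi\varphi(0,0)$ as $\alpha\rightarrow 0$, as required. A small numerical sketch (the helper name, cutoff, and discretization are ours):

```python
from math import exp, gamma, pi

def pairing(alpha, N=100000, U=40.0):
    """Pair alpha*|z|**(2*alpha - 2) against exp(-|z|**2) over the plane.
    Radially (u = |z|**2, cutoff U): pi*alpha * int_0^U u**(alpha-1) e**(-u) du.
    The substitution v = u**alpha removes the u -> 0 singularity, leaving
    pi * int_0^(U**alpha) exp(-v**(1/alpha)) dv, done by Simpson's rule."""
    V = U ** alpha
    h = V / N
    total = sum((1 if i in (0, N) else 4 if i % 2 else 2)
                * exp(-((i * h) ** (1.0 / alpha)))
                for i in range(N + 1))
    return pi * total * h / 3.0

# exact answer is pi * Gamma(1 + alpha), tending to pi * phi(0,0) = pi
for a in (0.2, 0.05, 0.01):
    assert abs(pairing(a) - pi * gamma(1 + a)) < 1e-3
assert abs(pairing(0.01) - pi) < 0.05
```

The same computation, phrased as a residue in $s=\mu+\lambda$ at $s=-2$, reproduces the $k=l=0$ case of (A.5).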
### A.2 Representations at Integer Points
Volume 5 of Gelfand [57] discusses infinite dimensional representations of the
Lorentz group. Each such representation is labeled by a pair of complex
numbers $\chi=(n_{1},n_{2})$ where $\left(n_{1}-n_{2}\right)\in\mathbb{Z}$.
The representation operator $T_{\chi}(g)$, for $g\in SL(2,\mathbb{C})$, acts
on the space $D_{\chi}$ of functions $\phi(z,\bar{z})$ via
$T_{\chi}(g)\phi(z,\bar{z})=(\beta
z+\delta)^{n_{1}-1}(\bar{\beta}\bar{z}+\bar{\delta})^{n_{2}-1}\phi\left(\frac{\alpha
z+\gamma}{\beta
z+\delta},\frac{\bar{\alpha}{\bar{z}}+\bar{\gamma}}{\bar{\beta}{\bar{z}}+\bar{\delta}}\right).$
(A.9)
We see that in terms of our usual notation for the weights $(h,\bar{h})$
we have
$n_{1}=1-2h,\ n_{2}=1-2\bar{h},\ n_{1}-n_{2}=-2(h-\bar{h})=-2J\in\mathbb{Z}.$
(A.10)
The shadow transform induces a simultaneous Weyl reflection on the left and
right handed weights. This corresponds to the map ${\rm
Sh}:D_{\chi}\rightarrow D_{-\chi}$. Namely
$\chi=(n_{1},n_{2})\rightarrow-\chi=(-n_{1},-n_{2})$ while
${\rm Sh}\circ\phi(z,\bar{z})=\int
d^{2}z_{1}(z-z_{1})^{-n_{1}-1}(\bar{z}-\bar{z}_{1})^{-n_{2}-1}\phi(z_{1},\bar{z}_{1}).$
(A.11)
Now suppose $\chi=(n_{1},n_{2})$ is a representation where $n_{1}$ and $n_{2}$
are neither simultaneously positive nor simultaneously negative integers. Then
the shadow transform is one-to-one and onto
$\chi=(n_{1},n_{2})\sim-\chi=(-n_{1},-n_{2}).$ (A.12)
The so-called integer points are special representations for which
$\chi=(n_{1},n_{2})$ are either simultaneously positive or simultaneously
negative. These are at the heart of the null state relations and celestial
diamonds of [27, 28, 58, 30]. At integer points the representation $D_{\chi}$
has an invariant subspace and so is reducible. For example suppose $n_{1}$ and
$n_{2}$ are both positive integers. Then $D_{\chi}$ contains polynomials of
the form
$\phi(z,\bar{z})=\sum_{i=0}^{n_{1}-1}\sum_{j=0}^{n_{2}-1}c_{ij}z^{i}\bar{z}^{j},$
(A.13)
which are closed under the action of SL$(2,\mathbb{C})$. This invariant
subspace, called $E_{\chi}$, has dimension $n_{1}n_{2}$.
Now consider the ‘shadow’ representation $D_{-\chi}$. Within this
representation there is an invariant subspace which can be described as
follows. Consider the space of all functions $\phi(z,\bar{z})\in D_{-\chi}$
which satisfy the conditions
$b_{ij}=\int dzd\bar{z}z^{i}\bar{z}^{j}\phi(z,\bar{z})=0,\ 0\leq i\leq
n_{1}-1,\ 0\leq j\leq n_{2}-1.$ (A.14)
One can show that under the SL$(2,\mathbb{C})$ action on $\phi(z,\bar{z})$ the
numbers $b_{ij}$ transform linearly and so the space described by the
conditions $b_{ij}=0$ is invariant under the action of SL$(2,\mathbb{C})$.
This invariant subspace of $D_{-\chi}$ is denoted by $F_{-\chi}$.
We can phrase this more cleanly as follows. Comparing to our discussion of
generalized functions above we see that the residue of the shadow kernel at
these integer points reduces to a differential operator
$\partial^{n_{1}}\bar{\partial}^{n_{2}}:D_{\chi}\mapsto D_{-\chi}.$ (A.15)
The subspace $E_{\chi}\subset D_{\chi}$ is the kernel of this operator.
Meanwhile $f\in F_{-\chi}$ can be written as
$\partial^{n_{1}}\bar{\partial}^{n_{2}}d$ for some $d\in D_{\chi}$. Namely
$\partial^{n_{1}}\bar{\partial}^{n_{2}}D_{\chi}=F_{-\chi}\subset D_{-\chi}.$
(A.16)
To construct a non-degenerate pairing with elements of $E_{\chi}$ we need to
restrict to the equivalence class $[D_{-\chi}]\sim D_{-\chi}+F_{-\chi}$ which
is the co-kernel of our differential map (A.15). This is closely related to
the equivalence classes encountered in the BMS flux algebra of [37, 38],
however for the leading through sub-subleading radiative soft gravitons only
one of the two (holomorphic or anti-holomorphic) submodules has a primary
descendant.
## Appendix B Complexifying the Celestial Sphere
We will use this appendix to tie together some loose ends regarding the
contour prescriptions for our modified shadow transform. Our starting point
will be to view the complexified Riemann sphere as the cross section of the
complexified null cone in momentum space. Let $p^{\mu}\in\mathbb{C}^{4}$ be
coordinates on complexified momentum space. The complexified celestial sphere
is then the following quadric in $\mathbb{CP}^{3}$
$-(p^{0})^{2}+(p^{1})^{2}+(p^{2})^{2}+(p^{3})^{2}=0.$ (B.1)
This is known to be bi-holomorphic to $\mathbb{CP}^{1}\times\mathbb{CP}^{1}$
via the Segre embedding
$\mathbb{CP}^{1}\times\mathbb{CP}^{1}\rightarrow\mathbb{CP}^{3}$
$([x:y],[a:b])\mapsto([xa:ya:xb:yb])$ (B.2)
up to a linear transformation. In terms of the standard coordinates for
covering the north pole patch of the celestial sphere
$[x:y]=[1:z],~{}~{}~{}~{}[a:b]=[1:{\bar{z}}]$ (B.3)
the map (B.2) reduces to
$[x^{0}:x^{1}:x^{2}:x^{3}]=[1:z:{\bar{z}}:z{\bar{z}}]=[p^{0}+p^{3}:p^{1}+ip^{2}:p^{1}-ip^{2}:p^{0}-p^{3}].$
(B.4)
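One can check directly that the Segre image lies on the quadric (B.1): with
$x^{0}=xa$, $x^{1}=ya$, $x^{2}=xb$, $x^{3}=yb$ we have
$x^{1}x^{2}-x^{0}x^{3}=(ya)(xb)-(xa)(yb)=0,$
while in the momentum coordinates of (B.4)
$x^{1}x^{2}-x^{0}x^{3}=(p^{1})^{2}+(p^{2})^{2}-(p^{0})^{2}+(p^{3})^{2},$
so the Segre condition is precisely the complexified null cone condition (B.1).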
We thus see that the complexified celestial sphere can be naturally thought of
as $\mathbb{CP}^{1}\times\mathbb{CP}^{1}$. The celestial sphere in
$\mathbb{R}^{1,3}$ corresponds to the locus $z^{*}={\bar{z}}$, while the locus
$\\{z^{*}=z,{\bar{z}}^{*}={\bar{z}}\\}$ lands us on (a quotient of) the
celestial torus, relevant to $\mathbb{R}^{2,2}$.
As an example of how this complexification of the celestial sphere is useful,
let us examine how a different choice of contour prescription allows [38] to
evade the obstruction encountered by [8]. We will start with the
$\Delta=3,J=\pm 1$ modes sourcing the subleading soft graviton
$\mathcal{J}=\mathcal{J}_{soft}+\mathcal{J}_{hard},~{}~{}~{}\bar{\mathcal{J}}=\bar{\mathcal{J}}_{soft}+\bar{\mathcal{J}}_{hard},$
(B.5)
where, at linearized order,
$\mathcal{J}_{soft}=-\frac{1}{2}\bar{\partial}^{3}S,~{}~{}~{}\bar{\mathcal{J}}_{soft}=-\frac{1}{2}\partial^{3}\bar{S}.$
(B.6)
The authors of [38] define the celestial stress tensor as follows
${\bf T}(z)=\oint_{C}\frac{d{\bar{z}}}{2\pi
i}\bar{\mathcal{J}}_{soft}(z,{\bar{z}}),~{}~{}~{}\bar{\bf
T}({\bar{z}})=\oint_{C}\frac{dz}{2\pi i}\mathcal{J}_{soft}(z,{\bar{z}}).$
(B.7)
If we formally evaluate the shadow transform (2.3) using the contour
prescription (see also the recent discussion in [59])
$\int d^{2}w\Rightarrow\pi\oint\frac{dw}{2\pi i}\oint\frac{d{\bar{w}}}{2\pi
i}$ (B.8)
of [38] we get $\bar{T}(z,{\bar{z}})\Rightarrow\bar{\bf T}({\bar{z}})$ which
is anti-meromorphic by construction. Now in Appendix A we saw that our
understanding of generalized functions is tied to the pairings of the form
(A.2), which involve a choice of metric. For example, a double contour of the
form (B.8) gives
$\lim_{\alpha\rightarrow 0}\alpha\oint\frac{dz}{2\pi
i}z^{-1+\alpha}\oint\frac{d{\bar{z}}}{2\pi i}{\bar{z}}^{-1+\alpha}=0$ (B.9)
in contrast to (A.8). This is consistent with the fact that the Green’s
function for $\partial_{z}^{n_{1}}\partial_{\bar{z}}^{n_{2}}$ is signature-
dependent. For example, when we restrict to the codimension 2 locus
$(z,{\bar{z}})\in\mathbb{R}^{1,1}$, rather than
$\partial_{\bar{z}}z^{-1}=\pi\delta(z,{\bar{z}})$, which holds on the locus
${\bar{z}}=z^{*}$, we get
$\partial_{\bar{z}}[\theta({\bar{z}})\delta(z)]=\delta(z)\delta({\bar{z}})$.
The upshot is that (so long as we analytically
continue our dimensions slightly away from the integer points) we can start
from the amplitudes, complexify our mode expansion so that
$(z,{\bar{z}})\in\mathbb{C}^{2}$, take derivatives in $z$ and ${\bar{z}}$ as
we see fit, but when it comes to discussing what contact terms appear our
choice of contour matters.
Figure 1: Branch cut for the shadow kernel in the complexified celestial
sphere coordinate $w$. The blue dots indicate locations of other operator
insertions which from the form of the tree level celestial OPEs are known to
give poles in $w$.
For our discussions in the main text, it’s important to note that the shadow
integral kernel has a branch cut once $z$ and ${\bar{z}}$ are independent.
Indeed, the Green’s function we are using in section 3 can be rephrased in
terms of Cauchy’s residue theorem. Consider a function $f(w)$ that is
holomorphic near the point $w=z$. By Cauchy’s integral formula the $n$th
derivative at $w=z$ can be extracted by the contour integral
$f^{(n)}(z)=\frac{\Gamma(n+1)}{2\pi
i}\oint_{C_{z}}dw\frac{f(w)}{(w-z)^{n+1}},$ (B.10)
where the contour $C_{z}$ is along a closed curve going counterclockwise
around the point $w=z$. If we try to analytically continue $n=1-2h$ away from
an integer value, the integrand develops a branch cut stretching between $w=z$
and $w=\infty$. We can form a closed contour by replacing $C_{z}$ with a
keyhole contour $\Gamma_{z}$. Because $f(w)=\phi(w)\sim w^{-2h}$, the
integrand goes like $w^{-2}$ so there is no residue at infinity. The
contribution from two points on opposite sides of the branch cut in Figure 1
will differ by the following phase
$e^{\pi i(n+1)}-e^{-\pi i(n+1)}=-2i\sin(n\pi)=\frac{2\pi
i}{\Gamma(n+1)\Gamma(-n)}$ (B.11)
so that
$f^{(n)}(z)\ni\frac{1}{\Gamma(-n)}\int_{-\infty}^{z}\frac{f(w)}{(w-z)^{n+1}}$
(B.12)
matching (3.7). The small arc and branch cut contributions are related by the
residues that appear from collinear limits with other operator insertions in a
given $\cal S$-matrix element (indicated schematically by the blue dots in
figure 1).
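As a consistency check of (B.11), Euler's formula gives $e^{\pi i(n+1)}-e^{-\pi i(n+1)}=2i\sin(\pi(n+1))=-2i\sin(n\pi)$, while the reflection identity $\Gamma(z)\Gamma(1-z)=\pi/\sin(\pi z)$ with $z=-n$ gives $\Gamma(-n)\Gamma(n+1)=-\pi/\sin(n\pi)$; combining the two indeed reproduces $-2i\sin(n\pi)=2\pi i/(\Gamma(n+1)\Gamma(-n))$.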
## Appendix C OPE between modified stress tensors:
$\bar{T}_{mod}\bar{T}_{mod}$
Let’s start by reviewing the elements of the derivation in [8] that get
modified once we take into account the reparameterization mode. In order to
reproduce the standard centerless Virasoro symmetry, a certain smearing of the
subleading soft mode must be shown to vanish
$\begin{gathered}\langle{\bar{T}(z,{\bar{z}})\bar{T}(w,{\bar{w}})\prod_{i=2}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle=\text{Standard
terms with $c=0$ }~{}~{}~{}~{}~{}\\\
~{}~{}~{}~{}+\frac{6}{\pi(\bar{w}-\bar{z})^{2}}\int
d^{2}z_{1}\frac{1}{(\bar{z}-\bar{z}_{1})^{2}(\bar{w}-\bar{z}_{1})^{2}}\langle{S(z_{1},\bar{z}_{1})\prod_{i=2}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle.\end{gathered}$
(C.1)
One needs to be careful here: while the angular dependence of the regulated
$z_{1}$ integral vanishes due to conformal invariance (c.f. section 4.1 of
[8]), the regulated $z_{1}$ integral carries an overall factor of $\Gamma[0]$.
One thus must not drop subleading terms in the
$\Delta\rightarrow 0$ expansion of the object it multiplies. By taking a
$w$ derivative of both sides, we see via (2.5) and (LABEL:Sdec) that there is
an obstruction to this integral vanishing.
We can now show that if we repeat the same manipulations that led to the
obstruction (LABEL:TdecT) with $\bar{T}$ replaced with $\bar{T}_{\rm mod}$,
the non-contact terms vanish. The correlation function we are interested in
takes the form
$\displaystyle\begin{aligned}
&\langle{\bar{T}_{mod}(z,\bar{z})\bar{T}_{mod}(w,\bar{w})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle=\\\
&~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\langle{\bar{T}(z,\bar{z})\bar{T}(w,\bar{w})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle+\langle{\frac{1}{2}\bar{\partial}^{3}\epsilon_{S}(z,\bar{z})\frac{1}{2}\bar{\partial}^{3}\epsilon_{S}(w,\bar{w})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle\\\
&~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\langle{\bar{T}(z,\bar{z})\frac{1}{2}\bar{\partial}^{3}\epsilon_{S}(w,\bar{w})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle+\langle{\frac{1}{2}\bar{\partial}^{3}\epsilon_{S}(z,\bar{z})\bar{T}(w,\bar{w})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle.\end{aligned}$
(C.2)
Now, as discussed in section 2, the first term is (C.1). Using that
$\langle{\bar{\partial}^{3}\epsilon_{S}(z,\bar{z})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle=0$
(C.3)
as an operator equation, and that we can choose the memory operators (2.19) to
have vanishing correlators [30]
$\langle\frac{1}{2}\bar{\partial}^{3}\epsilon_{S}(z,\bar{z})\frac{1}{2}\bar{\partial}^{3}\epsilon_{S}(w,\bar{w})\rangle=0$
(C.4)
we see that the second term on the right hand side of (C.2) vanishes. To
obtain the mixed correlators involving $\bar{T}$ and $\epsilon_{S}$ we use the
relation
$\displaystyle\begin{gathered}\langle{\bar{T}(z,\bar{z})\bar{\partial}^{3}S(w,\bar{w})\prod_{i=1}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle=\left(-\frac{24}{\left(\bar{z}-\bar{w}\right)^{5}}-\frac{12}{\left(\bar{z}-\bar{w}\right)^{4}}\frac{\partial}{\partial\bar{w}}\right)\langle{S(w,\bar{w})\prod_{i=1}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle\end{gathered}$
(C.5)
and the fact that $S(z,\bar{z})$ and $\epsilon_{S}(z,\bar{z})$ transform in
the same way under $\overline{Vir}$ transformations. This follows from the
definition $S=\partial\epsilon_{S}$ and leads to
$\displaystyle\begin{gathered}\langle{\bar{T}(z,\bar{z})\frac{1}{2}\bar{\partial}^{3}\epsilon_{S}(w,\bar{w})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle=\left(-\frac{12}{\left(\bar{z}-\bar{w}\right)^{5}}-\frac{6}{\left(\bar{z}-\bar{w}\right)^{4}}\frac{\partial}{\partial\bar{w}}\right)\langle{\epsilon_{S}(w,\bar{w})\prod_{i=1}^{n}\phi_{h_{i},\bar{h}_{i}}(z_{i},\bar{z}_{i})}\rangle,\end{gathered}$
(C.6)
where we have also used (C.3). Putting everything together we get the standard
answer with zero central charge
$\begin{gathered}\langle\bar{T}_{mod}(z,\bar{z})\bar{T}_{mod}(w,\bar{w})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})\rangle=\\\
\left(\frac{2}{(\bar{z}-\bar{w})^{2}}+\frac{1}{\bar{z}-\bar{w}}\frac{\partial}{\partial\bar{w}}\right)\langle{\bar{T}_{mod}(w,\bar{w})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle\\\
+\sum_{i}\left(\frac{\bar{h}_{i}}{(\bar{z}-\bar{z}_{i})^{2}}+\frac{1}{\bar{z}-\bar{z}_{i}}\frac{\partial}{\partial\bar{z}_{i}}\right)\langle{\bar{T}_{mod}(w,\bar{w})\prod_{i}\phi_{i}(z_{i},\bar{z}_{i})}\rangle.\end{gathered}$
(C.7)
## References
* [1] S. Pasterski, M. Pate, and A.-M. Raclariu, “Celestial Holography,” in 2022 Snowmass Summer Study. 11, 2021. arXiv:2111.11392 [hep-th].
* [2] A. Strominger, Lectures on the Infrared Structure of Gravity and Gauge Theory. Princeton University Press, 2018. arXiv:1703.05448 [hep-th].
* [3] S. Pasterski and S.-H. Shao, “Conformal basis for flat space amplitudes,” Phys. Rev. D96 no. 6, (2017) 065022, arXiv:1705.01027 [hep-th].
* [4] F. Cachazo and A. Strominger, “Evidence for a New Soft Graviton Theorem,” arXiv:1404.4091 [hep-th].
* [5] D. Kapec, V. Lysov, S. Pasterski, and A. Strominger, “Semiclassical Virasoro symmetry of the quantum gravity $\mathcal{S}$-matrix,” JHEP 08 (2014) 058, arXiv:1406.3312 [hep-th].
* [6] D. Kapec, P. Mitra, A.-M. Raclariu, and A. Strominger, “2D Stress Tensor for 4D Gravity,” Phys. Rev. Lett. 119 no. 12, (2017) 121601, arXiv:1609.00282 [hep-th].
* [7] D. Kapec and P. Mitra, “A $d$-Dimensional Stress Tensor for Minkd+2 Gravity,” JHEP 05 (2018) 186, arXiv:1711.04371 [hep-th].
* [8] A. Fotopoulos, S. Stieberger, T. R. Taylor, and B. Zhu, “Extended BMS Algebra of Celestial CFT,” JHEP 03 (2020) 130, arXiv:1912.10973 [hep-th].
* [9] A. Fotopoulos, S. Stieberger, T. R. Taylor, and B. Zhu, “Extended Super BMS Algebra of Celestial CFT,” JHEP 09 (2020) 198, arXiv:2007.03785 [hep-th].
* [10] D. Kapec and P. Mitra, “Shadows and soft exchange in celestial CFT,” Phys. Rev. D 105 no. 2, (2022) 026009, arXiv:2109.00073 [hep-th].
* [11] L. Donnay, A. Puhm, and A. Strominger, “Conformally Soft Photons and Gravitons,” JHEP 01 (2019) 184, arXiv:1810.05219 [hep-th].
* [12] M. Pate, A.-M. Raclariu, and A. Strominger, “Conformally Soft Theorem in Gauge Theory,” Phys. Rev. D100 no. 8, (2019) 085017, arXiv:1904.10831 [hep-th].
* [13] A. Guevara, “Notes on Conformal Soft Theorems and Recursion Relations in Gravity,” arXiv:1906.07810 [hep-th].
* [14] T. Adamo, L. Mason, and A. Sharma, “Celestial amplitudes and conformal soft theorems,” Class. Quant. Grav. 36 no. 20, (2019) 205018, arXiv:1905.09224 [hep-th].
* [15] A. Puhm, “Conformally Soft Theorem in Gravity,” JHEP 09 (2020) 130, arXiv:1905.09799 [hep-th].
* [16] S. Banerjee, S. Ghosh, and P. Paul, “MHV graviton scattering amplitudes and current algebra on the celestial sphere,” JHEP 02 (2021) 176, arXiv:2008.04330 [hep-th].
* [17] S. Banerjee, S. Ghosh, and S. S. Samal, “Subsubleading soft graviton symmetry and MHV graviton scattering amplitudes,” JHEP 08 (2021) 067, arXiv:2104.02546 [hep-th].
* [18] A. Guevara, E. Himwich, M. Pate, and A. Strominger, “Holographic symmetry algebras for gauge theory and gravity,” JHEP 11 (2021) 152, arXiv:2103.03961 [hep-th].
* [19] A. Strominger, “$w_{1+\infty}$ Algebra and the Celestial Sphere: Infinite Towers of Soft Graviton, Photon, and Gluon Symmetries,” Phys. Rev. Lett. 127 no. 22, (2021) 221601.
* [20] L. Donnay, S. Pasterski, and A. Puhm, “Goldilocks modes and the three scattering bases,” JHEP 06 (2022) 124, arXiv:2202.11127 [hep-th].
* [21] N. Arkani-Hamed, M. Pate, A.-M. Raclariu, and A. Strominger, “Celestial amplitudes from UV to IR,” JHEP 08 (2021) 062, arXiv:2012.04208 [hep-th].
* [22] C.-M. Chang, Y.-t. Huang, Z.-X. Huang, and W. Li, “Bulk locality from the celestial amplitude,” SciPost Phys. 12 no. 5, (2022) 176, arXiv:2106.11948 [hep-th].
* [23] J. Distler, R. Flauger, and B. Horn, “Double-soft graviton amplitudes and the extended BMS charge algebra,” JHEP 08 (2019) 021, arXiv:1808.09965 [hep-th].
* [24] A. H. Anupam, A. Kundu, and K. Ray, “Double soft graviton theorems and Bondi-Metzner-Sachs symmetries,” Phys. Rev. D 97 no. 10, (2018) 106019, arXiv:1803.03023 [hep-th].
* [25] M. Campiglia and A. Laddha, “BMS Algebra, Double Soft Theorems, and All That,” arXiv:2106.14717 [hep-th].
* [26] D. Kapec, “Soft Particles and Infinite-Dimensional Geometry,” arXiv:2210.00606 [hep-th].
* [27] S. Banerjee, P. Pandey, and P. Paul, “Conformal properties of soft operators: Use of null states,” Phys. Rev. D 101 no. 10, (2020) 106014, arXiv:1902.02309 [hep-th].
* [28] S. Banerjee and P. Pandey, “Conformal properties of soft-operators. Part II. Use of null-states,” JHEP 02 (2020) 067, arXiv:1906.01650 [hep-th].
* [29] S. Pasterski, A. Puhm, and E. Trevisani, “Celestial diamonds: conformal multiplets in celestial CFT,” JHEP 11 (2021) 072, arXiv:2105.03516 [hep-th].
* [30] S. Pasterski, A. Puhm, and E. Trevisani, “Revisiting the conformally soft sector with celestial diamonds,” JHEP 11 (2021) 143, arXiv:2105.09792 [hep-th].
* [31] D. Simmons-Duffin, “Projectors, Shadows, and Conformal Blocks,” JHEP 04 (2014) 146, arXiv:1204.3894 [hep-th].
* [32] F. M. Haehl, W. Reeves, and M. Rozali, “Reparametrization modes, shadow operators, and quantum chaos in higher-dimensional CFTs,” JHEP 11 (2019) 102, arXiv:1909.05847 [hep-th].
* [33] A. E. Lipstein, “Soft Theorems from Conformal Field Theory,” JHEP 06 (2015) 166, arXiv:1504.01364 [hep-th].
* [34] T. Klose, T. McLoughlin, D. Nandan, J. Plefka, and G. Travaglini, “Double-Soft Limits of Gluons and Gravitons,” JHEP 07 (2015) 135, arXiv:1504.05558 [hep-th].
* [35] D. Kapec, Y. T. A. Law, and S. A. Narayanan, “Soft Scalars and the Geometry of the Space of Celestial CFTs,” arXiv:2205.10935 [hep-th].
* [36] C. Cheung, A. de la Fuente, and R. Sundrum, “4D scattering amplitudes and asymptotic symmetries from 2D CFT,” JHEP 01 (2017) 112, arXiv:1609.00732 [hep-th].
* [37] G. Barnich and R. Ruzziconi, “Coadjoint representation of the BMS group on celestial Riemann surfaces,” JHEP 06 (2021) 079, arXiv:2103.11253 [gr-qc].
* [38] L. Donnay and R. Ruzziconi, “BMS flux algebra in celestial holography,” JHEP 11 (2021) 040, arXiv:2108.11969 [hep-th].
* [39] S. Pasterski, “A Comment on Loop Corrections to the Celestial Stress Tensor,” arXiv:2205.10901 [hep-th].
* [40] L. Donnay, K. Nguyen, and R. Ruzziconi, “Loop-corrected subleading soft theorem and the celestial stress-tensor,” arXiv:2205.11477 [hep-th].
* [41] S. Pasterski, A. Strominger, and A. Zhiboedov, “New Gravitational Memories,” JHEP 12 (2016) 053, arXiv:1502.06120 [hep-th].
* [42] E. Himwich, Z. Mirzaiyan, and S. Pasterski, “A Note on the Subleading Soft Graviton,” JHEP 04 (2021) 172, arXiv:1902.01840 [hep-th].
* [43] A. Ball, E. Himwich, S. A. Narayanan, S. Pasterski, and A. Strominger, “Uplifting AdS3/CFT2 to flat space holography,” JHEP 08 (2019) 168, arXiv:1905.09809 [hep-th].
* [44] Y. Hu and S. Pasterski, “Celestial Conformal Colliders,” arXiv:2211.14287 [hep-th].
* [45] A. Atanasov, W. Melton, A.-M. Raclariu, and A. Strominger, “Conformal block expansion in celestial CFT,” Phys. Rev. D 104 no. 12, (2021) 126033, arXiv:2104.13432 [hep-th].
* [46] A. Sharma, “Ambidextrous light transforms for celestial amplitudes,” JHEP 01 (2022) 031, arXiv:2107.06250 [hep-th].
* [47] A. Guevara, “Celestial OPE blocks,” arXiv:2108.12706 [hep-th].
* [48] D. Simmons-Duffin, “Projectors, Shadows, and Conformal Blocks,” JHEP 04 (2014) 146, arXiv:1204.3894 [hep-th].
* [49] H. Osborn, “Conformal Blocks for Arbitrary Spins in Two Dimensions,” Phys. Lett. B718 (2012) 169–172, arXiv:1205.1941 [hep-th].
* [50] I. Gel’fand and G. Shilov, Generalized Functions, Volume 1: Properties and Operations. AMS Chelsea Publishing, 1964.
* [51] J. H. Schwarz, “Diffeomorphism Symmetry in Two Dimensions and Celestial Holography,” arXiv:2208.13304 [hep-th].
* [52] S. Pasterski, S.-H. Shao, and A. Strominger, “Gluon Amplitudes as 2d Conformal Correlators,” Phys. Rev. D96 no. 8, (2017) 085006, arXiv:1706.03917 [hep-th].
* [53] S. Banerjee, S. Ghosh, and P. Paul, “(Chiral) Virasoro invariance of the tree-level MHV graviton scattering amplitudes,” JHEP 09 (2022) 236, arXiv:2108.04262 [hep-th].
* [54] T. He, V. Lysov, P. Mitra, and A. Strominger, “BMS supertranslations and Weinberg’s soft graviton theorem,” JHEP 05 (2015) 151, arXiv:1401.7026 [hep-th].
* [55] T. He, P. Mitra, A. P. Porfyriadis, and A. Strominger, “New Symmetries of Massless QED,” JHEP 10 (2014) 112, arXiv:1407.3789 [hep-th].
* [56] S. Banerjee and S. Ghosh, “MHV gluon scattering amplitudes from celestial current algebras,” JHEP 10 (2021) 111, arXiv:2011.00017 [hep-th].
* [57] I. Gel’fand, M. Graev, and N. Vilenkin, Generalized Functions, Volume 5: Integral Geometry and Representation Theory. AMS Chelsea Publishing, 1966.
* [58] S. Pasterski and H. Verlinde, “HPS meets AMPS: how soft hair dissolves the firewall,” JHEP 09 (2021) 099, arXiv:2012.03850 [hep-th].
* [59] S. He, P. Mao, and X.-C. Mao, “$T\bar{T}$ deformed soft theorem,” arXiv:2209.01953 [hep-th].
# To think inside the box, or to think out of the box?
Scientific discovery via the reciprocation of insights and concepts
Yu-Zhe Shi$^{1,2\,\star\,✉}$, Manjie Xu$^{1,2\,\star}$, Wenjuan Han$^{2}$, Yixin Zhu$^{1\,✉}$
1Institute for AI, Peking University 2PersLEARN
⋆Equal contributors ✉<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
If scientific discovery is one of the main driving forces of human progress,
insight is the fuel for the engine, which has long attracted behavior-level
research to understand and model its underlying cognitive process. However,
current tasks that abstract scientific discovery mostly focus on the emergence
of insight, ignoring the special role played by domain knowledge. In this
concept paper, we view scientific discovery as an interplay between _thinking
out of the box_ that actively seeks insightful solutions and _thinking inside
the box_ that generalizes over conceptual domain knowledge to stay correct.
Accordingly, we propose Mindle, a semantic searching game that triggers
scientific-discovery-like thinking spontaneously, as infrastructure for
exploring scientific discovery on a large scale. On this basis, the meta-
strategies for insights and the usage of concepts can be investigated
reciprocally. In the pilot studies, several interesting observations inspire
elaborated hypotheses on meta-strategies, context, and individual diversity
for further investigations.
## Introduction
How do scientists come up with novel ideas that lead to significant
discoveries? Psychologists have long been working to understand the underlying
cognitive processes (Schickore, 2022) in order to facilitate the progress of
scientific discovery (Campbell, 1960). Among the diverse philosophical
theories interpreting _discovery_, the most cited holds that discovery refers
to _the eureka moment_, a.k.a. _the Aha! moment_, of having a new _insight_
(Auble et al., 1979; Kounios and Beeman, 2009). Originating from
problem-solving research, insight is the process that reconstructs the
representation of the target problem. Given the insight, the solution can be
reached much more straightforwardly than before the reconstruction
(Ohlsson, 1984). People tend to follow prior knowledge when
solving a problem because experience shows this may lead to success (Öllinger
et al., 2008). But after repeated trial and error, people can predict the
error of the current problem representation (Dubey et al., 2021), which may
be the eve of a sudden Aha! moment. Studies have shown that solutions
discovered by insight are usually more promising than those generated by
analytical approaches (Salvi et al., 2016), even though the latter require
much more workload than the former; this echoes how specifically adapted
representations outperform prior ones on novel problems.
Meanwhile, the prior representation is also necessary, for it provides the
relevant domain knowledge for the problem: insight is useless without a sense
of how to deal with the problem. This contrast becomes crucial in scientific
discovery: given a scientific inquiry as the target problem, the entry point
is the purported paradigm of the domain that seems relevant to the problem,
which is deeply rooted in _domain knowledge_ (_i.e_., atoms, theories, and
claims) and shapes the meta-cognitive strategic knowledge (_i.e_.,
methodologies). Hence, representation reconstruction is extremely hard when
the target problem becomes a scientific inquiry, because it at least means
generalizing from one scientific paradigm to another given few observations
(Tenenbaum and Griffiths, 2001), and may even mean a paradigm shift (Kuhn,
1970); yet without relying on domain knowledge, a scientist can get nowhere,
because all ideas come from somewhere.
The dilemma we are facing is ubiquitous in scientific discovery. On the one
side, we must _think out of the box_ so as to avoid missing flashed-by
insights; but unconventional ways of thinking may also lead to ridiculous
solutions, taking a detour even compared with analytical solutions. On the
other side, we have to _think inside the box_, because domain knowledge keeps
us aware of what we are doing and where we are going; but following an
established paradigm entirely restricts our mind to the prior representation
of the problem (see Fig. 1). To achieve solutions successfully and
efficiently, the two mindsets should interplay with each other. When and
where should this happen?
Many scientists deal with such an interplay well: they gain insights from the
eureka moment, which are later developed into representative scientific
discoveries, such as the development of Einstein’s special relativity
(Einstein, 1982) and the discovery of the Kekulé structure (Gruber, 1981).
This pattern is also found in many works throughout the life of Gauss
(Dunnington et al., 2004). Both historical experiences and experimental
results drive the interest in understanding how insight is obtained and in
building computational models (Langley and Jones, 1988), moving toward the
ultimate goal: automated production of insights that improve scientific
discovery.
Unfortunately, stories of scientists cannot lead to concrete modeling and
evaluation work at the behavior level rather than the metaphysical level, for
post-hoc simulation of how scientists disentangle meta-cognitive strategies
from domain knowledge is difficult and imprecise. Hence, we are in need of an
experimental environment that abstractly simulates the process of scientific
discovery: the domain knowledge should be crucial for solving the problem and
should be general enough to carry out large-scale behavioral studies
(Almaatouq et al., 2021), without losing group convergence or individual
diversity. To the best of our knowledge, we are the first to explicitly
consider the interplay between _insight-seeking_ and
_domain-knowledge-relying_. Hence, we propose Mindle (visit mindle.cn to
interact with the web-based user interface), a semantic searching game that
spontaneously triggers scientific-discovery-like thinking, as infrastructure
for exploring scientific discovery on a large scale, filling this gap in the
literature.
Figure 1: Overview of insight in scientific discovery. In a classic Gestalt
problem, a problem solver first uses domain knowledge to analyze the problem,
then seeks insight once she gets stuck; after reconstructing the problem
representation, she again uses domain knowledge to reach the solution. In this
case, though domain knowledge constrains the thinking, it serves as the
vehicle toward the target.
## The reciprocation of insights and concepts
Based on the dilemma between insight-seeking and domain-knowledge-relying, the
most critical feature that distinguishes scientific discovery from ordinary
insight problem-solving is that domain knowledge plays a crucial role both in
making the solution correct and in restricting the emergence of insightful
solutions. Hence, to understand scientific discovery _inside the box_, we
should first understand how the organization of concepts in domain knowledge
affects meta-cognitive strategies. Conversely, to unveil the process of
scientific discovery _outside the box_, we should look into how insightful
decisions intervene in the use of concepts. This bidirectional pathway echoes
the reciprocation of insights and concepts through two questions: (1) How does
a problem grounded in conceptual knowledge improve the study of insight
problem-solving? (2) How does the use of conceptual knowledge driven by
insight problem-solving improve the study of knowledge representation? Below,
we sketch Mindle by answering these questions.
### Concepts improve the study of insight problem solving
Relying on conceptual knowledge is not an obstacle to investigating scientific
discovery, but rather an opportunity for understanding insight
problem-solving. Current insight problem-solving tasks have long been troubled
by subjects’ unawareness of the meta-cognitive strategies they use (Metcalfe
and Wiebe, 1987). Though this inability naturally reveals the sudden arrival
of insights, it can be avoided by improving the experimental tasks. Some
current insight problems focus on stimulating the eureka moment, such as the
nine-dot problem (Kershaw and Ohlsson, 2004), the matchstick arithmetic
(Knoblich et al., 1999), and the eight-coin problem (Ormerod et al., 2002):
these tasks provide a highly confined problem space such that subjects can
solve the problems without applying any semantics or commonsense knowledge
beyond the specific background knowledge given by the problem settings.
Although such designs are motivated by cleanly disentangling representation
reconstruction from the solution, they make it hard to interpret the
solutions from the trajectories, since the trajectories can only be mapped to
the given problem settings.
Hence, if we expect to map behavioral trajectories to meta-strategic
knowledge, a generally understandable semantics of the problem context is
necessary, such as conceptual knowledge in human language. Other insight
problems do introduce general semantics, including insightful physical problem
solving (_i.e_., intuitive physics as semantics) (Allen et al., 2020) and the
remote association test (_i.e_., word association as semantics) (Mednick,
1963). However, these tasks come in a one-to-one input-output fashion, where
the measured behavior is generated directly from a single stimulus in one
step, making it hard to track the change of representation. This matters in
scientific discovery because there are usually multiple steps of insight that
lead to the target (Moszkowski, 1972). In contrast to one-shot problem
solving, scientific discovery is more like a path-finding process where the
navigation map changes every time a critical point is reached. Putting the
two reasons together, Mindle should be equipped with generally understandable
semantics in the context of the target problem.
### Insight problem-solving improves the study of conceptual knowledge
The domain knowledge of the sciences is believed to be organized as conceptual
knowledge (Hiebert and Lefevre, 1986; Rittle-Johnson et al., 2001).
Conceptual knowledge systematically combines declarative and procedural
knowledge, consisting of both facts about concepts and active processes about
how concepts interact with each other (Abend, 2008). Though there are many
perspectives on concept representation, we take the theory theory as a
prerequisite because it is the most widely accepted theory of scientific
knowledge representation (Gopnik, 1994); thus we can mimic domain knowledge
in the form of the theory theory. One implementation of the theory theory
maintains concepts in a fully-connected network, where each concept is
related to all other concepts in the set; many calculi on fully-connected
graphs, such as the general pattern theory (Grenander, 2012), can be applied
to formalize operations over concepts. Such tools help describe scientific
knowledge and meta-strategies in a computable way. Moreover, the theory
theory can also model how a child acquires concepts in cognitive development
(Carey, 1985; Gopnik and Meltzoff, 1997; Carey, 2009); people organize
scientific knowledge much as they organize everyday concepts.
Hence, we may use such fully-connected networks, statistically extracted from
natural language corpora, to simulate the domain knowledge of the sciences,
so that we can carry out large-scale behavioral studies. Most interestingly,
a major feature is shared by everyday and scientific knowledge alike: though
the semantics of concepts and relations are static and invariant when
knowledge is viewed holistically, from the view of individuals they may be
highly overloaded depending on context, task utility, and inner preference
(Wang and Bi, 2021). In this way, compound concepts are projected onto
simplified semantic attribute spaces (Grand et al., 2022), which are much
more tractable to process given the rational use of limited cognitive
resources (Gershman et al., 2015; Lieder and Griffiths, 2020; Ho et al.,
2022). On this basis, conceptual knowledge in scientific discovery should be
studied in a dynamic rather than a static way. However, most current
behavior-level experimental methods on the semantic understanding of concepts
are confined to general and fixed contexts (Huth et al., 2016; Wang et al.,
2020), in a straightforward fashion where stimuli (input words) are given and
descriptions or similarity judgments (output measurements) are obtained
directly; other studies on the use of concepts, such as memory replay and
human reinforcement learning (Momennejad et al., 2017), mostly focus on
short-term memory for skill learning without retrieval from already-formed
long-term memory. To capture these two features, Mindle should test the
representation of conceptual knowledge in sequential decision-making.
In summary, we profile Mindle with two unique features that simulate the
process of scientific discovery (see Tab. 1 for details), and importantly,
the two work reciprocally: (1) Mindle should be equipped with generally
understandable conceptual knowledge as domain knowledge, to help interpret
the process of insight problem solving; (2) Mindle should be equipped with a
sequential decision-making task, to stimulate the flexible, dynamic use of
conceptual knowledge.
Table 1: The analogy of scientific discovery to solving Mindle
| | scientific discovery | solving Mindle |
|---|---|---|
| target | solve a scientific query | find out the secret word |
| output | the hidden answer is shaped by _the known_ | the secret word is among _the known_ |
| problem abstraction | path searching from the status quo to _the unknown_ | path searching from the current guess to _the unknown_ |
| problem context | conceptual scientific domain knowledge | conceptual knowledge from a natural corpus |
| knowledge representation | concepts connected under logical or intuitive relations | concepts connected under intuitive relations |
| maintained representation | generalizing from one scientific concept to another | generalizing from one concept to another |
| reconstruction | changing the domain knowledge or methodology used | changing the meta-strategy at the action or semantics level |
| rationality | should be studied from specific perspectives | should be used in specific semantic subspaces |
| diversity | scientists with different backgrounds think differently | people with different backgrounds think differently |
| accessibility | captured by only a few individuals | captured by most individuals |
## Mindle: an infrastructure for large-scale studies on scientific discovery
Given the specific features that Mindle should capture, we describe how to
implement Mindle. As the infrastructure for large-scale behavioral studies,
Mindle should meet four elementary requirements: (1) providing appropriate
tasks that abstract the target real-world problem to be studied; (2)
providing correct computational models to evaluate the results; (3) the
abstract task itself should be natural rather than artificial, and
interesting rather than boring, to make sure that subjects can easily get
into the task; (4) the task should be easy to propagate and robust to
unexpected user behaviors. Since the appropriateness of task abstraction has
been illustrated in detail above, we describe how Mindle meets the last
three.
### The semantic searching game
Mindle requires the participant to dig out a hidden secret word. In every
single challenge, the participant is given only a starting word. In each
guess, the participant inputs a guessed word, and Mindle outputs a score (the
given starting word can be viewed as the initial guess). The score indicates
how _far_ the current guess is from the target secret word, on a 0 to 100
scale: the closer the score is to 100, the closer the guess is to the target.
Once the secret word is hit, the challenge ends. The participant can choose
to quit at any time during the challenge. Guesses can be entered by the
participant through an input text bar or selected from given options; the
policy for proposing options will be described later. A variant challenge
mode comes with a hint implying which topic the secret word relates to. The
topic can be either abstract or concrete, such as _kitchen supplies_ ,
_classical music_ , or _freedom_ , which confines the problem space.
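The challenge loop described above can be sketched as follows. This is an illustrative sketch, not the authors’ implementation: the `score` oracle, `vocabulary` set, and `read_guess` callback are hypothetical placeholders for the actual game backend.

```python
def play_challenge(secret, start, score, vocabulary, read_guess):
    """Run one Mindle challenge and return the guess trajectory."""
    trajectory = [(start, score(start, secret))]  # the starting word is the initial guess
    while trajectory[-1][0] != secret:
        guess = read_guess()            # from the text bar or the proposed options
        if guess is None:               # the participant may quit at any time
            break
        if guess not in vocabulary:     # bar-entering mode rejects out-of-vocabulary words
            continue
        trajectory.append((guess, score(guess, secret)))
    return trajectory

# Toy run with a scripted "participant" and a trivial score oracle.
vocab = {"cat", "dog", "pet"}
guesses = iter(["bird", "dog", "pet"])  # "bird" is out of vocabulary, so it is rejected
toy_score = lambda guess, secret: 100 if guess == secret else 50
traj = play_challenge("pet", "cat", toy_score, vocab, lambda: next(guesses, None))
assert traj == [("cat", 50), ("dog", 50), ("pet", 100)]
```

The trajectory of (guess, score) pairs is the behavioral datum that the sequential decision-making formulation later in the paper reasons over.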
### Score and vocabulary
The score is provided by the cosine similarity between the embedding vectors
of the current guess and the target word. The embedding vectors can be
generated by arbitrary embedding methods, such as Word2Vec (Mikolov et al.,
2013b), Skip-gram (Mikolov et al., 2013a), or GloVe (Pennington et al.,
2014). The score maps the 0 to 1 range of cosine similarity onto a 0 to 100
scale. The vocabulary used here is a subset consisting of about 40K of the
most frequently used concepts in the natural corpus. We denote this set as
$C\subset\mathcal{C}$, where $\mathcal{C}$ is the space of all concepts. In
bar-entering mode, participants are informed if the input guess is outside
$C$.
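A minimal sketch of the scoring rule, assuming word embeddings are available as a dict of NumPy vectors. The toy 3-dimensional vectors below are purely illustrative; a real deployment would use Word2Vec, Skip-gram, or GloVe vectors over the 40K-word vocabulary.

```python
import numpy as np

def score(guess_vec, target_vec):
    """Map cosine similarity (clipped below at 0) onto the 0-100 game scale."""
    cos = np.dot(guess_vec, target_vec) / (
        np.linalg.norm(guess_vec) * np.linalg.norm(target_vec))
    return 100.0 * max(0.0, float(cos))

# Illustrative embeddings; real ones come from a corpus-trained model.
embeddings = {
    "kitchen": np.array([0.9, 0.1, 0.0]),
    "spoon":   np.array([0.8, 0.2, 0.1]),
    "freedom": np.array([0.0, 0.1, 0.9]),
}
# A word scores (essentially) 100 against itself, and semantically
# closer guesses score higher.
assert abs(score(embeddings["kitchen"], embeddings["kitchen"]) - 100.0) < 1e-6
assert score(embeddings["spoon"], embeddings["kitchen"]) > \
       score(embeddings["freedom"], embeddings["kitchen"])
```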
### Conceptual knowledge representation
We have mentioned that conceptual knowledge can be represented as a fully-
connected network. It can be implemented as a directed graph encoding an
adjacency matrix. The graph is obtained from a natural corpus through a
Transformer model (Vaswani et al., 2017). The connection weights in the graph
capture the co-occurrence frequency for each pair of concepts. Let graph
$G=\langle C,C\times C\rangle$ be the system of concepts, where each node
$c_{i}\in C$ denotes a concept, each $c_{k}$, $k\neq i$, is a possibly related
concept that shapes $c_{i}$, and $w(c_{i},c_{k})$, $k\neq i$, indicates the
weight by which $c_{i}$ is shaped by $c_{k}$ among all related concepts. The
higher $w(c_{i},c_{k})$, the more the semantics of $c_{i}$ is influenced by
that of $c_{k}$. Note that usually $w(c_{i},c_{k})\neq w(c_{k},c_{i})$
because the weight encodes the conditional probability
$p(c_{i}|c_{k})=w(c_{i},c_{k})/\sum_{j\neq i}w(c_{i},c_{j})$. Intuitively,
this can be viewed as the probability of describing the concept $c_{i}$ by
the concept $c_{k}$ in a rational fashion (Frank and Goodman, 2012).
$p(c_{i})$ is a vector that has $p(c_{i}|c_{k})$ as its $k$-th dimension.
Since these statistics are obtained from a general natural corpus, the actual
conceptual knowledge distribution in an individual's mind differs more or
less, as it is highly conditioned on the individual's diverse prior
experiences. In this way we define all general and individual operations over
concepts as formal operations on a graph.
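The normalization above can be sketched directly as a row-normalization of the weight matrix. The weights below are hypothetical, purely for illustration; the diagonal is zero so that the $j\neq i$ sum is respected.

```python
import numpy as np

# Hypothetical co-occurrence weights w(c_i, c_k) for three concepts; the
# diagonal is zero because a concept is not counted as related to itself.
W = np.array([[0.0, 2.0, 6.0],
              [1.0, 0.0, 3.0],
              [4.0, 4.0, 0.0]])

def conditional_probs(W: np.ndarray) -> np.ndarray:
    """Row-normalize the weights: P[i, k] = w(c_i, c_k) / sum_{j != i} w(c_i, c_j)."""
    return W / W.sum(axis=1, keepdims=True)  # diagonal already zero
```

Each row of the result is then a probability vector over the related concepts.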
### Behavior modeling
Solving Mindle can ideally be modeled as a Markov decision process (MDP). An
MDP is a tuple $T=(S,A,P,R,\rho)$, where $S=C$ is the set of states, $A=C$ is
the set of actions, $P:S\times A\times S\mapsto[0,1]$ is the transition
probability function with $s_{t+1}\sim P(\cdot|s_{t},a_{t})$, $R:S\times
A\times S\mapsto\mathbb{R}$ is the reward function (the game score here plays
the role of reward) indicating under what condition the task is solved, and
$\rho$ is the initial state distribution, with $s_{0}\sim\rho$. Under this
formulation, solving Mindle means finding the policy $\pi$ that generates
trajectories $\tau=(s_{0},a_{0},s_{1},a_{1},\dots)\sim\pi$ optimizing the
objective
$\max_{\pi}\mathbb{E}_{\tau\sim\pi}[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t},s_{t+1})]$.
But this modeling is highly idealized, because the effective spaces of states
and actions are much smaller than the whole set of concepts, due to highly
limited memory and attention slots. Hence, a mask $M$ can be applied to both
$S$ and $A$ such that $S\cdot M\subset C$ and $A\cdot M\subset C$. $M$ can
either be predefined by heuristics (introduced next) or be extracted from
trajectories generated by subjects in pre-experiments.
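A minimal sketch of the masking step: the full concept vocabulary is restricted to the subset a participant can hold in memory. All names here (the concepts and the mask contents) are illustrative, not taken from the paper's implementation.

```python
# Full concept vocabulary C (illustrative) and a memory/attention-limited mask M.
concepts = ["worker", "labor", "salary", "arts", "hippocampus"]
mask = {"worker", "labor", "salary"}

def masked_space(concepts, mask):
    """Apply the mask M to a state or action space, yielding S.M (or A.M)."""
    return [c for c in concepts if c in mask]
```

The same operation serves for both $S\cdot M$ and $A\cdot M$, since states and actions are both concepts.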
### Similarity, rule, and the unrelated
Figure 2: An example of conceptual knowledge representation.
Given the current guess, where can we go? Taking _worker_ as an exemplar
current guess, there are three types of candidate actions (see Fig. 2): (1)
concepts that are highly similar to the current guess, _e.g_., _labor_ ,
_navvy_ , and _staff_ ; (2) concepts that are highly related to the current
guess by specific rules, _e.g_., _machine_ (by rule _usage_), _salary_ (by
rule _reward_), and _supervisor_ (by rule _organization_); (3) concepts that
are hardly related to the current guess by any means, _e.g_., _arts_ , _word_
, and _hippocampus_. Suppose we have $K$ proposals of each type and denote
$c_{t}$ and $c_{t+1}$ as the current and next guess, respectively. The
similar concepts are generated by top-$K$
$\max_{k}\cos(\text{Vec}(c_{t}),\text{Vec}(c_{k})),k\neq t$, the related
concepts by top-$K$ $\max_{k}w(c_{t},c_{k}),k\neq t$, and the unrelated
concepts by top-$K$ $\min_{k}w(c_{t},c_{k}),k\neq t$. Most similar concepts
are synonyms of $c_{t}$ and are therefore not highly related to it;
exceptions exist, so a filter is applied afterward. When selecting related
concepts, we maximize the diversity of the candidate concepts by filtering
out duplicate concepts generated by the same rule, _e.g_., _boss_ and
_supervisor_ (by rule _organization_). We implement this by applying
agglomerative clustering to the candidate concepts and selecting from the
root (Murtagh and Legendre, 2014). The three types are actions at the higher
level of $A$, which can be observed directly and easily mapped to decision
patterns, such as _persistent searching in a local minimum_ or _flexibly
jumping between locals_. Besides supporting the analysis of behavior data,
the graph pruned by the three modes is also used to control the difficulty of
challenges by tuning the length of the shortest path and the number of
possible paths traversing from the initial guess to the target word.
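The three top-$K$ selections can be sketched as follows. This is a simplified proposal policy over precomputed embedding vectors and graph weights; the duplicate-rule clustering filter described in the text is omitted.

```python
import numpy as np

def propose(cur, vecs, W, K):
    """Return top-K similar (cosine), related (high weight), and unrelated
    (low weight) candidate indices for the current guess `cur`, excluding
    the current guess itself."""
    norms = np.linalg.norm(vecs, axis=1)
    cos = vecs @ vecs[cur] / (norms * norms[cur])
    others = [k for k in range(len(vecs)) if k != cur]
    similar = sorted(others, key=lambda k: -cos[k])[:K]
    related = sorted(others, key=lambda k: -W[cur, k])[:K]
    unrelated = sorted(others, key=lambda k: W[cur, k])[:K]
    return similar, related, unrelated
```

Note that the most cosine-similar candidate can simultaneously be a low-weight (unrelated) candidate, which is exactly the synonym phenomenon the filter in the text addresses.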
### Evaluation metrics
One significant indicator to be evaluated is the emergence of the eureka
moment. We use both absolute and relative measurements. First, we identify
the Aha! moments in a single trajectory
$\tau=(s_{0},a_{0},s_{1},a_{1},\dots)$. We calculate the first-order reward
difference between adjacent trials, $\Delta r(t)=r_{t+1}-r_{t}$. For action
$a_{t}$, if there exists $i<t$ such that $\Delta r(i)\geq\Delta r(t)$, let
$r(i:t)=\sum_{k=i+1}^{t}r_{k}/(t-i)$; if not, let $i=0$. If there exists
$j>t$ such that $\Delta r(j)\geq\Delta r(t)$, let
$r(t:j)=\sum_{k=t+1}^{j}r_{k}/(j-t)$; if not, let $j$ be the last index. We
then have $\Delta a(t)=r(t:j)-r(i:t)$ as the cumulative difference between
the critical points. Critical points with high $\Delta a(t)$ tend to be
eureka moments from the view of a single trajectory. Second, we observe the
trajectories in a counterfactual fashion, asking what would have happened if
other actions had been taken. The action space can be either the concept
space $C$ (or the masked $C\cdot M$) at the low level, or the space of the
three high-level action types (similarity, rule, and unrelated). Then, we
define the updating rate as
$\max\\{0,1-\min_{a\in A}R(s_{t},a,s_{t+1})/R(s_{t},a_{t},s_{t+1})\\}$. Thus,
the current guess can be viewed as an intervention on the status quo
following the semantics gradient, supporting investigations of semantic
overload. The evaluation metrics can be flexibly modified to meet the needs
of specific experiments.
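The single-trajectory construction above can be implemented directly over a reward sequence. One detail is underspecified in the text: when several indices $i$ (or $j$) qualify, the sketch below takes the nearest one, which is an assumption.

```python
import numpy as np

def aha_scores(r):
    """Compute Delta a(t) for each action a_t from the reward sequence r.
    Nearest qualifying i (or j) is used -- an assumption, since the text
    does not specify which qualifying index to take."""
    r = np.asarray(r, dtype=float)
    dr = np.diff(r)                       # dr[t] = r_{t+1} - r_t
    n = len(dr)
    out = np.zeros(n)
    for t in range(n):
        # nearest i < t with dr[i] >= dr[t], defaulting to 0
        i = next((k for k in range(t - 1, -1, -1) if dr[k] >= dr[t]), 0)
        # nearest j > t with dr[j] >= dr[t], defaulting to the last index
        j = next((k for k in range(t + 1, n) if dr[k] >= dr[t]), n)
        left = r[i + 1:t + 1].mean() if t > i else r[t]   # r(i:t)
        right = r[t + 1:j + 1].mean()                     # r(t:j)
        out[t] = right - left
    return out
```

On a reward sequence with one sharp jump, the jump step dominates: `aha_scores([10, 20, 25, 80, 85])` peaks at the third step.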
### Playability as a game
In the post-interview of our pilot study (25 subjects, 11 female), after 3
challenges per subject, 24 of 25 subjects reported that _playing Mindle is
interesting_ and 18 of 24 reported that _I want to play Mindle every day_ ;
15 of 25 succeeded in at least one challenge and 8 of those 15 experienced at
least one eureka moment in at least one challenge. Solved challenges were
finished in about 80 guesses on average, and the number of insightful
solutions was about 50. These results imply that Mindle is attractive enough,
with an appropriate success rate, to have the potential to propagate with
ease. Also, Mindle is unique for its sense of an infinite answer space and
harmless trial (Hamari, 2007). Thus, Mindle can be played over the long term
even if a challenge is interrupted in the middle; indeed, a long period of
discontinuous thinking may even stimulate the Aha! moment.
## Case Studies
In this section, we discuss three interesting observations from the pilot
study. These phenomena may inspire elaborated hypotheses that lead to further
investigations. This shows how Mindle empowers large-scale experiments.
### Thinking out of the box: action level
Some challenges succeed due to action-level insights. The trajectories have a
common point: participants switch smoothly between _searching in a local
minimum_ and _making traversals between local regions globally_. Once they
have been optimizing a local minimum for a while without gaining a
significant score increase, they change to jumping randomly between concepts
that seem unrelated to each other, until they hit a local region with a much
higher score. Then they settle down again to optimize the guess locally with
synonyms or similar concepts. The eureka moments usually come when hitting a
_hot solution_ during a global random jump. Interestingly, the concepts in
the trajectories are highly different from each other, indicating that such a
behavior pattern is not constrained by semantics but driven by the inner
preference of participants. That is, no matter what concepts I am guessing, I
just switch between local search and global jump flexibly. This leads to a
hypothesis: do people apply meta-strategies ignoring the problem context?
### Thinking out of the box: semantic level
Some challenges succeed due to semantic-level insights. This happens when
participants hit a local region with a relatively high score, which makes it
easy to believe that the target lies there. Affected by prior experience,
participants tend to search in a subspace of the semantic space, where the
selected concepts are projected onto a plate of reduced semantic space. For
example, a participant has guessed _school_ , _class_ , and _grade_ ,
projecting the concepts onto the semantic subspace of school-related
concepts. However, these words cannot help her go further. The participant
decides to project the anchor concept, say _class_ , onto another semantic
space, say _computer-related terminologies_. Then she guesses _type_ and
unexpectedly takes a large step toward the target. Hence, she understands
that she had been trapped in a semantic subspace. This case shows that
representation reconstruction can also change the semantic subspace. Hence,
we come up with another hypothesis: do people apply meta-strategies according
to the problem context? This hypothesis seems to be in contrast to the one in
the last paragraph, but combining the two together, we have a comparative
hypothesis, which is more related to our big picture on scientific discovery:
do people use meta-strategies as policies regardless of context, or subject
to their subjective understanding of context semantics?
### Thinking inside the box: semantic level
Some challenges succeed through analytical solutions, especially when the
participant, fortunately, reaches the right track at the start. The
participant then optimizes along a gradient of the semantic landscape in
mind. The gradient can be extremely flexible: for example, hierarchy, _arts_
to _painting_ to _gallery_ ; extent, _large_ to _larger_ to _largest_ ; or
distance from humans, _human_ to _chimpanzee_ to _monkey_. The hypothesis
space for such a gradient is almost infinite because the semantic space has
very high dimensionality (Grand et al., 2022). Since the choice of gradient
is subject to personal cognitive bias, the trajectories for the same
challenge can be highly diverse. This echoes the diverse _mindsets_ of
different genres of science. Compared with previous work on testing personal
diversity in knowledge representation (Wang and Bi, 2021), Mindle (1)
stimulates the spontaneous use of commonsense knowledge, in contrast to other
experimental paradigms that probe human knowledge representation explicitly;
and (2) empowers the scaling-up of pilot studies, both in the broadness of
semantics and in the diversity of subjects, to obtain more elaborate results
on where people converge or diverge in concept representation. By recovering
all trajectories generated by the same group of participants, we can build a
computational model that captures their semantic landscape, _i.e_., a
function that outputs the sense of _which concepts are more similar or more
related to each other than to others_. This function can be approximated
through inverse reinforcement learning (Abbeel and Ng, 2004), so that we can
analyze group diversities of concept representation quantitatively. In this
way, we may reverse-engineer the organization of conceptual knowledge in
people's minds.
### Combining the three hypotheses
Testing the three hypotheses helps us understand the interplay between
insight-seeking and domain-knowledge-relying. First, on a confined problem
space, we test the existence of these thinking patterns by stimulating the
spontaneous use of action-level meta-strategies, semantic-level
meta-strategies, and semantic-level landscape optimization. After this, we
study whether, given different constrained topics, people tend to converge to
a set of similar parameters that control the interplay. Assuming that
insight-seeking and domain-knowledge-relying are the two ends of a continuum,
one possible hypothesis is that people rationally control the interplay
according to their uncertainty about the use of domain knowledge: high
uncertainty about _knowing what_ may lead to reconstruction at the action
level; high uncertainty about _deciding which_ may lead to reconstruction at
the semantic level; and low uncertainty may lead to maintaining the current
representation. Such an intuition defines the _balance point_ between the two
ends, so that irrational behaviors, _e.g_., those closer to insight-seeking,
can be identified by comparison with the rational case. On this basis, we
scale up the behavioral studies to generalize the results to larger groups of
individuals, and also collect a large dataset of trajectories to
reverse-engineer the semantic landscapes in the open domain.
Figure 3: (a) Concepts that have high and low associations with the concept
_dog_ ; (b) Concepts that have high similarity to the concept _cat_ ; (c)
Concepts that have high similarity to the concept _class_ , with the
potential for projection from the semantic subspace of
_education-related-concepts_ (_e.g_., _school_ and _grade_) to
_computer-related-concepts_ (_e.g_., _subclass_ and _type_).
### Acknowledgement
The authors thank Mr. David Turner for inspiring and helpful discussions on
designing Mindle.
## References
* Abbeel and Ng, (2004) Abbeel, P. and Ng, A. Y. (2004). Apprenticeship learning via inverse reinforcement learning. In International Conference on Machine Learning (ICML).
* Abend, (2008) Abend, G. (2008). The meaning of ‘theory’. Sociological Theory, 26(2):173–199.
* Allen et al., (2020) Allen, K. R., Smith, K. A., and Tenenbaum, J. B. (2020). Rapid trial-and-error learning with simulation supports flexible tool use and physical reasoning. Proceedings of the National Academy of Sciences (PNAS), 117(47):29302–29310.
* Almaatouq et al., (2021) Almaatouq, A., Becker, J. A., Bernstein, M., Botto, R., Bradlow, E., Damer, E., Duckworth, A. L., Griffiths, T., Hartshorne, J. K., Law, E., and et al. (2021). Scaling up experimental social, behavioral, and economic science.
* Auble et al., (1979) Auble, P. M., Franks, J. J., and Soraci, S. A. (1979). Effort toward comprehension: Elaboration or “aha”? Memory & Cognition, 7(6):426–434.
* Campbell, (1960) Campbell, D. T. (1960). Blind variation and selective retention in creative thought as in other knowledge processes. Psychological Review, 67(6):380.
* Carey, (1985) Carey, S. (1985). Conceptual change in childhood. MIT Press.
* Carey, (2009) Carey, S. (2009). The Origin of Concepts. Oxford University Press.
* Dubey et al., (2021) Dubey, R., Ho, M. K., Mehta, H., and Griffiths, T. (2021). Aha! moments correspond to metacognitive prediction errors.
* Dunnington et al., (2004) Dunnington, G. W., Gray, J., and Dohse, F.-E. (2004). Carl Friedrich Gauss: titan of science. MAA.
* Einstein, (1982) Einstein, A. (1982). How I created the theory of relativity. Physics Today, 35(8):45–47.
* Frank and Goodman, (2012) Frank, M. C. and Goodman, N. D. (2012). Predicting pragmatic reasoning in language games. Science, 336(6084):998–998.
* Gershman et al., (2015) Gershman, S. J., Horvitz, E. J., and Tenenbaum, J. B. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245):273–278.
* Gopnik, (1994) Gopnik, A. (1994). The theory theory. In Mapping the mind: Domain specificity in cognition and culture, pages 257–293. Cambridge University Press.
* Gopnik and Meltzoff, (1997) Gopnik, A. and Meltzoff, A. N. (1997). Words, thoughts, and theories. MIT Press.
* Grand et al., (2022) Grand, G., Blank, I. A., Pereira, F., and Fedorenko, E. (2022). Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Nature Human Behaviour, pages 1–13.
* Grenander, (2012) Grenander, U. (2012). A calculus of ideas: a mathematical study of human thought. World Scientific.
* Gruber, (1981) Gruber, H. E. (1981). On the relation between ‘aha experiences’ and the construction of ideas. History of Science, 19(1):41–59.
* Hamari, (2007) Hamari, J. (2007). Gamification. The Blackwell Encyclopedia of Sociology, pages 1–3.
* Hiebert and Lefevre, (1986) Hiebert, J. and Lefevre, P. (1986). Conceptual and procedural knowledge in mathematics: An introductory analysis. In Conceptual and procedural knowledge: The case of mathematics, pages 1–27. Erlbaum.
* Ho et al., (2022) Ho, M. K., Abel, D., Correa, C. G., Littman, M. L., Cohen, J. D., and Griffiths, T. L. (2022). People construct simplified mental representations to plan. Nature, 606(7912):129–136.
* Huth et al., (2016) Huth, A. G., De Heer, W. A., Griffiths, T. L., Theunissen, F. E., and Gallant, J. L. (2016). Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600):453–458.
* Kershaw and Ohlsson, (2004) Kershaw, T. C. and Ohlsson, S. (2004). Multiple causes of difficulty in insight: the case of the nine-dot problem. Journal of Experimental Psychology: Learning, memory, and cognition, 30(1):3.
* Knoblich et al., (1999) Knoblich, G., Ohlsson, S., Haider, H., and Rhenius, D. (1999). Constraint relaxation and chunk decomposition in insight problem solving. Journal of Experimental Psychology: Learning, memory, and cognition, 25(6):1534.
* Kounios and Beeman, (2009) Kounios, J. and Beeman, M. (2009). The aha! moment: The cognitive neuroscience of insight. Current Directions in Psychological Science, 18(4):210–216.
* Kuhn, (1970) Kuhn, T. S. (1970). The structure of scientific revolutions. University of Chicago Press: Chicago.
* Langley and Jones, (1988) Langley, P. and Jones, R. (1988). A computational model of scientific insight. The nature of creativity: Contemporary psychological perspectives, 177:201.
* Lieder and Griffiths, (2020) Lieder, F. and Griffiths, T. L. (2020). Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 43.
* Mednick, (1963) Mednick, M. T. (1963). Research creativity in psychology graduate students. Journal of Consulting Psychology, 27(3):265.
* Metcalfe and Wiebe, (1987) Metcalfe, J. and Wiebe, D. (1987). Intuition in insight and noninsight problem solving. Memory & Cognition, 15(3):238–246.
* (31) Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
* (32) Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NeurIPS).
* Momennejad et al., (2017) Momennejad, I., Russek, E. M., Cheong, J. H., Botvinick, M. M., Daw, N. D., and Gershman, S. J. (2017). The successor representation in human reinforcement learning. Nature Human Behaviour, 1(9):680–692.
* Moszkowski, (1972) Moszkowski, A. (1972). Conversations with Einstein. Sidgwick & Jackson.
* Murtagh and Legendre, (2014) Murtagh, F. and Legendre, P. (2014). Ward’s hierarchical agglomerative clustering method: which algorithms implement ward’s criterion? Journal of Classification, 31(3):274–295.
* Ohlsson, (1984) Ohlsson, S. (1984). Restructuring revisited: I. summary and critique of the gestalt theory of problem solving. Scandinavian Journal of Psychology, 25(1):65–78.
* Öllinger et al., (2008) Öllinger, M., Jones, G., and Knoblich, G. (2008). Investigating the effect of mental set on insight problem solving. Experimental Psychology, 55(4):269.
* Ormerod et al., (2002) Ormerod, T. C., MacGregor, J. N., and Chronicle, E. P. (2002). Dynamics and constraints in insight problem solving. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28(4):791.
* Pennington et al., (2014) Pennington, J., Socher, R., and Manning, C. D. (2014). Glove: Global vectors for word representation. In Annual Conference on Empirical Methods in Natural Language Processing (EMNLP).
* Rittle-Johnson et al., (2001) Rittle-Johnson, B., Siegler, R. S., and Alibali, M. W. (2001). Developing conceptual understanding and procedural skill in mathematics: An iterative process. Journal of Educational Psychology, 93(2):346.
* Salvi et al., (2016) Salvi, C., Bricolo, E., Kounios, J., Bowden, E., and Beeman, M. (2016). Insight solutions are correct more often than analytic solutions. Thinking & Reasoning, 22(4):443–460.
* Schickore, (2022) Schickore, J. (2022). Scientific discovery. In Zalta, E. N. and Nodelman, U., editors, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, Winter 2022 edition.
* Tenenbaum and Griffiths, (2001) Tenenbaum, J. B. and Griffiths, T. L. (2001). Generalization, similarity, and bayesian inference. Behavioral and Brain Sciences, 24(4):629–640.
* Vaswani et al., (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (NeurIPS).
* Wang and Bi, (2021) Wang, X. and Bi, Y. (2021). Idiosyncratic tower of babel: Individual differences in word-meaning representation increase as word abstractness increases. Psychological Science, 32(10):1617–1635.
* Wang et al., (2020) Wang, X., Men, W., Gao, J., Caramazza, A., and Bi, Y. (2020). Two forms of knowledge representations in the human brain. Neuron, 107(2):383–393.
## Appendix A User interface of Mindle
Given a starting word, users are expected to navigate toward a secret target
word. Users travel in the semantic world by guessing words. Each time they
jump to a word, they receive a similarity score indicating the distance
between their current position and the target. For the pilot study, we
designed two versions of the Mindle game: the Web-based Mindle is designed
for both lab-based and online experiments (see Fig. 4(a)), and the
Mobile-based Mindle can be accessed from mobile terminals, making it suitable
for larger-scale online experiments (see Fig. 4(b)).
Figure 4: User interface of Mindle. (a) Web-based Mindle; (b) Mobile-based
Mindle
## Appendix B Examples of conceptual knowledge representation
Tab. 2 shows some canonical examples of conceptual knowledge representation.
Each example lists the concepts most highly associated with the central
concept. A set of stop words has been removed to make the results more
representative.
Table 2: Examples of top-associated conceptual knowledge representation

| Word | Top associated concepts |
|---|---|
| school | building, county, education, national, community, family, government, city, house, university |
| cat | life, name, family, medal, known, me, house, into, your, time |
| work | press, have, government, community, title, zone, action, field, works, working, more, time |
| apple | park, famous, story, award, album, music, different, out, title |
| train | tour, population, district, project, zone |
| gift | what, new, different, something, house, special, story, inside |
## Appendix C Examples of score contour line
Table 3: Score contour lines based on cosine similarity. Each block lists words at the Top 1, Top 10, Top 100, and Top 500 similarity levels to the target.

(a) Target: _cat_

| Word | Cos similarity |
|---|---|
| cat | 1 |
| cats | 0.809938 |
| dog | 0.760946 |
| kitten | 0.746498 |
| feline | 0.732624 |
| beagle | 0.715058 |
| puppy | 0.707545 |
| pup | 0.693429 |
| pet | 0.689153 |
| felines | 0.675593 |
| chihuahua | 0.670976 |
| bassets | 0.520498 |
| rooster | 0.51893 |
| owl | 0.518349 |
| pinscher | 0.517772 |
| tiger | 0.517296 |
| piglet | 0.516684 |
| kelpie | 0.515803 |
| dachshunds | 0.515763 |
| schnauzers | 0.514645 |
| bird | 0.514626 |
| earless | 0.392231 |
| hoarder | 0.392067 |
| lynx | 0.392044 |
| shrike | 0.392036 |
| panleukopenia | 0.391696 |
| iguanas | 0.391474 |
| doglike | 0.391446 |
| yelping | 0.391318 |
| crow | 0.391283 |
| rabbity | 0.391252 |

(b) Target: _green_

| Word | Cos similarity |
|---|---|
| green | 1 |
| greener | 0.809938 |
| red | 0.760946 |
| greening | 0.746498 |
| yellow | 0.732624 |
| blue | 0.715058 |
| brown | 0.707545 |
| florescent | 0.693429 |
| greenest | 0.689153 |
| nongreen | 0.675593 |
| purple | 0.670976 |
| pistache | 0.520498 |
| echeveria | 0.51893 |
| paspalum | 0.518349 |
| greened | 0.517772 |
| bicolored | 0.517296 |
| multicolored | 0.516684 |
| brittlebush | 0.515803 |
| arborvitaes | 0.515763 |
| stripes | 0.514645 |
| sienna | 0.514626 |
| conserve | 0.392231 |
| photovoltaic | 0.392067 |
| leafier | 0.392044 |
| euonymous | 0.392036 |
| alpenglow | 0.391696 |
| coppery | 0.391474 |
| tomatillo | 0.391446 |
| beautifying | 0.391318 |
| marram | 0.391283 |
| tangelo | 0.391252 |
Tab. 3 shows some canonical examples of the similarity-based semantic
landscape. A contour line is shaped by different concepts with the same-level
score to the target concept, and each example shows a series of contour lines
at different scores. The concept _cat_ has a single major semantic meaning,
while _green_ has two. Hence, the concepts similar to _cat_ lie in a single
semantic subspace, while those similar to _green_ lie in two semantic
subspaces (color and plant).
## Appendix D Player trajectories
Figure 5 shows some exemplar trajectories generated by players. Thinking
patterns mentioned in the paper, such as _insight-seeking_ and
_domain-knowledge-relying_ , can be clearly observed in these trajectories.
Besides, several _Aha! moments_ can be observed in the test process.
(a) Target Word: Finish
(b) Target Word: Northest
(c) Target Word: Friend
Figure 5: Examples of player trajectories in Mindle.
# Super-CLEVR: A Virtual Benchmark to
Diagnose Domain Robustness in Visual Reasoning
Zhuowan Li$^{1}$ Xingrui Wang$^{2}$ Elias Stengel-Eskin$^{1}$
Adam Kortylewski$^{3,4}$ Wufei Ma$^{1}$ Benjamin Van Durme$^{1}$ Alan Yuille$^{1}$
$^{1}$Johns Hopkins University $^{2}$University of Southern California
$^{3}$Max Planck Institute for Informatics $^{4}$University of Freiburg
###### Abstract
Visual Question Answering (VQA) models often perform poorly on out-of-
distribution data and struggle with domain generalization. Due to the multi-
modal nature of this task, multiple factors of variation are intertwined,
making generalization difficult to analyze. This motivates us to introduce a
making generalization difficult to analyze. This motivates us to introduce a
virtual benchmark, Super-CLEVR, where different factors in VQA domain shifts
can be isolated in order that their effects can be studied independently. Four
factors are considered: visual complexity, question redundancy, concept
distribution and concept compositionality. With controllably generated data,
Super-CLEVR enables us to test VQA methods in situations where the test data
differs from the training data along each of these axes. We study four
existing methods, including two neural symbolic methods, NSCL [45] and
NSVQA [59], and two non-symbolic methods, FiLM [50] and mDETR [29], as well
as our proposed method, probabilistic NSVQA (P-NSVQA), which extends NSVQA
with uncertainty reasoning. P-NSVQA outperforms the other methods on three of
the four
domain shift factors. Our results suggest that disentangling reasoning and
perception, combined with probabilistic uncertainty, form a strong VQA model
that is more robust to domain shifts. The dataset and code are released at
https://github.com/Lizw14/Super-CLEVR.
## 1 Introduction
Visual question answering (VQA) is a challenging task that assesses the
reasoning ability of models to answer questions based on both visual and
linguistic inputs. Current VQA methods are typically developed on standard
benchmarks like VQAv2 [16] or GQA [25], with the implicit assumption that
testing data comes from the same underlying distribution as training data.
However, as has been widely studied in computer vision [15, 51, 36],
algorithms trained on one domain often fail to generalize to other domains.
Moreover, having learned the distributional prior of training data, models
often struggle on out-of-distribution tests. This has been studied in VQA from
the perspective of domain transfer [8, 62, 57], dataset bias [2, 48, 11],
counter-factual diagnosis [47, 9], and out-of-distribution benchmarking [30].
The multi-modal nature of VQA gives rise to multiple intertwined factors of
variation, making domain shift an especially difficult problem to study. For
example, [8] suggests that VQA domain shifts are a combination of differences
in images, questions or answers; and [39] reveals a gap between synthetic and
real VQA datasets by differences in the over-specification of questions and
the underlying distribution of concepts. However, despite a wealth of research
on domain generalization in VQA [3, 26, 57, 62], there is no systematic
analysis of the contributing factors in domain shifts.
Figure 1: We decompose VQA domain shifts into four contributing factors:
visual complexity, question redundancy, concept distribution and concept
compositionality. The domain shifts along each factor can be independently
studied with the proposed Super-CLEVR dataset.
To this end, we introduce a virtual benchmark, Super-CLEVR, which enables us
to test VQA algorithms in situations where the test data differs from the
training data. We decompose the domain shift into a set of isolated
contributing factors, so that their effects can be diagnosed independently. We
study four factors: visual complexity, question redundancy, concept
distribution, and concept compositionality. These are illustrated in Fig. 1
and described in Sec. 3.1. With controllable data generation using our
Super-CLEVR virtual benchmark, we are able to isolate the different factors in
VQA domain shifts so that their effects can be studied independently. Compared
with the original CLEVR dataset [27], Super-CLEVR contains more complicated
visual components and has better controllability over the domain shift
factors. As shown in Fig. 1, the Super-CLEVR dataset contains images rendered
from 3D graphical vehicle models in the UDA-Part dataset [40], paired with
questions and answers automatically generated from templates. The objects and
questions are sampled based on the specified underlying probability
distribution, which can be controlled to produce distribution shifts in
different factors.
With Super-CLEVR, we diagnose the domain robustness of current VQA models.
Four representative models are studied: for the classic two-stream feature
fusing architecture, we choose FiLM [50]; for a large-scale pretrained model
we take mDETR [29]; we use NSCL [45] and NSVQA [59] as representative neuro-
symbolic methods. We observe that all these models suffer from domain shifts
with varying degrees of sensitivity. We analyze each factor separately to
examine the influence of different model designs. Specifically, we find that
the step-by-step design of neural modular methods enhances their robustness to
changes in question redundancy compared with non-modular ones; however, the
non-modular models are more robust to visual complexity. Furthermore, thanks
to its decomposed reasoning and perception, NSVQA is more robust to concept
distribution shifts.
While existing models suffer from domain shifts with different
characteristics, we make a technical improvement over NSVQA which enables it
to significantly outperform existing models on three of the four factors. In
particular, we inject probabilities into the deterministic symbolic executor
of NSVQA, empowering it to take into account the uncertainty of scene
understanding. We name our model _probabilistic NSVQA_ (P-NSVQA), and show
that its performance improvement in both the in-domain and out-of-domain
settings. With superior results of P-NSVQA, we suggest that disentangling
reasoning from vision and language understanding, together with probabilistic
uncertainty, gives a strong model that is robust to domain shifts.
Our contributions are as follows. (1) We introduce the Super-CLEVR benchmark
to diagnose VQA robustness along four different factors independently. This
benchmark can also be used for part-based reasoning. (2) We enhance a neural-
symbolic method by taking the uncertainty of visual understanding into account
in reasoning. (3) We conduct detailed analysis of four existing methods, as
well as our novel approach to study the influence of model designs on distinct
robustness factors. We conclude that disentangled reasoning and perception
plus explicit modeling of uncertainty leads to a more robust VQA model.
## 2 Related work
Visual question answering (VQA). Popular VQA methods fall into three
categories. Two-stream methods extract features for images and questions using
a CNN and an LSTM respectively, then enable interaction between the two
modalities with different feature fusing methods [4, 13, 32, 60, 24, 50, 37].
Neural symbolic methods, on the other hand, use a parse-then-execute pipeline
where the question is parsed into a functional program, which is then executed
on the image using neural modules [59, 45]. Recently, transformer-based models
have achieved impressive performance on various vision-and-language tasks by
pretraining on large-scale datasets and then finetuning for downstream tasks
[55, 44, 38, 63, 29]. We choose FiLM [50], NSCL [45] and mDETR [29] as category
representatives.
VQA datasets. Datasets containing real images and human-written questions have
been widely used to benchmark VQA models, _e.g_. VQA [5], VQAv2 [16], Visual
7w [65], VizWiz [18], Visual Genome [35], COCO QA [52], etc. However,
subsequent work has revealed strong priors and biases in those datasets, which
might be exploited by models to correctly predict the answers without
reasoning [47, 17, 48, 1, 16, 30, 31]. Attempts to address this problem
include better balancing datasets [2] and creating counterfactual examples
[11, 9]. To assess a model’s true reasoning ability, the CLEVR dataset [27]
proposes to generate complex multi-step questions on synthetic images, which
is then extended to various vision-and-language tasks [41, 34, 58, 61, 6, 53,
23]. The GQA dataset [25] extends CLEVR-style questions to real images. Our
benchmark is distinct from existing ones because we introduce more complex
visual scenes into CLEVR and provide controllability to study domain
robustness on isolated factors.
Domain shift in VQA. Domain shift is a long-standing challenge in computer
vision, explored in prior works on domain adaptation [14, 22, 15, 43] and
domain generalization [51, 36]. Recent works have focused on domain shifts in
VQA. [8, 57] improve model adaptation between datasets via feature learning.
[62] analyzes domain shifts between nine popular VQA datasets and proposes an
unsupervised method to bridge the gaps. [39] generalizes symbolic reasoning
from synthetic to real datasets. [3] introduces a question-answer generation
module that simulates domain shifts. [26] proposes a training scheme, X-GGM,
to improve out-of-distribution generalization. [6] assesses model
generalization on the CLOSURE of linguistic components. In contrast to prior
works we study each of the different domain shift factors independently with
our virtual benchmark.
## 3 Super-CLEVR
### 3.1 Motivation: domain shift factors
Visual complexity. A major difference between different VQA datasets is visual
complexity. For example, in the CLEVR dataset, objects are simple, atomic
shapes while in real-world data, objects are more complex and have
hierarchical parts. While hard to quantify, visual complexity is related to
various factors, such as object variety, object size, background, texture,
lighting, occlusion, view point, etc. In our work, we control visual
complexity by introducing more challenging objects that can have distinct
attributes associated with their parts, and by optionally pasting various
textures onto objects. Examples of generated images with different complexity
levels are shown in Fig. 1.
Question redundancy. Question redundancy refers to the amount of over-
specified or redundant information in the question, which can be in the form
of either attributes or relationships. For example, in Fig. 1, in “what color
is the large bus behind the cyan car”, large (attribute) and behind the cyan
car (relationship) are redundant because there is only one bus in the image. As
observed in linguistics and cognitive science [12, 54, 49, 33], human speakers
may include over-specified information when identifying a target object, which
has also been studied in referring expression generation [46]. For VQA, as
analyzed in [39], a significant difference between synthetic and real datasets
is that real questions contain some redundant information, which sometimes is
a distraction leading to model prediction errors. Therefore, in this work, we
generate questions with different redundancy levels and study the effect of
question redundancy on model behaviors.
Concept distribution. The distributions of concepts, _i.e_. objects (_e.g_.
car) and attributes (_e.g_. large), are distinct across different VQA
datasets. For example, while colors are well-balanced in the CLEVR dataset, in
the GQA dataset the color distribution is long-tailed: “white” appears $>50$
times more frequently than “gold”. Long-tailed distributions have been a
challenge in many computer vision tasks [42, 64, 20, 56, 28]. In VQA, the
long-tailed concept distribution not only hinders the learning of infrequent
concepts due to few training samples, but also introduces strong biases and
priors in the dataset that may mislead the models. For example, “tennis” is
the correct answer to most questions with “what sport is …” [2]. With strong
priors in data, it is hard to assess the true reasoning capacity of current
models. While previous works address this problem by carefully re-balancing
datasets [2], in our work, we controllably vary the concept distribution in
our dataset and study model robustness to concept distribution shifts.
Concept compositionality. Concept compositionality refers to how different
concepts (shapes, attributes) compose and co-occur with each other, _e.g_.
roses are usually red while violets are usually blue [30]. Concept
compositionality can be viewed as a conditional concept distribution in the
context of other concepts. Shifts in concept compositionality impede the
generalization of VQA models. For example, the model may fail to recognize a
green banana because most bananas are yellow in the training data [7].
Previous works evaluate the out-of-distribution performance by collecting
counterfactual testing examples [47]. In our work, we control the
compositionality of shapes and colors with an intuitive motivation: if, for
example, in the training data, bicycles are red and cars are blue, will the
models be able to recognize blue bicycles and red cars in testing?
Figure 2: Super-CLEVR contains 21 vehicle models belonging to 5 categories,
with controllable attributes.
### 3.2 Dataset generation
Super-CLEVR follows a similar data generation pipeline as CLEVR, but with more
complex visual components and better control of domain gap factors. We
describe the generation procedure below.
Objects with parts. To improve the visual complexity of CLEVR scenes, we
replace the simple shapes (_e.g_., cube, sphere) in CLEVR dataset with
vehicles from UDA-Part dataset [40]. There are 21 vehicle models, belonging to
5 categories: car, motorbike, aeroplane, bus, and bicycle. Each 3D model comes
with part annotations, _e.g_., left front wheel or left right door for car.
Examples for the vehicle models are shown in Fig. 2. We remove or merge small
parts from the original annotations to avoid severe difficulty in visual
understanding. The full object and parts list is in the supplementary
material.
Attributes. Besides the attributes in the original CLEVR dataset, _i.e_.
color, material, size, we optionally add texture as an additional attribute to
increase visual complexity. Note that in order to enable part-based questions,
the attributes (color or material) of object parts can be different from those
of the object. For example, a blue car can have a red wheel or a green door.
In this case, the attribute of the holistic object refers to the attribute of
its main body (_e.g_. a blue car has a blue frame).
Scene rendering. Following CLEVR, each scene contains 3 to 10 objects. The
objects are placed onto the ground plane with random position and orientation.
When placing the objects, we ensure that the objects do not overlap with each
other and we avoid severe occlusion by thresholding the number of visible
pixels for each object. Random jitters are added to lamp and camera positions.
When rendering, we also save the ground-truth bounding boxes and segmentation
masks for each of the objects and their parts, which are required when
training some of the models.
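The visible-pixel check described above can be sketched as follows. This is an illustrative reconstruction: both the threshold value and the mask representation are assumptions, since the text does not specify them.

```python
import numpy as np

def occlusion_ok(instance_masks, min_visible=500):
    """Accept a rendered scene only if every object keeps enough visible pixels.

    instance_masks: boolean arrays (H, W), one per object, taken from the
    renderer's instance segmentation with occlusions applied. The default
    threshold of 500 pixels is an illustrative assumption.
    """
    return all(int(m.sum()) >= min_visible for m in instance_masks)

# toy check: a fully visible 32x32 object passes, a fully occluded one fails
fully_visible = np.ones((32, 32), dtype=bool)
occluded = np.zeros((32, 32), dtype=bool)
print(occlusion_ok([fully_visible]))            # True  (1024 >= 500)
print(occlusion_ok([fully_visible, occluded]))  # False
```

A scene failing this check would simply be re-sampled with new object placements before rendering.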
Question generation. Super-CLEVR follows a similar question generation
pipeline to CLEVR's, instantiating question templates using an underlying
reasoning program that operates on the scene graph. For example, the program
select_shape(truck) $\to$ query_color($\cdot$) can be instantiated as the
question “what is the color of the truck”. Therefore, the redundancy level of
questions can be controlled by removing or adding redundant reasoning steps in
the underlying reasoning program.
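The select-then-query example above can be written as a tiny executable program. The function names follow the text; the scene-graph schema used here is an illustrative assumption.

```python
# Toy sketch of executing a reasoning program on a scene graph.
scene = [
    {"shape": "truck", "color": "red", "size": "large"},
    {"shape": "sedan", "color": "blue", "size": "small"},
]

def select_shape(objects, shape):
    """Filter the scene graph down to objects of the given shape."""
    return [o for o in objects if o["shape"] == shape]

def query_color(objects):
    # the instantiated question is only valid if the referent is unique
    assert len(objects) == 1
    return objects[0]["color"]

# select_shape(truck) -> query_color(.)  ~  "what is the color of the truck"
answer = query_color(select_shape(scene, "truck"))
print(answer)  # red
```

Chaining such functional modules is what lets the generator check question validity and later control redundancy at the program level.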
### 3.3 Controlling the dataset
To study domain generalization, we generate several variants of the dataset
for each of the domain shift factors. The variants of the datasets serve as
different data domains to test the model robustness. Here we describe the
method for controllably generating the dataset variants.
Visual complexity. We generate three variants of the dataset with different
levels of visual complexity: easy, mid (middle) and hard. The only difference
between the 3 versions is visual complexity: for the easy version, objects
with different sizes, colors and materials are placed into the scene; for the
middle version, we choose 3 parts on each object that are visible and randomly
change their attributes; for the hard version, we further add random textures
to the objects and parts. An example of the 3 dataset versions can be found in
Fig. 1. Note that the scene layout and the questions are shared, so that the
influence of visual complexity can be isolated and studied independently.
Question redundancy. Three variants of the dataset with different redundancy
levels are generated: rd-, rd (default), rd+. By default (rd), as in the
original CLEVR dataset, the questions contain some redundant attributes resulting from
random sampling, while all redundant relationships are removed. In rd-, we
also remove all redundant attributes from the questions, leading to no
redundancy in the questions. In rd+, we add all possible attributes and
relationships into the question, so that questions contain a high level of
redundancy. For all the variants, the questions are ensured to be valid.
Concept distribution. We generate three dataset variants with different
concept distributions: bal (balanced), slt (slightly unbalanced) and long
(long-tail distributed). More specifically, we change the distribution of
shapes, colors and materials while the distribution of size is kept fixed in
order to keep visual complexity consistent, since objects with smaller sizes
are visually harder to recognize. By default (bal), the shapes and attributes
are randomly sampled, leading to a balanced distribution. For slt and long,
the concept distribution $\mathbf{d}$ is generated by $d_{i}=a^{-i}$, where
$i$ is the index of the concept. $a$ is a hyper-parameter controlling the
length of the tail. A larger $a$ leads to a more imbalanced distribution, and
$a=1$ leads to a flat distribution (cf. Fig. 1). For slt, $a=1.3$; for long,
$a=2.0$. In addition, to better analyze the performance on the frequent and
rare concepts, we generate three variants for testing purpose only: head
(frequent concepts in the long-tail distribution), tail (infrequent/rare
concepts), and oppo (opposite to the long-tail distribution). We test each
model on those three variants to analyze the performance on concepts with
different degrees of frequency.
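The $d_i = a^{-i}$ construction can be written out directly; normalizing the weights into sampling probabilities is our assumption about how they are used during generation.

```python
import numpy as np

def concept_distribution(n_concepts, a):
    """Long-tail concept weights d_i = a**(-i), normalized to probabilities."""
    d = np.array([a ** (-i) for i in range(n_concepts)], dtype=float)
    return d / d.sum()

print(concept_distribution(4, 1.0))  # flat (bal): [0.25 0.25 0.25 0.25]
print(concept_distribution(4, 2.0))  # long tail, heavily skewed to concept 0

# sampling shapes for the slt variant (a = 1.3); the rng seed is arbitrary
rng = np.random.default_rng(0)
shapes = rng.choice(4, size=10, p=concept_distribution(4, 1.3))
```

The same sampler, with independent weight vectors, would drive the shape, color, and material choices while size stays uniform.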
Concept compositionality. We generate 3 versions of the dataset, co-0, co-1
and co-2, with different compositions of the 21 shapes (from 5 categories) and
the 8 colors. The compositionality of the dataset is controlled with the co-
distribution matrix $M\in\mathbb{R}^{21\times 8}$, where each entry $M_{ij}$
is the probability an object of the $i$-th shape has the $j$-th color. Entries
in each row of $M$ sum up to $1$. In the version co-0, $M$ is a flat matrix so
that the shapes and colors are randomly composed. In co-1, each shape in one
category has a different color distribution, _e.g_. truck and sedan, while
shapes from different categories may share the same color distribution _e.g_.
sedan and airliner. Conversely, in co-2, we make the shapes in the same
category share the same color distribution, while shapes from different categories have
different distributions. The motivation is that since shapes from the same
category are visually similar, the difference in co-1 and co-2 will help
analyze the difference in model predictions on visually similar objects and
dissimilar objects when composed with different color distributions.
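A sketch of the co-distribution matrix $M$: each row is one shape's color distribution and sums to 1. How rows are shared across categories in co-1/co-2 is only described qualitatively above, so the Dirichlet construction here is purely illustrative.

```python
import numpy as np

n_shapes, n_colors = 21, 8

# co-0: flat matrix, shapes and colors composed at random;
# row M[i, :] is the color distribution of the i-th shape
M_co0 = np.full((n_shapes, n_colors), 1.0 / n_colors)

# co-1/co-2 sketch: draw one color distribution per shape (the concrete way
# rows are constructed and shared across categories is an assumption here)
rng = np.random.default_rng(0)
M = rng.dirichlet(np.ones(n_colors), size=n_shapes)
assert np.allclose(M.sum(axis=1), 1.0)  # each row of M sums to 1

def sample_color(shape_idx, M):
    """Sample color j for the given shape with probability M[shape_idx, j]."""
    return int(rng.choice(n_colors, p=M[shape_idx]))

color = sample_color(0, M)
```

Sharing a row among shapes of one category (co-2) versus giving every shape in a category its own row (co-1) is then a matter of how the 21 rows are tied together.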
Dataset Statistics. Every dataset variant contains 30k images, including 20k
for training, 5k for validation and 5k for testing. Each image is paired with
10 object-based and 10 part-based questions. By default, the dataset refers to
the version with mid visual complexity level (_i.e_. objects are untextured
and have up to 3 parts with distinct attributes), rd redundancy level, balanced
(bal) concept distribution and random (co-0) compositionality. More dataset
statistics are in supplementary materials.
## 4 Evaluated methods
### 4.1 Simple baselines
Random, Majority. These simple baselines pick a random or the most frequent
answer for each question type in the training set as the predicted answer.
LSTM. This question-only baseline encodes the question word embeddings with
LSTM [21] and predicts answer with an MLP on top of the final hidden states of
the LSTM.
CNN+LSTM. The image is represented with features extracted by CNN and question
is encoded by LSTM. An MLP predicts answer scores based on the concatenation
of image and question features.
### 4.2 Existing models
FiLM. We choose Feature-wise Linear Modulation [50] as a representative of
classic two-stream feature merging methods. The question features extracted
with GRU [10] and image features extracted with CNN are fused with the
proposed FiLM module.
mDETR. mDETR [29] is a transformer-based detector trained to detect objects
in an image conditioned on a text query. The model is pretrained with 1.3M
image and text pairs and can be finetuned for various downstream tasks like
referring expression understanding or VQA.
NSCL. The Neuro-Symbolic Concept Learner [45] is a representative neural
symbolic method. NSCL executes neural modules on the scene representation
based on the reasoning program, during which the modules learn embeddings of
each concept from answer supervision.
NSVQA. Neural-Symbolic VQA [59] is a neural symbolic method composed of three
components: a scene parser (Mask-RCNN [19]) that segments an input image and
recovers a structural scene representation, a question parser that converts a
question from natural language into a program, and a program executor that runs
the program on the structural scene representation to obtain the answer.
Notably, compared to NSCL, the individual components of NSVQA can be learned
separately, hence, for example the scene parser can be learned from data that
does not necessarily have Visual-Question annotations.
### 4.3 Probabilistic NSVQA (P-NSVQA)
Since the program executor in NSVQA is a collection of deterministic, generic
functional modules, it can be augmented with a probabilistic reasoning process
that takes into account the confidence of the predictions of the scene parser.
This allows the model to execute the program that has the largest joint
likelihood, instead of only taking the maximal likelihood execution at each
step of the program. The experimental results demonstrate a significant
performance improvement of this probabilistic approach over the deterministic
NSVQA model proposed in [59].
In particular, we interpret the confidence of the Mask-RCNN output as a
likelihood function for all detected object classes $p_{object}$ and their
attributes $p_{att}$. Moreover, we define a likelihood $p_{spatial}$ for the
spatial relations between objects (behind, in front, left, right) that is
proportional to the distance between the centers of two bounding boxes. Given
a reasoning program containing multiple reasoning steps, we execute each step
based on the scene parsing likelihood and produce a step-wise output with
confidence. Finally, we use a factorized model, multiplying the output for all
the steps to get the final answer prediction. We refer readers to the Appendix
for more details.
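Our reading of the factorized executor can be sketched as a toy filter chain. The exact likelihood bookkeeping in P-NSVQA is not spelled out here, so the schema and numbers below are illustrative assumptions: each object carries attribute likelihoods from the scene parser instead of hard labels, a filter step scales each object's weight by the likelihood of matching the constraint, and per-step confidences multiply into a joint score.

```python
import numpy as np

# Two detected objects with (assumed) per-attribute likelihoods from the parser.
scene = [
    {"shape": {"bus": 0.9, "car": 0.1}, "color": {"yellow": 0.8, "cyan": 0.2}},
    {"shape": {"bus": 0.2, "car": 0.8}, "color": {"yellow": 0.1, "cyan": 0.9}},
]

def filter_step(weights, key, value):
    """Multiply each object's running weight by its likelihood of matching."""
    return [w * obj[key].get(value, 0.0) for w, obj in zip(weights, scene)]

w = [1.0] * len(scene)                # every object starts fully selected
w = filter_step(w, "shape", "bus")    # step 1: select buses
w = filter_step(w, "color", "yellow") # step 2: keep yellow ones
best = int(np.argmax(w))              # referent with largest joint likelihood
print(best, round(w[best], 2))        # 0 0.72
```

The deterministic executor would instead commit to the argmax label at every step, discarding the 0.1/0.2 alternatives that the probabilistic version keeps in play.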
### 4.4 Implementation details
Training mDETR requires ground-truth grounding of question tokens to image
regions, which is available in Super-CLEVR. NSCL requires bounding boxes of
objects, which can be predicted using a trained Faster RCNN, and the reasoning
program, which can be parsed using a trained parser. Similarly, ground-truth
programs are used for training NSVQA and P-NSVQA. Note that we empirically
find that the question-to-program parsing is a relatively easy task ($>99\%$
accuracy using a simple LSTM), so we focus more on models’ reasoning ability
in our analysis.
Unless specified otherwise, the models are trained with the default settings
as in the official implementations. FiLM is trained for 100k iterations with batch size
256. mDETR is trained for 30 epochs with batch size 64 using 2 GPUs for both
the grounding stage and the answer classification stage. NSCL is trained for
80 epochs with batch size 32. For NSVQA and P-NSVQA, we first train the object
parser (Mask RCNN [19]) for 30k iterations with batch size 16, then train the
attribute extraction model (using the Res50 backbone) for 100 epochs with
batch size 64. For P-NSVQA, when counting the objects or determining whether
objects exist in the scene, we use a threshold (0.7) to obtain the final
selected objects. Early stopping is used based on validation accuracy. All the
models are trained with 200k questions. We repeat the experiments on the
default split three times with different random seeds and obtain a std of
$\pm 0.10$ (P-NSVQA) and $\pm 0.40$ (NSVQA), showing the statistical
significance of our results; we then run the other experiments only once.
## 5 Results and analysis
In this section, we first show evaluation results for in-domain setting, then
provide results and analysis on out-of-domain evaluation. Finally, we describe
additional studies and future works.
### 5.1 In-domain results
In-domain evaluation refers to the setting where the training and testing data
come from the same domain (the default dataset variant in this case). We
compare the in-domain results on Super-CLEVR and CLEVR. The results are shown
in Fig. 3.
Figure 3: Comparison of models’ accuracy on Super-CLEVR and the original CLEVR
dataset.
For all the models, the performance is lower on Super-CLEVR than CLEVR,
suggesting that Super-CLEVR is a more challenging benchmark. The scenes with
vehicles are much harder visually for the models to understand compared with
the simpler shapes from CLEVR. Note that the performance gap on two datasets
for simple baselines (Random, Majority, LSTM, CNN+LSTM) is smaller than for
the other better models. This is because Super-CLEVR contains more object
types, and therefore the performance of simply guessing is lower than on
CLEVR.
When comparing the performance of different models, we find that the neural
modular methods, _i.e_. NSCL, NSVQA, P-NSVQA, perform much better than non-
modular ones. This is not surprising given their nearly perfect performance on
the original CLEVR dataset, which shows their strong ability to model synthetic
images. The large-scale pretrained grounding model mDETR, which is a leading
model on both real and synthetic images, also achieves good performance
(82.7%) on Super-CLEVR. The two-stream method FiLM does not achieve very
strong performance (53.2%), but is still much better than the other simple
baselines.
Our proposed P-NSVQA outperforms all the other models. In particular, on
Super-CLEVR, it outperforms its deterministic counterpart, NSVQA, by 2.96%.
This shows the advantage of taking probabilities into account when the scenes
are challenging and thus the model's uncertainty about its predictions can be
exploited.
### 5.2 Out-of-domain results
| FiLM | mDETR | NSCL | NSVQA | Prob NSVQA
---|---|---|---|---|---
Visual Complexity
| easy | mid | hard | easy | mid | hard | easy | mid | hard | easy | mid | hard | easy | mid | hard
easy | 59.96 | 53.95 | 50.66 | 93.36 | 84.30 | 82.97 | 95.13 | 92.31 | 90.81 | 95.19 | 94.19 | 94.09 | 96.76 | 95.98 | 96.37
mid | 57.41 | 53.28 | 50.18 | 83.34 | 82.36 | 81.27 | 84.5 | 89.10 | 86.33 | 81.99 | 92.80 | 93.78 | 86.25 | 95.76 | 95.11
hard | 55.95 | 53.11 | 50.47 | 79.71 | 79.94 | 80.71 | 76.85 | 78.66 | 85.08 | 73.11 | 79.71 | 92.65 | 79.81 | 86.47 | 95.36
Question Redundancy
| rd- | rd | rd+ | rd- | rd | rd+ | rd- | rd | rd+ | rd- | rd | rd+ | rd- | rd | rd+
rd- | 51.42 | 52.54 | 53.51 | 83.94 | 80.37 | 66.28 | 88.64 | 88.82 | 90.33 | 92.95 | 92.94 | 92.67 | 95.66 | 95.72 | 95.43
rd | 50.39 | 53.28 | 54.78 | 82.77 | 82.36 | 70.36 | 88.45 | 89.10 | 91.45 | 91.19 | 92.78 | 92.14 | 94.87 | 95.72 | 95.43
rd+ | 46.14 | 52.30 | 71.47 | 78.48 | 84.05 | 90.42 | 87.94 | 88.34 | 91.16 | 91.38 | 91.96 | 92.80 | 94.88 | 95.47 | 95.72
Concept Distribution
| bal | slt | long | bal | slt | long | bal | slt | long | bal | slt | long | bal | slt | long
bal | 50.47 | 53.04 | 54.35 | 80.71 | 75.79 | 74.54 | 85.08 | 83.79 | 75.10 | 92.65 | 90.82 | 83.74 | 95.36 | 94.89 | 89.88
long | 49.43 | 54.75 | 62.96 | 79.06 | 80.29 | 90.66 | 85.33 | 89.42 | 91.10 | 92.73 | 93.38 | 92.53 | 96.31 | 96.32 | 95.25
head | 48.60 | 58.06 | 61.60 | 80.75 | 79.60 | 87.46 | 84.58 | 88.39 | 90.19 | 93.87 | 94.82 | 92.48 | 96.42 | 96.80 | 95.92
tail | 51.80 | 48.70 | 50.08 | 81.50 | 70.88 | 60.94 | 86.10 | 80.27 | 60.55 | 90.26 | 89.20 | 75.32 | 94.08 | 93.20 | 82.68
oppo | 49.06 | 48.93 | 46.68 | 79.13 | 68.37 | 56.98 | 85.07 | 77.86 | 55.14 | 91.22 | 88.65 | 71.32 | 95.76 | 94.09 | 79.74
Concept Compositionality
| co-0 | co-1 | co-2 | co-0 | co-1 | co-2 | co-0 | co-1 | co-2 | co-0 | co-1 | co-2 | co-0 | co-1 | co-2
co-0 | 53.28 | 57.00 | 56.1 | 83.36 | 77.03 | 82.43 | 89.1 | 82.52 | 83.77 | 92.80 | 90.11 | 91.59 | 95.76 | 94.02 | 95.12
co-1 | 52.41 | 60.57 | 56.67 | 79.46 | 82.45 | 83.93 | 78.89 | 87.18 | 84.2 | 78.74 | 89.99 | 90.67 | 87.12 | 94.53 | 94.78
co-2 | 52.96 | 57.37 | 60.53 | 80.03 | 77.41 | 87.24 | 78.40 | 81.55 | 88.84 | 77.85 | 89.28 | 92.23 | 87.19 | 93.49 | 95.61
Table 1: Accuracy of models trained and tested on different domains. Column
headings indicate training settings, while rows indicate the dataset variant
for testing. The best performance in each row (_i.e_. the best training
setting) is marked in bold and the best performance in each column (_i.e_. the
best testing setting) is underlined. A description of the different splits is
in Sec. 3.3 and analysis is in Sec. 5.2.
In this section, we train and test the five models (FiLM, mDETR, NSCL, NSVQA
and P-NSVQA) on different dataset variants, and diagnose their domain
robustness on each of the four domain shift factors. Please refer to Sec. 3.3
for a description of different variants. The validation accuracy is used for
analysis here and the results are shown in Tab. 1.
All the methods suffer from domain shifts. The results show that the best
performance mostly occurs in situations where the model is tested on the same
dataset variant as it is trained on, _i.e_. the bold or underlined numbers
fall mostly on the diagonals in Tab. 1.
We compare the domain robustness of the five models by measuring the relative
performance decrease when the testing data differs from the training data,
_i.e_. smaller performance drop on different testing domains means better
robustness. Based on this intuition, to make Tab. 1 easier to interpret, we
propose a measurement metric for domain robustness named Relative Degrade
(RD). We define Relative Degrade as the percentage of accuracy decrease when
the model is tested under a domain shift,
_i.e_. the accuracy drop divided by the in-domain accuracy. Specifically, if a
model gets accuracy $a$ under in-domain testing (_i.e_. testing with the same
dataset variant as training) and accuracy $b$ under out-of-domain testing
(_i.e_. testing with a different dataset variant from training), then
$RD=(a-b)/a$. Since we train each model on three data variants, the $RD$'s of
the three trained models are averaged to measure its domain robustness. (For
concept distributions, we compute relative degrade with a slight change: we
compute the accuracy drop from head to tail and the drop from long to oppo,
take their average, and divide by the accuracy on bal.)
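The metric itself is a one-liner; the example below plugs in FiLM's visual-complexity numbers from Tab. 1 (trained on easy, tested on easy versus hard).

```python
def relative_degrade(in_domain_acc, out_domain_acc):
    """Relative Degrade as defined in the text: RD = (a - b) / a."""
    return (in_domain_acc - out_domain_acc) / in_domain_acc

# FiLM, visual complexity, trained on easy (Tab. 1):
# in-domain a = 59.96 (tested on easy), out-of-domain b = 55.95 (tested on hard)
rd = relative_degrade(59.96, 55.95)
print(f"{100 * rd:.2f}%")  # 6.69%
```

The per-model values in Tab. 2 are then the average of such RDs over the three training variants.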
| Visual | Redund. | Dist. | Comp.
---|---|---|---|---
FiLM | 4.03 | 21.33 | 28.46 | 9.04
mDETR | 9.81 | 19.05 | 36.34 | 9.45
NSCL | 15.57 | 0.92 | 37.44 | 15.40
NSVQA | 17.48 | 1.72 | 20.92 | 11.44
Prob NSVQA | 12.88 | 0.84 | 13.72 | 7.00
Table 2: Relative Degrade under domain shifts, _i.e_. the percentage of
accuracy decrease when the model is tested on a domain that differs from the
training domain. Lower RD means better robustness.
Tab. 2 shows the Relative Degrade of the five models on the four factors. We
see that P-NSVQA outperforms other models by a significant margin on three of
the four factors, indicating that it has better overall domain robustness. In
the following, we take a closer look at the results on each of the factors
separately, to diagnose the influence of different model designs.
Question redundancy. Neural modular methods are much more robust to question
redundancy shifts than non-modular ones. The relative degrades for modular
methods are less than $2\%$, while non-modular ones degrade by around $20\%$.
Due to the step-by-step design of the reasoning in modular methods, each
reasoning step is independent of the others, so the models are less likely
to learn spurious correlations between questions and answers. Therefore the
modular methods are less vulnerable to changes in question/program length.
Visual complexity. Different from our findings on question redundancy, for
domain shifts in visual complexity, non-modular methods are more robust
compared to modular ones. As shown in Tab. 2, while FiLM and mDETR get less
than $10\%$ degrade, NSCL and (P-)NSVQA degrade by more than $12\%$. The
reason might be that the simple reasoning modules in modular methods can not
process the visual signals as well as the dense non-modular models.
Comparing P-NSVQA with NSVQA, we find that injecting probability into
deterministic symbolic reasoning greatly improves the robustness to visual
complexity (a $4.60\%$ decrease in RD). This suggests that some errors in visual
understanding can be corrected and recovered by taking into account the
uncertainty of visual parsing and combining the results of each reasoning step
with probability.
Concept distribution. While all four existing models suffer greatly (more
than $20\%$ RD) from domain shifts in concept distribution, we see that the
symbolic method NSVQA is better than the other three (by more than $7.5\%$).
With the disentangled reasoning and visual understanding components in NSVQA,
the distribution priors in the images and the programs/answers cannot
intertwine with each other, which prevents the model from relying heavily on
the priors. With uncertainty, we can further boost the robustness of NSVQA by
a large margin (from $21\%$ to $14\%$ RD).
Moreover, the head-tail results suggest that the overall accuracy, which is
commonly used to measure VQA performance, should be taken with caution. When
the testing split is imbalanced, a seemingly high accuracy is misleading
because the head concepts dominate the testing while the tail ones are not
well reflected. For example, for NSCL, although it gets high accuracy (91%) on
the long-tailed data, its performance is only 60.6% on the tail concepts. In
real-world datasets, the data are usually not well-balanced, which suggests
the value of synthetic testing.
Concept compositionality. Comparing the existing methods, we find that the
non-modular methods seem to be more robust than the modular methods NSCL and
NSVQA. However, with uncertainty, P-NSVQA improves the result of NSVQA and
even outperforms the non-modular methods. This suggests that modular methods
have large potential for better robustness as current models improve.
In summary, while non-modular methods are more robust to visual complexity
shifts, the modular symbolic methods (improved with uncertainty) are more
robust on the other three factors. By disentangling reasoning from visual
understanding, executing each reasoning step separately, and then merging the
results of the steps using probabilities based on uncertainty, our P-NSVQA
outperforms all the existing models in question redundancy, concept
distribution and compositionality. Therefore, we suggest that symbolic
reasoning with uncertainty leads to strong VQA models that are robust to
domain shifts.
### 5.3 More analysis and future work
Synthetic-to-real transfer. We provide an additional proof-of-concept study to
show that the findings drawn from Super-CLEVR dataset can transfer to real
datasets. In the following experiments, we show that our finding, namely that
neuro-symbolic methods (NSCL, NSVQA, P-NSVQA) are more robust to question
redundancy than mDETR, also holds true on the real GQA dataset [25]. More
precisely, we
progressively removed the redundant operations from the reasoning program in
the GQA testdev split, and then regenerated questions using a program-to-question
generator. Using the change in models' testing accuracy as the redundant
operations are removed, we can evaluate the models' robustness to
question redundancy. The results are shown in Tab. 3. (For the implementation
of NSCL on a real-world dataset, we use the model in [39], the version without
calibration. The model accuracies on the original unperturbed GQA testdev
split are as follows: mDETR (61.67%), NSCL (56.13%), NSVQA (39.58%), P-NSVQA
(39.66%).) We observe that the performance drop of mDETR is much larger than
that of the neuro-symbolic methods as the redundant information is
progressively removed, which indicates that symbolic methods have better
robustness to question redundancy than mDETR on the GQA dataset. This is
consistent with our findings on Super-CLEVR.
| 0% | 14% | 32% | 70% | 91% | 100%
---|---|---|---|---|---|---
mDETR | 0 | -4.82 | -8.46 | -13.16 | -13.88 | -14.56
NSCL | 0 | -0.14 | -0.34 | -1.09 | -1.71 | -2.59
NSVQA | 0 | -3.47 | -4.80 | -7.01 | -7.02 | -7.02
P-NSVQA | 0 | -1.93 | -3.15 | -5.73 | -5.91 | -5.78
Table 3: Accuracy drop on the GQA dataset when redundant information is
progressively removed.
Reasoning with part-object hierarchies. In addition to evaluating domain
generalization, Super-CLEVR can be extended for broader purposes, _e.g_. part-
based reasoning. We can ask questions like “what is the color of the front
wheel of the bike?”, “what is the color of the vehicle that has a yellow
wheel”, etc. Those questions require the model to correctly understand the
part-object hierarchy, which is an ability that current VQA models lack.
Limitations. The main limitations of our work lie in the synthetic nature of
our dataset. Future efforts can be made in collecting better controlled and
balanced real datasets for model diagnosis. We emphasize that the purpose of
the dataset is for model diagnosis and that models should also be tested on
real data.
## 6 Conclusion
We diagnose domain shifts in visual reasoning using a proposed virtual
benchmark, Super-CLEVR, where distinct factors can be independently studied
with controlled data generation. We evaluate four existing methods and show
that all of them struggle with domain shifts, highlighting the importance of
out-of-domain testing. Among the evaluated methods, neural modular methods are
more robust towards question redundancy. In particular, NSVQA with
disentangled perception and reasoning shows better robustness towards
distribution and compositionality shifts. We further propose P-NSVQA, which
improves NSVQA with uncertainty in the reasoning modules. We show that P-NSVQA
outperforms all the existing methods in both in-domain testing and out-of-
domain testing. With detailed analysis, our study suggests that disentangling
reasoning and perception, combined with probabilistic uncertainty, forms a
strong VQA model that is more robust to domain shifts. We hope our analysis
facilitates a better understanding of the strengths and weaknesses of VQA
models and that, more broadly, future work explores using the Super-CLEVR
benchmark for other tasks like part-based reasoning.
### Acknowledgements
We would like to thank Nils Holzenberger, Kate Sanders, Chenxi Liu, Zihao
Xiao, Qing Liu, Reno Kriz, and David Etter for their helpful comments and
suggestions, as well as the anonymous reviewers. Zhuowan Li is supported by
ONR N00014-21-1-2812 and grant IAA80052272 from the Institute for Assured
Autonomy at JHU. Elias Stengel-Eskin is supported by an NSF GRFP. A.
Kortylewski acknowledges support via his Emmy Noether Research Group funded by
the German Science Foundation (DFG) under Grant No. 468670075.
## References
* [1] Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. Analyzing the behavior of visual question answering models. arXiv preprint arXiv:1606.07356, 2016.
* [2] Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. Don’t just assume; look and answer: Overcoming priors for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4971–4980, 2018.
* [3] Arjun Akula, Soravit Changpinyo, Boqing Gong, Piyush Sharma, Song-Chun Zhu, and Radu Soricut. Crossvqa: scalably generating benchmarks for systematically testing vqa generalization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2148–2166, 2021.
* [4] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077–6086, 2018.
* [5] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433, 2015.
* [6] Dzmitry Bahdanau, Harm de Vries, Timothy J O’Donnell, Shikhar Murty, Philippe Beaudoin, Yoshua Bengio, and Aaron Courville. Closure: Assessing systematic generalization of clevr models. arXiv preprint arXiv:1912.05783, 2019.
* [7] Remi Cadene, Corentin Dancette, Matthieu Cord, Devi Parikh, et al. Rubi: Reducing unimodal biases for visual question answering. Advances in neural information processing systems, 32, 2019.
* [8] Wei-Lun Chao, Hexiang Hu, and Fei Sha. Cross-dataset adaptation for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5716–5725, 2018.
* [9] Long Chen, Xin Yan, Jun Xiao, Hanwang Zhang, Shiliang Pu, and Yueting Zhuang. Counterfactual samples synthesizing for robust visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10800–10809, 2020.
* [10] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
* [11] Corentin Dancette, Rémi Cadène, Damien Teney, and Matthieu Cord. Beyond question-based biases: Assessing multimodal shortcut learning in visual question answering. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 1554–1563, 2021.
* [12] William Ford and David Olson. The elaboration of the noun phrase in children’s description of objects. Journal of Experimental Child Psychology, 19(3):371–382, 1975.
* [13] Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847, 2016.
* [14] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International conference on machine learning, pages 1180–1189. PMLR, 2015.
  * [15] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096–2030, 2016.
* [16] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
* [17] Vipul Gupta, Zhuowan Li, Adam Kortylewski, Chenyu Zhang, Yingwei Li, and Alan Yuille. Swapmix: Diagnosing and regularizing the over-reliance on visual context in visual question answering. arXiv preprint arXiv:2204.02285, 2022.
* [18] Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3608–3617, 2018.
* [19] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017.
* [20] Ruifei He, Jihan Yang, and Xiaojuan Qi. Re-distributing biased pseudo labels for semi-supervised semantic segmentation: A baseline investigation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6930–6940, 2021.
* [21] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
* [22] Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei Efros, and Trevor Darrell. Cycada: Cycle-consistent adversarial domain adaptation. In International conference on machine learning, pages 1989–1998. PMLR, 2018.
* [23] Yining Hong, Li Yi, Josh Tenenbaum, Antonio Torralba, and Chuang Gan. Ptr: A benchmark for part-based conceptual, relational, and physical reasoning. Advances in Neural Information Processing Systems, 34, 2021.
  * [24] Drew A Hudson and Christopher D Manning. Compositional attention networks for machine reasoning. In International Conference on Learning Representations, 2018.
* [25] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700–6709, 2019.
* [26] Jingjing Jiang, Ziyi Liu, Yifan Liu, Zhixiong Nan, and Nanning Zheng. X-ggm: Graph generative modeling for out-of-distribution generalization in visual question answering. In Proceedings of the 29th ACM International Conference on Multimedia, pages 199–208, 2021.
* [27] Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2901–2910, 2017.
* [28] Lie Ju, Xin Wang, Lin Wang, Tongliang Liu, Xin Zhao, Tom Drummond, Dwarikanath Mahapatra, and Zongyuan Ge. Relational subsets knowledge distillation for long-tailed retinal diseases recognition. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 3–12. Springer, 2021.
* [29] Aishwarya Kamath, Mannat Singh, Yann LeCun, Ishan Misra, Gabriel Synnaeve, and Nicolas Carion. Mdetr–modulated detection for end-to-end multi-modal understanding. arXiv preprint arXiv:2104.12763, 2021.
* [30] Corentin Kervadec, Grigory Antipov, Moez Baccouche, and Christian Wolf. Roses are red, violets are blue… but should vqa expect them to? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2776–2785, 2021.
* [31] Corentin Kervadec, Theo Jaunet, Grigory Antipov, Moez Baccouche, Romain Vuillemot, and Christian Wolf. How transferable are reasoning patterns in vqa? 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4205–4214, 2021.
* [32] Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. Bilinear attention networks. Advances in Neural Information Processing Systems, 31, 2018.
* [33] Ruud Koolen, Martijn Goudbeek, and Emiel Krahmer. Effects of scene variation on referential overspecification. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 33, 2011.
* [34] Satwik Kottur, José MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. Clevr-dialog: A diagnostic dataset for multi-round reasoning in visual dialog. arXiv preprint arXiv:1903.03166, 2019.
* [35] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32–73, 2017.
* [36] Da Li, Jianshu Zhang, Yongxin Yang, Cong Liu, Yi-Zhe Song, and Timothy M Hospedales. Episodic training for domain generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1446–1455, 2019.
* [37] Linjie Li, Zhe Gan, Yu Cheng, and Jingjing Liu. Relation-aware graph attention network for visual question answering. In Proceedings of the IEEE/CVF international conference on computer vision, pages 10313–10322, 2019.
* [38] Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. Oscar: Object-semantics aligned pre-training for vision-language tasks. ECCV 2020, 2020.
* [39] Zhuowan Li, Elias Stengel-Eskin, Yixiao Zhang, Cihang Xie, Quan Hung Tran, Benjamin Van Durme, and Alan Yuille. Calibrating concepts and operations: Towards symbolic reasoning on real images. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 14910–14919, October 2021.
* [40] Qing Liu, Adam Kortylewski, Zhishuai Zhang, Zizhang Li, Mengqi Guo, Qihao Liu, Xiaoding Yuan, Jiteng Mu, Weichao Qiu, and Alan Yuille. Learning part segmentation through unsupervised domain adaptation from synthetic vehicles. In CVPR, 2022.
* [41] Runtao Liu, Chenxi Liu, Yutong Bai, and Alan L Yuille. Clevr-ref+: Diagnosing visual reasoning with referring expressions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4185–4194, 2019.
* [42] Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. Large-scale long-tailed recognition in an open world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2537–2546, 2019.
* [43] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. Advances in neural information processing systems, 31, 2018.
* [44] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in neural information processing systems, 32, 2019.
* [45] Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B. Tenenbaum, and Jiajun Wu. The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision. In International Conference on Learning Representations, 2019.
* [46] Margaret Mitchell, Kees Van Deemter, and Ehud Reiter. Generating expressions that refer to visible objects. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1174–1184, 2013.
* [47] Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xiansheng Hua, and Ji-Rong Wen. Counterfactual vqa: A cause-effect look at language bias. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12695–12705, 2021.
* [48] Yulei Niu, Kaihua Tang, Hanwang Zhang, Zhiwu Lu, Xian-Sheng Hua, and Ji-Rong Wen. Counterfactual vqa: A cause-effect look at language bias. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12700–12710, June 2021.
  * [49] Thomas Pechmann. Incremental speech production and referential overspecification. 1989.
* [50] Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
* [51] Fengchun Qiao, Long Zhao, and Xi Peng. Learning to learn single domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12556–12565, 2020.
* [52] Mengye Ren, Ryan Kiros, and Richard Zemel. Exploring models and data for image question answering. Advances in neural information processing systems, 28, 2015.
* [53] Leonard Salewski, A Koepke, Hendrik Lensch, and Zeynep Akata. Clevr-x: A visual reasoning dataset for natural language explanations. In International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers, pages 69–88. Springer, 2022.
* [54] Susan Sonnenschein. The development of referential communication skills: Some situations in which speakers give redundant messages. Journal of Psycholinguistic Research, 14(5):489–508, 1985.
* [55] Hao Tan and Mohit Bansal. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490, 2019.
* [56] Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The inaturalist species classification and detection dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8769–8778, 2018.
* [57] Yiming Xu, Lin Chen, Zhongwei Cheng, Lixin Duan, and Jiebo Luo. Open-ended visual question answering by multi-modal domain adaptation. arXiv preprint arXiv:1911.04058, 2019.
* [58] Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, and Joshua B Tenenbaum. Clevrer: Collision events for video representation and reasoning. arXiv preprint arXiv:1910.01442, 2019.
  * [59] Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Joshua B Tenenbaum. Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding. In Advances in Neural Information Processing Systems (NIPS), 2018.
* [60] Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6281–6290, 2019.
* [61] Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. Raven: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5317–5327, 2019.
* [62] Mingda Zhang, Tristan Maidment, Ahmad Diab, Adriana Kovashka, and Rebecca Hwa. Domain-robust vqa with diverse datasets and methods but no target labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7046–7056, 2021.
* [63] Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Making visual representations matter in vision-language models. CVPR 2021, 2021.
* [64] Xiao Zhang, Zhiyuan Fang, Yandong Wen, Zhifeng Li, and Yu Qiao. Range loss for deep face recognition with long-tailed training data. In Proceedings of the IEEE International Conference on Computer Vision, pages 5409–5418, 2017.
* [65] Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. Visual7w: Grounded question answering in images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4995–5004, 2016.
## Appendix A Dataset statistics
Fig. 4 shows the distribution of question types in the Super-CLEVR dataset.
The question type is determined by the type of the last operation in the
question program.
Figure 4: Distribution of question types in Super-CLEVR.
## Appendix B List of objects
Super-CLEVR contains 21 objects from 5 categories: airplane, bicycle, bus, car
and motorcycle. They are shown in Fig. 5 and Tab. 4.
category | objects
---|---
airplane | airliner, biplane, jet, fighter
bicycle | utility bike, tandem bike, road bike, mountain bike
bus | articulated bus, double bus, regular bus, school bus
car | truck, suv, minivan, sedan, wagon
motorcycle | chopper, scooter, cruiser, dirtbike
Table 4: The Super-CLEVR dataset contains 21 objects from 5 categories.
Figure 5: The 21 objects belonging to 5 categories in the Super-CLEVR dataset.
## Appendix C Dataset controlling
Fig. 6 shows the concept distribution for dataset variants bal, slt and long,
and for variants head, tail and oppo used for testing. Fig. 7 shows the
concept co-distribution matrix $M$ for controlling the concept
compositionality (for variants co-0, co-1 and co-2). The descriptions of the
variants are in Sec. 3.3.
Figure 6: Concept distribution for dataset variants bal, slt, long, head, tail and oppo.
Figure 7: Concept co-distribution matrix $M$ for dataset variants co-0, co-1 and co-2.
## Appendix D Definition of Relative Degrade
Here we describe Relative Degrade for each domain shift factor. We use
$A^{i}_{j}$ to denote the accuracy of the model trained on data variant $i$
and tested on data variant $j$.
Visual complexity:
$RD=Avg(\frac{A^{easy}_{easy}-A^{easy}_{mid}}{A^{easy}_{easy}},\frac{A^{easy}_{easy}-A^{easy}_{hard}}{A^{easy}_{easy}},\frac{A^{mid}_{mid}-A^{mid}_{easy}}{A^{mid}_{mid}},\frac{A^{mid}_{mid}-A^{mid}_{hard}}{A^{mid}_{mid}},\frac{A^{hard}_{hard}-A^{hard}_{easy}}{A^{hard}_{hard}},\frac{A^{hard}_{hard}-A^{hard}_{mid}}{A^{hard}_{hard}})$
Question redundancy:
$RD=Avg(\frac{A^{rd\text{-}}_{rd\text{-}}-A^{rd\text{-}}_{rd}}{A^{rd\text{-}}_{rd\text{-}}},\frac{A^{rd\text{-}}_{rd\text{-}}-A^{rd\text{-}}_{rd\text{+}}}{A^{rd\text{-}}_{rd\text{-}}},\frac{A^{rd}_{rd}-A^{rd}_{rd\text{-}}}{A^{rd}_{rd}},\frac{A^{rd}_{rd}-A^{rd}_{rd\text{+}}}{A^{rd}_{rd}},\frac{A^{rd\text{+}}_{rd\text{+}}-A^{rd\text{+}}_{rd\text{-}}}{A^{rd\text{+}}_{rd\text{+}}},\frac{A^{rd\text{+}}_{rd\text{+}}-A^{rd\text{+}}_{rd}}{A^{rd\text{+}}_{rd\text{+}}})$
Concept distribution:
$RD=\frac{1}{3}\sum_{k\in
S}\frac{(A^{k}_{head}-A^{k}_{tail})+(A^{k}_{long}-A^{k}_{oppo})}{2\cdot
A^{k}_{k}},S=\\{bal,slt,long\\}$
Concept compositionality:
$RD=Avg(\frac{A^{co\text{-}0}_{co\text{-}0}-A^{co\text{-}0}_{co\text{-}1}}{A^{co\text{-}0}_{co\text{-}0}},\frac{A^{co\text{-}0}_{co\text{-}0}-A^{co\text{-}0}_{co\text{-}2}}{A^{co\text{-}0}_{co\text{-}0}},\frac{A^{co\text{-}1}_{co\text{-}1}-A^{co\text{-}1}_{co\text{-}0}}{A^{co\text{-}1}_{co\text{-}1}},\frac{A^{co\text{-}1}_{co\text{-}1}-A^{co\text{-}1}_{co\text{-}2}}{A^{co\text{-}1}_{co\text{-}1}},\frac{A^{co\text{-}2}_{co\text{-}2}-A^{co\text{-}2}_{co\text{-}0}}{A^{co\text{-}2}_{co\text{-}2}},\frac{A^{co\text{-}2}_{co\text{-}2}-A^{co\text{-}2}_{co\text{-}1}}{A^{co\text{-}2}_{co\text{-}2}})$
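The three symmetric averages above (visual complexity, question redundancy, and concept compositionality all share the same form) can be computed from an accuracy table in a few lines. A minimal sketch; the accuracy values below are hypothetical, not taken from the paper:

```python
import numpy as np

def relative_degrade(acc, variants):
    """Average relative accuracy drop when the test variant differs from the
    training variant. acc[i][j] = accuracy of model trained on i, tested on j."""
    drops = []
    for i in variants:
        for j in variants:
            if i != j:
                drops.append((acc[i][i] - acc[i][j]) / acc[i][i])
    return float(np.mean(drops))

# Hypothetical accuracies for the visual-complexity variants.
acc = {
    "easy": {"easy": 0.90, "mid": 0.85, "hard": 0.80},
    "mid":  {"easy": 0.88, "mid": 0.89, "hard": 0.84},
    "hard": {"easy": 0.86, "mid": 0.85, "hard": 0.87},
}
rd = relative_degrade(acc, ["easy", "mid", "hard"])
```

The concept-distribution case uses the separate asymmetric formula above and would need its own expression.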
## Appendix E More details about P-NSVQA
Given an image containing $n$ objects, we maintain a probability vector
$\mathbf{p}=[p^{1},p^{2},\ldots,p^{n}]$, where $p^{k}$ is the probability
that object $k$ is selected. We update $\mathbf{p}$ while executing the
reasoning operations step by step. In the following, we describe how
$\mathbf{p}$ is computed for each kind of operation.
* •
$scene$
Initialize all the values in $\mathbf{p}$ to 1.
* •
$filter_{identifier}[attribute]$ (_e.g_. $filter_{color}[red]$)
For object $k$,
$p^{k}=p^{k}*P^{k}_{attribute}$
Here $P^{k}_{attribute}$ is the probability of object $k$ having the
$attribute$, which is predicted by the visual scene parsing model.
* •
$relate\\_{spatial}$ (including $relate\\_{behind}$, $relate\\_{front}$,
$relate\\_{right}$, $relate\\_{left}$)
The output of the relate operation is the probability of each object being
on the $spatial$ side of the given object. For example, $relate\\_{left}(i)$
computes the probability of each object being on the left side of the given
object $i$.
$p^{k}_{front}=\frac{1}{1+e^{-b[(y_{k}-y_{i})+a]}}$
$p^{k}_{behind}=\frac{1}{1+e^{-b[(y_{i}-y_{k})+a]}}$
$p^{k}_{right}=\frac{1}{1+e^{-b[(x_{k}-x_{i})+a]}}$
$p^{k}_{left}=\frac{1}{1+e^{-b[(x_{i}-x_{k})+a]}}$
Here $i$ is the input object and $(x_{i},y_{i})$ is its center. $a$ and
$b$ are hyperparameters; in our experiments, we set $a=20$ and $b=0.02$.
* •
$same\\_{color}$, $same\\_{shape}$, $same\\_{size}$, $same\\_{material}$
The same operation returns the probabilities of each object having the same
attribute as the given object $i$. For example, for object $k$ and attribute
color,
$p^{k}=cosine\\_similarity(P^{k}_{color},P^{i}_{color})$
* •
$intersect$, $union$
Given two probability vectors $\mathbf{p_{1}},\mathbf{p_{2}}$, we calculate
their intersection or union:
$Intersection:\mathbf{p}=\mathbf{p_{1}}\odot\mathbf{p_{2}}$
$Union:\mathbf{p}=1-(1-\mathbf{p_{1}})\odot(1-\mathbf{p_{2}})$
Here $\odot$ is the pointwise product.
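The update rules above can be sketched as a small probabilistic executor. This is a minimal illustration, not the authors' implementation; the object count, attribute probabilities, and centers are made up:

```python
import numpy as np

def op_scene(n):
    """scene: start with every object selected with probability 1."""
    return np.ones(n)

def op_filter(p, attr_probs):
    """filter: multiply by the parser's per-object attribute probabilities."""
    return p * attr_probs

def op_relate_left(centers, i, a=20.0, b=0.02):
    """relate_left(i): soft indicator that each object lies left of object i."""
    x = centers[:, 0]
    return 1.0 / (1.0 + np.exp(-b * ((x[i] - x) + a)))

def op_intersect(p1, p2):
    return p1 * p2  # pointwise product

def op_union(p1, p2):
    return 1.0 - (1.0 - p1) * (1.0 - p2)

# Toy scene with 3 objects.
p = op_scene(3)
red_probs = np.array([0.9, 0.2, 0.6])   # P(object k is red), from the parser
p = op_filter(p, red_probs)             # filter_color[red]

centers = np.array([[10.0, 0.0], [50.0, 0.0], [90.0, 0.0]])
left_of_2 = op_relate_left(centers, 2)  # soft "left of object 2" per object
```

Objects farther to the left of the reference receive probabilities closer to 1, and the `intersect`/`union` rules compose these soft selections exactly as in the formulas above.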
# Reflection Equivariant Quantum Neural Networks
for Enhanced Image Classification
Maxwell West, School of Physics, The University of Melbourne, Parkville, 3010, VIC, Australia
Martin Sevior, School of Physics, The University of Melbourne, Parkville, 3010, VIC, Australia
Muhammad Usman, School of Physics, The University of Melbourne, Parkville, 3010, VIC, Australia; Data61, CSIRO, Clayton, 3168, VIC, Australia
###### Abstract
Machine learning is among the most widely anticipated use cases for near-term
quantum computers; however, there remain significant theoretical and
implementation challenges impeding its scale-up. In particular, there is an
emerging body of work which suggests that generic, data agnostic quantum
machine learning (QML) architectures may suffer from severe trainability
issues, with the gradient of typical variational parameters vanishing
exponentially in the number of qubits. Additionally, the high expressibility
of QML models can lead to overfitting on training data and poor generalisation
performance. A promising strategy to combat both of these difficulties is to
construct models which explicitly respect the symmetries inherent in their
data, so-called geometric quantum machine learning (GQML). In this work, we
utilise the techniques of GQML for the task of image classification, building
new QML models which are equivariant with respect to reflections of the
images. We find that these networks are capable of consistently and
significantly outperforming generic ansatze on complicated real-world image
datasets, bringing high-resolution image classification via quantum computers
closer to reality. Our work highlights a potential pathway for the future
development and implementation of powerful QML models which directly exploit
the symmetries of data.
## 1. Introduction
The significant interest in the possibility of realising superior machine
learning algorithms on quantum computers has led over the last few years to
intensive study of the capabilities and limitations of quantum machine
learning (QML) models [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. Although remarkable
improvements in the capability of QML methods over their classical
counterparts have been reported for some specific use cases [11, 12, 13], the
extent to which they can be expected to perform well on general datasets
remains an open question. In fact, recent work has suggested that generic,
commonly employed QML architectures such as variational quantum circuits will
face significant limitations due to their high expressibility and potentially
low trainability resulting from barren plateaus in their training landscapes
[14, 15, 16, 17, 18, 19]. Efforts to address this, while still maintaining
sufficiently complicated circuits to allow for the possibility of quantum
advantage, are ongoing [18, 19, 20, 21, 22], but the ultimate resolution of
the problem of barren plateaus remains unclear, and a key research direction
of QML. Separately, in the case of image classification, the application of
QML is still limited to relatively simple and low resolution datasets [23] by
the current hardware limitations, with the development of high performance QML
models for complex image datasets another important open problem. In this work
we utilise techniques developed to tackle poor generalisation and barren
plateaus to design new QML models which explicitly exploit the reflection
symmetry inherent in many image datasets (see Figure 1). We establish that
these reflection equivariant models can provide superior performance on
complex images at the forefront of the current capabilities of QML, both
obtaining higher accuracies and possessing parameter gradients which vanish
more slowly than those of a generic counterpart.
Symmetry has long played an important role in physics, facilitating both the
discovery and understanding of the laws of nature [24]. More recently, the
symmetries of data have been recognised to play an important role in machine
learning, informing the design of some of the most effective classical
classification algorithms [25]. Convolutional neural networks (CNNs), for
example, which have been famously successful in the classification of image
data, begin by applying a set of so-called convolutional filters, which act on
all regions of the image identically – they have translation equivariance [26,
27]. The loss of expressibility that this necessarily entails has been found
to be more than compensated for by their increased trainability, with CNNs
having now dominated image classification for years [26, 28]. More generally,
in recent years the new field of geometric deep learning (GDL) [29, 30, 31,
25] has begun to explore the role of symmetry respecting neural network
architectures beyond such Euclidean translational equivariance, studying for
example symmetries of data which lives on graphs [32] or Riemannian manifolds
[33]. The observed success of these strategies naturally raises the question
of whether such techniques could also help to build QML models which benefit
from utilising the symmetries of their data.
Figure 1: Reflection Equivariant Quantum Neural Networks. (a) We consider
image data whose labels are left invariant by the action of a group
$\mathcal{G}$. In the case shown (from the CIFAR-10 dataset [34]), the label
“horse” applies to the image both before and after a reflection about the
central vertical axis. The action of the symmetry group on the encoded quantum
states of the images is determined by a unitary representation $R$ of
$\mathcal{G}$ satisfying
$\ket{\psi(g(\boldsymbol{x}))}=R_{g}\ket{\psi(\boldsymbol{x})}\ \forall
g\in\mathcal{G},\boldsymbol{x}\in\mathcal{X}$, i.e. the diagram commutes. (b)
A schematic depiction of the generic quantum variational classifier (QVC) that
we employ for image classification. The pale green subcircuit $\mathcal{E}$
implements amplitude encoding. The variational component of the circuit
consists of the unit in the dotted box repeated 100 times (with different
parameters in each layer). Finally, $\sigma_{z}$ measurements are made on each
qubit to determine the label prediction. (c) A modified circuit which
possesses reflection equivariance. The encoding subcircuit
$\tilde{\mathcal{E}}$ now includes an additional change of basis following the
amplitude encoding (see the text and Figure 3). In the variational section,
which again consists of the unit in the dotted box repeated 100 times, gates
which commute with the symmetry operations are chosen. Finally, $\sigma_{z}$
measurements are again made to determine the label prediction, although this
time products of the measurement outcomes on neighbouring qubits are used in
order to retain equivariance (see the text). While (b) and (c) depict 3-qubit
cartoons of the two models, the actual circuits employ either 10 or 12 qubits
depending on the dataset due to the resolution of the images:
1$\times$28$\times$28 for MNIST, and 3$\times$32$\times$32 for CIFAR-10 and
CelebA (see Figure 2).
Indeed, promising recent work has incorporated ideas from GDL into QML,
resulting in the emerging subject of geometric quantum machine learning (GQML)
[35, 36, 37, 38, 39, 40, 41, 42]. For example, this approach has been used to
construct QML models which, classifying data that enjoys permutation symmetry,
provably avoid barren plateaus [38]. In this work we turn the techniques of
GQML to the problem of image classification, an important example of a problem
for which deep learning frameworks drastically outperform all other methods.
The translational equivariance of CNNs has already been imported to the
quantum setting, with quantum convolutional neural networks (QCNNs) offering a
pathway to efficient image classification on quantum computers [8]. Here we
consider the alternative symmetry of reflections, noting that the labels
assigned to an image will often be independent of reflections of that image
about various axes (see Figure 1(a)), as for example in most common object
identification problems. We construct reflection equivariant quantum neural
networks (REQNNs, see Figure 1(c)) and show that they consistently outperform
a generic model (see Figure 1(b)) when benchmarked across three standard image
datasets (CIFAR-10 [34], MNIST [26], and Celeb-A [43] – see Figure 2 for
examples of images from each dataset), despite having access to fewer
trainable parameters. These results provide concrete evidence supporting the
emerging philosophy that sacrificing generality and expressibility in favour
of targeting a smaller but more meaningful fraction of the model space is a
promising approach forward for QML. Moreover, by demonstrating enhanced
performance for image classification through the use of reflection equivariant
networks, separate to the previous use of a translation equivariant
convolutional structure in QCNNs [8], our results encourage the future
development of QML models which exploit as much of the available symmetry
information of their data as possible, including extensions to additional
symmetries and the simultaneous consideration of multiple symmetries. The
practical realisation of tailored QML models such as these will bring the
possibility of successfully applying quantum computing to the many domains of
ML, both image-based and otherwise, closer to reality.
## 2. Geometric Quantum Machine Learning
We begin by briefly summarising the aspects of GQML relevant to our
construction of REQNNs, introducing the notation and formalism required to
discuss equivariance and symmetry operations on data encoded into quantum
states. Interested readers can find further details of GQML in Refs. [36, 37].
We consider the classification of (image) data $\boldsymbol{x}\in\mathcal{X}$,
with associated labels $y(\boldsymbol{x})\in\mathcal{Y}$. Our QML models will
follow a standard three-step procedure: a data encoding circuit which maps the
classical image data $\boldsymbol{x}$ to a quantum state,
$\boldsymbol{x}\mapsto\ket{\psi(\boldsymbol{x})}$, followed by a variational
circuit $U_{\theta}$, followed by measurements of a set of operators
$\\{M_{j}\\}_{j=1}^{n_{\mathrm{classes}}}$ to determine the class label. For a
given set of parameters $\theta\in\Theta$ the prediction
$\hat{y}_{\theta}(\boldsymbol{x})$ of the model on an input $\boldsymbol{x}$
is given by
$\hat{y}_{\theta}(\boldsymbol{x})=\operatorname*{argmax}_{j}\bra{\psi(\boldsymbol{x})}\mathcal{U}_{\theta}^{\dagger}M_{j}\mathcal{U}_{\theta}\ket{\psi(\boldsymbol{x})}$
(1)
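Once the expectation values of the measured operators have been estimated, the prediction rule of Equation 1 reduces to an argmax over classes. A minimal sketch (the function name is ours):

```python
import numpy as np

def predict(expectations):
    """Return the predicted class: the index j of the measured operator M_j
    with the largest expectation value, as in Equation 1."""
    return int(np.argmax(expectations))

# Hypothetical expectation values <psi| U† M_j U |psi> for three classes.
print(predict([-0.2, 0.7, 0.1]))  # -> 1
```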
We also introduce a symmetry group $\mathcal{G}$ which we will identify with
its action on $\mathcal{X}$, i.e. writing
$\mathcal{G}:\mathcal{X}\to\mathcal{X}$, with $g(\boldsymbol{x})$ for
$g\in\mathcal{G}$ the image obtained by performing the symmetry transformation
$g$ on $\boldsymbol{x}$. We emphasise that the symmetries are expected to be
respected only at the level of the labels of the data, i.e.
$y(g(\boldsymbol{x}))=y(\boldsymbol{x})\ \forall\boldsymbol{x}\in\mathcal{X},\
g\in\mathcal{G}$, but perhaps $g(\boldsymbol{x})\neq\boldsymbol{x}$. In this
work we will focus on the group which enacts reflections about the central
vertical axis of the images, so that $\mathcal{G}\cong\mathbb{Z}_{2}$. The
group will act on the Hilbert space $\mathcal{H}$ of the quantum computer via
a unitary representation $R:\mathcal{G}\to\mathrm{Aut}(\mathcal{H})$ which
satisfies $R(g)\ket{\psi(\boldsymbol{x})}=\ket{\psi(g(\boldsymbol{x}))}$.
Henceforth we write $R_{g}\equiv R(g)$. We wish for the predictions of our QNN
to be $\mathcal{G}$-invariant, i.e.
$\hat{y}_{\theta}(\boldsymbol{x})=\hat{y}_{\theta}(g(\boldsymbol{x}))\qquad\forall
g\in\mathcal{G},\boldsymbol{x}\in\mathcal{X},\theta\in\Theta$ (2)
From Equation 1 we have that
$\displaystyle\hat{y}_{\theta}(g(\boldsymbol{x}))$
$\displaystyle=\operatorname*{argmax}_{j}\bra{\psi(g(\boldsymbol{x}))}\mathcal{U}_{\theta}^{\dagger}M_{j}\mathcal{U}_{\theta}\ket{\psi(g(\boldsymbol{x}))}$
(3)
$\displaystyle=\operatorname*{argmax}_{j}\bra{\psi(\boldsymbol{x})}R_{g}^{\dagger}\mathcal{U}_{\theta}^{\dagger}M_{j}\mathcal{U}_{\theta}R_{g}\ket{\psi(\boldsymbol{x})}$
(4)
and therefore the condition of Equation 2 will be satisfied if
$\left[R_{g},\
\mathcal{U}_{\theta}^{\dagger}M_{j}\mathcal{U}_{\theta}\right]=0\qquad\forall
g\in\mathcal{G},\boldsymbol{x}\in\mathcal{X},\theta\in\Theta$ (5)
We refer to QNNs satisfying this condition as reflection equivariant.
3. Results
Our first task in building REQNNs is to establish the unitary representation
of the symmetry group $\mathcal{G}$. As in our case
$\mathcal{G}\cong\mathbb{Z}_{2}$, there is only one nontrivial symmetry
operator $R_{g}$, namely the one which maps a state to the state encoding the
reflection of the image encoded by the original state. As we have
$R_{g}\ket{\psi(\boldsymbol{x})}=\ket{\psi(g(\boldsymbol{x}))}$ for all images
$\boldsymbol{x}$, $R_{g}$ will be determined by the form of the data encoding
map, to which we now turn. Due to our desire to classify high dimensional
image data using only the relatively small number of qubits available to
classical simulators we need to employ a method of encoding which is highly
efficient in the number of qubits required. For this reason we choose
amplitude encoding, i.e., if $\boldsymbol{x}$ is a vector containing the pixel
values of an image then we construct the state
$\boldsymbol{x}\mapsto\ket{\psi(\boldsymbol{x})}=\sum_{i}x_{i}\ket{i}$
As there are $2^{n}$ amplitudes available for an $n$ qubit state, only
$\lceil\log_{2}(C\times L\times W)\rceil$ qubits are needed, where $C$
is the number of channels of the image, $L$ is the length, and $W$ the width.
For MNIST, we have $(C,L,W)=(1,28,28)$ and therefore require 10 qubits, and
for CIFAR-10 and CelebA $(C,L,W)=(3,32,32)$, requiring 12. In both cases we
append and prepend zeros equally to obtain an input vector whose length is a
power of two, as required for amplitude encoding.
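The qubit counts above follow directly from this padding-and-normalisation step. A minimal sketch of the preprocessing, assuming the even front/back zero split described in the text (the function name is ours):

```python
import math
import numpy as np

def amplitude_encode(pixels):
    """Pad a flattened pixel vector with zeros (split evenly between front
    and back) up to the next power of two, then normalise; the result gives
    the amplitudes of an n-qubit state."""
    x = np.asarray(pixels, dtype=float).ravel()
    n_qubits = math.ceil(math.log2(x.size))
    pad = 2 ** n_qubits - x.size
    x = np.pad(x, (pad // 2, pad - pad // 2))
    return n_qubits, x / np.linalg.norm(x)

# MNIST: 1 x 28 x 28 = 784 pixels -> 10 qubits; CIFAR-10/CelebA: 3 x 32 x 32 -> 12.
print(amplitude_encode(np.ones(1 * 28 * 28))[0])   # -> 10
print(amplitude_encode(np.ones(3 * 32 * 32))[0])   # -> 12
```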
Figure 2: Image datasets. We consider three standard image datasets from the
ML literature: MNIST [26], CIFAR-10 [34] and CelebA [43]. The MNIST dataset
consists of handwritten digits, CIFAR-10 of various objects and animals, and
CelebA of human faces. In the case of MNIST we restrict our attention to the
digits “0”, “1” and “8”. Having done this, all of the data we consider has
labels which are unchanged under a horizontal reflection and is therefore
suitable for classification by our REQNNs. Figure 3: Amplitude Encoding.
Encoding for an example $4\times 4$ image. The pixels are numbered by the
index of the basis state into the amplitude of which they are encoded. (a) The
standard case. For example, the pixel value of the pixel in the top right
corner is encoded into the amplitude of $\ket{3}\equiv\ket{0011}$. (b) Here we
permute the order of the encoding. For example, the pixel value of the pixel
in the top right corner is now encoded into the amplitude of
$\ket{15}\equiv\ket{1111}$. In this case a horizontal reflection is
represented by $X^{\otimes n}$. Figure 4: Quantum neural network
performances. We compare the performance of the generic and reflection
equivariant QNNs across a range of image datasets. As the different datasets
have varying resolutions, the number of qubits needed to implement the models
also changes (see the Results section), providing another axis along which to
contrast the performance of the two classes of models. We find that the
reflection equivariant QNNs learn more quickly than their generic counterparts
consistently across different datasets, number of classes, and number of
qubits used to implement the QNNs. The plotted accuracies refer to 500 test
samples, calculated at various points throughout the training process. (a, b)
Two and four class classification using the CIFAR-10 dataset, respectively.
(c) Three class classification with the MNIST dataset. Although this dataset
is quite simple, and QNNs have previously been used to achieve high accuracy
on all ten classes [13], we restrict here to the digits “0”, “1” and “8” as
they are the only ones which (approximately) respect the reflection symmetry.
Both models achieve high test accuracy ($>$96%), but the reflection
equivariant model learns more quickly. (d) The CelebA dataset consists of
images of human faces. We consider the classification task of determining the
gender of the imaged person, again finding that the reflection equivariant
model significantly outperforms its generic counterpart.
Unfortunately, standard amplitude encoding will render $R_{g}$ a complicated,
non-local operator which will be difficult to work with in practice,
especially on real hardware with constrained qubit connectivity. In order to
rectify this we consider, for our equivariant models, a slight modification of
standard amplitude encoding which entails rearranging the order in which the
pixel values are encoded so as to produce an encoding with respect to which we
have $R_{g}=X^{\otimes n}$ (see Figure 3). The standard order in which the
pixel values are read is shown in Figure 3(a) for an example of a 4 × 4
greyscale (i.e. only one channel) image, and our alternate encoding in Figure
3(b). The encoding strategy of Figure 3(a) is employed in the generic model
(Figure 1(b)), and the strategy of Figure 3(b) in the equivariant model
(Figure 1(c) of the main text). In the second case, a reflection of the image
is represented at the Hilbert space level by the operator $X^{\otimes n}$,
which exchanges the basis states $\ket{i}$ and $\ket{2^{n}-1-i}$. The case of
three channel RGB images is similar, with the order that the data is encoded
chosen so as to enforce the requirement that, for every pixel and every
channel, the amplitudes of the states $\ket{i}$ and $\ket{2^{n}-1-i}$ are the
values (in a given channel) of a pair of pixels related by the horizontal
reflection. With this choice of encoding we have that
$R_{g}\ket{\psi(\boldsymbol{x})}=\ket{\psi(g(\boldsymbol{x}))}$ for $g=e,r$
the identity and horizontal reflection operations on the images respectively,
with $R_{e}=I$, $R_{r}=X^{\otimes n}$. Armed with the representation of our
symmetry group (consisting simply of the operators $\{I,X^{\otimes n}\}$) we
are ready to begin constructing REQNNs.
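Under the permuted encoding, the claim that $X^{\otimes n}$ enacts the reflection rests on the fact that complementing every bit of a basis-state index $i$ yields $2^{n}-1-i$. A quick numerical check (the helper name is ours):

```python
def flip_all_bits(i, n):
    """Complement every bit of an n-bit basis-state index: X applied to every
    qubit maps |i> to |2^n - 1 - i>."""
    return (2 ** n - 1) - i

n = 4  # the 4x4 image of Figure 3 needs 4 qubits
# Bitwise check: complementing the binary string of i gives the same index.
for i in range(2 ** n):
    flipped = int(format(i, f"0{n}b").translate(str.maketrans("01", "10")), 2)
    assert flip_all_bits(i, n) == flipped
print(flip_all_bits(3, n))  # |0011> -> |1100>, i.e. 12
```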
Given a symmetry group $\mathcal{G}$ and a unitary representation $R$, various
ways of constructing equivariant QNNs have been proposed. Ref. [35], for
example, takes a standard set of gates and “symmetrises” them, building new
gates which are guaranteed to commute with the symmetry representation. Here,
benefiting from the simplicity of our representation, we adopt a different
approach, simply manually selecting gates and measurements which commute with
$X^{\otimes n}$ and therefore satisfy Equation 5. Schematic depictions of the
two models considered in this work are shown in Figure 1(b,c). The model of
Figure 1(b) is a “generic ansatz” consisting of amplitude encoding followed by
a standard variational circuit followed by $\sigma_{z}$ measurements on the
first $n_{\mathrm{classes}}$ qubits. The prediction of the model is taken to
be the class corresponding to the qubit which reports the largest such
measurement (i.e. $M_{j}=\sigma^{(j)}_{z}$ in the notation of Equation 1, with
$\sigma^{(j)}_{z}$ the Pauli $z$ operator acting on the $j$th qubit). The
reflection equivariant model of Figure 1(c) differs in several ways. First,
the encoding stage is slightly modified as previously discussed (see also
Figure 3). Second, the variational component is built from $R_{x}$ and
$R_{yy}$ gates, both of which commute with $X^{\otimes n}$. Finally, the class
labels are determined by measurements
$M_{j}=\sigma^{(j)}_{z}\otimes\sigma^{(j+1\ \mathrm{mod}\ n)}_{z}$, which also
commute with $X^{\otimes n}$. This QNN therefore satisfies the equivariance
condition of Equation 5.
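The stated commutation relations can be checked numerically. The sketch below (a verification, not the training code) builds $R_{x}$ and $R_{yy}$ from their Pauli generators and confirms that they, and the $\sigma_{z}\otimes\sigma_{z}$ measurement operators, commute with $X$ on the qubits they act on:

```python
import numpy as np

# Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rx(theta):
    # R_x(theta) = exp(-i theta X / 2) = cos(theta/2) I - i sin(theta/2) X
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

def ryy(theta):
    # R_yy(theta) = exp(-i theta (Y x Y) / 2)
    return np.cos(theta / 2) * np.eye(4, dtype=complex) \
        - 1j * np.sin(theta / 2) * np.kron(Y, Y)

# Each gate acts as the identity on the remaining qubits, so commuting with
# X (or X x X) on its own qubits implies commuting with the full X^{(x)n}.
XX = np.kron(X, X)
assert np.allclose(rx(0.7) @ X, X @ rx(0.7))      # R_x commutes with X
assert np.allclose(ryy(1.3) @ XX, XX @ ryy(1.3))  # R_yy commutes with X x X
ZZ = np.kron(Z, Z)
assert np.allclose(ZZ @ XX, XX @ ZZ)              # so do the ZZ measurements
print("all commutators vanish")
```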
We implement the networks within the Pennylane framework [44], and train them
using the Nesterov momentum optimiser [45]. The results are shown in Figure 4.
Our results show that the reflection equivariant model consistently
outperforms the generic model, despite its lower expressibility and the
generic model containing 50% more trainable parameters. The increased
performance is particularly notable in the cases of the more complicated
datasets, CIFAR-10 and CelebA, which consist of colour (RGB) images. In the
case of MNIST, although the difference in final test accuracies is
negligible, we nonetheless see the reflection equivariant model learning more
quickly in the initial stage of training, with its focus narrowed to the more
meaningful subset of reflection insensitive decision functions.
[Figure 5 plot: test accuracy vs. number of training samples (0 to 10,000) on CIFAR-10 (trucks and ships), for the Reflection Equivariant, Generic Ansatz, and Hybrid models.]
Figure 5: Effect of reflection equivariance. In addition to the
considered generic and reflection equivariant models we trial a “hybrid” model
with the same variational circuit structure and final measurement strategy as
the reflection equivariant model, but which uses the standard amplitude
encoding map ${\mathcal{E}}$ (Figure 3 (a)) instead of the modified map
$\tilde{\mathcal{E}}$ (Figure 3 (b)). Therefore, the hybrid model continues to
commute with $X^{\otimes n}$, but this operator no longer represents a
meaningful transformation of the data. The considerable drop in accuracy from
the reflection equivariant model, despite sharing the same variational
architecture, confirms the importance of respecting meaningful symmetries of
the data.
[Figure 6 plot: $\mathrm{Var}\left[\partial_{\theta_{n}^{(0)}}\langle M_{1}\rangle\right]$ vs. $n_{\mathrm{qubits}}$ on a log scale, for the Reflection Equivariant and Generic Ansatz models.]
Figure 6: Variance of parameter gradients. We calculate the derivative
of the expectation value of the measured operator $M_{1}$ in both the generic
and reflection equivariant cases with respect to the first variational
parameter in the final layer of the circuits. This is repeated for 3000 random
circuit initialisations, and the variance of the results is plotted as a
function of the number of qubits. At the number of qubits relevant for this
work ($n_{\mathrm{qubits}}=10,12$) we find that the variance of the gradients
in the equivariant case is several orders of magnitude higher than the generic
case. This is consistent with the improved trainability observed for the
equivariant networks, as well as the increased volatility of the test
accuracies seen in Figures 4 and 5.
As the reflection equivariant and generic classifiers considered in this work
are constructed from substantially different quantum circuits, there is a
possibility that the enhanced performance of the reflection equivariant model
is due to some other factor than its respecting of the symmetry. In order to
separate the effect of simply moving to a different circuit architecture from
the role played by the reflection equivariance we consider a “hybrid” model
consisting of the standard encoding $\mathcal{E}$ of Figure 1(b) (see also
Figure 3(a)), and the variational body and measurements of Figure 1(c). This
produces a model with the same restricted expressibility of the reflection
equivariant model, which remains equivariant with respect to the operator
$X^{\otimes n}$, but for which $X^{\otimes n}$ no longer enacts a meaningful
symmetry. The results, shown in Figure 5, show the reflection equivariant
model significantly outperforming this hybrid model, confirming the importance
of building networks which respect genuine symmetries of the data.
As the reflection symmetry is represented by a group with only two elements,
we do not expect respecting it to lead to a provable avoidance of barren
plateaus as has been previously reported in the case of permutation symmetry
[38], but our results here are an encouraging sign that, even in the absence
of such guarantees, considerable gains in accuracy may be realised by such
models in practice. To explore this further we numerically investigate the
variance of the gradient of a measured Pauli observable in both the generic
and reflection equivariant cases (see Figure 6, c.f. Figure 3 of Ref. [14]).
As expected, the (universal) generic model experiences a barren plateau, with
exponentially vanishing gradients. While we also see approximately exponential
decreases in the (non-universal) reflection equivariant model, the rate of
decrease is drastically reduced, indicating that training will remain feasible
for much larger quantum circuits. This is consistent with our expectation
that, while the REQNNs may also asymptotically experience barren plateaus,
they will be able to offer enhanced performance in the highly interesting
region of several tens of qubits explored in this work and accessible on near-
term hardware.
4. Conclusion
Geometric quantum machine learning is rapidly emerging as a promising research
direction which may ameliorate two of the key challenges facing QML: barren
plateaus and overfitting. We have applied the techniques of GQML to the task
of creating QML models for image classification which are equivariant with
respect to horizontal reflections, finding a consistent improvement over
generic, symmetry agnostic models, despite those models being more expressive.
These encouraging results join the previous GQML literature [35, 36, 37, 38,
39, 40, 41, 42] in demonstrating clear advantages in building tailored QML
models which respect the symmetry of the data they are attempting to classify,
rather than simply employing universal models, and provide further evidence of
the potential of this research direction in the NISQ era and beyond.
The authors acknowledge useful discussions with Jamie Heredge. MW acknowledges
the support of the Australian Government Research Training Program
Scholarship. Computational resources were provided by the National Computing
Infrastructure (NCI) and Pawsey Supercomputing Centre through the National
Computational Merit Allocation Scheme (NCMAS). This work was supported by
Australian Research Council Discovery Project DP210102831.
## References
* [1] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, “Quantum machine learning,” _Nature_ , vol. 549, no. 7671, pp. 195–202, 2017.
* [2] K. Beer, D. Bondarenko, T. Farrelly, T. J. Osborne, R. Salzmann, D. Scheiermann, and R. Wolf, “Training deep quantum neural networks,” _Nature Communications_ , vol. 11, no. 1, pp. 1–6, 2020.
* [3] V. Havlíček, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, and J. M. Gambetta, “Supervised learning with quantum-enhanced feature spaces,” _Nature_ , vol. 567, no. 7747, pp. 209–212, 2019.
* [4] J. Romero, J. P. Olson, and A. Aspuru-Guzik, “Quantum autoencoders for efficient compression of quantum data,” _Quantum Science and Technology_ , vol. 2, no. 4, p. 045001, 2017.
* [5] P.-L. Dallaire-Demers and N. Killoran, “Quantum generative adversarial networks,” _Physical Review A_ , vol. 98, no. 1, p. 012324, 2018.
* [6] N. Killoran, T. R. Bromley, J. M. Arrazola, M. Schuld, N. Quesada, and S. Lloyd, “Continuous-variable quantum neural networks,” _Physical Review Research_ , vol. 1, no. 3, p. 033063, 2019.
* [7] M. Schuld and N. Killoran, “Quantum machine learning in feature Hilbert spaces,” _Physical Review Letters_ , vol. 122, no. 4, p. 040504, 2019.
* [8] I. Cong, S. Choi, and M. D. Lukin, “Quantum convolutional neural networks,” _Nature Physics_ , vol. 15, no. 12, pp. 1273–1278, 2019.
* [9] M. Schuld, “Supervised quantum machine learning models are kernel methods,” _arXiv preprint arXiv:2101.11020_ , 2021.
* [10] S. L. Tsang, M. T. West, S. M. Erfani, and M. Usman, “Hybrid quantum-classical generative adversarial network for high resolution image generation,” _arXiv preprint arXiv:2212.11614_ , 2022.
* [11] Y. Liu, S. Arunachalam, and K. Temme, “A rigorous and robust quantum speed-up in supervised machine learning,” _Nature Physics_ , vol. 17, no. 9, pp. 1013–1017, 2021.
* [12] H.-Y. Huang, M. Broughton, J. Cotler, S. Chen, J. Li, M. Mohseni, H. Neven, R. Babbush, R. Kueng, J. Preskill _et al._ , “Quantum advantage in learning from experiments,” _Science_ , vol. 376, no. 6598, pp. 1182–1186, 2022.
* [13] M. T. West, S. M. Erfani, C. Leckie, M. Sevior, L. C. Hollenberg, and M. Usman, “Benchmarking adversarially robust quantum machine learning at scale,” _arXiv preprint arXiv:2211.12681_ , 2022.
* [14] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, “Barren plateaus in quantum neural network training landscapes,” _Nature Communications_ , vol. 9, no. 1, pp. 1–6, 2018.
* [15] Z. Holmes, K. Sharma, M. Cerezo, and P. J. Coles, “Connecting ansatz expressibility to gradient magnitudes and barren plateaus,” _PRX Quantum_ , vol. 3, no. 1, p. 010313, 2022.
* [16] M. Cerezo, A. Sone, T. Volkoff, L. Cincio, and P. J. Coles, “Cost function dependent barren plateaus in shallow parametrized quantum circuits,” _Nature Communications_ , vol. 12, no. 1, pp. 1–12, 2021.
* [17] S. Wang, E. Fontana, M. Cerezo, K. Sharma, A. Sone, L. Cincio, and P. J. Coles, “Noise-induced barren plateaus in variational quantum algorithms,” _Nature Communications_ , vol. 12, no. 1, pp. 1–11, 2021.
* [18] A. Pesah, M. Cerezo, S. Wang, T. Volkoff, A. T. Sornborger, and P. J. Coles, “Absence of barren plateaus in quantum convolutional neural networks,” _Physical Review X_ , vol. 11, no. 4, p. 041011, 2021.
* [19] T. L. Patti, K. Najafi, X. Gao, and S. F. Yelin, “Entanglement devised barren plateau mitigation,” _Physical Review Research_ , vol. 3, no. 3, p. 033090, 2021.
* [20] A. Skolik, J. R. McClean, M. Mohseni, P. van der Smagt, and M. Leib, “Layerwise learning for quantum neural networks,” _Quantum Machine Intelligence_ , vol. 3, no. 1, pp. 1–11, 2021.
* [21] T. Volkoff and P. J. Coles, “Large gradients via correlation in random parameterized quantum circuits,” _Quantum Science and Technology_ , vol. 6, no. 2, p. 025008, 2021.
* [22] E. Grant, L. Wossnig, M. Ostaszewski, and M. Benedetti, “An initialization strategy for addressing barren plateaus in parametrized quantum circuits,” _Quantum_ , vol. 3, p. 214, 2019.
* [23] S. Lu, L.-M. Duan, and D.-L. Deng, “Quantum adversarial machine learning,” _Physical Review Research_ , vol. 2, no. 3, p. 033212, 2020.
* [24] E. Noether, “Invariante variationsprobleme,” _Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse_ , vol. 1918, pp. 235–257, 1918. [Online]. Available: http://eudml.org/doc/59024
* [25] T. S. Cohen _et al._ , “Equivariant convolutional networks,” 2021.
* [26] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” _Proceedings of the IEEE_ , vol. 86, no. 11, pp. 2278–2324, 1998.
* [27] R. Kondor and S. Trivedi, “On the generalization of equivariance and convolution in neural networks to the action of compact groups,” in _International Conference on Machine Learning_. PMLR, 2018, pp. 2747–2755.
* [28] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778.
* [29] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst, “Geometric deep learning: Going beyond euclidean data,” _IEEE Signal Processing Magazine_ , vol. 34, no. 4, pp. 18–42, 2017.
* [30] M. M. Bronstein, J. Bruna, T. Cohen, and P. Veličković, “Geometric deep learning: Grids, groups, graphs, geodesics, and gauges,” _arXiv preprint arXiv:2104.13478_ , 2021.
* [31] Z. Li, H. Zheng, E. Thiede, J. Liu, and R. Kondor, “Group-equivariant neural networks with fusion diagrams,” _arXiv preprint arXiv:2211.07482_ , 2022.
* [32] B. Perozzi, R. Al-Rfou, and S. Skiena, “Deepwalk: Online learning of social representations,” in _Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining_ , 2014, pp. 701–710.
* [33] J. Masci, D. Boscaini, M. Bronstein, and P. Vandergheynst, “Geodesic convolutional neural networks on riemannian manifolds,” in _Proceedings of the IEEE international conference on computer vision workshops_ , 2015, pp. 37–45.
* [34] A. Krizhevsky, G. Hinton _et al._ , “Learning multiple layers of features from tiny images,” 2009.
* [35] J. J. Meyer, M. Mularski, E. Gil-Fuster, A. A. Mele, F. Arzani, A. Wilms, and J. Eisert, “Exploiting symmetry in variational quantum machine learning,” _arXiv preprint arXiv:2205.06217_ , 2022.
* [36] M. Ragone, P. Braccia, Q. T. Nguyen, L. Schatzki, P. J. Coles, F. Sauvage, M. Larocca, and M. Cerezo, “Representation theory for geometric quantum machine learning,” _arXiv preprint arXiv:2210.07980_ , 2022.
* [37] Q. T. Nguyen, L. Schatzki, P. Braccia, M. Ragone, P. J. Coles, F. Sauvage, M. Larocca, and M. Cerezo, “Theory for equivariant quantum neural networks,” _arXiv preprint arXiv:2210.08566_ , 2022.
* [38] L. Schatzki, M. Larocca, F. Sauvage, and M. Cerezo, “Theoretical guarantees for permutation-equivariant quantum neural networks,” _arXiv preprint arXiv:2210.09974_ , 2022.
* [39] F. Sauvage, M. Larocca, P. J. Coles, and M. Cerezo, “Building spatial symmetries into parameterized quantum circuits for faster training,” _arXiv preprint arXiv:2207.14413_ , 2022.
* [40] A. Skolik, M. Cattelan, S. Yarkoni, T. Bäck, and V. Dunjko, “Equivariant quantum circuits for learning on weighted graphs,” _arXiv preprint arXiv:2205.06109_ , 2022.
* [41] M. Larocca, F. Sauvage, F. M. Sbahi, G. Verdon, P. J. Coles, and M. Cerezo, “Group-invariant quantum machine learning,” _arXiv preprint arXiv:2205.02261_ , 2022.
* [42] H. Zheng, G. S. Ravi, H. Wang, K. Setia, F. T. Chong, and J. Liu, “Benchmarking variational quantum circuits with permutation symmetry,” _arXiv preprint arXiv:2211.12711_ , 2022.
* [43] S. Yang, P. Luo, C.-C. Loy, and X. Tang, “From facial parts responses to face detection: A deep learning approach,” in _Proceedings of the IEEE international conference on computer vision_ , 2015, pp. 3676–3684.
* [44] V. Bergholm, J. Izaac, M. Schuld, C. Gogolin, M. S. Alam, S. Ahmed, J. M. Arrazola, C. Blank, A. Delgado, S. Jahangiri _et al._ , “Pennylane: Automatic differentiation of hybrid quantum-classical computations,” _arXiv preprint arXiv:1811.04968_ , 2018.
* [45] Y. Nesterov, “A method for solving the convex programming problem with convergence rate $o(1/k^{2})$,” _Proceedings of the USSR Academy of Sciences_ , vol. 269, pp. 543–547, 1983.
# PIZZA: A new benchmark for complex end-to-end task-oriented parsing
Konstantine Arkoudas, Nicolas Guenon des Mesnards, Melanie Rubino, Sandesh Swamy,
Saarthak Khanna, Weiqi Sun, Khan Haidar
Alexa AI
###### Abstract
Much recent work in task-oriented parsing has focused on finding a middle
ground between flat slots and intents, which are inexpressive but easy to
annotate, and powerful representations such as the lambda calculus, which are
expressive but costly to annotate. This paper continues the exploration of
task-oriented parsing by introducing a new dataset for parsing pizza and drink
orders, whose semantics cannot be captured by flat slots and intents. We
perform an extensive evaluation of deep-learning techniques for task-oriented
parsing on this dataset, including different flavors of seq2seq systems and
RNNGs. The dataset comes in two main versions, one in a recently introduced
utterance-level hierarchical notation that we call TOP, and one whose targets
are executable representations (EXR). We demonstrate empirically that training
the parser to directly generate EXR notation not only solves the problem of
entity resolution in one fell swoop and overcomes a number of expressive
limitations of TOP notation, but also results in significantly greater parsing
accuracy.
## 1 Introduction
Virtual assistants like Siri and Alexa are becoming ubiquitous, and their
ability to understand natural language has come a long way since their early
days. Traditionally such systems are based on semantic frames that assign a
unique intent to every utterance and a single slot label to every utterance
token (Tur and De Mori, 2011; Mesnil et al., 2013; Liu and Lane, 2016). This
results in flat representations that are unable to capture the sort of
structured semantics that are needed for even moderately complex tasks. Even
something as simple as I’d like two large pizzas with ham and also one diet
pepsi and one coke would be challenging to represent with a single intent and
flat slots (for example, it is not sufficient to merely tag the numbers;
they must be properly grouped with the corresponding orders).
Figure 1: End-to-end task-oriented parsing workflows when using EXR, TOP, and
TOP-Decoupled representations.
The semantic parsing community has a long history of exploring very rich
semantic representations, such as the typed lambda calculus and dependency
graphs (Zettlemoyer and Collins, 2012; Banarescu et al., 2013; Berant and
Liang, 2014). However, annotating utterances with such intricately detailed
representations is difficult, so these have not found wide adoption in the
industry. Recent work on task-oriented parsing has sought to increase
expressivity without imposing an excessive burden on annotation or parsing. A
prominent thread of work here was launched by the introduction of a
hierarchical notation that we will call TOP222The term TOP is also widely used
to refer to a particular dataset that was introduced by Gupta et al. (2018).
Here we use the term to refer to the general representational scheme rather
than that specific dataset. in which the utterance text and its semantics are
organized into a tree structure similar to a constituency parse tree. When
linearizing such a tree, pairs of matching parentheses are used to achieve the
necessary nesting. For instance, annotating the foregoing example might result
in the following string:
(ORDER I’d like (PIZZAORDER (NUMBER two)(SIZE large)pizzas with (TOPPING
ham))and also (DRINKORDER (NUMBER one)(DRINKTYPE diet pepsi))and
(DRINKORDER (NUMBER one)(DRINKTYPE coke)))
This approach injects semantics into the utterance text by hierarchically
wrapping semantic constructors—such as PIZZAORDER—around appropriate utterance
segments. These constructors can be viewed as composable slots and/or intents.
The leaves of this tree, read from left to right, reconstruct the full
utterance text.
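This leaf-recovery property can be demonstrated in a few lines: deleting the semantic constructors and parentheses from a linearised TOP string leaves exactly the utterance tokens. A sketch, assuming constructors are upper-case tokens immediately following an opening parenthesis:

```python
import re

def top_to_utterance(top):
    """Recover the utterance from a linearised TOP tree: strip the
    constructors and parentheses; the remaining leaves, read left to right,
    are exactly the utterance tokens."""
    s = re.sub(r"\(\s*[A-Z_]+", " ", top)   # drop "(CONSTRUCTOR" pairs
    return " ".join(s.replace(")", " ").split())

top = ("(ORDER I'd like (PIZZAORDER (NUMBER two)(SIZE large)pizzas with "
       "(TOPPING ham))and also (DRINKORDER (NUMBER one)(DRINKTYPE diet "
       "pepsi))and (DRINKORDER (NUMBER one)(DRINKTYPE coke)))")
print(top_to_utterance(top))
# -> I'd like two large pizzas with ham and also one diet pepsi and one coke
```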
This annotation scheme has expressive limitations of its own, e.g., it cannot
accommodate utterances with non-projective semantics, such as I would like
three pizzas, one with peppers and the others with ham, which could be done
with more expressive representations. Nevertheless, TOP notation is much more
expressive than flat intents and slots and considerably more accessible to
annotators than heavier-duty formalisms, so it presents a reasonable
compromise between expressivity and practicality. In addition, it has been
shown that deep-learning models can be taught to map utterances to this type
of notation effectively Gupta et al. (2018).
Ultimately, semantic parsing should be producing executable representations
(EXR for short) that a back end service can process directly. An executable
representation for the above pizza order would be something along the
following lines:
(ORDER (PIZZAORDER (NUMBER 2)
(SIZE LARGE)
(TOPPING HAM))
(DRINKORDER (NUMBER 1)
(DRINKTYPE DIET_PEPSI))
(DRINKORDER (NUMBER 1)
(DRINKTYPE COKE)))
Note that this representation does not contain any natural language text,
unlike the earlier TOP representation.
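Because EXR strings are plain s-expressions, a back end can turn them into nested structures with a small recursive-descent routine. An illustrative sketch (not the dataset's reference tooling):

```python
import re

def parse_exr(text):
    """Parse an EXR string into a nested [constructor, children...] list."""
    tokens = re.findall(r"\(|\)|[^\s()]+", text)

    def parse(pos):
        assert tokens[pos] == "("
        node = [tokens[pos + 1]]  # constructor name follows "("
        pos += 2
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                child, pos = parse(pos)
                node.append(child)
            else:
                node.append(tokens[pos])
                pos += 1
        return node, pos + 1

    tree, _ = parse(0)
    return tree

exr = "(ORDER (PIZZAORDER (NUMBER 2) (SIZE LARGE) (TOPPING HAM)))"
print(parse_exr(exr))
# -> ['ORDER', ['PIZZAORDER', ['NUMBER', '2'], ['SIZE', 'LARGE'], ['TOPPING', 'HAM']]]
```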
As part of this paper we propose a discussion on the advantages and drawbacks
of using either of those representations in a production environment such as
that of Alexa. We also provide an empirical study to quantify and analyze the
effectiveness of models to produce such representations.
If the semantic parser outputs TOP notation, then a separate entity-resolution
stage must be implemented, along with integration logic for producing the
final EXR. Figure 1 contrasts the end-to-end workflow of a parser that
produces EXR vs that of one that produces TOP (the third alternative, TOP-
Decoupled, is a variant of TOP we will discuss in the next section). Training
a semantic parser to generate EXR directly obviates the need to maintain a
separate downstream entity resolution (ER) component, and annotating training
data in EXR format incurs no extra burden.
The EXR approach is able to handle non-projective and other challenging types
of semantics, as long as these are expressible in EXR notation. To take the
preceding example of three pizzas, one with peppers and the others with ham,
the target EXR semantics would simply be the conjunction of two clauses of the
form (PIZZAORDER (NUMBER 1) (TOPPING PEPPERS)) and (PIZZAORDER (NUMBER 2)
(TOPPING HAM)). Constructs of this form can be easily generated by
probabilistic context-free grammars (PCFGs) with semantic actions (more on
these below), and potentially learned by statistical models like DNNs (Deep
Neural Networks).
To be able to measure the effectiveness of NLU models to predict either of
those more expressive target representations, we release PIZZA, a new task-
oriented parsing benchmark. Unlike the TOP dataset from Gupta et al. (2018),
PIZZA allows systems to be evaluated end-to-end by providing EXR representations.
The training set of PIZZA consists of a large collection of synthetically
generated utterances, while the dev and test sets are much smaller and human-
generated. The dataset comes in three versions, one in EXR notation, one in
TOP notation, and one in TOP-Decoupled (discussed in next section).
Oftentimes the NLU system is bootstrapped with a grammar $G$ that emits
semantic representations. That grammar is then sampled to produce utterances
that serve as training data for a statistical model $M$. At runtime, an
utterance $u$ is analyzed by running $G$ and $M$ concurrently and their
results are then combined by some algorithm (e.g., merged and reranked), with
the results of $G$ often taking precedence, particularly when $G$ parses $u$
successfully and only produces one interpretation. This is also the scenario
that we emulate in this paper. Because intents and slots are typically flat,
$G$ is usually a finite state transducer (FST) that outputs an intent and a
label for each utterance token. But since FSTs are unable to capture scope and
unbounded stack-based recursion, we instead use a hand-crafted PCFG augmented
with semantic actions that directly produce EXR. We used the derivations
produced by this grammar to automatically produce TOP counterparts of the EXR
training data, and likewise for the dev and test sets, though for dev and test
utterances that were not parsable by the PCFG, TOP annotations had to be given
by hand.
Our contributions in this paper are as follows:
1. We release a new dataset (https://github.com/amazon-research/pizza-semantic-parsing-dataset) for task-oriented parsing, in three versions: EXR, TOP, and
TOP-Decoupled.
2.
We perform an extensive empirical evaluation of different deep-learning
techniques for task-oriented parsing, including a number of seq2seq
architectures, Recurrent Neural Network Grammars (RNNGs), and a new pipeline
technique that works by flattening tree structures.
3.
We show that systems that generate EXR notation significantly outperform their
TOP-generating counterparts, which calls for further research in generating
executable semantic representations with resolved entities instead of a blend
of utterance tokens and semantic constructors.
4.
While the PCFG achieves remarkable accuracy on its own (68%), we show that
seq2seq systems generalize beyond the PCFG that was used to train them.
Specifically, the best performing seq2seq system parses 60% of the utterances
that the PCFG cannot handle.
## 2 The PIZZA dataset
The main characteristics of the natural-language orders comprising the PIZZA
dataset can be summarized as follows. Each order can include any number of
pizza and/or drink suborders. These suborders are labeled with the
constructors PIZZAORDER and DRINKORDER, respectively. Each top-level order is
always labeled with the root constructor ORDER. Both pizza and drink orders
can have number and size attributes. A pizza order can have any number of
topping attributes, each of which can be negated. Negative particles can have
larger scope with the use of the or particle, e.g., no peppers or onions will
negate both peppers and onions. Toppings can be modified by quantifiers such
as a lot or extra, a little, etc. A pizza order can have a style attribute
(e.g., thin crust style or chicago style). Styles can be negated. Each drink
order must have a drinktype (e.g., coke), and can also have a containertype
(e.g., bottle) and/or a volume modifier (e.g., three 20 fl ounce coke cans).
We view ORDER, PIZZAORDER, and DRINKORDER as intents, and the rest of the
semantic constructors as composite slots, with the exception of the leaf
constructors, which are viewed as entities (resolved slot values).
A simple example of an order is the query one medium-size pizza with peppers
and ham but no onions. Figure 2 (a) depicts an EXR semantic tree for this
order. Note that while the order of siblings in EXR trees has no effect on
operational semantics (more precisely, every internal tree node is commutative
and associative), using a consistent ordering scheme can have a nontrivial
impact on learnability (see Appendix E).
[ORDER [PIZZAORDER [NUMBER [1]] [SIZE [MEDIUM]] [TOPPING [PEPPERS]] [TOPPING
[HAM]] [NOT [TOPPING [ONIONS]]]]]
[ORDER [PIZZAORDER [NUMBER [one]] [SIZE [medium-size]] [pizza with] [TOPPING
[peppers]] [and] [TOPPING [ham]] [but no] [NOT [TOPPING [onions]]]]]
Figure 2: (a) EXR and (b) TOP representation, for the order one medium-size
pizza with peppers and ham but no onions.
The tree in Figure 2 (b) is a TOP representation of the same utterance. Here,
internal tree nodes are not commutative or associative, because permuting two
subtrees will violate the constraint that a left-to-right traversal of the
leaves must reconstruct the utterance text. It has been recognized before that
this constraint imposes expressive limitations on TOP representations. In
particular, TOP is not well-positioned to handle long-range dependencies. An
example given by Aghajanyan et al. (2020) is the utterance On Monday, set an
alarm for 8 AM. It would be preferable here for the representation to have a
single date-time constructor (slot) containing both the day and the time, but
in TOP this is impossible due to the intervening text set an alarm separating
Monday from 8 AM. While these limitations may be of relatively little
practical importance in English, they would be more acutely felt in languages
with more flexible word orders. Aghajanyan et al. (2020) mitigate this by
pruning tokens that do not appear as children of leaf slots. They call the
resulting notation decoupled TOP. In this example, the decoupled TOP tree
would be identical to that shown in Figure 2 (b), except that the leaves
corresponding to the utterance segments pizza with, and, and but no would be
removed.
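The pruning that produces decoupled TOP is mechanical: a token leaf survives only if its parent is a leaf slot, i.e., a constructor all of whose children are tokens. A minimal sketch, assuming trees are represented as (label, children) tuples with plain strings for utterance tokens (our representation, chosen for illustration):

```python
def decouple(tree):
    """Prune utterance tokens that are not children of leaf slots,
    turning a TOP tree into its decoupled-TOP counterpart.

    A tree is a (label, children) pair; token leaves are plain strings.
    """
    label, children = tree
    if all(isinstance(c, str) for c in children):
        # A leaf slot: all children are tokens, so keep them verbatim.
        return (label, list(children))
    # Otherwise drop bare tokens and recurse into constructor children.
    return (label, [decouple(c) for c in children if not isinstance(c, str)])

# The TOP tree of Figure 2 (b), with token leaves as strings:
top = ("ORDER", [("PIZZAORDER", [
    ("NUMBER", ["one"]), ("SIZE", ["medium-size"]),
    "pizza", "with",
    ("TOPPING", ["peppers"]), "and", ("TOPPING", ["ham"]),
    "but", "no", ("NOT", [("TOPPING", ["onions"])]),
])])
```

Applied to `top`, this removes the leaves pizza with, and, and but no, exactly as described above.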
The training data of PIZZA was synthetically generated by sampling a subset of
our grammar (see Appendix A). Because the grammar was written to maximize
recall, only a relatively small subset of its rules was sampled. Care was
taken to generate natural-sounding orders and to avoid pathological instances,
such as a pizza with mushrooms and no mushrooms. Once the utterances were
generated, their EXR semantics were obtained by running our parser on them
(the grammar generates EXR directly). TOP counterparts were automatically
generated by an algorithm that analyzes the EXR derivations produced by the
parser. The total number of synthetically generated utterances is about 2
million, but because the number of sampled patterns is small, the training set
has more lexical than structural diversity (as is often the case with the
initial data of grammar-based systems facing a cold start). Consequently, the
dataset has a good amount of structural redundancy and a well-pretrained model
fine-tuned on a small subset of the data can achieve performance on par with a
model trained on the full dataset (see the ablation results in Appendix F).
The test and validation data are human-generated and were collected by two
MechanicalTurk tasks: (1) A paraphrasing task where the annotators were shown
a synthetically generated order in plain English and were asked to rephrase it
in a more natural way. (2) A text generation task where annotators were asked
to formulate orders of their choice, for themselves or for a group of people.
This free-style collection process resulted in 1K unbiased natural language
orders, from which we manually extracted the semantics in EXR format. Each
utterance was annotated by two individuals, and we only kept the ones where
both annotations matched. The TOP versions of the dev and test sets were
obtained by first running our parser on the utterances, analyzing the
derivations to extract the necessary TOP information, and then manually
correcting the output trees as needed. Finally, the dev and test portions of
TOP-Decoupled were obtained from TOP by removing tokens that do not appear as
children of leaf slot constructors. Additional dataset creation details can be
found in Appendix B, and dataset examples in Appendix G.
The following table presents some statistics on the dataset:
| Train | Dev | Test
---|---|---|---
Number of utterances | 2,456,446 | 348 | 1,357
Unique entities | 367 | 109 | 180
PCFG accuracy | 100% | 70% | 68%
Avg entities per utterance | 5.32 | 5.37 | 5.42
Avg intents per utterance | 1.76 | 1.25 | 1.28
A significant proportion of human-generated utterances requires generalization
from synthetic data and cannot be fully parsed by the PCFG parser.
While the domain of pizza ordering is conceptually simple (and familiar to
all), the semantics can get fairly subtle. For instance, in the query I’d like
four pizzas, half with ham and half with peppers, it’s not obvious if the
request is for two pizzas with ham and two with peppers, or for four pizzas
where the half of each has peppers and the other half ham. Another factor
making this particular dataset challenging is the distribution shift between
the training and testing data. This is also a common theme in practice, as
semantic parsing systems are usually faced with a cold start problem, whereby
initial versions are built by developers who may not be able to anticipate
user queries in the wild. Hence, test utterances collected subsequently may
diverge significantly from the utterances that the initial system was designed
to handle.
## 3 Models
In this section we introduce seq2seq models as well as a new pipelined
approach. We also experimented with Insertion Transformers Stern et al. (2019)
and RNNGs Dyer et al. (2016) but defer the details to Appendix H.
### 3.1 Seq2seq architectures
We model the generation of EXR/TOP representations from natural language
utterances as a sequence-to-sequence (seq2seq) learning problem Sutskever et
al. (2014) where the source sequence is the utterance and the target sequence
is a linearized rendering of the EXR/TOP tree (obtained from a preorder
traversal of the tree). For encoding and decoding we explored Long
Short-Term Memory networks (LSTMs) Hochreiter and Schmidhuber (1997) as well
as Transformer encoders Vaswani et al. (2017) and BART decoders Lewis et al.
(2019) as the
starting point for our task. Figure 3 illustrates this setup when using a
Transformer-based encoder and decoder; the target sequence is a TOP
representation.
Figure 3: Our Transformer-based seq2seq architecture.
The decoder is fine-tuned (trained from scratch in the case of LSTMs) to generate
both natural language tokens and semantic constructors like PIZZAORDER. For
the LSTMs we used a single-layer 512-dimensional bi-directional encoder and
decoder with attention and pretrained fastText embeddings Bojanowski et al.
(2017). For the Transformers, we performed experiments with the Large
(12-layer) variation of BART. We kept the BART token and position embeddings
frozen and only unfroze the other layers. We do not constrain the decoder
vocabulary and preserve the original pre-trained vocabulary. We train our
models to minimize the sequence cross-entropy loss against the chosen target
representation.
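The linearization itself is a preorder traversal of the semantic tree; the following sketch illustrates one such rendering (the exact tokenization used in our experiments may differ):

```python
def linearize(tree):
    """Flatten a semantic tree into a target token sequence via a
    preorder traversal, the form fed to the seq2seq decoder.

    Trees are (label, children) tuples; leaves are plain strings.
    """
    if isinstance(tree, str):   # leaf: entity or utterance token
        return [tree]
    label, children = tree
    out = ["(", label]
    for c in children:
        out.extend(linearize(c))
    out.append(")")
    return out
```

For example, the EXR clause (PIZZAORDER (NUMBER 1) (TOPPING PEPPERS)) linearizes to the token sequence `( PIZZAORDER ( NUMBER 1 ) ( TOPPING PEPPERS ) )`.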
### 3.2 Pipeline model
We also introduce a “divide, flatten, and conquer” approach that uses a
pipeline of two models run in sequence: an intent segmentation (IS) model to
determine the intent spans present in the input utterance, and a conventional
named entity recognition (NER) model to assign flattened entity labels to the
tokens of each intent span identified by the first model. These models are
trained separately on two different sequence labeling tasks. For example in
two large pizzas with ham and one diet coke, the labeling for the intent
segmentation model would be:
B-PIZZAORDER I-PIZZAORDER I-PIZZAORDER
I-PIZZAORDER I-PIZZAORDER Other
B-DRINKORDER I-DRINKORDER I-DRINKORDER.
The example carries two different intent spans, one for a pizza order and one
for a drink order. As a result, the NER model will have two different inputs:
two large pizzas with ham and one diet coke. The NER labels for each will be:
B-NUMBER B-SIZE Other Other B-TOPPING
and
B-NUMBER B-DRINKTYPE I-DRINKTYPE.
To handle cases where hierarchy is needed, we compress more information into
flattened labels, e.g., a negated pizza topping will be labeled as
NEG_TOPPING. Because these models need to have a 1:1 mapping between the input
tokens and output labels, they can’t be trained on the EXR notation or the
TOP-Decoupled notation. Both models use a BERT-based encoder Devlin et al.
(2019).
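The first stage of the pipeline can be illustrated with a small sketch that recovers intent spans from BIO labels like those above; `intent_spans` is a hypothetical helper written for this exposition, not the production code:

```python
def intent_spans(tokens, bio_labels):
    """Group utterance tokens into intent spans from the BIO labels
    emitted by the intent-segmentation (IS) model."""
    spans = []
    for tok, lab in zip(tokens, bio_labels):
        if lab.startswith("B-"):
            spans.append((lab[2:], [tok]))   # start a new intent span
        elif lab.startswith("I-") and spans:
            spans[-1][1].append(tok)         # extend the current span
        # tokens labeled "Other" belong to no intent span
    return spans
```

Each recovered span is then passed to the NER model, which assigns the flattened entity labels shown above.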
## 4 Results and analysis
The main results (Appendix H provides results for other models) are
summarized in Table 1. EXR is taken as the ground truth, so the outputs of
models that produce TOP or TOP-Decoupled notation must then undergo entity
resolution (ER) in order to produce a final output in EXR format, which can be
compared against the ground truth. See Appendix D for ER details.
Our metric is Exact Match accuracy (EM), which checks for an exact match
between the ground-truth and predicted semantic trees, modulo sibling order.
In addition, for the models that produce TOP representations, we throw out all
utterance tokens that are not in a leaf slot.
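Sibling-order-insensitive comparison can be implemented by canonicalizing both trees before testing equality; a sketch, under the assumption that trees are (label, children) tuples with string leaves:

```python
def canonical(tree):
    """Canonical form of a semantic tree: recursively sort siblings so
    that comparison ignores their order (internal nodes are commutative).
    Trees are (label, children) tuples; leaves are strings."""
    if isinstance(tree, str):
        return tree
    label, children = tree
    return (label, tuple(sorted((canonical(c) for c in children), key=repr)))

def exact_match(pred, gold):
    """EM on a single example: trees must match modulo sibling order."""
    return canonical(pred) == canonical(gold)
```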
Table 1: EM against EXR ground truth, grouped by notation type used for training. Mean and standard error from models trained on 5 different seeds.
Model | Dev (348 utt) | Test (1357 utt) | PCFG Error Test Subset (434 utt)
---|---|---|---
PCFG Parser | 69.54 | 68.02 | 0.0
TOP | | |
LSTM | 44.42 ± 2.02 | 41.44 ± 2.15 | 12.63 ± 0.50
BART | 63.91 ± 0.31 | 62.17 ± 0.31 | 27.56 ± 0.31
Pipeline | 68.10 ± 0.44 | 64.08 ± 0.32 | 20.97 ± 0.96
TOP-Decoupled | | |
LSTM | 46.55 ± 1.03 | 42.15 ± 1.52 | 14.79 ± 1.11
BART | 74.60 ± 0.15 | 71.73 ± 0.13 | 40.55 ± 0.50
EXR | | |
LSTM | 38.51 ± 3.58 | 33.28 ± 4.17 | 14.93 ± 1.22
BART | 81.26 ± 0.48 | 78.56 ± 0.31 | 59.49 ± 0.74
#### Generating EXR improves performance.
Training BART to generate TOP-Decoupled instead of TOP improves performance
from 62% to 72% EM. Generating EXR further improves the BART model by 7 EM
points. (LSTM results do not show the same trend for EXR, but those results
are given merely as a baseline for which no extensive hyperparameter search
was carried out.) This change not only gives better performance, owing to a smaller
decoder vocabulary and shorter output sequences, but also generates the final
executable representation needed for downstream processing by the voice
assistant.
#### Generating EXR adds more than just entity resolution.
One might suspect that the poorer performance of TOP or TOP-Decoupled
compared to EXR is due strictly to the entity resolution step. However, in a
manual error analysis of BART models (performed on the dev set, where a
single example may be counted in multiple error categories), shown in Figure
4 (c), only 30% of the TOP-Decoupled errors that BART on EXR got right could
have been fixed by perfect entity resolution, and many of those examples
contained other errors as well.
In addition, Appendix D presents similar EM results on parsing only, without
subsequent ER, and also when perfect ER is applied, further confirming that
the ER step is not the main reason for the poorer performance of these models.
#### Generating EXR comes with challenges.
While the gains in performance in generating EXR call for further research on
end-to-end systems, it must be noted that the integration of such models in
production has some caveats. Notably, every time we want to expand the
catalogs with new entities, the whole system needs to be retrained to augment
its output space. That said, it would likely not be retrained from scratch;
a few model updates could suffice. For a pipelined system, the ER component
would also need to be retrained, but the NLU component could remain
unchanged. It is important to note that generating EXR is more appealing for
a closed-world task like food ordering, where there is a finite menu with
likely fewer than a couple thousand entities. For applications such as
Music, where there are tens of millions of artist and song names, directly
generating EXR brings additional challenges that need to be addressed through
further innovations.
#### Some seq2seq models outperform the PCFG.
The PCFG parser achieves high accuracy on its own (68%), showing that a well-
crafted grammar can be highly effective by itself. However, powerful
pretrained language models and judicious choice of target notation and
architecture can yield DNN models that generalize beyond the PCFG, even though
they were trained only on PCFG-generated data. In particular, BART trained on
TOP-Decoupled and EXR achieves 72% and 79% EM, respectively. In addition, on
the portion of the test set that PCFG gets wrong, called _"PCFG Error Test
Subset"_ above, BART on EXR reaches 60% EM.
#### Seq2seq complements the PCFG.
Most of the PCFG errors in the dev set that BART got correct were due to
diverse phrasings used to specify toppings, sizes and negations. Although many
common phrasings are captured in the grammar, not all can be handled a priori.
This is where a seq2seq model fine-tuned on a strong language model can
outperform. Consider the request i’d like a large pizza with sausage and ham
and also add some extra cheese. It is challenging for the original PCFG to
parse extra cheese as a topping given the preceding phrase and also add some,
but BART trained on EXR makes the correct prediction in this case. Manual
analysis of 50 out of the 70 dev set errors in Figure 4 (a) shows that BART on
EXR generalizes beyond the phrasing patterns defined by the PCFG. (Semi-supervised
learning from live traffic could be used to improve the PCFG's structural
diversity.)
On the other hand, there are commonly phrased requests that the PCFG correctly
parses but BART gets wrong; see Figure 4 (b), which summarizes those 31
errors (9% of the dev set). In lengthier requests, especially those with multiple intents,
BART trained on EXR misses slots and doesn’t generate all of the intents
requested. Consider the request i would like to order a medium pizza with
italian sausage a small pizza with beef and mushrooms a large combination
pizza and four large pepsis, which the PCFG correctly parses. While BART on
EXR correctly generates a PIZZAORDER intent for the medium pizza and the
DRINKORDER intent, the other two PIZZAORDER intents get incorrectly combined
into one with two TOPPING slots missing.
The pipeline model does very well (64%) but seems less complementary to the
PCFG than BART. This model would also not be an appropriate choice for a
domain with deep semantic trees.
Figure 4: (a) PCFG dev set errors that BART EXR got correct (b) BART EXR dev
set errors that PCFG got correct (c) BART TOP-Decoupled dev set errors that
BART EXR got correct
## 5 Related Work
Semantic parsing is traditionally conceived as mapping natural language to
logical forms that can be either directly executed or readily translated into
an executable form, and the community has a long history of studying richly
expressive languages for such logical forms, most notably variants of the
typed lambda calculus Zelle and Mooney (1996); Zettlemoyer and Collins (2012).
While these representations can capture the semantics of a tremendous range of
natural language constructs, they are challenging to annotate and to parse.
There is also a long tradition of shallow semantic representations,
particularly in the context of task-oriented parsing, using semantic frames
based on intents and slots; prominent datasets here have been ATIS Price
(1990) and more recently, SNIPS Coucke et al. (2018). These representations
are much more limited in their expressivity, but they are much more accessible
to annotators and easier to parse.
Recent work in the field has focused on increasing the expressive power of
task-oriented parsing, moving beyond flat intents and slots to accommodate
compositionality, but without overly complicating annotation and parsing.
Gupta et al. (2018) introduced the “TOP notation”, a tree-based representation
for task-oriented parsing with the key property that traversing the leaves
from left to right reconstructs the utterance text. More recent work has
introduced a so-called “decoupled” variant of TOP Aghajanyan et al. (2020).
## 6 Conclusions
We have introduced a new dataset for task-oriented parsing, PIZZA, with three
notational versions, EXR, TOP, and decoupled TOP. EXR semantic trees are
variable-free, directly executable by the back end, and contain no utterance
tokens, unlike both TOP and decoupled TOP. We performed an extensive empirical
evaluation of a number of DNN-based techniques for semantic parsing on this
dataset.
A key original motivation for introducing TOP notation was that its
structure is similar to standard constituency parses, allowing algorithms
developed for phrase-structure parsing, such as linear-time RNNGs, to be
easily adapted for inference. This does not appear to be a compelling
consideration at present, as more recent results have shown that direct
seq2seq approaches based on transformer architectures and large pretrained
models outperform techniques such as RNNGs. That is consistent with the
findings presented in this paper.
## Acknowledgments
The authors thank Chi-Liang Liu for his diligent work in the early stages of
this project, and Saleh Soltan for helping us to use BART models in our
framework. We thank Stephen Rawls for his careful review and are also indebted
to all the colleagues who helped in the review, and with manual inspection and
annotation of hundreds of utterances.
## References
* Aghajanyan et al. (2020) Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Michael Haeger, Haoran Li, Yashar Mehdad, Veselin Stoyanov, Anuj Kumar, Mike Lewis, and Sonal Gupta. 2020. Conversational semantic parsing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 5026–5035, Online. Association for Computational Linguistics.
* Banarescu et al. (2013) Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In _Proceedings of the 7th linguistic annotation workshop and interoperability with discourse_ , pages 178–186.
* Berant and Liang (2014) Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1415–1425.
* Bojanowski et al. (2017) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information.
* Coucke et al. (2018) Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, Maël Primet, and Joseph Dureau. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. _BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding_. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota.
* Dyer et al. (2016) Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. 2016. Recurrent neural network grammars. _arXiv preprint arXiv:1602.07776_.
* Fokker (1995) Jeroen Fokker. 1995. Functional parsers. In _Advanced Functional Programming, First International Spring School on Advanced Functional Programming Techniques, Båstad, Sweden, May 24-30, 1995, Tutorial Text_ , volume 925 of _Lecture Notes in Computer Science_ , pages 1–23. Springer.
* Gupta et al. (2018) Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representations. _arXiv preprint arXiv:1810.07942_.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. _Long Short-Term Memory_. Neural Computation 9:8, 1735-1780.
* Lewis et al. (2019) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. _arXiv preprint arXiv:1910.13461_.
* Liu and Lane (2016) Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. _arXiv preprint arXiv:1609.01454_.
* Mesnil et al. (2013) Grégoire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding. In _Interspeech_ , pages 3771–3775.
* Price (1990) P. J. Price. 1990. Evaluation of spoken language systems: the ATIS domain. In _Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27,1990_.
* Radford et al. (2019) Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
* Rongali et al. (2020) Subendhu Rongali, Luca Soldaini, Emilio Monti, and Wael Hamza. 2020. Don’t parse, generate! a sequence to sequence architecture for task-oriented semantic parsing. In _Proceedings of The Web Conference 2020_ , pages 2962–2968.
* See et al. (2017) Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1073–1083. Association for Computational Linguistics.
* Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1715–1725, Berlin, Germany.
* Stern et al. (2019) Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible sequence generation via insertion operations.
* Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In _Advances in Neural Information Processing Systems 27_ , page 3104–3112.
* Tur and De Mori (2011) Gokhan Tur and Renato De Mori. 2011. _Spoken language understanding: Systems for extracting semantic information from speech_. John Wiley & Sons.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
* Zelle and Mooney (1996) John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In _AAAI/IAAI, Vol. 2_ , pages 1050–1055. MIT Press.
* Zettlemoyer and Collins (2012) Luke S Zettlemoyer and Michael Collins. 2012. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. _arXiv preprint arXiv:1207.1420_.
## Appendix A PCFG
In our PCFG framework, non-terminals are executable machines akin to
stochastic RTNs (recursive transition networks) or parser combinators Fokker
(1995). To write a production for such a non-terminal is to write declarative
code for the corresponding machine. This code is written in a custom-designed
functional programming language that provides special-purpose facilities for
defining and parsing grammars with arbitrarily complicated semantic actions.
As an example, the following code snippet defines a grammar for the
paradigmatic context-free language of balanced parentheses
$\\{a^{n}\,b^{n}\>|\>n\geq 0\\}$:
def S = id + "a" * S * "b"
The addition sign + indicates alternation (akin to the pipe | of the more conventional textbook notation) and the multiplication sign * indicates juxtaposition (just white space in conventional notation); the latter binds tighter than the former. The keyword id stands for the $\epsilon$ (empty) string of formal language theory, and quoted strings act as terminals. The entire input is a definition (indicated by def) that binds the name S to the corresponding recursive machine (non-terminal).
Semantic actions can be inserted anywhere in a machine definition and
manipulate an implicit semantic stack, an abstraction that is baked into the
operational semantics of this programming language. This is a stack of EXR
trees built during semantic parsing. Basic data types such as numbers,
strings, and nullary semantic constructors (constants) serve as tree leaves.
For instance, suppose we view the semantics of a string of $n$ balanced
parentheses pairs $a^{n}\,b^{n}$ as the number $n$. Then here is how we might
augment the above grammar with appropriate semantic actions to extract that
number and deposit it on the semantic stack upon conclusion of parsing:
def push(t) = fun S => t::S
def succ = fun n::S => n+1::S
def S = push(0) +
"a" * S * "b" * succ
Here push is a semantic action that pushes an arbitrary term (tree) on the
semantic stack, and succ is another action that increments the number that is
assumed to be on the top of the stack. The syntax form fun
$x_{1},\ldots,x_{n}$ => $e$, where $e$ is an arbitrary expression of this
language, defines an anonymous higher-order function. A stack is just a list
that grows on its left end, and :: is a list constructor (so that
$x\mbox{\tt::}l$ denotes the list obtained by prepending $x$ to the list $l$).
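For readers more comfortable with a general-purpose language, the same machinery can be mimicked with ordinary parser combinators over a (tokens, stack) state. The following Python sketch reimplements the S grammar with its push and succ actions; the names and representation are ours, not the framework's:

```python
def lit(s):
    """Terminal parser: consume one token equal to s.

    A parser maps a state (tokens, stack) to an iterator of states.
    """
    def p(state):
        toks, stack = state
        if toks and toks[0] == s:
            yield (toks[1:], stack)
    return p

def seq(p, q):
    """Juxtaposition, the DSL's `*`: run p, then q on each result."""
    def r(state):
        for mid in p(state):
            yield from q(mid)
    return r

def alt(p, q):
    """Alternation, the DSL's `+`: results of p, then results of q."""
    def r(state):
        yield from p(state)
        yield from q(state)
    return r

def action(f):
    """Semantic action: transform the stack, consuming no input."""
    def p(state):
        toks, stack = state
        yield (toks, f(stack))
    return p

push0 = action(lambda st: [0] + st)             # push(0)
succ = action(lambda st: [st[0] + 1] + st[1:])  # succ

def S(state):
    # def S = push(0) + "a" * S * "b" * succ
    yield from alt(push0, seq(lit("a"), seq(S, seq(lit("b"), succ))))(state)

def parse(tokens):
    """Return the n computed for a full parse of a^n b^n, else None."""
    for rest, stack in S((tokens, [])):
        if not rest:
            return stack[0]
    return None
```

As in the grammar above, a successful full parse of `a^n b^n` leaves the count n on top of the semantic stack.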
The language allows for direct use of slot catalogs inside machine definitions
and scales to catalogs with millions of entries. These catalogs are like slot
value gazetteers, except that they map phrases to unique entity identifiers,
thus enabling entity resolution to be bundled up with semantic parsing. A
catalog entry is therefore a triple of the form $(t_{1}\cdots t_{n},e,p)$
where $t_{1}\cdots t_{n}$ is a phrase consisting of one or more tokens, $e$ is
a globally unique identifier denoting an entity, and $p$ is the conditional
probability of the given phrase denoting the given entity, conditioned on the
catalog. The phrase $t_{1}\cdots t_{n}$ is called an entity alias.
Table 2: Statistics on the entity aliases used in the definition of the pizza grammar. Each entity, like PEPPERS, can have more than one natural language alias, for example peppers or bell peppers.
Slot catalog | Number of unique slot entities | Average number of aliases per entity
---|---|---
TOPPING | 85 | 1.88
SIZE | 8 | 2.25
NUMBER | 15 | 2.27
STYLE | 23 | 2.39
QUANTITY | 2 | 18.5
DRINKTYPE | 22 | 2.77
VOLUME | 11 | 9.18
CONTAINERTYPE | 2 | 8
The pizza domain has only a few small slot catalogs. Some basic statistics for
these are shown in Table 2. As mentioned above, each entity may be associated
with multiple aliases, each with its own conditional probability. For
instance, ricotta and ricotta cheese may be two different aliases for the
TOPPING entity RICOTTA_CHEESE.
The main parsing technique of this framework is a probabilistic top-down
algorithm with memoization that is written as an inference system for an
abstract execution machine. Bottom-up parsing is also implemented to allow for
left recursion, but top-down parsing is typically more efficient. The pizza
grammar is written in this language in about 50 lines of code, plus the
catalogs. Left recursion is not needed, so utterances are parsed with the top-
down algorithm.
The algorithm takes an utterance $u$ as input and produces a ranked list of
triples $(e_{1},d_{1},p_{1})\ldots,(e_{n},d_{n},p_{n})$ as output, where each
$e_{i}$ is an EXR, $d_{i}$ is a formal derivation tree that details the
incremental construction of $e_{i}$, and $p_{i}$ is the derivation’s
probability. The list is ranked by derivation probability, so that the first
EXR, $e_{1}$, provides the most likely interpretation of $u$. Probabilities
are initially set by MLE (Maximum Likelihood Estimation) without a prior. Even
in that simple setting, we have found PCFG derivation probabilities to be
useful, as they result in a bias for shorter derivations, which are simpler
explanations of sorts, essentially enforcing a form of Occam’s razor. A subset
of this grammar was sampled to produce the training utterances.
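The bias toward shorter derivations follows directly from the product form of derivation probabilities: since every rule probability is at most 1, each additional rule application can only lower the score. A minimal check:

```python
import math

def derivation_logprob(rule_probs):
    """Log-probability of a derivation as the sum of log rule
    probabilities (equivalently, the log of their product)."""
    return sum(math.log(p) for p in rule_probs)

# Two derivations using rules of equal probability: the shorter wins.
assert derivation_logprob([0.5, 0.5]) > derivation_logprob([0.5, 0.5, 0.5])
```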
## Appendix B Dataset details
We manually reviewed each paraphrase and kept 1,264 correct paraphrases. Each
utterance was paraphrased by five different annotators. We found that 6% of
the paraphrases were incorrect; most omitted one topping or the relevant
quantity in a suborder. See Table 3 for examples.
Table 3: Subset of utterances obtained from the Mechanical Turk paraphrasing task.
Synthetically generated order | Obtained paraphrase | Valid
---|---|---
| I think I’ll have a small thin crust pie with tuna and sausage. | Yes
| Can I try the small sausage and tuna pie and I would like thin crust please. | Yes
small pie with sausage and tuna and thin crust please | i only want a small pizza and please for toppings i’ll take sausage and tuna and i’d like thin crust. thank you | Yes
| Order a small thin crust pizza with sausage and tuna. | Yes
| I need a small sausage and tuna pizza with a thin crust. | Yes
| I need five large fantas, two pepsis and one coke please. | Yes
| Get me two Pepsis, a Coke, and five large Fantas. | Yes
two pepsis and a coke and five large fantas | i’m here for some soft drinks. i need a couple of pepsis, one coke, and i’ll take a total of five fantas; make them large. that’s all. . | No
| i’d love to have two pepsis and a coke along with five large fantas | Yes
| Can I get two pepsis, five large fantas and a coke.. | Yes
We also manually reviewed each of the unbiased natural language orders from
the free-style collection task, and removed 275 utterances requesting toppings
or drink items too unrealistic to appear on an actual pizza restaurant
menu—see Table 4 for examples. After removing orders without inter-annotator
agreement and orders with questionable semantics (a pie with peppers, ham and
extra ham) or with non-projective semantics (since these cannot be represented
in TOP), we ended up with 441 annotated utterances.
Table 4: Subset of utterances obtained from the unconstrained Mechanical Turk generation task.
Unconstrained human generated order | Valid | Difficulty
---|---|---
Two bacon and olive pizzas. | Yes | Easy
Put an order in for me that includes two medium pizzas with extra sauce but light cheese and include one sausage pizza. | Yes | Interm
I want to order two large pizzas with ham and pepperoni, and with mushrooms on one of them, and six large cokes. | Yes | Hard
I want to order six extra large pizzas with pepperoni, sausage, and mushrooms, and four extra large pizzas with the same toppings plus green peppers and extra cheese. | Yes | Hard
I need one large pizza with every topping plus extra black olives on half of it, and a medium pizza with pepperoni and extra cheese, and six large cokes. | Yes | Hard
I need a large ham and onion pizza and a large order of spaghetti | No | N/A
Order me two large pizzas with green peppers and white onions with two sodas. | No | N/A
I would like to order two sodas and a medium pizza with olives and onions on it. | No | N/A
Can I get a pineapple pizza on a thin crust cut into squares | No | N/A
Five medium pizzas with peppers and lobster chunks. | No | N/A
I would like a medium thin crust pizza with with spicy chorizo, Manchego cheese, and red onion, a bottle of Mondavi merlot, and another bottle of Prosecco. | No | N/A
The released data received no processing except for removing non-ASCII
characters, the sole exception being the ñ in jalapeño. Examples of actual
collected utterances and their extracted semantics can be found in Appendix G,
Table 8.
## Appendix C Model details
#### Tokenizers
For LSTMs and RNNGs, we split on whitespace to get independent tokens. For our
BART-based models we use the GPT-2 Radford et al. (2019) tokenizer prior to
feeding natural language utterances into the encoder. Our Insertion
Transformer models make use of BPE Sennrich et al. (2016) tokenizers with a
vocabulary of $30,000$ to align with the pretrained encoders used in the
network.
#### Data preprocessing
We preprocess the dataset before training as follows: (a) we remove the
leading ORDER constructor from the target output sequences, since it is a
universal top-level constructor and there is nothing to be learned for it; (b)
we downcase the entity tokens to facilitate their handling by the tokenizers.
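Steps (a) and (b) can be sketched as a small Python helper. The function name and the entity-token heuristic (treating all-caps tokens as entities) are our own illustration, not the dataset's actual tooling.

```python
import re

def preprocess_target(exr: str) -> str:
    """Hypothetical sketch of the two preprocessing steps: strip the
    universal top-level (ORDER ...) constructor and downcase entity tokens."""
    s = exr.strip()
    # (a) remove the leading ORDER constructor and its matching final paren
    if s.startswith("(ORDER ") and s.endswith(")"):
        s = s[len("(ORDER "):-1].strip()
    # (b) downcase all-caps tokens such as PIZZAORDER or RICOTTA_CHEESE
    return re.sub(r"\b[A-Z][A-Z_]+\b", lambda m: m.group(0).lower(), s)

print(preprocess_target("(ORDER (PIZZAORDER (NUMBER 2) (SIZE LARGE)))"))
# → (pizzaorder (number 2) (size large))
```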
#### Pipeline model
The data used in the pipeline system is prepared by converting the strings in
TOP format to flattened labels for both the IS and NER model. For example, if
the TOP string is:
(ORDER I’d like (PIZZAORDER (NUMBER two)
(SIZE large) pizzas with (TOPPING ham)))
the IS labeling will be:
Other Other B-PIZZAORDER I-PIZZAORDER
I-PIZZAORDER I-PIZZAORDER I-PIZZAORDER
and the NER labeling will be:
B-NUMBER B-SIZE Other Other B-TOPPING
During evaluation, the output from IS and NER model is converted back to TOP
or EXR, which is then used to compute the EM score.
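The conversion from flat labels back to a TOP-style string can be sketched as follows. This is our own illustration assuming a single intent span per utterance; the dataset's actual converter may differ in details. On the example above it reproduces the TOP string from its IS and NER labelings.

```python
def spans(labels):
    """Group B-/I-/Other labels into (tag, positions) spans.
    An I- label with no matching open span is treated like Other here."""
    result, tag, idx = [], None, []
    for i, lab in enumerate(labels):
        if lab.startswith("B-"):
            if tag:
                result.append((tag, idx))
            tag, idx = lab[2:], [i]
        elif lab.startswith("I-") and tag == lab[2:]:
            idx.append(i)
        else:
            if tag:
                result.append((tag, idx))
                tag, idx = None, []
            result.append((None, [i]))
    if tag:
        result.append((tag, idx))
    return result

def to_top(tokens, is_labels, ner_labels):
    """Hypothetical converter back to a TOP-style string, assuming one
    intent span; ner_labels cover only the intent span's tokens."""
    parts = []
    for tag, idx in spans(is_labels):
        words = [tokens[i] for i in idx]
        if tag is None:
            parts.extend(words)
            continue
        inner = []
        for ntag, nidx in spans(ner_labels):
            nwords = [words[i] for i in nidx]
            if ntag is None:
                inner.extend(nwords)
            else:
                inner.append("(%s %s)" % (ntag, " ".join(nwords)))
        parts.append("(%s %s)" % (tag, " ".join(inner)))
    return "(ORDER %s)" % " ".join(parts)

tokens = ["I'd", "like", "two", "large", "pizzas", "with", "ham"]
is_labels = ["Other", "Other", "B-PIZZAORDER", "I-PIZZAORDER",
             "I-PIZZAORDER", "I-PIZZAORDER", "I-PIZZAORDER"]
ner_labels = ["B-NUMBER", "B-SIZE", "Other", "Other", "B-TOPPING"]
print(to_top(tokens, is_labels, ner_labels))
# → (ORDER I'd like (PIZZAORDER (NUMBER two) (SIZE large) pizzas with (TOPPING ham)))
```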
The models are fine-tuned on the PIZZA dataset using the Adam optimizer with a
Noam scheduler, setting the base learning rate to 0.075. We make use of F1
scores to select the best models for both IS and NER during the validation
phase. We observe that the IS model’s F1 ranges from 76 to 79 (after
experimenting with different seeds) while the NER model’s F1 ranges from 95 to
96. Intent segmentation is therefore the bottleneck in the pipeline system.
This large difference in performance is likely related to the difference in
the lengths of the input sequences for the two models. The NER model receives
much shorter sequences than the IS model, which should make it less error
prone.
There are cases when the IS and NER models can produce solitary I- labels. For
example, for the utterance two large pizzas with ham, the NER model might
output: B-NUMBER B-SIZE Other Other I-TOPPING. Here, the I-TOPPING label is
not preceded by B-TOPPING. During evaluation we apply a post-processing step
to convert such solitary I- labels to Other. Without this step, the model’s EM
is 62.96 and 19.91 on the test data and on the subset of test data where PCFG
makes an error, respectively (with EXR ground truth).
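The post-processing step can be sketched as follows (the function name is ours): an I-X label whose predecessor is neither B-X nor I-X has no span to attach to, so it is rewritten to Other.

```python
def fix_solitary_i(labels):
    """Convert solitary I- labels (no preceding B-/I- of the same tag)
    to Other, as done during evaluation."""
    fixed = []
    for i, lab in enumerate(labels):
        if lab.startswith("I-"):
            prev = fixed[i - 1] if i > 0 else "Other"
            if prev not in ("B-" + lab[2:], "I-" + lab[2:]):
                lab = "Other"
        fixed.append(lab)
    return fixed

print(fix_solitary_i(["B-NUMBER", "B-SIZE", "Other", "Other", "I-TOPPING"]))
# → ['B-NUMBER', 'B-SIZE', 'Other', 'Other', 'Other']
```

A well-formed span such as `B-TOPPING I-TOPPING` passes through unchanged.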
#### Hyperparameter tuning
A total of 5 training and evaluation runs were run for each model. Final
hyperparameter selection was performed on Tesla V100 GPUs by manually tuning
and choosing the highest EM during evaluation on the dev set for all models
except for the pipeline model, which used F1 metric. The best hyperparameter
configuration values are in bold, as follows.
BART searched the following hyperparameter ranges: Batch size: [512, 768,
1024]; pretrained blocks dropout: [0.05, 0.10, 0.20, 0.30]; learning rates:
[0.15, 0.30, 0.5]; learning rate multiplier schedule: [0 to 1.0 + incremental
step every 20k updates, 0 to 0.0175 + incremental step every 20k updates, 0 to
0.0175 + incremental step every 200 updates]. 30 search trials were performed.
Insertion Transformer models searched these ranges: Batch size [128, 256,
512], final fine-tuning learning rate multiplier: [0.002, 0.1, 0.2, 0.7];
learning rate: [0.05, 0.1, 0.15, 0.3]; Dropout: [0.05, 0.1, 0.3]. Best batch
sizes were 128 (decode-from-source) and 256 (decode-from-scratch). 12 search
trials were performed for each decoding strategy model.
RNNG model search included: Beam size: [1, 5, 10]; Optimizer: [SimpleSGD,
Adam]; Dropout: [0, 0.5]; Learning rate decay [0.05, 0.5]. Best learning rate
schedule: initial rate: 0.001, learning rate decay factor: 0.5, decay
interval: 1000. 10 search trials were performed.
Pipeline model search was performed on the Intent Segmentation model; the
obtained hyperparameters values were also used on the NER model. The search
range included: Batch Size: [128, 256, 512] examples; (Base learning rate,
warm up steps): [(0.075, 500), (0.100, 500), (0.125, 500), (0.050, 500),
(0.075, 250), (0.075, 1000)]; k-best viterbi: [1,3,5]; ([steps, LR
multiplier], final LR multiplier): [([(100, 0.0)], 0.20), ([(100, 0.0), (200,
0.1)], 0.20), ([(50, 0.0)], 0.20), ([(50, 0.0)], 0.30)]. 16 search trials were
performed.
The bi-LSTM was used as a simple baseline for the other systems and
hyperparameter search for it was simplified to: (a) enabling decoder attention
[yes, no] and (b) learning rate: [1E-3, 1E-4]. The final configuration was a
1-layer 512-dimensional bidirectional-LSTM with decoder attention, trained
with the Adam optimizer and using a learning rate of 1E-4, batch size of 1,000
words per batch, and norm-clipping of 0.1.
## Appendix D Entity resolution (ER)
Models trained to predict TOP or TOP-Decoupled notation against EXR ground
truth require an additional entity resolution step to compare against the
final EXR. The ER step was performed by a simple rule-based system that maps
relevant phrases to unique entity identifiers using the grammar catalogs. In
addition, the ER step adds a default (NUMBER 1) slot if needed, because the
EXR ground truth includes the slot NUMBER for every order. Note that in a more
complicated domain that might involve millions of entities and high degrees of
ER ambiguity, the seq2seq system would likely include a reranker taking
contextual signals such as popularity counts as additional inputs; as
mentioned in Appendix A, the PCFG is already equipped to handle this scenario.
The following two studies investigate the role ER plays in the performance of
the TOP and TOP-Decoupled models against EXR ground truth in this dataset.
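The rule-based ER step described above can be sketched in a few lines of Python. The catalog contents and helper names are hypothetical; real catalogs are far larger.

```python
# Hypothetical alias catalog mapping surface phrases to entity identifiers.
CATALOG = {
    "ricotta": "RICOTTA_CHEESE",
    "ricotta cheese": "RICOTTA_CHEESE",
    "coke": "COKE",
}

def resolve(phrase, catalog=CATALOG):
    """Map a slot's surface phrase to a unique entity id via the catalog;
    unknown phrases fall back to an uppercased, underscored form."""
    return catalog.get(phrase.lower(), phrase.upper().replace(" ", "_"))

def add_default_number(slots):
    """EXR ground truth carries a NUMBER slot in every suborder; add a
    default (NUMBER 1) when the parse produced none."""
    if not any(name == "NUMBER" for name, _ in slots):
        slots = [("NUMBER", "1")] + slots
    return slots

print(resolve("ricotta"))                       # → RICOTTA_CHEESE
print(add_default_number([("SIZE", "LARGE")]))  # → [('NUMBER', '1'), ('SIZE', 'LARGE')]
```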
#### Results on TOP ground truth
Table 5 shows EM results for models trained on TOP and TOP-Decoupled notation
against TOP and EXR ground truth. Observe that performance against TOP ground
truth is very close to performance against EXR ground truth, indicating that
the errors in TOP and TOP-Decoupled models are not primarily due to the entity
resolution step, as a similar number of errors occur against TOP ground truth.
Interestingly, for the LSTM models, EM against EXR ground truth is actually
slightly better than it is against TOP. One might expect that the ER step can
only add errors, not make corrections. However, there are cases where the TOP
prediction is incorrect but it is salvaged by the ER step, which ultimately
produces a correct EXR form. Here is an example: i want an order of one large
pizza. The NUMBER slot is based on the wrong utterance token in the TOP
prediction, but it resolves to the same correct clause, (NUMBER 1).
TOP Prediction:
(ORDER i want (PIZZAORDER (NUMBER an)
order of one (SIZE large)pizza ))
TOP Ground Truth:
(ORDER i want an order of (PIZZAORDER
(NUMBER one)(SIZE large)pizza ))
TOP Prediction after ER:
(ORDER (PIZZAORDER (NUMBER 1)
(SIZE LARGE)))
EXR Ground Truth:
(ORDER (PIZZAORDER (NUMBER 1)
(SIZE LARGE)))
Table 5: Test set EM against TOP and EXR ground truth, grouped by notation type used for training. Mean and standard error from models trained on 5 different seeds.
Model | EXR Ground Truth | TOP Ground Truth |
---|---|---|---
TOP | | |
Insert-Ptr-Decode-Source | 40.13 ± 0.50 | 40.82 ± 0.51 |
LSTM | 41.44 ± 2.15 | 40.44 ± 2.08 |
Insert-Ptr-Decode-Scratch | 42.05 ± 0.58 | 42.65 ± 0.61 |
RNNG | 50.27 ± 3.52 | 50.95 ± 3.59 |
BART | 62.17 ± 0.31 | 62.45 ± 0.34 |
Pipeline | 64.08 ± 0.32 | 65.22 ± 0.32 |
TOP-Decoupled | | |
LSTM | 42.15 ± 1.52 | 41.18 ± 1.50 |
Insert-Ptr-Decode-Scratch | 47.89 ± 0.21 | 47.89 ± 0.21 |
BART | 71.73 ± 0.13 | 71.79 ± 0.10 |
#### Results with perfect ER
In the PIZZA dataset, there are certain entities which appear only during
test/validation, but not during training. If we were to have a record of all
possible entities and their aliases as they appear in the entire dataset
(including the dev and test sets), then ER from the TOP variants to EXR could
improve. We verified this by adding all entities into our ER pipeline and
running BART models on the TOP variants. Table 6 shows that EM improves by
9.5% and 4.5% on the PCFG error test subset when we use the TOP and TOP-
Decoupled variants respectively. Total performance improvements on the overall
test set are much more modest, showing that even with perfect ER, the
performance of the TOP variants remains significantly lower than the
performance of the model that directly generates EXR.
Table 6: EM against EXR ground truth, with the original grammar entities and all entities.
Model | PCFG Error Test Subset | Test |
---|---|---|---
TOP | | |
BART (original) | 27.56 ± 0.31 | 62.17 ± 0.31 |
BART (all entities) | 30.18± 0.28 | 62.64 ± 0.30 |
TOP-Decoupled | | |
BART (original) | 40.55 ± 0.50 | 71.73 ± 0.13 |
BART (all entities) | 42.54 ± 0.53 | 72.44 ± 0.13 |
## Appendix E Clause permutation experiments
To further investigate the impact of concrete semantic representation choices
on model performance, we analyzed how rearranging the order of sibling nodes
in the linearized EXR affects training. For a given input utterance, such as
two pizzas with no cheese and a bottle of seven up, there is a large number of
equivalent linearized EXRs with the same semantics (this number grows
exponentially with the size of the utterance).
All three entries in Table 7 have identical semantics, but the EXR-natural
order is closely aligned with the order in which the entities of interest
appear in the natural language utterance. When reading the utterance and the
“natural” EXR simultaneously from left to right, we see the same entities
appearing in the same order: a pizza order, information about number, topping,
then a drink order, and information about container and drink type.
Table 7: Different ways of representing the linearized semantic EXR for the same input. Aligning source and target order eases training.
Natural Language | two pizzas with no cheese and a bottle of seven up
---|---
EXR-natural order | (ORDER (PIZZAORDER (NUMBER 2) (NOT (TOPPING CHEESE))) (DRINKORDER (CONTAINERTYPE BOTTLE ) (DRINKTYPE SEVEN_UP)))
EXR-random order | (ORDER (DRINKORDER (DRINKTYPE SEVEN_UP) (CONTAINERTYPE BOTTLE )) (PIZZAORDER (NUMBER 2) (NOT (TOPPING CHEESE))))
EXR-sorted string order | (ORDER (DRINKORDER (CONTAINERTYPE BOTTLE ) (DRINKTYPE SEVEN_UP)) (PIZZAORDER (NOT (TOPPING CHEESE)) (NUMBER 2)))
The EXR-random order corresponds to a random permutation of siblings at each
level of the tree. In the example given in Table 7, the PIZZAORDER and
DRINKORDER siblings are switched, as well as the DRINKTYPE and CONTAINERTYPE
siblings, but not NUMBER and NOT.
Finally, the EXR-sorted string order is obtained by applying a deterministic
ordering function at each level of the tree, which is a simple lexicographic
sort over a list of strings. For example, when given the list [PIZZAORDER,
DRINKORDER], the returned order is [DRINKORDER, PIZZAORDER] since ’D’ precedes
’P’ in the alphabet. Similarly, NUMBER and NOT are reversed since NO precedes
NU in alphabetical order.
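The deterministic sorted string order can be sketched as follows, representing tree nodes as (label, children) tuples; this is our illustration, not the paper's implementation. On the Table 7 example it reproduces the EXR-sorted string order, including the NOT/NUMBER and DRINKORDER/PIZZAORDER swaps.

```python
def linearize(node):
    """Render a (label, children) tree, with string leaves, as an EXR string."""
    if isinstance(node, str):
        return node
    label, children = node
    return "(%s %s)" % (label, " ".join(linearize(c) for c in children))

def sort_siblings(node):
    """Recursively sort each node's children lexicographically by their
    linearized form (the 'sorted string order')."""
    if isinstance(node, str):
        return node
    label, children = node
    return (label, sorted((sort_siblings(c) for c in children), key=linearize))

tree = ("ORDER",
        [("PIZZAORDER", [("NUMBER", ["2"]),
                         ("NOT", [("TOPPING", ["CHEESE"])])]),
         ("DRINKORDER", [("DRINKTYPE", ["SEVEN_UP"]),
                         ("CONTAINERTYPE", ["BOTTLE"])])])
print(linearize(sort_siblings(tree)))
```

The printed result places DRINKORDER before PIZZAORDER, CONTAINERTYPE before DRINKTYPE, and NOT before NUMBER, exactly as in the EXR-sorted string order row of Table 7.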
The sibling order used in the experiments reported in this paper is almost
identical to the natural order: the ordering of entities within each suborder
(pizza or drink) is natural, but the ordering across the higher-level
suborders is reversed. To illustrate, if the natural language specifies a
pizza order and then a drink order, as in the above example, then the
linearized EXR representation will be of the form (ORDER (DRINKORDER ..)
(PIZZAORDER ..)).
We trained our models with 3 different orderings in training EXRs:
* •
the reference setting with the ordering used to report results throughout this
paper as described above;
* •
a randomly permuted sibling setting; and
* •
the string list sorting ordering setting.
To better understand the significance of any discrepancies observed among
these three settings, we ran a sanity experiment where the setting is the same
as in the reference but with a changed training seed. That allowed us to
determine whether changes due to a different ordering are significant relative
to changes as simple as changing the seed.
The dev unordered EM over the course of training can be observed in Figure 5.
We can make the following observations:
Figure 5: Dev performance along training when using different ordering of
clauses in linearized EXR
* •
A clause ordering that is more aligned with the given utterance eases
training, as the blue and orange curves reach better performance much faster
than the red and green curves.
* •
The random permutation (green curve) is not as bad as the deterministic
reordering (red curve). Eventually it catches up to the blue curve, indicating
that the model learned to disregard the noise we artificially injected by
reordering the nodes.
* •
The lexicographic ordering (red curve) leads to faster gains initially but
ends up much lower than the random one. We hypothesize that learning the
artificial but systematic ordering is initially easier for the model than
inferring the natural order pattern, but is detrimental in the long run.
* •
All of the foregoing observations are significant compared to the much smaller
difference brought about by simply changing the training seed (blue and orange
curves).
## Appendix F Training data ablations for BART on EXR
We performed ablation studies with our best BART model by varying the amount
of training data fed to the model. Figure 6 shows our results when we vary the
training data size from 0.5% ($\sim$12k samples) to 100% ($\sim$2 million
samples).
Figure 6: Ablation study on BART by varying the amount of training data
We observed that reducing the amount of training data yields BART models that
perform as well as the BART models trained on the full set. This is due
primarily to the large amount of redundancy in the large training set, and
also to BART’s pretraining. We noticed that the models take longer to converge
when using less training data. For instance, the 0.5% model had to be trained
for 200 epochs, whereas the final 100% model converges within a single epoch.
## Appendix G Data examples
While the full dataset will be made available through an external repository,
the following short list gives a flavor of these utterances:
* •
one pie with ham is an easy synthetically generated order;
* •
can I get one medium size with extra bacon and mushrooms but little cheese and
a small pepsi is a harder synthetically generated order;
* •
today I’ll get a ham and bacon pizza, medium size is an easy human-generated
order; and
* •
two pepperoni pies, one with mushrooms and the other with cheese on half only
is a harder human-generated order.
Table 8 gives additional examples of natural language orders and their EXR
semantics. The examples in that table are the result of the two Mechanical
Turk tasks described in Section 2 and Appendix B.
Table 8: Some utterances obtained from Mechanical Turk tasks.
Natural Language | EXR Linearized Semantics
---|---
I like to have two medium size pizzas | (ORDER (PIZZAORDER (NUMBER 2) (SIZE MEDIUM)))
I want a vegan pizza, small. | (ORDER (PIZZAORDER (NUMBER 1) (SIZE SMALL) (STYLE VEGAN)))
I need to put in an order of two large cheese pizzas with extra cheese | (ORDER (PIZZAORDER (NUMBER 2) (SIZE LARGE) (COMPLEX_TOPPING (QUANTITY EXTRA) (TOPPING CHEESE))))
i’ll try two medium pizzas and tuna with extra cheese but hold on pesto | (ORDER (PIZZAORDER (NUMBER 2) (SIZE MEDIUM) (COMPLEX_TOPPING (QUANTITY EXTRA) (TOPPING CHEESE)) (NOT (TOPPING PESTO)) (TOPPING TUNA)))
Get me a pineapple and bacon pizza without the extra cheese. | (ORDER (PIZZAORDER (NUMBER 1) (TOPPING PINEAPPLE) (TOPPING BACON) (NOT (COMPLEX_TOPPING (QUANTITY EXTRA) (TOPPING CHEESE)))))
Two large pizzas, one with cherry tomatoes and one with onions | (ORDER (PIZZAORDER (NUMBER 1) (SIZE LARGE) (TOPPING CHERRY_TOMATOES)) (PIZZAORDER (NUMBER 1) (SIZE LARGE) (TOPPING ONIONS)))
Can I have a small pizza with salami and red onion with a little basil on the top | (ORDER (PIZZAORDER (NUMBER 1) (SIZE SMALL) (COMPLEX_TOPPING (QUANTITY LIGHT) (TOPPING BASIL)) (TOPPING RED_ONIONS) (TOPPING SALAMI)))
I want to order a medium big meat pizza and two medium cokes. | (ORDER (DRINKORDER (DRINKTYPE COKE) (NUMBER 2) (SIZE MEDIUM)) (PIZZAORDER (NUMBER 1) (SIZE MEDIUM) (STYLE MEAT_LOVER)))
Put in an order for two cans of coke and one large pizza with beef and black olives. | (ORDER (DRINKORDER (CONTAINERTYPE can) (DRINKTYPE COKE) (NUMBER 2)) (PIZZAORDER (NUMBER 1) (SIZE LARGE) (TOPPING BEEF) (TOPPING OLIVES)))
I’d like a small supreme pizza and a two liter coke. | (ORDER (DRINKORDER (DRINKTYPE COKE) (NUMBER 1) (VOLUME 2 LITER)) (PIZZAORDER (NUMBER 1) (SIZE SMALL) (STYLE SUPREME)))
I need twenty pizzas for my party, make half of them pepperoni and the other ten cheese. | (ORDER (PIZZAORDER (NUMBER 10) (TOPPING CHEESE)) (PIZZAORDER (NUMBER 10) (TOPPING PEPPERONI)))
i want a couple of pizzas; make them both medium please. i want mushrooms, pesto but hold the onions. | (ORDER (PIZZAORDER (NUMBER 2) (SIZE MEDIUM) (TOPPING PESTO) (NOT (TOPPING ONIONS)) (TOPPING MUSHROOMS)))
Order me two large cheese pizzas one with pineapple and the other with jalapenos. | (ORDER (PIZZAORDER (NUMBER 1) (SIZE LARGE) (TOPPING CHEESE) (TOPPING JALAPENO_PEPPERS)) (PIZZAORDER (NUMBER 1) (SIZE LARGE) (TOPPING CHEESE) (TOPPING PINEAPPLE)))
As Table 4 illustrates, the obtained orders came with varying degrees of
sophistication and some of the collected responses are quite challenging to
parse correctly. The task also resulted in a number of invalid orders that we
filtered out as described in Appendix B.
## Appendix H Other models
### H.1 Insertion based transformers
In addition to auto-regressive decoding, we explored the non-auto-regressive
generation techniques introduced in Insertion Transformers Stern et al.
(2019), using utterances as the source and TOP representations as the target.
We followed the parallel generation strategy where more than a single token
can be generated at any time step. We explored decoding from scratch
(inserting both semantic labels and utterance tokens) as well as decoding from
source (inserting only semantic tokens around the full source utterance
sequence initialized at start). We used a pointer generator network See et al.
(2017) in conjunction with a pretrained 12-layer encoder and 4-layer decoder
in all our experiments. Because these models are pointer generators, where the
leaf node slots are generated by pointing back to the utterance input, they
cannot be trained on EXR notation. The decode-from-source model requires the
entire utterance sequence to be present in the target, so it also cannot be
trained on TOP-Decoupled notation. The models were fine-tuned using
validation-based early stopping with a learning rate of 0.15.
The poor results observed in Table 9 could be explained by a smaller 4-layer
decoder, a smaller pre-training dataset (Wikipedia only) and lack of beam
search during decoding.
### H.2 RNNGs
We also explored recurrent neural network grammars (RNNGs) and implemented
beam search on top of the framework provided by Dyer et al. (2016). We used a
beam size of $5$ and preserved the rest of the settings as given by Dyer et
al. (2016). Here the source remains the natural language utterance and the
target is a TOP tree produced by a sequence of SHIFT, REDUCE, and NT (non-
terminal) actions. The model is trained to predict the sequence of actions
that assemble the target parse tree. Because this sequence of actions is
applied against the entire source utterance, this RNNG model cannot be trained
on EXR or TOP-Decoupled notations.
As shown in Table 9, the RNNG results are not competitive with seq2seq
approaches, as initially found in Rongali et al. (2020).
Table 9: EM against EXR ground truth, grouped by notation type used for training. Mean and standard error from models trained on 5 different seeds.
Model | Dev (348 utt) | Test (1357 utt) | PCFG Error Test Subset (434 utt)
---|---|---|---
PCFG Parser | 69.54 | 68.02 | 0.0
TOP | | |
Insert-Ptr-Decode-Source | 40.00 ± 0.83 | 40.13 ± 0.50 | 8.85 ± 0.38
Insert-Ptr-Decode-Scratch | 45.29 ± 0.45 | 42.05 ± 0.58 | 10.78 ± 0.30
RNNG | 55.06 ± 3.50 | 50.27 ± 3.52 | 20.83 ± 3.86
TOP-Decoupled | | |
Insert-Ptr-Decode-Scratch | 50.69 ± 0.19 | 47.89 ± 0.21 | 13.78 ± 0.45
# Model-independent test for the cosmic distance duality relation with
Pantheon and eBOSS DR16 quasar sample
Bing Xu, School of Electrical and Electronic Engineering, Anhui Science and Technology University, Bengbu, Anhui 233030, China
Zhenzhen Wang, Department of Physics, Anhui Normal University, Wuhu, Anhui 241000, China
Kaituo Zhang, Department of Physics, Anhui Normal University, Wuhu, Anhui 241000, China <EMAIL_ADDRESS>
Qihong Huang, School of Physics and Electronic Science, Zunyi Normal University, Zunyi 563006, Guizhou, China
Jianjian Zhang, School of Electrical and Electronic Engineering, Anhui Science and Technology University, Bengbu, Anhui 233030, China
###### Abstract
In this paper, we carry out a new model-independent cosmological test for the
cosmic distance duality relation (CDDR) by combining the latest five baryon
acoustic oscillations (BAO) measurements and the Pantheon type Ia supernova
(SNIa) sample. Particularly, the BAO measurement from extended Baryon
Oscillation Spectroscopic Survey (eBOSS) data release (DR) 16 quasar sample at
effective redshift $z=1.48$ is used, and two methods, i.e. a compressed form
of the Pantheon sample and an Artificial Neural Network (ANN) combined with
the SNIa binning method, are applied to overcome the redshift-matching
problem.
Our results suggest that the CDDR is compatible with the observations, and the
high-redshift BAO and SNIa data can effectively strengthen the constraints on
the violation parameters of CDDR with the confidence interval decreasing by
more than 20 percent. In addition, we find that the compressed form of
observational data can provide a more rigorous constraint on the CDDR, and
thus can be generalized to the applications of other actual observational data
with limited sample size in the test for CDDR.
## 1 Introduction
Based on three fundamental hypotheses that the spacetime is described by a
metric theory of gravity, photons always travel along null geodesics and their
number is conserved, Etherington (1993, 2007) proved a famous cosmic distance-
duality relation (CDDR), which connects the luminosity distance (LD) $D_{\rm
L}$ and the angular diameter distance (ADD) $D_{\rm A}$ at the same redshift
$z$ through the following identity
$\frac{D_{\rm{L}}(z)}{D_{\rm{A}}(z)}(1+z)^{-2}=1\,.$ (1)
This relation is independent of the Einstein field equations and the nature of
matter, and has been widely used in astronomical observations and modern
cosmology as a fundamental relation. For example, the CDDR has been used to
test the geometrical shape of galaxy clusters (Holanda et al., 2011), the
temperature profile and the gas mass density profile of galaxy clusters (Cao
et al., 2011, 2016). However, a violation of one of the hypotheses leading to
the CDDR might be possible, which may be considered as a signal of exotic
physics (Bassett & Kunz, 2004a, b). Therefore, it is necessary to test the
reliability of the CDDR accurately before applying it to various astronomical
analyses.
A straightforward approach to test the validity of CDDR is to constrain the
parameterization function
$\eta(z)\equiv{D_{\rm{L}}(z)}/{D_{\rm{A}}(z)}(1+z)^{-2}$ with the LD and ADD
of some objects at the same redshift. Here the function $\eta(z)$ represents
the possible violation of the standard CDDR, and can be parameterized in
distinct forms, such as $\eta(z,\eta_{1})=1+\eta_{1}z$,
$\eta(z,\eta_{2})=1+\eta_{2}z/(1+z)$, and
$\eta(z,\eta_{3})=1+\eta_{3}\mathrm{ln}(1+z)$, where the parameter $\eta_{i}$
can be constrained by observational data. Following this idea, a lot of works
have been devoted to test the validity of CDDR by combining LD data inferred
from the observations of type Ia supernovae (SNIa), HII galaxies, or gamma-ray
burst with ADD data determined from different observations such as the X-ray
plus Sunyaev-Zel’dovich (SZ) effect of galaxy clusters, and the gas mass
fraction measurement in a galaxy cluster (Uzan et al., 2004; De Bernardis et
al., 2006; Holanda et al., 2010, 2011, 2012; Li et al., 2011; Nair et al.,
2011; Meng et al., 2012; Yang et al., 2013; Santos et al., 2015; Hu & Wang,
2018; da Silva et al., 2020; Bora & Desai, 2021), the angular size of ultra-
compact radio sources (Li & Lin, 2018; Liu et al., 2021), and strong
gravitational lensing (Liao et al., 2016; Holanda et al., 2016, 2017; Ruan et
al., 2018; Lima et al., 2021; Qin et al., 2021). These works find no
significant deviation from the CDDR. In addition,
since the baryon acoustic oscillations (BAO) measurement, which is a very
precise experiment and plays an important role in modern cosmology, can
provide more accurate ADD data than the observations described above, it is
also used to test the CDDR and has been proved to be a very powerful tool to
test the relation (Wu et al., 2015). Recently, some works have attempted to
test the CDDR with LD data from different SNIa observations and the ADD data
derived from different BAO measurements. For example, combining the Union2.1
SNIa data (Suzuki et al., 2012) with five BAO ADD data points from the WiggleZ
Dark Energy Survey (Blake et al., 2012), the Sloan Digital Sky Survey (SDSS)
Data Release 7 (DR7) (Xu et al., 2013) and DR11 (Samushia et al., 2014), Wu et
al. (2015) tested the validity of CDDR and found that there was no violation
of CDDR. A similar result was obtained by Lin et al. (2018), in which the
authors tested the validity of CDDR by using the same BAO data points plus the
ADD data from galaxy clusters and the latest SNIa sample, i.e. the Pantheon
sample from the Pan-STARRS1 Medium Deep Survey which is the largest SNIa
sample released to date and consists of 1048 SNIa data covering the redshift
range of $0.01<z<2.3$ (Scolnic et al., 2018). Xu & Huang (2020) updated the
constraints with the Baryon Oscillation Spectroscopic Survey (BOSS) DR12 data
and the Pantheon sample, and they obtained consistent results. However, it
should be noted that although the redshift of the newest Pantheon SNIa sample
is up to $z\sim 2.3$, these tests are still limited to the low redshift range,
i.e. $z<1$, due to the lack of observations at higher redshift. Most recently,
the eBOSS collaboration provided a precise BAO measurement from the final
quasar sample of eBOSS DR16 at effective redshift $z=1.48$ (Neveux et al.,
2020; Alam
et al., 2021). In this paper, we therefore plan to check the validity of CDDR
by combining the newest BAO measurements with the Pantheon sample. Using these
data, the tests of CDDR with the ADD data derived from BAO measurement can
reach high redshift range $z\sim 1.5$.
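For concreteness, the three parameterizations of $\eta(z)$ and the observed $\eta$ built from matched distances can be written as a short sketch; the function names are ours, and the CDDR holds exactly when $\eta_{i}=0$.

```python
import math

def eta(z, eta_i, form="linear"):
    """The three one-parameter violation functions from the text;
    eta_i = 0 recovers the standard CDDR (eta(z) = 1 at every z)."""
    if form == "linear":
        return 1.0 + eta_i * z                 # eta(z) = 1 + eta1 * z
    if form == "fractional":
        return 1.0 + eta_i * z / (1.0 + z)     # eta(z) = 1 + eta2 * z/(1+z)
    if form == "log":
        return 1.0 + eta_i * math.log1p(z)     # eta(z) = 1 + eta3 * ln(1+z)
    raise ValueError("unknown form: %s" % form)

def eta_obs(d_l, d_a, z):
    """Observed eta from a matched luminosity distance D_L and angular
    diameter distance D_A at the same redshift; equals 1 under the CDDR."""
    return d_l / (d_a * (1.0 + z) ** 2)

print(eta(1.48, 0.05, "linear"))
```

An observed pair with $D_{\rm L}=D_{\rm A}(1+z)^{2}$ gives $\eta_{\rm obs}=1$, i.e. no violation.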
Here, it should be pointed out that since any deviation from the CDDR can
contribute to the non-conservation of the photon number (Ellis, 2007),
exploring the CDDR is equivalent to testing the cosmic opacity of the
universe. Thus, the parameter of the CDDR can also be constrained by the
parametrization function of optical depth $\tau(z)$ rather than $\eta(z)$ with
their relation being $\eta(z)=e^{\tau(z)/2}$ (Lima et al., 2011). Many works
have constrained the cosmic opacity using various astronomical observations,
and the results show that there is no
obvious evidence of an opaque universe (More et al., 2009; Nair et al., 2012;
Chen et al., 2012; Li et al., 2013; Holanda et al., 2013; Liao et al., 2013,
2015; Wang et al., 2017; Ma et al., 2019; Qi et al., 2019b; Wei, 2019; Zhou et
al., 2019; Liu et al., 2020; Fu et al., 2020; Geng et al., 2020; Xu et al.,
2021; He et al., 2022). However, it is worth mentioning that, by combining
simulated gravitational waves from the DECi-hertz Interferometer
Gravitational-wave Observatory (DECIGO) and the Einstein Telescope (ET) with
the observations of SNIa, HII galaxies, and the monochromatic X-ray and
ultraviolet (UV) luminosity of quasars, the authors of Liu et al. (2020) and
Geng et al. (2020) have tested the cosmic opacity out to high redshifts and
found that the constraint results are slightly sensitive to the
parameterization of $\tau(z)$.
In this paper, considering that the expressions of $\eta(z)$ have several
advantages such as a manageable one-dimensional phase space and a good
sensitivity to observational data (Holanda et al., 2011), we only use the
parameterizations of $\eta(z)$ as listed in the previous text to check the
validity of CDDR.
A common issue encountered when performing the tests is that
measurements of the LD from SNIa data and the ADD from BAO data are not available at
the same redshifts. In response, several methods have been
proposed in the literature. Holanda et al. (2010) and Li et al. (2011)
derived the LD by using the SNIa datum whose redshift is closest to the
cluster's within the range $\Delta z=|z-z_{\rm SNIa}|<0.005$. Meng et al.
(2012) extended this method by binning all SNIa data available in the range
$\Delta z$. Cardone et al. (2012) applied a local regression technique to the
SNIa data in redshift windows of interest with adjustable bandwidth. These
methods fail at $z=1.48$ because the current SNIa data are very sparse at
redshifts $z>1$. The values of the LD at the redshifts of the BAO
measurements can also be obtained using Gaussian Processes (GPs) (Nair et
al., 2015; Rana et al., 2017; Mukherjee & Mukherjee, 2021). However, when the
number of observed events is insufficient, GPs are not reliable for the
reconstruction of SNIa (Zhou & Li, 2019) and lead to large uncertainties (Lin et
al., 2018). In addition, Ma & Corasaniti (2018) applied a Bayesian statistical
method to calculate the SNIa luminosity distance moduli at the redshifts of
the ADD data from BAO measurements. Although this method can effectively reduce
the uncertainties and accounts for the correlations between data points, it needs to
assume auxiliary cosmological information to obtain the values of the LD and ADD.
In this paper, we use two methods to overcome the redshift-matching problem.
First, we propose a new method: a compressed form of the Pantheon SNIa sample
is constructed by using a piecewise linear function of $\ln(z)$, as proposed
by Betoule et al. (2014), which is shown to remain accurate for cosmological
constraints. With this method, we can derive the LD values at the redshifts of
the BAO data points self-consistently from the binned apparent magnitude
values of the linear function. Most recently, an Artificial Neural Network
(ANN) was used to reconstruct the Hubble diagrams from observed HII galaxy and
radio quasar samples and then test the CDDR (Liu et al., 2021). The main
purpose of an ANN is to construct an approximate function that associates
input data with output data. This machine learning reconstruction method is
data-driven and makes no assumptions about the data, so it is completely
model-independent, and it has shown outstanding performance, in both accuracy
and efficiency, in solving cosmological problems such as analyzing
gravitational waves (Li et al., 2020; George & Huerta, 2018), estimating
cosmological parameters (Fluri et al., 2019; Wang et al., 2020b, 2021), and
studying the evolution of dark energy models (Escamilla-Rivera et al., 2020).
In particular, it has been shown to reconstruct a safer function than GPs when
there are few data points (Wang et al., 2020a). In order to see whether the
conclusions change with a different method and to improve the robustness of
the test, we also apply the ANN method to reconstruct the apparent
magnitude-redshift relation $m_{\rm B}(z)$, which is then used to test the
CDDR in combination with the newest BAO measurements. Furthermore, in our
analysis, the nuisance parameters entering the likelihood function, such
as the absolute B-band magnitude $M_{\rm B}$ and the fiducial value of the sound
horizon $r_{\rm d,f}$, are marginalized analytically with flat priors, so that
the tests do not rely on any auxiliary cosmological information.
This paper is organized as follows: we introduce the data and methodology used
in our analysis in Section 2. In Section 3, we apply the methods to test the
CDDR and present the results. Conclusions and discussions are given in Section
4.
## 2 Data and Methodology
In this work, we aim to use the ADD derived from the newest BAO measurements and
the LD derived from the latest Pantheon sample to test the CDDR by constraining
the function $\eta(z)$ parameterized with $\eta_{i}$. We therefore briefly
introduce the BAO data used in our analysis. The BAO refers to an overdensity or
clustering of baryonic matter at certain length scales due to the oscillations
of acoustic waves which propagated in the early universe. It imprints a
typical scale that provides a standard ruler for length in cosmology,
with which the expansion history of the universe can be explored. The length of this standard
ruler ($\sim$150 Mpc in today's universe) corresponds to the distance that a
sound wave propagating from a point source at the end of inflation would have
traveled before decoupling. Most recently, the eBOSS Collaboration
released their final BAO observations and summarized fourteen measurements of
$D_{\rm V}(z)/r_{\rm d}$, $D_{\rm M}(z)/r_{\rm d}$ and $D_{\rm H}(z)/r_{\rm
d}$, covering the redshift range $0.15\leq z\leq 2.33$ (see Table III in
Alam et al. (2021)). Here, $r_{\rm d}$ is the sound horizon, and $D_{\rm V}(z)$,
$D_{\rm M}(z)$ and $D_{\rm H}(z)$ are the spherically-averaged distance, the
comoving ADD and the Hubble distance, respectively. These measurements are
obtained from the final observations of clustering using galaxies, quasars,
and Ly$\alpha$ forests from the completed SDSS lineage of experiments in
large-scale structure, comprising data from SDSS, SDSS-II, BOSS, and eBOSS,
and thus allow a comprehensive assessment of cosmological
models and parameters. In this paper, given that the largest redshift of the
Pantheon sample is $z<2.3$ (Scolnic et al., 2018) and the ADD is associated with
the comoving ADD through the relation $D_{\rm M}=(1+z)D_{\rm A}$, we will only
use the five measurements of $D_{\rm M}(z)/r_{\rm d}$ from the SDSS-III BOSS
DR12 galaxy sample (Alam et al., 2017), the SDSS-IV eBOSS DR16 LRG sample
(Gil-Marín et al., 2020; Bautista et al., 2021), the SDSS-IV eBOSS DR16 ELG
sample (Tamone et al., 2020; de Mattia et al., 2021) and the SDSS-IV eBOSS
DR16 quasar sample (Neveux et al., 2020; Hou et al., 2021), which are
summarized in Table 1. Here, it is necessary to point out that although the
BAO measurements are widely used in cosmological analyses, the so-called
fitting problem remains a challenge for the BAO peak location as a standard
ruler (Ellis & Stoeger, 1987). In particular, an environmental dependence of
the BAO location has recently been detected by Roukema et al. (2015, 2016).
Moreover, Ding et al. (2015) and Zheng et al. (2016) pointed out a noticeable
systematic difference between $H(z)$ measurements based on BAO and those
obtained with differential aging techniques. Since these problems are related
to the calibration of $r_{\rm d}$, we introduce a fiducial
value of the sound horizon $r_{\rm d,f}$ as a nuisance parameter to calculate the
values of the ADD from the BAO measurements and marginalize over its influence with
a flat prior in the analysis.
Table 1: Summary of the dimensionless $D_{\rm M}(z)/r_{\rm d}$ measurements from different BAO samples. $z_{\rm eff}$ | $\mathrm{value}$ | $\mathrm{Survey}$ | $\mathrm{Reference}$
---|---|---|---
$0.38$ | $10.23\pm 0.17$ | $\mathrm{BOSS\,Galaxy}$ | Alam et al. (2017)
$0.51$ | $13.36\pm 0.21$ | $\mathrm{BOSS\,Galaxy}$ | Alam et al. (2017)
$0.70$ | $17.86\pm 0.33$ | $\mathrm{eBOSS\,LRG}$ | Gil-Marín et al. (2020)
$0.85$ | $19.5\pm 1.0$ | $\mathrm{eBOSS\,ELG}$ | Tamone et al. (2020)
$1.48$ | $30.69\pm 0.80$ | $\mathrm{eBOSS\,Quasar}$ | Neveux et al. (2020)
Now we turn our attention to the Pantheon compilation released by the Pan-
STARRS1 Medium Deep Survey (Scolnic et al., 2018), which consists of 1048 SNIa
covering the redshift range $0.01<z<2.3$. The observed distance
modulus of each SNIa in this compilation is given by
$\mu_{\rm obs}(z)=m^{*}_{\rm B}-(M_{\rm B}-\alpha X_{1}+\beta\mathcal{C})\,.$
(2)
Here, $m^{*}_{\rm B}$ is the observed peak magnitude in the rest-frame B band,
$X_{1}$ is the time stretching of the light-curve, $\mathcal{C}$ is the SNIa
color at maximum brightness, $M_{\rm B}$ is the absolute magnitude, and $\alpha$
and $\beta$ are two nuisance parameters, which should be fitted simultaneously
with the cosmological parameters. In order to avoid the dependence of $\alpha$
and $\beta$ on the cosmological model, Kessler & Scolnic (2017) proposed a new
method called BEAMS with Bias Corrections (BBC) to calibrate the SNIa, and the
corrected apparent magnitude $m^{\ast}_{\rm B,corr}=m^{\ast}_{\rm B}+\alpha
X_{1}-\beta\mathcal{C}+\Delta_{\rm B}$ for all the SNIa is reported in Scolnic et
al. (2018), with $\Delta_{\rm B}$ being the correction term. The observed
distance modulus is then rewritten as
$\mu_{\rm obs}=m^{*}_{\rm B,corr}-M_{\rm B}\,.$ (3)
To test the validity of the CDDR with the BAO measurements and the Pantheon SNIa
sample, we must obtain the values of the LD from the SNIa data at the redshifts of
the BAO data. For this purpose, we use two methods to derive the apparent
magnitude values at the redshifts of the BAO measurements, in order to carry
out the CDDR test and obtain convincing results. In the first method, we provide the
cosmological information of the Pantheon SNIa sample in a compressed form with
the approach proposed by Betoule et al. (2014). Here, instead of binning the
distance modulus as in Betoule et al. (2014), the corrected apparent
magnitude is approximated by a piecewise linear function of $\ln(z)$,
defined on each segment $z_{b}\leq z\leq z_{b+1}$ as:
$\overline{m}_{\rm B}(z)=(1-\alpha)m_{{\rm B},b}+\alpha m_{{\rm B},b+1}\,.$
(4)
Here $\alpha=\mathrm{ln}(z/z_{b})/\mathrm{ln}(z_{b+1}/z_{b})$ and $m_{{\rm B},b}$
is the apparent magnitude at $z_{b}$. Given the smallest and
largest redshifts of the Pantheon sample, we use 36 ln-spaced control points
$z_{b}$ in the redshift range $0.01<z<2.3$. Then, fitting this interpolation
function to the Pantheon data, we obtain the compressed form by
minimizing the $\chi^{2}$ function,
$\chi^{2}=\left[\mathbf{m}^{*}_{\rm B}-\mathbf{\overline{m}}_{\rm
B}\right]^{\rm T}\cdot\mathbf{Cov}^{-1}\cdot\left[\mathbf{m}^{*}_{\rm
B}-\mathbf{\overline{m}}_{\rm B}\right].$ (5)
Here, $\mathbf{Cov}=\mathbf{D}_{\rm stat}+\mathbf{C}_{\rm sys}$ is the
covariance matrix, where the statistical matrix $\mathbf{D}_{\rm stat}$ has
only diagonal elements and $\mathbf{C}_{\rm sys}$ is the systematic
covariance.
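As an illustration, the interpolation of Equation 4 and the fit of Equation 5 can be sketched as follows; this is a minimal sketch assuming NumPy, and the node magnitudes and covariance used in the test are placeholders, not the actual Pantheon fit:

```python
import numpy as np

def interp_mB(z, z_nodes, mB_nodes):
    """Piecewise-linear interpolation in ln(z) (Equation 4):
    m_B(z) = (1 - a) m_{B,b} + a m_{B,b+1}, with
    a = ln(z / z_b) / ln(z_{b+1} / z_b) on the segment z_b <= z <= z_{b+1}."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    # index b of the segment containing each z (clipped at the end points)
    b = np.clip(np.searchsorted(z_nodes, z) - 1, 0, len(z_nodes) - 2)
    a = np.log(z / z_nodes[b]) / np.log(z_nodes[b + 1] / z_nodes[b])
    return (1.0 - a) * mB_nodes[b] + a * mB_nodes[b + 1]

def chi2(mB_nodes, z_nodes, z_data, m_data, cov):
    """Chi-square of Equation 5 for candidate node magnitudes mB_nodes."""
    r = m_data - interp_mB(z_data, z_nodes, mB_nodes)
    return r @ np.linalg.solve(cov, r)

# 36 ln-spaced control points in 0.01 <= z <= 2.3, as in the text
z_nodes = np.geomspace(0.01, 2.3, 36)
```

In the actual analysis the minimization runs over the full Pantheon covariance with MCMC; `np.linalg.solve` here simply stands in for $\mathbf{Cov}^{-1}$.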
Figure 1: Left: The exact measurements of the apparent magnitude in the Pantheon
sample (blue) and its compressed form (red). Right: Comparison of the
cosmological constraints obtained from the exact Pantheon sample likelihood
(filled blue contour) with those from the compressed version (dashed red contour).
In order to constrain the 36 parameters $m_{\rm B,b}$, we have
modified the publicly available Markov Chain Monte Carlo (MCMC) package
CosmoMC (Lewis & Bridle, 2002) to carry out the calculation. We list the binned
apparent magnitudes in Table 2, with their covariance matrix given in Table A1
in the Appendix. For a visual comparison, we also plot the exact
measurements of the apparent magnitude in the Pantheon sample and its compressed form
in the left panel of Figure 1. One can see that the compressed form of the Pantheon
sample provides a good approximation of the relationship between
apparent magnitude and redshift revealed by the full exact measurements.
Furthermore, as an illustration, a comparison of the cosmological constraints
obtained from the approximate and full versions of the Pantheon likelihood for
the $w$CDM model is shown in the right panel of Figure 1. It is clear that the
differences between the constraints from the approximate and full versions
are very small, which means that the compressed form remains accurate for
cosmological constraints.
Using the compressed form listed in Table 2 and the interpolation function
given in Equation 4, we can derive the values of $\overline{m}_{\rm B}$ from
the compressed Pantheon sample at the redshifts of the BAO measurements.
Instead of interpolating directly between the two nearest observations, the
values of $\overline{m}_{\rm B}$ obtained in our analysis take into account
the influence of all observations in the two adjacent redshift intervals around a
chosen redshift. This effectively mitigates the negative impacts of a
small sample size, such as large uncertainties. We refer to this method as ‘I’
and summarize the interpolated values of $\overline{m}_{\rm B}$ at the
redshifts of the BAO data in Table 3.
Table 2: Binned apparent magnitude fitted to the Pantheon sample. $z_{b}$ | $m_{{\rm B},b}$ | $z_{b}$ | $m_{{\rm B},b}$ | $z_{b}$ | $m_{{\rm B},b}$ | $z_{b}$ | $m_{{\rm B},b}$
---|---|---|---|---|---|---|---
0.010 | $13.909\pm 0.143$ | $0.041$ | $16.862\pm 0.046$ | $0.164$ | $20.078\pm 0.026$ | $0.664$ | $23.633\pm 0.042$
0.012 | $14.131\pm 0.132$ | $0.047$ | $17.234\pm 0.050$ | $0.191$ | $20.530\pm 0.022$ | $0.775$ | $24.082\pm 0.037$
0.014 | $14.604\pm 0.088$ | $0.055$ | $17.536\pm 0.043$ | $0.224$ | $20.882\pm 0.022$ | $0.905$ | $24.503\pm 0.041$
0.016 | $14.751\pm 0.059$ | $0.065$ | $17.965\pm 0.049$ | $0.261$ | $21.231\pm 0.020$ | $1.058$ | $24.893\pm 0.073$
0.019 | $15.209\pm 0.077$ | $0.075$ | $18.294\pm 0.050$ | $0.305$ | $21.663\pm 0.021$ | $1.235$ | $25.433\pm 0.117$
0.022 | $15.482\pm 0.053$ | $0.088$ | $18.695\pm 0.041$ | $0.356$ | $22.042\pm 0.022$ | $1.443$ | $25.572\pm 0.152$
0.025 | $15.839\pm 0.041$ | $0.103$ | $19.094\pm 0.032$ | $0.416$ | $22.469\pm 0.028$ | $1.686$ | $26.248\pm 0.222$
0.030 | $16.204\pm 0.041$ | $0.120$ | $19.386\pm 0.026$ | $0.486$ | $22.836\pm 0.030$ | $1.969$ | $26.125\pm 0.295$
0.035 | $16.550\pm 0.036$ | $0.140$ | $19.825\pm 0.025$ | $0.568$ | $23.272\pm 0.031$ | $2.300$ | $26.967\pm 0.294$
Table 3: The values of $\overline{m}_{\rm B}$ at the redshifts of the BAO measurements obtained from the two methods. ${\rm method}$ | $0.38$ | $0.51$ | $0.70$ | $0.85$ | $1.48$
---|---|---|---|---|---
${\rm I}$ | $22.218\pm 0.024$ | $22.969\pm 0.030$ | $22.787\pm 0.040$ | $24.331\pm 0.039$ | $25.682\pm 0.164$
${\rm II}$ | $22.236\pm 0.045$ | $22.990\pm 0.044$ | $23.824\pm 0.061$ | $24.317\pm 0.071$ | $25.785\pm 0.180$
In order to see whether the conclusions change with a different method and to
improve the robustness of our conclusions, we then use an Artificial Neural
Network (ANN) to reconstruct the $m_{\rm B}(z)$ function from the observational
data and obtain the values of $m_{\rm B}$ from the reconstructed function,
thereby overcoming the redshift-matching problem at high redshifts. In general
terms, an ANN is a deep learning algorithm consisting of three types of layers:
input, hidden, and output. The input layer has $n$ nodes, each of which takes an
independent variable, followed by the $m$ linked hidden layers and the output
layer with activation-function nodes in the basic architecture (Schmidhuber,
2015). Using Adam optimization (Kingma & Ba, 2014), the ANN estimates the error
gradient from observations in the training dataset, and then updates the model
weights and bias estimates during the back-propagation process to iterate toward an
optimal solution. The process can be conveniently described in a vectorized
way (Wang et al., 2020a),
$\bm{z}_{i+1}=\bm{b}_{i+1}+\bm{x}_{i}\bm{W}_{i+1}\,,$ (6)
$\bm{x}_{i+1}=f(\bm{z}_{i+1})\,,$ (7)
where $\bm{x}_{i}$ is the input vector at the $i$th layer, $\bm{b}_{i+1}$ and
$\bm{W}_{i+1}$ are the offset vector and linear weight matrix with learnable
parameter elements, $\bm{z}_{i+1}$ is the output vector after the linear
transformation of $\bm{x}_{i}$, and $f$ is the Exponential Linear Unit (ELU)
activation function (Clevert et al., 2015), which has the form
$\displaystyle f(x)=\begin{cases}x&x>0\\\ \alpha(\exp(x)-1)&x\leq
0\end{cases}\,,$ (8)
where $\alpha$ denotes the hyper-parameter that controls the value to which the
ELU saturates for negative net inputs; it is set to 1 (Wang et al., 2020a).
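The forward pass of Equations 6-8 amounts to alternating affine maps and ELU activations. A minimal NumPy sketch follows; the layer sizes and random untrained weights are purely illustrative, not those used by ReFANN:

```python
import numpy as np

def elu(z, alpha=1.0):
    """ELU activation of Equation 8, with saturation hyper-parameter alpha = 1."""
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

def forward(x, weights, biases):
    """Forward pass through the network (Equations 6-7):
    z_{i+1} = b_{i+1} + x_i W_{i+1};  x_{i+1} = f(z_{i+1}).
    The final layer is left linear, as is usual for regression outputs."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = b + x @ W                                   # Equation 6
        x = elu(z) if i < len(weights) - 1 else z       # Equation 7
    return x

rng = np.random.default_rng(0)
# toy 1 -> 32 -> 1 network mapping ln z to m_B (untrained weights, shapes only)
Ws = [0.1 * rng.standard_normal((1, 32)), 0.1 * rng.standard_normal((32, 1))]
bs = [np.zeros(32), np.zeros(1)]
m_pred = forward(np.array([[np.log(0.5)]]), Ws, bs)
```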
The key of an ANN is to find the function $f_{W,\mathbf{b}}$ that makes its
output $f_{W,\mathbf{b}}(\mathbf{x})$ as close to the target value
$\mathbf{y}$ as possible. To achieve this, the difference between the
predicted value $f_{W,\mathbf{b}}(\mathbf{x})$ of the current network and the
target value $\mathbf{y}$, quantified by a loss function $\mathcal{L}$,
should be minimized. During training and evaluation, the weight
matrix of each layer is updated constantly to minimize $\mathcal{L}$.
Depending on the gradient descent method used, the ANN reduces the loss by
repeatedly stepping the parameters in the direction opposite to the current
gradient. Formally, in a vectorized way (LeCun et al., 2012),
$\displaystyle\frac{\partial\mathcal{L}}{\partial z_{i+1}}$
$\displaystyle=f^{\prime}(z_{i+1})\frac{\partial\mathcal{L}}{\partial
x_{i+1}}\,,$ $\displaystyle\frac{\partial\mathcal{L}}{\partial W_{i+1}}$
$\displaystyle=x_{i}^{T}\frac{\partial\mathcal{L}}{\partial z_{i+1}}\,,$
$\displaystyle\frac{\partial\mathcal{L}}{\partial x_{i}}$
$\displaystyle=W_{i+1}^{T}\frac{\partial\mathcal{L}}{\partial z_{i+1}}\,,$
$\displaystyle\frac{\partial\mathcal{L}}{\partial{b}_{i+1}}$
$\displaystyle=\left(\frac{\partial\mathcal{L}}{\partial z_{i+1}}\right)\,,$
(9)
where the operator $\partial$ stands for partial derivatives, and $f^{\prime}$
is the derivative of the nonlinear function $f$.
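A single-layer NumPy sketch of the gradients in Equation 9 (note that the factor $f^{\prime}(z_{i+1})$ multiplies elementwise; batch samples are rows of $x$):

```python
import numpy as np

def elu(z, alpha=1.0):
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

def elu_prime(z, alpha=1.0):
    """Derivative f'(z) of the ELU activation."""
    return np.where(z > 0, 1.0, alpha * np.exp(z))

def backprop_layer(dL_dx_next, z_next, x_in, W_next):
    """One backward step, term by term as in Equation 9."""
    dL_dz = elu_prime(z_next) * dL_dx_next   # dL/dz_{i+1} = f'(z_{i+1}) dL/dx_{i+1}
    dL_dW = x_in.T @ dL_dz                   # dL/dW_{i+1} = x_i^T dL/dz_{i+1}
    dL_dx = dL_dz @ W_next.T                 # dL/dx_i    = dL/dz_{i+1} W_{i+1}^T
    dL_db = dL_dz.sum(axis=0)                # dL/db_{i+1}
    return dL_dW, dL_db, dL_dx
```

In practice these gradients feed the Adam update rather than plain gradient descent, and the returned `dL_dx` is propagated to the next layer down.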
Using the publicly released code Reconstruct Functions with ANN
(ReFANN, https://github.com/Guo-Jian-Wang/refann; Wang et al., 2020a),
which implements the ANN method described above, we reconstruct the apparent
magnitude-redshift relation as a function of the logarithmic redshift
$\ln z$ and show the function with the estimated $1\sigma$ confidence
region in Figure 2. From Figure 2, it is easy to see that the uncertainties of
the function reconstructed by the ANN are almost equal to those of the
observations, and the $1\sigma$ confidence region reconstructed by the ANN can be
considered as the average level of the observational error. We refer the reader to
Wang et al. (2020a) for further details on this issue. Therefore, it is
difficult to achieve a robust CDDR test by using the $m_{\rm B}$ values
obtained directly from the reconstructed function. Given that the sample size
of Pantheon at redshift $z<1$ is sufficient to apply the binning method
(Meng et al., 2012), we bin the actual observational sample by taking $\lvert
z_{\rm SNIa}-z_{\rm BAO}\rvert<0.005$ to obtain the values of $m_{\rm B}$. Due
to the limited Pantheon sample size around $z=1.48$ and the
consequent failure of the binning method there, the value $m_{\rm B}=25.785\pm 0.180$
derived from the reconstructed function is used at that redshift. This method is
denoted as ‘II’. We summarize the values obtained by combining the binned SNIa
and the ANN in Table 3.
Figure 2: The reconstructed function $m_{\rm B}(z)$ with corresponding
$1\sigma$ errors obtained using the ANN (black line), and the exact measurements of
the apparent magnitude in the Pantheon sample (red).
Using the measurements of $D_{\rm M}/r_{\rm d}$ given in Table 1 and of
$\overline{m}_{\rm B}$ in Table 3, we can obtain the best-fitting CDDR
parameters $\eta_{i}$ and their confidence regions by minimizing the following
$\chi^{2}$ function,
$\chi^{2}=\sum^{5}_{i=1}\frac{{\Delta\mu_{i}}^{2}}{s^{2}_{i}}\,,$ (10)
where $\Delta\mu_{i}=\Delta_{i}-\mathcal{M}=5\mathrm{log_{10}}[D_{\rm
M}/r_{\rm d}(1+z)\eta(z)]-\overline{m}_{\rm B}-\mathcal{M}$ with
$\mathcal{M}=M_{\rm B}-5\mathrm{log_{10}}r_{\rm d,f}-25$,
$s^{2}_{i}=\sigma^{2}_{\mu_{i,{\rm BAO}}}+\sigma^{2}_{\mu_{i,{\rm SN}}}$ with
$\sigma_{\mu_{\rm BAO}}=5\sigma_{D_{\rm M}/r_{\rm d}}/\left(D_{\rm M}/r_{\rm
d}\mathrm{ln}10\right)$, and $\sigma_{\mu_{\rm SN}}=\sigma_{\overline{m}_{\rm
B}}$. In our analysis, we marginalize the likelihood function analytically
over $\mathcal{M}$, the combination of $M_{\rm B}$ and $r_{\rm d,f}$, following the
method proposed in Conley et al. (2011) and assuming a flat prior on
$\mathcal{M}$. The marginalized $\chi^{2}$ is written as
$\chi^{2}_{\rm marg}=a-\frac{b^{2}}{f}+\mathrm{ln}\frac{f}{2\pi}\,,$ (11)
where $a=\sum\limits_{i=1}^{5}\Delta^{2}_{i}/s^{2}_{i}$,
$b=\sum\limits_{i=1}^{5}\Delta_{i}/s^{2}_{i}$ and
$f=\sum\limits_{i=1}^{5}1/s^{2}_{i}$. Here, it should be pointed out that
although the values of the binned apparent magnitude are correlated, it is
difficult to obtain the correlations between the values of $\overline{m}_{\rm B}$
used in the tests. Given that the value of $\sigma_{\mu_{\rm BAO}}$ used in
Equation 10 is greater than that of $\sigma_{\overline{m}_{\rm B}}$ at the
same redshift, and that the values of $D_{\rm M}/r_{\rm d}$ are uncorrelated
except for the two data points at low redshift, we ignore the
correlations between data points in the analysis.
## 3 Results
Table 4: Summary of the 68% limits of $\eta_{1}$, $\eta_{2}$ and $\eta_{3}$ from the low-redshift data points (Sub) and all data points (All) using the two methods. $\mathrm{method\,I}$ | $\eta_{1}$ | $\eta_{2}$ | $\eta_{3}$
---|---|---|---
Sub | $-0.052^{+0.085}_{-0.077}$ | $-0.137^{+0.203}_{-0.177}$ | $-0.085^{+0.132}_{-0.118}$
All | $-0.064^{+0.057}_{-0.052}$ | $-0.181^{+0.160}_{-0.141}$ | $-0.110^{+0.097}_{-0.088}$
$\mathrm{method\,II}$ | $\eta_{1}$ | $\eta_{2}$ | $\eta_{3}$
Sub | $-0.037^{+0.110}_{-0.097}$ | $-0.101^{+0.269}_{-0.225}$ | $-0.061^{+0.173}_{-0.149}$
All | $-0.039^{+0.070}_{-0.062}$ | $-0.119^{+0.207}_{-0.176}$ | $-0.070^{+0.122}_{-0.107}$
Since low-redshift $(z<1)$ SNIa plus BAO data have already been used to test
the validity of the CDDR in the literature (Wu et al., 2015; Lin et al., 2018; Xu
& Huang, 2020), for comparison we perform the constraints using the first
four data points (Sub) and all five data points (All) listed in Table 3,
respectively, in order to assess the constraining power of the high-redshift BAO
measurement on the CDDR.
Figure 3: The likelihood distributions of $\eta_{1}$, $\eta_{2}$ and
$\eta_{3}$ from the low-redshift data points (Sub) and all data
points (All) using method I.
Figure 4: The likelihood distributions of $\eta_{1}$, $\eta_{2}$ and
$\eta_{3}$ from the low-redshift data points (Sub) and all data
points (All) using method II.
We first focus on the results obtained using method I. The marginalized
likelihood distributions of the CDDR parameters $\eta_{i}$ are shown
in Figure 3, with the corresponding 68% limits summarized in Table 4. From
Figure 3 and Table 4, one can see that when only the low-redshift data are used, the
CDDR is valid at the $1\sigma$ CL for all three parameterizations, with
$\eta_{1}=-0.052^{+0.085}_{-0.077}$, $\eta_{2}=-0.137^{+0.203}_{-0.177}$ and
$\eta_{3}=-0.085^{+0.132}_{-0.118}$. However, once the data point at redshift
$z=1.48$ is included, different results are obtained:
$\eta_{1}=-0.064^{+0.057}_{-0.052}$, $\eta_{2}=-0.181^{+0.160}_{-0.141}$ and
$\eta_{3}=-0.110^{+0.097}_{-0.088}$, respectively, which means that the CDDR
is valid at the $2\sigma$ CL for all three parameterizations. This suggests that
the tests of the CDDR are not sensitive to the parametrization of $\eta(z)$, and
that there is no obvious evidence for a violation of the CDDR. In addition,
comparing with the results obtained from the low-redshift data
($\Delta\eta_{1}=0.162$, $\Delta\eta_{2}=0.380$, $\Delta\eta_{3}=0.250$),
adding the high-redshift data significantly improves the constraints on $\eta_{i}$ to
$\Delta\eta_{1}=0.109$, $\Delta\eta_{2}=0.301$ and $\Delta\eta_{3}=0.185$,
reducing the confidence intervals by 33%, 21% and 26% for the three
parameterizations, respectively.
We then turn to the results in Figure 4 and Table 4 obtained using method
II. Similar to the results from method I, the CDDR is well consistent
with the SNIa and BAO observations. From the low-redshift data,
$\eta_{1}=-0.037^{+0.110}_{-0.097}$, $\eta_{2}=-0.101^{+0.269}_{-0.225}$ and
$\eta_{3}=-0.061^{+0.173}_{-0.149}$, while $\eta_{1}=-0.039^{+0.070}_{-0.062}$,
$\eta_{2}=-0.119^{+0.207}_{-0.176}$ and $\eta_{3}=-0.070^{+0.122}_{-0.107}$ from
all the data. In other words, the high-redshift data improve the constraints on
the violation parameters from $\Delta\eta_{1}=0.207$, $\Delta\eta_{2}=0.494$,
$\Delta\eta_{3}=0.322$ to $\Delta\eta_{1}=0.132$, $\Delta\eta_{2}=0.383$,
$\Delta\eta_{3}=0.229$, with the precision increasing by 36%, 22% and 29%
for the first, second and third parameterized forms, respectively. In addition,
each $1\sigma$ error is noticeably larger than the corresponding one from
method I. This reflects the advantage of method I: more SNIa data are used
to derive more precise values of $m_{\rm B}$ at the BAO redshifts, so the
observations place a more rigorous constraint on the CDDR.
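As a quick consistency check, the quoted percentage reductions follow directly from the 68% limits in Table 4 (total interval width = upper plus lower error):

```python
import numpy as np

# total 68% interval widths (upper + lower error) from Table 4
sub_I  = np.array([0.085 + 0.077, 0.203 + 0.177, 0.132 + 0.118])  # method I, Sub
all_I  = np.array([0.057 + 0.052, 0.160 + 0.141, 0.097 + 0.088])  # method I, All
sub_II = np.array([0.110 + 0.097, 0.269 + 0.225, 0.173 + 0.149])  # method II, Sub
all_II = np.array([0.070 + 0.062, 0.207 + 0.176, 0.122 + 0.107])  # method II, All

# percentage shrinkage of the intervals when the z = 1.48 point is added
red_I  = 100.0 * (1.0 - all_I / sub_I)
red_II = 100.0 * (1.0 - all_II / sub_II)
```

Rounding `red_I` and `red_II` to whole percentages reproduces the 33%, 21%, 26% and 36%, 22%, 29% reductions quoted in the text.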
In order to highlight the constraining power of the newest BAO measurements,
it is necessary to compare our results with previous constraints on
$\eta_{i}$ from different data sets of SNIa and BAO. Combining five BAO data
points covering the redshift range $0.44\leq z\leq 0.57$ with the binned SNIa data
in the range $\Delta z=|z_{\rm BAO}-z_{\rm SNIa}|<0.005$ from the Union2.1 sample,
Wu et al. (2015) tested the CDDR with the first two parameterizations used in this
paper. They found that when the dimensionless Hubble constant $h$ is
marginalized with a flat prior, the constraints on $\eta_{i}$ are very weak,
with $\eta_{1}=-0.174^{+0.253}_{-0.199}$ and
$\eta_{2}=-0.409^{+0.529}_{-0.381}$. Using the latest Pantheon sample and the BOSS
DR12 BAO measurements covering the redshift range $0.31\leq z\leq 0.72$, Xu & Huang
(2020) updated the constraints and found $\eta_{1}=-0.07\pm 0.12$,
$\eta_{2}=-0.20\pm 0.27$ and $\eta_{3}=-0.12\pm 0.18$, which are more
stringent. Furthermore, Lin et al. (2018) tested the CDDR by
combining the ADD data from galaxy clusters and BAO measurements with the
Pantheon sample. In their work, besides the five BAO data points used in Wu et al.
(2015), the authors added one BAO data point at redshift $z=2.34$. However,
they found that the results are almost unaffected when the high-redshift BAO
datum is added, because the $\mu$-$z$ function reconstructed with
Gaussian processes has large uncertainties at high redshift; the best-
fitting values are $\eta_{1}=-0.04\pm 0.12$ and $\eta_{2}=-0.05\pm 0.22$.
Through these comparisons, one can conclude that the newest BAO data, especially
the eBOSS DR16 datum at effective redshift $z=1.48$, can effectively
strengthen the constraints on the violation parameters of the CDDR.
Finally, it is worth comparing our results with tests that have already
been carried out using other high-redshift astrophysical probes of the
ADD. In particular, many efforts have been made to perform robust
tests of the CDDR by combining the ADDs derived from ultra-compact structures in
radio quasars (Cao et al., 2017) with the LDs obtained from the Pantheon SNIa
sample (He et al., 2022), from the relation between the UV and X-ray luminosities
of quasars (Zheng et al., 2020), from observations of HII galaxies (Liu et al.,
2021), and from simulated gravitational wave data (Qi et al., 2019a), respectively.
In these works, the authors tested the CDDR in the redshift range $z>2$ and found
that the CDDR is in good agreement with the current observational
data. More notably, their results showed that these combinations of
observational data can place robust constraints on the violation parameter
at the precision of $10^{-2}$ or better, whereas the constraints
obtained in our work from current SNIa and BAO data do not reach such accuracy.
However, given that those works all use simulated data to solve the redshift-
matching problem, while we directly use five data points derived from
actual observational data, we can expect a better validity check of the CDDR
as more high-precision, high-redshift observations of BAO and SNIa become available.
## 4 Conclusion and Discussions
The CDDR plays a fundamental role in astronomical observations and modern
cosmology, but it may be violated if one of the assumptions underlying this
relation does not hold. In this paper, we have proposed a new model-independent
test of the CDDR with the Pantheon SNIa sample and the newest BAO
measurements, including the eBOSS DR16 quasar sample at effective redshift
$z=1.48$. In our analysis, three parameterized forms,
$\eta\left(z,\eta_{1}\right)=1+\eta_{1}z$,
$\eta\left(z,\eta_{2}\right)=1+\eta_{2}z/(1+z)$, and
$\eta\left(z,\eta_{3}\right)=1+\eta_{3}{\rm ln}(1+z)$, are used to describe a
possible violation of the CDDR. In particular, two methods are used to derive the
values of $m_{\rm B}$ at the redshifts of the BAO measurements so as to overcome the
redshift-matching problem. We first provide a compressed form of the Pantheon
SNIa sample by using a piecewise linear function of $\ln(z)$, which
is shown to remain accurate for cosmological constraints, to derive
the values of $m_{\rm B}$ at the redshifts of the BAO measurements. We then obtain
the values by using the binned SNIa at redshifts $z<1$ and the ANN
method at $z=1.48$.
The results show that the tests of the CDDR are not sensitive to the
parametrization of $\eta(z)$, and that there is no obvious violation of the CDDR.
Moreover, the high-redshift BAO and SNIa data can
effectively strengthen the constraints on the CDDR parameters: the
confidence intervals of the violation parameter in the three parameterizations
are decreased by 33%, 21% and 26% in the first method, and by 36%, 22% and 29%
in the second method, respectively. Furthermore,
by comparing the results obtained from the two methods, we find that with
the compressed form of the Pantheon sample we can use more actual SNIa
data to derive more precise values of $m_{\rm B}$ and thus place more rigorous
constraints on the violation parameter.
As a final remark, due to the lack of a high-redshift SNIa sample, it is hard
to push the test of the CDDR with BAO and SNIa data to higher redshifts, even though
the eBOSS collaboration has released one further high-redshift measurement,
obtained with Ly$\rm\alpha$ absorption and quasars from BOSS and eBOSS at
an effective redshift $z=2.33$ (du Mas des Bourboux et al., 2020). We can
expect the validity of the CDDR to be better checked with more
accurate BAO and SNIa measurements over a wider redshift range from future
observations, such as the Euclid satellite (Amendola et al., 2018) and the
Dark Energy Spectroscopic Instrument (DESI, Aghamousa et al. (2016)).
Furthermore, the compressed-form method proposed in this
paper presents a new way of overcoming the redshift-matching problem at high
redshift. We therefore expect that the method can be extended to other
high-redshift standard candles with limited sample sizes, such as gamma-ray
bursts and HII galaxies, to test the CDDR, which is an interesting topic for
future investigation.
We thank the anonymous reviewer for very enlightening comments. We are
very grateful to Guojian Wang for his introduction to the ReFANN code. This
work was supported by the National Natural Science Foundation of China under
grant Nos. 11505004, 11865018, and 12192221, and the Anhui Provincial Natural
Science Foundation of China (1508085QA17).
## References
* Aghamousa et al. (2016) Aghamousa, A., Aguilar, J., et al. 2016, arXiv:1611.00036
* Alam et al. (2021) Alam, S., Aubert, M., & Avila S. et al., 2021, Phys. Rev. D, 103, 083533
* Alam et al. (2017) Alam, S., Ata, M., & Bailey, S. et al., 2017, Mon. Not. R. Astron. Soc., 470, 2617
* Amendola et al. (2018) Amendola, L., Appleby, S., Avgoustidis, A. et al., 2018, Living Rev. Relativ., 21, 2
* Bassett & Kunz (2004a) Bassett, B. A. & Kunz, M., 2004, Astrophys. J., 607, 661
* Bassett & Kunz (2004b) Bassett, B. A. & Kunz, M., 2004, Phys. Rev. D, 69, 101305
* Bautista et al. (2021) Bautista, J. E., Paviot, R., & Vargas M., et al., 2021, Mon. Not. R. Astron. Soc., 500, 736
* Betoule et al. (2014) Betoule, M., Kessler, R., & Guy J. et al., 2014, Astron. Astropart., 568, A22
* Blake et al. (2012) Blake, C., Brough, S. & Colless, M. et al., 2012, Mon. Not. Roy. Astron. Soc., 425, 405
* Bora & Desai (2021) Bora, K. & Desai, S., 2021, J. Cosmol. Astropart. Phys., 06, 052
* Cardone et al. (2012) Cardone, V. F., Spiro, S., Hook, I., & Scaramella, R. 2012, Phys. Rev. D, 85, 123510
* Cao et al. (2011) Cao, S., Zhu, Z.-H., 2011, SCPMA 54, 5
* Cao et al. (2016) Cao, S. Biesiada, M., Zheng, X., Zhu, Z.-H., 2016, MNRAS 457, 281
* Cao et al. (2017) Cao, S., Biesiada, M., Jackson, J. et al., 2017, J. Cosmol. Astropart. Phys., 2017, 012
* Chen et al. (2012) Chen, J., Wu, P., Yu, H., & Li, Z. 2012, J. Cosmol. Astropart. Phys., 10, 029
* Clevert et al. (2015) Clevert, D.-A., Unterthiner, T., & Hochreiter, S. 2015, arXiv:1511.07289
* Conley et al. (2011) Conley, A., Guy, J., & Sullivan, M. et al., 2011, Astrophys. J. Suppl., 192, 1
* Ding et al. (2015) Ding, X., Biesiada, M., & Cao, S.et al., 2015, Astrophys. J. Lett., 803, L22
* du Mas des Bourboux et al. (2020) du Mas des Bourboux, H., Rich, J., & Font-Ribera, A., 2020, Astrophys. J., 901, 153
* De Bernardis et al. (2006) De Bernardis, F., Giusarma, E.,& Melchiorri, A., 2006, Int. J. Mod. Phys. D, 15, 759
* da Silva et al. (2020) da Silva, W. J. C., Holanda, R. F. L., & Silva, R., 2020, Phys. Rev. D, 6, 063513
* de Mattia et al. (2021) de Mattia, A., Ruhlmann-Kleider, V.,& Raichoor, A. et al., 2021, Mon. Not. R. Astron. Soc., 501, 5616
* Escamilla-Rivera et al. (2020) Escamilla-Rivera, C., Carvajal Quintero, M. A., & Capozziello, S., 2020, J. Cosmol. Astropart. Phys., 03, 008
* Ellis (2007) Ellis, G. F. R. 2007, Gen. Rel. Grav., 39, 1047
* Ellis & Stoeger (1987) Ellis, G. F. R. & Stoeger, W., 1987, Class. Quant. Grav., 4, 1697
* Etherington (1933) Etherington, I. M. H., 1933, Philos. Mag., 15, 761
* Etherington (2007) Etherington, I. M. H., 2007, Gen. Relativ. Gravit., 39, 1055
* Fluri et al. (2019) Fluri, J., Kacprzak, & T., Lucchi, A., et al. 2019, Phys. Rev. D, 100, 063514
* Fu et al. (2020) Fu, X., Yang, J., & Chen, Z., 2020, Eur. Phys.J. C, 80, 893
* Geng et al. (2020) Geng, S., Cao, S., & Liu, T., et al., 2020, Astrophys. J., 905, 54
* George & Huerta (2018) George, D., & Huerta, E. A. 2018, Phys. Rev. D, 97, 044039
* He et al. (2022) He, Y., Pan, Y. & Shi, D.P. et al., 2022, arXiv:2206.04946
* Holanda et al. (2016) Holanda, R. F. L., Busti, V. C., & Alcaniz, J. S., 2016, J. Cosmol. Astropart. Phys., 02, 054
* Holanda et al. (2017) Holanda, R. F. L., Busti, V. C., Lima, J. A. S., & Alcaniz, J. S., 2017, J. Cosmol. Astropart. Phys., 09, 039
* Holanda et al. (2010) Holanda, R. F. L., Lima, J. A. S., & Ribeiro, M. B., 2010, Astrophys. J., 722, L233
* Holanda et al. (2011) Holanda, R. F. L., Lima, J. A. S., & Ribeiro, M. B., 2011, Astron. Astrophys., 528, L14
* Holanda et al. (2012) Holanda, R. F. L., Lima, J. A. S., & Ribeiro, M. B., 2012, Astron. Astrophys., 538, A131
* Holanda et al. (2013) Holanda, R. F. L., Carvalho, J.C., Alcaniz, J.S., 2013, J. Cosmol. Astropart. Phys., 1304, 027
* Hou et al. (2021) Hou, J., Sánchez, A. G., & Ross, A. J. et al., 2021, Mon. Not. R. Astron. Soc., 500, 1201
* Hu & Wang (2018) Hu, J. & Wang, F. Y., 2018, Mon. Not. Roy. Astron. Soc., 477, 5064
* Kessler & Scolnic (2017) Kessler, R., & Scolnic, D., 2017, Astrophys. J., 836, 56
* Kingma & Ba (2014) Kingma, D. P., & Ba, J. 2014, arXiv:1412.6980
* LeCun et al. (2012) LeCun, Y., Bottou, L., Orr, G. B., & M$\mathrm{\ddot{u}}$ller, K. R. 2012, Neural Networks: Tricks of the Trade https://nyuscholars.nyu.edu/en/publications/efficient-backprop
* Lewis & Bridle (2002) Lewis, A., & Bridle, S., 2002, Phys. Rev. D, 66, 103511
* Li et al. (2011) Li, Z., Wu, P. & Yu, H., 2011, Astrophys. J., 729, L14
* Li et al. (2013) Li, Z., Wu, P, Yu, H., & Zhu, Z. 2013, Phys. Rev. D, 87, 103013
* Li & Lin (2018) Li, X. & Lin, H. N., 2018, Mon. Not. Roy. Astron. Soc., 474, 313
* Li et al. (2020) Li, X., Yu, W., Fan, X., & Babu, G. J., 2020, Front. Phys. 15, 54501
* Lima et al. (2011) Lima, J., Cunha, J., Zanchin, V., 2011, Astrophys. J. Lett. 742, L26
* Liao et al. (2013) Liao, K., Li, Z., Ming, J., & Zhu, Z., 2013, Phys. Lett. B, 718, 1166
* Liao et al. (2015) Liao, K., Avgoustidis, A., & Li, Z., 2015, Phys. Rev. D, 92, 123539
* Liao et al. (2016) Liao, K., Li, Z., & Cao, S. et al., 2016, Astrophys. J. 822, 74
* Lima et al (2021) Lima, F.S., Holanda, R. F. L., Pereira, S. H., & da Silva, W. J. C., 2021, J. Cosmol. Astropart. Phys., 08, 035
* Lin et al. (2018) Lin, H., Li, M., & Li, X., 2018, Mon. Not. Roy. Astron. Soc., 480, 3117
* Liu et al. (2021) Liu, T., Cao, S. & Zhang S. et al., 2021, Eur. Phys. J. C, 81, 903
* Liu et al. (2020) Liu, T., Cao, S., & Biesiada, M. et al., 2020, Astrophys. J., 899, 71
* Ma & Corasaniti (2018) Ma C. & Corasaniti, P.-S., 2018, Astrophys. J., 861, 124
* Ma et al. (2019) Ma, Y. B., Cao, S., & Zhang, J. et al., 2019, Astrophys. J., 887, 163
* Gil-Marín et al. (2020) Gil-Marín, H., Bautista, J. E., & Paviot, R. et al., 2020, Mon. Not. R. Astron. Soc., 498, 2492
* Meng et al. (2012) Meng, X., Zhang, T. & Zhan, H., 2012, Astrophys. J., 745, 98
* More et al. (2009) More, S.,Bovy, J., & Hogg, D.W., 2009, Astrophys. J., 696, 1727
* Mukherjee & Mukherjee (2021) Mukherjee, P., & Mukherjee, A., 2021, Mon. Not. Roy. Astron. Soc., 504, 3938
* Nair et al. (2011) Nair, R., Jhingan, S., & Jain, D., 2011, J. Cosmol. Astropart. Phys,. 05, 023
* Nair et al. (2012) Nair, R., Jhingan, S., & Jain, D. 2012, J. Cosmol. Astropart. Phys., 12, 028
* Nair et al. (2015) Nair, R., Jhingan, S., & Jain, D., 2015, Phys. Lett. B, 745, 64
* Neveux et al. (2020) Neveux, R., Burtin, E., & de Mattia, A.et al., 2020, Mon. Not. R. Astron. Soc., 499, 210
* Qin et al. (2021) Qin, J., Melia, F., & Zhang, T. J., 2021, Mon. Not. Roy. Astron. Soc., 502, 3500
* Qi et al. (2019a) Qi, J.Z., Cao, S., & Zheng C. et al., 2019a, Phys. Rev. D, 99, 063507
* Qi et al. (2019b) Qi, J.Z., Cao, S., Pan, Y. & Li, J., 2019b, Phys. Dark Univ., 26, 100338
* Rana et al. (2017) Rana, A., Jain, D., & Mahajan, S. et al., 2017, J. Cosmol. Astropart. Phys., 07, 010
* Roukema et al. (2015) Roukema, B., Buchert, T., Ostrowski, J. J. & France, M. J., 2015, Mon. Not. R. Astron. Soc., 448, 1660
* Roukema et al. (2016) Roukema, B., Buchert, T., Fuji, H. & Ostrowski J. J., 2016, Mon. Not. R. Astron. Soc., 456, L45
* Ruan et al. (2018) Ruan, C., Melia, F., & Zhang, T., 2018, Astrophys. J., 866, 31
* Samushia et al. (2014) Samushia, L., Reid, B. A., & White, M. et al., 2014, Mon. Not. Roy. Astron. Soc., 439, 3504
* Santos et al. (2015) Santos-da-Costa, S., Busti, V. C., & Holanda, R. F. L., 2015, J. Cosmol. Astropart. Phys., 10, 061
* Scolnic et al. (2018) Scolnic, D. M., Jones, D. O., & Rest, A., et al., 2018, Astrophys. J., 859, 101
* Schmidhuber (2015) Schmidhuber, J., 2015, Neural networks, 61, 85
* Suzuki et al. (2012) Suzuki, N., Rubin, D., & Lidman, C. et al., 2012, Astrophys. J., 746, 85
* Tamone et al. (2020) Tamone, A., Raichoor, A., & Zhao, C. et al., 2020, Mon. Not. R. Astron. Soc., 499, 5527
* Uzan et al. (2004) Uzan, J.-P., Aghanim, N., & Mellier, Y., 2004, Phys. Rev. D, 70, 083533
* Wang et al. (2017) Wang, G., Wei, J., Li, Z., Xia, J., & Zhu, Z., 2017, Astrophys. J., 847, 45
* Wang et al. (2020a) Wang, G. J., Ma, X. J., Li, S. Y., & Xia, J. Q., 2020, Astrophys. J. Suppl., 246, 13
* Wang et al. (2020b) Wang, G. J., Li, S. Y., & Xia, J. Q., 2020, Astrophys. J. Suppl., 249, 25
* Wang et al. (2021) Wang, G. J., Ma, X. J., & Xia, J. Q., 2021, Mon. Not. Roy. Astron. Soc., 501, 5714
* Wei (2019) Wei, J., 2019, Astrophys. J., 876, 66
* Wu et al. (2015) Wu, P., Li, Z., Liu, X., & Yu, H., 2015, Phys. Rev. D, 92, 023520
* Xu & Huang (2020) Xu, B. & Huang, Q. H., 2020, Eur. Phys. J. Plus, 135, 447
* Xu et al. (2013) Xu, X., Cuesta, A. J., Padmanabhan, N., Eisenstein, D. J., & McBride, C. K., 2013, Mon. Not. Roy. Astron. Soc. 431, 2834
* Xu et al. (2021) Xu, B., Zhang, K., & Huang, Q., et al., 2021, Phys. Dark Univ., 33, 100875
* Yang et al. (2013) Yang, X., Yu, H., Zhang, Z., & Zhang, T., 2013, Astrophys. J., 777, L24
* Zheng et al. (2016) Zheng, X., Ding, X., & Biesiada, M. et al., 2016, Astrophys. J., 825, 1
* Zheng et al. (2020) Zheng, X., Liao, K., & Biesiada, M. et al., 2020, Astrophys. J., 892, 103
* Zhou & Li (2019) Zhou, H. & Li, Z., 2019, Chinese Physics C, 43, 035103
* Zhou et al. (2019) Zhou, L., Fu, X., Peng, Z., & Chen, J., 2019, Phys. Rev. D, 100, 123539
## Appendix A The covariance matrix of binned apparent magnitude parameters
The covariance matrix of 36 binned apparent magnitude parameters $m_{\rm B,b}$
obtained by running CosmoMC is given in Table A1.
[The $36\times 36$ symmetric covariance matrix (upper-triangular entries listed, in units of $10^{-7}$) appears here; its original $\mathrm{pmatrix}$ layout is not legibly recoverable from the source.]
Table A1: Covariance matrix of the binned apparent magnitudes.
# On the Compatibility between Neural Networks and Partial Differential
Equations for Physics-informed Learning
Kuangdai Leng Jeyan Thiyagalingam
###### Abstract
We shed light on a pitfall and an opportunity in physics-informed neural
networks (PINNs). We prove that a multilayer perceptron (MLP) only with ReLU
(Rectified Linear Unit) or ReLU-like Lipschitz activation functions will
always lead to a vanished Hessian. Such a network-imposed constraint
contradicts any second- or higher-order partial differential equations (PDEs).
Therefore, a ReLU-based MLP cannot form a permissible function space for the
approximation of their solutions. Inspired by this pitfall, we prove that a
linear PDE up to the $n$-th order can be strictly satisfied by an MLP with
$C^{n}$ activation functions when the weights of its output layer lie on a
certain hyperplane, which we call the out-layer-hyperplane. An MLP equipped with
the out-layer-hyperplane becomes “physics-enforced”, no longer requiring a
loss function for the PDE itself (but only those for the initial and boundary
conditions). Such a hyperplane exists not only for MLPs but for any network
architecture tailed by a fully-connected hidden layer. To our knowledge, this
should be the first PINN architecture that enforces point-wise correctness of
PDEs. We show a closed-form expression of the out-layer-hyperplane for second-
order linear PDEs, which can be generalised to higher-order nonlinear PDEs.
###### keywords:
partial differential equation , deep learning , physics-informed neural
network , ReLU activation function
Scientific Computing Department, STFC, Rutherford Appleton Laboratory, Didcot
OX11 0QX, UK
[Graphical abstract: an MLP whose output-layer weights $\mathbf{w}^{2}$ are
assembled from the trainables $\lambda^{[2]},\lambda^{[3]},\lambda^{[4]}$ and
the PDE-coefficient-derived vectors
$\mathbf{g}^{[1]},\mathbf{g}^{[2]},\mathbf{g}^{[3]},\mathbf{g}^{[4]}$ (Steps
1$\sim$5 in Algorithm 1), so that the PDE loss is no longer required.]
We prove that a multilayer perceptron (MLP) with only ReLU-like activation
functions always has a vanishing Hessian, and thus cannot provide a
feasible solution space for any second- or higher-order partial differential
equations (PDEs).
We prove that an MLP with $C^{n}$ activation functions can _always_ (i.e.,
regardless of data) satisfy an $n$-th-order linear PDE _exactly_ (i.e., zero
generalisation error) if the weights of its output layer lie on a hyperplane
decided by the given PDE.
We give a closed-form expression of this hyperplane for second-order linear
PDEs along with an implementation. To the best of our knowledge, this should
be the first network architecture that enforces point-wise correctness of PDEs
(instead of their initial or boundary conditions).
## 1 Introduction
Simulation and inversion of partial differential equations (PDEs) play a
crucial role in applied physics and scientific computing. With conventional
numerical methods being computationally expensive, neural networks and deep
learning technologies, in particular the physics-informed neural networks
(PINNs), have made a new and promising route for simulating and inverting
PDEs. Originally formalised by Raissi et al. [1], PINNs have found numerous
applications across various domains [2, 3], featuring remarkable technical
advancements from many aspects, such as the extension to integro-differential
equations [4], graph neural operators [5], identification of nonlinear
operators [6], multi-fidelity models [7], scalable learning in subdomains [8]
and variational inference [9]. Different from data-driven surrogate models
[10] aimed only at minimising misfit to data, PINNs are trained also to
minimise the physics-based loss functions for the governing equation and the
initial and boundary conditions. These PDE-based loss functions are expected
to aid the neural networks in understanding the underlying physics whereby to
achieve a lower generalisation error with a reduced amount of training data. A
vanilla PINN using a multilayer perceptron (MLP) architecture is shown in Fig.
1, where the key is to exploit the signature strength of deep neural networks:
automatic differentiation (AutoDiff).
Figure 1: A vanilla PINN with an MLP architecture. The input of the MLP or
$\mathbf{x}^{0}$ contains the independent variables of the PDE, and the
output, $\tilde{u}(\mathbf{x}^{0})$, is the desired approximate solution to the
PDE. The acronyms “IC” and “BC” are used to denote initial and boundary
conditions, respectively.
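The loss construction of Fig. 1 can be sketched in code. The following is a minimal, illustrative numpy sketch (not the implementation of any cited work): an untrained tanh MLP for the 1D Poisson-type equation $u''(x)=f(x)$, with central finite differences standing in for AutoDiff; all names, sizes, and the choice of $f$ are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny tanh MLP u~(x): 1 -> 16 -> 1 (weights illustrative, untrained).
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
w2, b2 = rng.normal(size=16), 0.0

def u_tilde(x):
    # x: collocation points, shape (N,); returns u~(x), shape (N,)
    z1 = W1 @ x[None, :] + b1[:, None]        # (16, N)
    return w2 @ np.tanh(z1) + b2              # (N,)

def pinn_loss(x_col, x_bc, u_bc, f, h=1e-4):
    # PDE residual u''(x) - f(x); a vanilla PINN would obtain u'' by
    # automatic differentiation, here approximated by central differences.
    u_pp = (u_tilde(x_col + h) - 2 * u_tilde(x_col) + u_tilde(x_col - h)) / h**2
    loss_pde = np.mean((u_pp - f(x_col)) ** 2)
    # Boundary-condition loss on the domain endpoints.
    loss_bc = np.mean((u_tilde(x_bc) - u_bc) ** 2)
    return loss_pde + loss_bc

x_col = np.linspace(0.1, 0.9, 50)             # interior collocation points
x_bc, u_bc = np.array([0.0, 1.0]), np.array([0.0, 0.0])
loss = pinn_loss(x_col, x_bc, u_bc, f=lambda x: np.sin(np.pi * x))
```

Training would then minimise this composite loss over the weights; the data loss of Fig. 1, omitted here, is an analogous misfit term on observed samples.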
Despite their initial success across many domains, PINNs are still far from
being qualified as a routine for solving or inverting PDEs. Many previous
studies have reported or been motivated by the underperformance of a vanilla
PINN (i.e., an end-to-end MLP) from different perspectives. Examples relevant
to our study here include: training in the Fourier space for better
consistency between a PINN and a target PDE in terms of their
parameterisations of solutions [11, 12], using multi-scale or multi-frequency
neural networks [13, 14] or domain decomposition techniques [8, 15] to enhance
the learnability at high frequencies [16], using adaptive activation functions
[17] or adaptive data sampling [18, 19] to address unbalanced gradients of the
loss functions [20, 21], and explicitly embedding relevant laws or conditions
of physics into a network architecture, such as boundary conditions [22, 23,
24], physical symmetry [25] and invariance [26]. These studies more or less
boil down to one fundamental question: _given a data regime, to what extent can a
neural network be compatible with the target PDE system_.
For clarity, let us consider the following function spaces. Let
$\mathcal{F}_{\mathrm{PDE}}$, $\mathcal{F}_{\mathrm{IC}}$,
$\mathcal{F}_{\mathrm{BC}}$ be the function spaces that respectively satisfy
the governing equation, the initial conditions and the boundary conditions.
Given a well-posed PDE system, their intersection
$\mathcal{F}_{\mathrm{PDE}}\cap\mathcal{F}_{\mathrm{IC}}\cap\mathcal{F}_{\mathrm{BC}}$
must be a singleton set, i.e., it must contain one and only one element, as
denoted by $u$, which is the true solution. A neural network also forms a
function space parameterised by its weights, as denoted by
$\mathcal{F}_{\mathrm{NN}}$. Clearly, a neural network can be a good candidate
for physics-informed learning only if $u\in\mathcal{F}_{\mathrm{NN}}$ within
the data regime, of which the necessary conditions are
$\mathcal{F}_{\mathrm{NN}}\cap\mathcal{F}_{\mathrm{PDE}}\neq\emptyset$,
$\mathcal{F}_{\mathrm{NN}}\cap\mathcal{F}_{\mathrm{IC}}\neq\emptyset$ and
$\mathcal{F}_{\mathrm{NN}}\cap\mathcal{F}_{\mathrm{BC}}\neq\emptyset$.
Meanwhile, for a neural network to work efficiently (in terms of data quantity
and convergence rate), the following differences,
$\mathcal{F}_{\mathrm{NN}}-\mathcal{F}_{\mathrm{PDE}}$,
$\mathcal{F}_{\mathrm{NN}}-\mathcal{F}_{\mathrm{IC}}$ and
$\mathcal{F}_{\mathrm{NN}}-\mathcal{F}_{\mathrm{BC}}$, should be as small as
possible. The above mentioned studies can be well associated with these two
purposes; for example, [13, 14, 8, 15, 17] can be understood as increasing the
expressiveness of $\mathcal{F}_{\mathrm{NN}}$ to capture localised features in
$u$ at high frequencies, [11, 12, 25, 26] as decreasing
$\mathcal{F}_{\mathrm{NN}}-\mathcal{F}_{\mathrm{PDE}}$ by changing basis
functions or imposing energy conservation, and [22, 23, 24] as enforcing
$\mathcal{F}_{\mathrm{NN}}-\mathcal{F}_{\mathrm{BC}}=\emptyset$ or
$\mathcal{F}_{\mathrm{NN}}\subseteq\mathcal{F}_{\mathrm{BC}}$.
In this paper, we will show a case of incompatibility between
$\mathcal{F}_{\mathrm{NN}}$ and $\mathcal{F}_{\mathrm{PDE}}$
($\mathcal{F}_{\mathrm{NN}}\cap\mathcal{F}_{\mathrm{PDE}}=\emptyset$) and
provide a simple and novel approach to enforce full compatibility between them
($\mathcal{F}_{\mathrm{NN}}\subseteq\mathcal{F}_{\mathrm{PDE}}$), both regardless of
data. We will prove that an MLP with only Rectified Linear Unit (ReLU) or
ReLU-like Lipschitz activation functions always has a vanishing Hessian,
which contradicts any second- or higher-order PDEs. In other words, any
approximate solution yielded by a ReLU-based MLP is non-permissible by such
PDEs. While this incompatibility has been explicitly mentioned and avoided in
some applications [27, 28], we do see quite a few applications that have
(probably) fallen into this trap. Notably, He et al. [29] have shown that
ReLU-based MLPs are formally equivalent to the piecewise-linear interpolation
used in finite element methods [30], implying that a ReLU-based MLP might
still seem viable for second- or higher-order PDEs. We argue that such
equivalence is only apparent, and the two methods are essentially different in
terms of the locality of function parameterisation; to take advantage of such
formal equivalence, certain measures (e.g., learning in subdomains [8, 15])
must be taken to bridge the gap between the two. We will revisit this issue in
a later part of this paper.
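To make the pitfall concrete, consider the single-hidden-layer specialisation $\tilde{u}=\mathbf{w}^{2}\cdot\sigma(\mathbf{W}^{1}\mathbf{x}+\mathbf{b}^{1})$, whose Hessian reduces by the chain rule to $(\mathbf{W}^{1})^{T}\,\mathrm{diag}(\mathbf{w}^{2}\odot\sigma''(\mathbf{z}^{1}))\,\mathbf{W}^{1}$. A small numpy sketch (weights, sizes and the seed are illustrative assumptions) shows the Hessian vanishing identically for ReLU, whose second derivative is zero almost everywhere, but not for tanh:

```python
import numpy as np

rng = np.random.default_rng(1)
d0, d1 = 3, 8                                  # input dim, hidden width
W1, b1 = rng.normal(size=(d1, d0)), rng.normal(size=d1)
w2 = rng.normal(size=d1)

def hessian(x, sigma_pp):
    # For u~(x) = w2 . sigma(W1 x + b1), the chain rule gives
    #   H = W1^T diag(w2 * sigma''(z1)) W1,  with z1 = W1 x + b1.
    z1 = W1 @ x + b1
    return W1.T @ np.diag(w2 * sigma_pp(z1)) @ W1

x = rng.normal(size=d0)
# ReLU: sigma''(z) = 0 almost everywhere  ->  the Hessian vanishes.
H_relu = hessian(x, lambda z: np.zeros_like(z))
# tanh: sigma''(z) = -2 tanh(z)(1 - tanh(z)^2)  ->  generically nonzero.
H_tanh = hessian(x, lambda z: -2 * np.tanh(z) * (1 - np.tanh(z) ** 2))
```

Since $\sigma''=0$ wherever ReLU is differentiable, the ReLU Hessian is the zero matrix at almost every input, which is exactly the network-imposed constraint that contradicts second-order PDEs.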
As inspired by the above ReLU-induced incompatibility, we will prove that a
_linear_ PDE up to the $n$-th order can be _strictly_ satisfied by an MLP with
$C^{n}$ activation functions when the weights of its output layer lie on a
hyperplane decided by the PDE, which we refer to as the _out-layer-
hyperplane_. Such a hyperplane exists for any network architecture tailed by a
fully-connected hidden layer. A PINN equipped with the out-layer-hyperplane
becomes “physics-enforced”, or
$\mathcal{F}_{\mathrm{NN}}\subseteq\mathcal{F}_{\mathrm{PDE}}$, no longer requiring
a loss function for the governing equation. To the best of our knowledge, this
should be the first network architecture that enforces point-wise correctness
of the governing equation. We will provide a closed-form expression of the
out-layer-hyperplane for second-order linear PDEs approximated by an MLP and
provide an implementation. We will also discuss its generalisation to other
network architectures and higher-order nonlinear PDEs.
The rest of this paper is organised as follows. In Section 2, we derive the
Hessian of an MLP-approximated solution, followed by Section 3 where we
discuss the incompatibility caused by ReLU. Next, we propose the concept of
out-layer-hyperplane in Section 4, along with its proof and realisation. After
a brief summary of supportive codes in Section 5, we conclude this paper in
Section 6. As for notations adopted in this paper, we use superscripts to
denote layer indices (not powers, except those on $\mathbb{R}$) and subscripts
to denote element indices. Einstein summation notation is assumed for
subscript indices except those put in parentheses. We also omit the dot symbol
for inner product. For example, a matrix multiplication
$\mathbf{A}\cdot\mathbf{B}$ will be written as
$\mathbf{A}\mathbf{B}=A_{ij}B_{jk}=\sum_{j}A_{i(j)}B_{(j)k}$.
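As a quick check of this convention, the matrix product in the example above can be compared against the explicit index sum; the snippet below (sizes are arbitrary) spells out the repeated index $j$ with `einsum`:

```python
import numpy as np

rng = np.random.default_rng(4)
A, B = rng.normal(size=(2, 3)), rng.normal(size=(3, 4))

# A B = A_ij B_jk with summation over the repeated index j;
# einsum mirrors the Einstein convention: indices absent from the
# output subscript ("ik") are summed over.
AB = np.einsum("ij,jk->ik", A, B)
```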
## 2 Hessian of an MLP approximation
Let $u(\mathbf{x}^{0})\in\mathbb{R}$ denote the true solution of a PDE, where
$\mathbf{x}^{0}\in\mathbb{R}^{d^{0}}$ are the independent variables such as
the spatial and temporal coordinates. Taking a 2D homogeneous wave equation on
a membrane for example, we have $\mathbf{x}^{0}=\\{x,y,t\\}$ with $d^{0}=3$,
and $u(\mathbf{x}^{0})$ satisfies the governing equation
$u_{xx}+u_{yy}-u_{tt}/c^{2}=0$ (with $c$ being the wave velocity) and certain
initial and boundary conditions. For simplicity, here we assume scalar-valued
PDEs, but our conclusions hold for vector-valued ones (e.g.,
$\mathbf{u}(\mathbf{x}^{0})\in\mathbb{R}^{3}$).
Let $\tilde{u}(\mathbf{x}^{0})$ be a solution approximated by an MLP that has
$L$ layers, the $k$-th layer of which has $d^{k}$ neurons with weights
$\mathbf{W}^{k}\in\mathbb{R}^{d^{k}\times d^{k-1}}$ and biases
$\mathbf{b}^{k}\in\mathbb{R}^{d^{k}}$. This approximate solution can be
formulated as the following composition of functions:
$\tilde{u}(\mathbf{x}^{0})=\mathcal{L}^{L}\circ\sigma^{L-1}\circ\mathcal{L}^{L-1}\circ\cdots\circ\sigma^{2}\circ\mathcal{L}^{2}\circ\sigma^{1}\circ\mathcal{L}^{1}(\mathbf{x}^{0}),$
(1)
where $\mathcal{L}^{k}$ is the affine transformation endowed by the $k$-th
layer:
$\mathbf{z}^{k}:=\mathcal{L}^{k}(\mathbf{x}^{k-1})=\mathbf{W}^{k}\mathbf{x}^{k-1}+\mathbf{b}^{k},$
(2)
for $\mathbf{x}^{k-1}\in\mathbb{R}^{d^{k-1}}$ and
$\mathbf{z}^{k}\in\mathbb{R}^{d^{k}}$, and $\sigma^{k}$ is the activation
function of the $k$-th layer,
$x^{k}_{i}=\sigma^{k}(z^{k}_{i}),\quad i\in\\{1,2,\cdots,d^{k}\\}.$ (3)
Because we assume $d^{L}=1$, we write $\mathbf{W}^{L}$ as $\mathbf{w}^{L}$ for
a consistent notation.
Based on the chain rule, the Jacobian of $\tilde{u}$, i.e.,
$\nabla\tilde{u}\in\mathbb{R}^{d^{0}}$, is given by
$\nabla\tilde{u}=\mathbf{w}^{L}\mathbf{F}^{L-1}\mathbf{W}^{L-1}\cdots\mathbf{F}^{2}\mathbf{W}^{2}\mathbf{F}^{1}\mathbf{W}^{1},$
(4)
where $\mathbf{F}^{k}\in\mathbb{R}^{d^{k}\times d^{k}}$ is a diagonal matrix
(it is diagonal because $\sigma^{k}$ is an element-wise operation) containing
the first derivatives of $\sigma^{k}(z)$:
$\mathbf{F}^{k}:=\frac{\partial\mathbf{x}^{k}}{\partial\mathbf{z}^{k}}=\mathrm{diag}\
\Big{\\{}\left.\frac{d\sigma^{k}}{dz}\right|_{z=z^{k}_{1}},\left.\frac{d\sigma^{k}}{dz}\right|_{z=z^{k}_{2}},\cdots,\left.\frac{d\sigma^{k}}{dz}\right|_{z=z^{k}_{d^{k}}}\Big{\\}}.$
(5)
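As a sanity check, eq. (4) can be verified numerically against central finite differences. The following is a minimal NumPy sketch on a toy two-hidden-layer tanh MLP; the dimensions, seed and helper names are our own illustration, not taken from the paper's code repository.

```python
import numpy as np

# Toy two-hidden-layer tanh MLP; verify eq. (4) against finite differences.
rng = np.random.default_rng(0)
d0, d1, d2 = 3, 5, 4                      # d^0, d^1, d^2 (d^L = 1)
W1 = rng.standard_normal((d1, d0)); b1 = rng.standard_normal(d1)
W2 = rng.standard_normal((d2, d1)); b2 = rng.standard_normal(d2)
wL = rng.standard_normal(d2);       bL = rng.standard_normal()

def forward(x0):
    z1 = W1 @ x0 + b1
    z2 = W2 @ np.tanh(z1) + b2
    return wL @ np.tanh(z2) + bL, z1, z2

x0 = rng.standard_normal(d0)
_, z1, z2 = forward(x0)
F1 = np.diag(1 - np.tanh(z1) ** 2)        # eq. (5): diag of tanh'(z^1)
F2 = np.diag(1 - np.tanh(z2) ** 2)
grad = wL @ F2 @ W2 @ F1 @ W1             # eq. (4)

eps = 1e-6                                # central finite differences
fd = np.array([(forward(x0 + eps * e)[0] - forward(x0 - eps * e)[0]) / (2 * eps)
               for e in np.eye(d0)])
assert np.allclose(grad, fd, atol=1e-6)   # eq. (4) matches the numerics
```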
For the purpose of generalising our results to other network architectures and
PDE classes, we rewrite $\nabla\tilde{u}$ in the following top-down manner:
$\nabla\tilde{u}=\mathbf{w}^{L}\mathbf{U}=\mathbf{w}^{L}\mathbf{P}^{k}\mathbf{F}^{k}\mathbf{Q}^{k},\quad\forall
k\in\\{1,2,\cdots,L-1\\},$ (6)
for which we define
$\displaystyle\mathbf{P}^{k}:=$ $\displaystyle\
\frac{\partial\mathbf{x}^{L-1}}{\partial\mathbf{x}^{k}}=\mathbf{F}^{L-1}\mathbf{W}^{L-1}\cdots\mathbf{F}^{k+1}\mathbf{W}^{k+1}\in\mathbb{R}^{d^{L-1}\times
d^{k}},$ (7) $\displaystyle\mathbf{Q}^{k}:=$ $\displaystyle\
\frac{\partial\mathbf{z}^{k}}{\partial\mathbf{x}^{0}}=\mathbf{W}^{k}\mathbf{F}^{k-1}\mathbf{W}^{k-1}\cdots\mathbf{F}^{2}\mathbf{W}^{2}\mathbf{F}^{1}\mathbf{W}^{1}\in\mathbb{R}^{d^{k}\times
d^{0}},$ (8) $\displaystyle\mathbf{U}:=$ $\displaystyle\
\frac{\partial\mathbf{x}^{L-1}}{\partial\mathbf{x}^{0}}=\mathbf{P}^{k}\mathbf{F}^{k}\mathbf{Q}^{k}\in\mathbb{R}^{d^{L-1}\times
d^{0}},\quad\forall k\in\\{1,2,\cdots,L-1\\}.$ (9)
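The $k$-independence of $\mathbf{U}=\mathbf{P}^{k}\mathbf{F}^{k}\mathbf{Q}^{k}$ in eq. (9) is simply associativity of the chain-rule product; a toy example (our own sizes and seed, tanh activations, $L=3$) makes this concrete.

```python
import numpy as np

# Check that P^k F^k Q^k gives the same U for k = 1 and k = 2 (L = 3):
# both are bracketings of the same chain-rule product F^2 W^2 F^1 W^1.
rng = np.random.default_rng(5)
d0, d1, d2 = 2, 4, 3
W1 = rng.standard_normal((d1, d0)); b1 = rng.standard_normal(d1)
W2 = rng.standard_normal((d2, d1)); b2 = rng.standard_normal(d2)
x0 = rng.standard_normal(d0)
z1 = W1 @ x0 + b1
z2 = W2 @ np.tanh(z1) + b2
F1 = np.diag(1 - np.tanh(z1) ** 2)
F2 = np.diag(1 - np.tanh(z2) ** 2)
U_k1 = (F2 @ W2) @ F1 @ W1                 # P^1 F^1 Q^1, eqs. (7)-(8), k = 1
U_k2 = np.eye(d2) @ F2 @ (W2 @ F1 @ W1)    # P^2 F^2 Q^2 (P^{L-1} = identity)
assert np.allclose(U_k1, U_k2)
```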
To show the Hessian of $\tilde{u}$, i.e.,
$\nabla\nabla\tilde{u}\in\mathbb{R}^{d^{0}\times d^{0}}$, we first express eq.
(6) in index notation,
$\tilde{u}_{,m}=w_{i}^{L}P_{ij}^{k}F_{jl}^{k}Q_{lm}^{k},\quad\forall
k\in\\{1,2,\cdots,L-1\\},$ (10)
where $()_{,i}:=\frac{\partial}{\partial x_{i}^{0}}()$. The Hessian of
$\tilde{u}$ can be shown by the following steps:
$\displaystyle\tilde{u}_{,mn}=\frac{\partial\tilde{u}_{,m}}{\partial
x_{n}^{0}}=$ $\displaystyle\ w_{i}^{L}\sum_{k=1}^{L-1}P_{ij}^{k}\frac{\partial
F_{jl}^{k}}{\partial x^{0}_{n}}Q_{lm}^{k}$ (11a) $\displaystyle=$
$\displaystyle\ w_{i}^{L}\sum_{k=1}^{L-1}P_{ij}^{k}\frac{\partial
F_{jl}^{k}}{\partial z^{k}_{p}}\frac{\partial z^{k}_{p}}{\partial
x^{0}_{n}}Q_{lm}^{k}$ (11b) $\displaystyle=$ $\displaystyle\
w_{i}^{L}\sum_{k=1}^{L-1}P_{ij}^{k}\frac{\partial F_{jl}^{k}}{\partial
z^{k}_{p}}Q_{pn}^{k}Q_{lm}^{k}$ (11c) $\displaystyle=$ $\displaystyle\
w_{i}^{L}\sum_{k=1}^{L-1}\sum_{j=1}^{d^{k}}P_{i(j)}^{k}s_{(j)}^{k}Q_{(j)m}^{k}Q_{(j)n}^{k}.$
(11d)
In the above derivation, eq. (11a) takes the derivative of _each_
$\mathbf{F}^{k}$ in eq. (4) with respect to $\mathbf{x}^{0}$ and sums them up,
showing the result in the spirit of eq. (10); eq. (11b) expands
$\frac{\partial\mathbf{F}^{k}}{\partial\mathbf{x}^{0}}$ by the chain rule,
i.e.,
$\frac{\partial\mathbf{F}^{k}}{\partial\mathbf{x}^{0}}=\frac{\partial\mathbf{F}^{k}}{\partial\mathbf{z}^{k}}\frac{\partial\mathbf{z}^{k}}{\partial\mathbf{x}^{0}}$;
eq. (11c) substitutes $\frac{\partial\mathbf{z}^{k}}{\partial\mathbf{x}^{0}}$
with $\mathbf{Q}^{k}$ as per its definition; and, finally, eq. (11d) takes
into account that $\frac{\partial F_{jl}^{k}}{\partial z^{k}_{p}}$ vanishes
unless $j=l=p$, as $\sigma^{k}$ is element-wise, whereas the non-zero terms
$s_{j}^{k}$ are the second derivatives of $\sigma^{k}(z)$:
$s_{j}^{k}:=\left.\frac{d}{dz}\frac{d\sigma^{k}(z)}{dz}\right|_{z=z^{k}_{j}}.$
(12)
For its later generalisation, we rewrite eq. (11) as
$\nabla\nabla\tilde{u}=\mathbf{w}^{L}\mathbf{V},$ (13)
where $\mathbf{V}$ is a third-order tensor defined by
$\displaystyle\mathbf{V}:=$ $\displaystyle\
\frac{\partial}{\partial\mathbf{x}^{0}}\frac{\partial\mathbf{x}^{L-1}}{\partial\mathbf{x}^{0}}\in\mathbb{R}^{d^{L-1}\times
d^{0}\times d^{0}},$ (14a) $\displaystyle V_{imn}=$ $\displaystyle\
\sum_{k=1}^{L-1}\sum_{j=1}^{d^{k}}P_{i(j)}^{k}s_{(j)}^{k}Q_{(j)m}^{k}Q_{(j)n}^{k}.$
(14b)
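The index gymnastics of eqs. (11) and (14b) are easy to get wrong, so a numerical check is worthwhile. The sketch below (our own toy tanh MLP, dimensions and seed) builds $\mathbf{V}$ from eq. (14b) and compares the resulting Hessian with finite differences of the analytic gradient of eq. (4).

```python
import numpy as np

# Toy tanh MLP (L = 3); verify the Hessian of eqs. (13)-(14) numerically.
rng = np.random.default_rng(1)
d0, d1, d2 = 3, 5, 4
W1 = rng.standard_normal((d1, d0)); b1 = rng.standard_normal(d1)
W2 = rng.standard_normal((d2, d1)); b2 = rng.standard_normal(d2)
wL = rng.standard_normal(d2)

def layers(x0):
    z1 = W1 @ x0 + b1
    z2 = W2 @ np.tanh(z1) + b2
    return z1, z2

def grad(x0):                              # eq. (4)
    z1, z2 = layers(x0)
    F1 = np.diag(1 - np.tanh(z1) ** 2)
    F2 = np.diag(1 - np.tanh(z2) ** 2)
    return wL @ F2 @ W2 @ F1 @ W1

x0 = rng.standard_normal(d0)
z1, z2 = layers(x0)
t1, t2 = np.tanh(z1), np.tanh(z2)
F1, F2 = np.diag(1 - t1**2), np.diag(1 - t2**2)
s1 = -2 * t1 * (1 - t1**2)                 # eq. (12): tanh''(z^1)
s2 = -2 * t2 * (1 - t2**2)

P1, Q1 = F2 @ W2, W1                       # eqs. (7)-(8) for k = 1
P2, Q2 = np.eye(d2), W2 @ F1 @ W1          # ... and k = 2 (P^{L-1} = identity)
V = (np.einsum('ij,j,jm,jn->imn', P1, s1, Q1, Q1)
     + np.einsum('ij,j,jm,jn->imn', P2, s2, Q2, Q2))   # eq. (14b)
H = np.einsum('i,imn->mn', wL, V)          # eq. (13)

eps = 1e-6                                 # finite differences of eq. (4)
H_fd = np.array([(grad(x0 + eps*e) - grad(x0 - eps*e)) / (2*eps)
                 for e in np.eye(d0)])
assert np.allclose(H, H_fd, atol=1e-5)
assert np.allclose(H, H.T)                 # Hessian symmetry
```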
## 3 Incompatibility by ReLU
For a piecewise linear activation $\sigma^{k}$ such as ReLU, we have
$\mathbf{s}^{k}\equiv\mathbf{0}$ wherever it is defined. It is then straightforward to see from eq.
(13) that the Hessian $\nabla\nabla\tilde{u}\equiv\mathbf{0}$ if the $(L-1)$
activation functions are _all_ ReLU-like. We provide a simple code to verify
this; see Section 5. Such a vanishing Hessian, imposed by the network itself,
is undesirable: it restricts $\tilde{u}$ to a function space that excludes the
true solution $u$, i.e.,
$\mathcal{F}_{\mathrm{NN}}\cap\mathcal{F}_{\mathrm{PDE}}=\emptyset$,
regardless of the training data. From the perspective of Probably
Approximately Correct (PAC) learning [31],
$\mathcal{F}_{\mathrm{NN}}\cap\mathcal{F}_{\mathrm{PDE}}=\emptyset$ means the
breakdown of the “realisability assumption”, which fundamentally weakens a
model’s learnability. For better understanding, let us examine the 2D wave
equation on a membrane, considering the following three cases:
1.
if the wave equation is inhomogeneous, i.e., $u_{xx}+u_{yy}-u_{tt}/c^{2}=f$
with $f=f(x,y,t)$ being a nonzero source term, it can never be satisfied by a
ReLU-based $\tilde{u}$ because we have proved that
$\tilde{u}_{xx}=\tilde{u}_{yy}=\tilde{u}_{tt}\equiv 0$;
2.
if the wave equation is homogeneous, i.e., $u_{xx}+u_{yy}-u_{tt}/c^{2}=0$, it
will always be satisfied by a ReLU-based $\tilde{u}$; however, this illusory
benefit is invalid because the network-imposed conditions (not only
$\tilde{u}_{xx}=\tilde{u}_{yy}=\tilde{u}_{tt}\equiv 0$ but also
$\tilde{u}_{xy}=\tilde{u}_{xt}=\tilde{u}_{yt}\equiv 0$) are much stronger than
the wave equation itself;
3.
if the wave equation is both forced and damped, i.e.,
$u_{xx}+u_{yy}-u_{tt}/c^{2}-\mu u_{t}/c^{2}=f$ (with $\mu$ being the
coefficient of friction), a ReLU-based MLP will effectively be trained to
minimise $-\mu\tilde{u}_{t}/c^{2}-f$, subject to
$\nabla\nabla\tilde{u}\equiv\mathbf{0}$, rather than to minimise
$\tilde{u}_{xx}+\tilde{u}_{yy}-\tilde{u}_{tt}/c^{2}-\mu\tilde{u}_{t}/c^{2}-f$
as intended.
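The case analysis above rests on the gradient of a ReLU MLP being piecewise constant. A minimal sketch (our own toy network and seed; not the repository script relu_causes_zero_hessian.py) illustrates this with eq. (4): since $\mathbf{F}^{k}=\mathrm{diag}(\mathbb{1}\\{z^{k}>0\\})$ depends on $\mathbf{x}^{0}$ only through the sign pattern of the pre-activations, a small perturbation almost surely leaves the gradient bitwise unchanged, so the Hessian vanishes wherever it exists.

```python
import numpy as np

# Analytic gradient of a ReLU MLP via eq. (4); it depends only on the
# sign pattern of the pre-activations, hence is piecewise constant.
rng = np.random.default_rng(2)
W1 = rng.standard_normal((8, 3)); b1 = rng.standard_normal(8)
W2 = rng.standard_normal((8, 8)); b2 = rng.standard_normal(8)
wL = rng.standard_normal(8)

def relu_grad(x0):
    z1 = W1 @ x0 + b1
    F1 = np.diag((z1 > 0).astype(float))   # eq. (5) for ReLU
    z2 = W2 @ np.maximum(z1, 0) + b2
    F2 = np.diag((z2 > 0).astype(float))
    return wL @ F2 @ W2 @ F1 @ W1          # eq. (4)

x0 = rng.standard_normal(3)
g0 = relu_grad(x0)
# A tiny perturbation (almost surely) stays on the same linear piece,
# so the gradient is identical: the Hessian is zero there.
g1 = relu_grad(x0 + 1e-6 * rng.standard_normal(3))
assert np.array_equal(g0, g1)
```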
The vanishing Hessian can easily be avoided by using a smooth activation
function, as many applications have done, with some being alert to the
downside of ReLU for PINNs [27, 28]. Another way is to use non-affine layer
operations, i.e., non-affine $\mathcal{L}$’s in eq. (1), because
$(\sigma\circ\mathcal{L})^{\prime\prime}$ will then contain at least one
non-zero term ${\sigma}^{\prime}\mathcal{L}^{\prime\prime}$; an example of
this is the Fourier neural operator tailed by ReLU [11]. Note that convolutional filters are also
affine (with a sparse $\mathbf{W}^{k}$), so a ReLU-based convolutional neural
network (CNN) will also end up with a vanished Hessian.
He et al. [29] have studied the similarity between ReLU-based MLPs and the
finite element methods (FEMs) with a linear shape function [30]. They show
that both of them present a piecewise linear function space for the
approximation of the PDE solution, which seems to hint that a ReLU-based MLP
may achieve the same accuracy as a linear FEM does. We argue that such a
similarity or equivalence is only apparent because of the following two major
differences. First, in an FEM, the spatial domain is discretised by a mesh
whereby linear interpolation is localised in each (small) element with fixed
anchor points (i.e., the nodes). Such a local parameterisation with compact
support can fit any complicated function within the mesh resolution. In
contrast, a ReLU-based MLP presents a global piecewise linear parameterisation
whose anchor points are stochastically located, due to the stochastic nature
of neural weights, mostly uncontrollable a priori and difficult to learn from
data. Such a difference is visualised in Fig. 2. Second, an FEM is based on a
_weak form_ (or variational form) of the PDE [15] in which the second-order
derivatives have disappeared. However, such a weak form is not considered by a
vanilla PINN, which still computes the Hessian by AutoDiff as part of the loss
function (and it will only get zeros if ReLU is used). The above two
differences are most relevant to the idea of PINNs with domain decomposition
[8, 15], that is, many smaller neural networks are trained in parallel, each
approximating the solution in a smaller subdomain, either using a weak form
[15] or a strong form [8] of the PDE. Domain decomposition can be understood
as achieving controlled localisation of basis functions by using many neural
networks, much as an FEM does by using many elements. In addition to the weak
form, an alternative way to avoid computing the Hessian is to reformulate a
second-order PDE as a system of first-order PDEs [32], similar to the
staggered-grid finite difference methods [30].
[Figure 2 panels omitted: 1D plots of $u(x)$ versus $x$.]
(a) FEM: true solution, finite-element solution and mesh-allowed solutions,
with nodes $x_{1},x_{2},\cdots,x_{5}$.
(b) ReLU-based MLP: true solution and network-allowed solutions, with only
$x_{1}$ and $x_{5}$ marked.
Figure 2: Sketch of 1D piecewise linear parameterisations (a) by an FEM and
(b) by a ReLU-based MLP. The main difference between (a) and (b) is that the
anchor points are predefined in (a), i.e., $x_{1},x_{2},\cdots,x_{5}$, based
on the required mesh resolution, but are stochastic and uncontrollable in (b).
A simple code is provided to produce the sketched patterns in (b) with
randomly initialised MLPs; see Section 5.
## 4 The out-layer-hyperplane
In the previous section, we have shown that a ReLU-based MLP is incompatible
with a second-order PDE, or
$\mathcal{F}_{\mathrm{NN}}\cap\mathcal{F}_{\mathrm{PDE}}=\emptyset$. The above
analysis suggests a simple and elegant way to enforce an MLP to satisfy any
second-order linear PDE, i.e.,
$\mathcal{F}_{\mathrm{NN}}\subseteq\mathcal{F}_{\mathrm{PDE}}$, without
imposing any non-physical constraints. Now we assume that the activation
functions are of class $C^{2}$ or above so that $\mathbf{V}$ in eq. (14) does not vanish.
Consider a general second-order, scalar-valued linear PDE as follows:
$\Gamma_{mn}u_{,mn}+\gamma_{m}u_{,m}+\beta u+\alpha=0,$ (15)
where $\alpha\in\mathbb{R}$, $\beta\in\mathbb{R}$,
$\bm{\upgamma}\in\mathbb{R}^{d^{0}}$ and
$\mathbf{\Gamma}\in\mathbb{R}^{d^{0}\times d^{0}}$ are known PDE coefficients.
Let $u(\mathbf{x}^{0})$ be approximated by $\tilde{u}(\mathbf{x}^{0})$.
Substituting $u$ by $z^{L}$ in eq. (2), $u_{,m}$ by eq. (6) and $u_{,mn}$ by
eq. (13), we obtain the following linear system that enforces the MLP to
satisfy the above PDE:
$\psi_{i}w_{i}^{L}+\beta b^{L}+\alpha=0,$ (16)
where $\bm{\uppsi}\in\mathbb{R}^{d^{L-1}}$ are defined by
$\psi_{i}:=\Gamma_{mn}V_{imn}+\gamma_{m}U_{im}+\beta x_{i}^{L-1}.$ (17)
The LHS of eq. (16) is a function of all the weights and biases of the MLP and
the input $\mathbf{x}^{0}$. However, from the angle of network design, we can
regard the weights and biases of the hidden layers ($\mathbf{w}^{k}$ and
$b^{k}$, $k\in\\{1,2,\cdots,L-1\\}$) as free variables or trainable parameters
while constraining those of the output layer ($\mathbf{w}^{L}$ and $b^{L}$) so
that eq. (16) can always hold. Obviously,
$\mathbf{w}^{L}\in\mathbb{R}^{d^{L-1}}$ and $b^{L}\in\mathbb{R}$ lie on a
hyperplane in $\mathbb{R}^{d^{L-1}+1}$, as defined by eq. (16), which we call
the _out-layer-hyperplane_.
We realise the out-layer-hyperplane using its parametric equation. Without
loss of generality, we ignore $b^{L}$ by assuming $\beta=0$ for a compact
notation. As is known from basic linear algebra, the parametric equation of the
hyperplane is given by
$\mathbf{w}^{L}=\mathbf{g}^{[1]}+\sum_{p=2}^{d^{L-1}}\lambda^{[p]}\mathbf{g}^{[p]},$
(18)
where the basis vectors $\mathbf{g}^{[p]}\in\mathbb{R}^{d^{L-1}}$,
$p\in\\{1,2,\cdots,d^{L-1}\\}$, can be chosen as
$\left[\begin{array}[]{c}\mathbf{g}^{[1]}\\\
\hdashline[2pt/2pt]\mathbf{g}^{[2]}\\\ \mathbf{g}^{[3]}\\\ \vdots\\\
\mathbf{g}^{[d^{L-1}]}\end{array}\right]=\left[\begin{array}[]{c;{2pt/2pt}cccccc}\frac{-\alpha\psi_{1}}{|\bm{\uppsi}|^{2}}&\frac{-\alpha\psi_{2}}{|\bm{\uppsi}|^{2}}&\frac{-\alpha\psi_{3}}{|\bm{\uppsi}|^{2}}&\cdots&\frac{-\alpha\psi_{d^{L-1}}}{|\bm{\uppsi}|^{2}}\\\
\hdashline[2pt/2pt]-\psi_{2}&\psi_1&0&\cdots&0\\\
-\psi_{3}&0&\psi_1&\cdots&0\\\ \vdots&\vdots&\vdots&\ddots&\vdots\\\
-\psi_{d^{L-1}}&0&0&\cdots&\psi_{1}\end{array}\right],$ (19)
and $\lambda^{[p]}\in\mathbb{R}$, $p\in\\{2,3,\cdots,d^{L-1}\\}$ (note that
$p$ starts from 2 here), are the new network parameters taking the place of
$\mathbf{w}^{L}$. Here we put the superscript indices for $\lambda$ and
$\mathbf{g}$ in $[\cdot]$ to distinguish them from the layer indices. Note
that $\mathbf{g}^{[1]}$ represents a point on the hyperplane, and
$\mathbf{g}^{[2]},\mathbf{g}^{[3]},\cdots,\mathbf{g}^{[d^{L-1}]}$ are linearly
independent vectors parallel to the hyperplane. The choice of these basis
vectors is non-unique, such as
$\mathbf{g}^{[1]}\leftarrow\\{-\alpha/\psi_{1},0,0,\cdots,0\\}$, but our
choice in eq. (19) avoids division by elements of $\bm{\uppsi}$ (which may be
close to zero), while the division by $|\bm{\uppsi}|^{2}$ is always safe and
stable. Also note that the total number of the $\lambda$’s is one less than
the size of $\mathbf{w}^{L}$, as $\mathbf{w}^{L}$ must lie on a hyperplane,
which confirms that we impose no extra conditions other than the PDE
itself. The forward pass through an MLP equipped with the out-layer-hyperplane
is sketched in Fig. 3 and elaborated in Algorithm 1. A code implementation is
also provided; see Section 5.
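The defining property of the basis in eq. (19), namely $\bm{\uppsi}\cdot\mathbf{g}^{[1]}=-\alpha$ and $\bm{\uppsi}\cdot\mathbf{g}^{[p]}=0$ for $p\geqslant 2$, is easy to check numerically. The sketch below (our own sizes and seed, $\beta=0$, assuming $\psi_{1}\neq 0$ so that the basis is non-degenerate) confirms that eq. (16) holds for arbitrary $\lambda$'s.

```python
import numpy as np

# For any lambda's, w^L reconstructed from eqs. (18)-(19) satisfies
# psi . w^L + alpha = 0, i.e., eq. (16) with beta = 0.
rng = np.random.default_rng(3)
n = 6                                    # d^{L-1}
psi = rng.standard_normal(n)
alpha = rng.standard_normal()

g = np.zeros((n, n))                     # rows: g^[1], ..., g^[n], eq. (19)
g[0] = -alpha * psi / (psi @ psi)        # g^[1]: a point on the hyperplane
for p in range(1, n):                    # g^[p+1]: directions along it
    g[p, 0] = -psi[p]
    g[p, p] = psi[0]

lam = rng.standard_normal(n - 1)         # arbitrary trainable parameters
wL = g[0] + lam @ g[1:]                  # eq. (18)
assert abs(psi @ wL + alpha) < 1e-10     # eq. (16) holds identically
```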
Thus far, we have shown the existence and a realisation of the out-layer-
hyperplane for second-order linear PDEs approximated by an MLP. What makes
this idea more attractive is that it can be generalised to other network
architectures and PDE classes, most of which require a trivial effort, as
detailed below.
* 1.
Network architecture. Equation (16) holds for any network architecture tailed
by a fully-connected hidden layer as long as we relax the definitions of
$\mathbf{U}$ and $\mathbf{V}$ in eqs. (9) and (14) respectively to
$\mathbf{U}:=\frac{\partial\mathbf{x}^{L-1}}{\partial\mathbf{x}^{0}}$ and
$\mathbf{V}:=\frac{\partial}{\partial\mathbf{x}^{0}}\frac{\partial\mathbf{x}^{L-1}}{\partial\mathbf{x}^{0}}$
(i.e., abandon eq. (14b)). For an MLP, we have given their closed-form
expressions in eqs. (9) and (14), corresponding to Steps 2$\sim$4 in Algorithm
1. For a general network architecture, we can simply compute $\mathbf{U}$ and
$\mathbf{V}$ by AutoDiff using the first $(L-1)$ layers based on their
generalised definitions.
* 2.
Higher-order linear PDEs. It is straightforward to show that the out-layer-
hyperplane exists for an $n$-th order linear PDE given $C^{n}$ activation
functions. For example, a general third-order linear PDE will require an extra
term $\Omega_{mnr}u_{,mnr}$ in eq. (15), with $\Omega_{mnr}$ being known PDE
coefficients; this term will add $\Omega_{mnr}T_{imnr}$ to the hyperplane
coefficients ${\psi_{i}}$ in eq. (17), where
$\mathbf{T}:=\frac{\partial}{\partial\mathbf{x}^{0}}\frac{\partial}{\partial\mathbf{x}^{0}}\frac{\partial\mathbf{x}^{L-1}}{\partial\mathbf{x}^{0}}$,
and $\mathbf{T}\neq\mathbf{0}$ with $C^{3}$ or above activation functions.
* 3.
Vector-valued linear PDEs. Let us consider a general vector-valued, second-
order linear PDE with solution $\mathbf{u}(\mathbf{x}^{0})\in\mathbb{R}^{3}$,
formulated as
$\Theta_{ijmn}u_{j,mn}+\Pi_{ijm}u_{j,m}+\Lambda_{ij}u_{j}+\alpha_{i}=0,\quad
i\in\\{1,2,3\\},$ (20)
which is the $\mathbb{R}^{3}$ version of eq. (15). Consequently, one can show
that eq. (16) will be generalised to
$\Psi_{ijk}W_{jk}^{L}+\Lambda_{ij}b^{L}_{j}+\alpha_{i}=0,\quad
i\in\\{1,2,3\\},$ (21)
where
$\Psi_{ijk}=\Theta_{ijmn}V_{kmn}+\Pi_{ijm}U_{km}+\Lambda_{ij}x_{k}^{L-1}$,
with $\mathbf{U}$ and $\mathbf{V}$ remaining the same as in eqs. (9) and (14).
Evidently, the above linear system states that the PDE can be enforced if the
weights and biases of the output layer, $\mathbf{W}^{L}\in\mathbb{R}^{3\times
d^{L-1}}$ and $\mathbf{b}^{L}\in\mathbb{R}^{3}$, lie on three coupled
hyperplanes, which can be realised by parametric equations similar to eq.
(18).
* 4.
Nonlinear PDEs. It is not difficult to guess that, to satisfy a nonlinear PDE,
the weights of the output layer must lie on a hypersurface. This is true. For
example, consider $(u_{,i}u)$, a common nonlinear term that appears in many
PDEs such as Navier–Stokes and Bateman-Burgers [33]. It will introduce many
quadratic terms about $\mathbf{w}^{L}$ into eq. (16), namely,
$A_{ijk}w_{j}^{L}w_{k}^{L}$, where $A_{ijk}=U_{ij}x_{k}^{L-1}$. For a general
nonlinear PDE, there is no universal way to parameterise its out-layer-
hypersurface. For certain PDEs, however, a closed-form parameterisation
similar to eq. (18) may be found. At least, concerning second- and third-order
nonlinear PDEs, the parametric equations for the generalised quadratic and
cubic hypersurfaces have been well established [34].
[Figure 3 diagram omitted: an MLP with a 3-neuron input layer, two hidden
layers and a scalar output $\tilde{u}$. AutoDiff yields
$\frac{\partial\tilde{u}}{\partial\mathbf{x}^{0}}$,
$\frac{\partial}{\partial\mathbf{x}^{0}}\frac{\partial\tilde{u}}{\partial\mathbf{x}^{0}}$,
$\cdots$ for the PDE, IC and BC losses, alongside the data loss. The PDE
coefficients pass through Steps 1$\sim$5 of Algorithm 1 to give the basis
vectors $\mathbf{g}^{[1]},\mathbf{g}^{[2]},\mathbf{g}^{[3]},\mathbf{g}^{[4]}$,
from which the trainables $\lambda^{[2]},\lambda^{[3]},\lambda^{[4]}$
reconstruct $\mathbf{w}^{2}$.]
Figure 3: Forward pass through an MLP ($L=3$) equipped with the out-layer-
hyperplane. In an ordinary MLP, the four elements of $\mathbf{w}^{2}$
(coloured in orange) are trained. Using the out-layer-hyperplane, the three
$\lambda$’s (coloured in pink) become the new trainable parameters, which
always yield a $\mathbf{w}^{2}$ lying on the out-layer-hyperplane so that
$\tilde{u}$ always satisfies the target linear PDE.
Algorithm 1: Forward pass through an MLP equipped with the out-layer-hyperplane.
Input: $\mathbf{x}^{0}\in\mathbb{R}^{d^{0}}$.
Output: $\tilde{u}\in\mathbb{R}$.
Network parameters (for backprop): $\mathbf{W}^{k}\in\mathbb{R}^{d^{k}\times
d^{k-1}}$ and $\mathbf{b}^{k}\in\mathbb{R}^{d^{k}}$,
$k\in\\{1,2,\cdots,L-1\\}$; $\lambda^{[p]}\in\mathbb{R}$,
$p\in\\{2,3,\cdots,d^{L-1}\\}$.
PDE coefficients: $\alpha\in\mathbb{R}$, $\beta\in\mathbb{R}$,
$\bm{\upgamma}\in\mathbb{R}^{d^{0}}$ and
$\mathbf{\Gamma}\in\mathbb{R}^{d^{0}\times d^{0}}$.
1:Perform ordinary forward pass until the output layer, obtaining
$\mathbf{x}^{L-1}\in\mathbb{R}^{d^{L-1}}$ and
$\mathbf{z}^{k}\in\mathbb{R}^{d^{k}}$, $k\in\\{1,2,\cdots,L-1\\}$; #
$\mathbf{z}^{k}$’s are usually unsaved in an ordinary MLP, but we need them to
compute the first and second derivatives of the activation functions;
2:Compute $\mathbf{F}^{k}\in\mathbb{R}^{d^{k}\times d^{k}}$ and
$\mathbf{s}^{k}\in\mathbb{R}^{d^{k}}$, $k\in\\{1,2,\cdots,L-1\\}$ by eqs. (5)
and (12);
3:Compute $\mathbf{P}^{k}\in\mathbb{R}^{d^{L-1}\times d^{k}}$ and
$\mathbf{Q}^{k}\in\mathbb{R}^{d^{k}\times d^{0}}$, $k\in\\{1,2,\cdots,L-1\\}$
by eqs. (7) and (8);
4:Compute $\mathbf{U}\in\mathbb{R}^{d^{L-1}\times d^{0}}$ and
$\mathbf{V}\in\mathbb{R}^{d^{L-1}\times d^{0}\times d^{0}}$ by eqs. (9) and
(14);
5:Compute $\bm{\uppsi}\in\mathbb{R}^{d^{L-1}}$ by eq. (17) and then
$\mathbf{g}^{[p]}\in\mathbb{R}^{d^{L-1}}$, $p\in\\{1,2,\cdots,d^{L-1}\\}$ by
(19); # this is where the PDE coefficients are used;
6:Reconstruct $\mathbf{w}^{L}\in\mathbb{R}^{d^{L-1}}$ by (18); # this is
where the network parameters $\lambda^{[p]}$’s are used;
7:Return $\tilde{u}=\mathbf{w}^{L}\mathbf{x}^{L-1}+b^{L}$.
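Algorithm 1 can be condensed into a short NumPy sketch; the toy tanh MLP below (our own dimensions, seed and coefficients, $\beta=0$; not the repository implementation) runs Steps 1 through 6 and confirms that the PDE residual $\Gamma_{mn}\tilde{u}_{,mn}+\gamma_{m}\tilde{u}_{,m}+\alpha$ vanishes at the input point by construction.

```python
import numpy as np

# End-to-end sketch of Algorithm 1 (beta = 0) on a toy tanh MLP (L = 3).
rng = np.random.default_rng(4)
d0, d1, d2 = 3, 5, 4
W1 = rng.standard_normal((d1, d0)); b1 = rng.standard_normal(d1)
W2 = rng.standard_normal((d2, d1)); b2 = rng.standard_normal(d2)
Gamma = rng.standard_normal((d0, d0)); Gamma = Gamma + Gamma.T
gamma = rng.standard_normal(d0)
alpha = 1.0

x0 = rng.standard_normal(d0)
z1 = W1 @ x0 + b1; x1 = np.tanh(z1)                  # Step 1
z2 = W2 @ x1 + b2; x2 = np.tanh(z2)
F1, F2 = np.diag(1 - x1**2), np.diag(1 - x2**2)      # Step 2, eqs. (5), (12)
s1, s2 = -2 * x1 * (1 - x1**2), -2 * x2 * (1 - x2**2)
P1, Q1 = F2 @ W2, W1                                 # Step 3, eqs. (7)-(8)
P2, Q2 = np.eye(d2), W2 @ F1 @ W1
U = F2 @ W2 @ F1 @ W1                                # Step 4, eq. (9)
V = (np.einsum('ij,j,jm,jn->imn', P1, s1, Q1, Q1)
     + np.einsum('ij,j,jm,jn->imn', P2, s2, Q2, Q2)) # eq. (14b)
psi = np.einsum('mn,imn->i', Gamma, V) + U @ gamma   # Step 5, eq. (17)
g = np.zeros((d2, d2))                               # eq. (19)
g[0] = -alpha * psi / (psi @ psi)
for p in range(1, d2):
    g[p, 0], g[p, p] = -psi[p], psi[0]
lam = rng.standard_normal(d2 - 1)
wL = g[0] + lam @ g[1:]                              # Step 6, eq. (18)

H = np.einsum('i,imn->mn', wL, V)                    # eq. (13)
residual = np.einsum('mn,mn->', Gamma, H) + gamma @ (wL @ U) + alpha
assert abs(residual) < 1e-8                          # PDE satisfied exactly
```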
## 5 Validation by code
To support some of the key statements in this paper, we provide a lightweight
code repository (https://github.com/stfc-sciml/PINN-PDE-compatibility) in
which the following three standalone PyTorch [35] scripts can be found:
* 1.
relu_causes_zero_hessian.py verifies that a ReLU-based MLP always leads to
a vanishing Hessian;
* 2.
piecewise_linear_by_relu.py plots the piecewise linear functions generated by
some random ReLU-based MLPs, as a support to our sketch in Figure 2(b);
* 3.
out_layer_hyperplane.py implements the out-layer-hyperplane in an MLP for a 2D
inhomogeneous wave equation on a membrane:
$u_{xx}+u_{yy}-u_{tt}-u_{t}=\sin(x+y-t)$, following Algorithm 1.
## 6 Conclusions
In physics-informed learning, it is crucial that the function space granted by
a neural network, $\mathcal{F}_{\mathrm{NN}}$, is compatible with that
required by the target PDE, $\mathcal{F}_{\mathrm{PDE}}$. We have proved that
an MLP with only ReLU-like activation functions will always lead to a
vanishing Hessian, so it is incompatible with any second- or higher-order PDE,
namely, $\mathcal{F}_{\mathrm{NN}}\cap\mathcal{F}_{\mathrm{PDE}}=\emptyset$.
Therefore, we argue that ReLU should not be used in a vanilla PINN. This
has led us to a simple method to make $\mathcal{F}_{\mathrm{NN}}$ fully
compatible with $\mathcal{F}_{\mathrm{PDE}}$, namely,
$\mathcal{F}_{\mathrm{NN}}\subseteq\mathcal{F}_{\mathrm{PDE}}$. We have proved
that, using sufficiently smooth activation functions, a neural network tailed
by a fully-connected hidden layer can strictly satisfy any high-order linear
PDE when the weights of its output layer lie on a certain hyperplane, which we
call the out-layer-hyperplane. A closed-form expression of this hyperplane has been
given for second-order linear PDEs. To the best of our knowledge, this should
be the first PINN architecture that enforces point-wise correctness of PDEs.
Our future work will be focused on the efficacy and performance of the out-
layer-hyperplane for different types of architectures and PDEs.
## Acknowledgement
This work is supported by the EPSRC grant, Blueprinting for AI for Science at
Exascale (BASE-II, EP/X019918/1), which is Phase II of the Benchmarking for AI
for Science at Exascale (BASE) grant.
## References
* [1] M. Raissi, P. Perdikaris, G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational physics 378 (2019) 686–707.
* [2] G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, L. Yang, Physics-informed machine learning, Nature Reviews Physics 3 (6) (2021) 422–440.
* [3] S. Cuomo, V. S. Di Cola, F. Giampaolo, G. Rozza, M. Raissi, F. Piccialli, Scientific machine learning through physics-informed neural networks: Where we are and what’s next, Journal of Scientific Computing 92 (2022) 88.
* [4] L. Lu, X. Meng, Z. Mao, G. E. Karniadakis, Deepxde: A deep learning library for solving differential equations, SIAM Review 63 (1) (2021) 208–228.
* [5] Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, A. Stuart, K. Bhattacharya, A. Anandkumar, Multipole graph neural operator for parametric partial differential equations, Advances in Neural Information Processing Systems 33 (2020) 6755–6766.
* [6] L. Lu, P. Jin, G. Pang, Z. Zhang, G. E. Karniadakis, Learning nonlinear operators via deeponet based on the universal approximation theorem of operators, Nature Machine Intelligence 3 (3) (2021) 218–229.
* [7] X. Meng, G. E. Karniadakis, A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse pde problems, Journal of Computational Physics 401 (2020) 109020.
* [8] B. Moseley, A. Markham, T. Nissen-Meyer, Finite basis physics-informed neural networks (fbpinns): a scalable domain decomposition approach for solving differential equations, arXiv preprint arXiv:2107.07871 (2021).
* [9] L. Yang, X. Meng, G. E. Karniadakis, B-pinns: Bayesian physics-informed neural networks for forward and inverse pde problems with noisy data, Journal of Computational Physics 425 (2021) 109913.
* [10] M. Kasim, D. Watson-Parris, L. Deaconu, S. Oliver, P. Hatfield, D. Froula, G. Gregori, M. Jarvis, S. Khatiwala, J. Korenaga, et al., Building high accuracy emulators for scientific simulations with deep neural architecture search, Machine Learning: Science and Technology 3 (1) (2021) 015013.
* [11] Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, A. Anandkumar, Fourier neural operator for parametric partial differential equations, arXiv preprint arXiv:2010.08895 (2020).
* [12] M. Tancik, P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan, U. Singhal, R. Ramamoorthi, J. Barron, R. Ng, Fourier features let networks learn high frequency functions in low dimensional domains, Advances in Neural Information Processing Systems 33 (2020) 7537–7547.
* [13] Z. Liu, W. Cai, Z.-Q. J. Xu, Multi-scale deep neural network (mscalednn) for solving poisson-boltzmann equation in complex domains, Communications in Computational Physics 28 (5) (2020) 1970–2001.
* [14] W. Cai, X. Li, L. Liu, A phase shift deep neural network for high frequency approximation and wave problems, SIAM Journal on Scientific Computing 42 (5) (2020) A3285–A3312.
* [15] E. Kharazmi, Z. Zhang, G. E. Karniadakis, hp-vpinns: Variational physics-informed neural networks with domain decomposition, Computer Methods in Applied Mechanics and Engineering 374 (2021) 113547.
* [16] B. Ronen, D. Jacobs, Y. Kasten, S. Kritchman, The convergence rate of neural networks for learned functions of different frequencies, Advances in Neural Information Processing Systems 32 (2019).
* [17] A. D. Jagtap, K. Kawaguchi, G. Em Karniadakis, Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks, Proceedings of the Royal Society A 476 (2239) (2020) 20200334.
* [18] Z. Mao, A. D. Jagtap, G. E. Karniadakis, Physics-informed neural networks for high-speed flows, Computer Methods in Applied Mechanics and Engineering 360 (2020) 112789.
* [19] J. Yu, L. Lu, X. Meng, G. E. Karniadakis, Gradient-enhanced physics-informed neural networks for forward and inverse pde problems, Computer Methods in Applied Mechanics and Engineering 393 (2022) 114823.
* [20] S. Wang, X. Yu, P. Perdikaris, When and why pinns fail to train: A neural tangent kernel perspective, Journal of Computational Physics 449 (2022) 110768.
* [21] S. Mishra, R. Molinaro, Estimates on the generalization error of physics-informed neural networks for approximating a class of inverse problems for pdes, IMA Journal of Numerical Analysis 42 (2) (2022) 981–1022.
* [22] H. Sheng, C. Yang, Pfnn: A penalty-free neural network method for solving a class of second-order boundary-value problems on complex geometries, Journal of Computational Physics 428 (2021) 110085.
* [23] P. L. Lagari, L. H. Tsoukalas, S. Safarkhani, I. E. Lagaris, Systematic construction of neural forms for solving partial differential equations inside rectangular domains, subject to initial, boundary and interface conditions, International Journal on Artificial Intelligence Tools 29 (05) (2020) 2050009.
* [24] S. Dong, N. Ni, A method for representing periodic functions and enforcing exactly periodic boundary conditions with deep neural networks, Journal of Computational Physics 435 (2021) 110242.
* [25] M. Mattheakis, P. Protopapas, D. Sondak, M. Di Giovanni, E. Kaxiras, Physical symmetries embedded in neural networks, arXiv preprint arXiv:1904.08991 (2019).
* [26] J. Ling, A. Kurzawski, J. Templeton, Reynolds averaged turbulence modelling using deep neural networks with embedded invariance, Journal of Fluid Mechanics 807 (2016) 155–166.
* [27] S. Markidis, The old and the new: Can physics-informed deep-learning replace traditional linear solvers?, Frontiers in big Data (2021) 92.
* [28] B. Moseley, A. Markham, T. Nissen-Meyer, Solving the wave equation with physics-informed deep learning, arXiv preprint arXiv:2006.11894 (2020).
* [29] J. He, L. Li, J. Xu, C. Zheng, Relu deep neural networks and linear finite elements, Journal of Computational Mathematics 38 (3) (2020) 502–527.
* [30] H. Igel, Computational seismology: a practical introduction, Oxford University Press, 2017.
* [31] S. Shalev-Shwartz, S. Ben-David, Understanding machine learning: From theory to algorithms, Cambridge University Press, 2014.
* [32] R. J. Gladstone, M. A. Nabian, H. Meidani, Fo-pinns: A first-order formulation for physics informed neural networks, arXiv preprint arXiv:2210.14320 (2022).
* [33] L. Debnath, L. Debnath, Nonlinear partial differential equations for scientists and engineers, Springer, 2005.
* [34] C. Bajaj, Quadric and cubic hypersurface parameterization, Tech. rep., Purdue University (1989).
* [35] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al., Pytorch: An imperative style, high-performance deep learning library, Advances in neural information processing systems 32 (2019).
# Set-theoretical solutions to the Hom-Yang-Baxter equation and Hom-cycle sets
Kaiqiang Zhang and Xiankun Du K. Zhang: School of Mathematics, Jilin
University, Changchun 130012, China, <EMAIL_ADDRESS>. X. Du: School of
Mathematics, Jilin University, Changchun 130012, China, <EMAIL_ADDRESS>.
###### Abstract.
Set-theoretic solutions to the Yang-Baxter equation have been studied
extensively by means of related algebraic systems such as cycle sets and
braces, dynamical versions of which have also been developed. To date, no work
has focused on set-theoretic solutions to the Hom-Yang-Baxter equation (HYBE
for short). This paper investigates set-theoretic solutions to HYBE and the
associated algebraic systems, called Hom-cycle sets. We characterize left non-degenerate
involutive set-theoretic solutions to HYBE and Hom-cycle sets, and establish
their relations. We discuss connections among Hom-cycle sets, cycle sets, left
non-degenerate involutive set-theoretic solutions to HYBE and the Yang-Baxter
equation.
###### Key words and phrases:
cycle set, Hom-cycle set, Hom-Yang-Baxter equation, left quasigroup, set-
theoretic solution
###### 1991 Mathematics Subject Classification:
16T25, 20N02, 20N05
Corresponding author: X. Du
## 1\. Introduction
Let $V$ be a vector space. A solution to the Yang-Baxter equation (YBE
shortly) is a linear map $R:V\otimes V\rightarrow V\otimes V$ such that
$(R\otimes\operatorname{id}_{V})(\operatorname{id}_{V}\otimes
R)(R\otimes\operatorname{id}_{V})=(\operatorname{id}_{V}\otimes
R)(R\otimes\operatorname{id}_{V})(\operatorname{id}_{V}\otimes R).$
YBE first appeared in the work of Yang [39] and Baxter [1], and is fundamental
to quantum groups. Determining all solutions to YBE is a central task, but it
is difficult to accomplish. In order to find new solutions to YBE, Drinfeld
[12] in 1992 suggested considering set-theoretic solutions to YBE, that is, a
map $r:X\times X\rightarrow X\times X$, where $X$ is a nonempty set,
satisfying
$(r\times\operatorname{id}_{X})(\operatorname{id}_{X}\times
r)(r\times\operatorname{id}_{X})=(\operatorname{id}_{X}\times
r)(r\times\operatorname{id}_{X})(\operatorname{id}_{X}\times r).$ (1.1)
Etingof, Schedler and Soloviev [16], Gateva-Ivanova and Van den Bergh
[20], and Lu, Yan and Zhu [26] initiated a systematic study of this subject.
They studied set-theoretic solutions with invertibility, non-degeneracy and
involutivity by using group theory. Gateva-Ivanova [17] introduced a
combinatorial approach to set-theoretic solutions and conjectured that every
square-free, non-degenerate involutive set-theoretic solution $(X,r)$ with $X$
finite is decomposable. This conjecture was proved by Rump [30].
Left cycle sets were introduced by Rump [30] to study left non-degenerate
involutive set-theoretic solutions to YBE. Rump showed that there is a
bijective correspondence between left non-degenerate involutive set-theoretic
solutions to YBE and left cycle sets, under which non-degenerate solutions
correspond to non-degenerate left cycle sets. He also proved that all finite
left cycle sets are non-degenerate. The theory of cycle sets has proved very
useful in understanding the structure of solutions to YBE (see for example [2,
3, 4, 5, 7, 6, 10, 25]). This theory has been greatly developed, and has
inspired the theory of braces [8, 18, 21, 31, 36].
Another version of YBE, the dynamical quantum Yang-Baxter equation, has been
studied [13, 14]; it is closely related to dynamical quantum groups [15].
Its set-theoretic solutions, called DYB maps, were proposed by Shibukawa in
[34] and have received a lot of attention (see for example [24, 28, 35, 37]).
Dynamical braces and dynamical cycle sets were introduced and related to right
non-degenerate unitary DYB maps [27, 32].
The Hom-Yang-Baxter equation (HYBE for short) was introduced by Yau [40],
motivated by Hom-Lie algebras, and is related to Hom-quantum groups [42].
Many researchers have devoted considerable attention to HYBE (see for example
[9, 23, 29, 38, 41, 43]). However, to our knowledge, no work concentrates on
set-theoretic solutions to HYBE.
The aim of this paper is to investigate left non-degenerate involutive set-
theoretic solutions to HYBE, the corresponding algebraic systems, called
Hom-cycle sets, and the relationship between them.
The paper is organized as follows. In Section 2, we review some basic
definitions and results, and provide a general categorical framework for the
following discussion. In Section 3, we characterize left non-degenerate
involutive set-theoretic solutions to HYBE. In Section 4, we introduce the
notion of a Hom-cycle set, and prove that there is a one-to-one
correspondence between left Hom-cycle sets and left non-degenerate involutive
set-theoretic solutions to HYBE. Section 5 is devoted to the relationships
among Hom-cycle sets, cycle sets, and left non-degenerate involutive
solutions to HYBE and YBE.
## 2\. Preliminaries
Let $X$ be a nonempty set and let $r:X\times X\rightarrow X\times X$ be a map.
We will write $r(x,y)=(\lambda_{x}(y),\rho_{y}(x))$, where $\lambda_{x}$ and
$\rho_{y}$ are maps from $X$ to itself for all $x,y\in X$. The pair $(X,r)$ is
referred to as a quadratic set in [17].
A quadratic set $(X,r)$ (or a map $r$) is called
1. (1)
left (respectively, right) non-degenerate if the map $\lambda_{x}$
(respectively, $\rho_{x}$) is bijective for all $x\in X$;
2. (2)
non-degenerate if $r$ is both left and right non-degenerate;
3. (3)
involutive if $r^{2}=\operatorname{id}$, the identity map;
4. (4)
a set-theoretic solution to YBE if $r$ satisfies (1.1).
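To illustrate these notions on a toy example of our own (not from the paper), take $X=\mathbb{Z}/5\mathbb{Z}$ and $r(x,y)=(y+1,x-1)$ modulo $5$, so that $\lambda_{x}(y)=y+1$ and $\rho_{y}(x)=x-1$; the sketch below checks non-degeneracy and involutivity exhaustively.

```python
n = 5
X = range(n)

def lam(x, y):           # lambda_x(y) = y + 1 (mod n), independent of x
    return (y + 1) % n

def rho(y, x):           # rho_y(x) = x - 1 (mod n), independent of y
    return (x - 1) % n

def r(x, y):
    return (lam(x, y), rho(y, x))

# left and right non-degenerate: every lambda_x and every rho_y is a bijection
assert all(sorted(lam(x, y) for y in X) == list(X) for x in X)
assert all(sorted(rho(y, x) for x in X) == list(X) for y in X)
# involutive: r^2 = id on X x X
assert all(r(*r(x, y)) == (x, y) for x in X for y in X)
```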
The following lemma comes from [16, Proposition 1.6] (see also [19, Lemma
2.4]).
###### Lemma 2.1.
1. (1)
A quadratic set $(X,r)$ is involutive if and only if
$\displaystyle\lambda_{\lambda_{x}(y)}\rho_{y}(x)=x,$ (2.1)
$\displaystyle\rho_{\rho_{x}(y)}\lambda_{y}(x)=x,$ (2.2)
for all $x,y\in X$.
2. (2)
A quadratic set $(X,r)$ is left non-degenerate and involutive if and only if
$\lambda_{x}$ is bijective for all $x\in X$ and
$\rho_{y}(x)=\lambda_{\lambda_{x}(y)}^{-1}(x),$ (2.3)
for all $x,y\in X$.
###### Theorem 2.2.
[16, Proposition 1.6] A quadratic set $(X,r)$ is a set-theoretic solution to
YBE if and only if
1. (1)
$\lambda_{\lambda_{x}(y)}\lambda_{\rho_{y}(x)}(z)=\lambda_{x}\lambda_{y}(z)$,
2. (2)
$\rho_{\lambda_{\rho_{y}(x)}(z)}\lambda_{x}(y)=\lambda_{\rho_{\lambda_{y}(z)}(x)}\rho_{z}(y)$,
3. (3)
$\rho_{z}\rho_{y}(x)=\rho_{\rho_{z}(y)}\rho_{\lambda_{y}(z)}(x)$,
for all $x,y,z\in X$.
The following theorem comes from [8, Proposition 2] (see also [22, Theorem
9.3.10]).
###### Theorem 2.3.
A quadratic set $(X,r)$ is a left non-degenerate involutive set-theoretic
solution to YBE if and only if the following hold.
1. (1)
$\lambda_{x}$ is bijective for all $x\in X$;
2. (2)
$r^{2}=\operatorname{id}$;
3. (3)
$\lambda_{x}\lambda_{\lambda_{x}^{-1}(y)}=\lambda_{y}\lambda_{\lambda_{y}^{-1}(x)}$
for all $x,y\in X$.
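Condition (3) of Theorem 2.3 can be checked numerically; the sketch below (our illustration) does so for the shift solution $\lambda_{x}(y)=y+1$ on $\mathbb{Z}/5\mathbb{Z}$, where both sides of the identity reduce to $z\mapsto z+2$.

```python
n = 5
X = range(n)
lam = lambda x, y: (y + 1) % n      # lambda_x(y); independent of x here
lam_inv = lambda x, y: (y - 1) % n  # lambda_x^{-1}(y)

# Theorem 2.3(3): lambda_x lambda_{lambda_x^{-1}(y)} = lambda_y lambda_{lambda_y^{-1}(x)}
for x in X:
    for y in X:
        for z in X:
            lhs = lam(x, lam(lam_inv(x, y), z))
            rhs = lam(y, lam(lam_inv(y, x), z))
            assert lhs == rhs
```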
Let $(X,r)$ and $(X^{\prime},r^{\prime})$ be quadratic sets. By a morphism
from $(X,r)$ to $(X^{\prime},r^{\prime})$ we mean a map $f:X\rightarrow
X^{\prime}$ satisfying $(f\times f)r=r^{\prime}(f\times f)$.
###### Lemma 2.4.
Given two quadratic sets $(X,r)$ and $(X^{\prime},r^{\prime})$, and a map
$f:X\rightarrow X^{\prime}$, the following are equivalent:
1. (1)
$f$ is a morphism of quadratic sets;
2. (2)
$f\lambda_{x}=\lambda_{f(x)}^{\prime}f~{}\text{and}~{}f\rho_{x}=\rho_{f(x)}^{\prime}f~{}\text{for
all}~{}x\in X$.
If $r$ and $r^{\prime}$ are both left non-degenerate and involutive, then both
conditions above are equivalent to each of the following conditions:
1. (3)
$f\lambda_{x}=\lambda_{f(x)}^{\prime}f$ for all $x\in X$;
2. (4)
$f\lambda_{x}^{-1}=\lambda_{f(x)}^{\prime-1}f$ for all $x\in X$.
By a Hom-quadratic set we mean a triple $(X,r,\alpha)$ of a nonempty set $X$
with two maps $r:X\times X\to X\times X$ and $\alpha:X\to X$ such that
$r(\alpha\times\alpha)=(\alpha\times\alpha)r$. Thus a Hom-quadratic set is
exactly a quadratic set with an endomorphism.
We will identify the quadratic set $(X,r)$ with the Hom-quadratic set
$(X,r,\operatorname{id})$.
Given two Hom-quadratic sets $(X,r,\alpha)$ and
$(X^{\prime},r^{\prime},\alpha^{\prime})$, a map $f:X\rightarrow X^{\prime}$
is called a morphism of Hom-quadratic sets if
$(f\times f)r=r^{\prime}(f\times f)~{}\text{and}~{}f\alpha=\alpha^{\prime}f.$
Thus a morphism of Hom-quadratic sets is exactly a morphism of quadratic sets
satisfying $f\alpha=\alpha^{\prime}f$.
###### Corollary 2.5.
Given a quadratic set $(X,r)$ and a map $\alpha:X\to X$, the triple
$(X,r,\alpha)$ is a Hom-quadratic set if and only if
$\alpha\lambda_{x}=\lambda_{\alpha(x)}\alpha~{}\text{and}~{}\alpha\rho_{x}=\rho_{\alpha(x)}\alpha~{}\text{for
all}~{}x\in X.$
A Hom-quadratic set $(X,r,\alpha)$ is called left non-degenerate, non-
degenerate, and involutive, respectively, if $r$ has the same properties.
Denote by $\mathsf{QS}$ and $\mathsf{HQS}$ the categories of left non-
degenerate involutive quadratic sets and left non-degenerate involutive Hom-
quadratic sets, respectively. Then $\mathsf{QS}$ is a full subcategory of
$\mathsf{HQS}$ by identifying a quadratic set $(X,r)$ with a Hom-quadratic set
$(X,r,\operatorname{id})$.
By a groupoid we mean a set with a binary operation. For a groupoid $X$,
denote by $\sigma_{x}$ the left multiplication map by $x\in X$ defined by
$\sigma_{x}:X\to X,~{}~{}y\mapsto xy.$
By a left quasigroup we mean a groupoid $X$ such that the left multiplication
maps $\sigma_{x}$ are bijective for all $x\in X$ (see [33, Page 9]).
It should be pointed out that the image of an endomorphism of a left
quasigroup is a left quasigroup, though the image of a homomorphism from a
left quasigroup to a groupoid need not be a left quasigroup (see [33, Page
15] and [33, Corollary 1.298]).
By a left Hom-quasigroup we mean a pair $(X,\alpha)$ of a left quasigroup $X$
with an endomorphism $\alpha$.
We also write a left Hom-quasigroup $(X,\alpha)$ as $(X,\cdot,\alpha)$ to
indicate the operation $\cdot$ of left quasigroup $X$.
We can identify a left quasigroup $X$ with the left Hom-quasigroup
$(X,\operatorname{id})$.
Let $(X,\alpha)$ and $(X^{\prime},\alpha^{\prime})$ be two left Hom-
quasigroups. A map $f:X\to X^{\prime}$ is called a morphism of left Hom-
quasigroups if $f\alpha=\alpha^{\prime}f$ and $f(xy)=f(x)f(y)$ for all
$x,y\in X$.
From Lemma 2.4, we have the following lemma.
###### Lemma 2.6.
Let $(X,r,\alpha)$ and $(X^{\prime},r^{\prime},\alpha^{\prime})$ be left non-
degenerate involutive Hom-quadratic sets, and let $(X,\cdot,\alpha)$ and
$(X^{\prime},*,\alpha^{\prime})$ be left Hom-quasigroups such that $x\cdot
y=\lambda_{x}^{-1}(y)$ for all $x,y\in X$ and
$x^{\prime}*y^{\prime}=\lambda^{\prime-1}_{x^{\prime}}(y^{\prime})$ for all
$x^{\prime},y^{\prime}\in X^{\prime}$. Then a map $f:X\rightarrow X^{\prime}$
is a morphism of Hom-quadratic sets if and only if it is a morphism of left
Hom-quasigroups.
Denote by $\mathsf{QG}$ and $\mathsf{HQG}$ the categories of left quasigroups
and left Hom-quasigroups, respectively. Then $\mathsf{QG}$ is a full
subcategory of $\mathsf{HQG}$ by identifying a left quasigroup $X$ with the
left Hom-quasigroup $(X,\operatorname{id})$.
Given a left non-degenerate involutive Hom-quadratic set $(X,r,\alpha)$, we
get a left Hom-quasigroup $(X,\cdot,\alpha)$, denoted by $G(X,r,\alpha)$,
with the operation defined by $x\cdot y=\lambda_{x}^{-1}(y)$ for all $x,y\in
X$. Then we have a functor $G:\mathsf{HQS}\to\mathsf{HQG}$ by associating
$(X,r,\alpha)$ with $G(X,r,\alpha)$, and a morphism $f$ in $\mathsf{HQS}$ with
$f$.
Conversely, given a left Hom-quasigroup $(X,\cdot,\alpha)$, we get a Hom-
quadratic set $(X,r,\alpha)$, denoted by $S(X,\cdot,\alpha)$, with
$\lambda_{x}(y)=\sigma_{x}^{-1}(y)$ and $\rho_{y}(x)=\sigma_{x}^{-1}(y)\cdot
x$ for all $x,y\in X$. It is routine to verify that $(X,r,\alpha)$ is left
non-degenerate and involutive. Then we have a functor
$S:\mathsf{HQG}\to\mathsf{HQS}$ by associating $(X,\cdot,\alpha)$ with
$S(X,\cdot,\alpha)$ and a morphism $f$ in $\mathsf{HQG}$ with $f$.
###### Theorem 2.7.
The functors $G$ and $S$ are mutually inverse, and so the categories
$\mathsf{HQS}$ and $\mathsf{HQG}$ are isomorphic.
###### Proof.
It is straightforward. ∎
The functor $G$ induces a functor from $\mathsf{QS}$ to $\mathsf{QG}$ and $S$
induces a functor from $\mathsf{QG}$ to $\mathsf{QS}$. We still denote the
induced functors by $G$ and $S$, respectively. By Theorem 2.7, we have the
following corollary.
###### Corollary 2.8.
The functors $G:\mathsf{QS}\to\mathsf{QG}$ and $S:\mathsf{QG}\to\mathsf{QS}$
are mutually inverse, and so the categories $\mathsf{QS}$ and $\mathsf{QG}$
are isomorphic.
Denote by $\mathsf{S_{ybe}}$ the category of left non-degenerate involutive
solutions to YBE. Then $\mathsf{S_{ybe}}$ is a full subcategory of
$\mathsf{QS}$.
A left quasigroup $X$ is called a left cycle set if $(xy)(xz)=(yx)(yz)$ for
all $x,y,z\in X$ [30].
Denote by $\mathsf{CS}$ the category of left cycle sets. Then $\mathsf{CS}$ is
a full subcategory of $\mathsf{QG}$.
By [30, Proposition 1], the functor $G$ induces a functor from
$\mathsf{S_{ybe}}$ to $\mathsf{CS}$ and $S$ induces a functor from
$\mathsf{CS}$ to $\mathsf{S_{ybe}}$. We still denote the induced functors by
$G$ and $S$, respectively. By Corollary 2.8, [30, Proposition 1] can be
restated as follows.
###### Theorem 2.9.
The functors $G:\mathsf{S_{ybe}}\to\mathsf{CS}$ and
$S:\mathsf{CS}\to\mathsf{S_{ybe}}$ are mutually inverse, and so the categories
$\mathsf{S_{ybe}}$ and $\mathsf{CS}$ are isomorphic.
A groupoid is called $\Delta$-bijective if the map $\Delta$ is bijective,
where $\Delta:X\times X\to X\times X,~{}~{}(x,y)\mapsto(xy,yx)$ [2].
The following lemma comes from [2, Lemma 2.10] (see also Lemma 1.28 in Chapter
XIII of [11]).
###### Lemma 2.10.
A groupoid $X$ is $\Delta$-bijective if and only if there exists an operation
$\circ$ on $X$ (called the dual operation) such that
$\displaystyle(x\cdot y)\circ(y\cdot x)=x,$ (2.4) $\displaystyle(x\circ
y)\cdot(y\circ x)=x,$ (2.5)
for all $x,y\in X$. Furthermore, if these conditions hold, then the operation
$\circ$ is unique and the inverse of $\Delta$ is given by $(x,y)\mapsto(x\circ
y,y\circ x)$.
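As an illustration of Lemma 2.10 (our own example), consider $\mathbb{Z}/5\mathbb{Z}$ with the operation $x\cdot y=y-1$; then $\Delta(x,y)=(y-1,x-1)$ is bijective, and the dual operation is $x\circ y=y+1$.

```python
n = 5
X = range(n)
dot = lambda x, y: (y - 1) % n   # the groupoid operation x.y
circ = lambda x, y: (y + 1) % n  # candidate dual operation

# Delta(x, y) = (x.y, y.x) is a bijection of X x X
delta = {(x, y): (dot(x, y), dot(y, x)) for x in X for y in X}
assert len(set(delta.values())) == n * n
# (2.4): (x.y) o (y.x) = x, and (2.5): (x o y).(y o x) = x
assert all(circ(dot(x, y), dot(y, x)) == x for x in X for y in X)
assert all(dot(circ(x, y), circ(y, x)) == x for x in X for y in X)
```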
###### Lemma 2.11.
(see [2, Lemma 2.11]) If a groupoid $X$ is $\Delta$-bijective, then the square
map $q:X\to X,~{}~{}x\mapsto x^{2}$ is invertible.
We will say that a groupoid with extra structure is non-degenerate if the
underlying groupoid is $\Delta$-bijective.
Denote by $\mathsf{ndCS}$ the category of non-degenerate left cycle sets. Then
$\mathsf{ndCS}$ is a full subcategory of $\mathsf{QG}$.
Denote by $\mathsf{ndiS_{ybe}}$ the category of non-degenerate involutive
solutions to YBE. Then $\mathsf{ndiS_{ybe}}$ is a full subcategory of
$\mathsf{QS}$.
By Corollary 2.8 and Theorem 2.9, Proposition 2 in [30] can be restated as
follows.
###### Theorem 2.12.
The functors $G:\mathsf{ndiS_{ybe}}\to\mathsf{ndCS}$ and
$S:\mathsf{ndCS}\to\mathsf{ndiS_{ybe}}$ are mutually inverse, and so the
categories $\mathsf{ndiS_{ybe}}$ and $\mathsf{ndCS}$ are isomorphic.
## 3\. Set-theoretic solutions to the Hom-Yang-Baxter equation
The Hom-Yang-Baxter equation was proposed by Yau [40] motivated by Hom-Lie
algebras.
###### Definition 3.1.
Given a vector space $V$ and two linear maps $R:V\otimes V\rightarrow V\otimes
V$ and $\alpha:V\rightarrow V$, the triple $(V,R,\alpha)$ is called a solution
to the Hom-Yang-Baxter equation, if
1. (1)
$R(\alpha\otimes\alpha)=(\alpha\otimes\alpha)R$, and
2. (2)
$(\alpha\otimes R)(R\otimes\alpha)(\alpha\otimes
R)=(R\otimes\alpha)(\alpha\otimes R)(R\otimes\alpha)$.
By analogy with set-theoretic solutions to YBE, we introduce set-theoretic
solutions to HYBE.
###### Definition 3.2.
Given a nonempty set $X$ and two maps $r:X\times X\rightarrow X\times X$ and
$\alpha:X\rightarrow X$, the triple $(X,r,\alpha)$ is called a set-theoretic
solution to HYBE, if
1. (1)
$r\circ(\alpha\times\alpha)=(\alpha\times\alpha)\circ r$, and
2. (2)
$(\alpha\times r)(r\times\alpha)(\alpha\times r)=(r\times\alpha)(\alpha\times
r)(r\times\alpha)$.
Clearly, $(X,r)$ is a set-theoretic solution to YBE if and only if
$(X,r,\operatorname{id})$ is a set-theoretic solution to HYBE.
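The sketch below (our illustration) checks both conditions of Definition 3.2 for the flip map $\tau(x,y)=(y,x)$ together with a constant map $\alpha$; Example 3.7 below shows that in fact any $\alpha$ works for the flip.

```python
from itertools import product

X = [0, 1, 2]
r = lambda x, y: (y, x)  # the flip map tau
alpha = lambda x: 0      # a constant (hence idempotent) map

# condition (1): r (alpha x alpha) = (alpha x alpha) r
assert all(r(alpha(x), alpha(y)) == tuple(map(alpha, r(x, y)))
           for x, y in product(X, repeat=2))

def axr(t):              # alpha x r acting on X^3
    x, y, z = t
    a, b = r(y, z)
    return (alpha(x), a, b)

def rxa(t):              # r x alpha acting on X^3
    x, y, z = t
    a, b = r(x, y)
    return (a, b, alpha(z))

# condition (2): the Hom-braid relation
assert all(axr(rxa(axr(t))) == rxa(axr(rxa(t))) for t in product(X, repeat=3))
```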
Let $f:X\to X$ be a map and $Y$ a subset of $X$. Denote by $f|_{Y}$ the
restriction of $f$ to $Y$. When there is no ambiguity, we will write $f$ for
the restriction $f|_{Y}$.
By analogy with the relation between set-theoretic solutions to YBE and
solutions to YBE, we have the following theorem, and the proof is immediate.
###### Theorem 3.3.
Let $V$ be a vector space with a basis $X$.
1. (1)
If $(V,R,\alpha)$ is a solution to HYBE such that $R(X\otimes X)\subseteq
X\otimes X$ and $\alpha(X)\subseteq X$, then $(X,R|_{X\otimes X},\alpha|_{X})$
is a set-theoretic solution to HYBE.
2. (2)
Conversely, if $(X,r,\alpha)$ is a set-theoretic solution to HYBE, and
$R:V\otimes V\rightarrow V\otimes V$ and $\bar{\alpha}:V\to V$ are the linear
extensions of $r$ and $\alpha$, respectively, then $(V,R,\bar{\alpha})$ is a
solution to HYBE.
In what follows, a set-theoretic solution is simply called a solution.
###### Lemma 3.4.
A triple $(X,r,\alpha)$ with $r:X\times X\rightarrow X\times X$ and
$\alpha:X\rightarrow X$ is a solution to HYBE if and only if the following
conditions hold for all $x,y,z\in X$,
1. (1)
$\alpha\lambda_{x}=\lambda_{\alpha(x)}\alpha,\text{and}~{}\alpha\rho_{x}=\rho_{\alpha(x)}\alpha$;
2. (2)
$\alpha\lambda_{\alpha(x)}\lambda_{y}=\lambda_{\alpha\lambda_{x}(y)}\lambda_{\rho_{y}(x)}\alpha$;
3. (3)
$\rho_{\lambda_{\rho_{y}(x)}\alpha(z)}\alpha\lambda_{x}(y)=\lambda_{\rho_{\lambda_{y}(z)}\alpha(x)}\alpha\rho_{z}(y)$;
4. (4)
$\alpha\rho_{\alpha(y)}\rho_{x}=\rho_{\alpha\rho_{y}(x)}\rho_{\lambda_{x}(y)}\alpha$.
###### Proof.
It is straightforward. ∎
Theorem 2.2 is a special case of Lemma 3.4.
###### Example 3.5.
A triple $(X,\operatorname{id}_{X\times X},\alpha)$ is a solution to HYBE if
and only if $\alpha^{2}=\alpha$. In this case, it is involutive, but (for
$|X|\geq 2$) neither left nor right non-degenerate.
###### Example 3.6.
Let $(X,r,\alpha)$ be a Hom-quadratic set. If $(X,r)$ is a solution to YBE, then
$(X,(\alpha\times\alpha)r,\alpha)$ is a solution to HYBE, and the converse
holds if additionally $\alpha$ is injective or surjective.
###### Example 3.7.
A triple $(X,\tau,\alpha)$ with $\tau(x,y)=(y,x)$ and arbitrary map
$\alpha:X\to X$ is a non-degenerate involutive solution to HYBE, called a
trivial solution.
###### Example 3.8.
A triple $(X,r,\alpha)$ with $r(x,y)=(f(y),g(x))$, where $f,g$ are maps from
$X$ to itself, is a solution to HYBE if and only if $\alpha,f,g$ commute.
Furthermore, the solution $(X,r,\alpha)$ is left non-degenerate and involutive
if and only if $f$ is bijective, and $g=f^{-1}$.
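A numerical instance of Example 3.8 (the specific maps are our choice): on $\mathbb{Z}/6\mathbb{Z}$ take $f(y)=y+1$, $g=f^{-1}$ and $\alpha(x)=x+2$; all three are translations and hence commute.

```python
n = 6
X = range(n)
f = lambda y: (y + 1) % n      # a bijection of X
g = lambda x: (x - 1) % n      # g = f^{-1}
alpha = lambda x: (x + 2) % n  # commutes with f and g
r = lambda x, y: (f(y), g(x))

# alpha, f, g pairwise commute
assert all(alpha(f(x)) == f(alpha(x)) and alpha(g(x)) == g(alpha(x))
           and f(g(x)) == g(f(x)) for x in X)
# left non-degenerate (lambda_x = f is a bijection) and involutive
assert sorted(f(y) for y in X) == list(X)
assert all(r(*r(x, y)) == (x, y) for x in X for y in X)
```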
We are now in a position to characterize left non-degenerate involutive
solutions to HYBE.
###### Theorem 3.9.
A triple $(X,r,\alpha)$ with $r:X\times X\rightarrow X\times X$ and
$\alpha:X\rightarrow X$ is a left non-degenerate involutive solution to HYBE
if and only if the following conditions hold for all $x,y\in X$,
1. (1)
$\lambda_{x}$ is bijective;
2. (2)
$\rho_{y}(x)=\lambda_{\lambda_{x}(y)}^{-1}(x)$;
3. (3)
$\alpha\lambda_{x}=\lambda_{\alpha(x)}\alpha$;
4. (4)
$\alpha\lambda_{\alpha(x)}\lambda_{\lambda_{x}^{-1}(y)}=\lambda_{\alpha(y)}\lambda_{\lambda_{y}^{-1}(x)}\alpha$;
5. (5)
$\alpha\lambda_{x}\lambda_{\lambda_{\alpha(x)}^{-1}(y)}=\lambda_{\alpha(y)}\lambda_{\lambda_{y}^{-1}\alpha(x)}\alpha$;
6. (6)
$\alpha\lambda_{x}\lambda_{\lambda_{\alpha(x)}^{-1}\alpha(y)}=\lambda_{y}\lambda_{\lambda_{\alpha(y)}^{-1}\alpha(x)}\alpha$.
###### Proof.
($\Rightarrow$) (1) and (2) follow from Lemma 2.1(2). (3) follows from Lemma
3.4(1).
To prove (4), replacing $y$ by $\lambda_{x}^{-1}(y)$ in (2) and Lemma 3.4(2),
we get
$\rho_{\lambda_{x}^{-1}(y)}(x)=\lambda_{y}^{-1}(x)~{}~{}\text{and}~{}~{}\alpha\lambda_{\alpha(x)}\lambda_{\lambda_{x}^{-1}(y)}=\lambda_{\alpha(y)}\lambda_{\rho_{\lambda_{x}^{-1}(y)}(x)}\alpha.$
Then (4) follows.
To prove (5), using (2) we can write Lemma 3.4(3) as
$\lambda_{\lambda_{\alpha\lambda_{x}(y)}\lambda_{\rho_{y}(x)}\alpha(z)}^{-1}\alpha\lambda_{x}(y)=\lambda_{\rho_{\lambda_{y}(z)}\alpha(x)}\alpha\lambda_{\lambda_{y}(z)}^{-1}(y).$
It follows by Lemma 3.4(2) that
$\lambda_{\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\alpha\lambda_{x}(y)=\lambda_{\rho_{\lambda_{y}(z)}\alpha(x)}\alpha\lambda_{\lambda_{y}(z)}^{-1}(y),$
which implies that
$\alpha\lambda_{x}(y)=\lambda_{\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}\lambda_{\rho_{\lambda_{y}(z)}\alpha(x)}\alpha\lambda_{\lambda_{y}(z)}^{-1}(y)$.
Since $\lambda_{y}$ is bijective and $z$ is arbitrary, we can replace
$\lambda_{y}(z)$ by $z$ in the last equation to obtain
$\alpha\lambda_{x}(y)=\lambda_{\alpha\lambda_{\alpha(x)}(z)}\lambda_{\rho_{z}\alpha(x)}\alpha\lambda_{z}^{-1}(y)$.
Thus
$\alpha\lambda_{x}=\lambda_{\alpha\lambda_{\alpha(x)}(z)}\lambda_{\rho_{z}\alpha(x)}\alpha\lambda_{z}^{-1},$
and so
$\alpha\lambda_{x}\lambda_{z}=\lambda_{\alpha\lambda_{\alpha(x)}(z)}\lambda_{\rho_{z}\alpha(x)}\alpha$.
By (2) we have
$\alpha\lambda_{x}\lambda_{z}=\lambda_{\alpha\lambda_{\alpha(x)}(z)}\lambda_{\lambda_{\lambda_{\alpha(x)}(z)}^{-1}\alpha(x)}\alpha,$
in which replacing $z$ by $\lambda_{\alpha(x)}^{-1}(y)$ gives (5).
Now we prove (6). We first claim that the conditions (1) through (5) imply
that
$\alpha\lambda_{x}=\alpha\lambda_{\alpha^{2}(x)}.$ (3.1)
Indeed, replacing $x$ by $\alpha(x)$ in (4), we have
$\alpha\lambda_{\alpha^{2}(x)}\lambda_{\lambda_{\alpha(x)}^{-1}(y)}=\lambda_{\alpha(y)}\lambda_{\lambda_{y}^{-1}\alpha(x)}\alpha,$
which together with (5) gives
$\alpha\lambda_{x}\lambda_{\lambda_{\alpha(x)}^{-1}(y)}=\alpha\lambda_{\alpha^{2}(x)}\lambda_{\lambda_{\alpha(x)}^{-1}(y)}$.
By (1) we get $\alpha\lambda_{x}=\alpha\lambda_{\alpha^{2}(x)}$.
Note that Lemma 3.4(2) implies that
$\lambda_{\rho_{x}(z)}\alpha=\lambda_{\alpha\lambda_{z}(x)}^{-1}\alpha\lambda_{\alpha(z)}\lambda_{x}.$
(3.2)
By (2), we can write Lemma 3.4(4) as
$\displaystyle\alpha\lambda_{\lambda_{\rho_{x}(z)}\alpha(y)}^{-1}\rho_{x}(z)=\lambda_{\lambda_{\rho_{\lambda_{x}(y)}\alpha(z)}\alpha\rho_{y}(x)}^{-1}\rho_{\lambda_{x}(y)}\alpha(z),$
for any $z\in X$. It follows by (2) that
$\displaystyle\alpha\lambda_{\lambda_{\rho_{x}(z)}\alpha(y)}^{-1}\lambda_{\lambda_{z}(x)}^{-1}(z)=\lambda_{\lambda_{\rho_{\lambda_{x}(y)}\alpha(z)}\alpha\lambda_{\lambda_{x}(y)}^{-1}(x)}^{-1}\lambda_{\lambda_{\alpha(z)}\lambda_{x}(y)}^{-1}\alpha(z).$
By using (3.2) and substituting $\lambda_{\rho_{x}(z)}\alpha$ and
$\lambda_{\rho_{\lambda_{x}(y)}\alpha(z)}\alpha$ into the last equation, we
obtain
$\alpha\lambda_{\lambda_{\alpha\lambda_{z}(x)}^{-1}\alpha\lambda_{\alpha(z)}\lambda_{x}(y)}^{-1}\lambda_{\lambda_{z}(x)}^{-1}(z)=\lambda_{\lambda_{\alpha\lambda_{\alpha(z)}\lambda_{x}(y)}^{-1}\alpha\lambda_{\alpha^{2}(z)}(x)}^{-1}\lambda_{\lambda_{\alpha(z)}\lambda_{x}(y)}^{-1}\alpha(z).$
Thus by (3.1), we have
$\alpha\lambda_{\lambda_{\alpha\lambda_{z}(x)}^{-1}\alpha\lambda_{\alpha(z)}\lambda_{x}(y)}^{-1}\lambda_{\lambda_{z}(x)}^{-1}(z)=\lambda_{\lambda_{\alpha\lambda_{\alpha(z)}\lambda_{x}(y)}^{-1}\alpha\lambda_{z}(x)}^{-1}\lambda_{\lambda_{\alpha(z)}\lambda_{x}(y)}^{-1}\alpha(z).$
Replacing $x$ by $\lambda_{z}^{-1}(x)$ in the previous equation, we have
$\alpha\lambda_{\lambda_{\alpha(x)}^{-1}\alpha\lambda_{\alpha(z)}\lambda_{\lambda_{z}^{-1}(x)}(y)}^{-1}\lambda_{x}^{-1}(z)=\lambda_{\lambda_{\alpha\lambda_{\alpha(z)}\lambda_{\lambda_{z}^{-1}(x)}(y)}^{-1}\alpha(x)}^{-1}\lambda_{\lambda_{\alpha(z)}\lambda_{\lambda_{z}^{-1}(x)}(y)}^{-1}\alpha(z).$
Noting that $\lambda_{\alpha(z)}\lambda_{\lambda_{z}^{-1}(x)}(y)$ is an
arbitrary element of $X$ as $y$ varies, we may simply denote it by $y$. Then
the last equation can be written as
$\alpha\lambda_{\lambda_{\alpha(x)}^{-1}\alpha(y)}^{-1}\lambda_{x}^{-1}(z)=\lambda_{\lambda_{\alpha(y)}^{-1}\alpha(x)}^{-1}\lambda_{y}^{-1}\alpha(z),$
that is,
$\alpha\lambda_{\lambda_{\alpha(x)}^{-1}\alpha(y)}^{-1}\lambda_{x}^{-1}=\lambda_{\lambda_{\alpha(y)}^{-1}\alpha(x)}^{-1}\lambda_{y}^{-1}\alpha$,
which implies
$\alpha\lambda_{x}\lambda_{\lambda_{\alpha(x)}^{-1}\alpha(y)}=\lambda_{y}\lambda_{\lambda_{\alpha(y)}^{-1}\alpha(x)}\alpha.$
This proves (6).
($\Leftarrow$) The nondegeneracy and involutivity of $(X,r,\alpha)$ follows
from (1) and (2) by Lemma 2.1(2). We now prove that $(X,r,\alpha)$ satisfies
the four conditions in Lemma 3.4.
Lemma 3.4(1) follows from Lemma 2.4.
By (2) and (4), we have
$\lambda_{\alpha\lambda_{x}(y)}\lambda_{\rho_{y}(x)}\alpha=\lambda_{\alpha\lambda_{x}(y)}\lambda_{\lambda_{\lambda_{x}(y)}^{-1}(x)}\alpha=\alpha\lambda_{\alpha(x)}\lambda_{\lambda_{x}^{-1}\lambda_{x}(y)}=\alpha\lambda_{\alpha(x)}\lambda_{y},$
which proves Lemma 3.4(2).
To prove Lemma 3.4(3), replacing $x$ by $\alpha(x)$ and $y$ by
$\lambda_{y}(z)$ in Lemma 3.4(2) and using (3.1) we get
$\lambda_{\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}\lambda_{\rho_{\lambda_{y}(z)}\alpha(x)}\alpha=\alpha\lambda_{\alpha^{2}(x)}\lambda_{\lambda_{y}(z)}=\alpha\lambda_{x}\lambda_{\lambda_{y}(z)}.$
It follows that
$\lambda_{\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\alpha\lambda_{x}=\lambda_{\rho_{\lambda_{y}(z)}\alpha(x)}\alpha\lambda_{\lambda_{y}(z)}^{-1}.$
(3.3)
Thus by Lemma 3.4(2), (3.3) and (2), we have
$\displaystyle\rho_{\lambda_{\rho_{y}(x)}\alpha(z)}\alpha\lambda_{x}(y)$
$\displaystyle=\lambda_{\lambda_{\alpha\lambda_{x}(y)}\lambda_{\rho_{y}(x)}\alpha(z)}^{-1}\alpha\lambda_{x}(y)$
$\displaystyle=\lambda_{\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\alpha\lambda_{x}(y)$
$\displaystyle=\lambda_{\rho_{\lambda_{y}(z)}\alpha(x)}\alpha\lambda_{\lambda_{y}(z)}^{-1}(y)$
$\displaystyle=\lambda_{\rho_{\lambda_{y}(z)}(\alpha(x))}\alpha\rho_{z}(y),$
which proves Lemma 3.4(3).
We now prove Lemma 3.4(4). By Lemma 3.4(2), we have
$\displaystyle\lambda_{\rho_{y}(x)}\alpha$
$\displaystyle=\lambda_{\alpha\lambda_{x}(y)}^{-1}\alpha\lambda_{\alpha(x)}\lambda_{y}.$
(3.4)
And by (3.3), we have
$\displaystyle\lambda_{\rho_{\lambda_{y}(z)}\alpha(x)}\alpha$
$\displaystyle=\lambda_{\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\alpha\lambda_{x}\lambda_{\lambda_{y}(z)}.$
(3.5)
Then using (2) and (3.4) we obtain that
$\displaystyle\alpha\rho_{\alpha(z)}\rho_{y}(x)$
$\displaystyle=\alpha\lambda_{\lambda_{\rho_{y}(x)}\alpha(z)}^{-1}\rho_{y}(x)$
$\displaystyle=\alpha\lambda_{\lambda_{\alpha\lambda_{x}(y)}^{-1}\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\lambda_{\lambda_{x}(y)}^{-1}(x).$
(3.6)
By (2) and (3.5) we get
$\displaystyle\rho_{\alpha\rho_{z}(y)}\rho_{\lambda_{y}(z)}\alpha(x)$
$\displaystyle=\lambda_{\lambda_{\rho_{\lambda_{y}(z)}\alpha(x)}\alpha\lambda_{\lambda_{y}(z)}^{-1}(y)}^{-1}\rho_{\lambda_{y}(z)}\alpha(x)$
$\displaystyle=\lambda_{\lambda_{\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\alpha\lambda_{x}\lambda_{\lambda_{y}(z)}\lambda_{\lambda_{y}(z)}^{-1}(y)}^{-1}\rho_{\lambda_{y}(z)}\alpha(x)$
$\displaystyle=\lambda_{\lambda_{\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\alpha\lambda_{x}(y)}^{-1}\lambda_{\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\alpha(x).$
(3.7)
Replacing $y$ by $\alpha(y)$ in (5), we have
$\alpha\lambda_{x}\lambda_{\lambda_{\alpha(x)}^{-1}\alpha(y)}=\lambda_{\alpha^{2}(y)}\lambda_{\lambda_{\alpha(y)}^{-1}\alpha(x)}\alpha$,
which together with (6) yields
$\lambda_{\alpha^{2}(y)}\lambda_{\lambda_{\alpha(y)}^{-1}\alpha(x)}\alpha=\lambda_{y}\lambda_{\lambda_{\alpha(y)}^{-1}\alpha(x)}\alpha.$
(3.8)
Replacing $x$ by $\alpha\lambda_{x}(y)$ and $y$ by
$\alpha\lambda_{\alpha(x)}\lambda_{y}(z)$ in (4), respectively, we have
$\alpha\lambda_{\alpha^{2}\lambda_{x}(y)}\lambda_{\lambda_{\alpha\lambda_{x}(y)}^{-1}\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}=\lambda_{\alpha^{2}\lambda_{\alpha(x)}\lambda_{y}(z)}\lambda_{\lambda_{\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\alpha\lambda_{x}(y)}\alpha.$
By (3.1) and (3.8), we get that
$\alpha\lambda_{\lambda_{x}(y)}\lambda_{\lambda_{\alpha\lambda_{x}(y)}^{-1}\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}=\lambda_{\lambda_{\alpha(x)}\lambda_{y}(z)}\lambda_{\lambda_{\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\alpha\lambda_{x}(y)}\alpha,$
whence
$\alpha\lambda_{\lambda_{\alpha\lambda_{x}(y)}^{-1}\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\lambda_{\lambda_{x}(y)}^{-1}=\lambda_{\lambda_{\alpha\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\alpha\lambda_{x}(y)}^{-1}\lambda_{\lambda_{\alpha(x)}\lambda_{y}(z)}^{-1}\alpha.$
(3.9)
Lemma 3.4(4) follows from (3.6), (3.7) and (3.9). ∎
###### Corollary 3.10.
Let $(X,r,\alpha)$ be a left non-degenerate involutive solution to HYBE. Then
1. (1)
$\lambda_{x}\alpha=\lambda_{\alpha^{2}(x)}\alpha$ for all $x\in X$;
2. (2)
$\lambda_{x}\alpha^{2}=\alpha^{2}\lambda_{x}$ for all $x\in X$.
###### Proof.
(1) Replacing $y$ by $\alpha(y)$ in Theorem 3.9(5) we have
$\alpha\lambda_{x}\lambda_{\lambda_{\alpha(x)}^{-1}\alpha(y)}=\lambda_{\alpha^{2}(y)}\lambda_{\lambda_{\alpha(y)}^{-1}\alpha(x)}\alpha,$
which together with Theorem 3.9(6) yields
$\lambda_{\alpha^{2}(y)}\lambda_{\lambda_{\alpha(y)}^{-1}\alpha(x)}\alpha=\lambda_{y}\lambda_{\lambda_{\alpha(y)}^{-1}\alpha(x)}\alpha.$
By Theorem 3.9(3),
$\lambda_{\alpha^{2}(y)}\alpha\lambda_{\lambda_{y}^{-1}(x)}=\lambda_{y}\alpha\lambda_{\lambda_{y}^{-1}(x)}$.
Thus $\lambda_{\alpha^{2}(y)}\alpha=\lambda_{y}\alpha$.
(2) By Theorem 3.9(3) and Corollary 3.10(1) we have
$\alpha\lambda_{\alpha(x)}=\lambda_{\alpha^{2}(x)}\alpha=\lambda_{x}\alpha.$
By Theorem 3.9(3), we have
$\alpha^{2}\lambda_{x}=\alpha\lambda_{\alpha(x)}\alpha=\lambda_{x}\alpha^{2}$.
∎
###### Theorem 3.11.
A triple $(X,r,\alpha)$ with $r:X\times X\rightarrow X\times X$ and
$\alpha:X\rightarrow X$ is a left non-degenerate involutive solution to HYBE
if and only if the following statements are true for all $x,y\in X$,
1. (1)
$\lambda_{x}\ $is bijective;
2. (2)
$\rho_{y}(x)=\lambda_{\lambda_{x}(y)}^{-1}(x)$;
3. (3)
$\alpha\lambda_{x}=\lambda_{\alpha(x)}\alpha$;
4. (4)
$\lambda_{x}\alpha=\alpha\lambda_{\alpha(x)}$;
5. (5)
$\alpha\lambda_{x}\lambda_{\lambda_{\alpha(x)}^{-1}(y)}=\lambda_{\alpha(y)}\lambda_{\lambda_{y}^{-1}\alpha(x)}\alpha$.
###### Proof.
($\Rightarrow$) By Theorem 3.9 we need only prove (4). By (3) and Corollary
3.10,
$\alpha\lambda_{\alpha(x)}=\lambda_{\alpha^{2}(x)}\alpha=\lambda_{x}\alpha$,
as desired.
($\Leftarrow$) It suffices to prove (4) and (6) of Theorem 3.9. Replacing $y$
by $\alpha(y)$ in (5) and using (3) and (4) we get
$\alpha\lambda_{x}\lambda_{\lambda_{\alpha(x)}^{-1}\alpha(y)}=\lambda_{\alpha^{2}(y)}\lambda_{\lambda_{\alpha(y)}^{-1}\alpha(x)}\alpha=\lambda_{y}\lambda_{\lambda_{\alpha(y)}^{-1}\alpha(x)}\alpha,$
which is Theorem 3.9(6).
Replacing $x$ by $\alpha(x)$ in (5) we get
$\alpha\lambda_{\alpha(x)}\lambda_{\lambda_{\alpha^{2}(x)}^{-1}(y)}=\lambda_{\alpha(y)}\lambda_{\lambda_{y}^{-1}\alpha^{2}(x)}\alpha$.
It follows from (3) and (4) that
$\lambda_{\alpha(y)}\lambda_{\lambda_{y}^{-1}\alpha^{2}(x)}\alpha=\alpha\lambda_{\alpha(x)}\lambda_{\lambda_{\alpha^{2}(x)}^{-1}(y)}=\lambda_{\alpha^{2}(x)}\lambda_{\lambda_{\alpha^{3}(x)}^{-1}\alpha(y)}\alpha=\lambda_{\alpha^{2}(x)}\lambda_{\lambda_{\alpha(x)}^{-1}\alpha(y)}\alpha=\alpha\lambda_{\alpha(x)}\lambda_{\lambda_{x}^{-1}(y)}.$
This proves Theorem 3.9(4). ∎
When $\alpha=\operatorname{id}$, Theorem 3.11 gives Theorem 2.3.
###### Example 3.12.
A triple $(X,r,\alpha)$ such that $\alpha(X)=\\{\theta\\}$ is a left non-
degenerate involutive solution to HYBE if and only if $\lambda_{x}$ is
bijective, $\lambda_{x}(\theta)=\theta$ and
$\rho_{y}(x)=\lambda_{\lambda_{x}(y)}^{-1}(x)$ for all $x,y\in X$.
## 4\. Hom-cycle sets
In this section, we introduce a Hom-version of cycle sets and relate them to
left non-degenerate involutive solutions to HYBE.
###### Definition 4.1.
A left Hom-quasigroup $(X,\alpha)$ is called a left Hom-cycle set if the
following conditions are satisfied for all $x,y,z\in X$,
$\displaystyle\alpha((xy)(\alpha(x)z))=(yx)(\alpha(y)\alpha(z)),$ (4.1)
$\displaystyle\alpha((\alpha(x)y)(xz))=(y\alpha(x))(\alpha(y)\alpha(z)),$
(4.2)
$\displaystyle\alpha((\alpha(x)\alpha(y))(xz))=(\alpha(y)\alpha(x))(y\alpha(z)).$
(4.3)
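For a concrete Hom-cycle set (our own toy example), take $X=\mathbb{Z}/6\mathbb{Z}$ with $xy=y+1$ and $\alpha(x)=x+3$; every left multiplication is the shift $y\mapsto y+1$, and $\alpha$ is an endomorphism since it commutes with the shift. The sketch below verifies (4.1)--(4.3) exhaustively.

```python
from itertools import product

n = 6
X = range(n)
op = lambda x, y: (y + 1) % n  # xy = y + 1; every sigma_x is the shift
alpha = lambda x: (x + 3) % n  # commutes with the shift, so an endomorphism

for x, y, z in product(X, repeat=3):
    # (4.1): alpha((xy)(alpha(x)z)) = (yx)(alpha(y)alpha(z))
    assert alpha(op(op(x, y), op(alpha(x), z))) == op(op(y, x), op(alpha(y), alpha(z)))
    # (4.2): alpha((alpha(x)y)(xz)) = (y alpha(x))(alpha(y)alpha(z))
    assert alpha(op(op(alpha(x), y), op(x, z))) == op(op(y, alpha(x)), op(alpha(y), alpha(z)))
    # (4.3): alpha((alpha(x)alpha(y))(xz)) = (alpha(y)alpha(x))(y alpha(z))
    assert alpha(op(op(alpha(x), alpha(y)), op(x, z))) == op(op(alpha(y), alpha(x)), op(y, alpha(z)))
```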
In what follows, left cycle sets and left Hom-cycle sets are referred to as
cycle sets and Hom-cycle sets, respectively.
Clearly, $X$ is a cycle set if and only if $(X,\operatorname{id}_{X})$ is a
Hom-cycle set. Hence we will identify the cycle set $X$ with the Hom-cycle set
$(X,\operatorname{id}_{X})$.
Denote by $\mathsf{HCS}$ the category of Hom-cycle sets, which is a full
subcategory of $\mathsf{HQG}$.
Denote by $\mathsf{S_{hybe}}$ the category of left non-degenerate involutive
solutions to HYBE, which is a full subcategory of $\mathsf{HQS}$.
###### Theorem 4.2.
The functors $G:\mathsf{S_{hybe}}\to\mathsf{HCS}$ and
$S:\mathsf{HCS}\to\mathsf{S_{hybe}}$ are mutually inverse, and so the
categories $\mathsf{S_{hybe}}$ and $\mathsf{HCS}$ are isomorphic.
###### Proof.
Let $(X,r,\alpha)$ be a left non-degenerate involutive solution to HYBE.
Then $G(X,r,\alpha)$ is a left Hom-quasigroup. Furthermore, since $x\cdot
y=\lambda_{x}^{-1}(y)$, Theorem 3.9(4)–(6) imply the conditions (4.1)–(4.3).
Thus $G(X,r,\alpha)$ is a Hom-cycle set.
Conversely, let $(X,\alpha)$ be a Hom-cycle set and
$S(X,\alpha)=(X,r,\alpha)$. Then $(X,r,\alpha)$ is a left non-degenerate
involutive Hom-quadratic set. By Lemma 2.1(2) and Corollary 2.5,
$(X,r,\alpha)$ satisfies Theorem 3.9(1)–(3). Furthermore, the conditions
(4.1)–(4.3) imply Theorem 3.9(4)–(6). Thus $S(X,\alpha)$ is a left non-
degenerate involutive solution to HYBE. ∎
Theorem 4.2 generalizes [30, Proposition 1].
###### Theorem 4.3.
A left Hom-quasigroup $(X,\alpha)$ is a Hom-cycle set if and only if the
following conditions hold for all $x,y\in X$,
$\displaystyle\alpha^{2}(x)\alpha(y)=x\alpha(y),$ (4.4)
$\displaystyle(x\alpha(y))(\alpha(x)\alpha(z))=(y\alpha(x))(\alpha(y)\alpha(z)).$
(4.5)
###### Proof.
Suppose $S(X,\alpha)=(X,r,\alpha)$.
($\Rightarrow$) By Theorem 4.2, $(X,r,\alpha)$ is a left non-degenerate
involutive solution to HYBE. Thus (4.4) follows from Theorem 3.11(4), and
(4.5) follows from (4.2).
($\Leftarrow$) By Theorem 4.2 it suffices to prove that $(X,r,\alpha)$ is a
left non-degenerate involutive solution to HYBE. We need to verify that
$(X,r,\alpha)$ satisfies (1)–(5) of Theorem 3.11. In fact, Lemma 2.1(2) and
Corollary 2.5 imply that $(X,r,\alpha)$ satisfies (1)–(3) in Theorem 3.11, and
the conditions (4.4) and (4.5) imply that $(X,r,\alpha)$ satisfies (4)–(5) in
Theorem 3.11, as desired. ∎
###### Example 4.4.
Let $X$ be a nonempty set with a map $\alpha:X\to X$, and let $f:X\to X$ be a
bijection such that $f\alpha=\alpha f$. Define an operation on $X$ by
$xy=f(y)$ for all $x,y\in X$. Then $(X,\alpha)$ is a Hom-cycle set, which
corresponds to the left non-degenerate involutive solution to HYBE defined in
Example 3.8.
###### Example 4.5.
Let $(X,\alpha)$ be a left Hom-quasigroup with $\alpha(X)=\\{\theta\\}$. Then
$(X,\alpha)$ is a Hom-cycle set if and only if $\theta$ is a right zero, i.e.,
$x\theta=\theta$ for all $x\in X$. In this case, $(X,\alpha)$ corresponds to
the solution defined in Example 3.12.
###### Example 4.6.
A trivial solution to HYBE corresponds to a right zero groupoid with an
endomorphism.
###### Example 4.7.
Let $(X,+)$ be an Abelian group with endomorphisms $\varphi,\psi,\alpha$ and
define $x\cdot y=\varphi(x)+\psi(y)$ for all $x,y\in X$. Then
$(X,\cdot,\alpha)$ is a Hom-cycle set if and only if the following hold:
1. (1)
$\psi$ is bijective;
2. (2)
$\varphi\alpha=\alpha\varphi$ and $\psi\alpha=\alpha\psi$;
3. (3)
$\varphi\alpha^{2}=\varphi$;
4. (4)
$\varphi^{2}=\varphi\psi\alpha-\psi\varphi\alpha$.
In fact, (1) is equivalent to saying that $(X,\cdot)$ is a left quasigroup by
[2, Section 4.1]; (2) is equivalent to the assertion that $\alpha$ is an
endomorphism of $(X,\cdot)$; (3) and (4) are equivalent to (4.4) and (4.5),
respectively.
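For a concrete instance of this criterion, one can take $X=\mathds{Z}/4\mathds{Z}$ with $\varphi,\psi,\alpha$ given by the multipliers $2,1,3$; this choice is ours, not from the paper. Conditions (1)–(4) then hold, and the equivalent conditions (4.4) and (4.5) of Theorem 4.3 can be confirmed by brute force:

```python
from itertools import product

m = 4
phi_c, psi_c, alpha_c = 2, 1, 3  # hypothetical multipliers on Z/4Z (our choice)

def op(x, y):
    # x . y = phi(x) + psi(y) in Z/mZ
    return (phi_c * x + psi_c * y) % m

def a(x):
    # the twisting endomorphism alpha
    return (alpha_c * x) % m

# (4.4): alpha^2(x) . alpha(y) = x . alpha(y)
assert all(op(a(a(x)), a(y)) == op(x, a(y))
           for x, y in product(range(m), repeat=2))
# (4.5): (x . alpha(y)) . (alpha(x) . alpha(z))
#      = (y . alpha(x)) . (alpha(y) . alpha(z))
assert all(op(op(x, a(y)), op(a(x), a(z))) == op(op(y, a(x)), op(a(y), a(z)))
           for x, y, z in product(range(m), repeat=3))
```

Here (1) holds since $\psi=\operatorname{id}$, (2) holds since multipliers commute, (3) reads $2\cdot 3^{2}\equiv 2\pmod 4$, and (4) reads $2^{2}\equiv 0\pmod 4$.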
###### Lemma 4.8.
If $(X,\alpha)$ is a left Hom-quasigroup satisfying (4.4), then
$(x\alpha^{2}(y))\alpha(z)=(xy)\alpha(z),$ (4.6)
for all $x,y,z\in X$.
###### Proof.
By using (4.4) we have
$(xy)\alpha(z)=\alpha^{2}(xy)\alpha(z)=(\alpha^{2}(x)\alpha^{2}(y))\alpha(z)=(x\alpha^{2}(y))\alpha(z),$
for all $x,y,z\in X$. ∎
###### Lemma 4.9.
If $(X,\alpha)$ is a left Hom-quasigroup satisfying (4.4), then (4.1), (4.2)
and (4.3) are equivalent.
###### Proof.
Replacing $x$ by $\alpha(x)$ in (4.1) gives (4.2).
Replacing $y$ by $\alpha(y)$ in (4.2) gives (4.3).
Suppose (4.3) holds. Replacing $x$ by $\alpha(x)$ and $y$ by $\alpha(y)$ in
(4.3) gives
$\alpha((\alpha^{2}(x)\alpha^{2}(y))(\alpha(x)z))=(\alpha^{2}(y)\alpha^{2}(x))(\alpha(y)\alpha(z))$.
It follows that
$\alpha((xy)(\alpha(x)z))=(y\alpha^{2}(x))(\alpha(y)\alpha(z)),$
which together with (4.6) gives (4.1). ∎
###### Theorem 4.10.
Let $(X,\alpha)$ be a left Hom-quasigroup. Then $(X,\alpha)$ is a Hom-cycle
set if and only if it satisfies (4.4) and one of (4.1), (4.2) and (4.3).
###### Proof.
It follows from Lemma 4.9 and Theorem 4.3. ∎
###### Theorem 4.11.
A Hom-cycle set is non-degenerate if and only if the corresponding solution to
HYBE is non-degenerate.
###### Proof.
Let $(X,\alpha)$ be a Hom-cycle set and $(X,r,\alpha)$ the corresponding
solution to HYBE.
($\Rightarrow$) Suppose $(X,\alpha)$ is non-degenerate and $\circ$ is the dual
operation defined in Lemma 2.10. Then (2.5) can be rewritten as
$\lambda_{x\circ y}^{-1}(y\circ x)=x$ for all $x,y\in X$. Thus $y\circ
x=\lambda_{x\circ y}(x)$ for all $x,y\in X$. It follows by interchanging $x$
and $y$ that $x\circ y=\lambda_{y\circ x}(y)$ for all $x,y\in X$. Substituting
the last equation into (2.5), we get
$\lambda_{y\circ x}(y)\cdot(y\circ x)=x.$
Denote by $\tau_{y}$ the left multiplication by $y$ with respect to the
operation $\circ$. Then $\rho_{y}\tau_{y}(x)=\rho_{y}(y\circ
x)=\lambda_{y\circ x}(y)\cdot(y\circ x)=x$. Noting that $x\lambda_{x}(y)=y$,
by (2.4) we have
$\tau_{y}\rho_{y}(x)=(x\lambda_{x}(y))\circ(\lambda_{x}(y)x)=x.$
Thus $\rho_{y}$ is bijective, and so $(X,r,\alpha)$ is non-degenerate.
($\Leftarrow$) Suppose $(X,r,\alpha)$ is non-degenerate. Define $x\circ
y=\rho_{x}^{-1}(y)$ for all $x,y\in X$. Replacing $y$ by $\lambda_{x}^{-1}(y)$
in (2.1) we obtain $\rho_{\lambda_{x}^{-1}(y)}(x)=\lambda_{y}^{-1}(x)$, whence
$\rho_{\lambda_{x}^{-1}(y)}^{-1}\lambda_{y}^{-1}(x)=x$, which gives (2.4).
Similarly, replacing $y$ by $\rho_{x}^{-1}(y)$ in (2.2) we can get (2.5). Thus
$(X,\alpha)$ is non-degenerate by Lemma 2.10. ∎
Denote by $\mathsf{ndHCS}$ the category of non-degenerate left Hom-cycle sets.
Then $\mathsf{ndHCS}$ is a full subcategory of $\mathsf{HCS}$.
Denote by $\mathsf{ndiS_{hybe}}$ the category of non-degenerate involutive
solutions to HYBE. Then $\mathsf{ndiS_{hybe}}$ is a full subcategory of
$\mathsf{S_{hybe}}$.
Theorem 4.2 and Theorem 4.11 have the following corollary, which generalizes
[30, Proposition 2].
###### Corollary 4.12.
The functors $G:\mathsf{ndiS_{hybe}}\to\mathsf{ndHCS}$ and
$S:\mathsf{ndHCS}\to\mathsf{ndiS_{hybe}}$ are mutually inverse, and so the
categories $\mathsf{ndiS_{hybe}}$ and $\mathsf{ndHCS}$ are isomorphic.
###### Theorem 4.13.
Let $(X,\alpha)$ be a non-degenerate Hom-cycle set with the dual operation
$\circ$. Then $(X,\circ,\alpha)$ is a non-degenerate Hom-cycle set.
###### Proof.
Suppose $S(X,\alpha)=(X,r,\alpha)$. By Theorem 4.11, $(X,r,\alpha)$ is a non-
degenerate involutive solution to HYBE. Replacing $y$ by $\lambda_{x}(y)$ in
(2.4), we have $(x\lambda_{x}(y))\circ(\lambda_{x}(y)x)=x$, whence
$y\circ\rho_{y}(x)=x$. Replacing $x$ by $\rho_{y}^{-1}(x)$ in the last
equation gives $y\circ x=\rho_{y}^{-1}(x)$, and so $\rho_{y}(y\circ x)=x$.
Thus $(X,\circ)$ is a left quasigroup, and we have
$\alpha(x)=\alpha\rho_{y}(y\circ x)=\rho_{\alpha(y)}\alpha(y\circ x),$
whence $\alpha(y\circ
x)=\rho_{\alpha(y)}^{-1}\alpha(x)=\alpha(y)\circ\alpha(x)$. Hence
$(X,\circ,\alpha)$ is a left Hom-quasigroup. Suppose
$S(X,\circ,\alpha)=(X,r^{\circ},\alpha)$. By Corollary 4.12, it suffices to
prove that $(X,r^{\circ},\alpha)$ is a non-degenerate involutive solution to
HYBE. Since $\lambda_{y}^{\circ-1}(x)=y\circ x=\rho_{y}^{-1}(x)$, we have
$\lambda_{y}^{\circ}=\rho_{y}$. Replacing $y$ by $\rho_{x}(y)$ in (2.5), we
have $(x\circ\rho_{x}(y))(\rho_{x}(y)\circ x)=x$. Thus
$y(\rho_{x}(y)\circ x)=x=y\lambda_{y}(x),$
and so we have $\lambda_{y}(x)=\rho_{x}(y)\circ x=\lambda_{x}^{\circ}(y)\circ
x=\rho_{y}^{\circ}(x)$, whence $\rho_{y}^{\circ}=\lambda_{y}$. Thus
$r^{\circ}(x,y)=(\rho_{x}(y),\lambda_{y}(x))$. Clearly $r^{\circ}=\tau r\tau$,
where $\tau(x,y)=(y,x)$. Since $(X,r,\alpha)$ is a non-degenerate involutive
solution to HYBE, so is $(X,r^{\circ},\alpha)$, as desired. ∎
Finite cycle sets and cycle sets with bijective square maps, in particular
square-free cycle sets, are non-degenerate [30], but this is no longer the
case for Hom-cycle sets.
###### Example 4.14.
Let $X=\\{1,2,3,4\\}$ with a map $\alpha:X\rightarrow X$ such that
$\alpha(X)=\\{1\\}$. Define an operation on $X$ by the following
multiplication table:
$\begin{array}{c|cccc}\cdot&1&2&3&4\\ \hline 1&1&2&3&4\\ 2&1&2&3&4\\
3&1&4&3&2\\ 4&1&3&2&4\end{array}$
Then $(X,\cdot,\alpha)$ is a square-free Hom-cycle set, but it is degenerate
since $\Delta(3,4)=(2,2)=\Delta(2,2)$.
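The claims of this example can be verified mechanically. In the sketch below (our own encoding, 0-indexed so that element $i$ stands for $i+1$) we write $\Delta(x,y)=(x\cdot y,\,y\cdot x)$ as in Section 2:

```python
# Multiplication table of Example 4.14, 0-indexed.
op = [[0, 1, 2, 3],
      [0, 1, 2, 3],
      [0, 3, 2, 1],
      [0, 2, 1, 3]]

def delta(x, y):
    """Delta(x, y) = (x . y, y . x)."""
    return (op[x][y], op[y][x])

# left quasigroup: rows are permutations
assert all(sorted(row) == [0, 1, 2, 3] for row in op)
# square-free: x . x = x for every x
assert all(op[x][x] == x for x in range(4))
# degenerate: Delta is not injective; this is Delta(3,4) = (2,2) = Delta(2,2)
assert delta(2, 3) == delta(1, 1) == (1, 1)
```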
## 5\. Twists
Let $(X,\alpha)$ be a left Hom-quasigroup. Define an operation on $X$ by
$x\cdot^{\prime}y=\alpha(x)y$ for all $x,y\in X$. Then $(X,\cdot^{\prime})$ is
a left quasigroup, and
$\alpha(x\cdot^{\prime}y)=\alpha\left(\alpha(x)y\right)=\alpha^{2}(x)\alpha(y)=\alpha(x)\cdot^{\prime}\alpha(y).$
Thus $\alpha$ is an endomorphism of $(X,\cdot^{\prime})$. Hence
$(X,\cdot^{\prime},\alpha)$ is a left Hom-quasigroup. We call
$(X,\cdot^{\prime},\alpha)$ the twist of $(X,\alpha)$, denoted by
$T(X,\alpha)$.
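On finite examples the twist is a one-line table transformation. The following sketch (our own encoding; the demo reuses the table of Example 4.14 with constant $\alpha$) also checks that the twist is again a left Hom-quasigroup:

```python
def twist(op, alpha):
    """Twisted operation x .' y = alpha(x) . y of Section 5; op is an
    n-by-n table over X = {0, ..., n-1}."""
    n = len(op)
    return [[op[alpha[x]][y] for y in range(n)] for x in range(n)]

# Demo on the table of Example 4.14 (0-indexed) with alpha constant:
op = [[0, 1, 2, 3], [0, 1, 2, 3], [0, 3, 2, 1], [0, 2, 1, 3]]
alpha = [0, 0, 0, 0]
tw = twist(op, alpha)
# the twist is again a left Hom-quasigroup: rows are bijections, and
# alpha remains an endomorphism of the twisted operation
assert all(sorted(row) == [0, 1, 2, 3] for row in tw)
assert all(alpha[tw[x][y]] == tw[alpha[x]][alpha[y]]
           for x in range(4) for y in range(4))
```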
Using the twist, we can define a functor $T:\mathsf{HQG}\to\mathsf{HQG}$ in a
natural way.
A left Hom-quasigroup $(X,\alpha)$ is called an im-cycle set if the following
are satisfied:
$\displaystyle\alpha^{3}(x)\alpha(y)=\alpha(x)\alpha(y),$ (5.1)
$\displaystyle(\alpha(x)\alpha(y))(\alpha(x)\alpha(z))=(\alpha(y)\alpha(x))(\alpha(y)\alpha(z)),$
(5.2)
for all $x,y,z\in X$.
It is clear that $(X,\alpha)$ is an im-cycle set if and only if
$(\alpha(X),\cdot)$ is a cycle set and (5.1) holds.
###### Theorem 5.1.
A left Hom-quasigroup is an im-cycle set if and only if its twist is a Hom-
cycle set.
###### Proof.
Let $(X,\cdot,\alpha)$ be a left Hom-quasigroup. Then its twist
$(X,\cdot^{\prime},\alpha)$ is also a left Hom-quasigroup.
($\Rightarrow$) By (5.1) we have
$\alpha^{2}(x)\cdot^{\prime}\alpha(y)=x\cdot^{\prime}\alpha(y)$ for all
$x,y\in X$. Replacing $x$ by $\alpha(x)$ and $y$ by $\alpha(y)$ in (5.2), we
obtain
$(\alpha^{2}(x)\alpha^{2}(y))(\alpha^{2}(x)\alpha(z))=(\alpha^{2}(y)\alpha^{2}(x))(\alpha^{2}(y)\alpha(z)),$
which implies that
$(x\cdot^{\prime}\alpha(y))\cdot^{\prime}(\alpha(x)\cdot^{\prime}\alpha(z))=(y\cdot^{\prime}\alpha(x))\cdot^{\prime}(\alpha(y)\cdot^{\prime}\alpha(z))$.
By Theorem 4.3, $(X,\cdot^{\prime},\alpha)$ is a Hom-cycle set.
($\Leftarrow$) Since $(X,\cdot^{\prime},\alpha)$ is a Hom-cycle set, applying
Theorem 4.3 to $(X,\cdot^{\prime},\alpha)$ yields (5.1) and
$\alpha(\alpha(x)\alpha(y))(\alpha^{2}(x)\alpha(z))=\alpha(\alpha(y)\alpha(x))(\alpha^{2}(y)\alpha(z)).$
Replacing $x$ by $\alpha(x)$ and $y$ by $\alpha(y)$ in the last equation, we
have
$(\alpha^{3}(x)\alpha^{3}(y))(\alpha^{3}(x)\alpha(z))=(\alpha^{3}(y)\alpha^{3}(x))(\alpha^{3}(y)\alpha(z)).$
By (5.1), we see that $(\alpha(X),\cdot)$ is a cycle set. Thus $(X,\alpha)$ is
an im-cycle set. ∎
###### Corollary 5.2.
The twist of an im-cycle set is a Hom-cycle set.
###### Corollary 5.3.
Let $X$ be a cycle set with an endomorphism $\alpha$ such that (5.1) holds.
Then the twist of $(X,\alpha)$ is a Hom-cycle set.
###### Theorem 5.4.
The twist of a Hom-cycle set is an im-cycle set.
###### Proof.
Let $(X,\cdot,\alpha)$ be a Hom-cycle set with the twist
$(X,\cdot^{\prime},\alpha)$. By (4.4), we can rewrite (4.5) as
$\alpha^{2}(x\alpha(y))(\alpha(x)\alpha(z))=\alpha^{2}(y\alpha(x))(\alpha(y)\alpha(z)),$
that is,
$(\alpha^{2}(x)\alpha^{3}(y))(\alpha(x)\alpha(z))=(\alpha^{2}(y)\alpha^{3}(x))(\alpha(y)\alpha(z)),$
for all $x,y,z\in X$. Thus by (4.6), we have
$(\alpha^{2}(x)\alpha(y))(\alpha(x)\alpha(z))=(\alpha^{2}(y)\alpha(x))(\alpha(y)\alpha(z)),$
which implies
$(x\cdot^{\prime}{y})\cdot^{\prime}(x\cdot^{\prime}\alpha(z))=(y\cdot^{\prime}x)\cdot^{\prime}(y\cdot^{\prime}\alpha(z))$
for all $x,y,z\in X$. Hence $(\alpha(X),\cdot^{\prime})$ is a cycle set. By
(4.4), we have
$\alpha^{4}(x)\cdot\alpha(y)=\alpha^{2}(x)\cdot\alpha(y)$
for all $x,y\in X$, whence
$\alpha^{3}(x)\cdot^{\prime}\alpha(y)=\alpha(x)\cdot^{\prime}\alpha(y)$ for
all $x,y\in X$. Thus $(X,\cdot^{\prime},\alpha)$ is an im-cycle set. ∎
###### Theorem 5.5.
If $(X,\alpha)$ is a non-degenerate Hom-cycle set, then its twist is non-
degenerate.
###### Proof.
Let $T(X,\alpha)=(X,\cdot^{\prime},\alpha)$. By Theorem 4.13,
$(X,\circ,\alpha)$ is a Hom-cycle set. Applying Theorem 4.3 to
$(X,\circ,\alpha)$ we obtain
$\alpha^{2}(x)\circ\alpha(y)=x\circ\alpha(y),$ (5.3)
for all $x,y\in X$. Define $x\circ^{\prime}y=\alpha(x)\circ y$ for all $x,y\in
X$. Then by (4.4), (5.3) and Lemma 2.10 we have
$\displaystyle(x\cdot^{\prime}y)\circ^{\prime}(y\cdot^{\prime}x)=\alpha(\alpha(x)y)\circ(\alpha(y)x)=(x\alpha(y))\circ(\alpha(y)x)=x,$
$\displaystyle(x\circ^{\prime}y)\cdot^{\prime}(y\circ^{\prime}x)=\alpha(\alpha(x)\circ
y)(\alpha(y)\circ x)=(x\circ\alpha(y))(\alpha(y)\circ x)=x.$
Thus $(X,\cdot^{\prime},\alpha)$ is non-degenerate by Lemma 2.10. ∎
###### Lemma 5.6.
Let $(X,\alpha)$ be a left Hom-quasigroup such that $\alpha(X)$ is a singleton. Then its
twist is a non-degenerate left Hom-quasigroup.
###### Proof.
Let $\alpha(X)=\\{\theta\\}$. Then $\theta$ is a right zero of the left
quasigroup $X$. Let $T(X,\alpha)=(X,\cdot^{\prime},\alpha)$. Thus
$\Delta(x,y)=(x\cdot^{\prime}y,y\cdot^{\prime}x)=(\alpha(x)y,\alpha(y)x)=(\theta
y,\theta x)$
for all $x,y\in X$. It follows that
$\Delta=(\sigma_{\theta}\times\sigma_{\theta})\tau$, where $\tau(x,y)=(y,x)$.
Since $\sigma_{\theta}$ is bijective, so is $\Delta$. Hence
$(X,\cdot^{\prime},\alpha)$ is non-degenerate. ∎
###### Remark 5.7.
A Hom-cycle set is not necessarily the twist of an im-cycle set, and vice
versa. For example, let $(X,\alpha)$ be as in Example 4.14. It is degenerate
and both a Hom-cycle set and an im-cycle set, but it is isomorphic to neither
the twist of a Hom-cycle set nor the twist of an im-cycle set by Lemma 5.6.
The following example shows that the twist of a Hom-cycle set is not
necessarily a Hom-cycle set.
###### Example 5.8.
Let $F$ be a field of characteristic $\neq 2$, $X$ the vector space ${F}^{3}$
and $\varphi,\psi,\alpha$ the linear endomorphisms of $X$ defined under the
natural basis by the following matrices, respectively,
$A=\begin{pmatrix}0&1&0\\ 0&0&1\\ 0&0&0\end{pmatrix},~{}B=\begin{pmatrix}1&0&0\\ 0&1&1\\ 0&0&1\end{pmatrix},~{}C=\begin{pmatrix}-1&0&1\\ 0&-1&0\\ 0&0&-1\end{pmatrix}.$
Then it is easy to check that $\psi$ is bijective and
$\varphi\alpha=\alpha\varphi,~{}\psi\alpha=\alpha\psi,~{}\varphi\alpha^{2}=\varphi,~{}\varphi^{2}=\varphi\psi-\psi\varphi,~{}\varphi^{2}\alpha\neq\varphi^{2}.$
Define $x\cdot y=\varphi(x)+\psi(y)$ for all $x,y\in X$. Then $(X,\cdot)$ is a
cycle set by [2, Section 4.1] and $\alpha$ is an endomorphism of $(X,\cdot)$
satisfying $\alpha^{2}(x)y=xy$ for all $x,y\in X$.
The twist $T(X,\cdot,\alpha)$ of $(X,\cdot,\alpha)$ is a Hom-cycle set by
Corollary 5.3. Clearly, the twist of $T(X,\cdot,\alpha)$ equals
$(X,\cdot,\alpha)$, which is not a Hom-cycle set since (4) in Example 4.7 does
not hold.
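The stated matrix relations are straightforward to confirm; over $F=\mathds{Q}$, say, a plain matrix-product check (the encoding is ours) gives:

```python
def matmul(P, Q):
    """3x3 matrix product with integer entries."""
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]      # phi
B = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]      # psi (det = 1, so psi is bijective)
C = [[-1, 0, 1], [0, -1, 0], [0, 0, -1]]   # alpha

assert matmul(A, C) == matmul(C, A)            # phi alpha = alpha phi
assert matmul(B, C) == matmul(C, B)            # psi alpha = alpha psi
assert matmul(A, matmul(C, C)) == A            # phi alpha^2 = phi
AB, BA = matmul(A, B), matmul(B, A)
assert matmul(A, A) == [[AB[i][j] - BA[i][j] for j in range(3)]
                        for i in range(3)]     # phi^2 = phi psi - psi phi
assert matmul(matmul(A, A), C) != matmul(A, A) # phi^2 alpha != phi^2
```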
###### Corollary 5.9.
If $(X,\alpha)$ is a non-degenerate Hom-cycle set, then the map
$q^{\prime}:X\to X,~{}~{}x\mapsto\alpha(x)x$ is bijective.
###### Proof.
By Theorem 5.5 the twist $(X,\cdot^{\prime},\alpha)$ is non-degenerate. Since
$q^{\prime}$ is the square map of $(X,\cdot^{\prime})$, $q^{\prime}$ is
bijective by Lemma 2.11. ∎
###### Remark 5.10.
The converses of Theorem 5.5 and Corollary 5.9 do not hold; Example 4.14
provides a counterexample in both cases.
We now apply Theorem 5.1, Corollary 5.3 and Theorem 5.4 to solutions to HYBE.
Let $(X,r,\alpha)$ be a left non-degenerate involutive Hom-quadratic set. We
call $STG(X,r,\alpha)$ the twist of $(X,r,\alpha)$; the twist is again a left
non-degenerate involutive Hom-quadratic set.
###### Theorem 5.11.
Let $(X,r,\alpha)$ be a left non-degenerate involutive Hom-quadratic set with
the twist $(X,r^{\prime},\alpha)$. Then
$r^{\prime}(x,y)=(\lambda_{\alpha(x)}(y),\lambda_{\lambda_{\alpha^{2}(x)}\alpha(y)}^{-1}(x))$
(5.4)
for all $x,y\in X$. If $(X,r,\alpha)$ is additionally a solution to HYBE, then
$r^{\prime}(x,y)=(\lambda_{\alpha(x)}(y),\rho_{\alpha(y)}(x))$ (5.5)
for all $x,y\in X$.
###### Proof.
Since $STG(X,r,\alpha)=(X,r^{\prime},\alpha)$, by Theorem 2.7 we have
$TG(X,r,\alpha)=G(X,r^{\prime},\alpha).$
Thus $G(X,r^{\prime},\alpha)$ is the twist of $G(X,r,\alpha)$. Let
$G(X,r,\alpha)=(X,\cdot,\alpha)$. Then
$G(X,r^{\prime},\alpha)=(X,\cdot^{\prime},\alpha)$, where
$x\cdot^{\prime}y=\alpha(x)y$ for all $x,y\in X$. Thus
$\lambda^{\prime}_{x}=\lambda_{\alpha(x)}$, and so
$\rho^{\prime}_{y}(x)=\lambda^{\prime-1}_{\lambda^{\prime}_{x}(y)}(x)=\lambda^{-1}_{\alpha\lambda_{\alpha(x)}(y)}(x)=\lambda^{-1}_{\lambda_{\alpha^{2}(x)}\alpha(y)}(x).$
Therefore (5.4) holds.
Furthermore, if $(X,r,\alpha)$ is a solution to HYBE, then (5.5) follows from
(5.4), Corollary 3.10(1) and (2.3). ∎
###### Remark 5.12.
By Example 5.8 and Theorem 4.2, the twist of a left non-degenerate involutive
solution to HYBE is not necessarily a solution to HYBE.
###### Lemma 5.13.
Let $(X,r,\alpha)$ be a left non-degenerate involutive Hom-quadratic set and
$i,j$ be nonnegative integers such that $0\leq j-i\leq 2$. Then the following
are equivalent.
1. (1)
$r(\alpha^{i+2}\times\alpha^{j})=(1\times\alpha^{2})r(\alpha^{i}\times\alpha^{j})$;
2. (2)
$\lambda_{\alpha^{i+2}(x)}\alpha^{j}=\lambda_{\alpha^{i}(x)}\alpha^{j}$ for
all $x\in X$.
###### Proof.
We first note that for all $x,y\in X$,
$\displaystyle
r(\alpha^{i+2}\times\alpha^{j})(x,y)=(\lambda_{\alpha^{i+2}(x)}\alpha^{j}(y),\rho_{\alpha^{j}(y)}\alpha^{i+2}(x)),$
$\displaystyle(1\times\alpha^{2})r(\alpha^{i}\times\alpha^{j})(x,y)=(\lambda_{\alpha^{i}(x)}\alpha^{j}(y),\alpha^{2}\rho_{\alpha^{j}(y)}\alpha^{i}(x)).$
It suffices to prove that (2) implies
$\rho_{\alpha^{j}(y)}\alpha^{i+2}(x)=\alpha^{2}\rho_{\alpha^{j}(y)}\alpha^{i}(x)$
for all $x,y\in X$. Since $0\leq j-i\leq 2$, we have
$\lambda_{\alpha^{i}(x)}\alpha^{j}=\alpha^{j}\lambda_{\alpha^{i+2-j}(x)}$, and
so
$\alpha^{j}\lambda_{\alpha^{i+2-j}(x)}^{-1}=\lambda_{\alpha^{i}(x)}^{-1}\alpha^{j}$.
Thus
$\lambda_{\alpha^{i+2}(x)}^{-1}\alpha^{j}=\lambda_{\alpha^{i}(x)}^{-1}\alpha^{j}$.
Consequently,
$\alpha^{2}\rho_{\alpha^{j}(y)}\alpha^{i}(x)=\lambda_{\lambda_{\alpha^{i+2}(x)}\alpha^{j+2}(y)}^{-1}\alpha^{i+2}(x)=\lambda_{\alpha^{i+2}(\lambda_{x}\alpha^{j-i}(y))}^{-1}\alpha^{i+2}(x)\\
=\lambda_{\alpha^{i}(\lambda_{x}\alpha^{j-i}(y))}^{-1}\alpha^{i+2}(x)=\lambda_{\lambda_{\alpha^{i}(x)}\alpha^{j}(y)}^{-1}\alpha^{i+2}(x)\\
=\lambda_{\lambda_{\alpha^{i+2}(x)}\alpha^{j}(y)}^{-1}\alpha^{i+2}(x)=\rho_{\alpha^{j}(y)}\alpha^{i+2}(x),$
as desired. ∎
###### Corollary 5.14.
Let $(X,r,\alpha)$ be a left non-degenerate involutive solution to HYBE. Then
$r(\alpha^{2}\times\alpha)=(\operatorname{id}\times\alpha^{2})r(\operatorname{id}\times\alpha)$.
###### Proof.
It follows from Corollary 3.10 and Lemma 5.13. ∎
###### Theorem 5.15.
Let $(X,r,\alpha)$ be a left non-degenerate involutive Hom-quadratic set with
the twist $(X,r^{\prime},\alpha)$.
1. (1)
If $(X,r,\alpha)$ is a solution to HYBE, then $(\alpha(X),r^{\prime})$ is a
solution to YBE and
$r^{\prime}(\alpha^{2}\times\operatorname{id})=(\operatorname{id}\times\alpha^{2})r^{\prime}$
on $\alpha(X)$.
2. (2)
The twist $(X,r^{\prime},\alpha)$ is a solution to HYBE if and only if
$(\alpha(X),r)$ is a solution to YBE and
$r(\alpha^{2}\times\operatorname{id})=(\operatorname{id}\times\alpha^{2})r$ on
$\alpha(X)$.
3. (3)
If $(X,r)$ is a solution to YBE and
$r(\alpha^{3}\times\alpha)=(\alpha\times\alpha^{3})r$, then
$(X,r^{\prime},\alpha)$ is a solution to HYBE.
###### Proof.
Let $G(X,r,\alpha)=(X,\cdot,\alpha)$. Then
$G(X,r^{\prime},\alpha)=(X,\cdot^{\prime},\alpha)$, the twist of
$(X,\cdot,\alpha)$.
(1) Theorem 4.2 implies that $(X,\cdot,\alpha)$ is a Hom-cycle set. By Theorem
5.4, $(\alpha(X),\cdot^{\prime})$ is a cycle set and (5.1) holds with respect
to the operation $\cdot^{\prime}$. It follows that
$\alpha^{2}(x)\cdot^{\prime}y=x\cdot^{\prime}y$ for all $x,y\in\alpha(X)$,
which implies
$r^{\prime}(\alpha^{2}\times\operatorname{id})=(\operatorname{id}\times\alpha^{2})r^{\prime}$
on $\alpha(X)$ by Lemma 5.13. By Theorem 2.9, $(\alpha(X),r^{\prime})$ is a
solution to YBE.
(2) By Theorem 4.2, $(X,r^{\prime},\alpha)$ is a left non-degenerate
involutive solution to HYBE if and only if $(X,\cdot^{\prime},\alpha)$ is a
Hom-cycle set. Equivalently, by Theorem 5.1, $(\alpha(X),\cdot)$ is a left
cycle set satisfying (5.1). This is in turn equivalent to $(\alpha(X),r)$
being a solution to YBE satisfying
$r(\alpha^{2}\times\operatorname{id})=(\operatorname{id}\times\alpha^{2})r$ on
$\alpha(X)$ by Theorem 2.9 and Lemma 5.13.
(3) follows from the sufficiency in (2). ∎
## Acknowledgements
This work is supported by NSF of China (No.12171194, No.11971289).
## References
* Baxter [1972] R. J. Baxter. Partition function of the eight-vertex lattice model. _Ann. Physics_ , 70(1):193–228, 1972. URL https://doi.org/10.1016/0003-4916(72)90335-1.
* Bonatto et al. [2021] M. Bonatto, M. Kinyon, D. Stanovskỳ, and P. Vojtěchovskỳ. Involutive latin solutions of the Yang-Baxter equation. _J. Algebra_ , 565:128–159, 2021. URL https://doi.org/10.1016/j.jalgebra.2020.09.001.
* Castelli et al. [2018] M. Castelli, F. Catino, and G. Pinto. A new family of set-theoretic solutions of the Yang-Baxter equation. _Comm. Algebra_ , 46(4):1622–1629, 2018. URL https://doi.org/10.1080/00927872.2017.1350700.
* Castelli et al. [2019] M. Castelli, F. Catino, and G. Pinto. Indecomposable involutive set-theoretic solutions of the Yang-Baxter equation. _J. Pure Appl. Algebra_ , 223(10):4477–4493, 2019\. URL https://doi.org/10.1016/j.jpaa.2019.01.017.
* Castelli et al. [2020a] M. Castelli, F. Catino, and G. Pinto. About a question of Gateva-Ivanova and Cameron on square-free set-theoretic solutions of the Yang-Baxter equation. _Comm. Algebra_ , 48(6):2369–2381, 2020a. URL https://doi.org/10.1080/00927872.2020.1713328.
* Castelli et al. [2020b] M. Castelli, G. Pinto, and W. Rump. On the indecomposable involutive set-theoretic solutions of the Yang-Baxter equation of prime-power size. _Comm. Algebra_ , 48(5):1941–1955, 2020b. URL https://doi.org/10.1080/00927872.2019.1710163.
* Castelli et al. [2021] M. Castelli, F. Catino, and P. Stefanelli. Indecomposable involutive set-theoretic solutions of the Yang-Baxter equation and orthogonal dynamical extensions of cycle sets. _Mediterr. J. Math._ , 18(6):Paper No. 246, 27pp, 2021. URL https://doi.org/10.1007/s00009-021-01912-4.
* Cedó et al. [2014] F. Cedó, E. Jespers, and J. Okniński. Braces and the Yang-Baxter equation. _Comm. Math. Phys._ , 327(1):101–116, 2014. URL https://doi.org/10.1007/s00220-014-1935-y.
* Chen and Zhang [2014] Y. Chen and L. Zhang. The category of Yetter-Drinfel’d Hom-modules and the quantum Hom-Yang-Baxter equation. _J. Math. Phys._ , 55(3):031702, 18pp, 2014. URL https://doi.org/10.1063/1.4868964.
* Dehornoy [2015] P. Dehornoy. Set-theoretic solutions of the Yang-Baxter equation, RC-calculus, and Garside germs. _Adv. Math._ , 282:93–127, 2015. URL https://doi.org/10.1016/j.aim.2015.05.008.
* Dehornoy et al. [2015] P. Dehornoy, F. Digne, E. Godelle, D. Krammer, and J. Michel. _Foundations of Garside theory_ , volume 22 of _EMS Tracts in Mathematics_. European Mathematical Society (EMS), Zürich, 2015. URL https://doi.org/10.4171/139.
* Drinfeld [1992] V. G. Drinfeld. On some unsolved problems in quantum group theory. In _Quantum groups (Leningrad, 1990)_ , volume 1510 of _Lecture Notes in Math._ , pages 1–8. Springer, Berlin, 1992. URL https://doi.org/10.1007/BFb0101175.
* Etingof and Latour [2005] P. Etingof and F. Latour. _The dynamical Yang-Baxter equation, representation theory, and quantum integrable systems_ , volume 29 of _Oxford Lecture Series in Mathematics and its Applications_. Oxford University Press, Oxford, 2005.
* Etingof and Schiffmann [2001] P. Etingof and O. Schiffmann. Lectures on the dynamical Yang-Baxter equations. In _Quantum groups and Lie theory (Durham), 1999)_ , volume 290 of _London Math. Soc. Lecture Note Ser._ , pages 89–129. Cambridge Univ. Press, Cambridge, 2001.
* Etingof and Varchenko [1998] P. Etingof and A. Varchenko. Solutions of the quantum dynamical Yang-Baxter equation and dynamical quantum groups. _Comm. Math. Phys._ , 196(3):591–640, 1998. URL https://doi.org/10.1007/s002200050437.
* Etingof et al. [1999] P. Etingof, T. Schedler, and A. Soloviev. Set-theoretical solutions to the quantum Yang-Baxter equation. _Duke Math. J._ , 100(2):169–209, 1999. URL https://doi.org/10.1215/S0012-7094-99-10007-X.
* Gateva-Ivanova [2004] T. Gateva-Ivanova. A combinatorial approach to the set-theoretic solutions of the Yang-Baxter equation. _J. Math. Phys._ , 45(10):3828–3858, 2004. URL https://doi.org/10.1063/1.1788848.
* Gateva-Ivanova [2018] T. Gateva-Ivanova. Set-theoretic solutions of the Yang-Baxter equation, braces and symmetric groups. _Adv. Math._ , 338:649–701, 2018. URL https://doi.org/10.1016/j.aim.2018.09.005.
* Gateva-Ivanova and Majid [2008] T. Gateva-Ivanova and S. Majid. Matched pairs approach to set theoretic solutions of the Yang-Baxter equation. _J. Algebra_ , 319(4):1462–1529, 2008. URL https://doi.org/10.1016/j.jalgebra.2007.10.035.
* Gateva-Ivanova and Van den Bergh [1998] T. Gateva-Ivanova and M. Van den Bergh. Semigroups of $I$-type. _J. Algebra_ , 206(1):97–112, 1998. URL https://doi.org/10.1006/jabr.1997.7399.
* Guarnieri and Vendramin [2017] L. Guarnieri and L. Vendramin. Skew braces and the Yang-Baxter equation. _Math. Comp._ , 86(307):2519–2534, 2017. URL https://doi.org/10.1090/mcom/3161.
* Jespers and Okniński [2007] E. Jespers and J. Okniński. _Noetherian semigroup algebras_ , volume 7 of _Algebra and Applications_. Springer, Dordrecht, 2007. URL https://doi.org/10.1007/1-4020-5810-1.
* Jiao and Huang [2018] Z. Jiao and G. Huang. Solutions of Hom-Yang-Baxter equation from monoidal Hom-(co)algebra structures. _Math Notes_ , 104:121–134, 2018. URL https://doi.org/10.1134/S0001434618070131.
* Kamiya and Shibukawa [2011] N. Kamiya and Y. Shibukawa. Dynamical Yang-Baxter maps associated with homogeneous pre-systems. _J. Gen. Lie Theory Appl._ , 5:Art. ID G110106, 9pp, 2011\. URL https://doi.org/10.4303/jglta/G110106.
* Lebed and Vendramin [2017] V. Lebed and L. Vendramin. Homology of left non-degenerate set-theoretic solutions to the Yang-Baxter equation. _Adv. Math._ , 304:1219–1261, 2017. URL https://doi.org/10.1016/j.aim.2016.09.024.
* Lu et al. [2000] J.-H. Lu, M. Yan, and Y.-C. Zhu. On the set-theoretical Yang-Baxter equation. _Duke Math. J._ , 104(1):1–18, 2000. URL https://doi.org/10.1215/S0012-7094-00-10411-5.
* Matsumoto [2013] D. K. Matsumoto. Dynamical braces and dynamical Yang-Baxter maps. _J. Pure Appl. Algebra_ , 217(2):195–206, 2013\. URL https://doi.org/10.1016/j.jpaa.2012.06.012.
* Matsumoto and Shimizu [2018] D. K. Matsumoto and K. Shimizu. Quiver-theoretical approach to dynamical Yang-Baxter maps. _J. Algebra_ , 507:47–80, 2018. URL https://doi.org/10.1016/j.jalgebra.2018.04.003.
* Panaite et al. [2019] F. Panaite, P. T. Schrader, and M. D. Staic. Hom-tensor categories and the Hom-Yang-Baxter equation. _Appl. Categ. Structures_ , 27(4):323–363, 2019\. URL https://doi.org/10.1007/s10485-019-09556-y.
* Rump [2005] W. Rump. A decomposition theorem for square-free unitary solutions of the quantum Yang-Baxter equation. _Adv. Math._ , 193(1):40–55, 2005. URL https://doi.org/10.1016/j.aim.2004.03.019.
* Rump [2007] W. Rump. Braces, radical rings, and the quantum Yang-Baxter equation. _J. Algebra_ , 307(1):153–170, 2007. URL https://doi.org/10.1016/j.jalgebra.2006.03.040.
* Rump [2016] W. Rump. Dynamical groups and braces. _J. Algebra Appl._ , 15(07):1650135, 31pp, 2016\. URL https://doi.org/10.1142/S0219498816501358.
* Shcherbacov [2017] V. Shcherbacov. _Elements of quasigroup theory and applications_. Monographs and Research Notes in Mathematics. CRC Press, Boca Raton, FL, 2017. URL https://doi.org/10.1201/9781315120058.
* Shibukawa [2005] Y. Shibukawa. Dynamical Yang-Baxter maps. _Int. Math. Res. Not._ , 2005(36):2199–2221, 2005\. URL https://doi.org/10.1155/IMRN.2005.2199.
* Shibukawa [2016] Y. Shibukawa. Hopf algebroids and rigid tensor categories associated with dynamical Yang-Baxter maps. _J. Algebra_ , 449:408–445, 2016. URL https://doi.org/10.1016/j.jalgebra.2015.11.007.
* Smoktunowicz [2018] A. Smoktunowicz. On Engel groups, nilpotent groups, rings, braces and the Yang-Baxter equation. _Trans. Amer. Math. Soc._ , 370(9):6535–6564, 2018. URL https://doi.org/10.1090/tran/7179.
* Veselov [2007] A. Veselov. Yang-Baxter maps: dynamical point of view. In _Combinatorial aspect of integrable systems_ , volume 17 of _MSJ Mem._ , pages 145–167. Math. Soc. Japan, Tokyo, 2007. URL https://doi.org/10.2969/msjmemoirs/01701C060.
* Wang et al. [2022] S. Wang, X. Zhang, and S. Guo. Hom-Yang-Baxter equations and Hom-Yang-Baxter systems. _Comm. Algebra_ , 2022. URL https://doi.org/10.1080/00927872.2022.2137518.
* Yang [1967] C. N. Yang. Some exact results for the many-body problem in one dimension with repulsive delta-function interaction. _Phys. Rev. Lett._ , 19(23):1312–1315, 1967. URL https://doi.org/10.1103/PhysRevLett.19.1312.
* Yau [2009] D. Yau. The Hom-Yang-Baxter equation, Hom-Lie algebras, and quasi-triangular bialgebras. _J. Phys. A_ , 42(16):165202, 12pp, 2009. URL https://doi.org/10.1088/1751-8113/42/16/165202.
* Yau [2011] D. Yau. The Hom-Yang-Baxter equation and Hom-Lie algebras. _J. Math. Phys._ , 52(5):053502, 19pp, 2011. URL https://doi.org/10.1063/1.3571970.
* Yau [2012] D. Yau. Hom-quantum groups: I. Quasi-triangular Hom-bialgebras. _J. Phys. A_ , 45(6):065203, 23pp, 2012. URL https://doi.org/10.1088/1751-8113/45/6/065203.
* Yau [2015] D. Yau. The classical Hom-Yang-Baxter equation and Hom-Lie bialgebras. _Int. Electron. J. Algebra_ , 17(17):11–45, 2015\. URL https://doi.org/10.24330/ieja.266210.
# Phase transition for discrete nonlinear Schrödinger equation in three and
higher dimensions
Partha S. Dey , Kay Kirkpatrick and Kesav Krishnan Department of
Mathematics, University of Illinois Urbana-Champaign, 1409 W Green Street,
Urbana, Illinois 61801 $\\{$psdey, kkirkpat<EMAIL_ADDRESS>
(Date: August 27, 2024)
###### Abstract.
We analyze the thermodynamics of the focusing discrete nonlinear Schrödinger
equation in dimensions $d\geqslant 3$ with general nonlinearity $p>1$ and
under a model with two parameters, representing inverse temperature and
strength of the nonlinearity, respectively. We prove the existence of limiting
free energy and analyze the phase diagram for general $d,p$. We also prove the
existence of a continuous phase transition curve that divides the parametric
plane into two regions involving the appearance or non-appearance of solitons.
Appropriate upper and lower bounds for the curve are constructed that match
the result in [CK12] in a one-sided asymptotic limit. We also look at the
typical behavior of a function chosen from the Gibbs measure for certain parts
of the phase diagram.
###### Key words and phrases:
Nonlinear Schrödinger Equation, Invariant Measure, Solitons, Free energy,
Dispersive Equations
###### Contents
1. 1 Introduction
2. 2 Main Results
3. 3 Background, Literature Review and Heuristics
4. 4 Gaussian Free Field
5. 5 Soliton Solutions and Minimal Energy
6. 6 Convergence of the Free Energy
7. 7 Analysis of Phases
8. 8 Discussion on the Typical Function in the Subcritical Phase
9. 9 Proofs of Main Theorems
10. 10 Discussion and Further Questions
## 1\. Introduction
Nonlinear Schrödinger (NLS) equations are of fundamental physical importance.
They arise in descriptions of a multitude of classical and quantum phenomena;
examples include nonlinear optics [CLS03], Bose-Einstein condensation [BK04]
and even the complex dynamics of DNA [P04]. A close cousin of the NLS, the
Gross-Pitaevskii equation, was recently used to describe a theory of dark
matter [KMW]. The focusing NLS is of significant mathematical interest due to the
competition between the dispersive character of the linear part of the
equation and the nonlinearity. A striking consequence is the formation of
solitons, localized solutions preserved in time. Additionally, the NLS has
algebraic structure; it admits a Hamiltonian description and several conserved
quantities. In fact, in dimension one and for non-linearity $p=3$, the NLS is
completely integrable, _i.e.,_ it can be described in terms of a Lax pair
[LP68]. All this being said, the behavior of the focusing NLS is particularly
challenging to understand in higher dimensions and it is this situation that
we aim to address. We first discuss the continuum focusing nonlinear
Schrödinger Equation (NLS).
Let $\psi(t,x)$ be a complex-valued function of time $t$ and spatial variable
$x\in\mathds{R}^{d}$. We say that
$\psi:[0,\infty)\times\mathds{R}^{d}\to\mathds{C}$ satisfies the continuum
focusing NLS with power non-linearity $p>1$ if
$\displaystyle{\mathrm{i}\mkern
2.0mu}\partial_{t}\psi=-\Delta\psi-|\psi|^{p-1}\psi\text{ for all }t,x.$ (1)
The continuum Hamiltonian functional is given by
$\displaystyle
H_{c}(\psi):=\int_{\mathds{R}^{d}}|\nabla\psi|^{2}dx-\frac{2}{p+1}\int_{\mathds{R}^{d}}|\psi|^{p+1}dx.$
(2)
Formally (1) may be rewritten via the variation of the Hamiltonian as
$\displaystyle\frac{d}{dt}\psi={\mathrm{i}\mkern
2.0mu}\frac{\delta}{\delta\psi^{*}}H_{c}(\psi).$
Given the Hamiltonian structure, it is reasonable to address questions
regarding well-posedness and asymptotic behavior via the construction of
invariant measures for the flow. This approach has a rich history, and we will
survey the results about the existence of solutions and invariant measures in
Section 3. There is a significant obstacle to applying the standard method of
construction to the continuum equation in three dimensions and higher, which
we will also address in Section 3. Essentially, the natural candidate is not
normalizable due to spatial regularity issues (see [LRS88]). One way around
this obstacle is to consider a spatial discretization and study the discrete
NLS instead (see [CK12, C14]). For the spatial dimension, we fix an integer
$d\geqslant 3$. Let $\mathbb{T}_{n}^{d}$ be the $d$-dimensional discrete torus
with vertex set indexed by
$\displaystyle V=V_{n}=[n]^{d}:=\\{0,1,\ldots,n-1\\}^{d}$
of size $N=n^{d}$ and edge set $E=E_{n}$. We will denote the $d-$dimensional
integer lattice as $\mathds{Z}^{d}$. We take $h$ to be the spacing between two
neighboring vertices in either case. The discrete nearest neighbor Laplacian
acting on $\ell^{2}(G)$ with spacing $h$ and $G=\mathds{T}^{d}_{n}$ or
$\mathds{Z}^{d}$ (the case under consideration will always be specified as
required) is defined as
$\displaystyle\Delta_{h}\psi_{{\boldsymbol{x}}}:=\frac{1}{h^{2}}\sum_{\boldsymbol{y}\sim{\boldsymbol{x}}}(\psi_{{\boldsymbol{x}}}-\psi_{\boldsymbol{y}}),$
(3)
where $\boldsymbol{y}\sim{\boldsymbol{x}}$ denotes the sum over all nearest
neighbors $\boldsymbol{y}$ of ${\boldsymbol{x}}$ and
$\psi=(\psi_{{\boldsymbol{x}}})_{{\boldsymbol{x}}\in V}\in\ell^{2}(G)$. We may
regard the parameter $1/h$ to be the coupling strength of the lattice. The
focusing Discrete Nonlinear Schrödinger (DNLS) equation on $G$ with
nonlinearity $p$ is defined as a coupled system of ODEs with
$(\psi_{{\boldsymbol{x}}}(t))_{{\boldsymbol{x}}\in V},t>0$ satisfying
$\displaystyle{\mathrm{i}\mkern
2.0mu}\frac{d}{dt}\psi_{{\boldsymbol{x}}}(t)=-\Delta_{h}\psi_{{\boldsymbol{x}}}(t)-|\psi_{{\boldsymbol{x}}}(t)|^{p-1}\psi_{{\boldsymbol{x}}}(t),\qquad{\boldsymbol{x}}\in
V,t>0.$ (4)
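As a concrete (and purely illustrative) sketch of the operator in (3), the following Python snippet implements the nearest-neighbor Laplacian on the discrete torus and numerically checks two of its basic properties: constant functions lie in its kernel, and its quadratic form equals $\frac{1}{h^{2}}\sum_{\text{edges}}|\psi_{{\boldsymbol{x}}}-\psi_{\boldsymbol{y}}|^{2}$, the gradient term of the Hamiltonian (5). The values of $n$, $d$, $h$ are arbitrary choices of ours.

```python
import numpy as np

def laplacian(psi, h=1.0):
    """Discrete nearest-neighbor Laplacian (3) on the d-dim torus.

    psi is a complex array whose d axes are the torus directions;
    periodic wrap-around is handled by np.roll.
    """
    out = np.zeros_like(psi)
    for axis in range(psi.ndim):
        out += 2 * psi - np.roll(psi, 1, axis) - np.roll(psi, -1, axis)
    return out / h**2

rng = np.random.default_rng(0)
n, d, h = 5, 3, 0.5
psi = rng.standard_normal((n,) * d) + 1j * rng.standard_normal((n,) * d)

# Constant functions span the kernel of the torus Laplacian.
assert np.allclose(laplacian(np.ones((n,) * d), h), 0)

# Quadratic-form identity: <psi, Delta_h psi> = (1/h^2) sum_edges |psi_x - psi_y|^2,
# the gradient term appearing in the Hamiltonian (5).
quad = np.vdot(psi, laplacian(psi, h)).real
edges = sum(np.sum(np.abs(psi - np.roll(psi, 1, ax)) ** 2) for ax in range(d)) / h**2
assert np.isclose(quad, edges)
```

The same routine also diagonalizes on plane waves, with eigenvalues $\frac{4}{h^{2}}\sum_{j}\sin^{2}(\pi k_{j}/n)$.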
Equation (4) admits a global solution for $\ell^{2}$ initial data for both
cases of $G$. Like the continuum equation, it may be cast into a Hamiltonian
form. The discrete Hamiltonian associated with the focusing DNLS is
$\displaystyle\begin{split}\mathcal{H}_{h}(\psi)&=h^{d-2}\sum_{({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})\in
E}\left|\psi_{{\boldsymbol{x}}}-\psi_{{\boldsymbol{x}^{\prime}}}\right|^{2}-\frac{2}{p+1}\cdot
h^{d}\sum_{{\boldsymbol{x}}\in V}|\psi_{{\boldsymbol{x}}}|^{p+1}\\\
&={h^{d-2}}\left\|\nabla\psi\right\|_{2}^{2}-\frac{2}{p+1}\cdot
h^{d}\left\|\psi\right\|_{p+1}^{p+1}.\end{split}$ (5)
Up to scaling by a constant, (5) is the discrete analog of (2). When defined
on the discrete torus, the phase space of the equation is isomorphic to
$\mathds{C}^{N}$, which crucially is finite-dimensional and has a natural
volume form. The volume form is preserved under the flow of the equation via
Liouville’s theorem. As a consequence of Noether’s theorem, the invariance of
the Hamiltonian with respect to multiplication of
$(\psi_{{\boldsymbol{x}}})_{{\boldsymbol{x}}\in V}$ by a constant phase
implies that the $\ell^{2}$ norm is conserved by the dynamics. We refer to
the $\ell^{2}$ norm as the mass. We immediately obtain an invariant Gibbs
probability measure for the dynamics, defined on $\mathds{C}^{V}$ with
probability density of $(\psi_{{\boldsymbol{x}}})_{{\boldsymbol{x}}\in V}$
proportional to
$\displaystyle
e^{-\beta\mathcal{H}_{h}(\psi)}\cdot\mathds{1}_{\\{h^{d}\left\|\psi\right\|^{2}_{2}\leqslant
B\\}}\,\mathbf{d}\psi$ (6)
with $\beta,B>0$. For simplicity, we express the volume element as
$\displaystyle\mathbf{d}\psi:=\prod_{{\boldsymbol{x}}\in
V}d\Re{\psi_{{\boldsymbol{x}}}}\,d\Im{\psi}_{{\boldsymbol{x}}}.$
The relationship between $B$, $\beta$, $h$ and $N$ governs the scaling
behavior of this measure. In this article, we will work with the following
version of the problem.
###### Definition 1.1 (The model).
Let $d\geqslant 3,p>1$ be fixed. Given a positive real number $\nu>0$, we
consider the Hamiltonian
$\displaystyle
\mathcal{H}_{\nu,N}(\psi):=\left\|\nabla\psi\right\|_{2}^{2}-\left(\frac{\nu}{N}\right)^{(p-1)/2}\cdot\frac{2}{p+1}\left\|\psi\right\|_{p+1}^{p+1}$
(7)
for $\psi:\mathds{T}^{d}_{n}\to\mathds{C}$. With this Hamiltonian, we obtain a
Gibbs measure of the form
$\displaystyle\mu_{N}^{\theta,\nu}(\psi)=\frac{1}{Z_{N}(\theta,\nu)}e^{-\theta\mathcal{H}_{\nu,N}(\psi)}\cdot\mathds{1}_{\\{\left\|\psi\right\|_{2}^{2}\leqslant
N\\}}\mathbf{d}\psi$ (8)
where $\theta>0$ is the inverse temperature and
$\displaystyle
Z_{N}(\theta,\nu):=\int_{\mathds{C}^{V}}e^{-\theta\mathcal{H}_{\nu,N}(\psi)}\cdot\mathds{1}_{\\{\left\|\psi\right\|_{2}^{2}\leqslant
N\\}}\mathbf{d}\psi$ (9)
is the partition function.
In terms of the original measure (6), this corresponds to choosing our
parameters to obey
$\displaystyle\begin{split}\theta&=\beta B\cdot\frac{1}{Nh^{2}}\text{ and
}\nu=B\cdot h^{-d+4/(p-1)};\\\ \text{or, equivalently }\beta&=\theta/\nu\cdot
Nh^{-(d-2)+4/(p-1)}=\theta/\nu\cdot Nh^{-(d-2)(p-p_{e})/(p-1)},\\\ \text{ and
}B&=\nu\cdot h^{d-4/(p-1)}=\nu\cdot h^{d(p-p_{m})/(p-1)},\end{split}$ (10)
where $p_{m}:=1+4/d<p_{e}:=1+4/(d-2)$ are the mass critical and energy
critical threshold, respectively. When $h$ is of constant order, we get
$B=\Theta(1)$ whereas $\beta=\Theta(N)$. However, when $h\to 0$, we have $B\to
0$ or $\infty$ according as $p>p_{m}$ or $p<p_{m}$; and $\beta\ll N$ or $\gg
N$ according as $p>p_{e}$ or $p<p_{e}$. We will elaborate further on the
scaling in Section 3.5. It suffices to say here that the dependence in $N$ is
chosen such that asymptotically the linear and nonlinear parts of the energy
contribute on the same scale.
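The algebra behind the reparametrization (10) can be verified mechanically. The following sketch (with sample values of $\theta,\nu,h,n$ chosen by us purely for illustration) checks that the forward map reproduces the defining relations, and that the two exponent forms in (10) agree via the critical thresholds $p_{m},p_{e}$.

```python
import numpy as np

d, p = 3, 3.0
p_m, p_e = 1 + 4 / d, 1 + 4 / (d - 2)    # mass / energy critical thresholds
theta, nu, h, n = 0.7, 2.0, 0.1, 8        # arbitrary sample values
N = n**d

# Map (10): (beta, B) in terms of (theta, nu).
B = nu * h ** (d - 4 / (p - 1))
beta = theta / nu * N * h ** (-(d - 2) + 4 / (p - 1))

# Consistency with theta = beta*B/(N h^2) and nu = B * h^{-d + 4/(p-1)}.
assert np.isclose(theta, beta * B / (N * h**2))
assert np.isclose(nu, B * h ** (-d + 4 / (p - 1)))

# The two exponent forms in (10) agree via p_m and p_e.
assert np.isclose(-(d - 2) + 4 / (p - 1), -(d - 2) * (p - p_e) / (p - 1))
assert np.isclose(d - 4 / (p - 1), d * (p - p_m) / (p - 1))
```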
## 2\. Main Results
In order to state our results, we need to introduce two functions, $I$ and
$W$, defined on $(0,\infty)$. They correspond to the nonlinear and linear
contributions to the free energy, respectively, as will be explained.
###### Definition 2.1 (Soliton with given mass).
Recall the discrete Hamiltonian introduced in (5), with $h=1$ and defined on
$\ell^{2}(\mathds{Z}^{d})$. We define the minimum energy at mass $a$ as
$\displaystyle I(a):=\inf_{\left\|\psi\right\|_{2}^{2}=a}\mathcal{H}(\psi).$
(11)
This is the energy of a soliton of mass $a$, when it exists. We will elaborate
more on this in Section 5. A standard result in the theory of DNLS states that
the minimizer is attained whenever $I(a)<0$. Moreover, there exists
$R_{p}=R_{p,d}\geqslant 0$ such that $I(a)=0$ iff $a\leqslant R_{p}$ (see
[WEI99]). It can also be shown that $I(a)\geqslant 0$ implies that $I(a)=0$.
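To get a feel for why $I(a)<0$ for large mass, one can evaluate the Hamiltonian (5) (with $h=1$, on $\mathds{Z}^{d}$) at simple trial functions; trial functions only give upper bounds on $I$. The sketch below, with the illustrative choices $d=3$ and $p=3$ (mass-supercritical), uses the one-site trial $\sqrt{a}\,\delta_{\boldsymbol{0}}$.

```python
import numpy as np

d, p = 3, 3.0   # illustrative choices; p > 1 + 4/d is mass-supercritical

def energy_one_site(a):
    """Hamiltonian (5) (h = 1, on Z^d) of the trial function sqrt(a)*delta_0:
    the 2d edges leaving the origin each contribute a to the gradient term."""
    return 2 * d * a - 2 / (p + 1) * a ** ((p + 1) / 2)

# Trial functions only give upper bounds on I(a).  For large mass the
# concentrated trial already has negative energy, so I(50) < 0 ...
assert energy_one_site(50.0) < 0     # 2*3*50 - (1/2)*50^2 = 300 - 1250 = -950
# ... while for small mass this trial stays positive, consistent with
# I(a) = 0 below the threshold R_p.
assert energy_one_site(1.0) > 0      # 6 - 1/2 = 5.5
```

This is the concentration mechanism behind the threshold $R_{p}$: for supercritical $p$ the nonlinear gain $a^{(p+1)/2}$ eventually beats the linear cost, which is only of order $a$.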
As for the function $W$, let $\Delta$ denote the graph Laplacian on the
discrete torus. Let $\phi^{\boldsymbol{0}}$ denote the constant
$\boldsymbol{1}$ vector, which spans $\ker(\Delta)$, and $\Delta^{\perp}$ be
the restriction of $\Delta$ on
$\mathds{C}^{N}/\mathrm{span}\\{\phi^{\boldsymbol{0}}\\}$. We define
$\displaystyle K(y)=\lim_{N\to\infty}\frac{1}{N}\log\det(y-\Delta^{\perp}),\
y\in[0,\infty).$ (12)
In Section 4, we will show the convergence of the limit and give a more useful
expression for $K(y)$.
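A finite-$N$ version of (12) is easy to compute numerically. In the sketch below we assume (consistently with the convention in (19)) that the relevant spectrum consists of the standard torus eigenvalues $\lambda_{\boldsymbol{k}}=\sum_{j}4\sin^{2}(\pi k_{j}/n)$ of $-\Delta$, so that the determinant in (12) is $\prod_{\boldsymbol{k}\neq\boldsymbol{0}}(y+\lambda_{\boldsymbol{k}})$; the grid sizes are our choices.

```python
import numpy as np

def K_N(y, n, d):
    """Finite-N analogue of (12): (1/N) sum_{k != 0} log(y + lambda_k),
    assuming the standard torus eigenvalues lambda_k = sum_j 4 sin^2(pi k_j/n)
    (the sign convention consistent with (19))."""
    lam1d = 4 * np.sin(np.pi * np.arange(n) / n) ** 2
    lam = sum(np.meshgrid(*([lam1d] * d), indexing="ij")).ravel()
    return np.sum(np.log(y + lam[1:])) / n**d   # lam[0] = 0 is the zero mode

# The sequence K_N(y) settles rapidly as n grows, consistent with the
# existence of the limit in (12).
vals = [K_N(1.0, n, 3) for n in (4, 8, 16)]
assert abs(vals[2] - vals[1]) < abs(vals[1] - vals[0])
```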
###### Definition 2.2 (Free field energy with given mass).
Let $K$ be as in (12). We define $W:(0,\infty)\to(0,\infty)$ as
$\displaystyle W(b):=\sup_{y\,:\,K^{\prime}(y)\leqslant b}(K(y)-yb).$ (13)
In Lemma 4.7 we will show that $W$ is a decreasing convex function. Moreover,
it is the limiting free energy of the Gaussian Free Field conditioned to have
mass $b$, as will be demonstrated explicitly in Section 4. There is a
$d$-dependent constant $C_{d}$ (see (26)) such that $W(b)=K(0)$ for all
$b\geqslant C_{d}$.
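The following finite-$n$ sketch of (13) illustrates these properties numerically. We assume, consistently with the Legendre-transform description in Section 3.6, that the optimal $y$ in (13) is the boundary point $y=L(b)=(K^{\prime})^{-1}(b)$ of the constraint set when $b<C_{d}$, and $y=0$ otherwise; the torus eigenvalues and the value $n=16$ are our choices.

```python
import numpy as np

n, d = 16, 3
lam1d = 4 * np.sin(np.pi * np.arange(n) / n) ** 2
lam = sum(np.meshgrid(*([lam1d] * d), indexing="ij")).ravel()[1:]  # k != 0
N = n**d

K  = lambda y: np.sum(np.log(y + lam)) / N
Kp = lambda y: np.sum(1.0 / (y + lam)) / N     # K' is positive and decreasing

C_d = Kp(0.0)   # finite-n proxy for the limiting mass per site C_d

def L(b):
    """Inverse of K' by bisection: boundary point of {y : K'(y) <= b}."""
    lo, hi = 0.0, 1.0
    while Kp(hi) > b:
        hi *= 2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Kp(mid) > b else (lo, mid)
    return 0.5 * (lo + hi)

def W(b):
    y = L(b) if b < C_d else 0.0   # plateau W(b) = K(0) once b >= C_d
    return K(y) - y * b

bs = np.linspace(0.05, 2 * C_d, 20)
ws = [W(b) for b in bs]
assert all(w1 >= w2 - 1e-9 for w1, w2 in zip(ws, ws[1:]))  # W is decreasing
assert np.isclose(W(2 * C_d), K(0.0))                      # plateau value
```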
Throughout the rest of the paper, we will fix the spatial dimension
$d\geqslant 3$; unless otherwise specified, the nonlinearity will be fixed at
$p>1$.
### 2.1. Free energy limit
Let $W(b)$ denote the limiting mean free energy of the Gaussian Free Field
conditioned to have mass $b$, as given in equation (13) and $I(a)$ denote the
minimum energy for the Hamiltonian (5) on $\mathds{Z}^{d}$ at mass $a$, as
given in equation (11). Our first main result is the following.
###### Theorem 2.3 (Convergence of free energy).
Let $Z_{N}(\theta,\nu)$ be as in (9). We have,
$\displaystyle\left|\frac{1}{N}\log
Z_{N}(\theta,\nu)-F(\theta,\nu)\right|\leqslant O(N^{-2(d-2)/{3d}}),$
where
$\displaystyle
F(\theta,\nu):=\log\frac{\pi}{\theta}-\min_{0<a<1}\left(W(\theta(1-a))+\frac{\theta}{\nu}I(a\nu)\right).$
(14)
Essentially, Theorem 2.3 states that asymptotically the mass of a typical
function may be divided into two parts:
1. (1)
Structured part: has mass $\approx aN$, is localized to a region of size
$\Theta(1)$, with function values $\psi_{x}$ of order $\sqrt{N}$; it
contributes $\exp(-N\theta\nu^{-1}I(\nu a)+o(N))$ to the free energy.
2. (2)
Random part: has mass $\approx bN$ with $b\leqslant 1-a$; the maximum value of
$|\psi_{x}|^{2}$ is $o(N)$, Gaussian fluctuations dominate the typical
behavior, and the contribution to the free energy is given by the integral
$\displaystyle\int_{\sum_{x}|\psi_{x}|^{2}\ \approx\
bN}\exp\left(-\theta\left\|\nabla\psi\right\|_{2}^{2}\right)\mathbf{d}\psi$
$\displaystyle=(1/\theta)^{N}\cdot\int_{\sum_{x}|\psi_{x}|^{2}\ \approx\
b\theta N}\exp\left(-\left\|\nabla\psi\right\|_{2}^{2}\right)\mathbf{d}\psi$
$\displaystyle=(\pi/\theta)^{N}\cdot\exp(-NW(b\theta)+o(N)).$
Optimizing over $a,b$ with $a+b\leqslant 1$ should give the scaling behavior of
$Z_{N}(\theta,\nu)$. We prove that this is indeed the case.
### 2.2. Phase transition curve
With the behavior of the free energy established, the question moves towards
the behavior of the phases, which we characterize by the mass fraction $a$
allocated to the soliton portion. We will prove in Lemma 4.7 that $W$ is
continuous and differentiable. In particular, if the minimizer is attained at
$a=a_{\star}\in(0,1)$, then by differentiability of $W,I$, we get the relation
$W^{\prime}(\theta(1-a_{\star}))=I^{\prime}(a_{\star}\nu).$ However, we have
no explicit formula for $I^{\prime}$ and only an implicit formula for
$W^{\prime}$, making it difficult to utilize this relation. We will still be
able to characterize the phase diagram in Section 2.2. We define
$\displaystyle\mathscr{M}(\theta,\nu):=\operatorname{argmin}_{0\leqslant
a\leqslant 1}\left(W(\theta(1-a))+\frac{\theta}{\nu}I(a\nu)\right)$ (15)
as the set of minimizers for the variation formula. This is a compact set by
continuity of $W$ and $I$. We further define
$\displaystyle a_{\star}(\theta,\nu):=\min\mathscr{M}(\theta,\nu)$ (16)
as the smallest $a$ attaining the global minimum for (14). We define
$\displaystyle\mathscr{S}:=\\{(\theta,\nu)\in(0,\infty)^{2}:a_{\star}(\theta,\nu)>0\\}$
(17)
as the open region in the $(\theta,\nu)$ phase-space having non-zero solitonic
contribution and
$\mathscr{D}:=\text{int}((0,\infty)^{2}\setminus\mathscr{S}),$
as the open region in the $(\theta,\nu)$ phase-space having zero solitonic
contribution. Note that
$\displaystyle\mathscr{S}$
$\displaystyle=\\{(\theta,\nu)\in(0,\infty)^{2}:\min_{0\leqslant a\leqslant
1}\left((W(\theta(1-a))-W(\theta))/\theta+I(a\nu)/\nu\right)<0\\}$
$\displaystyle=\bigcup_{\varepsilon>0,\
0<a<1}\left\\{(\theta,\nu)\in(0,\infty)^{2}:(W(\theta(1-a))-W(\theta))/\theta+I(a\nu)/\nu<-\varepsilon\right\\}$
is an open set by continuity of the map
$(\theta,\nu)\mapsto(W(\theta(1-a))-W(\theta))/\theta+I(a\nu)/\nu$ for fixed
$a$. The following result characterizes the phase transition. Define the
function
$\displaystyle\xi_{p}(t):=\inf_{0<a<1}\frac{-\ln(1-a)}{a^{(p+1)/2}+t}\text{
for }t\geqslant 0.$ (18)
See Figure 1 for a plot of $\xi_{p}(0)$.
Figure 1. Plot of $p$ vs. $\xi_{p}(0)$.
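The quantity $\xi_{p}(t)$ in (18) is a one-dimensional minimization and is easy to evaluate directly. The following sketch does so on a fine grid; the grid resolution and the choice $p=3$ are ours.

```python
import numpy as np

def xi(p, t=0.0, grid=200001):
    """Direct numerical evaluation of (18) on a fine grid in (0, 1)."""
    a = np.linspace(1e-9, 1 - 1e-9, grid)
    return np.min(-np.log1p(-a) / (a ** ((p + 1) / 2) + t))

# For the cubic case p = 3 the infimum is attained in the interior of (0, 1);
# the objective blows up like 1/a near 0 and like -log(1-a) near 1.
assert 2.3 < xi(3.0) < 2.6
# xi_p(t) is decreasing in t, since t only enlarges the denominator.
assert xi(3.0, 1.0) < xi(3.0)
```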
###### Theorem 2.4 (Existence of Phase transition curve).
We have the following results.
1. a)
For $\nu\leqslant R_{p}$, we have $a_{\star}(\theta,\nu)=0$.
2. b)
For $\nu>R_{p}$, there exists a strictly decreasing continuous function
$\theta_{c}:(R_{p},\infty)\to(0,\infty)$ such that
$a_{\star}(\theta,\nu)\quad\begin{cases}=0&\text{ for
}\theta\leqslant\theta_{c}(\nu),\\\ >0&\text{ for
}\theta>\theta_{c}(\nu).\end{cases}$
3. c)
The function $\theta_{c}$ is bounded by
$\theta_{c}(\nu)\leqslant\min\left\\{\frac{p+1}{2}\nu^{-(p-1)/2}\cdot\xi_{p}(0),\frac{C_{d}\nu}{\nu-
R_{p}}\right\\}$
and satisfies,
$\displaystyle\lim_{\nu\uparrow\infty}\theta_{c}(\nu)\cdot\frac{2}{p+1}\nu^{(p-1)/2}$
$\displaystyle=\xi_{p}(0)\text{ and }\lim_{\nu\downarrow
R_{p}}\theta_{c}(\nu)\cdot(\nu-R_{p})=C_{d}R_{p}.$
4. d)
Moreover, we have that for all $\nu>R_{p}$,
$\liminf_{\theta\downarrow\theta_{c}}a_{\star}(\theta,\nu)>0.$
###### Remark 2.5.
We expect $\mathscr{M}$ to be a singleton when $a_{\star}>0$. However, to prove
this we need detailed knowledge of the behavior of the function $I$, especially
close to $R_{p}$, which we currently lack.
See Figure 2 for a pictorial description of the phase diagram.
Figure 2. Top: Phase diagrams for $\\{R>0,I^{\prime}(R+)=0\\}$ and $\\{R=0$ or
$R>0,I^{\prime}(R+)<0\\}$, respectively. Bottom: Representative plots of the
function $a\mapsto W(\theta(1-a))/\theta+I(a\nu)/\nu$ for the different regions
I, IIa, IIb, IIIa, IIIb of the phase diagram.
###### Remark 2.6.
With the scaling $\theta\nu^{-(p-1)/2}=C$ and $\nu\to\infty$, we recover the
regime considered in [CK12], the critical value being $C=\xi_{p}(0)$. This is
due to the fact that as $\nu$ becomes large, the nonlinear part of the
Hamiltonian dominates and the lattice sites decouple.
### 2.3. Typical Dispersive Function
We now take $(\theta,\nu)\in\mathscr{D}$. In practice, for fixed $\theta$ we
will have to take $\nu$ appropriately small (but not vanishing). We introduce
the prototypical object to which we will compare a typical function in the
dispersive phase.
###### Definition 2.7.
Let $\\{\zeta_{\boldsymbol{k}}\\}_{\boldsymbol{k}\in[n]^{d}}$ be i.i.d.
standard complex Gaussian random variables. Let
$\\{\phi^{\boldsymbol{k}}\\}_{\boldsymbol{k}\in[n]^{d}}$ and
$\\{\lambda_{\boldsymbol{k}}\\}_{\boldsymbol{k}\in[n]^{d}}$ respectively be
the eigenfunctions (23) and eigenvalues (22) of $-\Delta$, as defined on
$\ell^{2}(\mathds{T}^{d}_{n})$. We define the massive Gaussian free field
(MGFF) with mass parameter $y>0$ as
$\displaystyle\Psi^{y}:=\sum_{\boldsymbol{k}\in[n]^{d}}\frac{\zeta_{\boldsymbol{k}}}{\sqrt{y+\lambda_{\boldsymbol{k}}}}\phi^{\boldsymbol{k}}.$
(19)
The properties of the massive free field are well understood. We will recap
some of them in Section 4.
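A sample of the MGFF (19) can be drawn efficiently in the Fourier basis. The sketch below assumes that the eigenfunctions (23) are the normalized discrete plane waves and the eigenvalues (22) are $\lambda_{\boldsymbol{k}}=\sum_{j}4\sin^{2}(\pi k_{j}/n)$; the values of $n$, $y$, and the Monte Carlo sample size are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, y = 8, 3, 0.5
N = n**d

lam1d = 4 * np.sin(np.pi * np.arange(n) / n) ** 2
lam = sum(np.meshgrid(*([lam1d] * d), indexing="ij"))   # eigenvalues of -Delta

def sample_mgff():
    """One sample of (19): independent standard complex Gaussian coefficients
    scaled by 1/sqrt(y + lambda_k) in the Fourier (eigen) basis."""
    zeta = (rng.standard_normal((n,) * d)
            + 1j * rng.standard_normal((n,) * d)) / np.sqrt(2)
    coef = zeta / np.sqrt(y + lam)
    # inverse DFT with unit-norm eigenfunctions phi_k = e^{2 pi i k.x/n}/sqrt(N)
    return np.fft.ifftn(coef) * np.sqrt(N)

# By Parseval, E ||Psi^y||_2^2 = sum_k 1/(y + lambda_k); compare with a
# Monte Carlo average over independent samples.
target = np.sum(1.0 / (y + lam))
est = np.mean([np.vdot(s, s).real for s in [sample_mgff() for _ in range(400)]])
assert abs(est - target) / target < 0.1
```

Note that for $y>0$ the zero mode $\boldsymbol{k}=\boldsymbol{0}$ causes no divergence, so no orthogonality restriction is needed here.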
###### Theorem 2.8.
Let $\theta<C_{d}$ be fixed, $p<5+4\sqrt{2}$ and $\nu$ be sufficiently small.
Let $y_{N}$ be such that
$\operatorname{\mathds{E}}\left\|\Psi^{y_{N}}\right\|_{2}^{2}=N\theta$. Let
$\mathcal{A}_{N}\subset\mathds{C}^{N}$ be such that
$\mathds{P}(\Psi^{y_{N}}\in\mathcal{A}_{N})\leqslant N^{-\alpha}$ where
$\alpha>1/2$. Then we have an $\epsilon>0$ such that
$\mu_{N}(\mathcal{A}_{N})\leqslant N^{-\epsilon}.$
In the dispersive phase, any event sufficiently rare for the massive GFF with
appropriate parameter $\theta$ is also rare for the measure $\mu_{N}$.
###### Remark 2.9.
We anticipate that the upper bound on $p$ should be removable; it is a
consequence of using a particular auxiliary function to aid our proofs.
###### Corollary 2.10.
For $\Psi$ sampled from $\mu_{N}$, we have with probability approaching $1$
$\left\|\Psi\right\|_{\infty}\leqslant\sqrt{3C_{d}\log N}.$
The penalty factor of $N^{1/2}$ does not arise from the exponential tilting
due to the nonlinearity, but rather the severe restriction placed on the
allowed value of the mass, as will be discussed in Section 8.
## 3\. Background, Literature Review and Heuristics
The purpose of this section is threefold: to provide a very brief survey of
some of the important characteristics of the continuum NLS, to discuss the
obstacles that arise when studying them, and finally to describe how the
discrete NLS captures some of these phenomena without the obstacles. Thus the analysis of
the discrete case sheds some light on the continuum.
### 3.1. Criticality
In the continuum focusing NLS, the value of the nonlinearity parameter $p$ can
lead to drastically different long-term behavior and also impose different
requirements on the regularity and size of the initial data to obtain a well-
posed solution. Fundamentally, the issue is that $|\psi|^{p-1}\psi$ need not
be in $L^{2}$, and thus can lead to the development of point singularities.
The Gagliardo-Nirenberg-Sobolev (GNS) inequality allows us to use the $H^{1}$
norm to bound the $L^{p+1}$ norm: there is a constant depending on $p$ and $d$
such that
$\displaystyle\left\|\psi\right\|_{p+1}^{p+1}\leqslant
C_{p,d}\left\|\nabla\psi\right\|_{2}^{(p-1)d/2}\cdot\left\|\psi\right\|_{2}^{2+(2-d)(p-1)/2}.$
(20)
The regime $p<1+4/d$ is referred to as mass subcritical; there, $H^{1}$ initial
data is adequate for global well-posedness. The regime $p=1+4/d$ is referred to
as mass critical. In this case, $H^{1}$ data leads to a globally well-posed
solution so long as the $L^{2}$ norm is smaller than a threshold depending on
$p$ and $d$. When $p>1+4/d$, referred to as the mass supercritical regime, we
require an upper threshold for both the $L^{2}$ and $H^{1}$ norms.
### 3.2. Soliton Solutions
The competition between the dispersion and the focusing effect of the
nonlinearity yields spatially stationary solutions called solitons. Soliton
solutions may be realized in one of two ways: either using the separation of variables or
through a variational characterization by minimizing the Hamiltonian subject
to a mass constraint. If $\varphi({\boldsymbol{x}})$ denotes a soliton
solution, it satisfies the following nonlinear elliptic problem for $\omega<0$
$\omega\varphi+\Delta\varphi+|\varphi|^{p-1}\varphi=0.$
Solitons are strongly localized; it can be proved that they decay
exponentially about a center. Further, the variational characterization
implies that they are radially symmetric about a center and smooth for energy
subcritical nonlinearity. By their construction, optimizing the difference
of the $L^{p+1}$ and $H^{1}$ norms, they turn out to be precisely the functions for which the
GNS inequality is saturated, and the constant is sharp [WEI82].
### 3.3. Use of Invariant Measures
The invariant measure approach to studying the continuum NLS is well-known and
celebrated. The hope is that an invariant measure sheds light on the ‘typical’
behavior of the NLS. For instance, this can include questions of well-
posedness, as well as questions of types of solutions. Consider the famous
_soliton resolution conjecture_ , which states that for generic initial data,
we see a portion of mass coalescing into a soliton and a portion dispersing
away. Invariant measures are a natural means for talking about what
constitutes generic initial data.
In terms of construction, the idea is to use the intuition provided by finite-
dimensional Hamiltonian systems to obtain candidate invariant measures on
appropriate function spaces. Of course, there is no version of a Lebesgue
measure on a function space to which Liouville’s theorem can be applied.
However, as is well known in probability, we may rigorously make sense of
Gaussian measures which have ‘density’ proportional to
$\exp(-\left\|\psi\right\|_{H^{1}})$ on appropriate Wiener spaces.
For instance, in dimension one with Dirichlet boundary conditions, this is the
Brownian bridge on the torus (with the harmonic part fixed); this corresponds
to the Gaussian Free Field. The regularity of these as
distributions is well understood, and the hope is that by exponentially
tilting these Gaussian measures by $\left\|\psi\right\|_{p+1}^{p+1}$ and
employing a mass cut-off, we may obtain an invariant measure for the dynamics.
This requires verifying that the tilt is integrable with respect to the
reference Gaussian measure and that the NLS flow is defined on the support of
the measure. This technique was used to construct a candidate invariant
measure for the 1-d periodic focusing NLS by Lebowitz, Rose, and Speer
[LRS88]. They worked in the subcritical mass regime, where GNS inequality can
be applied to control the nonlinearity. Later, McKean and Vaninsky [MV94,
MV97a, MV97b] proved that this measure is indeed invariant for the flow.
Bourgain [B94] used a version of this invariant measure to prove global well-
posedness for the periodic equation. Brydges and Slade [BS96] followed a
similar approach for the two-dimensional periodic equation, with a slightly
different ultraviolet cutoff. They established a normalizable measure for mass
below a critical threshold. However, their measure is not invariant for the
flow of the NLS. Indeed, this approach breaks down in $d\geqslant 3$, where it
fails to yield even a normalizable measure, as the associated
Gaussian field is too rough. It is at this juncture that discretization comes
into play.
### 3.4. The Discrete NLS
We introduced the DNLS in (4), and observed that it retains the Hamiltonian
structure, akin to its continuum counterpart. The discrete setting is harder
to work with in many ways as several of the symmetries of the continuum NLS
are lost, such as Galilean invariance and rotational invariance. On the other
hand, we do not have the same regularity issues; the DNLS is globally well-
posed for $\ell^{2}$ initial data. Like the continuum equation, the focusing
DNLS admits soliton solutions. As in the continuum, they can be realized either
through the separation of variables or as minimizers of the Hamiltonian. We
discuss them in detail in Section 5.
In [WEI99], Weinstein studied the discrete focusing NLS on $\mathds{Z}^{d}$
and showed that soliton solutions of arbitrary mass could be realized for
mass-subcritical nonlinearity. On the other hand, in the mass-supercritical
case, there is a constant depending only on the lattice coupling strength and
nonlinearity parameter, denoted by $R_{p}$, such that soliton solutions can
only be realized when the mass is more than $R_{p}$; a phenomenon strikingly
reminiscent of the blow-up in the continuum. The precise statement is provided
later; see Lemma 5.1. This analogy is strengthened by the observation that
there is a correspondence between solutions of a large mass and solutions on
the lattice with low coupling strength; soliton solutions are increasingly
concentrated onto a single lattice site as the mass increases.
In [CK12], Chatterjee and Kirkpatrick examined the behavior of the discrete
focusing cubic (with $p=3$) NLS defined on the torus of dimension $d\geqslant
3$, via the analysis of a Gibbs measure of the form (6). Their regime of
scaling is chosen to correspond to taking a limit to the continuum; the blow-
up phenomenon is realized as a phase transition. They show that a single
parameter $\theta=\theta(\beta,B)$ governs the phase behavior. When
$\theta\geqslant\theta_{c}$, a single site acquires a positive fraction of the
mass. This is explained by the scaling regime considered; the $H^{1}$ norm
part of the Hamiltonian becomes irrelevant, and the measure may be regarded as
an exponential tilt of the uniform measure on the ball via the $\ell^{4}$
norm. The immediate conclusion is that the favored states are those where all
the mass is localized to a single site.
Discrete invariant measures were used to rigorously establish a version of the
soliton resolution conjecture by Chatterjee in [C14]. Chatterjee worked with a
microcanonical ensemble, _i.e.,_ the uniform measure defined on an
$\varepsilon-$thickening of a $2N-2$ dimensional surface defined by taking
constant values of mass and energy and showed that a function uniformly drawn
from this measure, modulo translation and phase rotation, converges in a
suitable sense to a continuum soliton of the same mass.
### 3.5. Scaling commentary
There is a natural scale invariance associated with the continuum NLS. If
$\psi(t,x)$ is a solution of (1), then for any $\lambda>0$,
$\lambda^{\frac{2}{p-1}}\psi(\lambda^{2}t,\lambda x)$ is also a solution.
Since the lattice cannot be scaled, the discrete equation does not admit any
symmetry with respect to scaling. However, we do have the following
equivalence.
###### Lemma 3.1.
Let $\psi_{x}(t)$ denote a solution of the discrete NLS on either the lattice
$\mathds{Z}^{d}$ or the discrete torus $\mathds{T}^{d}$ with lattice spacing
$h$. Then $\lambda^{\frac{2}{p-1}}\psi_{x}(\lambda^{2}t)$ is a solution to the
discrete NLS corresponding to the same graph with lattice spacing $h/\lambda$.
###### Proof.
The proof follows immediately by noting that for
$h^{\prime}=h/\lambda,\ \psi^{\prime}_{x}(t):=\lambda^{\frac{2}{p-1}}\psi_{x}(\lambda^{2}t)$
we have
$\mathcal{H}_{h}(\psi)=\lambda^{d-2-\frac{4}{p-1}}\mathcal{H}_{h^{\prime}}(\psi^{\prime})$,
and using the equation of motion (4). $\blacksquare$
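The Hamiltonian identity used in the proof can be verified numerically. The sketch below checks $\mathcal{H}_{h}(\psi)=\lambda^{d-2-4/(p-1)}\mathcal{H}_{h/\lambda}(\lambda^{2/(p-1)}\psi)$ at a fixed time slice on a small torus; all parameter values are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, p, h, lmb = 6, 3, 3.0, 1.0, 1.7

def H(psi, h):
    """Discrete Hamiltonian (5) on the n^d torus with spacing h."""
    grad2 = sum(np.sum(np.abs(psi - np.roll(psi, 1, ax)) ** 2)
                for ax in range(d))
    return h ** (d - 2) * grad2 - 2 / (p + 1) * h**d * np.sum(np.abs(psi) ** (p + 1))

psi = rng.standard_normal((n,) * d) + 1j * rng.standard_normal((n,) * d)
psi_prime = lmb ** (2 / (p - 1)) * psi

# H_h(psi) = lambda^{d - 2 - 4/(p-1)} H_{h/lambda}(psi'), as in Lemma 3.1
lhs = H(psi, h)
rhs = lmb ** (d - 2 - 4 / (p - 1)) * H(psi_prime, h / lmb)
assert np.isclose(lhs, rhs)
```

The identity holds exactly for every $\lambda>0$, since both the gradient and nonlinear terms pick up the same power of $\lambda$.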
This yields a family of equivalent ODEs on the lattice, where solutions of one
can be scaled into solutions of the other. For the discrete equation, this is
the reason why solutions with low values of lattice coupling can be placed in
correspondence with solutions of significant mass, as seen in [WEI99]. In
particular, in the model (8) if we replace $\psi$ by $\lambda\psi$ we have
equivalence between (6) and (8) as long as the parameters satisfy
$\displaystyle\beta h^{d-2}=\theta\lambda^{2},\quad\beta
h^{d}=\theta\lambda^{2}\cdot(\nu\lambda^{2}/N)^{(p-1)/2}\text{ and
}Bh^{-d}=N\lambda^{-2}.$
Solving we get the relations (10) with $\lambda=\sqrt{Nh^{d}/B}$.
To use physics terminology, this regime of scaling corresponds to taking the
infrared limit, without removing the ultraviolet cutoff, essentially allowing
$\mathds{T}^{d}_{n}$ to grow to $\mathds{Z}^{d}$. As a consequence of this
scaling, we may consider the behavior of concentrated and dispersed parts of
typical functions separately. Take a function $\psi$ in
$\ell^{2}(\mathds{T}^{d}_{n})$ with mass bounded above by $N$. We may break it
into a region where the values are of order $\sqrt{N}$ and a region where they
are of strictly lower order. Let the region of concentration be $U$. The
energy of the restriction $\psi_{U}$ is then expressible in terms of the
$\mathds{Z}^{d}$ Hamiltonian as
$\mathcal{H}_{N}(\psi_{U})\approx\frac{N}{\nu}\mathcal{H}\left(\sqrt{\frac{\nu}{N}}\psi_{U}\right)$
where $\sqrt{\nu/N}\cdot\psi_{U}$ scales to yield a valid function in
$\ell^{2}(\mathds{Z}^{d})$.
As for the dispersive part, whenever the order of typical values of $\psi_{x}$
is lower than $N$, the nonlinearity does not contribute, and we see Gaussian
Free Field behavior for $\psi_{U^{c}}$. Moreover, the contribution of this
portion to the free energy is non-trivial. The analysis of this regime is what
makes this article novel. Usually the scaling is chosen such that a function
sampled from the measure converges to a soliton; our work strongly suggests
(yet falls short of explicitly characterizing) the behavior of the
fluctuations about this soliton, in the vein of studying fluctuations for
various random surface models. Indeed, this is the case for our analysis of
the typical function in the dispersive phase. The fact that there is no
soliton essentially corresponds to the fact that no centering is required; we
only see the fluctuations.
### 3.6. Notations
The following are fixed for the entire article
1. i.
We will denote the integer lattice by $\mathds{Z}^{d}$ and the discrete torus
of side length $n$ by $\mathds{T}^{d}_{n}$.
2. ii.
Vertices in $\mathds{T}^{d}_{n}$ or $\mathds{Z}^{d}$ will be denoted by
${\boldsymbol{x}}$.
3. iii.
The dual variable to ${\boldsymbol{x}}$ in the sense of the Fourier transform
defined on $\mathds{T}_{n}^{d}$ will be denoted by $\boldsymbol{k}$.
4. iv.
$N$ will always denote $n^{d}$.
5. v.
$\mathds{C}^{N}$, as is standard, will denote the complex vector space of
dimension $N$, with the standard inner product.
6. vi.
$\theta$ and $\nu$ will be positive real numbers denoting the inverse
temperature and the coupling constant, respectively.
7. vii.
$\mathscr{S}$ and $\mathscr{D}$ are subsets of $[0,\infty)^{2}$ denoting the
solitonic and dispersive regions of the parametric plane $(\theta,\nu)$.
8. viii.
$\mathscr{M}(\theta,\nu)$ is the collection of optimizers of the variational
formula defining the free energy.
As far as possible, we work with the following conventions. We also highlight
frequently recurring examples.
1. i.
Subsets of $\mathds{Z}^{d}$ or $\mathds{T}^{d}$ will be denoted by capitalized
Roman letters such as $U$ or $V$.
2. ii.
Subsets of the function spaces $\ell^{2}(\mathds{T}^{d})$ will be denoted by
calligraphic letters such as $\mathcal{A}$ or $\mathcal{B}$.
3. iii.
Functions in $\ell^{2}(\mathds{Z}^{d})$ or $\ell^{2}(\mathds{T}^{d})$ will be
denoted by small Greek letters such as $\psi$ or $\phi$, with the following
recurring examples.
1. (a)
Restrictions of a function $\psi$ to a subset $U$ will be denoted as
$\psi_{U}$.
2. (b)
Discrete solitons of mass $a$ will be denoted as $\varphi^{a}$.
3. (c)
Eigenfunctions of the $\mathds{T}^{d}_{n}$ Laplacian will be denoted as
$\phi^{\boldsymbol{k}}$ with $\boldsymbol{k}\in[n]^{d}$.
4. (d)
Functions in the orthogonal complement of the subspace spanned by
$\\{\phi^{\boldsymbol{0}}\\}$ will be denoted by $\psi^{\perp}$.
4. iv.
Random fields will be denoted by capital Greek letters such as $\Psi$ and
$\Phi$, with the following recurring examples,
1. (a)
$\Psi^{U,y}$ will denote the massive Dirichlet Gaussian Free Field taking
values in $\ell^{2}(U)$.
2. (b)
$\Psi^{\boldsymbol{0},y}$ will denote the massive zero average Gaussian Free
Field taking values in $\ell^{2}(\mathds{T}^{d}_{n})$.
3. (c)
The superscript $y$ will be dropped when we have $y=0$, when appropriate.
5. v.
The Laplacians under consideration will be variations of $\Delta$.
1. (a)
Restriction to the complement of the kernel will be denoted as
$\Delta^{\perp}$.
2. (b)
Dirichlet Laplacians on $U$ will be denoted as $\Delta^{0}_{U}$.
6. vi.
Constants will usually be denoted by variations on the letter $C$. The most
frequently recurring standard constant is the following.
1. (a)
$C_{d}$ will denote the limiting mass per site of $\Psi^{\boldsymbol{0}}$.
Important exceptions to the conventions listed above are unavoidable for
various reasons, such as consistency with prior literature. We list them
below.
1. i.
The Hamiltonians will always be denoted with variations of $\mathcal{H}$.
There are three cases
1. (a)
$\mathcal{H}_{c}$ is the continuum Hamiltonian and will not be referred to
beyond the background material.
2. (b)
$\mathcal{H}$ is the discrete Hamiltonian for the DNLS defined on
$\mathds{Z}^{d}$, with $h=1$. See (5).
3. (c)
$\mathcal{H}_{N}$ is our scale-dependent model Hamiltonian with which we
define the Gibbs measure of interest.
2. ii.
The following capital Roman letters have specific meanings, and are not
subsets of lattice sites.
1. (a)
The function $I(a)$ for $a\geqslant 0$ will always denote the minimum
$\mathcal{H}$ for fixed mass $a$, $\mathcal{H}$ as defined in (5). See (11).
2. (b)
The function $K(y)$ will always denote the limiting scaled log determinant of
$y-\Delta^{\perp}$. See (12).
3. (c)
The function $W(b)$ for $b>0$ will always denote the Legendre transform of
$K$. See (13).
4. (d)
The function $L$ will always denote the inverse of $K^{\prime}$. See (27).
5. (e)
$R_{p}$ will always denote the mass threshold for soliton formation. See Lemma 5.1.
3. iii.
Parameters $\theta>0$ and $\nu>0$ are the inverse temperature and coupling
constant for the nonlinearity in (9), and are not functions in $\ell^{2}$.
Along the way, we will define certain auxiliary functions and random variables
to make calculations more convenient to express. These will be defined in a
context-appropriate fashion.
### 3.7. Organization of the Paper
The article is organized as follows. In Section 4, we provide a description of
the Discrete Gaussian Free Field and explain its importance for our analysis.
We then prove the convergence of its limiting free energy, and of that
conditioned to have a specified mass. We also establish some useful
$\ell^{\infty}$ bounds. In Section 5, we discuss soliton solutions of the
DNLS. In particular, we construct exponentially decaying minimizers for (5).
We also establish properties of the function $I$. In Section 6, we combine
insights from Sections 4 and 5 to prove the convergence of the limiting free
energy, that is Theorem 2.3. In Section 7, we analyze the phases. In
particular, we demonstrate that there are two regimes of optimal mass
allocation, one where we have a non-trivial soliton, and one where we do not.
That is, we prove Theorem 2.4. In Section 8, we provide commentary on the
behavior of a typical function in the dispersive phase. We provide an explicit
comparison between the reference measure corresponding to the linear part of
the Hamiltonian and the massive Gaussian Free Field. We then show that the
tilt corresponding to the nonlinearity is integrable with respect to the
massive free field. A combination of these two results verifies Theorem 2.8.
As an immediate corollary, we have that a typical function in the
dispersive phase is bounded above in probability by $\sqrt{3C_{d}\log N}$. We
conclude the article with some interesting questions for the future.
## 4\. Gaussian Free Field
The dispersive contribution to the free energy is given by the integral
$\displaystyle M_{N}(b,\varepsilon):=\int_{\left\|\psi\right\|^{2}_{2}\in
N(b-\varepsilon,b+\varepsilon)}\exp\bigl{(}-\left\|\nabla\psi\right\|_{2}^{2}\bigr{)}\,\mathbf{d}\psi$
(21)
where $b\geqslant 0,\varepsilon>0$. Understanding the asymptotics as
$N\to\infty$ is thus of fundamental importance to this article. This section
is first and foremost dedicated to establishing the following theorem.
###### Theorem 4.1.
Let $b>0$, and let $\varepsilon>0$ be such that
$N\varepsilon^{d}\gg 1$. We have
$\displaystyle\left|\frac{1}{N}\log
M_{N}(b,\varepsilon)-\log\pi+W(b)\right|\leqslant 2\varepsilon
L(b)+2m_{N}(2)/(N^{2}\varepsilon^{2})+\frac{1}{N}\log((b+\varepsilon)N),$
where $m_{N}(\cdot)$ is as per Lemma 4.4.
What prevents the immediate representation of the integral as an expectation
with respect to a Gaussian random variable is the fact that the quadratic form
in the exponential is degenerate; it corresponds to $-\Delta$ which has a non-
trivial kernel. However, we can still relate this integral to the large
deviations of the mass of an appropriate Gaussian field called the
zero–average Gaussian Free Field (GFF). We will define the GFF and evaluate
the asymptotic via a combination of two probabilistic techniques, exponential
tilting, and concentration. We will then conclude the section with some
maximum estimates that will be of importance later.
### 4.1. Definitions
The Laplacian $\Delta$ is a translation-invariant operator on $\mathds{C}^{V}$
with respect to the standard basis and is thus diagonalized by the Fourier
basis. It is well-known that the eigenvalues of the Laplacian on the discrete
torus with vertex set $[n]^{d}$ are given by
$\displaystyle\lambda_{\boldsymbol{k}}=4\sum_{i=1}^{d}\sin^{2}(\pi
k_{i}/n)=f(\boldsymbol{k}/n)\text{ for }\boldsymbol{k}\in[n]^{d}.$ (22)
The corresponding eigenfunctions are
$\displaystyle\phi^{\boldsymbol{k}}_{\boldsymbol{x}}=\frac{1}{\sqrt{N}}\exp\left(2\pi{\mathrm{i}\mkern
2.0mu}\cdot{(\boldsymbol{k}\cdot{\boldsymbol{x}})}/n\right),\qquad{\boldsymbol{x}},\boldsymbol{k}\in[n]^{d}.$
(23)
###### Definition 4.2 (Massive zero–average GFF).
Let $y\geqslant 0$. We define the massive zero–average free field
$\Psi^{\boldsymbol{0},y}\in\mathds{C}^{V}$ as the random vector
$\Psi^{\boldsymbol{0},y}:=\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\frac{\zeta_{\boldsymbol{k}}}{\sqrt{\lambda_{\boldsymbol{k}}+y}}\phi^{\boldsymbol{k}},$
where $\lambda_{\boldsymbol{k}}$ are the eigenvalues of $-\Delta$ as in (22),
$\phi^{\boldsymbol{k}}$ are the corresponding eigenfunctions as in (23) and
$\zeta_{\boldsymbol{k}}$ are i.i.d. standard complex Gaussian random
variables.
We may explicitly write down the density of the zero–average free field, which
is expressible as a Gibbs measure in its own right. We will be working with
the subspace $\mathds{C}^{N}/\mathrm{span}\\{\phi^{\boldsymbol{0}}\\}$, that is, the
subspace of all $\psi^{\perp}$ that are orthogonal to $\phi^{\boldsymbol{0}}$.
Note that the restriction of $\Delta$ to this subspace is negative definite and is therefore invertible. We
have
$\mathds{P}(\Psi^{\boldsymbol{0},y}\in\mathcal{A}):=\frac{1}{Z^{\boldsymbol{0},y}}\int_{\mathcal{A}}\exp\left(-\left\|\nabla\psi^{\perp}\right\|_{2}^{2}-y\left\|\psi^{\perp}\right\|_{2}^{2}\right)\mathbf{d}\psi^{\perp}.$
where $\mathbf{d}\psi^{\perp}$ denotes the volume element on
$\mathds{C}^{N}/\mathrm{span}\\{\phi^{\boldsymbol{0}}\\}$. We will denote the restriction of
the Laplacian on this space as $\Delta^{\perp}$. The partition function is
given by
$Z^{\boldsymbol{0},y}:=\pi^{N-1}/\det(y-\Delta^{\perp}).$
We refer the interested reader to [S07] and [A19] for more details on GFF and
zero–average GFF on the discrete torus, respectively.
###### Remark 4.3.
The operator $y-\Delta$ for $y>0$ is positive definite; therefore we may
define a Gaussian process with covariance $(y-\Delta)^{-1}$ without the
restriction to $\mathds{C}^{N}/\mathrm{span}\\{\phi^{\boldsymbol{0}}\\}$. This
is exactly the massive Gaussian Free Field, seen in (19).
### 4.2. Analysis of the Limiting Free Energy
From here onwards, the graph under consideration will be the entirety of the
discrete torus $\mathds{T}^{d}$, and we will describe the properties of the
measure associated with the field $\Psi^{\boldsymbol{0},y}$. The associated
mean free energy is given by
$\displaystyle\frac{1}{N}\log
Z^{\boldsymbol{0},y}=\frac{N-1}{N}\log\pi-\frac{1}{N}\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\log(\lambda_{\boldsymbol{k}}+y).$
In order to more compactly express our results, for
${\boldsymbol{x}}\in[0,1]^{d}$, we define
$\displaystyle f({\boldsymbol{x}}):=4\sum_{i=1}^{d}\sin^{2}(\pi
x_{i}),\quad{\boldsymbol{x}}=(x_{1},x_{2},\ldots,x_{d})\in[0,1]^{d}$ (24)
and
$\displaystyle
g_{y}({\boldsymbol{x}}):=\log\left(y+f({\boldsymbol{x}})\right).$
Let ${\boldsymbol{x}}$ denote a uniform random variable on $[0,1]^{d}$. We
define ${\boldsymbol{x}}_{n}:=\lfloor
n{\boldsymbol{x}}\rfloor/n\sim\text{Uniform}(\\{0,1/n,\ldots,1-1/n\\}^{d})$
and
$\displaystyle
K_{N}(y):=\frac{1}{N}\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\log(y+\lambda_{\boldsymbol{k}})=\operatorname{\mathds{E}}g_{y}({\boldsymbol{x}}_{n})\mathds{1}_{{\boldsymbol{x}}_{n}\neq\boldsymbol{0}}.$
Observe that the expected mass of $\Psi^{\boldsymbol{0},y}$ is given by
$\operatorname{\mathds{E}}\left\|\Psi^{\boldsymbol{0},y}\right\|_{2}^{2}=-\frac{d}{dy}\log
Z^{\boldsymbol{0},y}=N\cdot K_{N}^{\prime}(y).$
Recall the function $K$ introduced in (12). By definition,
$K=\lim_{n\to\infty}K_{N}$. Clearly we should have
$\displaystyle
K(y)=\int_{[0,1]^{d}}g_{y}({\boldsymbol{x}})\,d{\boldsymbol{x}}.$ (25)
We will prove this convergence now, and by an abuse of notation, we use (25)
as the definition of $K$. We refer to the following lemma from [DK21], which
is important for establishing rates of convergence and follows quite easily.
###### Lemma 4.4 ([DK21]*Lemma $2.1$).
Let $\\{\lambda_{\boldsymbol{k}}\\}_{\boldsymbol{k}\neq 0}$ be the eigenvalues
of $-\Delta^{\perp}$. Then we have for any $p>0$,
$m_{N}(p):=\frac{1}{N}\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\lambda_{\boldsymbol{k}}^{-p}\simeq\begin{cases}1&\text{
if }d>2p\\\ \log N&\text{ if }d=2p\\\ N^{-1+2p/d}&\text{
otherwise}.\end{cases}$
This lemma is first of use in order to calculate the rate of convergence of
the scaled expected mass, i.e.
$\operatorname{\mathds{E}}\left\|\Psi^{0,y}\right\|_{2}^{2}$.
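The growth regimes in Lemma 4.4 can be observed directly by evaluating the lattice sums. The following Python sketch (with illustrative sizes $n=8,16$ in $d=3$) compares $m_{N}(1/2)$, which stays bounded since $d>2p$, with $m_{N}(2)$, which grows like $N^{1/3}$ since $d<2p$.

```python
import math
from itertools import product

# Lattice evaluation of m_N(p) from Lemma 4.4; sizes n = 8, 16 and d = 3
# are illustrative choices.
def m_N(n, d, p):
    # m_N(p) = (1/N) * sum over nonzero k of lambda_k^{-p}, lambda_k as in (22)
    N = n ** d
    tot = 0.0
    for k in product(range(n), repeat=d):
        if any(k):
            lam = 4 * sum(math.sin(math.pi * ki / n) ** 2 for ki in k)
            tot += lam ** (-p)
    return tot / N

d = 3
small, large = m_N(8, d, 0.5), m_N(16, d, 0.5)      # d > 2p: stays bounded
g_small, g_large = m_N(8, d, 2.0), m_N(16, d, 2.0)  # d < 2p: grows like N^{1/3}
```

Doubling $n$ multiplies $N^{1/3}$ by two, and the ratio `g_large / g_small` is indeed close to $2$, while the $p=1/2$ values barely move.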
###### Lemma 4.5.
For $y\geqslant 0$, we have a constant $c$, depending only on $d\geqslant 3$,
such that
$\displaystyle|K_{N}(y)-K(y)|\leqslant c\cdot N^{-1/d}\text{ and
}|K^{\prime}_{N}(y)-K^{\prime}(y)|\leqslant c\cdot
N^{-1/d}\cdot(1+\mathbf{1}_{d=3}\cdot\log N).$
###### Proof.
Let
$g_{y}({\boldsymbol{x}}):=\log(y+f({\boldsymbol{x}}))\text{ and
}g^{\prime}_{y}({\boldsymbol{x}}):=\partial_{y}g_{y}({\boldsymbol{x}})=(y+f({\boldsymbol{x}}))^{-1},\quad{\boldsymbol{x}}\in[0,1]^{d}.$
It is clear that
$\displaystyle K_{N}(y)$
$\displaystyle=\operatorname{\mathds{E}}g_{y}({\boldsymbol{x}}_{n})\mathds{1}_{{\boldsymbol{x}}_{n}\neq\boldsymbol{0}},\text{
}K(y)=\operatorname{\mathds{E}}g_{y}({\boldsymbol{x}}),$ $\displaystyle
K^{\prime}_{N}(y)$
$\displaystyle=\operatorname{\mathds{E}}g^{\prime}_{y}({\boldsymbol{x}}_{n})\mathds{1}_{{\boldsymbol{x}}_{n}\neq\boldsymbol{0}}\text{
and }K^{\prime}(y)=\operatorname{\mathds{E}}g^{\prime}_{y}({\boldsymbol{x}}).$
Thus, for any $y\geqslant 0$, we have
$\displaystyle\left|K(y)-K_{N}(y)\right|$
$\displaystyle\leqslant\operatorname{\mathds{E}}|g_{y}({\boldsymbol{x}})|\mathds{1}_{{\boldsymbol{x}}_{n}=\boldsymbol{0}}+\operatorname{\mathds{E}}\left|g_{y}({\boldsymbol{x}})-g_{y}({\boldsymbol{x}}_{n})\right|\mathds{1}_{{\boldsymbol{x}}_{n}\neq\boldsymbol{0}}$
$\displaystyle\text{ and }\left|K^{\prime}(y)-K^{\prime}_{N}(y)\right|$
$\displaystyle\leqslant\operatorname{\mathds{E}}g^{\prime}_{y}({\boldsymbol{x}})\mathds{1}_{{\boldsymbol{x}}_{n}=\boldsymbol{0}}+\operatorname{\mathds{E}}\left|g^{\prime}_{y}({\boldsymbol{x}})-g^{\prime}_{y}({\boldsymbol{x}}_{n})\right|\mathds{1}_{{\boldsymbol{x}}_{n}\neq\boldsymbol{0}}.$
It is easy to check that
$\operatorname{\mathds{E}}|g_{y}({\boldsymbol{x}})|\mathds{1}_{{\boldsymbol{x}}_{n}=\boldsymbol{0}}\leqslant\operatorname{\mathds{E}}|\log(y+f({\boldsymbol{x}}))|\mathds{1}_{{\boldsymbol{x}}_{n}=\boldsymbol{0}}\leqslant(1+|\log
y|)\cdot N^{-1}$
and
$\operatorname{\mathds{E}}g^{\prime}_{y}({\boldsymbol{x}})\mathds{1}_{{\boldsymbol{x}}_{n}=\boldsymbol{0}}\leqslant\operatorname{\mathds{E}}f({\boldsymbol{x}})^{-1}\mathds{1}_{{\boldsymbol{x}}_{n}=\boldsymbol{0}}\leqslant
cN^{2/d-1}\leqslant cN^{-1/d}.$
Note that with $y>0$ fixed, $g_{y}(\cdot)$ is smooth, with bounded derivatives
of all orders. Moreover, when $\left\|{\boldsymbol{x}}_{n}\right\|>0$, we can
bound
$\displaystyle\left|g_{y}({\boldsymbol{x}})-g_{y}({\boldsymbol{x}}_{n})\right|$
$\displaystyle\leqslant\left\|{\boldsymbol{x}}-{\boldsymbol{x}}_{n}\right\|\cdot\left\|\nabla
g_{y}({\boldsymbol{x}}_{n}^{*})\right\|\leqslant cN^{-1/d}\cdot
f({\boldsymbol{x}}_{n})^{-1/2}$ $\displaystyle\text{ and
}\left|g^{\prime}_{y}({\boldsymbol{x}})-g^{\prime}_{y}({\boldsymbol{x}}_{n})\right|$
$\displaystyle\leqslant\left\|{\boldsymbol{x}}-{\boldsymbol{x}}_{n}\right\|\cdot\left\|\nabla
g^{\prime}_{y}({\boldsymbol{x}}_{n}^{*})\right\|\leqslant cN^{-1/d}\cdot
f({\boldsymbol{x}}_{n})^{-3/2}.$
Here we used the fact that
$\left\|\nabla f({\boldsymbol{x}})\right\|^{2}=\sum_{i=1}^{d}(8\sin(\pi
x_{i})\cos(\pi x_{i}))^{2}\leqslant 16f({\boldsymbol{x}}).$
Summing, we get that
$\left|K(y)-K_{N}(y)\right|\leqslant cN^{-1/d}m_{N}(1/2)\text{ and
}\left|K^{\prime}(y)-K^{\prime}_{N}(y)\right|\leqslant cN^{-1/d}m_{N}(3/2),$
where $m_{N}$ is as given in Lemma 4.4. This completes the proof.
$\blacksquare$
We now take the opportunity to introduce an important dimension dependent
constant. We define
$\displaystyle C_{d}:=K^{\prime}(0),$ (26)
which is clearly finite for $d\geqslant 3$.
###### Remark 4.6.
The constant $C_{d}$ has an important interpretation in probability: it is the
expected number of returns to ${\boldsymbol{x}}$ for a simple symmetric random
walk started at ${\boldsymbol{x}}$ in $\mathds{Z}^{d}$. Clearly,
$C_{d}<\infty$ when $d\geqslant 3$ and is infinite otherwise due to the
recurrence of the random walk.
It is easy to check that $2dC_{d}\geqslant
2d/\operatorname{\mathds{E}}(f({\boldsymbol{x}}))=1$ for all $d\geqslant 3$
and converges to $1$ as $d\to\infty$. In Table 1 we provide the numerical
values of $C_{d}$ for $d=3,4,\cdots,10$.
$d$ | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---
$C_{d}$ | 0.252 | 0.155 | 0.116 | 0.093 | 0.078 | 0.067 | 0.059 | 0.053
Table 1. Numerical values for $C_{d}$
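The entries of Table 1 can be reproduced approximately from the lattice sum $K^{\prime}_{N}(0)=\frac{1}{N}\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\lambda_{\boldsymbol{k}}^{-1}$, which converges to $C_{d}$ by Lemma 4.5. A Python sketch with illustrative lattice sizes:

```python
import math
from itertools import product

def Cd_estimate(n, d):
    # C_d = K'(0) = E[f(x)^{-1}], approximated by the finite lattice sum
    # (1/N) * sum over nonzero k of lambda_k^{-1}, with lambda_k as in (22).
    N = n ** d
    tot = 0.0
    for k in product(range(n), repeat=d):
        if any(k):
            lam = 4 * sum(math.sin(math.pi * ki / n) ** 2 for ki in k)
            tot += 1.0 / lam
    return tot / N

c3 = Cd_estimate(20, 3)   # Table 1 reports C_3 ~ 0.252
c4 = Cd_estimate(10, 4)   # Table 1 reports C_4 ~ 0.155
```

The estimates land close to the tabulated values and decrease in $d$, as the table indicates.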
Since $K^{\prime}$ is decreasing and convex, we may define an inverse function
$\displaystyle L:(0,C_{d}]\to[0,\infty)$ (27)
which is also decreasing, convex and by construction satisfies
$K^{\prime}(L(b))=b$. We extend $L$ by defining $L(b)=0$ for $b>C_{d}$. Note
that $bL(b)\leqslant 1$ for all $b>0$ and
$\displaystyle W(b)=\inf_{y\,:\,K^{\prime}(y)\leqslant
b}(K(y)-yb)=\begin{cases}K(L(b))-bL(b)&\text{ if }b<C_{d}\\\ K(0)&\text{ if
}b\geqslant C_{d}.\end{cases}$ (28)
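The functions $L$ and $W$ are straightforward to evaluate numerically: approximate $K$ and $K^{\prime}$ by quadrature and invert $K^{\prime}$ by bisection, as in the following Python sketch (the Monte Carlo node count and the values of $b$ are illustrative; in $d=3$ one has $C_{3}\approx 0.25$).

```python
import math
import random

# Numerical sketch of K, K', L and W in d = 3; quadrature uses a fixed Monte
# Carlo sample, so every function below is a deterministic approximation.
random.seed(0)
d, M = 3, 8000
pts = [[random.random() for _ in range(d)] for _ in range(M)]
fvals = [4 * sum(math.sin(math.pi * xi) ** 2 for xi in x) for x in pts]

def K(y):
    # K(y) = E[log(y + f(x))], eq. (25)
    return sum(math.log(y + f) for f in fvals) / M

def Kp(y):
    # K'(y) = E[(y + f(x))^{-1}]
    return sum(1.0 / (y + f) for f in fvals) / M

def L(b):
    # inverse of K' on (0, C_d], extended by zero above C_d, as in (27)
    if b >= Kp(0.0):
        return 0.0
    lo, hi = 0.0, 1.0
    while Kp(hi) > b:
        hi *= 2
    for _ in range(50):          # bisection: K' is strictly decreasing
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Kp(mid) > b else (lo, mid)
    return (lo + hi) / 2

def W(b):
    # W(b) = K(L(b)) - b*L(b), eq. (28)
    y = L(b)
    return K(y) - y * b

b = 0.1
y_b = L(b)                # satisfies K'(y_b) = b up to bisection error
Wb, Wb2 = W(0.1), W(0.2)  # W is decreasing, so Wb > Wb2
```

This is a sketch, not a rigorous computation; the bisection tolerance and sample size can be tightened as needed.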
The function $W$ is singular at $0$; however, the divergence can be well
understood.
###### Lemma 4.7.
The function $W$ given in (13) is a decreasing convex function of $b$ with
$W^{\prime}(b)=-L(b)$ and $\lim_{b\to 0+}(W(b)+\log eb)=0$. Moreover,
$\widehat{W}$ defined by
$\displaystyle\widehat{W}(b):=W(b)+\log eb=\int_{0}^{b}(s^{-1}-L(s))\,ds,$
(29)
is an increasing concave function for $b\in(0,C_{d}]$.
###### Proof of Lemma 4.7.
That the function $W$ is decreasing and convex follows from the fact that
$W^{\prime}(b)=-L(b)\leqslant 0$ and
$W^{\prime\prime}(b)=-L^{\prime}(b)=-1/K^{\prime\prime}(L(b))=1/\operatorname{\mathds{E}}(L(b)+f({\boldsymbol{x}}))^{-2}>0$.
Note that
$b=K^{\prime}(L(b))=\operatorname{\mathds{E}}(L(b)+f({\boldsymbol{x}}))^{-1}\geqslant(L(b)+\operatorname{\mathds{E}}f({\boldsymbol{x}}))^{-1}=(L(b)+2d)^{-1}$
implies that $1/b-L(b)\leqslant 2d$. Moreover, with $y=L(b)>0$, we have
$\displaystyle yb=yK^{\prime}(y)$
$\displaystyle=1-\operatorname{\mathds{E}}f({\boldsymbol{x}})(y+f({\boldsymbol{x}}))^{-1}$
$\displaystyle\leqslant
1-(y\operatorname{\mathds{E}}f({\boldsymbol{x}})^{-1}+1)^{-1}=1-(C_{d}y+1)^{-1}=y(y+1/C_{d})^{-1}.$
Simplifying we get
$1/b-L(b)\geqslant 1/C_{d}.$
The last conclusion in Lemma 4.7 follows from the fact that
$\widehat{W}^{\prime}(b)=1/b-L(b)>0$ and
$\displaystyle\widehat{W}^{\prime\prime}(b)$
$\displaystyle=-b^{-2}-L^{\prime}(b)$
$\displaystyle=b^{-2}L^{\prime}(b)\cdot(-1/L^{\prime}(b)-b^{2})=b^{-2}L^{\prime}(b)\cdot(\operatorname{\mathds{E}}(L(b)+f({\boldsymbol{x}}))^{-2}-b^{2})\leqslant
0$
as
$\operatorname{\mathds{E}}(L(b)+f({\boldsymbol{x}}))^{-2}>(\operatorname{\mathds{E}}(L(b)+f({\boldsymbol{x}}))^{-1})^{2}=b^{2}$
and $L^{\prime}(b)\leqslant 0$. $\blacksquare$
Figure 3. Plot of $W$ functions for $d=3,4,\ldots,8$; Lower function
corresponds to higher $d$.
See Figure 3 for a plot of the $W$ function in dimensions $d=3,4,\ldots,8$.
### 4.3. Concentration of Mass
We now return to our integral of interest,
$M_{N}(b,\varepsilon)=\int_{\left\|\psi\right\|_{2}^{2}\in
N(b-\varepsilon,b+\varepsilon)}\exp(-\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi,$
and the proof of Theorem 4.1. We relate the integral to the concentration of
mass of $\Psi^{\boldsymbol{0},L(b)}$, and analyze two distinct cases depending
on whether $b\leqslant C_{d}$ or $b>C_{d}$. Let $\\{X_{\boldsymbol{k}}\\}$ be
i.i.d. rate-one exponential random variables. A simple change of variables
argument tells us that the mass of $\Psi^{\boldsymbol{0},y}$ may be
represented as
$\displaystyle\Gamma_{N,y}:=\left\|\Psi^{\boldsymbol{0},y}\right\|^{2}_{2}\stackrel{{\scriptstyle\mathrm{d}}}{{=}}\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\frac{1}{\lambda_{\boldsymbol{k}}+y}\cdot
X_{\boldsymbol{k}}.$ (30)
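Representation (30) makes the concentration established in Lemma 4.8 below easy to observe by simulation. The following Python sketch (illustrative sizes $n=8$, $d=3$, $y=1$) checks that samples of $\Gamma_{N,y}/N$ cluster tightly around $K^{\prime}_{N}(y)$.

```python
import math
import random
from itertools import product

# Simulate the mass Gamma_{N,y} via representation (30); sizes are illustrative.
random.seed(1)
n, d, y = 8, 3, 1.0
N = n ** d
lams = [4 * sum(math.sin(math.pi * ki / n) ** 2 for ki in k)
        for k in product(range(n), repeat=d) if any(k)]

KpN = sum(1.0 / (lam + y) for lam in lams) / N  # K_N'(y); E[Gamma_{N,y}] = N * KpN

def sample_mass():
    # Gamma_{N,y} = sum over k of X_k / (lambda_k + y), X_k i.i.d. Exp(1)
    return sum(random.expovariate(1.0) / (lam + y) for lam in lams)

samples = [sample_mass() / N for _ in range(300)]
mc_mean = sum(samples) / len(samples)
# Chebyshev scale: Var(Gamma_{N,y}/N) <= m_N(2)/N, so the samples concentrate.
frac_close = sum(abs(s - KpN) < 0.05 for s in samples) / len(samples)
```

The empirical mean matches $K^{\prime}_{N}(y)$ and essentially all samples fall within a small window, as Chebyshev's inequality predicts.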
###### Lemma 4.8.
Let $0<b<C_{d}$ with $C_{d}$ and $L$ from (26) and (27), respectively and
$N\varepsilon^{d}\gg 1$. Then we have, for some constant $c>0$,
$\displaystyle\operatorname{\mathds{P}}\left(\Gamma_{N,L(b)}\in
N(b-\varepsilon,b+\varepsilon)\right)\geqslant 1-cm_{N}(2)/N\varepsilon^{2}.$
###### Proof.
The proof hinges on the fact that the expectation of $N^{-1}\Gamma_{N,y}$ is
$K^{\prime}_{N}(y)$, which converges to $K^{\prime}(y)$ as $N\to\infty$.
Moreover, $K^{\prime}$ has the well defined inverse
$L(\cdot):(0,C_{d}]\to[0,\infty)$. What this means in practice is that we may
choose $y$ such that the limiting mean is $b$, so long as $b\in(0,C_{d}]$.
Using Lemma 4.5, we know that
$\displaystyle|K^{\prime}_{N}(L(b))-b|\leqslant cN^{-1/d}m_{N}(3/2).$
It is therefore adequate to verify concentration of $\Gamma_{N,y}/N$ about
$K_{N}^{\prime}(y)$. Applying Chebyshev’s inequality to (30),
$\displaystyle\operatorname{\mathds{P}}\left(\left|\Gamma_{N,y}-NK_{N}^{\prime}(y)\right|\geqslant
N\varepsilon\right)\leqslant(N\varepsilon)^{-2}\cdot\sum_{\boldsymbol{k}\neq\boldsymbol{0}}(\lambda_{\boldsymbol{k}}+y)^{-2}.$
Since $y\geqslant 0$,
$\displaystyle\operatorname{\mathds{P}}\left(\left|\Gamma_{N,y}-NK_{N}^{\prime}(y)\right|\geqslant
N\varepsilon\right)\leqslant(N\varepsilon)^{-2}\cdot\sum_{\boldsymbol{k}\neq\boldsymbol{0}}\lambda_{\boldsymbol{k}}^{-2}=m_{N}(2)/N\varepsilon^{2}.$
The last line follows by Lemma 4.4. This verifies the requisite concentration.
$\blacksquare$
###### Corollary 4.9.
Let $0<b<C_{d}$, $L(\cdot)$ as in (27), and $\phi^{\boldsymbol{0}}$ as in
(23). Let
$\mathcal{A}=\left\\{\psi^{\perp}\in\mathds{C}^{V}/\\{\phi^{\boldsymbol{0}}\\}:N(b-\varepsilon)\leqslant\left\|\psi^{\perp}\right\|_{2}^{2}\leqslant
N(b+\varepsilon)\right\\}.$
Then we have
$\left|\frac{1}{N}\log\int_{\mathcal{A}}\exp\left(-\left\|\nabla\psi^{\perp}\right\|_{2}^{2}\right)\mathbf{d}\psi^{\perp}-\bigl{(}\log\pi-W(b)\bigr{)}\right|\leqslant
2\varepsilon
L(b)+\left|\log\left(1-\frac{cm_{N}(2)}{N\varepsilon^{2}}\right)\right|.$
###### Proof.
This corollary follows almost directly from verifying
$\displaystyle
e^{N(b-\varepsilon)L(b)}\left(1-\frac{cm_{N}(2)}{N\varepsilon^{2}}\right)Z^{\boldsymbol{0},L(b)}\leqslant\int_{\mathcal{A}}\exp\left(-\left\|\nabla\psi^{\perp}\right\|_{2}^{2}\right)\mathbf{d}\psi^{\perp}\leqslant
Z^{\boldsymbol{0},L(b)}e^{NL(b)(b+\varepsilon)}.$ (31)
For efficiency of notation, for $y>0$ we will denote the quadratic form
$\left\|\nabla\psi^{\perp}\right\|_{2}^{2}+y\left\|\psi^{\perp}\right\|_{2}^{2}$
by $Q^{y}(\psi^{\perp})$. The following bounds may be immediately verified.
$e^{NL(b)(b-\varepsilon)}\int_{\mathcal{A}}\exp({-Q^{L(b)}(\psi^{\perp})})\mathbf{d}\psi\leqslant\int_{\mathcal{A}}\exp\left({-\left\|\nabla\psi^{\perp}\right\|_{2}^{2}}\right)\mathbf{d}\psi$
and
$\int_{\mathcal{A}}\exp\left({-\left\|\nabla\psi^{\perp}\right\|_{2}^{2}}\right)\mathbf{d}\psi\leqslant
e^{NL(b)(b+\varepsilon)}\int_{\mathcal{A}}\exp({-Q^{L(b)}(\psi^{\perp})})\mathbf{d}\psi.$
We begin with the upper bound in (31), as it is easier to establish; it follows
simply by enlarging the region of integration to all of $\mathds{C}^{V}$:
$e^{NL(b)(b+\varepsilon)}\int_{\left\|\psi^{\perp}\right\|_{2}^{2}\in
N(b-\varepsilon,b+\varepsilon)}\exp(-Q^{L(b)}(\psi^{\perp}))\mathbf{d}\psi\leqslant\pi^{N-1}\exp\bigl{(}NL(b)(b+\varepsilon)-NK_{N}(L(b))\bigr{)}.$
Immediately, we may conclude that
$\frac{1}{N}\log\int_{\left\|\psi^{\perp}\right\|_{2}^{2}\in
N(b-\varepsilon,b+\varepsilon)}\exp\left(-\left\|\nabla\psi\right\|_{2}^{2}\right)\mathbf{d}\psi\leqslant\log\pi+L(b)(b+\varepsilon)-K_{N}(L(b)).$
Next, we show that the lower bound converges to the same limit as the upper
bound. It is at this point that we introduce the concentration estimates. We
have
$\displaystyle\frac{1}{Z^{\boldsymbol{0},L(b)}}\int_{\left\|\psi\right\|_{2}^{2}\in
N(b-\varepsilon,b+\varepsilon)}\exp(-Q^{L(b)}(\psi))\mathbf{d}\psi=\operatorname{\mathds{P}}\biggl{(}\left\|\Psi^{\boldsymbol{0},L(b)}\right\|^{2}\in
N(b-\varepsilon,b+\varepsilon)\biggr{)}.$
Lemma 4.8 may now be directly applied. $\blacksquare$
###### Remark 4.10.
It is a trivial extension of the above corollary that
$\frac{1}{N}\log\int_{\left\|\psi^{\perp}\right\|_{2}^{2}\leqslant
N(b+\varepsilon)}\exp\left(-\left\|\nabla\psi^{\perp}\right\|_{2}^{2}\right)\mathbf{d}\psi^{\perp}$
converges to $\log\pi-W(b)$ with the same bound for the rate of convergence. This
fact is required for the proof of Theorem 4.1, but the proof is identical to
that above and is therefore omitted.
We have now assembled all the ingredients required to prove Theorem 4.1.
###### Proof of Theorem 4.1.
Let $\psi$ be such that
$N(b-\varepsilon)\leqslant\left\|\psi\right\|_{2}^{2}\leqslant
N(b+\varepsilon)$. We orthogonally decompose $\psi$ with respect to
$\phi^{\boldsymbol{0}}$ obtaining
$\psi=c_{\boldsymbol{0}}\phi^{\boldsymbol{0}}+\psi^{\perp}$. Thus,
$N(b-\varepsilon)\leqslant|c_{0}|^{2}+\left\|\psi^{\perp}\right\|_{2}^{2}\leqslant
N(b+\varepsilon).$
Let $b^{\prime}=\min\\{b,C_{d}\\}$. We define
$\displaystyle\mathcal{A}_{1}:=\left\\{c_{\boldsymbol{0}}\phi_{\boldsymbol{0}}:b-b^{\prime}-\frac{\varepsilon}{2}\leqslant\frac{1}{N}|c_{0}|^{2}\leqslant
b-b^{\prime}+\frac{\varepsilon}{2}\right\\}\times\left\\{\psi^{\perp}:b^{\prime}-\frac{\varepsilon}{2}\leqslant\frac{1}{N}\left\|\psi^{\perp}\right\|_{2}^{2}\leqslant
b^{\prime}+\frac{\varepsilon}{2}\right\\}.$ (32)
and
$\displaystyle\mathcal{A}_{2}:=\left\\{c_{0}\phi_{\boldsymbol{0}}:|c_{0}|^{2}\leqslant
N(b+\varepsilon)\right\\}\times\left\\{\psi^{\perp}:\left\|\psi^{\perp}\right\|_{2}^{2}\leqslant
N(b+\varepsilon)\right\\}.$ (33)
It is clear that
$\mathcal{A}_{1}\subset\mathcal{A}\subset\mathcal{A}_{2}.$
The proof is reduced to showing that the integrals of
$\exp(-\left\|\nabla\psi\right\|_{2}^{2})$ over $\mathcal{A}_{1}$ and
$\mathcal{A}_{2}$ are logarithmically equivalent. The upper bound is easier,
so we establish it first. We have
$\int_{\mathcal{A}_{2}}\exp(-\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi=\pi
N(b+\varepsilon)\cdot\int_{\left\|\psi^{\perp}\right\|_{2}^{2}\leqslant
N(b+\varepsilon)}\exp\left(-\left\|\nabla\psi^{\perp}\right\|_{2}^{2}\right)\mathbf{d}\psi^{\perp}$
Applying Corollary 4.9,
$\frac{1}{N}\log\int_{\mathcal{A}_{2}}\exp(-\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi\leqslant\log\pi-W(b)+\frac{1}{N}\log((b+\varepsilon)N).$
Now as for the lower bound,
$\int_{\mathcal{A}_{1}}\exp(-\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi\geqslant\pi
N\varepsilon\cdot\int_{\left\|\psi^{\perp}\right\|_{2}^{2}\in
N(b^{\prime}-\varepsilon/2,b^{\prime}+\varepsilon/2)}\exp\left(-\left\|\nabla\psi^{\perp}\right\|_{2}^{2}\right)\mathbf{d}\psi^{\perp}.$
The concluding step is to again use Corollary 4.9. $\blacksquare$
### 4.4. Bounds on the Maximum
In order to address how the soliton is distinguishable from the background
noise, we need upper bounds on the values that the free field can take at a
point. Additionally, maximum bounds are crucial for the stitching procedure
required in Section 6.2. The most crucial result in this section is the
following.
###### Theorem 4.11.
Let $C_{d}$ be as defined in (26) and $b\in(0,\infty)$. We have
$\displaystyle\left(\int_{\mathcal{A}^{\prime}}\exp(-\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi\right)$
$\displaystyle/\left(\int_{\mathcal{A}}\exp(-\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi\right)$
$\displaystyle\geqslant\frac{b-b^{\prime}+\varepsilon}{b+\varepsilon}\cdot
e^{-2N\varepsilon L(b)}\cdot\left(1-2cm_{N}(2)/N\varepsilon^{2}\right)$ (34)
where
$\displaystyle\mathcal{A}$
$\displaystyle=\\{\psi:N(b-\varepsilon)\leqslant\left\|\psi\right\|_{2}^{2}\leqslant
N(b+\varepsilon)\\}$ $\displaystyle\text{and }\mathcal{A}^{\prime}$
$\displaystyle=\mathcal{A}\cap\\{\psi:\left\|\psi\right\|_{\infty}\leqslant
2\cdot\sqrt{3C_{d}\log N}\\}.$
This goes one step beyond mass concentration, as it asserts that on the
exponential scale, the dominant contribution to the integral
$M_{N}(b,\varepsilon)$ comes from functions for which the mass is relatively
evenly spread over the entire torus; we cannot have too many sharp peaks. This
phenomenon is closely related to (and proved by) an $\ell^{\infty}$ bound on
the associated free field.
###### Lemma 4.12.
Let $0<b\leqslant C_{d}$. We have for $N$ sufficiently large
$\operatorname{\mathds{P}}\left(\left\|\Psi^{\boldsymbol{0},L(b)}\right\|_{\infty}\geqslant\sqrt{3C_{d}\log
N}\right)\leqslant N^{-1}.$
###### Proof.
The translation invariance tells us that the random variables
$\left\\{\Psi^{\boldsymbol{0},L(b)}_{{\boldsymbol{x}}}\right\\}_{{\boldsymbol{x}}\in\mathds{T}^{d}}$
are identically distributed. The union bound is applicable and yields
$\displaystyle\operatorname{\mathds{P}}\left(\left\|\Psi^{\boldsymbol{0},L(b)}\right\|_{\infty}\geqslant\sqrt{3C_{d}\log
N}\right)$
$\displaystyle\leqslant\sum_{{\boldsymbol{x}}\in\mathds{T}^{d}}\operatorname{\mathds{P}}\left(\left|\Psi_{{\boldsymbol{x}}}^{\boldsymbol{0},L(b)}\right|^{2}\geqslant
3C_{d}\log N\right)$
$\displaystyle=\exp\left(\left(1-\frac{3C_{d}}{K^{\prime}_{N}(L(b))}\right)\cdot\log
N\right).$
The proof follows using the fact that $K^{\prime}_{N}(L(b))\to
K^{\prime}(L(b))\leqslant C_{d}$, and thus for $N$ sufficiently large,
$3C_{d}/K^{\prime}_{N}(L(b))\geqslant 2$. $\blacksquare$
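The mechanism behind Lemma 4.12 is elementary: each coordinate of the field is a complex Gaussian, so its squared modulus is exponentially distributed, and the union bound costs only a factor of $N$. The following Python sketch illustrates this with placeholder values ($\sigma^{2}=0.25$ standing in for $K^{\prime}_{N}(L(b))$ and $N=1000$); these numbers are not taken from the paper.

```python
import math
import random

# Tail mechanics behind Lemma 4.12, with placeholder values: sigma2 plays the
# role of the per-site second moment K_N'(L(b)), and N is a stand-in torus size.
random.seed(2)
sigma2, N = 0.25, 1000

def z2():
    # Squared modulus of a complex Gaussian with E|Z|^2 = sigma2: real and
    # imaginary parts are N(0, sigma2/2), so |Z|^2 is Exp with mean sigma2.
    a = random.gauss(0.0, math.sqrt(sigma2 / 2))
    b = random.gauss(0.0, math.sqrt(sigma2 / 2))
    return a * a + b * b

m = 200000
emp = sum(z2() >= 3 * sigma2 for _ in range(m)) / m   # exact tail is exp(-3)
# At the lemma's threshold 3*C_d*log N (with C_d playing the role of sigma2),
# the per-site tail is N^{-3}, so the union bound gives N^{-2} <= 1/N.
per_site = math.exp(-3 * sigma2 * math.log(N) / sigma2)
union = N * per_site
```

The empirical tail matches the exponential law, and the union-bound arithmetic reproduces the $N^{-1}$ bound of the lemma.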
###### Proof of Theorem 4.11.
Recall the definitions of $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$ from the
proof of Theorem 4.1. We define
$\mathcal{A}_{1}^{\prime}:=\mathcal{A}_{1}\cap\left\\{\psi:\left\|\psi^{\perp}\right\|_{\infty}\leqslant\sqrt{3C_{d}\log
N}\right\\}.$
It is clear that for $N$ sufficiently large,
$\mathcal{A}_{1}^{\prime}\subset\mathcal{A}^{\prime}$. We will denote
$\mathcal{A}_{\perp}:=\left\\{\psi^{\perp}\in\mathds{C}^{V}/\\{\phi^{\boldsymbol{0}}\\}:N(b^{\prime}-\varepsilon)\leqslant\left\|\psi^{\perp}\right\|_{2}^{2}\leqslant
N(b^{\prime}+\varepsilon)\right\\}$
and
$\mathcal{A}_{\perp}^{\prime}:=\mathcal{A}_{\perp}\cap\left\\{\left\|\psi^{\perp}\right\|_{\infty}\leqslant\sqrt{3C_{d}\log
N}\right\\}.$
We use the fact that $\mathcal{A}_{2}$ contains $\mathcal{A}$ to conclude that
the ratio in (34) may be bounded below by
$\displaystyle\left(\int_{\mathcal{A}_{1}^{\prime}}\exp\left(-\left\|\nabla\psi\right\|_{2}^{2}\right)\mathbf{d}\psi\right)/\left(\int_{\mathcal{A}_{2}}\exp\left(-\left\|\nabla\psi\right\|_{2}^{2}\right)\mathbf{d}\psi\right).$
(35)
Separating the $\phi^{\boldsymbol{0}}$ component, we then know that
$\int_{\mathcal{A}^{\prime}_{1}}\exp\left(-\left\|\nabla\psi\right\|_{2}^{2}\right)\mathbf{d}\psi=N\pi(b-b^{\prime}+\varepsilon)\cdot\int_{\mathcal{A}_{\perp}^{\prime}}\exp\left(-\left\|\nabla\psi^{\perp}\right\|_{2}^{2}\right)\mathbf{d}\psi^{\perp}.$
On introducing the exponential tilt, we further have
$\int_{\mathcal{A}_{1}^{\prime}}\exp\left(-\left\|\nabla\psi\right\|_{2}^{2}\right)\mathbf{d}\psi\geqslant
N\pi(b-b^{\prime}+\varepsilon)\cdot
e^{NL(b)(b-\varepsilon)}\cdot\int_{\mathcal{A}_{\perp}^{\prime}}\exp\left(-Q^{L(b)}(\psi^{\perp})\right)\mathbf{d}\psi^{\perp}.$
Analogously,
$\int_{\mathcal{A}_{2}}\exp\left(-\left\|\nabla\psi\right\|_{2}^{2}\right)\mathbf{d}\psi\leqslant
N\pi(b+\varepsilon)\cdot e^{NL(b)(b+\varepsilon)}\cdot Z^{\boldsymbol{0},L(b)}.$
Thus (35) is bounded below further as
$\frac{b-b^{\prime}+\varepsilon}{b+\varepsilon}\cdot e^{-2N\varepsilon
L(b)}\cdot\frac{1}{Z^{\boldsymbol{0},L(b)}}\int_{\mathcal{A}_{\perp}^{\prime}}\exp\left(-Q^{L(b)}(\psi^{\perp})\right)\mathbf{d}\psi^{\perp}.$
Now, observe that
$\displaystyle\frac{1}{Z^{\boldsymbol{0},L(b)}}$
$\displaystyle\int_{\mathcal{A}_{\perp}^{\prime}}\exp\left(-Q^{L(b)}(\psi^{\perp})\right)\mathbf{d}\psi^{\perp}$
$\displaystyle=\operatorname{\mathds{P}}\left(b-\varepsilon\leqslant\frac{1}{N}\left\|\Psi^{\boldsymbol{0},L(b)}\right\|_{2}^{2}\leqslant
b+\varepsilon,\text{
}\left\|\Psi^{\boldsymbol{0},L(b)}\right\|_{\infty}\leqslant\sqrt{3C_{d}\log
N}\right)$ $\displaystyle\geqslant
1-\frac{cm_{N}(2)}{N\varepsilon^{2}}-\frac{1}{N}$
which follows via a combination of Lemmas 4.8 and 4.12, as well as the union
bound. $\blacksquare$
## 5\. Soliton Solutions and Minimal Energy
In this section, we provide a survey of some results describing soliton
solutions of the DNLS defined on the lattice $\mathds{Z}^{d}$. Recall, this
means that $\psi_{{\boldsymbol{x}}}(t)\in\ell^{2}(\mathds{Z}^{d})$ solves
$\displaystyle{\mathrm{i}\mkern
2.0mu}\frac{d}{dt}\psi_{{\boldsymbol{x}}}=-(\Delta\psi)_{{\boldsymbol{x}}}-|\psi_{{\boldsymbol{x}}}|^{p-1}\psi_{{\boldsymbol{x}}}.$
In particular, we will be interested in the time-periodic solutions, which we
refer to as discrete breathers or solitons.
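The DNLS flow conserves the mass $\left\|\psi\right\|_{2}^{2}$, which any reasonable integrator should reproduce to high accuracy. The following Python sketch (a one-dimensional periodic lattice with $p=3$; step size, horizon and initial datum are illustrative choices) integrates the equation with a classical RK4 scheme and monitors the mass drift.

```python
import cmath

# Minimal RK4 integrator for the DNLS on a 1d periodic lattice with p = 3.
# The exact flow conserves ||psi||_2^2; RK4 reproduces this up to O(h^4).
n, p, h, steps = 16, 3, 0.002, 200

def rhs(psi):
    # i psi' = -Delta psi - |psi|^{p-1} psi  <=>  psi' = i*(Delta psi + |psi|^{p-1} psi)
    out = []
    for x in range(n):
        lap = psi[(x - 1) % n] - 2 * psi[x] + psi[(x + 1) % n]
        out.append(1j * (lap + abs(psi[x]) ** (p - 1) * psi[x]))
    return out

def rk4_step(psi):
    k1 = rhs(psi)
    k2 = rhs([u + h / 2 * v for u, v in zip(psi, k1)])
    k3 = rhs([u + h / 2 * v for u, v in zip(psi, k2)])
    k4 = rhs([u + h * v for u, v in zip(psi, k3)])
    return [u + h / 6 * (a + 2 * b + 2 * c + e)
            for u, a, b, c, e in zip(psi, k1, k2, k3, k4)]

psi = [cmath.exp(-((x - n / 2) ** 2) / 4) for x in range(n)]  # localized bump
mass0 = sum(abs(u) ** 2 for u in psi)
for _ in range(steps):
    psi = rk4_step(psi)
drift = abs(sum(abs(u) ** 2 for u in psi) - mass0)
```

The relative mass drift is many orders of magnitude below one, consistent with mass being a conserved quantity of the flow.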
### 5.1. Definition and Existence
There are two ways of characterizing soliton solutions. The first is finding
solutions via the ansatz $\psi_{{\boldsymbol{x}}}(t)=e^{{\mathrm{i}\mkern
2.0mu}\omega t}\varphi_{{\boldsymbol{x}}}$ where $\varphi_{{\boldsymbol{x}}}$
is time-invariant. The second is variational, solitons can be realized as the
minimizers of the Hamiltonian subject to the constraint
$\left\|\varphi\right\|_{2}^{2}=a$. The parameter $\omega$ is realized as a
Lagrange multiplier and thus depends on $a$. The soliton equation is given by
$\displaystyle\omega(a)\cdot\varphi_{{\boldsymbol{x}}}=-(\Delta\varphi)_{{\boldsymbol{x}}}-|\varphi_{{\boldsymbol{x}}}|^{p-1}\varphi_{{\boldsymbol{x}}}.$
(36)
The variational characterization also tells us that the values at all sites
must have the same complex phase, and thus the discrete solitons may be
assumed to be real-valued and nonnegative. Unlike the continuum case, there is
no scale invariance for the DNLS defined on a given lattice. Thus, we cannot
construct soliton solutions of arbitrary mass. The fundamental requirement for
(36) to admit an $\ell^{2}$ solution is that $\omega(a)<0$. As discussed in
the introduction, this occurs for all choices of mass $a>0$ when $p<1+4/d$, or
when $a>R_{p}$ for $p\geqslant 1+4/d$.
###### Lemma 5.1 (Weinstein, See [WEI99]).
The following holds.
1. i.
A ground state, that is, a minimizer in (11), exists when $I(a)\in(-\infty,0)$.
2. ii.
Let $1<p<1+4/d$, then $I(a)<0$ for all $a>0$. Thus $R_{p}=0$.
3. iii.
Let $p\geqslant 1+4/d$, then there exists a ground state excitation threshold
$R_{p}>0$ so that $I(a)<0$ if $a>R_{p}$; and $I(a)=0$ if $a<R_{p}$. Moreover,
$\displaystyle\frac{2}{p+1}R_{p}^{(p-1)/2}=\inf_{f}\left\\{\frac{\left\|f\right\|_{2}^{p-1}\cdot\left\|\nabla
f\right\|_{2}^{2}}{\left\|f\right\|_{p+1}^{p+1}}\right\\}.$
The last statement interprets $R_{p}$ in terms of a functional inequality for
the lattice: it is the reciprocal of the best possible constant in the
discrete Gagliardo–Nirenberg–Sobolev inequality [WEI99].
### 5.2. Dirichlet Solitons and Exponential Decay
In the continuum, when soliton solutions exist, they are known to be smooth
and exponentially decaying. In the discrete setting, exponential decay still
holds whenever the solitons exist, that is whenever $a>R_{p}$. A discrete
counterpart of the continuum proof can be used to prove this result on
$\mathds{Z}^{d}$. For our purposes, it suffices to establish a uniform rate of
exponential decay for the Dirichlet problem defined on a growing sequence of
boxes $\Lambda$ centered at the origin, and show the existence of an
exponentially decaying $\ell^{2}(\mathds{Z}^{d})$ minimizer via a tightness
argument. The analogous Dirichlet problem may be defined as
$\displaystyle\varphi^{\Lambda,a}:=\operatorname{argmin}\\{\mathcal{H}(\varphi):\varphi\in\mathds{C}^{\Lambda},\left\|\varphi\right\|_{2}^{2}=a\\}.$
(37)
It is clear that the Dirichlet solitons must satisfy
$\displaystyle\omega_{\Lambda}(a)\cdot\varphi^{\Lambda,a}_{{\boldsymbol{x}}}=\left(-\Delta^{0}_{\Lambda}\varphi^{\Lambda,a}\right)_{{\boldsymbol{x}}}-\left|\varphi^{\Lambda,a}_{{\boldsymbol{x}}}\right|^{p-1}\varphi^{\Lambda,a}_{{\boldsymbol{x}}}$
(38)
for a Lagrange multiplier $\omega_{\Lambda}(a)$. Note that the minimizers for
each $\Lambda$ may simultaneously be viewed as elements of
$\ell^{2}(\mathds{Z}^{d})$ by extending them by zero. Moreover, the
resulting sequence itself is minimizing.
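A minimizer of the Dirichlet problem (37) can be computed by projected gradient descent: flow downhill on the Hamiltonian and renormalize to mass $a$ after each step. The Python sketch below assumes the standard form $\mathcal{H}(\varphi)=\left\|\nabla\varphi\right\|_{2}^{2}-\frac{2}{p+1}\left\|\varphi\right\|_{p+1}^{p+1}$ (the normalization suggested by Lemma 5.1); the box size, step size and mass are illustrative choices, not taken from the paper.

```python
import math

# Projected gradient sketch for the Dirichlet problem (37) on a 1d box,
# assuming H(phi) = ||grad phi||_2^2 - (2/(p+1))*||phi||_{p+1}^{p+1}.
L_box, p, a, eta, iters = 21, 3, 4.0, 0.01, 2000

def energy(phi):
    grad = sum((phi[i + 1] - phi[i]) ** 2 for i in range(L_box - 1))
    grad += phi[0] ** 2 + phi[-1] ** 2           # Dirichlet: zero outside the box
    return grad - 2.0 / (p + 1) * sum(abs(u) ** (p + 1) for u in phi)

def grad_energy(phi):
    out = []
    for i in range(L_box):
        left = phi[i - 1] if i > 0 else 0.0
        right = phi[i + 1] if i < L_box - 1 else 0.0
        lap = left - 2 * phi[i] + right          # Dirichlet Laplacian
        out.append(-2 * lap - 2 * abs(phi[i]) ** (p - 1) * phi[i])
    return out

def renormalize(phi):
    c = math.sqrt(a / sum(u * u for u in phi))   # project back to mass a
    return [c * u for u in phi]

phi = renormalize([math.exp(-((i - L_box // 2) ** 2) / 16.0) for i in range(L_box)])
E0 = energy(phi)
for _ in range(iters):
    phi = renormalize([u - eta * g for u, g in zip(phi, grad_energy(phi))])
E1 = energy(phi)   # energy decreases; phi localizes into a decaying profile
```

The iterate concentrates into a single peak at the center of the box with a rapidly decaying tail, in line with the exponential decay discussed below.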
###### Lemma 5.2.
Let $\varphi^{\Lambda,a}_{{\boldsymbol{x}}}$ denote the Dirichlet minimizers
of (37) with mass $a$ on $\Lambda\subset\mathds{Z}^{d}$. Let
$\\{\Lambda_{k}\\}_{k\geqslant 1}$ be a sequence of subsets each containing
$\boldsymbol{0}$ such that $\Lambda_{k}\uparrow\mathds{Z}^{d}$ as
$k\to\infty$. Then the sequence $\\{\varphi^{\Lambda_{k},a}\\}_{k\geqslant 1}$
is minimizing for (5).
###### Proof.
Let ${\varphi^{a,j}}$ be a minimizing sequence for $\mathcal{H}$, each with
mass $a$. Note, by the translation invariance of $\mathcal{H}$, we may
recenter the $\varphi^{a,j}$ such that $\boldsymbol{0}$ is always the site
with the largest absolute value. We then define
$\varphi^{a,j,\Lambda_{k}}:=\sqrt{\frac{a}{\left\|P_{\Lambda_{k}}\varphi^{a,j}\right\|_{2}^{2}}}\cdot
P_{\Lambda_{k}}\varphi^{a,j}.$
Note that as $\Lambda_{k}\uparrow\mathds{Z}^{d}$,
$P_{\Lambda_{k}}\varphi\to\varphi$ in norm for any
$\varphi\in\ell^{2}(\mathds{Z}^{d})$. We may then choose a diagonal
subsequence that is minimizing for $\mathcal{H}$ and denote it as
$\\{\varphi^{a,j_{k},\Lambda_{k}}\\}_{k\geqslant 1}$. By the minimizing
property of the Dirichlet minimizers,
$\mathcal{H}(\varphi^{\Lambda_{k},a})\leqslant\mathcal{H}(\varphi^{a,j_{k},\Lambda_{k}}).$
Thus, the sequence of Dirichlet minimizers is also minimizing for the
Hamiltonian. $\blacksquare$
On the question of exponential decay, we provide a probabilistic proof
directly adapted from Chatterjee [C14], who proved the same for soliton
solutions in the mass-subcritical regime on the discrete torus.
We first control the value of the Lagrange multiplier, which in turn is
required for a uniform rate of exponential decay.
###### Lemma 5.3.
Let $a>R_{p}$. For $|\Lambda|$ sufficiently large, there exist
$\varepsilon_{1}<\varepsilon_{2}<0$ depending on $a$ such that
$\varepsilon_{1}<\omega_{\Lambda}(a)<\varepsilon_{2}.$
###### Proof.
Multiplying both sides of (38) by $\varphi^{\Lambda,a}_{{\boldsymbol{x}}}$
and summing over ${\boldsymbol{x}}$ (recall that $\varphi^{\Lambda,a}$ is
assumed to be nonnegative), we obtain that
$\displaystyle\omega_{\Lambda}(a)\cdot a$
$\displaystyle=\left\|\nabla\varphi^{\Lambda,a}\right\|_{2}^{2}-\left\|\varphi^{\Lambda,a}\right\|_{p+1}^{p+1}$
$\displaystyle\leqslant\left\|\nabla\varphi^{\Lambda,a}\right\|_{2}^{2}-\frac{2}{p+1}\left\|\varphi^{\Lambda,a}\right\|_{p+1}^{p+1}=\mathcal{H}(\varphi^{\Lambda,a}).$
Now recall that as $\Lambda\uparrow\mathds{Z}^{d}$,
$\mathcal{H}(\varphi^{\Lambda,a})$ converges to $I(a)<0$, so that
$\omega_{\Lambda}(a)\leqslant\mathcal{H}(\varphi^{\Lambda,a})/a$, and any
$\varepsilon_{2}$ with $I(a)/a<\varepsilon_{2}<0$ works for $|\Lambda|$
sufficiently large. As for the lower bound, since
$\left\|\varphi^{\Lambda,a}\right\|_{p+1}^{p+1}\leqslant a^{(p+1)/2}$, we
have that $\omega_{\Lambda}(a)\geqslant-a^{(p-1)/2}=:\varepsilon_{1}.$
$\blacksquare$
###### Lemma 5.4.
Let $a>R_{p}$ and let $\varphi^{\Lambda}$ denote a Dirichlet minimizer
corresponding to $\Lambda$. There exist a set $U_{0}\subset\Lambda$ and
finite constants $\omega_{0}>0$ and $C_{0}>0$, depending on $a$ and
independent of $\Lambda$, such that
$\displaystyle\varphi^{\Lambda}_{{\boldsymbol{x}}}\leqslant
C_{0}e^{-\omega_{0}\cdot d({\boldsymbol{x}},U_{0})}.$
###### Proof.
As we have already seen, $a>R_{p}$ implies that for any $\varepsilon$ with
$I(a)/a<\varepsilon<0$, $\omega_{\Lambda}(a)<\varepsilon$ for all $|\Lambda|$
sufficiently large. In
turn, (38) may be rewritten using Green’s function description of the inverse
of the Dirichlet Laplacian as
$\displaystyle\varphi^{\Lambda,a}_{{\boldsymbol{x}}}=\sum_{{\boldsymbol{x}^{\prime}}}G^{\Lambda}_{-\omega_{\Lambda}}({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})\left|\varphi^{\Lambda,a}_{{\boldsymbol{x}^{\prime}}}\right|^{p-1}\varphi^{\Lambda,a}_{{\boldsymbol{x}^{\prime}}}.$
(39)
Now, let $\delta>0$, and define
$U_{\delta}:=\\{{\boldsymbol{x}}\in\Lambda:\varphi^{\Lambda,a}_{{\boldsymbol{x}}}\geqslant\delta\\}.$
By the usual bound,
$|U_{\delta}|\leqslant\frac{a}{\delta^{2}}.$
We define $z=\omega_{\Lambda}(a)^{2}/(1+\omega_{\Lambda}(a)^{2})$, and let
$(X^{z}_{t})_{t\in\mathds{N}}$ be the simple symmetric random walk on
$\Lambda$ started at ${\boldsymbol{x}}$, killed with probability $z$ at each
step and annihilated at the boundary. Recall that the Green’s function is
given by
$G^{\Lambda}_{|\omega_{\Lambda}|}({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})=\operatorname{\mathds{E}}\sum_{t=1}^{T_{\Lambda}^{|\omega_{\Lambda}|}}\mathds{1}_{X_{t}^{z}={\boldsymbol{x}^{\prime}}}.$
Using this expression and the fact that the event of death is independent of
the step taken, we may rewrite (39) as
$\varphi^{\Lambda,a}_{{\boldsymbol{x}}}=\sum_{t=0}^{\infty}(1-z)^{t}\operatorname{\mathds{E}}\left|\varphi^{\Lambda,a}_{X_{t}^{z}}\right|^{p-1}\varphi^{\Lambda,a}_{X_{t}^{z}}.$
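This resummation, and the geometric-sum bound derived just below, can be checked numerically in a simplified one-dimensional setting: for the substochastic transition matrix $P$ of the walk annihilated at the boundary, the series $\sum_{t}(1-z)^{t}P^{t}$ equals the resolvent $(I-(1-z)P)^{-1}$, and each entry is at most $(1-z)^{d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})}/z$. The normalization below is illustrative, not the one fixed by (38).

```python
import numpy as np

# Killed walk on Lambda = {0, ..., m-1}: substochastic transition matrix P
# (mass leaving Lambda is lost), kill probability z per step.
m, z = 15, 0.2
P = np.zeros((m, m))
for x in range(m):
    if x > 0:
        P[x, x - 1] = 0.5
    if x < m - 1:
        P[x, x + 1] = 0.5

# Truncated series sum_t (1-z)^t P^t ...
series, term = np.zeros((m, m)), np.eye(m)
for t in range(200):
    series += term
    term = (1.0 - z) * (P @ term)

# ... agrees with the resolvent (I - (1-z) P)^{-1}.
G = np.linalg.inv(np.eye(m) - (1.0 - z) * P)
assert np.allclose(series, G, atol=1e-8)

# Since p_t(x, x') = 0 for t < d(x, x') and p_t <= 1, summing the
# geometric tail gives G(x, x') <= (1-z)^{d(x,x')} / z.
D = np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
assert np.all(G <= (1.0 - z) ** D / z + 1e-12)
```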
Let $p_{\Lambda}({\boldsymbol{x}},{\boldsymbol{x}^{\prime}},t)$ denote the
probability kernel of $X_{t}$, the random walk on $\Lambda$ annihilated on the
boundary. Observe that for all ${\boldsymbol{x}^{\prime}}$ such that
$d({\boldsymbol{x}^{\prime}},{\boldsymbol{x}})>t$, the probability that the
random walk has reached ${\boldsymbol{x}^{\prime}}$ is $0$. Thus,
$\displaystyle\varphi^{\Lambda,a}_{{\boldsymbol{x}}}=\sum_{{\boldsymbol{x}^{\prime}}\in\Lambda}\sum_{t\geqslant
d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})}(1-z)^{t}p_{\Lambda}({\boldsymbol{x}},{\boldsymbol{x}^{\prime}},t)\left(\varphi^{\Lambda,a}_{{\boldsymbol{x}^{\prime}}}\right)^{p}.$
(40)
Clearly $p_{\Lambda}({\boldsymbol{x}},{\boldsymbol{x}^{\prime}},t)\leqslant 1$, and on evaluation of the
geometric sum we have
$\varphi^{\Lambda,a}_{{\boldsymbol{x}}}\leqslant\frac{1}{z}\sum_{{\boldsymbol{x}^{\prime}}\in\Lambda}{(1-z)^{d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})}}\left(\varphi^{\Lambda,a}_{{\boldsymbol{x}^{\prime}}}\right)^{p}.$
Now, if ${\boldsymbol{x}^{\prime}}\notin U_{\delta}$, then we have that
$(\varphi^{\Lambda,a}_{{\boldsymbol{x}^{\prime}}})^{p}\leqslant\delta^{p-1}\varphi_{{\boldsymbol{x}^{\prime}}}^{\Lambda,a}$,
and if ${\boldsymbol{x}^{\prime}}\in U_{\delta}$, then
$d({\boldsymbol{x}},U_{\delta})\leqslant
d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})$. Thus, partitioning the sum in
(40),
$\displaystyle\varphi^{\Lambda,a}_{{\boldsymbol{x}}}$
$\displaystyle\leqslant\frac{(1-z)^{d(U_{\delta},{\boldsymbol{x}})}}{z}\sum_{{\boldsymbol{x}^{\prime}}\in
U_{\delta}}\left(\varphi^{\Lambda,a}_{{\boldsymbol{x}^{\prime}}}\right)^{p}+\frac{\delta^{p-1}}{z}\sum_{{\boldsymbol{x}^{\prime}}\notin
U_{\delta}}\varphi^{\Lambda,a}_{{\boldsymbol{x}^{\prime}}}$
$\displaystyle\leqslant\frac{(1-z)^{d(U_{\delta},{\boldsymbol{x}})}}{z}\cdot
a^{p/2}\cdot\frac{a}{\delta^{2}}+\frac{\delta^{p-1}}{z}\sum_{{\boldsymbol{x}^{\prime}}\in\Lambda}(1-z)^{d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})}\varphi^{\Lambda,a}_{{\boldsymbol{x}^{\prime}}}.$
(41)
Let
$\eta_{{\boldsymbol{x}}}:=\frac{a^{(p+2)/2}}{z\delta^{2}}\cdot\max_{{\boldsymbol{x}^{\prime\prime}}\in\Lambda}(1-z)^{d({\boldsymbol{x}^{\prime\prime}},U_{\delta})+d({\boldsymbol{x}},{\boldsymbol{x}^{\prime\prime}})/2}.$
We observe as a consequence of the triangle inequality,
$\eta_{{\boldsymbol{x}}}\leqslant\frac{a^{(p+2)/2}}{z\delta^{2}}\max_{{\boldsymbol{x}^{\prime\prime}}\in\Lambda}(1-z)^{d({\boldsymbol{x}^{\prime\prime}},U_{\delta})+d({\boldsymbol{x}^{\prime}},{\boldsymbol{x}^{\prime\prime}})/2-d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})/2}\leqslant(1-z)^{-d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})/2}\eta_{{\boldsymbol{x}^{\prime}}}.$
Let $C$ denote the smallest constant such that
$\displaystyle\varphi^{\Lambda,a}_{{\boldsymbol{x}}}\leqslant
C\eta_{{\boldsymbol{x}}}.$ (42)
Since $\eta_{{\boldsymbol{x}}}>0$ and $\Lambda$ is finite, it is clear that
$C$ is finite as well. We use this bound for (41) obtaining
$\displaystyle\varphi^{\Lambda,a}_{{\boldsymbol{x}}}$
$\displaystyle\leqslant\frac{(1-z)^{d(U_{\delta},{\boldsymbol{x}})}}{z}\cdot\frac{a^{(p+2)/2}}{\delta^{2}}+\frac{C\delta^{p-1}}{z}\sum_{{\boldsymbol{x}^{\prime}}\in\Lambda}(1-z)^{d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})}\eta_{{\boldsymbol{x}^{\prime}}}$
$\displaystyle\leqslant\frac{(1-z)^{d(U_{\delta},{\boldsymbol{x}})}}{z}\cdot\frac{a^{(p+2)/2}}{\delta^{2}}+\frac{C\delta^{p-1}}{z}\eta_{{\boldsymbol{x}}}\sum_{{\boldsymbol{x}^{\prime}}\in\Lambda}(1-z)^{\frac{1}{2}d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})}$
$\displaystyle\leqslant\left(1+\frac{C\delta^{p-1}}{z}\sum_{{\boldsymbol{x}^{\prime}}\in\Lambda}(1-z)^{d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})/2}\right)\eta_{{\boldsymbol{x}}}.$
We can choose $\delta$ sufficiently small, such that
$\frac{\delta^{p-1}}{z}\sum_{{\boldsymbol{x}^{\prime}}\in\Lambda}(1-z)^{d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})/2}\leqslant\frac{1}{2}.$
In turn, this tells us that
$C\leqslant 1+{C}/{2}$
since $C$ is the best constant for (42). This is the same as $C\leqslant 2$.
With this choice of $\delta$, we have that
$\varphi^{\Lambda,a}_{{\boldsymbol{x}}}\leqslant
2\frac{a^{(p+2)/2}}{z\delta^{2}}\max_{{\boldsymbol{x}^{\prime}}\in\Lambda}(1-z)^{d(U_{\delta},{\boldsymbol{x}^{\prime}})+d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})/2}.$
By the triangle inequality,
$d(U_{\delta},{\boldsymbol{x}^{\prime}})+\frac{1}{2}d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})\geqslant\frac{1}{2}d(U_{\delta},{\boldsymbol{x}^{\prime}})+\frac{1}{2}d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})\geqslant\frac{d(U_{\delta},{\boldsymbol{x}})-d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})}{2}+\frac{1}{2}d({\boldsymbol{x}},{\boldsymbol{x}^{\prime}})=\frac{1}{2}d(U_{\delta},{\boldsymbol{x}}).$
Thus,
$\varphi^{\Lambda,a}_{{\boldsymbol{x}}}\leqslant
2\frac{a^{(p+2)/2}}{z\delta^{2}}(1-z)^{d({\boldsymbol{x}},U_{\delta})/2}.$
To conclude, take $U_{0}=U_{\delta},\text{
}C_{0}=2\frac{a^{(p+2)/2}}{z\delta^{2}}\text{ and
}\omega_{0}=-\frac{1}{2}\log(1-z).$ $\blacksquare$
We take this moment to emphasize that we may bound all the constants arising
in Lemma 5.4 in terms of the $\varepsilon_{1}$ and $\varepsilon_{2}$
introduced in Lemma 5.3. To begin with, as a direct consequence of Lemma 5.3,
$\frac{\varepsilon^{2}_{2}}{1+\varepsilon^{2}_{2}}\leqslant
z\leqslant\frac{\varepsilon^{2}_{1}}{1+\varepsilon^{2}_{1}}.$
Next, a valid choice of $\delta$ is
$\delta=\frac{z|\log(1-z)|^{d}}{2(d-1)!}$
which in turn yields
$C_{0}=\frac{4\cdot
a^{(p+1)/2}\cdot(d-1)!}{z^{2}|\log(1-z)|^{d}}\leqslant\frac{4\cdot
a^{(p+1)/2}\cdot(d-1)!\cdot(1+\varepsilon_{2}^{2})}{\varepsilon_{2}^{2}\cdot|\log(1+\varepsilon^{2}_{2})-2\log|\varepsilon_{2}||^{d}}.$
The uniform rate of exponential decay of the Dirichlet minimizers implies the
existence of an exponentially decaying minimizer to (5), via a standard
compactness argument. For the sake of clarity of exposition, we detail the
argument here.
###### Lemma 5.5.
Let $a>R_{p}$. There exists an exponentially decaying minimizer $\varphi^{a}$
to (5).
###### Proof.
Given a box $\Lambda=[-n,n]^{d}\cap\mathds{Z}^{d}$ centered at the origin, we
define $2\cdot\Lambda$ as $[-2n,2n]^{d}\cap\mathds{Z}^{d}$. We embed the
minimizer $\varphi^{\Lambda,a}$ into $\ell^{2}(2\cdot\Lambda)$ via the
standard inclusion map and note that the energy is still the same. We are then
free to translate $\varphi^{\Lambda,a}$ within this larger box and preserve
the energy. We define $\tilde{\varphi}^{\Lambda,a}$ to be the translation of
$\varphi^{\Lambda,a}$ such that the site with the largest mass is situated at
the origin. By Lemma 5.4 there exist, independent of $\Lambda$, a set $U_{0}$
of bounded size and positive constants $C_{0}$ and $\omega_{0}$ such that
$\tilde{\varphi}^{\Lambda,a}\leqslant
C_{0}e^{-\omega_{0}d({\boldsymbol{x}},U_{0})}$. By construction, it is clear
that $\boldsymbol{0}\in U_{0}$, and thus there is a box $\Lambda_{0}$ such
that $\tilde{\varphi}^{\Lambda,a}\leqslant C_{0}e^{-\omega_{0}\cdot
d(\Lambda_{0},{\boldsymbol{x}})}$. Thus, $\\{\tilde{\varphi}^{\Lambda,a}\\}$
is a pre-compact sequence in $\ell^{2}(\mathds{Z}^{d})$. We take $\varphi^{a}$
to be any accumulation point of the sequence. Clearly,
$\varphi^{a}_{{\boldsymbol{x}}}\geqslant 0$,
$\left\|\varphi^{a}\right\|_{2}^{2}=a$ and
$\varphi^{a}_{{\boldsymbol{x}}}\leqslant
C_{0}e^{-\omega_{0}d({\boldsymbol{x}},\Lambda_{0})}$. $\blacksquare$
###### Remark 5.6.
We remark that the above process of re-centering should not be necessary as
the sequence of Dirichlet minimizers should have a mass that is concentrated
towards the center of the box $\Lambda$. This is due to the decay of the
Dirichlet heat kernel at the edges of the box. We, in fact, conjecture that
the minimizer for the Dirichlet problem is unique, which would in turn imply
the uniqueness of the minimizer to (5) up to translation and phase rotation.
The exponentially decaying minimizer and the corresponding truncations are
very useful from the perspective of constructing a minimizing sequence with a
good rate of convergence. We establish this now.
###### Lemma 5.7.
Let $a>R_{p}+\varepsilon$, $\Lambda=[-n,n]^{d}\cap\mathds{Z}^{d}$, and let
$\varphi^{\Lambda,a}$ denote the Dirichlet minimizer. We have
$\left|I(a)-\mathcal{H}\left(\varphi^{\Lambda,a}\right)\right|\leqslant
C_{0}^{\prime}\cdot a^{p^{2}-1}\cdot e^{-\omega_{0}\cdot n}.$
###### Proof.
Let $\varphi^{a}$ be as in Lemma 5.5. Note that for any $p$,
$\displaystyle\left\|\varphi^{a}\right\|^{p}_{p}-\left(\frac{a}{\left\|P_{\Lambda}\varphi^{a}\right\|_{2}^{2}}\right)^{p/2}\left\|P_{\Lambda}\varphi^{a}\right\|_{p}^{p}\leqslant\left|1-\left(\frac{a}{\left\|P_{\Lambda}\varphi^{a}\right\|_{2}^{2}}\right)^{p}\right|\left\|\varphi^{a}\right\|_{p}^{p}.$
(43)
By construction, we know that for all ${\boldsymbol{x}}$ outside $\Lambda$
$\varphi^{a}_{{\boldsymbol{x}}}\leqslant C_{0}\exp(-\omega_{0}\cdot n).$
Thus, we have that
$\left\|P_{\Lambda}\varphi^{a}\right\|_{2}^{2}\geqslant
a-\frac{C_{0}\cdot\Gamma(d)}{\omega_{0}^{d}}\exp\left({-\omega_{0}\cdot
n}\right).$
For notational convenience, within this proof we write
$\epsilon(n):=C_{0}\exp(-\omega_{0}\cdot n).$
Applied to (43),
$\displaystyle\left\|\varphi^{a}\right\|^{p+1}_{p+1}-\left(\frac{a}{\left\|P_{\Lambda}\varphi^{a}\right\|_{2}^{2}}\right)^{(p+1)/2}\left\|P_{\Lambda}\varphi^{a}\right\|_{p+1}^{p+1}$
$\displaystyle\leqslant\left|\frac{a^{p+1}-(a-\epsilon(n))^{p+1}}{(a-\epsilon(n))^{p}}\right|\left\|\varphi^{a}\right\|^{p+1}_{p+1}$
$\displaystyle\leqslant\frac{(p+1)a^{p(p+1)}\epsilon(n)}{(a-\epsilon(n))^{p+1}}.$
Now as for the gradient term, we only have to control the contributions from
outside $\Lambda$ and on the boundary. We have that
$\displaystyle\left\|\nabla\varphi^{a}\right\|^{2}_{2}-\frac{a}{\left\|P_{\Lambda}\varphi^{a}\right\|_{2}^{2}}\cdot\left\|\nabla
P_{\Lambda}\varphi^{a}\right\|_{2}^{2}\leqslant\sum_{{\boldsymbol{x}}\sim{\boldsymbol{x}^{\prime}}\in\Lambda^{c}}|\varphi^{a}_{{\boldsymbol{x}}}-\varphi^{a}_{{\boldsymbol{x}^{\prime}}}|^{2}$
$\displaystyle+\sum_{{\boldsymbol{x}}\sim{\boldsymbol{x}^{\prime}}\in\Lambda}\left(1-\frac{a}{\left\|P_{\Lambda}\varphi^{a}\right\|_{2}^{2}}\right)\cdot|\varphi^{a}_{{\boldsymbol{x}}}-\varphi^{a}_{{\boldsymbol{x}^{\prime}}}|^{2}$
$\displaystyle+\sum_{{\boldsymbol{x}}\sim{\boldsymbol{x}^{\prime}}\in\partial\Lambda}|\varphi^{a}_{{\boldsymbol{x}}}|^{2}.$
Applying the bound obtained from the exponential decay,
$\left\|\nabla\varphi^{a}\right\|^{2}_{2}-\frac{a}{\left\|P_{\Lambda}\varphi^{a}\right\|_{2}^{2}}\cdot\left\|\nabla
P_{\Lambda}\varphi^{a}\right\|_{2}^{2}\leqslant
2\epsilon(n)+\left(\frac{\epsilon(n)}{a-\epsilon(n)}\right)a.$
Combining, we have
$\left|I(a)-\mathcal{H}\left(\sqrt{\frac{a}{\left\|P_{\Lambda}\varphi^{a}\right\|_{2}^{2}}}P_{\Lambda}\varphi^{a}\right)\right|\leqslant\epsilon(n)\cdot\left(\frac{a^{p(p+1)}}{2(a-\epsilon(n))^{p+1}}+\frac{2a}{a-\epsilon(n)}\right)\leqslant
C_{0}^{\prime}\cdot a^{p^{2}-1}\cdot e^{-\omega_{0}\cdot n}.$
To conclude, we note that
$I(a)\leqslant\mathcal{H}(\varphi^{\Lambda,a})\leqslant\mathcal{H}\left(\sqrt{\frac{a}{\left\|P_{\Lambda}\varphi^{a}\right\|_{2}^{2}}}P_{\Lambda}\varphi^{a}\right).$
This completes the proof. $\blacksquare$
For the scenario where the minimizer does not exist, any weakly convergent
sequence to zero is minimizing. We may use this fact to bound the rate of
convergence.
###### Lemma 5.8.
Let $a\leqslant R_{p}$, $\Lambda=[-n,n]^{d}\cap\mathds{Z}^{d}$ and
$\varphi^{\Lambda,a}$ be the Dirichlet minimizer with mass $a$. Then we have
$|\mathcal{H}(\varphi^{\Lambda,a})|\leqslant\frac{2ad}{|\Lambda|^{1/d}}.$
###### Proof.
Let $\boldsymbol{1}_{\Lambda}$ denote the indicator function of $\Lambda$,
taking the value $1$ at every ${\boldsymbol{x}}\in\Lambda$ and $0$ outside
$\Lambda$. It is easy to evaluate the norms of $\boldsymbol{1}_{\Lambda}$; in
particular
$\left\|\nabla\boldsymbol{1}_{\Lambda}\right\|_{2}^{2}\leqslant
2d|\Lambda|^{(d-1)/d}.$
Thus,
$\mathcal{H}\left(\sqrt{\frac{a}{|\Lambda|}}\boldsymbol{1}_{\Lambda}\right)\leqslant\frac{2ad}{|\Lambda|^{1/d}}.$
Since we are working with $a\leqslant R_{p}$, we know that $I(a)=0$. Combined
with the fact that $\mathcal{H}(\varphi^{\Lambda,a})$ is monotonically
decreasing in $\Lambda$ with limit $I(a)=0$, we must have that
$\mathcal{H}(\varphi^{\Lambda,a})\geqslant 0$.
For $|\Lambda|$ sufficiently large, by the minimizing hypothesis, we have
$0\leqslant\mathcal{H}(\varphi^{\Lambda,a})\leqslant\mathcal{H}\left(\sqrt{\frac{a}{|\Lambda|}}\boldsymbol{1}_{\Lambda}\right)\leqslant\frac{2ad}{|\Lambda|^{1/d}}$
and this completes the proof. $\blacksquare$
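The trial-function computation above is exact in a sense that is visible numerically: in $d=2$ the Dirichlet gradient energy of $\boldsymbol{1}_{\Lambda}$ equals the number of edges crossing the boundary of the box, $4(2n+1)=2d|\Lambda|^{(d-1)/d}$. A small check (parameters illustrative):

```python
import numpy as np

# Lemma 5.8 trial bound for Lambda = [-n,n]^2 (d = 2): the Dirichlet
# gradient energy of the indicator is the count of boundary-crossing edges.
d, n, a, p = 2, 10, 1.0, 3
side = 2 * n + 1
vol = side ** d

ind = np.zeros((side + 2, side + 2))     # pad with the zero boundary
ind[1:-1, 1:-1] = 1.0
grad2 = np.sum(np.diff(ind, axis=0) ** 2) + np.sum(np.diff(ind, axis=1) ** 2)
assert grad2 <= 2 * d * vol ** ((d - 1) / d) + 1e-9   # equality in d = 2

psi2 = a / vol                            # mass a spread uniformly over Lambda
H = psi2 * grad2 - 2 / (p + 1) * psi2 ** ((p + 1) / 2) * vol
assert H <= 2 * a * d / vol ** (1 / d) + 1e-9
```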
### 5.3. Analysis of Minimal Energy
We conclude this section with some basic facts about the function $I$.
Clearly, we have
$\left\|\nabla\psi\right\|_{2}^{2}\leqslant
4d\left\|\psi\right\|_{2}^{2}\text{ and
}\left\|\psi\right\|_{p+1}^{p+1}\leqslant\left\|\psi\right\|_{2}^{2}\cdot\max_{x\in\mathds{Z}^{d}}|\psi_{x}|^{p-1}\leqslant\left\|\psi\right\|_{2}^{p+1}.$
Define the functions $J,\widehat{J}$ on $(0,\infty)$ by
$\displaystyle J(a):=\frac{1}{a}I(a)\text{ and
}\widehat{J}(a):=\frac{1}{a}I(a)+\frac{2}{p+1}(a\vee R_{p})^{(p-1)/2}.$ (44)
###### Lemma 5.9.
The function $J$ is decreasing and differentiable.
###### Proof.
Fix $0<a^{\prime}<a$. Given a function $\psi$ with mass $a^{\prime}$, we
consider the function $\tilde{\psi}(x)=\sqrt{a/a^{\prime}}\cdot\psi(x)$ with
mass $a$. We have
$\displaystyle\mathcal{H}(\tilde{\psi})=\frac{a}{a^{\prime}}\left\|\nabla\psi\right\|_{2}^{2}-\frac{2}{p+1}\left(\frac{a}{a^{\prime}}\right)^{(p+1)/2}\left\|\psi\right\|_{p+1}^{p+1},$
or
$\frac{I(a)}{a}\leqslant\frac{1}{a}\mathcal{H}(\tilde{\psi})=\frac{1}{a^{\prime}}\mathcal{H}(\psi)-2\cdot\frac{a^{(p-1)/2}-(a^{\prime})^{(p-1)/2}}{(p+1)\cdot(a^{\prime})^{(p+1)/2}}\left\|\psi\right\|_{p+1}^{p+1}\leqslant\frac{\mathcal{H}(\psi)}{a^{\prime}}.$
Thus we have $\frac{I(a)}{a}\leqslant\frac{I(a^{\prime})}{a^{\prime}}$.
Similarly, given a function $\tilde{\psi}$ with mass $a$, we consider the
function $\psi(x)=\sqrt{a^{\prime}/a}\cdot\tilde{\psi}(x)$ with mass
$a^{\prime}$ to have
$\displaystyle\frac{I(a^{\prime})}{a^{\prime}}\leqslant\frac{1}{a^{\prime}}\mathcal{H}(\psi)$
$\displaystyle=\frac{1}{a}\mathcal{H}(\tilde{\psi})+\frac{2\left(a^{(p-1)/2}-(a^{\prime})^{(p-1)/2}\right)}{(p+1)a^{(p+1)/2}}\left\|\tilde{\psi}\right\|_{p+1}^{p+1}$
$\displaystyle\leqslant\frac{1}{a}\mathcal{H}(\tilde{\psi})+\frac{2}{p+1}(a^{(p-1)/2}-(a^{\prime})^{(p-1)/2}).$
Thus we have
$0\leqslant\frac{I(a^{\prime})}{a^{\prime}}-\frac{I(a)}{a}\leqslant\frac{2}{p+1}(a^{(p-1)/2}-(a^{\prime})^{(p-1)/2})$
and the proof is complete. $\blacksquare$
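The first display of the proof is a pure scaling identity, which can be checked mechanically; the profile and the values of $p$, $a^{\prime}$, $a$ below are arbitrary.

```python
import numpy as np

# Scaling identity behind Lemma 5.9: rescaling a mass-a' function psi to
# mass a multiplies the gradient term of H by a/a' and the potential term
# by (a/a')^{(p+1)/2}.
rng = np.random.default_rng(1)
p, a_prime, a = 3, 2.0, 5.0

def hamiltonian(phi, p):
    padded = np.concatenate([[0.0], phi, [0.0]])   # 1d Dirichlet gradient
    return np.sum(np.diff(padded) ** 2) - 2 / (p + 1) * np.sum(np.abs(phi) ** (p + 1))

psi = rng.normal(size=31)
psi *= np.sqrt(a_prime / np.sum(psi ** 2))         # mass a'
tilde = np.sqrt(a / a_prime) * psi                 # mass a

padded = np.concatenate([[0.0], psi, [0.0]])
grad2 = np.sum(np.diff(padded) ** 2)
pot = np.sum(np.abs(psi) ** (p + 1))
lhs = hamiltonian(tilde, p)
rhs = (a / a_prime) * grad2 - 2 / (p + 1) * (a / a_prime) ** ((p + 1) / 2) * pot
assert np.isclose(lhs, rhs)
```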
A trivial corollary of this result is the differentiability of the function
$I$ itself, since $I(a)=a\cdot J(a)$ and the product rule applies.
###### Lemma 5.10.
$\widehat{J}$ is increasing in $a$. Moreover,
$\frac{2}{p+1}R_{p}^{(p-1)/2}\leqslant\widehat{J}(a)\leqslant 1/C_{d}\text{
for all }a>0.$
###### Proof.
By evaluating $\mathcal{H}$ at the function
$\psi(x)=\sqrt{a}\cdot\mathds{1}_{x=0}$ in (11), it is clear that
$\displaystyle I(a)\leqslant 2d\cdot a-\frac{2}{p+1}a^{(p+1)/2}.$ (45)
From Lemma 5.9, applied with $a\geqslant R_{p}$ and $a^{\prime}=R_{p}$, we have
$\displaystyle I(a)\geqslant\frac{2}{p+1}R_{p}^{(p-1)/2}\cdot
a-\frac{2}{p+1}a^{(p+1)/2}.$ (46)
Combining (45) and (46) with the definition (44) yields the stated bounds on
$\widehat{J}$. $\blacksquare$
## 6\. Convergence of the Free Energy
In this section, we will establish the convergence of the scaled free energy.
Recall that our measure is restricted to the ball of radius $\sqrt{N}$ in
$\mathds{C}^{V}$. We will be cutting the mass constraint into pieces
corresponding to “solitonic” and dispersive contributions. The vertex set $V$
will be partitioned into a set $U$ which is small, where a typical function is
concentrated, and $U^{c}$ where the mass of a typical function is spread out
over all the sites. Recall that we denote the restrictions of $\psi$ onto $U$
and $U^{c}$ as $\psi_{U}$ and $\psi_{U^{c}}$ respectively. We denote the
collection of sites ${\boldsymbol{x}}\in U$ adjacent to $U^{c}$ by $\partial
U_{int}$ and analogously the collection of sites ${\boldsymbol{x}^{\prime}}\in
U^{c}$ adjacent to $U$ by $\partial U_{ext}$. The Hamiltonian
$\mathcal{H}_{n}$ may then be written as
$\displaystyle\begin{split}\mathcal{H}_{n}(\psi)&=\mathcal{H}_{n}(\psi_{U})+\mathcal{H}_{n}(\psi_{U^{c}})\\\
&\qquad+\sum_{{\boldsymbol{x}}\sim{\boldsymbol{x}^{\prime}}\in\partial
U}|\psi_{{\boldsymbol{x}}}-\psi_{{\boldsymbol{x}^{\prime}}}|^{2}-\sum_{{\boldsymbol{x}}\in\partial
U_{int}}|\psi_{{\boldsymbol{x}}}|^{2}-\sum_{{\boldsymbol{x}^{\prime}}\in\partial
U_{ext}}|\psi_{{\boldsymbol{x}^{\prime}}}|^{2}.\end{split}$ (47)
In the definitions of $\mathcal{H}_{n}(\psi_{U})$ and
$\mathcal{H}_{n}(\psi_{U^{c}})$, the Laplacian is understood to mean the
Dirichlet Laplacian. The decomposition of $\psi$ into $\psi_{U}$ and
$\psi_{U^{c}}$ is to be regarded as the orthogonal decomposition
$\mathds{C}^{V}=\mathds{C}^{U}\oplus\mathds{C}^{U^{c}}$, and as such the
expression $\psi=\psi_{U}\oplus\psi_{U^{c}}$ carries the obvious meaning, as
does the decomposition of the volume element
$\mathbf{d}\psi=\mathbf{d}\psi_{U}\cdot\mathbf{d}\psi_{U^{c}}$. The mass
constraint, which may be expressed as
$\left\|\psi_{U}\right\|_{2}^{2}+\left\|\psi_{U^{c}}\right\|_{2}^{2}\leqslant
N,$
cannot be expressed as a product set, but can be closely approximated by a
disjoint union of product sets. For a choice of $0<\alpha<1$ to be specified
later, let $\kappa=N^{\alpha}$. Let $i^{*}$ be the smallest natural number
such that $i^{*}\kappa\geqslant N$ and $i_{*}$ be the largest natural number
such that $i_{*}\kappa\leqslant N$. We have that
$\displaystyle\bigcup_{i+j\leqslant
i^{*}}\left\\{\psi_{U}:\left\|\psi_{U}\right\|_{2}^{2}\in[i\kappa,i\kappa+\kappa)\right\\}\times\left\\{\psi_{U^{c}}:\left\|\psi_{U^{c}}\right\|_{2}^{2}\in[j\kappa,(j+1)\kappa)\right\\}$
(48)
contains the ball of radius $\sqrt{N}$, and
$\displaystyle\bigcup_{i+j=i_{*}}\left\\{\psi_{U}:\left\|\psi_{U}\right\|_{2}^{2}\in[i\kappa,i\kappa+\kappa)\right\\}\times\left\\{\psi_{U^{c}}:\left\|\psi_{U^{c}}\right\|_{2}^{2}\in[j\kappa,(j+1)\kappa)\right\\}$
(49)
is contained by the ball. Replacing the ball with these sets as the region of
integration will give us upper and lower bounds for the partition function.
For ease of notation in the sections to follow, we define
$\displaystyle\mathcal{B}_{i,U}$
$\displaystyle=\left\\{\psi_{U}:\left\|\psi_{U}\right\|_{2}^{2}\in[i\kappa,i\kappa+\kappa)\right\\}$
(50) $\displaystyle\text{and }\mathcal{A}_{i,j,U}$
$\displaystyle=\mathcal{B}_{i,U}\times\mathcal{B}_{j,U^{c}}.$
To summarize, for a given fixed $U\subset V$, we have
$\displaystyle\bigcup_{i+j=i_{*}}\mathcal{A}_{i,j,U}\subset\left\\{\psi:\left\|\psi\right\|_{2}^{2}<N\right\\}\subset\bigcup_{i+j\leqslant
i^{*}}\mathcal{A}_{i,j,U}.$ (51)
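The upper inclusion in (51) amounts to the observation that the shell indices of any $\psi$ in the ball satisfy $i+j\leqslant(\left\|\psi_{U}\right\|_{2}^{2}+\left\|\psi_{U^{c}}\right\|_{2}^{2})/\kappa<N/\kappa\leqslant i^{*}$. A sampling check, with toy sizes in place of the actual $V$ and $U$:

```python
import numpy as np

# Every psi with ||psi||_2^2 < N lands in some product shell A_{i,j,U}
# with i + j <= i*, as in the covering (48)/(51).
rng = np.random.default_rng(2)
N, alpha = 1000.0, 0.5
kappa = N ** alpha
i_star = int(np.ceil(N / kappa))     # smallest integer with i* kappa >= N

V, U_size = 50, 5                    # toy vertex set, U = first 5 sites
for _ in range(1000):
    psi = rng.normal(size=V)
    psi *= np.sqrt(rng.uniform(0, N)) / np.linalg.norm(psi)   # ||psi||^2 < N
    i = int(np.sum(psi[:U_size] ** 2) // kappa)
    j = int(np.sum(psi[U_size:] ** 2) // kappa)
    assert i + j <= i_star
```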
### 6.1. Upper Bound
With the decomposition of the mass constraint introduced above, we will split
the proof of Theorem 2.3 into two parts. This section is dedicated to the
first portion, the appropriate upper bound for the free energy.
###### Lemma 6.1.
There exist a constant $C$ and a sequence $\Theta_{S}(N)$, depending on
$\nu$, $\theta$, $p$ and $d$, such that
$\displaystyle\frac{1}{N}\log
Z_{N}(\theta,\nu)\leqslant\log(\pi/\theta)-\inf_{0<a<1}(W(\theta(1-a))+\theta\nu^{-1}I(a\nu))+\frac{1}{N}\log\Theta_{S}(N)$
where
$\Theta_{S}(N)\leqslant\exp\left(10s_{N}N\right)\cdot N^{2s_{N}^{-d-1}}\cdot
e^{C\kappa_{N}}.$
We begin with an argument to justify the separation of functions into
concentrated and dispersed parts, in other words showing that for any function
$\psi\in\mathds{C}^{V}$, we may always find a set $U$ such that $\psi$ is not
concentrated outside $U$, and the boundary contribution may be controlled.
###### Lemma 6.2.
Given a function $\psi\in\mathds{C}^{V}$ with $\left\|\psi\right\|_{2}^{2}<N$
and $\varepsilon>0$, there exists a subset $U\subset V$ with
$|U|\leqslant\varepsilon^{-d-1}$ such that $|\psi_{x}|^{2}<\varepsilon N$ for
all $x\notin U$ and $\sum_{x\in\partial U_{int}\cup\partial
U_{ext}}|\psi_{x}|^{2}<3\varepsilon N$.
###### Proof.
Take $U_{0}=\\{\boldsymbol{x}:|\psi_{\boldsymbol{x}}|^{2}\geqslant\varepsilon
N\\}$. Clearly $|U_{0}|\leqslant 1/\varepsilon$. We define the $U_{i}$ by the
successive addition of the 2-step outer boundary, _i.e.,_
$U_{i}=\\{\boldsymbol{x}\in V:d(\boldsymbol{x},U_{0})\leqslant 2i\\}$ and
$B_{i}=U_{i}\setminus U_{i-1}$ for all $i\geqslant 1$. Take $B_{0}=U_{0}$.
Note that
$\sum_{i=1}^{k}\sum_{\begin{subarray}{c}\boldsymbol{x}\in
B_{i}\end{subarray}}|\psi_{\boldsymbol{x}}|^{2}\leqslant\sum_{\boldsymbol{x}}|\psi_{\boldsymbol{x}}|^{2}\leqslant
N.$
In particular, there exists $i\leqslant k$ such that $\sum_{\boldsymbol{x}\in
B_{i}}|\psi_{\boldsymbol{x}}|^{2}\leqslant N/k$. Take
$k=\lfloor\varepsilon^{-1}/2\rfloor$, so that $N/k\leqslant 3\varepsilon N$
for $\varepsilon$ sufficiently small.
Now, note that $|U_{i}|\leqslant(2k+1)^{d}|U_{0}|\leqslant\varepsilon^{-d-1}$.
$\blacksquare$
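The construction in this proof is algorithmic, and a one-dimensional sketch is easy to write down; the test profile, $\varepsilon$, and array length below are illustrative.

```python
import numpy as np

def admissible_set(psi, eps):
    """1d sketch of Lemma 6.2: start from the sites carrying mass
    >= eps*N, enlarge by 2-step shells, and stop at a light shell."""
    N = np.sum(psi ** 2)
    sites = np.arange(len(psi))
    heavy = sites[psi ** 2 >= eps * N]                 # U_0
    if len(heavy) == 0:
        return np.array([], dtype=int)
    dist = np.min(np.abs(sites[:, None] - heavy[None, :]), axis=1)
    k = max(int(1 / (2 * eps)), 1)
    for i in range(1, k + 1):                          # pigeonhole: some shell is light
        shell_mass = np.sum(psi[(dist > 2 * i - 2) & (dist <= 2 * i)] ** 2)
        if shell_mass <= N / k:
            return sites[dist <= 2 * i - 1]
    return sites[dist <= 2 * k - 1]

rng = np.random.default_rng(3)
psi = 0.05 * rng.normal(size=200)
psi[100] = 3.0                                         # one concentrated site
N, eps = np.sum(psi ** 2), 0.05
U = admissible_set(psi, eps)

# The conclusions of the lemma: small mass at every site outside U, and
# small total mass on the (internal plus external) boundary of U.
outside = np.setdiff1d(np.arange(len(psi)), U)
in_U = np.zeros(len(psi), dtype=bool)
in_U[U] = True
boundary = [x for x in range(len(psi))
            if (in_U[x] and ((x > 0 and not in_U[x - 1]) or
                             (x < len(psi) - 1 and not in_U[x + 1])))
            or (not in_U[x] and ((x > 0 and in_U[x - 1]) or
                                 (x < len(psi) - 1 and in_U[x + 1])))]
assert np.all(psi[outside] ** 2 < eps * N)
assert np.sum(psi[boundary] ** 2) <= 3 * eps * N
```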
We will refer to the set $U$ as $\varepsilon$-admissible for $\psi$; Lemma
6.2 states that given a function $\psi$ with appropriate mass and
$\varepsilon>0$, we may find an $\varepsilon$-admissible set. For a fixed set
$U$ and a positive sequence $s_{N}$ decaying to zero at a specified rate (see
(74)), we define
$\displaystyle\mathcal{U}(s_{N}):=\\{\psi\in\mathds{C}^{V}:U\text{ is
$s_{N}$-admissible for $\psi$}\\}.$
It is easy to see that if $\mathcal{U}$ is non-empty, then it must be open in
$\mathds{C}^{V}$ since we may perturb slightly around any $\psi$ that is
contained. Therefore, if it is non-empty, it must have a non-zero Lebesgue
measure. Further, for every $\psi$ with appropriate mass, we know that there
must be a $U$ with size bounded above by $s_{N}^{-d-1}$ such that $U$ is
$s_{N}$-admissible for $\psi$. Thus, we have that
$\\{\psi:\left\|\psi\right\|_{2}^{2}<N\\}\subset\bigcup_{|U|\leqslant
s_{N}^{-d-1}}\mathcal{U}(s_{N}).$
On combination with (51), we have
$\displaystyle Z_{N}\leqslant\sum_{|U|\leqslant
s_{N}^{-d-1}}\sum_{i+j\leqslant
i^{*}}\int_{\mathcal{U}(s_{N})\cap\mathcal{A}_{i,j,U}}\exp(-\mathcal{H}_{N}(\psi))\mathbf{d}\psi.$
(52)
With a fixed choice of $U$ and admissible $\psi$, the bound on the gradient
and (47) yield
$\displaystyle\exp(-\mathcal{H}_{N}(\psi))\leqslant\exp(-\mathcal{H}_{N}(\psi_{U}))\cdot\exp(-\mathcal{H}_{N}(\psi_{U^{c}}))\cdot\exp(3s_{N}N).$
For any $\psi\in\mathcal{U}$, we have a bound on the maximum outside of $U$,
and the gradient control. Thus, if we take
$\displaystyle\tilde{\mathcal{B}}_{j,U^{c}}:=\left\\{\psi_{U^{c}}\in\mathcal{B}_{j,U^{c}}:\left\|\psi_{U^{c}}\right\|_{\infty}\leqslant\sqrt{s_{N}N}\text{
and }\left\|\psi_{\partial U_{ext}}\right\|_{2}^{2}\leqslant 3s_{N}N\right\\}$
(53)
then it follows that
$\displaystyle\mathcal{U}\cap\mathcal{A}_{i,j,U}\subset\mathcal{B}_{i,U}\times\tilde{\mathcal{B}}_{j,U^{c}}.$
(54)
This completes the separation of the partition function. We have
$\displaystyle Z_{N}\leqslant\exp(3\theta s_{N}N)\cdot\sum_{|U|\leqslant
s_{N}^{-d-1}}\sum_{i+j\leqslant i^{*}}$
$\displaystyle\int_{\mathcal{B}_{i,U}}\exp(-\theta\mathcal{H}_{N}(\psi_{U}))\mathbf{d}\psi_{U}$
$\displaystyle\qquad\cdot\int_{\tilde{\mathcal{B}}_{j,U^{c}}}\exp(-\theta\mathcal{H}_{N}(\psi_{U^{c}}))\mathbf{d}\psi_{U^{c}}.$
###### Lemma 6.3.
Let $U$ be a fixed subset with size bounded above by $s_{N}^{-d-1}$, and
$\mathcal{B}_{i,U}$ be as in (50). We have
$\displaystyle\int_{\mathcal{B}_{i,U}}\exp(-\theta\mathcal{H}_{N}(\psi_{U}))\mathbf{d}\psi_{U}\leqslant\exp\left(-N\theta
I_{\nu}\left(\frac{i\kappa_{N}+\kappa_{N}}{N}\right)\right)\cdot\Theta_{1}(N)$
where
$\displaystyle\Theta_{1}(N)\leqslant{N^{s_{N}^{-d-1}}\pi^{s_{N}^{-d-1}}e^{C\kappa_{N}}}.$
###### Proof.
Since all sets $U$ under consideration have size bounded above by
$s_{N}^{-d-1}$ and the smallest nontrivial cycle has size of order
$N^{\frac{1}{d}}$, it follows that for $N$ sufficiently large $U$ can be
embedded in $\mathds{Z}^{d}$. We assume that this is the case, fix an
embedding, and as an
abuse of notation, refer to the corresponding images as $U$ and $\psi_{U}$. We
remind the reader that
$\displaystyle\mathcal{H}_{N}(\psi_{U})=\frac{N}{\nu}\cdot\mathcal{H}(N^{-1/2}\nu^{-1/2}\cdot\psi_{U}).$
We recall the function $I$ introduced in (11). By definition and using the
monotonicity of $I$, it follows that
$\displaystyle\exp(-\theta\mathcal{H}_{N}(\psi_{U}))\leqslant\exp\left(-\frac{N\theta}{\nu}I\left(\nu\cdot\frac{i\kappa_{N}+\kappa_{N}}{N}\right)\right).$
Thus,
$\displaystyle\int_{\mathcal{B}_{i,U}}\exp(-\theta\mathcal{H}_{N}(\psi_{U}))\mathbf{d}\psi_{U}\leqslant\exp\left(-\frac{N\theta}{\nu}I\left(\nu\cdot\frac{i\kappa_{N}+\kappa_{N}}{N}\right)\right)\cdot\text{Vol}(\mathcal{B}_{i,U}).$
The volume of $\mathcal{B}_{i,U}$ is easy to evaluate, as it is a concentric
shell with squared inner radius $i\kappa$ and squared outer radius
$i\kappa+\kappa$. Taking the crude bound (the largest possible shell, which
has squared outer radius $N$) we have
$\displaystyle\text{Vol}(\mathcal{B}_{i,U})\leqslant\frac{N^{|U|}\pi^{|U|}\kappa_{N}}{\Gamma(|U|+1)}\leqslant
N^{|U|}\pi^{|U|}\kappa_{N}.$
$\blacksquare$
Lemma 6.3 addresses the contribution to the free energy from the structured
portion of the Hamiltonian. We now establish the contribution from the
dispersive portion.
###### Lemma 6.4.
Let $U\subset V$ be a fixed subset and let $\tilde{\mathcal{B}}_{j,U^{c}}$ be
as defined in (53). We have
$\displaystyle\int_{\tilde{\mathcal{B}}_{j,U^{c}}}\exp(-\theta\mathcal{H}_{N}(\psi_{U^{c}}))\mathbf{d}\psi_{U^{c}}\leqslant\exp\biggl{(}N\log(\pi/\theta)-NW\left(\theta\cdot\frac{j\kappa_{N}+\kappa_{N}}{N}\right)\biggr{)}\cdot\Theta_{2}(N)\cdot\Theta_{2}^{\prime}(N,j)$
where
$\displaystyle\Theta_{2}(N)\leqslant\exp\left(10\theta s_{N}N\right)\text{ and
}\Theta^{\prime}_{2}(N,j)=\exp\left(2\kappa_{N}L\left(\theta\cdot\frac{j\kappa_{N}+\kappa_{N}}{N}\right)\right).$
(55)
###### Proof.
The $\ell^{\infty}$ bound on the $\psi\in\tilde{\mathcal{B}}_{j,U^{c}}$ tells
us that
$\displaystyle\|\psi_{U^{c}}\|_{p+1}^{p+1}\leqslant(s_{N}N)^{(p-1)/2}N.$
For the Hamiltonian, this yields
$\displaystyle\mathcal{H}_{N}(\psi_{U^{c}})=\left\|\nabla_{0}\psi_{U^{c}}\right\|_{2}^{2}-\frac{N^{(1-p)/2}}{p+1}\left\|\psi_{U^{c}}\right\|_{p+1}^{p+1}\geqslant\left\|\nabla_{0}\psi_{U^{c}}\right\|_{2}^{2}-\frac{1}{p+1}\cdot
N\cdot(s_{N})^{(p-1)/2}$
and in turn,
$\displaystyle\int_{\tilde{\mathcal{B}}_{j,U^{c}}}\exp(-\theta\mathcal{H}_{N}(\psi_{U^{c}}))\mathbf{d}\psi_{U^{c}}\leqslant\exp\bigl{(}\theta
Ns_{N}^{(p-1)/2}\bigr{)}\int_{\tilde{\mathcal{B}}_{j,U^{c}}}\exp(-\theta\left\|\nabla_{0}\psi_{U^{c}}\right\|_{2}^{2})\mathbf{d}\psi_{U^{c}}.$
(56)
This enables us to discard the non linearity and consider only the free
portion of the Hamiltonian. The technique we will adopt is to, so to speak,
“patch” the missing portion to instead consider the free field on the torus.
It is easy to see, just by maximum bounds and calculating the volume of a box
with side length $\sqrt{3C_{d}\log|U|\cdot\theta^{-1}}$ that
$\displaystyle(6C_{d}\theta^{-1}\log|U|)^{|U|}\cdot\exp\left(-6C_{d}d|U|\log|U|\right)\leqslant\int_{\\{\left\|\psi_{U}\right\|_{\infty}<\sqrt{3C_{d}\theta^{-1}\log|U|}\\}}\exp(-\theta\left\|\nabla_{0}\psi_{U}\right\|_{2}^{2})\mathbf{d}\psi_{U}.$
Thus (56) may be bounded above by
$\displaystyle\left(\frac{\theta|U|^{2d}}{6C_{d}\log|U|}\right)^{|U|}\cdot\int_{\tilde{\mathcal{B}}_{j,U^{c}}\times\\{\left\|\psi_{U}\right\|_{\infty}\leqslant\sqrt{3C_{d}\theta^{-1}\log|U|}\\}}\exp\bigl{(}-\theta(\left\|\nabla_{0}\psi_{U}\right\|_{2}^{2}+\left\|\nabla_{0}\psi_{U^{c}}\right\|_{2}^{2})\bigr{)}\mathbf{d}\psi_{U^{c}}\cdot\mathbf{d}\psi_{U}.$
(57)
Since we know that $\left\|\psi_{\partial U_{ext}}\right\|_{2}^{2}\leqslant
3s_{N}N$ and
$\left\|\psi_{U}\right\|_{\infty}\leqslant\sqrt{3C_{d}\theta^{-1}\log|U|}$,
$\displaystyle\sum_{{\boldsymbol{x}}\sim\mathbf{y}\in\partial
U}|\psi_{{\boldsymbol{x}}}-\psi_{\mathbf{y}}|^{2}\leqslant
6s_{N}N+6C_{d}\theta^{-1}|U|\log|U|.$
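The boundary estimate above rests on the elementary inequality $|a-b|^{2}\leqslant 2|a|^{2}+2|b|^{2}$ for complex $a,b$, applied edge by edge across $\partial U$. As a numerical sanity check (not part of the paper's argument; the function name is ours), one can verify the edge-by-edge bound on random complex data:

```python
import random

# Sanity check of the AM-GM (Cauchy-Schwarz) step: for each edge (a, b),
# |a - b|^2 <= 2|a|^2 + 2|b|^2, hence the boundary cross terms are
# controlled by the masses on the two sides of the boundary.
def cross_term_bound(pairs):
    """Return (lhs, rhs) of the summed edge-by-edge AM-GM estimate."""
    lhs = sum(abs(a - b) ** 2 for a, b in pairs)
    rhs = 2 * sum(abs(a) ** 2 + abs(b) ** 2 for a, b in pairs)
    return lhs, rhs

random.seed(0)
pairs = [(complex(random.gauss(0, 1), random.gauss(0, 1)),
          complex(random.gauss(0, 1), random.gauss(0, 1)))
         for _ in range(1000)]
lhs, rhs = cross_term_bound(pairs)
assert lhs <= rhs
```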
We have implicitly used the AM-GM inequality in the above bound. We obtain
that (57) may be further bounded above by
$\left(\frac{\theta|U|^{6C_{d}(1+d)}}{6C_{d}\log|U|}\right)^{|U|}\cdot\exp(\theta Ns_{N}^{(p-1)/2})\cdot\exp(6\theta Ns_{N})\int_{\tilde{\mathcal{B}}_{j,U^{c}}\times\\{\left\|\psi_{U}\right\|_{\infty}\leqslant\sqrt{3C_{d}\theta^{-1}\log|U|}\\}}\exp(-\theta\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi$
Since $|U|\leqslant s_{N}^{-d-1}$, the largest mass that may be allocated to
$\psi_{U}$ is bounded above by $3\theta^{-1}C_{d}|U|\log|U|$. We may enlarge
our mass shell slightly so that for $N$ sufficiently large,
$\tilde{\mathcal{B}}_{j,U^{c}}\times\left\\{\left\|\psi_{U}\right\|_{\infty}\leqslant\sqrt{3C_{d}\theta^{-1}\log|U|}\right\\}\subset\mathcal{B}_{j,V}\cup\mathcal{B}_{j+1,V}.$
Combining all our bounds we have
$\displaystyle\int_{\tilde{\mathcal{B}}_{j,U^{c}}}\exp(-\theta\mathcal{H}_{N}(\psi_{U^{c}}))\mathbf{d}\psi_{U^{c}}\leqslant\Theta_{2}(N,U)\int_{\mathcal{B}_{j,V}\cup\mathcal{B}_{j+1,V}}\exp(-\theta\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi$
where
$\displaystyle\Theta_{2}(N,U):=\exp\biggl{(}\theta Ns_{N}^{(p-1)/2}$
$\displaystyle+6\theta Ns_{N}+\bigl{(}6C_{d}(1+d)\bigr{)}|U|\log|U|$
$\displaystyle+(\log\theta-\log 6C_{d})|U|-|U|\log\log|U|\biggr{)}.$
Using the bound $|U|\leqslant s_{N}^{-d-1}$, we keep only the leading order
terms in the error to obtain
$\Theta_{2}(N,U)\leqslant\exp\left(8\theta N\bigl{(}s_{N}^{(p-1)/2}+s_{N}\bigr{)}\right).$
To conclude, all that needs to be done is to apply Theorem 4.1, which tells us that
$\displaystyle\int_{\mathcal{B}_{j,V}\cup\mathcal{B}_{j+1,V}}\exp(-\theta\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi\leqslant\left(\frac{\pi}{\theta}\right)^{N}$
$\displaystyle\cdot\exp\left(-N\cdot
W\left(\theta\cdot\frac{j\kappa_{N}+\kappa_{N}}{N}\right)\right)$
$\displaystyle\cdot\exp\left(2\kappa_{N}L\left(\theta\cdot\frac{j\kappa_{N}+\kappa_{N}}{N}\right)+2\log
N\right).$
The $\log N$ addition to the error is far smaller than the leading term, so we
may ignore it. With $\Theta^{\prime}_{2}(N,j)$ defined as in (55), we conclude
the proof. $\blacksquare$
Having individually bounded the concentrated and dispersed contributions to
the partition function, we are now ready to combine them to yield the upper
bound of the limiting free energy.
###### Proof of Lemma 6.1.
We remind the reader that in the prior sections we removed the explicit
$U$ and expressed all bounds in terms of the maximum allowed $|U|$. To avoid
cumbersome expressions, we begin by combining $\Theta_{1}$, $\Theta_{2}$ and
the errors arising from the separation at the boundary, and carry out the sum
over $U$, to define
$\Theta_{S}:=\exp(3\theta
Ns_{N})\cdot\binom{N}{s_{N}^{-d-1}}\cdot\Theta_{1}(N)\cdot\Theta_{2}(N).$
Before proceeding further, we deal with the binomial coefficient. We have
$\binom{N}{s_{N}^{-d-1}}\leqslant e^{s_{N}^{-d-1}}\cdot N^{s_{N}^{-d-1}}.$
Substituting these bounds into the shell decomposition, we obtain
$\displaystyle Z_{N}\leqslant\Theta_{S}(N)\cdot\sum_{i+j\leqslant
i^{*}}\exp\left(\log\left(\frac{\pi}{\theta}\right)^{N}-\frac{N\theta}{\nu}\cdot
I\left(\nu\cdot\frac{i\kappa_{N}+\kappa_{N}}{N}\right)-N\cdot
W\left(\theta\cdot\frac{j\kappa_{N}+\kappa_{N}}{N}\right)\right)\cdot\Theta_{2}^{\prime}(j).$
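The binomial estimate used above is the standard crude bound $\binom{N}{k}\leqslant(Ne/k)^{k}\leqslant e^{k}N^{k}$. A quick numerical sanity check (ours, not the paper's):

```python
import math

# Check the crude bound C(N, k) <= e^k * N^k, which follows from
# C(N, k) <= N^k / k! and k! >= (k / e)^k.
def binom_bound_ok(N, k):
    return math.comb(N, k) <= (math.e ** k) * (N ** k)

checks = all(binom_bound_ok(N, k)
             for N in (50, 200, 2000)
             for k in (1, 3, 17, 40))
assert checks
```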
Both $I$ and $W$ are monotonically decreasing. In particular, if
$j_{1}\leqslant j_{2}$, then
$\exp\left(-NW\left(\theta\cdot\frac{j_{1}\kappa_{N}+\kappa_{N}}{N}\right)\right)\leqslant\exp\left(-N\cdot
W\left(\theta\cdot\frac{j_{2}\kappa_{N}+\kappa_{N}}{N}\right)\right).$
For a given $i$, the largest value that $j$ can take is $i^{*}-i$. We use this
fact to also render the sum over $j$ irrelevant. We have
$\displaystyle
Z_{N}\leqslant\Theta_{S}\cdot\sum_{i=1}^{i^{*}}\Theta_{2}^{\prime}(i)\cdot\exp
N\left(\log\left(\frac{\pi}{\theta}\right)-\frac{\theta}{\nu}I\left(\nu\cdot\frac{i\kappa_{N}}{N}+\nu\cdot\frac{\kappa_{N}}{N}\right)-W\left(\theta\left(1-\frac{i\kappa_{N}}{N}\right)+2\theta\cdot\frac{\kappa_{N}}{N}\right)\right).$
In the above expression, we have used the fact that
$\frac{i^{*}\kappa_{N}}{N}-1\leqslant\frac{\kappa_{N}}{N},$
and have redefined $\Theta_{S}$ to include the additionally accrued $N$ from
summing over $j$. We now need to tackle the $i$ dependent error term
$\Theta_{2}^{\prime}$ and control the regularity of $W$. To do both, we bring
in Lemma 7.1. For $M>0$, we have an $i^{\prime}$ depending on $M$ and $\theta$
such that for $i>i^{\prime}$,
$W\left(\theta\cdot\frac{(i^{*}-i)\kappa_{N}+\kappa_{N}}{N}\right)\geqslant
M+I(1).$
We also recall from Lemma 4.7 that
$L(b)\leqslant b^{-1},$
which tells us that
$\kappa_{N}L\left(\theta\frac{(i^{*}-i)\kappa_{N}+\kappa_{N}}{N}\right)\leqslant\frac{\kappa_{N}N}{\theta\bigl{(}(i^{*}-i)\kappa_{N}+\kappa_{N}\bigr{)}}\leqslant
N/\theta.$
Thus on choosing $M>1/\theta$,
$\displaystyle\sum_{i=i^{\prime}}^{i^{*}}\Theta_{2}^{\prime}(i)\cdot\exp
N\biggl{(}\log\bigl{(}\frac{\pi}{\theta}\bigr{)}-$
$\displaystyle\frac{\theta}{\nu}\cdot
I\bigl{(}\frac{i\kappa_{N}+\kappa_{N}}{N}\bigr{)}-W\bigl{(}\theta\cdot\frac{(i^{*}-i)\kappa_{N}+\kappa_{N}}{N}\bigr{)}\biggr{)}$
$\displaystyle\leqslant N\cdot e^{-N(M-1/\theta)}.$
When $i\leqslant i^{\prime}$, the same argument tells us that
$\Theta^{\prime}_{2}$ is uniformly bounded by $\exp(\kappa_{N}\cdot
C^{\prime})$, where $C^{\prime}$ is a constant depending on $\theta$, $\nu$ and
$M$. Further, in this case both $I$ (rescaled by $\nu$) and $W$ (rescaled by
$\theta$) are Lipschitz functions. Let the upper bound on both Lipschitz
constants be denoted by $C_{L}$, depending on $\theta$, $\nu$ and $M$. Taking
$i\kappa_{N}/N$ to be $a$, we discard the perturbations to the arguments of
the $I$ and $W$ functions, obtaining
$Z_{N}\leqslant e^{C\kappa_{N}}\cdot\Theta_{S}\cdot\sum_{i=1}^{i^{\prime}}\exp
N\left(\log\left(\frac{\pi}{\theta}\right)-\theta\nu^{-1}I(\nu
a)-W(\theta(1-a))\right)+\Theta_{S}\cdot Ne^{-C_{M}N}$
We then obtain the trivial bound by optimizing over $a$ and finally rendering
the sum over $i$ irrelevant. We have
$Z_{N}\leqslant N\cdot e^{C\kappa_{N}}\cdot\Theta_{S}\cdot\exp
N\left(\log\left(\frac{\pi}{\theta}\right)-\inf_{a}\bigl{(}\theta\nu^{-1}I(\nu
a)+W(\theta(1-a))\bigr{)}\right)+\Theta_{S}\cdot e^{-C_{M}\cdot N}.$
We remind the reader that by Lemma 7.1 the constant $C_{M}$ may be chosen such
that
$C_{M}-\inf_{a}\left(\theta\nu^{-1}I(\nu a)+W(\theta(1-a))\right)\geqslant M.$
Thus,
$Z_{N}\leqslant
e^{C\kappa_{N}}\cdot\Theta_{S}\cdot\exp N\left(\log\left(\frac{\pi}{\theta}\right)-\inf_{a}\bigl{(}\theta\nu^{-1}I(\nu
a)+W(\theta(1-a))\bigr{)}\right)\cdot\left(1+e^{-CN}\right).$
Redefining $\Theta_{S}$ one last time to include the additionally accrued
error term $e^{C\kappa_{N}}$, we are done. The error accrued from the
$1+e^{-CN}$ is insignificant compared to the leading order error term and is
therefore ignored.
$\blacksquare$
### 6.2. Lower Bound
In this section, we verify the corresponding lower bound on the limiting
free energy. The restriction of the region of integration for the lower bound,
as chosen in (51), should now be well motivated by the results of Section 6.1:
as we saw, this was the region of dominant contribution.
###### Lemma 6.5.
We have a constant $C$ and sequences $\gamma_{N}$ and $\Theta_{I}(N)$
depending on $\theta$, $\nu$ and $p$ such that
$\frac{1}{N}\log
Z_{N}(\theta,\nu)\geqslant\log(\pi/\theta)-\inf_{0<a<1}(W(\theta(1-a))+\theta\nu^{-1}I(a\nu))+\frac{1}{N}\log\Theta_{I}(N)$
where
$\Theta_{I}(N)\geqslant\exp(-N\theta\gamma_{N}-C\cdot\kappa_{N})$
and
$\lim_{N\to\infty}\gamma_{N}=0.$
As with the upper bound, we partition the mass of a function into a
structured and a dispersive part. The lower bound will be established by
restricting the integral to a specifically chosen subset of functions within
$\mathcal{B}_{i,U}\times\mathcal{B}_{j,U^{c}}$. For the results to come, it
will be helpful for the sake of conciseness to define
$\displaystyle\rho_{i}:=\frac{2i\kappa_{N}+\kappa_{N}}{2N}.$ (58)
We will take $U$ to be a cube of side length $\lfloor\log N\rfloor$. We will
restrict the solitonic portion of our integral to be centered about a specific
minimizing sequence. Let $U^{o}$ denote the set of interior points of $U$, and
recall that $\varphi^{a,U^{o}}$ denotes the Dirichlet minimizer on $U^{o}$
with mass $a$. We define
$\displaystyle\mathcal{C}_{i,U}:=\left\\{\psi_{U}\in\mathds{C}^{U}:\left\|\psi_{U}-N^{1/2}\cdot\varphi^{\rho_{i},U^{o}}\right\|_{2}^{2}\leqslant
N^{-\delta}\right\\},$ (59)
where $\delta>0$ is fixed. As for the dispersive portion, we will define
$\displaystyle\mathcal{C}_{j,U^{c}}:=\left\\{\psi_{U^{c}}\in\mathcal{B}_{j,U^{c}}:\left\|\psi_{U^{c}}\right\|_{\infty}\leqslant
2\sqrt{3C_{d}\log N}\right\\}.$ (60)
It is clear that $\mathcal{C}_{i,U}\subset\mathcal{B}_{i,U}$, and the analogous
inclusion for $\mathcal{C}_{j,U^{c}}$ holds by definition. Note that any
function sampled from $\mathcal{C}_{i,U}$ places a maximum mass of $N^{-\delta}$
on the boundary, and any function sampled from $\mathcal{C}_{j,U^{c}}$ may
place a maximum mass of $6C_{d}d\cdot(\log N)^{d}$ on the boundary. Combining these
facts yields the boundary estimate required for separating the soliton and
dispersive parts,
$\displaystyle\sum_{{\boldsymbol{x}}\sim\boldsymbol{y}\in\partial
U}|\psi_{{\boldsymbol{x}}}-\psi_{\boldsymbol{y}}|^{2}\leqslant\sum_{{\boldsymbol{x}}\in\partial{U}_{int}}|\psi_{{\boldsymbol{x}}}|^{2}+\sum_{\boldsymbol{y}\in\partial
U_{ext}}|\psi_{\boldsymbol{y}}|^{2}\leqslant 12dC_{d}(\log N)^{d}.$ (61)
Applying (51), (47) and (61) we have
$\displaystyle\begin{split}Z_{N}&\geqslant\exp\bigl{(}-12C_{d}d(\log
N)^{d}\bigr{)}\cdot\sum_{i=1}^{i_{*}-1}\int_{\mathcal{C}_{i,U}}\exp(-\theta\mathcal{H}_{N}(\psi_{U}))\mathbf{d}\psi_{U}\\\
&\qquad\qquad\qquad\cdot\int_{\mathcal{C}_{i_{*}-i,U^{c}}}\exp(-\theta\mathcal{H}_{N}(\psi_{U^{c}}))\mathbf{d}\psi_{U^{c}}.\end{split}$
(62)
###### Lemma 6.6.
Let $\mathcal{C}_{i,U}$ be as in (59). Then we have
$\int_{\mathcal{C}_{i,U}}\exp(-\theta\mathcal{H}_{N}(\psi_{U}))\mathbf{d}\psi_{U}\geqslant\exp\left(-N\theta\nu^{-1}\cdot
I(\nu\rho_{i})\right)\cdot\Theta_{3}(N,i)$
where
$\Theta_{3}(N,i)\geqslant\exp\left(-7dN^{1/2}-N\gamma_{i,N}\right)$
and
$\gamma_{i,N}:=\max_{\psi\in B_{i,U}}|\nu^{-1/2}\mathcal{H}(\nu^{1/2}\cdot
N^{-1/2}\cdot\psi)-\nu^{-1}I(\nu\rho_{i})|.$
###### Proof.
Our region of integration $\mathcal{C}_{i,U}$ is the ball of radius
$N^{-\delta/2}$ centered around $N^{1/2}\cdot\varphi^{\rho_{i},U^{o}}$. We will
verify that this region is appropriately close in $\ell^{2}$ to the actual
minimizer $\phi$, and thus that the energy $\mathcal{H}_{N}(\psi)$ is close to
$N\nu^{-1}I(\nu\rho_{i})$. We write
$\int_{\mathcal{C}_{i,U}}\exp(-\theta\mathcal{H}_{N}(\psi_{U}))\mathbf{d}\psi_{U}=\exp(-\theta
N\nu^{-1}I\left(\nu\rho_{i}\right))\cdot\int_{\mathcal{C}_{i,U}}\exp(\theta(N\nu^{-1}I\left(\nu\rho_{i}\right)-\mathcal{H}_{N}(\psi)))\mathbf{d}\psi_{U}.$
Let $\psi\in\mathds{C}^{U}$ have unit mass and $t\in[0,N^{-\delta/2}]$. Using
the Cauchy-Schwarz inequality and the fact that the discrete Laplacian is
bounded, we have
$\left\|\nabla_{0}(N^{1/2}\cdot\varphi^{U}+t\psi)\right\|^{2}_{2}-\left\|N^{1/2}\cdot\nabla_{0}\varphi^{U}\right\|_{2}^{2}\leqslant
4dN^{-\delta/2}\cdot\sqrt{N\rho_{i}}+2dN^{-\delta}\leqslant
5dN^{(1-\delta)/2}.$
Further,
$\left\|N^{1/2}\cdot\varphi^{U}+t\psi\right\|_{p+1}^{p+1}-\left\|N^{1/2}\cdot\varphi^{U}\right\|_{p+1}^{p+1}\leqslant
N^{(p+1)/2}\left\|\psi\right\|_{p+1}\int_{0}^{t\cdot
N^{-1/2}}(\left\|\varphi^{U}\right\|_{p+1}+s\left\|\psi\right\|_{p+1})^{p}\text{d}s$
$\leqslant
N^{(1+p)/2}\left\|\psi\right\|_{p+1}\left\|\varphi^{U}\right\|_{p+1}^{p}\int_{0}^{t\cdot
N^{-1/2}}\left(1+s\frac{\left\|\psi\right\|_{p+1}}{\left\|\varphi^{U}\right\|_{p+1}}\right)^{p}\text{d}s.$
For any function in $\mathds{C}^{U}$ with mass $a$, we know that the lowest
possible $\ell^{p+1}$ norm is attained by the uniform function and is given by
$a^{1/2}\cdot|U|^{(1-p)/2(p+1)}$, and the largest possible is attained by
concentrating all the mass onto a single site and given by $a^{1/2}$. Thus, we
have a bounded constant depending only on $a$ such that
$\frac{1}{N^{(p-1)/2}}\left(\left\|N^{1/2}\cdot\varphi^{U}+t\psi\right\|_{p+1}^{p+1}-\left\|N^{1/2}\cdot\varphi^{U}\right\|_{p+1}^{p+1}\right)\leqslant
C\cdot N^{(-\delta+1)/2}.$
We then note that
$|\nu^{-1}\mathcal{H}_{N}(\psi)-N\nu^{-1}I(\nu\rho_{i})|\leqslant\max_{\psi\in\mathcal{B}_{i,U}}|\mathcal{H}_{N}(\psi)-NI(\nu\rho_{i})|=NC_{i}$
and note that $C_{i}$ is bounded and converges to $0$ as $N\to\infty$, as a
simple consequence of Lemma 5.2. As $\mathcal{C}_{i,U}$ is a ball of radius
$N^{-\delta/2}$ in $\mathds{C}^{U}$, we observe
$\text{Vol}(\mathcal{C}_{i,U})=\frac{\pi^{|\log N|^{d}}}{\Gamma(1+|\log
N|^{d})}\cdot N^{-\delta|\log N|^{d}}.$
Defining
$\displaystyle\Theta_{3}(N,i):=\exp\left(-6dN^{(1-\delta)/2}-N\gamma_{i,N}\right)\cdot\frac{\pi^{|\log
N|^{d}}}{\Gamma(1+|\log N|^{d})}N^{-\delta|\log N|^{d}}$ (63)
and then taking $\delta$ sufficiently small completes the proof. We clearly have
$\Theta_{3}(N,i)\geqslant\exp\left(-6dN^{1/2}-N\gamma_{i,N}-2d|\log
N|^{d}\log\log N\right)\geqslant\exp\left(-7dN^{1/2}-N\gamma_{i,N}\right).$
$\blacksquare$
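The volume formula used in the proof above is the standard one for a ball of radius $r$ in $\mathds{C}^{m}\cong\mathds{R}^{2m}$, namely $\pi^{m}r^{2m}/\Gamma(1+m)$. A Monte Carlo sanity check (ours, not the paper's):

```python
import math
import random

# Volume of a ball of radius r in C^m (identified with R^{2m}):
# pi^m * r^(2m) / Gamma(1 + m).
def complex_ball_volume(m, r):
    return math.pi ** m * r ** (2 * m) / math.gamma(1 + m)

# Crude Monte Carlo estimate: sample uniformly in the bounding cube
# [-r, r]^{2m} and count the fraction landing inside the ball.
def mc_volume(m, r, n=200_000, seed=1):
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if sum(rng.uniform(-r, r) ** 2 for _ in range(2 * m)) <= r * r
    )
    return hits / n * (2 * r) ** (2 * m)

m, r = 2, 0.8
exact = complex_ball_volume(m, r)
estimate = mc_volume(m, r)
assert abs(estimate - exact) / exact < 0.05
```

For $m=1$ this reduces to the area $\pi r^{2}$ of a disk, which is the sanity anchor for the general formula.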
###### Lemma 6.7.
Let $\mathcal{C}_{j,U^{c}}$ be as in (60). We have
$\int_{\mathcal{C}_{j,U^{c}}}\exp(-\theta\mathcal{H}_{N}(\psi_{U^{c}}))\mathbf{d}\psi_{U^{c}}\geqslant\exp\left(N\log\left(\frac{\pi}{\theta}\right)-N\cdot
W\left(\theta\rho_{j}\right)\right)\cdot\Theta_{4}(j,N)$
where
$\Theta_{4}(j,N)\geqslant(3C_{d}\log|\log N|)^{-2|\log N|^{d}}\cdot
e^{-2\kappa_{N}L(\theta\rho_{j})}.$
###### Proof.
Since we seek a lower bound, we may immediately discard the nonlinear part of
the Hamiltonian to obtain
$\int_{\mathcal{C}_{j,U^{c}}}\exp(-\theta\mathcal{H}_{N}(\psi_{U^{c}}))\mathbf{d}\psi_{U^{c}}\geqslant\int_{\mathcal{C}_{j,U^{c}}}\exp(-\theta\left\|\nabla_{0}\psi\right\|_{2}^{2})\mathbf{d}\psi_{U^{c}}.$
The next step, just as in the case of the upper bound, is to “patch” the free
field to the entire torus. Merely from the fact that the exponential of a
negative number is bounded above by 1, we have
$\int_{\\{\left\|\psi_{U}\right\|_{\infty}\leqslant\sqrt{3C_{d}\log
N}\\}}\exp(-\theta\left\|\nabla_{0}\psi\right\|_{2}^{2})\mathbf{d}\psi_{U}\leqslant\left({3C_{d}\log
N}\right)^{|\log N|^{d}}$
On the product space
$\mathcal{C}_{j,U^{c}}\times\\{\left\|\psi_{U}\right\|_{\infty}\leqslant\sqrt{3C_{d}\log
N}\\}$, by definition,
$\left\|\psi\right\|_{2}^{2}\leqslant(j+1)\kappa_{N}+3C_{d}|\log
N|^{d+1}\text{ and }\left\|\psi_{U^{c}}\right\|_{2}^{2}\geqslant j\kappa_{N}.$
If we define
$\widehat{\mathcal{B}}_{j}:=\left\\{\psi\in\mathds{C}^{V}:\left\|\psi\right\|^{2}_{2}\in\bigl{[}j\kappa_{N}+\frac{\kappa_{N}}{4},j\kappa_{N}+\frac{3\kappa_{N}}{4}\bigr{)},\left\|\psi\right\|_{\infty}\leqslant\sqrt{3C_{d}\log
N}\right\\},$
it is clear that for $N$ sufficiently large,
$\widehat{\mathcal{B}}_{j}\subset\mathcal{C}_{j,U^{c}}\times\\{\psi_{U}:\left\|\psi_{U}\right\|_{\infty}\leqslant\sqrt{3C_{d}\log
N}\\}.$
Thus,
$\displaystyle\int_{\mathcal{C}_{j,U^{c}}}\exp(-\theta\left\|\nabla_{0}\psi\right\|_{2}^{2})\mathbf{d}\psi_{U^{c}}\geqslant\frac{1}{(3C_{d}\log
N)^{|\log
N|^{d}}}\int_{\widehat{\mathcal{B}}_{j}}\exp(-\theta(\left\|\nabla_{0}\psi_{U}\right\|_{2}^{2}+\left\|\nabla_{0}\psi_{U^{c}}\right\|_{2}^{2}))\mathbf{d}\psi.$
(64)
The functions under consideration are bounded above, and we also know the size
of the boundary of $U$ is bounded above by $4d|\log N|^{d-1}$. This gives us
the gradient control required for patching. We have
$\sum_{{\boldsymbol{x}}\sim\boldsymbol{y}\in\partial
U}|\psi_{{\boldsymbol{x}}}-\psi_{\boldsymbol{y}}|^{2}\leqslant 36dC_{d}(\log
N)^{d}.$
We use this to further lower bound (64) as
$\frac{1}{(3C_{d}\log N)^{|\log N|^{d}}}\cdot\exp(-36dC_{d}(\log
N)^{d})\int_{\widehat{\mathcal{B}}_{j}}\exp(-\theta\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi.$
Using a combination of Theorems 4.1 and 4.11, we have that
$\displaystyle\int_{\widehat{\mathcal{B}}_{j}}\exp(-\theta\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi$
$\displaystyle\qquad\geqslant\left(\frac{\pi}{\theta}\right)^{N}\cdot\exp\bigl{(}-NW(\theta\rho_{j})\bigr{)}\cdot\frac{N(\theta\rho_{j}-\theta
C_{d})_{+}+\kappa_{N}}{N\theta\rho_{j}+\kappa_{N}}\cdot
e^{-2\kappa_{N}L(\theta\rho_{j})}\cdot\left(1-\frac{2cm_{N}(2)N}{\kappa_{N}^{2}}\right)^{2}.$
Define
$\Theta_{4}:=\frac{\exp(-36dC_{d}(\log N)^{d})}{(3dC_{d}\log N)^{|\log
N|^{d}}}\cdot\frac{\kappa_{N}}{N+\kappa_{N}}\cdot\left(1-\frac{2cm_{N}(2)N}{\kappa_{N}^{2}}\right)^{2}e^{-2\kappa_{N}L(\theta\rho_{j})}$
and observe that
$\Theta_{4}(N,j)\geqslant\exp\left(-4\kappa_{N}L(\theta\rho_{j})\right)\cdot\left(1-\frac{2cm_{N}(2)N}{\kappa_{N}^{2}}\right)^{2}$
for $N$ sufficiently large. $\blacksquare$
Again, having bounded the solitonic and free contributions from a given shell
below, we are now ready to establish the free energy lower bound. Compared to
the upper bound, the process is much easier, as it only involves restricting
to the appropriate mass shell.
###### Proof of Lemma 6.5.
We remind the reader that we are taking $j=i_{*}-i$. All we need to do is
restrict the sum (62) to the $i$ that yields $\rho_{i}$ closest to the value
of $a_{\star}$. Let $i_{n}$ be a sequence such that $\rho_{i_{n}}\to
a_{\star}$ as $n\to\infty$. Note, by the definition of $i_{*}$, we have
$\rho_{j}=\frac{2j\kappa_{N}+\kappa_{N}}{2N}=\frac{2i_{*}\kappa_{N}-2i\kappa_{N}+\kappa_{N}}{2N}=\frac{i_{*}\kappa_{N}+\kappa_{N}}{N}-\rho_{i}\leqslant
1-\rho_{i}.$
Combining,
$Z_{N}\geqslant\exp(-12dC_{d}(\log
N)^{d})\cdot\Theta_{3}(N,i)\cdot\Theta_{4}(N,i_{*}-i)\cdot\left(\frac{\pi}{\theta}\right)^{N}\cdot\exp\biggl{(}-N\nu^{-1}\theta
I(\nu\rho_{i})-NW(\theta(1-\rho_{i}))\biggr{)}.$
Now note that $|\rho_{i_{n}}-a_{\star}|\leqslant\kappa_{N}/N$, and we know that
for a fixed $M>0$, $a_{\star}\leqslant a_{M}<1$ by Lemma 7.1. This tells us two
things: firstly, that $\Theta_{4}$ may be stripped of its $j$ dependence; and
secondly, that on the interval $[0,a_{M}]$ the function
$W(\theta(1-a))+\theta\nu^{-1}I(\nu a)$
is Lipschitz, with Lipschitz constant depending only on $p$, $\theta$ and
$\nu$. Let this Lipschitz constant be $C_{2}$. Defining
$\Theta_{I}(N)=\Theta_{3}(N)\cdot\Theta_{4}(N)\cdot\exp(-12dC_{d}(\log
N)^{d})\cdot\exp(-(C+C_{2})\kappa_{N})$
completes the proof. Clearly, for an appropriately chosen constant $C$,
$\Theta_{I}(N)\geqslant\exp\left(-C\kappa_{N}-N\gamma_{N}\right)\cdot\left(1-\frac{2cm_{N}(2)N}{\kappa_{N}^{2}}\right)^{2},$
where now $\gamma_{N}$ is defined for the optimizing $a_{\star}$.
$\blacksquare$
Thus, we have assembled all the requisite ingredients for Theorem 2.3.
## 7\. Analysis of Phases
Recall that the optimizing mass fractions $a_{\star}$ characterize the phase
behavior: they tell us the proportions of the mass that are energetically
favourable to allocate to the soliton contribution. We begin this section with
the following effective bound on $a_{\star}$, which was used crucially in
proving the convergence of the free energy.
Further, this establishes that it is never energetically favourable to have
all mass allocated to the soliton. Recall the expression for the limiting free
energy
$F(\theta,\nu)=\log\frac{\pi}{\theta}-\inf_{0<a<1}\left(W(\theta(1-a))+\frac{\theta}{\nu}I(a\nu)\right).$
###### Lemma 7.1.
We have $\mathscr{M}(\theta,\nu)\subseteq[0,a_{M}]$ where
$a_{M}:=1-\exp(-\theta(2d-I(\nu)/\nu)).$
###### Proof.
Using the fact that $J(r)=I(r)/r$ is decreasing and properties of
$\widehat{W}$ from Lemma 4.7, we get that
$\displaystyle W(\theta(1-a))-W(\theta)+\theta I(a\nu)/\nu$
$\displaystyle=\widehat{W}(\theta(1-a))-\widehat{W}(\theta)-\ln(1-a)+a\theta
J(a\nu)$ $\displaystyle\geqslant-2da\theta-\ln(1-a)+a\theta J(\nu)$
$\displaystyle=-\ln(1-a)-a\theta(2d-J(\nu))>0$
if $a>1-\exp(-\theta(2d-J(\nu)))$. In particular, we have
$\inf_{a>1-\exp(-\theta(2d-I(\nu)/\nu))}\left(W(\theta(1-a))+\theta
I(a\nu)/\nu\right)>W(\theta).$
This completes the proof. $\blacksquare$
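The last step of the proof reduces to an elementary calculus fact: for $c>0$ and any $a>1-e^{-c}$ one has $-\ln(1-a)>c\geqslant ac$. A numerical sanity check of this step (ours, with $c$ playing the role of $\theta(2d-J(\nu))$):

```python
import math

# For c > 0 and a in (1 - exp(-c), 1):  -log(1 - a) > c >= a * c,
# which is exactly what makes the variational difference positive.
def step_holds(c, a):
    return -math.log(1 - a) > c >= a * c

ok = True
for c in (0.1, 1.0, 3.0):
    a0 = 1 - math.exp(-c)
    for t in (1e-6, 1e-3, 0.1, 0.9):
        a = a0 + t * (1 - a0)   # a strictly between a0 and 1
        ok = ok and step_holds(c, a)
assert ok
```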
By the continuity of
$a\mapsto G_{\theta,\nu}(a):=W(\theta(1-a))+\theta I(a\nu)/\nu,$
the set $\mathscr{M}$ is closed in $[0,1)$, and by Lemma 7.1 it is in fact compact.
We begin with some simple results showing that there are regions where each of
the phases is achieved. The easiest technique is to note that there are
intervals on which $W$ and $I$ are constant. We define
$a_{W}(\theta):=1-\frac{C_{d}}{\theta}\text{ and
}a_{I}(\nu):=\frac{R_{p}}{\nu}.$
Clearly, $W(\theta(1-a))=C_{d}$ for all $a\leqslant a_{W}$ and $I_{\nu}(a)=0$
for all $a\leqslant a_{I}$.
###### Lemma 7.2 (Phase 1).
Let $\nu\leqslant R_{p}$. Then $0\in\mathcal{F}$.
###### Proof.
Since $\nu\leqslant R_{p}$, we have $a_{I}\geqslant 1$, and thus $I_{\nu}(a)=0$
for all values of $a$. It follows that $G_{\theta,\nu}$ is non-decreasing in
$a$, and thus $0\in\mathcal{F}$. $\blacksquare$
This lemma shows that the dispersive phase is indeed achieved. We show that
the soliton phase is achieved as well.
###### Lemma 7.3 (Phase 2).
Let $\nu$ and $\theta$ be such that
$\frac{C_{d}}{\theta}+\frac{R_{p}}{\nu}<1.$ Then
$0\notin\mathscr{M}(\theta,\nu)$.
###### Proof.
With $\nu$ and $\theta$ as above, we know that $a_{I}<a_{W}$. On the interval
$[0,a_{W}]$, the function $G_{\theta,\nu}$ is non-increasing and is in fact
strictly decreasing on the interval $[a_{I},a_{W}]$. Thus, there is an
$a\in[a_{I},a_{W}]$ such that $G_{\theta,\nu}(a)<W(\theta)=G_{\theta,\nu}(0)$.
$\blacksquare$
### 7.1. Existence of Transition Curve
We have shown that each of the defined phases may be realized; it remains to
show that the regions are separated by a curve. We also need to establish
the continuity of this curve.
###### Lemma 7.4.
We say
$(\theta_{1},\nu_{1})\preccurlyeq(\theta,\nu)$ if $\theta_{1}\leqslant\theta$
and $\nu_{1}\leqslant\nu$. Now let
$(\theta_{1},\nu_{1})\preccurlyeq(\theta,\nu)\preccurlyeq(\theta_{2},\nu_{2}).$
If $0\in\mathscr{M}(\theta,\nu)$, then $0\in\mathscr{M}(\theta_{1},\nu_{1})$.
Conversely, if $0\notin\mathscr{M}(\theta,\nu)$, then
$0\notin\mathscr{M}(\theta_{2},\nu_{2})$.
###### Proof.
We know that $0\in\mathscr{M}(\theta,\nu)$ if and only if for all $a\in[0,1)$,
$\displaystyle W(\theta)\leqslant W(\theta(1-a))+\theta\nu^{-1}I(\nu a).$ (65)
Using the fact that $W^{\prime}(b)=L(b)$, we may rewrite (65) as
$\displaystyle I_{\nu}(a)+\int_{(1-a)}^{1}L(\theta s)ds\geqslant 0.$ (66)
By Lemma 5.9, decreasing $\nu$ increases $I_{\nu}(a)$, preserving the
inequality. In addition, decreasing $\theta$ increases $L(\theta s)$, still
preserving the inequality. As for the converse, observe that
$0\notin\mathscr{M}$ if and only if we have an $a_{1}$ such that
$\displaystyle I_{\nu}(a_{1})+\int_{(1-a_{1})}^{1}L(\theta s)ds<0.$ (67)
Again, by Lemma 5.9, increasing $\nu$ decreases $I_{\nu}(a)$, and increasing
$\theta$ decreases $L(\theta s)$. $\blacksquare$
Lemma 7.4 is adequate to establish not only the existence of the transition
curve but its monotonicity as well. We now remark the following useful
consequence of relation (67): the set of $(\theta,\nu)$ corresponding to the
soliton phase is open in the plane $[R_{p},\infty)\times[0,\infty)$.
###### Corollary 7.5.
Let $\theta_{c}:(R_{p},\infty)\to(0,\infty)$ be the measurable function
defined as
$\theta_{c}(\nu):=\sup\\{\theta>0:0\in\mathcal{F}(\theta,\nu)\\}.$
By definition, $\theta_{c}$ is our transition curve. We have that $\theta_{c}$
is non-increasing and continuous.
###### Proof.
By Lemma 7.3, we know that
$\theta_{c}(\nu)\leqslant\frac{C_{d}\nu}{\nu-R_{p}}<\infty$
for all $\nu$ under consideration. Not only does this tell us that
$\theta_{c}$ is finite, it also tells us that it may be evaluated at every
$\nu$ under consideration. Next, suppose we have that $\nu_{1}<\nu_{2}$ such
that $\theta_{c}(\nu_{1})<\theta_{c}(\nu_{2})$. Observe that this contradicts
Lemma 7.4. Since we have proved that $\theta_{c}$ is non-increasing, it
further implies that on any compact interval
$[\nu_{1},\nu_{2}]\subset(R_{p},\infty)$ there are at most countably many
discontinuities. Let $\nu_{3}$ be a point of discontinuity. We have that
$\theta_{1}:=\lim_{\nu\downarrow\nu_{3}}\theta_{c}(\nu)\leqslant\theta_{2}:=\theta_{c}(\nu_{3})\leqslant\theta_{3}:=\lim_{\nu\uparrow\nu_{3}}\theta_{c}(\nu).$
Suppose
$\theta_{2}<\theta_{3}.$
We may find an $\varepsilon>0$ such that $\theta_{2}+\varepsilon<\theta_{3}$.
By hypothesis,
$0\notin\mathcal{F}(\nu_{3},\theta_{2}+\varepsilon).$
However, note that every open ball centered at
$(\nu_{3},\theta_{2}+\varepsilon)$ intersects
$[R_{p},\nu_{3})\times[0,\theta_{3})$ on which $0\in\mathcal{F}(\theta,\nu)$,
which contradicts the fact that the solitonic region is open in the plane, and
we must have $\theta_{2}=\theta_{3}$. To actually establish continuity, we
note that the boundary between the solitonic region and the plane may be
characterized as all $(\theta,\nu)$ such that (65) holds, and we have an
$a_{\star}\geqslant a_{I}$ such that
$I_{\nu}(a_{\star})+\int_{(1-a_{\star})}^{1}L(\theta s)ds=0.$
Let $\varepsilon>0$ be such that $\theta_{1}+2\varepsilon<\theta_{2}$. We have
that $(\nu,\theta_{1}+\varepsilon)$ is on the boundary, as for any
$\nu^{\prime}>\nu$ we enter the soliton phase. Further, we know that
$\theta_{1}+\varepsilon<\theta_{2}\leqslant\frac{C_{d}\nu}{\nu-R_{p}}.$
Thus, $a_{\star}>a_{W}$ and
$\int_{(1-a_{\star})}^{1}L((\theta_{1}+\varepsilon)s)ds>\int_{(1-a_{\star})}^{1}L((\theta_{1}+1.5\varepsilon)s)ds.$
Combined with the fact that $I_{\nu}(a_{\star})<0$, this tells us that
$(\nu,\theta_{1}+1.5\varepsilon)$ is in the soliton phase, further implying
that $\theta_{2}\leqslant\theta_{1}+\varepsilon$ which is a contradiction.
Thus, we must have that $\theta_{1}=\theta_{2}$, which verifies continuity.
$\blacksquare$
###### Remark 7.6.
The phase curve could be equivalently defined with $\nu_{c}$ as a function of
$\theta$, which would be the exact inverse of the function $\theta_{c}$
obtained above. The same properties of monotonicity and continuity can be
verified for $\nu_{c}$, which then tells us that $\theta_{c}$ must be strictly
decreasing.
## 8\. Discussion on the Typical Function in the Subcritical Phase
In this section, we will work with the condition that $1<p<5+4\sqrt{2}$, and
establish some properties of a typical function sampled from the measure
$\mu_{N}$ in the dispersive phase. Recall that this corresponds to
$(\theta,\nu)\in\mathcal{D}$. We will also further make the assumption that
$\theta<C_{d}$, for reasons that will be made clear. In this section, it will
be convenient to work with the following rescaled version of the measure
$\mu_{N}$, which is equivalent due to Lemma 3.1.
$\tilde{\mu}_{N}(\mathcal{A}):=\frac{1}{\tilde{Z}(\theta,\nu)}\int_{\mathcal{A}}\exp\left(-\left\|\nabla\psi\right\|^{2}_{2}+(\theta
N)^{-\frac{p-1}{2}}\frac{2\nu}{p+1}\left\|\psi\right\|_{p+1}^{p+1}\right)\mathds{1}_{\left\|\psi\right\|^{2}_{2}\leqslant\theta
N}\cdot\mathbf{d}\psi$
We will define the reference measure $\mu_{\text{ref}}$, and will then regard
$\tilde{\mu}_{N}$ as an exponential tilt with respect to the nonlinearity. We
define
$\mu^{\theta}_{\text{ref}}(\mathcal{A}):=\frac{1}{Z_{\text{ref}}(\theta)}\int_{\mathcal{A}}\exp(-\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi.$
We begin by explicitly demonstrating the sense in which this measure is close
to the massive Gaussian free field.
###### Theorem 8.1.
Let $y_{N}$ be defined by the equation $K^{\prime}_{N}(y_{N})=\theta$, and let
$Z^{y_{N}}(\theta)$ denote the partition function of the massive free field on
the torus, given by
$Z^{y_{N}}=\frac{\pi^{N}}{\det(y_{N}-\Delta)}.$
Then we have a constant $C$ depending on $\theta$ such that
$\lim_{N\to\infty}\frac{e^{-N\theta}\cdot
Z^{y_{N}}}{\sqrt{N}Z_{\text{ref}}(\theta)}=C.$
###### Proof.
We have that
$\displaystyle 1$
$\displaystyle=\frac{1}{Z_{\text{ref}}(\theta)}\int_{\left\|\psi\right\|_{2}^{2}\leqslant\theta
N}\exp(-\left\|\nabla\psi\right\|_{2}^{2})\mathbf{d}\psi$
$\displaystyle=\frac{e^{N\theta}\cdot
Z^{y_{N}}}{Z_{\text{ref}}(\theta)}\operatorname{\mathds{E}}\exp\left(\left\|\Psi^{y_{N}}\right\|_{2}^{2}-N\theta\right)\cdot\mathds{1}_{\left\|\Psi^{y_{N}}\right\|_{2}^{2}\leqslant
N\theta}$
It therefore suffices to prove that
$\sqrt{N}\cdot\operatorname{\mathds{E}}\exp\left(\left\|\Psi^{y_{N}}\right\|_{2}^{2}-N\theta\right)\cdot\mathds{1}_{\left\|\Psi^{y_{N}}\right\|_{2}^{2}\leqslant
N\theta}$
converges to a constant. We have that
$\exp\left(\left\|\Psi^{y_{N}}\right\|_{2}^{2}-N\theta\right)=\int^{\infty}_{0}e^{-s}\cdot\mathds{1}_{s\geqslant
N\theta-\left\|\Psi^{y_{N}}\right\|_{2}^{2}}\cdot ds,$
and thus
$\operatorname{\mathds{E}}\exp\left(\left\|\Psi^{y_{N}}\right\|_{2}^{2}-N\theta\right)\cdot\mathds{1}_{\left\|\Psi^{y_{N}}\right\|_{2}^{2}\leqslant
N\theta}=\int_{0}^{\infty}e^{-s}\cdot\mathds{P}\left(0\leqslant
N\theta-\left\|\Psi^{y_{N}}\right\|_{2}^{2}\leqslant s\right)ds.$
The problem now comes down to analyzing
$\sqrt{N}\cdot\mathds{P}\left(0\leqslant
N\theta-\left\|\Psi^{y_{N}}\right\|_{2}^{2}\leqslant s\right).$
Note that we may equivalently consider
$\displaystyle\sqrt{N}\cdot\mathds{P}\left(0\leqslant\frac{N\theta-\left\|\Psi^{y_{N}}\right\|_{2}^{2}}{\sqrt{N}}\leqslant\frac{s}{\sqrt{N}}\right).$ (68)
Recall that $\left\|\Psi^{y_{N}}\right\|_{2}^{2}$ is expressible as a sum of
independent random variables:
$\left\|\Psi^{y_{N}}\right\|_{2}^{2}\stackrel{{\scriptstyle\mathrm{d}}}{{=}}\sum_{\boldsymbol{k}\in[n]^{d}}\frac{1}{y_{N}+\lambda_{\boldsymbol{k}}}X_{\boldsymbol{k}}.$
Further, $N\theta-\left\|\Psi^{y_{N}}\right\|_{2}^{2}$ is a centered sum by
definition, and each random variable in the sum has a uniformly bounded third
moment. The Berry-Esseen Theorem then tells us that (68) is bounded above by a
constant. The local limit theorem yields convergence. $\blacksquare$
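Behind the identity $Z^{y_{N}}=\pi^{N}/\det(y_{N}-\Delta)$ lies the one-mode complex Gaussian integral $\int_{\mathds{C}}e^{-a|z|^{2}}\,dz=\pi/a$; diagonalizing $y_{N}-\Delta$ turns the partition function into a product of such factors. A numerical sanity check of the one-mode identity (ours, via the radial form of the integral):

```python
import math

# Check that the integral over C of exp(-a|z|^2) dz equals pi / a,
# using polar coordinates: 2*pi * int_0^inf r * exp(-a r^2) dr = pi / a.
# Midpoint rule on a truncated radial range (tail beyond rmax is negligible).
def complex_gaussian_mass(a, rmax=12.0, steps=100_000):
    dr = rmax / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr
        total += 2.0 * math.pi * r * math.exp(-a * r * r) * dr
    return total

for a in (0.5, 1.0, 2.0):
    assert abs(complex_gaussian_mass(a) - math.pi / a) < 1e-6
```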
Theorem 8.1 gives us a means of evaluating probabilities of events with
respect to the reference measure in terms of the MGFF, where of course we pay
a penalty due to the mass restriction. Next, we will examine the effect of
exponentially tilting the massive free field with respect to the nonlinearity.
###### Theorem 8.2.
Let $\Psi^{y}$ be distributed according to the massive Gaussian Free Field on
$\mathds{T}^{d}$ with mass parameter $y$. Then we have a constant $C$
depending on $\nu$, $y$ and $d$ such that
$\operatorname{\mathds{E}}\exp\left(\frac{2\nu
N^{\frac{1-p}{2}}}{p+1}\left\|\Psi^{y}\right\|^{p+1}_{p+1}\right)\cdot\mathds{1}_{\left\|\Psi^{y}\right\|_{2}^{2}\leqslant
N}\leqslant e^{C}.$
The mass constraint is a significant obstacle: it is not expressible as a
product, nor is it smooth, which prevents us from applying some standard
techniques of Gaussian integration. We resolve both of these issues via the
following upper bound, built from the auxiliary function
$\displaystyle h(x):=\frac{2x^{p+1}}{1+x^{p-1}}.$ (69)
Observe that when $x<1$, we have $x^{p+1}<h(x)$, and when $x>1$,
$h(x)<2x^{2}$. Crucially, $h$ is smooth.
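The two comparison properties of $h$ stated above follow directly from $1\leqslant 1+x^{p-1}\leqslant 2$ for $x\leqslant 1$ and $1+x^{p-1}\geqslant x^{p-1}$ for $x\geqslant 1$. As a quick numerical sanity check (ours, not part of the paper):

```python
# h(x) = 2 x^(p+1) / (1 + x^(p-1)); check the sandwich properties:
# x^(p+1) < h(x) for 0 < x < 1, and h(x) < 2 x^2 for x > 1.
def h(x, p):
    return 2.0 * x ** (p + 1) / (1.0 + x ** (p - 1))

def h_bounds_ok(p):
    below = all(x ** (p + 1) < h(x, p) for x in (0.01, 0.3, 0.9))
    above = all(h(x, p) < 2.0 * x ** 2 for x in (1.1, 2.0, 50.0))
    return below and above

assert all(h_bounds_ok(p) for p in (1.5, 3.0, 7.0))
```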
###### Lemma 8.3.
For $1<p<5+\sqrt{32}$ we have, for all $x>0$,
$h^{\prime\prime}(x)+x^{-1}\cdot h^{\prime}(x)>0.$
###### Proof.
It is equivalent to verify that $(xh^{\prime}(x))^{\prime}>0$. We merely carry
out the calculation. We have
$(xh^{\prime}(x))^{\prime}=\frac{2(p+1)^{2}x^{p}}{1+x^{p-1}}-\frac{2(p+1)(p-1)x^{2p-1}}{(1+x^{p-1})^{2}}-\frac{4p(p-1)x^{2p-1}}{(1+x^{p-1})^{2}}+\frac{4(p-1)^{2}x^{3p-2}}{(1+x^{p-1})^{3}}.$
Putting everything over the common denominator $(1+x^{p-1})^{3}$ and
simplifying, we find that the numerator is given by
$2x^{p}\left(4x^{2p-2}+(3+6p-p^{2})x^{p-1}+(p+1)^{2}\right).$
Viewed as a quadratic in $x^{p-1}$ with positive leading coefficient, its
discriminant $(3+6p-p^{2})^{2}-16(p+1)^{2}$ is negative precisely when
$1<p<5+\sqrt{32}$, so the quadratic has no real roots and the numerator is
positive for all $x>0$. $\blacksquare$
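The role of the threshold $5+\sqrt{32}$ can be checked numerically; the following sketch (ours) evaluates the quadratic in $u=x^{p-1}$ appearing in the numerator and confirms that it stays positive on $u>0$ below the threshold and dips negative above it:

```python
import math

# q(u) = 4u^2 + (3 + 6p - p^2)u + (p+1)^2 is, up to the factor 2 x^p,
# the numerator of (x h'(x))' with u = x^(p-1).
def q(p, u):
    return 4 * u * u + (3 + 6 * p - p * p) * u + (p + 1) ** 2

threshold = 5 + math.sqrt(32)
grid = [10.0 ** k for k in range(-6, 7)]

# Below the threshold: q > 0 on the whole grid of positive u values.
for p in (1.5, 3.0, 7.0, threshold - 0.01):
    assert all(q(p, u) > 0 for u in grid)

# Above the threshold: the parabola dips below zero at its vertex,
# which lies at positive u, so positivity fails.
p_bad = threshold + 0.5
u_star = -(3 + 6 * p_bad - p_bad ** 2) / 8.0
assert u_star > 0 and q(p_bad, u_star) < 0
```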
Merely by the definition of $h$, it follows that
$\displaystyle\operatorname{\mathds{E}}\exp\left(\frac{2\nu
N^{\frac{1-p}{2}}}{p+1}\left\|\Psi\right\|^{p+1}_{p+1}\right)\cdot\mathds{1}_{\left\|\Psi\right\|_{2}^{2}\leqslant
N}\leqslant\operatorname{\mathds{E}}\prod_{{\boldsymbol{x}}\in\mathds{T}_{n}^{d}}\exp\left(\frac{2\nu
N}{p+1}h\left(\frac{|\Psi_{{\boldsymbol{x}}}|}{\sqrt{N}}\right)\right)$ (70)
The intuition now is to work with the right side of (70) and attempt to use
the decay of correlations to separate it into a product of expectations,
essentially comparing to the i.i.d. situation. We will first establish a
counterpart of Theorem 8.2 for an i.i.d. standard complex Gaussian vector
$\\{\Phi_{{\boldsymbol{x}}}\\}_{{\boldsymbol{x}}\in\mathds{T}^{d}}$, and then
use Gaussian interpolation to compare the expectations under the MGFF and
i.i.d. cases.
###### Lemma 8.4.
Let $\Phi$ be an i.i.d. standard complex Gaussian vector. Then for $p>3$ and
$\nu<(p+1)/8$,
we have a constant $C$ depending on $\nu$ and $p$ such that
$\operatorname{\mathds{E}}\prod_{{\boldsymbol{x}}\in\mathds{T}^{d}_{n}}\exp\left(\frac{2\nu
N}{p+1}h\left(\frac{|\Phi_{{\boldsymbol{x}}}|}{\sqrt{N}}\right)\right)\leqslant
e^{C}.$
###### Proof.
By independence, the expectation factors into a product; noting that
$|\Phi_{{\boldsymbol{x}}}|^{2}$ follows an exponential distribution with rate
$1/2$, we have
$\displaystyle\operatorname{\mathds{E}}\prod_{{\boldsymbol{x}}\in\mathds{T}^{d}_{n}}\exp\left(\frac{2\nu
N}{p+1}h\left(\frac{|\Phi_{{\boldsymbol{x}}}|}{\sqrt{N}}\right)\right)=\left(\frac{1}{2}\int_{0}^{\infty}\exp\left(\frac{2\nu
N}{p+1}h\left(\sqrt{N^{-1}t}\right)\right)\cdot e^{-t/2}dt\right)^{N}.$ (71)
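The distributional fact used here — for a standard complex Gaussian $\Phi_{{\boldsymbol{x}}}$ with the normalization implied by the density above, $|\Phi_{{\boldsymbol{x}}}|^{2}$ has density $\tfrac{1}{2}e^{-t/2}$ on $(0,\infty)$ — can be sanity-checked against closed-form moments; a sketch using simple numerical quadrature:

```python
import math

def density_moment(k, T=200.0, n=100000):
    # k-th moment of the density (1/2) e^{-t/2} via the trapezoidal rule;
    # the tail beyond T is negligible for moderate k.
    dt = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0
        total += w * (t ** k) * 0.5 * math.exp(-t / 2) * dt
    return total

# |Phi|^2 = X^2 + Y^2 with X, Y ~ N(0,1): first moment 2, second moment 8.
assert abs(density_moment(1) - 2.0) < 1e-3
assert abs(density_moment(2) - 8.0) < 1e-3
```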
We split the integral into two regions and bound their contributions
individually. Let $\alpha=1-4/(p+1)$, and consider the intervals
$[0,N^{\alpha})$ and $[N^{\alpha},\infty)$. This is where $p>3$ becomes
important, as it guarantees $\alpha>0$ and hence $N^{\alpha}\to\infty$ as
$N\to\infty$. For the first interval,
$\displaystyle\frac{1}{2}\int_{0}^{N^{\alpha}}\exp\left(\frac{2\nu
N}{p+1}h\left(\sqrt{\frac{t}{N}}\right)\right)e^{-\frac{t}{2}}dt$
$\displaystyle\leqslant\exp\frac{4\nu}{(p+1)N}\cdot\frac{1}{2}\int_{0}^{N^{\alpha}}e^{-t/2}dt$
$\displaystyle\leqslant\exp\frac{4\nu}{(p+1)N}.$
In this bound, we have used the fact that $h(x)\leqslant 2x^{p+1}$, so that on
$[0,N^{\alpha})$ the exponent is at most
$\frac{4\nu N}{p+1}N^{(\alpha-1)(p+1)/2}=\frac{4\nu}{(p+1)N}$. Now fix
$\varepsilon>0$; for $N$ sufficiently large,
$\exp\frac{4\nu}{(p+1)N}\leqslant
1+(1+\varepsilon)\frac{4\nu}{(p+1)N}.$
The remaining case is $t\in[N^{\alpha},\infty)$. On this interval, we may
use the bound $h(x)\leqslant 2x^{2}$ to obtain
$\int_{N^{\alpha}}^{\infty}\exp\left(\frac{2\nu
N}{p+1}h\left(\sqrt{\frac{t}{N}}\right)\right)e^{-\frac{t}{2}}dt\leqslant
N\exp\biggl{(}\bigl{(}-\frac{1}{2}+\frac{4\nu}{p+1}\bigr{)}N^{\alpha}\biggr{)}.$
This decays to zero rapidly as $N\to\infty$, so long as
$\nu<\frac{1}{8}(p+1)$. Combining the two bounds, we obtain the following
upper bound for (71):
$\left(1+\frac{6\nu(1+\varepsilon)}{(p+1)N}\right)^{N}\leqslant\exp\left(\frac{6\nu(1+\varepsilon)}{p+1}\right).$
$\blacksquare$
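The quantitative content of the proof — the single-site integral in (71) is $1+O(1/N)$, so its $N$-th power stays bounded — can be explored numerically; a sketch for the sample values $p=4$ and $\nu=1/2<(p+1)/8$ (the uniform bound $e^{3}$ asserted below is a generous numerical margin, not the constant from the lemma):

```python
import math

def h(x, p):
    return 2 * x ** (p + 1) / (1 + x ** (p - 1))

def site_integral(N, p=4, nu=0.5, T=400.0, n=200000):
    # (1/2) * int_0^T exp(2 nu N / (p+1) * h(sqrt(t/N))) e^{-t/2} dt,
    # approximated with the trapezoidal rule; the tail beyond T is
    # negligible since the integrand decays like exp(-(1/2 - 4nu/(p+1)) t).
    dt = T / n
    c = 2 * nu * N / (p + 1)
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0
        total += w * 0.5 * math.exp(c * h(math.sqrt(t / N), p) - t / 2) * dt
    return total

for N in (50, 100, 200):
    I = site_integral(N)
    assert 1.0 <= I ** N <= math.exp(3.0)  # N-th power stays bounded
```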
To deduce Theorem 8.2 from Lemma 8.4, we need one more ingredient: a
comparison between the expectation with respect to the massive Gaussian free
field and the i.i.d. case. We accomplish this with Gaussian interpolation and
correlation decay.
###### Lemma 8.5.
Let $\Psi^{y}$ denote the MGFF on $\mathds{T}^{d}_{n}$, let $\Phi$ denote a
standard complex Gaussian vector, and let $C_{y}$ be a constant such that
$\displaystyle C_{y}\geqslant
2\sum_{{\boldsymbol{x}}_{2}}(y-\Delta)^{-1}_{{\boldsymbol{x}}_{1}{\boldsymbol{x}}_{2}}\quad\text{for all }{\boldsymbol{x}}_{1}.$
(72)
Then we have that
$\operatorname{\mathds{E}}\prod_{{\boldsymbol{x}}\in\mathds{T}^{d}_{n}}\exp\left(\frac{2\nu
N}{p+1}h\left(\frac{|\Psi^{y}_{{\boldsymbol{x}}}|}{\sqrt{N}}\right)\right)\leqslant\left(\operatorname{\mathds{E}}\exp\left(\frac{2\nu
N}{p+1}h\left(\frac{\sqrt{C_{y}}|\Phi_{{\boldsymbol{x}}}|}{\sqrt{N}}\right)\right)\right)^{N}.$
Lemma 8.5 hinges on the application of Gaussian interpolation. We state the
version used below, easily adapted from the real-valued case.
###### Lemma 8.6.
Let $\Psi^{0}$ and $\Psi^{1}$ be independent, centered, complex-valued
Gaussian processes on $\mathds{T}^{d}_{n}$, with covariance matrices $G_{0}$
and $G_{1}$ respectively. Let $r:\mathds{C}^{N}\to\mathds{R}$ be integrable
w.r.t. the laws of both $\Psi^{0}$ and $\Psi^{1}$. Let
$\Psi^{t}:=\sqrt{1-t}\cdot\Psi^{0}+\sqrt{t}\cdot\Psi^{1}$, and define
$R(t):=\operatorname{\mathds{E}}r(\Psi^{t})$. Then we have that
$R^{\prime}(t)=\sum_{{\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2}\in\mathds{T}^{d}_{n}}\bigl{(}G_{1}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2})-G_{0}({\boldsymbol{x}}_{1},{\boldsymbol{x}}_{2})\bigr{)}\cdot\operatorname{\mathds{E}}\frac{\partial^{2}}{\partial\psi_{{\boldsymbol{x}}_{1}}\partial\psi_{{\boldsymbol{x}}_{2}}}r(\Psi^{t}).$
By construction, $R(0)$ is the expectation of $r$ under the law of
$\Psi^{0}$, and $R(1)$ the expectation under the law of $\Psi^{1}$. Thus, if
$R^{\prime}$ can be shown to be nonnegative, then $R(1)\geqslant R(0)$. We
are now ready to prove Lemma 8.5, after which Theorem 8.2 follows at once.
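As a sanity check on the interpolation formula, consider the real-valued, one-dimensional case, where the standard statement carries a factor $\tfrac{1}{2}$ (which the complex convention above absorbs): for $r(x)=x^{4}$ and $\Psi^{t}\sim N(0,(1-t)g_{0}+tg_{1})$, both sides of the identity are available in closed form. A sketch with sample variances:

```python
# Real 1-D Gaussian interpolation check for r(x) = x^4:
# R(t) = E r(Psi^t) = 3 sigma_t^4 with sigma_t^2 = (1-t) g0 + t g1, so
# R'(t) = 6 sigma_t^2 (g1 - g0), which must match the interpolation
# formula (1/2)(g1 - g0) E r''(Psi^t) = (1/2)(g1 - g0) * 12 sigma_t^2.
g0, g1 = 1.0, 3.0

def lhs(t):
    s2 = (1 - t) * g0 + t * g1
    return 6 * s2 * (g1 - g0)          # direct derivative of R(t) = 3 s2^2

def rhs(t):
    s2 = (1 - t) * g0 + t * g1
    return 0.5 * (g1 - g0) * 12 * s2   # interpolation formula, E r'' = 12 s2

for t in (0.0, 0.25, 0.5, 1.0):
    assert abs(lhs(t) - rhs(t)) < 1e-12
```

Note also that $R^{\prime}(t)\geqslant 0$ here because $g_{1}>g_{0}$, in line with the monotonicity argument used in the proof of Lemma 8.5.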
###### Proof of Lemma 8.5.
We apply Lemma 8.6 with $\Psi^{0}=\Psi^{y}$ and $\Psi^{1}=\sqrt{C_{y}}\,\Phi$,
where $C_{y}$ is as in (72), and
$r(\psi)=\exp\left(\frac{2\nu
N}{p+1}\sum_{{\boldsymbol{x}}\in\mathds{T}^{d}}h\left(\frac{|\psi_{{\boldsymbol{x}}}|}{\sqrt{N}}\right)\right).$
Observe that for any smooth function $f:\mathds{R}\to\mathds{R}$, we have
$(\exp(f))^{\prime}=\exp(f)\cdot f^{\prime}\text{ and
}(\exp(f))^{\prime\prime}=\exp(f)\cdot(f^{\prime\prime}+(f^{\prime})^{2}).$
In our case, $\Re{\Psi}$ and $\Im{\Psi}$ are independent copies of one
another, so any calculation done for the real part is replicated exactly for
the imaginary part. For convenience, we introduce the notation
$\mathfrak{J}$, which denotes either $\Re$ or $\Im$. With our choice of $r$,
we have, for ${\boldsymbol{x}}_{1}\neq{\boldsymbol{x}}_{2}$,
$\frac{\partial^{2}}{\partial\mathfrak{J}_{1}\psi_{{\boldsymbol{x}}_{1}}\partial\mathfrak{J}_{2}\psi_{{\boldsymbol{x}}_{2}}}r(\psi)=N\cdot\left(\frac{2\nu}{p+1}\right)^{2}\cdot
h^{\prime}\left(\frac{|\psi_{{\boldsymbol{x}}_{1}}|}{\sqrt{N}}\right)\cdot\frac{\partial|\psi_{\boldsymbol{x}_{1}}|}{\partial\mathfrak{J}_{1}\psi_{\boldsymbol{x}_{1}}}\cdot
h^{\prime}\left(\frac{|\psi_{{\boldsymbol{x}}_{2}}|}{\sqrt{N}}\right)\cdot\frac{\partial|\psi_{\boldsymbol{x}_{2}}|}{\partial\mathfrak{J}_{2}\psi_{\boldsymbol{x}_{2}}}\cdot
r(\psi).$
For the non-mixed second derivative, we have
$\displaystyle\frac{\partial^{2}}{\partial\mathfrak{J}\psi_{{\boldsymbol{x}}}^{2}}r(\psi)$
$\displaystyle=N\cdot r(\psi)\cdot\left(\frac{2\nu}{p+1}\cdot
h^{\prime}\left(\frac{|\psi_{\boldsymbol{x}}|}{\sqrt{N}}\right)\cdot\frac{\partial|\psi_{\boldsymbol{x}}|}{\partial\mathfrak{J}\psi_{\boldsymbol{x}}}\right)^{2}+r(\psi)\cdot\frac{2\nu}{p+1}\cdot\left(\frac{\partial|\psi_{\boldsymbol{x}}|}{\partial\mathfrak{J}\psi_{\boldsymbol{x}}}\right)^{2}h^{\prime\prime}\left(\frac{|\psi_{{\boldsymbol{x}}}|}{\sqrt{N}}\right)$
$\displaystyle\qquad\qquad+\sqrt{N}\cdot
r(\psi)\cdot\left(\frac{2\nu}{p+1}\right)h^{\prime}\left(\frac{|\psi_{{\boldsymbol{x}}}|}{\sqrt{N}}\right)\cdot\frac{\partial^{2}|\psi_{\boldsymbol{x}}|}{\partial\mathfrak{J}\psi_{\boldsymbol{x}}^{2}}.$
We focus our attention on the latter two terms, that is,
$\displaystyle
r(\psi)\cdot\frac{2\nu}{p+1}\cdot\left(\frac{\partial|\psi_{\boldsymbol{x}}|}{\partial\mathfrak{J}\psi_{\boldsymbol{x}}}\right)^{2}h^{\prime\prime}\left(\frac{|\psi_{{\boldsymbol{x}}}|}{\sqrt{N}}\right)+\sqrt{N}\cdot
r(\psi)\cdot\left(\frac{2\nu}{p+1}\right)h^{\prime}\left(\frac{|\psi_{{\boldsymbol{x}}}|}{\sqrt{N}}\right)\cdot\frac{\partial^{2}|\psi_{\boldsymbol{x}}|}{\partial\mathfrak{J}\psi_{\boldsymbol{x}}^{2}}$
(73)
Summing (73) over both choices of $\mathfrak{J}$, that is $\Re$ and $\Im$, and
using
$\sum_{\mathfrak{J}}\bigl(\partial|\psi_{\boldsymbol{x}}|/\partial\mathfrak{J}\psi_{\boldsymbol{x}}\bigr)^{2}=1$
together with
$\sum_{\mathfrak{J}}\partial^{2}|\psi_{\boldsymbol{x}}|/\partial\mathfrak{J}\psi_{\boldsymbol{x}}^{2}=1/|\psi_{\boldsymbol{x}}|$,
we obtain, up to the positive factor $\frac{2\nu N}{p+1}$,
$r(\psi)\cdot\left(\frac{\partial^{2}}{\partial|\psi_{\boldsymbol{x}}|^{2}}+\frac{1}{|\psi_{\boldsymbol{x}}|}\frac{\partial}{\partial|\psi_{\boldsymbol{x}}|}\right)h\left(\frac{|\psi_{\boldsymbol{x}}|}{\sqrt{N}}\right).$
Using Lemma 8.3, we may conclude that
$\frac{\partial^{2}}{\partial\Re\psi_{\boldsymbol{x}}^{2}}r(\psi)+\frac{\partial^{2}}{\partial\Im\psi_{\boldsymbol{x}}^{2}}r(\psi)-\left(\frac{\partial}{\partial\Re\psi_{\boldsymbol{x}}}r(\psi)\right)^{2}-\left(\frac{\partial}{\partial\Im\psi_{\boldsymbol{x}}}r(\psi)\right)^{2}\geqslant
0.$
Combining the elementary inequality $a^{2}+b^{2}\geqslant 2ab$ with (72), we
have that
$R^{\prime}(t)\geqslant\sum_{{\boldsymbol{x}}_{1}}\left(C_{y}-\sum_{{\boldsymbol{x}}_{2}}(y-\Delta)^{-1}_{{\boldsymbol{x}}_{1}{\boldsymbol{x}}_{2}}\right)\operatorname{\mathds{E}}\left(\frac{\partial^{2}}{\partial\Re\psi_{{\boldsymbol{x}}_{1}}^{2}}+\frac{\partial^{2}}{\partial\Im\psi_{{\boldsymbol{x}}_{1}}^{2}}\right)r(\psi)\geqslant
0.$
This completes the proof. $\blacksquare$
All the pieces are now in place to prove Theorem 8.2.
###### Proof.
A detailed proof is omitted; all that is required is to combine Lemma 8.5 with
a rescaled version of Lemma 8.4. $\blacksquare$
## 9\. Proofs of Main Theorems
In the preceding sections we assembled all the pieces required to prove our
main results; in this section we tie them together. We begin with the proof of
free energy convergence, for which it remains to combine the upper and lower
bounds obtained in Section 6.
### 9.1. Proof of Theorem 2.3
The proof is essentially an immediate corollary of Lemmas 6.1 and 6.5; it
suffices to specify $\kappa_{N}$ and $s_{N}$ and to provide a rate of
convergence for $\gamma_{N}$. The sequence $\kappa_{N}$ is chosen so that the
mass shells are thick enough for the required concentration of mass of the
Gaussian free field to hold. A larger $\kappa_{N}$ yields a better
concentration bound but a worse rate of convergence. It is necessary that
$\lim_{N\to\infty}\frac{\kappa_{N}}{\sqrt{m_{N}(2)N}}=\infty\text{ and
}\lim_{N\to\infty}\frac{\kappa_{N}}{N}=0.$
As for $s_{N}$, which governs the size of the concentrated region in the upper
bound, it must be chosen so that the corresponding $U$ never contains
nontrivial cycles, $s_{N}\to 0$ as $N\to\infty$, and the corresponding $U$ is
sufficiently small to allow concentration. It suffices to take
$\displaystyle s_{N}=\frac{1}{\log N}.$ (74)
Finally, $\gamma_{N}$ depends on the optimizing value $a$. We note that if
$0$ is optimizing, then by Lemma 5.8 we have
$\gamma_{N}\leqslant\frac{2d\kappa_{N}}{N\log N}.$
On the contrary, if we have a nontrivial minimizer $a_{\star}$, then we also
know that $\nu a_{\star}>R_{p}$, and thus by Lemma 5.7 we have
$\gamma_{N}\leqslant C_{1}(\nu)(\nu a_{\star})^{p^{2}-1}\exp(-C_{2}(\nu)\cdot
N^{1/d}).$
Thus, the product of all the error terms arising in Lemmas 6.1 and 6.5 can be
bounded above by
$e^{C_{1}{\kappa_{N}}}\cdot\left(1-C\cdot\frac{Nm_{N}(2)}{\kappa_{N}^{2}}\right)^{2}.$
Using the fact that $m_{N}(2)=N^{-1+4/d}$, we find that the best choice of
$\alpha$ for $\kappa_{N}=N^{\alpha}$ is given by
$\alpha=\frac{1}{3}+\frac{4}{3d}.$
$\blacksquare$
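On our reading of the proof, the choice of $\alpha$ comes from balancing two error exponents: the $\gamma_{N}$-type error scales like $\kappa_{N}/N=N^{\alpha-1}$ (up to logarithms), while the concentration error scales like $Nm_{N}(2)/\kappa_{N}^{2}=N^{4/d-2\alpha}$. Equating the exponents recovers the stated $\alpha$; a quick check of the arithmetic:

```python
# Solve (alpha - 1) = 4/d - 2*alpha for alpha, for several dimensions d,
# and confirm it matches alpha = 1/3 + 4/(3d).
for d in (3, 4, 5, 10):
    alpha = (1 + 4 / d) / 3          # closed-form solution of the balance
    assert abs((alpha - 1) - (4 / d - 2 * alpha)) < 1e-12
    assert abs(alpha - (1 / 3 + 4 / (3 * d))) < 1e-12
```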
Next, we turn to the characterization of the phase transition curve. These are
essentially the finishing touches required to extract Theorem 2.4 from Section
7.
### 9.2. Proof of Theorem 2.4
We address the four parts of Theorem 2.4 individually.
1. a)
This is an immediate corollary of Lemma 7.2.
2. b)
This is exactly established in Lemma 7.4.
3. c)
We begin with the bound and asymptotic for $\nu\downarrow R_{p}$. Indeed,
Lemma 7.3 may be rephrased as
$\theta_{c}(\nu)\leqslant C_{d}\cdot\frac{\nu}{\nu-R_{p}}.$
Further, note that we have a nontrivial minimizer $a_{\star}$ if and only if
there is an $a$ such that $G(a)<0$. In particular, if $(\theta,\nu)$ lies in
the dispersive phase, we must have $G(a)\geqslant 0$ for all $a\in[0,1)$. Now
consider the curve given by
$\frac{R_{p}}{\nu}+\frac{C_{d}}{\theta}=1+\varepsilon.$
We know that the function $J$ is differentiable; moreover, $J^{\prime}=0$ for
all $a<R_{p}$. Along this curve, as $\theta\uparrow\infty$ we have
$\nu\downarrow(1+\varepsilon)^{-1}\cdot R_{p}$. Thus we must have, along this
curve,
$\lim_{\theta\to\infty}\theta\cdot J(a\nu)=0$
for all values of $a\in[0,1)$. Hence, as $\theta$ increases along this curve,
$(\nu_{\varepsilon},\theta_{\varepsilon})$ eventually corresponds to the
dispersive phase. Since the choice of $\varepsilon$ is arbitrary, this
verifies that
$\lim_{\nu\downarrow R_{p}}(\nu-R_{p})\theta_{c}(\nu)=C_{d}R_{p}.$
Finally, we address the bound and the asymptotic as $\nu\uparrow\infty$. To do
this, recall the functions $\widehat{W}$ and $\widehat{J}$ introduced in (44)
and (29) respectively. We may then write
$\displaystyle\log{\theta}$
$\displaystyle+W(\theta(1-a))+\frac{\theta}{\nu}I(a\nu)$
$\displaystyle=\widehat{W}(\theta(1-a))+\theta
a\widehat{J}(a\nu)-\frac{2\theta}{p+1}a(a\nu\vee R_{p})^{(p-1)/2}-\log(1-a).$
We know that the functions $\widehat{W}$ and $\widehat{J}$ are bounded;
moreover, we may discard the $\log\theta$ term, as it is irrelevant to the
phase behavior (it is independent of $a$). Thus, it suffices to characterize
the behavior of
$G_{\theta,\nu}(a)=\widehat{W}(\theta(1-a))+\theta
a\widehat{J}(a\nu)-\frac{2\theta}{p+1}a^{(p+1)/2}\cdot\nu^{(p-1)/2}-\log(1-a).$
If we take $C\cdot\theta=\nu^{-(p-1)/2}$ for a constant $C$, then the limit as
$\nu\to\infty$ is nonnegative if and only if
$C\leqslant\xi_{p}(0)$
for $\xi_{p}$ defined in (18). This completes the proof.
4. d)
It suffices to consider $\nu>R_{p}$ and $\theta>\theta_{c}$. This implies that
there is an $a_{\star}>0$ which is the smallest optimizer. Now suppose that,
as $\theta\downarrow\theta_{c}$, we have $a_{\star}(\theta)\downarrow 0$.
Since the $a_{\star}(\theta)$ continue to be nontrivial minimizers, we must
still have $I(a_{\star}\nu)<0$, a contradiction since eventually
$a_{\star}\nu<R_{p}$. $\blacksquare$
We conclude this section by finally proving Theorem 2.8. Essentially, we need
to combine Theorems 8.1 and 8.2.
### 9.3. Proof of Theorem 2.8
Let $\mathcal{A}\subset\ell^{2}(\mathds{T}^{d}_{n})$. Using the trivial fact
that $\tilde{Z}_{N}\geqslant Z_{\text{ref}}$, we may write
$\displaystyle\tilde{\mu}_{N}(\mathcal{A})\leqslant\frac{Z^{y_{N}}e^{N\theta
y_{N}}}{\sqrt{N}\cdot
Z_{\text{ref}}(\theta)}\operatorname{\mathds{E}}\left(\mathds{1}_{\mathcal{A}}\cdot\frac{\exp(y_{N}\left\|\Psi^{y}\right\|_{2}^{2})}{e^{N\theta
y_{N}}}\cdot\exp\left(\frac{\nu(\theta
N)^{(1-p)/2}}{p+1}\left\|\Psi^{y_{N}}\right\|_{p+1}^{p+1}\right)\cdot\mathds{1}_{\left\|\Psi^{y}\right\|_{2}^{2}\leqslant
N\theta}\right).$ (75)
Two applications of Hölder's inequality are all that are required now.
Indeed, let $\epsilon_{1}$ and $\epsilon_{2}$ be positive real numbers such
that $(1+\epsilon_{1})\nu\theta^{(p+1)/2}$ satisfies the hypothesis of Theorem
8.2. By the first application, (75) can be further bounded above by the
product of the following two terms:
$\displaystyle\operatorname{\mathds{E}}\left(\exp((1+\epsilon_{1})\cdot\frac{\nu
N^{-(p-1)/2}}{p+1}\cdot\left\|\Psi^{y_{N}}\right\|_{p+1}^{p+1})\cdot\mathds{1}_{\left\|\Psi^{y}\right\|_{2}^{2}\leqslant
N\theta}\right)^{1/(1+\epsilon_{1})}$ (76)
and
$\displaystyle\operatorname{\mathds{E}}\left(\exp\left(\frac{1+\epsilon_{1}}{\epsilon_{1}}y_{N}\left\|\psi^{y_{N}}\right\|_{2}^{2}-\frac{1+\epsilon_{1}}{\epsilon_{1}}y_{N}N\theta\right)\cdot\mathds{1}_{\left\|\Psi^{y}\right\|_{2}^{2}\leqslant
N\theta}\right)^{\epsilon_{1}/(1+\epsilon_{1})}$ (77)
For convenience, we denote
$\epsilon_{1,2}:=\epsilon_{1}+\epsilon_{2}+\epsilon_{1}\epsilon_{2}.$
Applying Hölder’s inequality again to (77), we obtain the upper bound
$\displaystyle\operatorname{\mathds{E}}\left(\exp\left(\frac{1+\epsilon_{1,2}}{\epsilon_{1}\epsilon_{2}}y_{N}\left\|\psi^{y_{N}}\right\|_{2}^{2}-\frac{1+\epsilon_{1,2}}{\epsilon_{1}\epsilon_{2}}y_{N}N\theta\right)\cdot\mathds{1}_{\left\|\Psi^{y}\right\|_{2}^{2}\leqslant
N\theta}\right)^{\epsilon_{1}\epsilon_{2}/(1+\epsilon_{1,2})}\operatorname{\mathds{P}}\left(\Psi^{y}\in\mathcal{A}\right)^{\epsilon_{1}/(1+\epsilon_{1,2})}.$
(78)
Thus, by Theorem 8.2, we have that (76) is bounded above by a constant. By
Theorem 8.1, we know that (78) is bounded above by
$N^{-\epsilon_{1}\epsilon_{2}/(2+2\epsilon_{1,2})}$. Combining, we obtain that
$\tilde{\mu}_{N}(\mathcal{A})\leqslant C\cdot
N^{\frac{1+\epsilon_{1}+\epsilon_{2}}{2+2\epsilon_{1,2}}}\cdot\mathds{P}\left(\Psi^{y}\in\mathcal{A}\right)^{\epsilon_{2}/(1+\epsilon_{1,2})}.$
Thus, if $\mathds{P}(\Psi^{y_{N}}\in\mathcal{A})\leqslant N^{-\alpha}$ for
some $\alpha>0$, we have that
$\tilde{\mu}_{N}(\mathcal{A})\leqslant C\cdot
N^{\frac{1+\epsilon_{1}+(1-2\alpha)\epsilon_{2}}{2+2\epsilon_{1,2}}}.$
We have flexibility in choosing $\epsilon_{2}$: as long as $\alpha>1/2$, we
may select $\epsilon_{2}$ large enough that the numerator of the exponent is
negative. $\blacksquare$
## 10\. Discussion and Further Questions
### 10.1. On the Question of Multi-Soliton Solutions
The question of multi-soliton phases is closely related to the variational
formula having multiple minimizers in the soliton phase, _i.e.,_ when $0$ is
not a minimizer. We conjecture that this is impossible. Indeed, we know that
$I(a)$ is concave for large values of $a$; however, proving the conjecture
requires detailed behavior of the function $I$ near $R_{p}$.
### 10.2. Ergodicity
The behavior of the invariant measure under the dynamics is yet to be
explored. In [CK12], the corresponding analysis was possible because the mass
of typical functions concentrates at a single lattice site, significantly
simplifying computations. This is no longer the case here; for
should be possible to consider the dynamics introduced in [OL] with the regime
of scaling considered in this article. In [WEI3], a hierarchy of local minima
for the Hamiltonian (5) was provided, which would be the collection of
metastable states. In addition, the Witten Laplacian approach in [LEB] can be
considered with restriction to the sphere instead of the ball, so as to remove
technicalities arising from carrying out Morse theoretic calculations on a
manifold with boundary.
### 10.3. Dimension two analysis
We have explicitly used the finiteness of $C_{d}$ for our maximum bounds. One
of the obstacles in extending our results to the two-dimensional case is that
$C_{2}=\infty$. We work with a massive field for the most part, so this issue
does not arise often, and we anticipate that one can work around it. The more
serious obstacle is that the techniques used here do not yield adequate mass
concentration in two dimensions. This being said, the asymptotics are more
interesting in two dimensions; by choosing the correct scaling, it might be
possible to use the 2-D Gaussian Free Field characterization in [RAY] to
evaluate our scaling limit explicitly in the dispersive phase.
Acknowledgments. We would like to thank Gayana Jayasinghe, Gourab Ray, and
Arnab Sen for many useful discussions.
## References
This paper proposes an algorithm for motion planning among dynamic agents using adaptive conformal prediction. We consider a deterministic control system and use trajectory predictors to predict the dynamic agents' future motion, which is assumed to follow an unknown distribution. We then leverage ideas from adaptive conformal prediction to dynamically quantify prediction uncertainty from an online data stream. Particularly, we provide an online algorithm that uses delayed agent observations to obtain uncertainty sets for multistep-ahead predictions with probabilistic coverage. These uncertainty sets are used within a model predictive controller to safely navigate among dynamic agents. While most existing data-driven prediction approaches quantify prediction uncertainty heuristically, we quantify the true prediction uncertainty in a distribution-free, adaptive manner that even allows us to capture changes in prediction quality and the agents' motion. We empirically evaluate our algorithm on a case study where a drone avoids a flying frisbee.
MPC, dynamic environments, uncertainty quantification, conformal prediction.
§ INTRODUCTION
Motion planning of autonomous systems in dynamic environments requires the system to reason about uncertainty in its environment, e.g., a self-driving car needs to reason about uncertainty in the motion of other vehicles, and a mobile robot navigating a crowded space needs to assess uncertainty of nearby pedestrians. These applications are safety critical, as the agents’ intentions are unknown, and systems must be able to plan reactive behaviors in response to an increase in uncertainty.
Existing works include predictive and reactive approaches, e.g., multi-agent navigation via the dynamic window approach [Fox et al., 1997, Mitsch et al., 2013] or navigation functions [Dimarogonas et al., 2006, Tanner et al., 2003]. Reactive approaches typically consider simplified dynamics and do not optimize performance. Predictive approaches incorporate predictions of the agents' future motion and can optimize performance. Interactive approaches take inter-agent interaction into account [Kretzschmar et al., 2016, Everett et al., 2021], while non-interactive approaches ignore potential interactions [Trautman and Krause, 2010, Du Toit and Burdick, 2011].
While many prior works assume perfect knowledge of the environment, an important challenge is to account for uncertainty in perception. Existing works address the problem by making simplifying assumptions, such as linear system dynamics and bounded or Gaussian uncertainty distributions [Aoude et al., 2013, Thomas et al., 2021, Renganathan et al., 2020]. However, addressing the problem in its full generality for nonlinear dynamics and arbitrary distributions is an open problem.
In this paper, we use trajectory predictors to predict the agents’ future motion, and quantify prediction uncertainty in an adaptive and online manner from past agent observations of a single trajectory. Particularly, we use tools from the adaptive conformal prediction (ACP) literature [Gibbs and Candes, 2021, Gibbs and Candès, 2022, Zaffran et al., 2022, Bastani et al., 2022] to construct prediction regions that quantify multistep-ahead prediction uncertainty. Based on this quantification, we formulate an uncertainty-informed motion planner. Our contributions are as follows:
* We propose an algorithm that adaptively quantifies uncertainty of trajectory predictors using ACP. Our algorithm is distribution-free and applies to a broad class of trajectory predictors, providing average probabilistic coverage.
* We propose a model predictive controller (MPC) that leverages uncertainty quantifications to plan probabilistically safe paths around dynamic obstacles. Importantly, our adaptive algorithm enables us to capture and react to changes in prediction quality and the agents’ motion.
* We provide empirical evaluations of a drone avoiding a flying frisbee.
§.§ Related Work
Planning in dynamic environments has found broad interest: non-interactive sampling-based motion planners were presented in [Phillips and Likhachev, 2011, Renganathan et al., 2022, Aoude et al., 2013, Majd et al., 2021, Kalluraya et al., 2022], while [Du Toit and Burdick, 2011, Wei et al., 2022, Wang et al., 2022, Thomas et al., 2021] propose non-interactive receding horizon planning algorithms. However, accounting for uncertainty in the agent motion is challenging.
Intent-driven models for planning among human agents have estimated agent uncertainty using Bayesian inference [Fisac et al., 2018, Nakamura and Bansal, 2022, Fridovich-Keil et al., 2020, Bansal et al., 2020]. Model predictive control was also used in a stochastic setting to account for uncertainty under the assumption of bounded or Gaussian uncertainty [Fan et al., 2021, Nair et al., 2022, Yoon et al., 2021]. Data-driven trajectory predictors can provide mean and variance information of the predictions, which can be approximated as a Gaussian distribution [Busch et al., 2022] and used within stochastic planning frameworks [Choi et al., 2017, Omainska et al., 2021, Fulgenzi et al., 2008]. These approaches quantify prediction uncertainty heuristically for real systems, as the authors make certain assumptions on the prediction algorithms, the agent models, and their distributions, e.g., being Gaussian. Distributionally robust approaches such as [Wei et al., 2022] are distribution-free and can ensure safety at the cost of conservatism.
Data-driven trajectory predictors, such as RNNs or LSTMs, provide no information about prediction uncertainty, which can lead to unsafe decisions. For this reason, prediction monitors were recently presented in [Farid et al., 2022, Luo et al., 2021] to monitor prediction quality. Especially [Luo et al., 2021] used conformal prediction to obtain guarantees on the predictor's false negative rate. Conformal prediction was further used to obtain estimates on constraint satisfaction via neural network predictors [Dietterich and Hostetler, 2022, Bortolussi et al., 2019, Qin et al., 2022, Lindemann et al., 2022]. Conceptually closest to our work are [Chen et al., 2020, Lindemann et al., 2022], where prediction uncertainty quantifications are obtained using conformal prediction and then utilized to design model predictive controllers. While the algorithm in [Chen et al., 2020] cannot provide end-to-end safety guarantees, [Lindemann et al., 2022] can provide probabilistic safety guarantees for the planner. However, changes in the distribution that describes the agents' motion cannot be accounted for, e.g., when the agents' motion changes depending on the motion of the control system. Another distinct difference is that offline trajectory data is needed, while we obtain uncertainty quantifications in an adaptive manner from past agent observations of a single trajectory.
§ PROBLEM FORMULATION AND PRELIMINARIES
The dynamics of our autonomous system are governed by the discrete-time dynamical system,
\begin{align}
\label{eq:system}
x_{t+1}=f(x_t,u_t), \;\;\; x_0:=\zeta
\end{align}
where $x_t\in\mathcal{X}\subseteq\mathbb{R}^n$ and $u_t\in \mathcal{U}\subseteq \mathbb{R}^m$ denote the state and the control input at time $t\in\mathbb{N}\cup \{0\}$, respectively. The sets $\mathcal{U}$ and $\mathcal{X}$ denote the set of permissible control inputs and the workspace of the system, respectively. The measurable function $f:\mathbb{R}^n\times\mathbb{R}^m\to \mathbb{R}^n$ describes the system dynamics and $\zeta\in\mathbb{R}^n$ is the initial condition of the system. For brevity, let $x:=(x_0,x_1,\hdots)$ denote the trajectory of (<ref>) under a given control sequence $u:=(u_0,u_1,\hdots)$.
The system operates in an environment with $N$ dynamic agents whose trajectories are a priori unknown. Let $\mathcal{D}(x)$ be an unknown distribution over agent trajectories, i.e., let
Y:=(Y_0,Y_1,\hdots)\sim \mathcal{D}(x)
describe a random trajectory where the joint agent state $Y_t:=(Y_{t,1},\hdots,Y_{t,N})$ at times $t\in\mathbb{N}\cup \{0\}$ is drawn from $\mathbb{R}^{Nn}$, i.e., $Y_{t,j}$ is the state of agent $j$ at time $t$. For instance, $Y_t$ can denote the uncertain two-dimensional positions of $N$ pedestrians at time $t$. Modeling dynamic agents by a distribution $\mathcal{D}$ provides great flexibility, and $\mathcal{D}$ can generally describe the motion of Markov decision processes.
We use lowercase letters $y_t$ when referring to a realization of $Y_t$, and assume at time $t$ to have access to past observations $(y_0,\hdots,y_t)$. We make no other assumptions on the distribution $\mathcal{D}$, and in our proposed algorithm we will predict states $(y_{t+1},\hdots,y_{t+H})$ for a prediction horizon of $H$ from $(y_0,\hdots,y_t)$ and quantify prediction uncertainty using ideas from ACP.
Given the system in (<ref>), the unknown random trajectories $Y\sim\mathcal{D}(x)$, and a failure probability $\delta\in(0,1)$, design the control inputs $u_t$ such that the Lipschitz continuous constraint function $c:\mathbb{R}^n\times\mathbb{R}^{nN}\to \mathbb{R}$ is satisfied[For an obstacle avoidance constraint, like $c(x,y):= \lVert x - y \rVert -0.5 \geq 0 $, the Lipschitz constant is 1. We implicitly assume that the constraint function is initially satisfied, i.e., that $c(x_0,y_0)\ge 0$.] with a probability of at least $1-\delta$ at each time, i.e., that
\begin{align}\label{eq:safety_constr}
\textrm{Prob}\big(c(x_\tau,Y_\tau)\ge 0\big)\ge 1-\delta\;\;\; \text{ for all } \;\;\; \tau\ge 0.
\end{align}
We note that our previous work [Lindemann et al., 2022] considers a similar problem formulation. However, in [Lindemann et al., 2022], we assume that the distribution $\mathcal{D}$ is stationary and it does not depend on the system trajectory $x$ or the environment, i.e., there is no interaction between the control system and the dynamic agents. In reality, however, a pedestrian may come to a halt if a mobile robot comes too close, resulting in a distribution shift in $\mathcal{D}$. This work is a step towards the implementation of a general framework that can adapt to such changes in the agent distribution.
To address Problem <ref>, we use trajectory predictors to predict the motion of the agents $(Y_0,Y_1,\hdots)$ to enforce the constraint (<ref>) within an MPC framework. In [Lindemann et al., 2022], we assumed the availability of validation data from $\mathcal{D}$ to build prediction regions that quantify uncertainty of trajectory predictors. In this setting, we can collect data online to adapt our uncertainty sets based on past performance of our predictor using ACP, without any assumptions on the distribution of the uncertainty or on exchangeability of the validation and training datasets.
By parameterizing the distribution $\mathcal{D}(x)$ by the trajectory $x$, we model potential interactions between system and agents. This way, we can adapt to cases where the trajectory predictor (introduced next) is trained without information of $x$, i.e., without taking interactions into account.
Trajectory Predictors: Given observations $(y_0,\hdots,y_t)$ at time $t$, we want to predict future states $(y_{t+1},\hdots,y_{t+H})$ for a prediction horizon of $H$. Assume that Predict is a function that maps observations $(y_{0},\hdots,y_t)$ to predictions $(\hat{y}_t^1,\hdots,\hat{y}_t^H)$ of $(y_{t+1},\hdots,y_{t+H})$. Note that $t$ in $\hat{y}_t^\tau$ denotes the time at which the prediction is made, while $\tau$ indicates how many steps we predict ahead. In principle, Predict can be a classical auto-regressive model or a neural network based method.
While our proposed problem solution is compatible with any trajectory predictor Predict, we focus in the case studies on real-time updating strategies like sliding linear predictors with extended Kalman filter. Extracting a dynamics model from data is challenging, especially when the available data is limited, noisy, and partial.
[Takens, 1981] showed that the method of delays can be used to reconstruct qualitative features of the full-state, phase space from delayed partial observations. By building on our previous work using time delay embedding in dynamic obstacle avoidance ([Wei et al., 2022]), we employ a linear predictor based on spatio-temporal factorization of the delayed partial observations as the pairing trajectory predictor (See Appendix <ref>).
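While the paper's predictor is based on a spatio-temporal factorization of delayed observations ([Wei et al., 2022]), the Predict interface itself is easy to illustrate; here is a minimal sliding-window least-squares auto-regressive sketch, where the window length `L` and the function signature are our own illustrative choices rather than the paper's:

```python
import numpy as np

def predict(y_obs, H, L=4):
    """Multistep prediction from delayed observations (Takens-style delays).

    Fits a linear auto-regressive map from the last L states to the next
    state by least squares over the observed window, then rolls it forward
    H steps. y_obs has shape (T, dim); the result has shape (H, dim).
    """
    y = np.asarray(y_obs, dtype=float)
    T = len(y)
    # Design matrix of L-step delay vectors and their one-step successors.
    X = np.stack([y[i : i + L].ravel() for i in range(T - L)])
    Z = y[L:]
    A, *_ = np.linalg.lstsq(X, Z, rcond=None)
    window = [row for row in y[-L:]]
    preds = []
    for _ in range(H):
        nxt = np.concatenate(window[-L:]) @ A
        preds.append(nxt)
        window.append(nxt)
    return np.array(preds)

# A linear trajectory is continued exactly by the linear model.
traj = np.array([[t, 2.0 * t] for t in range(10)])
print(predict(traj, H=3))  # approximately [[10, 20], [11, 22], [12, 24]]
```

Any predictor exposing this observations-to-predictions interface can be plugged into the ACP machinery described next.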
Adaptive Conformal Prediction (ACP): Conformal prediction is used to obtain prediction regions for predictive models, e.g., neural networks, without making assumptions on the underlying distribution or the predictive model [Vovk et al., 2005, Shafer and Vovk, 2008, Angelopoulos and Bates, 2021]. Let $R_1,\hdots,R_{t+1}$ be $t+1$ independent and identically distributed (i.i.d.) random variables. The goal in conformal prediction is to obtain a prediction region of $R_{t+1}$ based on $R_1,\hdots,R_{t}$. Formally, given a failure probability $\delta\in (0,1)$, we want to obtain a prediction region $C$ such that
\begin{align*}
\textrm{Prob}(R_{t+1}\le C)\ge 1-\delta.
\end{align*}
We refer to $R_i$ also as the nonconformity score. For supervised learning, we can select $R_i:=\|Z_i-\mu(X_i)\|$ where $\mu$ is the predictor so that a large nonconformity score indicates a poor predictive model. By a quantile argument, see <cit.>, we can obtain $C$ to be the $(1-\delta)$th quantile of the empirical distribution of the values $R_1,\hdots,R_{t}$ and $\infty$. Calculating the $(1-\delta)$th quantile can be done by assuming that $\bar{R}_1,\hdots,\bar{R}_{t}$ correspond to the values of $R_1,\hdots,R_{t}$, but instead sorted in non-decreasing order ($\bar{R}$ refers to the order statistic of $R$), i.e., for each $\bar{R}_i$ there exists exactly one $R_j$ such that $\bar{R}_i=R_j$ and $\bar{R}_{i+1}\ge \bar{R}_i$. By setting $q:=\lceil (t+1)(1-\delta)\rceil\le t$, we obtain the $(1-\delta)$th quantile as $C:=\bar{R}_{q}$, i.e., the $q^{\text{th}}$ smallest nonconformity score.
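The quantile computation just described fits in a few lines; a sketch (returning an infinite region when $q$ would exceed $t$, matching the convention of appending $\infty$ to the empirical distribution):

```python
import math

def conformal_quantile(scores, delta):
    """Return the (1 - delta) conformal bound C from past scores R_1..R_t.

    C is the q-th smallest score with q = ceil((t + 1) * (1 - delta)), so
    that Prob(R_{t+1} <= C) >= 1 - delta under exchangeability.
    """
    t = len(scores)
    q = math.ceil((t + 1) * (1 - delta))
    if q > t:
        return float("inf")  # not enough data for the requested coverage
    return sorted(scores)[q - 1]

scores = [0.1, 0.4, 0.2, 0.3, 0.5, 0.25, 0.35, 0.15, 0.45, 0.05]
print(conformal_quantile(scores, delta=0.2))  # q = ceil(11 * 0.8) = 9 -> 0.45
```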
The underlying assumption in conformal prediction is that $R_1,\hdots,R_{t+1}$ are exchangeable (exchangeability includes i.i.d. data). This is an unreasonable assumption for time-series prediction where $R_t$ may denote the nonconformity score at time $t$. To address this issue, ACP was introduced in [Gibbs and Candes, 2021, Gibbs and Candès, 2022, Zaffran et al., 2022, Bastani et al., 2022]. The idea is now to obtain a prediction region $C_{t+1}$ adaptively so that $\textrm{Prob}(R_{t+1}\le C_{t+1})\ge 1-\delta$ for each time $t$. In fact, the prediction region is now obtained as $C_{t+1}:=\bar{R}_{q_{t+1}}$ where $q_{t+1}:=\lceil (t+1)(1-\delta_{t+1})\rceil$ depends on the variable $\delta_{t+1}$ that is adapted online based on observed data. In this way, the prediction region $C_{t+1}$ becomes a tuneable parameter by the choice of $\delta_{t+1}$. To adaptively obtain the parameter $\delta_{t+1}$, ideas from online learning are used and we update $\delta_{t+1}$ as
\begin{align}\label{eq:adapt_upd_rule}
\delta_{t+1}:=\delta_{t}+\gamma(\delta-e_{t})
\; \text{ with }\; e_{t}:=\begin{cases}
0 &\text{if }\, r_{t}\le C_{t}\\
1 &\text{otherwise}
\end{cases}
\end{align}
where we denote by $r_{t}$ the observed realization of $R_{t}$ and where $\gamma$ is a learning rate. The idea is to use $\delta_{t+1}$ to adapt to changes in the distribution of $R_1,\hdots,R_{t+1}$ over time by using information on how much the prediction region $C_{t}$ overcovered ($r_{t}\ll C_{t}$) or undercovered ($r_{t}\gg C_{t}$) in the past.
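The update rule (<ref>) is a one-line recursion; a direct sketch:

```python
def acp_update(delta_t, delta, gamma, r_t, C_t):
    """One step of the ACP recursion: delta_{t+1} = delta_t + gamma*(delta - e_t),
    where e_t = 0 if the realized score r_t was covered by C_t, else 1."""
    e_t = 0 if r_t <= C_t else 1
    return delta_t + gamma * (delta - e_t)
```

When the region covered ($e_t=0$), $\delta_{t+1}$ increases by $\gamma\delta$ and the next region shrinks; when it failed to cover ($e_t=1$), $\delta_{t+1}$ decreases by $\gamma(1-\delta)$ and the next region grows.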
A proper choice of $\gamma$ is one of the main drivers of performance. In [Gibbs and Candès, 2022], the authors present fully adaptive conformal prediction (FACP), where a set of learning rates $\{\gamma_i\}_{1\leq i \leq k}$ is used in parallel and the best $\gamma$ is selected adaptively. Based on past performance (using a reweighting scheme that evaluates which $\gamma_i$ provided the best coverage), the authors maintain a belief $p_t^{(i)}$ at each time step $t$ for each candidate $\delta_t^{(i)}$, $1\leq i \leq k$. The new update laws are
\begin{align*}
\delta^{(i)}_{t+1}:=\delta^{(i)}_{t}+\gamma_i(\delta-e^{(i)}_{t})
\; \text{ with }\; e^{(i)}_{t}:=\begin{cases}
0 &\text{if }\, r_{t}\le C^{(i)}_{t}\\
1 &\text{otherwise}
\end{cases}
\end{align*}
where the individual prediction regions are $C^{(i)}_{t}:=\bar{R}_{q^{(i)}_{t}}$ with $q^{(i)}_{t}:=\lceil (t+1)(1-\delta^{(i)}_{t})\rceil$, while the best prediction region is $C_{t}:=\bar{R}_{q_{t}}$ with $q_{t}:=\lceil (t+1)(1-\sum_{i=1}^{k}p_t^{(i)}\delta^{(i)}_{t})\rceil$.
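A rough sketch of one FACP-style step follows; note that the exponential-weights reweighting on a pinball-loss surrogate used here is our own simplification for illustration — the exact reweighting scheme of [Gibbs and Candès, 2022] differs in its details:

```python
import math

def facp_step(deltas, beliefs, delta, gammas, r_t, Cs, eta=1.0):
    """One FACP-style step (sketch): update each delta^(i) with its own learning
    rate gamma_i, then reweight the beliefs p^(i) by exponential weights on the
    pinball (quantile) loss of each candidate region -- an assumed surrogate."""
    new_deltas, losses = [], []
    for d_i, g_i, C_i in zip(deltas, gammas, Cs):
        e_i = 0 if r_t <= C_i else 1
        new_deltas.append(d_i + g_i * (delta - e_i))
        # pinball loss of region C_i at level 1-delta
        diff = r_t - C_i
        losses.append(max((1 - delta) * diff, -delta * diff))
    w = [b * math.exp(-eta * l) for b, l in zip(beliefs, losses)]
    Z = sum(w)
    return new_deltas, [x / Z for x in w]
```

The candidate whose region tracked the score more closely receives a larger belief, which then enters the mixture quantile level $\sum_{i=1}^{k}p_t^{(i)}\delta_t^{(i)}$ used for the final region.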
§ ADAPTIVE CONFORMAL PREDICTION REGIONS FOR TRAJECTORY PREDICTIONS
Recall that we can obtain predictions $(\hat{y}_t^1,\hdots,\hat{y}_t^H)$ at time $t$ of future agent states $(Y_{t+1},\hdots,Y_{t+H})$ from past observations $(y_{0},\hdots,y_t)$ using the Predict function. Note, however, that these point predictions contain no information about prediction uncertainty and hence cannot be used to reason about the safety constraint (<ref>). To tackle this issue, we aim to construct prediction regions for $(Y_{t+1},\hdots,Y_{t+H})$ using ideas from ACP.
To obtain prediction regions for $(Y_{t+1},\hdots,Y_{t+H})$, we could consider the nonconformity score $\|Y_{t+\tau}-\hat{y}_t^\tau\|$ at time $t$ that captures the multistep-ahead prediction error for each $\tau\in\{1,\hdots,H\}$. A large nonconformity score indicates that the prediction $\hat{y}_t^\tau$ of $Y_{t+\tau}$ is not accurate, while a small score indicates an accurate prediction. For each $\tau$, we wish to obtain a prediction region $C_t^\tau$ that is again defined by an update variable $\delta_t^\tau$. Note, however, that we cannot evaluate $\|y_{t+\tau}-\hat{y}_t^\tau\|$ at time $t$ as only measurements $(y_0,\hdots,y_t)$ are known, but not $(y_{t+1},\hdots,y_{t+H})$. Consequently, we cannot use the update rule (<ref>) to update $\delta_{t}^\tau$, as the error $e_t^\tau$ would depend on checking whether $\|y_{t+\tau}-\hat{y}_{t}^\tau\|\le C_{t}^\tau$. To address this issue, we define the time lagged nonconformity score
\begin{align*}
R_t^\tau:=\|Y_{t}-\hat{y}_{t-\tau}^\tau\|
\end{align*}
that we can evaluate at time $t$ so that we can use the update rule (<ref>). This nonconformity score $R_t^\tau$ is time lagged in the sense that, at time $t$, we evaluate the $\tau$ step-ahead prediction error that was made $\tau$ time steps ago. We can now update the parameter $\delta_{t+1}^\tau$ that defines $C_{t+1}^\tau$ as
\begin{align}\label{eq:recursion}
\delta_{t+1}^\tau:=\delta_{t}^\tau+\gamma(\delta-e_{t}^\tau)
\; \text{ with }\; e_{t}^\tau:=\begin{cases}
0 &\text{if }\, \|y_{t}-\hat{y}_{t-\tau}^\tau\|\le C_{t}^\tau\\
1 &\text{otherwise.}
\end{cases}
\end{align}
To compute the prediction region $C_{t+1}^\tau$, note that we cannot compute $R_1^\tau,\hdots,R_{\tau-1}^\tau$. Therefore, with a minor change, we let $C_{t+1}^\tau$ be the $\lceil(t-\tau+1)(1-\delta_{t+1}^\tau)\rceil^{\text{th}}$ smallest value of $(R_\tau^\tau,\hdots,R_{t}^\tau)$[Instead of keeping track of all data, we will choose a sliding window of the $N$ most recent data. For all prediction regions, we will then consider $(R_{t-N}^\tau,\hdots,R_{t}^\tau)$ and compute $C_{t+1}^\tau$ as the $\lceil(N+1)(1-\delta_{t+1}^\tau)\rceil^{\text{th}}$ smallest value.].
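Putting the lagged score, the sliding window, and the update rule together, a minimal sketch of one per-$\tau$ region tracker (class and attribute names are illustrative; the infinite-region fallback matches the convention discussed later for $\delta_{t+1}\le 0$):

```python
import math
from collections import deque

class LaggedACP:
    """Time-lagged ACP region for the tau-step-ahead error, with a sliding
    window of the N most recent scores R_t^tau = ||y_t - yhat_{t-tau}^tau||."""
    def __init__(self, tau, delta, gamma, N=30):
        self.tau, self.delta, self.gamma, self.N = tau, delta, gamma, N
        self.delta_t = delta
        self.scores = deque(maxlen=N)
        self.C = float("inf")           # no data yet: infinite region

    def step(self, r_t):
        """Observe the lagged score r_t, update delta, and return the new region."""
        e_t = 0 if r_t <= self.C else 1
        self.delta_t += self.gamma * (self.delta - e_t)
        self.scores.append(r_t)
        n = len(self.scores)
        q = math.ceil((n + 1) * (1 - self.delta_t))
        if q > n:                        # also covers delta_t <= 0
            self.C = float("inf")
        else:
            self.C = sorted(self.scores)[max(q, 1) - 1]
        return self.C
```

On a constant stream of scores the region converges to that constant once enough data has accumulated, while $\delta_t$ drifts upward as long as every score is covered.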
By obtaining a prediction region for $R_{t+1}^\tau$ using ACP, we obtain a prediction region for the $\tau$ step-ahead prediction error that was made $\tau-1$ time steps ago, i.e., for $\|Y_{t+1}-\hat{y}_{t+1-\tau}^\tau\|$. Under the assumption that $R_{t+1}^\tau$ and $R_{t+\tau}^\tau$ are independent and identically distributed, $C_{t+1}^\tau$ also serves as a prediction region for the $\tau$ step-ahead prediction error made now at time $t$, i.e., for $R_{t+\tau}^\tau$, which encodes $\|Y_{t+\tau}-\hat{y}_{t}^\tau\|$. Naturally, in our setting $R_{t+1}^\tau$ and $R_{t+\tau}^\tau$ are not independent and identically distributed, but $C_{t+1}^\tau$ still serves as a good prediction region for $R_{t+\tau}^\tau$. We remark that for the theoretical guarantees that we provide in the next section, only the one step-ahead prediction errors are relevant.
Let $\gamma$ be a learning rate, $\delta_0^1\in (0,1)$ be an initial value for the recursion (<ref>), and $T$ be the number of times that we compute the recursion (<ref>). Then, for the one-step-ahead prediction errors, it holds that
\begin{align}\label{eq:thm1}
1-\delta-p_1 \leq \frac{1}{T}\sum_{t=0}^
{T-1} \textrm{Prob}(\|Y_{t+1}-\hat{y}_{t}^1\|\le C_{t+1}^1)\leq 1-\delta+p_2
\end{align}
with constants $p_1:=\frac{\delta_0^1+ \gamma}{T\gamma}$, $p_2:=\frac{(1-\delta_0^1)+ \gamma}{T\gamma}$ so that $\lim_{T\rightarrow\infty}p_1 =0$ and $\lim_{T\rightarrow\infty}p_2 =0$.
Since the probability of an event is equivalent to the expected value of the indicator function of that event, it follows by the definition of the error $e_{t+1}^1$ that
\begin{align}\label{eq:proof1}
\textrm{Prob}(\|Y_{t+1}-\hat{y}_{t}^1\|\le C_{t+1}^1)= \mathbb{E}[1-e_{t+1}^1] = 1-\mathbb{E}[e_{t+1}^1].
\end{align}
For a given initialization $\delta_0^\tau$ and learning rate $\gamma$, we know from <cit.> that the following bound holds (with probability one) for the misclassification errors
\begin{align*}
\;\;\;\frac{-(1-\delta_0^1)+ \gamma}{T\gamma}&\leq \frac{1}{T}\sum_{t=0}^
{T-1} e_{t+1}^1 -\delta \leq \frac{\delta_0^1+ \gamma}{T\gamma}
\implies \Big|\frac{1}{T}\sum_{t=0}^
{T-1} e_{t+1}^1 -&\delta\Big| \leq \frac{\max(\delta_0^1,1-\delta_0^1)+ \gamma}{T\gamma}.
\end{align*}
Hence, taking the expectation of the above two-sided inequality, we get that
\begin{align*}
\frac{-(1-\delta_0^1)+ \gamma}{T\gamma} &\leq \frac{1}{T}\sum_{t=0}^
{T-1} \mathbb{E}[e_{t+1}^1] -\delta \leq \frac{\delta_0^1+ \gamma}{T\gamma},\\
\overset{(a)}{\Leftrightarrow}\;\;\;\frac{-(1-\delta_0^1)+ \gamma}{T\gamma} &\leq\frac{1}{T}\sum_{t=0}^
{T-1} \big(1-\textrm{Prob}(\|Y_{t+1}-\hat{y}_{t}^1\|\le C_{t+1}^1)\big)-\delta \leq \frac{\delta_0^1+ \gamma}{T\gamma},\\
\Leftrightarrow\;\;\;1-\delta + \frac{(1-\delta_0^1)+ \gamma}{T\gamma} &\geq \frac{1}{T}\sum_{t=0}^
{T-1} \textrm{Prob}(\|Y_{t+1}-\hat{y}_{t}^1\|\le C_{t+1}^1) \geq 1-\delta - \frac{\delta_0^1+ \gamma}{T\gamma},
\end{align*}
where we used equation (<ref>) for the equivalence in (a).
The above result can be similarly extended to the FACP case with a set of candidate learning rates $\{\gamma_i\}_{1\leq i \leq k}$ <cit.>.
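The coverage bound in (<ref>) can also be checked numerically; the following is an illustrative toy simulation on i.i.d. uniform scores (our own setup, not one of the paper's experiments):

```python
import math, random

def run_acp(scores, delta, gamma):
    """Run ACP on a stream of one-step scores; return the empirical error rate
    (1/T) * sum_t e_t, which the telescoped update keeps within
    (max(delta_0, 1-delta_0) + gamma) / (T * gamma) of delta."""
    delta_t, hist, errs = delta, [], []
    for r in scores:
        n = len(hist)
        q = math.ceil((n + 1) * (1 - delta_t))
        C = sorted(hist)[q - 1] if 1 <= q <= n else float("inf")
        e = 0 if r <= C else 1
        errs.append(e)
        delta_t += gamma * (delta - e)
        hist.append(r)
    return sum(errs) / len(errs)

random.seed(0)
rate = run_acp([random.random() for _ in range(2000)], delta=0.1, gamma=0.01)
```

Summing the recursion over $t$ gives $\frac{1}{T}\sum_t e_t = \delta - (\delta_{T+1}-\delta_1)/(T\gamma)$, so for $T=2000$ and $\gamma=0.01$ the empirical error rate lands within a few percent of $\delta=0.1$.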
To illustrate these multistep-ahead prediction regions, consider a planar double pendulum whose dynamics are governed by chaotic, nonlinear dynamics that are sensitive to the initial condition <cit.>.
We study the predictions made by a linear predictor that uses noisy observations of the position of the double pendulum (See Appendix <ref>)
and use ACP to predict the uncertainty in the predictions. Both the trajectory predictor and the uncertainty quantification using ACP use online data from a single trajectory. ACP provides prediction regions for the multi-step errors of the linear predictions with a coverage level of $\delta = 0.1$ and learning rates $\gamma \in \{0.0008,\, 0.0015,\, 0.003,\, 0.005,\, 0.009,\, 0.017,\, 0.03,\, 0.05,\, 0.08\}$.
Figure <ref> compares the 1-step and 6-step ahead error prediction regions to the true multi-step errors for two states, the position $(x_2, y_2)$ of the second mass. The percentages of one-step errors that are incorrectly predicted, i.e., $e_t^1 = 1$, for the positions $x_1, x_2, y_1, y_2$ of each mass are $2.36\%$, $0.94\%$, $1.57\%$, and $1.73\%$, respectively. We can see the effects of adaptation: the ACP prediction regions are larger in areas of poor performance of the linear predictor (and consequently higher prediction error) and smaller in regions where the linear predictor performs well.
The multi-step prediction errors are shown for two of the six states of a double pendulum ($x_2, y_2$). ACP can correctly predict regions of high and low error ($90\%$ coverage regions) by adjusting the prediction quantile using update law (<ref>). The orange lines are the true multi-step prediction errors and the blue areas are the error regions predicted by ACP.
§ UNCERTAINTY-INFORMED MODEL PREDICTIVE CONTROL
Based on the obtained uncertainty quantification from the previous section, we propose an uncertainty-informed model predictive controller (MPC) that uses predictions $\hat{y}_t^\tau$ and adaptive prediction regions $C_{t+1}^\tau$. The underlying optimization problem that is solved at every time step $t$ is:
\begin{align}
\min_{(u_t,\hdots,u_{t+H-1})}& \sum_{k=t}^{t+H-1}J(x_{k+1},u_k) &\\
\text{s.t.}\qquad & x_{k+1}=f(x_k,u_k), &k\in\{t,\hdots,t+H-1\}\\
& c(x_{t+\tau},\hat{y}_t^\tau)\ge LC_{t+1}^\tau,&\tau\in\{1,\hdots,H\}\label{eq:constC_2}\\
& u_k \in \mathcal{U},x_{k+1} \in \mathcal{X},&k\in\{t,\hdots,t+H-1\}
\end{align}
where $L$ is the Lipschitz constant of the constraint function $c$, $J$ is a step-wise cost function, and $u_t,\hdots,u_{t+H-1}$ is the control sequence. The optimization problem in (<ref>) is convex if the functions $J$ and $f$ are convex, the function $c$ is convex in its first argument, and the sets $\mathcal{U}$ and $\mathcal{X}$ are convex.
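To make the role of the Lipschitz tightening in (<ref>) concrete, a minimal numerical sketch (the particular constraint function, safety margin, and values are illustrative, not from the paper): if $c(x,\hat{y})\ge L C$ holds and the true obstacle lies within the prediction region, then $c(x,Y)\ge c(x,\hat{y})-L\|Y-\hat{y}\|\ge 0$.

```python
import math

D_SAFE = 1.0  # illustrative safety margin

def c(x, y):
    """Collision-avoidance constraint: distance between agent x and obstacle y
    minus a safety margin; 1-Lipschitz in its second argument."""
    return math.dist(x, y) - D_SAFE

def tightened_ok(x, y_hat, C, L=1.0):
    """The tightened MPC constraint c(x, y_hat) >= L*C."""
    return c(x, y_hat) >= L * C

# Example: predicted obstacle 3 m away with region radius 0.5 m.
x, y_hat, C = (0.0, 0.0), (3.0, 0.0), 0.5
```

Here `tightened_ok(x, y_hat, C)` holds, and any true obstacle position within $0.5$ of `y_hat` still satisfies the original constraint $c\ge 0$.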
Based on this optimization problem, we propose a receding horizon control strategy in Algorithm <ref>. In line 1 of Algorithm <ref>, we initialize the parameter $\delta_0^\tau$ simply to $\delta$. Lines 2-11 present the real-time planning loop by: 1) updating the states $x_t$ and $y_t$ and calculating new predictions $\hat{y}_{t}^\tau$ (lines 3-4), 2) computing the adaptive prediction regions $C_{t+1}^\tau$ (lines 5-9), and 3) solving the optimization problem in (<ref>), of which we apply only $u_t$ (lines 10-11).
Input: Failure probability $\delta$, prediction horizon $H$, learning rate $\gamma$
Output: Control input $u_t(x_t,y_0,\hdots,y_t)$ at each time $t$
1: $\delta_0^\tau \gets \delta$ for $\tau\in\{1,\hdots,H\}$
2: for $t$ from $0$ to $\infty$ do # real-time motion planning loop
3:   Update $x_t$ and $y_t$
4:   Obtain predictions $\hat{y}_t^\tau$ for $\tau\in\{1,\hdots,H\}$
5:   for $\tau$ from $1$ to $H$ do # compute ACP regions
6:     $\delta_{t+1}^\tau \gets \delta_{t}^\tau+\gamma(\delta-e_{t}^\tau)$
7:     $q \gets \big\lceil (t+1)(1-\delta_{t+1}^\tau)\big\rceil$
8:     Set $C_{t+1}^\tau$ as the $q$th smallest value of $(R_\tau^\tau,\hdots,R_t^\tau)$
9:   end for
10:  Calculate controls $u_t,\hdots,u_{t+H-1}$ as the solution of (<ref>)
11:  Apply $u_t$ to (<ref>)
12: end for
MPC with ACP Regions
While Algorithm <ref> uses a single learning rate, one can similarly extend it to the fully adaptive setting using a candidate set of learning rates $\{\gamma_i\}_{1\leq i\leq k}$.
We assume that when $\delta_{t+1} \leq 0$, the prediction region satisfies $C_{t+1}\rightarrow \infty$. This means that when the algorithm requires robust behavior, the infinite prediction region ensures that the observation at the next time step is covered. For a physical system, there are limits on how much the dynamic obstacle can accelerate in one time step, which gives us an upper bound $R_{\text{max}}< \infty$ on the worst-case error. In practice, we enforce $0\leq\delta_{t+1}\leq 1$ with $C_{t+1}\leq R_{\text{max}}$.
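These practical safeguards amount to two clamps per step; a direct sketch (the function name is illustrative):

```python
def clamp_acp(delta_next, C_next, r_max):
    """Practical safeguards: keep delta in [0, 1] and cap the prediction
    region at the physically attainable worst-case error R_max."""
    delta_next = min(max(delta_next, 0.0), 1.0)
    C_next = min(C_next, r_max)
    return delta_next, C_next
```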
If the MPC optimization is recursively feasible, Algorithm <ref> will provide a controller that collides with the dynamic obstacle with at most the following average probability. By assumption, the optimization problem (<ref>) is feasible at each time $t\in \{0,1,\hdots\}$. Due to constraint (<ref>) and Lipschitz continuity of $c$, it hence holds that
\begin{align*}
0&\le c(x_{t+1},\hat{y}_{t}^1)- LC_{t+1}^1\\
&\le c(x_{t+1},Y_{t+1})+L\|Y_{t+1}-\hat{y}_{t}^1\|- LC_{t+1}^1=:p(x_{t+1},Y_{t+1})
\end{align*}
at each time $t\in \{0,1, \hdots\}$. If we now know that $P(\|Y_{t+1}-\hat{y}_{t}^1\|\le C_{t+1}^1)\ge 1-\delta$, then we know that $P(c(x_{t+1},Y_{t+1})\ge 0)\ge 1-\delta$. In this adaptive setting, we cannot guarantee $P(\|Y_{t+1}-\hat{y}_{t}^1\|\le C_{t+1}^1)\ge 1-\delta$, but we know from Theorem <ref> that (<ref>) holds, and, consequently, by taking the limit as $T\to\infty$, we know that
\begin{align*}
\lim_{T\xrightarrow[]{}\infty}\frac{1}{T}\sum_{t=1}^
{T}\textrm{Prob}(\|Y_{t+1}-\hat{y}_{t}^1\|\le C_{t+1}^1) \geq 1-\delta,
\end{align*}
i.e., on average we have the desired prediction accuracy with a probability greater than $1-\delta$.
If the MPC optimization is recursively feasible, Algorithm <ref> will provide a controller that collides with the dynamic obstacle with at most the following average probability:
\begin{align}
\frac{1}{T}\sum_{t=1}^
{T}\textrm{Prob}\big(c(x_{t+\tau-1},\hat{y}_t^\tau)< LC_{t-\tau}^\tau\big) \leq \delta + \frac{1-\delta_0^\tau+ \gamma}{T\gamma} \qquad \qquad \forall \tau\in\{1, \dotsc, H\}.
\end{align}
Using the Law of Total Probability, we have,
\begin{align*}
\textrm{Prob}&\big(c(x_{t+\tau-1},\hat{y}_t^\tau)\ge LC_{t-\tau}^\tau\big) \\&=
\textrm{Prob}\big(c(x_{t+\tau-1},\hat{y}_t^\tau)\ge LC_{t-\tau}^\tau \,\big\vert \, \lVert y_{t}-\hat{y}_{t-\tau}^\tau\rVert\le C_{t-\tau}^\tau\big)\textrm{Prob}(\lVert y_{t}-\hat{y}_{t-\tau}^\tau\rVert\le C_{t-\tau}^\tau) + \\ &\qquad\textrm{Prob}\big(c(x_{t+\tau-1},\hat{y}_t^\tau)\ge LC_{t-\tau}^\tau \,\big\vert \, \lVert y_{t}-\hat{y}_{t-\tau}^\tau\rVert> C_{t-\tau}^\tau\big)\textrm{Prob}(\lVert y_{t}-\hat{y}_{t-\tau}^\tau\rVert> C_{t-\tau}^\tau) \\
&\geq \textrm{Prob}\big(c(x_{t+\tau-1},\hat{y}_t^\tau)\ge LC_{t-\tau}^\tau \,\big\vert \, \lVert y_{t}-\hat{y}_{t-\tau}^\tau\rVert\le C_{t-\tau}^\tau\big)\textrm{Prob}(\lVert y_{t}-\hat{y}_{t-\tau}^\tau\rVert\le C_{t-\tau}^\tau) \\
&= \textrm{Prob}(\|y_{t}-\hat{y}_{t-\tau}^\tau\|\le C_{t-\tau}^\tau)\\
&= \mathbb{E}(1-e_t^\tau) = 1-\mathbb{E}(e_t^\tau).
\end{align*}
We know from [Gibbs and Candes, 2021] that the following bound holds for the misclassification errors, for a given initialization $\delta_0^\tau$ and learning rate $\gamma$,
\begin{align*}
\frac{-\delta_0^\tau + \gamma}{T\gamma} \leq \frac{1}{T}\sum_{t=1}^
{T} e_t^\tau -\delta \leq \frac{1-\delta_0^\tau+ \gamma}{T\gamma} \qquad \qquad \forall \tau\in\{1, \dotsc, H\}.
\end{align*}
Hence, taking the expectation of the above set of inequalities, we get,
\begin{align*}
\frac{1}{T}\sum_{t=1}^
{T} \mathbb{E}[e_t^\tau] -\delta &\leq \frac{1-\delta_0^\tau+ \gamma}{T\gamma},\\
\implies\;\;\;\frac{1}{T}\sum_{t=1}^
{T} \bigg(1-\textrm{Prob}\big(c(x_{t+\tau-1},\hat{y}_t^\tau)\ge LC_{t-\tau}^\tau\big)\bigg)-\delta &\leq \frac{1}{T}\sum_{t=1}^
{T} \mathbb{E}[e_t^\tau] -\delta \leq \frac{1-\delta_0^\tau+ \gamma}{T\gamma}, \\
\implies\;\;\;\frac{1}{T}\sum_{t=1}^
{T}\textrm{Prob}\big(c(x_{t+\tau-1},\hat{y}_t^\tau)< LC_{t-\tau}^\tau\big) -\delta &\leq \frac{1-\delta_0^\tau+ \gamma}{T\gamma} \qquad \qquad \forall \tau\in\{1, \dotsc, H\}.
\end{align*}
This gives us the following asymptotic guarantee on obstacle avoidance,
\begin{align*}
\lim_{T\xrightarrow[]{}\infty}\frac{1}{T}\sum_{t=1}^
{T}\textrm{Prob}\big(c(x_{t+\tau-1},\hat{y}_t^\tau)< LC_{t-\tau}^\tau\big) \leq \delta.
\end{align*}
Moreover, it is clear that the above guarantee holds with equality for the prediction regions obtained from ACP, i.e.,
\begin{align*}
\lim_{T\xrightarrow[]{}\infty}\frac{1}{T}\sum_{t=1}^
{T}\textrm{Prob}(\|y_{t}-\hat{y}_{t-\tau}^\tau\|> C_{t-\tau}^\tau) = \delta \qquad \qquad \forall \tau\in\{1, \dotsc, H\}.
\end{align*}
Let $\gamma$ be a learning rate, $\delta_0^1\in (0,1)$ be an initial value for the recursion (<ref>), and $T$ be the number of times that we compute the recursion (<ref>). If the optimization problem (<ref>) in Algorithm <ref> is recursively feasible, then Algorithm <ref> will lead to
\begin{align}
\frac{1}{T}\sum_{t=0}^
{T-1}\textrm{Prob}\big(c(x_{t+1},Y_{t+1})\ge 0\big) \geq 1-\delta - p_1
\end{align}
with constant $p_1:=\frac{\delta_0^1+ \gamma}{T\gamma}$ so that $\lim_{T\rightarrow\infty}p_1 =0$.
By assumption, the optimization problem in (<ref>) is feasible at each time $t\in \{0,1,\hdots\}$. Due to constraint (<ref>) and Lipschitz continuity of $c$, it hence holds that
\begin{align}\label{eq:proof2}
0&\le c(x_{t+1},\hat{y}_{t}^1)- LC_{t+1}^1\le c(x_{t+1},Y_{t+1})+L\|Y_{t+1}-\hat{y}_{t}^1\|- LC_{t+1}^1
\end{align}
at each time $t\in \{0,1, \hdots\}$. Consequently, note that $\|Y_{t+1}-\hat{y}_{t}^1\|\le C_{t+1}^1$ is a sufficient condition for $c(x_{t+1},Y_{t+1})\ge 0$. In the next step, we can derive that
\begin{align*}
\textrm{Prob}\big(c(x_{t+1},Y_{t+1})\ge 0\big) &\overset{(a)}{=}\textrm{Prob}\big(c(x_{t+1},Y_{t+1})\ge 0 \,\big\vert \, \lVert Y_{t+1}-\hat{y}_{t}^1\rVert\le C_{t}^1\big)\textrm{Prob}(\lVert Y_{t+1}-\hat{y}_{t}^1\rVert\le C_{t}^1) \\ &\qquad \hspace{-0.75cm}+\textrm{Prob}\big(c(x_{t+1},Y_{t+1})\ge 0 \,\big\vert \, \lVert Y_{t+1}-\hat{y}_{t}^1\rVert> C_{t}^1\big)\textrm{Prob}(\lVert Y_{t+1}-\hat{y}_{t}^1\rVert> C_{t}^1) \\
&\overset{(b)}{\geq}\textrm{Prob}\big(c(x_{t+1},Y_{t+1})\ge 0 \,\big\vert \, \lVert Y_{t+1}-\hat{y}_{t}^1\rVert\le C_{t}^1\big)\textrm{Prob}(\lVert Y_{t+1}-\hat{y}_{t}^1\rVert\le C_{t}^1) \\
&\overset{(c)}{=} \textrm{Prob}(\lVert Y_{t+1}-\hat{y}_{t}^1\rVert\le C_{t}^1)
% &= \mathbb{E}[1-e_{t+1}^1] = 1-\mathbb{E}[e_{t+1}^1].
%\mathbb{P}(\|y_{t}-\hat{y}_{t-\tau}^\tau\|\le C_{t-\tau-1}^\tau \, | \, )
\end{align*}
where the equality in (a) follows from the law of total probability, while the inequality in (b) follows from the nonnegativity of probabilities. The equality in (c) follows as $\textrm{Prob}(c(x_{t+1},Y_{t+1})\ge 0 \,\vert \, \lVert Y_{t+1}-\hat{y}_{t}^1\rVert\le C_{t}^1)=1$ since $\lVert Y_{t+1}-\hat{y}_{t}^1\rVert\le C_{t}^1$ implies $c(x_{t+1},Y_{t+1})\ge 0$ according to (<ref>). We now use the result from Theorem <ref> to complete the proof.
This gives us the following asymptotic guarantee on obstacle avoidance,
\begin{align*}
\lim_{T\xrightarrow[]{}\infty}\frac{1}{T}\sum_{t=1}^
{T}\textrm{Prob}\big(c(x_{t+1},Y_{t+1})\ge 0\big) \geq 1-\delta.
\end{align*}
§ CASE STUDIES: MULTIROTOR OPERATING IN SMALL ANGLE REGIME DODGING A FLYING FRISBEE
We compare the performance of the MPC with ACP uncertainty prediction regions with our past work that uses a distributionally robust approach to uncertainty quantification <cit.>. We use the same example of a multirotor operating in the presence of a moving obstacle with an MPC planner. The multirotor is constrained to operate within the state constraints
$\theta\in [-0.45,0.45]$ radians and $\varphi \in [-0.45,0.45]$ radians. We use the following standard multirotor linear dynamics,
\begin{align} \label{eq:sim_agent_dyn2}
\ddot{x}= -g\theta, \, \,
\ddot{y} = g\varphi,\,\, \ddot{z}=u_1 -g, \,\,
\ddot{\varphi} = \frac{u_2}{I_{xx}},\,\, \ddot{\theta} = \frac{u_3}{I_{yy}},\,\, \ddot{\psi} = \frac{u_4}{I_{zz}},\vspace{-3mm}
\end{align}
where the planner control inputs $u_1, u_2, u_3, u_4$ correspond to the thrust force in the body frame and the three moments. The vehicle's moments of inertia are $I_{xx} = 0.0075\,\mathrm{kg\,m^2}$, $I_{yy} = 0.0075\,\mathrm{kg\,m^2}$, and $I_{zz} = 0.013\,\mathrm{kg\,m^2}$. The MPC planner has a horizon length of 10 steps and is updated at 20 Hz. It is implemented through a Sequential Convex Programming approach <cit.>.
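For simulation purposes, the continuous-time linear model (<ref>) can be discretized; a minimal forward-Euler sketch using the stated inertias (the integration scheme and step size are our assumptions — the SCP implementation may discretize differently):

```python
import numpy as np

G, DT = 9.81, 0.05                   # gravity, 20 Hz planner step (assumed)
IXX = IYY = 0.0075; IZZ = 0.013     # moments of inertia from the text [kg m^2]

def step(state, u):
    """One forward-Euler step of the linear multirotor model.
    state = [x, y, z, phi, theta, psi] + their rates; u = [u1, u2, u3, u4]."""
    q, qd = state[:6], state[6:]
    qdd = np.array([
        -G * q[4],        # x_dd   = -g * theta
         G * q[3],        # y_dd   =  g * phi
         u[0] - G,        # z_dd   =  u1 - g
         u[1] / IXX,      # phi_dd =  u2 / Ixx
         u[2] / IYY,      # theta_dd = u3 / Iyy
         u[3] / IZZ,      # psi_dd =  u4 / Izz
    ])
    return np.concatenate([q + DT * qd, qd + DT * qdd])
```

With zero attitude and thrust $u_1 = g$ the model hovers, which is a quick consistency check on the signs in (<ref>).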
Numerical simulations of the proposed MPC planner with ACP regions and dynamics (<ref>) are presented as the multirotor avoids a Frisbee that is thrown at it from various initial positions, velocities, and rotation speeds. The Frisbee is modeled following [Hummel, 2003], and we implement linear predictions of the trajectory arising from its nonlinear dynamics.
We conducted 1000 Monte Carlo simulations per allowed failure probability level $\delta$ to compare the numerical feasibility, percentage of success in obstacle avoidance (if the MPC planner is feasible), and the planner's conservativeness, as measured by the minimum distance between the obstacle and agent centers, i.e., $\bar{d}_{min}$ and $\sigma({d_{min}})$ describe the average and standard deviation of this minimum distance across simulations, respectively. We compare three uncertainty quantification techniques in Table <ref>, (1) The proposed ACP method (Algorithm <ref>), (2) empirical bootstrap prediction that accounts for the uncertainty in the predictions using the empirical bootstrap variance <cit.>, and (3) the sliding linear predictor with an Extended Kalman Filter (EKF) that approximates the uncertainty in the obstacle predictions as a Gaussian distribution (See Appendix <ref>).
Discussion: Table <ref> shows that our proposed method can successfully avoid the Frisbee while maintaining a significantly smaller average minimum distance ($\overline{d}_{min}$, $\sigma({d_{min}})$) from the Frisbee. That is, our approach avoids the conservatism of the other approaches due to the adaptivity of the uncertainty sets.
Our method can usefully adjust the prediction sets when the underlying uncertainty distribution is shifting (due to discrepancy in the linear dynamic predicted and the true nonlinear obstacle motion). We also note that the feasibility of the MPC optimization is worse for our method compared to [Wei et al., 2022] and the EKF predictor. This issue arises during sudden changes in the size of the uncertainty sets when the learning rate $\gamma$ is chosen too large. We will investigate this issue in future work by considering tools to ensure recursive feasibility <cit.> or by providing backup controllers <cit.> when the MPC is infeasible.
Case: Frisbee w/drag
$\delta$              |           $0.025$                    |            $0.05$
UQ method             | Proposed  [Wei et al., 2022]  w/EKF  | Proposed  [Wei et al., 2022]  w/EKF
$\%$Feas.             |  83.8         87.4            97.1   |  80.9         90.3            97.6
$\%$Succ.             |  99.2        100             100     | 100          100             100
$\overline{d}_{min}$  |   2.91        14.2             5.27  |   2.74         4.97            4.25
$\sigma(d_{min})$     |   1.25         2.04            1.28  |   1.3          1.97            1.11
Summary of results from MC simulations of system (<ref>). We used FACP for predicting uncertainty sets with learning rates $\gamma = \{0.0008,\, 0.0015,\, 0.003,\, 0.005,\, 0.009,\, 0.017,\,0.03,\,0.05,\,0.08,\, 0.13 \} $ and using the last $30$ measurements of the obstacle.
CARLA Simulations for Pedestrian Motion Prediction using LSTMs. We evaluate our MPC and ACP planning framework in the autonomous driving simulator CARLA on an intersection containing four pedestrians with random initial and goal positions. The pedestrians randomly speed up and slow down at a fixed time in the middle of each trial, which is something that our previous work from [Lindemann et al., 2022] would not be able to handle. We used the LSTM trajectory predictor from our previous work [Lindemann et al., 2022]. The simulator runs at 20Hz, while the LSTM and MPC run at 2Hz. The MPC uses standard double integrator dynamics to generate waypoints that get fed to PID controllers that control the car's throttle and steering angle. The LSTM has an observation window of 3 seconds and a prediction horizon of 5 seconds. The MPC uses a planning horizon of 5 seconds.
For the ACP, learning rates $\gamma = \{ 0.0008, 0.0015, 0.003, 0.005, 0.009, 0.017, 0.03, 0.05, 0.08\}$ and a desired coverage of 0.9 were used. At the beginning of each trial, we collect 20 seconds of pedestrian data, which corresponds to 40 data points, before beginning the FACP and MPC. We ran ... trials of the scenario. One such trial is shown in <ref>. For all ... trials, the car reached the goal location while avoiding the pedestrians. The ACP region coverage across all trials for the $10$ look-ahead prediction horizons (0.5 seconds to 5 seconds) is given somewhere. Finally, the 1-step and 8-step look-ahead errors and prediction region sizes are shown over time for a trial in Figure <ref>. The prediction errors spike when the pedestrians speed up, but for the 0.5 second prediction the ACP is quickly able to react by increasing its prediction region. However, the ACP for the 4 second prediction window takes much longer to increase its prediction region. That is because there is a 4 second delay between when the prediction is made and the ACP algorithm seeing what the error was. This highlights one potential drawback of the ACP approach: the longer it takes for the algorithm to get real-time feedback about the predictions, the longer it takes to adapt to distribution shifts.
The change of pedestrian speed is not something that the offline conformal prediction from our previous work in [Lindemann et al., 2022] could properly capture, unless it was explicitly encoded in the offline validation data. Even then, note that the LSTM errors for the different speeds are slightly different, which would lead to larger conformal regions even when the pedestrians are travelling at lower speeds. On the other hand, our adaptive approach can shrink its prediction regions when the pedestrians are travelling at low speeds and enlarge them when the pedestrians are travelling at high speeds.
The car (black dot) navigating an intersection with pedestrians (red stars). The LSTM predictions are shown as green dots and the ACP prediction regions are shown as green circles. The blue line shows the desired MPC path. In between the second and third subfigure, the pedestrians all speed up, which causes the predictions regions to rapidly expand to account for the new prediction errors.
§ CONCLUSION
We presented an algorithm for safe motion planning in an environment with other dynamic agents using ACP. Specifically, we considered a deterministic control system that uses state predictors to estimate the future motion of dynamic agents. We then leveraged ideas from ACP to dynamically quantify prediction uncertainty from an online data stream, and designed an uncertainty-informed model predictive controller to safely navigate among dynamic agents. In contrast to other data-driven prediction models that quantify prediction uncertainty in a heuristic manner, we quantify the true prediction uncertainty in a distribution-free, adaptive manner that even allows us to capture changes in prediction quality and the agents' motion.
Lars Lindemann, Matthew Cleaveland, and George J. Pappas were generously supported by NSF award CPS-2038873. The work of Anushri Dixit and Skylar Wei was supported in part by DARPA, through the LINC program.
[Agarwal et al., 2020]
Anish Agarwal, Abdullah Alomar, and Devavrat Shah.
On multivariate singular spectrum analysis and its variants, 2020.
URL <https://arxiv.org/abs/2006.13448>.
[Angelopoulos and Bates, 2021]
Anastasios N Angelopoulos and Stephen Bates.
A gentle introduction to conformal prediction and distribution-free
uncertainty quantification.
arXiv preprint arXiv:2107.07511, 2021.
[Aoude et al., 2013]
Georges S Aoude, Brandon D Luders, Joshua M Joseph, Nicholas Roy, and
Jonathan P How.
Probabilistically safe motion planning to avoid dynamic obstacles
with uncertain motion patterns.
Autonomous Robots, 35(1):51–76, 2013.
[Bansal et al., 2020]
Somil Bansal, Andrea Bajcsy, Ellis Ratner, Anca D Dragan, and Claire J Tomlin.
A Hamilton-Jacobi reachability-based framework for predicting and
analyzing human motion for safe planning.
In 2020 IEEE International Conference on Robotics and
Automation (ICRA), pages 7149–7155. IEEE, 2020.
[Bastani et al., 2022]
Osbert Bastani, Varun Gupta, Christopher Jung, Georgy Noarov, Ramya Ramalingam,
and Aaron Roth.
Practical adversarial multivalid conformal prediction.
arXiv preprint arXiv:2206.01067, 2022.
[Bortolussi et al., 2019]
Luca Bortolussi, Francesca Cairoli, Nicola Paoletti, Scott A Smolka, and
Scott D Stoller.
Neural predictive monitoring.
In International Conference on Runtime Verification, pages
129–147. Springer, 2019.
[Busch et al., 2022]
Finn Lukas Busch, Jake Johnson, Edward L. Zhu, and Francesco Borrelli.
A Gaussian process model for opponent prediction in autonomous
racing, 2022.
URL <https://arxiv.org/abs/2204.12533>.
[Chen et al., 2020]
Yuxiao Chen, Ugo Rosolia, Chuchu Fan, Aaron D Ames, and Richard Murray.
Reactive motion planning with probabilistic safety guarantees.
arXiv preprint arXiv:2011.03590, 2020.
[Choi et al., 2017]
Sungjoon Choi, Eunwoo Kim, Kyungjae Lee, and Songhwai Oh.
Real-time nonparametric reactive navigation of mobile robots in
dynamic environments.
Robotics and Autonomous Systems, 91:11–24, 2017.
[Dietterich and Hostetler, 2022]
Thomas G Dietterich and Jesse Hostetler.
Conformal prediction intervals for Markov decision processes.
arXiv preprint arXiv:2206.04860, 2022.
[Du Toit and Burdick, 2011]
Noel E Du Toit and Joel W Burdick.
Robot motion planning in dynamic, uncertain environments.
IEEE Transactions on Robotics, 28(1):101–115, 2011.
[Everett et al., 2021]
Michael Everett, Yu Fan Chen, and Jonathan P How.
Collision avoidance in pedestrian-rich environments with deep
reinforcement learning.
IEEE Access, 9:10357–10377, 2021.
[Fan et al., 2021]
D.D. Fan, K. Otsu, Y. Kubo, A. Dixit, J.W. Burdick, and A.-A. Agha-Mohammadi.
Step: Stochastic traversability evaluation and planning for
risk-aware off-road navigation.
In Robot.: Sci. Syst, 2021.
[Farid et al., 2022]
Alec Farid, Sushant Veer, Boris Ivanovic, Karen Leung, and Marco Pavone.
Task-relevant failure detection for trajectory predictors in
autonomous vehicles.
arXiv preprint arXiv:2207.12380, 2022.
[Fisac et al., 2018]
Jaime F Fisac, Andrea Bajcsy, Sylvia L Herbert, David Fridovich-Keil, Steven
Wang, Claire J Tomlin, and Anca D Dragan.
Probabilistically safe robot planning with confidence-based human predictions.
In 14th Robotics: Science and Systems, RSS 2018. MIT Press
Journals, 2018.
[Fox et al., 1997]
Dieter Fox, Wolfram Burgard, and Sebastian Thrun.
The dynamic window approach to collision avoidance.
IEEE Robotics & Automation Magazine, 4(1):23–33, 1997.
[Fridovich-Keil et al., 2020]
David Fridovich-Keil, Andrea Bajcsy, Jaime F Fisac, Sylvia L Herbert, Steven
Wang, Anca D Dragan, and Claire J Tomlin.
Confidence-aware motion prediction for real-time collision avoidance.
The International Journal of Robotics Research, 39(2-3):250–265, 2020.
[Fulgenzi et al., 2008]
Chiara Fulgenzi, Christopher Tay, Anne Spalanzani, and Christian Laugier.
Probabilistic navigation in dynamic environment using
rapidly-exploring random trees and Gaussian processes.
In 2008 IEEE/RSJ International Conference on Intelligent Robots
and Systems, pages 1056–1062. IEEE, 2008.
[Gavish and Donoho, 2014]
Matan Gavish and David L. Donoho.
The optimal hard threshold for singular values is $4/\sqrt {3}$.
IEEE Transactions on Information Theory, 60(8): 5040–5053, 2014.
[Gibbs and Candes, 2021]
Isaac Gibbs and Emmanuel Candes.
Adaptive conformal inference under distribution shift.
Advances in Neural Information Processing Systems, 34: 1660–1672, 2021.
[Gibbs and Candès, 2022]
Isaac Gibbs and Emmanuel Candès.
Conformal inference for online prediction with arbitrary distribution shifts.
arXiv preprint arXiv:2208.08401, 2022.
[Golyandina et al., 2001]
Nina Golyandina, Vladimir Nekrutkin, and Anatoly A Zhigljavsky.
Analysis of time series structure: SSA and related techniques.
CRC press, 2001.
[Hewing et al., 2020]
Lukas Hewing, Kim P. Wabersich, and Melanie N. Zeilinger.
Recursively feasible stochastic model predictive control using
indirect feedback.
Automatica, 119: 109095, 2020.
ISSN 0005-1098.
[Hummel, 2003]
S.A. Hummel.
Frisbee flight simulation and throw biomechanics.
University of California, Davis, 2003.
[Kalluraya et al., 2022]
Samarth Kalluraya, George J. Pappas, and Yiannis Kantaros.
Multi-robot mission planning in dynamic semantic environments.
arXiv preprint arXiv:2209.06323, 2022.
[Kretzschmar et al., 2016]
Henrik Kretzschmar, Markus Spies, Christoph Sprunk, and Wolfram Burgard.
Socially compliant mobile robot navigation via inverse reinforcement learning.
The International Journal of Robotics Research, 35(11): 1289–1307, 2016.
[Lindemann et al., 2022]
Lars Lindemann, Matthew Cleaveland, Gihyun Shim, and George J Pappas.
Safe planning in dynamic environments using conformal prediction.
arXiv preprint arXiv:2210.10254, 2022a.
[Lindemann et al., 2022]
Lars Lindemann, Xin Qin, Jyotirmoy V Deshmukh, and George J Pappas.
Conformal prediction for STL runtime verification.
arXiv preprint arXiv:2211.01539, 2022b.
[Luo et al., 2021]
Rachel Luo, Shengjia Zhao, Jonathan Kuck, Boris Ivanovic, Silvio Savarese,
Edward Schmerling, and Marco Pavone.
Sample-efficient safety assurances using conformal prediction.
arXiv preprint arXiv:2109.14082, 2021.
[Majd et al., 2021]
Keyvan Majd, Shakiba Yaghoubi, Tomoya Yamaguchi, Bardh Hoxha, Danil Prokhorov,
and Georgios Fainekos.
Safe navigation in human occupied environments using sampling and
control barrier functions.
In 2021 IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS), pages 5794–5800. IEEE, 2021.
[Mitsch et al., 2013]
Stefan Mitsch, Khalil Ghorbal, and André Platzer.
On provably safe obstacle avoidance for autonomous robotic ground vehicles.
In Robotics: Science and Systems IX, Technische Universität
Berlin, Berlin, Germany, June 24-June 28, 2013, 2013.
[Morgan et al., 2014]
D. Morgan, S.J. Chung, and F. Hadaegh.
Model predictive control of swarms of spacecraft using sequential
convex programming.
Journal of Guidance, Control, and Dynamics, 37: 1–16, 2014.
[Nair et al., 2022]
Siddharth H. Nair, Vijay Govindarajan, Theresa Lin, Chris Meissen, H. Eric
Tseng, and Francesco Borrelli.
Stochastic MPC with multi-modal predictions for traffic intersections.
In 2022 IEEE 25th International Conference on Intelligent
Transportation Systems (ITSC), pages 635–640, 2022.
[Nakamura and Bansal, 2022]
Kensuke Nakamura and Somil Bansal.
Online update of safety assurances using confidence-based predictions.
arXiv preprint arXiv:2210.01199, 2022.
[Omainska et al., 2021]
Marco Omainska, Junya Yamauchi, Thomas Beckers, Takeshi Hatanaka, Sandra
Hirche, and Masayuki Fujita.
Gaussian process-based visual pursuit control with unknown target
motion learning in three dimensions.
SICE Journal of Control, Measurement, and System Integration,
14(1): 116–127, 2021.
[Phillips and Likhachev, 2011]
Mike Phillips and Maxim Likhachev.
SIPP: Safe interval path planning for dynamic environments.
In 2011 IEEE International Conference on Robotics and
Automation, pages 5628–5635. IEEE, 2011.
[Qin et al., 2022]
Xin Qin, Yuan Xian, Aditya Zutshi, Chuchu Fan, and Jyotirmoy V Deshmukh.
Statistical verification of cyber-physical systems using surrogate
models and conformal inference.
In 2022 ACM/IEEE 13th International Conference on
Cyber-Physical Systems (ICCPS), pages 116–126. IEEE, 2022.
[Renganathan et al., 2020]
Venkatraman Renganathan, Iman Shames, and Tyler H Summers.
Towards integrated perception and motion planning with
distributionally robust risk constraints.
IFAC-PapersOnLine, 53(2): 15530–15536, 2020.
[Renganathan et al., 2022]
Venkatraman Renganathan, Sleiman Safaoui, Aadi Kothari, Benjamin Gravell, Iman
Shames, and Tyler Summers.
Risk bounded nonlinear robot motion planning with integrated
perception & control.
arXiv preprint arXiv:2201.01483, 2022.
[Shafer and Vovk, 2008]
Glenn Shafer and Vladimir Vovk.
A tutorial on conformal prediction.
Journal of Machine Learning Research, 9(3), 2008.
[Shinbrot et al., 1992]
Troy Shinbrot, Celso A Grebogi, Jack Wisdom, and James A Yorke.
Chaos in a double pendulum.
American Journal of Physics, 1992.
[Singletary et al., 2022]
Andrew Singletary, Aiden Swann, Ivan Dario Jimenez Rodriguez, and Aaron D. Ames.
Safe drone flight with time-varying backup controllers, 2022.
URL <https://arxiv.org/abs/2207.05220>.
[Takens, 1981]
F. Takens.
Detecting strange attractors in turbulence.
In Dynamical systems and turbulence, pages 366–381. Springer, 1981.
[Tanner et al., 2003]
Herbert G Tanner, Savvas G Loizou, and Kostas J Kyriakopoulos.
Nonholonomic navigation and control of cooperating mobile manipulators.
IEEE Transactions on Robotics and Automation, 19(1):
53–64, 2003.
[Dimarogonas et al., 2006]
Dimos V. Dimarogonas et al.
A feedback stabilization and collision avoidance scheme for multiple
independent non-point agents.
Automatica, 42(2): 229–243, 2006.
[Thomas et al., 2021]
Antony Thomas, Fulvio Mastrogiovanni, and Marco Baglietto.
Probabilistic collision constraint for motion planning in dynamic environments.
In International Conference on Intelligent Autonomous Systems,
pages 141–154. Springer, 2021.
[Tibshirani et al., 2019]
Ryan J Tibshirani, Rina Foygel Barber, Emmanuel Candes, and Aaditya Ramdas.
Conformal prediction under covariate shift.
Advances in neural information processing systems, 32, 2019.
[Tordesillas et al., 2020]
Jesus Tordesillas, Brett T. Lopez, Michael Everett, and Jonathan P. How.
Faster: Fast and safe trajectory planner for navigation in unknown
environments, 2020.
URL <https://arxiv.org/abs/2001.04420>.
[Trautman and Krause, 2010]
Peter Trautman and Andreas Krause.
Unfreezing the robot: Navigation in dense, interacting crowds.
In 2010 IEEE/RSJ International Conference on Intelligent Robots
and Systems, pages 797–803. IEEE, 2010.
[Vovk et al., 2005]
Vladimir Vovk, Alexander Gammerman, and Glenn Shafer.
Algorithmic learning in a random world.
Springer Science & Business Media, 2005.
[Wang et al., 2022]
Allan Wang, Christoforos Mavrogiannis, and Aaron Steinfeld.
Group-based motion prediction for navigation in crowded environments.
In Conference on Robot Learning, pages 871–882. PMLR, 2022.
[Wei et al., 2022]
Skylar X. Wei, Anushri Dixit, Shashank Tomar, and Joel W. Burdick.
Moving obstacle avoidance: A data-driven risk-aware approach.
IEEE Control Systems Letters, 7: 289–294, 2022.
[Yoon et al., 2021]
Youngmin Yoon, Changhee Kim, Jongmin Lee, and Kyongsu Yi.
Interaction-aware probabilistic trajectory prediction of cut-in
vehicles using gaussian process for proactive control of autonomous vehicles.
IEEE Access, 9: 63440–63455, 2021.
[Zaffran et al., 2022]
Margaux Zaffran, Olivier Féron, Yannig Goude, Julie Josse, and Aymeric Dieuleveut.
Adaptive conformal predictions for time series.
In International Conference on Machine Learning, pages
25834–25866. PMLR, 2022.
§ SLIDING LINEAR PREDICTOR WITH EXTENDED KALMAN FILTER
Given observations $\{y\}_{0}^{t}$ up to the current time $t\ge 0$ of a discrete-time multivariate stochastic process, we assume the agent is governed by
\begin{equation}
z_{t+1} = f_d^{agent}(z_{t}),\quad \quad y_t = h_d(z_{t}) + \xi_t
\end{equation}
where $f_d^{agent}$ and $h_d$ are smooth (infinitely differentiable) functions: the unknown state transition function acting on the full agent state $z_t$, and the observation function mapping $z_t$ to the (partial) observables $\{y\}_{0}^{t}$, respectively. The observables are corrupted by independent, identically distributed Gaussian noise $\{\xi\}_{0}^{t}$ with $\xi_i \sim \mathcal{N}(0,\sigma^2)$.
Our goal is to obtain predictions $(\hat{y}_t^1,\hdots,\hat{y}_t^H)$ at time $t$ of future agent states $(Y_{t+1},\hdots,Y_{t+H})$ from past observations $(y_{0},\hdots,y_t)$ using the Predict function.
We let the vector $g_{0:L-1}^{(i)}\triangleq[y^{(i)}_0,\hdots,y^{(i)}_{L-1}]^{T} \in \mathbb{R}^{L}$ be the $L$-delay embedding of the $i^{th}$ observable. As additional observables are acquired over time, we can construct the trajectory matrix of the $i^{th}$ observable $\{y^{(i)}_0,\cdots,y^{(i)}_{N}\}$, also known as the Hankel matrix:
\begin{align} \label{eq:hankel}
H^{(i)}_{[L,N]} = \begin{bmatrix}
g^{(i)}_{0:L-1} & g^{(i)}_{1:L} &\cdots & g^{(i)}_{N-L+1:N}
\end{bmatrix} = U\Sigma V^*
\end{align}
The matrix of left singular vectors $U = \begin{bmatrix}{\mu}_1, \cdots, {\mu}_L \end{bmatrix}$ is orthonormal, and the principal components of $H^{(i)}_{[L,N]}$ are the columns of $V$.
To efficiently separate the noise from the true signal, we follow [Agarwal et al., 2020] and introduce the Page matrix representation of the observables $\{y_0,\cdots,y_{TL-1}\}$. We construct the $L$-embedding Page matrix, denoted $P^{(i)}_{[L,TL]}$:
\begin{equation}
P^{(i)}_{[L,TL]} = \begin{bmatrix} g^{(i)}_{0:L-1} & g^{(i)}_{L:2L-1} &\cdots & g^{(i)}_{(T-1)L:TL-1}
\end{bmatrix}= U_P \Sigma_P V_P^*
\end{equation}
Unlike the Hankel matrix above, the Page matrix has no repeated entries, which enables us to leverage the result of [Gavish and Donoho, 2014]: an optimal hard singular value threshold (optHSVT) algorithm, optimal with respect to the mean squared error, for any unknown $m\times n$ matrix corrupted by zero-mean, independent and identically distributed noise.
In summary, the optHSVT algorithm provides a threshold $\sigma_{HSVT}$ that partitions the Page matrix as
\begin{equation} \label{eq:page_noise_signal_separation}
P^{(i)}_{[L,TL]} = \underbrace{\sum_{\rho = 1}^{n_{HSVT}} \sigma_\rho \mathbold{\mu}_\rho\mathbold{\nu}_\rho^T}_{\approx\,\, \mbox{signal}} + \underbrace{\sum_{\rho=n_{HSVT}+1}^{\min\{L, T\}}\sigma_\rho \mathbold{\mu}_\rho\mathbold{\nu}_\rho^T}_{\approx\,\, \mbox{noise}}.
\end{equation}
where $n_{HSVT}$ is the largest index $\rho$ for which $\sigma_\rho\left( P^{(i)}_{[L,TL]}\right) \geq \sigma_{HSVT}$.
The Hankel and Page constructions share the same rank, allowing us to use the Page matrix to recover the rank of the system and avoid ill-conditioned matrix inversions. As a result, we extract a linear predictor using the pseudo-inverse, $\Lambda_t = \hat{H}_{[L,2L]}^{(i),2:L} \big(\hat{H}_{[L,2L]}^{(i),1:L-1}\big)^{\dagger}$ (similar to the minimum linear recurrence result in [Golyandina et al., 2001]). In particular, we denote
$$\hat{H}^{(i),2:L}_{[L,2L]} = \Big[ g^{(i)}_{t-2L:t-L-1}, g^{(i)}_{t-2L+1:t-L}, \cdots, g^{(i)}_{t-L:t-1}\Big],$$
$$\hat{H}^{(i),1:L-1}_{[L,2L]} = \Big[ g^{(i)}_{t-2L+1:t-L}, g^{(i)}_{t-2L+2:t-L+1},\cdots, g^{(i)}_{t-L+1:t}\Big],$$
which are both $L\times L$ matrices. The $\hat{\cdot}$ operation denotes reconstructing the Hankel matrices from the first $n_{HSVT}$ pairs of singular vectors and singular values. At each instance, an $H$-step prediction simply extracts the last $H$ elements of the last column of $(\Lambda_t)^{H}\hat{H}^{(i),2:L}_{[L,2L]}$. This linear predictor model is updated as soon as new measurements are obtained. Further, we employ a standard Extended Kalman Filter (EKF), which allows us to incorporate the new measurements observed over time, where the instantaneous $\Lambda_t$ is used as an approximation of the Jacobian of the agent's state transition function.
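The pipeline above (Page-matrix denoising by a hard singular-value threshold, then a linear recurrence fitted from the thresholded Hankel matrix) can be sketched in a few lines of numpy. This is a simplified illustration: the threshold is passed in by hand rather than computed with the optHSVT formula, and the predictor is written as a one-step shift fitted by least squares.

```python
import numpy as np

def page_matrix(y, L):
    """Stack T non-overlapping L-delay embeddings of a scalar series y
    (length T*L) as the columns of an L x T Page matrix."""
    T = len(y) // L
    return np.column_stack([y[k * L:(k + 1) * L] for k in range(T)])

def hankel_matrix(y, L):
    """All overlapping L-delay embeddings as columns (trajectory matrix)."""
    return np.column_stack([y[k:k + L] for k in range(len(y) - L + 1)])

def hard_threshold_svd(M, sigma_hsvt):
    """Reconstruct M from the singular triplets whose singular value
    exceeds the threshold (stand-in for the optHSVT threshold)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    n = int(np.sum(s >= sigma_hsvt))
    return (U[:, :n] * s[:n]) @ Vt[:n], n

# toy signal: a noisy sinusoid (a rank-2 linear recurrence), split into
# T = 8 non-overlapping windows of length L = 4
rng = np.random.default_rng(0)
L, T = 4, 8
k = np.arange(L * T)
y = np.sin(0.4 * k) + 1e-3 * rng.standard_normal(L * T)

# denoise the Page matrix and recover the effective rank
P_hat, n_hsvt = hard_threshold_svd(page_matrix(y, L), sigma_hsvt=0.1)

# linear predictor from the denoised Hankel matrix of the last 2L samples:
# Lam maps each delay vector to the next one (one-step shift)
H_hat, _ = hard_threshold_svd(hankel_matrix(y[-2 * L:], L), sigma_hsvt=0.1)
Lam = H_hat[:, 1:] @ np.linalg.pinv(H_hat[:, :-1], rcond=1e-6)
y_next = (Lam @ H_hat[:, -1])[-1]  # one-step-ahead forecast of y
```

The truncated pseudo-inverse (`rcond`) plays the same role as the $\hat{\cdot}$ reconstruction: it keeps the inversion restricted to the recovered signal subspace.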
# Localization vs. Semantics:
How Can Language Benefit Visual Representation Learning?
Zhuowan Li$^{1}$ Cihang Xie$^{2}$ Benjamin Van Durme$^{1}$ Alan Yuille$^{1}$
$^{1}$Johns Hopkins University $^{2}$University of California, Santa Cruz
###### Abstract
Despite the superior performance brought by vision-and-language pretraining,
it remains unclear whether learning with multi-modal data can help understand
each individual modality. In this work, we investigate how language can help
with visual representation learning from a probing perspective. Specifically,
we compare vision-and-language and vision-only models by probing their visual
representations on a broad range of tasks, in order to assess the quality of
the learned representations in a fine-grained manner. Interestingly, our
probing results suggest that vision-and-language models are better at label
prediction tasks like object and attribute prediction, while vision-only
models are stronger at dense prediction tasks that require more localized
information. With further analysis using detailed metrics, our study suggests
that language helps vision models learn better semantics, but not
localization. Code is released at https://github.com/Lizw14/visual_probing.
## 1 Introduction
Humans learn about the world through both vision and language, and the
understanding of both modalities offers mutual benefits. For example, pointing
to images helps children learn the meaning of words [7], and specific
descriptions of pictures (_e.g_., the yellow is to the left of the black) help
children memorize visual features better [12]. For machines, the joint
understanding of vision and language is greatly promoted by recent
developments in vision-and-language pretraining (VLP).
Recent VLP methods achieve impressive performance, not only on multi-modal
tasks like visual question answering, but also on uni-modal vision tasks or
language tasks (_e.g_., ImageNet classification [11], GLUE [53] language
understanding). For example, contrastive methods like CLIP [39] and ALIGN
[24] learn strong visual models for transfer learning; unified foundation
models like OFA [54] and FLAVA [43] target tasks in vision, language, and
vision-and-language all at once with a single model. Unlike earlier VLP
methods (_e.g_., LXMERT [45], OSCAR [29]) that rely on image features
extracted by separately trained feature extractors, these recent developments
in VLP learn feature representations directly from raw image pixels in an end-
to-end manner. This learning paradigm enables textual knowledge to propagate
into the visual encoder, potentially allowing visual understanding to benefit
from learning together with language.
Figure 1: We compare the visual representations of vision-and-language models
with vision-only models from a probing perspective. Probing results on five
tasks suggest that language helps vision models learn better semantics, but
not localization.
Despite the superior performance, there is little understanding of how multi-
modal training can benefit visual learning. Meanwhile, interestingly, we note
the opposite question of how vision can help language has already been
extensively studied in NLP—existing studies suggest that vision helps language
by improving grounding [46, 23], reducing reporting bias [16], and enhancing
commonsense knowledge [59], _etc_. Inspired by these works, we take a similar
but further step—we aim to understand _how language can benefit visual
learning_ by comparing multi-modal and uni-modal visual representations in
terms of a broad spectrum of abilities, _e.g_., predicting labels and dense
maps, understanding single/multi-object images.
To this end, we compare vision-and-language (VL) models and vision-only (V)
models from a probing perspective. Probing, which is to train a low-capacity
model to predict linguistic properties from frozen representations, is a
commonly used technique in NLP for interpreting the representations of trained
models [42, 6]. Using probing, language model representations have been found
to encode a broad range of linguistic properties like part-of-speech [5] or
sentence length [1]. Following this paradigm, we first extract image features
using different pretrained models, and then train a simple prediction head to
align the model’s representation space with the label space of interest. We
make the head as simple as possible based on the intuition that less
expressive heads can more selectively reflect the quality of the
representations [21]. The probing is done on various tasks and datasets:
object name classification on the Visual Genome dataset [26], attribute
prediction on the VAW dataset [37], object detection and instance segmentation
on the MSCOCO dataset [31], and semantic object part segmentation on the
PartImageNet dataset [17]. With these probing tasks, we compare advanced
vision-only models including MAE [18] and MOCOv3 [8], with vision-and-language
pretrained models including OFA [54], FLAVA [43] and CLIP [39].
Our experiments draw several interesting conclusions. Firstly, we find that VL
models are much better at label prediction tasks (_e.g_., object/attribute
prediction), while vision-only models are stronger at dense prediction tasks
like detection and segmentation. In other words, language improves the
semantic information in visual representations to better predict fine-grained
labels, but does not enrich the localization information that is required by
spatial-aware tasks. We further verify our findings with a more detailed
analysis. Secondly, our results on attribute prediction reveal distinct
expertise of vision and VL models: vision models are better for predicting
visually grounded attributes like textures, while VL models are better at more
abstract attributes like actions. We also study the influence of finetuning on
downstream tasks, and find that finetuning degrades the probing results to
varying degrees, depending on the downstream task. More analysis is described
in Sec. 4.
In summary, this work aims to understand the role of language in visual
learning from a holistic perspective, which goes beyond the high performance
of VLP. To achieve this goal, we borrow the idea of probing from the NLP
field, and probe the visual representation in pretrained models on a broad
spectrum of tasks that measure various properties of the representations. As
shown in Fig. 1, our probing results suggest that training with language helps
vision models learn better semantics, but not localization. We hope our
findings provide insights for improving vision models with multi-modal
knowledge.
## 2 Related work
#### Pretrained models.
Inspired by the success of pretrained transformers [51] in NLP like BERT [25,
34], the multi-modal and computer vision fields have shifted to explore the
transformer-based pretraining strategy. Earlier multi-modal pretraining
methods take image features extracted by separately trained vision models like
Faster-RCNN [19] or ResNet [20] as input, then train transformers to fuse the
object visual features and language. Representative works, including LXMERT
[45], UNITER [9], OSCAR [29], VinVL [60], etc., perform well on multi-modal
downstream tasks like visual question answering [3] and image captioning [52].
Recently, thanks to the invention of the Vision Transformer (ViT) [13] and the
follow-up improvements [49, 4, 35, 18], end-to-end pretraining from raw image
pixels has become possible. End-to-end trained VLP models perform well not only
on multi-modal tasks, but also on uni-modal tasks like image classification and
language understanding. For example, dual encoders trained with a contrastive
loss like CLIP [39] and ALIGN [24] achieve superior visual learning
performance. The unified transformer architecture for language and images
greatly boosts the development of unified foundation models that solve
language, vision, and multi-modal tasks all at once. Representative works
include OFA [54], Florence [58], FLAVA [43], Unified-IO [36], CoCa [57], and
SimVLM [55], etc. We refer readers to [15] for more details.
#### Vision and language benefit each other.
Several recent works in NLP suggest that vision can help language
understanding. Vokenization [46], Z-LaVI [56] and VIDLANKD [47] show language
understanding performance can be improved by better grounding, visual
imagination, or knowledge distillation from videos. Recent work [59] analyzes
language and multi-modal models and shows that vision can help language models
learn better commonsense knowledge and mitigate reporting bias. However, there
is little understanding of how language can help vision, which goes beyond the
high performance achieved by CLIP, ALIGN, etc. This paper aims to analyze
these advanced VLP methods and concretely understand their advantages in
learning.
#### Probing.
Probing is a widely used strategy in NLP for interpreting representations [42,
6]. Given a fixed source of representations (_e.g_., word embeddings from
language models) and a linguistic property of interest (_e.g_., part-of-
speech), a probing head with restricted capacity is trained to predict the
property from the representations. Note that only the probing head is trained
while the representations from pretrained models are frozen during the process
[2]. Various works use probing to show that language representations encode a
broad range of properties like part-of-speech [5], syntax [22], semantics
[27], sentence length [1], etc., and to compare different language models in
those properties [48]. Simple probing heads are most commonly used based on
the intuition that we want to assess implicit and easily accessible
information in a representation [2, 33]. Recently, more complex probing heads
(_e.g_., multi-layer perceptron) are also suggested [38, 10], and the effects
of probing complexity have been studied [21].
Probing has also been adopted to understand multimodal representations in
terms of the capacity for instance retrieval [23], inter-modality knowledge
[41], understanding of verbs [32], entity and syntactic grounding [28], and
visual commonsense knowledge [59], etc. With probing, multi-modal VL models
are compared with uni-modal language models to assess the advantage of multi-
modal learning. In computer vision, linear probing performance is used as a
fast on-the-fly metric for model evaluation [13, 18, 8], which is
complementary to fine-tuning given its low computational cost. While probing
has rarely been used to understand visual representations, we design various
visual prediction tasks to assess the encoded visual knowledge
in the representations. To our knowledge, we are the first to compare VL
models and vision-only models using probing.
## 3 Method
To analyze the capacity of the learned representations of different models, we
choose a set of tasks to probe the models. For each task, we first extract
features using the pretrained models, then we train a simple standard head to
predict the results. We keep the head as simple as possible: with a simple,
limited-capacity head, the probing performance is restricted by the amount of
information that is explicitly encoded in the representation, rather than by
what the head itself can learn (as suggested in [21]).
Mathematically, for every image $I\in\mathbb{R}^{3\times w\times h}$ and its
features $f$, where $f\in\mathbb{R}^{C\times W\times H}$, a prediction head
$H$ is trained to predict the task-specific results. Here $(w,h)$ is the size
of the input image and $(C,W,H)$ is the size of the feature. In the whole
process, only the head is trained while the pretrained model (_i.e_., feature
extractor) is frozen.
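The probing protocol can be sketched in numpy with stand-in components. Here the "pretrained model" is a hypothetical fixed random projection (in the paper it is CLIP/OFA/FLAVA/MAE/MOCOv3), and the linear head is fit by ridge regression onto one-hot targets instead of the cross-entropy training used in the paper; the essential point, that only the head is fit while the extractor stays frozen, is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_FEAT, N_CLS, N = 32, 16, 5, 512

# stand-in for a pretrained visual encoder: a FIXED random projection
# followed by a ReLU; its weights are never updated during probing
W_frozen = rng.standard_normal((D_IN, D_FEAT)) / np.sqrt(D_IN)

def extract(x):
    # frozen feature extractor
    return np.maximum(x @ W_frozen, 0.0)

# synthetic (flattened) images and labels for the probing task
X = rng.standard_normal((N, D_IN))
labels = rng.integers(0, N_CLS, size=N)

# probing: fit ONLY a linear head on the frozen features
F = extract(X)
Y = np.eye(N_CLS)[labels]                       # one-hot targets
head = np.linalg.solve(F.T @ F + 1e-3 * np.eye(D_FEAT), F.T @ Y)
pred = (F @ head).argmax(axis=1)
probe_acc = float((pred == labels).mean())
```

Swapping in a stronger frozen extractor changes `probe_acc` without touching the head, which is exactly how the models below are compared.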
In this section, we will first describe the probing tasks, datasets and the
design of the prediction head for each task (Sec. 3.1), then we describe the
models we evaluated (Sec. 3.2), and finally how to make the comparison
settings fair for every model (Sec. 3.3).
### 3.1 Probing tasks and datasets
Task | Dataset | # of classes | Metric | Prediction head
---|---|---|---|---
object name prediction | Visual Genome [26] | 151 | accuracy | linear classifier on ROI features
attribute prediction | VAW [37] | 620 | mAP | linear classifier on ROI features
part semantic segmentation | PartImageNet [17] | 40 | mIOU | head from Segmenter [44]
object detection | MSCOCO [31] | 80 | mAP | head from VitDet [30]
instance segmentation | MSCOCO [31] | 80 | mAP | head from VitDet [30]
Table 1: The details of {dataset, number of classes, metric, prediction head}
for the five probing tasks.
We choose five probing tasks: object name prediction, attribute prediction,
object detection, instance segmentation and semantic segmentation for object
parts. Among the five tasks, object name and attribute prediction focus more
on predicting the semantic labels, while the others are dense prediction tasks
that highly rely on spatial information.
#### Object name prediction.
Information about object names is a key component in various multi-modal
downstream tasks like VQA and image captioning, in which text descriptions
refer to objects by their names. Given an image and a bounding box, object
name prediction requires predicting the name of the object in the box. We use
the Visual Genome dataset [26] for training and evaluation in this task.
Images in Visual Genome mostly come from MSCOCO [31] and contain multiple
objects. For each object, the annotations provide its bounding box, name and
attributes (color, material, etc.). The annotations cover 151 object classes
for 1.3 million objects in 108k images.
A simple linear classifier is used to predict object names. More specifically,
for each object, we first use ROI-Pooling [40] to average pool the features
according to its box, then use a linear layer on top of the pooled features to
predict the name class of the object. Cross entropy loss is used to train the
head. Note that the ground-truth bounding box coordinates are provided to the
head for both training and testing.
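The box-conditioned pooling step can be illustrated with a minimal sketch: average-pool the feature map inside the (scaled) ground-truth box, then apply a linear layer over the 151 name classes. This is a simplification of ROI-Pooling [40] (no spatial bins, no interpolation), and the classifier weights here are hand-set placeholders rather than trained ones.

```python
import numpy as np

def roi_avg_pool(feat, box, img_size):
    """Average-pool a C x W x H feature map over a bounding box
    (x0, y0, x1, y1) given in image coordinates."""
    C, W, H = feat.shape
    w_img, h_img = img_size
    x0 = int(box[0] / w_img * W)
    y0 = int(box[1] / h_img * H)
    x1 = max(x0 + 1, int(np.ceil(box[2] / w_img * W)))
    y1 = max(y0 + 1, int(np.ceil(box[3] / h_img * H)))
    return feat[:, x0:x1, y0:y1].mean(axis=(1, 2))

# 2-channel 4x4 feature map; pool the top-left quarter of a 224x224 image
feat = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
pooled = roi_avg_pool(feat, box=(0, 0, 112, 112), img_size=(224, 224))

# linear name classifier on the pooled vector (placeholder weights; in
# the probing setup this layer is trained with cross entropy over the
# 151 Visual Genome name classes)
W_cls = np.zeros((151, 2))
W_cls[3, 0] = 1.0            # pretend class 3 responds to channel 0
logits = W_cls @ pooled
name_idx = int(np.argmax(logits))
```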
#### Object attribute prediction.
Similar to object name prediction, attribute prediction requires predicting
attributes for the object in the given bounding box. As shown in [60], visual
features with better-encoded attribute information can substantially improve
the performance of multi-modal tasks. This motivates us to treat the attribute
as an important axis for evaluating visual representation. The VAW dataset
[37] is used for object attribute prediction. VAW improves the noisy attribute
annotations in Visual Genome. VAW annotates 620 attributes belonging to 8
categories, including color, shape, size, material, texture, action, state,
and others. Every attribute is annotated as positive, negative, or unknown for
each instance. The annotation covers 260k instances from 72k images, which is
a subset of Visual Genome images. Mean average precision (mAP) is used to
evaluate the prediction results following [37].
Since attribute prediction is formulated as a multi-label classification
problem, the prediction head is similar to object name prediction, but has
several differences. First, binary cross entropy loss is used for training
instead of cross entropy. Second, since the attributes naturally come with a
long-tailed distribution, to prevent the rare attributes (_e.g_., playing)
from being overridden by the frequent ones (_e.g_., black), we assign higher
weights to rare attributes and lower weights to frequent ones. Third, for the
attributes labeled as unknown, we treat them as negative labels with a small
(0.001) weight. Those strategies are borrowed from [37].
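The three adjustments above, binary cross entropy, per-class weights for rare attributes, and tiny-weight negatives for unknowns, can be combined into one weighted loss. The sketch below uses illustrative numbers; the actual weighting scheme follows [37].

```python
import numpy as np

def weighted_bce(logits, targets, class_weights, unknown_mask,
                 unk_weight=0.001):
    """Binary cross entropy over attributes: rare classes get larger
    class_weights, and 'unknown' annotations act as negatives with a
    tiny weight (numbers here are illustrative)."""
    p = 1.0 / (1.0 + np.exp(-logits))            # per-attribute sigmoid
    eps = 1e-9
    w = np.where(unknown_mask, unk_weight, class_weights)
    t = np.where(unknown_mask, 0.0, targets)     # unknowns -> negatives
    loss = -(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))
    return float((w * loss).mean())

logits = np.array([2.0, -1.0, 0.5])
targets = np.array([1.0, 0.0, 1.0])
weights = np.array([1.0, 1.0, 5.0])              # third attribute is rare
known = np.array([False, False, False])
loss = weighted_bce(logits, targets, weights, known)
```

Marking an attribute as unknown shrinks its contribution by roughly three orders of magnitude, so unannotated labels barely influence the head.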
#### Object detection and instance segmentation.
While object name/attribute prediction tests the ability to predict class
labels when the object bounding box is given, we are also interested in tasks
that focus more on locating the objects. We choose object detection and
instance segmentation on MSCOCO [31] for this purpose. MSCOCO contains 330K
images with 1.5 million object instances in 80 categories. The bounding box
and segmentation mask are annotated for each instance. mAP, _i.e_., mean of
average precision for each category, is adopted as the evaluation metric.
Because detection and segmentation cannot be completed using a simple head
like a linear layer, we adopt the prediction head in VitDet [30] as our
probing head. While the widely used Mask-RCNN is based on convolutional neural
network (CNN) features, [30] proposes a variant that is more suitable for
non-hierarchical transformer features. Since most of our
evaluated models are transformer-based, we adopt this VitDet head for probing
in our work. Unless specified, all the experiment settings are kept the same
as [30].
#### Part semantic segmentation.
While image classification accuracy on ImageNet dataset [11] is the most
commonly used metric for evaluating visual representations, the recent
PartImageNet dataset [17] provides additional annotations for the ImageNet
images, thus enabling finer-grained evaluation. PartImageNet annotates
segmentation masks of 40 object parts (_e.g_., head, body, tail) for 11
categories of objects on 24k images. Using this dataset, we perform semantic
segmentation of object parts as an additional probing task that requires
localization information.
For the segmentation head, we use the mask transformer decoder in Segmenter
[44] due to its simplicity and impressive performance on standard datasets.
[44] adapts transformers for semantic segmentation with the proposed “mask
transformer decoder” on top of the embeddings produced by the transformer
encoder (standard ViT). In our probing, we replace their transformer encoder
with the pretrained models to be evaluated and train the mask transformer
decoder to output the semantic segmentation map. Because our goal is to fairly
compare different models instead of achieving high performance, we reduce the
input image size (from $1024\times 1024$ to $224\times 224$). A linear layer
is used to match the feature’s dimensions, and bilinear upsampling is used to
match the feature’s spatial size. All the other training settings are kept the
same.
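The mIOU metric used for this task is simple to state concretely. The sketch below averages per-class intersection-over-union, skipping classes absent from both prediction and ground truth; exact averaging conventions vary between benchmarks, so treat this as one common variant rather than the precise implementation used here.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over the classes present in either
    the prediction or the ground-truth label map."""
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent from both maps: skip it
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

gt = np.array([[0, 0],
               [1, 1]])
pred = np.array([[0, 1],
                 [1, 1]])
# class 0: intersection 1, union 2 -> 1/2; class 1: intersection 2, union 3 -> 2/3
miou = mean_iou(pred, gt, num_classes=40)
```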
### 3.2 Evaluated models
We evaluate five models: three representative VL models including CLIP, OFA
and FLAVA, and two vision-only models including MAE and MOCOv3. Among the five
models, CLIP and MOCOv3 are trained using contrastive loss, while the others
are trained with sequence modeling losses. We choose these models because they
are representative and highly popular, and their pretrained weights and code
are publicly available. In the following, we describe the models, especially
their visual components, and how we extract features from them.
#### CLIP [39].
CLIP is a dual encoder model trained with contrastive loss using 400M image-
text pairs. The image embeddings produced by the image encoder, which can be
either a ResNet or a transformer, and the text embeddings produced by the text
encoder are trained to be closer with each other in the embedding space when
the image and text pair matches. The learned image embeddings are shown to
have superior transferability on various downstream tasks. In our study, image
features are extracted using the pretrained image encoder.
#### OFA [54].
OFA is a unified model that targets both uni-modal and multi-modal tasks. The
vision tasks (image classification and object detection), language tasks, and
multi-modal tasks (VQA, region/image captioning, visual grounding) are all
formulated into a sequence-to-sequence generation problem. In particular,
special visual tokens from discrete-VAE [50, 14] are used for image infilling
and the object bounding box coordinates are also discretized into special
tokens. The OFA model first uses a ResNet (Res101 for OFAbase) to encode
images, then uses the transformer encoder and decoder to generate the target
sequence from image and text features. Cross entropy loss is used as
supervision. OFA is pretrained using 20M image-text pairs with additional uni-
modal data. To obtain visual representations of images, we feed the model with
only the image (_i.e_., empty text input), send it through the ResNet, and
take the output of the transformer encoder.
#### FLAVA [43].
FLAVA is a fully transformer-based unified model. Similar to OFA, the model
can solve both uni-modal and multi-modal tasks. However, the differences lie
in (a) tasks, (b) model architecture, and (c) training loss. (a) FLAVA does
not have bounding boxes in the vocabulary, and thus does not support box-
related tasks like object detection, visual grounding or region captioning.
(b) FLAVA is fully based on transformers; it first uses two separate
transformer encoders to encode images and texts, then has several more
transformer layers for multi-modal fusion. (c) FLAVA uses multiple losses
including CLIP-like contrastive loss, masked image/text/multi-modal modeling
losses, and image-text matching loss. FLAVA is pretrained on 70M image and
text pairs. We take the output of the visual transformer encoder as image
representations.
#### MAE [18].
Masked Auto-Encoder (MAE) is a self-supervised vision model trained with a
masked image modeling task. MAE encodes masked image patches with a
transformer encoder and reconstructs the missing pixels with a lightweight
decoder trained with MSE loss. Unlike OFA and FLAVA, the reconstruction for
MAE happens in the continuous pixel space, which does not require dVAE to
generate discretized image tokens. MAE is trained only with ImageNet-1k data
and shows promising transfer performance to downstream tasks.
#### MOCOv3 [8].
We choose MOCOv3 to represent self-supervised vision transformers trained with
contrastive loss. During training, two crops for each image under random data
augmentation are encoded by two encoders, a key encoder and a query encoder,
into two vectors named “key” and “query” respectively. The training objective
is to retrieve the corresponding “key” given the “query”. Similar to MAE,
MOCOv3 is trained using ImageNet-1k.
### 3.3 Comparison settings
To make the comparison fair, we carefully choose the model size and input
size, and ensure different methods are comparable. As probing tasks are highly
sensitive to image size and feature’s spatial size, for all the models on all
the tasks, we fix the input image resolution to $224\times 224$. We choose
this size because $224\times 224$ is the pretraining input size for all the models
except OFA (OFA is pretrained with size 384 for base version and 480 for
large). For dense tasks, although the original detection and segmentation
models (_i.e_., VitDet and Segmenter) use larger input image sizes for better
performance, we unify the input size because our goal is to fairly compare
models, rather than achieving the best performance.
The probing results are also sensitive to the models’ input patch size,
because different patch sizes produce features with different spatial sizes.
For example, for input images of $224\times 224$, ViT-B/16 produces visual
representations of size $768\times 14\times 14$, while ViT-B/14 gives
$768\times 16\times 16$, which affects probing. Therefore, considering the availability
of pretrained checkpoints with different model sizes and input patch sizes, we
evaluate with the ViT-B/16 backbone by default. Because OFA is not purely
transformer-based, we evaluate on the base size, which has a ResNet +
transformer encoder with 120M parameters (comparable to ViT-B/16 with 86M
parameters).
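The patch-size arithmetic above can be sketched directly (a minimal helper of our own, not from any model's code): the token grid side is the image side divided by the patch side.

```python
def feature_shape(image_size, patch_size, embed_dim):
    """Shape of a ViT feature map: the image side divided by the patch
    side gives the token grid; embed_dim is the channel dimension."""
    assert image_size % patch_size == 0, "image must tile evenly into patches"
    grid = image_size // patch_size
    return (embed_dim, grid, grid)

# ViT-B/16 on a 224x224 input -> 768-channel features on a 14x14 grid,
# while a /14 patch size yields a 16x16 grid instead
shape16 = feature_shape(224, 16, 768)
shape14 = feature_shape(224, 14, 768)
```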
## 4 Experiments
In this section, we first compare the overall results of the five evaluated
methods on each probing task. Next, we conduct a more detailed analysis of the
segmentation and attribute prediction results. Lastly, we study the effects of
model size and finetuning on downstream tasks.
### 4.1 Implementation details
For object name classification and attribute prediction, the model is trained
with a learning rate of 0.001 and batch size of 64 for 200 epochs. We adopt
early stopping based on validation performance, then report performance on the
test split using the best model. For object detection and segmentation on the
COCO dataset, the model is trained for 120k iterations with batch size 20. The
learning rate is first set to 8e-5, then decayed twice, at steps 100k and
115k, by a factor of 0.1. For part segmentation, we train the model with a
learning rate of 0.01 and batch size of 128 for 200 epochs. The validation
performance for the final checkpoint is reported.
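The step-decay schedule for the COCO experiments can be written as a small helper. This is our own sketch of the schedule described above, not the actual training code:

```python
def probe_lr(step, base_lr=8e-5, milestones=(100_000, 115_000), gamma=0.1):
    """Step decay matching the detection/segmentation training described
    above: multiply the learning rate by `gamma` at each milestone."""
    lr = base_lr
    for m in milestones:
        if step >= m:
            lr *= gamma
    return lr
```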
### 4.2 Probing results
We probe the five models on each of the five probing tasks. We first save the
features extracted by each pretrained model for both train/val and test
splits, then train the head to predict the probing results based on the saved
features. We make sure that the experiment settings, including model size,
input size, training protocol and data splits, are well aligned for every
model in order to make fair comparisons. The probing results are shown in Tab.
2. We also include the ImageNet finetuning accuracy and linear probing
accuracy of each model for reference, because they are widely-used metrics for
model evaluation.
On each task, we compare the VL models and vision-only models. Note that
because the evaluation metric for each task is different (as in Tab. 1),
performance across tasks cannot be compared; we only compare numbers within
each column.
| Task | VG Obj. | VAW Attr. | COCO Det. | COCO Seg. | Part Seg. | IN1k ft. | IN1k probe
---|---|---|---|---|---|---|---|---
V+L | OFA | 57.13 | 61.67 | 25.04 | 19.38 | 33.11 | 82.2 | -
FLAVA | 54.29 | 61.51 | 21.06 | 17.20 | 34.77 | - | 75.5
CLIP | 51.54 | 61.15 | 19.55 | 15.56 | 40.61 | - | 80.2
V | MAE | 49.52 | 52.59 | 25.29 | 22.05 | 42.30 | 83.6 | 68.0
MOCOv3 | 47.81 | 54.44 | 20.31 | 16.96 | 40.11 | 83.2 | 76.7
Table 2: Probing results on the five probing tasks. VL models perform better
on label prediction tasks, while vision-only models perform better on dense
prediction tasks. Finetuning and linear probing results on ImageNet for each
model (cited from original papers) are also shown for reference.
| MSCOCO | PartImageNet
---|---|---
| mAP | Semantic | Localization | mIOU | Semantic | Localization
OFA | 19.38 | 60.02 | 17.41 | 33.11 | 71.71 | 84.15
FLAVA | 17.20 | 61.48 | 14.67 | 34.77 | 75.28 | 83.76
CLIP | 15.56 | 68.24 | 13.25 | 40.61 | 80.21 | 86.80
MAE | 22.05 | 46.85 | 20.69 | 42.30 | 75.03 | 89.50
MOCOv3 | 16.96 | 49.80 | 15.08 | 40.11 | 76.18 | 86.08
Table 3: Detailed analysis of instance segmentation and part segmentation
results. We evaluate the segmentation results (standard metric mAP, mIOU) from
two additional perspectives: semantics (F1 score for semantic class
prediction) and localization (mAP/mIOU for foreground/background
segmentation). While vision-only models are better on the standard metrics, VL
models are better when evaluated with semantics metrics.
For object name prediction and attribute prediction, VL models consistently
perform better than vision-only models. For object name prediction on Visual
Genome, VL models all achieve more than 51% accuracy while vision-only models
get less than 50% accuracy; for attribute prediction on VAW, the mAPs of VL
models are higher than 61%, while those of vision-only models are lower than
55%. This suggests
that representations from VL models capture richer semantic information about
the objects in each image, which can be decoded using a simple linear layer.
In contrast, in vision-only models the name and attribute information are not
explicit enough.
For the dense prediction tasks, MAE performs the best on all three tasks. For
part semantic segmentation on PartImageNet, MOCOv3 and CLIP also get decent
performance ($>40\%$ mIOU) that is close to MAE (42%), while the other two VL
models are lower by a large margin ($<35\%$). For object detection on MSCOCO,
OFA gets an mAP (25.0) close to MAE's (25.3), while the performance of the
other three models is much lower; however, when it comes to instance
segmentation, the advantage of MAE is clearer, surpassing all the other models
by a margin larger than 2.7%.
Interestingly, comparing the object detection and instance segmentation
results on COCO, we find that the performance drops of vision-only models are
consistently smaller than those of VL models, which indicates that vision-only models
learn better localized representations. For example, for OFA, the mAP for
segmentation is 5.7% (25.04-19.38) lower than that for detection, while the
drops for MAE and MOCOv3 are smaller (3.2% and 3.3%). Because segmentation requires
more localized features than detection to find the boundary of objects, the
performance gap between detection and segmentation can be an indicator of the
localized information in the representations, considering those two tasks are
based on the same dataset. With the more-localized representations, the model
can better predict the mask boundary. Therefore, the smaller gap of vision-
only models suggests they learn more localized representations.
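The det-to-seg drops discussed above can be recomputed directly from the COCO columns of Tab. 2:

```python
# COCO results from Tab. 2: (detection mAP, instance-segmentation mAP)
coco = {"OFA": (25.04, 19.38), "FLAVA": (21.06, 17.20), "CLIP": (19.55, 15.56),
        "MAE": (25.29, 22.05), "MOCOv3": (20.31, 16.96)}

# the det-to-seg drop serves as a rough proxy for localization quality:
# a smaller drop suggests more localized features
gaps = {name: round(det - seg, 2) for name, (det, seg) in coco.items()}
```

The vision-only models (MAE, MOCOv3) indeed show the smallest drops.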
Figure 2: Comparison of segmentation results using different models. Compared
to vision-and-language models, vision-only models more accurately predict the
boundary of segmentation masks, but make mistakes in labeling the regions.
To further verify this finding, we next take a closer look into segmentation
results, which more clearly compare the semantics and localization information
in different models.
#### A closer look at the segmentation results.
We evaluate the instance segmentation results on COCO and semantic
segmentation results on PartImageNet using two more metrics: (a) the label
prediction metric, and (b) the foreground-background segmentation metric,
where (a) is an indicator for semantics and (b) for localization. The
motivation is that the segmentation metrics (mAP for instance segmentation,
mIOU for semantic segmentation) require correctly predicting both the class
label and the boundary, so the quality of both determines the score.
Therefore, we propose two additional metrics to measure the two factors
separately. For (a), for each image, we transform its predicted segmentation
map into label predictions, and evaluate the quality using the multi-label
prediction metric. In particular, we treat the classes that appear in the
predicted segmentation map as positive labels and the others as negative; then
the label predictions are evaluated using the F1 score. F1 score is defined as
$\frac{2\times\text{precision}\times\text{recall}}{\text{precision}+\text{recall}}$,
where precision and recall are averaged over label classes. For (b), we merge
all the different object categories and process the segmentation map into
binary labels, _i.e_., foreground and background, then report the mAP (for
instance segmentation) or mIOU (for semantic segmentation) of the binary
segmentation maps.
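The two extra metrics can be sketched in a few lines of plain Python. These are simplified helpers of our own; the paper's exact evaluation code may differ, e.g. in how classes with no predictions are handled:

```python
def macro_f1(pred_labels, true_labels, classes):
    """F1 = 2PR/(P+R) with precision and recall macro-averaged over classes.
    pred_labels / true_labels: per-image sets of the class ids present."""
    precisions, recalls = [], []
    for c in classes:
        tp = sum(1 for p, t in zip(pred_labels, true_labels) if c in p and c in t)
        fp = sum(1 for p, t in zip(pred_labels, true_labels) if c in p and c not in t)
        fn = sum(1 for p, t in zip(pred_labels, true_labels) if c not in p and c in t)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    return 2 * p * r / (p + r) if p + r else 0.0

def foreground_iou(pred_mask, gt_mask):
    """IoU after merging every object category into a single foreground
    class (any non-zero label counts as foreground)."""
    pred_fg = [v != 0 for row in pred_mask for v in row]
    gt_fg = [v != 0 for row in gt_mask for v in row]
    inter = sum(1 for p, g in zip(pred_fg, gt_fg) if p and g)
    union = sum(1 for p, g in zip(pred_fg, gt_fg) if p or g)
    return inter / union if union else 1.0

# labels 1 and 2 are different classes but both count as foreground
iou = foreground_iou([[1, 2], [0, 0]], [[1, 0], [2, 0]])
```

The first helper measures semantics only (are the right classes predicted anywhere in the image?), while the second measures localization only (does the predicted foreground overlap the true foreground?).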
Tab. 3 shows the segmentation results on COCO and PartImageNet evaluated using
the above two metrics. Although MAE achieves the best performance on both
datasets, when looking at the semantic and localization results, we find that
its advantage mainly comes from better localization, rather than semantics. In
terms of semantics, VL models perform much better than MAE. For example, on
the MSCOCO dataset, VL models achieve F1 scores higher than 60, while MAE and
MOCOv3 are lower than 50. The results suggest that while MAE is better at
finding the object boundaries when predicting segmentation masks, VL models
are better at predicting labels for the objects.
In Fig. 2, we show several examples of the part segmentation results on
PartImageNet. In the examples, MAE captures the object’s shape more
accurately, like the curly snake body, the shark’s small fin, and the
quadruped contour. However, MAE and MOCOv3 make more mistakes in labeling the
regions compared to VL models. For example, MAE wrongly predicts the shark fin
as a reptile foot, and the quadruped as a reptile; MOCOv3 confuses the
quadruped head and foot as the fish head and fins. Those examples more
explicitly compare the semantics and localization knowledge learned by VL and
vision-only models.
#### Analysis on different attribute groups.
We further decompose the attribute prediction results into different attribute
groups. In the VAW dataset, attributes are categorized into 8 groups: action,
texture, shape, size, color, material, state, and others. The results are
shown in Fig. 3. Interestingly, despite the overall better results of VL
models, we find that their advantages differ in different groups. For example,
the gap between VL and vision-only models in the “action” category is more
significant than in the “texture” category. Intuitively, “action” is less
visually grounded than “texture” and requires more context and semantic
information, which VL models are better at. This suggests that while vision-
only models are better at predicting highly visually grounded local attributes
(_e.g_., texture), VL models are better at more abstract ones.
Figure 3: A closer look at the attribute prediction results by separately
evaluating different types of attributes. The advantage of VL models is more
significant in the more abstract categories (_e.g_., action) than visually
grounded categories (_e.g_., texture).
#### Semantics vs. localization.
To summarize, by probing the models on the five tasks, a detailed look at the
segmentation results, and a finer evaluation of different attribute
categories, we find that VL models are better at predicting semantic labels,
while vision-only models are better at localized dense prediction.
### 4.3 More analysis
#### Findings of contrastive training.
The results also show that contrastive models perform relatively better on
localization for single-object images than multi-object images. Among the five
tasks, part segmentation on the PartImageNet dataset is based on single-object
images from ImageNet, while the other four tasks are based on COCO-style
multi-object images. In Tab. 3, comparing the contrastively trained models
(CLIP, MOCOv3) and the models trained with sequence modeling objectives (OFA,
FLAVA, MAE), we find that contrastive models perform relatively better on
PartImageNet than MSCOCO. For example, on PartImageNet, CLIP outperforms the
other two VL models (_i.e_., OFA and FLAVA) by a large margin (more than 6%
mIOU); on MSCOCO, it under-performs them. The semantic and localization
evaluation suggests that this difference is mainly caused by localization,
_e.g_., the localization results of CLIP are much better than those of OFA and FLAVA on
PartImageNet. A similar observation can be obtained by comparing MOCOv3 and
MAE: although MOCOv3 underperforms MAE on both datasets, the gap is much
smaller on PartImageNet than MSCOCO (2.2 vs. 5.1). Therefore, we suggest that
the localization ability of contrastive models is relatively stronger on
single-object images.
#### The Effect of model size.
To study the effect of model size, in Tab. 4, we show the probing results with
size base and large for MAE and OFA.
For MAE, a larger model size improves performance on all the probing tasks,
by roughly 1 to 4 points (Tab. 4). However, note that this improvement is less significant
compared to the big gaps between different model types. For OFA, except for
the marginal improvement in attribute prediction, the larger model size hurts
probing results on the other four tasks. The reason for the decrease is that
OFAlarge is pretrained with a larger input image size ($480\times 480$) than
the OFAbase model ($384\times 384$). Because we probe all models with the same
image size ($224\times 224$) for a fair comparison, the gap in image size between
pretraining and probing is more significant for OFAlarge. In summary, the
effect of model size is less considerable than other factors like model type
or input image size.
| obj. | attr. | det. | seg. | p-seg.
---|---|---|---|---|---
MAEbase | 49.52 | 52.59 | 25.29 | 22.05 | 42.30
MAElarge | 51.91 | 53.38 | 29.67 | 25.63 | 44.85
OFAbase | 57.13 | 61.67 | 25.04 | 19.38 | 33.11
OFAlarge | 52.33 | 62.01 | 21.23 | 16.51 | 32.04
Table 4: Probing results of model sizes. The influence of model size is less
considerable than other factors like model type.
#### The effect of downstream finetuning.
Tab. 5 compares probing results of models with and without finetuning on
downstream tasks. For MAE, the results are based on the base size; for OFA,
the results are on the large size, due to the availability of publicly released
model checkpoints.
For both models, finetuning on image classification on ImageNet-1k and VQA on
VQAv2 hurts the probing performance to varying degrees (except for attribute
prediction). This indicates that while the model learns features capturing
various fine-grained information about the image during pretraining, finetuning
toward a specific task keeps only the information useful for that task and
drops the rest. Moreover, compared with ImageNet
finetuning, finetuning on VQA leads to a much smaller performance decrease in
probing results, suggesting that the change in probing results depends on the
nature of downstream tasks. In this case, VQA requires more fine-grained
information about objects, attributes, etc., resulting in a smaller drop than
ImageNet finetuning.
| obj. | attr. | det. | seg. | p-seg.
---|---|---|---|---|---
MAE | 49.52 | 52.59 | 25.29 | 22.05 | 42.30
MAEIN1k | 45.16 | 53.82 | 21.41 | 17.74 | 35.62
OFA | 52.33 | 62.01 | 21.23 | 16.51 | 32.04
OFAIN1k | 50.54 | 60.74 | 18.91 | 14.67 | 27.56
OFAVQA | 51.42 | 63.40 | 19.01 | 14.22 | 28.34
Table 5: Probing results of models finetuned on downstream tasks including
ImageNet-1k and VQA. The MAE results are based on base size and OFA results
are on large size. Finetuning hurts the probing performance in most cases.
#### Limitations.
This study is limited by the coverage of pretrained models. We only evaluate
models which have publicly accessible checkpoints, and which can be aligned in
terms of model sizes, patch sizes, etc. Because we do not have enough
computational resources to retrain the models, our comparisons are restricted
to the released checkpoints.
## 5 Conclusion
This work studies how language can help visual learning with feature probing
under well-aligned settings. By comparing three representative VL models and
two vision-only models on five probing tasks, we find that VL models are
stronger in label prediction tasks, while vision-only models are better in
dense prediction tasks. With detailed analysis, we conclude that language
helps vision models learn better semantics, but not localization. We hope our
diagnostic findings inspire future works in improving visual learning with the
help of language.
## Acknowledgements
This work is partially supported by a gift from the JHU+Amazon Initiative for
Interactive AI. This work is also supported with Cloud TPUs from Google’s TPU
Research Cloud (TRC) program. We would like to thank Elias Stengel-Eskin, Kate
Sanders, David Etter, Reno Kriz, and Chen Wei for their helpful suggestions.
## References
* [1] Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In International Conference on Learning Representations, 2017.
* [2] Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. In International Conference on Learning Representations, 2016.
* [3] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433, 2015.
* [4] Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. Beit: Bert pre-training of image transformers. In International Conference on Learning Representations, 2021.
* [5] Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861–872, 2017.
* [6] Yonatan Belinkov and James Glass. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49–72, 2019.
* [7] Paul Bloom. How children learn the meanings of words. MIT press, 2002.
* [8] Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9640–9649, 2021.
* [9] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Universal image-text representation learning. In European conference on computer vision, pages 104–120. Springer, 2020.
* [10] Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136, 2018.
* [11] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
* [12] Banchiamlack Dessalegn and Barbara Landau. More than meets the eye: The role of language in binding and maintaining feature conjunctions. Psychological science, 19(2):189–195, 2008.
* [13] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2020.
* [14] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12873–12883, 2021.
* [15] Zhe Gan, Linjie Li, Chunyuan Li, Lijuan Wang, Zicheng Liu, and Jianfeng Gao. Vision-language pre-training: Basics, recent advances, and future trends. arXiv preprint arXiv:2210.09263, 2022.
* [16] Jonathan Gordon and Benjamin Van Durme. Reporting bias and knowledge acquisition. In Proceedings of the 2013 workshop on Automated knowledge base construction, pages 25–30, 2013.
* [17] Ju He, Shuo Yang, Shaokang Yang, Adam Kortylewski, Xiaoding Yuan, Jie-Neng Chen, Shuai Liu, Cheng Yang, Qihang Yu, and Alan Yuille. Partimagenet: A large, high-quality dataset of parts. In European Conference on Computer Vision, pages 128–145. Springer, 2022.
* [18] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000–16009, 2022.
* [19] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017.
* [20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [21] J Hewitt and P Liang. Designing and interpreting probes with control tasks. Proceedings of the 2019 Con, 2019.
* [22] John Hewitt and Christopher D Manning. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, 2019.
* [23] Gabriel Ilharco, Rowan Zellers, Ali Farhadi, and Hannaneh Hajishirzi. Probing contextual language models for common ground with visual representations. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5367–5377, 2021.
* [24] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904–4916. PMLR, 2021.
* [25] Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186, 2019.
* [26] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32–73, 2017.
* [27] Belinda Z Li, Maxwell Nye, and Jacob Andreas. Implicit representations of meaning in neural language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1813–1827, 2021.
* [28] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. What does bert with vision look at? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5265–5275, 2020.
* [29] Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121–137. Springer, 2020.
* [30] Yanghao Li, Hanzi Mao, Ross Girshick, and Kaiming He. Exploring plain vision transformer backbones for object detection. arXiv preprint arXiv:2203.16527, 2022.
* [31] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
* [32] Adam Dahlgren Lindström, Johanna Björklund, Suna Bensch, and Frank Drewes. Probing multimodal embeddings for linguistic properties: the visual-semantic case. In Proceedings of the 28th International Conference on Computational Linguistics, pages 730–744, 2020.
* [33] Nelson F Liu, Matt Gardner, Yonatan Belinkov, Matthew E Peters, and Noah A Smith. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073–1094, 2019.
* [34] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
* [35] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021.
* [36] Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi, and Aniruddha Kembhavi. Unified-io: A unified model for vision, language, and multi-modal tasks. arXiv preprint arXiv:2206.08916, 2022.
* [37] Khoi Pham, Kushal Kafle, Zhe Lin, Zhihong Ding, Scott Cohen, Quan Tran, and Abhinav Shrivastava. Learning to predict visual attributes in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13018–13028, 2021.
* [38] Tiago Pimentel, Naomi Saphra, Adina Williams, and Ryan Cotterell. Pareto probing: Trading off accuracy for complexity. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3138–3153, 2020.
* [39] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
* [40] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28, 2015.
* [41] Emmanuelle Salin, Badreddine Farah, Stéphane Ayache, and Benoit Favre. Are vision-language transformers learning multimodal representations? a probing perspective. In AAAI 2022, 2022.
* [42] Xing Shi, Inkit Padhi, and Kevin Knight. Does string-based neural mt learn source syntax? In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 1526–1534, 2016.
* [43] Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. Flava: A foundational language and vision alignment model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15638–15650, 2022.
* [44] Robin Strudel, Ricardo Garcia, Ivan Laptev, and Cordelia Schmid. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7262–7272, 2021.
* [45] Hao Tan and Mohit Bansal. Lxmert: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100–5111, 2019.
# Modelling Solar Energetic Neutral Atoms from Solar Flares and CME-driven Shocks

Gang Li (Department of Space Science and CSPAR, University of Alabama in Huntsville, Huntsville, AL 35899, USA)
Albert Y. Shih (NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA)
Robert C. Allen (Johns Hopkins University Applied Physics Laboratory, Laurel, MD, 20723, USA)
George C. Ho (Johns Hopkins University Applied Physics Laboratory, Laurel, MD, 20723, USA)
Christina M.S. Cohen (California Institute of Technology, Pasadena, CA, 91125, USA)
Mihir Desai (Southwest Research Institute, San Antonio, TX, 78238, USA)
Maher A. Dayeh (Southwest Research Institute, San Antonio, TX, 78238, USA)
Glenn Mason (Johns Hopkins University Applied Physics Laboratory, Laurel, MD, 20723, USA)
###### Abstract
We examine the production of energetic neutral atoms (ENAs) in solar flares
and at CME-driven shocks, and their subsequent propagation to 1 au. Time
profiles and fluence spectra of solar ENAs at 1 au are computed for two
scenarios: 1) ENAs are produced downstream of CME-driven shocks, and 2) ENAs
are produced in large-scale post-flare loops in solar flares. Both the time
profiles and the fluence spectra differ vastly between these two scenarios.
Our calculations indicate that solar ENAs can be used as a new probe to
examine the underlying acceleration process of solar energetic particles
(SEPs) and to differentiate, in large SEP events, between the two acceleration
sites: large loops in solar flares and the downstream regions of CME-driven
shocks.
## 1 Introduction
Solar flares and coronal mass ejections (CMEs) are two of the most energetic
processes in the solar system. Efficient particle acceleration can occur in
both solar flares and at CME-driven shocks. Energetic protons accelerated at
either CME-driven shocks or solar flares can precipitate down to the Sun’s
surface or propagate into the interplanetary medium along open interplanetary
magnetic field (IMF) lines. During their propagation, they can interact with
ions and thermal neutral atoms in the solar atmosphere via charge exchange,
and produce energetic neutral hydrogen atoms. Once produced, energetic neutral
hydrogen atoms (hereafter referred to as ENAs) do not feel the solar magnetic
field and propagate along straight lines. They are subject to loss processes
wherein they lose their electron and become energetic protons again. Because
the density of the solar wind drops quickly with heliocentric distance, and
because the loss rate of ENAs is proportional to the solar wind density, ENAs
reaching $\sim 20R_{s}$ suffer essentially no further loss. Since the IMF does
not affect the propagation of ENA hydrogen, these ENAs provide a powerful
avenue for probing the acceleration processes and plasma properties of the
underlying acceleration site.
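As a rough quantitative illustration of this point, the sketch below estimates the fraction of the loss column density that lies beyond $20R_{s}$, assuming (for illustration only) that the loss rate scales with an $r^{-2}$ solar wind density, and taking an inner boundary of $1.02R_{s}$ (the value used later in this paper) and a 1-au distance of $\sim 215R_{s}$:

```python
# Hedged sketch: if the ENA loss rate is proportional to the solar wind
# density and n(r) ~ r**-2, the loss probability along a radial path is
# proportional to the column integral of r**-2.  The fraction of that
# column lying beyond 20 Rs is a rough measure of the residual loss.

def column_density_fraction(r_in, r_out, r_max):
    """Fraction of the integral of r**-2 dr over [r_in, r_max] that is
    accumulated beyond r_out (all radii in solar radii)."""
    total = 1.0 / r_in - 1.0 / r_max    # integral of r**-2 over full path
    beyond = 1.0 / r_out - 1.0 / r_max  # portion beyond r_out
    return beyond / total

frac = column_density_fraction(1.02, 20.0, 215.0)
print(f"fraction of the loss column beyond 20 Rs: {frac:.1%}")  # ~5%
```

With this crude $r^{-2}$ scaling, only a few percent of the loss column lies beyond $20R_{s}$; the steeper density terms close to the Sun (see the Leblanc model used in Section 2) concentrate the losses near the Sun even more strongly.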
Because the production cross section is small, the flux of ENAs at a distance
of 1 au from the Sun can be extremely small. To date, only a few observational
clues of ENAs accompanying SEP events have been reported (Mewaldt et al., 2009;
Mason et al., 2021). Mewaldt et al. (2009) reported $1.6$ to $5$ MeV energetic
neutral atoms (ENAs) from STEREO-A/B observations. They inferred a power-law
spectrum of $dJ/dE\sim E^{-2.46}$ accompanying an X9-class solar flare and
suggested that these ENAs are produced via charge exchange of SEP protons with
O6+ ions. Following Mewaldt et al. (2009), Wang et al. (2014) performed a
simulation and showed that sufficient counts of ENAs are expected for typical
gradual SEP events where particles are accelerated at CME-driven shocks. This
stimulated interest in observational efforts. More recently, Mason et al.
(2021) examined $18$ SEP events with SAMPEX, and found indirect, but
compelling evidence of solar ENAs near the geomagnetic equator at low
altitudes where the geomagnetic field filters out all charged SEPs. This new
insight also shed light on three previously reported puzzling $\sim$ MeV ion
intensity increases that were also observed near the equatorial regions about
$\sim 3$ hrs after the occurrence of the corresponding X-ray flares (Greenspan
et al., 1999). The discovery of ENAs by STEREO, and confirmation from SAMPEX,
shows that solar ENAs can be expected to accompany many large SEP events.
If ENAs can be detected in SEP events, one of the pressing questions is where
they originate. Are they accelerated at a rather confined reconnection site at
flares, or at a broader shock front driven by CMEs? To answer this question,
we examine ENA production in two different scenarios in this work: CME-driven
shocks and large post-flare loops. A schematic of the two acceleration sites
is shown in Figure 1. Note that in many large SEP
events, CMEs and flares often occur together. However, the spatial extension
of the flare is much smaller than the CME. Ions can be efficiently accelerated
at both the flare site and the CME-driven shock front. In the case of CME-
driven shocks, protons and ions are accelerated at the shock front via the
first-order Fermi acceleration mechanism. Once accelerated, they can escape
upstream, propagating along the IMF, or be trapped downstream for an extended
period of time. They may precipitate down to the solar surface, causing, for example,
long duration gamma-ray events (Share et al., 2018; Jin et al., 2018). In the
case of flares, particles can be accelerated at the reconnection exhausts and
in solar flare loops (Petrosian, 2012; Ryan, 2000) by e.g. the second order
Fermi acceleration mechanism. Continued magnetic reconnection can lead to a
rising of the post-flare loops (West & Seaton, 2015). Accelerated particles
may be trapped in post-flare loops for a very long period of time, serving as an
alternative candidate for the long-duration gamma ray events (Ryan, 2000; de
Nolfo et al., 2019).
We note that in large SEP events it is possible that flare accelerated
particles can be re-accelerated at the accompanying CME-driven shocks (Li &
Zank, 2005; Petrosian, 2012). Simulations by Li & Zank (2005) showed that
depending on whether the observer is magnetically connected to the flare and/or the
shock surface, the characteristics of the ion time profiles differ and may
show two peaks as reported in (Cane et al., 2003). However, because the
presence of solar wind MHD turbulence can affect the propagation of charged
ions, the interpretation of 1-au ion observations is often complicated. ENAs,
with ballistic propagation, do not follow IMF and are not affected by the
solar wind MHD turbulence. Therefore, ENA observations with enough angular
resolution can clearly distinguish ENAs from flare sites and those from much
broader CME-driven shocks.
Figure 1: Schematic cartoon showing the two acceleration sites of ions in
large SEP events: CME-driven shocks and flares. Once produced, energetic ions
can propagate along open IMFs and be detected in-situ at 1 au. Near the
acceleration sites, the density of solar atmosphere is high enough so that
energetic ions can lead to the production of solar ENAs. The characteristics
of solar ENAs in both scenarios are examined in this work.
We examine solar ENA production from CME-driven shocks in section 2 and from
solar flare loops in section 3. Production and loss processes of ENAs are
discussed in Appendix A.
## 2 ENAs from CME-driven shocks
In this section, we consider the observation of ENA particles generated at a
propagating CME-driven shock. The first ENA simulation was done by Wang et al.
(2014) who simulated a CME-driven shock from a side-on orientation and
suggested that the observed flux in (Mewaldt et al., 2009) is consistent with
ENA production at a CME-driven shock. More recently, following the work of
(Wang et al., 2014), Wang et al. (2022) examined a variety of cases with
different CME speeds, open angles, and CME propagation directions. They also
examined the effect of solar wind density variation near the Sun on the
production of ENAs. These authors found similar results as Wang et al. (2014).
Here we reexamine the case considered in (Wang et al., 2014) and include
another two cases with different CME propagation directions to obtain an
estimate of ENA flux range at 1 au. Our treatment is similar to our previous
work (Wang et al., 2014) but with a few differences. As in (Wang et al.,
2014), we assume protons are accelerated at the shock and then distributed
uniformly downstream of the shock. This is based on the DSA mechanism and has
been adopted in our previous large SEP event simulations (Li et al., 2003,
2005, 2012b, 2021). Since the turbulence downstream of the shock is a lot
stronger than that upstream of the shock (see e.g. (Lee, 1983; Zank et al.,
2000; Li et al., 2003)), accelerated particles can be kept downstream of the
shock for a long period of time. In (Wang et al., 2014), we assumed there is
no leakage of accelerated particles from downstream of the shock. This was
mostly for simplicity since accelerated particles can precipitate back to the
Sun along open field lines. Indeed, Jin et al. (2018) have explored the
possibility that long duration gamma ray events are due to shock-accelerated
protons. In such a scenario, accelerated protons downstream of
the shock can steadily precipitate to the solar surface. Therefore, in this
work, we include a decay of the accelerated protons downstream of the shock.
As an estimate of the decay time, we refer to Li et al. (2012a), who, from a
statistical study of twin-CME events, suggested that a decay time scale of the
turbulence in large SEP events is around $9$-$13$ hours. We use a decay time
$\tau=10$ hours in this work. We also set our inner boundary at $r=1.02R_{s}$,
which differs from that used in (Wang et al., 2014), $1.5R_{s}$. We further
improve the treatment of ENA propagation from downstream of the shock to the
observer. In (Wang et al., 2014), downstream medium was divided into shells
and ENAs produced in individual shells are assumed to propagate to the
observer all from the shell center. This is refined in our current work. We
now divide the downstream region of the shock into multiple parcels, as shown
in the left panel of Figure 2. ENAs are produced and followed in individual
parcels. Since ENAs in different parcels propagate to the observer along
different paths, our current treatment will lead to a more accurate survival
probability computation. Finally, a correction factor $\cos(\theta)$ to the
flux expression, equation (4) of Wang et al. (2014), is included; see equation
(A3).
Figure 2: Left: Schematic plot showing the CME configuration for the base
case. Plasma downstream of the shock is divided into parcels. These parcels
are used to track the ENA production and propagation to the observer. The
observer is along the X axis at 1 au and the CME propagates along the Y axis.
Right: Another two cases, case II and case III are also considered. In case
II, the CME propagates toward the observer; and in case III, the CME
propagates $45^{\circ}$ off from the $+Y$ direction.
Figure 2 shows the configuration of the ENA production process for the CME
shock case. The left panel depicts the base case: the observer is located at 1 au
along the $X$ axis and the CME is propagating to the right along the $+Y$
direction, i.e., $\phi=90^{\circ}$ where $\phi$ is the angle between the sun-
observer line and the CME propagation direction. The plasma downstream of the
shock is divided into multiple parcels. ENA production is followed in these
parcels. ENAs produced in these parcels can propagate along straight lines to
the observer. These trajectories differ for different parcels, and lead to
different survival probabilities. The right panel of Figure 2 shows two other
cases with different CME propagation directions. In case II, the CME
propagates toward the observer with $\phi=0^{\circ}$. In case III, the CME
propagates $45^{\circ}$ off from the $+X$ and $+Y$ directions, i.e.
$\phi=45^{\circ}$. For our simulation, the shock has a constant speed of
$V_{sh}=1500$ km/s and a constant compression ratio of $s=3.5$. The open angle
of the shock is $60^{\circ}$ and the shock is followed up to $30R_{s}$. As in
the flare ENA case, we use the Leblanc model (Leblanc et al., 1998) to compute
the solar wind density.
Figure 3 plots the time profiles and the fluence of ENAs for the three cases
shown in Figure 2. The upper left, upper right, and lower left panels show the
time profiles for the base case, case II and case III, respectively. For all
three cases, ENAs of $11$ energies are considered. The three time profiles are
similar. Consider the base case (upper left panel). The x-axis is the time
after shock initiation, in units of $10$ minutes; the y-axis is the ENA flux
at the observer, in units of #/(cm${}^{2}\cdot$ sec $\cdot$ keV). The observer
first sees the $20$ MeV ENAs arriving $\sim 40$ minutes after the shock
initiation. The flux can reach $7\times 10^{-3}$ #/(cm${}^{2}\cdot$ sec
$\cdot$ keV). It then decreases, reflecting the fact that the density of
energetic protons decreases with time as the shock propagates out. As the ENA
energy becomes smaller, the first arrival times become later and the flux
increases with decreasing energy, until $E=0.75$ MeV. Below $E=0.75$ MeV the
flux shows a plateau-like feature and drops slightly. This behaviour is due to
the energy dependence of the charge exchange cross sections that are
responsible for the ENA production (see Figure 6 in Appendix A). Cases II and
III are comparable to each other and show larger fluxes than the base case. This is
easily understood from Figure 2 because the ENAs produced in these two cases
travel shorter distances and through less dense solar atmosphere to the
observer and consequently have larger survival probabilities. The lower right
panel of Figure 3 plots the fluence for these three cases. Note the relatively
flat, plateau-like behavior below $E=0.75$ MeV, which is a consequence of the
energy dependence of the relevant charge exchange cross section. Above $1$
MeV, the ENA fluence spectrum shown here is comparable to that inferred in
Mewaldt et al. (2009). The general shape of the CME shock ENA fluence is
similar to the parent energetic ion spectrum which is a power law. This is in
stark contrast to the flare ENA case (see next section) where the ENA fluence
does not resemble the parent energetic ion spectra.
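The first-arrival times in Figure 3 can be checked with a simple ballistic time-of-flight estimate. The sketch below ignores the small offset of the production site from the Sun and uses the relativistic proton speed (at these energies a non-relativistic estimate gives nearly the same answer):

```python
import math

MP_C2_MEV = 938.272   # proton rest energy in MeV
AU_LIGHT_S = 499.0    # light travel time over 1 au, in seconds

def travel_time_minutes(E_mev):
    """Straight-line travel time over 1 au for a hydrogen ENA of
    kinetic energy E_mev, using the relativistic speed beta*c."""
    gamma = 1.0 + E_mev / MP_C2_MEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return AU_LIGHT_S / beta / 60.0

for E in (0.75, 2.0, 5.0, 20.0):
    print(f"{E:5.2f} MeV ENA: ~{travel_time_minutes(E):6.1f} min to 1 au")
```

A 20 MeV ENA takes about 41 minutes to cross 1 au, consistent with the $\sim 40$ minute first arrival seen in the base case, and lower-energy ENAs arrive correspondingly later.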
Figure 3: Upper left, upper right, and lower left panels show time profiles of
ENA hydrogen produced at CME-driven shocks for an observer at 1 au, for the
base case, case II, and case III, respectively. Eleven energies are
considered. The shock is followed up to $30R_{s}$. The lower right panel shows
the fluence of the ENA hydrogen for the time duration shown in the other three
panels. Note the bend-over at 1 MeV for the ENAs. The parent energetic protons
have a power law extending down to $0.02$ MeV. The bend-over is due to the
energy dependence of the various charge exchange cross sections shown in
Appendix A.
## 3 ENAs from solar flares
We examine ENA production by solar flares in this section. Both electrons and
ions are efficiently accelerated at solar flares, and the accelerated
electrons and ions lead to the emission of hard X-rays and gamma rays. It is
generally accepted that acceleration may occur at reconnection current sheets
and/or by turbulence in the flare loops. Observations of hard X-ray and gamma
rays suggest that the accelerated electron and ion spectra can be approximated
by a power law. Power-law-like spectra are supported by earlier theoretical
works (Miller & Roberts, 1995; Petrosian, 2012), where ions are accelerated in
flare loops by MHD turbulence via second-order Fermi acceleration. More recent
PIC simulations of ions at flare reconnection sites also found a power-law
spectrum (Zhang et al., 2021).
Figure 4: Cartoon showing the post-flare loops in solar flares. Large-scale,
high post-flare loops are a potential production site of solar ENAs. The upper
image is adapted from West & Seaton (2015).
Once accelerated, ions precipitate down to the solar surface along post-flare
loops. The density of a post-flare loop can be constrained by free–free
continuum emission for hot loops. In a recent work, Jejčič et al. (2018)
reported an electron density as high as $10^{13}$ cm-3, $10$ to $100$ times
higher than that of typical flare loops. At a density of $\sim 10^{11}$ cm-3,
ENAs can be easily produced in these loops. Once produced, ENAs are not
confined to the loops and can propagate in all directions. However, if the loops are
low, the solar atmosphere density in the surrounding environment can be too
dense to allow these ENAs to escape from the Sun. Therefore, to observe flare
ENAs, the flare loops must be high. The height of flare loops can be estimated
from the looptop hard X-ray observations. A recent study of looptop hard X-ray
source of solar flares (Effenberger et al., 2017) showed that the height of a
typical flare ranges from $10$ to $50$ Mm. If ions are accelerated at and
below this height at flares, no ENAs can survive as they propagate out.
However, using the Sun Watcher with Active Pixels (SWAP) EUV imaging solar
telescope, West & Seaton (2015) examined an M2.2 flare which occurred on 2014
October 14 and found that the post-flare loops were long-lasting, and reached
a height of over $400$ Mm ($\geq 0.5\ {{R}_{\odot}}$) $\sim 48$ hours after
the eruption. West & Seaton (2015) argued that the giant arches in this event
are similar to ordinary post-flare loops and are the result of long-lasting
($48$ hours) magnetic reconnection occurring along a large-scale current sheet
(Forbes & Lin, 2000). This continuous magnetic reconnection provides the
energy source to heat the loop and can accelerate particles. Besides magnetic
reconnection, turbulence inside the loop can also lead to stochastic
acceleration of ions (Ryan, 2000). We note that the magnetic reconnection at
the current sheet and the enhanced turbulence inside the large postflare loop
may be intimately related. In a recent work by Cheng et al. (2018), the
authors examined the 2017 September 10 flare and showed that a Kolmogorov-like
turbulence spectrum can develop in the current sheet above the flare loops.
The presence of such turbulence implies that particles can be accelerated in the
turbulent current sheet in a similar way as in flare loops through a second
order Fermi acceleration process (Miller & Roberts, 1995; Petrosian, 2012).
The spatial extension of the current sheet is similar to the flare loops that
are beneath it (Cheng et al., 2018; French et al., 2019), but the density of
the current sheet is, however, smaller than the density in the postflare
loops. Indeed, French et al. (2019) conclude that the density in the current
sheet is $\lesssim 10^{10}$ cm-3. This is $100$ times smaller than the density
inferred in the lower flare loop as reported in (Jejčič et al., 2018), and is
$10$ times smaller than what we assume for the post flare loops,
$10^{11}$ cm-3. The ENA production in these current sheets is therefore much
smaller than in post-flare loops, so we only consider ENA production in
post-flare loops in this work. However, we remark that these turbulent current
sheets can be potential sites of ENA production, and if future ENA probes have
high enough sensitivities, it is possible to obtain direct observations of
these current sheets through ENA observations. A cartoon of these post-flare
loops is shown in Figure 4.
Continuous acceleration, as suggested by Ryan (2000), has been identified as a
possible scenario for the long duration gamma ray events (de Nolfo et al.,
2019). Long duration gamma ray events are not uncommon. Recently, Share et al.
(2018) examined $\sim 30$ long duration gamma ray events and found that the
energy spectral indices of the $>300$ MeV gamma-ray-producing protons range
from $2.5$ to $6.5$, similar to typical flare events. In a recent study, de Nolfo
et al. (2019) compared the gamma-ray-producing proton numbers with the in-situ
SEP proton numbers in long duration gamma ray flares and found a poor
correlation. Their study supports the continuous acceleration in the post-
flare loop scenario, as suggested by Ryan (2000). We point out that the event
reported in (West & Seaton, 2015), despite having large post-flare loops, was
not a long duration gamma ray event. This is possible if particles are not
accelerated to high enough energies ($\sim 100$ MeV/nuc) to produce gamma
rays.
We now examine ENAs from post-flare loops. We model the post-flare loops as
semi-circular tubes. We assume that the loop has a height (radius) of $h(t)$,
which increases with time. We assume the starting height of the post flare
loop is $0.04R_{s}$ ($\sim 28$Mm), and a rising rate of $V_{r}=3$ km/s (West &
Seaton, 2015). This gives a height of $H=0.22$, $0.41$, and $0.60$ $R_{s}$
when $t=$ 12, 24, and 36 hours, respectively. The cross section of the tube
can be assumed to be a circle with radius $a$. One can take $a$ to be
$\sim 700$ km, which is comparable to the half width of a typical flare
ribbon. However, as we will see below, the ENA production depends on the total
number of accelerated protons and does not depend on the choice of $a$ and the
number of loops we consider.
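The loop heights quoted above follow directly from the assumed rising rate, as the short sketch below illustrates (small differences from the quoted $0.22$, $0.41$, $0.60$ $R_{s}$ values are due to rounding):

```python
R_SUN_KM = 6.96e5   # solar radius in km

def loop_height_rs(t_hours, h0=0.04, v_r_kms=3.0):
    """Post-flare loop height in solar radii after t_hours, starting
    from h0 (in Rs) and rising at v_r_kms (km/s), following the rate
    inferred by West & Seaton (2015)."""
    return h0 + v_r_kms * t_hours * 3600.0 / R_SUN_KM

for t in (12, 24, 36):
    print(f"t = {t:2d} hr -> H = {loop_height_rs(t):.3f} Rs")
```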
We also assume a constant proton density inside the flare loop. By way of
example, we assume a loop density of $10^{11}$ cm-3. This is smaller than that
obtained in (Jejčič et al., 2018), but larger than the density at the solar
surface, which is $\sim 10^{9-10}$ cm-3. As a simplification, we assume the
acceleration process (Ryan, 2000) is time independent and the production rate
of energetic protons, $\alpha$, is a constant during the rising phase of the
post-flare loop. We denote the duration of the rising phase by $T$, and the
total number of accelerated particles by $N_{0}=\alpha T$. Once accelerated, these
particles can precipitate to the solar surface. We model this as a loss
process with an energy-independent decay time $\tau$. The total number of
accelerated particles $N(t)$ in the loop is given by,
$\frac{dN(t)}{dt}=\frac{N_{0}}{T}\theta(T-t)-\frac{N(t)}{\tau}$ (1)
where $\theta(t)$ is the Heaviside function. The solution of equation (1) is,
$N(t)=N_{0}\frac{\tau}{T}\left[(1-e^{-t/\tau})\,\theta(T-t)+(1-e^{-T/\tau})e^{-(t-T)/\tau}\,\theta(t-T)\right].$
(2)
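As a consistency check (a sketch with illustrative values $T=12$ hr, $\tau=3$ hr, and $N_{0}$ normalized to unity), solution (2) can be compared against a direct numerical integration of equation (1):

```python
import math

def n_closed(t, n0=1.0, T=12.0, tau=3.0):
    """Closed-form solution (2) of dN/dt = (N0/T)*theta(T-t) - N/tau."""
    pref = n0 * tau / T
    if t <= T:
        return pref * (1.0 - math.exp(-t / tau))
    return pref * (1.0 - math.exp(-T / tau)) * math.exp(-(t - T) / tau)

def n_numeric(t_end, n0=1.0, T=12.0, tau=3.0, dt=1e-4):
    """Forward-Euler integration of equation (1) starting from N(0) = 0."""
    n, t = 0.0, 0.0
    while t < t_end:
        source = n0 / T if t < T else 0.0
        n += dt * (source - n / tau)
        t += dt
    return n

for t in (1.0, 6.0, 12.0, 24.0):
    print(f"t = {t:5.1f} hr: closed = {n_closed(t):.5f}, "
          f"numeric = {n_numeric(t):.5f}")
```

The two agree to the integration tolerance: $N(t)$ rises toward $N_{0}\tau/T$ while the source is on ($t<T$) and decays exponentially with time constant $\tau$ afterwards.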
In equation (2), $N_{0}$ can be constrained from the following consideration.
In the long duration gamma ray events examined by Share et al. (2018), the
authors inferred that the number of accelerated particles at high energies
($>300$ MeV) in the loops is about $0.01$ to $0.5$ times that of the
accompanying SEP events, presumably accelerated at the CME-driven shocks.
Assuming this ratio is energy independent, one can estimate the range of
$N_{0}$ from the CME-driven
shock case. Alternatively, one can estimate $N_{0}$ from an energy budget
point of view. In a study of the CME/flare energy budget for two events
(Emslie et al., 2005), and subsequently for $38$ large SEP events, Emslie et
al. (2012) found that the energy budget for $>1$ MeV flare ions can reach
$\epsilon\sim 4\times 10^{31}$-$10^{32}$ erg, which can be comparable to, and
even larger than, those
observed in-situ. In this work, we estimate $N_{0}$ by assuming the total
energy for the accelerated particles ($>1$ MeV) is $\epsilon=10^{31}$ erg.
The source spectrum of the accelerated protons is given by,
$f(E,t)=\frac{N(t)(\gamma_{1}-1)}{E_{0}}\left[(\frac{E}{E_{0}})^{-\gamma_{1}}\theta(E_{b}-E)+(E_{b}/E_{0})^{-\gamma_{1}}(\frac{E}{E_{b}})^{-\gamma_{2}}\theta(E-E_{b})\right]\quad$
(3)
where $E_{0}$ is the injection energy, $E_{b}$ is the break energy,
$\gamma_{1}=2.5$ is the spectral index at energies below $E_{b}$ and
$\gamma_{2}=5.5$ is the spectral index at energies above $E_{b}$. This gives,
$N_{0}\approx(\frac{\gamma_{1}-2}{\gamma_{1}-1})\left(\frac{\epsilon
E_{0}^{1-\gamma_{1}}}{1-(E_{b})^{2-\gamma_{1}}}\right)$ (4)
For a choice of $E_{0}=0.02$ MeV, $E_{b}=30$ MeV, $\gamma_{1}=2.5$, and
$\gamma_{2}=5.5$, we find $N_{0}=9\times 10^{38}$. Equation (3), together with
equations (2) and (4) describe the energetic proton source, as a function of
time, for the ENAs inside the post-flare loop.
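Evaluating equation (4) numerically with these parameters (a sketch; energies in MeV, with the total energy converted from erg) reproduces the quoted $N_{0}$:

```python
ERG_PER_MEV = 1.602e-6   # 1 MeV in erg

def n0_from_energy(eps_erg=1e31, E0=0.02, Eb=30.0, g1=2.5):
    """Evaluate equation (4) for the total number of accelerated
    protons above E0, with E0 and Eb in MeV."""
    eps_mev = eps_erg / ERG_PER_MEV
    return ((g1 - 2.0) / (g1 - 1.0)) * eps_mev * E0**(1.0 - g1) \
        / (1.0 - Eb**(2.0 - g1))

print(f"N0 = {n0_from_energy():.1e}")  # ~9e38, matching the text
```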
One can now compute the production of ENAs and obtain the time profiles and
fluence of ENAs as observed at 1 au. We consider three cases with $T=12$,
$24$, and $36$ hrs, corresponding to a final loop height of $H=0.22$, $0.41$,
and $0.60$ $R_{s}$, respectively. In all cases $\tau=3$ hr. We further assume
that the flare is located at $\phi=0^{\circ}$, i.e., in a face-on
configuration. For other viewing angles, the results are qualitatively similar.
Figure 5: Upper left, upper right, and lower left: time profiles of solar
flare ENAs for a loop with a final loop height $h=0.22R_{s}$, $0.41R_{s}$, and
$0.60R_{s}$, respectively. Lower right: total ENA fluence for the three cases
considered. See text for details.
Figure 5 plots the time profiles and fluence of the flare ENAs. The upper
left, upper right and lower left panels are time profiles for the three
choices of the final flare loop heights. Seven energies are considered: $1.0$,
$1.5$, $2$, $5$, $10$, $15$, and $20$ MeV. As can be seen from these panels,
high-energy ENAs arrive earlier due to a shorter propagation time from the Sun
to 1 au. In all three panels, the peak of the time profiles occurs shortly
after the loops reach their maximum height. The energy dependence of the peak
intensity (and of the fluence, see the lower right panel) is strongly
controlled by the loop height. If the loop height is $0.22R_{s}$ (upper left
panel), the peak intensity of $T=2$ MeV ENAs is $5$ orders of magnitude
smaller than that of the $T=15$ MeV ENAs. Furthermore, there are no ENAs with
$T<2$ MeV. In
comparison, when the loop height is $0.41$ or $0.6$ Rs, the peak intensity of
$T=2$ MeV ENAs is similar to that of the $T=15$ MeV ENAs. This energy
dependence can be also seen from the fluence plot shown in the lower right
panel of Figure 5. When the loop height is $0.60R_{s}$, the fluence has a
maximum $\sim 800$/cm2 at $T=2$ MeV, and at $T=20$ MeV, the fluence is about
$10$/cm2. When the loop height is $0.22R_{s}$, however, the fluence of 2 MeV
ENAs drops by a factor of $4\times 10^{8}$ to $\sim 2\times 10^{-6}$/cm2. In
comparison, the fluence of $20$ MeV ENAs drops only by a factor of $50$, to
$0.2$/cm2. This big difference in ENA fluence at 1 au for different flare loop
heights is due to efficient loss of ENAs close to the Sun. Although plenty of
ENAs are produced in the flare loop, they cannot escape the high-density solar
atmosphere if
the flare loop is not high enough. Note that during the eruption phase of
solar flares, the height of flare loops, as seen from X-ray imaging, is much
smaller than $0.22R_{s}$ (Effenberger et al., 2017); we therefore expect no
ENAs during the eruption phase of solar flares. However, large post-flare
loops, such as those reported in West & Seaton (2015), can reach $0.5R_{s}$. Our
calculations show that there will be clear ENA signals from such a flare. We
do note that the absolute amplitude and the shape of the ENA fluence depend on
the solar atmosphere density model as well as the relevant charge exchange
cross sections (see Appendix A). Nevertheless, because the ENA fluence, and in
particular, its energy dependence, sensitively depend on the flare loop
height, one can use the ENA fluence as a probe of the flare loop height. We
point out that these large post flare loops may not be common. Consequently,
flare ENAs may not be common either. Note that both the time profiles and the
fluence for flare ENAs shown in Figure 5 are vastly different from their
counterparts in shock accelerated ENAs shown in Figure 3. This suggests that
one can use ENA observations to discern if the parent energetic ions are
accelerated at CME-driven shock or at solar flares.
## 4 Conclusions
Understanding the underlying particle acceleration process in large SEP events
has been one of the central problems in heliophysics research. With only in-
situ observations of energetic ions, questions such as the relative roles of
magnetic reconnection in flares vs shock acceleration at CME shocks, and how
to discern the effects of acceleration from that of transport, can be very
hard to answer. In part, this is because our basic understanding of the near-
Sun conditions and the physical processes involved in the production of SEP
events is hampered by our inability to make direct measurements near the
acceleration sites and to remove the effects of transport. ENA observations
can significantly advance our understanding of SEP acceleration at its source
because ENAs do not interact with the IMF and are not affected by transport
effects.
In this paper, we examine the production of ENAs at CME-driven shock fronts
and in solar flares. We compute the time profiles and fluence of ENAs for
these two scenarios. Our calculations suggest that in large SEP events where
ions are efficiently accelerated at CME-driven shocks, ENAs are copiously
produced behind the shock. At 1 au, the flux of these ENAs is at a level that
can be readily measured by a dedicated ENA detector. ENAs can also be produced
in flares where large scale and high postflare loops exist. The time profiles
and fluence of ENAs for these two scenarios differ considerably. This offers
us an opportunity to constrain the underlying particle acceleration process
via ENA observations. Our work also forms a theoretical basis for interpreting
future ENA observations.
This work is supported in part by NASA grants 80NSSC19K0075 and 80NSSC20K1783,
and NSF grant 2149771 at UAH. Work at SwRI is partially supported by NASA LWS
grants 80NSSC19K0079 and 80NSSC20K1815. Work at APL is partially supported by
NASA contract 80MSFC19F0002.
## Appendix A Production and Loss of solar ENAs
Production: We examine the ENA production at solar flares and CME-driven
shocks in this work. The underlying ENA production process is the same for
both cases and is through charge exchange reactions. At time $t$ and location
${\bf r}$, the production rate of ENA is,
$A({\bf r},E,t)=\frac{dn}{dtdE}=\sum_{i}n_{i}\cdot\sigma_{i}\cdot v\cdot
f({\bf r},E)$ (A1)
Here $f({\bf r},E)$ is the distribution function of the accelerated proton
from either the CME-driven shock or the flare site; $E=\frac{1}{2}m_{p}v^{2}$
is the kinetic energy of the energetic proton and we consider non-relativistic
case; the sum is for all contributing charge exchange processes. For the case
of solar composition, the following three charge-exchange interactions are the
most relevant:
$p+O^{6+}\rightarrow H+O^{7+},\quad p+C^{4+}\rightarrow H+C^{5+},\quad
p+H\rightarrow H+p.$ (A2)
The abundance ratio of O$^{6+}$/p is $\sim 10^{-3}$, and C$^{4+}$/O$^{6+}$ is
$\sim 0.067$ (von Steiger et al., 2000). For neutral hydrogen, ionization by
impact collisions and EUV radiation balance the recombination and
charge-exchange collisions, leading to a ratio of neutral H to protons of
$\sim 2.6\times 10^{-7}$ (D’Amicis et al., 2007).
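As a rough self-check of Eq. (A1), the sum can be evaluated numerically for a single proton energy. Apart from the abundance ratios quoted above, every numerical value below (ambient density, cross sections, the value of $f$) is a placeholder assumption for illustration only, not a value from this paper:

```python
import math

# Illustrative evaluation of the ENA production-rate summand of Eq. (A1)
# at one proton energy. Ambient density, cross sections, and f are assumed.
m_p = 1.6726e-24               # proton mass [g]
E = 1.602e-6                   # 1 MeV in erg
v = math.sqrt(2.0 * E / m_p)   # non-relativistic proton speed [cm/s]

n_p = 1.0e8                    # assumed ambient proton density [cm^-3]
n_i = {
    "O6+": 1.0e-3 * n_p,           # O6+/p ~ 1e-3 (von Steiger et al. 2000)
    "C4+": 0.067 * 1.0e-3 * n_p,   # C4+/O6+ ~ 0.067
    "H":   2.6e-7 * n_p,           # neutral H / p ~ 2.6e-7 (D'Amicis et al. 2007)
}
sigma = {"O6+": 1e-16, "C4+": 5e-17, "H": 1e-17}  # placeholder cross sections [cm^2]

f = 1.0e-30    # placeholder value of f(r, E) at this energy
A = sum(n_i[k] * sigma[k] * v for k in n_i) * f   # Eq. (A1) production rate
```

Even this toy evaluation makes the hierarchy visible: the O$^{6+}$ channel dominates whenever its cross section is not much smaller than the others, because of its larger weighted abundance.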
Figure 6: The relevant cross sections for ENA production and loss, adapted
from Wang et al. (2014, 2022). Dashed lines indicate extrapolations.
The corresponding cross sections for the three charge-exchange interactions,
as a function of proton energy, are shown in Figure 6. These cross sections
were obtained from theoretical calculations (Gruntman et al., 2001; Yu Rang,
1992) and are subject to uncertainties. The energy range for these cross
sections is also limited. Following Wang et al. (2022), we have extended them
to a larger energy range whenever necessary. Note that, as in Wang et al.
(2014), we ignore charge exchange with other ions (He$^{+}$, N$^{5+}$, etc.)
due to their smaller abundances. Including these would marginally increase the
ENA production rate.
Propagation and Loss of ENAs: Once produced, solar hydrogen ENAs leave their
birth places along ballistic trajectories, subject to losses due primarily to
impact ionization and EUV ionization. The cross sections for the two most
important impact ionization processes are also shown in Figure 6. The
differential flux $J({\bf r},v{\hat{n}},t)$ (in units of s$^{-1}$ cm$^{-2}$
keV$^{-1}$) at location ${\bf r}$, time $t$, and along the direction
${\hat{n}}$ is given by,
$\displaystyle J({\bf r},v{\hat{n}},t)=\int_{0}^{t}dt^{\prime}\int d^{3}{\bf
v^{\prime}}\,d^{3}{\bf r^{\prime}}\,\frac{A({\bf r^{\prime}},{\bf
v^{\prime}},t^{\prime})\,h({\bf r}-{\bf r^{\prime}},v)}{4\pi|{\bf r}-{\bf
r^{\prime}}|^{2}}\,\delta(v-v^{\prime})\,\delta\left(t-t^{\prime}-\frac{|{\bf
r}-{\bf r^{\prime}}|}{v}\right)\cos\theta$ (A3)
where $\cos\theta=({\bf r}-{\bf r^{\prime}})\cdot\hat{\bf n}/|{\bf r}-{\bf
r^{\prime}}|$ and $h({\bf r}-{\bf r^{\prime}},v)$ is the survival probability
of a neutral hydrogen atom at location ${\bf r}$ that was produced at ${\bf
r^{\prime}}$. The survival probability $h({\bf r}-{\bf r^{\prime}},v)$ depends
on the travel history and speed $v$ of the ENA hydrogen and is computed by
(Wang et al., 2014),
$h({\bf r}-{\bf r^{\prime}},v)=\exp\left(-\int_{0}^{|{\bf r}-{\bf
r^{\prime}}|}\gamma({\bf r^{\prime}})\,{dl}\right)$ (A4)
where the integration $dl$ is along the direction ${\bf r}-{\bf r^{\prime}}$
and $\gamma$ is the total loss rate. We consider three loss processes here:
electron impact ionization, proton impact ionization, and photo-ionization.
The loss rates for these processes are (Wang et al., 2014),
$\gamma_{eH}={\rho_{sw,e}({\bf
r})\sigma_{eH}},\quad\gamma_{pH}={\rho_{sw,p}({\bf
r})\sigma_{pH}},\quad\gamma_{\gamma
H}=4\times 10^{-3}\left(\frac{r_{s}}{r}\right)^{2}\frac{1}{v}.$ (A5)
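A minimal numerical sketch of Eqs. (A4)-(A5): accumulate the loss rate along a radial path and exponentiate. The solar-wind density model, ENA speed, and cross-section values below are placeholder assumptions, not the ones used in the paper:

```python
import numpy as np

# Sketch of the survival probability h (Eq. A4) for an ENA moving radially
# from 2 solar radii to 1 au. Density model and cross sections are assumed.
r_s = 6.96e10                      # solar radius [cm]
au = 1.496e13                      # 1 au [cm]
v = 1.4e9                          # ENA speed [cm/s], roughly a 1 MeV hydrogen atom
sigma_eH, sigma_pH = 1e-17, 1e-16  # placeholder impact-ionization cross sections [cm^2]

def n_sw(r):
    # assumed solar-wind density, ~5 cm^-3 at 1 au, falling off as r^-2
    return 5.0 * (au / r) ** 2

def gamma_total(r):
    # loss rates of Eq. (A5) expressed per unit path length [cm^-1]:
    # electron impact + proton impact + photo-ionization
    return n_sw(r) * (sigma_eH + sigma_pH) + 4e-3 * (r_s / r) ** 2 / v

# trapezoidal integration of gamma along the path (the integral in Eq. A4)
r = np.linspace(2.0 * r_s, au, 20000)
g = gamma_total(r)
tau = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(r))
h = np.exp(-tau)   # survival probability at 1 au
```

Because both the density and the photo-ionization rate scale as $r^{-2}$, the optical depth is dominated by the innermost part of the path, so ENAs born close to the Sun are attenuated far more strongly.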
For both the flare ENAs and the CME-shock ENAs, the treatment of ENA
production and propagation/loss is the same. The difference between them is
the region of the energetic ion source: compared to the CME case, the
post-flare loop source is more localized.
## References
* Cane et al. (2003) Cane, H. V., von Rosenvinge, T. T., Cohen, C. M. S., & Mewaldt, R. A. 2003, Geophys. Res. Lett., 30, 8017, doi: 10.1029/2002GL016580
* Cheng et al. (2018) Cheng, X., Li, Y., Wan, L. F., et al. 2018, ApJ, 866, 64, doi: 10.3847/1538-4357/aadd16
* D’Amicis et al. (2007) D’Amicis, R., Orsini, S., Antonucci, E., et al. 2007, Journal of Geophysical Research (Space Physics), 112, A06110, doi: 10.1029/2006JA011969
* de Nolfo et al. (2019) de Nolfo, G. A., Bruno, A., Ryan, J. M., et al. 2019, ApJ, 879, 90, doi: 10.3847/1538-4357/ab258f
* Effenberger et al. (2017) Effenberger, F., Rubio da Costa, F., Oka, M., et al. 2017, ApJ, 835, 124, doi: 10.3847/1538-4357/835/2/124
* Emslie et al. (2005) Emslie, A. G., Dennis, B. R., Holman, G. D., & Hudson, H. S. 2005, Journal of Geophysical Research (Space Physics), 110, A11103, doi: 10.1029/2005JA011305
* Emslie et al. (2012) Emslie, A. G., Dennis, B. R., Shih, A. Y., et al. 2012, ApJ, 759, 71, doi: 10.1088/0004-637X/759/1/71
* Forbes & Lin (2000) Forbes, T. G., & Lin, J. 2000, Journal of Atmospheric and Solar-Terrestrial Physics, 62, 1499, doi: 10.1016/S1364-6826(00)00083-3
* French et al. (2019) French, R. J., Judge, P. G., Matthews, S. A., & van Driel-Gesztelyi, L. 2019, ApJ, 887, L34, doi: 10.3847/2041-8213/ab5d34
* Greenspan et al. (1999) Greenspan, M. E., Mason, G. M., & Mazur, J. E. 1999, J. Geophys. Res., 104, 19911, doi: 10.1029/1999JA900225
* Gruntman et al. (2001) Gruntman, M., Roelof, E. C., Mitchell, D. G., et al. 2001, J. Geophys. Res., 106, 15767, doi: 10.1029/2000JA000328
* Jejčič et al. (2018) Jejčič, S., Kleint, L., & Heinzel, P. 2018, ApJ, 867, 134, doi: 10.3847/1538-4357/aae650
* Jin et al. (2018) Jin, M., Petrosian, V., Liu, W., et al. 2018, The Astrophysical Journal, 867, 122, doi: 10.3847/1538-4357/aae1fd
* Leblanc et al. (1998) Leblanc, Y., Dulk, G. A., & Bougeret, J.-L. 1998, Sol. Phys., 183, 165, doi: 10.1023/A:1005049730506
* Lee (1983) Lee, M. A. 1983, J. Geophys. Res., 88, 6109, doi: 10.1029/JA088iA08p06109
* Li et al. (2012a) Li, G., Moore, R., Mewaldt, R. A., Zhao, L., & Labrador, A. W. 2012a, Space Science Reviews, 171, 141, doi: 10.1007/s11214-011-9823-7
* Li et al. (2012b) Li, G., Shalchi, A., Ao, X., Zank, G., & Verkhoglyadova, O. 2012b, Advances in Space Research, 49, 1067, doi: 10.1016/j.asr.2011.12.027
* Li & Zank (2005) Li, G., & Zank, G. P. 2005, Geophysical Research Letters, 32, 1, doi: 10.1029/2004GL021250
* Li et al. (2005) Li, G., Zank, G. P., & Rice, W. K. 2005, Journal of Geophysical Research: Space Physics, 110, A06104, doi: 10.1029/2004JA010600
* Li et al. (2003) Li, G., Zank, G. P., & Rice, W. K. M. 2003, Journal of Geophysical Research: Space Physics, 108, 1, doi: 10.1029/2002JA009666
* Li et al. (2021) Li, G., Jin, M., Ding, Z., et al. 2021, The Astrophysical Journal, 919, 146, doi: 10.3847/1538-4357/ac0db9
* Mason et al. (2021) Mason, G. M., Greenspan, M. E., Kanekal, S. G., et al. 2021, ApJ, 923, 195, doi: 10.3847/1538-4357/ac2fa2
* Mewaldt et al. (2009) Mewaldt, R. A., Leske, R. A., Stone, E. C., et al. 2009, ApJ, 693, L11, doi: 10.1088/0004-637X/693/1/L11
* Miller & Roberts (1995) Miller, J. A., & Roberts, D. A. 1995, ApJ, 452, 912, doi: 10.1086/176359
* Petrosian (2012) Petrosian, V. 2012, Space Sci. Rev., 173, 535, doi: 10.1007/s11214-012-9900-6
* Ryan (2000) Ryan, J. M. 2000, Space Sci. Rev., 93, 581, doi: 10.1023/A:1026547513730
* Share et al. (2018) Share, G. H., Murphy, R. J., White, S. M., et al. 2018, ApJ, 869, 182, doi: 10.3847/1538-4357/aaebf7
* von Steiger et al. (2000) von Steiger, R., Schwadron, N. A., Fisk, L. A., et al. 2000, J. Geophys. Res., 105, 27217, doi: 10.1029/1999JA000358
* Wang et al. (2014) Wang, L., Li, G., Shih, A. Y., Lin, R. P., & Wimmer-Schweingruber, R. F. 2014, ApJ, 793, L37, doi: 10.1088/2041-8205/793/2/L37
* Wang et al. (2022) Wang, X. D., Klecker, B., Nicolaou, G., et al. 2022, Earth and Planetary Physics, 6, 42, doi: 10.26464/epp2022003
* West & Seaton (2015) West, M. J., & Seaton, D. B. 2015, ApJ, 801, L6, doi: 10.1088/2041-8205/801/1/L6
* Yu Rang (1992) Yu Rang, K. 1992, Journal of Physics B Atomic Molecular Physics, 25, 199, doi: 10.1088/0953-4075/25/1/023
* Zank et al. (2000) Zank, G. P., Rice, W. K. M., & Wu, C. C. 2000, J. Geophys. Res., 105, 25079, doi: 10.1029/1999JA000455
* Zhang et al. (2021) Zhang, Q., Guo, F., Daughton, W., Li, H., & Li, X. 2021, Phys. Rev. Lett., 127, 185101, doi: 10.1103/PhysRevLett.127.185101
# Photometry of the Four Anti-Galactocentric Old Open Clusters: Czernik 30,
Berkeley 34, Berkeley 75, and Berkeley 76
Hyobin Im Korea Astronomy Space Science Institute (KASI), 776 Daedukdae-ro,
Yuseong-gu, Daejeon 34055, Republic of Korea Korea University of Science and
Technology (UST), 217 Gajeong-ro, Yuseong-gu, Daejeon 34113, Republic of Korea
Sang Chul KIM Corresponding author. Korea Astronomy Space Science Institute
(KASI), 776 Daedukdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea Korea
University of Science and Technology (UST), 217 Gajeong-ro, Yuseong-gu,
Daejeon 34113, Republic of Korea Visiting astronomer, Cerro Tololo Inter-
American Observatory at NSF’s NOIRLab, which is managed by the Association of
Universities for Research in Astronomy (AURA) under a cooperative agreement
with the National Science Foundation. Jaemann Kyeong Korea Astronomy Space
Science Institute (KASI), 776 Daedukdae-ro, Yuseong-gu, Daejeon 34055,
Republic of Korea Hong Soo Park Korea Astronomy Space Science Institute
(KASI), 776 Daedukdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea Korea
University of Science and Technology (UST), 217 Gajeong-ro, Yuseong-gu,
Daejeon 34113, Republic of Korea Visiting astronomer, Cerro Tololo Inter-
American Observatory at NSF’s NOIRLab, which is managed by the Association of
Universities for Research in Astronomy (AURA) under a cooperative agreement
with the National Science Foundation. Joon Hyeop Lee Korea Astronomy Space
Science Institute (KASI), 776 Daedukdae-ro, Yuseong-gu, Daejeon 34055,
Republic of Korea
###### Abstract
We present a $BVI$ photometric study of four old open clusters (OCs) in the
Milky Way Galaxy, Czernik 30, Berkeley 34, Berkeley 75, and Berkeley 76, using
observation data obtained with the SMARTS 1.0 m telescope at CTIO, Chile.
These four OCs are located in the anti-Galactocentric direction and in the
Galactic plane. We determine the fundamental physical parameters of the four
OCs, such as age, metallicity, distance modulus, and color excess, using red
clump and PARSEC isochrone fitting methods after determining the centers and
sizes of the four OCs. These four old OCs are $2-3$ Gyr old and $6-8$ kpc away
from the Sun. The metallicity ([Fe/H]) values of the four OCs are between
$-0.6$ and $0.0$ dex. We combine the data for these four OCs with those for
old OCs from five literature sources, resulting in 236 objects, to investigate
the Galactic radial metallicity distribution. The gradient of a single linear
fit to this Galactocentric [Fe/H] distribution is $-0.052\pm 0.004$ dex
kpc$^{-1}$. If we assume the existence of a discontinuity in this radial
metallicity distribution, the gradient at Galactocentric radius $<12$ kpc is
$-0.070\pm 0.006$ dex kpc$^{-1}$, while that of the outer part is $-0.016\pm
0.010$ dex kpc$^{-1}$, which is flatter than the inner part. Although there
are not many sample clusters in the outer part, the broken linear fit seems to
better follow the observation data.
Open star clusters (1160); Red giant clump (1370); Galaxy disks (589); Galaxy
evolution (594); Galaxy abundances (574); Milky Way evolution (1052); Chemical
abundances (224)
††software: Scipy (Jones et al., 2001), astrometry.net (Lang et al., 2010),
IRAF (Tody, 1986, 1993), DAOPHOT II/ALLSTAR (Stetson, 1990), PARSEC (Bressan
et al., 2012), pyUPMASK (Pera et al., 2021).
## 1 Introduction
Most stars in the Milky Way Galaxy (MWG) are born in star clusters (Lada &
Lada, 2003; Kim et al., 2009; Kyeong et al., 2011). The stars in open clusters
(OCs) share common physical properties, such as distance, age, and chemical
composition, which can be determined using photometric methods (Park & Lee,
1999; Kyeong et al., 2001, 2008; Ahumada et al., 2013; Carrera et al., 2017).
OCs can be divided into three groups by age: old OCs have ages older than 1
Gyr, young OCs have ages younger than 1 Myr, and intermediate-age OCs have
ages of 1 Myr $-$ 1 Gyr (Friel, 1995). Young OCs are useful for investigating
star formation processes, while old OCs are a good tool for research on the
formation and early evolution of the Galactic disk and examination of stellar
evolution models (van den Bergh & McClure, 1980; Lada & Lada, 2003).
There are many Galactic OC catalogs. Lyngå published the ‘Catalog of Open
Cluster Data’, which includes 1148 OCs with physical parameters such as
diameter, age, metallicity, and reddening (Lyngå, 1995). Version 3.5 of the
Dias et al. (2002) catalog includes 2167 MWG OCs with information about
location, kinematics, distance, age, and reddening. The Milky Way Star Cluster
catalog of Kharchenko et al. (2013) increased the number of OCs to 2808.
The number of OCs in catalogs continues to grow, but the number of OCs with
known physical parameters is much smaller than the total number of OCs in the
catalogs. Since the beginning of the $Gaia$ era, many studies have estimated
parameters such as distance and age with $Gaia$. Using the $Gaia$ DR2 data,
Cantat-Gaudin et al. (2018a) published a list of 1229 OCs including physical
parameters such as age, distance, proper motion, and parallax. Liu & Pang
(2019) included 2443 cluster candidates with parameters from isochrone
fitting. Using $Gaia$ DR2 data, the Gaussian mixture model, mean-shift
algorithms, and visual inspection, Sim et al. (2019) discovered 207 new OCs.
Although OC catalogs are being updated, there are disagreements about the
physical parameters of the same objects among the studies. Cantat-Gaudin et
al. (2020) used a machine-learning method to fit isochrone models to the
$Gaia$ DR2 data and obtained parameters (age, distance, and extinction) for
2000 OCs. Dias et al. (2021) provided physical parameters, such as proper
motion, radial velocity, distance, age, and [Fe/H], for 1743 OCs based on the
$Gaia$ DR2 data.
The old OCs with larger Galactocentric distances are important for studying
the metallicity distribution in the Galactic disk. Janes (1979) found the
Galactic disk metallicity gradient using OCs. Twarog et al. (1997) argued for
the existence of a discontinuity in the radial metallicity distribution
outside of 10 kpc from the Galactic center, where the inner part shows a
steeper gradient than the outer part. The position of the discontinuity is
suggested to be at $10-15$ kpc in recent studies (Netopil et al., 2016; Kim et
al., 2017; Donor et al., 2020; Monteiro et al., 2021). However, the number of
well-studied OCs in the outer part of the Galactic disk is currently too small
to clearly determine the existence and position of the discontinuity.
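The single versus broken linear fits at issue here can be sketched as follows. The synthetic sample below only mimics the qualitative behavior (a steeper inner gradient with a break near 12 kpc); it is not the cluster sample analyzed in this paper:

```python
import numpy as np

# Single vs. broken (two-segment) linear fit of [Fe/H] against
# Galactocentric radius, on illustrative synthetic data.
rng = np.random.default_rng(3)
R = rng.uniform(6.0, 20.0, 200)                         # radii [kpc]
feh = np.where(R < 12.0,
               0.3 - 0.07 * R,                          # steeper inner gradient
               0.3 - 0.07 * 12.0 - 0.016 * (R - 12.0))  # flatter outer part
feh += rng.normal(0.0, 0.05, R.size)                    # intrinsic scatter

slope_all = np.polyfit(R, feh, 1)[0]                    # single linear fit
inner = R < 12.0                                        # assumed break radius
slope_in = np.polyfit(R[inner], feh[inner], 1)[0]       # inner-segment gradient
slope_out = np.polyfit(R[~inner], feh[~inner], 1)[0]    # outer-segment gradient
```

The single-fit slope lands between the two segment slopes, which is why a sample dominated by inner-disk clusters can mask the flattening of the outer disk.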
One of the strengths of studying the anti-Galactocentric region is the
relatively low extinction, which enables us to investigate the evolution of
the outer part of the Galactic disk. The old OCs in the anti-Galactocentric
region can be a useful tool for studying the evolution of the MWG since they
have long dynamical timescales (Gaia Collaboration et al., 2021).
We investigated the physical parameters of four OCs located in the
anti-Galactocentric direction, Czernik 30, Berkeley 34, Berkeley 75, and
Berkeley 76, by using the red clump (RC) stars and by fitting the PAdova and
TRieste Stellar Evolution Code (PARSEC) isochrones (Bressan et al., 2012).
In Table 1, we summarize the physical parameters of the four OCs obtained by
the previous studies and in our study. Czernik 30 is located at
$\alpha_{J2000}=07^{h}31^{m}10.8^{s}$ and $\delta_{J2000}=-09\arcdeg 56\arcmin
42\arcsec$ and has been studied in four literatures. Hasegawa et al. (2008)
and Piatti et al. (2009) presented physical parameters using $BVI$ photometric
data and Washington photometric data. Perren et al. (2015) made a code for the
automatic determination of physical parameters of OCs, and included Czernik 30
in their sample for testing the code and gave the physical parameters. Hayes
et al. (2015) conducted a photometric and spectroscopic study of Czernik 30
and obtained the basic parameters.
The position of Berkeley 34 is $\alpha_{J2000}=07^{h}00^{m}23.2^{s}$,
$\delta_{J2000}=-00\arcdeg 13\arcmin 54\arcsec$, and there are three previous
studies of this cluster. Hasegawa et al. (2004) and Ortolani et al. (2005)
presented physical parameters using isochrone fitting. Donati et al. (2012)
presented ranges for the physical parameters and calculated the binary
fraction, which is measured from the colors and magnitudes and fine-tuned with
a differential reddening value.
Berkeley 75 is located at $\alpha_{J2000}=06^{h}48^{m}59.1^{s}$,
$\delta_{J2000}=-23\arcdeg 59\arcmin 36\arcsec$. Carraro et al. (2005)
published the physical parameters using the $BVI$ photometry and Carraro et
al. (2007) studied five OCs at the outer Galactic disk including Berkeley 75
using VLT high resolution spectroscopic data, and suggested the physical
parameters. Cantat-Gaudin et al. (2016) studied the abundances and kinematics
of ten OCs including Berkeley 75. They used the spectroscopic data of two
member stars of Berkeley 75 and gave the [Fe/H] value of Berkeley 75.
The location of Berkeley 76 is $\alpha_{J2000}=07^{h}06^{m}42.4^{s}$,
$\delta_{J2000}=-11\arcdeg 43\arcmin 33\arcsec$, and the properties of
Berkeley 76 from three previous studies span a relatively wide range. Hasegawa
et al. (2008) and Tadross (2008) obtained physical parameters from isochrone
fittings to $BVI$ photometric data and 2MASS $JHK$ data, respectively. Carraro
et al. (2013) studied five old OCs at the outer Galactic disk, including
Berkeley 76, and determined its parameters. The distance moduli from Hasegawa
et al. (2008) and Carraro et al. (2013) differ by almost 3 magnitudes.
This study uses the observation data obtained from the same observing run as
that of Kim et al. (2017), which presented the physical parameters of the old
OC Ruprecht 6. To better constrain the evolution of the outer part of the
Galactic disk, in this paper we estimate the physical parameters of the four
OCs in a way basically consistent with, but improved over, that of Kim et al.
(2017). In this study, we use the $Gaia$ Early Data Release 3 (EDR3) data to
select member stars of the clusters and adopt the number density distribution
function of the stellar photometry results, to better estimate the centers and
radii of the clusters.
This paper is organized as follows. In Section 2, we explain the observations
and data reduction. In Section 3, we describe the results for Czernik 30: its
center, member selection using the $Gaia$ EDR3 data, radius, reddening and
distance, age and metallicity, and a comparison with previous studies. In
Sections 4, 5, and 6, we show the results for Berkeley 34, Berkeley 75, and
Berkeley 76, respectively, using the same routines as in Section 3. In Section
7, we show and discuss the radial metallicity distribution of the Galactic
disk, using the previously known OCs from the literature together with the
newly estimated physical quantities of the four OCs. In Section 8, we
summarize our results.
Figure 1: $B$-band images of the four old open clusters: (a) Czernik 30, (b) Berkeley 34, (c) Berkeley 75, and (d) Berkeley 76. North is up, and east is to the left. The red cross symbols mark the centers of the clusters, and the red circles show the extent of the clusters with the radii determined in this study. The radius for each cluster is given in Table 2. The other x symbols indicate the centers of the clusters from previous studies (see the text for details).
Table 1: Summary of the physical parameters
R.A. (J2000) | Dec. (J2000) | E($B-V$) | E($V-I$) | Age | [Fe/H] | $(m-M)_{0}$ | Distance | Source
---|---|---|---|---|---|---|---|---
hh:mm:ss | dd:mm:ss | mag | mag | Gyr | dex | mag | kpc |
(a) Czernik 30
07:31:10 | $-9:56$ | $\cdots$ | $0.34$ | $2.5$ | $-0.4$ | $14.27$ | $\cdots$ | Hasegawa et al. (2008)
07:31:18 | $-09:58:00$ | $0.26\pm 0.02$ | $\cdots$ | $2.5^{+0.3}_{-0.25}$ | $-0.4\pm 0.2$ | $\cdots$ | $6.2\pm 0.8$ | Piatti et al. (2009)
07:31:19.2 | $-09:58:12$ | $0.5\pm 0.1$ | $\cdots$ | $0.8^{+0.5}_{-0.3}$ | $-0.3\pm 0.4$ | $\cdots$ | $7.9^{+1.6}_{-1.3}$ | Perren et al. (2015)
07:31:11 | $-09:56:38$ | $0.24\pm 0.06$ | $0.36\pm 0.04$ | $2.8\pm 0.3$ | $-0.2\pm 0.15$ | $\cdots$ | 6.5 | Hayes et al. (2015)
07:31:10.8 | $-09:56:42$ | $0.15\pm 0.08$ | $0.27\pm 0.20$ | $2.82\pm 0.32$ | $-0.22\pm 0.15$ | $14.05\pm 0.13$ | $6.46\pm 0.39$ | This study
(b) Berkeley 34
07:00:24 | $-00:15:00$ | 0.45 | 0.60 | 2.8 | $-0.02$ | 14.31 | $\cdots$ | Hasegawa et al. (2004)
07:00:23 | $-00:14:15$ | $0.30\pm 0.05$ | $\cdots$ | $2.3\pm 0.4$ | $-0.41$ | $\cdots$ | $7.8\pm 0.8$ | Ortolani et al. (2005)
07:00:23 | $-00:13:56$ | $0.57-0.64$ | $\cdots$ | $2.1-2.5$ | $-0.31$ | $14.1-14.3$ | $6-7$ | Donati et al. (2012)
07:00:23.2 | $-00:13:54$ | $0.56\pm 0.24$ | $0.73\pm 0.31$ | $2.51\pm 0.30$ | $-0.30\pm 0.15$ | $14.13\pm 0.19$ | $6.70\pm 0.59$ | This study
(c) Berkeley 75
06:48:59 | $-23:59:30$ | $0.08\pm 0.05$ | $0.13\pm 0.05$ | $3.0\pm 0.3$ | $-0.72$ | 14.9 | 9.8 | Carraro et al. (2005)
$\cdots$ | $\cdots$ | $0.04\pm 0.03$ | $\cdots$ | $4.0\pm 0.4$ | $-0.22\pm 0.20$ | $14.90\pm 0.20$ | 9.1 | Carraro et al. (2007)
06:48:59 | $-23:59:30$ | $\cdots$ | $\cdots$ | $\cdots$ | $-0.38$ | $\cdots$ | $\cdots$ | Cantat-Gaudin et al. (2016)
06:48:59.1 | $-23:59:36$ | $0.07\pm 0.18$ | $0.13\pm 0.32$ | $3.16\pm 0.73$ | $-0.57\pm 0.20$ | $14.44\pm 0.17$ | $7.73\pm 0.61$ | This study
(d) Berkeley 76
07:06:44 | $-11:44$ | $\cdots$ | $0.70$ | $1.6$ | $-0.4$ | 14.39 | $\cdots$ | Hasegawa et al. (2008)
07:06:24 | $-11:37:38$ | 0.73 | $\cdots$ | 0.8 | $\cdots$ | $\cdots$ | $2.505\pm 0.115$ | Tadross (2008)
07:06:24 | $-11:37:00$ | $0.55\pm 0.10$ | $0.75\pm 0.10$ | 1.5 | $\cdots$ | $17.20\pm 0.15$ | 12.6 | Carraro et al. (2013)
07:06:42.4 | $-11:43:33$ | $0.41\pm 0.33$ | $0.57\pm 0.46$ | $1.26\pm 0.14$ | $0.00\pm 0.20$ | $13.97\pm 0.23$ | $6.22\pm 0.66$ | This study
## 2 Observations and Data Reduction
The $BVI$ images of the four target OCs, Czernik 30, Berkeley 34, Berkeley 75,
and Berkeley 76, were acquired with the Small and Moderate Aperture Research
Telescope System (SMARTS) 1.0 m telescope and the Y4KCam camera at the Cerro
Tololo Inter-American Observatory (CTIO) in 2010 December. Y4KCam has 4064
$\times$ 4064 pixels, a pixel scale of $0.289\arcsec$ pixel$^{-1}$, and a
field of view (FoV) of $19.57\arcmin\times 19.57\arcmin$. While the R.A. and
declination are given in Table 1, Table 2 lists the Galactic longitudes,
Galactic latitudes, and radii of the four OCs. Figure 1 shows the centers and
radii of the OCs together with the center positions from previous studies.
Table 3 gives the observation log, listing the observation date, filter, and
exposure times.
While the reduction and photometry routines were the same as those applied in
Kim et al. (2017), we summarize the key processes here. The IRAF111IRAF is
distributed by the National Optical Astronomy Observatories, which is operated
by the Association of Universities for Research in Astronomy, Inc. (AURA)
under a cooperative agreement with the National Science Foundation./CCDRED
package was used for the standard reduction processes of overscan correction,
bias correction, and sky flattening. Point spread function (PSF) photometry
was performed using the DAOPHOT II/ALLSTAR stand-alone package (Stetson,
1990). The errors of the PSF photometry are shown in Fig. 2. To derive the
astrometric solution, astrometry.net (Lang et al., 2010) was used.
Four Landolt standard star fields (PG0231+051, LB1735, LSS982, Rubin 149)
(Landolt, 1992; Landolt & Uomoto, 2007; Landolt, 2009) were observed to obtain
the standardization equations to convert the instrumental magnitudes to
standard magnitudes. The same transformation equations as those in Kim et al.
(2017) are used, which are
$B=b-0.285(\pm 0.009)~{}X_{b}-0.127(\pm 0.005)(B-V)-1.903(\pm 0.013)$
$V=v-0.157(\pm 0.007)~{}X_{v}+0.027(\pm 0.004)(B-V)-1.693(\pm 0.011)$
$I=i-0.056(\pm 0.007)~{}X_{i}+0.019(\pm 0.003)(V-I)-2.712(\pm 0.010)$
where $b,v,i$ are the instrumental magnitudes for each band, $B,V,I$ are the
standard magnitudes, and $X$ denotes the airmass for each band. The rms values
of the standardization residuals (standard magnitude minus transformed
magnitude) are $\Delta B=0.037$, $\Delta V=0.030$, and $\Delta I=0.029$ mag.
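Because the standard colors $(B-V)$ and $(V-I)$ appear on the right-hand sides, the transformation has to be solved iteratively (or as a small linear system). A fixed-point sketch of this, with made-up instrumental magnitudes and airmasses for illustration:

```python
# Solving the standardization equations above by fixed-point iteration.
# The instrumental magnitudes and airmasses are made-up illustrative inputs.
b, v, i = 17.80, 17.20, 16.50      # instrumental magnitudes (illustrative)
Xb, Xv, Xi = 1.20, 1.18, 1.15      # airmasses per band (illustrative)

B, V, I = b, v, i                  # start from the instrumental values
for _ in range(20):                # color coefficients are small, so this converges fast
    B = b - 0.285 * Xb - 0.127 * (B - V) - 1.903
    V = v - 0.157 * Xv + 0.027 * (B - V) - 1.693
    I = i - 0.056 * Xi + 0.019 * (V - I) - 2.712
```

At the fixed point all three equations hold simultaneously; since the color coefficients are at most 0.127, the iteration is a strong contraction and a handful of passes suffices.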
Table 2: Galactic coordinates and radii of the four OCs Name | Galactic longitude ($l$) | Galactic latitude ($b$) | Radius | Source
---|---|---|---|---
| [deg] | [deg] | [arcmin] |
Czernik 30 | 226.34 | 4.16 | $2.3\pm 0.3$ | This study
Berkeley 34 | 214.16 | 1.89 | $2.5\pm 0.3$ | This study
Berkeley 75 | 234.30 | $-11.19$ | $1.9\pm 0.2$ | This study
Berkeley 76 | 225.10 | $-1.99$ | $4.0\pm 0.3$ | This study
Table 3: Log of the observations for the four OCs Target | Date | Filter | Exposure time
---|---|---|---
| (UT) | | (seconds)
Czernik 30 | 2010 December 13 | $B$ | 1200 s $\times 3$
| | $V$ | 900 s $\times 3$
| | $I$ | 800 s $\times 3$
Berkeley 34 | 2010 December 15 | $B$ | 1200 s $\times 3$
| | $V$ | 900 s $\times 3$
| | $I$ | 900 s $\times 3$
Berkeley 75 | 2010 December 12 | $B$ | 900 s $\times 3$
| | $V$ | 900 s $\times 2$
| | $I$ | 400 s $\times 3$
Berkeley 76 | 2010 December 12 | $B$ | 1200 s $\times 3$
| | $V$ | 900 s $\times 3$
| | $I$ | 800 s $\times 3$
Figure 2: Error plot of the $BVI$ bands for Czernik 30.
## 3 Czernik 30
### 3.1 Center
To determine the center of Czernik 30, we applied a Gaussian kernel density
estimate to the distribution of the point sources detected with the DAOPHOT II
routine in Section 2 and brighter than $V=20$ mag, using the gaussian_kde
function of the Python Scipy package with Scott's rule as the bandwidth, which
is the optimal bandwidth for a Gaussian kernel minimizing the integrated mean
squared error. We obtain the probability distribution function (PDF) for the
whole image, and the peak of this function is taken as the center of Czernik
30. This result is shown in Fig. 3. The left color bar in Fig. 3 indicates the
number of stars brighter than $V=20$ mag per square arcminute and the right
color bar shows the membership probability of each star (see Section 3.2
below).
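The center-finding step can be sketched with scipy's gaussian_kde (Scott's rule is its default bandwidth rule). The star positions below are synthetic, a cluster placed at (10, 10) arcmin on a uniform field, standing in for the real $V<20$ mag catalog:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# synthetic star positions [arcmin]: a cluster at (10, 10) plus a uniform field
cluster = rng.normal(loc=10.0, scale=1.0, size=(150, 2))
field = rng.uniform(0.0, 19.57, size=(600, 2))   # FoV-sized uniform field
xy = np.vstack([cluster, field]).T               # shape (2, N), as gaussian_kde expects

kde = gaussian_kde(xy, bw_method="scott")        # Scott's rule bandwidth

# evaluate the PDF on a grid; the peak is taken as the cluster center
grid = np.linspace(0.0, 19.57, 200)
gx, gy = np.meshgrid(grid, grid)
pdf = kde(np.vstack([gx.ravel(), gy.ravel()]))
peak = int(np.argmax(pdf))
center = (gx.ravel()[peak], gy.ravel()[peak])
```

The KDE peak is robust to field contamination as long as the cluster overdensity stands clearly above the smoothed background level.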
The red cross symbol in Fig. 1 (a) marks the derived center of Czernik 30:
$\alpha_{J2000}=07^{h}31^{m}10.8^{s}$ and $\delta_{J2000}=-09\arcdeg 56\arcmin
42\arcsec$. While the centers of Czernik 30 used by Hayes et al. (2015)
($\alpha_{J2000}=07^{h}31^{m}11^{s}$, $\delta_{J2000}=-09\arcdeg 56\arcmin
38\arcsec$, green x symbol in Fig. 1 (a)) and by Piatti et al. (2009)
($\alpha_{J2000}=07^{h}31^{m}10^{s}$, $\delta_{J2000}=-09\arcdeg 56\arcmin
00\arcsec$, magenta x symbol in Fig. 1 (a)) are very close to ours, those used
by Hasegawa et al. (2008) ($\alpha_{J2000}=07^{h}31^{m}18^{s}$,
$\delta_{J2000}=-09\arcdeg 58\arcmin 00\arcsec$, yellow x symbol in Fig. 1
(a)) and by Perren et al. (2015) ($\alpha_{J2000}=07^{h}31^{m}19.2^{s}$,
$\delta_{J2000}=-09\arcdeg 58\arcmin 12\arcsec$, cyan x symbol in Fig. 1 (a))
are somewhat different from ours.
Figure 3: Distribution function of the stellar photometry data from the
$B$-band image including Czernik 30. The color bars on the right show the
normalized number density of the background (left bar) and the membership
probability encoded in the colors of the dots (right bar). Black dots are the
locations of stars regardless of their brightness, the red dot is the derived
center of Czernik 30, and the red circle has a radius of $2.3^{\prime}$.
### 3.2 Member Selection
pyUPMASK (Pera et al., 2021) is a package that determines the members of a
star cluster with the method of the ‘unsupervised photometric membership
assignment in stellar clusters’ (UPMASK) algorithm (Krone-Martins & Moitinho,
2014). UPMASK originally selected stellar cluster members using the k-means
clustering method with photometric information. Cantat-Gaudin et al. (2018a,
b) and Carrera et al. (2019) determined the membership of OCs using UPMASK
with proper motion and parallax data from $Gaia$. pyUPMASK is developed in
Python and supports the clustering methods of the scikit-learn library, while
UPMASK is written in R and supports the k-means clustering method. pyUPMASK is
composed of two loops: the outer loop runs the inner loop and calculates the
membership probability, and the inner loop identifies and rejects clusters.
pyUPMASK measures clustering in a three-dimensional space of proper motion and
parallax.
We adopted pyUPMASK to select the members of Czernik 30 with proper motion and
parallax data from the $Gaia$ EDR3. The $Gaia$ EDR3 data covering our image
region were matched with our photometric catalog. The stars included in the
final catalog for selecting members satisfy two conditions: brighter than
$V=20$ mag and parallax greater than 0. Among a dozen clustering methods, we
adopted the Gaussian mixture model, which assumes that every cluster follows a
Gaussian distribution. Finally, 137 member stars were found to have a
membership probability larger than 0.70, which was also used in Zhong et al.
(2022) as the probability limit for member stars.
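The essence of this step — fitting a two-component Gaussian mixture in (pmra, pmdec, parallax) space and cutting at a membership probability of 0.70 — can be sketched with a small diagonal-covariance EM on synthetic data. This is a hand-rolled stand-in for the scikit-learn model that pyUPMASK wraps, and every number below is made up for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
# synthetic (pmra, pmdec, parallax): compact cluster + broad field (made up)
cluster = rng.normal([-1.0, 2.0, 0.15], [0.05, 0.05, 0.02], size=(120, 3))
field = rng.normal([0.0, 0.0, 0.5], [3.0, 3.0, 0.4], size=(620, 3))
X = np.vstack([cluster, field])

def diag_gauss(X, mu, var):
    # multivariate normal density with a diagonal covariance matrix
    d = X - mu
    return np.exp(-0.5 * np.sum(d * d / var, axis=1)) / \
        np.sqrt((2.0 * np.pi) ** X.shape[1] * np.prod(var))

# crude initialization: one component at the locally densest point (the
# cluster overdensity), the other at the global mean
dens = cKDTree(X).query(X, k=10)[0][:, -1]
mu = np.array([X[np.argmin(dens)], X.mean(axis=0)])
var = np.array([[0.01, 0.01, 0.01], [9.0, 9.0, 0.16]])
w = np.array([0.5, 0.5])

for _ in range(100):   # EM iterations
    p = np.stack([w[k] * diag_gauss(X, mu[k], var[k]) for k in range(2)], axis=1)
    r = p / p.sum(axis=1, keepdims=True)          # E step: responsibilities
    n = r.sum(axis=0)
    w = n / len(X)                                # M step: weights,
    for k in range(2):                            # means, and variances
        mu[k] = (r[:, k, None] * X).sum(axis=0) / n[k]
        var[k] = (r[:, k, None] * (X - mu[k]) ** 2).sum(axis=0) / n[k] + 1e-8

compact = np.argmin(var.sum(axis=1))   # the cluster is the compact component
members = r[:, compact] > 0.70         # probability cut used in the text
```

Identifying the cluster as the component with the smallest total variance mirrors the physical expectation: true members share a common space motion and distance, so they form the compact component.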
### 3.3 Radius
We investigated the radial density profile using concentric circles around the
center of the cluster determined in Section 3.1, with a radial bin size of
$0.5\arcmin$, as shown in Fig. 4. We counted the number of stars in each bin
and divided it by the corresponding area (black line in Fig. 4). Since we
placed Czernik 30 in the upper right quadrant of the CCD chip during the
observations, at $r>6\arcmin$ the whole annulus was not covered by the image,
so we could use only part of the annulus for the calculation.
For the member stars of Czernik 30, we plotted the radial density profile
(blue line in Fig. 4) in the same way as mentioned above. We adopt
$2.3\arcmin\pm 0.3\arcmin$ as the radius, within which the member fraction is
greater than 0.5, i.e., member stars are the majority. The uncertainty was
estimated with the bootstrap method. Although a small number of member stars
exist at $2.3\arcmin<r<5\arcmin$, the number of field stars is much larger
than that of member stars in this region. In our study, we only used the stars
within this radius to determine the physical parameters of Czernik 30.
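The profile construction described above — star counts in $0.5\arcmin$ annuli divided by annulus area, plus a member-fraction curve — can be sketched on synthetic positions (a cluster of $1\arcmin$ scale at the origin on a uniform field; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic positions [arcmin] relative to the cluster center (illustrative)
cluster = rng.normal(0.0, 1.0, size=(150, 2))
field = rng.uniform(-10.0, 10.0, size=(1000, 2))
xy = np.vstack([cluster, field])
r_all = np.hypot(xy[:, 0], xy[:, 1])
is_member = np.r_[np.ones(150, bool), np.zeros(1000, bool)]

# radial density profile in 0.5-arcmin annuli: counts divided by annulus area
edges = np.arange(0.0, 8.5, 0.5)
area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
n_all, _ = np.histogram(r_all, bins=edges)
n_mem, _ = np.histogram(r_all[is_member], bins=edges)
density = n_all / area
member_fraction = n_mem / np.maximum(n_all, 1)

# adopt as the cluster radius the outermost annulus where members dominate
radius = edges[1:][member_fraction > 0.5].max()
```

Dividing by the annulus area matters because the outer annuli are much larger: a flat field population then shows up as a constant density floor against which the cluster profile falls off.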
This result, within the error range, is in excellent agreement with that of
Hayes et al. (2015). While Piatti et al. (2009) used $r\sim 1.33\arcmin$ for
the radius of Czernik 30 to obtain a clean sample of cluster stars, Hasegawa
et al. (2008) did not mention the radius used in their study.
Figure 4: Radial density profile of Czernik 30. While the blue line includes
only the member stars, the black line includes members and field stars. The
red vertical line is the adopted radius of Czernik 30. The error bars indicate
the Poisson errors.
### 3.4 Reddening and Distance
We plot the $V$ vs. $B-V$ and $V$ vs. $V-I$ CMDs in Fig. 5, which show a
distinct main sequence (MS) and some red clump (RC) stars. The MS turnoff
(MSTO) is located at $V\sim 18.05\pm 0.05$ mag. We consider the three stars
near $V\sim 15.51$ mag, $B-V\sim 1.16$ mag, and $V-I\sim 1.30$ mag to be the
RC stars. In a previous study, Piatti et al. (2009) inferred that the RC was
located at $T_{1}\sim 14.5-15.0$, $C-T_{1}\sim 2.4-2.6$ in the Washington
photometric system. The location of the RC from Piatti et al. (2009) can be
transformed into $V\sim 15.14-15.69$ and $V-I\sim 1.27-1.36$ using the
transformation equations of Bessell (2001), and these ranges include the
location of the RC from our study.
RC stars are low-mass stars in the core-helium-burning stage, and they appear
as a distinct grouping in the CMD (Cannon, 1970; Girardi, 2016). Since the
magnitude and color of RC stars are known to be nearly constant, they have
been widely used to obtain distances and reddenings of old OCs (Janes &
Phelps, 1994; Girardi, 2016). The strength of using the 2MASS (Two Micron All
Sky Survey; Skrutskie et al., 2006) $K_{s}$ band for RC stars comes from its
smaller dependence on age, metallicity, and extinction compared with optical
bands. The absolute magnitude and intrinsic color of RC stars have been
studied by many researchers (Alves, 2000; Grocholski & Sarajedini, 2002; van
Helshoecht & Groenewegen, 2007; Groenewegen, 2008; Laney et al., 2012; Francis
& Anderson, 2014; Girardi, 2016; Chen et al., 2017; Hawkins et al., 2017;
Ruiz-Dern et al., 2018; Chan & Bovy, 2020). We use the absolute magnitude
$M_{K_{s}}=-1.628\pm 0.133$ and the intrinsic color $(J-K_{s})_{RC,0}=0.656\pm
0.040$ for RC stars from the recent study of Wang & Chen (2021). They used the
$Gaia$ EDR3 data, the Apache Point Observatory Galactic Evolution Experiment
(APOGEE) and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope
(LAMOST) data, and 156,000 RC samples to calculate the absolute magnitude and
intrinsic color of RC stars. We matched our RC stars with the 2MASS $JHK_{s}$
band catalog data. The list of the three RC stars of Czernik 30 is given in
Table 4. The mean magnitudes and colors of the RC stars of Czernik 30 are
$V=15.54\pm 0.10$, $B-V=1.15\pm 0.07$, $J=13.19\pm 0.01$, $H=12.60\pm 0.02$,
$K_{s}=12.46\pm 0.01$, and $J-K_{s}=0.73\pm 0.01$.
Using the intrinsic $(J-K_{s})$ color of the RC stars derived by Wang & Chen
(2021), we obtain the reddening value of
$E(J-K_{s})=(J-K_{s})_{RC}-(J-K_{s})_{RC,0}=0.07\pm 0.04$ and $E(B-V)=0.15\pm
0.08$ using the relation $E(J-K_{s})=0.488\times E(B-V)$ (Kim, 2006). We also
use the $\delta V$ index to obtain the reddening value, which is defined as
the difference between the magnitudes of RC and MSTO (Phelps et al., 1994;
Janes & Phelps, 1994; Kim & Sung, 2003). When $\delta V>1.0$, the RC of an OC
has an absolute magnitude of $M_{V,RC}=0.90\pm 0.40$ and an intrinsic color of
$(B-V)_{0}=0.95\pm 0.10$ (Janes & Phelps, 1994). Since $\delta V$ is 2.54 mag
for Czernik 30, the reddening value is derived to be
$E(B-V)=(B-V)-(B-V)_{0}=0.20\pm 0.12$ which agrees with the reddening value
from the RC method within the error range.
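As a check, the reddening arithmetic above can be reproduced in a few lines (a sketch using the values quoted in the text; the variable names are ours):

```python
# Reddening of Czernik 30 from the RC method (values from the text above).
JK_rc, JK_rc0 = 0.73, 0.656   # observed and intrinsic (J-Ks) of the RC stars
E_JK = JK_rc - JK_rc0         # E(J-Ks)
E_BV = E_JK / 0.488           # E(J-Ks) = 0.488 x E(B-V) (Kim 2006)

# Independent check via the delta-V method (Janes & Phelps 1994).
V_MSTO, V_RC, BV_RC = 18.05, 15.51, 1.15
delta_V = V_MSTO - V_RC       # difference between MSTO and RC magnitudes
E_BV_dV = BV_RC - 0.95        # (B-V)_0 = 0.95 for the RC when delta_V > 1.0
print(round(E_JK, 2), round(E_BV, 2), round(delta_V, 2), round(E_BV_dV, 2))
# -> 0.07 0.15 2.54 0.2
```

Both routes give $E(B-V)$ values that agree within the quoted errors, as stated above.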
Using the mean $K_{s}$ magnitude of $12.46\pm 0.01$ for the RC stars of
Czernik 30, the distance modulus is derived to be
$(m-M)_{0}=K_{s}-M_{K_{s}}-A_{K_{s}}=14.05\pm 0.13$ mag ($d=6.46\pm 0.39$
kpc), where $A_{K_{s}}=0.528\times E(J-{K_{s}})$ (Nishiyama et al., 2009).
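The distance modulus step can be sketched the same way (values from the text; the kpc conversion inverts $(m-M)_{0}=5\log_{10}d_{\rm pc}-5$):

```python
# Distance to Czernik 30 from the mean RC Ks magnitude (values from the text).
Ks, M_Ks = 12.46, -1.628           # mean RC Ks and M_Ks (Wang & Chen 2021)
E_JK = 0.07                        # E(J-Ks) from the RC method
A_Ks = 0.528 * E_JK                # Ks-band extinction (Nishiyama et al. 2009)
mu0 = Ks - M_Ks - A_Ks             # true distance modulus (m-M)_0
d_kpc = 10 ** (mu0 / 5 + 1) / 1e3  # invert mu0 = 5*log10(d_pc) - 5
print(round(mu0, 2), round(d_kpc, 2))  # -> 14.05 6.46
```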
### 3.5 Age and [Fe/H]
To derive the physical parameters of age and metallicity for Czernik 30, we
have performed PARSEC isochrone fittings (Bressan et al., 2012) with the
distance and reddening values fixed, which were obtained in Sec. 3.4. From the
best fitted PARSEC isochrone shown in Fig. 5 (a), we obtained age and
metallicity and their uncertainties from the possible isochrone variations
within a tolerable limit: $\log t=9.45\pm 0.05$ ($t=2.82\pm 0.32$ Gyr) and
[Fe/H]$=-0.22\pm 0.15$ dex. We derived the same age, $\log t=9.45\pm 0.05$
($t=2.82\pm 0.32$ Gyr), and $E(V-I)=0.27\pm 0.20$ from the best-fit PARSEC
isochrone in the $V$ vs. $(V-I)$ CMD (Fig. 5 (b)).
### 3.6 Comparison with previous studies
There are four previous studies about the physical parameters of Czernik 30.
The physical parameters from the previous studies and our study are shown in
Table 1.
Hasegawa et al. (2008) used the Padova isochrones and estimated age $t=2.5$
Gyr ($\log t=9.40$), metallicity $Z=0.008$ ([Fe/H]$=-0.41$), color excess
$E(V-I)=0.34$, and distance modulus $(m-M)_{0}=14.27$. Piatti et al. (2009)
used three radii to determine the physical parameters:
$r_{\tiny\textrm{FWHM}}$, $r_{\textrm{clean}}$, and $r_{\textrm{cls}}$ (see
details in Sec. 3 of Piatti et al. 2009), and obtained age
$t=2.5^{+0.30}_{-0.25}$ Gyr ($\log t=9.40$), metallicity [Fe/H]$=-0.4\pm 0.2$,
color excess $E(B-V)=0.26\pm 0.02$, and distance $d=6.2\pm 0.8$ kpc using the
Padova isochrones. Perren et al. (2015) developed a code that automatically
estimates the physical parameters of OCs after finding the center, and they
obtained the physical parameters of 20 OCs including Czernik 30 using their
code. Perren et al. (2015) presented two types of radii: one determined
manually and the other assigned automatically by the code. They suggested two
physical parameter sets based on the two radii; these two sets are overall not
in good agreement, with only the distance values being quite similar. Adopting
their values obtained with the automatically found radius, their $E(B-V)$ is
0.35 mag larger and their age 2.02 Gyr younger than those in our study, and
their distance is $\sim 1.4$ kpc larger than ours. Hayes et al.
(2015) analyzed the photometric and spectroscopic data of Czernik 30 and
determined age $t=2.8\pm 0.3$ Gyr $(\log t=9.45)$, metallicity [Fe/H]$=-0.2\pm
0.15$, distance modulus $(m-M)_{V}=14.8\pm 0.1$ ($d\sim 6.5$ kpc), and color
excess $E(B-V)=0.24\pm 0.06$ and $E(V-I)=0.36\pm 0.04$.
In this study, we have used both the RC properties and the isochrone fitting.
While our study obtained somewhat smaller reddening values than the previous
studies, the age, metallicity, and distance are in very good agreement with
the values in the literature.
Table 4: The photometry results of the red clump stars for the four old OCs
ID | R.A. | Dec. | $B$ | $B$ error | $V$ | $V$ error | $I$ | $I$ error | $J$ | $J$ error | $H$ | $H$ error | $K_{s}$ | $K_{s}$ error
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
| hh:mm:ss | dd:mm:ss | mag | mag | mag | mag | mag | mag | mag | mag | mag | mag | mag | mag
(a) Czernik 30
3779 | 07:31:07.28 | $-09:55:03.2$ | 16.629 | 0.002 | 15.480 | 0.001 | 14.183 | 0.003 | 13.198 | 0.027 | 12.610 | 0.022 | 12.457 | 0.026
3800 | 07:31:07.47 | $-09:55:51.0$ | 16.637 | 0.002 | 15.481 | 0.002 | 14.191 | 0.005 | 13.180 | 0.070 | 12.579 | 0.090 | 12.456 | 0.069
4128 | 07:31:10.98 | $-09:56:48.3$ | 16.727 | 0.002 | 15.560 | 0.001 | 14.260 | 0.005 | 13.191 | 0.027 | 12.615 | 0.022 | 12.468 | 0.026
(b) Berkeley 34
3657 | 07:00:16.37 | $-00:12:29.4$ | 18.309 | 0.005 | 16.743 | 0.002 | 15.032 | 0.002 | 13.620 | 0.028 | 12.875 | 0.031 | 12.692 | 0.024
3870 | 07:00:19.12 | $-00:12:39.5$ | 18.338 | 0.006 | 16.760 | 0.002 | 15.055 | 0.003 | 13.636 | 0.035 | 12.982 | 0.039 | 12.726 | 0.029
4137 | 07:00:21.97 | $-00:14:38.9$ | 18.306 | 0.004 | 16.726 | 0.002 | 14.971 | 0.003 | 13.599 | 0.032 | 12.880 | 0.034 | 12.673 | 0.026
4273 | 07:00:23.22 | $-00:15:40.5$ | 18.267 | 0.004 | 16.660 | 0.002 | 14.896 | 0.002 | 13.461 | 0.033 | 12.717 | 0.032 | 12.502 | 0.028
(c) Berkeley 75
2525 | 06:49:00.11 | $-23:59:39.7$ | 16.582 | 0.002 | 15.547 | 0.002 | 14.433 | 0.002 | 13.577 | 0.035 | 13.039 | 0.044 | 12.893 | 0.041
2710 | 06:49:02.99 | $-23:59:29.6$ | 16.300 | 0.002 | 15.328 | 0.002 | 14.237 | 0.002 | 13.473 | 0.037 | 12.917 | 0.041 | 12.772 | 0.043
(d) Berkeley 76
3582 | 07:06:36.28 | $-11:44:23.6$ | 17.723 | 0.003 | 16.320 | 0.002 | 14.717 | 0.002 | 13.410 | 0.029 | 12.710 | 0.023 | 12.527 | 0.026
4109 | 07:06:42.60 | $-11:43:32.5$ | 17.624 | 0.003 | 16.255 | 0.002 | 14.710 | 0.002 | 13.354 | 0.024 | 12.744 | 0.027 | 12.527 | 0.029
4149 | 07:06:43.02 | $-11:43:05.8$ | 17.523 | 0.002 | 16.155 | 0.002 | 14.573 | 0.002 | 13.319 | 0.030 | 12.674 | 0.031 | 12.471 | 0.030
4331 | 07:06:45.14 | $-11:43:55.6$ | 17.206 | 0.002 | 15.878 | 0.004 | 14.291 | 0.002 | 13.001 | 0.028 | 12.341 | 0.029 | 12.086 | 0.021
4352 | 07:06:45.42 | $-11:46:49.3$ | 17.658 | 0.003 | 16.275 | 0.002 | 14.660 | 0.002 | 13.338 | 0.029 | 12.691 | 0.031 | 12.499 | 0.027
Note. — $J,H,K_{s}$ magnitudes and magnitude errors are from the 2MASS catalog
(Skrutskie et al., 2006).
Figure 5: Best fit PARSEC isochrone line on (a) $V$ vs. $(B-V)$ CMD and (b)
$V$ vs. $(V-I)$ CMD of Czernik 30. The black open circle symbols are the
member stars of Czernik 30, the blue open circles indicate the RC stars of
Czernik 30, the gray dots are non-member stars but located inside the radius
of Czernik 30 and the red lines are the best fit PARSEC isochrone model.
## 4 Berkeley 34
In the same way as in Czernik 30, we determined the center of Berkeley 34:
$\alpha_{J2000}=07^{h}00^{m}23.2^{s}$ and $\delta_{J2000}=-00\arcdeg 13\arcmin
54\arcsec$ (red cross symbol in Fig. 1 (b)) using the gaussian_kde package and the
distribution function shown in Fig. 6. The center of Berkeley 34 from the
previous study is shown in Fig. 1 (b). The green x symbol is
$\alpha_{J2000}=07^{h}00^{m}23^{s}$, $\delta_{J2000}=-00\arcdeg 14\arcmin
15\arcsec$ (Ortolani et al., 2005) and the yellow x symbol is
$\alpha_{J2000}=07^{h}00^{m}24^{s}$, $\delta_{J2000}=-00\arcdeg 15\arcmin
00\arcsec$ (Hasegawa et al., 2004). Donati et al. (2012) present
$\alpha_{J2000}=07^{h}00^{m}23^{s}$, $\delta_{J2000}=-00\arcdeg 13\arcmin
56\arcsec$ as the center of Berkeley 34, and the magenta x symbol indicates
this location.
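The centre determination used throughout (a 2-D Gaussian kernel density estimate of the stellar positions, with the density peak taken as the centre) can be sketched as follows; the star positions are synthetic, drawn around the quoted Berkeley 34 centre purely for illustration:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic field: a clustered population plus a uniform background (deg).
rng = np.random.default_rng(0)
ra = np.concatenate([rng.normal(105.097, 0.02, 300),     # cluster stars
                     rng.uniform(105.00, 105.20, 300)])  # field stars
dec = np.concatenate([rng.normal(-0.232, 0.02, 300),
                      rng.uniform(-0.33, -0.13, 300)])

# Evaluate the KDE on a grid and take the peak as the cluster centre.
kde = gaussian_kde(np.vstack([ra, dec]))
gx, gy = np.meshgrid(np.linspace(105.00, 105.20, 101),
                     np.linspace(-0.33, -0.13, 101))
dens = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
iy, ix = np.unravel_index(np.argmax(dens), dens.shape)
center_ra, center_dec = gx[iy, ix], gy[iy, ix]
```

With real photometry, the measured star positions go in and the density peak recovers the centre to within the grid resolution.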
Figure 6: Distribution function of the stellar photometry data from the
$B$-band image including Berkeley 34. Black dots are the locations of stars
without consideration of their brightnesses, and the red dot indicates the
center of Berkeley 34 obtained from Gaussian fitting. The red circle indicates
the radius $2.5^{\prime}$ of Berkeley 34. The white-magenta dot symbols are
the members of Berkeley 34. The left color bar shows values of the normalized
number density function and the right color bar the membership probability for
each star.
To select the member stars of Berkeley 34, we adopted the pyUPMASK package
(see Section 3.2) using the $Gaia$ proper motion and parallax data. Finally,
147 stars were selected as the members of Berkeley 34 and are shown in Fig. 6
as white-magenta dot symbols.
Fig. 7 shows the radial density profile of Berkeley 34. We determine the
radius of Berkeley 34 to be about $2.5\arcmin\pm 0.3\arcmin$, within which the
member fraction is greater than 0.5, despite the existence of members from
$2.5\arcmin$ to $4\arcmin$. While Hasegawa et al. (2004) did not specify the
radius value adopted in their study, Ortolani et al. (2005) used $r\sim
58\arcsec$ in fitting the isochrones, and Donati et al. (2012) used the stars
inside the $r\sim 2.5\arcmin$ region.
Figure 7: Radial density profile of Berkeley 34. The black line indicates the
radial density profile of member stars and field stars. The blue line
indicates the radial density profile of member stars. The red vertical line is the
radius of Berkeley 34. The error bars indicate the Poisson errors.
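The radial density profiles shown in Figs. 7, 10, and 13 follow from star counts in concentric annuli; a minimal sketch with toy radii, where the Poisson error bars are $\sqrt{N}$ divided by the annulus area:

```python
import numpy as np

def radial_profile(r_arcmin, edges):
    """Surface density and Poisson errors in concentric annuli."""
    counts, _ = np.histogram(r_arcmin, bins=edges)
    area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)  # arcmin^2
    return counts / area, np.sqrt(counts) / area       # density, Poisson error

# Toy projected radii (arcmin) of stars from the cluster centre.
r = np.array([0.2, 0.5, 0.8, 1.1, 1.5, 2.0, 2.4, 3.0, 3.5, 3.9])
dens, err = radial_profile(r, np.arange(0.0, 4.5, 1.0))
```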
Fig. 8 shows the $V$ vs. $B-V$ and $V$ vs. $V-I$ CMDs for the stars within
$r\sim 2.5\arcmin$. The MSTO is located at $V\sim 19.00$ mag, $B-V\sim 0.98$, and
$V-I\sim 1.25$. Hasegawa et al. (2004) estimated the MSTO location to be
($V,V-I$) = (18.5, 1.2). Donati et al. (2012) claimed two points for the MS of
Berkeley 34: MS red hook (the reddest part of MS) at $V\sim 18.5$ mag and the
MS termination point at $V\sim 18.0$ mag.
Figure 8: Best fit PARSEC isochrone on $V$ vs. $(B-V)$ CMD (panel (a)) and $V$
vs. $(V-I)$ CMD (panel (b)) of Berkeley 34. The black open circles are the
member stars of Berkeley 34, the blue open circles are the RC of Berkeley 34,
the gray dot symbols are non-member stars but located inside the radius of
Berkeley 34 and the red line is the best fitted PARSEC isochrone model.
We took the four stars near $V\sim 16.72$, $B-V\sim 1.58$, and $V-I\sim 1.73$
as the RC stars of Berkeley 34, and their photometry data are shown in Table
4. From the RC method, we calculate the reddening values $E(J-K_{s})=0.27\pm
0.12$, $E(B-V)=0.56\pm 0.24$, and the distance modulus $(m-M)_{0}=14.13\pm
0.19$ ($d=6.70\pm 0.59$ kpc). Since the $\delta V$ index is 2.28, $E(B-V)$ was
estimated to be $0.63\pm 0.10$, which is consistent with the value from the RC
method. Donati et al. (2012) suggested two groups of RC, brighter and fainter:
the position of the brighter RC group was at $V\sim 15.7$ and $B-V\sim 1.7$
and the fainter RC group was located at $V\sim 16.7$ and $B-V\sim 1.55$. We
consider that only one RC group exists for Berkeley 34, which corresponds to the
fainter group in Donati et al. (2012).
We fitted the PARSEC isochrones to the CMDs of Berkeley 34 using the
reddening and distance modulus derived from the RC method. Fig. 8 shows the
best fit PARSEC isochrones with CMDs. Finally, we determined the fundamental
physical parameters for Berkeley 34, which include age, metallicity, distance
modulus, and color excess: age $\log t=9.40\pm 0.05$ ($t=2.51\pm 0.30$ Gyr),
metallicity [Fe/H] $=-0.30\pm 0.15$ dex, distance modulus $(m-M)_{0}=14.13\pm
0.19$ ($d=6.70\pm 0.59$ kpc), and color excesses $E(B-V)=0.56\pm 0.24$ and
$E(V-I)=0.73\pm 0.31$.
As shown in Table 1, Hasegawa et al. (2004) obtained age $t=2.8$ Gyr,
metallicity $Z=0.019$ ([Fe/H]$=-0.02$), distance $(m-M)_{0}=14.31$, and color
excesses $E(B-V)=0.45$ and $E(V-I)=0.60$. Ortolani et al. (2005) obtained the
distance to Berkeley 34 of $d=7.8\pm 0.8$ kpc. Donati et al. (2012) measured
the physical parameters using the Full Spectrum of Turbulence (FST), Padova,
and Frascati Raphson Newton Evolutionary Code (FRANEC) isochrones. They gave
physical parameter ranges from the FST isochrones: age from 2.1 to 2.5 Gyr,
metallicity $Z=0.01$ ([Fe/H]$=-0.31$ dex), distance from 6 to 7 kpc
($(m-M)_{0}\sim 14.1-14.3$), and color excess $E(B-V)\sim 0.57-0.64$. Overall, our
results show good agreement with the values in the three studies listed above.
## 5 Berkeley 75
In the same way as in Czernik 30, we determined the center of Berkeley 75
using the kernel density estimation method (Fig. 9). Berkeley 75 is located at
$\alpha_{J2000}=06^{h}48^{m}59.1^{s}$, $\delta_{J2000}=-23\arcdeg 59\arcmin
36\arcsec$. The green x symbol in Fig. 1 (c) for
$\alpha_{J2000}=06^{h}48^{m}59^{s}$ and $\delta_{J2000}=-23\arcdeg 59\arcmin
30\arcsec$ indicates the center position of Berkeley 75 used by Carraro et al.
(2005).
Figure 9: Distribution function of the stellar photometry data from the
$B$-band image including Berkeley 75. The black dots are the locations of
stars without considering their brightnesses, the red dot is the center of
Berkeley 75 and the red circle indicates the radius $1.9^{\prime}$ of Berkeley
75. The white-magenta dot symbols are the members of Berkeley 75. The left
color bar shows the values of the normalized number density function and the
right color bar the membership probability for each star.
By adopting the pyUPMASK package (see Section 3.2), 77 stars were determined
to be the members of Berkeley 75. The radial density profile of Berkeley 75 is
shown in Fig. 10. The region from $1.9\arcmin$ to $4^{\prime}$ has member
stars of Berkeley 75 but field stars represent the majority in this region.
Thus, we determined the radius of Berkeley 75 to be $1.9\arcmin$. Carraro et
al. (2005) determined the radius of Berkeley 75 to be $1\arcmin$ from its
radial density profile.
Figure 10: Radial density profile of Berkeley 75. The black line includes the
member stars and the field stars, and the blue line is the radial density
profile of members. The red line indicates the radius of Berkeley 75. The
error bars indicate the Poisson errors.
The CMDs of Berkeley 75 are shown in Fig. 11, where MSTO is located at $V\sim
18.1$ mag, $B-V\sim 0.46$, and $V-I\sim 0.63$. Carraro et al. (2005) also
presented almost the same value ($V\approx 18$ mag) for the MSTO.
We selected two RC stars of Berkeley 75, which are listed in Table 4 (c). The
mean magnitude and color for the RC stars in Berkeley 75 are $V=15.44\pm
0.11$, $B-V=1.00\pm 0.18$, and $V-I=1.10\pm 0.15$ while Carraro et al. (2005)
measured the location of the RCs to be $V\sim 16.0$ mag which is quite
different from ours. Using the RC method, we calculated the distance
$(m-M)_{0}=14.44\pm 0.17$ ($d=7.73\pm 0.61$ kpc) and reddening $E(B-V)=0.07\pm
0.18$. Using the $\delta V$ index of 2.66 and the method of Janes & Phelps (1994),
we obtained $E(B-V)=0.05\pm 0.20$, which is consistent with the value from the
RC method within error range.
Figure 11: Best fit PARSEC isochrone on $V$ vs. $(B-V)$ CMD (panel (a)) and
$V$ vs. $(V-I)$ CMD (panel (b)) of Berkeley 75. The black open symbols are the
member stars of Berkeley 75, the blue open symbols are the RC stars of
Berkeley 75, the gray dots are non-member stars but located inside the radius
of Berkeley 75 and the red lines are the best fit PARSEC isochrone model for
each CMD.
We tried to determine the age and metallicity of Berkeley 75 using the PARSEC
isochrones and the reddening and distance values obtained from the RC method,
as shown in Fig. 11. We measured age $\log t=9.50\pm 0.10$ ($t=3.16\pm 0.73$
Gyr), metallicity [Fe/H] $=-0.57\pm 0.20$ dex, distance modulus
$(m-M)_{0}=14.44\pm 0.17$, and color excesses $E(B-V)=0.07\pm 0.18$ and
$E(V-I)=0.13\pm 0.32$. Although the reddening value from the $\delta V$ method
of Janes & Phelps (1994) was not exactly equal to that from the RC method, the
two values agree within the error range. Carraro et al. (2005) obtained
distance modulus $(m-M)=15.2$,
color excesses $E(B-V)=0.08$ and $E(V-I)=0.13$ using the Padova isochrones of
age 3 Gyr and metallicity $Z=0.004$ ([Fe/H] $=-0.72$ dex). Carraro et al.
(2007) revised the estimates to be: age $4.0\pm 0.4$ Gyr, metallicity [Fe/H]
$=-0.22\pm 0.20$ dex, distance modulus $(m-M)=14.90\pm 0.20$, and color excess
$E(B-V)=0.04\pm 0.03$. The revised parameters of Carraro et al. (2007) show
good agreement with our parameters.
## 6 Berkeley 76
Using the kernel density estimation method as in the previous sections, we
determined the center of Berkeley 76 as shown in Fig. 12. Unlike the three OCs
in the previous sections, Berkeley 76 has many more stars spread across the
field. We determined the center of Berkeley 76 to be at
$\alpha_{J2000}=07^{h}06^{m}42.4^{s}$ and $\delta_{J2000}=-11\arcdeg 43\arcmin
33\arcsec$. Carraro et al. (2013) suggested the center of Berkeley 76 to be
$\alpha_{J2000}=07^{h}06^{m}24^{s}$ and $\delta_{J2000}=-11\arcdeg 37\arcmin
00\arcsec$. However, since their Fig. 1 and our Fig. 1 (d) show the same
region, their center coordinates in their Table 1 might not be correct. The
yellow x symbol in our Fig. 1 (d) indicates
$\alpha_{J2000}=07^{h}06^{m}44^{s}$ and $\delta_{J2000}=-11\arcdeg 44\arcmin$
from Hasegawa et al. (2008) and the magenta x symbol is
$\alpha_{J2000}=07^{h}06^{m}24^{s}$ and $\delta_{J2000}=-11\arcdeg 37\arcmin
38\arcsec$ from Tadross (2008). The center location from Tadross (2008) is
quite far away ($7.51\arcmin$) from the center in our study.
Figure 12: Distribution function of the stellar photometry data from the
$B$-band image including Berkeley 76. The black dots are the locations of
stars without considering their brightnesses, the red dot is the center of
Berkeley 76 and the red circle is the radius $4.0^{\prime}$ of Berkeley 76.
The white-magenta dot symbols are the members of Berkeley 76. The left color
bar shows values of the normalized number density function and the right color
bar the membership probability for each star.
A total of 288 stars were selected as members of Berkeley 76 using the
pyUPMASK algorithm (Pera et al., 2021) with $Gaia$ EDR3 proper motion and
parallax data. In Fig. 13, the trend in the radial density profile of Berkeley
76 is different from those of the three OCs in the previous sections. The
radius of Berkeley 76 is determined to be $4.0\arcmin\pm 0.3\arcmin$, where
the member fraction is 0.5.
Carraro et al. (2013) used $2\arcmin$ as the radius of Berkeley 76 and Tadross
(2008) obtained $4.5\arcmin$ for the radius of Berkeley 76.
Figure 13: Radial density profile of Berkeley 76. The black line includes the
member stars and the field stars, and the blue line is the radial density
profile of members. The red line indicates the radius of Berkeley 76. The
error bars indicate the Poisson errors.
Fig. 14 shows the $V$ vs. $B-V$ and $V$ vs. $V-I$ CMDs for Berkeley 76, where
the five RC stars can be seen at $V\sim 16.22\pm 0.08$, $B-V\sim 1.35\pm
0.06$, and $V-I\sim 1.59\pm 0.03$. The photometry results for these five stars
are shown in Tab. 4 (d). We determined the distance and reddening of Berkeley
76 using the RC method: distance modulus $(m-M)_{0}=13.97\pm 0.23$ and
reddening $E(B-V)=0.41\pm 0.33$. The $\delta V$ index of Berkeley 76 is 2.04,
which gives $E(B-V)=0.39\pm 0.12$, consistent with that from the RC
method.
Carraro et al. (2013) suggested the mean magnitude and color for the four RC
stars in Berkeley 76 to be $V\sim 17.9$ and $B-V\sim 1.4$, respectively. While
their $B-V$ color is in very good agreement with ours, their $V$ magnitude is
$\sim 1.7$ mag fainter than ours. Considering two things,
that (1) the two CMDs in our study (Fig. 14) and Carraro et al. (2013) (their
Fig. 7) are very similar, and (2) the distance modulus estimated by Carraro et
al. (2013) ($(m-M)_{0}=17.20\pm 0.15$) is much larger than those of Hasegawa
et al. (2008) ($(m-M)_{0}=14.39$) and our study ($(m-M)_{0}=13.97\pm 0.23$),
we suspect the $V$ magnitudes in Carraro et al. (2013) were somehow shifted by
$\sim+1.7$ mag.
We found the best-fit PARSEC isochrones using the distance and the
reddening values from the RC method as shown in Fig. 14. We determined the
physical parameters: age $\log t=9.10\pm 0.05$ ($t=1.26\pm 0.14$ Gyr),
metallicity [Fe/H] $=0.00\pm 0.20$ dex, distance modulus $(m-M)_{0}=13.97\pm
0.23$ ($d=6.22\pm 0.66$ kpc), and color excesses $E(B-V)=0.41\pm 0.33$ and
$E(V-I)=0.57\pm 0.46$.
Figure 14: (a) $V$ vs. $(B-V)$ and (b) $V$ vs. $(V-I)$ CMDs of Berkeley 76
with the best fit PARSEC isochrones. The black symbols are the member stars of
Berkeley 76, the blue open circles are the RC stars of Berkeley 76, the gray
dots are non-member stars but located inside the radius of Berkeley 76 and the
red solid lines are the best fitted PARSEC isochrone model.
## 7 Radial metallicity distribution
OCs can help reveal the chemical evolution of the Galactic disk (Netopil et
al., 2016; Kim et al., 2017; Chen & Zhao, 2020; Donor et al., 2020; Spina et
al., 2021; Zhang et al., 2021; Netopil et al., 2022). Netopil et al. (2016)
mentioned the importance of a homogeneous data set and obtained the Galactic
metallicity distribution from a homogeneous data set of 172 OCs for three
ranges, divided at $R_{GC}\sim 9$ and $12$ kpc. Donor et al. (2020) studied
the chemical abundance distribution of the Galactic disk using OC data from
the Sloan Digital Sky Survey/APOGEE DR 16, and determined that the [Fe/H] vs.
$R_{GC}$ relation has a slope of $-0.068\pm 0.001$ dex kpc$^{-1}$ in the
region $6<R_{GC}<13.9$ kpc using the Markov Chain Monte Carlo method. Spina et
al. (2021) found the slope of [Fe/H] over $R_{GC}$ to be $-0.076\pm 0.009$ dex
kpc$^{-1}$ using a Bayesian regression with the spectroscopic data of 134 OCs
from the GALactic Archaeology with HERMES (GALAH) or APOGEE surveys. Spina et
al. (2022) gathered high-resolution spectroscopic survey data and measured a
metallicity gradient of $-0.064\pm 0.007$ dex kpc$^{-1}$. They also suggested
a flat metallicity distribution outside $R_{GC}=12.1\pm 1.1$ kpc.
We combined the distances and the [Fe/H] values from the following five
catalogs, together with the data for the four OCs obtained in this study:
Dias et al. (2002), Netopil et al. (2016), Donor et al. (2020), Spina et al.
(2021), and Dias et al. (2021). Dias et al. (2002, 2021) are the OC catalogs
including the physical parameters such as age, distance and metallicity.
Netopil et al. (2016), Donor et al. (2020), and Spina et al. (2021) focused on
the chemical evolution in the Galactic disk. When two or more [Fe/H] values
existed for a cluster, we preferentially used values from spectroscopic data,
when available, expecting them to have higher accuracy. We used 8 kpc as the solar
distance from the Galactic center, $R_{GC,\odot}$. The number of old OCs in
the final catalog is 236.
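Converting each cluster's heliocentric distance to $R_{GC}$ (with $R_{GC,\odot}=8$ kpc) uses the standard plane-projected law of cosines; a sketch with hypothetical Galactic coordinates $(l,b)$, which are not taken from the paper:

```python
import numpy as np

R0 = 8.0  # kpc, adopted solar Galactocentric distance
# Hypothetical Galactic longitude/latitude (deg) and distance (kpc).
l, b, d = np.radians(223.0), np.radians(1.0), 6.46
d_proj = d * np.cos(b)  # distance projected onto the Galactic plane
R_gc = np.sqrt(R0**2 + d_proj**2 - 2.0 * R0 * d_proj * np.cos(l))
```

Clusters toward the Galactic anticentre ($\cos l<0$) thus land at $R_{GC}$ well beyond $R_{0}$, in the outer disk.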
Fig. 15 shows the Galactic radial metallicity distribution of the OCs with
ages older or younger than 1 Gyr. We tried applying a single linear fit (panel
(a)) and a broken linear fit (panel (b)) to the combined data for OCs with
$t\geq 1$ Gyr. The broken linear fit assumes the existence of a discontinuity
and uses two linear functions for the fit, with the final result listed in
Table 5. While the existence of the discontinuity is a controversial issue,
several possibilities are suggested as causes of the metallicity distribution
in the Galactic disk: for example, radial migration (Minchev et al., 2013,
2018; Zhang et al., 2021; Netopil et al., 2022), metal enrichment (Monteiro et
al., 2021), etc.
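The two fits can be sketched as follows, on synthetic [Fe/H] vs. $R_{GC}$ data generated from the gradients quoted in Table 5 (not the real OC sample):

```python
import numpy as np

# Synthetic sample mimicking the fitted relations of Table 5.
rng = np.random.default_rng(1)
r = rng.uniform(6.0, 18.0, 236)
feh = np.where(r < 12, -0.070 * r + 0.556, -0.016 * r - 0.101)
feh += rng.normal(0.0, 0.05, r.size)

# Single linear fit over the full range.
m1, b1 = np.polyfit(r, feh, 1)

# Broken linear fit: independent lines inside and outside the 12 kpc break.
inner, outer = r < 12, r >= 12
m_in, b_in = np.polyfit(r[inner], feh[inner], 1)
m_out, b_out = np.polyfit(r[outer], feh[outer], 1)
```

The single fit returns a compromise slope between the steep inner and flat outer gradients, which is the behaviour visible in Fig. 15 (a) vs. (b).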
The number of OCs younger than 1 Gyr shown in Fig. 15 (c) is negligible in the
outer part of the Galactic disk, especially outside 14.5 kpc. The small number
of samples in the outer part of Fig. 15 (b) makes the broken linear fit look
more suitable than the single linear fit. For the broken linear fit in
Fig. 15 (b), we tried to find the appropriate location of the discontinuity
from 12 kpc to 14 kpc using a step size of 0.5 kpc. The discontinuity at 12
kpc naturally has the largest number of old OCs in the outer region; hence,
the Bayesian information criterion (BIC; a statistic for scoring and selecting
models, where the model with the lowest BIC is preferred) value at 12 kpc was
the smallest among those from 12 kpc to 14 kpc. When using
the old OCs as elements to investigate the metallicity distribution in the
Galactic disk, it is important to increase the number of samples at the outer
region, especially outside of 14 kpc, for better analysis. Although the
addition of the four old OCs from our study to Fig. 15 is not a significant
increase in the sample size, our data are relatively important in that all
four clusters are located in the outer region, at $r\sim 14$ kpc.
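The BIC comparison used to place the break can be sketched with the least-squares form $\mathrm{BIC}=n\ln(\mathrm{RSS}/n)+k\ln n$, where the model with the lowest BIC is preferred; the data here are synthetic:

```python
import numpy as np

def bic(y, y_model, k):
    """BIC for a least-squares fit with k free parameters."""
    n = y.size
    rss = np.sum((y - y_model) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

# Synthetic metallicity gradient: a sloped model should beat a flat one.
rng = np.random.default_rng(2)
x = np.linspace(6.0, 18.0, 60)
y = -0.05 * x + 0.4 + rng.normal(0.0, 0.05, x.size)

line = np.polyval(np.polyfit(x, y, 1), x)  # sloped model, k = 2
const = np.full_like(y, y.mean())          # flat model, k = 1
```

In the paper's setting the candidate models differ only in the break location, so the same lowest-BIC comparison applies.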
Table 5: The metallicity gradients from least-squares fits using the 236 old OCs in Fig. 15
Function | Range | N | Gradient | Intercept
---|---|---|---|---
| kpc | | dex kpc$^{-1}$ | dex
single linear fit | | 236 | $-0.052\pm 0.004$ | $+0.391\pm 0.040$
broken linear fit | $<12$ | 196 | $-0.070\pm 0.006$ | $+0.556\pm 0.056$
broken linear fit | $\geq 12$ | 40 | $-0.016\pm 0.010$ | $-0.101\pm 0.145$
Figure 15: Radial metallicity distribution in the Galactic disk. (a) is the
result of single linear fitting to old OCs. (b) shows the broken linear fit
result with the discontinuity at 12 kpc. (c) is the distribution of OCs
younger than 1 Gyr. (d) is the radial metallicity distribution for the OCs in
all age ranges.
## 8 Summary
In this paper, we investigated four old OCs in the MWG. We photometrically
determined their physical quantities and compared them with those in previous
studies. By combining data of the four OCs with those from previously known
OCs, we newly estimated the radial metallicity distribution of the MWG. We
summarize our results as follows (see also Table 1 and Table 2).
* •
We determined the center of Czernik 30 -
$\alpha_{J2000}=07^{h}31^{m}10.8^{s}$, $\delta_{J2000}=-09\arcdeg 56\arcmin
42\arcsec$. We estimated the physical parameters: radius $2.3^{\prime}\pm
0.3^{\prime}$, color excess $E(B-V)=0.15\pm 0.08$, age $t=2.82\pm 0.32$ Gyr
($\log t=9.45\pm 0.05$), metallicity [Fe/H]$=-0.22\pm 0.15$ dex, and distance
modulus $(m-M)_{0}=14.05\pm 0.13$.
* •
We determined the center of Berkeley 34 -
$\alpha_{J2000}=07^{h}00^{m}23.2^{s}$, $\delta_{J2000}=-00\arcdeg 13\arcmin
54\arcsec$. We estimated the quantities: radius $2.5^{\prime}\pm
0.3^{\prime}$, color excess $E(B-V)=0.56\pm 0.24$, age $t=2.51\pm 0.30$ Gyr
($\log t=9.40\pm 0.05$), metallicity [Fe/H] $=-0.30\pm 0.15$ dex, and
distance modulus $(m-M)_{0}=14.13\pm 0.19$.
* •
We determined the center of Berkeley 75 -
$\alpha_{J2000}=06^{h}48^{m}59.1^{s}$, $\delta_{J2000}=-23\arcdeg 59\arcmin
36\arcsec$. As for the physical quantities: radius $1.9^{\prime}\pm
0.2^{\prime}$, color excess $E(B-V)=0.07\pm 0.18$, age $t=3.16\pm 0.73$ Gyr
($\log t=9.50\pm 0.10$), metallicity [Fe/H] $=-0.57\pm 0.20$ dex, and distance
modulus $(m-M)_{0}=14.44\pm 0.17$.
* •
We determined the center of Berkeley 76 -
$\alpha_{J2000}=07^{h}06^{m}42.4^{s}$ and $\delta_{J2000}=-11\arcdeg 43\arcmin
33\arcsec$. For the physical quantities, we obtained radius $4.0^{\prime}\pm
0.3^{\prime}$, color excess $E(B-V)=0.41\pm 0.33$, age $t=1.26\pm 0.14$ Gyr
($\log t=9.10\pm 0.05$), metallicity [Fe/H] $=0.00\pm 0.20$ dex, and distance
modulus $(m-M)_{0}=13.97\pm 0.23$.
* •
We investigated the radial metallicity distribution of the Galactic disk using
a single linear fit and a broken linear fit to 236 old OCs. The gradient of
the single linear fit was $-0.052\pm 0.004$ dex kpc$^{-1}$, and those for the
broken linear fit were $-0.070\pm 0.006$ dex kpc$^{-1}$ at $r<12$ kpc and
$-0.016\pm 0.010$ dex kpc$^{-1}$ at $r\geq 12$ kpc.
We thank the anonymous referee for the fast and very helpful comments that
improved the manuscript. We thank A. E. Piatti for sending us the photometric
data of Czernik 30 and Takashi Hasegawa for providing us the photometric data
of Berkeley 76. We appreciate Mridweeka Singh for helpful discussion. Based on
observations at Cerro Tololo Inter-American Observatory at NSF’s NOIRLab
(NOIRLab Prop. ID 2010B-0178; PI: Sang Chul Kim), which is managed by the
Association of Universities for Research in Astronomy (AURA) under a
cooperative agreement with the National Science Foundation. This publication
makes use of data products from the Two Micron All Sky Survey, which is a
joint project of the University of Massachusetts and the Infrared Processing
and Analysis Center/California Institute of Technology, funded by the National
Aeronautics and Space Administration and the National Science Foundation. This
work has made use of data from the European Space Agency (ESA) mission Gaia
(https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and
Analysis Consortium (DPAC,
https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has
been provided by national institutions, in particular the institutions
participating in the Gaia Multilateral Agreement. This research was supported
by the Korea Astronomy and Space Science Institute under the R&D program
(Project No. 2022-1-868-04) supervised by the Ministry of Science and ICT.
H.S.P. was supported in part by the National Research Foundation of Korea
(NRF) grant funded by the Korea government (MSIT, Ministry of Science and ICT;
No. NRF-2019R1F1A1058228). J.H.L. was supported by the National Research
Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No.
2022R1A2C1004025).
## References
* Ahumada et al. (2013) Ahumada, A. V., Cignoni, M., Bragaglia, A., et al. 2013, MNRAS, 430, 221, doi: 10.1093/mnras/sts593
* Alves (2000) Alves, D. R. 2000, ApJ, 539, 732, doi: 10.1086/309278
* Bessell (2001) Bessell, M. S. 2001, PASP, 113, 66, doi: 10.1086/317972
* Bressan et al. (2012) Bressan, A., Marigo, P., Girardi, L., et al. 2012, MNRAS, 427, 127, doi: 10.1111/j.1365-2966.2012.21948.x
* Cannon (1970) Cannon, R. D. 1970, MNRAS, 150, 111, doi: 10.1093/mnras/150.1.111
Scalable and adaptive variational Bayes methods for Hawkes processes
Déborah Sulem <EMAIL_ADDRESS>
Department of Statistics
University of Oxford
Vincent Rivoirard <EMAIL_ADDRESS>
Ceremade, CNRS, UMR 7534
Université Paris-Dauphine, PSL University
Judith Rousseau <EMAIL_ADDRESS>
Department of Statistics
University of Oxford
Hawkes processes are often applied to model dependence and interaction phenomena in multivariate event data sets, such as neuronal spike trains, social interactions, and financial transactions.
In the nonparametric setting, learning the temporal dependence structure of Hawkes processes is generally a computationally expensive task, all the more so with Bayesian estimation methods. In particular, for generalised nonlinear Hawkes processes, Monte-Carlo Markov Chain methods applied to compute the doubly intractable posterior distribution are not scalable to high-dimensional processes in practice. Recently, efficient algorithms targeting a mean-field variational approximation of the posterior distribution have been proposed. In this work, we first unify existing variational Bayes approaches under a general nonparametric inference framework, and analyse the asymptotic properties of these methods under easily verifiable conditions on the prior, the variational class, and the nonlinear model.
Secondly, we propose a novel sparsity-inducing procedure, and derive an adaptive mean-field variational algorithm
for the popular sigmoid Hawkes processes. Our algorithm is parallelisable and therefore computationally efficient in high-dimensional settings. Through an extensive set of numerical simulations, we also demonstrate that our procedure is able to adapt to the dimensionality of the parameter of the Hawkes process, and is partially robust to some types of model mis-specification.
temporal point processes, Bayesian nonparametrics, connectivity graph, variational approximation.
§ INTRODUCTION
Modelling point or event data with temporal dependence often requires inferring a local dependence structure between events and estimating interaction parameters. In this context, the multivariate Hawkes process is a widely used temporal point process (TPP) model, for instance, in seismology [Ogata, 1999], criminology [Mohler et al., 2011], finance [Bacry and Muzy, 2015], and social network analysis [Lemonnier and Vayatis, 2014]. In particular, the generalised nonlinear multivariate Hawkes model, an extension of the classical self-exciting process [Hawkes, 1971], is able to account for different types of temporal interactions, including excitation and inhibition effects, often found in event data [Hawkes, 2018, Bonnet et al., 2021]. The excitation phenomenon, sometimes named contagion or bursting behaviour, corresponds to the empirical observation that the occurrence of an event, e.g., a post on a social media platform, increases the probability of observing similar events in the future, e.g., reaction comments. The inhibition phenomenon refers to the opposite observation and is prominent in neuronal applications due to biological regulation mechanisms [Bonnet et al., 2021], and in criminology due to the enforcement of policies [Olinde and Short, 2020].
Moreover, the Hawkes model has become popular for the interpretability of its parameter, in particular the connectivity or dependence graph parameter, which corresponds to a Granger-causal graph for the multivariate point process [Eichler et al., 2017].
More precisely, in event data modelling, a multivariate TPP is often described as a counting process of events (or points), $N = (N_t)_{t \in [0,T]} = (N_t^1, \dots, N_t^K)_{t \in [0,T]}$, where $K \geq 1$ is the number of components (or dimensions) of the process, observed over a period $[0,T]$ of length $T > 0$. Each component of a TPP can represent a specific type of event (e.g., an earthquake), or a particular location where events are recorded (e.g., a country). For each $k=1,\dots, K$ and time $t \in [0,T]$, $N_t^k \in \mathbb{N}$ counts the number of events that have occurred until $t$ at component $k$; therefore, $(N_t^k)_{t \in [0,T]}$ is an integer-valued, non-decreasing process.
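The counting-process view above can be made concrete with a short sketch (illustrative Python, not part of the paper's inference procedure): given the recorded event times of one component, $N_t^k$ is simply the number of times not exceeding $t$, which is integer-valued and non-decreasing in $t$.

```python
import numpy as np

def count_until(event_times, t):
    """N_t^k: number of events of component k with time <= t."""
    return int(np.searchsorted(np.sort(np.asarray(event_times)), t, side="right"))

# monotonicity check on a small event record
events_k = [0.2, 0.5, 1.3]
counts = [count_until(events_k, t) for t in (0.0, 0.3, 0.6, 1.5)]
```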
In particular, multivariate TPP models are of interest for jointly modelling the occurrences of events separated into distinct types, or recorded at multiple places, by specifying a multivariate conditional intensity function (or, more concisely, intensity). The latter, denoted $(\lambda_t)_t = (\lambda_t^1, \dots, \lambda_t^K)_{t\in \R}$, characterises the probability distribution of events, for each component. It is informally defined as the infinitesimal probability rate of an event, conditionally on the history of the process, i.e.,
\begin{equation*}
\lambda_t^k dt = \Prob{\text{event at dimension $k$ in } \: [t,t+dt] \Big|\mathcal{G}_{t}}, \quad k=1, \dots, K, \quad t \in [0,T],
\end{equation*}
where $\mathcal{G}_{t} = \sigma(N_s, 0 \leq s<t)$ denotes the history of the process until time $t$.
In the generalised nonlinear Hawkes model, the intensity is defined as
\begin{equation}\label{def:NLintensity}
\lambda_t^k = \phi_k \left( \nu_k + \sum_{l=1}^{K} \int_{-\infty}^{t^-} h_{lk}(t-s)dN_s^l \right), \quad k=1,\dots, K,
\end{equation}
where for each $k$, $\phi_k: \mathbb{R}\to \mathbb{R}^+$ is a link or activation function, $\nu_k > 0$ is a background or spontaneous rate of events, and for each $l=1, \dots, K$, $h_{lk}: \R^+ \to \R$ is an interaction function or triggering kernel, modelling the influence of $N^l$ onto $N^k$. We note that in this model, the parameter $\nu = (\nu_k)_k$
characterises the external influence of the environment on the process, here, assumed constant over time, while the functions $h = (h_{lk})_{l,k=1, \dots, K}$
parametrise the causal influence of past events, that depends on each ordered pair of dimensions. In particular,
for any $(l,k)$, there exists a Granger-causal relationship from $N^l$ to $N^k$, or in other words, $N^k$ is locally-dependent on $N^l$, if and only if $h_{lk} \neq 0$ [Eichler et al., 2017]. Moreover, defining for each $(l,k)$, $\delta_{lk} := \mathds{1}_{h_{lk}\neq 0}$, the parameter $\delta := (\delta_{lk})_{l,k} \in \{0,1\}^{K \times K}$ defines a Granger-causal graph, called the connectivity graph.
Finally, the link functions $\phi = (\phi_k)_k$ are in general nonlinear and monotone non-decreasing, so that a value $h_{lk}(x) > 0$ can be interpreted as an excitation effect, and $h_{lk}(x) < 0$ as an inhibition effect, for some $x \in \R^+$. Link functions are an essential part of the model chosen by the practitioner, and frequently set as ReLU functions $\phi_k(x) = \max(x,0) = (x)_+$ [Hansen et al., 2015, Chen et al., 2017, Costa et al., 2020, Lu and Abergel, 2018, Bonnet et al., 2021, Deutsch and Ross, 2022], sigmoid-type functions, e.g., $\phi_k(x) = \lsup_k (1 + e^{-x})^{-1}$ with a scale parameter $ \lsup_k > 0$ [Zhou et al., 2021, Malem-Shinitski et al., 2021], softplus functions $\phi_k(x) = \log (1+e^x)$ [Mei and Eisner, 2017], or clipped exponential functions, i.e., $\phi_k(x) = \min(e^x, \Lambda_k)$ with a clip parameter $\Lambda_k > 0$ [Gerhard et al., 2017, Carstensen et al., 2010]. When all the interaction functions are non-negative and $\phi_k(x) = x$ for every $k$, the intensity (<ref>) corresponds to the linear Hawkes model. Defining the underlying or linear intensity as
\begin{align}\label{def:lin_intensity}
\Tilde \lambda_t^k = \nu_k + \sum_{l=1}^{K} \int_{-\infty}^{t^-} h_{lk}(t-s)dN_s^l, \quad k=1,\dots, K,
\end{align}
for any $t \in \R$, the nonlinear intensity (<ref>) can be re-written as $\lambda_t^k = \phi_k(\Tilde \lambda_t^k)$.
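The nonlinear intensity above can be evaluated directly from a realised event history, by accumulating the kernel contributions of strictly earlier events and then applying the link function. The following sketch is illustrative Python (the function and variable names are ours, not the paper's); it computes $\lambda_t^k = \phi_k(\Tilde \lambda_t^k)$ for all components at a given time.

```python
import numpy as np

def nonlinear_intensity(t, events, nu, h, phi):
    """lambda_t^k = phi_k(nu_k + sum_l sum_{s in events_l, s < t} h_lk(t - s)).

    events : list of K arrays of event times (one per component)
    nu     : K background rates
    h      : K x K nested list of interaction functions h[l][k]
    phi    : list of K link functions
    """
    K = len(events)
    lam = np.empty(K)
    for k in range(K):
        underlying = nu[k]  # the linear ("underlying") intensity tilde-lambda
        for l in range(K):
            past = events[l][events[l] < t]  # t^- convention: strictly earlier
            underlying += np.sum([h[l][k](t - s) for s in past])
        lam[k] = phi[k](underlying)
    return lam
```

With a ReLU link, a strongly inhibiting kernel drives the underlying intensity negative and the resulting intensity is clipped at zero, matching the interpretation of $h_{lk} < 0$ as inhibition.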
Estimating the parameter of the Hawkes model, denoted $f = (\nu, h)$, and the graph parameter $\delta$,
can be done via Bayesian nonparametric methods, by leveraging standard prior distributions such as random histograms, B-splines, mixtures of Beta densities [Donnet et al., 2020, Sulem et al., 2021], or Gaussian processes [Malem-Shinitski et al., 2021], which enjoy asymptotic guarantees under mild conditions on the model. However, Monte-Carlo Markov Chain (MCMC) methods to compute the posterior distribution are too computationally expensive in practice, even in linear Hawkes models with a moderately large number of dimensions [Donnet et al., 2020]. In contrast, frequentist methods such as maximum likelihood estimates [Bonnet et al., 2021] and penalised projection estimators [Hansen et al., 2015, Bacry et al., 2020, Cai et al., 2021] are more computationally efficient but do not provide uncertainty quantification on the parameter estimates. Yet, in practice, most methods rely on estimating a parametric exponential form of the interaction functions, i.e., $h_{lk}(x) = \alpha_{lk}e^{-\beta_{lk}x}$ [Bonnet et al., 2021, Wang et al., 2016, Deutsch and Ross, 2022].
The implementation of Bayesian methods using MCMC algorithms is computationally intensive for two reasons: the high dimensionality of the parameter space ($K^2$ functions and $K$ parameters to estimate) and the nonlinearity induced by the link function.
Recently, data augmentation strategies have been used to address the second difficulty, jointly with variational Bayes algorithms in sigmoid Hawkes processes [Malem-Shinitski et al., 2021, Zhou et al., 2022]. These novel methods leverage the conjugacy of an augmented mean-field variational posterior distribution with certain families of Gaussian priors.
In particular, [Zhou et al., 2021] propose an efficient iterative mean-field variational inference (MF-VI) algorithm in a semi-parametric multivariate model. A similar type of algorithm is introduced by [Malem-Shinitski et al., 2021], based on a nonparametric Gaussian process prior construction. Nonetheless, these methods do not consider the high-dimensional nonparametric setting. Nor do they address the problem of estimating the connectivity graph $\delta$, which is of interest in many applications and which also allows one to reduce the computational complexity. In fact, the connectivity graph also determines the dimensionality and the sparsity of the estimation problem, similarly to the structure parameter in high-dimensional regression [Ray and Szabó, 2021]. Moreover, variational Bayes approaches have not yet been theoretically analysed.
In this work, we make the following contributions to the variational Bayes estimation of multivariate Hawkes processes.
* First, we provide a general nonparametric variational Bayes estimation framework for multivariate Hawkes processes
and analyse the asymptotic properties of variational methods in this context. We notably establish concentration rates for variational posterior distributions, leveraging the general methodology of [Zhang and Gao, 2020], based on verifying a prior mass, a testing, and a variational class condition. Moreover, we apply our general results to variational classes of interest in the Hawkes model, namely mean-field and model-selection variational families.
* Secondly, we propose a novel adaptive and sparsity-inducing variational Bayes procedure, based on an estimate of the connectivity graph obtained by thresholding the $\ell_1$-norms of the interaction functions, and relying on model selection variational families [Zhang and Gao, 2020, Ohn and Lin, 2021]. For sigmoid Hawkes processes, we additionally leverage a mean-field approximation to derive an efficient adaptive variational inference algorithm. In addition to being theoretically valid in the asymptotic regime, we show that this approach performs very well in practice.
* In addition to the previous theoretical guarantees and proposed methodology, we empirically demonstrate the effectiveness of our algorithm in an extensive set of simulations. We notably show that, in low-dimensional settings, our adaptive variational algorithm is more computationally efficient than MCMC methods, while enjoying comparable estimation performance. Moreover, our approach is scalable to high-dimensional and sparse processes, and provides good estimates. In particular,
our algorithm is able to uncover the causality structure of the true generating process given by the graph parameter,
even under some types of model mis-specification.
Outline. In the remaining part of this section, we introduce some useful notation. Then, in Section <ref>, we describe our general model and inference setup, and present our novel adaptive and sparsity-inducing variational algorithm in Section <ref>. Moreover, Section <ref> contains our general results, and their applications to prior and variational families of interest in the Hawkes model. Finally, we report in Section <ref> the results of an in-depth simulation study. The proofs of our main results are reported in Appendix <ref>.
Notations. For a function $h$, we denote $\norm{h}_1 = \int_{\mathbb R} |h(x)|dx$ the $L_1$-norm, $\norm{h}_2 = \sqrt{\int_{\mathbb R}h^2(x)dx}$ the $L_2$-norm, $\norm{h}_\infty = \sup \limits_{x\in \mathbb R} |h(x)|$ the supremum norm, and $h^+ = \max(h,0), \: h^- = \max(-h,0)$ its positive and negative parts. For a $K \times K$ matrix $M$, we denote $r(M)$ its spectral radius, $\norm{M}$ its spectral norm, and $\operatorname{tr}(M)$ its trace.
For a vector $u \in \R^K, \norm{u}_1 = \sum_{k=1}^K |u_k|$. The notation $k \in [K]$ is used for $k \in \{ 1, \ldots, K\}$. For a set $B$ and $k \in [K]$, we denote $N^k(B)$ the number of events of $N^k$ in $B$ and $N^k|_B$ the point process measure restricted to the set $B$. For random processes, the notation $ \overset{\mathcal{L}}{=}$ corresponds to equality in distribution.
We also denote $\mathcal{N}(u, \mathcal{H}_0,d)$ the covering number of a set $\mathcal{H}_0$ by balls of radius $u$ w.r.t. a metric $d$. For any $k \in [K]$, let $\mu_k^0 = \mathbb{E}_0[\lambda_t^k(f_0)]$ be the mean of $\lambda_t^k(f_0)$ under the stationary distribution $\mathbb{P}_0$. For a set $\Omega$, its complement is denoted $\Omega^c$. We also use the notations $u_T \lesssim v_T$ if $|u_T/v_T|$ is bounded when $T \to \infty$, $u_T \gtrsim v_T$ if $|v_T/u_T|$ is bounded and $u_T \asymp v_T$ if $|u_T/v_T|$ and $|v_T/u_T|$ are bounded. We recall that a function $ \phi$ is $L$-Lipschitz, if for any $(x,x')\in\R^2$,
$|\phi(x) - \phi(x')| \leq L |x - x'|$. We denote $\mathds{1}_n$ and $\mathbf{0}_n$ the all-ones and all-zeros vectors of size $n$. Finally, we denote $\mathcal{H}(\beta, L_0)$ the Hölder class of $\beta$-smooth functions with radius $L_0$.
§ BAYESIAN NONPARAMETRIC INFERENCE OF MULTIVARIATE HAWKES PROCESSES
§.§ The Hawkes model and Bayesian framework
Formally, a $K$-dimensional temporal point process $N = (N_t)_{t \in \R} = (N_t^1, \dots, N_t^K)_{t \in \R}$, defined as a process on the real line $\R$ and on a probability space $(\mathcal{X}, \mathcal{G}, \mathbb{P})$, is a Hawkes process if it satisfies the following properties.
* Almost surely, $\forall k \neq l \in [K]$, $(N^k_t)_t$ and $(N^l_t)_t$ never jump simultaneously.
* For all $k \in [K]$, the $\mathcal{G}_t$-predictable conditional intensity function of $N^k$ at $t \in \R$ is given by (<ref>), where $\mathcal{G}_t = \sigma(N_s, s < t) \subset \mathcal{G}$.
From now on, we assume that $N$ is a stationary, finite-memory, $K$-dimensional Hawkes process with parameter $f_0 = (\nu_0, h_0)$, link functions $(\phi_k)_k$, and memory parameter $A>0$, defined as $A = \sup \{x \in \R^+; \max_{l,k} |h_{lk}^0(x)| > 0 \}$. We note that $A$ characterises the temporal length of interaction of the point process and that this inference setting is commonly used in previous work on Hawkes processes [Hansen et al., 2015, Donnet et al., 2020, Sulem et al., 2021, Cai et al., 2021]. We assume that $f_0$ is the unknown parameter, and that $(\phi_k)_k$ and $A$ are known to the statistician.
Similarly to [Donnet et al., 2020], we consider that our data is an observation of $N$ over a time window $[-A,T]$, with $T > 0$, but our inference procedure is based on the log-likelihood function corresponding to the observation of $N$ over $[0,T]$. For a parameter $f = (\nu, h)$, this log-likelihood is given by
\begin{equation}\label{eq:loglik}
L_T(f) := \sum_{k=1}^K L_T^k(f), \quad L_T^k(f) = \left[\int_0^T \log (\lambda_t^k(f)) dN_t^k - \int_0^T \lambda_t^k(f) dt\right].
\end{equation}
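A direct numerical evaluation of $L_T^k(f)$ sums the log-intensity over the observed events and approximates the compensator integral $\int_0^T \lambda_t^k(f)\,dt$ on a grid. The one-dimensional sketch below is illustrative (the trapezoid quadrature is our choice, not the paper's); it follows the $t^-$ convention by using only strictly earlier events at each evaluation point.

```python
import numpy as np

def log_likelihood_1d(events, nu, kernel, phi, T, grid_size=2000):
    """L_T(f) = sum_i log lambda_{t_i} - int_0^T lambda_t dt  (1-D sketch).

    events : sorted array of event times in [0, T]
    nu     : background rate, kernel : interaction function on R^+
    phi    : link function; the compensator uses the trapezoid rule.
    """
    def lam(t):
        past = events[events < t]          # strictly earlier events (t^-)
        return phi(nu + np.sum(kernel(t - past)))

    log_term = np.sum([np.log(lam(t)) for t in events])
    grid = np.linspace(0.0, T, grid_size)
    vals = np.array([lam(t) for t in grid])
    comp = np.sum((vals[:-1] + vals[1:]) / 2.0) * (grid[1] - grid[0])
    return log_term - comp
```

In the degenerate homogeneous case (zero kernel, identity link), this reduces to the Poisson log-likelihood $n \log \nu - \nu T$, which gives a quick correctness check.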
We denote by $\mathbb{P}_0(.|\mathcal{G}_0)$ the true conditional distribution of $N$, given the initial condition $\mathcal{G}_0$, and by $\mathbb{P}_f(.|\mathcal{G}_0)$ the distribution defined as
\begin{equation*}
d\mathbb{P}_f(.|\mathcal{G}_0) = e^{L_T(f) - L_T(f_0)}\, d\mathbb{P}_0(.|\mathcal{G}_0).
\end{equation*}
We also denote $\mathbb{E}_0$ and $\mathbb{E}_f$ the expectations associated to $\mathbb{P}_0(.|\mathcal{G}_0)$ and $\mathbb{P}_f(.|\mathcal{G}_0)$. With a slight abuse of notation, we drop the notation $\mathcal{G}_0$ in the subsequent expressions.
We consider a nonparametric setting for estimating the parameter $f$, within a parameter space $\mathcal{F}$. Given a prior distribution $\Pi$ on $\mathcal{F}$, the posterior distribution, for any subset $B \subset \mathcal{F}$, is defined as
\begin{equation}\label{def:pposterior_dist}
\Pi(B|N) = \frac{\int_{B} \exp(L_T(f)) d\Pi(f)}{\int_{\mathcal{F}} \exp(L_T(f)) d\Pi(f)} =: \frac{N_T(B)}{D_T}, \quad D_T := \int_{\mathcal{F}} \exp(L_T(f)) d\Pi(f).
\end{equation}
This posterior distribution (<ref>) is often said to be doubly intractable, because of the integrals in the log-likelihood function (<ref>) and in the denominator $D_T$. Before studying the problem of computing the posterior distribution, we make our construction of the prior distribution explicit.
Firstly, our prior distribution $\Pi$ is built so that it puts mass 1 on finite-memory processes, i.e., on parameters $f$ such that the interaction functions $(h_{lk})_{l,k}$ have a bounded support included in $[0,A]$.
Moreover, we use a hierarchical spike-and-slab prior based on the connectivity graph parameter $\delta$ similar to [Donnet et al., 2020, Sulem et al., 2021]. For each $(l,k) \in [K]^2$, we consider the following parametrisation
\begin{align*}
h_{lk} = \delta_{lk} \bar h_{lk}, \quad \delta_{lk} \in \{0,1\}, \quad \text{ with } \quad \bar h_{lk}=0 \quad \iff \quad \delta_{lk}=0\end{align*}
so that $\delta = (\delta_{lk})_{lk} \in \{0,1\}^{K^2}$ is the connectivity graph associated to $f$. We therefore consider $\delta \sim \Pi_\delta$, where $\Pi_\delta$ is a prior distribution on the space $\{0,1\}^{K^2}$, and, for each $(l,k)$ such that $\delta_{lk} = 1$, $\bar h_{lk} \sim \tilde \Pi_h$ where $\tilde \Pi_h$ is a prior distribution on functions with support included in $[0,A]$. In this paper, we will mostly consider the case where the functions $\bar h_{lk}$, when non-null, are developed on a dictionary of functions $(e_j)_{j\geq 1}$, such that $e_j: [0,A] \to \R, \: \forall j$, and
\begin{equation} \label{dictionary}
\bar h_{lk} = \sum_{j=1}^{J_{k}} h_{lk}^j e_j, \quad h_{lk}^j \in \mathbb R, \: \quad \forall j \in [J_{k}], \quad J_{k}\geq 1, \quad (l,k) \in [K]^2.
\end{equation}
Then, choosing a prior distribution $\Pi_J$ on $J = (J_k)_{k \in [K]}$, our hierarchical prior on $f$ finally writes as
\begin{align}\label{eq:prior-distribution}
d\Pi(f) = d\Pi_\nu(\nu) d\Pi_\delta(\delta) d\Pi_J(J) d \Pi_{h|\delta, J}(h),
\end{align}
where $\Pi_\nu$ is a prior distribution on $\R_+^K$, suitable to the nonlinear model (see [Sulem et al., 2021] for some examples), and
\begin{align*}
d \Pi_{h|\delta, J}(h) = \prod_{l,k} \left[ (1- \delta_{lk})\delta_{(0)}(\bar h_{lk}) + \delta_{lk}\, d\Tilde \Pi_{h|\delta, J}(\bar h_{lk}) \right],
\end{align*}
where $\delta_{(0)}$ denotes the Dirac measure at 0 and $\Tilde \Pi_{h|\delta, J}(\bar h)$ is a prior distribution on non-null functions decomposed over $J_k$ functions from the dictionary. From the previous construction, one can see that the graph parameter $\delta \in \{0,1\}^{K^2}$ defines the sparsity structure of $h = (h_{lk})_{l,k}$. This parameter plays a crucial role when performing inference on high dimensional Hawkes processes, either in settings when sparsity is a reasonable assumption, or as the only parameter of interest [Bacry et al., 2020, Chen et al., 2017].
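The hierarchical spike-and-slab construction above can be sketched by forward-sampling: draw the edges $\delta_{lk}$, then coefficients on a dictionary for the active edges only. The distributional choices below (Gamma background rates, independent Bernoulli edges, Gaussian coefficients on a regular histogram dictionary $e_j = \mathds{1}_{[(j-1)A/J,\, jA/J)}$) are illustrative placeholders, not the specific priors analysed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prior(K, A, J, p_edge=0.3):
    """Draw (nu, delta, coefficients) from an illustrative spike-and-slab prior.

    Returns the background rates, the connectivity graph, the dictionary
    coefficients, and a callable h(l, k, x) evaluating the resulting
    piecewise-constant interaction function h_lk (zero outside [0, A)).
    """
    nu = rng.gamma(shape=2.0, scale=0.5, size=K)          # background rates
    delta = rng.binomial(1, p_edge, size=(K, K))          # Bernoulli edges
    coef = rng.normal(0.0, 1.0, size=(K, K, J)) * delta[..., None]  # slab / spike

    def h(l, k, x):
        # h_lk = sum_j coef[l, k, j] * e_j on a regular histogram of [0, A)
        j = np.minimum((np.asarray(x) / A * J).astype(int), J - 1)
        return np.where((np.asarray(x) >= 0) & (np.asarray(x) < A),
                        coef[l, k, j], 0.0)

    return nu, delta, coef, h
```

By construction, $\delta_{lk} = 0$ forces $h_{lk} \equiv 0$ (the spike at zero), and all interaction functions vanish outside the memory window $[0, A)$.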
As previously noted, it is generally expensive to compute the posterior distribution (<ref>),
which does not have an analytical expression. However, we note that when the prior on $f$ is a product of probability distributions on the dimension-restricted parameters
$f_k = (\nu_k, (h_{lk})_{l=1, \dots, K}) \in \mathcal{F}_k$, for $k \in [K]$, so that $f = (f_k)_k$, $\mathcal{F} = \mathcal{F}_1 \times \dots \times \mathcal{F}_K$ and $d\Pi(f) = \prod_k d\Pi_k(f_k)$, then, given the expressions of the log-likelihood function (<ref>) and the intensity function (<ref>), we have that each term $L_T^k(f)$ in (<ref>) only depends on $f_k$, i.e., $L_T^k(f) = L_T^k(f_k)$. Furthermore, the posterior distribution can be written as
\begin{align} \label{rem:factorisation}
d\Pi(f|N) = \prod_k d\Pi_k(f_k|N), \quad d\Pi_k(f_k|N) = \frac{\exp(L_T^k(f_k)) d\Pi_k(f_k)}{\int_{\mathcal{F}_k} \exp(L_T^k(f_k)) d\Pi_k(f_k)}.
\end{align}
In particular, the latter factorisation implies that each factor $\Pi_k(.|N)$ of the posterior distribution can be computed in parallel, although each factor depends on the whole data $N$. Despite this possible parallelisation, the implementation of MCMC methods for computing the posterior distribution in the context of multivariate nonlinear Hawkes processes remains very challenging [Donnet et al., 2020, Zhou et al., 2021, Malem-Shinitski et al., 2021].
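Since $L_T^k(f) = L_T^k(f_k)$, the $K$ per-component terms can be evaluated concurrently given the full data. The sketch below illustrates only this parallel structure, using a toy homogeneous-Poisson stand-in for $\lambda_t^k(f_k)$ (so that $L_T^k = n_k \log \nu_k - \nu_k T$); the dispatch pattern, not the model, is the point.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def component_loglik(args):
    """Toy per-component term L_T^k(f_k): a constant intensity nu_k stands
    in for lambda_t^k(f_k), giving L_T^k = n_k log(nu_k) - nu_k T."""
    events_k, nu_k, T = args
    return len(events_k) * np.log(nu_k) - nu_k * T

def total_loglik_parallel(events, nu, T):
    # each factor Pi_k(.|N) depends on the data only through L_T^k,
    # so the K terms are evaluated independently (here: a thread pool)
    with ThreadPoolExecutor() as pool:
        terms = list(pool.map(component_loglik,
                              [(events[k], nu[k], T) for k in range(len(nu))]))
    return sum(terms)
```

The parallel total matches the serial sum of per-component terms, which is exactly the factorisation exploited above.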
To alleviate this computational bottleneck, we consider in the next section a family of variational algorithms, together with a two-step procedure to handle high-dimensional processes.
§.§ Variational Bayes inference
To scale up Bayesian nonparametric methods to high-dimensional processes, we consider a variational Bayes approach. The latter consists of approximating the posterior distribution within a variational class of distributions on $\mathcal{F}$, denoted $\mathcal{V}$. Then, the variational Bayes (VB) posterior distribution, denoted $\hat Q$, is defined as the best approximation of the posterior distribution within $\mathcal{V}$, with respect to the Kullback-Leibler divergence, i.e.,
\begin{align}\label{eq:var_posterior}
\hat Q := \arg \min_{Q \in \mathcal{V}} KL \left(Q|| \Pi(.|N)\right),
\end{align}
where the Kullback-Leibler divergence between $Q$ and $Q'$ is defined as
\begin{align*}
KL(Q||Q') := \begin{cases}
\int \log \Big(\frac{dQ}{dQ'}\Big) dQ, & \text{if } Q \ll Q' \\
+\infty, & \text{otherwise}
\end{cases}.
\end{align*}
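As a sanity check on this definition, the KL divergence can be estimated by Monte Carlo whenever the densities are available, by averaging $\log(dQ/dQ')$ over samples from $Q$. The Gaussian example below is our choice (not from the paper), since the closed form $KL(\mathcal{N}(\mu_0,\sigma_0^2)\,\|\,\mathcal{N}(\mu_1,\sigma_1^2)) = \log(\sigma_1/\sigma_0) + \frac{\sigma_0^2 + (\mu_0-\mu_1)^2}{2\sigma_1^2} - \frac{1}{2}$ is available for comparison.

```python
import numpy as np

def kl_gauss(mu0, s0, mu1, s1):
    """Closed-form KL( N(mu0, s0^2) || N(mu1, s1^2) )."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

def kl_monte_carlo(mu0, s0, mu1, s1, n=200_000, seed=0):
    """KL(Q||Q') = E_Q[log dQ/dQ'], estimated by sampling from Q."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu0, s0, size=n)
    log_q  = -0.5 * np.log(2 * np.pi * s0**2) - (x - mu0)**2 / (2 * s0**2)
    log_qp = -0.5 * np.log(2 * np.pi * s1**2) - (x - mu1)**2 / (2 * s1**2)
    return np.mean(log_q - log_qp)
```

Note the asymmetry of the divergence: samples are drawn from the first argument $Q$, matching the definition above.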
For a more in-depth introduction to this framework in the context of Hawkes processes, we refer to the works of [Zhang et al., 2020, Zhou et al., 2022, Malem-Shinitski et al., 2021].
In the variational Bayes approach, there are many possible families for $\mathcal{V}$. Interestingly, we note that under a product posterior (<ref>), the variational distribution also factorises in $K$ factors, $\hat Q = \prod_k \hat Q_k$ where each factor $\hat Q_k$ approximates $\Pi_k(.|N)$. Therefore, one can choose a variational class $\mathcal{V}'$ of distributions on $\mathcal{F}_1$, and define $\mathcal{V} := \mathcal{V}'^{\otimes K}$. In the case of multivariate Hawkes processes, we combine mean-field variational approaches [Zhou et al., 2022, Malem-Shinitski et al., 2021] with different versions of model selection variational methods [Zhang and Gao, 2020, Ohn and Lin, 2021]. Some important notions related to the two latter inference strategies are recalled in Appendix <ref>. Before presenting our method, we introduce additional concepts and notation.
We consider a general model where the log-likelihood function of the nonlinear Hawkes process can be augmented with some latent variable $z \in \mathcal{Z}$, with $\mathcal{Z}$ the latent parameter space. This approach is notably used by [Malem-Shinitski et al., 2021, Zhou et al., 2021] in the sigmoid Hawkes model, for which $\phi_k(x) \propto (1 + e^{-x})^{-1}, \forall k \in [K]$. Denoting $L_T^A(f,z)$ the augmented log-likelihood, we define the augmented posterior distribution as
\begin{align*}%\label{def:aug_posterior_dist}
\Pi_A(B |N) = \frac{\int_{B} \exp(L_T^A(f,z)) d(\Pi(f) \times \mathbb{P}_{A}(z))}{\int_{\mathcal{F} \times \mathcal{Z}} \exp(L_T^A(f,z)) d(\Pi(f) \times \mathbb{P}_{A})(z)}, \quad B \subset \mathcal{F} \times \mathcal{Z},
\end{align*}
where $\mathbb{P}_{A}$ is a prior density on $\mathcal{Z}$ with respect to a dominating measure $\mu_z$. One can then define an approximating mean-field family of $\Pi_A(. |N)$ as
\begin{align}\label{eq:aug-mean-field}
\mathcal{V}_{AMF} = \left \{Q: \mathcal{F} \times \mathcal{Z} \to [0,1] ; \: Q(f, z) = Q_1(f)Q_2(z) \right \},
\end{align}
by only “breaking” correlations between parameters and latent variables. The corresponding mean-field variational posterior distribution is then
\begin{align}\label{eq:var-mf}
\hat Q_{AMF} = \arg \min_{Q \in \mathcal{V}_{AMF}} KL(Q|| \Pi_A(. |N)).
\end{align}
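Mean-field objectives of this form are typically optimised by coordinate ascent, cycling through the variational factors. To show the structure of such updates without the Hawkes-specific machinery, the sketch below applies coordinate-ascent variational inference to a textbook conjugate model (a Normal likelihood with a Normal-Gamma prior, following Bishop's classic example), with $q(\mu,\tau) = q(\mu)q(\tau)$.

```python
import numpy as np

def cavi_normal(x, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, iters=50):
    """Coordinate-ascent mean-field VI for x_i ~ N(mu, 1/tau) with prior
    tau ~ Gamma(a0, b0), mu | tau ~ N(mu0, 1/(lam0 tau)).

    Factors: q(mu) = N(mu_N, 1/lam_N), q(tau) = Gamma(a_N, b_N); each
    update holds the other factor fixed at its current expectation."""
    n, xbar = len(x), np.mean(x)
    a_N = a0 + (n + 1) / 2           # fixed across iterations
    e_tau = a0 / b0                  # initial guess for E_q[tau]
    for _ in range(iters):
        # update q(mu)
        mu_N = (lam0 * mu0 + n * xbar) / (lam0 + n)
        lam_N = (lam0 + n) * e_tau
        # update q(tau), using E_q[(x_i - mu)^2] and E_q[(mu - mu0)^2]
        e_sq = np.sum((x - mu_N) ** 2) + n / lam_N
        b_N = b0 + 0.5 * (e_sq + lam0 * ((mu_N - mu0) ** 2 + 1 / lam_N))
        e_tau = a_N / b_N
    return mu_N, lam_N, a_N, b_N
```

For a large sample, the variational mean tracks the sample mean and $\mathbb{E}_q[\tau]$ tracks the inverse sample variance, as expected of a consistent mean-field approximation in this conjugate setting.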
Moreover, our hierarchical prior construction (<ref>) implies that a parameter $f$ is indexed by a set of hyperparameters of the form $m=(\delta, J_{lk}; (l,k) \in \mathcal I(\delta))$, where $\mathcal I(\delta) := \{ (l,k) \in [K]^2;\, \delta_{lk}=1\}$ is the set of “edges”, i.e., pair indices corresponding to non-null interaction functions in $f$. Moreover, $J_{lk}$ is the number of functions in the dictionary used to decompose $h_{lk}$. We note that $m$ characterises the dimensionality of the parameter $f$, and we call it a model. We can then re-write our parameter space as
\begin{align}\label{eq:space-decomposition}
\mathcal{F} = \bigcup_{m \in \mathcal{M}} \mathcal{F}_m, \quad \mathcal{F}_m = \left\{f' \in \mathcal{F}; \: \delta' = \delta, \: J' = J \right\}, \quad m = (\delta, J), \: \delta = (\delta_{lk})_{l,k}, \: J = (J_{lk})_{l,k},
\end{align}
where $\mathcal{M}$ is the set of models
\begin{align*}
\mathcal{M} = \left \{ m = (\delta, J); \delta \in \{0,1\}^{K\times K}, \: J \in \mathbb N^{K \times K} \right \}.
\end{align*}
From now on, we assume that for each $k$, $J_{lk} = J_k, \: \forall l \in [K]$, and re-define $J = (J_1, \dots, J_K) \in \mathbb N^{K}$.
The decomposition (<ref>) of the parameter space is key to computing a variational distribution that is supported on the whole space $\mathcal{F}$ and that, in particular, provides a distribution on the space of graph parameters. Next, we construct an adaptive variational posterior distribution by considering an approximating family of variational distributions within each subspace $\mathcal{F}_m$, denoted $\mathcal{V}^m$.
We leverage two types of adaptive variational posterior distributions, $\hat Q_{A1}$ and $\hat Q_{A2}$, considered respectively by [Zhang and Gao, 2020] and [Ohn and Lin, 2021], and defined as
\begin{align}
&\hat Q_{A1} := \hat Q_{\hat m}, \quad \hat m := \arg \max_{m \in \mathcal{M}} ELBO(\hat Q^m),\label{eq:ms_var_post} \\
&\hat Q_{A2} := \sum_{m \in \mathcal{M}} \hat \gamma_{m} \hat Q_m \label{eq:ms_var_post_avg},
\end{align}
where $\hat Q_m = \arg \min_{Q \in \mathcal{V}^m} KL(Q||\Pi(.|N))$ is the variational posterior distribution in model $m$ (defined on $\mathcal{F}_m$), $ELBO(\cdot)$ is the evidence lower bound (ELBO) (defined in our context in (<ref>) in Appendix <ref>), and $ \{\hat \gamma_{m}\}_{m \in \mathcal{M}} $ are the model marginal probabilities defined as
\begin{align*}
\hat \gamma_{m} = \frac{ \Pi_m(m) \exp \left \{ ELBO(\hat Q_m) \right \}}{\sum_{m \in \mathcal{M}}\Pi_m(m) \exp \left \{ ELBO(\hat Q_m) \right \}}, \quad m \in \mathcal{M}.%\label{eq:ms_var_post}
\end{align*}
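The weights $\hat \gamma_m$ are a softmax of log prior plus ELBO; since ELBO values are typically large in magnitude, exponentiating them directly overflows or underflows. The following sketch (function and variable names are ours, not from the paper) computes the weights with a log-sum-exp style normalisation:

```python
import numpy as np

def model_weights(log_prior, elbo):
    """Model probabilities gamma_m proportional to Pi_m(m) * exp(ELBO(Q_m)),
    computed stably by shifting log-weights before exponentiating."""
    log_w = np.asarray(log_prior, dtype=float) + np.asarray(elbo, dtype=float)
    log_w -= log_w.max()            # subtract the max: exp() stays in range
    w = np.exp(log_w)
    return w / w.sum()

# Two candidate models with equal prior mass: the model with the larger ELBO
# (the tighter bound on the evidence) receives the larger weight.
gamma = model_weights(log_prior=[np.log(0.5), np.log(0.5)],
                      elbo=[-1000.0, -998.0])
```

With equal priors, the weight of model 2 reduces to a logistic function of the ELBO difference.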
We note that in practice one might prefer using the adaptive VB posterior (<ref>) rather than (<ref>), to avoid manipulating a mixture of distributions. In our simulations in Section <ref>, we often find that only one or two models have significant marginal probabilities $\hat \gamma_{m}$, so the two adaptive variational posteriors (<ref>) and (<ref>) are often close.
To leverage the computational benefits of the augmented mean-field variational class (<ref>), we can set the variational family $\mathcal{V}^m$ as
\begin{align}\label{eq:mean-field-m}
\mathcal{V}^m_{AMF} = \left \{Q: \mathcal{F}_m \times \mathcal{Z} \to [0,1] ; \: Q(f, z) = Q_1(f)Q_2(z) \right \}, \quad \forall m \in \mathcal{M}.
\end{align}
Nonetheless, for moderately large to large values of $K$, it is not computationally feasible to explore all possible models in $\mathcal{M}$, whose number is greater than $2^{K^2}$, the cardinality of the graph space $\{0,1\}^{K^2}$. Even with parallel inference in each dimension, the number of models per dimension is greater than $2^K$ and remains too large. Therefore, for this dimensionality regime, we propose an efficient two-step procedure in the next section: it first estimates $\delta$ using a thresholding procedure, then computes the adaptive mean-field variational Bayes posterior over a restricted set of models with $\delta$ fixed at this estimator.
§.§ Adaptive two-step procedure
In this section, we propose an adaptive and sparsity-inducing variational Bayes procedure for estimating the parameter of Hawkes processes with a moderately large or large number of dimensions $K$.
Firstly, we note that in Section <ref>, we will provide theoretical guarantees for the above types of variational approaches in nonlinear multivariate Hawkes processes. In particular, we show that under easy-to-verify assumptions on the prior and on the parameters, the variational posterior concentrates in $L_1$-norm at some rate $\epsilon_T$, which typically depends on the smoothness of the interaction functions. Moreover, this concentration rate is the same as for the true posterior distribution. For instance, using [Sulem et al., 2021], for Lipschitz link functions and well-behaved priors, such as hierarchical Gaussian processes, histogram priors, or Bayesian splines, if the interaction functions belong to a Hölder or Sobolev class with smoothness parameter $\beta$, we obtain that $\epsilon_T\asymp T^{-\beta/(2\beta+1)}$, up to $\log T$ terms.
A consequence of this result is that for each $(l,k) \in [K]^2$, the (variational) posterior distribution of
$S_{lk} := \|h_{lk}\|_1$ concentrates around the true value $S_{lk}^0 := \|h_{lk}^0\|_1$ at the same rate $\epsilon_T$. Hence, if for all $(l,k)$ such that $\delta_{lk}^0=1$, $S_{lk}^0$ is large compared to $\epsilon_T$, then the following thresholding estimator of $\delta$ is consistent
\begin{align}\label{eq:graph-estimator}
\hat \delta_{lk} = 1 \quad \Leftrightarrow \quad \hat S_{lk}> \eta_0,
\end{align}
where $\hat S_{lk}$ is the variational posterior mean or median of $S_{lk}$ and $\epsilon_T \ll \eta_0 < \min_{lk}S_{lk}^0$.
In particular, the above results hold for the adaptive variational Bayes posterior with the set $\mathcal{M}_C$ of candidate models with the complete graph $\delta_C$, defined as
\begin{align}\label{eq:set_models_complete}
\mathcal{M}_C := \{m = (\delta_C = \mathds{1}\mathds{1}^T, J = (J_{k})_{k}); \: J_{k} \geq 1, \: \forall k \in [K] \}.
\end{align}
In this case, to choose the threshold $\eta_0$ in a data-driven way, we order the estimators $\hat S_{lk}, \: (l,k) \in [K]^2$, say $\hat S_{(1)} \leq \hat S_{(2)} \leq \cdots \leq \hat S_{(K^2)}$, and set $\eta_0 \in (\hat S_{(i_0)}, \hat S_{(i_0+1)})$ where $i_0$ is the index of the first significant gap in $(\hat S_{(i)})_i$, i.e., the first significant value of $\hat S_{(i+1)}- \hat S_{(i)}$. In Figure <ref>, we plot the estimates $(\hat S_{(i)})_i$ (blue dots) in one of the simulation settings of Section <ref>. In this case, the true graph $\delta_0$ is sparse and many $S_{lk}^0$ (orange dots) are equal to 0. From this picture, we can see that by choosing $\eta_0$ anywhere between $0.1$ and $0.2$, we can correctly estimate the true graph $\delta_0$. More details on these results and their interpretation are provided in Section <ref>.
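The gap-based choice of $\eta_0$ can be sketched as follows, taking for illustration the largest consecutive gap as the "first significant" one (a hypothetical simplification of the data-driven rule described above; the function name is ours):

```python
import numpy as np

def graph_from_gap(S_hat):
    """Estimate delta by thresholding the posterior L1-norm estimates S_hat
    (a K x K array) inside the largest gap of their sorted values."""
    s = np.sort(S_hat.ravel())
    gaps = np.diff(s)
    i0 = int(np.argmax(gaps))            # index of the biggest gap
    eta0 = 0.5 * (s[i0] + s[i0 + 1])     # threshold in the middle of the gap
    return (S_hat > eta0).astype(int), eta0

# Toy example: three strong interactions, the remaining norms close to zero.
S = np.array([[0.00, 0.90, 0.02],
              [0.80, 0.00, 0.01],
              [0.03, 0.00, 0.70]])
delta_hat, eta0 = graph_from_gap(S)
```

On this toy matrix the gap between $0.03$ and $0.7$ dominates, so the three large entries are recovered as edges.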
Estimated $L_1$-norms $(\hat S_{(i)})_{i \in [K^2]}$ (blue dots), based on the mean-field adaptive variational posterior mean and the set of models $\mathcal{M}_C$ containing models with complete graph $\delta_C = \mathds{1}\mathds{1}^T$, plotted in increasing order. The orange dots correspond to the true values $S_{lk}^0 = \|h_{lk}^0\|_1$. These results correspond to one realisation of the Excitation scenario of Simulation 4, for the Hawkes processes with $K=16$ dimensions.
Therefore, once $\hat \delta$ is obtained, we compute an adaptive variational Bayes posterior, conditional on $\delta=\hat \delta$, by considering the set of models
\begin{align}\label{eq:set_models_restricted}
\mathcal{M}_E := \{m = (\hat \delta, J = (J_{k})_{k}); \: J_{k} \geq 1, \: \forall k \in [K]\}.
\end{align}
In summary, our adaptive two-step algorithm writes as:
* Complete graph VB:
(a) compute the VB posterior associated to the set of models $\mathcal{M}_C$, i.e., to the complete graph $ \delta_C = \mathds{1}\mathds{1}^T$, and compute the posterior mean of $S_{lk}=\|h_{lk}\|_1$, denoted $\hat S_{lk}$, $\forall (l,k) \in[K]^2$.
(b) order the values $\hat S_{lk}$ in increasing order, say $\hat S_{(1)} \leq \hat S_{(2)} \leq \cdots \leq \hat S_{(K^2)}$, and define
$\hat \delta_{lk}= 1$ iff $\hat S_{lk} > \eta_0$, where $\eta_0$ is a threshold set at the first significant gap $\hat S_{(i+1)} - \hat S_{(i)}$.
* Graph-restricted VB: compute the VB posterior associated to the set of models $\mathcal{M}_E$, i.e., to models with $\delta = \hat \delta$.
Theoretical validation of our procedure is provided in Section <ref>. We also note that different variants of our two-step strategy are possible. In particular, one can choose a different threshold for each dimension $k \in [K]$, since different convergence rates may hold in different dimensions. Moreover, one can potentially drop the model selection step over the $J_{k}$'s, $k\in [K]$, in step 1(a), and compute a variational posterior in a single model $m \in \mathcal{M}_C$.
In the next section, we consider the case of sigmoid Hawkes processes, for which a data augmentation scheme allows one to efficiently compute a mean-field approximation of the posterior distribution within a model $m$.
In recent work, [Bonnet et al., 2021] also propose a thresholding approach for estimating the connectivity graph $\delta$ in the context of parametric maximum likelihood estimation. In fact, an alternative strategy to our procedure, derived from their work, would consist in defining the graph estimator as $\hat \delta_{lk} = 1 \iff \hat S_{lk}> \varepsilon \sum_{l,k} \hat S_{lk}$, where $\varepsilon \in (0,1)$ is a pre-defined or data-driven threshold.
§ ADAPTIVE VARIATIONAL BAYES ALGORITHMS IN THE SIGMOID MODEL
In this section, we focus on the sigmoid Hawkes model, for which the link functions in (<ref>) are sigmoid-type functions. We consider the following parametrisation of this model: for each $ k \in [K]$,
\begin{align}\label{eq:sigmoid_tilde}
\phi_k(x) = \theta_k\Tilde \sigma(x), \quad \Tilde \sigma(x) = \sigma \left(\alpha(x-\eta)\right), \quad \sigma(x) := (1 + e^{-x})^{-1}, \quad \alpha > 0, \: \eta > 0, \: \theta_k > 0.
\end{align}
Here, we assume that the hyperparameters $\alpha, \eta$ and $\theta = (\theta_k)_k$ are known; however, our methodology can be directly extended to estimate an unknown $\theta$, similarly to [Zhou et al., 2022] and [Malem-Shinitski et al., 2021]. We first note that for $\alpha = 0.1$, $\eta = 10$ and $\theta_k = 20$, the nonlinearity $\phi_k$ is similar to the ReLU and softplus functions on $(-\infty, 20]$ (see Figure <ref> in Section <ref>). This is helpful for comparing the impact of the link functions on inference in our numerical experiments in Section <ref>.
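As a quick numerical check of this parametrisation, the following sketch (our code, using the reference values $\alpha = 0.1$, $\eta = 10$, $\theta_k = 20$ stated above) verifies the basic shape properties of $\phi_k$:

```python
import numpy as np

def phi(x, theta=20.0, alpha=0.1, eta=10.0):
    """Sigmoid link phi_k(x) = theta * sigma(alpha * (x - eta)),
    with sigma(x) = 1 / (1 + exp(-x))."""
    return theta / (1.0 + np.exp(-alpha * (np.asarray(x, dtype=float) - eta)))

# phi is increasing and bounded by theta; like ReLU and softplus it vanishes
# for very negative inputs and grows monotonically, and phi(eta) = theta / 2.
x = np.linspace(-20.0, 20.0, 81)
vals = phi(x)
```

This makes explicit that $\theta_k$ is the saturation level of the intensity and $\eta$ the location of the half-maximum.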
For sigmoid-type link functions, efficient mean-field variational inference methods based on data augmentation and Gaussian priors have been previously proposed, notably by [Malem-Shinitski et al., 2021, Zhou et al., 2021, Zhou et al., 2022]. We first recall this latent variable augmentation scheme, which yields a conjugate form for the variational posterior distribution in a fixed model $m = (\delta, J)$ (see Section <ref>).
Then, building on this prior work, we provide two explicit algorithms based on the adaptive and sparsity-inducing methodology presented in Section <ref>.
§.§ Augmented mean-field variational inference in a fixed model
In our method, we leverage an existing latent variable augmentation strategy and Gaussian prior construction, which allow us to efficiently compute a mean-field variational posterior distribution on $\mathcal{F}_m \subset \mathcal{F}$, the parameter subspace within a model $m=(\delta, J_{lk}; (l,k) \in \mathcal I(\delta))$. The details of this construction are provided in Appendix <ref> and we recall that in this context, the sets of latent variables $\omega, \bar Z$ correspond respectively to marks at each point of the point process $N$ and to a marked Poisson point process on $[0,T] \times \R^+$.
Then, the augmented mean-field variational family (<ref>) approximating the augmented posterior distribution corresponds to
\mathcal{V}_{AMF} = \left \{ Q : \mathcal{F} \times \mathcal{O} \times \mathcal{Z} \to \R^+
; \: dQ(f, \omega, \bar Z) = dQ_1(f) dQ_2(\omega, \bar Z) \right \},
where $\mathcal{O}$ and $\mathcal{Z}$ denote the latent variable spaces. More precisely, in our method, we use the mean-field approach within a fixed model $m$, and therefore define the model-restricted mean-field variational class as
\mathcal{V}_{AMF}^m = \left \{ Q: \mathcal{F}_m \times \mathcal{O} \times \mathcal{Z} \to \R^+
; \: dQ(f, \omega, \bar Z) = dQ_1(f) dQ_2(\omega, \bar Z) \right \},
leading to the model-restricted variational posterior
$\hat Q_{AMF}^m(f, \omega , \bar Z) = \hat Q_1^m(f) \hat Q_2^m(\omega, \bar Z)$.
Then, we introduce a family of Gaussian prior distributions $\Pi_{h|\delta, J}(h)$ on $\mathcal{F}_m$ such that the factors of $\hat Q_{AMF}^m$, $\hat Q_1^m$ and $\hat Q_2^m$, are conjugate. This conjugacy leads to an iterative variational inference algorithm with closed-form updates, using (<ref>).
Let $|J| = \sum_k J_k$.
We define
\begin{align*}
\mathcal{H}_{e}^J = \left \{ h = (h_{lk})_{l,k} \in \mathcal{H}; \: h_{lk}(x) = \sum_{j=1}^{J_k} h^j_{lk} e_j(x), \: x \in [0,A], \: \underline{h}_{lk}^{J_k} = (h_{lk}^1, \dots, h_{lk}^{J_k}) \in \R^{J_k}, \: \forall (l,k) \in [K]^2 \right \}.
\end{align*}
Now, for each $(l,k)$, if $\delta_{lk} = 1$, we consider a normal prior distribution
on $\underline{h}_{lk}^{J_k}$, with mean vector $\mu_{J_k} \in \R^{J_k}$ and covariance matrix $\Sigma_{J_k} \in \R^{J_k \times J_k}$, i.e.,
$\underline{h}_{lk}^{J_k} \sim \mathcal{N}(\mu_{J_k}, \Sigma_{J_k})$, and if $\delta_{lk} = 0$, we set $\underline{h}_{lk}^{J_k} = \mathbf{0}_{J_k}$. We then denote $\mu_m = (\mu_k^m)_k$ with $\mu_k^m = (\delta_{lk} \mu_{J_k})_{l} \in \R^{K J_k}$ and $\Sigma_m = Diag((\Sigma_k^m)_k)$ with $\Sigma_k^m = Diag((\delta_{lk} \Sigma_{J_k})_{l}) \in \R^{KJ_k \times KJ_k}$. We also consider a normal prior on the background rates, i.e., $\nu_k \overset{i.i.d}{\sim} \mathcal{N}( \mu_\nu, \sigma_\nu^2)$ with hyperparameters $\mu_\nu, \sigma_\nu > 0$. We finally denote by $f_m := (f_k^m)_k \in \mathcal{F}_m$ where for each $k$,
$ f_k^m = (\nu_k, \underline{h}_{1k}^{J_k}, \dots, \underline{h}_{Kk}^{J_k}) \in \R^{KJ_k+1}$, and define $H(t) = (H^0(t), H^1(t), \dots, H^K(t)) \in \R^{|J| + 1}$, where $H^0(t) = 1$ and for $k \in [K]$, $H^k(t) = (H_j^k(t))_{j=1, \dots, J_k}$ with
\begin{align}\label{eq:notation_Hjk}
&H_j^k(t) := \int_{t-A}^t e_j(t-s)dN^{k}_s, \quad j \in [J_k].
\end{align}
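The statistics $H_j^k(t)$ are simply sums of basis-function evaluations at the recent event times of $N^k$. A sketch (our code, with a hypothetical two-bin histogram dictionary on $[0,A]$ playing the role of $(e_j)_j$):

```python
import numpy as np

def H_k(t, events_k, basis, A):
    """H_j^k(t) = sum of e_j(t - T_i^k) over event times T_i^k in (t - A, t],
    i.e. the counting integral in the display above."""
    recent = events_k[(events_k > t - A) & (events_k <= t)]
    return np.array([np.sum(e(t - recent)) for e in basis])

# Histogram dictionary with J_k = 2 bins of width A/2 on [0, A].
A = 1.0
basis = [lambda u: ((u >= 0.0) & (u < 0.5)).astype(float),
         lambda u: ((u >= 0.5) & (u <= 1.0)).astype(float)]
events = np.array([0.2, 0.7, 0.9])   # event times of component N^k
h = H_k(t=1.0, events_k=events, basis=basis, A=A)   # lags 0.8, 0.3, 0.1
```

With these lags, two events fall in the first bin and one in the second, so `h` equals `[2., 1.]`.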
Using similar computations as in [Donner and Opper, 2019, Zhou et al., 2021, Malem-Shinitski et al., 2021], we can derive analytic forms for $\hat Q_{1}^m$ and $\hat Q_{2}^m$. In particular, we have that $\hat Q_{1}^m(f_m) = \prod_{k} \hat Q_{1}^{m,k}(f_k^m)$, and for each $k$, $\hat Q_{1}^{m,k}(f_k^m)$ is a normal distribution with mean vector $ \tilde \mu_k^m \in \R^{KJ_k+1}$ and covariance matrix $\tilde{\Sigma}_k^m \in \R^{(KJ_k +1) \times (K J_k +1) }$ given by
\begin{align}
\tilde{\Sigma}_k^m &= \left[ \alpha^2 \sum_{i \in [N_k]} \mathbb{E}_{\hat Q_{2}^{m,k}}[\omega_i^k] H(T_i^k) H(T_i^k)^T + \alpha^2 \int_0^T \int_0^{+\infty} \bar \omega_t^k H(t) H(t)^T \Lambda^k(t,\bar \omega) d\bar \omega dt + (\Sigma_k^m)^{-1 } \right]^{-1}, \label{eq:update-sigma}\\
\tilde{\mu}_k^m &= \frac{1}{2 } \tilde{\Sigma}_k^m \left[ \alpha \sum_{i \in [N_k]} (2 \mathbb{E}_{\hat Q_{2}^{m,k}}[\omega_i^k] \alpha \eta + 1 ) H(T_i^k) + \alpha \int_{0}^T \int_0^{+\infty} \left(2 \bar \omega^k \alpha \eta - 1 \right) H(t) \Lambda^k(t,\bar \omega) d\bar \omega dt + 2(\Sigma_k^m)^{-1 } \mu_k^m \right], \label{eq:update-mu}
\end{align}
where $N_k := N^k[0,T]$ and
\begin{align*}
&\Lambda^k(t,\bar \omega) := \lsup_k \frac{\exp \left \{-\frac{1}{2}\mathbb{E}_{Q_{1}^{m,k}}[\Tilde{\lambda}^k_t(f_k)] \right \}}{2\cosh \frac{ c^k_{t}}{2}} p_{PG}(\bar \omega;1, c^k_{t}), \quad c^k_{t} := \sqrt{ \mathbb{E}_{Q_{1}^{m,k}}[\Tilde{\lambda}^k_t(f_k)^2]}.
\end{align*}
Besides, we also have that $\hat Q_{2}^m(\omega, \bar Z) = \hat Q_{21}^m(\omega) \hat Q_{22}^m (\bar Z)$ with
$ \hat Q_{21}^m(\omega) = \prod_k \prod_{i \in [N_k]} p_{PG}(\omega_i^k;1, c^k_{T_i^k})$
and $\hat Q_{22}^m = \prod_k \hat Q^{m,k}_{22}$ where for each $k$, $ \hat Q^{m,k}_{22}$ is the probability distribution of a marked Poisson point process on $[0,T] \times \R^+$ with intensity measure $ \Lambda^k(t,\bar \omega) $.
The full derivation of these formulas can be found in Appendix <ref>.
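In these updates, the only moments of the latent variables that are needed are Polya-Gamma expectations: a PG$(1,c)$ variable has mean $\tanh(c/2)/(2c)$ (the standard Polson–Scott identity), extended by continuity to $1/4$ at $c=0$. A minimal, numerically safe sketch (our code):

```python
import numpy as np

def pg_mean(c):
    """Mean of a Polya-Gamma PG(1, c) variable: tanh(c/2) / (2c),
    extended by continuity to 1/4 at c = 0."""
    c = np.atleast_1d(np.asarray(c, dtype=float))
    out = np.full(c.shape, 0.25)          # limiting value at c = 0
    nz = np.abs(c) >= 1e-8
    out[nz] = np.tanh(c[nz] / 2.0) / (2.0 * c[nz])
    return out
```

The mean is symmetric in $c$ and decreasing in $|c|$, so points with a large expected squared intensity receive small Polya-Gamma weights.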
From the previous expressions, we can compute $\hat Q_{2}^m$ given an estimate of $\hat Q_{1}^m$, and conversely. Therefore, to compute the model-restricted mean-field variational posterior $\hat Q^m$, we use an iterative algorithm that updates each factor alternately, summarised in Algorithm <ref>. We note that updating the mean vectors and covariance matrices requires computing an integral, which we perform with the Gaussian quadrature method [Golub and Welsch, 1969], whose number of points, denoted $n_{GQ}$, is a hyperparameter of our method. We finally recall that in this algorithm, each variational factor $\hat Q_k^m$ can be computed independently and depends only on the parameter $f_k$, and hence on the sub-model $m_k := (\delta_k, J_k)$.
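The Gaussian-quadrature step can be sketched as follows (our code; `gauss_legendre_on` is our naming): an $n_{GQ}$-point Gauss–Legendre rule on $[0,T]$ yields points $p_q$ and weights $v_q$ such that $\int_0^T g(t)\,dt \approx \sum_q v_q\, g(p_q)$, exactly for polynomials of degree at most $2 n_{GQ}-1$.

```python
import numpy as np

def gauss_legendre_on(a, b, n_gq):
    """Nodes p_q and weights v_q for Gauss-Legendre quadrature on [a, b],
    obtained by rescaling the standard rule on [-1, 1]."""
    x, w = np.polynomial.legendre.leggauss(n_gq)
    p = 0.5 * (b - a) * x + 0.5 * (b + a)
    v = 0.5 * (b - a) * w
    return p, v

# A 5-point rule integrates t^4 on [0, 2] exactly: int_0^2 t^4 dt = 32/5.
p, v = gauss_legendre_on(0.0, 2.0, n_gq=5)
approx = np.sum(v * p ** 4)
```

In the algorithm, the same precomputed $(p_q, v_q)$ are reused at every iteration, only the integrand values change.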
The number of iterations $n_{iter}$ in Algorithm <ref> is another hyperparameter of our method. In practice, we implement an early-stopping rule: we set a maximum number of iterations, e.g., 100, and stop the algorithm whenever the increase in the ELBO falls below a small tolerance, e.g., $10^{-3}$, indicating that the algorithm has converged.
Similarly to [Zhou et al., 2021, Malem-Shinitski et al., 2021], we can also derive analytic forms of the conditional distributions of the augmented posterior (<ref>). Therefore, the latter could be computed via a Gibbs sampler, provided in Algorithm <ref> in Appendix <ref>. However, this Gibbs sampler requires sampling the latent variables, in particular a $K$-dimensional inhomogeneous Poisson point process, and is therefore computationally much slower than its variational counterpart, which only requires computing expectations with respect to the latent variables' distribution.
Mean-field variational inference algorithm in a fixed model
Input: $N = (N^1, \dots, N^K)$, $m = (\delta, J), \: J = (J_1, \dots, J_K)$, $\mu_m = (\mu_k^m)_k, \Sigma_m = (\Sigma_k^m)_k$, $n_{iter}$, $n_{GQ}$.
Output: $\Tilde \mu_m = (\Tilde \mu_k^m)_k, \Tilde \Sigma_m = (\Tilde \Sigma_k^m)_k$.
Precompute $(H(T_i^k))_{i, k}$.
Precompute $(p_q, v_q)_{q \in [n_{GQ}]}$ (points and weights for Gaussian quadrature) and $(H(p_q))_{q \in [n_{GQ}]}$.
In parallel, for each $k = 1, \dots, K$:
  Initialisation: $\tilde{\mu}_k^m \gets \mu_k^m$, $\tilde{\Sigma}_k^m \gets \Sigma_k^m$.
  For $t \gets 1$ to $n_{iter}$:
    For $i \gets 1$ to $N_k$:
      $\mathbb{E}_{\hat Q_{1}^{m,k}}[\tilde{\lambda}^k_{T_i^k}(f_k^m)^2] = \alpha^2 \left( H(T_i^k)^T \Tilde \Sigma_k^m H(T_i^k) + (H(T_i^k)^T \Tilde \mu_k^m)^2 - 2 \eta H(T_i^k)^T \Tilde \mu_k^m + \eta^2 \right)$
      $\mathbb{E}_{\hat Q_{2}^{m,k}}[\omega_i^k] = \tanh\big(c^k_{T_i^k}/2\big) \big/ \big(2 c^k_{T_i^k}\big)$, with $c^k_{T_i^k} = \sqrt{\mathbb{E}_{\hat Q_{1}^{m,k}}[\tilde{\lambda}^k_{T_i^k}(f_k^m)^2]}$
    For $q \gets 1$ to $n_{GQ}$:
      $\mathbb{E}_{\hat Q_{1}^{m,k}}[\tilde{\lambda}_{p_q}^k ( f_k^m)^2] = \alpha^2 \left( H(p_q)^T \Tilde \Sigma_k^m H(p_q) + (H(p_q)^T \Tilde \mu_k^m)^2 - 2 \eta H(p_q)^T \Tilde \mu_k^m + \eta^2 \right)$
      $\mathbb{E}_{\hat Q_{2}^{m,k}}[\omega_q^k] = \tanh\big(c^k_{p_q}/2\big) \big/ \big(2 c^k_{p_q}\big)$, with $c^k_{p_q} = \sqrt{\mathbb{E}_{\hat Q_{1}^{m,k}}[\tilde{\lambda}^k_{p_q}(f_k^m)^2]}$
      $\mathbb{E}_{\hat Q_{1}^{m,k}}[\tilde{\lambda}_{p_q}^k ( f_k^m) ] = \alpha \left((\tilde{\mu}_k^m)^T H(p_q) -\eta\right)$
    Update $\tilde{\Sigma}_k^m$ and $\tilde{\mu}_k^m$ using (<ref>) and (<ref>).
§.§ Adaptive variational algorithms
Using Algorithm <ref> for computing a model-restricted mean-field variational posterior, we now leverage the model-selection and two-step approach from Section <ref> to design two adaptive variational Bayes algorithms. The first one, denoted fully-adaptive, is only based on the model-selection strategy from Section <ref> and is suitable for low-dimensional settings. The second one, denoted two-step adaptive, relies on a partial model-selection strategy and the two-step approach from Section <ref>, and is more efficient for moderately large to large dimensions of the point process.
§.§.§ Fully-adaptive variational algorithm
From now on, we assume that the number of functions $(e_j)$ in the dictionary is bounded by $J_T \in \mathbb{N}$. We then define the set of models
\begin{align}\label{eq:set-models-bounded}
\mathcal{M}_T = \big\{m = (\delta, J=(J_k)_k); \: \delta \in \{0,1\}^{K \times K}, \: 1 \leq J_k \leq J_T, \: k\in [K] \big\}.
\end{align}
We can easily see that in this case $|\mathcal{M}_T | = 2^{K^2} J_T^K$, and that
for any $m = (\delta, J) \in \mathcal{M}_T$, the number of parameters in $m$ is equal to $\sum_{l,k} \delta_{lk}(J_k+1) + 1$. Therefore, we recall that exploring all models in $ \mathcal{M}_T $ is computationally feasible only in low-dimensional settings, e.g., $K \leq 3$.
We also recall our notation $m = (m_k)_k$ with $m_k = (\delta_{\cdot k}, J_k), \forall k$.
Let $\Pi_{m}$ be a prior distribution on $\mathcal{M}_T$ of the form
\begin{align*}
\Pi_{m}(m) = \prod_k \Pi_{m}(m_k) = \prod_k \Pi_{k,\delta}(\delta_{\cdot k}) \Pi_{k,J}(J_k).
\end{align*}
For instance, one can choose $\Pi_{k,\delta}$ as a product of Bernoulli distributions with parameter $p \in (0,1)$ and $\Pi_{k,J}$ as the uniform distribution over $[J_T]$.
Using Algorithm <ref>, for each $m = (m_k)_k$, we compute $\hat Q_k^m $ together with the corresponding $ELBO(\hat Q_k^m )$ for each $k$.
We note that the computations for the different models can be performed independently, and therefore parallelised to further accelerate posterior inference.
Then, we recall that the model-selection adaptive variational approach consists in either selecting $\hat m$ which maximises the ELBO over $m \in \mathcal{M}_T$ (see (<ref>)) or in averaging over the different models $m$ (see (<ref>)). In the first case,
with $\hat m_k = \arg \max_{m_k} ELBO(\hat Q_k^m)$, the VB posterior is $\hat Q_{MS}=\otimes_{k=1}^{K}\hat Q_k^{\hat m_k} $.
In the second case, the model-averaging adaptive variational posterior is given by
\begin{align}
&\hat Q_{AV} = \otimes_{k=1}^K \hat Q_k^{AV}, \quad \hat Q^{AV}_k = \sum_{ m_k } \hat \gamma_k^{m} \hat Q_k^{m},\quad \hat \gamma_k^{m} = \frac{\Tilde \gamma_k^{m}}{\sum_m \Tilde \gamma_k^{m}} \nonumber \\
&\Tilde \gamma_k^{m} = \Pi_{k,\delta}(\delta_{\cdot k}) \Pi_{k,J}(J_k) \exp \left \{ ELBO(\hat Q_k^{m}) \right \}. \label{eq:vpost_hat}
\end{align}
We call this procedure (exploring all models in $\mathcal{M}_T$) the fully-adaptive mean-field variational inference algorithm, and summarise its steps in Algorithm <ref>. In the next section, we propose a faster algorithm that avoids the exploration of all models in $\mathcal{M}_T$.
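When the model-averaging posterior $\hat Q_{AV}$ is used, one may need draws from the resulting mixture; a minimal sketch (our code, with hypothetical Gaussian factors of different dimensionalities standing in for the $\hat Q_k^{m}$):

```python
import numpy as np

def sample_mixture(gamma, means, covs, n_samples, rng):
    """Draw from a model-averaged posterior: pick a model m with probability
    gamma_m, then sample from that model's Gaussian factor. Models of
    different dimensionality are sampled in their own parameter space."""
    models = rng.choice(len(gamma), size=n_samples, p=gamma)
    return [rng.multivariate_normal(means[m], covs[m]) for m in models]

rng = np.random.default_rng(0)
gamma = [0.3, 0.7]                       # model weights (sum to 1)
means = [np.zeros(1), np.ones(2)]        # a 1-d model and a 2-d model
covs = [np.eye(1), 0.1 * np.eye(2)]
draws = sample_mixture(gamma, means, covs, n_samples=200, rng=rng)
```

Keeping the draws in each model's own space avoids padding parameters that do not exist under a given model.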
Fully-adaptive mean-field variational inference
Input: $N = (N^1, \dots, N^K)$, $\mathcal{M}_T$, $\mu = (\mu_m)_{m \in \mathcal{M}_T}, \Sigma = (\Sigma_m)_{m \in \mathcal{M}_T}$, $n_{iter}$, $n_{GQ}$.
Output: $\hat Q_{AV}$ or $\hat Q_{MS}$.
In parallel, for each $m = (\delta, J) \in \mathcal{M}_T$:
  Compute the variational posterior $\hat Q_m$ using Algorithm <ref> with $\mu_m, \Sigma_m$, $n_{iter}$ and $n_{GQ}$ as hyperparameters.
  Compute $(ELBO(\hat Q_k^m))_k$ and $(\Tilde{\gamma}_{k}^m)_k$ using (<ref>).
Compute $\{\hat{\gamma}_{m}\}_{m \in \mathcal{M}_T}$ and $\hat Q_{AV}$ or $\hat Q_{MS}$.
§.§.§ Two-step adaptive mean-field algorithm
As discussed in the above section, for moderately large values of $K$, the model-averaging or model-selection procedures in Algorithm <ref> become prohibitive. In this case, we instead use the two-step approach introduced in Section <ref>.
We recall that this strategy corresponds to starting with a maximal graph $\delta_C$, typically the complete graph $\delta_C = \mathds{1}\mathds{1}^T$, and considering the set of models
\mathcal{M}_C = \big \{m = (\delta_C, J=(J_k)_k); \: 1\leq J_k \leq J_T, \: k\in [K] \big \},
where here as well we assume that the number of functions in the dictionary is bounded by $J_T$. Then, after computing a graph estimator $\hat \delta$, we consider the second set of models $\mathcal{M}_E = \big \{m = (\hat \delta, J=(J_k)_k); \: 1\leq J_k \leq J_T, \: k\in [K] \big \}$. We note that both $ \mathcal{M}_C$ and $ \mathcal{M}_E$ contain only $J_T$ candidate models per dimension, i.e., of the order of $K J_T$ model computations in total. Therefore, as soon as the computation for each model is fast and $J_T$ is not too large, optimisation over these two sets is feasible, even for large values of $K$.
In the first step of our fast algorithm, we compute the model-selection adaptive VB posterior $\hat Q_{MS}^{C}$ using Algorithm <ref>, replacing $\mathcal{M}_T$ by $\mathcal{M}_C$. Then, we use $\hat Q_{MS}^{C}$ to estimate the norms $(\norm{h_{lk}}_1)_{l,k}$ and the graph parameter, with the thresholding method described in Section <ref>:
(a) denoting $\hat J_C = (J_{k,C})_k$ the selected dimensionality in $\hat Q_{MS}^{C}$, we compute our estimates of the norm $\hat S_{lk} = \mathbb{E}_{\hat Q_{MS}^{C}}[\norm{h_{lk}}_1], \: \forall (l,k)$, and define $\hat S= (\hat S_{lk})_{l,k} \in \R_+^{K\times K}$;
(b) we order our estimates $\hat S_{(1)} < \dots < \hat S_{(K^2)}$ and choose a threshold $\eta_0$ in the first significant gap between $\hat S_{(i)}$ and $\hat S_{(i+1)}$, $i \in [K^2]$;
(c) we compute the graph estimator $ \hat \delta = (\hat \delta_{lk})_{l,k}$ defined for any $k$ and $l$ by
\hat \delta_{lk} = \mathds{1}_{\{\hat S_{lk} > \eta_0\}}.
In the second step, we compute the adaptive model-selection VB posterior $\hat Q_{MS}$ or model-averaging VB posterior $\hat Q_{AV}$ using Algorithm <ref>, replacing $\mathcal{M}_T$ by $\mathcal{M}_E$.
This procedure is summarised in Algorithm <ref>. In the next section, we provide theoretical guarantees for general variational Bayes approaches, and apply them to our adaptive and mean-field algorithms.
Two-step adaptive mean-field variational inference
Input: $N = (N^1, \dots, N^K)$, $\mathcal{M}_C$, $\mathcal{M}_E$, $\mu = (\mu_m)_m, \Sigma = (\Sigma_m)_m$, $n_{iter}$, $n_{GQ}$.
Output: $\hat Q_{MS}$ or $\hat Q_{AV}$.
Compute $\hat Q_{MS}^C$ using Algorithm <ref> with input set $ \mathcal{M}_C$ and hyperparameters $\mu = (\mu_m)_m, \Sigma = (\Sigma_m)_m$, $n_{iter}$, $n_{GQ}$.
Compute $\hat \delta$ by thresholding the estimates $\hat S$.
Compute $\hat Q_{MS}$ or $\hat Q_{AV}$ using Algorithm <ref> with input set $\mathcal{M}_E$ and the same hyperparameters.
§ THEORETICAL PROPERTIES OF THE VARIATIONAL POSTERIORS
This section contains general results on variational Bayes methods for estimating the parameter of Hawkes processes, and theoretical guarantees for our adaptive and mean-field approaches proposed in Section <ref> and Section <ref>. In particular, we derive the concentration rates of variational Bayes posterior distributions, under general conditions on the model, the prior distribution, and the variational family. Then, we apply our general result to variational methods of practical interest, in particular our model-selection adaptive and mean-field methods.
We recall that in our problem setting, the link functions $\phi := (\phi_k)_k$ in the nonlinear intensity (<ref>) are fixed by the statistician and therefore known a priori. Throughout this section, we assume that these functions are monotone non-decreasing and $L$-Lipschitz, $L > 0$, and that one of the two following conditions is satisfied:
(C1) For a parameter $f = (\nu, h)$, the matrix defined by $S^+ = (S^+_{lk})_{l,k} \in \R_+^{K \times K}$ with $S_{lk}^+ = L \norm{h_{lk}^+}_1, \forall l,k$, satisfies $\norm{S^+} < 1$;
(C2) For any $k \in [K]$, the link function $\phi_k$ is bounded, i.e., $\exists \Lambda_k > 0, \forall x \in \R$, $0 \leq \phi_k(x) \leq \Lambda_k$.
These conditions are sufficient to prove that the Hawkes process is stationary (see for instance [Bremaud and Massoulie, 1996], [Deutsch and Ross, 2022], or [Sulem et al., 2021]).
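Condition (C1) is straightforward to check numerically for a candidate parameter; a minimal sketch (our code and naming, with $L=1$ for illustration):

```python
import numpy as np

def satisfies_C1(h_plus_l1_norms, L=1.0):
    """Check condition (C1): with S+_{lk} = L * ||h_{lk}^+||_1, the spectral
    norm of the matrix S+ must be strictly less than 1."""
    S_plus = L * np.asarray(h_plus_l1_norms, dtype=float)
    return bool(np.linalg.norm(S_plus, ord=2) < 1.0)

# Weak mutual excitation satisfies (C1); a self-excitation norm above 1 does not.
weak = np.array([[0.4, 0.3], [0.2, 0.4]])
strong = np.array([[1.2, 0.0], [0.0, 0.5]])
```

Here `ord=2` computes the spectral norm (largest singular value), matching the matrix norm $\norm{S^+}$ in (C1).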
§.§ Variational posterior concentration rates
To establish our general concentration result on the VB posterior distribution, we need to introduce the following assumption, also used to prove the concentration of the posterior distribution (<ref>) in the nonlinear Hawkes model
in [Sulem et al., 2021].
For a parameter $f$, we assume that
there exists $\varepsilon > 0$ such that for each $k \in [K]$, the link function $\phi_k$ restricted to $I_k = (\nu_k - \max \limits_{l \in [K]} \norm{h_{lk}^{-}}_\infty - \varepsilon, \nu_k + \max \limits_{l \in [K]} \norm{h_{lk}^{+}}_\infty + \varepsilon)$ is bijective from $I_k$ to $J_k = \phi_k(I_k)$ and its inverse is $ L'$- Lipschitz on $J_k$, with $L' > 0$. We also assume that at least one of the two following conditions is satisfied.
(i) For any $k \in [K]$, $ \inf \limits_{x \in \R} \phi_k(x) >0$.
(ii) For any $k \in [K]$, $\phi_k > 0$, and $\sqrt{\phi_k}$ and $\log \phi_k$ are $L_1$-Lipschitz with $L_1 > 0$ .
In [Sulem et al., 2021], Assumption <ref> is used to obtain general posterior concentration rates, and is verified for commonly used link functions (see Example 1 in [Sulem et al., 2021]). In particular, it holds for sigmoid-type link functions, such as the ones considered in Section <ref>, when the parameter space is bounded (see below).
We now define our parameter space $\mathcal{F}$ as follows:
\begin{align*}
&\mathcal{H}' = \left \{ h: [0,A] \to \R; \: \|h \|_\infty < \infty \right \}, \quad \mathcal{H} = \left \{ h = (h_{lk})_{l,k=1}^K \in \mathcal{H}'^{K^2}; \: (h, \phi) \text{ satisfy \textbf{(C1)} or \textbf{(C2)} } \right \}, \\
&\mathcal{F} = \left \{ f = (\nu, h) \in (\R_+\backslash\{0\})^K \times \mathcal{H}; \: (f, \phi) \text{ satisfies Assumption ~\ref{ass-psi} } \right \}.
\end{align*}
We also define the $L_1$-distance for any $f,f' \in \mathcal{F}$ as
\begin{align*}
&\|f-f'\|_1 := \norm{\nu - \nu'}_1 + \norm{h -h'}_1, \quad \norm{h -h'}_1 := \sum_{l,k=1}^K \norm{h_{lk} -h_{lk}'}_1, \quad \norm{\nu - \nu'}_1 := \sum_{k} |\nu_{k} -\nu_{k}'|.
\end{align*}
In particular, for the sigmoid function $\phi_k(x) = \theta_k \sigma( \alpha(x - \eta))$, we can choose $\mathcal{F} = \left \{ f = (\nu, h) \in [0,B]^K \times \mathcal{H} \right \}$, with $B > 0$.
Moreover, we introduce
\begin{align*}
&B_\infty(\epsilon) = \left \{f \in \mathcal{F}; \: \nu_k^0 \leq \nu_k \leq \nu_k^0 + \epsilon, \, h_{lk}^0 \leq h_{lk} \leq h_{lk}^0 + \epsilon, \: (l,k) \in [K]^2 \right \}, \quad \epsilon > 0,
\end{align*}
a neighbourhood around $f_0$ in supremum norm, and a sequence $(\kappa_T)_T$ defined as
\begin{align} \label{kappaT}
\kappa_T := 10 (\log T)^r,
\end{align}
with $r=0$ if $(\phi_k)_k$ satisfies Assumption <ref> (i), and $r=1$ if $(\phi_k)_k$ satisfies Assumption <ref> (ii).
We can now state our general theorem.
Let $N$ be a Hawkes process with link functions $\phi = (\phi_k)_k$ and parameter $f_0 = (\nu_0, h_0)$ such that $(\phi, f_0)$ satisfy Assumption <ref> and (C1) or (C2). Let $\epsilon_T = o(1/\sqrt{\kappa_T})$ be a positive sequence verifying $\log^3 T=O(T \epsilon_T^2)$, $\Pi$ a prior distribution on $\mathcal{F}$ and $\mathcal{V}$ a variational family of distributions on $\mathcal{F}$. We assume that the following conditions are satisfied for $T$ large enough.
(A0) There exists $c_1 > 0$ such that $\Pi(B_\infty(\epsilon_T)) \geq e^{-c_1T\epsilon_T^2}.$
(A1) There exist $\mathcal{H}_T \subset \mathcal{H}$, $\zeta_0 > 0$, and $x_0 > 0$ such that
$$\Pi(\mathcal{H}_T^{c}) = o(e^{- (\kappa_T +c_1) T\epsilon_T^2})\quad\mbox{and}\quad\log \mathcal{N}\left(\zeta_0 \epsilon_T, \mathcal{H}_T, ||\cdot||_1\right) \leq x_0 T \epsilon_T^2.$$
(A2) There exists $Q \in \mathcal{V}$ such that $\mathrm{supp}(Q) \subset B_\infty(\epsilon_T)$ and $KL(Q||\Pi) = O( \kappa_T T \epsilon_T^2).$
Then, for any $M_T \to \infty$ and $\hat Q$ defined in (<ref>), we have that
\begin{align*}
\Exz{ \hat Q \left( \norm{f - f_0}_1 > M_T \sqrt{\kappa_T} \epsilon_T \right) } \xrightarrow[T\to \infty]{} 0.
\end{align*}
The proof of Theorem <ref> is reported in Appendix <ref> and leverages existing theory on posterior concentration rates. We now make a few remarks on these results.
Firstly, similarly to [Donnet et al., 2020, Sulem et al., 2021], Theorem <ref> also holds when the neighborhoods $B_\infty(\epsilon_T)$ around $f_0$ in supremum norm, considered in Assumptions (A0) and (A2), are replaced by neighborhoods in $L_2$-norm, defined as
\begin{align*}
B_2(\epsilon_T, B) = \left \{f \in \mathcal{F}; \: \max_k |\nu_k - \nu_k^0| \leq \epsilon_T, \: \max_{l,k} \|h_{lk} - h_{lk}^0\|_2 \leq \epsilon_T, \: \max_{l} \nu_l + \max_k \|h_{kl}\|_\infty < B \right \},
\end{align*}
with $B>0$, and when $\kappa_T$ is replaced by $\kappa_T' = 10 (\log \log T) (\log T)^r$.
Secondly, Theorem <ref> also holds under the more general condition on the variational family:
$\quad$ (A2') The variational family $\mathcal{V}$ verifies $ \min_{Q \in \mathcal{V}} KL(Q||\Pi(.|N)) = O(\kappa_T T \epsilon_T^2).$
However, in practice, one often verifies (A2) and deduces (A2') using the following steps from [Zhang and Gao, 2020]. For any $Q \in \mathcal{V}$, we have that
\begin{align*}
KL(Q||\Pi(.|N)) \leq KL(Q||\Pi) + Q\left(KL(\mathbb{P}_{T,f_0}, \mathbb{P}_{T,f})\right),
\end{align*}
where $\mathbb{P}_{T,f_0}$ and $\mathbb{P}_{T,f}$ denote the distributions with densities $e^{L_T(f_0)}$ and $e^{L_T(f)}$, respectively.
Using Lemma S6.1 from [Sulem et al., 2021], for any $f \in B_\infty(\epsilon_T)$,
we also have that
\begin{align*}
\Exz{L_T(f_0) - L_T(f)} \leq \kappa_T T \epsilon_T^2.
\end{align*}
Therefore, under (A2), there exists $Q \in \mathcal{V}$ such that
$KL(Q||\Pi(.|N)) = O(\kappa_T T \epsilon_T^2)$,
which implies (A2'). Besides, (A2) (or (A2')) is the only condition on the variational class, and it informally states that this family of distributions can approximate the true posterior conveniently. Nonetheless, under (A2), we may still have $\min_{Q \in \mathcal{V}} KL(Q||\Pi(.|N)) \xrightarrow[T\to \infty]{} \infty$, as observed by [Nieman et al., 2021].
Finally, Assumptions (A0) and (A1) are similar to the ones of Theorem 3.2 in [Sulem et al., 2021]. They are sufficient conditions for proving that the posterior concentration rate is at least as fast as $\sqrt{\kappa_T} \epsilon_T$.
§.§ Applications to variational classes and prior families of interest
In this section, we apply the previous result to variational inference methods of interest in nonlinear Hawkes models, in particular the mean-field and model-selection variational families introduced in Section <ref> and used in our algorithms. We also verify our general conditions on the prior distribution for two common examples of nonparametric prior families, namely random histograms and Gaussian processes (see for instance [Donnet et al., 2020, Malem-Shinitski et al., 2021]).
We then obtain explicit concentration rates for the variational posterior distribution and for Hölder classes of functions.
First, we re-write our hierarchical spike-and-slab prior distribution from Section <ref> as
\begin{align}\label{eq:prior-distribution-rewritten}
d\Pi(f) = d\Pi_\nu(\nu) d\Pi_\delta(\delta) d \Pi_{h|\delta}(h), \quad d \Pi_{h|\delta}(h) = \prod_{l,k} d \Tilde \Pi_{h|\delta}(h_{lk})
\end{align}
and recall that from [Sulem et al., 2021], we know that Assumption (A0) of Theorem <ref> can be replaced by
(A0') There exists $c_1 > 0$ such that $\Pi( B_\infty(\epsilon_T) | \delta= \delta_0) \geq e^{ -c_1 T\epsilon_T^2/2} $ and $ \Pi_\delta ( \delta = \delta_0) \geq e^{ -c_1 T\epsilon_T^2/2}$.
Furthermore, one can choose for instance $\Pi_\delta = \mathcal B(p)^{K^2}$ with $p \in (0,1)$, implying that the $\delta_{lk} $'s are i.i.d. Bernoulli random variables. Then, for any fixed $p$, one only needs to verify $ \Pi_{h|\delta}( B_\infty(\epsilon_T)|\delta = \delta_0) \geq e^{ -c_1 T\epsilon_T^2/2} $.
§.§.§ Mean-field variational family
Here, we consider the mean-field variational inference method with general latent variable augmentation as described in Section <ref>.
We recall that for some latent variable $z \in \mathcal{Z}$, the mean-field family $\mathcal{V}_{AMF}$ for approximating the augmented posterior $\Pi_A(. |N)$ is defined as
\begin{align*}
\mathcal{V}_{AMF} = \left \{Q: \mathcal{F} \times \mathcal{Z} \to [0,1] ; \: Q(f, z) = Q_1(f)Q_2(z) \right \},
\end{align*}
and the corresponding mean-field variational posterior is $\hat Q_{AMF} = \arg \min_{Q \in \mathcal{V}_{AMF}} KL(Q|| \Pi_A(. |N))$. We also recall our notation $ \mathbb{P}_{A}$, for the prior distribution on the latent variable.
We note that here the augmented prior distribution satisfies $\Pi \times \mathbb{P}_{A} \in \mathcal{V}_{AMF}$; therefore, assumption (A2) is equivalent to the prior mass condition (see for instance [Zhang and Gao, 2020]), and we only need to verify assumptions (A0') and (A1). These assumptions are the same as in [Sulem et al., 2021] and can therefore be checked for any prior family discussed there. In particular, priors on the $h_{lk}$'s based on decompositions over dictionaries as in (<ref>) have been studied in [Arbel et al., 2013] and [Shen and Ghosal, 2015], and their results can be used to prove assumptions (A0') and (A1). Below, we apply Theorem <ref> to two examples: random histogram priors and hierarchical Gaussian process priors.
Random histogram prior
We consider a random histogram prior for $\Pi_{h|\delta}(h)$, using a similar construction as in Section <ref>.
This prior family is notably used in [Donnet et al., 2020, Sulem et al., 2021], and is similar to the basis decomposition prior in [Zhou et al., 2021]. For simplicity, we assume here that $J=J_1=\dots=J_K$ and consider a regular partition of $(0,A]$ based on
$(t_j)_{j=0,\dots,J}$ with $t_j = j A/J$ and $J \geq 1$, and define piecewise-constant interaction functions as
$$ h_{lk}^{w}(x) = \sum_{j=1}^{J} w_{lk}^j e_j(x), \quad e_j(x) =\frac{ J }{A}\mathds{1}_{(t_{j-1}, t_j]}(x), \quad w_{lk}^j \in \R, \quad \forall j \in [J], \: \forall l,k \in [K]. $$
Note that $\|e_j\|_2=\sqrt{J/A}$ but $\|e_j\|_1 = 1$ for all $j \in [J]$; the dictionary functions $(e_j)_j$ are therefore normalized with respect to the $L_1$-norm.
In this general construction, we also consider a prior on the number of pieces $J$ with exponential tails: for instance, $J \sim \mathcal P(\lambda)$ with $ \lambda > 0 $, or $J= 2^D$ with $D$ defined by $2^D \leq J_D < 2^{D+1}$ and $J_D\sim \mathcal P(\lambda)$. Finally, given $J$, we consider a normal prior distribution on the weight vectors $w_{lk} = (w_{lk}^j)_{j \in [J]}$, i.e.,
$$w_{lk} | J \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0_J,K_J), \quad K_J = \sigma_0^2 I_J, \quad \sigma_0 > 0.$$
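A draw from this prior, and the evaluation of the resulting histogram function, can be sketched as follows; the values of $\lambda$ and $\sigma_0$ below are illustrative placeholders, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_histogram_prior(K=2, lam=4.0, sigma0=1.0, rng=rng):
    """One draw of (J, w) from the random histogram prior:
    J ~ Poisson(lam) truncated to J >= 1, then w_lk | J ~ N(0, sigma0^2 I_J),
    independently over the pairs (l, k)."""
    J = 0
    while J < 1:                                  # truncate the Poisson to J >= 1
        J = int(rng.poisson(lam))
    w = rng.normal(0.0, sigma0, size=(K, K, J))
    return J, w

def h_w(x, w_lk, A):
    """Evaluate h_lk^w(x) = sum_j w_lk^j e_j(x), with
    e_j = (J/A) 1_{(t_{j-1}, t_j]} and t_j = j A / J."""
    J = len(w_lk)
    j = np.clip(np.ceil(np.asarray(x, dtype=float) * J / A).astype(int), 1, J)
    return w_lk[j - 1] * J / A
```

Since the bins are disjoint and $\|e_j\|_1 = 1$, the integral of $h_{lk}^w$ over bin $j$ is exactly $w_{lk}^j$.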
With this prior construction, assumptions (A0') and (A1) are easily checked. For instance, this Gaussian random histogram prior is a particular case of the spline prior family in [Sulem et al., 2021], with a spline basis of order $q=0$. We note that these conditions are also easily verified for other prior distributions on the weights, for instance the shrinkage prior of [Zhou et al., 2021] based on the Laplace distribution $p_{Lap}(w_{lk}^j ;0,b) = (2b)^{-1} \exp \{ - |w_{lk}^j |/b \}$ with $ b> 0$, and a ``local'' spike-and-slab prior inspired by the construction in [Donnet et al., 2020, Sulem et al., 2021]:
\begin{align*}
w_{lk}^j | J \overset{\mathrm{i.i.d.}}{\sim} p \delta_{(0)} + (1 - p) p_{Lap}(. ;0,b), \quad p \in (0,1), \quad b>0,
\end{align*}
where $\delta_{(0)} $ is the Dirac measure at 0.
In the following proposition, we further assume that the true functions in $h_0$ belong to a Hölder-smooth class of functions $\mathcal{H}(\beta, L_0)$ with $\beta \in (0,1)$, so that explicit variational posterior concentration rates $\epsilon_T$ for the mean-field family and the random histogram prior can be derived.
Let $N$ be a Hawkes process with link functions $\phi = (\phi_k)_k$ and parameter $f_0 = (\nu_0, h_0) $ such that $(\phi,f_0)$ verify Assumption <ref>. Assume that for any $l,k \in [K]$, $h_{lk}^0 \in \mathcal{H}(\beta, L_0)$ with $\beta \in (0,1)$ and $L_0 > 0$. Then, under the above Gaussian random histogram prior, the mean-field variational distribution $\hat Q_1$ defined in (<ref>) satisfies, for any $M_T \to +\infty$,
\begin{align*}
\Exz{\hat Q_1 \left( \norm{f - f_0}_1 > M_T(\log T)^q (T / \log T)^{-\beta / (2\beta + 1)} \right)} \xrightarrow[T\to \infty]{} 0,
\end{align*}
with $q = 0$ if $\phi$ verifies Assumption <ref>(i) and $q = 1/2$ if $\phi$ verifies Assumption <ref>(ii).
The proof of Proposition <ref> is omitted since it is a direct application of Theorem <ref> to mean-field variational families in the context of a latent variable augmentation scheme. We note that the variational concentration rates also match the true posterior concentration rates (see [Sulem et al., 2021]).
Gaussian process prior
We now consider a prior family $\Pi_{h|\delta}$ based on Gaussian processes, which is commonly used for nonparametric estimation of Hawkes processes (see for instance [Zhang et al., 2020, Zhou et al., 2020, Malem-Shinitski et al., 2021]).
We define a centered Gaussian process distribution with
covariance function $k_{GP}$ as the prior distribution $\Tilde \Pi_{h|\delta}$ on each $h_{lk}$ such that $\delta_{lk} = 1$, $l,k \in [K]$, i.e., for any $n \geq 1$ and $x_1,\dots, x_n \in [0,A]$, we have
\begin{align*}
(h_{lk}(x_i))_{i=1,\dots, n} \sim \mathcal{N}\left(0_{n}, (k_{GP}(x_i,x_j))_{i,j=1,\dots, n}\right).
\end{align*}
We then verify assumptions (A0') and (A1) based on the $L_2$-neighborhoods (see comment after Theorem <ref>), i.e., we check that there exist $ \mathcal{H}_T \subset \mathcal{H}$ and $c_1, x_0, \zeta_0 > 0$, such that
\begin{align*}
&\Pi(\mathcal{H}_T^c) \leq e^{-(\kappa_T + c_1) T\epsilon_T^2}, \quad \log \mathcal{N}(\zeta_0 \epsilon_T, \mathcal{H}_T, \|.\|_1) \leq x_0 T \epsilon_T^2,\quad \Pi(B_2(\epsilon_T,B)) \geq e^{-c_1 T\epsilon_T^2}.
\end{align*}
It is therefore enough to find $ \mathcal{B}_T \subset L_2([0,A])$ such that
\begin{align*}
&\Tilde \Pi_h(\mathcal{B}_T^c) \leq e^{-(\kappa_T + c_1) T\epsilon_T^2}, \quad \log \mathcal{N}(\zeta_0 \epsilon_T, \mathcal{B}_T, \|.\|_1) \leq \frac{x_0 T \epsilon_T^2 }{K^2}, \quad \Tilde \Pi_h(\norm{h_{lk} - h_{lk}^0}_2 < \epsilon_T) \geq e^{-c_2 T\epsilon_T^2} / K^2,
\end{align*}
and define $\mathcal{H}_T = \mathcal{B}_T^{\otimes K^2}$, since $\Pi(\mathcal{H}_T^c) \leq K^2 \Tilde \Pi(\mathcal{B}_T^c)$ and, for all $\zeta>0$, there exists $\zeta_2 >0$ (independent of $T$) such that
\begin{align*}
\log \mathcal{N}(\zeta \epsilon_T, \mathcal{H}_T, \|.\|_1) \leq K^2\log \mathcal{N}(\zeta_2 \epsilon_T, \mathcal{B}_T, \|.\|_1), \quad \Pi(B_2(\epsilon_T, B)) \geq \prod_{l,k} \Tilde \Pi_h\left(\norm{h_{lk} - h_{lk}^0}_2 < \epsilon_T\right).
\end{align*}
These conditions are easily deduced from Theorem 2.1 in [van der Vaart and van Zanten, 2009] that we recall here. Let $\mathbb{H}$ be the Reproducing Kernel Hilbert Space of $k_{GP}$
and $\phi_{h_0}(\e)$ be the concentration function associated to $\Tilde \Pi_{h|\delta}$ defined as
\begin{align*}
\phi_{h_0}(\e) = \inf_{h \in \mathbb{H}: \: \norm{h - h_{lk}^0}_2 \leq \e} \norm{h}_{\mathbb{H}}^2 - \log \Tilde \Pi(\norm{h_{lk}}_2 \leq \e), \quad \e > 0.
\end{align*}
For any $\epsilon_T > 0$ such that $\phi_{h_0}(\epsilon_T) \leq T \epsilon_T^2$, there exists $ \mathcal{B}_T \subset L_2([0,A])$ satisfying
\begin{align*}
&\Tilde \Pi_h(\mathcal{B}_T^c) \leq e^{-C T\epsilon_T^2}, \quad \log \mathcal{N}(3 \epsilon_T, \mathcal{B}_T, \|.\|_2) \leq 6C T \epsilon_T^2, \quad \Tilde \Pi_h(\norm{h_{lk} - h_{lk}^0}_\infty < 2\epsilon_T) \geq e^{- T\epsilon_T^2},
\end{align*}
for any $C > 1$ such that $e^{-CT \epsilon_T^2}<1/2$. Since $\norm{h_{lk}}_1 \leq \sqrt{A} \norm{h_{lk}}_2$, we then obtain that
\begin{align*}
&\log \mathcal{N}(3 \sqrt{A} \epsilon_T, \mathcal{B}_T, \|.\|_1) \leq \log \mathcal{N}(3 \epsilon_T, \mathcal{B}_T, \|.\|_2) \leq 6C T \epsilon_T^2,
\end{align*}
and finally, that $\log \mathcal{N}(\zeta_0 \epsilon_T, \mathcal{H}_T, \|.\|_1) \leq 6C K^2 T \epsilon_T^2 \leq x_0 T \epsilon_T^2$ with $\zeta_0 = 3 \sqrt{A}, x_0 = 12 C K^2$.
Although more general kernel functions $k_{GP}$ could be considered, we focus on the hierarchical squared exponential kernels for which
\begin{align*}
\forall x,y \in \R, \quad k_{GP}(x,y; \ell) = \exp \left \{- (x-y)^2 / \ell^2 \right \}, \quad \ell \sim IG(\ell; a_0, a_1 ), \quad a_0,a_1 > 0,
\end{align*}
where $IG(.; a_0, a_1 )$ with $ a_0,a_1 > 0$ denotes the Inverse Gamma distribution. The hierarchical squared exponential kernel is notably chosen in the variational method of [Malem-Shinitski et al., 2021], and its adaptivity and near-optimality have been proved by [van der Vaart and van Zanten, 2009].
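Sampling from this hierarchical prior at a grid of points can be sketched as follows; the hyperparameter values $a_0, a_1$ and the jitter term are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_hier_se_gp(xs, a0=2.0, a1=1.0, rng=rng):
    """One prior draw of (h(x_1), ..., h(x_n)): a lengthscale
    ell ~ InvGamma(a0, a1), sampled as 1 / Gamma(shape=a0, rate=a1),
    then a centered Gaussian vector with covariance
    k(x, y) = exp(-(x - y)^2 / ell^2)."""
    ell = 1.0 / rng.gamma(shape=a0, scale=1.0 / a1)
    cov = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / ell ** 2)
    cov += 1e-8 * np.eye(len(xs))        # jitter for numerical stability
    return rng.multivariate_normal(np.zeros(len(xs)), cov)
```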
Let $N$ be a Hawkes process with link functions $\phi = (\phi_k)_k$ and parameter $f_0 = (\nu_0, h_0) $ such that $(\phi,f_0)$ verify Assumption <ref>. Assume that for any $l,k \in [K]$, $h_{lk}^0 \in \mathcal{H}(\beta, L_0)$ with $\beta > 0$ and $L_0 > 0$. Let $\Tilde \Pi_{h|\delta}$ be the above Gaussian Process prior with hierarchical squared exponential kernel $k_{GP}$. Then, under our hierarchical prior, the mean-field variational distribution $\hat Q_1$ defined in (<ref>) satisfies, for any $M_T \to +\infty$,
\begin{align*}
\Exz{\hat Q_{1} \left( \norm{f - f_0}_1 > M_T(\log \log T)^{1/2}(\log T)^q (T/ \log T)^{-\beta / (2\beta + 1)} \right)} \xrightarrow[T\to \infty]{} 0,
\end{align*}
with $q = 1$ if $\phi$ verifies Assumption <ref>(i) and $q = 3/2$ if $\phi$ verifies Assumption <ref>(ii).
Given Theorem <ref>, Proposition <ref> is then a direct consequence of Theorem <ref> and [van der Vaart and van Zanten, 2009], therefore its proof is omitted.
The Gaussian process prior has been used in variational methods for Hawkes processes when there exists a conjugate form of the mean-field variational posterior distribution, i.e., when $\hat Q_1$ is itself a Gaussian process with mean function $m_{VP}$ and kernel function $k_{VP}$. This is notably the case in the sigmoid Hawkes model under the latent variable augmentation scheme described in Section <ref> and used for instance by [Malem-Shinitski et al., 2021]. Since the computation of the Gaussian process variational distribution is often expensive for large data sets, the latter is often further approximated using the sparse Gaussian process approximation via inducing variables [Titsias and Lázaro-Gredilla, 2011].
Using results of [Nieman et al., 2021], we conjecture that our result in Proposition <ref> would also hold for the mean-field variational posterior with inducing variables.
§.§.§ Model-selection variational family
In this section, we consider the model-selection adaptive variational posterior distributions (<ref>) and (<ref>), and similarly obtain their concentration rates. We recall that these two types of adaptive variational posteriors correspond to the following variational families (see also Appendix <ref>)
\begin{align*}
&\mathcal{V}_{A1} = \cup_{m \in \mathcal M} \{ \{m\}\times \mathcal{V}^m \}, \\
&\mathcal{V}_{A2} = \left \{ \sum_{m \in \mathcal{M}} \alpha_{m} Q_m; \sum_{m} \alpha_{m} = 1, \: \alpha_m \geq 0, \: Q_m \in \mathcal{V}^m, \: \forall m \in \mathcal{M} \right\},
\end{align*}
where here, $\mathcal{M}$ is the set of all possible models, i.e.,
\begin{align*}
\mathcal{M} = \left \{ m = (\delta, J=(J_1, \dots, J_K)); \delta \in \{0,1\}^{K\times K}, \: J_k \in \mathbb N, \: \forall k \in [K]\right \},
\end{align*}
and for a model $m \in \mathcal{M}$, the variational family $\mathcal{V}^m$ corresponds to a set of distributions on the subspace $\mathcal{F}_m \subset \mathcal{F}$, with $\bigcup_{m \in \mathcal{M}} \mathcal{F}_m = \mathcal{F}$. In the data augmentation context and with the mean-field approximation, $\mathcal{V}^m$ is the set of distributions $Q: \mathcal{F}_m \times \mathcal{Z} \to [0,1]$ such that $Q(f,z) = Q_1(f)Q_2(z)$. We further recall that for each $k$, $J_k$ corresponds to the number of functions in the dictionary used to construct $(h_{lk})_{l \in [K]}$.
In this context, the general results from [Zhang and Gao, 2020] can be applied, and here, it is enough to replace the prior assumption (A0) by
\begin{align}\label{eq:A0''}
\textbf{(A0'')} \quad \exists c_1 > 0, \: &\Pi \left( B_\infty(\epsilon_T) \left| \delta= \delta_0, J= (J_k^0 )_k J_T \right. \right) \geq e^{ -c_1 T\epsilon_T^2/3}, \nonumber \\
&\Pi_\delta ( \delta = \delta_0) \geq e^{ -c_1 T\epsilon_T^2/3}, \quad \Pi_J \left( J=\left(J_k^0 \right)_k J_T\right) \geq e^{ -c_1 T\epsilon_T^2/3},
\end{align}
where $J_T = \left(\frac{T}{\log T}\right)^{\beta / (2 \beta +1)}$,
assuming that, for any $l,k \in [K]$, $h_{lk}^0 \in \mathcal{H}(\beta, L_0)$. Indeed, (A0'') implies that
\begin{align*}
- \log \Pi(m = m_0) - \log \Pi \left( B_\infty(\epsilon_T) | m=m_0 \right) \leq c_1 T \epsilon_T^2, \quad m_0 = \left(\delta_0, \left(J_k^0 \right)_k J_T\right),
\end{align*}
which also implies (A0). For example, under the random histogram prior of Section <ref>, it is enough to choose $\Pi_J$ such that, for some sequence $(x_n)_{n\geq 1}$ with $x_n \xrightarrow[n \to \infty]{}\infty$,
\begin{align*}
\Pi_J(J_l > x_n) \lesssim e^{-cx_n}, \quad \Pi_J(J_l = x_n) \gtrsim e^{-cx_n}, \quad \forall n\geq 1, \quad c > 0,
\end{align*}
which is the case for instance when $\Pi_J$ is a Geometric distribution. In the next proposition, we state our result on the model-selection variational family when using the random histogram prior distribution; this result, however, also holds for other prior distributions based on decompositions over dictionaries, such as the ones in [Arbel et al., 2013, Shen and Ghosal, 2015].
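The Geometric case can be checked in closed form: for $J_l \sim \mathrm{Geom}(p)$ on $\{1,2,\dots\}$, $\Pi_J(J_l > n) = (1-p)^n = e^{-cn}$ and $\Pi_J(J_l = n) = p(1-p)^{n-1} \geq p\, e^{-cn}$, with $c = \log(1/(1-p))$. A minimal numerical confirmation of these two identities:

```python
import math

def geom_tail_and_mass(n, p=0.5):
    """Tail P(J > n) and mass P(J = n) of a Geometric(p) variable on
    {1, 2, ...}, together with the rate c = log(1 / (1 - p)) for which
    both tail conditions above hold."""
    c = math.log(1.0 / (1.0 - p))
    tail = (1.0 - p) ** n                 # = exp(-c n)
    mass = p * (1.0 - p) ** (n - 1)       # >= p exp(-c n)
    return tail, mass, c
```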
Let $N$ be a Hawkes process with link functions $\phi = (\phi_k)_k$, parameter $f_0 = (\nu_0, h_0) $ such that $(\phi,f_0)$ verify Assumption <ref>. Assume that for any $l,k \in [K]$, $h_{lk}^0 \in \mathcal{H}(\beta, L_0)$ with $\beta \in (0,1)$ and $L_0 > 0$.
Then, under the random histogram prior distribution, the model-selection variational posterior (<ref>) satisfies, for any $M_T \to +\infty$,
\begin{align*}
\Exz{\hat Q_{A1} \left( \norm{f - f_0}_1 > M_T(\log T)^q (T / \log T)^{-\beta / (2\beta + 1)} \right)} \xrightarrow[T\to \infty]{} 0,
\end{align*}
with $q = 0$ if $\phi$ verifies Assumption <ref>(i) and $q = 1/2$ if $\phi$ verifies Assumption <ref>(ii).
Since Proposition <ref> is a direct consequence of Theorem <ref> and Theorem 4.1 in [Zhang and Gao, 2020], its proof is omitted. Finally, we note that we can obtain similar guarantees for the model-averaging adaptive variational posterior (<ref>), by adapting Theorem 3.6 from [Ohn and Lin, 2021], which directly holds under the same assumptions as Proposition <ref>.
§.§ Convergence rate associated to the two-step algorithm
As discussed in Section <ref>, when the number of dimensions $K$ is moderately large, both $\hat Q_{A1}$ and $\hat Q_{A2}$ are intractable, due to the necessity of exploring all models in $\mathcal{M}_T$, defined in (<ref>). For this setting, we have proposed a two-step procedure (Algorithm <ref>) that first constructs the estimator of the graph with (<ref>), then constructs a restricted set of models $\mathcal{M}_E$ and computes the corresponding variational distribution $\hat Q^{\hat \delta}$. We now show that this two-step procedure is theoretically justified. We recall our notation $S_{lk}^0=\norm{h_{lk}^0}_1, \: \forall l,k \in [K]$.
Firstly, since the complete graph $\delta_C = \mathds{1}\mathds{1}^T$ is larger than the true graph $\delta_0$, the subspace $\bigcup_{m \in \mathcal{M}_C} \mathcal{F}_m$ contains the true parameter $f_0$.
Hence Theorem <ref> remains valid with $\mathcal{V}_C = \cup_{m \in \mathcal M_C} \{ \{m\}\times \mathcal{V}^m \}$.
In particular, the rates $\epsilon_T = (\log T)^q T^{-\beta/(2\beta+1)}$ obtained in Propositions <ref> and <ref> apply to the corresponding variational posterior $\hat Q_{MS}^{C}$, under the assumption that $K$ is large but fixed. Then, for each $(l,k)$ and $\hat S_{lk} = \int\norm{h_{lk}}_1 d\hat Q_{MS}^{C}(h_{lk})$, Theorem <ref> implies that
$$\mathbb P_0 \big(|\hat S_{lk} - S_{lk}^0| > \epsilon_T \big) =o(1).$$
In our two-step procedure, we consider the two following thresholding strategies:
(i) given a threshold $\eta_0 > 0$ defined a priori, we compute $\hat \delta = (\hat \delta_{lk})_{l,k}$ with $\hat \delta_{lk} = \mathds{1}_{\{\hat S_{lk} > \eta_0\}}$, $\forall l,k$.
(ii) we choose a data-dependent threshold $\eta_0 \in (\hat S_{(i_0)}, \hat S_{(i_0+1)})$, where $(\hat S_{(i)})_{i \in [K^2]}$ corresponds to the values $(\hat S_{lk})_{l,k}$ in increasing order and $i_0$ is the first index such that $\hat S_{(i+1)} - \hat S_{(i)}$ is large. We then compute $\hat \delta$ as in (i).
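The two strategies can be sketched as follows; the data-driven rule below places the threshold inside the largest gap of the sorted values, which is one possible reading of "the first index such that the gap is large".

```python
import numpy as np

def threshold_graph(S_hat, eta0=None):
    """Graph estimate from the matrix of posterior interaction norms
    S_hat = (S_hat_lk).  With eta0 given, this is strategy (i); otherwise
    a data-dependent threshold is placed in the largest gap of the
    sorted values, one interpretation of strategy (ii)."""
    if eta0 is None:
        s = np.sort(S_hat.ravel())
        i0 = int(np.argmax(np.diff(s)))      # index of the largest gap
        eta0 = 0.5 * (s[i0] + s[i0 + 1])     # threshold inside that gap
    return (S_hat > eta0).astype(int)
```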
Let $i^* := K^2 - \sum_{l,k}\delta_{lk}^0 = \min \left \{i \in [K^2]; \: S^0_{(i+1)} -S^0_{(i)} \neq 0 \right \}$ be the number of zero values, i.e., the first index such that $S^0_{(i^*+1)} > 0$,
where $(S^0_{(i)})_{i \in [K^2]}$ corresponds to the values of $(S_{lk}^0)_{l,k}$ in increasing order.
We recall our notation $\mathcal I(\delta_0)$ for the set of index pairs $(l,k)$ such that $S_{lk}^0> 0$. We now assume that $f_0$ is such that
\begin{equation}\label{signal_detection}
S_{lk}^0 \geq u_T, \quad \forall l,k \in \mathcal I(\delta_0),
\end{equation}
where $u_T \gg \epsilon_T$. We note that (<ref>) is a mild requirement on $f_0$, since we allow $u_T$ to go to 0 almost as fast as $\epsilon_T$.
Now, for the thresholding strategy (i), for any $\eta_0$ (possibly depending on $T$) such that $u_T \leq \eta_0 <\min_{(l,k) \in \mathcal I(\delta_0)}\norm{h_{lk}^0}_1/2 $, we obtain that
\begin{align}\label{eq:graph-consistency}
\mathbb P_0( \hat \delta \neq \delta_0 ) = o(1).
\end{align}
Moreover, for the data-dependent thresholding strategy (ii), as soon as the gap $\hat S_{(i_0+1)} - \hat S_{(i_0)}$ is larger than $u_T$ but smaller than $\min_{(l,k)\in \mathcal I(\delta_0)}\norm{h_{lk}^0}_1/2$, then (<ref>) also holds.
This follows since
\begin{align*}
\mathbb P_0( \hat \delta \neq \delta_0)
\leq \sum_{l,k \in [K]} \mathbb P_0( |\hat S_{lk}-S_{lk}^0|> u_T/2 )=o(1).
\end{align*}
§ NUMERICAL RESULTS
In this section, we perform a simulation study to evaluate our variational Bayesian method in the context of nonlinear Hawkes processes, and demonstrate its efficiency, scalability, and robustness in various estimation setups. In low-dimensional settings ($K = 1$ and $K = 2$), we can compare our variational posterior to the posterior distribution obtained from an MCMC method. As a preliminary experiment, we additionally analyse the performance of a Metropolis-Hastings sampler in commonly used nonlinear Hawkes processes, namely with ReLU, sigmoid and softplus link functions (Simulation 1). In the subsequent simulations, we focus on the sigmoid model and test our adaptive variational algorithms, in well-specified (Simulations 2-5) and mis-specified settings (Simulation 6), on high-dimensional data sets, and for different connectivity graphs (Simulation 4).
In each setting, we sample one observation of a Hawkes process with dimension $K$, link functions $(\phi_k)_k$ and parameter $f_0 = (\nu_0, h_0)$ on $[0,T]$, using the thinning algorithm of [Adams et al., 2009].
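For a bounded link function such as the sigmoid, thinning takes a particularly simple form. The following univariate sketch is our simplified version with a global intensity bound, for illustration only (not necessarily the exact algorithm of [Adams et al., 2009]):

```python
import numpy as np

rng = np.random.default_rng(2)

def thin_hawkes_1d(phi, phi_max, nu, h, A, T, rng=rng):
    """Simulate a univariate nonlinear Hawkes process on [0, T] by
    thinning: candidate points arrive at the constant rate phi_max,
    which must dominate the intensity
    lambda(t) = phi(nu + sum_{t - t_i <= A} h(t - t_i)),
    and each candidate is kept with probability lambda(t) / phi_max."""
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / phi_max)       # next candidate point
        if t > T:
            break
        mem = sum(h(t - s) for s in events if t - s <= A)
        if rng.uniform() < phi(nu + mem) / phi_max:
            events.append(t)
    return np.array(events)
```

With the sigmoid link, $\phi_{\max} = \theta + \Lambda$ is a valid global bound, so the acceptance probability is always well defined.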
In most simulated settings, the true interaction functions $(h_{lk}^0)_{l,k}$ are piecewise-constant, and we use the random histogram prior described in Section <ref> in our variational Bayes method. For $D\geq 1$, we introduce the notation
\begin{align*}
\mathcal{H}_{histo}^D = \left \{ h_k = (h_{lk})_{l} ; \: h_{lk}(x) = \sum_{j=1}^{2^D} w^j_{lk} e_j(x), \: x \in [0,A], \: l \in [K], \: e_j(x) = \frac{2^D}{A}\mathds{1}_{\left(\frac{(j-1) A}{2^D},\frac{j A}{2^D}\right]}(x) \right \},
\end{align*}
and for the remainder of this section, we index functions $h_{lk}$ by the histogram depth $D$.
In the next sections, we report the results of the following set of simulations.
* Simulation 1: Posterior distribution in parametric, univariate, nonlinear Hawkes models.
We analyse the posterior distribution computed from a Metropolis-Hastings (MH) sampler in several nonlinear univariate Hawkes processes ($K=1$), with ReLU, sigmoid, and softplus link functions. For this sampler, we consider that the dimensionality $D_0$ such that $h_0 \in \mathcal{H}_{histo}^{D_0}$ is known; the posterior inference is therefore non-adaptive.
* Simulation 2: Variational and true posterior distribution in parametric, univariate sigmoid Hawkes models. In a univariate setting where $h_0 \in \mathcal{H}_{histo}^{D_0}$ and the dimensionality $D_0$ is known (non-adaptive), we compare the variational posterior obtained from Algorithm <ref> to the posterior distribution obtained from two MCMC samplers, i.e., the MH sampler of Simulation 1 and a Gibbs sampler available in the sigmoid model (Algorithm <ref>).
* Simulation 3: Fully-adaptive variational algorithm in univariate and bivariate sigmoid models. This experiment evaluates our first adaptive variational algorithm (Algorithm <ref>) in sigmoid Hawkes processes with $K=1$ and $K=2$, in nonparametric settings where the true interaction functions are either piecewise-constant functions with unknown dimensionality or continuous.
* Simulation 4: Two-step adaptive variational algorithm in high-dimensional sigmoid models. This experiment evaluates the performance and scalability of our fast adaptive variational algorithm (Algorithm <ref>), for sigmoid Hawkes processes with $K \in \{2,4,8,10,16, 32, 64\}$, in sparse and less sparse settings of the true parameter $h_0 \in \mathcal{H}_{histo}^{D_0}$ with unknown dimensionality $D_0$.
* Simulation 5: Convergence of the two-step adaptive variational posterior for varying data set sizes. In this experiment, we evaluate the asymptotic performance of our two-step variational procedure (Algorithm <ref>), with respect to the number of observations, i.e., the length of the observation horizon $T$, for sigmoid Hawkes processes with $K =10$.
* Simulation 6: Robustness of the variational posterior to some types of mis-specification of the Hawkes model. This experiment aims at evaluating the performance of our variational algorithm for the sigmoid Hawkes model (Algorithm <ref>) on data sets generated from Hawkes processes with mis-specified nonlinear link functions and memory parameter of the interaction functions.
In all simulations, we set the memory parameter as $A=0.1$, and we evaluate the performance visually, in low-dimensional settings, or with the $L_1$-risk on the continuous parameter and $\ell_0$-error on the graph parameter (defined below), in moderately large to large-dimensional settings.
One important quantity in these synthetic experiments is the number of excursions in the generated data, formally defined in [Costa et al., 2020] and in Lemma <ref> in Appendix <ref>. Intuitively, the observation window $[0,T]$ can be partitioned into contiguous intervals $\{[\tau_{i-1},\tau_i)\}_{i=1,\dots, I}$, $\tau_0=0, \tau_{I}=T$, $I \in \mathbb N$, called excursions, on which the point process measures are i.i.d. The main properties of these intervals are that $N[\tau_{i-1}, \tau_i) \geq 1$ and $N[\tau_{i}-A, \tau_i)=0$. For our multivariate contexts, we additionally introduce a new notion that we call local excursions, defined for each dimension $k$ as a partition $[0,T] = \bigcup_{i=1}^{I_k} [\tau^k_{i-1},\tau^k_i)$ such that $N^k[\tau^k_{i-1}, \tau^k_i) \geq 1$ and $N^k[\tau^k_{i}-A, \tau^k_i)=0$. To the best of our knowledge, this quantity has not yet been studied for Hawkes processes; as shown below, our experiments suggest that it is an important statistical property of the process.
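The construction of local excursions for one dimension can be sketched as follows: a new excursion starts at every event of $N^k$ preceded by an event-free gap longer than $A$ (a sketch of ours, assuming at least one event is observed).

```python
import numpy as np

def local_excursions(times_k, A, T):
    """Partition [0, T) into local excursions for one dimension k:
    intervals [tau_{i-1}, tau_i) with N^k[tau_{i-1}, tau_i) >= 1 and
    N^k[tau_i - A, tau_i) = 0.  A new excursion starts at every event
    whose gap from the previous event exceeds A, so that the window
    [tau_i - A, tau_i) just before it is event-free."""
    times_k = np.sort(np.asarray(times_k, dtype=float))
    taus = [0.0]
    for t_prev, t in zip(times_k[:-1], times_k[1:]):
        if t - t_prev > A:              # [t - A, t) contains no event
            taus.append(float(t))
    taus.append(float(T))
    return list(zip(taus[:-1], taus[1:]))
```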
§.§ Simulation 1: Posterior distribution in univariate nonlinear Hawkes models
Link functions $\phi$ of the Hawkes model considered in Simulation 1, namely the sigmoid (blue), ReLU (red), and softplus (green) functions.
In this simulation, we consider univariate Hawkes processes ($K=1$) with link function $\phi = \phi_1$ of the form
\begin{align}\label{eq:nonlinearity}
&\phi(x) = \theta + \Lambda \psi(\alpha (x - \eta)),
\end{align}
where $\xi = (\theta, \Lambda, \alpha, \eta) $ and $\psi:\R \to \R^+$ are known and chosen as:
* Sigmoid: $\psi(x) = (1 + e^{-x})^{-1}$ and $\xi = (0.0, 20.0, 0.2, 10.0)$;
* ReLU: $\psi(x) = \max(x,0)$ and $\xi = (0.001, 1.0, 1.0, 0.0)$;
* Softplus: $\psi(x) = \log(1 + e^x)$ and $\xi = (0.0, 40.0, 0.1, 20.0)$.
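The three link functions share the common form above; the sketch below implements $\phi(x) = \theta + \Lambda\psi(\alpha(x-\eta))$ with the parameter values $\xi$ listed for each model.

```python
import math

def make_link(psi_name, theta, Lam, alpha, eta):
    """Build phi(x) = theta + Lam * psi(alpha * (x - eta)) for the
    three choices of psi used in Simulation 1."""
    psi = {
        "sigmoid":  lambda u: 1.0 / (1.0 + math.exp(-u)),
        "relu":     lambda u: max(u, 0.0),
        "softplus": lambda u: math.log1p(math.exp(u)),
    }[psi_name]
    return lambda x: theta + Lam * psi(alpha * (x - eta))

# Parameters xi = (theta, Lambda, alpha, eta) of the three models
phi_sigmoid  = make_link("sigmoid",  0.0, 20.0, 0.2, 10.0)
phi_relu     = make_link("relu",     0.001, 1.0, 1.0, 0.0)
phi_softplus = make_link("softplus", 0.0, 40.0, 0.1, 20.0)
```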
Note that the corresponding link functions $\phi$ have similar shapes on the range of values between $-20$ and $20$ (see Figure <ref>). In all models, we consider a Hawkes process with
$h_0 = h_{11}^0 \in \mathcal{H}_{histo}^{D_0}$ with $D_0=2$, and three scenarios, called Excitation only, Mixed effect, and Inhibition only, where $h_0$ is respectively non-negative, signed, and non-positive (see Figure <ref> for instance). In each of the nine settings, we set $T=500$, and in Table <ref> we report the corresponding number of events and excursions observed in each scenario and model. Note that, as one may expect, more events and fewer excursions are observed in the data generated in the Excitation only scenario than in the Mixed effect and Inhibition only scenarios.
Here, we assume that $D_0$ is known and we consider a normal prior on $ \mathcal{H}_{histo}^{D_0}$ such that
$$w_{11} \sim \mathcal{N}(0, \sigma^2 I), \quad \nu_1 \sim \mathcal{N}(0, \sigma^2),$$
with $\sigma=5.0$.
To compute the (true) posterior distribution, we run a Metropolis-Hastings (MH) sampler implemented via the Python package PyMC4[<https://www.pymc.io/welcome.html>] with 4 chains, 40 000 iterations, and a burn-in time of 4000 iterations. We also use the Gaussian quadrature method [Golub and Welsch, 1969] for evaluating the log-likelihood function, except in the ReLU model and Excitation only scenario, where the integral term is computed exactly. We note that we also tested a Hamiltonian Monte Carlo sampler in this simulation and obtained similar posterior distributions, but at a much larger computational cost; these results are therefore excluded from this experiment.
The posterior distributions on $f = (\nu_1, h_{11})$ in the ReLU model and our three scenarios are plotted in Figure <ref>. For conciseness, our results for the sigmoid and softplus models are reported in Appendix <ref>. We note that in almost all settings, the ground-truth parameter $f_0$ is included in the 95% credible sets of the posterior distribution.
Nonetheless, the posterior mean is sometimes biased,
possibly due to numerical integration errors in the log-likelihood computation. Moreover, we conjecture that the estimation quality depends on the number of events and the number of excursions, which could explain the differences between the Excitation only, Mixed effect, and Inhibition only scenarios. In particular, the credible sets seem consistently smaller for the second scenario, whose realisations have more excursions than the other ones.
This simulation therefore shows that the posterior distribution in commonly used nonlinear univariate Hawkes models behaves well and can be sampled from using a simple MH sampler. Nonetheless, the MH iterations are computationally expensive, which prevents us from scaling this algorithm to large dimensions. Therefore, we will only use the MH sampler to compute the posterior distribution in the low-dimensional settings, i.e., Simulations 2 and 3, with respectively $K=1$ and $K=2$.
Scenario          Statistic      Sigmoid   ReLU   Softplus
Excitation only   # events          5250   5352       4953
                  # excursions      1558   1436       1373
Mixed effect      # events          3876   3684       3418
                  # excursions      1775   1795       1650
Inhibition only   # events          3047   2724       2596
                  # excursions      1817   1693       1588
Number of events and excursions in the simulated data of Simulation 1 with $T=500$. We refer to Remark <ref> and Lemma <ref> in Appendix <ref> for the definition of an excursion in Hawkes processes.
Posterior distribution on $f = (\nu_1, h_{11})$ obtained with the Metropolis-Hastings sampler (MH), in the univariate ReLU models of Simulation 1. The three columns correspond to the Excitation only (left), Mixed effect (center), and Inhibition only (right) scenarios. On the first row, we plot the marginal posterior distribution on the background rate $\nu_1$, and on the second row, the posterior mean (solid orange line) and 95% credible sets (orange areas) on the interaction function $h_{11}$, here piecewise-constant with dimensionality $2^{D_0} = 4$. The true parameter $f_0 = (\nu_1^0, h_{11}^0)$ is plotted in dotted green line.
§.§ Simulation 2: Parametric variational posterior and posterior distribution in the univariate sigmoid model.
In this simulation, we consider the same univariate scenarios as in Simulation 1, but only for the sigmoid Hawkes model, and compare the variational and true posterior distributions. Here, the dimensionality $D_0$ of the true function $h_0$ is assumed known; the samplers are therefore non-adaptive. Specifically, we compare the performance of the previous MH sampler, the Gibbs sampler (introduced in Remark <ref> and described in Algorithm <ref> in Appendix <ref>), and our mean-field variational algorithm in a fixed model (Algorithm <ref>); here, we fix the dimensionality of $h_{11}$ to $J=2^{D_0}=4$.
We run 4 chains for 40 000 iterations for the MH sampler, 3000 iterations of the Gibbs sampler, and use our early-stopping procedure for the mean-field variational algorithm.
In Figure <ref>, we compare the variational posterior on $f = (\nu_1, h_{11})$ to the posterior distributions, computed either with the Gibbs or the MH sampler, in the three estimation scenarios. We note that the variational posterior mean is always close to the posterior mean, in particular when the latter is computed with the Gibbs sampler. Nonetheless, its credible sets are generally smaller,
which is a common empirical observation of mean-field variational approximations.
Besides, the variational posterior seems to be biased similarly to the posterior distribution, as can be seen for the background rate $\nu_1$ in the Inhibition scenario. One could therefore test whether this bias decreases with more observations, i.e., larger $T$; however, the Gibbs sampler has a large computational time (between 3 and 5 hours), about 6 (resp. 40) times longer than the MH sampler (resp. our mean-field algorithm), due to the expensive latent-variable sampling scheme (see Table <ref>).
Finally, we also compare the estimated intensity function using the (variational) posterior means, on a sub-window of the observations in Figure <ref>. The latter plot shows that all three methods provide fairly equivalent estimates on the nonlinear intensity function.
From this simulation, we conclude that, in the univariate and parametric sigmoid Hawkes model, the mean-field variational algorithm in a fixed model provides a good approximation of the posterior distribution. Moreover, although the Gibbs sampler is slightly more accurate than the MH sampler, it is much slower and therefore cannot be applied to multivariate Hawkes processes in practice. Consequently, in the bivariate simulation of the next section, we only compare to the posterior distribution computed with the MH sampler, which can still be obtained within reasonable time for $K=2$.
Scenario          MH     Gibbs    MF-VI
Excitation only   2169   16 092     416
Mixed effect      2181   13 097     338
Inhibition only   2222    9 318     400
Computational times (in seconds) of the Gibbs sampler (Algorithm <ref>), our mean-field variational (MF-VI) algorithm (Algorithm <ref>), and the Metropolis-Hastings (MH) sampler in each parametric univariate scenario of Simulation 2 with $T = 500$. We note that the Gibbs sampler is much slower than the MH sampler, which is also slower than the mean-field variational algorithm.
Posterior and variational posterior distributions on $f = (\nu_1, h_{11})$ in the univariate sigmoid model of Simulation 2, evaluated by the MH sampler, the mean-field variational (MF-VI) algorithm in a fixed model (Algorithm <ref>) and the Gibbs sampler (Algorithm <ref>). The three columns correspond to the Excitation only (left), Mixed effect (center), and Inhibition only (right) scenarios. The true parameter $f_0$ is plotted in dotted green line. The first row contains the marginal distributions (VB, MH and Gibbs) on the background rate $\nu_1$, and the second row represents the posterior means (solid lines) and 95% credible sets (colored areas) on the (self) interaction function $h_{11}$. We note that the variational posterior is close to the Gibbs posterior distribution, nonetheless, has smaller credible bands.
Intensity function on a sub-window of the observation window estimated via the variational posterior mean (blue) or via the posterior mean, computed with the MH sampler (orange) or the Gibbs sampler (purple), in each scenario of Simulation 2. The true intensity $\lambda_t^1(f_0)$ is plotted in dotted green line. We note that all estimates are close in this simulation.
§.§ Simulation 3: Fully-adaptive variational method in the univariate and bivariate sigmoid models.
# dimensions   Scenario     T      FA-MF-VI   MH
K = 1          Excitation   2000        32     417
               Inhibition   3000        33     445
K = 2          Excitation   2000       189    2605
               Inhibition   3000       197    2791
Computing times (in seconds) of our fully-adaptive mean-field variational method (FA-MF-VI) (Algorithm <ref>) and the Metropolis-Hastings (MH) sampler in the univariate and bivariate sigmoid models and the scenarios of Simulation 3.
In this simulation, we
test our fully-adaptive variational inference algorithm (Algorithm <ref>), in the one-dimensional ($K=1$) and two-dimensional ($K=2$) sigmoid models, and in two estimation settings:
* Well-specified: $h_0 \in \mathcal{H}_{hist}^{D_0}$ (with $D_0 = 2$);
* Mis-specified: $h_0 \notin \mathcal{H}_{hist}^{D_0}$, and $h_{lk}^0$ is a continuous function for all $(l,k) \in [K]^2$.
Note that in the well-specified case, $m_0 := (\delta_0, 2^{D_0})$ is unknown to the variational method; nonetheless, we also compute the posterior distribution with the non-adaptive MH sampler using the true $m_0$.
In the bivariate model, we choose a true graph parameter $\delta_0$ with one zero entry (see Figure <ref>a).
We also consider an Excitation scenario where all the true interaction functions $(h_{lk}^0)_{l,k}$ are non-negative and with $T = 2000$, and an Inhibition scenario where the self-interaction functions $(h_{kk}^0)_{k=1,2}$ are non-positive with $T = 3000$. The latter setting aims at imitating the so-called self-inhibition phenomenon in neuronal spiking data, due to the refractory period of neurons [Bonnet et al., 2021].
In our adaptive variational algorithm, we set a maximum histogram depth $D_1 = 5$ for $K=1$, and $D_1 = 4$ for $K=2$, so that the number of models per dimension is respectively 7 and 76.
In the well-specified setting, we first analyse the ability of Algorithm <ref> to recover the true connectivity graph and dimensionality of $h_0$. In Figure <ref>, we plot the model marginal probabilities $(\hat \gamma_m)_{m}$ in our adaptive variational posterior and in the univariate setting.
In the Excitation scenario, the largest marginal probability $\hat \gamma_{\hat m}$ is on the true model, i.e., $\hat m = m_0 = (\delta_0 = 1,2^{D_0}=2)$, and all the other marginal probabilities are negligible. Therefore, in this case, the model-averaging VB posterior (<ref>) is essentially equivalent to the model-selection VB posterior (<ref>). In the Inhibition scenario, the dimensionality $\hat D$ is not well inferred in the model-selection variational posterior (maximising the ELBO), which is over-regularizing in this case, since $\hat m = (\hat \delta = \delta_0= 1, \hat D = 1)$. However, as seen in Figure <ref>, the ELBO for $D=1$ and for $D=2=D_0$ are very close,
therefore, the model-averaging variational posterior better captures the model uncertainty, since it is essentially a mixture of two components, one corresponding to $\hat D = 1$ and the other to the true model $D_0 = 2$.
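The model-averaging weights behave like a softmax of the per-model ELBOs, i.e., $\hat\gamma_m \propto \exp(\mathrm{ELBO}_m)$. This is a standard construction that we assume here for illustration, rather than the paper's exact formula; it reproduces the observation above that two models with nearly equal ELBOs receive comparable mass.

```python
import math

def model_weights(elbos):
    # gamma_m proportional to exp(ELBO_m), computed stably via the max trick.
    m = max(elbos)
    w = [math.exp(e - m) for e in elbos]
    total = sum(w)
    return [x / total for x in w]

# Two close ELBOs (e.g. D=1 vs D=2) share the mass; a distant third model
# is negligible, giving an essentially two-component mixture.
weights = model_weights([-100.0, -100.3, -120.0])
```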
Nonetheless, comparing the estimated nonlinear intensity based on the model-selection variational posterior mean with the posterior mean in Figure <ref> in Appendix <ref>, we note that the model-selection variational estimate is very close to the true intensity and to the non-adaptive MH estimate, despite the dimensionality error in the Inhibition scenario.
We then compare the model-selection adaptive variational posterior distribution on the parameter with the true posterior distribution computed with the non-adaptive MH sampler in Figure <ref>. We note that in the Excitation scenario, the variational posterior mean is very close to the posterior mean; however, its 95% credible bands are significantly smaller. Note also that, in the Inhibition scenario, in spite of the wrongly selected histogram depth, the estimated interaction function is still not far from the truth.
In the mis-specified setting, all but one of the marginal probabilities are negligible, in both the Excitation and Inhibition scenarios (see Figure <ref>), although there is no true $m_0$ in this case. In Figure <ref> in Appendix <ref>, we note that the model-selection adaptive variational posterior mean approximates the true parameter quite well. Moreover, its 95% credible bands often cover the truth but are once again slightly too narrow.
The previous observations in the well-specified and mis-specified settings can also be made in the two-dimensional setting. The true connectivity graph and the marginal probabilities in the adaptive variational posterior are plotted in Figure <ref>. We note that in the well-specified case, $\hat m = m_0$ in both scenarios.
Moreover, the parameter and the nonlinear intensity are well estimated, as can be seen in Figure <ref> and in Figures <ref>, <ref> in Appendix <ref>. Note however that, in the mis-specified setting, the under-coverage phenomenon of the credible regions also occurs (see Figure <ref>).
Finally, we note that our fully-adaptive variational algorithm is more than 10 times faster to compute than the non-adaptive MH sampler,
as can be seen from the computing times reported in Table <ref>. This simulation study therefore shows that our fully-adaptive variational algorithm enjoys several advantages in Bayesian estimation for Hawkes processes: it infers the dimensionality of the interaction functions $D$ and the dependence structure through the graph parameter $\delta$, provides a good approximation of the posterior mean, and is computationally efficient.
Model marginal probabilities $(\hat \gamma_{m})_{m}$ in the adaptive mean-field variational posterior, in the well-specified and mis-specified settings of Simulation 3 with $K=1$.
The left and right panels correspond respectively to the Excitation and Inhibition settings. The elements in $\mathcal{S}_1$ are indexed from 1 to 7, and correspond respectively to $m = (\delta=0,2^{D}=1)$, and $m = (\delta=1,2^D)$ with $D=0, \dots, 5$.
Posterior and model-selection variational posterior distributions on $f = (\nu_1, h_{11})$ in the univariate sigmoid model and settings of Simulation 3, evaluated by the MH sampler and the fully-adaptive mean-field variational (FA-MF-VI) algorithm (Algorithm <ref>). The three columns correspond respectively to the two well-specified settings, i.e., the Excitation (Well-specified-Exc) and Inhibition (Well-specified-Inh) scenarios, and one mis-specified setting (Mis-specified-Exc). The first row contains the marginal distribution on the background rate $\nu_1$, and the second row represents the (variational) posterior mean (solid line) and 95% credible sets (colored areas) on the (self) interaction function $h_{11}$. The true parameter $f_0$ is plotted in dotted green line.
Marginal probabilities on the graph and dimensionality parameter $s_k = (\delta_{\cdot, k}, D_k)$ at each dimension, i.e., $(\hat \gamma_{s_k}^k)_{s_k \in \mathcal{S}_2}$ in the fully-adaptive averaged mean-field variational posterior, in the well-specified setting of Simulation 3 with $K=2$.
The Excitation scenario (exc) corresponds to $h_0 \geq 0$, while in the Inhibition scenario (inh), $h_{11}^0, h_{22}^0 \leq 0$. The elements in $\mathcal{S}_2$ are indexed from 1 to 13 and the true model in this set is indicated in orange.
Model-selection variational posterior distributions on $f = (\nu, h)$ in the bivariate sigmoid model and the well-specified and mis-specified settings of Simulation 3, computed with the fully-adaptive mean-field variational (FA-MF-VI) algorithm (Algorithm <ref>). The two columns correspond to the Excitation (left) and Inhibition (right) settings. The first row contains the marginal distribution on the background rates $(\nu_1, \nu_2)$, and the second and third rows represent the (variational) posterior mean (solid line) and 95% credible sets (colored areas) on the four interaction functions $h_{11}, h_{12}, h_{21}, h_{22}$. The true parameter $f_0$ is plotted in dotted green line.
Panels: Excitation scenario; Self-inhibition scenario.
Estimated intensity function based on the (variational) posterior mean, in the well-specified and bivariate setting of Simulation 3 on $[0,10]$, using the fully-adaptive mean-field variational (FA-MF-VI) algorithm (Algorithm <ref>). The true intensity $\lambda_t(f_0)$ is plotted in dotted green line.
§.§ Simulation 4: Two-step variational posterior in high-dimensional sigmoid models.
In this section, we test the performance of our two-step variational procedure (Algorithm <ref>), first in sparse settings of the true parameter $h_0$, then in relatively denser regimes.
§.§.§ Sparse settings
Computational times of our two-step mean-field variational algorithm (Algorithm <ref>) in the Excitation (exc) and Inhibition (inh) scenarios and well-specified setting of Simulation 4, for $K = 2,4,8,16,32, 64$.
In this experiment, we consider sparse multivariate sigmoid models with $K \in \{2,4,8,16, 32, 64\}$ dimensions. We note that, to the best of our knowledge, the only Bayesian method that has been tested so far on high-dimensional Hawkes processes is the semi-parametric approach of [Zhou et al., 2022], where the interaction functions are also decomposed over a dictionary of functions, but the number of functions is not chosen via a model-selection procedure and the interaction graph is not inferred.
Here, we construct a well-specified setting with $h_0 \in \mathcal{H}_{hist}^{D_0}$ and $D_0 = 1$, with an Excitation and an Inhibition scenario similar to Simulation 3, and a sparse connectivity graph parameter $\delta_0$ with $\sum_{l,k} \delta_{lk}^0 = 2K - 1$, as shown in Figure <ref>. In Table <ref>, we report our chosen value of $T$ in each setting and the corresponding number of events, excursions, and local excursions.
In Table <ref>, we report the performance of our method, in terms of the $L_1$-risk of the model-selection variational posterior defined as
\begin{align}\label{eq:risk}
r_{L_1}(\hat Q) := \mathbb{E}_{\hat Q}[\norm{\nu-\nu_0}_{\ell_1}] + \sum_{l,k} \mathbb{E}_{\hat Q}\left[\norm{h_{lk}-h_{lk}^0}_1\right].
\end{align}
We note that, in general, the number of terms in the risk grows with $K$ and with the number of non-null interaction functions in $h$ and $h_0$, which can thus be of order $O(K^2)$ in a dense setting.
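For piecewise-constant interaction functions on a regular grid of $[0, A]$, each $L_1$-norm term in the risk reduces to a weighted sum over bins. The sketch below evaluates the risk for point estimates (the paper's risk is an expectation under $\hat Q$, which would be approximated by averaging this over posterior draws); the helper names are ours.

```python
def l1_dist_hist(c, c0, A):
    """L1 distance between two piecewise-constant functions on [0, A].

    c and c0 are lists of bin coefficients; the coarser grid is refined by
    repetition, assuming one grid divides the other (true for dyadic bins).
    """
    J = max(len(c), len(c0))
    assert J % len(c) == 0 and J % len(c0) == 0
    c = [v for v in c for _ in range(J // len(c))]
    c0 = [v for v in c0 for _ in range(J // len(c0))]
    return sum(abs(a - b) for a, b in zip(c, c0)) * A / J

def l1_risk(nu, nu0, h, h0, A):
    # Plug-in version of the risk: |nu - nu0|_1 plus the L1 distances
    # of all K x K interaction functions.
    r = sum(abs(a - b) for a, b in zip(nu, nu0))
    for hl, hl0 in zip(h, h0):
        for c, c0 in zip(hl, hl0):
            r += l1_dist_hist(c, c0, A)
    return r

d = l1_dist_hist([1.0, 3.0], [1.0, 2.0, 3.0, 4.0], A=1.0)
r = l1_risk([1.0], [0.8], [[[1.0, 3.0]]], [[[1.0, 2.0, 3.0, 4.0]]], A=1.0)
```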
We first note that, for our prior distribution and for the augmented variational posterior distribution $\hat Q$ in a fixed model $m = (\delta, J=(J_k))$, the coefficients of $h_{lk}$ have Gaussian marginals, so that the expected $L_1$-norm is given by the folded-Gaussian mean
\begin{align*}
\mathbb{E}_{\hat Q_1}[\norm{h_{lk}}_1] &= \sum_{j=1}^{J_{k}} \sqrt{\frac{2}{\pi} [\Sigma_{lk}^{J_{k}}]_{jj}} \exp \left \{ - \frac{[\tilde{\mu}_{lk}^{J_{k}}]_{j}^2}{2 [\Sigma_{lk}^{J_{k}}]_{jj} } \right \} + [\tilde{\mu}_{lk}^{J_{k}}]_{j} \left[ 1 - 2 \Phi\left(- \frac{[\tilde{\mu}_{lk}^{J_{k}}]_{j} }{\sqrt{[\Sigma_{lk}^{J_{k}}]_{jj} }}\right) \right].
\end{align*}
We evaluate the accuracy of our algorithm when estimating the graph of interaction and the size $D_k$ at each dimension $k$, defined as
\begin{align*}
&Acc_{graph}(\hat \delta) = \frac{1}{K^2} \sum_{l,k} \mathds{1}_{\delta_{lk}^0 = \hat \delta_{lk}},
&Acc_{dim}(\hat D) = \frac{1}{K} \sum_{k} \mathds{1}_{D_{k}^0 = \hat D_{k}},
\end{align*}
where $\hat \delta = (\hat \delta_{lk})_{l,k}$ and $\hat D = (\hat D_{k})_k$ are respectively the estimated graph and the inferred dimensionality of $(h_{.k})_k$ in Algorithm <ref>.
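The two accuracy metrics above can be computed directly from the estimated and true graph and dimensionalities; a direct transcription:

```python
def graph_accuracy(delta_hat, delta0):
    # Fraction of correctly recovered edges/non-edges over the K x K grid.
    K = len(delta0)
    hits = sum(delta_hat[l][k] == delta0[l][k]
               for l in range(K) for k in range(K))
    return hits / K**2

def dim_accuracy(D_hat, D0):
    # Fraction of dimensions whose histogram depth is correctly selected.
    return sum(dh == d0 for dh, d0 in zip(D_hat, D0)) / len(D0)

# Toy example: one wrong edge out of four, one wrong depth out of two.
acc_g = graph_accuracy([[1, 0], [1, 1]], [[1, 0], [0, 1]])
acc_d = dim_accuracy([2, 1], [2, 2])
```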
Firstly, we note that, in almost all settings, the accuracy of our algorithm is equal or very close to 1; it is therefore able to recover almost perfectly the true graph $\delta_0$ and the dimensionality $D_0$ (the estimated graphs in the Excitation and Inhibition scenarios are plotted in Figures <ref> and <ref> in Appendix). In fact, our gap heuristics for choosing the threshold $\eta_0$ (see Section <ref>) allows us to estimate the graph after the first step of Algorithm <ref>. In Figure <ref> (and Figure <ref> in Appendix for the Inhibition scenario), we note that the $L_1$-norms of the interaction functions are well estimated in the first step, leading to a gap between the norms close to and far from 0. This gap includes the range $[0.1, 0.2]$ for all $K$'s; therefore, we choose $\eta_0 = 0.15$ here, which discriminates between the true signals and the noise and recovers the true graph parameter.
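The gap heuristics can be sketched as follows: sort the estimated norms, place the threshold inside the largest gap, and threshold. Placing it at the gap's midpoint is our assumption; the procedure only requires a threshold inside the gap (e.g. $\eta_0 = 0.15$ above).

```python
def gap_threshold(norms):
    # Threshold at the midpoint of the largest gap between sorted norms.
    s = sorted(norms)
    i = max(range(len(s) - 1), key=lambda j: s[j + 1] - s[j])
    return 0.5 * (s[i] + s[i + 1])

def graph_from_norms(S_hat, eta):
    # delta_hat[l][k] = 1 iff the estimated L1-norm of h_lk exceeds eta.
    return [[int(v > eta) for v in row] for row in S_hat]

# Toy 2 x 2 example: two large norms, two near zero.
S_hat = [[0.45, 0.02], [0.03, 0.60]]
eta = gap_threshold([v for row in S_hat for v in row])
delta_hat = graph_from_norms(S_hat, eta)
```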
Secondly, from Table <ref>, we note that the risk seems to grow linearly with $K$, which indicates that the estimation does not deteriorate for larger $K$. In Figure <ref> (and Figure <ref> in Appendix), we plot the risk on the $L_1$-norms using the model-selection variational posterior, i.e., $(\mathbb{E}_{\hat Q_{MV}}\left[\norm{h_{lk}-h_{lk}^0}_1\right])_{l,k}$, as a heatmap next to the true norms, and note that for all $K$'s, these errors are relatively small. Moreover, our variational algorithm estimates the parameter well, as can be visually checked in Figure <ref>, where we plot the model-selection variational posterior distribution on a subset of the parameter for each value of $K$ in the Excitation scenario (see Figure <ref> in Appendix for our results in the Inhibition scenario).
Besides, the computing times of our algorithm seem to scale well with $K$ and with the number of events in these sparse settings, as can be seen from Table <ref> and Figure <ref>. For $K=64$, our algorithm runs in less than 2.5 hours, in spite of the large number of events (about 133 000). We also note that these experiments were run using only two processing units; the computing time of our algorithm could thus be greatly decreased on a machine with more processing units.
§.§.§ Testing different graphs and sparsity levels.
In this experiment, we evaluate Algorithm <ref> on different settings of the graph parameter $\delta_0$, namely a sparse, a random, and a dense setting, illustrated in Figure <ref>. The sparse setting is similar to the previous section, while the random setting corresponds to a slightly less sparse regime where additional edges are present in $\delta_0$. Note that these three settings have different numbers of edges in $\delta_0$, and therefore different numbers of non-null interaction functions to estimate. From Table <ref>, we also note that there are more events and fewer global excursions in the dense setting than in the other two, in particular in the Excitation scenario, where this number drops to 2.
Our numerical results in Table <ref> show that in the dense setting, the graph accuracy of our estimator is slightly worse, and the risk of the variational posterior is much higher than in the other settings. We conjecture that this loss of performance is related to the smaller number of global excursions, which leads to a more difficult estimation problem. We can also see from Figure <ref> that in this particular setting, the estimation of the norms of the interaction functions deteriorates, and the gap that allows discriminating between the null and non-null functions is no longer present. Nonetheless, in the Inhibition scenario, for which the number of global excursions is not too small, this phenomenon does not happen and the estimation is almost equivalent in all graph settings.
To further explore the applicability of our thresholding approach in the dense setting, we test the following three-step approach in the Excitation scenario, with $K=10$ and a dense graph $\delta_0$:
* The first step is similar to that of our two-step procedure, i.e., we estimate an adaptive variational posterior distribution within models that contain the complete graph $\delta_C$.
Then, if there is no significant gap in the variational posterior mean estimates of the $L_1$-norms, we look for a (conservative) threshold $\eta_1$ corresponding to the first "slope change", and estimate a (dense) graph $\hat \delta$.
* In a second step, we compute the adaptive variational posterior distribution within models that contain $\hat \delta$ and re-estimate the $L_1$-norms of the functions.
If we now see a significant gap in the norms estimates, we choose a second threshold within that gap; otherwise, we look again for a slope change and pick a conservative threshold $\eta_2$ to compute a second graph estimate $\hat \delta_2$.
* In the third and last step, we repeat the second step with now our second graph estimate, $\hat \delta_2$.
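The three steps above alternate norm re-estimation under the current graph with re-thresholding, and can be sketched as the following driver loop. The `estimate_norms` callback, standing in for a variational fit restricted to the current graph, and the fixed threshold rule are hypothetical placeholders of ours.

```python
def refine_graph(estimate_norms, pick_threshold, K, n_steps=3):
    # Step 1 starts from the complete graph delta_C; each subsequent step
    # re-estimates the L1-norms under the current graph and re-thresholds.
    delta = [[1] * K for _ in range(K)]
    for _ in range(n_steps):
        S = estimate_norms(delta)          # variational fit (stubbed here)
        eta = pick_threshold(S)            # gap or slope-change heuristic
        delta = [[int(S[l][k] > eta) for k in range(K)] for l in range(K)]
    return delta

# Toy callback: pretend the fit always returns these norm estimates.
fake_norms = lambda delta: [[0.50, 0.04], [0.03, 0.70]]
delta_hat = refine_graph(fake_norms, lambda S: 0.15, K=2)
```

In the real procedure, re-estimating under a sparser graph changes the norm estimates, which is what lets the second and third steps sharpen the threshold.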
In Figure <ref>, we plot our estimates of the norms after each step of the previous procedure. In this case, we visually chose the thresholds $\eta_1=0.09$ and $\eta_2=0.18$ after the first and second steps respectively, using the slope-change heuristics. We note that this method indeed provides a conservative graph estimate in the first step, while the second step refines our estimate and approaches the true graph. Besides, we note that the large norms are inflated along the three steps of our procedure. Therefore, our method performs better in sparse settings, where a significant gap allows us to correctly infer the true graph $\delta_0$.
In conclusion, our simulations in low and high-dimensional settings, with different levels of sparsity in the graph, show that our two-step procedure is able to correctly select the graph parameter and the dimensionality of the process in sparse settings, and hence allows scaling up variational Bayes approaches to larger numbers of dimensions. Nonetheless, in moderately high-dimensional settings, the estimation of the parameter $f$ becomes sensitive to the difficulty of the problem. In particular, the performance depends on the graph sparsity, which determines the number of non-null functions to estimate, and, as we conjecture, on the number of global excursions in the data. Finally, we note that heuristic approaches for the choice of the threshold, needed to estimate the graph parameter, need to be further explored in noisier and denser settings.
K    Scenario     T     # events   # excursions   # local excursions   computing time (s)
2    Excitation   500       5680           2416                 1830                   19
     Inhibition   700       4800           2416                 1830                   18
4    Excitation   500      11338           2378                 1878                   41
     Inhibition   700       9895           2378                 1878                   39
8    Excitation   500      22514           1207                 1857                  151
     Inhibition   700      19746           1207                 1857                  134
16   Excitation   500      51246            200                 1784                  577
     Inhibition   700      37166            200                 1784                  494
32   Excitation   500      96803              4                 1824                 2147
     Inhibition   700      76106              4                 1824                 1386
64   Excitation   200     117862              0                 1481                 8176
     Inhibition   300     133200              0                 1481                 7583
Number of observed events, excursions, and computing times of Algorithm <ref> in the multivariate settings of Simulation 4.
Scenario     Graph    # Edges   # Events   # Excursions   # Local excursions
Excitation   Sparse   $2K-1$       24638            431                 1212
             Random   $3K-1$       27475            398                 1262
             Dense    $5K-6$       90788              2                 1432
Inhibition   Sparse   $2K-1$       22683            911                 1778
             Random   $3K-1$       24031            884                 1834
             Dense    $5K-6$       35291            547                 2170
Number of edges, observed events, and excursions in the different graph settings of Simulation 4 ($K=10$).
# dimensions   Scenario     Graph accuracy   Dimension accuracy    Risk
K = 2          Excitation             1.00                 1.00    0.79
               Inhibition             1.00                 1.00    0.35
K = 4          Excitation             1.00                 1.00    1.01
               Inhibition             1.00                 1.00    0.92
K = 8          Excitation             1.00                 1.00    2.10
               Inhibition             1.00                 1.00    2.12
K = 16         Excitation             1.00                 1.00    5.77
               Inhibition             1.00                 1.00    4.48
K = 32         Excitation             1.00                 0.97   10.57
               Inhibition             1.00                 1.00    8.53
K = 64         Excitation             1.00                 1.00   23.74
               Inhibition             1.00                 1.00   18.43
Performance of Algorithm <ref> in the multivariate settings of Simulation 4. We report the accuracy of our graph estimate $\hat \delta$ and the selected dimensionality of the interaction functions in the model-selection variational posterior, and the risk on the whole parameter $f$ defined in (<ref>).
Scenario     Graph    Graph accuracy   Dimension accuracy    Risk
Excitation   Sparse             1.00                 1.00    2.91
             Random             1.00                 1.00    4.00
             Dense              0.50                 1.00   17.67
Inhibition   Sparse             1.00                 1.00    2.62
             Random             0.99                 1.00    3.44
             Dense              1.00                 1.00    2.67
Performance of Algorithm <ref> in the different graph settings of Simulation 4 ($K=10$). We note that in the dense graph setting, there are more parameters to estimate, and therefore more non-null terms in the risk metric.
True graph parameter $\delta_0$ (black=0, white=1) in the sparse multivariate settings of Simulation 4, with the number of dimensions $K=2,4,8,16,32,64$.
True graph parameter $\delta_0$ (black=0, white=1) in the sparse, random, and dense settings of Simulation 4 with $K=10$ dimensions.
Heatmaps of the $L_1$-norms of the true parameter $h_0$, i.e., the entries of the matrix $S_0 = (S^0_{lk})_{l,k} = (\norm{h_{lk}^0}_1)_{l,k}$ (left column), and of the $L_1$-risk of the model-selection variational posterior obtained with Algorithm <ref>, i.e., $(\mathbb{E}_{\hat Q_{MV}}[\norm{h_{lk}^0 - h_{lk}}_1])_{l,k}$ (right column), in the Excitation scenario of Simulation 4. The rows correspond to $K=2,4,8,16,32,64$.
Estimated $L_1$-norms using the model-selection variational posterior obtained after the first step of Algorithm <ref>, plotted in increasing order, in the Excitation scenario of Simulation 4, for the models with $K=2,4,8,16, 32, 64$. In these settings, our threshold $\eta_0 = 0.15$ is included in the gap between the estimated norms close to 0 and far from 0; our gap heuristics therefore allows us to recover the true graph parameter (see Section <ref>).
Estimated $L_1$-norms using the model-selection variational posterior obtained after the first step of Algorithm <ref>, plotted in increasing order, in the different graph settings (sparse, random, and dense $\delta_0$, see Figure <ref>) and scenarios of Simulation 4 with $K=10$. We note that in the dense graph setting, although the norms are not very well estimated after the first step, the gap heuristics still allows us to recover the true graph parameter.
True graph $\delta_0$ and estimated $L_1$-norms using the model-selection adaptive variational posterior obtained after each step of our three-step procedure, proposed for the dense graph setting of Simulation 4. In Step 1 and Step 2, we plot the data-driven thresholds $\eta_1$ and $\eta_2$, chosen with a "slope change" heuristics.
Model-selection variational posterior distributions on $\nu_1$ (left column) and interaction functions $h_{11}$ and $ h_{21}$ (second and third columns) in the Excitation scenario and multivariate sigmoid models of Simulation 4, computed with our two-step mean-field variational (MF-VI) algorithm (Algorithm <ref>). The different rows correspond to different multivariate settings $K=2,4,8,16,32, 64$.
§.§ Simulation 5: Convergence of the two-step variational posterior for varying data set sizes.
In this experiment, we study the variation in performance of Algorithm <ref> with increasing lengths of the observation window, i.e., increasing numbers of data points. We consider multidimensional data sets with $K=10$, $T \in \{50, 200, 400, 800\}$, the same connectivity graph as in Simulation 4, and an Excitation and an Inhibition scenario. The numbers of events and excursions in each data set are reported in Table <ref> in Appendix <ref>.
We estimate the parameters using the model-selection variational posterior in Algorithm <ref> for each data set. From Table <ref>, we note that our graph estimator converges quickly to the true graph and the risk also decreases with the number of observations. We can also see from Figure <ref> that the estimation of the $L_1$-norms after the first step of the algorithm improves for larger $T$, leading to a bigger gap between the small and large norms.
Finally, in Figure <ref> (and Figure <ref> in Appendix), we plot the model-selection variational posterior and note that its mean gets closer to the ground-truth parameter and its credible set shrinks for larger $T$.
Scenario     T     Graph accuracy   Dimension accuracy   Risk
Excitation    50             1.00                 0.40   7.06
             200             1.00                 1.00   5.07
             400             1.00                 1.00   5.06
             800             1.00                 1.00   4.01
Inhibition    50             0.98                 0.40   8.61
             200             1.00                 1.00   4.30
             400             1.00                 1.00   3.96
             800             1.00                 1.00   2.91
Performance of Algorithm <ref> for the different data set sizes $T \in \{50,200,400,800\}$ in the scenarios of Simulation 5 with $K=10$. We note that the graph estimator quickly converges to the true graph $\delta_0$.
Estimated $L_1$-norms after the first step of Algorithm <ref>, for different observation lengths $T$, in the Excitation and Inhibition scenarios of Simulation 5 with $K = 10$.
We note that the norms are better estimated, after the first step of our algorithm, for larger $T$, leading to a larger gap between the small and large estimated norms, in both scenarios.
Model-selection adaptive variational posterior on a subset of background rates, $(\nu_1, \dots, \nu_5)$, for different observation lengths $T \in \{50,200,400, 800\}$, in the Excitation and Inhibition scenarios in Simulation 5 with $K=10$. The variational posterior behaves as expected in this simulation: as $T$ increases, its mean gets closer to the ground-truth parameter and its variance decreases.
§.§ Simulation 6: Robustness to mis-specification of the link function and the memory parameter.
In this experiment, we first test the robustness of our variational method, based on the sigmoid model parametrised by (<ref>) with $\xi = (0.0, 20.0, 0.2, 10.0)$, to mis-specification of the nonlinear link functions $(\phi_k)_k$. Specifically, we set $K=10$ and construct synthetic mis-specified data by simulating a Hawkes process where, for each $k$, the link $\phi_k$ is chosen as:
* ReLU: $\phi_k(x) = (x)_+$;
* Softplus: $\phi_k(x) = \log(1 + e^x)$;
* Mis-specified sigmoid, with unknown $\theta_k \overset{i.i.d.}{\sim} U([15,25])$.
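For reference, the three links can be written as follows. The role of $\xi$ in the sigmoid is our reading of the parametrisation, $\phi(x) = \xi_1 + \xi_2\,\sigma(\xi_3 (x - \xi_4))$, chosen to be consistent with $\phi_k(x) = 20\sigma(0.2(x-10))$ as stated at the end of this section; the exact definition is in the referenced equation.

```python
import math

def relu(x):
    return max(x, 0.0)

def softplus(x):
    # log(1 + e^x), computed stably for large |x|.
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def sigmoid_link(x, xi=(0.0, 20.0, 0.2, 10.0)):
    # Assumed parametrisation: xi1 + xi2 * sigma(xi3 * (x - xi4)).
    a, lam, s, x0 = xi
    return a + lam / (1.0 + math.exp(-s * (x - x0)))
```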
We also consider Excitation and Inhibition scenarios. Here, $T=300$ in all settings.
In Figure <ref>, we plot the estimated $L_1$-norms after the first step of Algorithm <ref> and note that there is still a gap in all settings and scenarios, although the norms are not well estimated in the case of the ReLU and softplus nonlinearities. The gaps allow us to estimate the connectivity graph parameter well, but the other parameters cannot be well estimated for these two links, as can be seen from the risks in Table <ref>. Nonetheless,
the sign of the interaction functions is well recovered in all settings.
Then, we test the robustness of our variational method to mis-specification of the memory parameter $A$, which is assumed to be known in our framework. We recall that $A$ corresponds to the upper bound of the support of the interaction functions. For this experiment, we generate data from the sigmoid Hawkes process with $K=10$ and ground-truth parameter $A_0=0.1$, in two sets of parameters corresponding to Excitation and Inhibition scenarios. Here, we set $T=500$ and apply our variational method (Algorithm <ref>) with $A \in \{0.05, 0.1, 0.2, 0.4\}$.
In Figure <ref>, we plot the estimated $L_1$-norms of the interaction functions, after the first step of Algorithm <ref>, when using the different values of $A$. We note that when $A$ is smaller than $A_0$, the norms of the non-null functions are underestimated, while if $A$ is larger than $A_0$, the norms are slightly overestimated. In all settings, the graph can still be well estimated with the gap heuristic (see Figure <ref> in Appendix). The model-selection variational posterior on a subset of the interaction functions is plotted in Figure <ref>. For $A=0.05=A_0/2$, only the first part of the functions can be estimated, while for $A>A_0$, the mean estimate is close to 0 on the upper part of the support. Nonetheless, in the latter case, the dimensionality of the true functions is not well recovered.
In conclusion, this experiment shows that our algorithm is robust to the mis-specification of the nonlinear link functions and the memory parameter, for estimating the connectivity graph and the sign of the interaction functions when the latter are either non-negative or non-positive. Nonetheless, the other parameters of the Hawkes model cannot be well recovered.
Estimated $L_1$-norms after the first step of Algorithm <ref>, in the mis-specified settings of Simulation 6. In this simulation, the link functions are set to $\phi_k(x) = 20\sigma(0.2(x - 10)), \forall k$, in our algorithm, while the data sets are generated from a Hawkes process with ReLU, softplus, or mis-specified sigmoid (mis-sigmoid) link functions, in Excitation and Inhibition scenarios. We note that for the ReLU and softplus links, the norms are not well estimated after the first step; nonetheless, our gap heuristic can still recover the true graph parameter.
Scenario     Link                    Graph accuracy   Dimension accuracy   Risk
Excitation   ReLU                    1.00             1.00                 49.58
Excitation   Softplus                1.00             1.00                 34.27
Excitation   Mis-specified sigmoid   1.00             1.00                 19.69
Inhibition   ReLU                    1.00             1.00                 59.95
Inhibition   Softplus                1.00             1.00                 33.94
Inhibition   Mis-specified sigmoid   0.99             1.00                 15.78
Performance of Algorithm <ref> for the different mis-specified settings and scenarios of Simulation 6 ($K=10$). We note that the graph parameter and the dimensionality are still recovered in these cases, although the other parameters cannot be well estimated, as can be seen from the large risks.
Estimated $L_1$-norms of the interaction functions after the first step of Algorithm <ref>, specified with different values of the memory parameter $A \in \{0.05, 0.1, 0.2, 0.4\}$, a set containing the true memory parameter $A_0=0.1$, in the scenarios of Simulation 6. In all cases, we still observe a gap, although the norms are under-estimated (resp. over-estimated) for $A=0.05$ (resp. $A=0.4$).
Model-selection variational posterior on the interaction functions $h_{66}$ and $h_{76}$ obtained with Algorithm <ref>, specified with different values of the memory parameter $A \in \{0.05, 0.1, 0.2, 0.4\}$, in the scenarios of Simulation 6 with $K=10$ and true memory parameter $A_0=0.1$. We note that the estimation of the interaction functions deteriorates when $A$ is mis-specified; however, the signs of the functions are still recovered.
§ DISCUSSION
In this paper, we proposed a novel adaptive variational Bayes method for sparse and high-dimensional Hawkes processes, and provided a general theoretical analysis of these methods. We notably obtained variational posterior concentration rates under easily verifiable conditions on the prior and the approximating family, which we checked for commonly used inference set-ups. Our general theory holds in particular in the sigmoid Hawkes model, for which we developed adaptive variational mean-field algorithms that improve on existing ones through their ability to infer the graph parameter and the dimensionality of the interaction functions. Moreover, we demonstrated on simulated data that our most computationally efficient algorithm is able to scale up to high-dimensional processes.
Nonetheless, our theory does not yet cover the high-dimensional setting with $K \to \infty$, which is of interest in applications of Hawkes processes to social network analysis and neuroscience. In this limit, previous works have considered sparse models [Cai et al., 2021, Bacry et al., 2020, Chen et al., 2017] and mean-field settings [Pfaffelhuber et al., 2022]. We would therefore be interested in extending our results to these models. Moreover, our empirical study shows that the credible sets of variational distributions do not always have good coverage, an observation that sometimes also holds for the posterior distribution. Therefore, it is left for future work to study the properties of (variational) posterior credible regions, and potentially to design post-processing methods for the latter to improve coverage in practice. Additionally, the thresholding approach for estimating the graph in our two-step adaptive variational procedure could be further explored, in particular in dense settings.
Finally, it would be of practical interest to develop variational algorithms beyond the sigmoid model, e.g., for the ReLU and softplus Hawkes models. While in the sigmoid model the conjugacy of the mean-field variational posterior under data augmentation leads to particularly efficient algorithms, it is unlikely that such convenient forms could be obtained for more general models. A potential approach for other models could be to parametrise variational families with normalising flows, as is done, for instance, for cut posteriors in [Carmona and Nicholls, 2022].
The project leading to this work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 834175). The project is also partially funded by the EPSRC via the CDT OxWaSP.
[Adams et al., 2009]
Ryan Prescott Adams, Iain Murray, and David J. C. MacKay.
Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities.
In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 9–16, New York, NY, USA, 2009. Association for Computing Machinery.
ISBN 9781605585161.
URL <https://doi.org/10.1145/1553374.1553376>.
[Arbel et al., 2013]
J. Arbel, G. Gayraud, and J. Rousseau.
Bayesian adaptive optimal estimation using a sieve prior.
Scand. J. Statist., 40:549–570, 2013.
[Bacry and Muzy, 2015]
Emmanuel Bacry and Jean-Francois Muzy.
Second order statistics characterization of Hawkes processes and non-parametric estimation, 2015.
[Bacry et al., 2020]
Emmanuel Bacry, Martin Bompaire, Stéphane Gaïffas, and Jean-Francois Muzy.
Sparse and low-rank multivariate Hawkes processes.
Journal of Machine Learning Research, 21(50):1–32, 2020.
[Bishop, 2006]
Christopher M. Bishop.
Pattern recognition and machine learning.
Information Science and Statistics. Springer, New York, 2006.
ISBN 978-0387-31073-2; 0-387-31073-8.
[Bonnet et al., 2021]
Anna Bonnet, Miguel Martinez Herrera, and Maxime Sangnier.
Maximum likelihood estimation for Hawkes processes with self-excitation or inhibition.
Statistics & Probability Letters, 179:109214, 2021.
[Bremaud and Massoulie, 1996]
Pierre Bremaud and Laurent Massoulie.
Stability of nonlinear Hawkes processes.
The Annals of Probability, 1996.
[Cai et al., 2021]
Biao Cai, Jingfei Zhang, and Yongtao Guan.
Latent network structure learning from high dimensional multivariate point processes, 2021.
[Carmona and Nicholls, 2022]
Chris U. Carmona and Geoff K. Nicholls.
Scalable semi-modular inference with variational meta-posteriors, 2022.
URL <https://arxiv.org/abs/2204.00296>.
[Carstensen et al., 2010]
Lisbeth Carstensen, Albin Sandelin, Ole Winther, and Niels R. Hansen.
Multivariate Hawkes process models of the occurrence of regulatory elements.
BMC Bioinformatics, 11(1):1–19, 2010.
[Chen et al., 2017a]
Shizhe Chen, Ali Shojaie, Eric Shea-Brown, and Daniela Witten.
The multivariate Hawkes process in high dimensions: beyond mutual excitation.
arXiv:1707.04928v2, 2017a.
[Chen et al., 2017b]
Shizhe Chen, Daniela Witten, and Ali Shojaie.
Nearly assumptionless screening for the mutually-exciting multivariate Hawkes process.
Electron. J. Stat., 11(1):1207–1234, 2017b.
ISSN 1935-7524.
URL <https://doi.org/10.1214/17-EJS1251>.
[Costa et al., 2020]
Manon Costa, Carl Graham, Laurence Marsalle, and Viet Chi Tran.
Renewal in Hawkes processes with self-excitation and inhibition.
Advances in Applied Probability, 52(3):879–915, 2020.
[Daley and Vere-Jones, 2007]
Daryl J. Daley and David Vere-Jones.
An introduction to the theory of point processes: volume II: general theory and structure.
Springer Science & Business Media, 2007.
[Deutsch and Ross, 2022]
Isabella Deutsch and Gordon J. Ross.
Bayesian estimation of multivariate Hawkes processes with inhibition and sparsity, 2022.
URL <https://arxiv.org/abs/2201.05009>.
[Donner and Opper, 2019]
Christian Donner and Manfred Opper.
Efficient Bayesian inference of sigmoidal Gaussian Cox processes, 2019.
[Donnet et al., 2020]
Sophie Donnet, Vincent Rivoirard, and Judith Rousseau.
Nonparametric Bayesian estimation for multivariate Hawkes processes.
Ann. Statist., 48(5):2698–2727, 2020.
ISSN 0090-5364.
URL <https://doi-org.proxy.bu.dauphine.fr/10.1214/19-AOS1903>.
[Eichler et al., 2017]
Michael Eichler, Rainer Dahlhaus, and Johannes Dueck.
Graphical modeling for multivariate Hawkes processes with nonparametric link functions.
Journal of Time Series Analysis, 38(2):225–242, 2017.
[Gerhard et al., 2017]
Felipe Gerhard, Moritz Deger, and Wilson Truccolo.
On the stability and dynamics of stochastic spiking neuron models: nonlinear Hawkes process and point process GLMs.
PLOS Computational Biology, 13:1–31, 02 2017.
URL <https://doi.org/10.1371/journal.pcbi.1005390>.
[Golub and Welsch, 1969]
Gene H. Golub and John H. Welsch.
Calculation of Gauss quadrature rules.
Mathematics of Computation, 23(106):221–230, 1969.
[Hansen et al., 2015]
Niels Richard Hansen, Patricia Reynaud-Bouret, and Vincent Rivoirard.
Lasso and probabilistic inequalities for multivariate point processes.
Bernoulli, 21(1):83–143, 2015.
ISSN 1350-7265.
URL <http://dx.doi.org/10.3150/13-BEJ562>.
[Hawkes, 1971]
Alan G. Hawkes.
Point spectra of some mutually exciting point processes.
Journal of the Royal Statistical Society: Series B (Methodological), 33(3):438–443, 1971.
[Hawkes, 2018]
Alan G. Hawkes.
Hawkes processes and their applications to finance: a review.
Quantitative Finance, 18(2):193–198, 2018.
URL <https://doi.org/10.1080/14697688.2017.1403131>.
[Kingman, 1993]
J. F. C. Kingman.
Poisson processes, volume 3 of Oxford Studies in Probability.
The Clarendon Press, Oxford University Press, New York, 1993.
ISBN 0-19-853693-3.
Oxford Science Publications.
[Lemonnier and Vayatis, 2014]
Remi Lemonnier and Nicolas Vayatis.
Nonparametric Markovian learning of triggering kernels for mutually exciting and mutually inhibiting multivariate Hawkes processes.
In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 161–176. Springer, 2014.
[Lu and Abergel, 2018]
Xiaofei Lu and Frédéric Abergel.
High-dimensional Hawkes processes for limit order books: modelling, empirical analysis and numerical calibration.
Quantitative Finance, 18(2):249–264, 2018.
[Malem-Shinitski et al., 2021]
Noa Malem-Shinitski, Cesar Ojeda, and Manfred Opper.
Nonlinear Hawkes process with Gaussian process self effects, 2021.
[Mei and Eisner, 2017]
Hongyuan Mei and Jason Eisner.
The neural Hawkes process: a neurally self-modulating multivariate point process, 2017.
[Mohler et al., 2011]
G. O. Mohler, M. B. Short, P. J. Brantingham, F. P. Schoenberg, and G. E. Tita.
Self-exciting point process modeling of crime.
Journal of the American Statistical Association, 106(493):100–108, 2011.
URL <https://doi.org/10.1198/jasa.2011.ap09546>.
[Nieman et al., 2021]
Dennis Nieman, Botond Szabo, and Harry van Zanten.
Contraction rates for sparse variational approximations in Gaussian process regression, 2021.
URL <https://arxiv.org/abs/2109.10755>.
[Ogata, 1999]
Yosihiko Ogata.
Seismicity analysis through point-process modeling: a review.
Seismicity patterns, their statistical significance and physical meaning, pages 471–507, 1999.
[Ohn and Lin, 2021]
Ilsang Ohn and Lizhen Lin.
Adaptive variational Bayes: optimality, computation and applications, 2021.
[Olinde and Short, 2020]
Jack Olinde and Martin B. Short.
A self-limiting Hawkes process: interpretation, estimation, and use in crime modeling.
In 2020 IEEE International Conference on Big Data (Big Data), pages 3212–3219, 2020.
[Pfaffelhuber et al., 2022]
Peter Pfaffelhuber, Stefan Rotter, and Jakob Stiefel.
Mean-field limits for non-linear Hawkes processes with excitation and inhibition.
Stochastic Processes and their Applications, 2022.
[Polson et al., 2012]
Nicholas G. Polson, James G. Scott, and Jesse Windle.
Bayesian inference for logistic models using Polya-Gamma latent variables, 2012.
URL <https://arxiv.org/abs/1205.0310>.
[Ray and Szabó, 2021]
Kolyan Ray and Botond Szabó.
Variational Bayes for high-dimensional linear regression with sparse priors.
Journal of the American Statistical Association, pages 1–12, January 2021.
URL <https://doi.org/10.1080>.
[Shen and Ghosal, 2015]
Weining Shen and Subhashis Ghosal.
Adaptive Bayesian procedures using random series priors.
Scandinavian Journal of Statistics, 42(4):1194–1213, 2015.
URL <https://onlinelibrary.wiley.com/doi/abs/10.1111/sjos.12159>.
[Sulem et al., 2021]
Deborah Sulem, Vincent Rivoirard, and Judith Rousseau.
Bayesian estimation of nonlinear Hawkes process, 2021.
[Titsias and Lázaro-Gredilla, 2011]
Michalis Titsias and Miguel Lázaro-Gredilla.
Spike and slab variational inference for multi-task and multiple kernel learning.
In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 24. Curran Associates, Inc., 2011.
[van der Vaart and van Zanten, 2009]
A. W. van der Vaart and J. H. van Zanten.
Adaptive Bayesian estimation using a Gaussian random field with inverse gamma bandwidth.
Ann. Statist., 37(5B):2655–2675, 2009.
ISSN 0090-5364.
URL <https://doi-org.proxy.bu.dauphine.fr/10.1214/08-AOS678>.
[Wang et al., 2016]
Yichen Wang, Bo Xie, Nan Du, and Le Song.
Isotonic Hawkes processes.
In Proceedings of the 33rd International Conference on Machine Learning, ICML'16, pages 2226–2234. JMLR.org, 2016.
[Zhang and Gao, 2020]
Fengshuo Zhang and Chao Gao.
Convergence rates of variational posterior distributions.
The Annals of Statistics, 48(4):2180–2207, 2020.
[Zhang et al., 2020]
Rui Zhang, Christian Walder, and Marian-Andrei Rizoiu.
Variational inference for sparse Gaussian process modulated Hawkes process.
Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):6803–6810, April 2020.
ISSN 2159-5399.
URL <http://dx.doi.org/10.1609/aaai.v34i04.6160>.
[Zhou et al., 2020]
Feng Zhou, Zhidong Li, Xuhui Fan, Yang Wang, Arcot Sowmya, and Fang Chen.
Efficient inference for nonparametric Hawkes processes using auxiliary latent variables.
Journal of Machine Learning Research, 21(241):1–31, 2020.
URL <http://jmlr.org/papers/v21/19-930.html>.
[Zhou et al., 2021a]
Feng Zhou, Quyu Kong, Yixuan Zhang, Cheng Feng, and Jun Zhu.
Nonlinear Hawkes processes in time-varying system, 2021a.
[Zhou et al., 2021b]
Feng Zhou, Yixuan Zhang, and Jun Zhu.
Efficient inference of flexible interaction in spiking-neuron networks, 2021b.
[Zhou et al., 2022]
Feng Zhou, Quyu Kong, Zhijie Deng, Jichao Kan, Yixuan Zhang, Cheng Feng, and Jun Zhu.
Efficient inference for dynamic flexible interactions of neural populations.
Journal of Machine Learning Research, 23(211):1–49, 2022.
URL <http://jmlr.org/papers/v23/21-1273.html>.
§ MEAN-FIELD AND MODEL-SELECTION VARIATIONAL INFERENCE
In this section, we first recall some general notions on mean field variational Bayes and model selection variational Bayes, then present additional details on the construction of variational families in the case of multivariate Hawkes processes.
§.§ Mean-field approximations
In a general inference context, when the parameter of interest, say $\vartheta$, is decomposed into $D$ blocks, $\vartheta = (\vartheta_1, \dots, \vartheta_D)$ with $ D > 1$, a common choice of variational class is a mean-field family
that can be defined as
\mathcal{V}_{MF} = \left \{Q ; \: dQ(\vartheta) = \prod_{d=1}^D dQ_d(\vartheta_d) \right \}.
In this case, the mean-field variational posterior distribution corresponds to $\hat Q = \arg \min_{Q \in \mathcal{V}_{MF}} KL \left(Q|| \Pi(.|N)\right) = \prod_{d=1}^D \hat Q_d.$ Note that the mean-field family removes some dependencies between blocks of coordinates of the parameter in the approximated posterior distribution.
Assuming that the mean-field variational posterior distribution has a density with respect to a dominating measure $\mu = \prod_d \mu_d$, with a slight abuse of notation we denote by $\hat Q$ both the distribution and its density with respect to $\mu$. An interesting result from [Bishop, 2006] is that the mean-field variational posterior distribution verifies, for each $d \in [D]$,
\begin{align}\label{eq:var_post_factor}
\hat Q_d(\vartheta_d) \propto \exp \left \{ \mathbb{E}_{\hat Q_{-d}} [\log p(\vartheta,N ) ] \right \},
\end{align}
where $p(\vartheta, N )$ is the joint density of the observations and the parameter with respect to $\prod_d \mu_d \times \mu_N$, with $\mu_N$ a dominating measure for the data, and $\hat Q_{-d} := \prod_{d' \neq d} \hat Q_{d'} $.
This property (<ref>) can be used to design efficient algorithms for computing the variational posterior, such as the coordinate-ascent variational inference algorithm.
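To illustrate both the factorized update (<ref>) and the loss of dependencies mentioned above, the following sketch runs coordinate-ascent (CAVI) for a mean-field Gaussian approximation of a correlated bivariate Gaussian target; this is the standard textbook example from [Bishop, 2006], not part of the Hawkes setting.

```python
import numpy as np

# Mean-field (factorized) Gaussian approximation of a correlated 2-D target
# p = N(mu, Sigma).  The optimal factors are q_d = N(m_d, 1/Lambda_dd) with
# Lambda = Sigma^{-1} [Bishop, 2006], and CAVI iterates the fixed-point
# equations for the means m_d.
mu = np.array([1.0, -2.0])
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
Lambda = np.linalg.inv(Sigma)

m = np.zeros(2)                              # initial variational means
for _ in range(50):                          # coordinate-ascent sweeps
    m[0] = mu[0] - Lambda[0, 1] / Lambda[0, 0] * (m[1] - mu[1])
    m[1] = mu[1] - Lambda[1, 0] / Lambda[1, 1] * (m[0] - mu[0])

var_q = 1.0 / np.diag(Lambda)
# The means are recovered exactly, but the factorized family underestimates
# the marginal variances: the price of removing dependencies between blocks.
```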
In a general setting where the log-likelihood function of the nonlinear Hawkes model can be augmented with some latent variable $z \in \mathcal{Z}$ (see for instance [Zhou et al., 2021, Zhou et al., 2022, Malem-Shinitski et al., 2021]), with $\mathcal{Z}$ the latent parameter space, the augmented log-likelihood $L_T^A(f,z)$ leads to an augmented posterior distribution, defined as
\begin{align*}
\Pi_A(B |N) = \frac{\int_{B} \exp(L_T^A(f,z)) d(\Pi(f) \times \mathbb{P}_{A}(z))}{\int_{\mathcal{F} \times \mathcal{Z}} \exp(L_T^A(f,z)) d(\Pi(f) \times \mathbb{P}_{A}(z))}, \quad B \subset \mathcal{F} \times \mathcal{Z},
\end{align*}
where $\mathbb{P}_{A}$ is a prior distribution on $z$, with a density with respect to a dominating measure $\mu_z$. Recalling the mean-field variational family from Section <ref>, defined as
\begin{align*}
\mathcal{V}_{AMF} = \left \{Q: \mathcal{F} \times \mathcal{Z} \to [0,1] ; \: Q(f, z) = Q_1(f)Q_2(z) \right \},
\end{align*}
the augmented mean-field variational posterior corresponds to
\begin{align}\label{eq:mean-field-vp}
\hat Q_{AMF}(f,z) := \arg \min_{Q \in \mathcal{V}_{AMF}} KL \left(Q(f,z) || \Pi_A(f,z |N) \right) =: \hat Q_1(f) \hat Q_2(z),
\end{align}
and, using property (<ref>), verifies
\begin{align} \label{eq:var_factor_1}
\hat Q_{1}(f) \propto \exp \left \{ \mathbb{E}_{\hat Q_{2}} [\log p(f, z, N ) ] \right \},\quad
\hat Q_{2}(z) \propto \exp \left \{ \mathbb{E}_{\hat Q_{1}} [\log p(f, z,N ) ] \right \},
\end{align}
where $p(f, z,N)$ is the joint density of the parameter, the latent variable, and the observations with respect to the measure $\prod_d \mu_d \times \mu_z \times \mu_N$.
§.§ Model-selection variational posterior
In this section, we present two model-selection variational approaches to approximate the posterior by an adaptive variational posterior distribution. We recall from our construction in Section <ref> that the parameter $f$ of the Hawkes process is indexed by a model $m$ of hyperparameters of the form $m=(\delta, J_{lk}, (l,k) \in \mathcal I(\delta))$, where $\mathcal I(\delta) = \{ (l,k);\, \delta_{lk}=1\}$ is the set of non-null functions.
In a model-selection variational approach, one can consider a set of candidate models $\mathcal M$ and, for any $m\in\mathcal M$, a class of variational distributions on $f$ with model $m$, denoted $\mathcal{V}^m$. One can then define the total variational class as $\mathcal{V} = \cup_{m \in \mathcal M} \{ \{m\}\times \mathcal{V}^m \}$, which contains distributions on $f$ localised on one model. Given $\mathcal{V}$, and as shown for instance in [Zhang and Gao, 2020], the variational posterior distribution has the form
\begin{align*} %\label{eq:ms_var_post}
\hat Q &:= \hat Q_{\hat m}, \quad \hat m := \arg \max_{m \in \mathcal{M}} ELBO(\hat Q^m),
\end{align*}
where $\hat Q^m = \arg \min_{Q \in \mathcal{V}^m} KL(Q||\Pi(.|N))$ and $ELBO(\cdot)$ is the evidence lower bound, defined as
\begin{align}\label{eq:elbo_1}
ELBO(Q) &:= \mathbb{E}_{ Q} \left[ \log \frac{p(f,z, N)}{Q(f,z) }\right], \quad Q \in \mathcal{V}.
\end{align}
The ELBO is a lower bound on the log marginal likelihood $\log p(N)$.
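This bound can be checked in closed form on a toy conjugate model (illustrative, unrelated to the Hawkes likelihood): with prior $\theta \sim \mathcal{N}(0,1)$, one observation $x \mid \theta \sim \mathcal{N}(\theta, 1)$, and variational family $q = \mathcal{N}(m, s^2)$, the ELBO never exceeds $\log p(x)$ and is tight at the exact posterior $\mathcal{N}(x/2, 1/2)$:

```python
import numpy as np

# Toy conjugate model (illustrative, not the Hawkes likelihood):
# theta ~ N(0, 1), x | theta ~ N(theta, 1), variational family q = N(m, s2).
def elbo(m, s2, x):
    e_loglik = -0.5 * np.log(2 * np.pi) - 0.5 * ((x - m) ** 2 + s2)  # E_q[log p(x|theta)]
    e_logprior = -0.5 * np.log(2 * np.pi) - 0.5 * (m ** 2 + s2)      # E_q[log p(theta)]
    entropy = 0.5 * np.log(2 * np.pi * np.e * s2)                    # -E_q[log q]
    return e_loglik + e_logprior + entropy

x = 1.3
log_evidence = -0.5 * np.log(2 * np.pi * 2.0) - x ** 2 / 4.0         # p(x) = N(0, 2)

# The bound is tight exactly at the true posterior N(x/2, 1/2).
gap_at_posterior = log_evidence - elbo(x / 2.0, 0.5, x)
```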
An alternative model-selection variational approach consists in constructing a model-averaging variational posterior, also called adaptive in [Ohn and Lin, 2021], as a mixture of distributions over the different models, i.e.,
\begin{align}\label{eq:adap_var_post}
\hat Q = \sum_{m \in \mathcal{M}} \hat \gamma_{m} \hat Q_m,
\end{align}
where $ \{\hat \gamma_{m}\}_{m \in \mathcal{M}} $ are marginal probabilities defined as
\begin{align}
\hat \gamma_{m} = \frac{ \Pi_m(m) \exp \left \{ ELBO(\hat Q_m) \right \}}{\sum_{m' \in \mathcal{M}}\Pi_m(m') \exp \left \{ ELBO(\hat Q_{m'}) \right \}}, \quad \forall m \in \mathcal{M}.
\end{align}
In this strategy, the approximating family of distributions corresponds to
\begin{align*}
\mathcal{V} = \left \{ \sum_{m \in \mathcal{M}} \alpha_{m} Q_m; \sum_{m} \alpha_{m} = 1, \: \alpha_m \geq 0, \: Q_m \in \mathcal{V}^m, \: \forall m \right\}.
\end{align*}
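In practice, the weights $\hat\gamma_m$ are a softmax of log prior mass plus ELBO over the candidate models; since ELBO values are typically large negative numbers, they should be normalised with the log-sum-exp trick. A minimal sketch, with illustrative numbers:

```python
import numpy as np

# Softmax of log prior mass + ELBO over candidate models, computed with the
# log-sum-exp trick; the prior masses and ELBO values below are illustrative.
def model_weights(log_prior, elbos):
    s = np.asarray(log_prior) + np.asarray(elbos)
    s = s - s.max()               # stabilise the exponentials
    w = np.exp(s)
    return w / w.sum()

gamma = model_weights(np.log([0.5, 0.3, 0.2]), [-1032.5, -1030.0, -1045.7])
```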
§ DATA AUGMENTATION IN THE SIGMOID HAWKES MODEL
In this section, we recall the latent variable augmentation strategy and the definition of the augmented mean-field variational distribution in sigmoid-type Hawkes processes, proposed in previous work [Zhou et al., 2022, Malem-Shinitski et al., 2021]. In our method in Section <ref>, we use this construction to efficiently compute an approximated posterior distribution on $\mathcal{F}_m \subset \mathcal{F}$, on parameters $f$ within a model $m=(\delta, J_{lk}; (l,k) \in \mathcal I(\delta))$.
The first data augmentation step consists in re-writing the sigmoid function as a mixture of Polya-Gamma random variables [Polson et al., 2012], i.e.,
\begin{align}\label{eq:sigmoid}
\sigma(x) = \mathbb{E}_{\omega \sim p_{PG}(.;1,0)}\left[e^{g(\omega,x)}\right] = \int_0^{+\infty} e^{g(\omega,x)} p_{PG}(\omega;1,0) d\omega, \quad g(\omega,x) = - \frac{\omega x^2}{2} + \frac{x}{2} - \log 2,
\end{align}
with $p_{PG}(.;1,0)$ the Polya-Gamma density. We recall that $p_{PG}(.;1,0)$ is the density of the random variable
\begin{align*}
\frac{1}{2\pi^2} \sum_{k=1}^\infty \frac{g_k}{(k-1/2)^2}, \quad g_k \overset{\mathrm{i.i.d.}}{\sim} Gamma(1,1),
\end{align*}
and that the tilted Polya-Gamma distribution is defined as
\begin{align*}
p_{PG}(\omega;1,c) = \cosh{\left(\frac{c}{2}\right)} \exp \left \{- \frac{c^2 \omega}{2} \right \} p_{PG}(\omega;1,0), \quad c\geq 0,
\end{align*}
where $\cosh$ denotes the hyperbolic cosine function.
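The series representation above suggests a simple (truncated, hence approximate) sampler for $PG(1,0)$, which can be used to check both the known mean $\mathbb{E}[\omega] = 1/4$ and the identity (<ref>) by Monte Carlo; a sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pg10(n, trunc=200):
    # Truncated version of the infinite Gamma series defining PG(1,0);
    # exact only in the limit trunc -> infinity.
    k = np.arange(1, trunc + 1)
    g = rng.gamma(1.0, 1.0, size=(n, trunc))
    return (g / (k - 0.5) ** 2).sum(axis=1) / (2.0 * np.pi ** 2)

omega = sample_pg10(50_000)

# Check sigma(x) = E[exp(g(omega, x))] with g(omega, x) = -omega x^2/2 + x/2 - log 2.
x = 1.7
sigma_mc = np.mean(np.exp(-omega * x ** 2 / 2.0 + x / 2.0 - np.log(2.0)))
sigma_exact = 1.0 / (1.0 + np.exp(-x))
```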
With a slight abuse of notation, we re-define the linear intensity (<ref>) as
\tilde{\lambda}^k_t(f) =\alpha \left( \nu_k + \sum_{l=1}^{K} \int_{-\infty}^{t^-} h_{lk}(t-s)dN_s^l - \eta \right),
so that we have $\lambda^k_t(f) = \theta_k \sigma(\tilde{\lambda}^k_t(f) ), t\in \R$.
For any $k \in [K]$, let $ N_k := N^k[0,T]$ and $T_1^k, \dots, T_{N_k}^k \in [0,T]$
be the times of events at component $N^k$.
Now, let $\omega = (\omega_i^k)_{k\in [K], i \in [N_k]}$ be a set of latent variables such that
\begin{align*}
\omega_i^k \overset{\mathrm{i.i.d.}}{\sim} p_{PG}( \cdot;1,0), \quad i \in [N_k], \quad k \in [K].
\end{align*}
Then, using (<ref>), an augmented log-likelihood function can be defined as
\begin{align}\label{eq:aug_loglik}
L_T(f, \omega ;N) &= \sum_{k \in [K]} \left \{ \sum_{i \in [N_k]} \left( \log \lsup_k + g(\omega_i^k, \Tilde{\lambda}_{T_i^k}(f)) + \log p_{PG}( \omega_i^k; 1, 0) \right) - \int_0^T \lsup_k \sigma(\Tilde{\lambda}^k_{t}(f)) dt \right \},
\end{align}
and, using that $\sigma(x) = 1 - \sigma(-x)$, the integral term on the RHS in (<ref>) can be re-written as
\begin{align*}
\int_0^T \lsup_k \sigma(\Tilde{\lambda}^k_{t}(f)) dt &= \int_0^T \int_0^\infty \lsup_k \left[ 1 - e^{g(\bar \omega,- \Tilde{\lambda}^k_{t}(f))}\right] p_{PG}(\bar \omega;1,0) d \bar \omega dt.
\end{align*}
Secondly, Campbell's theorem [Daley and Vere-Jones, 2007, Kingman, 1993] is applied. We first recall here its general formulation. For a Poisson point process $\bar Z$ on a space $ \mathcal{X}$ with intensity measure $\Lambda: \mathcal{X} \to \R^+$, and for any function $\zeta: \mathcal{X} \to \R$, it holds true that
\begin{align}\label{eq:campbell}
\Ex{\prod_{x \in \bar Z} e^{\zeta(x)}} = \exp \left \{ \int (e^{\zeta(x)} - 1) \Lambda(dx)\right \}.
\end{align}
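Formula (<ref>) can also be verified numerically on a toy example (a homogeneous Poisson process on $[0,1]$ with rate $\lambda = 3$ and $\zeta(x) = -x$; illustrative values only, independent of the Hawkes setting):

```python
import numpy as np

# Monte Carlo check of Campbell's formula for a homogeneous Poisson process
# on [0, 1] with rate lam and test function zeta(x) = -x, for which
# E[prod_x e^{zeta(x)}] = exp( lam * int_0^1 (e^{-x} - 1) dx ).
rng = np.random.default_rng(1)
lam, n_rep = 3.0, 100_000

counts = rng.poisson(lam, size=n_rep)
vals = np.empty(n_rep)
for r in range(n_rep):
    pts = rng.uniform(0.0, 1.0, size=counts[r])   # points of one realisation
    vals[r] = np.exp(-pts.sum())                  # product of e^{zeta(x)}

mc = vals.mean()
exact = np.exp(lam * ((1.0 - np.exp(-1.0)) - 1.0))
```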
Therefore, considering for each $k$ a marked Poisson point process $\bar Z^k$ on $\mathcal{X} = [0,T] \times \R^+$ with intensity measure $\Lambda^k(t, \omega) = \lsup_k p_{PG}( \omega;1,0)$ and distribution $\mathbb{P}_{\bar Z}$, and applying Campbell's theorem with $\zeta(t, \omega ) := g(\omega, -\Tilde{\lambda}^k_{t}(f))$, one obtains that
\begin{align*}
\Ex{\prod_{(\bar T_j^k, \bar \omega_j^k) \in \bar Z^k}e^{g(\bar \omega_j^k, -\Tilde{\lambda}^k_{\bar T_j^k}(f))}} &= \exp \left \{ \int_0^T \int_0^\infty \lsup_k \left(e^{g(\bar \omega, - \Tilde{\lambda}^k_{t}(f))} - 1\right) p_{PG}(\bar \omega;1,0) d \bar \omega dt \right \}.
\end{align*}
Conditionally on $N$, let $\bar Z := (\bar Z^1, \dots, \bar Z^K)$ be an observation of the previous Poisson point processes on $[0,T]$. For each $k \in [K]$, we denote $\bar Z_k := \bar Z^k[0,T]$, by $(\bar T_1^k, \bar \omega_1^k), \dots, (\bar T_{\bar Z_k}^k, \bar \omega_{\bar Z_k}^k) \in [0,T] \times \R_+$ the times and marks of $\bar Z^k$, and by $z := (\omega, \bar Z)$ the set of augmented variables. Then, replacing the integral term in (<ref>) by a product over the observation $\bar Z$, the doubly augmented log-likelihood function corresponds to
\begin{align*}
L_T(f,\omega, \bar Z; N) &= \sum_{k \in [K]} \left \{ \sum_{i \in [N_k]} \left [ \log \lsup_k + g(\omega_i^k, \Tilde{\lambda}_{T_i^k}(f)) + \log p_{PG}( \omega_i^k; 1, 0) \right]\right.\\ &\left.\hspace{1cm}+ \sum_{j \in [\bar Z_k]} \left [ \log \lsup_k + g(\bar \omega_j^k, -\Tilde{\lambda}_{\bar T_j}(f)) + \log p_{PG}(\bar \omega_j^k; 1, 0) \right] -\lsup_k T\right \}.
\end{align*}
The previous augmented log-likelihood function, together with the prior distribution $\Pi$ on the parameter and the distribution $\mathbb{P}_A = p_{PG}(.|1,0) \times \mathbb{P}_{\bar Z}$ on the latent variables, allows us to construct an augmented posterior distribution proportional to
\begin{align}\label{eq:augm_posterior}
\Pi(f, \omega, \bar Z |N)
&\propto \prod_k \left \{ \prod_{i \in [N_k]} \lsup_k e^{g(\omega_i^k, \Tilde{\lambda}_{T_i^k}(f))}p_{PG}( \omega_i^k; 1, 0) \times \prod_{j \in [\bar Z_k]} \lsup_k e^{g(\bar \omega_j^k, -
\Tilde{\lambda}_{\bar T_j}(f))}p_{PG}(\bar \omega_j^k; 1, 0) \right \} \times \Pi(f).
\end{align}
§ ANALYTICAL DERIVATION IN THE SIGMOID HAWKES MODEL
§.§ Mean-field updates in a fixed model
In this section, we derive the analytic forms of the conditional updates in Algorithm <ref>, the mean-field variational algorithm with fixed dimensionality described in Section <ref>. For ease of exposition, in this section we consider a model $m$ and a dimension $k$, and we drop the indices $k$ and $m$; e.g., we use the notation $Q_1, Q_2$ for the variational factors. In the following computations, we use the notation $c$ to denote a generic constant whose value can vary from one line to the next. For simplicity, we also assume that $J := J_1 = \dots = J_K$, and we recall that $\phi_k(x) = \theta_k \sigma(\alpha(x-\eta))$.
From the definition of the augmented posterior (<ref>), we first note that
\begin{align}\label{eq:joint_dist}
\log p(f, N, \omega, \bar Z ) &= \log \Pi(f,\omega, \bar Z| N) + \log p(N) = L_T(f,\omega, \bar Z; N) + \log \Pi(f) + \log p(N) + c \nonumber \\
&= \log p(\omega | f, N) + \log p(\bar Z|f,N) + \log \Pi(f) + \log p(N) + c.
\end{align}
In the previous equality we have used the facts that $p(\omega | f, N, \bar Z) = p(\omega | f, N)$ and $p(\bar Z|f,N, \omega) = p(\bar Z|f,N)$. We recall our notation $H(t) = (H^0(t), H^1(t), \dots, H^K(t)) \in \R^{KJ + 1}, \: t \in \R$, where for $k \in [K]$, $H^k(t) = (H_j^k(t))_{j=1, \dots, J}$ and $H_j^k(t)$ is defined in (<ref>). In the following, $H(t)$ denotes $H^k(t)$ for the chosen $k$. We have that
\begin{align*}
\mathbb{E}_{Q_2} [\log p(\omega|f, N) ]
&= \mathbb{E}_{Q_2} \left[ \sum_{i \in [N]} g(\omega_i, \Tilde{\lambda}_{T_i}(f)) \right] + c = \mathbb{E}_{Q_2} \left[\sum_{i \in [N]} - \frac{\omega_i \Tilde{\lambda}_{T_i}(f)^2}{2} + \frac{ \Tilde{\lambda}_{T_i}(f)}{2} \right] + c \\
&= \mathbb{E}_{Q_2} \left[ \sum_{i \in [N]} - \frac{\omega_i \alpha^2 ( f^T H(T_i) H(T_i)^T f - 2 \eta H(T_i)^T f + \eta^2)}{2} + \frac{ \alpha H(T_i)^T f }{2} \right] + c \\
&= \mathbb{E}_{Q_2} \left[ - \frac{1}{2} \sum_{i \in [N]} \left \{ \omega_i \alpha^2 f^T H(T_i) H(T_i)^T f - \alpha (2 \omega_i \alpha \eta + 1 ) H(T_i)^T f + \omega_i \alpha^2 \eta^2 \right \} \right] + c \\
&=- \frac{1}{2} \sum_{i \in [N]} \left \{\mathbb{E}_{Q_2}[\omega_i] \alpha^2 f^T H(T_i) H(T_i)^T f - \alpha (2 \mathbb{E}_{Q_2}[\omega_i] \alpha \eta + 1 ) H(T_i)^T f + \mathbb{E}_{Q_2}[\omega_i] \alpha^2 \eta^2 \right \} + c.
\end{align*}
Moreover, we also have that
\begin{align*}
\mathbb{E}_{Q_2} [\log p(\bar Z|f, N) ]
&= \mathbb{E}_{Q_2} \left[ - \frac{1}{2} \sum_{j \in [\bar Z]} \left \{ \bar \omega_j \alpha^2 f^T H(\bar T_j) H(\bar T_j)^T f - \alpha (2 \bar \omega_j \alpha \eta - 1) H(\bar T_j)^T f + \bar \omega_j \alpha^2 \eta^2 \right \} \right] + c \\
&= \int_{0}^T \int_0^{\infty} \left[ - \frac{1}{2} \left(\bar \omega \alpha^2 f^T H(t) H(t)^T f - \alpha (2 \bar \omega \alpha \eta - 1 ) H(t)^T f + \bar \omega \alpha^2 \eta^2 \right) \right] \Lambda(t,\bar \omega) d\bar \omega dt + c \\
&= - \frac{1}{2} \left[f^T \left( \alpha^2 \int_0^T \int_0^{\infty} \bar \omega H(t) H(t)^T \Lambda(t,\bar \omega) d\bar \omega dt \right) f \right.\\ &\hspace{1cm}\left.- f^T \left( \alpha \int_{0}^T \int_0^{\infty} (2 \bar \omega \alpha \eta - 1 ) H(t)^T \Lambda(t,\bar \omega) d\bar \omega dt \right) \right] + c.
\end{align*}
In addition, for the prior term we have
\begin{align*}
\mathbb{E}_{Q_2} [\log \Pi(f) ] = - \frac{1}{2} f^T \Sigma^{-1} f + f^T \Sigma^{-1} \mu + c.
\end{align*}
Therefore, using (<ref>), we obtain that
\begin{align*}
\log Q_1(f) &= - \frac{1}{2} \left[f^T \left( \alpha^2 \sum_{i \in [N]} \mathbb{E}_{Q_2}[\omega_i] H(T_i) H(T_i)^T + \alpha^2 \int_0^T \int_0^{\infty} \bar \omega H(t) H(t)^T \Lambda(t,\bar \omega) d\bar \omega dt + \Sigma^{-1 }\right) f \right. \\
&- \left. f^T \left( \alpha \sum_{i \in [N]} (2 \mathbb{E}_{Q_2}[\omega_i] \alpha \eta + 1 ) H(T_i)^T + \alpha \int_{0}^T \int_0^{\infty} (2 \bar \omega \alpha \eta - 1) H(t)^T \Lambda(t,\bar \omega) d\bar \omega dt + 2\Sigma^{-1 } \mu \right) \right] + c \\
&=: - \frac{1}{2} (f - \tilde \mu)^T \tilde{\Sigma}^{-1} (f - \tilde \mu) + c,
\end{align*}
Hence $Q_1(f)$ is a normal distribution with mean vector $\tilde \mu$ and covariance matrix $\tilde{\Sigma}$ given by
\begin{align}\label{eq:vi_updates}
&\tilde{\Sigma}^{-1} = \alpha^2 \sum_{i \in [N]} \mathbb{E}_{Q_2}[\omega_i] H(T_i) H(T_i)^T + \alpha^2 \int_0^T \int_0^{\infty} \bar \omega H(t) H(t)^T \Lambda(t,\bar \omega) d\bar \omega dt + \Sigma^{-1 }, \\
&\tilde{\mu} = \frac{1}{2 } \tilde{\Sigma} \left[ \alpha \sum_{i \in [N]} (2 \mathbb{E}_{Q_2}[\omega_i] \alpha \eta + 1 ) H(T_i)^T + \alpha \int_{0}^T \int_0^{\infty} (2 \bar \omega \alpha \eta - 1 ) H(t)^T \Lambda(t,\bar \omega) d\bar \omega dt + 2\Sigma^{-1 } \mu \right].
\end{align}
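The update above can be implemented directly once the event sums and the time integrals are available. Below is a minimal NumPy sketch, assuming the latent-process integrals have already been reduced (e.g., by quadrature over a time grid) to a matrix `A_bar` $= \int\int \bar\omega H H^T \Lambda\, d\bar\omega\, dt$ and a vector `b_bar` $= \int\int (2\bar\omega\alpha\eta - 1) H \Lambda\, d\bar\omega\, dt$; all function and argument names are illustrative, not taken from the paper's code.

```python
import numpy as np

def q1_update(H, Ew, A_bar, b_bar, alpha, eta, Sigma, mu):
    """Coordinate-ascent update for Q_1(f) = N(mu_t, Sigma_t).

    H     : (N, d) array, rows are H(T_i)^T at the observed events
    Ew    : (N,) array of E_{Q_2}[omega_i]
    A_bar : (d, d) precomputed latent-process matrix integral
    b_bar : (d,) precomputed latent-process vector integral
    """
    Sigma_inv = np.linalg.inv(Sigma)
    # Precision: alpha^2 sum_i E[omega_i] H_i H_i^T + alpha^2 A_bar + Sigma^{-1}
    prec = alpha**2 * (H * Ew[:, None]).T @ H + alpha**2 * A_bar + Sigma_inv
    Sigma_t = np.linalg.inv(prec)
    # Linear term: alpha sum_i (2 E[omega_i] alpha eta + 1) H_i + alpha b_bar + 2 Sigma^{-1} mu
    b = alpha * ((2 * alpha * eta * Ew + 1)[:, None] * H).sum(axis=0) \
        + alpha * b_bar + 2 * Sigma_inv @ mu
    mu_t = 0.5 * Sigma_t @ b
    return mu_t, Sigma_t
```

In practice one would use a Cholesky solve rather than explicit inverses; the explicit form above is kept to mirror the displayed equations.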
For $Q_2(\omega, \bar Z)$, we first note that using (<ref>) and (<ref>), we have $Q_2(\omega, \bar Z) = Q_{21}(\omega) Q_{22} (\bar Z)$. Using the same computation as in [Donner and Opper, 2019], Appendices B and D, one can then show that
\begin{align*}
Q_{21}(\omega) &= \prod_{i \in [N]} p_{PG}(\omega_i|1, \underline{\lambda}_{T_i}), \\
\underline{\lambda}_{t} &= \sqrt{ \mathbb{E}_{Q_1}[\Tilde{\lambda}_t(f)^2]} = \alpha \sqrt{ H(t)^T \Tilde \Sigma H(t) + (H(t)^T \Tilde \mu)^2 - 2 \eta H(t)^T \Tilde \mu + \eta^2 }, \quad \forall t \in [0,T],
\end{align*}
and that $Q_{22}$ is a marked Poisson point process measure on $[0,T] \times \R^+$ with intensity
\begin{align*}
\Lambda(t,\bar \omega) &= \lsup e^{\mathbb{E}_{Q_1}[g(\bar \omega, -\Tilde{\lambda}_t(f))]} p_{PG}(\bar \omega; 1,0) = \lsup \frac{\exp(-\frac{1}{2}\mathbb{E}_{Q_1}[\Tilde{\lambda}_t(f)])}{2\cosh \frac{ \underline{\lambda}_{t}}{2}} p_{PG}(\bar \omega|1, \underline{\lambda}_{t}) \\
&=\lsup \sigma(- \underline{\lambda}_{t}) \exp \left \{ \frac{1}{2} ( \underline{\lambda}_{t} - \mathbb{E}_{Q_1}[\Tilde{\lambda}_t(f)]) \right \} p_{PG}(\bar \omega|1, \underline{\lambda}_{t}) \\
\mathbb{E}_{Q_1}[\Tilde{\lambda}_t(f)] &= \alpha (H(t)^T\tilde{\mu} - \eta).
\end{align*}
Therefore, we have that
\begin{align*}
\mathbb{E}_{Q_2}[\omega_i] = \frac{1}{2 \underline{\lambda}_{T_i}} \tanh \left( \frac{ \underline{\lambda}_{T_i}}{2} \right), \quad \forall i \in [N].
\end{align*}
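This is the mean of a $PG(1, \underline{\lambda}_{T_i})$ variable. A small helper (a sketch; the function name is ours) makes the $c \to 0$ limit explicit, since $\tanh(c/2)/(2c) \to 1/4$:

```python
import math

def pg_mean(c, b=1.0):
    """Mean of a Polya-Gamma PG(b, c) random variable:
    E[omega] = (b / (2c)) * tanh(c / 2), with limit b / 4 as c -> 0."""
    if abs(c) < 1e-8:
        return b / 4.0
    return (b / (2.0 * c)) * math.tanh(c / 2.0)
```

Guarding the small-$c$ branch avoids the numerically unstable $0/0$ form when an event sits where the posterior intensity argument nearly vanishes.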
§.§ Analytic formulas of the ELBO
In this section, we provide the derivation of the evidence lower bound $(ELBO(\hat Q_k^m))_k$ for a mean-field variational distribution $\hat Q^m(f, \omega, \bar Z)= \hat Q_1^m(f)\hat Q_2^m(\omega, \bar Z)$ in a fixed model $m = (\delta, D)$. For ease of exposition, we drop the subscripts $m$ and $k$. From (<ref>), we have
\begin{align*}
ELBO(\hat Q) &= \mathbb{E}_{\hat Q} \left[ \log \frac{p(f,\omega,\bar Z, N)}{\hat Q_{1}(f) \hat Q_{2}(\omega, \bar Z)}\right] \\
&= \mathbb{E}_{\hat Q_{2}} \left[ - \log \hat Q_2(\omega, \bar Z) \right] + \mathbb{E}_{\hat Q_{2}} \left[\mathbb{E}_{\hat Q_{1}} \left[ \log p(f,\omega,\bar Z,N) \right]\right] + \mathbb{E}_{\hat Q_1} [ - \log \hat Q_{1}(f)]. \nonumber
\end{align*}
Now using the notation of Section <ref> and defining $K(t) := H(t)H(t)^T$, we first note that
\begin{align*}
&\mathbb{E}_{\hat Q_1}[\tilde{\lambda}_{T_i}(f)^2] = \alpha^2 \left( tr(K(T_i) \tilde{\Sigma}) + \tilde{\mu}^T K(T_i) \tilde{\mu} - 2 \eta H(T_i)^T \tilde{\mu} + \eta^2 \right), \\
&\mathbb{E}_{\hat Q_1} \left[ \log \mathcal{N}(f;\mu, \Sigma) \right] = - \frac{1}{2} tr(\Sigma^{-1} \Tilde{\Sigma}) - \frac{1}{2} \Tilde \mu^T \Sigma^{-1} \Tilde \mu + \Tilde \mu^T \Sigma^{-1} \mu - \frac{1}{2} \mu^T \Sigma^{-1} \mu - \frac{1}{2} \log |2 \pi \Sigma|.
\end{align*}
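The first identity is the standard Gaussian quadratic-form expectation $\mathbb{E}[(H^T f)^2] = H^T \tilde\Sigma H + (H^T \tilde\mu)^2 = tr(K\tilde\Sigma) + \tilde\mu^T K \tilde\mu$. A quick Monte Carlo check of this step, with arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
H = np.array([0.5, -1.0, 2.0])
mu = np.array([1.0, 0.0, -0.5])
L = rng.normal(size=(d, d))
Sigma = L @ L.T + np.eye(d)              # a valid covariance matrix
K = np.outer(H, H)                       # K = H H^T

closed_form = np.trace(K @ Sigma) + mu @ K @ mu
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
mc_estimate = np.mean((samples @ H) ** 2)
# closed_form and mc_estimate agree up to Monte Carlo error
```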
Moreover, we have
\begin{align*}
\mathbb{E}_{\hat Q_1} [ \log \hat Q_1(f)] &= -\frac{|m|}{2} -\frac{1}{2} \log |2\pi \tilde{\Sigma}|.
\end{align*}
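This entropy term follows from $\mathbb{E}[(f-\tilde\mu)^T\tilde\Sigma^{-1}(f-\tilde\mu)] = |m|$. As a sanity check, the closed form $|m|/2 + \frac{1}{2}\log|2\pi\tilde\Sigma|$ for $-\mathbb{E}_{\hat Q_1}[\log \hat Q_1(f)]$ can be compared against a Monte Carlo estimate (a sketch with illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
mu = np.zeros(d)

# Closed-form Gaussian entropy: d/2 + (1/2) log |2 pi Sigma|
closed_form = d / 2.0 + 0.5 * np.log(np.linalg.det(2 * np.pi * Sigma))

# Monte Carlo estimate of -E[log q(f)]
f = rng.multivariate_normal(mu, Sigma, size=100_000)
Sinv = np.linalg.inv(Sigma)
log_q = (-0.5 * np.einsum("ni,ij,nj->n", f - mu, Sinv, f - mu)
         - 0.5 * np.log(np.linalg.det(2 * np.pi * Sigma)))
mc_estimate = -log_q.mean()
```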
Using that for any $c> 0$,
\begin{align*}
p_{PG}(\omega;1,c) = e^{-c^2 \omega/2} \cosh{(c/2)} p_{PG}(\omega;1,0),
\end{align*}
we also have
\begin{align*}
\mathbb{E}_{\hat Q_2} \left[ - \log \hat Q_{2}(\omega, \bar Z) \right] &= \sum_{i \in [N]} - \mathbb{E}_{\hat Q_2}[\log p_{PG}(\omega_i,1,0)] + \frac{1}{2}\mathbb{E}_{\hat Q_2} [\omega_i] \mathbb{E}_{\hat Q_1}[\tilde{\lambda}_{T_i}(f)^2] - \log \cosh{\left(\frac{\underline{\lambda}_{T_i}(f) }{2}\right)} \\ % tr(K(T_i) \tilde{\Sigma}) - \tilde{\mu}^T K(T_i) \\
&- \int_{t=0}^T \int_0^{+\infty} [\log \Lambda(t,\bar \omega)] \Lambda(t,\bar \omega)d \bar \omega dt + \int_{t=0}^T \int_0^{+\infty} \Lambda(t,\bar \omega)d \bar \omega dt\\
&= \sum_{i \in [N]} - \mathbb{E}_{\hat Q_2}[\log p_{PG}(\omega_i,1,0)] + \frac{1}{2} \mathbb{E}_{\hat Q_2} [\omega_i] \mathbb{E}_{\hat Q_{1}}[\tilde{\lambda}_{T_i}(f)^2] - \log \cosh{\left(\frac{\underline{\lambda}_{T_i}(f) }{2}\right)}\\ % tr(K(T_i) \tilde{\Sigma}) - \tilde{\mu}^T K(T_i) \\
&- \int_{t=0}^T \int_0^{+\infty}
\left[\log \lsup - \frac{1}{2} \mathbb{E}_{\hat Q_1}[\tilde{\lambda}_{t}(f)] - \log 2 - \log \cosh{\left(\frac{\underline{\lambda}_{t}}{2} \right)} - \frac{1}{2} \mathbb{E}_{\hat Q_{1}}[\tilde{\lambda}_{t}(f)^2] \bar \omega \right. \\
&+ \left. \log \cosh{ \left(\frac{1}{2} \underline{\lambda}_{t} \right)} + \log p_{PG}(\bar \omega;1,0) - 1 \right] \Lambda(t) p_{PG}(\bar \omega;1,\underline{\lambda}_{t}) dt d\bar \omega \\
&= \sum_{i \in [N]} - \mathbb{E}_{\hat Q_2}[\log p_{PG}(\omega_i,1,0)] + \frac{1}{2} \mathbb{E}_{\hat Q_2} [\omega_i] \mathbb{E}_{\hat Q_{1}}[\tilde{\lambda}_{T_i}(f)^2] - \log \cosh{\left(\frac{\underline{\lambda}_{T_i} }{2} \right)}\\
&- \int_{t=0}^T
\left[\log \lsup - \frac{1}{2} \mathbb{E}_{\hat Q_1}[\tilde{\lambda}_{t}(f)] - \log 2 - \frac{1}{2} \mathbb{E}_{\hat Q_1}[\tilde{\lambda}_{t}(f)^2] \mathbb{E}_{\hat Q_2}[\bar \omega ] - 1 \right] \Lambda(t) dt \\
&- \int_{t=0}^T \int_0^{+\infty} \log p_{PG}(\bar \omega;1,0) \Lambda(t) p_{PG}(\bar \omega;1,\underline{\lambda}_{t}) d\bar \omega dt.
\end{align*}
with $\Lambda(t) = \int_0^\infty \Lambda(t,\bar \omega) d\bar \omega = \lsup \frac{e^{-\frac{1}{2} \mathbb{E}_{\hat Q_1}[\tilde{\lambda}_{t}(f)] }}{2 \cosh \frac{\underline{\lambda}_{t} }{2}}$. Moreover, we have
\begin{align*}
&\mathbb{E}_{\hat Q_2} \left[ \mathbb{E}_{\hat Q_1} \left[ \log p(f,\omega,\bar Z,N) \right] \right] = \sum_{i \in [N]} \left \{ \log \lsup + \mathbb{E}_{\hat Q_2} \left[ \mathbb{E}_{\hat Q_1} \left[ g(\omega_i, \tilde{\lambda}_{T_i}(f)) \right] + \log p_{PG}(\omega_i;1,0) \right] \right \} \\
&+ \log \lsup + \mathbb{E}_{\hat Q_2} \left[\mathbb{E}_{\hat Q_1} \left[ g(\bar \omega_t, - \tilde{\lambda}_{t}(f)) \right] + \log p_{PG}(\bar \omega_t;1,0) \right] + \mathbb{E}_{\hat Q_1} \left[ \log \mathcal{N}(f;\mu, \Sigma) \right] \\
&= \sum_{i \in [N]} \log \lsup - \log 2 - \frac{1}{2} \mathbb{E}_{\hat Q_1} \left[ \tilde{\lambda}_{T_i}(f)^2 \right] \mathbb{E}_{\hat Q_2} \left[ \omega_i \right] + \frac{1}{2} \mathbb{E}_{\hat Q_1} \left[\tilde{\lambda}_{T_i}(f) \right] + \mathbb{E}_{\hat Q_2} \left[ \log p_{PG}(\omega_i;1,0) \right] \\
&+ \int_0^T \int_0^{+\infty} \left[ \log \lsup - \log 2 - \frac{1}{2} \mathbb{E}_{\hat Q_1} \left[ \tilde{\lambda}_{t}(f)^2 \right] \bar \omega - \frac{1}{2} \mathbb{E}_{\hat Q_1} \left[ \tilde{\lambda}_{t}(f) \right] +\log p_{PG}(\bar \omega;1,0) \right] \Lambda(t)p_{PG}(\bar \omega;1,\underline{\lambda}_{t}) d\bar \omega dt \\
&+ \mathbb{E}_{\hat Q_1} \left[ \log \mathcal{N}(f;\mu, \Sigma) \right] - \lsup T \\
&= \sum_{i \in [N]} \log \lsup - \log 2 - \frac{1}{2} \mathbb{E}_{\hat Q_1} \left[ \tilde{\lambda}_{T_i}(f)^2 \right] \mathbb{E}_{\hat Q_2} \left[ \omega_i \right] + \frac{1}{2} \mathbb{E}_{\hat Q_1} \left[ \tilde{\lambda}_{T_i}(f) \right] + \mathbb{E}_{\hat Q_2} \left[ \log p_{PG}(\omega_i;1,0) \right] \\
&+ \int_0^T \left[ \log \lsup - \log 2 - \frac{1}{2} \mathbb{E}_{\hat Q_1} \left[ \tilde{\lambda}_{t}(f)^2 \right]\mathbb{E}_{\hat Q_2} \left[ \bar \omega \right] - \frac{1}{2} \mathbb{E}_{\hat Q_1} \left[\tilde{\lambda}_{t}(f) \right]\right] \Lambda(t) dt \\
&+ \int_0^T \int_0^{+\infty} \log p_{PG}(\bar \omega;1,0) \Lambda(t)p_{PG}(\bar \omega;1,\underline{\lambda}_{t}) d\bar \omega dt + \mathbb{E}_{\hat Q_1} \left[ \log \mathcal{N}(f;\mu, \Sigma) \right] - \lsup T.
\end{align*}
Therefore, with a zero-mean prior $\mu = 0$, and up to an additive constant that does not depend on the size of the model,
\begin{align*}
ELBO(\hat Q) &= \frac{|m|}{2} + \frac{1}{2} \log |2\pi \tilde{\Sigma}| - \frac{1}{2} tr(\Sigma^{-1} \Tilde{\Sigma}) - \frac{1}{2} \Tilde \mu^T \Sigma^{-1} \Tilde \mu - \frac{1}{2} \log |2\pi\Sigma| \\
&+ \sum_{i \in [N]} \log \lsup -\log 2 + \frac{\mathbb{E}_{\hat Q_1} \left[ \tilde{\lambda}_{T_i}(f) \right]}{2} - \log \cosh \left(\frac{\underline{\lambda}_{T_i} }{2} \right) \\
&+ \int_{t=0}^T \int_0^{+\infty} \Lambda(t,\bar \omega)d \bar \omega dt - \lsup T.
\end{align*}
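The $\cosh$ factors throughout this appendix all come from the Polya-Gamma Laplace transform $\mathbb{E}_{p_{PG}(\cdot;1,0)}[e^{-c^2\omega/2}] = 1/\cosh(c/2)$, i.e., the normalising constant of the tilted density in the identity above. This can be checked numerically from the infinite-convolution representation $\omega \overset{d}{=} \frac{1}{2\pi^2}\sum_{k\geq 1} g_k/(k-1/2)^2$ with $g_k \overset{iid}{\sim} Exp(1)$, whose Laplace transform is an explicit product (pure-Python sketch with a truncated product):

```python
import math

def pg_laplace(t, terms=100_000):
    """Laplace transform E[exp(-t * omega)] for omega ~ PG(1, 0), via the
    product formula prod_k (1 + t / (2 pi^2 (k - 1/2)^2))^{-1}, truncated."""
    out = 1.0
    for k in range(1, terms + 1):
        out /= 1.0 + t / (2.0 * math.pi**2 * (k - 0.5) ** 2)
    return out

c = 1.7
lhs = pg_laplace(c**2 / 2.0)        # normalising constant of the tilted density
rhs = 1.0 / math.cosh(c / 2.0)      # closed form
```

The truncation error of the product is of order $t/(2\pi^2 K)$ for $K$ terms, so a few hundred thousand terms already give agreement well below $10^{-4}$.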
§.§ Gibbs sampler
From the augmented posterior $\Pi_A(f,\omega, \bar Z|N)$ defined in (<ref>) and using the Gaussian prior family described in Section <ref>, computations similar to those of Appendix <ref> provide analytic forms of the conditional posterior distributions $\Pi_A(f|\omega, \bar Z, N)$, $\Pi_A(\omega | N, f)$, and $\Pi_A(\bar Z|f, N)$. This allows us to design a Gibbs sampler algorithm that sequentially samples the parameter $f$, the latent variables $\omega$, and the Poisson process $\bar Z$. With the notation of Appendix <ref>, such a procedure can be defined as follows.
For every $k \in [K]$,
\begin{align*}
\text{(Sample latent variables)} \: &\omega^k_i|N,f_k \sim p_{PG}(\omega_i^k; 1, \Tilde{\lambda}^k_{T_i^k}(f)), \quad \forall i \in [N_k] \\
&\text{$\bar Z^k|f_k$, a Poisson process on $[0,T]$ with intensity }\\& \Lambda^k(t, \bar \omega) = \lsup_k \sigma(- \Tilde{\lambda}^k_{t}(f)) p_{PG}(\bar \omega; 1,\Tilde{\lambda}_{t}^k(f))\\
\text{(Update hyperparameters)} \: &R_k = \bar{N}^k[0,T] \\
&H_k = [H_{N^k}, H_{\bar Z^k}], \: [H_{N^k}]_{id} = H_d(T_i^k), \\& [H_{\bar Z^k}]_{jd} = H_d(\bar T_j^k), \: d = 0, \dots, KJ, \: i \in [N_k], \: j \in [R_k] \\
&D_k = Diag([\omega^k_i]_{i \in [N^k]}, [\bar \omega^k_j]_{j \in [R^k]}) \\
&\tilde \Sigma_{k} = [\beta^2 H_k D_k (H_k)^T + \Sigma^{-1}]^{-1} \\
&\tilde \mu_{k} = \Tilde \Sigma_{k} \left( H_k \left[\beta v_k + \beta^2 \eta u_k \right] + \Sigma^{-1} \mu \right),\\& \quad v_k = 0.5 [\mathds{1}_{N_k}, - \mathds{1}_{R_k}], \quad u_k = [[\omega^k_i]_{i \in [N_k]}, [\bar \omega^k_{j}]_{j \in [R_k]}] \\
\text{(Sample parameter)} \: &f_{k}|N,\bar Z^k, \omega^k \sim \mathcal{N}(f_k; \tilde \mu_{k}, \Tilde \Sigma_{k}).
\end{align*}
These steps are summarised in Algorithm <ref> in Appendix. We note that this algorithm does not require numerical integration; however, sampling the latent Poisson process is computationally intensive. In our numerical experiments, we use the Python package polyagamma [<https://pypi.org/project/polyagamma/>] to sample the Polya-Gamma variables and a thinning algorithm to sample the inhomogeneous Poisson process.
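The thinning step mentioned above can be sketched as the standard Lewis-Shedler algorithm; here the target rate would be $\Lambda^k(t) = \lsup_k \sigma(-\tilde{\lambda}^k_t(f))$, which is bounded by $\lsup_k$. The function below is a generic sketch with names of our choosing:

```python
import random

def sample_inhomogeneous_pp(intensity, lam_max, T, rng=None):
    """Lewis-Shedler thinning: sample an inhomogeneous Poisson process on [0, T]
    whose rate function `intensity(t)` is bounded above by `lam_max`."""
    rng = rng or random.Random(0)
    points, t = [], 0.0
    while True:
        # Candidate event from a homogeneous Poisson process with rate lam_max
        t += rng.expovariate(lam_max)
        if t > T:
            return points
        # Keep the candidate with probability intensity(t) / lam_max
        if rng.random() * lam_max <= intensity(t):
            points.append(t)
```

Since $\sigma(\cdot) \leq 1$, the acceptance probability is well defined for the intensities used in this Gibbs sampler.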
§ PROOFS
In this section, we provide the proof of our main theoretical result, namely Theorem <ref>.
We first recall a set of useful lemmas from [Sulem et al., 2021].
§.§ Technical lemmas
In the first lemma, we recall the definition of excursions from [Sulem et al., 2021], for stationary nonlinear Hawkes processes verifying conditions (C1) or (C2). Then, Lemma <ref>, corresponding to Lemma A.1 in [Sulem et al., 2021], provides a control on the main event $\Tilde{\Omega}_T$ considered in the proof of Theorem <ref>. Finally, Lemma <ref> (Lemma A.4 in [Sulem et al., 2021]) is a technical lemma for proving posterior concentration in Hawkes processes.
We also introduce the following notation. For any excursion index $j \in [J_T-1]$, we denote by $(U_j^{(1)}, U_j^{(2)})$ the times of the first two events after the $j$-th renewal time $\tau_j$, and set $\xi_j := U_j^{(2)}$ if $U_j^{(2)} \in [\tau_j,\tau_{j+1})$ and $\xi_j := \tau_{j+1}$ otherwise.
Let $N$ be a Hawkes process with monotone non-decreasing and Lipschitz link functions $\phi = (\phi_k)_k$ and parameter $f = (\nu, h)$ such that $(\phi, f)$ verify (C1) or (C2).
Then the point process measure $X_t(.)$ defined as
\begin{equation}\label{eq:pp_measure_x}
X_t(.) = N|_{(t-A,t]},
\end{equation}
is a strong Markov process with positive recurrent state $\emptyset$. Let $\{\tau_j\}_{j\geq 0}$ be the sequence of random times defined as
\begin{align*}
\tau_j = \begin{cases}
0 & \text{ if } j=0; \\
\inf \left \{t > \tau_{j-1}; \: X_{t^-} \neq \emptyset, \: X_{t} = \emptyset \right \} = \inf \left \{t > \tau_{j-1}; \: N|_{[t-A,t)} \neq \emptyset, \: N|_{(t-A,t]} = \emptyset \right \} & \text{ if } j\geq 1 .
\end{cases}
\end{align*}
Then, $\{\tau_j\}_{j\geq 0}$ are stopping times for the process $N$. For $T > 0$, we also define
\begin{equation}\label{def:J_T}
J_T=\max\{j\geq 0;\: \tau_j \leq T\}.
\end{equation}
The intervals $\{[\tau_j, \tau_{j+1})\}_{j=0}^{J_{T}-1} \cup [\tau_{J_T}, T]$ form a partition of $[0,T]$. The point process measures $(N|_{[\tau_j, \tau_{j+1})})_{1 \leq j \leq J_T - 1}$ are i.i.d. and independent of $N|_{[0, \tau_1)}$ and $N|_{[\tau_{J_T},T]}$; they are called excursions and the stopping times $\{\tau_j\}_{j\geq 1}$ are called regenerative or renewal times.
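Concretely, the renewal times can be read off from the sorted event times: the process $X_s = N|_{(s-A,s]}$ returns to the empty state at $s = t + A$ whenever no event falls in $(t, t+A]$. A minimal sketch for a univariate realisation (the function name is ours):

```python
def renewal_times(event_times, A, T):
    """Return the renewal times tau_1 < tau_2 < ... in (0, T]: for each event
    time t with no further event in (t, t + A], the window process
    X_s = N|_(s-A, s] empties at s = t + A."""
    ts = sorted(event_times)
    taus = []
    for i, t in enumerate(ts):
        nxt = ts[i + 1] if i + 1 < len(ts) else float("inf")
        if t + A <= T and nxt > t + A:
            taus.append(t + A)
    return taus
```

Consecutive entries of the returned list delimit the i.i.d. excursions used in the proofs below.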
Let $Q > 0$. We consider $\Tilde{\Omega}_T$ defined in Section <ref>. For any $\beta > 0$, we can choose $C_\beta$ and $c_\beta$ in the definition of $\Tilde{\Omega}_T$ such that
\begin{align*}
\probz{\Tilde{\Omega}_T^c} \leq T^{-\beta}.
\end{align*}
Moreover, for any $1 \leq q \leq Q$,
\begin{align*}
\Exz{\mathds{1}_{\eve^c} \max_l \sup \limits_{t \in [0,T]} \left(N^l[t-A,t)\right)^q} \leq 2 T^{-\beta/2}.
\end{align*}
For any $f \in \mathcal{F}_T$ and $l \in [K]$, let
\begin{equation*}
Z_{1l} = \int_{\tau_1}^{\xi_1} |\lambda^l_t(f) - \lambda^l_t(f_0)|dt.
\end{equation*}
Under the assumptions of Theorem <ref>, for $M_T \to \infty$ such that $M_T > M \sqrt{\kappa_T}$ with $M>0$ and for any $f \in \mathcal{F}_T$ such that $\norm{r_f-r_0}_1 \leq \max(\norm{r_0}_1, \Tilde{C})$ with $\Tilde{C}>0$,
there exists $l \in [K]$ such that on $\Tilde{\Omega}_{T}$,
\begin{equation*}
\Exf{Z_{1l}} \geq C(f_0) \Big(\norm{r_f - r_0}_1 + \norm{h - h_0}_1\Big),
\end{equation*}
with $C(f_0) > 0$ a constant that depends only on $f_0$ and $(\phi_k)_k$.
§.§ Proof of Theorem <ref>
We recall that in this result, we consider a general Hawkes model with known link functions $(\phi_k)_k$. Let $r_0 = (r_1^0, \dots, r_K^0)$ with $r_k^0 = \phi_k(\nu_k^0)$. With $C_\beta, c_\beta > 0$, we first define $\eve \in \mathcal{G}_T$ as
\begin{align*}
\eve &= \Omega_N \cap \Omega_J \cap \Omega_U, \\
\Omega_N &= \left \{ \max \limits_{k \in [K]} \sup \limits_{t \in [0,T]} N^k[t-A,t) \leq C_\beta \log T \right \} \cap \left \{ \sum_{k=1}^K \left|\frac{N^k[-A,T]}{T} - \mu_k^0\right| \leq \delta_T \right \}, \\
\Omega_{J} &= \left\{ J_T \in \mathcal{J}_T \right \}, \quad \Omega_{U} = \left\{ \sum_{j=1}^{J_T-1} (U_j^{(1)} - \tau_j) \geq
\frac{T}{\mathbb{E}_0[\Delta \tau_1] \|r_0\|_1} \left(1 - 2c_\beta\sqrt{\frac{\log T }{T}}\right) \right \}, \\
\mathcal{J}_T &= \left \{ J \in \N; \: \left|\frac{J-1}{T} - \frac{1}{\mathbb{E}_0[\Delta \tau_1]} \right| \leq c_\beta \sqrt{\frac{\log T}{T}} \right \},
\end{align*}
with $J_T$ the number of excursions as defined in (<ref>), $\mu_k^0 := \Exz{\lambda_t^k(f_0)}, \forall k$, $\delta_T = \delta_0 \sqrt{\frac{\log T}{T}}, \: \delta_0 > 0$ and $\{U_j^{(1)}\}_{j=1, \dots, J_T-1}$ denoting the first events of each excursion (see Lemma <ref> for a precise definition). Secondly, we define $A_T' \in \mathcal{G}_T$ as
\begin{align*}
A_T' = \left \{\int e^{L_T(f) - L_T(f_0)} d\widetilde{\Pi}(f) > e^{- C_1 T \e_T^2} \right \}, \quad \widetilde{\Pi}(B) = \frac{\Pi(B \cap K_T)}{\Pi(K_T)}, \quad K_T \subset \mathcal{F},
\end{align*}
with $C_1 > 0$ and $\e_T, M_T$ positive sequences such that $T\e_T^2 \to \infty$ and $M_T \to \infty$. From Lemma <ref>, we have that $\Probz{\eve^c} = o(1)$. Thus, with $D_T$ defined in (<ref>), $A_T = \eve \cap A_T'$, $K_T = B_\infty(\epsilon_T)$, and $\e_T = \sqrt{\kappa_T } \epsilon_T$, we obtain that
\begin{align*}
\Probz{A_T^c} &\leq \Probz{\eve^c} + \Probz{A_T'^c \cap \eve}\\
&= o(1) + \Probz{ \left \{\int_{K_T} e^{L_T(f) - L_T(f_0)} d\Pi(f) \leq \Pi(K_T) e^{- C_1 T \e_T^2} \right\} \cap \eve} \\
&\leq o(1) + \Probz{ \left \{ D_T \leq \Pi(K_T) e^{- C_1 T \e_T^2} \right\} \cap \eve} = o(1),
\end{align*}
with $C_1 > 1$, using (A0), i.e., $\Pi(K_T) \geq e^{-c_1 T \e_T^2}$, and the following intermediate result from the proof of Theorem 3.2 in [Sulem et al., 2021]:
\begin{align*}
\Probz{\left \{ D_T \leq \Pi(B_\infty(\epsilon_T)) e^{- \kappa_T T \e_T^2} \right \} \cap \eve} = o(1).
\end{align*}
Therefore, we can conclude that
$$\Probz{A_T} \xrightarrow[T \to \infty]{} 1.$$
We now define the stochastic distance $\Tilde{d}_{1T}$ and stochastic neighborhoods around $f_0$ as
\begin{align}\label{def:stoch_dist}
&\Tilde{d}_{1T}(f,f') = \frac{1}{T} \sum_{k=1}^K \int_0^T \mathds{1}_{A_{2}(T)}(t) |\lambda_{t}^k(f) - \lambda_{t}^k(f')| dt, \quad A_2(T) = \bigcup_{j=1}^{J_T - 1} [\tau_j, \xi_j] \\
&A_{d_1}(\e) = \left \{f \in \mathcal{F}; \: \Tilde{d}_{1T}(f,f_0) \leq \e \right \}, \quad \e > 0, \nonumber
\end{align}
where for each $j \in [J_T]$, $ U_j^{(2)}$ is the first event after $ U_j^{(1)}$, and $\xi_j := U_j^{(2)}$ if $U_j^{(2)} \in [\tau_j,\tau_{j+1})$ and $\xi_j := \tau_{j+1} $ otherwise. Let $\eta_T$ be a positive sequence and $\hat Q$ be the variational posterior as defined in (<ref>). We have
\begin{align}\label{eq:q_decomp}
\Exz{\hat Q( A_{d_1}(\eta_T)^c)} &\leq \Probz{A_T^c} + \Exz{\hat Q( A_{d_1}(\eta_T)^c) \mathds{1}_{A_T}}.
\end{align}
We first bound the second term on the RHS of (<ref>) using the following technical lemma, which is an adaptation of Theorem 5 of [Ray and Szabó, 2021] and Lemma 13 in [Nieman et al., 2021].
Let $B_T \subset \mathcal{F}$, $A_T \in \mathcal{G}_T$, and $Q$ be a distribution on $\mathcal{F}$. If there exist $C, u_T > 0$ such that
\begin{align}\label{eq:hyp_post}
\Exz{\Pi(B_T|N) \mathds{1}_{A_T}} \leq C e^{-u_T},
\end{align}
then, we have that
\begin{align*}
\Exz{Q(B_T) \mathds{1}_{A_T}} \leq \frac{2}{u_T} \left( \Exz{KL(Q||\Pi(.|N)) \mathds{1}_{A_T}} + C e^{-u_T/2} \right).
\end{align*}
We follow the proof of [Ray and Szabó, 2021] and use the fact that, for any $g: \mathcal{F} \to \R$ such that $\int_{\mathcal{F}} e^{g(f)} d\Pi(f|N) < +\infty$, it holds true that
\begin{align*}
\int_{\mathcal{F}}g(f) dQ(f) \leq KL(Q||\Pi(.|N)) + \log \int_{\mathcal{F}} e^{g(f)}d\Pi(f|N).
\end{align*}
Applying the latter inequality with $g = \frac{1}{2} u_T \mathds{1}_{B_T}$, we obtain
\begin{align*}
\frac{1}{2} u_T Q(B_T) &\leq KL(Q||\Pi(.|N)) + \log (1 + e^{\frac{1}{2} u_T} \Pi(B_T|N)) \\
&\leq KL(Q||\Pi(.|N)) + e^{\frac{1}{2} u_T} \Pi(B_T|N).
\end{align*}
Then, multiplying both sides of the previous inequality by $\mathds{1}_{A_T}$ and taking expectation w.r.t. to $\mathbb{P}_0$, using (<ref>), we finally obtain
\begin{align*}
\frac{1}{2} u_T \Exz{Q(B_T) \mathds{1}_{A_T}} \leq \Exz{KL(Q||\Pi(.|N)) \mathds{1}_{A_T}} + C e^{-\frac{1}{2}u_T}.
\end{align*}
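The inequality underlying this proof is the Donsker-Varadhan (Gibbs) variational bound $\int g\, dQ \leq KL(Q\|P) + \log\int e^g\, dP$. It is easy to verify numerically on a toy discrete example (illustrative values only):

```python
import math

def kl(q, p):
    """KL divergence between two discrete distributions (lists of probabilities)."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

P = [0.2, 0.5, 0.3]          # reference distribution (posterior analogue)
Q = [0.6, 0.1, 0.3]          # variational distribution
g = [1.0, -2.0, 0.5]         # an arbitrary bounded function

lhs = sum(qi * gi for qi, gi in zip(Q, g))
rhs = kl(Q, P) + math.log(sum(pi * math.exp(gi) for pi, gi in zip(P, g)))
# lhs <= rhs always holds (Donsker-Varadhan)
```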
We thus apply Lemma <ref> with $B_T = A_{d_1}(\eta_T)^c$, $\eta_T = M_T' \e_T$, $Q = \hat Q$, and $u_T = M_T T \e_T^2$ with $M_T' \to \infty$. We first check that (<ref>) holds, i.e., we show that there exist $C, M_T,M_T' > 0$ such that
\begin{align}\label{eq:cond_post}
\Exz{\mathds{1}_{A_T }\Pi[ \Tilde{d}_{1T}(f,f_0) > M_T' \e_T |N]} \leq C \exp(-M_T T\e_T^2).
\end{align}
For any test $\phi$, we have the following decomposition
\begin{align*}
\Exz{\mathds{1}_{A_T}\Pi[ \Tilde{d}_{1T}(f,f_0) > M_T' \e_T |N]} \leq \underbrace{\Exz{\phi \mathds{1}_{A_T}}}_{(I)} + \underbrace{\Exz{(1 - \phi)\mathds{1}_{A_T }\Pi[A_{d_1}(M_T' \e_T)^c|N]}}_{(II)}.
\end{align*}
Note that we have
\begin{align}\label{eq:upper_bound_error2}
(II) = \Exz{(1 - \phi)\mathds{1}_{A_T }\Pi[A_{d_1}(M_T' \e_T)^c|N]} &= \Exz{\int_{ A_{d_1}(M_T' \e_T)^c } \mathds{1}_{A_T} (1-\phi) \frac{e^{L_T(f) - L_T(f_0)}}{D_T} d\Pi(f)} \nonumber \\
&\leq \frac{e^{C_1 T \e_T^2}}{\Pi(K_T)} \Exz{ \sup_{f \in \mathcal{F}_T} \Exf{\mathds{1}_{A_{d_1}(M_T'\e_T)^c} \mathds{1}_{A_T} (1-\phi)|\mathcal{G}_0}},
\end{align}
since on $A_T$, $D_T \geq \Pi(K_T)e^{-C_1 T \e_T^2}$. Using the proof of Theorem 5.5 in [Sulem et al., 2021], we can directly obtain that for $T$ large enough, there exists $x_1 > 0$ such that
\begin{align*}
&(I) \leq 2(2K+1) e^{-x_1 {M'_T}^2 T \e_T^2} \\
&(II) \leq 2 (2K+1) e^{-x_1 {M'_T}^2 T \e_T^2 /2},
\end{align*}
which implies that
\begin{align*}
\Exz{\mathds{1}_{A_T }\Pi[\Tilde{d}_{1T}(f,f_0) > M_T' \e_T |N]} &\leq 4 (2K+1) e^{-x_1 M_T'^2 T \e_T^2 /2},
\end{align*}
which gives (<ref>) with $M_T = x_1 M_T'^2/2$ and $C = 4(2K+1)$. Applying Lemma <ref> thus leads to
\begin{align*}
\Exz{\hat Q( A_{d_1}(\eta_T)^c) \mathds{1}_{A_T}} \leq 2 \frac{KL(\hat Q || \Pi(.|N)) + Ce^{-M_T T\e_T^2/2}}{M_T T\e_T^2} \leq 2C e^{-M_T T \e_T^2/2} + 2 \frac{KL(\hat Q || \Pi(.|N))}{M_T T\e_T^2}.
\end{align*}
Moreover, from (A2) and the remark following Theorem <ref>, it holds that $KL(\hat Q || \Pi(.|N)) = O(T\e_T^2) $, therefore we obtain the following intermediate result
\begin{align*}
\Exz{\hat Q( A_{d_1}(\eta_T)^c) } = o(1).
\end{align*}
Now, with $M_T > M_T'$, we note that
\begin{align*}
\Exz{\hat Q (\norm{f-f_0}_1 > M_T \e_T)} &\leq \Exz{\hat Q (\Tilde{d}_{1T}(f,f_0) > M_T' \e_T)}\\ &\hspace{0.5cm}+ \Exz{\hat Q (\norm{f-f_0}_1 > M_T \e_T ,\Tilde{d}_{1T}(f,f_0) < M_T' \e_T) \mathds{1}_{A_T}} + \probz{A_T^c}.
\end{align*}
Therefore, it remains to show that
\begin{align*}
\Exz{\hat Q (\norm{f-f_0}_1 > M_T \e_T ,\Tilde{d}_{1T}(f,f_0) < M_T' \e_T) \mathds{1}_{A_T}} = \Exz{\hat Q( A_{L_1}( M_T \epsilon_T)^c \cap A_{d_1}( M_T' \e_T)) \mathds{1}_{A_T}} = o(1).
\end{align*}
For this, we apply again Lemma <ref> with $B_T = A_{L_1}( M_T \e_T)^c \cap A_{d_1}( M_T' \e_T)$ and $u_T = T M_T^2 \e_T^2$. We have
\begin{align*}
\Exz{\mathds{1}_{A_T} \Pi(A_{L_1}( M_T \e_T)^c \cap A_{d_1}( M_T' \e_T)|N)} &\leq \frac{e^{C_1 T \e_T^2}}{\Pi(K_T)} \Exz{\Exf{ \mathds{1}_{A_T}\mathds{1}_{A_{L_1}(M_T \e_T)^c \cap A_{d_1}(M_T' \epsilon_T) }| \mathcal{G}_0}}.
\end{align*}
Let $f \in A_{L_1}(M_T \e_T)^c \cap A_{d_1}(M_T' \e_T)$. For any $j \in [J_T-1]$ and $l \in [K]$, let
\begin{align}\label{def:zj}
Z_{jl} = \int_{\tau_j}^{\xi_j} |\lambda^l_t(f) - \lambda^l_t(f_0)|dt, \quad j \in [J_T-1], \quad l \in [K].
\end{align}
Using Lemma <ref> and the integer $l$ introduced in this lemma, for any $f \in A_{L_1}(M_T \epsilon_T)^c$, we have
\begin{align*}
\Exf{ \mathds{1}_{A_T}\mathds{1}_{ A_{d_1}(M_T' \e_T) } | \mathcal{G}_0} &\leq \Probf{\sum_{j=1}^{J_T-1} Z_{jl} \leq T M_T' \e_T | \mathcal{G}_0} \\
&\leq \sum_{J \in \mathcal{J}_T} \Probf{\sum_{j=1}^{J-1} Z_{jl} - \Exf{Z_{jl}} \leq T M_T' \epsilon_T - \frac{T}{2\Exz{\Delta \tau_1}} C(f_0) M_T \epsilon_T | \mathcal{G}_0} \\
&\leq \sum_{J \in \mathcal{J}_T} \Probf{\sum_{j=1}^{J-1} Z_{jl} - \Exf{Z_{jl}} \leq - \frac{T}{4\Exz{\Delta \tau_1}} C(f_0) M_T \e_T | \mathcal{G}_0},
\end{align*}
for any $M_T \geq 4\Exz{\Delta \tau_1} M_T'/C(f_0)$. Similarly to the proof of Theorem 3.2 in [Sulem et al., 2021], we apply Bernstein's inequality for each $J \in \mathcal{J}_T$ and obtain that
\begin{align*}
\Exf{ \mathds{1}_{A_T}\mathds{1}_{ A_{d_1}(M_T' \e_T) } | \mathcal{G}_0} \leq \exp\{-c(f_0)' T\}, \quad \forall f \in A_{L_1}(M_T \e_T)^c,
\end{align*}
for $c(f_0)'$ a positive constant. Therefore, we can conclude that
\begin{align*}
\Exz{\hat Q \left( A_{L_1}( M_T \e_T)^c \cap A_{d_1}( M_T' \e_T)\right) \mathds{1}_{A_T}} \leq \frac{2}{M_T T \e_T^2} \Exz{KL(\hat Q||\Pi(.|N)) } + o(1) = o(1),
\end{align*}
since $ \Exz{KL(\hat Q||\Pi(.|N)) } = O(T \e_T^2)$ by assumption (A2). This leads to our final conclusion
\begin{align*}
\Exz{\hat Q \left( \norm{f-f_0}_1 > M_T \e_T \right) } = o(1).
\end{align*}
§ GIBBS SAMPLER IN THE SIGMOID HAWKES MODEL
In this section, we describe a non-adaptive Gibbs sampler that computes the posterior distribution in the sigmoid Hawkes model, using the data augmentation scheme of Section <ref> (see also Remark <ref>).
Gibbs sampler in the sigmoid Hawkes model with data augmentation.
Input: $N = (N^1, \dots, N^K)$, $n_{iter}$, $\mu, \Sigma$.
Output: Samples $S = (f_i)_{i\in [n_{iter}]}$ from the posterior distribution $\Pi_A(f|N)$.
Precompute $(H_k(T_i^k))_i, \: k \in [K]$.
Initialise $f \sim \mathcal{N}(f; \mu, \Sigma)$ and $S = []$.
For $t \gets 1$ to $n_{iter}$:
  For $k \gets 1$ to $K$:
    For $i \gets 1$ to $N_k$:
      Sample $\omega_i^k \sim p_{PG}(\omega_i^k; 1, \Tilde{\lambda}^k_{T_i^k}(f))$.
    Sample $(\bar T_j^k)_{j=1,\dots,R_k}$, a Poisson temporal point process on $[0,T]$ with intensity $\lsup_k \sigma(- \Tilde{\lambda}^k_{t}(f))$.
    For $j \gets 1$ to $R_k$:
      Sample $\bar \omega_j^k \sim p_{PG}(\bar \omega_j^k; 1,\Tilde{\lambda}^k_{\bar T_j^k}(f))$.
    Update $\tilde \Sigma_{k} = [\beta^2 H_k D_k (H_k)^T + \Sigma^{-1}]^{-1}$.
    Update $\tilde \mu_{k} = \Tilde \Sigma_{k} \left( H_k \left[\beta v_k + \beta^2 \eta u_k \right] + \Sigma^{-1} \mu \right)$.
    Sample $f_k \sim \mathcal{N}(f_k; \tilde \mu_{k}, \Tilde \Sigma_{k})$.
  Add $f = (f_k)_k$ to $S$.
§ ADDITIONAL RESULTS FROM OUR NUMERICAL EXPERIMENTS
In this section, we report results from our simulation study in Section <ref> that were omitted from the main text for conciseness. Each of the following subsections corresponds to one simulation set-up.
§.§ Simulation 1
This section contains our results for the MH sampler, in the univariate settings of Simulation 1 with sigmoid and softplus link functions (see Figures <ref> and <ref>).
Posterior distribution on $f = (\nu_1, h_{11})$ obtained with the MH sampler in the sigmoid model, in the three scenarios of Simulation 1 ($K=1$). The three columns correspond to the Excitation only (left), Mixed effect (center), and Inhibition only (right) scenarios. The first row contains the marginal distribution on the background rate $\nu_1$, and the second row represents the posterior mean (solid orange line) and 95% credible sets (orange areas) on the (self) interaction function $h_{11}$. The true parameter $f_0$ is plotted in dotted green line.
Posterior distribution on $f = (\nu_1, h_{11})$ obtained with the MH sampler in the softplus model, in the three scenarios of Simulation 1 ($K=1$). The three columns correspond to the Excitation only (left), Mixed effect (center), and Inhibition only (right) scenarios. The first row contains the marginal distribution on the background rate $\nu_1$, and the second row represents the posterior mean (solid orange line) and 95% credible sets (orange areas) on the (self) interaction function $h_{11}$. The true parameter $f_0$ is plotted in dotted green line.
§.§ Simulation 3
This section contains our results regarding the estimated intensity function in the univariate and well-specified settings in Simulation 3 (see Figure <ref>), the estimated parameter in the mis-specified settings (see Figure <ref>), and the estimated interaction functions in the bivariate settings (see Figures <ref> and <ref>).
Intensity function on a subwindow of the observation window (panels: Excitation scenario, Inhibition scenario) estimated via the variational posterior mean and via the posterior mean computed with the MH sampler, in the well-specified setting of Simulation 3 on $[0,10]$, using the fully-adaptive mean-field variational (FA-MF-VI) algorithm (Algorithm <ref>). The true intensity $\lambda_t^1(f_0)$ is plotted in dotted green line.
Posterior and model-selection variational posterior distributions on $f = (\nu, h)$ in the bivariate sigmoid model, well-specified setting, and Excitation setting of Simulation 3, evaluated by the non-adaptive MH sampler and the fully-adaptive mean-field variational (FA-MF-VI) algorithm (Algorithm <ref>). The first row contains the marginal distribution on the background rates $(\nu_1, \nu_2)$, and the second and third rows represent the (variational) posterior mean (solid line) and 95% credible sets (colored areas) on the four interaction function $h_{11}, h_{12}, h_{21}, h_{22}$. The true parameter $f_0$ is plotted in dotted green line.
Model-selection variational posterior distributions on $f = (\nu_1, h_{11})$ in the univariate sigmoid model and mis-specified setting of Simulation 3, evaluated by the fully-adaptive mean-field variational (FA-MF-VI) algorithm (Algorithm <ref>). The two columns correspond to a (mostly) Excitation (left) and a (mostly) Inhibition (right) settings. The first row contains the marginal distribution on the background rate $\nu_1$, and the second row represents the variational posterior mean (solid line) and 95% credible sets (colored areas) on the (self) interaction function $h_{11}$. The true parameter $f_0$ is plotted in dotted green line.
Posterior and model-selection variational posterior distributions on $f = (\nu, h)$ in the bivariate sigmoid model, well-specified setting, and Inhibition setting of Simulation 3, evaluated by the non-adaptive MH sampler and the fully-adaptive mean-field variational (FA-MF-VI) algorithm (Algorithm <ref>). The first row contains the marginal distribution on the background rates $(\nu_1, \nu_2)$, and the second and third rows represent the (variational) posterior mean (solid line) and 95% credible sets (colored areas) on the four interaction function $h_{11}, h_{12}, h_{21}, h_{22}$. The true parameter $f_0$ is plotted in dotted green line.
§.§ Simulation 4
This section contains our results for the Inhibition setting of Simulation 4, i.e., the estimated graphs in (Figures <ref> and <ref>), the heatmaps of the risk on the interaction functions in Figure <ref>, the estimated $L_1$-norms after the first step of Algorithm <ref> in Figure <ref>, and the variational posterior distribution on the subset of the parameter in Figure <ref>.
Estimated graph parameter $\hat \delta$ (black=0, white=1) for $K=2,4,8,16,32,64$ in the Excitation scenario of Simulation 4.
Estimated graph parameter $\hat \delta$ (black=0, white=1) for $K=2,4,8,16,32,64$ in the Inhibition scenario of Simulation 4.
Heatmaps of the $L_1$-norms of the true parameter $h_0$, i.e., the entries of the matrix $S_0 = (S^0_{lk})_{l,k} = (\norm{h_{lk}^0}_1)_{l,k}$ (left column) and $L_1$-risk, i.e., $(\mathbb{E}^{Q}[\norm{h_{lk}^0 - h_{lk}}_1])_{l,k}$ (right column) after the first step of Algorithm <ref>, in the Inhibition scenario of Simulation 4. The rows correspond to $K=2,4,8,16,32,64$.
Estimated $L_1$-norms after the first step of Algorithm <ref> (in blue), and ground-truth norms (in orange), plotted in increasing order, in the Inhibition scenario of Simulation 4, for the models with $K \in \{2,4,8,16, 32, 64\}$.
Model-selection variational posterior distributions on $\nu_1$ (left column) and interaction functions $h_{11}$ and $ h_{21}$ (second and third columns) in the Inhibition scenario and multivariate sigmoid models of Simulation 4, computed with our two-step mean-field variational (MF-VI) algorithm (Algorithm <ref>). The different rows correspond to different multivariate settings $K=2,4,8,16,32, 64$.
§.§ Simulation 5
In this section, we report some characteristics of the simulated data in Simulation 5, in particular the number of points and excursions in each setting (see Table <ref>). Moreover, we report the plots of the posterior distribution in a subset of the parameter in Figure <ref>.
Scenario     T     # events   # excursions   # local excursions
Excitation   50    2621       36             114
Excitation   200   10,729     155            473
Excitation   400   21,727     303            957
Excitation   800   42,904     596            1921
Inhibition   50    1747       49             134
Inhibition   200   7019       222            529
Inhibition   400   13,819     466            1053
Inhibition   800   27,723     926            2118
Number of points and global and average local excursions in the multidimensional data sets of Simulation 5 ($K=10$).
Model-selection variational posterior on two interaction functions $h_{66}$ and $h_{76}$, for different observation lengths $T \in \{50,200,400, 800\}$, in the Excitation and Inhibition scenarios in Simulation 5 with $K=10$. We note that in this simulation, the true number of basis functions is 2 and is well recovered for all values of $T$. The estimation of these two interaction functions is poor for the smallest $T$, however, it improves when $T$ increases.
§.§ Simulation 6
This section contains the estimated graphs (Figures <ref> and <ref>), the variational posterior distribution on a subset of the parameter (Figures <ref> and <ref>), in the mis-specified settings of Simulation 6.
Estimated graph after thresholding the $L_1$-norms using the “gap” or “slope change” heuristic, in the different settings of mis-specified link functions of Simulation 6, and in the Excitation and Inhibition scenarios. We observe that the true graph (with non-null principal and first off-diagonal) is correctly estimated for the ReLU mis-specification setting, while some errors occur in the two other link settings, in particular in the Inhibition scenario.
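A minimal sketch of one way such a “gap” heuristic can be implemented — sort the estimated $L_1$-norms and cut at the largest consecutive gap. This is our illustration only; the exact thresholding rule used for the figures may differ.

```python
def gap_threshold(norms):
    # sort the estimated L1-norms and place the threshold in the
    # middle of the largest consecutive gap between sorted values
    s = sorted(norms)
    i = max(range(len(s) - 1), key=lambda j: s[j + 1] - s[j])
    return (s[i] + s[i + 1]) / 2

# toy estimated norms: three near-zero entries, three clearly non-null
norms = [0.01, 0.02, 0.03, 0.8, 0.9, 1.1]
thr = gap_threshold(norms)
delta = [1 if v > thr else 0 for v in norms]   # estimated graph entries
assert delta == [0, 0, 0, 1, 1, 1]
```

The “slope change” variant would instead look for a kink in the sorted-norm curve rather than the largest jump.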
Estimated interaction functions $h_{66}$ and $h_{76}$ in the mis-specified settings of Simulation 6, where the data is generated from a Hawkes model with ReLU, softplus, or a mis-specified link function, and in the Excitation and Inhibition scenarios. We note that the estimation of the interaction functions deteriorates in these mis-specified cases; however, the signs of the functions are still recovered.
Estimated graph after thresholding the $L_1$-norms, when using Algorithm <ref> with different support upper bounds $A'\in \{0.5, 0.1, 0.2, 0.4\}$, containing the true memory parameter $A=0.1$, in the settings of Simulation 7. We note that the true graph (with non-null principal and first off-diagonal) is correctly estimated in all cases, in the Excitation scenario (first row) and in the Inhibition scenario (second row).
Estimated background rates $\nu_k$ for $k=1,\dots, 5$ when using different values of the upper bound parameter $A \in \{0.05, 0.1, 0.2, 0.4\}$, in the two scenarios of Simulation 8. As expected, the background rates are better estimated in the well-specified setting $A=A_0=0.1$; nonetheless, when $A$ is not too far above $A_0$, the estimation does not deteriorate too much, in particular in the Inhibition scenarios.
# A Chebotarev Density Theorem over Local Fields
Asvin G.111email id<EMAIL_ADDRESS>Yifan Wei, John Yin222All three
authors are affiliated with the University of Wisconsin-Madison
###### Abstract
We compute the $p$-adic densities of points with a given splitting type along
a finite map, analogous to the classical Chebotarev theorem over number fields
and function fields. Under certain niceness hypotheses, we prove that these
densities satisfy a functional equation in the size of the residue field. As a
consequence, we prove a conjecture of Bhargava, Cremona, Fisher, and Gajović
on factorization densities of $p$-adic polynomials.
The key tool is the notion of _admissible pairs_ associated to a group, which
we use as an invariant of the inertia and decomposition action of a local
field on the fibers of the finite map. We compute the splitting densities by
Möbius inverting certain $p$-adic integrals along the poset of admissible pairs.
The conjecture on factorization densities follows immediately for tamely
ramified primes from our general results. We reduce the complete conjecture
(including the wild primes) to the existence of an explicit “Tate-type”
resolution of the “resultant locus” over $\operatorname{Spec}\mathbb{Z}$ but
defer this construction to forthcoming work.
###### Contents
1. 1 Introduction
1. 1.1 Our Main Results
2. 1.2 An overview of the proof
3. 1.3 Relation to previous work
4. 1.4 Organization of the paper
2. 2 Preliminaries
1. 2.1 Geometry over $p$-adic rings
2. 2.2 $p$-adic integration.
3. 2.3 Palindromic forms
4. 2.4 The poset of admissible pairs
5. 2.5 Generically finite, Galois maps
6. 2.6 Galois twists
3. 3 On the palindromicity of various natural densities
4. 4 A conjecture on the density of polynomials with fixed factorization type
1. 4.1 Factorization types of polynomials
2. 4.2 Reducing the computation of factorization densities to certain integrals
## 1 Introduction
Let $h(z)=c_{n}z^{n}+c_{n-1}z^{n-1}+\dots+c_{0}$ be a random polynomial having
coefficients $c_{i}\in\mathbb{Z}_{p}$. In this paper, we develop a general
method to compute the density of polynomials with a given factorization type
over $\mathbb{Z}_{p}$, thus answering a conjecture of Bhargava, Cremona,
Fisher and Gajović [4]. As we will see, our method is far more general than
this special case and can be considered as a Chebotarev-type theorem for
finite maps over local fields (as opposed to the classical versions which
apply to varieties over finitely generated rings). For concreteness, we first
explain the case of a random polynomial.
We parametrize polynomials $h(z)$ as above by $\mathbb{Z}_{p}^{n+1}$ with the
Haar measure $\mu_{\mathrm{Haar}}$ normalized so that the total measure is
$1$. If $h(z)$ is irreducible, define $K=\mathbb{Q}_{p}[z]/(h(z))$ and the
_factorization type_ of $h$ by $\sigma(h)=\\{f^{e}\\}$ where $f$ is the residue
degree of $K$ over $\mathbb{Q}_{p}$ and $e$ its ramification index. If $e=1$, we will
often omit the superscript and simply write $\\{f\\}$ instead of
$\\{f^{1}\\}$. In general, if $h(z)$ is squarefree with a factorization
$h=g_{1}\dots g_{r}$ into irreducible polynomials over $\mathbb{Q}_{p}$, we
define its factorization type to be the multiset
$\sigma(h)=\\{\sigma(g_{1}),\dots,\sigma(g_{r})\\}$.
Fixing such a factorization type $\sigma$, let
$U_{\sigma}(p)\subset\mathbb{Z}_{p}^{n+1}$ be the $p$-adic open subset of
squarefree polynomials with factorization type $\sigma$ and
$\rho(n,\sigma;p)=\mu_{\mathrm{Haar}}(U_{\sigma}(p))$. As a few sample examples,
[4] computes
$\displaystyle\rho(2,(11);p)$ $\displaystyle=\frac{1}{2}$
$\displaystyle\rho(2,(2);p)$ $\displaystyle=\frac{p^{2}-p+1}{2(p^{2}+p+1)}$
$\displaystyle\rho(3,(111);p)$
$\displaystyle=\frac{p^{4}+2p^{2}+1}{6(p^{4}+p^{3}+p^{2}+p+1)}$
$\displaystyle\rho(3,(12);p)$
$\displaystyle=\frac{p^{4}+1}{2(p^{4}+p^{3}+p^{2}+p+1)}.$
Based on numerical evidence and some of their results, [4, Conjecture 1.2]
states:
###### Conjecture 1.1.
The densities $\rho(n,\sigma;p)$ are rational functions in $p$ and satisfy the
following remarkable symmetry:
$\rho(n,\sigma;p^{-1})=\rho(n,\sigma;p).$
More precisely, let $X:\mathbb{Z}_{p}^{n+1}\to\mathbb{N}$ be the random
variable that sends a polynomial to the number of its $\mathbb{Q}_{p}$-rational
roots. Then, [4] shows that all the moments of this random variable are
rational functions in $p$ that are invariant under the transformation $p\to
p^{-1}$.
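The symmetry in Conjecture 1.1 can be spot-checked on the sample densities above using exact rational arithmetic. The following check (our illustration, of course not a proof) verifies $\rho(p^{-1})=\rho(p)$ at several rational values of $p$:

```python
from fractions import Fraction

# sample densities computed in [4], as rational functions of p
def rho_2_2(p):      # rho(2,(2);p)
    return (p**2 - p + 1) / (2 * (p**2 + p + 1))

def rho_3_111(p):    # rho(3,(111);p)
    return (p**4 + 2 * p**2 + 1) / (6 * (p**4 + p**3 + p**2 + p + 1))

def rho_3_12(p):     # rho(3,(12);p)
    return (p**4 + 1) / (2 * (p**4 + p**3 + p**2 + p + 1))

# the conjectured symmetry rho(p^{-1}) = rho(p), checked exactly
for q in (Fraction(2), Fraction(3), Fraction(5), Fraction(7)):
    for rho in (rho_2_2, rho_3_111, rho_3_12):
        assert rho(1 / q) == rho(q)
```

Since a rational function agreeing with its reciprocal-substitute at infinitely many points would be palindromic identically, such numeric checks at a handful of points are only a sanity test of the stated formulas.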
In the case where the splitting fields of the polynomials are tame extensions
of $\mathbb{Q}_{p}$, [11] (and independently, the last author in [30]) prove
that the densities $\rho(n,\sigma;p)$ are rational functions in $p$. They
compute a recurrence equation for the above densities in terms of certain
other densities. The individual terms in this recurrence are rational but do
not satisfy the above symmetry and therefore, we need a new idea in order to
prove the functional equation (and the rationality at wild primes). More
recently (and in fact simultaneously with the current paper), [28] proves the
above conjecture including the symmetry for the factorization type
$\\{n^{1}\\}$.
In this paper, we prove a generalization of this conjecture formulated for any
$p$-adic local field for the tame primes, i.e., for $p$ not dividing any of
the exponents $e_{i}$ in $\sigma$. We appeal to the previous results proving
that the density is a rational function; granting this rationality, the
functional equation follows from the main theorem of our paper (which applies
in a much broader context).
Our method also offers a completely independent proof of the rationality in
$p$ for all but finitely many primes by “geometric methods”. The rationality
in $p$ for factorization densities boils down to the fact that certain
varieties $D/\mathbb{Z}$ related to the problem have point counts
$p\to|D(\mathbb{F}_{p})|$ that are polynomial functions of $p$; this is special
to the case of polynomial factorizations.
an explicit resolution of the relevant spaces and hence prove the theorem for
_all_ primes, including the wild ones. This involves a significant increase in
difficulty because we are only able to find a resolution by an Artin stack and
the appropriate change of variables formula in this case is correspondingly
more complicated.
Before stating our general results, we briefly recall the classical Chebotarev
theorem since our results can be considered as an analogue of that theorem for
local fields. Let $f:X\to Y$ be an étale Galois map between varieties with
constant Galois group $G$ over a finite field $\mathbb{F}_{q}$. Given a closed
point $y\in Y$ with residue field $\kappa(y)$ and a geometric lift $x\in
X(\overline{\mathbb{F}}_{q})$, we have an induced map
$\widehat{\mathbb{Z}}\cong\operatorname{Gal}(\overline{\mathbb{F}}_{q}/\kappa(y))=\pi_{1}^{\text{\'{e}t}}(y;x)\to
G$
by functoriality of the étale fundamental group. The image of the Frobenius
element
$\sigma_{\kappa(y)}\in\operatorname{Gal}(\overline{\mathbb{F}}_{q}/\kappa(y))$
is called the Frobenius of $y$ with respect to the base point $x$ and
different choices of base point correspond to conjugation by $G$. Therefore,
$y$ determines a well-defined conjugacy class $\sigma_{y}\subset G$. We then
have the following classical theorem.
###### Theorem 1.2 (Chebotarev).
Let $c\subset G$ be a conjugacy class. Then,
$\lim_{N\to\infty}\frac{|\\{y\in Y:[\kappa(y):\mathbb{F}_{q}]\leq
N,\sigma_{y}=c\\}|}{|\\{y\in Y:[\kappa(y):\mathbb{F}_{q}]\leq
N\\}|}=\frac{|c|}{|G|}.$
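As a toy instance of Theorem 1.2, consider the squaring cover $x\mapsto x^{2}$ on $\mathbb{G}_{m}$ over $\mathbb{F}_{p}$ with $G=\mathbb{Z}/2$: the Frobenius class at $y$ is trivial exactly when $y$ is a nonzero square, so Chebotarev predicts each of the two classes has density $1/2$. A quick sanity check (our illustration):

```python
# the Frobenius at y is trivial iff y is a nonzero square mod p,
# so the split locus of the squaring cover is the set of quadratic residues
p = 101
squares = {pow(x, 2, p) for x in range(1, p)}
assert len(squares) == (p - 1) // 2   # each class has density exactly 1/2
```

For this degree-two cover the densities are exact at every odd prime, not just in the limit; for general $G$ the densities only converge to $|c|/|G|$ as in the theorem.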
### 1.1 Our Main Results
We work in the context of an arbitrary local ring ${{\mathcal{O}}_{K}}$ over
$\mathbb{Z}_{p}$ with fraction field $K$ and residue field
${{\mathcal{O}}_{K}}/\mathfrak{m}_{K}=\mathbb{F}_{q}$. Let $f:X\to Y$ be a
generically finite, Galois map between smooth proper varieties defined over
${{\mathcal{O}}_{K}}$ with Galois group $G$, a constant group scheme over
${{\mathcal{O}}_{K}}$. By this, we mean that there is a Zariski open subset
$U_{f}\subset Y$ such that $f:f^{-1}(U_{f})\to U_{f}$ is an étale Galois map
with Galois group $G$ and moreover, the natural map $\operatorname{Aut}(f)\to
G$ is an isomorphism.
Let $P\in U_{f}(K)$ with $Q\in U_{f}(\overline{K})$ a geometric lift of $P$.
By functoriality of the étale fundamental group, we have a map
$\operatorname{Gal}(\overline{K}/K)=\pi_{1}^{\text{\'{e}t}}(P;Q)\to G.$
We denote the image of the entire Galois group by $D_{Q}$, the image of the
inertia group by $I_{Q}$ and the image of the canonical Frobenius coset by
$\sigma_{Q}$. The Galois group of a local field is significantly more
complicated than that of a finite field and thus, we need a new definition to
capture the data of $(I_{Q},\sigma_{Q})$. We call a pair $\tau=(H,gH)$ for a
subgroup $H\subset G$ and an element $g\in G$ an _admissible pair_ if $gH=Hg$.
The pair $(I_{Q},\sigma_{Q})$ as above is an admissible pair and we define
$U_{f,\tau}(K)=\\{P\in U_{f}(K):(I_{Q},\sigma_{Q})=\tau\text{ for some
}Q\text{ with }f(Q)=P\\}.$
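Since $gH=Hg$ simply says that $g$ lies in the normalizer of $H$, the admissible pairs of a small group can be enumerated by brute force. An illustrative sketch for $G=S_{3}$ (our example, not from the paper):

```python
from itertools import combinations, permutations, product

def compose(a, b):
    # permutations stored as tuples; (a∘b)(i) = a(b(i))
    return tuple(a[i] for i in b)

G = list(permutations(range(3)))          # the symmetric group S_3
e = tuple(range(3))

# brute-force subgroup enumeration (feasible since |S_3| = 6):
# a finite subset containing e and closed under composition is a subgroup
subgroups = []
for r in range(1, len(G) + 1):
    for S in map(set, combinations(G, r)):
        if e in S and all(compose(a, b) in S for a, b in product(S, S)):
            subgroups.append(frozenset(S))

# admissible pairs (H, gH) with gH = Hg, i.e. g in the normalizer of H
pairs = set()
for H in subgroups:
    for g in G:
        gH = frozenset(compose(g, h) for h in H)
        Hg = frozenset(compose(h, g) for h in H)
        if gH == Hg:
            pairs.add((H, gH))
```

For $S_{3}$ this yields $6$ subgroups and $12$ admissible pairs; in general, the number of admissible pairs with first entry $H$ is the index $[N_{G}(H):H]$.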
For a smooth projective variety $Y$ as above, one can define a canonical
measure $\mu_{Y}$, see §2.2 for more details. We normalize the canonical
measure $\mu_{Y}$ on $Y(K)$ so that $\mu_{Y}(Y(K))=|Y(\mathbb{F}_{q})|$. In
analogy with the factorization densities conjecture, one might hope that
$\rho_{f,\tau}(K)\coloneqq\mu_{Y}(U_{f,\tau}(K))$ is a rational function in
$q$ as we vary ${{\mathcal{O}}_{K}}$ over extensions of $\mathbb{Z}_{p}$. The
simplest examples (such as the degree $2$ map from an elliptic curve
$E\to\mathbb{P}^{1}$) very quickly disabuse us of this hope or any reasonable
modification of it. Indeed, we will show that $\rho_{f,\tau}(K)$ is a linear
combination of the point counts $|Z(\mathbb{F}_{q})|$ weighted by
$\eta_{k}(q)$ as $Z$ ranges over smooth proper varieties of dimension at most
$\dim Y$ where $\eta_{k}(q)=(q^{k+1}-q)/(1-q^{k+1})$. The obstruction to
rationality is precisely that the point counts $|Z(\mathbb{F}_{q})|$ are
generally not rational functions of $q$.
Despite this, the “remarkable symmetry” mentioned above continues to hold in
this general context when formulated correctly. More precisely, we define the
notion of a palindromic form of weight $k$ over ${{\mathcal{O}}_{K}}$ in
Definition 2.4 and prove that the densities are palindromic forms. For the
introduction, we will only need that palindromic forms of weight $k$ are
functions $\rho:\mathbb{N}\to\mathbb{Q}$ of a special form that allow a
natural extension to a function $\rho:\mathbb{Z}\to\mathbb{Q}$ with the
property that
$\rho(-m)=q^{-km}\rho(m).$
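A prototypical example (ours, for illustration) is the point count $m\mapsto|\mathbb{P}^{k}(\mathbb{F}_{q^{m}})|=(q^{m(k+1)}-1)/(q^{m}-1)$: the same rational expression makes sense for all $m\in\mathbb{Z}$ and satisfies the weight-$k$ symmetry above, which can be spot-checked exactly:

```python
from fractions import Fraction

q, k = Fraction(3), 2   # residue field size and weight, arbitrary choices

def rho(m):
    # |P^k(F_{q^m})| = (q^{m(k+1)} - 1) / (q^m - 1), defined for all m in Z
    return (q**(m * (k + 1)) - 1) / (q**m - 1)

for m in range(1, 5):
    assert rho(-m) == q**(-k * m) * rho(m)   # weight-k palindromicity
```

This is the same symmetry that Poincaré duality imposes on point counts of smooth proper varieties, in line with the motivic interpretation discussed in Remark 1.5.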
With this definition, our main theorem is as follows. Note that it is not even
obvious a priori that the densities $\rho_{f,\tau}({L})\in\mathbb{Q}$.
###### Theorem 1.3 (Theorem 3.10).
Let $f:X\to Y/\operatorname{Spec}{{\mathcal{O}}_{K}}$ be a generically finite,
Galois map with Galois group $G$ between smooth, projective varieties and
$\tau=(H,gH)$ an admissible pair as above. We denote its inverse by
$\tau^{-1}=(H,g^{-1}H)$. Suppose that, for every
$\tau^{\prime}=(H^{\prime},g^{\prime}H^{\prime})$ with $H^{\prime}\subset H$
and $g^{\prime}\in gH$, there exists a $g^{\prime}$-equivariant resolution
$\pi_{H^{\prime}}:\widetilde{X_{H^{\prime}}}\to X/H^{\prime}$ such that the
ramified locus in $\widetilde{X_{H^{\prime}}}$ for the map
$\widetilde{X_{H^{\prime}}}\to Y$ is a simple normal crossings divisor over an
unramified extension of ${{\mathcal{O}}_{K}}$.
Then, for any finite extension of local rings
${{\mathcal{O}}_{K}}\subset{{\mathcal{O}}_{L}}$ with corresponding residue
field extensions $\mathbb{F}_{q}\subset\mathbb{F}_{q^{m}}$, the densities
$\rho_{f,\tau}({L})\in\mathbb{Q}$ and the function
$m\to\rho_{f,\tau}({L})+\rho_{f,\tau^{-1}}({L})$
depends on ${L}$ only through the size of the residue field
$\mathbb{F}_{q^{m}}$ and is a palindromic form (in the variable $m$) of weight
$\dim_{K}Y_{K}$.
One can consider this as a refinement of the classical Chebotarev density
theorem over finite fields as $q\to\infty$ (Corollary 3.13). In this limit,
the ramified densities tend to $0$ and the unramified densities tend to the
classical limits.
Simple examples (such as the map $\mathbb{P}^{1}\to\mathbb{P}^{1};t\to
pt^{2}$) show that it is necessary to assume the existence of such a resolution.
If $f:X\to Y$ is generically finite with Galois group $G$ over (an open
subset) of the ring of integers of a number field, then the resolution
hypothesis is satisfied for all but finitely many completions of the number
ring since we can equivariantly resolve singularities over characteristic $0$
(for instance, see [1],[16] and [5] among many others) and _spread out_ to all
but finitely many primes as we show in Theorem 3.12.
Moreover, the densities of the $U_{f,\tau}({L})$ individually are not
palindromic forms of any weight. The key here is Lemma 2.8, and Remark 2.9
shows that $\rho_{f,\tau}$ is not palindromic by itself in general. Similarly,
the densities where we require that the splitting field is a fixed extension
are also not palindromic. As Caruso explains immediately following [9, Theorem
C], the expected number of roots in a fixed ramified quadratic extension of a
random $p$-adic degree $n$ polynomial (for $n\geq 3$) is not symmetric but the
symmetry is regained when we consider instead the expected number of roots in
any ramified quadratic extension.
In the context of a random polynomial over $\mathbb{Z}_{p}$, the relevant map
is
$f:X=(\mathbb{P}^{1})^{n}\to(\mathbb{P}^{1})^{n}/S_{n}=\mathbb{P}^{n}=Y;([t_{1},s_{1}],\dots,[t_{n},s_{n}])\to[a_{0},\dots,a_{n}]$
where $(s_{1}z-t_{1})\dots(s_{n}z-t_{n})=\sum a_{i}z^{i}$. For any local field
$K/\mathbb{Q}_{p}$ with residue field $\mathbb{F}_{q}$, one can define an
analogous notion of factorization density $\rho(n,\sigma;q)$. A priori, this
might depend on $K$ but we show that it is purely a function of $q$. We prove
that it is a rational function satisfying the following functional equation.
###### Theorem 1.4 (Corollary
LABEL:cor:_factorization_densities_are_polynomial_in_q).
The factorization densities $\rho(n,\sigma;q)$ form a rational function in $q$
and satisfy the symmetry
$\rho(n,\sigma;q^{-1})=\rho(n,\sigma;q).$
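To see why the quotient of $(\mathbb{P}^{1})^{n}$ by $S_{n}$ in the parametrizing map above is the projective space of coefficients: the coefficients of $\prod_{i}(z-t_{i})$ are (up to sign) the elementary symmetric functions of the roots, hence invariant under permuting the $t_{i}$. A small numeric check (purely illustrative):

```python
from itertools import permutations

def coeffs(roots):
    # coefficients of prod_i (z - t_i), listed from low degree to high
    c = [1]
    for t in roots:
        zc = [0] + c                      # z * P
        tc = [-t * a for a in c] + [0]    # -t * P, padded to the same length
        c = [x + y for x, y in zip(zc, tc)]
    return c

roots = [2, 5, 7]
base = coeffs(roots)   # (z-2)(z-5)(z-7) = z^3 - 14 z^2 + 59 z - 70

# the coefficient vector is unchanged by any permutation of the roots
for perm in permutations(roots):
    assert coeffs(list(perm)) == base
```

This invariance is exactly what makes the factorization type of $h$ a function of the fiber of $f$ over $[a_{0},\dots,a_{n}]$, rather than of an ordering of the roots.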
###### Remark 1.5.
The methods of our proof can be adapted to the setting of motivic integration
with the following differences:
* •
We have a satisfactory theory of resolutions of singularities and the main
theorems are therefore hypothesis-free.
* •
The absolute Galois group of $\mathbb{C}[\\![t]\\!]$ is much simpler than that
of $\mathbb{Q}_{p}$ which obviates the need for the notion of _admissible
pairs_ and the associated induction along their poset. Instead, we can simply use
the poset of cyclic subgroups (under inclusion) and an induction along this
poset.
* •
One important complication however is that motivic integration needs to be
modified to take values in an _equivariant_ Grothendieck ring of motives in
order to define a change of variables formula for generically finite maps as
opposed to the standard theory dealing with birational maps. This modification
is discussed in [10, §2.1], for example.
Given these modifications, exactly the analogous theorems of this paper should
hold, although we do not pursue this direction here. In this context, the
transformation $m\to-m$ should be replaced by Bittner duality [6, Corollary
3.4]. This duality is a manifestation of the duality in the conjectured
abelian category of pure motives and is closely related to Poincaré duality.
As such, our transformation $m\to-m$ can be seen as a $p$-adic shadow of this
motivic duality.
### 1.2 An overview of the proof
The first observation in proving Theorem 3.10 is that the _split points_,
i.e., the points corresponding to the admissible pair
$(\mathrm{id},\mathrm{id})$, correspond exactly to the image of the points
$X(K)$ under $f$. This density can then be computed by an integral over the
$p$-adic analytic set $X(K)$ of an appropriate function of the form
$\rho_{f,(\mathrm{id},\mathrm{id})}(K)=\eta_{f}(K)=\int_{X(K)}|\operatorname{Jac}(f)|_{K}d\mu_{X}.$
We have traded the measure of a possibly complicated space for the integral of
a possibly complicated function over a nice space. However, we show that the
function is not too bad if there exists a nice resolution of singularities and
we explicitly compute the above integral in Theorem 3.8 which proves that the
integrals are palindromic forms.
In the simplest case when the ramified locus $Z_{f}\subset X$ can be resolved
by a simple normal crossings divisor, we have the following.
###### Corollary 1.6 (Corollary 3.7).
Let $f:X\to Y$ be a generically finite, Galois map and suppose that there
exists a resolution of singularities $\pi:\widetilde{X}\to X$ over
${{\mathcal{O}}_{K}}$ with ramification locus
$\pi^{-1}(Z_{f})=\bigcup_{i=1}^{r}D_{i}\subset\tilde{X}$ a simple normal
crossings divisor over ${{\mathcal{O}}_{K}}$. Then
$\eta_{f}(K)=\sum_{J\subset\\{1,\dots,r\\}}\left|\left(\bigcap_{j\in
J}D_{j}\right)(\mathbb{F}_{q})\right|\prod_{j\in
J}\frac{q-q^{e_{j}+1}}{q^{e_{j}+1}-1}$
for some positive integers $e_{j}$.
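The right-hand side of this formula can be evaluated mechanically once the intersection point counts and the integers $e_{j}$ are specified. The following sketch does only that arithmetic; the input data below is hypothetical, and whether a given tuple of counts actually arises from a resolution depends on the geometry.

```python
from fractions import Fraction
from itertools import combinations

def eta_snc(q, point_counts, e):
    # Evaluate the sum over J ⊆ {1,...,r} of
    #   |(∩_{j∈J} D_j)(F_q)| * prod_{j∈J} (q - q^{e_j+1}) / (q^{e_j+1} - 1).
    # point_counts: dict mapping frozenset J to the point count of ∩_{j∈J} D_j
    # (J = frozenset() gives the count of the ambient variety); e: list of the e_j.
    r = len(e)
    total = Fraction(0)
    for size in range(r + 1):
        for J in combinations(range(r), size):
            count = point_counts.get(frozenset(J), 0)
            factor = Fraction(1)
            for j in J:
                factor *= Fraction(q - q ** (e[j] + 1), q ** (e[j] + 1) - 1)
            total += count * factor
    return total
```

With no divisors the sum reduces to the single $J=\emptyset$ term, the point count of the ambient variety.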
Formulas similar to Corollary 3.7 are well known in slightly different
forms in various _geometric_ situations where $Z_{f}$ is a simple normal
crossings divisor ([29, Proposition 4.11], [13, Theorem 2.3.2], [23, Theorem
2.8.1]) and, of most relevance to us, [15, Theorem 3]. Indeed, the last
reference proves a similar functional equation to the “remarkable symmetry”
using similar methods to this paper. There are two major new complications
that we need to deal with.
First, [15] only deals with the case of a hypersurface in projective space but
the general case we consider of a finite map between varieties can be dealt
with using similar ideas. More importantly, they assume that the irreducible
divisors appearing in the complement $Z_{f}$ are geometrically irreducible. We
_need_ to allow for geometrically _reducible_ divisors since our inductive
step deals with _unramified Galois twists_ of our finite map. Even if the
$Z_{f}$ were initially composed of geometrically irreducible divisors, this
will almost certainly not be true after twisting. This significantly
complicates the local computations and involves new ideas. The key here is
Lemma 3.5.
To compute the densities corresponding to an arbitrary admissible pair
$\tau=(H,gH)$ as above, we compute the densities of
${{}^{\tau}f}({{}^{\tau}X}(K))\subset Y(K)$ in terms of integrals similar to
the ones discussed above. Here, ${{}^{\tau}X}\coloneqq{{}^{g^{-1}}X}/H$ is the
quotient of an unramified $g$-twist of $X$ by $H$ while
${{}^{\tau}f}:{{}^{\tau}X}\to Y$ is the quotient map. We show that the sum of
the densities of ${{}^{\tau}f}({{}^{\tau}X}({L}))$ and
${{}^{\tau^{-1}}f}({{{}^{\tau^{-1}}}X}({L}))$ is a palindromic form of weight
$\dim Y$ as we vary over ${{\mathcal{O}}_{K}}\subset{{\mathcal{O}}_{L}}$ in
Theorem 3.8. This is not an immediate consequence of our integral computation
because we twist _after_ base changing. The crucial input is Lemma 2.8.
We express the densities of these images as a linear combination of the
densities $\rho_{f,\tau^{\prime}}$ for $\tau^{\prime}\leq\tau$. This system of
linear equations can be considered as an identity in the incidence algebra of
the poset of admissible types which lets us express the splitting type
densities in terms of the densities of the images discussed above. Said more
simply, the system of linear equations is upper triangular and invertible with
respect to an appropriate basis.
Finally, we deal directly with the conjecture about factorization types of
polynomials. The functional equation follows almost immediately from the
previous results on rationality and Theorem 3.12.
We also relate these densities directly to integrals over a certain family of
“nice” quotients: the maps involved here turn out to be the “polynomial
multiplication” maps
$\mathbb{P}^{e_{1}}\times\dots\times\mathbb{P}^{e_{r}}\to\mathbb{P}^{e_{1}+\dots+e_{r}}$
where we identify points of projective space with univariate polynomials of
the appropriate degree so that the above map corresponds to
$(P_{1}(z),\dots,P_{r}(z))\to P_{1}(z)\dots P_{r}(z).$
The ramification locus for this corresponds exactly to the set of tuples
$(P_{1}(z),\dots,P_{r}(z))$ where at least two of the polynomials share a
common root. In forthcoming work, we will describe an explicit resolution by an Artin
stack and hence complete the proof of rationality for all primes, including
the wild ones.
###### Remark 1.7.
In [30, Conjecture 9.1], the last author conjectured a further functional
equation for a generating function $\rho(\sigma,e,f;p,t)$, which specializes
to $\rho(n,\sigma;p)$ after setting $t=p^{-e/2}$. The methods of this paper
generalize without any difficulty to prove this conjecture by considering
$\eta_{f}(K,t)=\int_{X(K)}|\operatorname{Jac}(f)|_{K}t^{\nu_{K}(\operatorname{Jac}(f))}d\mu_{X}$
instead of
$\eta_{f}(K)=\int_{X(K)}|\operatorname{Jac}(f)|_{K}d\mu_{X}.$
### 1.3 Relation to previous work
Our work draws from multiple strands of research as described below.
#### Densities of random polynomials with fixed splitting data
The question of splitting field densities in the archimedean case goes back at
least to work of Bloch and Pólya [7] who proved asymptotic bounds on the
number of expected roots of polynomials with coefficients in $\\{-1,0,1\\}$,
followed by a series of papers of which a highlight is the work of Kac [21],
which provides an _exact formula_ for the average number of roots of a random
polynomial with Gaussian coefficients. For a survey and related results, see
[12] and [24]. The $p$-adic version was first considered by Evans [17], followed by
a series of authors ([8],[22]) all of whom were concerned with the expected
number of roots of a random $p$-adic polynomial for which the methods of Kac
can be extended. Beyond the mean, the aforementioned paper [4] was among the
first to consider the density of $p$-adic polynomials with a given number of
$\mathbb{Q}_{p}$ roots and they proved that these densities are rational in
$p$ and are symmetric under $p\to p^{-1}$. Another recent interesting paper is
by Caruso [9], which is interested in the expected number of roots in an
extension $\mathbb{Q}_{p}\subset F$ of a random $p$-adic polynomial.
#### Motivic and $p$-adic integration
Our proof owes the most debt to the literature on $p$-adic and motivic
integration. The story begins with Batyrev’s paper [2] proving that birational
Calabi-Yau varieties have the same Betti numbers using ideas from $p$-adic
integration. Following this, Kontsevich in a 1995 lecture in Orsay introduced
the theory of Motivic integration to prove that birational Calabi-Yau
varieties have the same Hodge numbers. In both these contexts, the key tool is
a change of variables formula relating ($p$-adic or motivic) integrals on the
birational varieties to corresponding integrals on a common resolution, where
the integral is much simpler to compute. There has been much work since then in this area
of which we would like to highlight the literature on McKay correspondences
([26] for a survey).
Given a finite group $G$ with an action on a variety $V$, this correspondence
seeks to relate invariants of $V/G$ to invariants of $G$. The invariants
involved are “stringy integrals” very similar to the ones appearing in our
work (as in Corollary 3.7). Indeed, our general formula involves computing
these stringy invariants for quotients $X/H$ for $H\subset G$ a subgroup with
respect to a _ramification divisor_ on $X$. [29, Theorem 1.1] proves a result
relating these stringy invariants on $X/H$ to certain other stringy invariants
on $X$. It is quite plausible that the methods of our final section could be
replaced by using the McKay correspondence and thus generalized and made more
explicit.
Another landmark result with thematic similarities to our paper is Serre’s
mass formula [27] and especially the first proof in that paper using $p$-adic
integration. In fact, there have been many recent results along similar themes
of which [9], [3] and [29] are especially relevant to our work. Some other
recent highlights are [18], [19].
Finally, we would like to mention the work of [14]. They associate to every
formula $\varphi$ in the first order language of rings with coefficients in a
field of characteristic $0$ a virtual motive $\chi_{c}(\varphi)$ in the
Grothendieck ring of an appropriate category of motives. The formula can be
interpreted as defining certain subsets of varieties as in the following
example.
For $\mathbb{P}^{1}\to\mathbb{P}^{1};t\to t^{n}$, the subset of points in the
image of this map corresponds to the formula $\varphi(x)=``\exists
y,x=y^{n}"$. More generally, for a finite Galois map $f:X\to Y$, the subset of
points in $Y$ with prescribed decomposition group corresponds to some such
formula.
In [14, Corollary 7.1.2], they prove that $\chi_{c}(\varphi)$ belongs to the
subring of motives obtained by localizing at certain polynomials in the
Lefschetz motive. Under étale realization, $\chi_{c}(\varphi)$ specializes to
the canonical measure of the corresponding subsets. To emphasize the strength
of this theorem, it is not even clear a priori that these densities should be
rational!
In relation to this work, our theorem provides a different perspective to the
uniformity phenomenon. While Denef and Loeser use quantifier elimination and
motivic integration to prove the uniformity, our proof uses more geometric
methods. As such, we obtain an explicit formula that can be used to prove the
required functional equation (and rationality for factorization densities).
### 1.4 Organization of the paper
In §2, we review some well known theorems that will be useful for us and
standardize notations and normalizations. The most important novelty in this
section is the notion of a _palindromic form of weight k_ (Definition 2.4).
The other novelty is in §2.4, where we define the notion of an admissible pair
related to a group and a corresponding partial order on it. In §3, we prove
our main Theorem 3.10 and derive some corollaries. This section contains the
core arguments of our paper. Finally, in §4, we adapt our general method to
prove the conjecture of Bhargava, Cremona, Fisher and Gajović. The main work
is involved in showing that we can construct a nice, _Tate_-type resolution
of singularities in this case.
Acknowledgements. We would like to thank Colin Crowley, Jordan Ellenberg,
Ruofan Jiang, Brian Lawrence, Daniel Litt, Andrew O’Desky, Connor Simpson,
Jason Starr, and Botong Wang for helpful discussions.
## 2 Preliminaries
### 2.1 Geometry over $p$-adic rings
Unless mentioned otherwise, we fix a prime $p$ throughout the paper and let
${{\mathcal{O}}_{K}}$ be a local ring over ${\mathbb{Z}}_{p}$ with maximal
ideal $\mathfrak{m}_{K}$ generated by a uniformizer $\pi_{K}$, fraction field
$K$ and residue field ${{\mathcal{O}}_{K}}/\mathfrak{m}_{K}={\mathbb{F}_{K}}$
of size $q$, a power of $p$ (and use similar notation for extensions
$K\subset{L}$). We will think of ${{\mathcal{O}}_{K}}$ as fixed and vary over
extensions ${{\mathcal{O}}_{K}}\subset{{\mathcal{O}}_{L}}$ with residue field
extension of degree $m=[{\mathbb{F}_{L}}:{\mathbb{F}_{K}}]$. We define
$|\cdot|_{{L}}$ to be the non-archimedean metric on ${L}$ (so that
$|\pi_{L}|_{L}=q^{-m}$) and $\sigma_{L}$ to be the Frobenius acting on the
unramified closure of ${L}$. We denote the Frobenius element in
$\operatorname{Gal}(\overline{\mathbb{F}}_{q}/\mathbb{F}_{q})$ by
$\sigma_{q}$.
We fix $\SS=\operatorname{Spec}{{\mathcal{O}}_{L}}$ and define a variety
$X\to\SS$ to be a finite type, geometrically reduced scheme that is flat over
$\SS$. By $\dim X$, we mean the relative dimension over $\SS$. Given a
morphism of schemes $T\to\SS$, we denote the base change by $X_{T}$ and
similarly for morphisms between varieties. For a ring $R$, we let $X(R)$
denote the $\operatorname{Spec}R$ valued points of $X$.
Given a scheme $X\to\operatorname{Spec}{{\mathcal{O}}_{L}}$ and a point $x\in
X({{\mathcal{O}}_{L}})$, we define ${\mathcal{O}}_{X,x}$ to be the
localization at the maximal ideal $\mathfrak{m}_{X,x}$ corresponding to the
reduction $\overline{x}\in
X({{\mathcal{O}}_{L}}/\mathfrak{m}_{{\mathcal{O}}_{L}})$ and
$\widehat{{\mathcal{O}}}_{X,x}$ to be its completion at its maximal ideal.
When $X$ is smooth, a complete set of local co-ordinates $t_{1},\dots,t_{\dim
X}$ at $x$ determines an isomorphism
$\widehat{\mathcal{O}}_{X,x}\cong{{\mathcal{O}}_{L}}[\\![t_{1},\dots,t_{\dim
X}]\\!]$ and consequently,
$\widehat{{\mathcal{O}}}_{X,x}({{\mathcal{O}}_{L}})=\widehat{{\mathcal{O}}}_{X,x}({L})=\mathfrak{m}_{L}^{\dim
X}.$
Here, $\widehat{{\mathcal{O}}}_{X,x}({{\mathcal{O}}_{L}})$ is taken to be the
space of _continuous_ ring homomorphisms to ${{\mathcal{O}}_{L}}$.
### 2.2 $p$-adic integration.
We will use techniques of $p$-adic integration throughout this paper. In this
section, we give a brief introduction and define our normalizations. We fix an
extension $K\subset L$ with residue field $\mathbb{F}_{q^{m}}$ and
$U\to\operatorname{Spec}{{\mathcal{O}}_{L}}$ a smooth variety of dimension $d$
in this section. The set $U({L})$ carries the natural structure of an
_${L}$-analytic manifold_ [20, §2.4].
###### Definition 2.1 ($p$-adic integration with respect to a form).
Let $\omega\in\Omega_{U}^{d}$ be a top form. For $V\subset U({L})$ a compact
subset, one can define the integral
$\mu_{\omega}(V)\coloneqq\int_{V}|\omega|\in\mathbb{R}_{\geq 0}.$
If $\omega=f(x)dx_{1}\wedge\dots\wedge dx_{d}$ for an ${L}$-analytic function
$f$ and local co-ordinates $x_{1},\dots,x_{d}$ on $V$, then
$\int_{V}|\omega|=\int_{V^{\prime}}|f(x)|d\mu_{{L}^{d}}\in\mathbb{R}_{\geq 0}$
where we identify $V$ with an open subset $V^{\prime}$ of ${L}^{d}$ using the
$x_{i}$ and $\mu_{{L}^{d}}$ is the canonical Haar measure on ${L}^{d}$
normalized so that $\mu_{{L}^{d}}({{\mathcal{O}}_{L}}^{d})=q^{md}$. Note that
this normalization is slightly non-standard.
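As a concrete instance of this normalization, the integral of $|t^{k}|$ over the maximal ideal $\mathfrak{m}_{L}$ (which reappears in Example 2.10) can be computed by decomposing $\mathfrak{m}_{L}$ into the valuation shells $\{v(t)=v\}$: under the convention $\mu({{\mathcal{O}}_{L}})=q^{m}$, shell $v$ has measure $q^{-mv}(q^{m}-1)$, and the resulting geometric series sums to $(q^{m}-1)/(q^{m(k+1)}-1)$. A quick numerical check of this computation:

```python
from fractions import Fraction

def shell_integral(q, m, k, shells=60):
    # Integral of |t^k| over the maximal ideal, residue field of size q^m,
    # under the normalization mu(O_L) = q^m: the shell {v(t) = v} has
    # measure q^{-m v} (q^m - 1), and |t^k| = q^{-m v k} on it.
    Q = Fraction(q ** m)
    return sum((Q - 1) * Q ** (-v * (k + 1)) for v in range(1, shells + 1))

def closed_form(q, m, k):
    # (q^m - 1) / (q^{m(k+1)} - 1), as in Example 2.10
    Q = Fraction(q ** m)
    return (Q - 1) / (Q ** (k + 1) - 1)
```

The truncated shell sum agrees with the closed form up to an error controlled by the first omitted shell.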
###### Definition 2.2 (Measures from compactifications).
Given an open immersion $U\subset X$ over
$\operatorname{Spec}{{\mathcal{O}}_{L}}$ with $X$ a smooth, proper variety, we
can assign a canonical measure $\mu_{X}$ on the set $U({L})$ [29, §4.1]. We
cover $X$ by Zariski opens $U_{i}$ such that $U_{i}({{\mathcal{O}}_{L}})$
covers $X({{\mathcal{O}}_{L}})=X({L})$ and moreover trivializes
$\Omega_{X}^{d}|_{U_{i}}$. Let $\omega_{i}\in\Omega_{X}^{d}(U_{i})$ be gauge
forms, i.e., nowhere vanishing sections. While these $\omega_{i}$ might not
glue together on intersections, the transition functions are nowhere
vanishing on $(U_{i}\cap U_{j})({{\mathcal{O}}_{L}})$ and hence have $p$-adic
norm $1$. Therefore, the measures $\mu_{\omega_{i}}$ on the $p$-adic analytic
sets $U_{i}({{\mathcal{O}}_{L}})$ glue together to give a unique measure
$\mu_{X}$.
We note that our normalization is such that
$\mu_{X}(U({L}))=|X(\mathbb{F}_{q^{m}})|$ which is _not_ the standard one
found in the literature.
The following is easy but very useful.
###### Theorem 2.3.
[18, Proposition 2.1] Suppose we have a top form $\omega\in\Omega_{U}^{d}$ and
$V\subset U({L})$ a compact subset so that $\mu_{\omega}(V)$ is well defined.
Then
1. 1.
If $V=Z({L})$ for $Z$ a Zariski-closed subscheme, $\mu_{\omega}(V)=0$.
2. 2.
Suppose we have an étale map $f:\widetilde{U}\to\widetilde{U}/G=U$ where $G$
is an étale group scheme over ${{\mathcal{O}}_{L}}$ acting freely on
$\widetilde{U}$. Then
$\frac{1}{|G({L})|}\int_{f^{-1}(V)}|f^{*}\omega|=\int_{V}|\omega|.$
###### Proof.
The first part is well known while the second follows from the observation
that the map $f$ induces a covering space map of ${L}$-analytic manifolds
$f:\widetilde{U}({L})\to U({L})$
of degree $|G({L})|$. The theorem is then immediate. ∎
### 2.3 Palindromic forms
Let $K_{0}(\mathrm{var})$ be the Grothendieck ring of varieties over
$\mathbb{F}_{q}$. We consider the extension given by
$\widetilde{K_{0}(\mathrm{var})}=\left(K_{0}(\mathrm{var})\otimes\mathbb{Q}\right)\left[\frac{\mathbb{L}^{k}-\mathbb{L}}{\mathbb{L}^{k}-1}\right]_{k\geq
2}$
where $\mathbb{L}=[\mathbb{A}^{1}]$ is the Lefschetz motive. This ring has
implicitly and explicitly appeared before in the literature on motivic
integration ([15], [14, §7]).
###### Definition 2.4.
We will be concerned with functions $\rho:\mathbb{N}\to\mathbb{Q}$ which are
of the form
$m\to|X(\mathbb{F}_{q^{m}})|$
for $X\in\widetilde{K_{0}(\mathrm{var})}$. Explicitly, such functions are
${\mathbb{Q}}$-linear combinations of products of functions of the form
$m\to\frac{q^{mk}-q^{m}}{q^{mk}-1}\text{ and }m\to\sum_{i}\alpha_{i}^{m}$
where the $\alpha_{i}$ are Weil numbers relative to $q$, i.e., algebraic
integers $\alpha$ such that there exists a $w\in\mathbb{N}$ so that under any
complex embedding
$\sigma:\overline{\mathbb{Q}}\to\mathbb{C},|\sigma(\alpha)|=q^{w}$. Both types
of functions can be extended to have domain $\mathbb{Z}$ by allowing negative
powers in the exponents. We say that such a function has weight $k$ if the
natural extension of the function to $\mathbb{Z}$ given by the same formula
satisfies
$\rho(-m)=q^{-mk}\rho(m)$
and we call such functions _palindromic forms of weight $k$_.
###### Example 2.5.
Consider the class of the scheme
${\mathcal{P}}_{k}=\operatorname{Spec}\mathbb{F}_{q^{k}}\in\widetilde{K_{0}(\mathrm{var})}$.
The point counts $|{\mathcal{P}}_{k}(\mathbb{F}_{q^{m}})|$ correspond to the
function
$m\to\sum_{a=1}^{k}\zeta_{k}^{am},$
which equals $k$ if $k\mid m$ and $0$ otherwise. This is a palindromic form of weight $0$.
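The character sum in this example can be checked numerically: $\sum_{a=1}^{k}\zeta_{k}^{am}$ evaluates to $k$ when $k\mid m$ and to $0$ otherwise, and in particular it is unchanged under $m\to-m$, which is exactly the weight-$0$ palindromicity.

```python
import cmath

def rho_Pk(k, m):
    # Sum of zeta_k^{a m} over a = 1..k, with zeta_k = exp(2 pi i / k);
    # this is k when k divides m and 0 otherwise.
    return sum(cmath.exp(2j * cmath.pi * a * m / k) for a in range(1, k + 1))
```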
###### Remark 2.6.
It is immediate that the sum of two palindromic forms of weight $k$ has weight
$k$ while the product of palindromic forms of weights $k_{1},\dots,k_{r}$ has
weight $k_{1}+\dots+k_{r}$.
###### Lemma 2.7.
Let $X/\mathbb{F}_{q}$ be a smooth, proper variety. The function
$m\to\rho_{X}(m)=|X(\mathbb{F}_{q^{m}})|$ is a palindromic form of weight
$\dim X$.
###### Proof.
By the Grothendieck-Lefschetz trace formula, one has Weil numbers
$\alpha_{ij}$ for $0\leq i\leq 2\dim X$ and $1\leq j\leq\dim
H^{i}_{\text{\'{e}t}}(X\times_{\mathbb{F}_{q}}\overline{\mathbb{F}}_{q},\mathbb{Q}_{\ell})$
such that
$\rho_{X}(m)=\sum_{i=0}^{2\dim X}\sum_{j}\alpha_{ij}^{m}.$
Since $X$ is smooth and proper, Poincaré duality guarantees us that for each
$i,j$, there exists a $j^{\prime}$ (with $j\to j^{\prime}$ a permutation) such
that $\alpha_{2\dim X-i,j^{\prime}}=q^{\dim X}\alpha_{ij}^{-1}$. Therefore
$\rho_{X}(-m)=\sum_{i=0}^{2\dim X}\sum_{j}\alpha_{ij}^{-m}=q^{-m\dim
X}\sum_{i=0}^{2\dim X}\sum_{j}\alpha_{2\dim X-i,j^{\prime}}^{m}=q^{-m\dim
X}\rho_{X}(m)$
as required. ∎
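A numerical sanity check of Lemma 2.7 in dimension one: take $q=5$ and the Frobenius eigenvalue pair $\alpha=1+2i$, $\bar{\alpha}=1-2i$ on $H^{1}$ (chosen here purely for illustration; $\alpha\bar{\alpha}=5$, so these are Weil numbers of the right weight). The point count $\rho(m)=5^{m}+1-\alpha^{m}-\bar{\alpha}^{m}$ then satisfies $\rho(-m)=5^{-m}\rho(m)$:

```python
def rho(m, q=5, alpha=1 + 2j):
    # rho(m) = q^m + 1 - alpha^m - conj(alpha)^m with alpha * conj(alpha) = q,
    # so Poincare duality pairs alpha with q / alpha = conj(alpha).
    return q ** m + 1 - alpha ** m - alpha.conjugate() ** m
```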
###### Lemma 2.8.
Let $X/\mathbb{F}_{q}$ be a smooth proper variety of dimension $k$ and let
$\ell$ be a prime coprime to $q$. For $g\in\mathrm{Aut}_{\mathbb{F}_{q}}(X)$
of finite order $d$, the function
$m\to\rho_{X,g}(m)\coloneqq\sum_{i=0}^{2\dim
X}(-1)^{i}\operatorname{tr}\left((g+g^{-1})\sigma_{q}^{m}|H^{i}_{\text{\'{e}t}}(X\times_{\mathbb{F}_{q}}\overline{\mathbb{F}}_{q},\mathbb{Q}_{\ell})\right)$
is a palindromic form of weight $\dim X$.
###### Proof.
Since $g$ is defined over $\mathbb{F}_{q}$, it commutes with the action of
$\sigma_{q}$ on étale cohomology. Since it is of finite order, it acts
semisimply and we can find a common set of eigenvectors $v_{ij}\in
H^{i}_{\text{\'{e}t}}(X\times_{\mathbb{F}_{q}}\overline{\mathbb{F}}_{q},\overline{\mathbb{Q}}_{\ell})$
with
$\sigma_{q}(v_{ij})=\alpha_{ij}v_{ij},g(v_{ij})=\lambda_{ij}v_{ij}$
with the $\alpha_{ij}$ as in the previous lemma and the $\lambda_{ij}$ roots
of unity of order dividing $d$. Also as in the previous lemma, Poincaré
duality guarantees us that for each $i,j$, there exists a $j^{\prime}$ with a
$G$-equivariant pairing
$\overline{\mathbb{Q}}v_{ij}\otimes\overline{\mathbb{Q}}v_{ij^{\prime}}\to
H^{2\dim
X}(X\times_{\mathbb{F}_{q}}\overline{\mathbb{F}}_{q},\overline{\mathbb{Q}}_{\ell})=\overline{\mathbb{Q}}(-\dim
X)$
so that $\alpha_{2\dim X-i,j^{\prime}}=q^{\dim X}\alpha_{ij}^{-1}$ and
$\lambda_{2\dim X-i,j^{\prime}}=\lambda_{ij}^{-1}$ since $G$ acts trivially on
the top degree cohomology (this can be seen, for instance, through the cycle
class map: the class of a point generates the top degree cohomology and any two
points are algebraically equivalent, so the $g$-action on a point class is
trivial). Therefore,
$\displaystyle\rho_{X,g}(m)$ $\displaystyle=\sum_{i=0}^{2\dim
X}\sum_{j}\alpha_{ij}^{m}\left(\lambda_{ij}+\lambda_{ij}^{-1}\right)$
$\displaystyle\implies\rho_{X,g}(-m)$ $\displaystyle=\sum_{i=0}^{2\dim
X}\sum_{j}\alpha_{ij}^{-m}\left(\lambda_{ij}+\lambda_{ij}^{-1}\right)$
$\displaystyle=\sum_{i=0}^{2\dim X}\sum_{j}q^{-m\dim X}\alpha_{2\dim
X-i,j^{\prime}}^{m}\left(\lambda_{2\dim X-i,j^{\prime}}^{-1}+\lambda_{2\dim
X-i,j^{\prime}}\right)$ $\displaystyle=q^{-m\dim X}\rho_{X,g}(m).$
∎
###### Remark 2.9.
We note that the function
$m\to\sum_{i=0}^{2\dim
X}(-1)^{i}\operatorname{tr}\left(g\sigma_{q}^{m}|H^{i}_{\text{\'{e}t}}(X\times_{\mathbb{F}_{q}}\overline{\mathbb{F}}_{q},\mathbb{Q}_{\ell})\right)$
is, perhaps surprisingly, not a palindromic form in general. Let $E$ be the
elliptic curve defined by the equation $y^{2}=x^{3}+x$ over $\mathbb{Z}_{p}$
with $p\equiv 1\pmod{4}$ and a fixed $i=\sqrt{-1}\in\mathbb{Z}_{p}$. We have
an automorphism $g:E\to E$ defined over $\mathbb{Z}_{p}$ given by
$g(x,y)=(-x,iy)$. Since $g$ has eigenvalues $\pm i$ on
$H^{1}_{\text{\'{e}t}}(\overline{E},\mathbb{Q}_{\ell})$, there exist
$a,b\in\mathbb{Z}$ such that $a^{2}+b^{2}=p$ (for instance, if $p=5$ with
$i\equiv 2\pmod{5}$, we have $a=1,b=2$) and
$\displaystyle\nu(m)\coloneqq\sum_{i=0}^{2\dim E}(-1)^{i}$
$\displaystyle\operatorname{tr}\left(g\sigma_{p}^{m}|H^{i}_{\text{\'{e}t}}(E\times_{\mathbb{F}_{p}}\overline{\mathbb{F}}_{p},\mathbb{Q}_{\ell})\right)=1+(i(a+ib)^{m}-i(a-ib)^{m})+p^{m}.$
Therefore,
$\displaystyle\nu(-m)=$ $\displaystyle 1+(i(a+ib)^{-m}-i(a-ib)^{-m})+p^{-m}$
$\displaystyle=$ $\displaystyle
p^{-m}\left(p^{m}+(i(a-ib)^{m}-i(a+ib)^{m})+1\right)\neq p^{-m}\nu(m)$
which proves that $\nu$ is not a palindromic form.
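This failure is easy to check numerically with the values $p=5$, $a=1$, $b=2$ from the footnote: one finds $\nu(1)=1-4+5=2$ and $\nu(-1)=1+\tfrac{4}{5}+\tfrac{1}{5}=2$, while $p^{-1}\nu(1)=\tfrac{2}{5}$.

```python
def nu(m, p=5, z=1 + 2j):
    # nu(m) = 1 + (i z^m - i conj(z)^m) + p^m with z = a + ib = 1 + 2i, p = 5,
    # as in Remark 2.9; the middle term is real since it equals -2 Im(z^m).
    return 1 + (1j * z ** m - 1j * z.conjugate() ** m) + p ** m
```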
###### Example 2.10.
For any extension ${{\mathcal{O}}_{K}}\subset{{\mathcal{O}}_{L}}$ where
${{\mathcal{O}}_{L}}$ has residue field of size $q^{m}$, we define
$\delta_{k}(m;q)\coloneqq\int_{\mathfrak{m}_{L}}|t^{k}|_{L}dt=\frac{q^{m}-1}{q^{m(k+1)}-1}$
and $\eta_{k}(m;q)=\delta_{k}(m;q)-1$. The function $m\to\eta_{k}(m;q)$ is a
palindromic form of weight $1$:
$\eta_{k}(-m;q)=\frac{q^{-m}-q^{-m(k+1)}}{q^{-m(k+1)}-1}=q^{-m}\frac{q^{m(k+1)}-q^{m}}{1-q^{m(k+1)}}=q^{-m}\eta_{k}(m;q).$
Note that $\delta_{k},\eta_{k}$ depend on ${{\mathcal{O}}_{L}}$ only through
the size of its residue field $q^{m}$. Indeed, $\eta_{k}(m;q)$ is a function
purely of $q^{m}$ but we write it this way to emphasize that we will think of
$q$ as fixed and $m$ as varying.
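The functional equation for $\eta_{k}$ can be verified exactly with rational arithmetic, since both sides are rational functions of $q^{m}$:

```python
from fractions import Fraction

def eta(k, m, q):
    # eta_k(m; q) = delta_k(m; q) - 1 = (q^m - q^{m(k+1)}) / (q^{m(k+1)} - 1);
    # Fraction handles the negative exponents needed for m -> -m exactly.
    Q = Fraction(q)
    return (Q ** m - Q ** (m * (k + 1))) / (Q ** (m * (k + 1)) - 1)
```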
###### Definition 2.11.
We say that a function $\rho:\\{\text{extensions }K\subset{L}\\}\to\mathbb{Q}$
is a palindromic form of weight $k$ if
1. 1.
The function depends on ${L}$ only through the degree of residue field
extensions $m$.
2. 2.
The function $m\to\rho({L})$ is a palindromic form of weight $k$.
### 2.4 The poset of admissible pairs
We discuss here a purely group theoretic notion that will become crucial in
the proof of Theorem 3.10. Let $G$ be a finite group.
We say that a tuple $\tau=(H_{\tau},g_{\tau}H_{\tau})$ consisting of a
subgroup and a left coset is an _admissible pair_ if $g_{\tau}$ normalizes
$H_{\tau}$, i.e., $g_{\tau}H_{\tau}=H_{\tau}g_{\tau}$. The data of an
admissible pair $\tau$ is equivalent to the choice of a pair of subgroups
$H_{\tau}\subset H_{1,\tau}\subset G$ such that $H_{\tau}$ is normal in
$H_{1,\tau}$ and $C_{\tau}\coloneqq H_{1,\tau}/H_{\tau}$ is cyclic along with
the choice of a distinguished generator $g_{\tau}$ for this cyclic group.
Here, $H_{1,\tau}$ corresponds to the subgroup generated by $g_{\tau}$ and
$H_{\tau}$.
We say that $\tau^{\prime}\leq\tau$ for two admissible pairs if
$H_{\tau^{\prime}}\subset H_{\tau}$ and $g_{\tau^{\prime}}\in
g_{\tau}H_{\tau}$. This is easily checked to be a partial order. There is one
maximal element $(G,G)$ while the minimal elements correspond exactly to
$(e,g)$ for $e$ the identity subgroup and $g\in G$ any element.
For $\lambda\in G$, we define the conjugate of $\tau=(H,gH)$ by $\lambda$ by
$\lambda\tau\lambda^{-1}\coloneqq(\lambda H\lambda^{-1},\lambda
gH\lambda^{-1}).$
We also define the inverse of an admissible pair as
$(H,gH)^{-1}\coloneqq(H,g^{-1}H).$
Both conjugation and inversion are easily seen to be automorphisms of the
poset of admissible pairs.
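For a concrete illustration, the poset of admissible pairs can be enumerated by brute force for a small group. The following sketch does this for $G=S_{3}$, realized as permutations of $\{0,1,2\}$ (the group and the helper names are our own choices): one finds $12$ admissible pairs, with the six minimal elements $(e,g)$ and the maximal element $(G,G)$ among them, and inversion is an automorphism of the partial order.

```python
from itertools import permutations, combinations

G = list(permutations(range(3)))           # S_3 as permutations of {0, 1, 2}

def compose(a, b):                          # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(3))

def inverse(a):
    r = [0, 0, 0]
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

# all subgroups: nonempty subsets closed under composition (finite => subgroup)
subgroups = [frozenset(H) for r in range(1, 7) for H in combinations(G, r)
             if all(compose(x, y) in H for x in H for y in H)]

# admissible pairs (H, gH) with gH = Hg, recorded once per coset
pairs = set()
for H in subgroups:
    for g in G:
        gH = frozenset(compose(g, h) for h in H)
        Hg = frozenset(compose(h, g) for h in H)
        if gH == Hg:
            pairs.add((H, gH))

def leq(s, t):
    # tau' <= tau iff H' is a subgroup of H and g' in gH,
    # equivalently g'H' is contained in gH
    return s[0] <= t[0] and s[1] <= t[1]

def pair_inverse(t):                        # (H, gH)^{-1} = (H, g^{-1} H)
    H, C = t
    g = next(iter(C))
    return (H, frozenset(compose(inverse(g), h) for h in H))
```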
We will use the language of an incidence algebra on a poset. We briefly recall
this notion for the convenience of the reader. Given a poset ${\mathcal{P}}$,
an interval $[a,b]$ in it is the set of elements $c$ such that $a\leq c\leq b$
(where we tacitly assume that $a\leq b$). Its incidence algebra
${\mathcal{I}}_{{\mathcal{P}}}$ (over any ring $R$) is the ring of functions
from the set of intervals to $R$. Addition in ${\mathcal{I}}_{{\mathcal{P}}}$
is pointwise while multiplication is defined by convolution:
$\text{For all }\alpha,\beta\in{\mathcal{I}}_{{\mathcal{P}}},\hskip
28.45274pt(\alpha\ast\beta)([a,b])=\sum_{a\leq c\leq
b}\alpha([a,c])\beta([c,b]).$
This multiplication need not be commutative. An element
$\alpha\in{\mathcal{I}}_{{\mathcal{P}}}$ is invertible precisely when
$\alpha([a,a])\in R^{\times}$ for all $a\in{\mathcal{P}}$. We will suppress the
ring $R$ from the notation; it will usually be taken to be $\mathbb{Q}$ or
$\mathbb{R}$.
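The convolution and the invertibility criterion can be illustrated on the most classical incidence algebra, that of a divisor poset, where inverting the zeta function (the element that is $1$ on every interval) recovers the Möbius function; this toy poset stands in for the poset of admissible pairs, which is handled the same way.

```python
from fractions import Fraction

divisors = [1, 2, 3, 4, 6, 12]              # poset under divisibility
intervals = [(a, b) for a in divisors for b in divisors if b % a == 0]

def conv(alpha, beta):
    # (alpha * beta)([a, b]) = sum over a <= c <= b of alpha([a, c]) beta([c, b])
    return {(a, b): sum(alpha[(a, c)] * beta[(c, b)]
                        for c in divisors if c % a == 0 and b % c == 0)
            for (a, b) in intervals}

zeta = {iv: Fraction(1) for iv in intervals}
delta = {(a, b): Fraction(1 if a == b else 0) for (a, b) in intervals}

def invert(alpha):
    # Solve alpha * mu = delta by induction on the length of the interval;
    # possible precisely because every alpha([a, a]) is a unit.
    mu = {}
    for (a, b) in sorted(intervals, key=lambda iv: iv[1] // iv[0]):
        if a == b:
            mu[(a, b)] = 1 / alpha[(a, b)]
        else:
            mu[(a, b)] = -sum(alpha[(a, c)] * mu[(c, b)]
                              for c in divisors
                              if c % a == 0 and b % c == 0 and c != a) / alpha[(a, a)]
    return mu
```

Here `invert` proceeds interval by interval, mirroring the criterion that $\alpha$ is invertible exactly when each $\alpha([a,a])$ is a unit.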
###### Definition 2.12.
In particular, we will make use of the following elements in
${\mathcal{I}}_{{\mathcal{P}}}$ for ${\mathcal{P}}$ the poset of admissible
pairs:
$\alpha([\tau^{\prime},\tau])=|\\{\lambda\in H_{\tau}\backslash
G:\lambda\tau^{\prime}\lambda^{-1}\leq\tau\\}|,$
$\beta([\tau^{\prime},\tau])=|\\{\tau^{\prime\prime}\in{\mathcal{P}}:\tau^{\prime\prime}=\lambda\tau^{\prime}\lambda^{-1}\leq\tau\text{
for some }\lambda\in G\\}|.$
and
$\gamma([\tau^{\prime},\tau])=\alpha([\tau^{\prime},\tau])/\beta([\tau^{\prime},\tau])$.
If $\tau^{\prime}\leq\tau$, then
$\alpha([\tau^{\prime},\tau]),\beta([\tau^{\prime},\tau])$ are both at least
$1$ since $\tau^{\prime}$ is conjugate to itself. Therefore,
$\gamma([\tau^{\prime},\tau])>0$ when $\tau^{\prime}\leq\tau$.
Note that the condition in the definition of $\alpha$ indeed only depends on
the coset $H_{\tau}\lambda$. It is also clear that $\alpha$ and $\beta$ are
invariant under conjugating the poset by an element of $G$.
### 2.5 Generically finite, Galois maps
Let $X,Y$ be two geometrically connected, smooth and proper varieties relative
to $\operatorname{Spec}{{\mathcal{O}}_{K}}$.
###### Definition 2.13.
[generically finite] A map $f:X\to Y$ is said to be generically finite if
there is some open $U\subset Y$ such that $f:f^{-1}(U)\to U$ is étale and
finite. There is in fact a maximal such open which we will denote by $U_{f}$
and the preimage of the complement $Z_{f}=f^{-1}(Y-U_{f})$ will be called the
exceptional locus. This is always codimension $1$ in $X$ by the Jacobian
criterion (see Definition 2.21). We furthermore require that $Z_{f}$ has no components
supported purely over the special fiber of
$\operatorname{Spec}{{\mathcal{O}}_{K}}$ or equivalently, that there is some
closed point on $Y$ over
$\operatorname{Spec}{{\mathcal{O}}_{K}}/\pi{{\mathcal{O}}_{K}}$ over which the
map $f$ is étale.
###### Definition 2.14.
[generically finite, Galois] Let $G$ be a constant group scheme over
$\operatorname{Spec}{{\mathcal{O}}_{K}}$. A generically finite map $f:X\to Y$
is said to be Galois with Galois group $G$ if the restriction of the map to
$U_{f}$ is an étale Galois cover with Galois group $G$ and every such
automorphism extends to an automorphism of $f$. That is to say,
$\mathrm{Aut}_{{\mathcal{O}}_{K}}(f:X\to
Y)=\mathrm{Aut}_{{\mathcal{O}}_{K}}(f:f^{-1}(U_{f})\to U_{f})=G.$
###### Example 2.15.
An interesting family of generically finite, Galois maps are the following.
Let $C$ be a curve of genus $g\geq 2$. Let $D$ be a degree $g$ divisor on $C$.
Then, the morphism
$\displaystyle C^{g}$ $\displaystyle\to\operatorname{Jac}(C);$
$\displaystyle(P_{1},\dots,P_{g})$ $\displaystyle\to[P_{1}]+\dots+[P_{g}]-D$
is an example of a generically finite map. It would be interesting to compute
splitting densities for these maps although we don’t pursue it in this paper.
This paper will be mainly concerned with such maps and to that end, we state a
few preliminary definitions and properties of such maps here.
###### Definition 2.16.
[Decomposition group] Let $P\in U_{f}({L})$ be a rational point away from the
ramified locus. Then, $f^{-1}(P)$ is an étale ${L}$-algebra and each geometric
point $Q\in f^{-1}(P)$ over $\overline{{L}}$ defines a homomorphism
$\operatorname{Gal}(\overline{{L}}/{L})\to G$. We denote the image (known as
the decomposition group of $Q$ over $P$) by $D_{Q}\subset G$. Correspondingly,
the image of the inertia group
$\operatorname{Gal}(\overline{L}/L^{\mathrm{ur}})$ is denoted by $I_{Q}\subset
G$. The image of the Frobenius coset corresponds to a generating coset
$\sigma_{Q}\in D_{Q}/I_{Q}$. We note that the pair
$\tau_{Q}=(I_{Q},\sigma_{Q})$ is an _admissible pair_ as in §2.4.
If $P\in U_{f}({{\mathcal{O}}_{L}})$, then $f^{-1}(P)$ is an unramified
extension of ${{\mathcal{O}}_{L}}$ and a choice of $Q$ thus determines a well
defined Frobenius element $\sigma_{Q}\in G$.
###### Remark 2.17.
Different choices of $Q$ over $P\in U_{f}({L})$ correspond to conjugating $D_{Q}$
by elements in $G$. More precisely, let $(Q_{1},\dots,Q_{n})$ be the geometric
points of $X$ mapping to $P$. This set has a natural action of $G$ and is a
$G$-torsor under this action.
If $Q_{i}=\lambda(Q_{j})$ for some $\lambda\in G$, then $I_{Q_{i}}=\lambda
I_{Q_{j}}\lambda^{-1}$ and similarly for the decomposition groups and
Frobenius cosets. Therefore, we can define $\tau_{P}$ as the conjugacy class
of $\tau_{Q_{i}}$ by picking any lift $Q_{i}$ of $P$ to $X$ and similarly,
$(I_{P},D_{P})=\\{(\lambda I_{Q_{i}}\lambda^{-1},\lambda
D_{Q_{i}}\lambda^{-1}):\lambda\in G\\}.$
The following definition stratifies $P\in U_{f}({L})$ based on $\tau_{P}$,
i.e., the data of the (conjugacy class) of the inertia group at $P$ along with
a Frobenius coset.
###### Definition 2.18.
For $\tau$ an admissible pair for the group $G$, we define
$U_{f,\tau}({L})=\\{P\in U_{f}({L}):\tau\in\tau_{P}\\}$
with measure $\rho_{f,\tau}({L})=\mu_{Y}(U_{f,\tau}({L}))$. This only depends
on the conjugacy class of $\tau$. The $U_{f,\tau}({L})$ are in fact
${L}$-analytic open sets, as the next lemma shows, which proves that the
measures $\rho_{f,\tau}({L})$ are well defined.
###### Lemma 2.19.
For any admissible pair $\tau$, the $U_{f,\tau}({L})$ are ${L}$-analytic open
sets of $Y({L})$.
###### Proof.
By Krasner’s lemma [25, Proposition 3.5.74], the isomorphism type of the étale
scheme $f^{-1}(y)$ is a locally constant function as $y$ varies over $Y({L})$
in the analytic topology. Therefore, both the Galois group of the fiber and
its action on the geometric points are locally constant which proves that the
corresponding admissible pair is also locally constant. ∎
We obtain a coarser stratification of $P\in U_{f}({L})$ by remembering only
the (conjugacy class of the) inertia group and decomposition group associated
to $P$, i.e., we forget the precise generating coset corresponding to the
Frobenius coset.
###### Definition 2.20.
For $\tau$ an admissible pair for the group $G$, we define
$U_{f,H_{\tau},H_{1,\tau}}({L})=\\{P\in
U_{f}({L}):(H_{\tau},H_{1,\tau})\in(I_{P},D_{P})\\}$
with measure
$\rho_{f,H_{\tau},H_{1,\tau}}({L})=\mu_{Y}(U_{f,H_{\tau},H_{1,\tau}}({L}))$.
As before, this only depends on the conjugacy class of $\tau$.
We also have a change of variables formula for generically finite maps. We
begin by defining the Jacobian.
###### Definition 2.21.
(Jacobian) Let $f:X\to Y$ be a generically finite map over
$\operatorname{Spec}{{\mathcal{O}}_{K}}$. Let $V\subset X$ and $U\subset Y$ be
affine opens such that $f(V)\subset U$ and there exist gauge forms
$\omega_{V},\omega_{U}$ on $V,U$ respectively (well defined as usual up to a
$p$-adic unit). We define the Jacobian of $f$ on $V$ by
$\operatorname{Jac}(f)=\frac{f^{*}(\omega_{U})}{\omega_{V}}.$
It is independent of the choice of local gauge forms up to a unit in
${{\mathcal{O}}_{K}}$ and in particular, the norm $|\operatorname{Jac}(f)|$ is
a well defined function on $X(K)$.
Alternatively, one could view $\operatorname{Jac}(f)$ as the canonical map
between the canonical bundles $f^{*}\mathscr{O}(K_{Y})\to\mathscr{O}(K_{X})$
or equivalently as a section of
$f^{*}\mathscr{O}(K_{Y})^{\vee}\otimes\mathscr{O}(K_{X})$. Specifying local
bases $f^{*}(\omega_{U}),\omega_{V}$ for the two line bundles then determines
the function $\operatorname{Jac}(f)$ as defined above. It is clear from this
description that the vanishing locus of $\operatorname{Jac}(f)$ is precisely
the exceptional locus $Z_{f}\subset X$.
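As an illustration of how $|\operatorname{Jac}(f)|$ enters local computations, consider the toy map $f(x)=x^{2}$ on $\mathbb{Z}_{p}$ for odd $p$, where $\operatorname{Jac}(f)=2x$ and $|2x|_{p}=|x|_{p}$. With the Haar measure normalized so that $\mu(\mathbb{Z}_{p})=1$, one has $\int_{\mathbb{Z}_{p}}|x|_{p}^{e}\,d\mu=(1-p^{-1})/(1-p^{-(e+1)})$. The following sketch is a numerical check of this closed form against the defining series, not part of the text's formalism:

```python
from fractions import Fraction

def int_abs_x_e(p, e, terms=60):
    # truncate the series: ∫_{Z_p} |x|_p^e dμ = Σ_{v≥0} p^{-ev} · μ{v_p(x) = v}
    #                                         = Σ_{v≥0} p^{-ev} (p^{-v} - p^{-v-1})
    s = Fraction(0)
    for v in range(terms):
        s += Fraction(1, p ** (e * v)) * (Fraction(1, p ** v) - Fraction(1, p ** (v + 1)))
    return s

def closed_form(p, e):
    # summing the geometric series gives (1 - p^{-1}) / (1 - p^{-(e+1)})
    return (1 - Fraction(1, p)) / (1 - Fraction(1, p ** (e + 1)))

# for f(x) = x^2 and odd p, |Jac(f)|_p = |2x|_p = |x|_p, so take e = 1:
p = 5
print(closed_form(p, 1))  # → 5/6, i.e. p/(p+1)
assert abs(int_abs_x_e(p, 1) - closed_form(p, 1)) < Fraction(1, p ** 50)
```

The exact rational arithmetic makes the agreement between the truncated series and the closed form transparent; the same geometric-series computation underlies the local factors $\delta_{e}(m;q)$ used below.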
### 2.6 Galois twists
We will use Galois twists of generically finite, Galois maps at multiple
points in this paper. For convenience, we recall this definition and state
some basic properties. We fix $f:X\to Y$ to be a generically finite, Galois
map with Galois group $G$.
###### Definition 2.22 (Galois twists).
We say that $\widetilde{f}:\widetilde{X}\to Y$ defined over
$\operatorname{Spec}{{\mathcal{O}}_{K}}$ is a twist of $f$ if there exists an
unramified extension $K\subset K^{\prime}$ such that the map
$\widetilde{f}_{\mathcal{O}_{K^{\prime}}}:\widetilde{X}_{\mathcal{O}_{K^{\prime}}}\to
Y_{\mathcal{O}_{K^{\prime}}}$ is isomorphic to
$f_{\mathcal{O}_{K^{\prime}}}:X_{\mathcal{O}_{K^{\prime}}}\to
Y_{\mathcal{O}_{K^{\prime}}}$.
Given an element $g\in G$ of order $r$, we can define the twist of $f$ with
respect to $g$ as follows.
###### Definition 2.23.
[$g$-twist] Let $\mathcal{O}_{K^{\prime}}$ be the unique degree $r$ unramified
extension of ${{\mathcal{O}}_{K}}$ with
$\operatorname{Gal}(\mathcal{O}_{K^{\prime}}/{{\mathcal{O}}_{K}})=\mathbb{Z}/r\mathbb{Z}$
generated by $\sigma_{K}$. We define
${}^{g}X=X\times_{\operatorname{Spec}{{\mathcal{O}}_{K}}}\operatorname{Spec}\mathcal{O}_{K^{\prime}}/\langle(g,\sigma_{K})\rangle$
as a geometric quotient. As a scheme, it is defined over
$\operatorname{Spec}{{\mathcal{O}}_{K}}$ and admits a map
${}^{g}f:{{}^{g}X}\to Y$. It is a standard fact that this is a twist of $f$.
One sees from the definition that ${}^{g}X$ ‘_is_’ $X$ with the Frobenius
acting by ${{}^{g}\sigma_{K}}\coloneqq g\circ\sigma_{K}$ instead of
$\sigma_{K}$. The cohomology of the twist with its Frobenius action can be
identified with
$(H^{i}_{\text{\'{e}t}}({{}^{g}X}\times_{\mathbb{F}_{q}}\overline{\mathbb{F}}_{q},\mathbb{Q}_{\ell}),{{}^{g}\sigma_{K}})\cong(H^{i}_{\text{\'{e}t}}({X}\times_{\mathbb{F}_{q}}\overline{\mathbb{F}}_{q},\mathbb{Q}_{\ell}),{g^{-1}\sigma_{K}}).$
###### Remark 2.24.
We note that the map ${}^{g}f:{{}^{g}X}\to Y$ does not have a constant
automorphism group $G$; instead it has the étale automorphism group scheme
${}^{g}G$, which “_is_” $G$ with the Frobenius acting by $h\to ghg^{-1}$.
## 3 On the palindromicity of various natural densities
This section contains the main theorems of this paper. We prove that various
natural densities are palindromic. Before beginning the proofs, we establish
some useful definitions and notations.
###### Definition 3.1.
[Simple normal crossings divisor] Let
$X/\operatorname{Spec}{{\mathcal{O}}_{L}}$ be a variety. We say that a divisor
$D\subset X$ is a simple normal crossings divisor (sncd) relative to
$\operatorname{Spec}{{\mathcal{O}}_{L}}$ if $X$ is smooth over
$\operatorname{Spec}{{\mathcal{O}}_{L}}$ and
1.
$D=\bigcup_{i=1}^{r}D_{i}$ is a decomposition into geometrically irreducible
components such that for any subset $I\subset\\{1,\dots,r\\}$,
$D_{I}\coloneqq\bigcap_{i\in I}D_{i}$ is smooth over
$\operatorname{Spec}{{\mathcal{O}}_{L}}$. We use the convention that
$D_{\emptyset}=X$.
2.
For every $x\in D_{I}({{\mathcal{O}}_{L}})$, there exists a regular system of
parameters $t_{i},i\in I$ over ${{\mathcal{O}}_{L}}$ such that $D_{i}$ is cut
out by $t_{i}$ in ${\mathcal{O}}_{X,x}$ (i.e., $t_{j}$ is a non-zero-divisor
in ${\mathcal{O}}_{X,x}/(t_{i_{1}},\dots,t_{i_{r}})$ for
$j\not\in\\{i_{1},\dots,i_{r}\\}\subset I$ and
${\mathcal{O}}_{X,x}/(t_{i_{1}},\dots,t_{i_{r}})$ is smooth over
${{\mathcal{O}}_{L}}$).
###### Definition 3.2.
[Geometric sncd] Let $X/\operatorname{Spec}{{\mathcal{O}}_{K}}$ be a variety.
We say that a divisor $D\subset X$ is a geometric sncd if there exists an
unramified extension ${{\mathcal{O}}_{K}}\subset\mathcal{O}_{K^{\prime}}$
such that $D_{\mathcal{O}_{K^{\prime}}}\subset
X_{\mathcal{O}_{K^{\prime}}}/\operatorname{Spec}\mathcal{O}_{K^{\prime}}$ is a
sncd relative to $\operatorname{Spec}\mathcal{O}_{K^{\prime}}$. In this case,
we write $D=\bigcup_{i=1}^{r}D_{i}$ with the $D_{i}$ irreducible over
$\operatorname{Spec}{{\mathcal{O}}_{K}}$ (but possibly not geometrically
irreducible). For each $i\leq r$, we define $K_{i}$ to be the smallest
unramified extension over which $D_{i,\mathcal{O}_{K_{i}}}=\bigcup_{j\in
J_{i}}D_{ij,\mathcal{O}_{K_{i}}}$ with the $D_{ij}$ geometrically irreducible
(and hence smooth). In this case, the Frobenius automorphism $\sigma_{K}$ of
$K_{i}/K$ will transitively permute the $D_{ij}$ and we define
$\deg(\mathcal{O}_{K_{i}}/{{\mathcal{O}}_{K}})=k_{i}$. We will consider the
set $J_{i}=\\{{1},\dots,{k_{i}}\\}$ to be equipped with the action of the
Frobenius $\sigma_{K}$ (corresponding to the action on the divisors $D_{ij}$)
and denote by $J_{i}/\sigma_{K}^{\ell}$ the set of orbits for the action of
$\sigma_{K}^{\ell}$. At each closed point $x\in X$, we let
$t_{i,x}\in{\mathcal{O}}_{X,x}$ be a local function cutting out $D_{i}$ and
similarly, we define
$t_{ij,x}\in{\mathcal{O}}_{X,x}\otimes_{{{\mathcal{O}}_{K}}}\mathcal{O}_{K^{\prime}}$
cutting out $D_{ij}$ such that $\sigma_{K}(t_{ij})=t_{i\sigma_{K}(j)}$.
We note that $D\subset X$ being a geometric sncd is invariant under taking
unramified twists as in Definition 2.23 since this condition can be checked
over the unramified closure.
###### Hypothesis 3.3.
A tuple $(g,X\to\operatorname{Spec}{{\mathcal{O}}_{K}},Z\subset X)$ where
$g\in\operatorname{Aut}(X)$ and $Z\subset X$ is a $g$-equivariant closed
subscheme has an equivariant resolution if there exists a smooth variety
$\widetilde{X}\to\operatorname{Spec}{{\mathcal{O}}_{K}}$ with an automorphism
$g:\widetilde{X}\to\widetilde{X}$ and a proper, $g$-equivariant birational map
$\pi:\widetilde{X}\to X$ such that $\pi^{-1}(Z)$ is a geometric sncd.
This hypothesis need not be satisfied even for $g=\mathrm{id}$ and
$X=\mathbb{P}^{1}$, as can be seen by considering
$D=V(x^{2}-p^{2}y^{2})\subset\mathbb{P}^{1}$.
The minimal resolution of singularities is given in this example by blowing up
$X$ at the closed point $V(t,p)$ and $\pi^{-1}(D)$ has a component purely
supported in characteristic $p$ and hence is not a relative normal crossings
divisor. Nevertheless, it is generally not a restrictive hypothesis.
###### Remark 3.4.
If $f:X\to Y$ is generically finite with Galois group $G$ over (an open
subset) of the ring of integers of a number field, then the above hypothesis
for all the tuples $(g,X,Z_{f})$ where $g\in G$ is simultaneously satisfied
for all but finitely many $p$ since we can equivariantly resolve singularities
over characteristic $0$ (for instance, see [1],[16] or [5]) and _spread out_
to all but finitely many primes.
Before proving the main theorems of this section, we first do a local
computation. For any predicate $\mathscr{P}$ (such as “$\gcd(k,m)=\ell$”), we
define ${\mathcal{I}}(\mathscr{P})$ to be the indicator function that is $1$
when the predicate is satisfied and zero otherwise.
Since our divisors are not necessarily geometrically irreducible, our formulas
depend on the divisibility properties of the size of the residue field of
${L}$, and we use the indicator functions to express this concisely.
###### Lemma 3.5.
Let $D\subset X$ be a geometric sncd as in Definition 3.2 (with the same
notation). For any $e_{1},\dots,e_{r}\in\mathbb{Z}_{\geq 0}$ and
$M_{1}\subset\\{1,\dots,k_{1}\\},\dots,M_{r}\subset\\{1,\dots,k_{r}\\}$,
define $D_{M_{i}}=\bigcap_{j\in M_{i}}D_{ij}$ and let $x\in
D_{M_{1},\dots,M_{r}}\coloneqq\bigcap_{i=1}^{r}D_{M_{i}}$ be a point defined
over an extension ${{\mathcal{O}}_{L}}\supset{{\mathcal{O}}_{K}}$. We have the
identity
$\displaystyle\delta_{x,J_{1},\dots,J_{r}}({L})$
$\displaystyle\coloneqq\int_{\widehat{\mathcal{O}}_{X,x}({{\mathcal{O}}_{L}})}|t_{1,x}|_{{L}}^{e_{1}}\dots|t_{r,x}|_{{L}}^{e_{r}}d\mu_{X_{\SS}}$
$\displaystyle=\sum_{\ell_{1}|k_{1},\dots,\ell_{r}|k_{r}}\prod_{i=1}^{r}{\mathcal{I}}(\gcd(k_{i},m)=\ell_{i})\left(\prod_{i=1}^{r}\delta_{e_{i}}^{|M_{i}/\sigma_{K}^{\ell_{i}}|}(m;q^{k_{i}/\ell_{i}})\right),$
where $m$ is the degree of the residue field extension of $L/K$ and
$\delta_{k}(m;q)$ is as in Definition 2.10.
We make the preliminary observation that the $\sigma_{L}$-action preserves the
subsets $M_{1},\dots,M_{r}$ since $x\in\bigcap_{i=1}^{r}D_{M_{i}}$.
###### Proof.
Recall Definition 3.2, i.e., we have minimal unramified extensions
${{\mathcal{O}}_{K}}\subset\mathcal{O}_{K_{i}}$ of degree $k_{i}$ over which
the $D_{i}=\bigcup_{j\in J_{i}}D_{ij}$ decompose into smooth components for
$J_{i}=\\{1,\dots,k_{i}\\}$. We let ${L}_{i}$ be the compositum of ${L}$ with
${K_{i}}$, so that ${L}\subset{L}_{i}$ has degree $k_{i}/\ell_{i}$ where
$\ell_{i}=\gcd(m,k_{i})$, and take ${L}^{\prime}$ to be the compositum of all
the ${L}_{i}$. As before, we
define local parameters $t_{ij}$ corresponding to the $D_{ij}$ with
$\sigma_{L}(t_{ij})=t_{i\sigma_{L}(j)}$ so that in particular, we have
$t_{i}\coloneqq t_{i,x}=\prod_{j\in J_{i}}t_{ij}$.
Consider the following map of ${{\mathcal{O}}_{L}}$-modules with a
$\sigma_{L}$-action
$\mathfrak{m}_{X,x}\to\mathfrak{m}_{X_{\mathcal{O}_{L^{\prime}}},x}.$
The preimage of the span of the $t_{ij}$ is an ${{\mathcal{O}}_{L}}$-submodule
${\mathcal{L}}\subset\mathfrak{m}_{X,x}$ with the property that its generic
rank is equal to its rank modulo the uniformizer $\pi_{L}$ precisely because
the $t_{ij}$ are a regular system of parameters and
${{\mathcal{O}}_{L}}\subset\mathcal{O}_{L^{\prime}}$ is a faithfully flat
extension. Therefore, we can extend ${\mathcal{L}}$ to a spanning lattice
${\mathcal{L}}^{\prime}$ by adding in generators $s_{1},\dots,s_{v}$.
Consider now the Frobenius-equivariant isomorphism of free
${{\mathcal{O}}_{L}}$-modules
$\widehat{\mathcal{O}}_{X_{\mathcal{O}_{L^{\prime}}},x}(\mathcal{O}_{L^{\prime}})\xrightarrow{\sim}\prod_{i=1}^{r}\prod_{j\in
J_{i}}\mathfrak{m}_{{L}^{\prime}}\times(\mathfrak{m}_{{L}^{\prime}})^{v};P\to(t_{11}(P),t_{12}(P),\dots,t_{rk_{r}}(P),s_{1}(P),\dots,s_{v}(P)),$
(3.1)
where the Frobenius $\sigma_{L}$ acts on the right by
$(x_{11},x_{12},\dots,x_{rk_{r}},y_{1},\dots,y_{v})\to(\sigma_{L}(x_{1\sigma_{L}^{-1}(1)}),\dots,\sigma_{L}(x_{r\sigma_{L}^{-1}(k_{r})}),\sigma_{L}(y_{1}),\dots,\sigma_{L}(y_{v})).$
On taking $\sigma_{L}$ invariants on both sides, we obtain
$\widehat{\mathcal{O}}_{X,x}({{\mathcal{O}}_{L}})\cong\prod_{i=1}^{r}\prod_{\lambda\in
J_{i}/\sigma_{L}}\mathfrak{m}_{{L}_{i}}\times(\mathfrak{m}_{{L}})^{v}.$
This isomorphism preserves the additive structure and is therefore measure
preserving since the induced measure is the Haar measure with the correct
normalization. All together, our integral thus reduces to
$\displaystyle\int_{\widehat{{\mathcal{O}}}_{X,x}({{\mathcal{O}}_{L}})}|t_{1,x}|_{{L}}^{e_{1}}\dots|t_{r,x}|_{{L}}^{e_{r}}d\mu_{X_{\SS}}$
$\displaystyle=\int_{{\widehat{{\mathcal{O}}}}_{X,x}({{\mathcal{O}}_{L}})}\prod_{i=1}^{r}\prod_{j=1}^{k_{i}}|t_{ij}|_{{L}}^{e_{i}}d\mu_{X_{\SS}}$
$\displaystyle=\prod_{i=1}^{r}\prod_{\lambda\in
M_{i}/\sigma_{L}}\int_{\mathfrak{m}_{{L}_{i}}}\left(\prod_{j\in\lambda}|t_{ij}|_{{L}_{i}}^{e_{i}}\right)^{\ell_{i}/k_{i}}d\mu_{\mathfrak{m}_{{L}_{i}}},$
where in the second equality, we use the above identification of our region of
integration and the fact that $|t_{ij}|=1$ over our region of integration
unless $j\in M_{i}$. The functions $|t_{ij}|$ depend only on the $\sigma_{L}$
orbit $\lambda$ of $j$ since $|\sigma_{L}t_{ij}(P)|=|t_{ij}(P)|$ and we denote
this function by $|t_{i\lambda}|$. Moreover, the size of the orbit $\lambda$
is exactly $k_{i}/\ell_{i}$ and the orbits of $\sigma_{L}$ are exactly the
same as the orbits of $\sigma_{K}^{\ell_{i}}$ on the set $J_{i}$.
Therefore, the inner integrand simplifies to $|t_{i\lambda}|^{e_{i}}$ and the
above integral is equal to
$\prod_{i=1}^{r}\prod_{\lambda\in
M_{i}/\sigma_{K}^{\ell_{i}}}\int_{\mathfrak{m}_{{L}_{i}}}|t_{i\lambda}|_{{L}_{i}}^{e_{i}}d\mu_{\mathfrak{m}_{{L}_{i}}}=\prod_{i=1}^{r}\delta_{e_{i}}^{|M_{i}/\sigma_{K}^{\ell_{i}}|}(m;q^{k_{i}/\ell_{i}})$
since the residue field of $\mathcal{O}_{L_{i}}$ has size
$\left(q^{k_{i}/\ell_{i}}\right)^{m}$.
Since we assumed that $\gcd(k_{i},m)=\ell_{i}$, the proof is completed by
summing, over all possibilities for the $\ell_{i}$, the corresponding
indicator functions weighted by the above product.
∎
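The orbit bookkeeping in the proof above is elementary: $\sigma_{K}$ acts on $J_{i}=\\{1,\dots,k_{i}\\}$ as a $k_{i}$-cycle, $\sigma_{L}$ acts as $\sigma_{K}^{m}$, and its orbits coincide with those of $\sigma_{K}^{\ell_{i}}$ for $\ell_{i}=\gcd(m,k_{i})$, each of size $k_{i}/\ell_{i}$. A quick sketch, modeling the cycle as shift-by-$m$ on $\mathbb{Z}/k$ (purely illustrative):

```python
from math import gcd

def orbits(k, step):
    # orbits of the shift j -> j + step (mod k) on {0, ..., k-1},
    # modeling sigma_K^step acting on a set of k divisors as a k-cycle
    seen, out = set(), set()
    for j in range(k):
        if j in seen:
            continue
        orb, cur = [], j
        while cur not in orb:
            orb.append(cur)
            seen.add(cur)
            cur = (cur + step) % k
        out.add(frozenset(orb))
    return out

for k in range(1, 13):
    for m in range(1, 25):
        l = gcd(m, k)
        assert orbits(k, m) == orbits(k, l)                 # same orbits as sigma_K^l
        assert all(len(o) == k // l for o in orbits(k, m))  # each orbit has size k/l
print("ok")
```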
###### Theorem 3.6.
Let $X,Y$ be _smooth proper_ varieties over
$\operatorname{Spec}{{\mathcal{O}}_{K}}$ with $f:X\to Y$ a generically finite
map with $(\mathrm{id},X,Z_{f})$ satisfying Hypothesis 3.3. The function
${L}\to\eta_{f}({L})=\int_{X({{\mathcal{O}}_{L}})}f_{\SS}^{*}(d\mu_{X_{\SS}})$
is a palindromic form of weight $\dim Y$. In fact, if $m$ is the degree of the
residue field extension of $L/K$ and we pick a resolution of singularities
$\pi:\widetilde{X}\to X$ as in Hypothesis 3.3 with
$\pi^{-1}(Z_{f})$ playing the role of $D$ in Lemma 3.5 with the same notation
as before, we have the identity
$\eta_{f}({L})=\sum_{\ell_{1}|k_{1},\dots,\ell_{r}|k_{r}}\prod_{i=1}^{r}{\mathcal{I}}(\gcd(m,k_{i})=\ell_{i})\left(\sum_{{\mathcal{L}}}\prod_{i=1}^{r}\eta_{e_{i}}^{|\Lambda_{i}|}(m;q^{k_{i}/\ell_{i}})\left|D_{{\mathcal{L}}}(\mathbb{F}_{q^{m}})\right|\right)$
where the $e_{i}$ are positive integers and ${\mathcal{L}}$ ranges over tuples
of the form $(\Lambda_{1},\dots,\Lambda_{r})$ with $\Lambda_{i}\subset
J_{i}/\sigma_{K}^{\ell_{i}}$ and
$D_{\mathcal{L}}\coloneqq\bigcap_{i=1}^{r}\bigcap_{\lambda\in\Lambda_{i}}\bigcap_{j\in\lambda}D_{ij}$.
###### Proof.
By Hypothesis 3.3 and the change of variables formula for birational maps, we
can replace $X$ by a resolution of singularities $\widetilde{X}$ and hence
suppose that the exceptional locus $Z_{f}\subset X$ is a normal crossings
divisor relative to $\operatorname{Spec}{{\mathcal{O}}_{K}}$. Let $x\in X$ be
a closed point. Since $X/\operatorname{Spec}{{\mathcal{O}}_{K}}$ is smooth,
$\widehat{\mathcal{O}}_{X,x}\cong{{\mathcal{O}}_{K}}[\\![x_{1},\dots,x_{n}]\\!]$
for $n=\dim X$ which is a unique factorization domain by the Weierstrass
preparation theorem.
Note that $\operatorname{Jac}(f)$ (as in Definition 2.21) is a local function
that precisely cuts out $Z_{f}$ geometrically, well defined up to a function
that is invertible when evaluated on ${\mathcal{O}}_{\overline{K}}$ points. If
$Z_{f}=\bigcup_{i=1}^{r}D_{i}$ as in the statement of Lemma 3.5, we have
$\operatorname{Jac}(f)=u\prod_{i=1}^{r}t_{i,x}^{e_{i}}\in\widehat{\mathcal{O}}_{X,x}$
for some $e_{i}\geq 0$ and some $u$ coprime to the $t_{i,x}$. In fact, the
$e_{i}$ are exactly given by the order of vanishing of $\operatorname{Jac}(f)$
at the $D_{i}$ (computed on any local chart). This is well defined precisely
because $\operatorname{Jac}(f)$ is well defined up to a unit on that chart.
Due to this, we see that $u=\operatorname{Jac}(f)\prod_{i}t_{i,x}^{-e_{i}}$
does not vanish away from $Z_{f}$ and also does not vanish at the generic
point of each $D_{i}$. Therefore, it is non-vanishing on an open with
complement of codimension at least $2$ and hence a unit in
$\widehat{{\mathcal{O}}}_{X,x}$.
Finally, recall the notation from before that $D_{i}=\bigcup_{j\in
J_{i}}D_{ij}$ over an unramified extension
$\mathcal{O}_{L_{i}}/{{\mathcal{O}}_{K}}$ with the $D_{ij}$ smooth (with
$\mathcal{O}_{L_{i}}$ the minimal such extension). Given
${\mathcal{M}}=(M_{1},\dots,M_{r})$ with the $M_{i}\subset J_{i}$, we define
by $D_{{\mathcal{M}}}^{\circ}$ the locally closed stratum of
$D_{{\mathcal{M}}}=\bigcap_{i=1}^{r}\bigcap_{j\in M_{i}}D_{ij}$ not contained
in $D_{{\mathcal{M}}^{\prime}}$ for any other ${\mathcal{M}}^{\prime}$
corresponding to a smaller stratum. We carry over other notation from Lemma 3.5
with $Z_{f}$ playing the role of $D$.
In order to compute the integral, we pick a set of points
${\mathcal{S}}\subset X({{\mathcal{O}}_{L}})$ in bijection with the points of
$X(\mathbb{F}_{q^{m}})$ under the reduction map. This decomposes the $p$-adic
analytic set $X({{\mathcal{O}}_{L}})$ into discs around each point
$s\in{\mathcal{S}}$, each isomorphic to
$\widehat{{\mathcal{O}}}_{X,s}({{\mathcal{O}}_{L}})$, giving the equality
$\displaystyle\int_{X({{\mathcal{O}}_{L}})}f_{\SS}^{*}d\mu_{X_{\SS}}$
$\displaystyle=\sum_{{\mathcal{M}}}\sum_{s\in{\mathcal{S}}\cap
D_{{\mathcal{M}}}^{\circ}}\int_{\widehat{\mathcal{O}}_{X,s}({{\mathcal{O}}_{L}})}|t_{1}^{e_{1}}\dots
t_{r}^{e_{r}}|_{{L}}d\mu_{X_{\SS}}.$
By Lemma 3.5, the above is equal to
$\sum_{\ell_{1}|k_{1},\dots,\ell_{r}|k_{r}}\prod_{i=1}^{r}{\mathcal{I}}(\gcd(m,k_{i})=\ell_{i})\left(\sum_{{\mathcal{M}}}\sum_{s\in{\mathcal{S}}\cap
D_{{\mathcal{M}}}^{\circ}}\prod_{i=1}^{r}(\eta_{e_{i}}(m;q^{k_{i}/\ell_{i}})+1)^{|M_{i}/\sigma_{K}^{\ell_{i}}|}\right),$
where we recall that $\delta_{e}(m;q)=\eta_{e}(m;q)+1$. Since $s\in
D_{\mathcal{M}}({{\mathcal{O}}_{L}})$, the $M_{i}$ are necessarily
$\sigma_{L}$-stable, and hence $\sigma_{K}^{\ell_{i}}$-stable. Next, we pass
to the
orbits by defining ${\mathcal{L}}=(\Lambda_{1},\dots,\Lambda_{r})$ with
$\Lambda_{i}=M_{i}/\sigma_{K}^{\ell_{i}}\subset J_{i}/\sigma_{K}^{\ell_{i}}$.
Summing over such ${\mathcal{L}}$ and noting that $|{\mathcal{S}}\cap
D_{\mathcal{L}}^{\circ}|=|D_{\mathcal{L}}^{\circ}(\mathbb{F}_{q^{m}})|$, we
obtain
$\sum_{\ell_{1}|k_{1},\dots,\ell_{r}|k_{r}}\prod_{i=1}^{r}{\mathcal{I}}(\gcd(m,k_{i})=\ell_{i})\left(\sum_{{\mathcal{L}}\text{
as
above}}\left|D^{\circ}_{{\mathcal{L}}}(\mathbb{F}_{q^{m}})\right|\prod_{i=1}^{r}\prod_{\lambda\in\Lambda_{i}}\left(\eta_{e_{i}}(m;q^{k_{i}/\ell_{i}})+1\right)\right).$
We now expand the inner product as
$\prod_{\lambda\in\Lambda_{i}}\left(\eta_{e_{i}}(m;q^{k_{i}/\ell_{i}})+1\right)=\sum_{\Lambda_{i}^{\prime}\subset\Lambda_{i}}\eta_{e_{i}}^{|\Lambda_{i}^{\prime}|}(m;q^{k_{i}/\ell_{i}}).$
We say that
${\mathcal{L}}^{\prime}=(\Lambda_{1}^{\prime},\dots,\Lambda_{r}^{\prime})\leq{\mathcal{L}}$
when $\Lambda_{i}^{\prime}\subset\Lambda_{i}$ for all $i$. Then, the above
expansion lets us rewrite the integral as
$\sum_{\ell_{1}|k_{1},\dots,\ell_{r}|k_{r}}\prod_{i=1}^{r}{\mathcal{I}}(\gcd(m,k_{i})=\ell_{i})\left(\sum_{{\mathcal{L}}\text{
as
above}}\sum_{{\mathcal{L}}^{\prime}\leq{\mathcal{L}}}\prod_{i=1}^{r}\eta_{e_{i}}^{|\Lambda_{i}^{\prime}|}(m;q^{k_{i}/\ell_{i}})|D^{\circ}_{{\mathcal{L}}}(\mathbb{F}_{q^{m}})|\right).$
Switching the order of summation, we have
$\sum_{\ell_{1}|k_{1},\dots,\ell_{r}|k_{r}}\prod_{i=1}^{r}{\mathcal{I}}(\gcd(m,k_{i})=\ell_{i})\left(\sum_{{\mathcal{L}}^{\prime}}\left(\prod_{i=1}^{r}\left(\eta_{e_{i}}^{|\Lambda_{i}^{\prime}|}(m;q^{k_{i}/\ell_{i}})\right)\sum_{{\mathcal{L}}^{\prime}\leq{\mathcal{L}}}|D^{\circ}_{{\mathcal{L}}}(\mathbb{F}_{q^{m}})|\right)\right).$
The claim now follows from the observation that
$\sum_{{\mathcal{L}}\geq{\mathcal{L}}^{\prime}}|D^{\circ}_{{\mathcal{L}}}(\mathbb{F}_{q^{m}})|=|D_{{\mathcal{L}}^{\prime}}(\mathbb{F}_{q^{m}})|,$
which is thus a palindromic form of weight $\dim D_{{\mathcal{L}}^{\prime}}$,
while the $\eta_{e}(m;q^{d})$ factors are palindromic forms of weight $d$.
Together (relabeling ${\mathcal{L}}^{\prime}$ as ${\mathcal{L}}$),
$\prod_{i=1}^{r}\eta_{e_{i}}^{|\Lambda_{i}|}(m;q^{k_{i}/\ell_{i}})|D_{{\mathcal{L}}}(\mathbb{F}_{q^{m}})|$
is a palindromic form of weight
$\sum_{i=1}^{r}|\Lambda_{i}|\frac{k_{i}}{\ell_{i}}+\dim D_{\mathcal{L}}=\dim
X$
since the first term above is precisely the number of smooth components of
$Z_{f}$ cutting out $D_{\mathcal{L}}$ because each $\lambda\in\Lambda_{i}$
corresponds to an orbit of divisors of size $k_{i}/\ell_{i}$.
The indicator functions ${\mathcal{I}}(\ell|\gcd(m,k))$ (for $\ell|k$) can be
written as a sum of roots of unity $\zeta_{k}$ of order $k$ as
${\mathcal{I}}(\ell|\gcd(m,k))=\frac{1}{k}\sum_{a=1}^{k}\zeta_{k}^{amk/\ell}.$
Therefore, they are palindromic forms of weight $0$. Since
${\mathcal{I}}(\gcd(m,k)=\ell)={\mathcal{I}}(\ell|\gcd(m,k))-\left(1-\prod_{1<d,d\ell|k}\left(1-{\mathcal{I}}(d\ell|\gcd(m,k))\right)\right),$
(3.2)
${\mathcal{I}}(\gcd(m,k)=\ell)$ is also a palindromic form of weight $0$. All
together, this shows that our integral is a palindromic form of weight $\dim
Y=\dim X$. ∎
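Both indicator identities used at the end of the proof can be checked numerically: the root-of-unity expansion of ${\mathcal{I}}(\ell|\gcd(m,k))$ and Equation 3.2 for ${\mathcal{I}}(\gcd(m,k)=\ell)$. A brute-force sketch (illustrative only):

```python
from cmath import exp, pi
from math import gcd

def ind_div(l, m, k):
    # I(l | gcd(m,k)) for l | k, via (1/k) * sum_{a=1}^{k} zeta_k^{a m k / l}
    z = sum(exp(2j * pi * a * m * (k // l) / k) for a in range(1, k + 1)) / k
    return round(z.real)

def ind_eq(l, m, k):
    # right-hand side of Equation 3.2 for I(gcd(m,k) = l)
    prod = 1
    for d in range(2, k // l + 1):
        if k % (d * l) == 0:
            prod *= 1 - ind_div(d * l, m, k)
    return ind_div(l, m, k) - (1 - prod)

for k in range(1, 13):
    for l in (d for d in range(1, k + 1) if k % d == 0):
        for m in range(1, 40):
            assert ind_div(l, m, k) == int(gcd(m, k) % l == 0)
            assert ind_eq(l, m, k) == int(gcd(m, k) == l)
print("ok")
```

The first assertion is the statement that $\zeta_{k}^{amk/\ell}=e^{2\pi iam/\ell}$ sums to $k$ over $a=1,\dots,k$ exactly when $\ell|m$, i.e., when $\ell|\gcd(m,k)$; the second checks the inclusion-exclusion in Equation 3.2.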
Despite appearances, the function $\eta_{f}$ does not depend on the resolution
$\pi:\widetilde{X}\to X$ by the birational change of variables formula. We
record the following simplification when all components of the exceptional
divisor are geometrically irreducible over
$\operatorname{Spec}{{\mathcal{O}}_{K}}$.
###### Corollary 3.7.
Let $m$ be the degree of the residue field extension of $L/K$. Let $f:X\to Y$
be as in the previous theorem and suppose that there exists a resolution of
singularities $\pi:\widetilde{X}\to X$ over ${{\mathcal{O}}_{K}}$ with
$\pi^{-1}(Z_{f})=\bigcup_{i=1}^{r}D_{i}$ a sncd over ${{\mathcal{O}}_{K}}$.
Then
$\eta_{f}({L})=\sum_{J\subset\\{1,\dots,r\\}}\left|D_{J}(\mathbb{F}_{q^{m}})\right|\prod_{j\in
J}\eta_{e_{j}}(m;q)$
for some positive integers $e_{j}$.
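The strata bookkeeping in the proof of Theorem 3.6 rests on the fact that a closed stratum is the disjoint union of the open strata above it, so that summing open-stratum point counts over all larger index sets recovers the closed count. A toy check, with divisors modeled as subsets of a finite point set (the arrangement below is hypothetical):

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

# hypothetical arrangement: the ambient "variety" is ten points, with three "divisors"
points = set(range(10))
D = {1: {0, 1, 2, 3}, 2: {2, 3, 4, 5}, 3: {3, 5, 6}}

def D_closed(J):
    # D_J = intersection of the D_i for i in J (D_emptyset = ambient space)
    out = set(points)
    for i in J:
        out &= D[i]
    return out

def D_open(J):
    # open stratum: points lying on exactly the divisors indexed by J
    J = set(J)
    return {x for x in D_closed(J) if all(x not in D[i] for i in set(D) - J)}

# a closed stratum is the disjoint union of the open strata above it
for J in subsets(D):
    above = sum(len(D_open(Jp)) for Jp in subsets(D) if set(J) <= set(Jp))
    assert above == len(D_closed(J))
print("ok")
```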
We prove a modification of the above theorem taking into consideration
unramified twists, as in Definition 2.23.
Let $\tau=(H,gH)$ be an admissible pair, i.e., $gH=Hg$. As above, we take
$f:X\to Y$ to be a generically finite, Galois map with Galois group $G$ over
$\operatorname{Spec}{{\mathcal{O}}_{K}}$. Define $f_{H}:X\to X/H$ to be the
natural quotient map. Since $gH=Hg$, $g$ descends to an automorphism of $X/H$.
We suppose that $(g,X/H,f_{H}(Z_{f}))$ satisfies Hypothesis 3.3 and define
$\widetilde{X/H}$ to be a $g$-equivariant resolution
$\pi_{H}:\widetilde{X/H}\to X/H.$
Since all the maps involved are $g$-equivariant and twisting by $g$ is
functorial, we obtain maps
$\pi_{\tau}:{{}^{g}\left(\widetilde{X/H}_{\SS}\right)}\to{{}^{g}{(X/H)_{\SS}}}$
where we base change to $\SS$ _first_ and then twist by $g$. We denote
${{}^{g}\left(\widetilde{X/H}_{\SS}\right)}$ by ${{}^{\tau}X_{\SS}}$ and the
resulting map by ${{}^{\tau}f_{\SS}}:{{}^{\tau}X_{\SS}}\to Y$. By
construction, ${{}^{\tau}f_{\SS}}$ is an étale map over the unramified open
locus $U_{f}$.
###### Theorem 3.8.
Let $f:X\to Y$ be a generically finite, Galois map with group $G$ between
_smooth, proper_ varieties with $(g,X/H,f_{H}(Z_{f}))$ and
$(g^{-1},X/H,f_{H}(Z_{f}))$ satisfying Hypothesis 3.3 as above and define
${L}\to\eta_{\tau}({L})\coloneqq\int_{{{}^{\tau^{-1}}X_{\SS}}({{\mathcal{O}}_{L}})}\operatorname{Jac}({{}^{\tau^{-1}}f_{{\mathcal{O}}_{L}}})d\mu_{{{}^{\tau^{-1}}X_{\SS}}}.$
Then, $\eta_{\tau}({L}),\eta_{\tau^{-1}}({L})\in\mathbb{Q}$ and
$\eta_{\tau}({L})+\eta_{\tau^{-1}}({L})$ is a palindromic form of weight $\dim
Y$.
###### Proof.
We maintain the notation from the beginning of the proof of Theorem 3.6 so
that the exceptional locus for ${{}^{\tau}f}_{\SS}:{{}^{\tau}X}_{\SS}\to
Y_{\SS}$ is $\bigcup_{i=1}^{r}{{}^{g}D}_{i,\SS}$ with the $D_{i}$ defined over
$\operatorname{Spec}{{\mathcal{O}}_{K}}$ and irreducible, $D_{i}=\bigcup_{j\in
J_{i}}D_{ij}$ and so on. By Theorem 3.6,
$\eta_{\tau^{-1}}({L})=\sum_{\ell_{1}|k_{1},\dots,\ell_{r}|k_{r}}\prod_{i=1}^{r}{\mathcal{I}}(\gcd(m,k_{i})=\ell_{i})\left(\sum_{{\mathcal{L}}}\prod_{i=1}^{r}\eta_{e_{i}}^{|\Lambda_{i}|}(m;q^{k_{i}/\ell_{i}})\left(|{{}^{g}D}_{{\mathcal{L}},\SS}(\mathbb{F}_{q^{m}})|\right)\right)\in\mathbb{Q},$
while $\eta_{\tau}({L})+\eta_{\tau^{-1}}({L})$ is equal to
$\sum_{\ell_{1}|k_{1},\dots,\ell_{r}|k_{r}}\prod_{i=1}^{r}{\mathcal{I}}(\gcd(m,k_{i})=\ell_{i})\left(\sum_{{\mathcal{L}}}\prod_{i=1}^{r}\eta_{e_{i}}^{|\Lambda_{i}|}(m;q^{k_{i}/\ell_{i}})\left(|{{}^{g}D}_{{\mathcal{L}},\SS}(\mathbb{F}_{q^{m}})|+|{{}^{g^{-1}}D}_{{\mathcal{L}},\SS}(\mathbb{F}_{q^{m}})|\right)\right).$
It is possible (and indeed likely!) that even though $D_{{\mathcal{L}},\SS}$
has $\mathbb{F}_{q^{m}}$ points, the twist ${{}^{g}D}_{{\mathcal{L}},\SS}$ is
not defined over ${{\mathcal{O}}_{L}}$ and consequently has no
$\mathbb{F}_{q^{m}}$ points. We emphasize that the twists
${{}^{g}D}_{{\mathcal{L}},\SS}$ are obtained after base changing _first_.
Therefore, it is not a priori obvious that the above expression is a
palindromic form of weight $\dim Y$. Nevertheless, we will prove that it is,
using the Lefschetz trace formula.
To that end, fix an auxiliary prime $\ell$ and a smooth, projective variety
$D/\mathbb{F}_{q}$. As is well known, the action of $\sigma_{L}$ on
$H_{\text{\'{e}t}}^{*}({{}^{g^{-1}}}D\otimes_{\mathbb{F}_{q}}\overline{\mathbb{F}}_{q},\mathbb{Q}_{\ell})$
is equivalent to the action of $g\sigma_{L}$ on
$H_{\text{\'{e}t}}^{*}(D\otimes_{\mathbb{F}_{q}}\overline{\mathbb{F}}_{q},\mathbb{Q}_{\ell})$.
Thus, in conjunction with the Grothendieck–Lefschetz trace formula, we obtain
$|{{}^{g^{-1}}D}_{{\mathcal{L}},\SS}(\mathbb{F}_{q^{m}})|=\sum_{i=0}^{2\dim
D}(-1)^{i}\operatorname{tr}\left(g\sigma_{q}^{m}|H^{i}_{\text{\'{e}t}}(D_{{\mathcal{L}}}\times_{\mathbb{F}_{q}}\overline{\mathbb{F}}_{q},\mathbb{Q}_{\ell})\right),$
and consequently,
$\displaystyle|{{}^{g}D}_{{\mathcal{L}},\SS}(\mathbb{F}_{q^{m}})|+|{{}^{g^{-1}}D}_{{\mathcal{L}},\SS}(\mathbb{F}_{q^{m}})|$
$\displaystyle=\sum_{i=0}^{2\dim
D}(-1)^{i}\operatorname{tr}\left((g+g^{-1})\sigma_{q}^{m}|H^{i}_{\text{\'{e}t}}(D_{{\mathcal{L}}}\times_{\mathbb{F}_{q}}\overline{\mathbb{F}}_{q},\mathbb{Q}_{\ell})\right)$
$\displaystyle=\rho_{D_{\mathcal{L}},g}(m).$
Thus, our integral above reduces to
$\eta_{\tau}({L})+\eta_{\tau^{-1}}({L})=\sum_{\ell_{1}|k_{1},\dots,\ell_{r}|k_{r}}\prod_{i=1}^{r}{\mathcal{I}}(\gcd(m,k_{i})=\ell_{i})\left(\sum_{{\mathcal{L}}}\prod_{i=1}^{r}\eta_{e_{i}}^{|\Lambda_{i}|}(m;q^{k_{i}/\ell_{i}})\rho_{D_{\mathcal{L}},g}(m)\right)$
where $\rho_{D_{\mathcal{L}},g}$ is as in Lemma 2.8. By that lemma,
$\rho_{D_{\mathcal{L}},g}$ is a palindromic form of weight $\dim
D_{\mathcal{L}}$ and the proof is completed exactly as in Theorem 3.6. ∎
###### Theorem 3.9.
Let $f:X\to Y$ be a generically finite, Galois map with Galois group $G$. For
an admissible pair $\tau=(H,gH)$, suppose that the tuple
$(g^{-1},X/H,f_{H}(Z_{f}))$ satisfies Hypothesis 3.3. Then, we have
${\eta_{\tau}({L})}=\sum_{\tau^{\prime}\leq\tau}\rho_{f,\tau^{\prime}}({L})\gamma([\tau^{\prime},\tau]),$
(3.3)
where $\rho_{f,\tau}$ is as in Definition 2.18 and
$\gamma([\tau^{\prime},\tau])$ are as in §2.4.
###### Proof.
Suppose $\tau=(H,gH)$ and for ease of notation, we let $k=g^{-1}$ with
$\nu=\tau^{-1}$. Let
$\widetilde{U}={{}^{k}f_{\SS}}^{-1}(U_{f}),\widetilde{U}_{\nu}={{}^{k}\left(f_{\SS}^{-1}(U_{f})/H\right)}$
with quotient map $f_{\nu}:\widetilde{U}_{\nu}\to U_{f}$.
Suppose $P\in U_{f}({L})$ with geometric pre-images $Q_{1},\dots,Q_{n}$ in
$X(\overline{{L}})={{}^{k}X}(\overline{{L}})$ (where the two sets are
identified using the canonical identification with the only difference being
the Frobenius action). For such a $Q_{i}$, we denote the corresponding inertia
group and Frobenius coset in $G$ by $I_{i}$ and $\sigma_{i}$, respectively. We
similarly define the group schemes ${{}^{k}I}_{i}$ and ${{}^{k}\sigma_{i}}$ in
${}^{k}G$. In fact, ${{}^{k}I}_{i}=I_{i}$ under the canonical identification
of ${{}^{k}G}({{\mathcal{O}}_{L}}^{\mathrm{ur}})\cong
G({{\mathcal{O}}_{L}}^{\mathrm{ur}})=G$ over the unramified closure of
${{\mathcal{O}}_{L}}$ since such a base change does not affect inertia groups.
Finally, we let $\tau_{Q_{i}}=(I_{i},\sigma_{i})$ be the corresponding
admissible pair.
Let $\overline{Q}_{i}$ be the image of $Q_{i}$ in $\widetilde{U}_{\nu}$. It
lands in $\widetilde{U}_{\nu}({L})$ precisely when both the inertia group and
Frobenius fix it. Therefore
$\displaystyle\overline{Q}_{i}\in\widetilde{U}_{\nu}({L})$ $\displaystyle\iff
I_{i}\overline{Q}_{i}=\overline{Q}_{i}\text{ and
}{{}^{k}\sigma_{i}}(\overline{Q}_{i})=\overline{Q}_{i}$ $\displaystyle\iff
I_{i}\subset H\text{ and }k\sigma_{i}\in H$
where the final equivalence follows from the faithfulness of the action of $G$
on $Q_{1},\dots,Q_{n}$. Altogether, we see that the image of $Q_{i}$ in
${{}^{\nu}X_{\SS}}$ is ${{\mathcal{O}}_{L}}$-rational precisely when
$\tau_{Q_{i}}\leq\tau$, i.e., $I_{i}\subset H$ and $\sigma_{i}\in gH$. In
other words, we have shown that $f_{\nu}$ is a covering space map with image
$f_{\nu}(\widetilde{U}_{\nu}({L}))=\bigsqcup_{\begin{subarray}{c}\tau^{\prime}\leq\tau\\\
\text{ up to conjugacy }\end{subarray}}U_{f,\tau^{\prime}}({L}).$
For every such $P$ with some $i$ such that $\tau_{Q_{i}}\leq\tau$, the $j$
such that $\tau_{Q_{j}}\leq\tau$ correspond to $\lambda\in G$ such that
$\lambda({{}^{k}I}_{i},{{}^{k}\sigma_{i}})\lambda^{-1}\leq\tau$. The number of
distinct images of these $Q_{j}$ in ${{}^{\nu}X_{\SS}}$ is precisely equal to
$\alpha([({{}^{k}I}_{i},{{}^{k}\sigma_{i}}),\tau])$ since the ones in the same
$H$ orbit get identified (recall the definitions of $\alpha,\beta$ in
Definition 2.12).
Therefore, the degree of $f_{\nu}$ over $U_{f,\tau^{\prime}}({L})$ is
precisely $\alpha(\tau^{\prime},\tau)$. If we pick a compact open $V\subset
U_{f,\tau^{\prime}}({L})$ and a differential form $\omega$ on it, we have
$\alpha(\tau^{\prime},\tau)\int_{V}|\omega|=\int_{f_{\nu}^{-1}(V)}|{f_{\nu}}^{*}\omega|$
by Theorem 2.3. We pick $\omega$ to be a gauge form with respect to the
measure $\mu_{Y}$ and sum over a disjoint union cover $V_{i}$ of
$\bigsqcup_{\tau^{\prime}\leq\tau}U_{f,\tau^{\prime}}({L})$ to obtain the
required identity
$\sum_{\tau^{\prime}\leq\tau}\rho_{f,\tau^{\prime}}({L})\frac{\alpha(\tau^{\prime},\tau)}{\beta(\tau^{\prime},\tau)}=\int_{\widetilde{U}_{\nu}({L})}|\operatorname{Jac}(f_{\nu})|d\mu_{X_{\SS}}.$
We divide by $\beta(\tau^{\prime},\tau)$ in the first term because that is
precisely the amount we over-count by when we sum over all types
$\tau^{\prime}\leq\tau$ instead of up to conjugacy. ∎
###### Theorem 3.10.
Let $f:X\to Y$ be a generically finite, Galois map with Galois group $G$
between _smooth, proper_ varieties and $\tau$ an admissible pair (with respect
to $G$). For every admissible pair $\tau^{\prime}\leq\tau$ and
$\tau^{\prime}\leq\tau^{-1}$, suppose that the tuples
$(g_{\tau^{\prime}},X/H_{\tau^{\prime}},f_{H_{\tau^{\prime}}}(Z_{f}))$ satisfy
Hypothesis 3.3. Then, the densities $\rho_{f,\tau}({L})\in\mathbb{Q}$ and the
function
${L}\to\rho_{f,\tau}({L})+\rho_{f,\tau^{-1}}({L})$
is a palindromic form of weight $\dim Y$, where $\rho_{f,\tau}$ is as in
Definition 2.18. We note that the above function is the measure of the set
$U_{f,\tau}({L})\cup U_{f,\tau^{-1}}({L})$ since these are disjoint subsets of
$Y({L})$.
###### Proof.
We fix ${L}$ and elements $\widetilde{\eta},\widetilde{\rho}$ in the incidence
algebra for the poset of types satisfying (for all $g\in G$ and types
$\tau^{\prime}\leq\tau$)
$\widetilde{\rho}([(e,g),\tau])=\rho_{f,\tau}({L}),$
$\widetilde{\eta}([\tau^{\prime},\tau])=\widetilde{\rho}\ast\gamma([\tau^{\prime},\tau]).$
By Equation 3.3, we see that
$\widetilde{\eta}([(e,g),\tau])=\eta_{\tau}({L})$. Since
$\gamma([\tau,\tau])>0$, $\gamma$ is an invertible element of the incidence
algebra so that
$\widetilde{\rho}=\widetilde{\eta}\ast\gamma^{-1}$
and evaluating at $[(e,g),\tau]$ for some type $\tau\geq(e,g)$, we obtain
$\rho_{f,\tau}({L})=\sum_{\tau^{\prime}\leq\tau}{\eta_{\tau^{\prime}}({L})}\gamma^{-1}([\tau^{\prime},\tau]).$
This is a rational number since $\eta_{\tau^{\prime}}({L})\in\mathbb{Q}$
(Theorem 3.8) and $\gamma([\tau^{\prime},\tau])\in\mathbb{Q}$ (and hence also
its inverse in the incidence algebra). By definition, we have
$\alpha([\tau^{\prime},\tau])=\alpha([\tau^{\prime-1},\tau^{-1}])$ and
similarly for $\beta$ and hence also for $\gamma$ and $\gamma^{-1}$.
Therefore, if we sum the above identity with the corresponding one for
$\tau^{-1}$, we obtain
$\displaystyle\rho_{f,\tau}({L})+\rho_{f,\tau^{-1}}({L})$
$\displaystyle=\sum_{\tau^{\prime}\leq\tau}{\eta_{\tau^{\prime}}({L})}\gamma^{-1}([\tau^{\prime},\tau])+\sum_{\tau^{\prime-1}\leq\tau^{-1}}{\eta_{\tau^{\prime-1}}({L})}\gamma^{-1}([\tau^{\prime-1},\tau^{-1}])$
$\displaystyle=\sum_{\tau^{\prime}\leq\tau}\left(\eta_{\tau^{\prime}}({L})+\eta_{\tau^{\prime-1}}({L})\right){\gamma^{-1}([\tau^{\prime},\tau])}.$
By Lemma 2.8, we see that
$\eta_{\tau^{\prime}}({L})+\eta_{\tau^{\prime-1}}({L})$ is a palindromic form
of weight $\dim Y$ and since the poset of admissible pairs and $\gamma^{-1}$
depend only on the group $G$ and not on ${L}$, the above identity shows that
$\rho_{f,\tau}({L})+\rho_{f,\tau^{-1}}({L})$ is a palindromic form of weight
$\dim Y$. ∎
As an immediate corollary, we have that the measures in Definition 2.20 take
rational values and are palindromic forms of the correct weight.
###### Corollary 3.11.
Let $f:X\to Y$ be a generically finite, Galois map between _smooth, proper_
varieties. Suppose that, for every admissible pair $\tau^{\prime}$ with
$\tau^{\prime}\leq\tau$ or $\tau^{\prime}\leq\tau^{-1}$, the tuple
$(g_{\tau^{\prime}},X/H_{\tau^{\prime}},f_{H_{\tau^{\prime}}}(Z_{f}))$
satisfies Hypothesis 3.3.
Then the measures $\rho_{f,H_{\tau},H_{1,\tau}}({L})$ are palindromic forms of
weight $\dim Y$.
###### Proof.
For a fixed normal inclusion $H\subset H_{1}$ with the quotient cyclic, we
obtain an admissible pair $\tau_{\theta}=(H,\theta H)$ for any generator
$\theta\in H_{1}/H$. The corresponding $U_{f,\tau_{\theta}}({L})$ are disjoint
and moreover,
$U_{f,H_{\tau},H_{1,\tau}}({L})=\bigsqcup_{\theta}U_{f,\tau_{\theta}}({L})\implies\rho_{f,H_{\tau},H_{1,\tau}}({L})=\sum_{\theta\text{
a generator}}\rho_{f,\tau_{\theta}}({L}).$
Therefore, $2\rho_{f,H_{\tau},H_{1,\tau}}({L})=\sum_{\theta\text{ a
generator}}\left(\rho_{f,\tau_{\theta}}({L})+\rho_{f,\tau^{-1}_{\theta}}({L})\right)$
is a palindromic form of weight $\dim Y$ by the previous theorem and hence, so
is $\rho_{f,H_{\tau},H_{1,\tau}}({L})$. ∎
As the next theorem shows, the hypotheses on resolutions of singularities
above are automatically satisfied at almost all primes.
###### Theorem 3.12.
Let $M$ be a number field and $T=\operatorname{Spec}\mathcal{O}_{M}$. Let
$f:X\to Y$ over $T$ be a generically finite, Galois map with $Y_{M}$ a smooth
proper variety over $M$.
Then, for almost all primes $\mathfrak{p}$ and every admissible pair $\tau$,
the function
${L}\to\rho_{f,\tau}({L})+\rho_{f,\tau^{-1}}({L})$
is a palindromic form of weight $\dim Y$, as ${L}$ ranges over extensions of
the completion $\widehat{\mathcal{O}}_{M,\mathfrak{p}}$.
###### Proof.
We pick a characteristic zero $G$-equivariant resolution of varieties
$\tilde{X}\to X$ (see [1], [16] or [5]). As in Remark 3.4, we can spread the
resolution out to an open subset of $T$. The assumptions of Theorem 3.10 are
thus satisfied for all but finitely many primes $\mathfrak{p}\in T$ and the
theorem follows. ∎
We also note that as the size of the residue field tends to infinity, the
densities $\rho_{f,\tau}(L)$ simplify drastically.
###### Corollary 3.13.
We maintain the hypotheses of the above theorem. As
$N(\mathfrak{p})\to\infty$, we have the following limiting behaviour
$\lim_{N\mathfrak{p}\to\infty}\frac{\rho_{f,\tau}(\widehat{\mathcal{O}}_{M,\mathfrak{p}})}{(N\mathfrak{p})^{\dim
X}}=\begin{cases}0&\text{ if }|H_{\tau}|>1\\\
\frac{|[g_{\tau}]|}{|G|}&\text{ if }|H_{\tau}|=1\end{cases}$
where $G$ is the Galois group of $f$ and $[g_{\tau}]$ denotes the
$G$-conjugacy class of $g_{\tau}$.
###### Proof.
The points $P\in Y(\widehat{\mathcal{O}}_{M,\mathfrak{p}})$ where the fiber
$f^{-1}(P)$ has a non-trivial action of the inertia group are all contained in
the locus of points whose reductions modulo $\mathfrak{p}$ lie in the image of
the ramification locus $f(Z_{f})(\mathbb{F}_{\mathfrak{p}})$. Therefore, their
density is bounded above by
$\frac{|f(Z_{f})(\mathbb{F}_{\mathfrak{p}})|}{|Y(\mathbb{F}_{\mathfrak{p}})|}\to
0\text{ as }N\mathfrak{p}\to\infty$
by the Lang-Weil estimates. This proves the first part of the corollary.
Similarly, for the second part, we can assume that the map $f:X\to Y$ is étale
and Galois by excising the ramified locus since it has measure $0$ in the
limit $N\mathfrak{p}\to\infty$. In this case, the problem reduces to the
finite field version by Hensel lifting and the second claim follows from the
Chebotarev density theorem over finite fields (or indeed, also from our proof
of Theorem 3.9 which extends the proof of the classical Chebotarev density
theorem). ∎
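The $S_{2}$ case of this limiting behaviour can be checked by brute force over the residue field: among monic quadratics over $\mathbb{F}_{q}$, the split and irreducible proportions both approach $1/2=|[g]|/|S_{2}|$, while the ramified (repeated-root) locus has proportion $1/q\to 0$. The sketch below is our own illustrative code (the helper name `quadratic_counts` is not from the paper):

```python
from fractions import Fraction

def quadratic_counts(q):
    """Count monic quadratics x^2 + b*x + c over F_q (q prime) by the number
    of distinct roots: 2 (split), 1 (repeated root), 0 (irreducible)."""
    split = repeated = irreducible = 0
    for b in range(q):
        for c in range(q):
            roots = sum(1 for x in range(q) if (x * x + b * x + c) % q == 0)
            if roots == 2:
                split += 1
            elif roots == 1:
                repeated += 1
            else:
                irreducible += 1
    return split, repeated, irreducible

q = 101
s, rep, irr = quadratic_counts(q)
total = q * q
print(Fraction(s, total))    # (q-1)/(2q): tends to 1/2 = |class of identity| / |S_2|
print(Fraction(rep, total))  # 1/q: the ramified locus, tends to 0
```

The exact counts are $q(q-1)/2$ split, $q$ ramified and $q(q-1)/2$ irreducible, matching the Chebotarev proportions in the limit.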
## 4 A conjecture on the density of polynomials with fixed factorization type
In this section, we will prove [4, Conjecture 1.2] as an application of the
ideas in this paper. We recall some notation from their paper first.
### 4.1 Factorization types of polynomials
A _factorization type of degree $n$_ is a multiset
$\\{f_{1}^{e_{1}}f_{2}^{e_{2}}\dots f_{r}^{e_{r}}\\}$ where the $f_{i},e_{i}$
are positive integers satisfying $\sum_{i}f_{i}e_{i}=n$. We allow repeats in
the list of symbols $f_{i}^{e_{i}}$ but the order in which they appear does
not matter. We will often omit exponents $e_{i}$ if $e_{i}=1$.
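As a quick illustration, the factorization types of degree $n$ are exactly the multisets of pairs $(f,e)$ of positive integers with $\sum_{i}f_{i}e_{i}=n$, and they can be enumerated directly. The sketch below is our own illustrative code (not from the paper):

```python
def factorization_types(n):
    """Enumerate all factorization types of degree n: multisets of pairs
    (f, e) of positive integers with sum of f*e equal to n."""
    # candidate parts (f, e) in a fixed order; recursing with a nondecreasing
    # start index produces each multiset exactly once
    parts = sorted((f, e) for e in range(1, n + 1)
                   for f in range(1, n + 1) if f * e <= n)
    results = []

    def rec(remaining, start, chosen):
        if remaining == 0:
            results.append(tuple(chosen))
            return
        for i in range(start, len(parts)):
            f, e = parts[i]
            if f * e <= remaining:
                rec(remaining - f * e, i, chosen + [(f, e)])

    rec(n, 0, [])
    return results

print(len(factorization_types(2)))  # 3: {2}, {1^2}, {1,1}
print(len(factorization_types(3)))  # 5
```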
For an étale extension $L/K$ of degree $n$ over a local field $K$, we define
its factorization type to be $(f_{1}^{e_{1}},\dots,f_{r}^{e_{r}})$ if a
uniformizer of $K$ factors in $L$ as $P_{1}^{e_{1}}\dots P_{r}^{e_{r}}$ where
the $P_{i}$ are primes of $L$ having residue degree $f_{i}$.
We consider non-zero polynomials $h(z)$ of degree $n$ in
${{\mathcal{O}}_{K}}[z]$ as elements in ${{\mathcal{O}}_{K}}^{n+1}$ through
their coordinates with the associated Haar measure $\mu_{\mathrm{Haar}}$
normalized so that the total measure is $1$.
###### Conjecture 4.1 (Conjecture 1.2, [4], generalized to a local field).
Let $\sigma$ be any factorization type of degree $n$ and $K$ a local field
with size of residue field $q$. Set
* •
$\rho(n,\sigma;q)\coloneqq$ the density of polynomials $h\in K[z]$ of degree
$n$ such that $L=K[z]/h(z)$ is étale over $K$ with factorization type
$\sigma$,
* •
$\alpha(n,\sigma;q)\coloneqq$ the density of monic polynomials
$h\in{{\mathcal{O}}_{K}}[z]$ of degree $n$ such that $L=K[z]/h(z)$ is étale
over $K$ with factorization type $\sigma$,
* •
$\beta(n,\sigma;q)\coloneqq$ the density of polynomials $h(z)\equiv
z^{n}\pmod{\mathfrak{m}_{K}}$ of degree $n$ such that $L=K[z]/h(z)$ is étale
over $K$ with factorization type $\sigma$.
Then $\rho(n,\sigma;q),\alpha(n,\sigma;q)$ and $\beta(n,\sigma;q)$ are
rational functions of $q$ and satisfy
$\rho(n,\sigma;q^{-1})=\rho(n,\sigma;q);$ (4.1)
$\alpha(n,\sigma;q^{-1})=\beta(n,\sigma;q).$ (4.2)
Implicitly, we have made the claim that the densities depend on $K$ only
through the size of its residue field $\mathbb{F}_{q}$.
We will prove Equation 4.1 for all factorization types $\sigma$ and all primes
$q$ in the rest of this paper. It should be possible to also prove Equation
4.2 by extending our methods although we do not do so here.
Let us first relate the above conjecture to the densities appearing in the
rest of this paper. Every squarefree polynomial $h(z)$ with coefficients in a
local field $K$ corresponds to a point in $\mathbb{P}^{n}(K)$ through its
coefficients, and to it we can associate a conjugacy class of an admissible
pair $[\tau_{h}]$ (for $S_{n}$) using the map
$f:(\mathbb{P}^{1})^{n}\to(\mathbb{P}^{1})^{n}/S_{n}=\mathbb{P}^{n}.$
If we further fix an ordering of the roots of $h(z)$, that corresponds to
fixing an element in the conjugacy class $[\tau_{h}]$. For an admissible pair
$\tau$, recall that
$U_{f,\tau}(K)=\\{h(z)\in\mathbb{P}^{n}(K)\text{ squarefree with
}\tau\in[\tau_{h}]\\}$
with density $\rho_{f,\tau}(K)=\mu_{\mathbb{P}^{n}}(U_{f,\tau}(K))$. The Haar
measure on ${{\mathcal{O}}_{K}}^{n+1}$ relates to the canonical measure on
$\mathbb{P}^{n}({{\mathcal{O}}_{K}})$ as follows.
###### Lemma 4.2.
Consider the canonical quotient map
$\pi:{{\mathcal{O}}_{K}}^{n+1}\setminus\\{0\\}\to\mathbb{P}^{n}({{\mathcal{O}}_{K}}).$
For a measurable set $S\subset\mathbb{P}^{n}({{\mathcal{O}}_{K}})$, we have
the equality of measures
$\mu_{\mathrm{Haar}}(\pi^{-1}(S))=\frac{\mu_{\mathbb{P}^{n}}(S)}{|\mathbb{P}^{n}(\mathbb{F}_{q})|}$
where $q$ is the size of the residue field of $K$.
###### Proof.
In brief, the canonical top form on ${{\mathcal{O}}_{K}}^{n+1}$ integrated
over a $\mathbb{G}_{m}$ orbit (acting diagonally) gives a gauge form on
$\mathbb{P}^{n}({{\mathcal{O}}_{K}})$. Therefore, the measures of $S$ and
$\pi^{-1}(S)$ are equal to a global normalization. More explicitly:
Consider local coordinates where $v_{p}(a_{0})\leq v_{p}(a_{i})$ for all
$0<i\leq n$. On this chart, we can realize $\pi$ as a projection
$\pi:{{\mathcal{O}}_{K}}^{n+1}\setminus\\{0\\}\to{{\mathcal{O}}_{K}}^{n};(a_{0},\dots,a_{n})\to(a_{1}/a_{0},\dots,a_{n}/a_{0})$
onto the final $n$ co-ordinates. From this description, one computes that the
preimage of a set $S$ has measure exactly
$\mu_{\mathrm{Haar}}(\pi^{-1}(S))=\frac{\mu_{\mathbb{P}^{n}}(S)}{q^{n}}\left(\frac{q-1}{q}+\frac{q-1}{q^{n+2}}+\dots\right)=\frac{\mu_{\mathbb{P}^{n}}(S)(q-1)}{q^{n+1}-1}$
as required. ∎
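The geometric series in this chart computation can be checked symbolically. The following sketch (our own illustrative code, not from the paper) verifies that the chart-by-chart sum equals $1/|\mathbb{P}^{n}(\mathbb{F}_{q})|=(q-1)/(q^{n+1}-1)$:

```python
from fractions import Fraction

def scaling_factor(q, n):
    """Exact value of (1/q^n) * ((q-1)/q + (q-1)/q^(n+2) + ...), summing the
    geometric series with common ratio q^-(n+1) in closed form."""
    series = Fraction(q - 1, q) / (1 - Fraction(1, q ** (n + 1)))
    return series / q ** n

def inv_proj_count(q, n):
    """1 / |P^n(F_q)|, where |P^n(F_q)| = (q^(n+1) - 1)/(q - 1)."""
    return Fraction(q - 1, q ** (n + 1) - 1)

for q in (2, 3, 5, 7):
    for n in (1, 2, 3):
        assert scaling_factor(q, n) == inv_proj_count(q, n)
print("geometric-series identity verified")
```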
In the other direction, to each admissible pair $\tau$ as above (with respect
to $S_{n}$), we can associate a factorization type $s(\tau)$ such that if
$\tau\in[\tau_{h}]$ for some squarefree polynomial $h(z)$, then $s(\tau)$ is
the factorization type for $h(z)$:
###### Definition 4.3.
[Factorization types from admissible pairs] Recall that an admissible pair
$\tau$ has associated to it a normal inclusion of subgroups of $S_{n}$:
$H_{\tau}\subset H_{1,\tau}$ where the smaller group corresponds to the
inertia group and the bigger group to the total Galois group. The action of
$H_{1,\tau}$ on the set $[n]=\\{1,\dots,n\\}$ will partition $[n]$ into orbits
$\pi_{1},\dots,\pi_{r}$ and we define $H_{1,\tau}^{(i)}$ to be the projection
of $H_{1,\tau}$ in the symmetric group
$S_{\pi_{i}}=\operatorname{Sym}(\pi_{i})$ and similarly for $H_{\tau}^{(i)}$.
Furthermore, the action of $H_{\tau}^{(i)}$ will partition $\pi_{i}$ into
further orbits $\pi_{i,j}$. Since $H_{\tau}$ is normal in $H_{1,\tau}$, the
$\pi_{i,j}$ all have the same size, which we denote by $e_{i}$, and the number
of such orbits is denoted by $f_{i}$. Given this data, the factorization
type $s(\tau)$ is defined to be the multiset with $r$ elements $f_{i}^{e_{i}}$
corresponding to the parts $\pi_{1},\dots,\pi_{r}$.
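The construction of $s(\tau)$ from orbit data is easy to sketch in code. The following is our own illustrative implementation, assuming the groups $H_{\tau}\subset H_{1,\tau}$ are presented by permutation generators (a permutation on $\{0,\dots,n-1\}$ encoded as a tuple $g$ with $i\mapsto g[i]$):

```python
def orbits(generators, n):
    """Orbits on range(n) of the group generated by the given permutations."""
    seen, out = set(), []
    for start in range(n):
        if start in seen:
            continue
        orbit, stack = {start}, [start]
        while stack:
            x = stack.pop()
            for g in generators:
                y = g[x]
                if y not in orbit:
                    orbit.add(y)
                    stack.append(y)
        seen |= orbit
        out.append(frozenset(orbit))
    return out

def factorization_type(H_gens, H1_gens, n):
    """s(tau) as in Definition 4.3: for each H1-orbit pi_i, e_i is the common
    size of the H-orbits inside pi_i and f_i is how many there are.
    Assumes the group generated by H_gens is a subgroup of that of H1_gens."""
    H_orbits = orbits(H_gens, n)
    sigma = []
    for pi in orbits(H1_gens, n):
        inner = [o for o in H_orbits if o <= pi]
        e = len(inner[0])  # all inner orbits share a size (H normal in H1)
        sigma.append((len(inner), e))  # the pair (f_i, e_i)
    return sorted(sigma)

# H1 = <(0 1 2 3)> cyclic of order 4, H = <(0 2)(1 3)> its index-2 subgroup
H1 = [(1, 2, 3, 0)]
H = [(2, 3, 0, 1)]
print(factorization_type(H, H1, 4))  # [(2, 2)]: a single factor with f = e = 2
```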
Note that $s(\tau)$ only depends on the conjugacy class of $\tau$. We note in
passing that the coset $g_{\tau}$ cyclically permutes the partitions
$\pi_{i,j}$ for $j=1,\dots,f_{i}$. The next lemma relates the factorization
type densities to the splitting type densities from earlier in the paper and
along the way, shows that $s([\tau_{h}])=\sigma_{h}$ as claimed above.
###### Lemma 4.4.
Let $\sigma=\\{f_{1}^{e_{1}},\dots,f_{r}^{e_{r}}\\}$ be a factorization type.
Then, we have the identity
$\rho(n,\sigma;q)=\frac{1}{|\mathbb{P}^{n}(\mathbb{F}_{q})|}\sum_{\begin{subarray}{c}\tau\text{
up to }S_{n}\text{ conjugacy}\\\ \text{satisfying
}s(\tau)=\sigma\end{subarray}}\rho_{f,\tau}(K).$
###### Proof.
We will show that the polynomials contributing to the density calculation in
the right hand side correspond precisely to the polynomials contributing to
the density calculation on the left hand side. This will follow if we show
that a polynomial $h(z)$ with admissible pair $[\tau_{h}]$ satisfies
$s([\tau_{h}])=\sigma_{h}$. Since we are taking admissible pairs up to
conjugacy on the right, this will prove exactly the bijective correspondence
we need. The normalizing factor $|\mathbb{P}^{n}(\mathbb{F}_{q})|^{-1}$
follows from Lemma 4.2.
To this end, let $h(z)\in K(z)$ be a squarefree degree $n$ polynomial with
factorization type $\sigma$. Upon fixing an ordering of its roots, we obtain
an admissible pair $\tau_{h}$. The corresponding decomposition of $[n]$ into
parts $\pi_{1},\dots,\pi_{r}$ as in Definition 4.3 above corresponds exactly
to the orbits of $\operatorname{Gal}(\overline{K}/K)$ on the set of roots and
hence to the decomposition of $h(z)=h_{1}(z)\dots h_{r}(z)$ into irreducible
polynomials over $K$. Moreover, $H_{1,\tau}^{(i)},H_{\tau}^{(i)}$ correspond
respectively to the Galois group and inertia group of the splitting field
$L_{i}/K$ of $h_{i}(z)$, and it follows that $f_{i}$ and $e_{i}$ are
respectively the residue degree and ramification index of the extension
$K\subset K[z]/h_{i}(z)$.
∎
###### Theorem 4.5.
For a factorization type $\sigma$, let $\mathcal{L}_{\sigma}$ be the set of
local fields for which the characteristic of the residue field is co-prime to
the exponents $e_{i}$ occurring in $\sigma$.
For this set of local fields, $\rho(n,\sigma;q)$ is equal to a rational
function $r_{\sigma}(t)$ evaluated at $t=q$, the size of the residue field.
This rational function takes values in $\mathbb{Q}$ and satisfies the symmetry
$r_{\sigma}(t^{-1})=r_{\sigma}(t).$
###### Proof.
By [30], [11], $\rho(n,\sigma;q)$ is known to be a rational function for the
set of local fields considered in the theorem. It thus suffices to prove the
above symmetry after evaluating $t$ at infinitely many values $q$ coming from
$\mathcal{L}_{\sigma}$ since rational functions are determined by their values
at any large enough set of points.
In particular, we can fix an arbitrarily large $p$ and consider all the
extensions of $\mathbb{Z}_{p}$. For this set of extensions, we know by Theorem
3.12 and Lemma 4.4 that $\rho(n,\sigma;q)$ is a palindromic form (over
$\mathbb{Q}$) of weight $1$ since $\rho_{f,\tau}(K)$ and
$|\mathbb{P}^{n}(\mathbb{F}_{q})|$ are palindromic forms of weight $n$ so that
their ratio is a palindromic form of weight $1$. In other words, this shows
that
$r_{\sigma}(q^{-1})=r_{\sigma}(q)$
as $q$ ranges over the powers of the prime $p$, thus completing the proof. ∎
The above theorem very quickly deals with “tame” primes as a combination of
earlier results and the general theorems in this paper. Unfortunately, this is
the limit of this direct application of our general theorems in this context,
since the splitting densities $\rho_{f,\tau}(K)$ fail to be rational functions
in general: they are only rational along arithmetic progressions.
In the final part of this section, we relate the factorization densities more
directly to certain integrals over a particularly nice class of quotients. In
forthcoming work, we complete the proof of rationality (and hence also of the
functional equation) for _all_ primes, including the wild ones. See Remark 4.9
for more detail.
### 4.2 Reducing the computation of factorization densities to certain
integrals
For a factorization type $\sigma=\\{f_{1}^{e_{1}},\dots,f_{r}^{e_{r}}\\}$ of
degree $n$ we define some associated notions. We pick a partition
$\mathcal{P}_{\sigma}$ of $[n]=\\{1,\dots,n\\}$ with distinct parts
$\pi^{\sigma}_{i,j}$ of size $e_{i}$ for $1\leq i\leq r$ and $1\leq j\leq
f_{i}$. If required, we will express the $f_{i},e_{i},r$ by
$f_{i}(\sigma),e_{i}(\sigma),r(\sigma)$ to make the dependence on $\sigma$
clear. Note that this partition is well defined up to $S_{n}$ conjugacy and
indeed, all the constructions here will be well defined precisely up to this
relabelling.
We next define an associated admissible pair
$\tau_{\sigma}=(H_{\sigma},g_{\sigma}H_{\sigma})$ where $H_{\sigma}$ is the
maximal subgroup of $S_{n}$ preserving the partition $\mathcal{P}_{\sigma}$,
i.e., $H_{\sigma}\cong\prod_{i=1}^{r}(S_{e_{i}}^{f_{i}})$ while the coset
$g_{\sigma}H_{\sigma}$ cyclically permutes the parts in the order
$\pi^{\sigma}_{i,1},\dots,\pi^{\sigma}_{i,f_{i}}$ for each $i$.
Since everything is only well defined up to conjugacy, we say that $A\lesssim
B$ if $A$ is less than some $S_{n}$-conjugate of $B$. The mapping $\tau\to
s(\tau)$ is adjoint to $\sigma\to\tau_{\sigma}$ in the following sense.
###### Lemma 4.6.
For any admissible pair $\tau$ for $S_{n}$ and factorization type $\sigma$,
$\tau\lesssim\tau_{\sigma}$ if and only if
$\tau_{s(\tau)}\lesssim\tau_{\sigma}$.
###### Proof.
The admissible pair $\tau$ induces a partition $\mathcal{P}_{\tau}$ of $[n]$
corresponding to the orbits of $H_{\tau}$ on $[n]$ as in Definition 4.3. The
inequality $\tau\lesssim\tau_{\sigma}$ is equivalent to there being a
coarsening of $\mathcal{P}_{\tau}$ to the partition $\mathcal{P}_{\sigma}$
such that the coset $g_{\tau}H_{\tau}$ induces the same permutation as
$g_{\sigma}H_{\sigma}$. Since $\tau$ and $\tau_{s(\tau)}$ induce the same
partition of $[n]$ and the same induced cyclic permutation on the set of
cosets, the lemma follows. ∎
We define a partial order on the set of factorization types by
$\sigma^{\prime}\leq\sigma\iff\tau_{\sigma^{\prime}}\lesssim\tau_{\sigma}.$
We can then rephrase the above lemma as
$\tau\lesssim\tau_{\sigma}\iff s(\tau)\leq\sigma.$
We need one more lemma before the main theorems of this section.
###### Lemma 4.7.
The invariant $\alpha$ from Definition 2.12 satisfies the following identity
$\alpha(\tau,\tau_{\sigma})=\alpha(\tau_{s(\tau)},\tau_{\sigma}).$
###### Proof.
By definition, $\alpha(\tau,\tau_{\sigma})$ is the number of elements in
$H_{\sigma}\backslash S_{n}$ which induce a permutation of the partition
$\mathcal{P}_{\tau}$ while preserving the properties that $\mathcal{P}_{\tau}$
is a refinement of $\mathcal{P}_{\sigma}$ and the induced cyclic permutation
of $g_{\tau}$ on $\mathcal{P}_{\sigma}$ equals the induced cyclic permutation
of $g_{\sigma}$ on $\mathcal{P}_{\sigma}$. Since $\tau$ and $\tau_{s(\tau)}$
induce the same partition of $[n]$ and the same induced cyclic permutation,
the lemma follows. ∎
For each factorization type $\sigma$, we denote a resolution of the quotient
map by
$f_{\sigma}:X_{\sigma}\to{{}^{g_{\sigma}}}\left((\mathbb{P}^{1})^{n}/H_{\sigma}\right)\to\mathbb{P}^{n}$
where $X_{\sigma}$ is a nice resolution of singularities that we assume exists
(as in the rest of this paper).
###### Theorem 4.8.
For any factorization type $\sigma$ of degree $n$, we have the identity
$\eta_{f_{\sigma}}(K)=\sum_{\begin{subarray}{c}\tau:s(\tau)\leq\sigma\\\
\text{up to }S_{n}\text{
conjugacy}\end{subarray}}\alpha(\tau,\tau_{\sigma})\rho_{f_{\sigma},\tau}(K)=\sum_{\sigma^{\prime}\leq\sigma}\alpha(\tau_{\sigma^{\prime}},\tau_{\sigma})|\mathbb{P}^{n}(\mathbb{F}_{q})|\rho(n,\sigma^{\prime};q).$
###### Proof.
The first identity follows immediately from Theorem 3.9 and Lemma 4.6. The
second identity follows from Lemma 4.7 and Lemma 4.4. ∎
Finally, we note as before that since
$\alpha(\tau_{\sigma^{\prime}},\tau_{\sigma})>0$ for all
$\sigma^{\prime}\leq\sigma$, we can Möbius invert the above system of
identities with respect to the poset of factorization types to obtain
$\rho(n,\sigma;q)=\frac{1}{|\mathbb{P}^{n}(\mathbb{F}_{q})|}\sum_{\sigma^{\prime}\leq\sigma}\alpha^{-1}(\tau_{\sigma^{\prime}},\tau_{\sigma})\eta_{f_{\sigma^{\prime}}}(K)$
(4.3)
where $\alpha^{-1}$ is the inverse of $\alpha$ in the incidence algebra of the
poset of factorization types. We have thus shown that the conjecture on
factorization
densities is equivalent to showing that the integrals
$\eta_{f_{\sigma}}(K)/|\mathbb{P}^{n}(\mathbb{F}_{q})|$ are rational functions
in $q$ invariant under the transformation $q\to q^{-1}$.
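The Möbius inversion invoked here is the standard inversion in the incidence algebra of a finite poset, which is possible exactly because the diagonal entries $\alpha(\tau_{\sigma},\tau_{\sigma})$ are non-zero. As an illustrative sketch (our own code, sanity-checked on the divisor poset rather than the poset of factorization types):

```python
from fractions import Fraction

def incidence_inverse(elements, leq, alpha):
    """Right inverse of alpha in the incidence algebra of a finite poset:
    sum over x <= z <= y of alpha(x, z) * inv(z, y) equals delta(x, y).
    Requires alpha(x, x) != 0 for every x."""
    inv = {}

    def compute(x, y):
        if (x, y) in inv:
            return inv[(x, y)]
        if x == y:
            val = Fraction(1) / alpha(x, x)
        else:
            # solve the triangular system: isolate inv(x, y) from the delta relation
            s = sum(alpha(x, z) * compute(z, y)
                    for z in elements if leq(x, z) and leq(z, y) and z != x)
            val = -s / alpha(x, x)
        inv[(x, y)] = val
        return val

    for x in elements:
        for y in elements:
            if leq(x, y):
                compute(x, y)
    return inv

# Sanity check: inverting the zeta function of the divisor poset recovers
# the classical Möbius function mu(y/x).
divisors = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0
zeta = lambda a, b: Fraction(1)
mu = incidence_inverse(divisors, leq, zeta)
print(mu[(1, 6)], mu[(1, 2)], mu[(1, 12)])  # 1 -1 0
```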
###### Remark 4.9.
In order to prove that the integrals appearing in the above equation are
rational functions in $q$, we need to find explicitly the nice resolution
$f_{\sigma}:X_{\sigma}\to\mathbb{P}^{n}$ assumed to exist above. We find it
quite likely that one can find an explicit resolution in characteristic $p$
not dividing any of the exponents $e_{i}$ appearing in $\sigma$ and in this
way prove that the relevant subvarieties are all of Tate type, i.e., have
polynomial point counts. However, finding a uniform resolution for all primes
appears significantly more complicated. In forthcoming work, we describe a
resolution by an Artin stack, prove the relevant change of variables formula
and in this way complete the proof of Equation 4.1 for _all_ primes. We end
this paper by describing exactly the kind of varieties of which we need such a
resolution.
We consider $\mathbb{P}^{n}$ as the parameter space for degree at most $n$
univariate polynomials as usual so that $\prod_{i=1}^{r}\mathbb{P}^{n_{i}}$
parametrizes tuples $(h_{1},\dots,h_{r})$ of polynomials. We have reduced the
proof of rationality to finding a resolution for the complement of the
_resultant locus_ $\mathcal{R}\subset\prod_{i=1}^{r}\mathbb{P}^{n_{i}}$
corresponding to the locus where two of the polynomials $h_{i},h_{j}$ share a
common root. Crucially, we want such a compactification over
$\operatorname{Spec}\mathbb{Z}$ (or at least over an open cover).
This problem is closely analogous to the problem of finding a resolution of
the discriminant locus $\mathcal{D}\subset\mathbb{P}^{n}$ corresponding to
polynomials $h$ having repeated roots. In the forthcoming work, we also
describe how to resolve this locus by an Artin stack.
## References
* [1] Dan Abramovich and Jianhua Wang “Equivariant resolution of singularities in characteristic 0” In _arXiv preprint alg-geom/9609013_ , 1996
* [2] Victor V Batyrev “Birational Calabi–Yau n-folds have equal Betti numbers” In _arXiv preprint alg-geom/9710020_ , 1997
* [3] Manjul Bhargava “Mass formulae for extensions of local fields, and conjectures on the density of number field discriminants” In _International Mathematics Research Notices_ 2007.9 OUP, 2007, pp. rnm052–rnm052
* [4] Manjul Bhargava, John Cremona, Tom Fisher and Stevan Gajović “The density of polynomials of degree n over Z_p having exactly r roots in Q_p” In _Proceedings of the London Mathematical Society_ Wiley Online Library, 2022
* [5] Edward Bierstone “Canonical desingularization in characteristic zero by blowing up the maximum strata of a local invariant” In _Inventiones mathematicae_ 128.2 Springer, 1997, pp. 207–302
* [6] Franziska Bittner “The universal Euler characteristic for varieties of characteristic zero” In _Compositio Mathematica_ 140.4 London Mathematical Society, 2004, pp. 1011–1032
* [7] André Bloch and György Pólya “On the roots of certain algebraic equations” In _Proceedings of the London Mathematical Society_ 2.1 Oxford Academic, 1932, pp. 102–114
* [8] Joe Buhler, Daniel Goldstein, David Moews and Joel Rosenberg “The probability that a random monic p-adic polynomial splits” In _Experimental Mathematics_ 15.1 Taylor & Francis, 2006, pp. 21–32
* [9] Xavier Caruso “Where are the zeroes of a random p-adic polynomial?” In _Forum of Mathematics, Sigma_ 10, 2022 Cambridge University Press
* [10] Javier Carvajal-Rojas and Takehiko Yasuda “On the behavior of stringy motives under Galois quasi-étale covers” In _arXiv preprint arXiv:2105.05214_ , 2021
* [11] Ilaria Del Corso and Roberto Dvornicich “Uniformity over primes of tamely ramified splittings” In _manuscripta mathematica_ 101.2, 2000, pp. 239–266 DOI: 10.1007/pl00005848
* [12] Amir Dembo, Bjorn Poonen, Qi-Man Shao and Ofer Zeitouni “Random polynomials having few or no real zeros” In _Journal of the American Mathematical Society_ 15.4, 2002, pp. 857–892
* [13] Jan Denef and François Loeser “Caractéristiques d’Euler-Poincaré, fonctions zêta locales et modifications analytiques” In _Journal of the American Mathematical Society_ JSTOR, 1992, pp. 705–720
* [14] Jan Denef and François Loeser “Definable sets, motives and p-adic integrals” In _Journal of the American Mathematical Society_ 14.2, 2000, pp. 429–469 DOI: 10.1090/s0894-0347-00-00360-x
* [15] Jan Denef and Diane Meuser “A functional equation of Igusa’s local zeta function” In _American Journal of Mathematics_ 113.6 JSTOR, 1991, pp. 1135–1152
* [16] Santiago Encinas and Orlando Villamayor “Good points and constructive resolution of singularities” In _Acta mathematica_ 181.1 Springer Netherlands, Dordrecht, 1998, pp. 109–158
* [17] Steven Evans “The expected number of zeros of a random system of p-adic polynomials” In _Electronic Communications in Probability_ 11 Institute of Mathematical Statistics and Bernoulli Society, 2006, pp. 278–290
* [18] Michael Groechenig, Dimitri Wyss and Paul Ziegler “Geometric stabilisation via p-adic integration” In _Journal of the American Mathematical Society_ 33.3, 2020, pp. 807–873
* [19] Michael Groechenig, Dimitri Wyss and Paul Ziegler “Mirror symmetry for moduli spaces of Higgs bundles via p-adic integration” In _Inventiones mathematicae_ 221.2 Springer, 2020, pp. 505–596
* [20] Jun-ichi Igusa “An introduction to the theory of local zeta functions, volume 14 of AMS” In _IP Studies in Advanced Mathematics. American Mathematical Society, Providence, RI_ 6, 2000
* [21] Mark Kac “On the average number of real roots of a random algebraic equation” In _Bulletin of the American Mathematical Society_ 49.4 American Mathematical Society, 1943, pp. 314–320
* [22] Avinash Kulkarni and Antonio Lerario “p-adic Integral Geometry” In _SIAM Journal on Applied Algebra and Geometry_ 5.1 SIAM, 2021, pp. 28–59
* [23] François Loeser “Seattle lectures on motivic integration” In _Algebraic geometry—Seattle 2005_ 80, 2009, pp. 745–784
* [24] Oanh Nguyen and Van Vu “Random polynomials: central limit theorems for the real roots” In _Duke Mathematical Journal_ 170.17 Duke University Press, 2021, pp. 3745–3813
* [25] Bjorn Poonen “Rational points on varieties” American Mathematical Soc., 2017
* [26] Miles Reid “La correspondance de McKay” In _Asterisque-Societe Mathematique de France_ 276 CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE, 2002, pp. 53–72
* [27] Jean-Pierre Serre “Une formule de masse pour les extensions totalement ramifiées de degré donné d’un corps local” In _CR Acad. Sc. Paris A_ 286, 1978, pp. 1031–1036
* [28] Roy Shmueli “The probability that a p-adic random étale algebra is an unramified field” arXiv, 2022 DOI: 10.48550/arXiv.2211.12995
* [29] Takehiko Yasuda “The wild McKay correspondence and p-adic measures” In _Journal of the European Mathematical Society_ 19.12, 2017, pp. 3709–3743
* [30] John Yin “Density of $p$-adic polynomials generating extensions with fixed splitting type” arXiv, 2022 DOI: 10.48550/ARXIV.2211.10425
# Learning Markov Random Fields for Combinatorial Structures via
Sampling through Lovász Local Lemma
Nan Jiang1, Yi Gu2, Yexiang Xue1
###### Abstract
Learning to generate complex combinatorial structures satisfying constraints
will have transformative impacts in many application domains. However, it is
beyond the capabilities of existing approaches due to the highly intractable
nature of the embedded probabilistic inference. Prior works spend most of the
training time learning to separate valid from invalid structures but do not
learn the inductive biases of valid structures. We develop NEural Lovász
Sampler (Nelson), which embeds the sampler through Lovász Local Lemma (LLL) as
a fully differentiable neural network layer. Our Nelson-CD embeds this sampler
into the contrastive divergence learning process of Markov random fields.
Nelson allows us to obtain valid samples from the current model distribution.
Contrastive divergence is then applied to separate these samples from those in
the training set. Nelson is implemented as a fully differentiable neural net,
taking advantage of the parallelism of GPUs. Experimental results on several
real-world domains reveal that Nelson learns to generate 100% valid
structures, while baselines either time out or cannot ensure validity. Nelson
also outperforms other approaches in running time, log-likelihood, and MAP
scores.
## Introduction
In recent years, tremendous progress has been made in generative modeling
(Hinton 2002; Tsochantaridis et al. 2005; Goodfellow et al. 2014; Kingma and
Welling 2014; Germain et al. 2015; Larochelle and Murray 2011; van den Oord,
Kalchbrenner, and Kavukcuoglu 2016; Arjovsky, Chintala, and Bottou 2017; Song
and Ermon 2019; Song et al. 2021; Murphy, Weiss, and Jordan 1999; Yedidia,
Freeman, and Weiss 2000; Wainwright and Jordan 2006).
Learning a generative model involves increasing the divergence in likelihood
scores between the structures in the training set and those structures sampled
from the current generative model distribution. While current approaches have
achieved successes in un-structured domains such as vision or speech, their
performance is degraded in the structured domain, because it is already
computationally intractable to search for a valid structure in a combinatorial
space subject to constraints, not to mention sampling, which has a higher
complexity. In fact, when applied in a constrained domain, existing approaches
spend most of their training time manipulating the likelihood of invalid
structures, but not learning the difference between valid structures inside
and outside of the training set. In the meantime, tremendous progress has been
made in automated reasoning (Braunstein, Mézard, and Zecchina 2005; Andersen
et al. 2007; Chavira, Darwiche, and Jaeger 2006; Van Hentenryck 1989; Gogate
and Dechter 2012; Sang, Bearne, and Kautz 2005). Nevertheless, reasoning and
learning have been growing independently for a long time. Only recently do
ideas emerge exploring the role of reasoning in learning (Kusner, Paige, and
Hernández-Lobato 2017; Jin, Barzilay, and Jaakkola 2018; Dai et al. 2018; Hu
et al. 2017; Lowd and Domingos 2008; Ding et al. 2021).
The Lovász Local Lemma (LLL) (Erdős and Lovász 1973) is a classic gem in
combinatorics. At a high level, it states that there is a positive probability
that none of a collection of bad events occurs, as long as these events are
mostly independent of one another and none is too likely individually.
Recently, Moser and Tardos (2010) devised an algorithm which samples from the
probability distribution proven to exist by the LLL. Guo, Jerrum, and Liu
(2019) proved that the algorithmic LLL is an unbiased sampler if those bad
events satisfy the so-called “extremal” condition. The expected running time of
the sampler is also shown to be polynomial. As one contribution of this paper,
we offer proofs of the two aforementioned results using precise mathematical
notations, clarifying a few descriptions not precisely defined in the original
proof. While this line of research clearly demonstrates the potential of LLL
in generative learning (generating samples that satisfy all hard constraints),
it is not clear how to embed LLL-based samplers into learning and no empirical
studies have been performed to evaluate LLL-based samplers in machine
learning.
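To make the resampling idea concrete, the following is a minimal sketch of the Moser–Tardos loop for CNF constraints. This is our own illustrative code (the function name and clause encoding are our choices, and it is not the paper's Nelson implementation): start from a uniform random assignment and, while some clause ("bad event") is violated, resample the variables of one violated clause.

```python
import random

def moser_tardos_sat(clauses, n_vars, seed=0):
    """Moser-Tardos resampling for CNF. Literals are nonzero ints:
    v means x_v is true, -v means x_v is false."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}

    def violated():
        return [c for c in clauses
                if not any(assign[abs(l)] == (l > 0) for l in c)]

    bad = violated()
    while bad:
        for l in bad[0]:  # resample all variables of one violated clause
            assign[abs(l)] = rng.random() < 0.5
        bad = violated()
    return assign

# Tiny CNF instance: (x1 or x2) and (not x1 or x3)
clauses = [[1, 2], [-1, 3]]
sol = moser_tardos_sat(clauses, 3)
print(all(any(sol[abs(l)] == (l > 0) for l in c) for c in clauses))  # True
```

Under the “extremal” condition of Guo, Jerrum, and Liu (2019), the output distribution is uniform over satisfying assignments; this toy version only illustrates the resampling mechanics.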
In this paper, we develop NEural Lovász Sampler (Nelson), which implements the
LLL-based sampler as a fully differentiable neural network. Our Nelson-CD
embeds Nelson into the contrastive divergence learning process of Markov
Random Fields (MRFs). Embedding LLL-based sampler allows the contrastive
learning algorithm to focus on learning the difference between the training
data and the valid structures drawn from the current model distribution.
Baseline approaches, on the other hand, spend most of their training time
learning to generate valid structures. In addition, Nelson is fully
differentiable, hence allowing for efficient learning harnessing the
parallelism of GPUs.
Related to our Nelson are neural-based approaches to solve combinatorial
optimization problems (Selsam et al. 2019; Duan et al. 2022; Li, Chen, and
Koltun 2018). Machine learning is also used to discover better heuristics
(Yolcu and Póczos 2019; Chen and Tian 2019). Reinforcement learning (Karalias
and Loukas 2020; Bengio, Lodi, and Prouvost 2021) as well as approaches
integrating search with neural nets (Mandi et al. 2020) are found to be
effective in solving combinatorial optimization problems as well. Regarding
probabilistic inference, there is a rich line of research on MCMC-type
sampling (Neal 1993; Dagum and Chavez 1993; Ge, Xu, and Ghahramani 2018) and
various versions of belief propagation (Murphy, Weiss, and Jordan 1999; Ihler,
Fisher III, and Willsky 2005; Coja-Oghlan, Müller, and Ravelomanana 2020; Ding
and Xue 2020). SampleSearch (Gogate and Dechter 2011) integrates importance
sampling with constraint-driven search. Probabilistic inference based on
hashing and randomization obtains probabilistic guarantees for marginal
queries and sampling via querying optimization oracles subject to randomized
constraints (Gomes, Sabharwal, and Selman 2006; Ermon et al. 2013b; Achlioptas
and Theodoropoulos 2017; Chakraborty, Meel, and Vardi 2013).
We evaluate Nelson-CD on learning preferences over (i) random
$K$-satisfiability solutions, (ii) sink-free orientations of undirected graphs,
and (iii) vehicle delivery routes. In all these applications, Nelson-CD (i)
has the fastest training time due to seamless integration into the learning
framework (shown in Tables 1(a), 3(a)); (ii) generates samples that satisfy
constraints 100% of the time (shown in Tables 1(b), 3(b)), which facilitates
effective contrastive divergence learning. Other baselines either cannot
satisfy constraints or time out. (iii) The fast and valid sample generation
allows Nelson to obtain the best learning performance (shown in Table 1(c),
2(a,b), 3(c,d)).
Our contributions can be summarized as follows: (a) We present Nelson-CD, a
contrastive divergence learning algorithm for constrained MRFs driven by
sampling through the Lovász Local Lemma (LLL). (b) Our LLL-based sampler
(Nelson) is implemented as a fully differentiable multi-layer neural net,
allowing for end-to-end training on GPUs. (c) We offer a mathematically sound
proof of the sample distribution and the expected running time of the Nelson
algorithm. (d) Experimental results reveal the effectiveness of Nelson in (i)
learning models with high likelihoods (ii) generating samples 100% satisfying
constraints, and (iii) having high efficiency in training. Code is available at
https://github.com/jiangnanhugo/nelson-cd.
Please refer to the Appendix in the extended version (Jiang, Gu, and Xue 2022)
for the full proofs and the experimental settings.
## Preliminaries
#### Markov Random Fields (MRF)
represent a Boltzmann distribution of the discrete variables
$X=\\{X_{i}\\}_{i=1}^{n}$ over a Boolean hypercube
$\mathcal{X}=\\{0,1\\}^{n}$. For $x\in\mathcal{X}$, we have:
$P_{\theta}(X=x)=\frac{\exp\left(\phi_{\theta}(x)\right)}{Z(\theta)}=\frac{\exp\left(\sum_{j=1}^{m}\phi_{\theta,j}(x_{j})\right)}{Z(\theta)}.$
(1)
Here,
$Z(\theta)=\sum_{x^{\prime}\in\mathcal{X}}\exp\left(\phi_{\theta}(x^{\prime})\right)$
is the partition function that normalizes the total probability to $1$. The
potential function is $\phi_{\theta}(x)=\sum_{j=1}^{m}\phi_{\theta,j}(x_{j})$.
Each $\phi_{\theta,j}$ is a factor potential, which maps a value assignment
over a subset of variables $X_{j}\subseteq X$ to a real number. We use upper
case letters, such as $X_{j}$ to represent (a set of) random variables, and
use lower case letters, such as $x_{j}$, to represent its value assignment. We
also use $\mathtt{var}(\phi_{\theta,j})$ to represent the domain of
$\phi_{\theta,j}$, i.e., $\mathtt{var}(\phi_{\theta,j})=X_{j}$. $\theta$ are
the parameters to learn.
#### Constrained MRF
is the MRF model subject to a set of hard constraints
$\mathcal{C}=\\{c_{j}\\}_{j=1}^{L}$. Here, each constraint $c_{j}$ limits the
value assignments of a subset of variables $\mathtt{var}(c_{j})\subseteq X$.
We write $c_{j}(x)=1$ if the assignment $x$ satisfies the constraint $c_{j}$
and $0$ otherwise. Note that $x$ is an assignment to all random variables, but
$c_{j}$ only depends on variables $\mathtt{var}(c_{j})$. We denote
$C(x)=\prod_{j=1}^{L}c_{j}(x)$ as the indicator function. Clearly, $C(x)=1$ if
all constraints are satisfied and $0$ otherwise. The constrained MRF is:
$P_{\theta}(X=x|\mathcal{C})=\frac{\exp\left(\phi_{\theta}(x)\right)C(x)}{Z_{\mathcal{C}}(\theta)},$
(2)
where
$Z_{\mathcal{C}}(\theta)={\sum}_{x^{\prime}\in\mathcal{X}}\exp\left(\phi_{\theta}(x^{\prime})\right)C(x^{\prime})$
sums over only the valid assignments.
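To make Eq. (2) concrete, the constrained distribution can be computed by brute force for tiny $n$. This is a sketch for illustration only (the function `constrained_prob` and the toy constraint below are ours, not part of the paper's implementation); the whole point of the paper is that such enumeration is intractable at scale:

```python
import itertools
import math

# Brute-force constrained MRF in single-variable form (Eq. 2), for tiny n.
# phi(x) = sum_i theta_i * x_i; C(x) = 1 iff every constraint holds.
def constrained_prob(theta, constraints, x):
    n = len(theta)

    def phi(a):
        return sum(t * v for t, v in zip(theta, a))

    def C(a):
        return all(c(a) for c in constraints)

    # Partition function Z_C sums exp(phi) over valid assignments only.
    Z = sum(math.exp(phi(a)) * C(a)
            for a in itertools.product((0, 1), repeat=n))
    return math.exp(phi(x)) * C(x) / Z
```

With `theta = [0.5, -1.0]` and the single constraint "$X_1$ or $X_2$", the probabilities over the three valid assignments sum to 1 and the invalid assignment $(0,0)$ gets probability 0.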
#### Learn Constrained MRF
Given a data set $\mathcal{D}=\\{x^{k}\\}_{k=1}^{N}$, where each $x^{k}$ is a
valid assignment that satisfies all constraints, learning can be achieved via
maximal likelihood estimation. In other words, we find the optimal parameters
$\theta^{*}$ by minimizing the negative $\log$-likelihood
$\ell_{\mathcal{C}}(\theta)$:
$\displaystyle\ell_{\mathcal{C}}(\theta)$
$\displaystyle=-\frac{1}{N}\sum_{k=1}^{N}\log P_{\theta}(X=x^{k}|\mathcal{C})$
(3) $\displaystyle=-\frac{1}{N}\sum_{k=1}^{N}\phi_{\theta}(x^{k})+\log
Z_{\mathcal{C}}(\theta).$
The parameters $\theta$ can be trained using gradient descent:
$\theta^{t+1}=\theta^{t}-\eta\nabla\ell_{\mathcal{C}}(\theta)$, where $\eta$
is the learning rate. Let $\nabla\ell_{\mathcal{C}}(\theta)$ denote the
gradient of the objective $\ell_{\mathcal{C}}(\theta)$, which is calculated as:
$\displaystyle\nabla\ell_{\mathcal{C}}$
$\displaystyle(\theta)=-\frac{1}{N}\sum_{k=1}^{N}\nabla\phi_{\theta}(x^{k})+\nabla\log
Z_{\mathcal{C}}(\theta)$ (4)
$\displaystyle=-\mathbb{E}_{{x}\sim\mathcal{D}}\left(\nabla\phi_{\theta}({x})\right)+\mathbb{E}_{\tilde{x}\sim
P_{\theta}(x|\mathcal{C})}\left(\nabla\phi_{\theta}(\tilde{x})\right).$
The first term is the expectation over all data in the training set $\mathcal{D}$.
During training, this is approximated using a mini-batch of data randomly
drawn from the training set $\mathcal{D}$. The second term is the expectation
over the current model distribution $P_{\theta}(X=x|\mathcal{C})$ (detailed in
Appendix C.2). Because learning follows the direction given by the divergence
of the two expectations, this approach is commonly known as
contrastive divergence (CD) (Hinton 2002). Estimating the second expectation
is the bottleneck of training because it is computationally intractable to
sample from this distribution subject to combinatorial constraints. Our
approach, Nelson, leverages the sampling through Lovász Local Lemma to
approximate the second term.
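In the single variable form, $\partial\phi_{\theta}(x)/\partial\theta_{i}=x_{i}$, so the gradient in Eq. (4) reduces to a difference of empirical means. A minimal sketch (the function name is ours; in the paper, `model_samples` would come from the LLL-based sampler):

```python
import numpy as np

# Contrastive-divergence gradient of Eq. (4) for the single-variable form,
# where d phi_theta(x) / d theta_i = x_i.  `data` and `model_samples` are
# 0/1 arrays of shape (batch, n); model samples must respect constraints.
def cd_gradient(data, model_samples):
    return -data.mean(axis=0) + model_samples.mean(axis=0)
```

At the optimum the two expectations agree coordinate-wise and the gradient vanishes.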
#### Factor Potential in Single Variable Form
Our method requires each factor potential $\phi_{\theta,j}(x_{j})$ in Eq. (1)
to involve only one variable. This is NOT an issue as all constrained MRF
models can be re-written in single variable form by introducing additional
variables and constraints. Our transformation follows the idea in Sang,
Bearne, and Kautz (2005). We illustrate the idea by transforming one-factor
potential $\phi_{\theta,j}(x_{j})$ into the single variable form. First,
notice that every function over the Boolean hypercube $\\{0,1\\}^{n}$,
including $\phi_{\theta,j}(x_{j})$, has a (unique) discrete Fourier expansion:
$\phi_{\theta,j}(x_{j})=\sum_{S\in[{\mathtt{var}(\phi_{\theta,j})}]}\hat{\phi}_{\theta,j,S}~{}\chi_{S}(x).$
(5)
Here $\chi_{S}(x)=\prod_{X_{i}\in S}x_{i}$ is the basis function and
$\hat{\phi}_{\theta,j,S}$ are the Fourier coefficients.
$[{\mathtt{var}(\phi_{\theta,j})}]$ denotes the power set of
${\mathtt{var}(\phi_{\theta,j})}$. For example, if
$\mathtt{var}(\phi_{\theta,j})=\\{X_{1}$, $X_{2}\\}$, then
$[{\mathtt{var}(\phi_{\theta,j})}]=\\{\emptyset,\\{X_{1}\\},\\{X_{2}\\},\\{X_{1},X_{2}\\}\\}$.
See Mansour (1994) for details of Fourier transformation. To transform
$\phi_{\theta,j}(x_{j})$ into single variable form, we introduce a new Boolean
variable $\hat{\chi}_{S}$ for every $\chi_{S}(x)$. Because $\hat{\chi}_{S}$
and all $X_{i}$’s are Boolean, we can use combinatorial constraints to
guarantee $\hat{\chi}_{S}=\prod_{X_{i}\in S}X_{i}$. These constraints are
incorporated into $\mathcal{C}$. Afterward, $\phi_{\theta,j}(x_{j})$ is
represented as the sum of several single-variable factors. Notice this
transformation is only possible when the MRF is subject to constraints. We
offer a detailed example in Appendix C.1 for further explanation. Equipped
with this transformation, we assume all $\phi_{\theta,j}(x_{j})$ are single
variable factors for the rest of the paper.
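The coefficients of the monomial-basis expansion in Eq. (5) can be recovered by Möbius inversion over subsets: $\hat{\phi}_{S}=\sum_{T\subseteq S}(-1)^{|S|-|T|}\phi(\mathbf{1}_{T})$, where $\mathbf{1}_{T}$ sets the variables in $T$ to 1 and the rest to 0. A brute-force sketch (the function name is ours, and this is feasible only for factors over a few variables):

```python
import itertools

# Expansion of a factor over the Boolean hypercube in the monomial basis
# chi_S(x) = prod_{i in S} x_i (Eq. 5).  `phi` takes a tuple of 0/1 values
# for the factor's variables; coefficients follow by Moebius inversion.
def fourier_coeffs(phi, n_vars):
    idx = range(n_vars)
    coeffs = {}
    for r in range(n_vars + 1):
        for S in itertools.combinations(idx, r):
            total = 0.0
            for rr in range(len(S) + 1):
                for T in itertools.combinations(S, rr):
                    point = tuple(1 if i in T else 0 for i in idx)
                    total += (-1) ** (len(S) - len(T)) * phi(point)
            coeffs[S] = total
    return coeffs
```

For example, $\phi(x_{1},x_{2})=3+2x_{1}-x_{1}x_{2}$ yields coefficients $3$, $2$, $0$, and $-1$ for $\emptyset$, $\\{X_{1}\\}$, $\\{X_{2}\\}$, and $\\{X_{1},X_{2}\\}$ respectively.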
#### Extremal Condition
The set of constraints $\mathcal{C}$ is called “extremal” if no variable
assignment violates two constraints that share variables, following Guo,
Jerrum, and Liu (2019).
###### Condition 1.
A set of constraints $\mathcal{C}$ is called extremal if and only if, for each
pair of constraints $c_{i},c_{j}\in\mathcal{C}$, either (i) their domain
variables do not intersect, i.e.,
${\mathtt{var}(c_{i})}\cap{\mathtt{var}(c_{j})}=\emptyset$, or (ii) for all
$x\in\mathcal{X}$, $c_{i}(x)=1$ or $c_{j}(x)=1$.
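For small instances, Condition 1 can be checked directly by enumeration. A sketch (ours, not from the paper), assuming clauses are encoded DIMACS-style as lists of signed 1-indexed ints, where $k>0$ means $X_{k}$ and $k<0$ means $\neg X_{k}$:

```python
import itertools

def clause_sat(clause, x):
    # A clause holds if at least one literal is satisfied by assignment x.
    return any(x[abs(l) - 1] == (1 if l > 0 else 0) for l in clause)

# Brute-force check of Condition 1 (extremal); feasible only for small n.
def is_extremal(clauses, n):
    for ci, cj in itertools.combinations(clauses, 2):
        if not set(map(abs, ci)) & set(map(abs, cj)):
            continue                      # case (i): disjoint domains
        for x in itertools.product((0, 1), repeat=n):
            if not clause_sat(ci, x) and not clause_sat(cj, x):
                return False              # both violated: not extremal
    return True
```

The CNF $(X_{1}\vee X_{2})\wedge(\neg X_{1}\vee X_{3})$ is extremal: violating both clauses would require $X_{1}=0$ and $X_{1}=1$ simultaneously. Replacing the second clause with $(X_{1}\vee X_{3})$ breaks extremality, since the all-zero assignment violates both.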
## Sampling Through Lovász Local Lemma
Lovász Local Lemma (LLL) (Erdős and Lovász 1973) is a fundamental method in
combinatorics to show the existence of a valid instance that avoids all the
bad events, if the occurrences of these events are “mostly” independent and
are not very likely to happen individually. Since the occurrence of a bad
event is equivalent to the violation of a constraint, we can use the LLL-based
sampler to sample from the space of constrained MRFs. To illustrate the idea
of LLL-based sampling, we assume the constrained MRF model is given in the
single variable form (as discussed in the previous section):
$\displaystyle P_{\theta}(X=x|\mathcal{C})$
$\displaystyle=\frac{\exp\left(\sum_{i=1}^{n}\theta_{i}x_{i}\right)C(x)}{Z_{\mathcal{C}}(\theta)},$
(6)
where
$Z_{\mathcal{C}}(\theta)=\sum_{x^{\prime}\in\mathcal{X}}\exp\left(\sum_{i=1}^{n}\theta_{i}x^{\prime}_{i}\right)C(x^{\prime})$.
As shown in Algorithm 1, the LLL-based sampler (Guo, Jerrum, and Liu 2019)
takes the random variables $X=\\{X_{i}\\}_{i=1}^{n}$, the parameters of
constrained MRF $\theta$, and constraints $\mathcal{C}=\\{c_{j}\\}_{j=1}^{L}$
that satisfy Condition 1 as the inputs. In Line 1 of Algorithm 1, the sampler
gives an initial random assignment of each variable following its marginal
probability:
$x_{i}\sim\frac{\exp(\theta_{i}x_{i})}{\sum_{x_{i}\in\\{0,1\\}}\exp(\theta_{i}x_{i})}$,
for $1\leq i\leq n$. Here we mean that $x_{i}$ is chosen with probability mass
$\frac{\exp(\theta_{i}x_{i})}{\sum_{x_{i}\in\\{0,1\\}}\exp(\theta_{i}x_{i})}$.
Line 2 of Algorithm 1 checks if the current assignment satisfies all
constraints in $\mathcal{C}$. If so, the algorithm terminates. Otherwise, the
algorithm finds the set of violated constraints
$S=\\{c_{j}|c_{j}(x)=0,c_{j}\in\mathcal{C}\\}$ and re-samples related
variables $X_{k}\in\mathtt{var}(S)$ using the same marginal probability, i.e.,
$x_{k}\sim\frac{\exp(\theta_{k}x_{k})}{\sum_{x_{k}\in\\{0,1\\}}\exp(\theta_{k}x_{k})}$.
Here $\mathtt{var}(S)=\cup_{c_{j}\in S}~{}\mathtt{var}(c_{j})$. The algorithm
repeatedly samples all those random variables violating constraints until all
the constraints are satisfied.
Algorithm 1 Sampling Through Lovász Local Lemma.
1: Random variables $X=\\{X_{i}\\}_{i=1}^{n}$; Constraints
$\mathcal{C}=\\{c_{j}\\}_{j=1}^{L}$; Parameters of the constrained MRF
$\theta$.
2:$x_{i}\sim\frac{\exp(\theta_{i}x_{i})}{\sum_{x_{i}\in\\{0,1\\}}\exp(\theta_{i}x_{i})}$,
for $1\leq i\leq n$. $\triangleright$ initialize
3:while $C(x)=0$ do
4: Find all violated constraints $S\subseteq\mathcal{C}$ in $x$.
5: $x_{k}{\sim}\frac{\exp(\theta_{k}x_{k})}{\underset{{x_{k}\in\\{0,1\\}}}{\sum}\exp(\theta_{k}x_{k})},\text{ for }x_{k}\in\mathtt{var}(S)$. $\triangleright$ resample
6: return A valid sample $x$ drawn from $P_{\theta}(X=x|\mathcal{C})$.
Under Condition 1, Algorithm 1 guarantees each sample is from the constrained
MRFs’ distribution $P_{\theta}(X=x|\mathcal{C})$ (in Theorem 1). In Appendix
A, we present the detailed proof and clarify the difference to the original
descriptive proof (Guo, Jerrum, and Liu 2019).
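A plain-Python sketch of Algorithm 1 (ours, for illustration; the paper's actual implementation is the tensorized version described in the next section). Clauses are assumed to be DIMACS-style lists of signed 1-indexed ints, and `theta` gives the single-variable weights of Eq. (6):

```python
import math
import random

def clause_sat(clause, x):
    return any(x[abs(l) - 1] == (1 if l > 0 else 0) for l in clause)

def draw(theta_i, rng):
    # Marginal of Eq. (6): P(X_i = 1) = exp(theta_i) / (1 + exp(theta_i)).
    p1 = math.exp(theta_i) / (1.0 + math.exp(theta_i))
    return 1 if rng.random() < p1 else 0

def lll_sample(theta, clauses, rng=None):
    rng = rng or random.Random()
    x = [draw(t, rng) for t in theta]                   # line 1: initialize
    while True:
        violated = [c for c in clauses if not clause_sat(c, x)]
        if not violated:
            return x                                    # all constraints hold
        resample = {abs(l) for c in violated for l in c}
        for i in resample:                              # resample var(S)
            x[i - 1] = draw(theta[i - 1], rng)
```

On an extremal instance such as $(X_{1}\vee X_{2})\wedge(\neg X_{1}\vee X_{3})$, the loop terminates quickly and every returned assignment satisfies all clauses.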
###### Theorem 1 (Probability Distribution).
Given random variables $X=\\{X_{i}\\}_{i=1}^{n}$, constraints
$\mathcal{C}=\\{c_{j}\\}_{j=1}^{L}$ that satisfy Condition 1, and the
parameters $\theta$ of the constrained MRF in single variable form, upon
termination Algorithm 1 outputs an assignment $x$ that is randomly drawn from
the constrained MRF distribution: $x\sim P_{\theta}(X=x|\mathcal{C})$.
###### Sketch of Proof.
We first show that in the last round, the probability of obtaining two
possible assignments conditioning on all previous rounds in Algorithm 1 has
the same ratio as the probability of those two assignments under distribution
$P_{\theta}(X=x|\mathcal{C})$. Then we show when Algorithm 1 ends, the set of
all possible outputs is equal to the domain of non-zero probabilities of
$P_{\theta}(X=x|\mathcal{C})$. Thus we conclude the execution of Algorithm 1
produces a sample from $P_{\theta}(X=x|\mathcal{C})$ because of the identical
domain and the match of probability ratios of any two valid assignments. ∎
The expected running time of Algorithm 1 is determined by the number of rounds
of re-sampling. In the uniform case where $\theta_{1}=\ldots=\theta_{n}$, the
running time is linear in the number of constraints, i.e., $\mathcal{O}(L)$. The
running time for the weighted case has a closed form. We leave the details in
Appendix B.
## Neural Lovász Sampler
We first present the proposed Neural Lovász Sampler (Nelson) that implements
the LLL-based sampler as a neural network, allowing us to draw multiple
samples in parallel on GPUs. We then demonstrate how Nelson is embedded in CD-
based learning for constrained MRFs.
### Nelson: Neural Lovász Sampler
#### Represent Constraints as CNF
Nelson obtains samples from the constrained MRF model in single variable form
(Eq. 6). To simplify notations, we denote
$P_{\theta}(X_{i}=x_{i})=\frac{\exp(\theta_{i}x_{i})}{\sum_{x_{i}\in\\{0,1\\}}\exp(\theta_{i}x_{i})}$.
Since our constrained MRF model is defined on the Boolean hyper-cube
$\\{0,1\\}^{n}$, we assume all constraints $\\{c_{j}\\}_{j=1}^{L}$ are given
in Conjunctive Normal Form (CNF). Note that any propositional formula can be
rewritten in CNF with at most a polynomial increase in size. A formula
represented in CNF is a conjunction ($\wedge$) of clauses. A clause is a
disjunction ($\vee$) of literals, and a literal is either a variable or its
negation ($\neg$). Mathematically, we use $c_{j}$ to denote a clause and use
$l_{j,k}$ to denote a literal. In this case, a CNF formula would be:
$\displaystyle c_{1}\wedge\ldots\wedge c_{L},\quad\text{where
}c_{j}=l_{j,1}\vee\ldots\vee l_{j,K}$ (7)
A clause is true if and only if at least one of the literals in the clause is
true. The whole CNF is true if all clauses are true.
We transform each step of Algorithm 1 into arithmetic operations, hence
encoding it as a multi-layer neural network. To do that, we first need to
define a few notations:
* •
Vector of assignment $x^{t}=(x^{t}_{1},\dots,x^{t}_{n})$, where $x_{i}^{t}$ is
the assignment of variable $X_{i}$ in the $t$-th round of Algorithm 1.
$x^{t}_{i}=1$ denotes variable $X_{i}$ takes value $1$ (or true).
* •
Vector of marginal probabilities $P=(P_{1},\ldots,P_{n})$, where $P_{i}$ is
the probability of variable $X_{i}$ taking value $0$ (false):
$P_{i}=P_{\theta}(X_{i}=0)={\exp(0)}/{(\exp(0)+\exp(\theta_{i}))}$.
* •
Tensor $W\in\\{-1,0,1\\}^{L\times K\times n}$ and matrix
$b\in\\{0,1\\}^{L\times K}$, which are used for checking constraint
satisfaction:
$\displaystyle W_{jki}$ $\displaystyle=\begin{cases}1&\text{if $k$-th literal
of clause $c_{j}$ is }X_{i},\\\ -1&\text{if $k$-th literal of clause $c_{j}$
is }\neg X_{i},\\\ 0&\text{otherwise}.\end{cases}$ (8) $\displaystyle b_{jk}$
$\displaystyle=\begin{cases}1&\text{if $k$-th literal of clause $c_{j}$ is
negated},\\\ 0&\text{otherwise}.\end{cases}$ (9)
* •
Matrix $V\in\\{0,1\\}^{L\times n}$, denoting the mapping from clauses to
variables in the CNF form for constraints $\mathcal{C}$:
$V_{ji}\mbox{=}\begin{cases}1&\text{if clause $c_{j}$ contains a literal
involving $X_{i}$}\\\ 0&\text{otherwise}.\end{cases}$ (10)
* •
Vector of resampling indicators $A^{t}$, where $A^{t}_{i}=1$ indicates
variable $X_{i}$ needs to be resampled at round $t$.
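The tensors $W$, $b$, and $V$ of Eqs. (8)-(10) can be assembled from a clause list in a few lines. A sketch (ours; clauses are assumed DIMACS-style signed 1-indexed ints, shorter clauses are zero-padded up to $K$):

```python
import numpy as np

# Build W (Eq. 8), b (Eq. 9), and V (Eq. 10) from DIMACS-style clauses.
def build_tensors(clauses, n):
    L = len(clauses)
    K = max(len(c) for c in clauses)
    W = np.zeros((L, K, n), dtype=int)
    b = np.zeros((L, K), dtype=int)
    V = np.zeros((L, n), dtype=int)
    for j, clause in enumerate(clauses):
        for k, lit in enumerate(clause):
            i = abs(lit) - 1
            W[j, k, i] = 1 if lit > 0 else -1   # +1 for X_i, -1 for not X_i
            b[j, k] = 0 if lit > 0 else 1       # 1 iff the literal is negated
            V[j, i] = 1                         # clause j mentions X_i
    return W, b, V
```

On the CNF of Example 1 below, $(X_{1}\vee X_{2})\wedge(\neg X_{1}\vee X_{3})$, this reproduces the $W$, $b$, and $V$ shown there.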
Given these defined variables, we represent each step of Algorithm 1 using
arithmetic operations as follows:
#### Initialization
To complete line 1 of Algorithm 1, given the marginal probability vector $P$,
the first step is sampling an initial assignment of $X$,
$x^{1}=(x^{1}_{1},\ldots,x^{1}_{n})$. It is accomplished by: for $1\leq i\leq
n$,
$x^{1}_{i}=\begin{cases}1&\text{if }u_{i}>P_{i},\\\ 0&\text{otherwise}.\\\
\end{cases}$ (11)
Here $u_{i}$ is sampled from the uniform distribution in $[0,1]$.
#### Check Constraint Satisfaction
To complete line 2 of Algorithm 1, given an assignment $x^{t}$ at round $t\geq
1$, tensor $W$ and matrix $b$, we compute $Z^{t}$ as follows:
$Z^{t}=W\circledast x^{t}+b,$ (12)
where $\circledast$ represents a special multiplication between tensor and
vector: $(W\circledast x^{t})_{jk}=\sum_{i=1}^{n}W_{jki}x^{t}_{i}$. Note that
$Z^{t}_{jk}=1$ indicates the $k$-th literal of $j$-th clause is true (takes
value $1$). Hence, we compute $S^{t}_{j}$ as:
$\displaystyle S^{t}_{j}$ $\displaystyle=1-\max_{1\leq k\leq
K}Z^{t}_{jk},\quad\text{ for }1\leq j\leq L.$ (13)
Here $S^{t}_{j}=1$ indicates that $x^{t}$ violates the $j$-th clause. We check
whether $\sum_{j=1}^{L}S^{t}_{j}\neq 0$ to see if any clause is violated, which
corresponds to $C(x)=0$ and is the continuation criterion of the while loop.
#### Extract Variables in Violated Clauses
To complete line 3 of Algorithm 1, we extract all the variables that require
resampling based on vector $S^{t}$ computed from the last step. The vector of
resampling indicator $A^{t}$ can be computed as:
$A^{t}_{i}=\mathbf{1}\left(\sum_{j=1}^{L}{S_{j}^{t}}V_{ji}\geq
1\right),\quad\text{ for }1\leq i\leq n$ (14)
where $\sum_{j=1}^{L}{S_{j}^{t}}V_{ji}\geq 1$ implies $X_{i}$ requires
resampling.
#### Resample
To complete line 4 of Algorithm 1, given the marginal probability vector $P$,
resample indicator vector $A^{t}$ and assignment $x^{t}$, we draw a new random
sample $x^{t+1}$. This can be done using this update rule: for $1\leq i\leq
n$,
$x_{i}^{t+1}=\begin{cases}(1-A^{t}_{i})x_{i}^{t}+A^{t}_{i}&\text{if
}u_{i}>P_{i},\\\ (1-A^{t}_{i})x_{i}^{t}&\text{otherwise}.\end{cases}$ (15)
Again, $u_{i}$ is drawn from the uniform distribution in $[0,1]$. Drawing
multiple assignments in parallel is attained by extending $x^{t}$ with a new
dimension (see implementation in Appendix D.1). Example 1 shows the detailed
steps of Nelson (see more examples in Appendix A.5).
###### Example 1.
Assume we have random variables $X_{1},X_{2},X_{3}$ with $n=3$ and constraints
$\mathcal{C}=(X_{1}\vee X_{2})\wedge(\neg X_{1}\vee X_{3})$ in the CNF form
with $L=2,K=2$. Tensor $W$ is:
$\displaystyle
W\mbox{=}\begin{bmatrix}w_{11}\mbox{=}[w_{111},w_{112},w_{113}],&w_{12}\mbox{=}[w_{121},w_{122},w_{123}]\\\
w_{21}\mbox{=}[w_{211},w_{212},w_{213}],&w_{22}\mbox{=}[w_{221},w_{222},w_{223}]\\\
\end{bmatrix},$ $\displaystyle
w_{11}=[1,0,0],w_{12}=[0,1,0],w_{21}\mbox{=}[-1,0,0],w_{22}\mbox{=}[0,0,1].$
Note that $w_{111}=1$ means $X_{1}$ is the 1st literal in the 1st clause and
$w_{211}=-1$ means $\neg X_{1}$ is the 1st literal in the 2nd clause. Matrix
$b$ and the mapping matrix $V$ are:
$b=\begin{bmatrix}0&0\\\ 1&0\\\ \end{bmatrix},\quad V=\begin{bmatrix}1&1&0\\\
1&0&1\\\ \end{bmatrix},$
$b_{21}=1$ indicates the 1st literal in the 2nd clause is negated. For the
mapping matrix, $V_{11}=V_{12}=1$ implies the 1st clause contains $X_{1}$ and
$X_{2}$. For $t=1$, suppose we have an initialized assignment
$x^{1}=[0\;0\;1]^{\top}$, meaning $X_{1}=X_{2}=0,X_{3}=1$. The intermediate
results of $Z^{1},S^{1},A^{1}$ become:
$Z^{1}=\begin{bmatrix}0&0\\\ 1&1\\\ \end{bmatrix},\quad
S^{1}=\begin{bmatrix}1\\\ 0\\\ \end{bmatrix},\quad A^{1}=\begin{bmatrix}1\\\
1\\\ 0\\\ \end{bmatrix},$
where $S^{1}_{1}=1$ implies the $1$st clause is violated.
$A^{1}_{1}=A^{1}_{2}=1$ denotes variables $X_{1},X_{2}$ require resampling.
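The intermediate quantities of Example 1 can be reproduced with a few NumPy lines implementing Eqs. (12)-(15). This is a sketch (ours); for illustration the resampling step uses uniform marginals, i.e., $P_{i}=0.5$:

```python
import numpy as np

# The CNF of Example 1: (X1 v X2) and (not X1 v X3), with x = [0, 0, 1].
W = np.array([[[1, 0, 0], [0, 1, 0]],
              [[-1, 0, 0], [0, 0, 1]]])        # Eq. (8)
b = np.array([[0, 0], [1, 0]])                 # Eq. (9)
V = np.array([[1, 1, 0], [1, 0, 1]])           # Eq. (10)
x = np.array([0, 0, 1])

Z = np.einsum('jki,i->jk', W, x) + b           # Eq. (12): literal truth values
S = 1 - Z.max(axis=1)                          # Eq. (13): violated clauses
A = (S @ V >= 1).astype(int)                   # Eq. (14): resample indicators

# Eq. (15): resample only the flagged coordinates; others are kept.
P = np.full(3, 0.5)                            # P(X_i = 0), uniform here
u = np.random.rand(3)
x_next = np.where(u > P, (1 - A) * x + A, (1 - A) * x)
```

Running this yields $Z^{1}$, $S^{1}$, and $A^{1}$ exactly as in Example 1, and $x^{2}_{3}=1$ is guaranteed since $X_{3}$ is not flagged for resampling.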
Algorithm 2 Learn Constrained MRFs via Nelson-CD.
1:Dataset $\mathcal{D}$; Constraints $\mathcal{C}$; #Samples $m$; Learning
Iterations $T_{\max}$; Parameters of Constrained MRFs $\theta$.
2:$\textsc{Nelson}(W,b,V)\leftarrow\text{build}(X,\mathcal{C})$.
$\triangleright$ in Sec. Neural Lovász Sampler
3:for $t=1$ to $T_{\max}$ do
4: $\\{{x}^{j}\\}_{j=1}^{m}\sim\mathcal{D}$. $\triangleright$ from data
5: $\\{\tilde{x}^{j}\\}_{j=1}^{m}\leftarrow\textsc{Nelson}(\theta^{t},m)$.
$\triangleright$ from model
6:
$g^{t}\leftarrow\frac{1}{m}\sum_{j=1}^{m}\nabla\phi(x^{j})-\nabla\phi(\tilde{x}^{j})$
$\triangleright$ divergence
7: $\theta^{t+1}\leftarrow\theta^{t}-\eta g^{t}$. $\triangleright$ update parameters
8: return The converged MRF model $\theta^{T_{\max}}$.
### Contrastive Divergence-based Learning
The whole learning procedure is shown in Algorithm 2. At every learning
iteration, we call Nelson to draw assignments $\\{\tilde{x}^{j}\\}_{j=1}^{m}$
from constrained MRF’s distribution $P_{\theta}(X|\mathcal{C})$. Then we pick
$m$ data points at random from the training set
$\\{x^{j}\\}_{j=1}^{m}\sim\mathcal{D}$. The divergence $g^{t}$ in line 5 of
Algorithm 2 is an estimation of $\nabla\ell_{\mathcal{C}}(\theta)$ in Eq. (4).
Afterward, the MRFs’ parameters are updated, according to line 6 of Algorithm
2. After $T_{\max}$ learning iterations, the algorithm outputs the constrained
MRF model with parameters $\theta^{T_{\max}}$.
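A miniature version of Algorithm 2 can be sketched as follows. For readability this stand-in (ours) replaces the Nelson sampler with exact enumeration of the model expectation over the valid set, which is feasible only for tiny $n$; gradients use $\partial\phi/\partial\theta_{i}=x_{i}$:

```python
import math

# Exact E_{P_theta(x|C)}[x_i] by enumerating the valid assignments `valid`
# (a stand-in for the Nelson sampler, usable only for tiny n).
def model_expectation(theta, valid):
    ws = [math.exp(sum(t * v for t, v in zip(theta, x))) for x in valid]
    Z = sum(ws)
    n = len(theta)
    return [sum(w * x[i] for w, x in zip(ws, valid)) / Z for i in range(n)]

# Gradient descent on the negative log-likelihood (Eqs. 3-4).
def train(data, valid, eta=0.5, steps=500):
    n = len(data[0])
    theta = [0.0] * n
    data_mean = [sum(x[i] for x in data) / len(data) for i in range(n)]
    for _ in range(steps):
        mexp = model_expectation(theta, valid)
        # CD update: theta <- theta - eta * (E_model[x] - E_data[x]).
        theta = [t - eta * (m - d) for t, d, m in zip(theta, data_mean, mexp)]
    return theta
```

On a toy instance whose valid set is $\\{(0,1),(1,0),(1,1)\\}$, training drives the model moments to match the data moments, the standard moment-matching fixed point of maximum likelihood in exponential families.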
## Experiments
We show the efficiency of the proposed Nelson on learning MRFs defined on the
solutions of three combinatorial problems. Over all the tasks, we demonstrate
that Nelson outperforms baselines on learning performance, i.e., generating
structures with high likelihoods and MAP@10 scores (Table 1(c), 2(a,b),
3(c,d)). Nelson also generates samples that satisfy constraints 100% of the time (Tables
1(b), 3(b)). Finally, Nelson is the most efficient sampler. Baselines either
time out or cannot generate valid structures (Tables 1(a), 3(a)).
### Experimental Settings
#### Baselines
We compare Nelson with other contrastive divergence learning algorithms
equipped with other sampling approaches. In terms of baseline samplers, we
consider:
* •
Gibbs sampler (Carter and Kohn 1994), which is a special case of MCMC that is
widely used in training MRF models.
* •
Weighted SAT samplers, including WAPS (Gupta et al. 2019), WeightGen
(Chakraborty et al. 2014) and XOR sampler (Ermon et al. 2013a; Ding and Xue
2021).
* •
Uniform SAT samplers, including CMSGen (Golia et al. 2021), QuickSampler
(Dutra et al. 2018), UniGen (Soos, Gocht, and Meel 2020) and KUS (Sharma et
al. 2018). Notice these samplers cannot sample SAT solutions from a non-
uniform distribution. We include them in the learning experiments as a
comparison, and exclude them in the weighted sampling experiment (in Fig. 2).
#### Metrics
In terms of evaluation metrics, we consider:
* •
Training time per iteration, which computes the average time for every
learning method to finish one iteration.
* •
Validness, that is the percentage of generated solutions that satisfy the
given constraints $\mathcal{C}$.
* •
Mean Average Precision (MAP$@10$), which is the percentage of solutions
in the training set $\mathcal{D}$ that rank among the top 10 by likelihood
score. The higher the MAP@10 score, the better the model generates structures
closely resembling those in the training set.
* •
$\log$-likelihood of the solutions in the training set $\mathcal{D}$ (in Eq.
3). The model that attains the highest $\log$-likelihood learns the closest
distribution to the training set.
* •
Approximation error of $\nabla\log Z_{\mathcal{C}}(\theta)$, which is the
$L_{1}$ distance between the exact value $\nabla\log Z_{\mathcal{C}}(\theta)$
and the approximated value given by the sampler.
See Appendix D for detailed settings of baselines and evaluation metrics, as
well as the following task definition, dataset construction, and potential
function definition.
### Random $K$-SAT Solutions with Preference
#### Task Definition & Dataset
This task is to learn to generate solutions to a $K$-SAT problem. We are given
a training set $\mathcal{D}$ containing solutions to a corresponding CNF
formula $c_{1}\wedge\ldots\wedge c_{L}$. Note that not all solutions are
equally likely to be presented in $\mathcal{D}$. The learning task is to
maximize the log-likelihood of the assignments seen in the training set
$\mathcal{D}$. Once learning is completed, the inference task is to generate
valid solutions that closely resemble those in $\mathcal{D}$ (Dodaro and
Previti 2019). To generate the training set $\mathcal{D}$, we use CNFGen
(Lauria et al. 2017) to generate the random $K$-SAT problem and use Glucose4
solver to generate random valid solutions (Ignatiev, Morgado, and Marques-
Silva 2018).
Problem | (a) Training time per iteration (Mins) ($\downarrow$)
---|---
size | Nelson | XOR | WAPS | WeightGen | CMSGen | KUS | QuickSampler | Unigen | Gibbs
$10$ | 0.13 | $26.30$ | $1.75$ | $0.64$ | $0.22$ | 0.72 | 0.40 | 0.66 | 0.86
$20$ | 0.15 | $134.50$ | $3.04$ | T.O. | 0.26 | 0.90 | 0.30 | 2.12 | 1.72
$30$ | 0.19 | $1102.95$ | $6.62$ | T.O. | 0.28 | 2.24 | 0.32 | 4.72 | 2.77
$40$ | 0.23 | T.O. | 33.70 | T.O. | 0.31 | 19.77 | 0.39 | 9.38 | 3.93
$50$ | 0.24 | T.O. | 909.18 | T.O. | 0.33 | 1532.22 | 0.37 | 13.29 | 5.27
$500$ | 5.99 | T.O. | T.O. | T.O. | 34.17 | T.O. | T.O. | T.O. | $221.83$
$1000$ | 34.01 | T.O. | T.O. | T.O. | $177.39$ | T.O. | T.O. | T.O. | $854.59$
| (b) Validness of generated solutions ($\%$) ($\uparrow$)
$10-50$ | $\mathbf{100}$ | $\mathbf{100}$ | $\mathbf{100}$ | $\mathbf{100}$ | $\mathbf{100}$ | $\mathbf{100}$ | $82.65$ | $\mathbf{100}$ | $90.58$
$500$ | $\mathbf{100}$ | T.O. | T.O. | T.O. | $\mathbf{100}$ | T.O. | $7.42$ | $\mathbf{100}$ | $54.27$
$1000$ | $\mathbf{100}$ | T.O. | T.O. | T.O. | $\mathbf{100}$ | T.O. | $0.00$ | $\mathbf{100}$ | $33.91$
| (c) Approximation error of $\nabla\log Z_{\mathcal{C}}(\theta)$
($\downarrow$)
10 | 0.10 | 0.21 | 0.12 | 3.58 | 3.96 | 4.08 | 3.93 | 4.16 | 0.69
12 | 0.14 | 0.19 | 0.16 | 5.58 | 5.50 | 5.49 | 5.55 | 5.48 | 0.75
14 | 0.15 | 0.25 | 0.19 | T.O. | 6.55 | 6.24 | 7.79 | 6.34 | 1.30
16 | 0.16 | 0.25 | 0.15 | T.O. | 9.08 | 9.05 | 9.35 | 9.03 | 1.67
18 | 0.18 | 0.30 | 0.23 | T.O. | 10.44 | 10.30 | 11.73 | 10.20 | 1.90
Table 1: Sampling efficiency and accuracy for learning $K$-SAT solutions with
preferences. The proposed Nelson is the most efficient (see “Training time per
iteration”) and always generates valid assignments (see “Validness”) with a small
approximation error (see “Approximation Error of Gradient”) against all
baselines. T.O. means time out.
#### Sampler’s Efficiency and Accuracy
Table 1 shows the proposed Nelson is an efficient sampler that generates valid
assignments, in terms of the training time for learning constrained MRF,
approximation error for the gradient and validness of the generated
assignments. In Table 1(a), Nelson takes much less time for sampling against
all the samplers and can train the model with the dataset of problem size
$1000$ within an hour. In Table 1(b), Nelson always generates valid samples.
The performance of QuickSampler and Gibbs methods decreases when the problem
size becomes larger. In Table 1(c), Nelson, XOR and WAPS are the three
algorithms that can effectively estimate the gradient while the other
algorithms incur huge estimation errors. Moreover, the remaining methods are
much slower than Nelson.
#### Learning Quality
Table 2 demonstrates Nelson-CD learns a more accurate
constrained MRF model by measuring the log-likelihood and MAP@10 scores. Note
that baselines including Quicksampler, Weightgen, KUS, XOR and WAPS timed out
for the problem sizes we considered. Compared with the remaining baselines,
Nelson attains the best log-likelihood and MAP@10 metric.
| (a) $\log$-likelihood ($\uparrow$)
---|---
Problem size | Nelson | Gibbs | CMSGen | QuickSampler, WeightGen, KUS, XOR, WAPS
$100$ | $-49.16$ | $\mathbf{-36.36}$ | $-60.12$ | T.O.
$300$ | $\mathbf{-52.61}$ | $-53.11$ | $-128.39$ | T.O.
$500$ | $\mathbf{-196.47}$ | $-197.21$ | $-272.49$ | T.O.
$700$ | $\mathbf{-238.60}$ | $-238.75$ | $-389.44$ | T.O.
$1000$ | $\mathbf{-294.22}$ | $-296.33$ | $-532.85$ | T.O.
| (b) MAP@10 (%) ($\uparrow$)
$100$ | $82.13$ | $83.32$ | $\mathbf{86.34}$ | T.O.
$300$ | $\mathbf{66.37}$ | $64.42$ | $64.50$ | T.O.
$500$ | $\mathbf{90.03}$ | $73.14$ | $70.67$ | T.O.
$700$ | $\mathbf{69.74}$ | $\mathbf{69.74}$ | $48.10$ | T.O.
$1000$ | $\mathbf{91.70}$ | $77.56$ | $78.72$ | T.O.
Table 2: The quality of learning outcomes for learning random $K$-SAT
solutions with preferences. Nelson achieves the best likelihood and MAP@10
scores. T.O. is time out.
Figure 1: Running time and the percentage of valid structures sampled
uniformly at random from solutions of $K$-SAT problems. Across all problem
sizes, Nelson always generates valid solutions and is the most efficient
sampler.
Figure 2: Running time, the percentage of valid solutions generated, and
rounds of resampling for weighted sample generation of K-SAT solutions. Among
all the problem sizes, Nelson scales the best among all approaches and always
generates valid solutions.
#### Ablation Study
We also evaluated the samplers’ efficiency in isolation (not embedded in
learning). The sampling cases we considered are uniform and weighted (mainly
following the experiment setting in Chakraborty and Meel (2019)). In weighted
sampling, the weights are specified by fixed values to the single factors in
Eq. (6). In the uniform sampling case in Fig. 1, Nelson and Quicksampler
require much less time to draw samples compared to other approaches. However,
the solutions generated by Quicksampler rarely satisfy constraints. In the
weighted sampling case in Fig. 2, Nelson scales better than all the competing
samplers as the sizes of the $K$-SAT problems increase.
Problem | (a) Training Time Per Epoch (Mins) ($\downarrow$)
---|---
size | Nelson | Gibbs | CMSGen
$10$ | $\mathbf{0.53}$ | $9.85$ | $0.69$
$20$ | $\mathbf{0.53}$ | $80.12$ | $1.93$
$30$ | $\mathbf{0.72}$ | $256.38$ | $3.65$
$40$ | $\mathbf{0.93}$ | $777.01$ | $5.99$
$50$ | $\mathbf{1.17}$ | T.O. | $9.08$
| (b) Validness of Orientations ($\%$) ($\uparrow$)
$7$ | $\mathbf{100}$ | $50.16$ | $\mathbf{100}$
$8$ | $\mathbf{100}$ | $64.63$ | $\mathbf{100}$
$9$ | $\mathbf{100}$ | $47.20$ | $\mathbf{100}$
$10$ | $\mathbf{100}$ | $62.60$ | $\mathbf{100}$
$11$ | $\mathbf{100}$ | $84.95$ | $\mathbf{100}$
| (c) Approximation Error of $\nabla\log Z_{\mathcal{C}}(\theta)$
($\downarrow$)
$5$ | $\mathbf{0.01}$ | $0.09$ | $0.21$
$7$ | $\mathbf{0.05}$ | $0.08$ | $2.37$
$9$ | $\mathbf{0.03}$ | $0.11$ | $2.37$
$11$ | $\mathbf{0.04}$ | $0.17$ | $8.62$
$13$ | $\mathbf{0.05}$ | $0.28$ | $11.27$
| (d) MAP@10 (%) ($\uparrow$)
$10$ | $61.14$ | $60.01$ | $\mathbf{64.56}$
$20$ | $\mathbf{55.26}$ | $55.20$ | $47.79$
$30$ | $\mathbf{100.00}$ | $96.29$ | $\mathbf{100.00}$
$40$ | $\mathbf{40.01}$ | $39.88$ | $38.90$
$50$ | $\mathbf{46.12}$ | T.O. | $42.11$
Table 3: Sample efficiency and learning performance of the sink-free
orientation task. Nelson is the most efficient (see Training Time Per Epoch)
and always generates valid assignments (see Validness), has the smallest error
approximating gradients, and has the best learning performance (see MAP@10)
among all baselines.
### Sink-Free Orientation in Undirected Graphs
#### Task Definition & Dataset
A sink-free orientation of an undirected graph is a
choice of direction for each edge such that every vertex has at least one
outgoing edge (Cohn, Pemantle, and Propp 2002). This task has wide applications
in robotics routing and IoT network configuration (Takahashi et al. 2009).
Even though finding a sink-free orientation is tractable, sampling a sink-free
orientation from the space of all orientations is still #P-hard. Given a
training set of preferred orientations $\mathcal{D}$ for the graph, the
learning task is to maximize the log-likelihood of the orientations seen in
the training set. The inference task is to generate valid orientations that
resemble those in the training set. To build the training set, we generate
Erdős–Rényi random graphs with the NetworkX library. The problem size is
characterized by the number of vertices in the graph. The baselines we
consider are CD-based learning with Gibbs sampling and CMSGen.
#### Learning Quality
In Table 3(a), we show that the proposed Nelson method takes much less time to
train the MRF for one epoch than the competing approaches. Furthermore, in
Table 3(b), Nelson and CMSGen generate $100\%$ valid orientations of the graph
while the Gibbs-based model does not. Note that the constraints for this task
satisfy Condition 1, hence the Nelson sampler's performance is guaranteed by
Theorem 1. In Table 3(c), Nelson attains the smallest approximation error for
the gradient (in Eq. 4) compared to the baselines. Finally, Nelson learns a
higher MAP@10 than CMSGen, and the Gibbs-based approach times out for problem
sizes larger than $40$. In summary, Nelson is the best-performing algorithm
for this task.
### Learning Vehicle Delivery Routes
#### Task Definition & Dataset
Given a set of locations to visit, the task is to generate a visiting
sequence in which each location is visited exactly once and which closely
resembles the trend presented in the training data. The training data are
such routes collected in the past. The dataset is constructed from TSPLIB and
consists of $29$ cities in Bavaria, Germany. The constraints for this problem
do not satisfy Condition 1. We still apply the proposed method to evaluate
whether the Nelson algorithm can handle such general hard constraints.
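The hard constraint here is simply that a route is a permutation of all locations. A minimal validity check (with a hypothetical `is_valid_route` helper of our own naming) looks like:

```python
def is_valid_route(route, n_locations):
    """A visiting sequence is valid iff it visits every location exactly
    once, i.e., it is a permutation of 0..n_locations-1."""
    return sorted(route) == list(range(n_locations))

assert is_valid_route([2, 0, 1], 3)       # a permutation: valid
assert not is_valid_route([0, 0, 1], 3)   # location 0 visited twice
assert not is_valid_route([0, 1], 3)      # location 2 never visited
```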
In Fig. 3, we see that Nelson can obtain samples for this delivery problem
efficiently. We measure the number of resamples taken as well as the
corresponding time used by the Nelson method. Nelson takes roughly $50$
resamples, with an average time of $0.3$ seconds, to draw a batch (batch size
$100$) of valid visiting sequences.
Figure 3: Frequency histograms of the number of resamples and of the total
time of the Nelson method for uniformly sampling visiting paths for the
vehicle routing problem.
## Conclusion
In this research, we present Nelson, which embeds a sampler based on the
Lovász Local Lemma into the contrastive divergence learning of Markov random
fields. The embedding is fully differentiable. This approach allows us to
learn generative models over constrained domains, which pose significant
challenges to other state-of-the-art models. We also prove guarantees on the
performance of the LLL-based sampler. Experimental results on several real-
world domains reveal that Nelson learns to generate 100% valid structures,
while baselines either time out or cannot generate valid structures. Nelson
also outperforms the other approaches in running time and in various learning
metrics.
## Acknowledgments
We thank all the reviewers for their constructive comments. This research was
supported by NSF grants IIS-1850243, CCF-1918327.
## References
* Achlioptas and Theodoropoulos (2017) Achlioptas, D.; and Theodoropoulos, P. 2017. Probabilistic Model Counting with Short XORs. In _SAT_ , volume 10491 of _Lecture Notes in Computer Science_ , 3–19. Springer.
* Andersen et al. (2007) Andersen, H. R.; Hadzic, T.; Hooker, J. N.; and Tiedemann, P. 2007. A Constraint Store Based on Multivalued Decision Diagrams. In _CP_ , volume 4741 of _Lecture Notes in Computer Science_ , 118–132. Springer.
* Arjovsky, Chintala, and Bottou (2017) Arjovsky, M.; Chintala, S.; and Bottou, L. 2017. Wasserstein Generative Adversarial Networks. In _ICML_ , volume 70 of _Proceedings of Machine Learning Research_ , 214–223. PMLR.
* Bengio, Lodi, and Prouvost (2021) Bengio, Y.; Lodi, A.; and Prouvost, A. 2021. Machine learning for combinatorial optimization: A methodological tour d’horizon. _Eur. J. Oper. Res._ , 290(2): 405–421.
* Braunstein, Mézard, and Zecchina (2005) Braunstein, A.; Mézard, M.; and Zecchina, R. 2005. Survey propagation: an algorithm for satisfiability. _Random Struct. Algorithms_ , 27: 201–226.
* Carter and Kohn (1994) Carter, C. K.; and Kohn, R. 1994. On Gibbs sampling for state space models. _Biometrika_ , 81(3): 541–553.
* Chakraborty et al. (2014) Chakraborty, S.; Fremont, D. J.; Meel, K. S.; Seshia, S. A.; and Vardi, M. Y. 2014. Distribution-Aware Sampling and Weighted Model Counting for SAT. In _AAAI_ , 1722–1730. AAAI Press.
* Chakraborty and Meel (2019) Chakraborty, S.; and Meel, K. S. 2019. On Testing of Uniform Samplers. In _AAAI_ , 7777–7784.
* Chakraborty, Meel, and Vardi (2013) Chakraborty, S.; Meel, K. S.; and Vardi, M. Y. 2013. A Scalable and Nearly Uniform Generator of SAT Witnesses. In _CAV_ , volume 8044, 608–623. Springer.
* Chavira, Darwiche, and Jaeger (2006) Chavira, M.; Darwiche, A.; and Jaeger, M. 2006. Compiling relational Bayesian networks for exact inference. _Int. J. Approx. Reason._ , 42(1-2): 4–20.
* Chen and Tian (2019) Chen, X.; and Tian, Y. 2019. Learning to Perform Local Rewriting for Combinatorial Optimization. In _NeurIPS_ , 6278–6289.
* Cohn, Pemantle, and Propp (2002) Cohn, H.; Pemantle, R.; and Propp, J. G. 2002. Generating a Random Sink-free Orientation in Quadratic Time. _Electron. J. Comb._ , 9(1).
* Coja-Oghlan, Müller, and Ravelomanana (2020) Coja-Oghlan, A.; Müller, N.; and Ravelomanana, J. B. 2020. Belief Propagation on the random k-SAT model. arXiv:2011.02303.
* Dagum and Chavez (1993) Dagum, P.; and Chavez, R. M. 1993. Approximating Probabilistic Inference in Bayesian Belief Networks. _IEEE Trans. Pattern Anal. Mach. Intell._ , 15(3): 246–255.
* Dai et al. (2018) Dai, H.; Tian, Y.; Dai, B.; Skiena, S.; and Song, L. 2018. Syntax-Directed Variational Autoencoder for Structured Data. In _ICLR (Poster)_. OpenReview.net.
* Ding et al. (2021) Ding, F.; Ma, J.; Xu, J.; and Xue, Y. 2021. XOR-CD: Linearly Convergent Constrained Structure Generation. In _ICML_ , volume 139 of _Proceedings of Machine Learning Research_ , 2728–2738. PMLR.
* Ding and Xue (2020) Ding, F.; and Xue, Y. 2020. Contrastive Divergence Learning with Chained Belief Propagation. In _PGM_ , volume 138 of _Proceedings of Machine Learning Research_ , 161–172. PMLR.
* Ding and Xue (2021) Ding, F.; and Xue, Y. 2021. XOR-SGD: provable convex stochastic optimization for decision-making under uncertainty. In _UAI_ , volume 161 of _Proceedings of Machine Learning Research_ , 151–160. AUAI Press.
* Dodaro and Previti (2019) Dodaro, C.; and Previti, A. 2019. Minipref: A Tool for Preferences in SAT (short paper). In _RCRA + RiCeRcA_ , volume 2538 of _CEUR Workshop Proceedings_. CEUR-WS.org.
* Duan et al. (2022) Duan, H.; Vaezipoor, P.; Paulus, M. B.; Ruan, Y.; and Maddison, C. J. 2022. Augment with Care: Contrastive Learning for Combinatorial Problems. In _ICML_ , volume 162 of _Proceedings of Machine Learning Research_ , 5627–5642. PMLR.
* Dutra et al. (2018) Dutra, R.; Laeufer, K.; Bachrach, J.; and Sen, K. 2018. Efficient sampling of SAT solutions for testing. In _ICSE_ , 549–559. ACM.
* Erdős and Lovász (1973) Erdős, P.; and Lovász, L. 1973. Problems and results on 3-chromatic hypergraphs and some related questions. In _Colloquia Mathematica Societatis Janos Bolyai 10. Infinite and Finite Sets, Keszthely (Hungary)_. Citeseer.
* Ermon et al. (2013a) Ermon, S.; Gomes, C. P.; Sabharwal, A.; and Selman, B. 2013a. Embed and Project: Discrete Sampling with Universal Hashing. In _NIPS_ , 2085–2093.
* Ermon et al. (2013b) Ermon, S.; Gomes, C. P.; Sabharwal, A.; and Selman, B. 2013b. Taming the Curse of Dimensionality: Discrete Integration by Hashing and Optimization. In _ICML (2)_ , volume 28 of _JMLR Workshop and Conference Proceedings_ , 334–342. JMLR.org.
* Fichte, Hecher, and Zisser (2019) Fichte, J. K.; Hecher, M.; and Zisser, M. 2019. An Improved GPU-Based SAT Model Counter. In _CP_ , 491–509.
* Ge, Xu, and Ghahramani (2018) Ge, H.; Xu, K.; and Ghahramani, Z. 2018. Turing: A Language for Flexible Probabilistic Inference. In _AISTATS_ , volume 84, 1682–1690. PMLR.
* Germain et al. (2015) Germain, M.; Gregor, K.; Murray, I.; and Larochelle, H. 2015. MADE: Masked Autoencoder for Distribution Estimation. In _ICML_ , volume 37 of _JMLR Workshop and Conference Proceedings_ , 881–889. JMLR.org.
* Gogate and Dechter (2011) Gogate, V.; and Dechter, R. 2011. SampleSearch: Importance sampling in presence of determinism. _Artif. Intell._ , 175(2): 694–729.
* Gogate and Dechter (2012) Gogate, V.; and Dechter, R. 2012. Importance sampling-based estimation over AND/OR search spaces for graphical models. _Artificial Intelligence_ , 184-185: 38 – 77.
* Golia et al. (2021) Golia, P.; Soos, M.; Chakraborty, S.; and Meel, K. S. 2021. Designing Samplers is Easy: The Boon of Testers. In _FMCAD_ , 222–230. IEEE.
* Gomes, Sabharwal, and Selman (2006) Gomes, C. P.; Sabharwal, A.; and Selman, B. 2006. Near-Uniform Sampling of Combinatorial Spaces Using XOR Constraints. In _NIPS_ , 481–488. MIT Press.
* Goodfellow et al. (2014) Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A. C.; and Bengio, Y. 2014. Generative Adversarial Nets. In _NIPS_ , 2672–2680.
* Guo, Jerrum, and Liu (2019) Guo, H.; Jerrum, M.; and Liu, J. 2019. Uniform Sampling Through the Lovász Local Lemma. _J. ACM_ , 66(3): 18:1–18:31.
* Gupta et al. (2019) Gupta, R.; Sharma, S.; Roy, S.; and Meel, K. S. 2019. WAPS: Weighted and Projected Sampling. In _TACAS_ , volume 11427, 59–76.
* Hinton (2002) Hinton, G. E. 2002. Training Products of Experts by Minimizing Contrastive Divergence. _Neural Comput._ , 14(8): 1771–1800.
* Hu et al. (2017) Hu, Z.; Yang, Z.; Liang, X.; Salakhutdinov, R.; and Xing, E. P. 2017. Toward Controlled Generation of Text. In _ICML_ , volume 70 of _Proceedings of Machine Learning Research_ , 1587–1596. PMLR.
* Ignatiev, Morgado, and Marques-Silva (2018) Ignatiev, A.; Morgado, A.; and Marques-Silva, J. 2018. PySAT: A Python Toolkit for Prototyping with SAT Oracles. In _SAT_ , volume 10929 of _Lecture Notes in Computer Science_ , 428–437. Springer.
* Ihler, Fisher III, and Willsky (2005) Ihler, A. T.; Fisher III, J. W.; and Willsky, A. S. 2005. Loopy Belief Propagation: Convergence and Effects of Message Errors. _J. Mach. Learn. Res._ , 6: 905–936.
* Jerrum (2021) Jerrum, M. 2021. Fundamentals of Partial Rejection Sampling. arXiv:2106.07744.
* Jiang, Gu, and Xue (2022) Jiang, N.; Gu, Y.; and Xue, Y. 2022. Learning Markov Random Fields for Combinatorial Structures with Sampling through Lovász Local Lemma. arXiv:2212.00296.
* Jin, Barzilay, and Jaakkola (2018) Jin, W.; Barzilay, R.; and Jaakkola, T. S. 2018. Junction Tree Variational Autoencoder for Molecular Graph Generation. In _ICML_ , volume 80 of _Proceedings of Machine Learning Research_ , 2328–2337. PMLR.
* Karalias and Loukas (2020) Karalias, N.; and Loukas, A. 2020. Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs. In _NeurIPS_ , 6659–6672.
* Kingma and Welling (2014) Kingma, D. P.; and Welling, M. 2014. Auto-Encoding Variational Bayes. In _ICLR_.
* Kusner, Paige, and Hernández-Lobato (2017) Kusner, M. J.; Paige, B.; and Hernández-Lobato, J. M. 2017. Grammar Variational Autoencoder. In _ICML_ , volume 70, 1945–1954. PMLR.
* Larochelle and Murray (2011) Larochelle, H.; and Murray, I. 2011. The Neural Autoregressive Distribution Estimator. In _AISTATS_ , volume 15 of _JMLR Proceedings_ , 29–37. JMLR.org.
* Lauria et al. (2017) Lauria, M.; Elffers, J.; Nordström, J.; and Vinyals, M. 2017. CNFgen: A Generator of Crafted Benchmarks. In _SAT_ , volume 10491, 464–473. Springer.
* Li, Chen, and Koltun (2018) Li, Z.; Chen, Q.; and Koltun, V. 2018. Combinatorial Optimization with Graph Convolutional Networks and Guided Tree Search. In _NeurIPS_ , 537–546.
* Lowd and Domingos (2008) Lowd, D.; and Domingos, P. M. 2008. Learning Arithmetic Circuits. In _UAI_ , 383–392. AUAI Press.
* Mahmoud (2022) Mahmoud, M. 2022. _GPU Enabled Automated Reasoning_. Ph.D. thesis, Mathematics and Computer Science.
* Mandi et al. (2020) Mandi, J.; Demirovic, E.; Stuckey, P. J.; and Guns, T. 2020. Smart Predict-and-Optimize for Hard Combinatorial Optimization Problems. In _AAAI_ , 1603–1610. AAAI Press.
* Mansour (1994) Mansour, Y. 1994. Learning Boolean functions via the Fourier transform. In _Theoretical advances in neural computation and learning_ , 391–424. Springer.
* Moser and Tardos (2010) Moser, R. A.; and Tardos, G. 2010. A constructive proof of the general lovász local lemma. _J. ACM_ , 57(2): 11:1–11:15.
* Murphy, Weiss, and Jordan (1999) Murphy, K. P.; Weiss, Y.; and Jordan, M. I. 1999. Loopy Belief Propagation for Approximate Inference: An Empirical Study. In _UAI_ , 467–475. Morgan Kaufmann.
* Neal (1993) Neal, R. M. 1993. _Probabilistic inference using Markov chain Monte Carlo methods_. Department of Computer Science, University of Toronto Toronto, ON, Canada.
* Prevot, Soos, and Meel (2021) Prevot, N.; Soos, M.; and Meel, K. S. 2021. Leveraging GPUs for Effective Clause Sharing in Parallel SAT Solving. In _SAT_ , 471–487.
* Rosa, Giunchiglia, and O’Sullivan (2011) Rosa, E. D.; Giunchiglia, E.; and O’Sullivan, B. 2011. Optimal stopping methods for finding high quality solutions to satisfiability problems with preferences. In _SAC_ , 901–906. ACM.
* Sang, Bearne, and Kautz (2005) Sang, T.; Bearne, P.; and Kautz, H. 2005. Performing Bayesian Inference by Weighted Model Counting. In _AAAI_ , AAAI’05, 475–481.
* Selsam et al. (2019) Selsam, D.; Lamm, M.; Bünz, B.; Liang, P.; de Moura, L.; and Dill, D. L. 2019. Learning a SAT Solver from Single-Bit Supervision. In _ICLR (Poster)_. OpenReview.net.
* Sharma et al. (2018) Sharma, S.; Gupta, R.; Roy, S.; and Meel, K. S. 2018. Knowledge Compilation meets Uniform Sampling. In _LPAR_ , volume 57 of _EPiC Series in Computing_ , 620–636.
* Song and Ermon (2019) Song, Y.; and Ermon, S. 2019. Generative Modeling by Estimating Gradients of the Data Distribution. In _NeurIPS_ , 11895–11907.
* Song et al. (2021) Song, Y.; Sohl-Dickstein, J.; Kingma, D. P.; Kumar, A.; Ermon, S.; and Poole, B. 2021. Score-Based Generative Modeling through Stochastic Differential Equations. In _ICLR_. OpenReview.net.
* Soos, Gocht, and Meel (2020) Soos, M.; Gocht, S.; and Meel, K. S. 2020. Tinted, Detached, and Lazy CNF-XOR Solving and Its Applications to Counting and Sampling. In _CAV (1)_ , volume 12224 of _Lecture Notes in Computer Science_ , 463–484. Springer.
* Takahashi et al. (2009) Takahashi, J.; Yamaguchi, T.; Sekiyama, K.; and Fukuda, T. 2009. Communication timing control and topology reconfiguration of a sink-free meshed sensor network with mobile robots. _IEEE/ASME transactions on mechatronics_ , 14(2): 187–197.
* Tsochantaridis et al. (2005) Tsochantaridis, I.; Joachims, T.; Hofmann, T.; and Altun, Y. 2005. Large Margin Methods for Structured and Interdependent Output Variables. _J. Mach. Learn. Res._ , 6: 1453–1484.
* van den Oord, Kalchbrenner, and Kavukcuoglu (2016) van den Oord, A.; Kalchbrenner, N.; and Kavukcuoglu, K. 2016. Pixel Recurrent Neural Networks. In _ICML_ , volume 48 of _JMLR Workshop and Conference Proceedings_ , 1747–1756. JMLR.org.
* Van Hentenryck (1989) Van Hentenryck, P. 1989. _Constraint Satisfaction in Logic Programming_. Cambridge, MA, USA: MIT Press. ISBN 0-262-08181-4.
* Wainwright and Jordan (2006) Wainwright, M. J.; and Jordan, M. I. 2006. Log-determinant relaxation for approximate inference in discrete Markov random fields. _IEEE Trans. Signal Process._ , 54(6-1): 2099–2109.
* Yedidia, Freeman, and Weiss (2000) Yedidia, J. S.; Freeman, W. T.; and Weiss, Y. 2000. Generalized Belief Propagation. In _NIPS_ , 689–695. MIT Press.
* Yolcu and Póczos (2019) Yolcu, E.; and Póczos, B. 2019. Learning Local Search Heuristics for Boolean Satisfiability. In _NeurIPS_ , 7990–8001.
## Appendix A Probability Distribution of Algorithm 1
### Definitions and Notations
This section contains the proofs related to the probability distribution of
the proposed Algorithm 1. For convenience, commonly used notations are listed
in Table 4. We make slight changes to some notations that appear in the main
paper to ensure they are consistent and well-defined in this proof.
Similar to previous analyses (Guo, Jerrum, and Liu 2019; Jerrum 2021), we
begin by introducing the concept of a “dependency graph” (Definition 1) for
the constraints $\mathcal{C}$.
###### Definition 1 (Dependency Graph).
The dependency graph is $G=(\mathcal{C},E)$, whose vertex set is the set of
constraints $\mathcal{C}$. Two vertices $c_{i}$ and $c_{j}$ are connected by
an edge $(c_{i},c_{j})\in E$ if and only if they are defined on at least one
common random variable, i.e.,
$\mathtt{var}(c_{i})\cap\mathtt{var}(c_{j})\neq\emptyset$.
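As an illustrative sketch (not code from the paper), the dependency graph and the neighborhood operator $\Gamma$ can be computed directly from the variable sets of the constraints:

```python
from itertools import combinations

def dependency_graph(constraint_vars):
    """Build the edge set of the dependency graph of Definition 1.

    `constraint_vars[j]` is the set var(c_j) of variable indices of c_j.
    Constraints are adjacent iff they share at least one variable.
    """
    m = len(constraint_vars)
    return {(i, j) for i, j in combinations(range(m), 2)
            if constraint_vars[i] & constraint_vars[j]}

def gamma(S, constraint_vars):
    """Gamma(S): S together with its direct neighbors in the graph."""
    return {j for j in range(len(constraint_vars))
            if j in S
            or any(constraint_vars[j] & constraint_vars[i] for i in S)}

# Example 1: c1 = (X1 v X2) on {1, 2}, c2 = (not X1 v X3) on {1, 3}.
cvars = [{1, 2}, {1, 3}]
assert dependency_graph(cvars) == {(0, 1)}   # c1 and c2 share X1
assert gamma({0}, cvars) == {0, 1}           # Gamma({c1}) = {c1, c2}
```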
To keep track of the whole sampling procedure, we need the concept of a
“sampling record” (Definition 2) (Guo, Jerrum, and Liu 2019), which lists the
violated constraints at every round of Algorithm 1. It is also known as the
witness tree in Moser and Tardos (2010). This allows us to check the
constraint satisfaction of the assignment at every round.
Under Condition 1, for any edge $(c_{i},c_{j})\in E$ in the dependency graph,
either $\mathbf{1}(x,c_{i})=1$ or $\mathbf{1}(x,c_{j})=1$ for all
$x\in\mathcal{X}$. In other words, two constraints with shared variables,
representing two adjacent vertices in the dependency graph $G$, are never
violated simultaneously. Thus, under Condition 1 the constraints in a record
$S_{t}$ form an independent set (a set of vertices no two of which are
adjacent) in the dependency graph.
###### Definition 2 (Sampling Record).
Given the dependency graph $G(\mathcal{C},E)$, let $X_{t}=x$ be one possible
assignment obtained at round $t$ of Algorithm 1. Let
$S_{t}\subseteq\mathcal{C}$ be the set of vertices in graph $G$ (a subset of
constraints) that $x$ violates:
$\displaystyle S_{t}=\\{c_{i}|c_{i}\in\mathcal{C}\text{ and
}\mathbf{1}(x,c_{i})=0\\},$ (16)
where $\mathbf{1}(x,c_{i})=0$ means that $x$ violates constraint $c_{i}$ at
round $t$. Define the sampling record as the sequence of violated-constraint
sets $S_{1},\ldots,S_{t}$ throughout the execution.
At round $t$ ($t\geq 1$) of Algorithm 1, suppose the set of violated
constraints is $S_{t}\subseteq\mathcal{C}$. The constraints that are not
adjacent to $S_{t}$ in the dependency graph remain satisfied after
resampling. The only constraints that might be violated after the resampling
operation are those in $S_{t}$ itself, or those directly connected to $S_{t}$
in the dependency graph. Therefore,
$S_{t+1}\subseteq\Gamma(S_{t}),\qquad\text{ for all }t\geq 1,$
where $\Gamma(S_{t})$ is the set consisting of $S_{t}$ and its adjacent
neighbors in the dependency graph $G$ (see Table 4). When Algorithm 1
terminates at round $T+1$, no constraints are violated anymore, i.e.,
$S_{T+1}=\emptyset$. To summarize the above discussion of sampling records of
Algorithm 1, we have the following Claim 1.
###### Claim 1.
Under Condition 1, a potential sampling record of length $T+1$ produced by
Algorithm 1 is a sequence of independent sets
$S_{1},S_{2},\ldots,S_{T},\emptyset$ with
1. 1.
$S_{t+1}\subseteq\Gamma(S_{t})$ and $S_{t}\neq\emptyset$, for $1\leq t\leq T$;
2. 2.
$S_{T+1}=\emptyset$.
Table 4: Summary of all the notations used in the theoretical analysis of Algorithm 1. Notation | Definition
---|---
$X=\\{X_{i}\\}_{i=1}^{n}$ | set of discrete random variables
$x\in\mathcal{X}$ | possible assignments for variables $X$
$x_{i}\in\mathcal{X}_{i}$ | variable $X_{i}$ can take all values in $\mathcal{X}_{i}$
$\mathcal{C}=\\{c_{j}\\}_{j=1}^{m}$ | given constraints
$S_{t}\subseteq\mathcal{C}$ | subset of constraints violated at round $t$ of Algorithm 1
$G(\mathcal{C},E)$ | the dependency graph (in Definition 1)
$\mathtt{var}(c_{j})$ | the indices of domain variables that are related to constraint $c_{j}$
$\mathtt{var}(S_{t})$ | the indices for domain variables that are related to constraints $S_{t}$
$\Gamma(c_{j})$ | $c_{j}$ and its direct neighbors in the dependency graph
$\Gamma(S_{t})$ | $S_{t}$ and direct neighbors of $S_{t}$ in the dependency graph
$\mathcal{C}\backslash\Gamma(S_{t})$ | all constraints in $\mathcal{C}$ but not in $\Gamma(S_{t})$
$S_{1},\ldots,S_{T},\emptyset$ | a sampling record of Algorithm 1 (in Definition 2)
$\mathbf{1}(x,c_{i})$ | indicator function that evaluates if assignment $x$ satisfies constraint $c_{i}$
$\mathbf{1}(x,S_{t})$ | indicator function that evaluates if assignment $x$ satisfies constraints in $S_{t}$
$\mathbf{1}(x,\mathcal{C})$ | indicator function that evaluates if assignment $x$ satisfies all constraints $\mathcal{C}$
$P_{\theta}(X|\mathcal{C}\backslash\Gamma(S_{t}))$ | see Definition 3
$\mathbb{P}(X|S_{t})$ | see Definition 4
#### Extra Notations Related to the Constrained MRF
The constrained MRF model over the constraint set $\mathcal{C}$ is defined as:
$\displaystyle P_{\theta}(X=x|\mathcal{C})$
$\displaystyle=\frac{\exp(\sum_{i=1}^{n}\theta_{i}x_{i})\mathbf{1}(x,\mathcal{C})}{\sum_{x^{\prime}\in\mathcal{X}}\exp(\sum_{i=1}^{n}\theta_{i}x^{\prime}_{i})\mathbf{1}(x^{\prime},\mathcal{C})}$
where the partition function only sums over valid assignments in
$\mathcal{X}$. Note that $C(x)$ in Equation (6) is the same as
$\mathbf{1}(x,\mathcal{C})$ in the above equation; we slightly change the
notation for consistency in this proof. Also notice that the output
distribution can no longer be factorized once the constraints are enforced,
since the partition function cannot be factorized. Our task is to draw
samples from this distribution.
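For very small $n$, this distribution can be computed by brute-force enumeration, which is useful for checking any sampler against the exact target. The sketch below is our own illustration (infeasible beyond toy sizes): it enumerates all binary assignments and normalizes over the valid ones.

```python
from itertools import product
from math import exp

def constrained_mrf(theta, constraints):
    """Exact P_theta(x | C) for small n: keep the assignments satisfying
    every constraint and normalize exp(sum_i theta_i * x_i) over them."""
    n = len(theta)
    weights = {}
    for x in product([0, 1], repeat=n):
        if all(c(x) for c in constraints):
            weights[x] = exp(sum(t * xi for t, xi in zip(theta, x)))
    Z = sum(weights.values())          # partition function over valid x only
    return {x: w / Z for x, w in weights.items()}

# Example 1 constraints: c1 = (X1 v X2), c2 = (not X1 v X3).
constraints = [lambda x: x[0] or x[1], lambda x: (not x[0]) or x[2]]
P = constrained_mrf([0.0, 0.0, 0.0], constraints)
assert len(P) == 4                     # exactly 4 valid assignments
assert all(abs(p - 0.25) < 1e-12 for p in P.values())   # uniform at theta=0
```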
To analyze the intermediate steps in Algorithm 1, we further need to define
the following notations.
###### Definition 3.
The constrained MRF distribution for constraints
$\mathcal{C}\backslash\Gamma(S_{t})$ is
$\displaystyle P_{\theta}(X=x|\mathcal{C}\backslash\Gamma(S_{t}))$
$\displaystyle=\frac{\exp(\sum_{i=1}^{n}\theta_{i}x_{i})\mathbf{1}(x,\mathcal{C}\backslash\Gamma(S_{t}))}{\sum_{x^{\prime}\in\mathcal{X}}\exp(\sum_{i=1}^{n}\theta_{i}x^{\prime}_{i})\mathbf{1}(x^{\prime},\mathcal{C}\backslash\Gamma(S_{t}))}$
###### Definition 4.
At round $t$ of Algorithm 1, let $S_{t}\subseteq\mathcal{C}$ be the set of
violated constraints. Define $\mathbb{P}(X_{t+1}=x|S_{1},\ldots,S_{t})$ to be
the probability of obtaining the new assignment $x$ after we resample the
random variables indexed by $\mathtt{var}(S_{t})$.
### Ratio Property Lemma
###### Lemma 1 (Ratio Property).
Under Condition 1, assume Algorithm 1 is at round $t$. Conditioning on
observing one possible sampling record $S_{1},\ldots,S_{t}$, step 4 of
Algorithm 1 resamples the variables in $\mathtt{var}(S_{t})$ at round $t+1$.
Let $x,x^{\prime}\in\mathcal{X}$ be two possible assignments after this
resampling. The probability ratio of obtaining these two results equals the
corresponding ratio under the constrained MRF
$P_{\theta}(x|\mathcal{C}\backslash\Gamma(S_{t}))$:
$\frac{\mathbb{P}(X_{t+1}=x|S_{1},\ldots,S_{t})}{\mathbb{P}(X_{t+1}=x^{\prime}|S_{1},\ldots,S_{t})}=\frac{P_{\theta}(X=x|\mathcal{C}\backslash\Gamma(S_{t}))}{P_{\theta}(X=x^{\prime}|\mathcal{C}\backslash\Gamma(S_{t}))},$
(17)
where $\mathbb{P}(X_{t+1}=x|S_{1},\ldots,S_{t})$ is the probability that step
4 of Algorithm 1 produces assignment $x$ at round $t+1$, conditioning on the
observed record $S_{1},\ldots,S_{t}$ and on resampling $\mathtt{var}(S_{t})$,
and $P_{\theta}(X=x|\mathcal{C}\backslash\Gamma(S_{t}))$ is the probability
of assignment $x$ under the constrained MRF (for the constraints
$\mathcal{C}\backslash\Gamma(S_{t})$).
###### Proof.
During this intermediate step of the algorithm, assume the set of constraints
$S_{t}$ is violated. We resample the variables indexed by
$\mathtt{var}(S_{t})$, so the variables indexed by
$\mathtt{var}(\mathcal{C}\backslash\Gamma(S_{t}))$ do not change their
assignments. Also, because $\Gamma(S_{t})$ is the largest possible set of
constraints that can be affected by the resampling, the constraints
$\mathcal{C}\backslash\Gamma(S_{t})$ are still satisfied after the
resampling. At round $t$, we resample the variables in $\mathtt{var}(S_{t})$
according to step 4 of Algorithm 1; we thus have:
$\mathbb{P}(X^{t+1}_{\mathtt{var}(S_{t})}=x_{\mathtt{var}(S_{t})}|S_{1}\ldots
S_{t})=\underset{{i\in\mathtt{var}(S_{t})}}{\prod}\frac{\exp(\theta_{i}x_{i})}{\underset{x^{\prime}_{i}\in\mathcal{X}_{i}}{\sum}\exp(\theta_{i}x^{\prime}_{i})}.$
Here the notation $X^{t+1}_{\mathtt{var}(S_{t})}=x_{\mathtt{var}(S_{t})}$
means $X_{i}=x_{i}$ for $i\in\mathtt{var}(S_{t})$ at round $t+1$. For any two
possible assignments $x,x^{\prime}$ after the resampling,
$x_{i}=x^{\prime}_{i},\quad\text{ for
}i\in\\{1,\ldots,n\\}\backslash\mathtt{var}(S_{t}),$
since the assignments of the remaining variables are kept the same by the
resampling. Thus we obtain the ratio:
$\frac{\mathbb{P}(X_{t+1}=x|S_{1},\ldots,S_{t})}{\mathbb{P}(X_{t+1}=x^{\prime}|S_{1},\ldots,S_{t})}=\frac{\exp(\sum_{i\in\mathtt{var}(S_{t})}\theta_{i}x_{i})}{\exp(\sum_{i\in\mathtt{var}(S_{t})}\theta_{i}x^{\prime}_{i})}=\frac{\exp(\sum_{i\in\mathtt{var}(\Gamma(S_{t}))}\theta_{i}x_{i})}{\exp(\sum_{i\in\mathtt{var}(\Gamma(S_{t}))}\theta_{i}x^{\prime}_{i})}.$
(18)
For the last step, since the assignment outside $\mathtt{var}(S_{t})$ is
unchanged, we can enlarge the index set of the summation to
$\mathtt{var}(\Gamma(S_{t}))$ by multiplying by
$\displaystyle
1=\frac{\exp(\sum_{i\in\mathtt{var}(\Gamma(S_{t}))\backslash\mathtt{var}(S_{t})}\theta_{i}x_{i})}{\exp(\sum_{i\in\mathtt{var}(\Gamma(S_{t}))\backslash\mathtt{var}(S_{t})}\theta_{i}x^{\prime}_{i})}.$
After the resampling, we know that $x$ must satisfy the constraints
$\mathcal{C}\backslash\Gamma(S_{t})$. Thus, the probability of this $x$,
conditioned on the constraints $\mathcal{C}\backslash\Gamma(S_{t})$ holding
in the constrained MRF model, is:
$P_{\theta}(X=x|\mathcal{C}\backslash\Gamma(S_{t}))=\frac{\exp(\sum_{i=1}^{n}\theta_{i}x_{i})\mathbf{1}(x,\mathcal{C}\backslash\Gamma(S_{t}))}{\sum_{x^{\prime}\in\mathcal{X}}\exp(\sum_{i=1}^{n}\theta_{i}x^{\prime}_{i})\mathbf{1}\left(x^{\prime},\mathcal{C}\backslash\Gamma(S_{t})\right)}=\frac{\exp(\sum_{i=1}^{n}\theta_{i}x_{i})}{\sum_{x^{\prime}\in\mathcal{X}}\exp(\sum_{i=1}^{n}\theta_{i}x^{\prime}_{i})\mathbf{1}\left(x^{\prime},\mathcal{C}\backslash\Gamma(S_{t})\right)}.$
In the constrained MRF model (for the constraints
$\mathcal{C}\backslash\Gamma(S_{t})$), the ratio of the probabilities of the
two assignments $x,x^{\prime}$ is:
$\displaystyle\frac{P_{\theta}(X=x|\mathcal{C}\backslash\Gamma(S_{t}))}{P_{\theta}(X=x^{\prime}|\mathcal{C}\backslash\Gamma(S_{t}))}=\frac{\exp(\sum_{i\in\mathtt{var}(\Gamma(S_{t}))}\theta_{i}x_{i})}{\exp(\sum_{i\in\mathtt{var}(\Gamma(S_{t}))}\theta_{i}x^{\prime}_{i})},$
(19)
because the $x_{i}$ outside $\mathtt{var}(\Gamma(S_{t}))$ remain the same.
Note that $x,x^{\prime}$ are two possible assignments produced according to
step 4 of Algorithm 1 at round $t$. Combining Equation (18) and Equation
(19), we conclude that:
$\displaystyle\frac{\mathbb{P}(X_{t+1}=x|S_{1},\ldots,S_{t})}{\mathbb{P}(X_{t+1}=x^{\prime}|S_{1},\ldots,S_{t})}=\frac{P_{\theta}(X=x|\mathcal{C}\backslash\Gamma(S_{t}))}{P_{\theta}(X=x^{\prime}|\mathcal{C}\backslash\Gamma(S_{t}))}.$
The proof is finished. ∎
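The key cancellation in Equation (18) — that the ratio of step-4 resampling probabilities depends only on the resampled coordinates — can also be checked numerically. In the sketch below the parameters `theta`, the index set `resampled` standing in for $\mathtt{var}(S_{t})$, and the two assignments are illustrative assumptions of our own.

```python
from math import exp, isclose

theta = [0.5, -0.3, 0.2]           # illustrative parameters
resampled = {0, 1}                 # stands in for var(S_t)

def step4_prob(x):
    """Probability that step 4 draws x on the resampled coordinates
    (product of per-variable factors, as in the displayed product)."""
    p = 1.0
    for i in resampled:
        p *= exp(theta[i] * x[i]) / (exp(theta[i] * 0) + exp(theta[i] * 1))
    return p

# x and x' agree outside var(S_t), as the lemma requires.
x, xp = (1, 0, 1), (0, 1, 1)

lhs = step4_prob(x) / step4_prob(xp)
rhs = exp(sum(t * v for t, v in zip(theta, x))) / \
      exp(sum(t * v for t, v in zip(theta, xp)))
assert isclose(lhs, rhs)           # Eq. (18): the two ratios coincide
```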
### Proof of Theorem 1
Suppose the resampling process terminates at round $T+1$ and we obtain a
valid sample $x$. Upon the termination of Algorithm 1, all the constraints
are satisfied, so we have $S_{T+1}=\emptyset$; in other words,
$\mathbf{1}(x,\mathcal{C})=1$.
Let $x,x^{\prime}$ be two possible valid assignments produced at round $T+1$
by Algorithm 1. Using the analysis of Lemma 1, we still have:
$\frac{\mathbb{P}(X_{T+1}=x|S_{1},\ldots,S_{T})}{\mathbb{P}(X_{T+1}=x^{\prime}|S_{1},\ldots,S_{T})}=\frac{\exp(\sum_{i\in\mathtt{var}(S_{T})}\theta_{i}x_{i})}{\exp(\sum_{i\in\mathtt{var}(S_{T})}\theta_{i}x^{\prime}_{i})}.$
The probability of this $x$ in the constrained MRF model (for constraints
$\mathcal{C}$) is:
$P_{\theta}(X=x|\mathcal{C})=\frac{\exp(\sum_{i=1}^{n}\theta_{i}x_{i})\mathbf{1}(x,\mathcal{C})}{\sum_{x^{\prime}\in\mathcal{X}}\exp(\sum_{i=1}^{n}\theta_{i}x^{\prime}_{i})\mathbf{1}\left(x^{\prime},\mathcal{C}\right)}=\frac{\exp(\sum_{i=1}^{n}\theta_{i}x_{i})}{\sum_{x^{\prime}\in\mathcal{X}}\exp(\sum_{i=1}^{n}\theta_{i}x^{\prime}_{i})\mathbf{1}\left(x^{\prime},\mathcal{C}\right)}.$
Then we conclude that:
$\frac{\mathbb{P}(X_{T+1}=x|S_{1},\ldots,S_{T})}{\mathbb{P}(X_{T+1}=x^{\prime}|S_{1},\ldots,S_{T})}=\frac{P_{\theta}(X=x|\mathcal{C})}{P_{\theta}(X=x^{\prime}|\mathcal{C})}.$
Note that this ratio property holds for all the possible sampling records
$S_{1},\ldots,S_{T},\emptyset$.
#### Summation of All Possible Sampling Records
Define $\mathbb{P}(S_{1},\ldots,S_{T})$ to be the probability that Algorithm
1 observes the record $S_{1},\ldots,S_{T}$. For any possible sampling record
$S_{1},\ldots,S_{T},\emptyset$, the ratio property still holds:
$\frac{\mathbb{P}(X_{T+1}=x|S_{1},\ldots
S_{T})\mathbb{P}(S_{1},\ldots,S_{T})}{\mathbb{P}(X_{T+1}=x^{\prime}|S_{1},\ldots,S_{T})\mathbb{P}(S_{1},\ldots,S_{T})}=\frac{P_{\theta}(X=x|\mathcal{C})}{P_{\theta}(X=x^{\prime}|\mathcal{C})}$
where the common factor $\mathbb{P}(S_{1},\ldots,S_{T})$ on the left-hand
side (LHS) cancels. After we sum over all possible sampling records
$S_{1},\ldots,S_{T},\emptyset$, the ratio property still holds. Let
$\mathbb{P}(X_{T+1}=x)$ be the probability of obtaining the valid assignment
$x$ from an execution of Algorithm 1:
$\frac{\mathbb{P}(X_{T+1}=x)}{\mathbb{P}(X_{T+1}=x^{\prime})}=\frac{\sum_{S_{1},\ldots,S_{T}}\mathbb{P}(X_{T+1}=x|S_{1},\ldots,S_{T})\mathbb{P}(S_{1},\ldots,S_{T})}{\sum_{S_{1},\ldots,S_{T}}\mathbb{P}(X_{T+1}=x^{\prime}|S_{1},\ldots,S_{T})\mathbb{P}(S_{1},\ldots,S_{T})}=\frac{P_{\theta}(X=x|\mathcal{C})}{P_{\theta}(X=x^{\prime}|\mathcal{C})}$
(20)
#### Sample Space Analysis At Termination
We need one more statement to show Theorem 1 holds. Let
$\mathcal{X}_{\text{LLL}}$ be the set of all possible assignments $x$ that can
be generated by Algorithm 1:
$\mathcal{X}_{\text{LLL}}=\bigcup_{S_{1}\ldots
S_{T}}\\{x|\mathbb{P}(X_{T+1}=x|S_{1}\ldots S_{T})\neq 0\text{ and
}\mathbb{P}(S_{1}\ldots S_{T})\neq 0\\}.$
where $\mathbb{P}(S_{1}\ldots S_{T})\neq 0$ means $S_{1},\ldots,S_{T}$ is a
possible record. $\mathbb{P}(X_{T+1}=x|S_{1}\ldots S_{T})\neq 0$ means it is
possible to obtain $x$ given the record $S_{1},\ldots,S_{T}$.
Let $\mathcal{X}_{\mathcal{C}}$ be the set of assignments that satisfy all
the constraints of the constrained MRF (for the constraints $\mathcal{C}$):
$\mathcal{X}_{\mathcal{C}}=\\{x\in\mathcal{X}\,|\,P_{\theta}(X=x|\mathcal{C})\neq
0\\}.$
###### Lemma 2.
$\mathcal{X}_{\text{LLL}}\subseteq\mathcal{X}_{\mathcal{C}}$ and
$\mathcal{X}_{\mathcal{C}}\subseteq\mathcal{X}_{\text{LLL}}$, thus
$\mathcal{X}_{\text{LLL}}=\mathcal{X}_{\mathcal{C}}$.
###### Proof.
When Algorithm 1 terminates, it only produces valid assignments; thus, we must
have: $\mathcal{X}_{\text{LLL}}\subseteq\mathcal{X}_{\mathcal{C}}$. On the
other hand, there is always a non-zero probability that Algorithm 1 will
generate every valid assignment $x\in\mathcal{X}_{\mathcal{C}}$, which implies
that $\mathcal{X}_{\mathcal{C}}\subseteq\mathcal{X}_{\text{LLL}}$. Therefore
we can conclude that $\mathcal{X}_{\text{LLL}}=\mathcal{X}_{\mathcal{C}}$. ∎
Lemma 2 shows that the two distributions have the same sample space when
Algorithm 1 terminates. Moreover, Equation (20) shows that they assign the
same probability ratio to any pair of possible valid assignments
$x,x^{\prime}$. This shows that an execution of Algorithm 1 is a random draw
from the constrained MRF distribution $P_{\theta}(X=x|\mathcal{C})$. The
proof of Theorem 1 is finished.
### Difference from the Original Proof
The main difference between the above proof and the existing proof in (Guo,
Jerrum, and Liu 2019, Lemma 7) is that we establish Lemma 1, which
characterizes the probability ratio of obtaining different assignments of the
variables and is more general than the descriptive argument of Guo, Jerrum,
and Liu (2019, Lemma 7).
### A Running Example in View of a Markov Chain
We dedicate this section to demonstrating the execution of Algorithm 1 on
Example 1. Algorithm 1 can be viewed as a Markov chain, so we show that the
probability of obtaining valid samples is unbiased by running many steps of
the Markov chain. The constraints are
$\mathcal{C}=\\{c_{1}=(X_{1}\vee X_{2}),c_{2}=(\neg X_{1}\vee X_{3})\\}$. We
use the assignments of all the variables as the states $s_{1},\ldots,s_{8}$
in the rounds of Algorithm 1.
$\displaystyle s_{1}$ $\displaystyle=(X_{1}=0,X_{2}=0,X_{3}=0)$ (21)
$\displaystyle s_{2}$ $\displaystyle=(X_{1}=0,X_{2}=0,X_{3}=1)$ $\displaystyle s_{3}$ $\displaystyle=(X_{1}=0,X_{2}=1,X_{3}=0)$ $\displaystyle s_{4}$ $\displaystyle=(X_{1}=0,X_{2}=1,X_{3}=1)$ $\displaystyle s_{5}$ $\displaystyle=(X_{1}=1,X_{2}=0,X_{3}=0)$ $\displaystyle s_{6}$ $\displaystyle=(X_{1}=1,X_{2}=0,X_{3}=1)$ $\displaystyle s_{7}$ $\displaystyle=(X_{1}=1,X_{2}=1,X_{3}=0)$ $\displaystyle s_{8}$ $\displaystyle=(X_{1}=1,X_{2}=1,X_{3}=1)$
Here $s_{1},s_{2},s_{3},s_{4}$ correspond to valid assignments of the variables with respect to the constraints $\mathcal{C}$, and $s_{5},s_{6},s_{7},s_{8}$ correspond to invalid assignments, which require resampling.
For simplicity, we consider the uniform setting where
$\theta_{1}=\theta_{2}=\theta_{3}$. The goal is to sample every valid
assignment with equal probability. Therefore, every variable takes each value with probability
$P(X_{i}=1)=P(X_{i}=0)=\tfrac{1}{2},\qquad i=1,2,3.$
Based on Algorithm 1, we know the probability of transferring from $s_{i}$ to $s_{j}$ ($1\leq i,j\leq 8$), so we can construct the transition matrix between the states:
$T=\begin{pmatrix}1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ \frac{1}{4}&\frac{1}{4}&\frac{1}{4}&0&\frac{1}{4}&0&0&0\\ 0&\frac{1}{4}&0&0&\frac{1}{4}&\frac{1}{4}&0&\frac{1}{4}\\ \frac{1}{4}&0&\frac{1}{4}&\frac{1}{4}&0&0&\frac{1}{4}&0\\ 0&0&0&\frac{1}{4}&0&\frac{1}{4}&\frac{1}{4}&\frac{1}{4}\end{pmatrix}$ (22)
where rows and columns are indexed by $s_{1},\ldots,s_{8}$ and $T_{ij}=T(s_{i},s_{j})$ denotes the transition probability from state $s_{i}$ to state $s_{j}$.
Take state $s_{5}$ as an example: it violates constraint $c_{2}$, so $X_{2},X_{3}$ will be resampled. There are 4 possible assignments of $X_{2},X_{3}$, which correspond to the states $\{s_{1},s_{2},s_{3},s_{5}\}$. Since each variable is resampled uniformly at random, the probability of transitioning from state $s_{5}$ to each of the states $\{s_{1},s_{2},s_{3},s_{5}\}$ is $1/4$. Algorithm 1 terminates once it reaches one of the states $\{s_{1},s_{2},s_{3},s_{4}\}$; accordingly, each valid state only transits to itself, with probability 1, and we have $T(s_{i},s_{i})=1$ for $i=1,2,3,4$.
For a randomly initialized assignment distribution
$x=\begin{pmatrix}\frac{1}{8}&\frac{1}{8}&\frac{1}{8}&\frac{1}{8}&\frac{1}{8}&\frac{1}{8}&\frac{1}{8}&\frac{1}{8}\end{pmatrix}$ (23)
that places equal probability on every state, executing Algorithm 1 for 2000 steps yields:
$T^{2000}=\begin{pmatrix}1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ \frac{1}{3}&\frac{1}{3}&\frac{1}{3}&0&0&0&0&0\\ \frac{1}{6}&\frac{1}{2}&\frac{1}{6}&\frac{1}{6}&0&0&0&0\\ \frac{1}{3}&0&\frac{1}{3}&\frac{1}{3}&0&0&0&0\\ \frac{1}{6}&\frac{1}{6}&\frac{1}{6}&\frac{1}{2}&0&0&0&0\end{pmatrix},\quad xT^{2000}=\begin{pmatrix}\frac{1}{4}&\frac{1}{4}&\frac{1}{4}&\frac{1}{4}&0&0&0&0\end{pmatrix}$ (24)
where rows and columns are again indexed by $s_{1},\ldots,s_{8}$; the transient entries vanish up to numerical precision.
This implies that Algorithm 1 outputs every valid assignment with the same probability in the uniform setting, consistent with the result of Theorem 1.
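As a numerical sanity check, the chain above can be simulated directly. The short NumPy sketch below encodes the transition matrix of Equation (22), with the $s_{8}$ row transitioning into $\{s_{4},s_{6},s_{7},s_{8}\}$, applies it 2000 times to the uniform initial distribution, and recovers the limit of Equation (24):

```python
import numpy as np

# Transition matrix of Equation (22): states s1..s4 are absorbing
# (valid assignments); s5..s8 are transient (invalid, trigger resampling).
T = np.array([
    [1, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0, 0],
    [1/4, 1/4, 1/4, 0, 1/4, 0, 0, 0],
    [0, 1/4, 0, 0, 1/4, 1/4, 0, 1/4],
    [1/4, 0, 1/4, 1/4, 0, 0, 1/4, 0],
    [0, 0, 0, 1/4, 0, 1/4, 1/4, 1/4],
])

x0 = np.full(8, 1/8)                          # uniform initial distribution
dist = x0 @ np.linalg.matrix_power(T, 2000)   # approaches (1/4, 1/4, 1/4, 1/4, 0, 0, 0, 0)
print(np.round(dist, 6))
```

The mass of the transient states decays geometrically, so after 2000 steps it is numerically zero and all probability sits uniformly on the four valid states.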
## Appendix B Running Time Analysis of Algorithm 1
We dedicate this section to analyzing the running time of Algorithm 1 in the general weighted case. The expected running time of Algorithm 1 is determined by the number of rounds of re-sampling. In every round, Algorithm 1 re-samples all the related random variables simultaneously. It is hard to estimate the exact total running time over the random variables; instead, we derive a loose upper bound on the expected running time over the sequence of sampling records (the sequence of violated constraints). The overall structure of the proof is similar to the proof of Guo, Jerrum, and Liu (2019, Theorem 13); we discuss the differences at the end of this section.
### Definitions and Notations
We define the following terms to simplify our notations.
###### Definition 5.
Let $S_{t}$ be a subset of vertices in a dependency graph. 1) Define
$p_{S_{t}}$ as the probability of constraints in $S_{t}$ being violated:
$p_{S_{t}}=\mathbb{P}\left(\bigwedge_{c_{i}\in S_{t}}\neg{c}_{i}\right)$ (25)
where we use $\neg{c}_{i}$ to indicate that the constraint $c_{i}$ is violated. 2) Define $q_{S_{t}}$ as the probability that exactly the constraints in $S_{t}$ are violated and nothing else:
$q_{S_{t}}=\mathbb{P}\left(\bigwedge_{c_{i}\in S_{t}}\neg{c}_{i}\wedge\bigwedge_{c_{j}\in\mathcal{C}\backslash S_{t}}c_{j}\right)$ (26)
where $\bigwedge_{c_{i}\in S_{t}}\neg{c}_{i}$ corresponds to the event that the constraints in $S_{t}$ are violated and $\bigwedge_{c_{j}\in\mathcal{C}\backslash S_{t}}c_{j}$ corresponds to the event that all the remaining constraints are satisfied. So $q_{\{c_{i}\}}$ is the probability that only constraint $c_{i}$ is broken while all the others hold. Similarly, $q_{\emptyset}$ denotes the probability that all the constraints are satisfied.
###### Lemma 3.
Given Definition 5, we can further expand $q_{S_{t}}$ under Condition 1:
$q_{S_{t}}=p_{S_{t}}\mathbb{P}\left(\wedge_{c_{j}\in\mathcal{C}\backslash\Gamma(S_{t})}c_{j}\right)$
###### Proof.
We can split $q_{S_{t}}$ into the probability of two independent events:
$\displaystyle q_{S_{t}}$ $\displaystyle=\mathbb{P}\left(\bigwedge_{c_{i}\in S_{t}}\neg{c}_{i}\wedge\bigwedge_{c_{j}\in\mathcal{C}\backslash S_{t}}c_{j}\right)$ By definition of $q_{S_{t}}$ in Equation (26)
$\displaystyle=\mathbb{P}\left(\bigwedge_{c_{i}\in S_{t}}\neg{c}_{i}\wedge\bigwedge_{c_{j}\in\mathcal{C}\backslash\Gamma(S_{t})}c_{j}\right)$
$\displaystyle=\mathbb{P}\left(\bigwedge_{c_{i}\in S_{t}}\neg{c}_{i}\right)\mathbb{P}\left(\bigwedge_{c_{j}\in\mathcal{C}\backslash\Gamma(S_{t})}c_{j}\right)$
$\displaystyle=p_{S_{t}}\mathbb{P}\left(\wedge_{c_{j}\in\mathcal{C}\backslash\Gamma(S_{t})}c_{j}\right).$ By definition of $p_{S_{t}}$ in Equation (25)
The second equality holds because, under Condition 1, adjacent vertices have zero probability of being violated simultaneously: when we observe that the constraints in $S_{t}$ are violated, the constraints in $\Gamma(S_{t})\backslash S_{t}$ cannot be violated. The third equality holds because the random variables in $\mathtt{var}(S_{t})$ are independent of the variables in $\mathtt{var}(\mathcal{C}\backslash\Gamma(S_{t}))$, so we can apply $P(AB)=P(A)P(B)$ for independent events $A,B$. ∎
###### Remark 1 (Equivalence of Record).
At round $t$, Algorithm 1 finds the set $S_{t}$ of all constraints that are broken ($\bigwedge_{c_{i}\in S_{t}}\neg{c}_{i}$), which implies that the rest of the constraints $\mathcal{C}\backslash S_{t}$ are satisfied ($\bigwedge_{c_{j}\in\mathcal{C}\backslash S_{t}}c_{j}$). Thus the probability of observing $S_{t}$ in the record equals:
$\displaystyle\mathbb{P}(S_{t})=\mathbb{P}\left(\bigwedge_{c_{i}\in
S_{t}}\neg{c}_{i}\wedge\bigwedge_{c_{j}\in\mathcal{C}\backslash
S_{t}}c_{j}\right)$ (27)
###### Lemma 4.
Given a possible sampling record $S_{1},\ldots,S_{t-1},S_{t}$ produced by Algorithm 1, the following equality holds for the pair $(S_{t-1},S_{t})$:
$\sum_{S_{t}}q_{S_{t}}=\mathbb{P}(\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i})$
###### Proof.
By Definition 2 of the sampling record, we have $S_{t}\subseteq\Gamma(S_{t-1})$. Taking complements gives:
$\mathcal{C}\backslash\Gamma(S_{t-1})\subseteq\mathcal{C}\backslash S_{t}.$
Using the above result, we have:
$\mathbb{P}\left(\bigwedge_{c_{j}\in\mathcal{C}\backslash
S_{t}}c_{j}\wedge\bigwedge_{c_{k}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{k}\right)=\mathbb{P}\left(\bigwedge_{c_{j}\in\mathcal{C}\backslash
S_{t}}c_{j}\right)$ (28)
Based on Remark 1 and Bayes' theorem, we have:
$\displaystyle\mathbb{P}(S_{t}|\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i})$ $\displaystyle=\mathbb{P}\left(\bigwedge_{c_{i}\in S_{t}}\neg{c}_{i}\wedge\bigwedge_{c_{j}\in\mathcal{C}\backslash S_{t}}c_{j}\Big{|}\wedge_{c_{k}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{k}\right)$ By Equation (27) (29)
$\displaystyle=\frac{\mathbb{P}\left(\bigwedge_{c_{i}\in S_{t}}\neg{c}_{i}\wedge\bigwedge_{c_{j}\in\mathcal{C}\backslash S_{t}}c_{j}\wedge\bigwedge_{c_{k}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{k}\right)}{\mathbb{P}\left(\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i}\right)}$ By Bayes' formula
$\displaystyle=\frac{\mathbb{P}\left(\bigwedge_{c_{i}\in S_{t}}\neg{c}_{i}\wedge\bigwedge_{c_{j}\in\mathcal{C}\backslash S_{t}}c_{j}\right)}{\mathbb{P}\left(\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i}\right)}$ By Equation (28)
$\displaystyle=\frac{q_{S_{t}}}{\mathbb{P}\left(\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i}\right)}.$ By definition of $q_{S_{t}}$ in Equation (26)
Since the LHS of Equation (29), summed over all possible $S_{t}$, equals one, i.e., $\sum_{S_{t}}\mathbb{P}(S_{t}|\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i})=1$, summing the RHS of Equation (29) over $S_{t}$ gives:
$\displaystyle 1=\sum_{S_{t}}\frac{q_{S_{t}}}{\mathbb{P}\left(\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i}\right)}=\frac{\sum_{S_{t}}q_{S_{t}}}{\mathbb{P}\left(\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i}\right)}$ (30)
In the second equality, the summation can be moved into the numerator because the denominator does not depend on $S_{t}$. Rearranging Equation (30), we finally obtain:
$\sum_{S_{t}}q_{S_{t}}=\mathbb{P}(\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i}).$
The proof is finished. ∎
###### Lemma 5.
The probability of observing the sampling record $S_{1},\ldots,S_{T}$ by
Algorithm 1 under Condition 1 is:
$\mathbb{P}(S_{1},\ldots,S_{T})=q_{S_{T}}\prod_{t=1}^{T-1}p_{S_{t}}$ (31)
###### Proof.
Given the sampling record $S_{1},\ldots,S_{t-1}$, the ratio of the conditional probabilities of observing two possible next records $S_{t}$ and $S^{\prime}_{t}$ can be expanded based on Lemma 1:
$\frac{\mathbb{P}(S_{t}|S_{1},\ldots,S_{t-1})}{\mathbb{P}(S^{\prime}_{t}|S_{1},\ldots,S_{t-1})}=\frac{\mathbb{P}(S_{t}|\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i})}{\mathbb{P}(S^{\prime}_{t}|\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i})}$
Based on Equation (29), we can simplify the RHS of this ratio equality and obtain:
$\frac{\mathbb{P}(S_{t}|S_{1},\ldots,S_{t-1})}{\mathbb{P}(S^{\prime}_{t}|S_{1},\ldots,S_{t-1})}=\frac{q_{S_{t}}}{q_{S^{\prime}_{t}}}$
Because $\sum_{S_{t}}\mathbb{P}(S_{t}|S_{1},\ldots,S_{t-1})=1$ and Equation (30) holds, we get:
$\mathbb{P}(S_{t}|S_{1},\ldots,S_{t-1})=\frac{q_{S_{t}}}{\mathbb{P}(\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i})}$ (32)
We finally compute the probability of observing the sampling record $S_{1},\ldots,S_{T}$:
$\displaystyle\mathbb{P}(S_{1},\ldots,S_{T})$ $\displaystyle=\mathbb{P}(S_{1})\prod_{t=2}^{T}\mathbb{P}(S_{t}|S_{1},\ldots,S_{t-1})$ By the chain rule
$\displaystyle=q_{S_{1}}\prod_{t=2}^{T}\frac{q_{S_{t}}}{\mathbb{P}(\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i})}$ By Equation (32)
$\displaystyle=q_{S_{T}}\prod_{t=2}^{T}\frac{q_{S_{t-1}}}{\mathbb{P}(\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i})}$ Shifting the numerator index from $S_{t}$ to $S_{t-1}$
$\displaystyle=q_{S_{T}}\prod_{t=1}^{T-1}p_{S_{t}}$ Plugging in Lemma 3
The proof is finished. ∎
### An Upper Bound on Expected Running Time
Let $\mathbb{E}(T_{i})$ denote the expected number of re-samplings triggered by constraint $c_{i}$; the total expected running time is then bounded by:
$\mathbb{E}(T)\leq\sum_{i=1}^{n}\mathbb{E}(T_{i})$
Since the constraints play symmetric roles, the question comes down to computing each individual expectation $\mathbb{E}(T_{i})$. Let $S_{1},\ldots,S_{T}$ be any record of the algorithm that successfully terminates, and let $T_{i}(S_{1},\ldots,S_{T})$ be the total number of re-samplings related to constraint $c_{i}$ throughout this record. Based on Lemma 5, we have:
$\displaystyle\mathbb{E}(T_{i})$ $\displaystyle=\sum_{S_{1},\ldots,S_{T}}\mathbb{P}(S_{1},\ldots,S_{T})T_{i}(S_{1},\ldots,S_{T})$
So far, we have presented the original part of our proof; we discuss the difference from the existing proof at the end of this section. The rest of the computation can be done in the same way as in Guo, Jerrum, and Liu (2019), so we cite the necessary intermediate steps from the existing work to complete the running time analysis.
###### Lemma (Guo, Jerrum, and Liu (2019) Lemma 12).
Let $q_{\emptyset}$ be the probability that all the constraints are satisfied, and let $q_{\{c_{j}\}}$ denote the probability that only constraint $c_{j}$ is broken while all the others hold. If $q_{\emptyset}>0$, then $\mathbb{E}(T_{j})={q_{\{c_{j}\}}}/{q_{\emptyset}}$.
After incorporating our fix, we can conclude the upper bound on the expected
running time in Theorem 2.
###### Theorem 2 (Guo, Jerrum, and Liu (2019) Theorem 13).
Under Condition 1, the expected total number of re-samplings throughout the algorithm is $\frac{1}{q_{\emptyset}}\sum_{j=1}^{L}q_{\{c_{j}\}}$.
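For Example 1 in the uniform setting, the quantities appearing in Theorem 2 can be computed by brute-force enumeration; the sketch below is an illustration (the lambda-based clause encoding is our own):

```python
from itertools import product

# Example 1 in the uniform setting: each variable is 1 with probability 1/2.
c1 = lambda x: bool(x[0] or x[1])          # c1 = (X1 or X2)
c2 = lambda x: bool((not x[0]) or x[2])    # c2 = (not X1 or X3)

assignments = list(product([0, 1], repeat=3))
p = 1 / len(assignments)                   # each assignment has probability 1/8

q_empty = sum(p for x in assignments if c1(x) and c2(x))    # all satisfied
q_c1 = sum(p for x in assignments if not c1(x) and c2(x))   # only c1 broken
q_c2 = sum(p for x in assignments if c1(x) and not c2(x))   # only c2 broken

# Theorem 2: bound on the expected total number of re-samplings
bound = (q_c1 + q_c2) / q_empty
print(q_empty, q_c1, q_c2, bound)
```

Here $q_{\emptyset}=1/2$ and $q_{\{c_{1}\}}=q_{\{c_{2}\}}=1/4$, so the expected number of re-samplings is bounded by 1.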
### Difference to the Existing Proof
The main difference between the above proof and the existing proof in (Guo, Jerrum, and Liu 2019, Theorem 13) is that, based on Equations (32) and (29), we show
$\mathbb{P}(S_{t}|S_{1},\ldots,S_{t-1})=\mathbb{P}(S_{t}|\wedge_{c_{i}\in\mathcal{C}\backslash\Gamma(S_{t-1})}c_{i})$
In Guo, Jerrum, and Liu (2019), the first step of their Equation (9) cannot hold without the above equality; the original paper uses this result directly without providing enough justification.
## Appendix C Constrained MRF Model
### Single Variable Form of Constrained MRF
Here we provide an example of transforming an MRF with pairwise and single-variable potential terms into single-variable form by introducing extra variables. Given random variables $X_{1},X_{2},X_{3}$, consider the following example MRF model:
$\displaystyle\phi_{\theta}(x_{1},x_{2},x_{3})$
$\displaystyle=\theta_{1}x_{1}+\theta_{2}x_{2}+\theta_{3}x_{1}x_{2}$
$\displaystyle P_{\theta}(x)$
$\displaystyle=\frac{\exp(\phi_{\theta}(x_{1},x_{2},x_{3}))}{Z({\theta})}$
In the above formula, we have a cross term $x_{1}x_{2}$. Two Boolean variables have 4 different joint assignments in total, so we can construct 4 extra Boolean variables to encode them. To illustrate, we introduce extra random variables $\hat{X}_{00}$, $\hat{X}_{01}$, $\hat{X}_{10}$, $\hat{X}_{11}$, together with extra constraints: when $X_{1}=0,X_{2}=0$, the extra variables must take the values $\hat{X}_{00}=1$, $\hat{X}_{01}=0$, $\hat{X}_{10}=0$, $\hat{X}_{11}=0$. See the remaining constraints in Table 5.
$X_{1}$ | $X_{2}$ | $\hat{X}_{00}$, $\hat{X}_{01}$, $\hat{X}_{10}$, $\hat{X}_{11}$
---|---|---
$0$ | $0$ | $1,0,0,0$
$0$ | $1$ | $0,1,0,0$
$1$ | $0$ | $0,0,1,0$
$1$ | $1$ | $0,0,0,1$
Table 5: 4 constraints for converting pairwise terms in the potential function
into single variable form.
Then the new potential function, over the extended variables and subject to the pairwise-to-single-variable constraints $\mathcal{C}$, is reformulated as:
$\displaystyle\hat{\phi}_{\theta}(x_{1},x_{2},x_{3},\hat{x}_{00},\hat{x}_{01},\hat{x}_{10},\hat{x}_{11})$ $\displaystyle=\theta_{1}x_{1}+\theta_{2}x_{2}+\theta_{3}\hat{x}_{11}$
$\displaystyle P_{\theta}(x|\mathcal{C})$ $\displaystyle=\frac{\exp(\hat{\phi}_{\theta}(x_{1},x_{2},x_{3},\hat{x}_{00},\hat{x}_{01},\hat{x}_{10},\hat{x}_{11}))}{Z_{\mathcal{C}}(\theta)}$
since $x_{1}x_{2}=1$ exactly when $\hat{x}_{11}=1$ under the constraints of Table 5.
Note that the newly added constraints do not affect Condition 1. The single-variable transformation of MRFs originates from Sang, Beame, and Kautz (2005) and is therefore not considered part of our contribution.
### Gradient of $\log$-Partition Function $\nabla\log Z_{\mathcal{C}}(\theta)$
We use the chain rule to give a detailed derivation of Equation (4):
$\displaystyle\nabla\log Z_{\mathcal{C}}(\theta)$ $\displaystyle=\frac{\nabla Z_{\mathcal{C}}(\theta)}{Z_{\mathcal{C}}(\theta)}=\frac{1}{Z_{\mathcal{C}}(\theta)}\nabla\sum_{x\in\mathcal{X}}\exp\left(\phi_{\theta}(x)\right)C(x)=\sum_{x\in\mathcal{X}}\frac{\exp(\phi_{\theta}(x))C(x)}{Z_{\mathcal{C}}(\theta)}{\nabla\phi_{\theta}(x)}$
$\displaystyle=\sum_{x\in\mathcal{X}}P_{\theta}(x|\mathcal{C})\nabla\phi_{\theta}(x)=\mathbb{E}_{x\sim P_{\theta}(\tilde{x}|\mathcal{C})}\left({\nabla\phi_{\theta}(x)}\right)$ (33)
The above result shows that the gradient of the constrained log-partition function equals the expectation of the gradient of the potential function $\nabla\phi_{\theta}$ under the model's distribution $P_{\theta}(\tilde{x}|\mathcal{C})$. Therefore, we transform the gradient estimation problem into the problem of sampling from the current MRF model.
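Equation (33) can be verified numerically on a small constrained model. The sketch below (our own illustration, reusing the constraints of Example 1 and an arbitrary $\theta$) computes $\nabla\log Z_{\mathcal{C}}(\theta)$ by enumerating all states:

```python
import numpy as np
from itertools import product

# Small constrained MRF over X1..X3 with single-variable potential
# phi_theta(x) = theta . x and the constraints of Example 1.
theta = np.array([0.5, -0.2, 0.8])
C = lambda x: int((x[0] or x[1]) and ((not x[0]) or x[2]))

states = [np.array(s, dtype=float) for s in product([0, 1], repeat=3)]
weights = np.array([np.exp(theta @ s) * C(s) for s in states])
Z = weights.sum()

# grad_theta phi_theta(x) = x here, so Equation (33) reads
# grad log Z_C(theta) = E_{P(x|C)}[x]
grad = sum((w / Z) * s for w, s in zip(weights, states))
print(np.round(grad, 4))
```

The same vector is obtained by differentiating $\log Z_{\mathcal{C}}(\theta)$ numerically, which is exactly the identity used to justify sampling-based gradient estimation.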
## Appendix D Experiment Settings and Configurations
### Implementation Details
#### Implementation of Nelson
The proposed sampler can be implemented with NumPy, PyTorch, or JAX. We further offer a "batch version" implementation, which draws a batch of samples in parallel on a GPU. The batched sampler is useful for tasks that require a huge number of samples to estimate the gradient with a small approximation error.
In Section Nelson: Neural Lovász Sampler, we defined the assignment vector $x^{t}=(x^{t}_{1},\dots,x^{t}_{n})$, where $x_{i}^{t}$ is the assignment of variable $X_{i}$ in the $t$-th round of Algorithm 1; $x^{t}_{i}=1$ denotes that variable $X_{i}$ takes value $1$ (true). In the batch version, we instead define a matrix holding a batch of assignments. Let $b$ be the batch size; then
$x^{t}=\begin{bmatrix}x^{t}_{11}&\dots&x^{t}_{1n}\\ \vdots&\ddots&\vdots\\ x^{t}_{b1}&\dots&x^{t}_{bn}\\ \end{bmatrix}$
In the following part, we provide the detailed computation pipeline for the
batch version of the proposed algorithm.
#### Initialization
The first step is to sample an initial assignment of $X$ from the given marginal probability vector $P$:
$x^{1}_{li}=\begin{cases}1&\text{if }u_{li}>P_{i},\\ 0&\text{otherwise},\end{cases}\qquad 1\leq i\leq n,\ 1\leq l\leq b$ (34)
Here $u_{li}$ is sampled from the uniform distribution over $[0,1]$.
#### Check Constraint Satisfaction
The second step extracts which constraints are violated. Given an assignment $x^{t}$ at round $t\geq 1$, the tensor $W$, and the bias tensor $b$, the tensor $Z^{t}$ is computed as:
$Z^{t}_{ljk}=\sum_{i=1}^{n}W_{jki}x_{li}^{t}+b_{ljk},$
This multiplication between a tensor and a matrix can be efficiently implemented with the Einstein summation (see https://github.com/dgasmith/opt_einsum). Note that $Z^{t}_{ljk}=1$ indicates that, for the $l$-th batched assignment $x_{l}$, the $k$-th literal of the $j$-th clause is true (takes value $1$). Next, we compute $S^{t}_{lj}$ as:
$\displaystyle S^{t}_{lj}$ $\displaystyle=1-\max_{1\leq k\leq K}Z^{t}_{ljk},\quad\text{ for }1\leq j\leq L,1\leq l\leq b$
Here $S^{t}_{lj}=1$ indicates that $x_{l}^{t}$ violates the $j$-th clause. We can check whether $\sum_{l=1}^{b}\sum_{j=1}^{L}S^{t}_{lj}\neq 0$ to see if any clause is violated for the current batch of assignments, which corresponds to $\prod_{l=1}^{b}C(x_{l})=0$.
#### Extract Variables in Violated Clauses
We extract all the variables that require resampling based on the matrix $S^{t}$ computed in the last step. The resampling indicator matrix $A^{t}$ is computed as:
$A^{t}_{li}=\mathbf{1}\left(\sum_{j=1}^{L}{S_{lj}^{t}}V_{ji}\geq 1\right),\quad\text{ for }1\leq i\leq n,1\leq l\leq b$
where $\sum_{j=1}^{L}{S_{lj}^{t}}V_{ji}\geq 1$ implies that $X_{li}$ requires resampling.
#### Resample
Given the marginal probability vector $P$, resample indicator matrix $A^{t}$
and assignment matrix $x^{t}$, we draw a new random sample $x^{t+1}$.
$x_{li}^{t+1}=\begin{cases}(1-A^{t}_{li})x_{li}^{t}+A^{t}_{li}&\text{if
}u_{{li}}>P_{i},\\\
(1-A^{t}_{li})x_{li}^{t}&\text{otherwise}.\end{cases}\quad\text{ for }1\leq
i\leq n,1\leq l\leq b$
where $u_{li}$ is drawn from the uniform distribution in $[0,1]$.
GPUs are efficient at tensor, matrix, and vector operations but slow at processing for-loops, so drawing a batch of samples with the above extended computational pipeline is much faster than looping over the single-sample pipeline of Section Nelson: Neural Lovász Sampler.
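The four batched steps above (initialization, constraint check, variable extraction, resampling) can be sketched end-to-end in NumPy. The particular clause encoding of $W$, $b$, and $V$ below (one $\pm 1$ weight and bias per literal slot) is our own assumption for illustration; the text does not fix a concrete encoding:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example CNF: c1 = (X1 or X2), c2 = (not X1 or X3),
# written as (variable index, sign) pairs per literal slot.
clauses = [[(0, 1), (1, 1)], [(0, 0), (2, 1)]]
n, L, K, batch = 3, 2, 2, 16
P = np.full(n, 0.5)  # threshold vector used in Eq. (34) and in resampling

W = np.zeros((L, K, n))
B = np.zeros((L, K))
V = np.zeros((L, n))  # V[j, i] = 1 iff variable i occurs in clause j
for j, clause in enumerate(clauses):
    for k, (i, sign) in enumerate(clause):
        # positive literal: Z = x_i; negative literal: Z = 1 - x_i
        W[j, k, i], B[j, k] = (1.0, 0.0) if sign else (-1.0, 1.0)
        V[j, i] = 1.0

x = (rng.random((batch, n)) > P).astype(float)      # initialization, Eq. (34)
for _ in range(1000):                               # T_tryout cap
    Z = np.einsum('jki,li->ljk', W, x) + B          # literal truth values
    S = 1 - Z.max(axis=2)                           # S[l, j] = 1: clause j violated
    if S.sum() == 0:                                # every batched sample is valid
        break
    A = (S @ V >= 1).astype(float)                  # variables to resample
    x = (1 - A) * x + A * (rng.random((batch, n)) > P)
```

Every row of the final `x` satisfies both clauses; replacing the NumPy calls with their PyTorch or JAX counterparts moves the whole loop onto a GPU.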
The sampler involves one hyper-parameter $T_{\textit{tryout}}$: Nelson terminates once it reaches $T_{\textit{tryout}}$ re-samplings. This practice is commonly used to handle randomized programs that might run forever in rare cases.
Figure 4: Implementation pipeline of the Nelson-CD algorithm with $m=1$. The
proposed Nelson can be efficiently adapted to a Pytorch-based machine learning
library and enforces constraint satisfaction during learning.
#### Implementation of Algorithm 2
We first use the constraints $\mathcal{C}$ and the parameters $\theta^{t}$ to build the current Nelson module. Then we draw $m$ samples $\{\tilde{x}^{j}\}_{j=1}^{m}$ from the Nelson module and randomly draw $m$ samples $\{x^{j}\}_{j=1}^{m}$ from the dataset. We then compute the potential values of the two sets of inputs, i.e., $\{\phi_{\theta}(\tilde{x}^{j})\}_{j=1}^{m}$ and $\{\phi_{\theta}(x^{j})\}_{j=1}^{m}$. PyTorch would be slow if we computed each potential's gradient using a for-loop. To bypass this problem, we instead compute the following:
$\overline{\ell_{\mathcal{C}}(\theta)}=\frac{1}{m}\sum_{j=1}^{m}\phi_{\theta}(x^{j})-\frac{1}{m}\sum_{j=1}^{m}\phi_{\theta}(\tilde{x}^{j}).$
(35)
Following that, we call the PyTorch library's gradient function, which computes exactly
$\displaystyle\nabla\overline{\ell_{\mathcal{C}}(\theta)}$ $\displaystyle=\nabla\left(\frac{1}{m}\sum_{j=1}^{m}\phi_{\theta}(x^{j})-\frac{1}{m}\sum_{j=1}^{m}\phi_{\theta}(\tilde{x}^{j})\right)=\frac{1}{m}\sum_{j=1}^{m}\nabla\phi_{\theta}(x^{j})-\frac{1}{m}\sum_{j=1}^{m}\nabla\phi_{\theta}(\tilde{x}^{j})$
Note that $\nabla\overline{\ell_{\mathcal{C}}(\theta)}$ recovers the result in Equation (4). Finally, we update the parameters $\theta$. The proposed Nelson module and the neural network reside on the same GPU device, which allows us to exploit the parallel computing power of modern GPUs and removes the time for data transfer between CPU and GPU. See Figure 4 for a visual overview of the PyTorch implementation.
### Learn Random K-SAT Solutions with Preference
#### Task Definition
We are given a training set $\mathcal{D}$ containing some preferred
assignments $\mathcal{D}=\\{x^{j}\\}_{j=1}^{N}$ for the corresponding CNF
formula $c_{1}\wedge\ldots\wedge c_{L}$. We require the CNF formula to be
true. This means, by the definition of CNF formulas, that every clause has to
be satisfied. These clauses become our set of constraints. Under the
constrained MRF model, the learning task is to maximize the log-likelihood of
the assignments seen in the training set $\mathcal{D}$. The inference task is
to generate valid solutions from the learned model’s distribution (Dodaro and
Previti 2019; Rosa, Giunchiglia, and O’Sullivan 2011).
#### Dataset
We denote the number of Boolean variables in $K$-SAT as the "problem size". We consider several datasets of different problem sizes generated from CNFGen (https://github.com/MassimoLauria/cnfgen; Lauria et al. 2017) random $K$-SAT functions. $K$ is fixed at $5$; the numbers of variables and clauses are kept equal, ranging from $10$ to $1500$. We generate $100$ different CNF formulas for every problem size. To generate the training set $\mathcal{D}$, we use the Glucose4 solver from the PySAT library (https://pysathq.github.io/; Ignatiev, Morgado, and Marques-Silva 2018) to randomly generate $200$ assignments as the preferred assignments for every formula.
We do not consider datasets such as SATLIB and the SAT competitions, mainly because these are hard instances with a much larger input space but a limited number of solutions; Nelson would generally take exponential time to find these solutions, like finding needles in a haystack. Moreover, using neural networks to learn such a limited set of assignments is straightforward, since one can simply hard-wire the network to memorize all the valid assignments. The main purpose of this work is to let a constrained MRF learn a representation of the underlying preference pattern, not to create a neural solver that can generate valid assignments for any CNF formula. Thus, we restrict ourselves to easy formulas for which obtaining valid solutions is easy.
### Learn Sink-Free Orientation in Undirected Graphs
#### Task Definition
In graph theory, a sink-free orientation of an undirected graph is a choice of orientation for each edge such that every vertex has at least one outgoing edge (Cohn, Pemantle, and Propp 2002). It has wide applications in robotics routing and IoT network configuration (Takahashi et al. 2009). The constraints for this problem require that every vertex has at least one outgoing edge after orientation. As stated in (Guo, Jerrum, and Liu 2019), these constraints satisfy Condition 1.
See Figure 5 for an example graph and one possible sink-free edge orientation. We define binary variables $X_{1},\ldots,X_{m}$ and associate variable $X_{i}$ with edge $e_{i}$ for $1\leq i\leq m$. Variable $X_{i}$ takes value $1$ if its edge is oriented as $v_{i}\to v_{j}$ with $i<j$; otherwise, $X_{i}$ takes value $0$. The constraints are:
$\mathcal{C}=(X_{1}\vee X_{2})\wedge(\neg X_{1}\vee X_{3}\vee X_{4})\wedge(\neg X_{2}\vee\neg X_{3}\vee X_{5})\wedge(\neg X_{4}\vee\neg X_{5})$
where constraint $c_{1}=(X_{1}\vee X_{2})$ corresponds to vertex $v_{1}$, constraint $c_{2}=(\neg X_{1}\vee X_{3}\vee X_{4})$ corresponds to vertex $v_{2}$, constraint $c_{3}=(\neg X_{2}\vee\neg X_{3}\vee X_{5})$ corresponds to vertex $v_{3}$, and constraint $c_{4}=(\neg X_{4}\vee\neg X_{5})$ corresponds to vertex $v_{4}$. The orientation assignment matrix $x$ shown in Figure 5(b) corresponds to $X_{1}=1,X_{2}=1,X_{3}=1,X_{4}=0,X_{5}=1$.
The adjacency matrix $A$ of Figure 5(a) and the orientation matrix $x$ of Figure 5(b) are:
$A=\begin{pmatrix}0&1&1&0\\ 1&0&1&1\\ 1&1&0&1\\ 0&1&1&0\end{pmatrix},\qquad x=\begin{pmatrix}0&1&1&0\\ 0&0&1&0\\ 0&0&0&1\\ 0&1&0&0\end{pmatrix}$
with rows and columns indexed by $v_{1},\ldots,v_{4}$.
Figure 5: (a) An undirected graph $G(V,E)$ where the vertices are $V=\{v_{1},v_{2},v_{3},v_{4}\}$ and the undirected edges are $E=\{e_{1}=(v_{1},v_{2}),e_{2}=(v_{1},v_{3}),e_{3}=(v_{2},v_{3}),e_{4}=(v_{2},v_{4}),e_{5}=(v_{3},v_{4})\}$. (b) A possible sink-free orientation of the edges in the graph and its matrix representation $x$, where every vertex has at least one outgoing edge.
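The vertex constraints can be checked mechanically by deriving each clause from the orientation convention ($X_{i}=1$ iff edge $e_{i}$ points away from its lower-indexed endpoint). The sketch below is our own illustration; the helper `satisfies` is hypothetical:

```python
from itertools import product

# Vertex constraints of the example graph, written as clauses over
# (variable index, required value) pairs; X_i maps to index i - 1.
clauses = [
    [(0, 1), (1, 1)],            # v1: X1 or X2
    [(0, 0), (2, 1), (3, 1)],    # v2: not X1 or X3 or X4
    [(1, 0), (2, 0), (4, 1)],    # v3: not X2 or not X3 or X5
    [(3, 0), (4, 0)],            # v4: not X4 or not X5
]

def satisfies(x):
    """True iff orientation x leaves no vertex without an outgoing edge."""
    return all(any(x[i] == v for i, v in clause) for clause in clauses)

# Orientation of Figure 5(b): X1=1, X2=1, X3=1, X4=0, X5=1
print(satisfies((1, 1, 1, 0, 1)))
n_valid = sum(satisfies(x) for x in product([0, 1], repeat=5))
print(n_valid, "of the 32 orientations are sink-free")
```

Enumerating all $2^{5}$ orientations this way gives the full set of sink-free orientations of the example graph.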
#### Notations
Let $G(V,E)$ be an undirected graph; its adjacency matrix $A$, which represents the graph connectivity, is:
$A_{ij}=\begin{cases}{1}&\text{if }(v_{i},v_{j})\in E\\ 0&\text{otherwise}\end{cases}$ (36)
A possible assignment for the orientation of every edge can be represented as
a matrix $x\in\\{0,1\\}^{|V|\times|V|}$:
$x_{ij}=\begin{cases}1&\text{if the edge orientation is }v_{i}\to v_{j}\\\
0&\text{otherwise}\end{cases}$ (37)
In the constrained MRF model defined in Eq. (6), the potential function of one
orientation of all edges is
$\phi_{\theta}(x)=\sum_{i=1}^{|V|}\sum_{j=1}^{|V|}\theta_{ij}A_{ij}x_{ij}$
The single constraint for vertex $v_{k}$ is $c_{k}(x)=\mathbf{1}\left(\sum_{j=1}^{|V|}A_{kj}x_{kj}\geq 1\right)$, which fails exactly when vertex $v_{k}$ has no outgoing edge. The constraint function $C(x)$ is defined as $\prod_{k=1}^{|V|}c_{k}(x)$. In step 1 of Algorithm 1, edge $(v_{i},v_{j})$ picks the orientation $v_{i}\to v_{j}$ with probability:
$\frac{\exp(\theta_{ij})}{\exp(\theta_{ji})+\exp(\theta_{ij})}$
#### Dataset
We use the NetworkX package (https://networkx.org/) to generate random Erdős–Rényi graphs with edge probability $0.55$. The problem size refers to the number of vertices in the graph and ranges from 10 to 100; for each problem size, we generate 100 different random undirected graphs. We then convert each graph into CNF form using the above edge-variable conversion rule. Afterward, we follow the same processing steps as for the previous problem of learning a preferential solution distribution for random $K$-SAT.
### Learn Vehicle Delivery Routes
Given a set of locations to visit, the task is to generate a sequence that visits each location once and only once and closely resembles the trend presented in the training data. The training data are such routes collected in the past. The dataset is constructed from TSPLIB and consists of $29$ cities in Bavaria, Germany. In Figure 3, we see that Nelson can obtain samples for this delivery problem highly efficiently.
A possible travel plan can be represented as a matrix
$x\in\\{0,1\\}^{|V|\times|V|}$:
$x_{ij}=\begin{cases}1&\text{if edge }v_{i}\to v_{j}\text{ is selected}\\\
0&\text{otherwise}\end{cases}$ (38)
The constraints are that every routing plan should visit every location once
and only once.
Similarly, in the constrained MRF model defined in Eq. (6), the potential
function of the vehicle routing plan is
$\phi_{\theta}(x)=\sum_{i=1}^{|V|}\sum_{j=1}^{|V|}\theta_{ij}A_{ij}x_{ij}$
### Detailed Baselines Configuration
In terms of sampling-based methods, we consider:
* •
Gibbs sampler (Carter and Kohn 1994), a special case of MCMC that is widely used in training MRF models. In each step, the Gibbs algorithm samples one dimension from its conditional marginal distribution. We follow the implementation at https://github.com/Fading0924/BPChain-CD/blob/master/mrf.py.
* •
Weighted SAT samplers, including WAPS (https://github.com/meelgroup/waps; Gupta et al. 2019), WeightGen (https://bitbucket.org/kuldeepmeel/weightgen/src/master/; Chakraborty et al. 2014), and the XOR sampler (https://cs.stanford.edu/~ermon/code/srcPAWS.zip; Ermon et al. 2013a; Ding and Xue 2021).
* •
Uniform SAT samplers, including UniGen (https://github.com/meelgroup/unigen; Soos, Gocht, and Meel 2020), QuickSampler (https://github.com/RafaelTupynamba/quicksampler; Dutra et al. 2018), CMSGen (https://github.com/meelgroup/cmsgen; Golia et al. 2021), and KUS (https://github.com/meelgroup/KUS; Sharma et al. 2018).
Currently, there are only GPU-based SAT solvers (Prevot, Soos, and Meel 2021; Mahmoud 2022) and model counters (Fichte, Hecher, and Zisser 2019); GPU-based SAT samplers are not yet available.
### Detailed Definition of Evaluation Metrics
In terms of evaluation metrics, we consider
* •
Training time per epoch. The average time for the whole learning method to
finish one epoch with each sampler.
* •
Validness. The learned model is adopted to generate assignments and we
evaluate the percentage of generated assignments that satisfy the constraints.
* •
Mean Averaged Precision (MAP$@10$). This is a ranking-oriented metric that evaluates the closeness of the learned MRF distribution to the goal distribution. If the model learns the goal distribution of the training set, it will assign higher potential values to the assignments in the training set than to all remaining unseen assignments. Based on this principle, we randomly pick two sets of valid assignments: seen assignments from the training set and unseen assignments that are randomly generated. We rank these assignments by the value of the factor potential $\phi(x)$ and check how many preferred solutions fall into the top $10$ by computing
$\text{MAP}@10=\sum_{k=1}^{10}\frac{\#\text{preferred assignments among top-}k}{k}$
* •
$\log$-likelihood of assignments in the training set $\mathcal{D}$. The model
that attains the highest $\log$-likelihood learns the closest distribution to
the training set. Specifically, given a training set
$\mathcal{D}=\\{x^{k}\\}_{k=1}^{N}$ and parameters $\theta$, the log-
likelihood value is:
$\displaystyle{\frac{1}{N}\sum_{k=1}^{N}\log P_{\theta}(X=x^{k}|\mathcal{C})}$
$\displaystyle=\frac{1}{N}\sum_{k=1}^{N}\phi_{\theta}(x^{k})-\log
Z_{\mathcal{C}}(\theta)$ (39)
We use the ACE algorithm (http://reasoning.cs.ucla.edu/ace/moreInformation.html) to compute an approximate value of $\log Z_{\mathcal{C}}(\theta)$.
* •
Approximation Error of $\nabla\log Z_{\mathcal{C}}(\theta)$, that is the
$L_{1}$ distance between the exact gradient of $\log Z_{\mathcal{C}}(\theta)$
in Eq. (4) and the empirical gradient from the sampler. For small problem
sizes, we enumerate all $x\in\mathcal{X}$ to get the exact gradient and draw
samples $\\{\tilde{x}^{j}\\}_{j=1}^{m}$ with $m=2000$ from every sampler for
approximation.
$\Big{|}\underbrace{\sum_{x\in\mathcal{X}}\frac{\exp\left(\sum_{j=1}^{n}\theta_{j}x_{j}\right)C(x)}{Z_{\mathcal{C}}(\theta)}{x_{i}}}_{\text{{Exact gradient term}}}\ -\ \underbrace{\frac{1}{m}\sum_{j=1}^{m}\tilde{x}^{j}_{i}}_{\text{{Estimated gradient with sampler}}}\Big{|}$
For fixed parameter $\theta$, the best sampler would attain the smallest
approximation error.
### Hyper-parameter Settings
In the implementation of Nelson, we set the maximum tryout of resampling as
$T_{tryout}=1000$ for all the experiments and all the datasets.
For the hyper-parameters used in learning the constrained MRF, we set the
number of samples from the model to be $m=200$, the learning rate $\eta$ is
configured as $0.1$ and the total learning iterations are $T_{\max}=1000$.
(a) Uniform Case.
(b) Weighted Case.
Figure 6: The distribution of resampling steps in the Nelson and Algorithmic-
LLL (Moser and Tardos 2010). Both of them get a valid sample within
$T_{\mathit{tryouts}}$. Nelson takes far fewer resampling steps than Algorithmic-LLL because it resamples all violated clauses at every iteration, whereas Algorithmic-LLL resamples only one of them.
# Hit-and-run mixing via localization schemes
Yuansi Chen Duke University Ronen Eldan Microsoft Research, Redmond
###### Abstract
We analyze the hit-and-run algorithm for sampling uniformly from an isotropic
convex body $K$ in $n$ dimensions. We show that the algorithm mixes in time
$\tilde{O}(n^{2}/\psi_{n}^{2})$, where $\psi_{n}$ is the smallest
isoperimetric constant for any isotropic logconcave distribution, also known
as the Kannan-Lovasz-Simonovits (KLS) constant [KLS95]. Our bound improves
upon previous bounds of the form $\tilde{O}(n^{2}R^{2}/r^{2})$, which depend
on the ratio $R/r$ of the radii of the circumscribed and inscribed balls of
$K$, gaining a factor of $n$ in the case of isotropic convex bodies.
Consequently, our result gives a mixing time estimate for the hit-and-run
which matches the state-of-the-art bounds for the ball walk. Our main proof
technique is based on an annealing of localization schemes introduced in Chen
and Eldan [CE22], which allows us to reduce the problem to the analysis of the
mixing time on truncated Gaussian distributions.
## 1 Introduction
Sampling from a high dimensional distribution is a fundamental computational
problem in many fields, such as Bayesian statistics, machine learning,
statistical physics, and others involving stochastic models. A particularly
important class of high dimensional distributions consists of uniform
distributions over convex bodies. For example, the problem of sampling
uniformly from a convex body is closely related to that of efficiently
computing its volume, which is a fundamental problem in computer science and
has been extensively studied in the last three decades (see [DFK91, LS90,
AK91, LS93], the survey [Vem05] and the thesis [Cou17]). Besides the volume
computation, uniform sampling from a convex body can be seen as a special case
of sampling from truncated Gaussian distributions, which arise naturally in
Bayesian statistical models involving probit regression and censored data
[AC93, HH06].
The hit-and-run algorithm is a widely-used Markov-chain-based sampling method.
It was introduced by Smith in 1984 [Smi84] and is closely related to the
popular Gibbs sampler [Tur71]. In the case of uniform sampling from a convex
body, the hit-and-run algorithm works iteratively as follows. At each step, it
starts from a point $x$ inside the convex body, chooses a uniformly distributed random direction, and then samples a point uniformly from the segment formed by the intersection of the convex body with the line passing through $x$ in the chosen direction. This process is repeated until the chain is well-mixed.
The hit-and-run algorithm has been shown to mix rapidly for any log-concave
distribution [LV06a]. In particular, the works [Lov99, LV06b] show that the
hit-and-run algorithm mixes in $\tilde{O}(n^{2}R^{2}/r^{2})$ from a warm start
or any interior point, where $R$ and $r$ are the radii of the circumscribed
and inscribed balls of $K$. In the case of _isotropic_ convex bodies, it was
an open question whether the mixing time of the hit-and-run has to depend on
the ratio $R/r$, which is typically of the order $\sqrt{n}$ (see [LV18b]). By
isotropic, we mean that the uniform distribution over the convex body has mean
$0$ and covariance $\mathbb{I}_{n}$. Consequently, it was unknown whether the
hit-and-run algorithm mixes in $\tilde{O}(n^{2})$ steps for any isotropic
convex body. The main contribution of this paper is to provide a positive
answer to this question:
###### Theorem 1 (main theorem, informal).
Let $\mu$ be the uniform distribution on an $n$-dimensional isotropic convex
set $K$. Let $\nu$ be a measure obtained by running $t$ steps of the hit-and-
run chain starting from any measure whose density with respect to $\mu$ is at
most $M$. Then the total variation distance between $\mu$ and $\nu$ is at most
$\epsilon$ under the condition
$\displaystyle t\geq n^{2}\left(\frac{M\log(n)}{\epsilon}\right)^{O(1)}.$
A formal statement of this result can be found in Theorem 2 at the end of this
section. Our result makes two main contributions to the existing literature on
sampling from convex sets, as described next.
First, our result effectively matches the known mixing bound for the hit-and-
run walk with the state-of-the-art bound for the _ball walk_ , another well-
studied sampling algorithm. Given a convex set $K$ from which we want to
produce samples, the ball walk is the Markov chain whose step is defined as
follows: Given a starting point $x$, it chooses a point $y$ uniformly in a
ball of fixed radius around $x$ and moves to $y$ if $y$ is in $K$; otherwise,
it rejects the move and stays at $x$. It was shown in [KLS97] that the ball
walk mixes in $\tilde{O}(n^{2}R^{2}/r^{2})$ from a warm start, where $R$ and
$r$ are the radii of the circumscribed and inscribed balls of $K$. In addition
to the result which depends on the ratio $R/r$, the results in [KLS97] also
imply that the ball walk mixes in $\tilde{O}(n^{2}/\psi_{n}^{2})$ steps for an
isotropic convex body, where $\psi_{n}$ is the Kannan-Lovasz-Simonovits (KLS)
constant [KLS95]. Using the best known bound for $\psi_{n}$ [Che21, KL22,
JLV22] which is of order $\log^{-5}n$, the mixing time of the ball walk
becomes $\tilde{O}(n^{2})$. Thus, compared to the ball walk, our result gives
a matching bound in terms of the dimension $n$ dependency for sampling an
isotropic convex body using the hit-and-run. To the best of our knowledge, the
ball walk is currently the Markov chain with the best known mixing rate for
sampling a general isotropic convex body.
Second, our result is the first application of the _annealing with
localization schemes_ technique, which was put forth in [CE22], towards
sampling from continuous distributions. The main proof strategy is to use the
stochastic localization process [Eld13] in order to reduce the original mixing
time analysis to that of sampling from a _truncated Gaussian distribution_ ,
which is known to be well-behaved in many cases. While the high-level structure of the proof follows this technique, adapting it to the hit-and-run chain seems to require substantial additional work, resulting in a framework which we hope will become relevant for other chains as well.
Furthermore, in regard to the comparison with the ball walk, we would like to
highlight that unlike the ball walk which depends on a parameter indicating
the jump size (which corresponds to the radius of the ball), the hit-and-run
chain is more canonical in the sense that it does not depend on a “scale”
parameter. Given a membership oracle for an isotropic convex body, it is often not clear what the optimal choice of jump size should be; with no extra information about the convex body, one needs to assume the worst case (taking a jump size of $n^{-1/2}$). The hit-and-run chain, by construction, chooses the
(in a sense) optimal jump size for each point. Therefore, in light of our
bound, while for the worst-case example of an isotropic convex body the two
algorithms have comparable performance, there are cases of convex bodies for
which the hit-and-run algorithm is strictly faster than the ball-walk (this is
essentially true whenever the inscribed radius is much bigger than order $1$).
In other words, our bound effectively establishes that the hit-and-run walk is always at least as good as the ball walk for sampling from isotropic convex bodies, and in some cases it is strictly better.
##### Remark.
It is known that the hit-and-run also mixes rapidly from a _cold start_ ,
namely it can be started from any single interior point of the convex body
[LV06b]. Specifically, [LV06b] shows that the mixing time in this case is
$\tilde{O}\left(n^{2}\frac{R^{2}}{r^{2}}\text{polylog}(M)\right)$, hence the
dependence on the warmness parameter $M$ is poly-logarithmic rather than
polynomial. This leads to the open question of whether one can obtain a
$\tilde{O}(n^{2}\text{polylog}(M))$ mixing bound for the hit-and-run chain in
the case of isotropic convex bodies.
### 1.1 Related work
Sampling uniformly from a convex body is a well-studied problem in the
literature. The first polynomial-time algorithm for this problem was proposed
by Dyer, Frieze and Kannan [DFK91]. Many subsequent algorithms and improved
mixing times have been developed [LS93, KLS97]. The best known mixing time for
sampling a general convex body, whose circumscribed and inscribed balls have
radii $R$ and $r$, respectively, from a warm start is
$\tilde{O}(n^{2}R^{2}/r^{2})$, achieved by the ball walk in [KLS97]. The
results in [KLS97] also imply that the ball walk mixes in
$\tilde{O}(n^{2}/\psi_{n}^{2})$ for an isotropic convex body, where $\psi_{n}$
is the Kannan-Lovasz-Simonovits (KLS) constant [KLS95]. Sampling from a convex
body is closely related to the problem of efficiently computing its volume.
Faster mixing often leads to faster volume computation algorithms. For related
literature on volume computation, we refer the readers to [KLS97, LV06c, CV18,
JLLV21] and the references therein.
Since a convex polytope is a special case of a convex body, the problem of
sampling uniformly from a polytope is covered by algorithms which sample from
general convex bodies. However, the additional structure of a polytope also
allows for new algorithms with provably better mixing times [KN09, LV17,
LV18a, CDWY18, MV19, LLV20].
Specifically for the hit-and-run, [Lov99] shows that it mixes rapidly from a
constant-warm start. That is, in $\tilde{O}(n^{2}R^{2}/r^{2})$ steps, the
total variation distance between the current distribution and the stationary
distribution is at most a small constant. Here $R$ and $r$ are the radii of
the circumscribed and inscribed balls of the convex body, respectively. This
bound has the same order of magnitude as for the ball walk, so hit-and-run is
no worse when we assume a bound on $R/r$.
While the ball walk is known to mix slowly if started at a corner of the
convex body, the hit-and-run is known to mix rapidly from any single interior
point. [LV06b] shows that the hit-and-run mixes in
$\tilde{O}(n^{2}R^{2}/r^{2}\text{polylog}(M))$ from any $M$-warm start. So
even if $M$ is of order $n$ to a fixed degree, the mixing time remains in the
same order. This line of work has been extended in [LV06a] for sampling
general logconcave distributions with smoothness assumptions.
In terms of proof techniques, our proof uses the stochastic localization
process [Eld13] to transform the original uniform distribution to a simpler
truncated Gaussian distribution. The idea of using localization processes
towards proving mixing bounds for Markov chains is summarized in the framework
introduced in [CE22]. Localization schemes have also been applied to the
analysis of high dimensional distributions that arise in functional analysis,
convex and discrete geometry, combinatorics and mathematical physics (see the
survey [Eld22]). In particular, the use of the stochastic localization process
has led to the near-resolution of the Kannan, Lovász and Simonovits
conjecture, Bourgain’s hyperplane conjecture and the thin-shell conjecture in
convex geometry [Che21, KL22, JLV22].
### 1.2 Formal statement of the problem and results
Next, we make the required definitions towards the precise statement of our
main result. We assume that $K\subset\mathbb{R}^{n}$ is a convex body, which
is a compact convex set with nonempty interior.
##### Target distribution.
The distribution from which we want to sample is the uniform measure on the convex body $K\subset\mathbb{R}^{n}$,
$\displaystyle\mu(x)\propto\mathbf{1}_{K}(x),\quad\forall x\in\mathbb{R}^{n}.$
##### Isotropic position.
We say a measure $\nu$ on $\mathbb{R}^{n}$ is isotropic if
$\displaystyle{\mathbb{E}}_{X\sim\nu}[X]=0\text{ and
}\mathrm{Var}_{X\sim\nu}[X]=\mathbb{I}_{n}.$
A convex body is called isotropic if the corresponding uniform measure is
isotropic.
##### Hit-and-run Markov chain for a convex body.
The hit-and-run chain is defined by the following transition step: given the current state $u\in K$, we generate a unit vector $\theta\in\mathbb{R}^{n}$ sampled from the uniform measure on the unit sphere and consider the line $\ell:=\\{u+t\theta;~{}t\in\mathbb{R}\\}$. The next point is then chosen uniformly from the segment $\ell\cap K$.
##### Hit-and-run-transition kernel for a general target density.
Next, we give a more general definition of the hit-and-run chain. Given a
density $\nu$ with respect to the Lebesgue measure on $\mathbb{R}^{n}$, we
denote by $P_{u\to\cdot}(\nu)$ the hit-and-run one-step transition kernel with
starting point $u\in\mathbb{R}^{n}$ with respect to the underlying (target) density $\nu$, which is defined as follows: for any measurable set $A\subseteq\mathbb{R}^{n}$,
we have
$\displaystyle P_{u\to
A}(\nu):=\frac{2}{n\pi_{n}}\cdot\int_{A}\frac{\nu(x)dx}{\nu(\ell_{ux})\left|u-x\right|^{n-1}},$
(1)
where $\pi_{n}$ is the volume of the unit ball,
$\pi_{n}=\operatorname{vol}(\mathbb{B}^{n})=\frac{\pi^{n/2}}{\Gamma(n/2+1)}$,
with $\Gamma$ the gamma function, and $\nu(\ell_{ux})$ is the integral of $\nu$
along the line $\ell_{ux}$ through $u$ and $x$. Specifically, for $\ell_{ux}$
the line through $u$ and $x$, we define
$\displaystyle\nu(\ell_{ux}):=\int_{\ell_{ux}}\nu(v)\mathcal{H}_{1}(dv)$
where $\mathcal{H}_{1}$ is the one-dimensional Hausdorff measure. It is
straightforward to check that when taking $\nu$ to be the uniform measure on
$K$, this definition identifies with the previous one. Additionally, it is not
hard to see that the above chain is reversible, and its stationary
distribution is $\nu$ [LV06b].
##### Lazy chain.
Given a Markov chain with transition kernel $P$, we define its lazy variant $P^{\text{after-lazy}}$, which stays in the same state with probability at least $\frac{1}{2}$, as
$\displaystyle P^{\text{after-lazy}}_{x\to S}=\frac{1}{2}\delta_{x\to
S}+\frac{1}{2}P_{x\to S}.$
Here $\delta_{x\to\cdot}$ is the Dirac distribution at $x$. Since the lazy
variant only slows down the convergence rate by a constant factor, we study
lazy Markov chains in this paper for its convenience in theoretical analysis.
Next we introduce a few notions in order to quantify the mixing time of the
hit-and-run algorithm.
##### Total-variation distance.
We denote the total variation (TV) distance between two probability
distributions $\mathcal{P}_{1},\mathcal{P}_{2}$ by
$\displaystyle{\rm{d}}_{{\rm{TV}}}(\mathcal{P}_{1},\mathcal{P}_{2})=\sup_{A\in\mathfrak{B}(\mathbb{R}^{n})}\left|\mathcal{P}_{1}(A)-\mathcal{P}_{2}(A)\right|,$
where $\mathfrak{B}(\mathbb{R}^{n})$ is the Borel sigma-algebra on $\mathbb{R}^{n}$. If
$\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ admit densities $p_{1}$ and $p_{2}$
respectively, we may write
$\displaystyle{\rm{d}}_{{\rm{TV}}}(\mathcal{P}_{1},\mathcal{P}_{2})=\frac{1}{2}\int\left|p_{1}(x)-p_{2}(x)\right|dx.$
##### Warm start.
We say an initial distribution $\mu_{\rm{init}}$ is $M$-warm if it satisfies
$\displaystyle\sup_{S\in\mathfrak{B}(\mathbb{R}^{n})}\frac{\mu_{\rm{init}}(S)}{\mu(S)}\leq
M.$
If $M$ is a constant that does not depend on $n$, then we say
$\mu_{\rm{init}}$ is constant $M$-warm.
##### Mixing time.
For an error tolerance $\epsilon\in(0,1)$, the total variation distance
$\epsilon$-mixing time of the Markov chain $P$ with initial distribution
$\mu_{\rm{init}}$ and target distribution $\mu$ is defined as
$\displaystyle\mathfrak{t}_{\rm{mix}}(\epsilon,\mu_{\rm{init}},\mu):=\inf\left\\{k\in\mathbb{N}\mid{\rm{d}}_{{\rm{TV}}}\left(\mathcal{T}^{k}_{P}(\mu_{\rm{init}}),\mu\right)\leq\epsilon\right\\}.$
With the above definitions in hand, we state our main theorem. We are
interested in obtaining an upper bound for the mixing time of the hit-and-run
algorithm for sampling from an isotropic density $\mu\propto\mathbf{1}_{K}$ in
terms of the $\epsilon$-mixing time. Our main theorem reads:
###### Theorem 2.
Let $\mu$ be the uniform distribution on an $n$-dimensional isotropic convex
body $K\subset\mathbb{R}^{n}$. There exist universal constants $C,c>0$, such
that for any $M$-warm initial distribution $\mu_{\rm{init}}$ and any error
tolerance $\epsilon\in(0,1)$ such that $n\geq c\log\frac{M}{\epsilon}$, the
$\epsilon$-mixing time of the lazy hit-and-run is upper bounded as follows
$\displaystyle\mathfrak{t}_{\rm{mix}}\left(\epsilon,\mu_{\rm{init}},\mu\right)\leq
C\frac{n^{2}}{\psi_{n}^{2}}\left(\frac{M}{\epsilon}\right)^{11}\log^{5}\frac{M}{\epsilon}.$
The proof, together with an outline of the proof strategy, are provided in
Section 3.
## 2 Preliminaries
In this section, we introduce notation, background and preliminary results
needed for our proof.
### 2.1 Logconcavity and concentration
##### Logconcave density.
We say a density $\nu$ is logconcave if it satisfies
$\displaystyle\nu(x)^{\tau}\nu(y)^{1-\tau}\leq\nu(\tau x+(1-\tau)y),\quad\text{for }x,y\in\mathbb{R}^{n},\tau\in[0,1].$
For example, $\mu\propto\mathbf{1}_{K}$ with $K$ being a convex set is
logconcave.
##### Cheeger’s isoperimetric constant.
We define Cheeger's isoperimetric constant of a measure $\nu$ by
$\displaystyle\psi_{\nu}:=\inf_{A\subseteq\mathbb{R}^{n}}\left\\{\frac{\int_{\partial A}\nu}{\min\left\\{\nu(A),1-\nu(A)\right\\}}\right\\},$
and set
$\displaystyle\psi_{n}:=\inf_{\nu\text{ isotropic logconcave on }\mathbb{R}^{n}}\psi_{\nu}.$
It is known that $\psi_{n}\geq\log^{-5}(n)$ [KL22]. A closely related quantity is $\kappa_{n}>0$, defined as follows:
$\displaystyle\kappa_{n}^{2}:=\sup_{\nu\text{ isotropic logconcave}}\sup_{\theta\in\mathbb{S}^{n}}\left\\{\left|{\mathbb{E}}_{X\sim\nu}\left\langle X,\theta\right\rangle\left(X\otimes X\right)\right|^{2}\right\\},$
where the first supremum is taken over all isotropic logconcave measures on $\mathbb{R}^{n}$ and $\mathbb{S}^{n}$ is the unit sphere in $\mathbb{R}^{n}$. It is known in Eldan [Eld13]
that there exists a universal constant $C>0$ such that
$\frac{1}{\psi_{n}^{2}}\leq C\log n\cdot\kappa_{n}^{2}$.
### 2.2 Definitions related to geometric convexity
Let $K$ be a convex body (compact, convex, full-dimensional convex set) in $\mathbb{R}^{n}$. Denote by $\mathbb{B}^{n}(x,\tau)$ the ball with center $x$ and radius $\tau$ in $\mathbb{R}^{n}$. Let $\operatorname{vol}$ be the $n$-dimensional Lebesgue measure. Define
$\displaystyle\lambda(u,t):=\frac{\operatorname{vol}(K\cap\mathbb{B}^{n}(u,t))}{\operatorname{vol}(\mathbb{B}^{n}(0,t))},$
(2)
the fraction of a ball of radius $t$ centered around $u$ that intersects $K$.
Let $K_{r}$ be the set of points $x\in K$ with large $\lambda(x,2r)$, that is,
$\displaystyle K_{r}:=\left\\{x\in
K\mid\lambda\left(x,2r\right)\geq\frac{63}{64}\right\\}.$ (3)
As shown in [Lov99], the set $K_{r}$ is convex thanks to the Brunn-Minkowski
Theorem.
### 2.3 Definitions regarding Markov chains
We are interested in the problem of sampling from a target measure $\nu$ on $\mathbb{R}^{n}$. Given a Markov chain with transition kernel $P:\mathbb{R}^{n}\times\mathfrak{B}(\mathbb{R}^{n})\to\mathbb{R}_{\geq 0}$, where $\mathfrak{B}(\mathbb{R}^{n})$ denotes the Borel $\sigma$-algebra on $\mathbb{R}^{n}$, the $k$-step transition kernel $P^{k}$ is defined recursively by
$\displaystyle P^{k}_{x\to dy}=\int_{z\in\mathbb{R}^{n}}P^{k-1}_{x\to dz}P_{z\to dy}.$
##### Associated transition operator.
Let $\mathcal{T}_{P}$ denote the transition operator associated to the Markov
chain. It is defined as
$\displaystyle\mathcal{T}_{P}(\nu)(S):=\int_{y\in\mathbb{R}^{n}}d\nu(y)P(y,S),\quad\forall S\in\mathfrak{B}(\mathbb{R}^{n}).$
When $\nu$ is the distribution of the current state, $\mathcal{T}_{P}(\nu)$ is the distribution of the next state, and $\mathcal{T}_{P}^{k}(\nu):=\mathcal{T}_{P^{k}}(\nu)$ is the distribution of the state after $k$ steps.
##### Dirichlet form.
Let $\mathcal{L}_{2}(\nu)$ be the space of square integrable functions under
the measure $\nu$. The Dirichlet form
$\mathcal{E}_{P}:\mathcal{L}_{2}(\nu)\times\mathcal{L}_{2}(\nu)\to\mathbb{R}_{\geq 0}$
associated with the transition kernel $P$ is given by
$\displaystyle\mathcal{E}_{P}(f,g):=\frac{1}{2}\int\int\left(f(x)-f(y)\right)\left(g(x)-g(y)\right)P_{x\to
dy}d\nu(x).$
##### Truncated conductance.
For $s\in(0,1)$, we define the $s$-conductance $\Phi_{s}$ of the Markov chain
$P$ with its stationary measure $\nu$ as follows
$\displaystyle\Phi_{s}(P):=\inf_{S:s<\nu(S)<1-s}\frac{\int_{S}P_{x\to
S^{c}}d\nu(x)}{\min\left\\{\nu(S),\nu(S^{c})\right\\}-s}.$ (4)
When compared to conductance (the case $s=0$), $s$-conductance allows us to
ignore small parts of the distribution where the conductance is difficult to
bound. Note that, using the Dirichlet form notation, we can write
$\displaystyle\Phi_{s}(P)=\inf_{S:s<\nu(S)<1-s}\frac{\mathcal{E}_{P}(\mathbf{1}_{S},\mathbf{1}_{S})}{\min\left\\{\nu(S),\nu(S^{c})\right\\}-s}.$
The following lemma by Lovász and Simonovits [LS93] connects the
$s$-conductance with the mixing time of a Markov chain.
###### Lemma 3 (Corollary 1.5 in [LS93]).
Consider a reversible lazy Markov chain with transition kernel $P$ and
stationary distribution $\mu$. Let $\mu_{\rm{init}}$ be an $M$-warm initial
distribution. Let $0<s<\frac{1}{2}$. Then
$\displaystyle{\rm{d}}_{{\rm{TV}}}\left(\mathcal{T}_{P}^{N}(\mu_{\rm{init}}),\mu\right)\leq Ms+M\left(1-\frac{\Phi_{s}^{2}}{2}\right)^{N}.$
### 2.4 Background on the stochastic localization process
For a probability density $\nu$ on $\mathbb{R}^{n}$, define
$\displaystyle\mathbf{b}(\nu):=\int x\nu(x)dx,$
its center of mass.
Given a density $\mu$ on $\mathbb{R}^{n}$, we define the stochastic localization (SL) process ([Eld13]) with a positive semi-definite control matrix $C_{t}$, by
$\displaystyle\mu_{t}(x):=\frac{1}{Z(t,c_{t})}\exp\left(c_{t}^{\top}x-\frac{1}{2}\langle
B_{t}x,x\rangle\right)\mu(x),$ (5)
where $c_{t}$ and $B_{t}$ satisfy the following stochastic differential
equations:
$\displaystyle dc_{t}$
$\displaystyle=C_{t}dW_{t}+C_{t}^{2}\mathbf{b}(\mu_{t})dt$ $\displaystyle
dB_{t}$ $\displaystyle=C_{t}^{2}dt.$
Here $W_{t}$ is the standard Brownian motion and $C_{t}$ is any process
adapted to $W_{t}$ which takes values in the space of $n\times n$ matrices.
The existence and uniqueness of the solutions of the SDE is shown via the
standard existence and uniqueness results on SDEs (see e.g. Lemma 3 in
[Che21]).
It is known that $\mu_{t}$ satisfies the following SDE for $x\in\mathbb{R}^{n}$:
$\displaystyle
d\mu_{t}(x)=(x-\mathbf{b}(\mu_{t}))^{\top}C_{t}dW_{t}\mu_{t}(x).$ (6)
Define also
$\displaystyle
A_{t}:=\int(x-\mathbf{b}(\mu_{t}))(x-\mathbf{b}(\mu_{t}))^{\top}\mu_{t}(x)dx,$
(7)
the covariance matrix of $\mu_{t}$.
Note that running the SL process with starting measure $\mu\propto\mathbf{1}_{K}$ yields, at time $t$, a random density with the explicit form of a truncated Gaussian on a convex set.
##### Truncated Gaussian on a convex set.
For $m>0$ and $\beta\in\mathbb{R}^{n}$, define
$\displaystyle\nu_{\beta,m}(x):=\frac{e^{-\frac{m}{2}\left|x-\beta\right|^{2}}\mathbf{1}_{K}(x)}{\int_{\mathbb{R}^{n}}e^{-\frac{m}{2}\left|x-\beta\right|^{2}}\mathbf{1}_{K}(x)dx}.$
This is the Gaussian with mean $\beta\in\mathbb{R}^{n}$ and covariance $\frac{1}{m}\mathbb{I}_{n}$, restricted to the convex set $K$.
In light of Eq. (5), we have that for all $t\geq 0$, under the choice
$C_{s}=\mathbb{I}_{n}$ for all $s\in[0,t]$, the measure $\mu_{t}$ obtained by
the SL process has the form
$\displaystyle\mu_{t}=\nu_{c_{t}/t,t}.$ (8)
The following is a special case of a classical result by Brascamp and Lieb
[BL02]:
###### Theorem 4.
One has $\mathrm{Cov}(\nu_{\beta,m})\preceq\frac{1}{m}\mathbb{I}_{n}$.
It follows that, for every $t>0$, assuming the choice $C_{s}=\mathbb{I}_{n}$ for $s\in[0,t)$, one has almost surely
$A_{t}\preceq\frac{1}{t}\mathbb{I}_{n}.$ (9)
### 2.5 Other notation
We use $\gamma:\mathbb{R}\to\mathbb{R}_{+}$ to denote the standard Gaussian density function and $\Gamma:\mathbb{R}\to[0,1]$ to denote the standard Gaussian cumulative distribution function.
We use $\mathcal{N}(b,\sigma^{2})$ to denote the Gaussian measure with mean
$b$ and variance $\sigma^{2}$. For $p$ and $\tilde{p}$ two measures,
$p*\tilde{p}$ denotes the convolution of the two.
We use big-O notation $O(\cdot)$ to denote asymptotic upper bounds which
ignore all constants. For example, we write $g_{1}(n)=O(g_{2}(n))$ if there
exists a universal constant $c>0$ such that $g_{1}(n)\leq cg_{2}(n)$ when $n$
is larger than a universal constant. We use $\tilde{O}(\cdot)$ to denote
asymptotic upper bounds which ignore both constants and poly-logarithmic
factors on the parameters involved.
## 3 Proof of the main theorem
We prove Theorem 2 by bounding the $s$-conductance via Lemma 3. Roughly speaking, for a constant $M$-warm initial distribution, taking
$\displaystyle s=\frac{\epsilon}{2M},\qquad N\geq\frac{2}{\Phi_{s}^{2}}\log\frac{2M}{\epsilon}$
results in a mixing time of $\frac{2}{\Phi_{s}^{2}}\log\frac{2M}{\epsilon}$.
As a consequence, the main focus of the proof is in lower bounding the
$s$-conductance.
Unlike [Lov99], we do not bound the $s$-conductance directly, as that requires
an argument that depends in rather intricate ways on the geometry of the
convex set $K$. Instead, we employ an annealing of localization schemes
introduced in [CE22] which attempts to reduce the analysis to a simpler case
in which the measure is localized, in the sense that the uniform measure on
$K$ is multiplied by a Gaussian density with small variance.
This is done by applying the stochastic localization (SL) process defined in Subsection 2.4 to the measure $\mu$. Given $\mu\propto\mathbf{1}_{K}$, we
consider the process $(\mu_{t})_{t\geq 0}$ defined by equation (5) with the
choice $C_{t}=\mathbb{I}_{n}$ up to time $T$. We fix the choice of the time
$T=n$.
The annealing technique boils down to the following two main steps:
1. 1.
Show that for a fixed set $E$ whose measure is bounded away from $0$ and $1$,
the quantity $\mu_{t}(E)$ is also bounded away from those values with non-
negligible probability. This behavior is referred to in [CE22] as _approximate
conservation of variance_. Under this condition, the conductance for the
transition kernel at time $0$ can be lower-bounded in terms of the conductance
of the transition kernel at time $T$, so that one only needs to give a lower
bound on the latter quantity.
2. 2.
Bound the conductance of the hit-and-run chain which corresponds to the
measure $\mu_{T}$ which, according to Eq. (8), is a Gaussian with variance
$\tfrac{1}{T}\mathbb{I}_{n}$ restricted to the set $K$. This is an easier task
than the analysis of the original conductance, since this measure is typically
localized well-inside the convex body $K$, so that a step of hit-and-run is
hardly affected by its boundary.
The first of the two steps highlighted above is captured by the following
lemma.
###### Lemma 5.
Let $K$ be an isotropic convex body in $\mathbb{R}^{n}$ and let $\mu_{T}$ be
defined as above. Let $\zeta>0$ and let $K_{r}$ be a convex subset of $K$ with
$\mu(K_{r})\geq 1-\zeta/100$. Then for any $E\subset K_{r}$ whose measure
$\mu(E)=\xi\in(0,1/2]$ satisfies $\zeta\leq c\xi^{2}/\sqrt{\log(10^{4}/\xi)}$,
we have that
$\displaystyle\mathbb{P}\left(\mu_{T}(E)(\mu_{T}(K_{r})-\mu_{T}(E))\geq\frac{1}{16}\cdot\mu(E)(\mu(K_{r})-\mu(E))\right)\geq\frac{c\xi^{3/2}}{\log\frac{1}{\xi}\sqrt{n\log(n)\kappa^{2}_{n}}},$
for a universal constant $c>0$.
The following lemma lower bounds the measure $\mu(K_{r})$ used in Lemma 5. See [Lov99] for a proof.
###### Lemma 6 (Lemma 2 in [Lov99]).
Suppose $K$ contains a unit ball. Let $\mu\propto\mathbf{1}_{K}$. Then
$\displaystyle\mu(K_{r})\geq 1-2\sqrt{n}r.$
In order to reduce the conductance of the original Markov chain to that of
$P_{\cdot\to\cdot}(\mu_{T})$, we also need the following lemma. Its proof
follows from [CE22, Proposition 48] and from the fact that the hit-and-run
chain is the Markov chain associated to the subspace-localization scheme
described in [CE22, Section 2.1].
###### Lemma 7.
Consider the process $(\mu_{t})_{t}$ defined above. Fix a function
$f\in\mathcal{L}_{2}(\mu)$. Let $P_{t}=P_{\cdot\to\cdot}(\mu_{t})$. Define
$\displaystyle D_{t}:=\mathcal{E}_{P_{t}}(f,f).$
Then $(D_{t})_{t\geq 0}$ is a super-martingale.
The next lemma, which corresponds to the second step described above, gives
the conductance for the hit-and-run on the “transformed” density $\mu_{T}$.
###### Lemma 8.
There exists a universal constant $c>0$ such that the following holds true.
Let $\nu_{\beta,n}$ be a probability measure defined as a truncated Gaussian
on a convex set, given by the formula
$\displaystyle\nu_{\beta,n}(x)\propto
e^{-\frac{n}{2}\left|x-\beta\right|^{2}}\mathbf{1}_{K}.$
Define $\Upsilon:=\left\\{u\in
K\mid\left|u-\beta\right|\in\left[\frac{1}{\sqrt{2}},\sqrt{2}\right]\right\\}$
and $\delta:=1-\nu_{\beta,n}(\Upsilon)$. Suppose $S_{1}\cup S_{2}$ is a
partition of $K$ and let $0<r\leq\frac{1}{16\sqrt{n}}$. Then we have
$\displaystyle\int_{S_{1}}P_{u\to
S_{2}}(\nu_{\beta,n})d\nu_{\beta,n}(x)\geq\frac{r^{2}\sqrt{n}}{c}\left[\nu_{\beta,n}(S_{1}\cap
K_{r})\cdot\nu_{\beta,n}(S_{2}\cap
K_{r})-8\left(1+\frac{32}{r}\right)\delta\right].$
Recall from Eq. (8) that $\mu_{t}$ is uniquely determined by the vector
$c_{t}$, and is given by the formula $\mu_{t}=\nu_{c_{t}/t,t}$. The next lemma
shows that at time $T=n$, the set $\Upsilon$ defined in Lemma 8 has large
measure with high probability.
###### Lemma 9.
Let $(\mu_{t})_{t}$ be the process defined above and let $(c_{t})_{t}$ be the
corresponding process which appears in Eq. (5), then at time $T=n$,
1. (i)
the random vector $c_{n}/n$ has the law
$\mu*\mathcal{N}(0,\frac{1}{n}\mathbb{I}_{n})$.
2. (ii)
there exists an event $\mathfrak{E}\subseteq\mathbb{R}^{n}$ with measure at least
$1-2e^{-\frac{n}{32}}$ under the law of $c_{n}/n$, such that for
$v\in\mathfrak{E}$, we have
$\displaystyle{\mathbb{P}}_{\mathbf{x}\sim\nu_{v,n}}\left(\frac{\sqrt{2}}{2}<\left|\mathbf{x}-v\right|<\sqrt{2}\right)\geq
1-e^{-\frac{n}{32}}.$
###### Proof of Theorem 2.
Fix the following parameters
$\displaystyle s$ $\displaystyle=\frac{\epsilon}{2M},$ $\displaystyle\zeta$
$\displaystyle=s^{2}/\sqrt{\log(10^{4}/s)}/10^{8},$ $\displaystyle r$
$\displaystyle=\frac{\zeta}{200\sqrt{n}},$ $\displaystyle T$
$\displaystyle=n.$ (10)
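As a consistency check on these choices (a minimal sketch; the values of $\epsilon$, $M$ and $n$ below are illustrative, not taken from the text), one can verify numerically that the resulting $r$ lies in the range $0<r\leq\frac{1}{16\sqrt{n}}$ required by Lemma 8:

```python
import math

def parameters(eps, M, n):
    """Compute s, zeta, r, T as fixed in Eq. (10)."""
    s = eps / (2 * M)
    zeta = s**2 / math.sqrt(math.log(1e4 / s)) / 1e8
    r = zeta / (200 * math.sqrt(n))
    T = n
    return s, zeta, r, T

# Illustrative values (hypothetical, chosen only for the check).
s, zeta, r, T = parameters(eps=0.01, M=1.0, n=1000)
# Lemma 8 requires 0 < r <= 1/(16*sqrt(n)); since zeta is tiny, this holds easily.
assert 0 < r <= 1 / (16 * math.sqrt(1000))
```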
Let $(\mu_{t})_{t}$ be the stochastic localization process applied to the
measure $\mu$. As highlighted above, we provide a lower bound for the
$s$-conductance of $\mu$ in terms of that of $\mu_{T}$, and then derive a
lower bound for the $s$-conductance of $\mu_{T}$.
According to Theorem 4.1 in [KLS95], for $K$ isotropic in $\mathbb{R}^{n}$, there exists a
point $x\in K$ such that $\mathbb{B}^{n}\left(x,1\right)\subseteq K$. Together
with Lemma 6, for the above choice of $r$, we obtain
$\displaystyle\mu(K_{r})\geq 1-\frac{\zeta}{100}.$
Let $S_{1}\cup S_{2}=K$ be a partition of $K$, with
$s\leq\mu(S_{1})\leq\frac{1}{2}$. Let $P_{t}=P_{\cdot\to\cdot}(\mu_{t})$ be
the hit-and-run transition kernel with the target density $\mu_{t}$. To lower
bound the $s$-conductance, we need to lower bound
$\displaystyle\mathcal{E}_{P_{0}}(\mathbf{1}_{S_{1}},\mathbf{1}_{S_{1}})=\int_{S_{1}}P_{x\to
S_{2}}(\mu)d\mu(x).$
Define
$\displaystyle\mathfrak{E}=\left\\{v\in\mathbb{R}^{n}\mid{\mathbb{P}}_{\mathbf{x}\sim\nu_{v,n}}\left(\frac{\sqrt{2}}{2}<\left|\mathbf{x}-v\right|<\sqrt{2}\right)\geq
1-e^{-\frac{n}{32}}\right\\}.$
According to Lemma 9 we have
${\mathbb{P}}\left(\frac{c_{n}}{n}\in\mathfrak{E}\right)\geq
1-2e^{-\tfrac{n}{32}}.$ (11)
Applying Lemma 8 with $r$ as above, $\beta=c_{n}/n$ and $\delta=e^{-n/32}$, we
obtain
$\displaystyle\int_{S_{1}}P_{x\to S_{2}}(\mu_{n})d\mu_{n}(x)$
$\displaystyle\geq\frac{r^{2}\sqrt{n}}{c^{\prime\prime\prime}}\left(\mu_{n}(S_{1}\cap
K_{r})\cdot\mu_{n}(S_{2}\cap
K_{r})-8\left(1+\frac{32}{r}\right)e^{-n/32}\right)\mathbf{1}_{\frac{c_{n}}{n}\in\mathfrak{E}}$
$\displaystyle\geq\left(\frac{c\zeta^{2}}{\sqrt{n}}\mu_{n}(S_{1}\cap
K_{r})\cdot\mu_{n}(S_{2}\cap
K_{r})-c^{\prime}\sqrt{n}e^{-n/32}\right)\mathbf{1}_{\frac{c_{n}}{n}\in\mathfrak{E}}.$
(12)
We have
$\displaystyle\quad\int_{S_{1}}P_{x\to S_{2}}(\mu)d\mu(x)$
$\displaystyle\overset{(i)}{\geq}{\mathbb{E}}\left[\int_{S_{1}}P_{x\to
S_{2}}(\mu_{n})d\mu_{n}(x)\right]$
$\displaystyle\overset{\eqref{eq:condbound}}{\geq}{\mathbb{E}}\left[\left(\frac{c\zeta^{2}}{\sqrt{n}}\mu_{n}(S_{1}\cap
K_{r})\cdot\mu_{n}(S_{2}\cap
K_{r})-c^{\prime}\sqrt{n}e^{-n/32}\right)\mathbf{1}_{\frac{c_{n}}{n}\in\mathfrak{E}}\right]$
$\displaystyle\overset{(ii)}{\geq}\frac{c\zeta^{2}}{\sqrt{n}}\left(\frac{c^{\prime\prime}s^{3/2}}{\log\frac{1}{s}\sqrt{n\log(n)\kappa_{n}^{2}}}-2e^{-n/32}\right)\cdot\mu(S_{1}\cap
K_{r})\cdot\left(\mu(K_{r})-\mu(S_{1}\cap
K_{r})\right)-c^{\prime}\sqrt{n}e^{-n/32}$
$\displaystyle\overset{(iv)}{\geq}C\cdot\frac{s^{5.5}}{n\log^{2}(\frac{1}{s})\kappa_{n}\sqrt{\log(n)}}\left[\min\left\\{\mu(S_{1}),\mu(S_{1}^{c})\right\\}-s/2\right],$
where $c,c^{\prime},c^{\prime\prime},c^{\prime\prime\prime},C$ are all
universal constants. Here (i) follows from Lemma 7, which claims that
$\left(\mathcal{E}_{P_{t}}(\mathbf{1}_{S_{1}},\mathbf{1}_{S_{1}})\right)_{t\geq
0}$ is a super-martingale. (ii) follows from Lemma 5 and Eq. (11). (iv)
follows because there exists a constant $c>1$ such that for $n\geq
c\log(\frac{M}{\epsilon})$ we have
$\sqrt{n\log(n)}\kappa_{n}e^{-n/32}\ll s^{2}$. The above calculation
establishes a lower bound on the $s$-conductance
$\displaystyle\Phi_{s}\geq
C\frac{s^{5.5}}{n\log^{2}(\frac{1}{s})\kappa_{n}\sqrt{\log(n)}}.$
According to [KL22], $\frac{1}{\psi_{n}^{2}}\leq C\log(n)\kappa_{n}^{2}$. We
conclude with Lemma 3.
##### Overview of the rest of the proof
In the rest of the proof, in Section 4 we prove Lemma 5, which shows the
approximate conservation of variance of any indicator function. In Section 5
we prove Lemma 8 which shows the conductance of the hit-and-run on the
transformed density, and Lemma 9 which shows that with high probability most
of the mass of the transformed density is concentrated in a shell.
## 4 Approximate conservation of variance
The goal of this section is to prove Lemma 5. To this end, we fix a set
$E\subset\mathbb{R}^{n}$; our aim is to show that with non-negligible
probability we have both that $\mu_{T}(E)$ is bounded away from $0$ and $1$
and that $\mu_{T}(K_{r})$ is close to $1$.
We have not been able to show this directly with respect to the SL process
with the choice $C_{t}=\mathbb{I}_{n}$. Instead, we consider a different
choice of driving matrix $C_{t}$ and stopping time (described below) which on
one hand makes the analysis more tractable and on the other hand ensures that
the resulting distribution of the random measure is the same as that of
$\mu_{T}$.
### 4.1 Construction of the three-stage process
The driving matrix $C_{t}$ is chosen so that the process has three different
stages, as follows.
Given $\mu\sim\mathbf{1}_{K}$, $E\subset\mathbb{R}^{n}$ such that
$\mu(E)=\xi\in(0,1/2]$, we run three stages of SL to obtain
$(\hat{\mu}_{t})_{t\geq 0}$ as follows
* •
Stage 1: Starting from $\mu$, run SL with $C_{t}=I_{n}$ from time $0$ to time
$T_{1}:=\frac{\xi}{\kappa_{n}^{2}\log(n)}$ to obtain
$(\hat{\mu}_{t})_{t\in[0,T_{1}]}$.
* •
Stage 2: Starting from $\hat{\mu}_{T_{1}}$, we choose a driving matrix $C_{t}$
which satisfies the following. There is a stopping time $T_{2}$ so that:
1. 1.
One has $\hat{\mu}_{T_{1}}(E)=\hat{\mu}_{T_{2}}(E)$ almost surely.
2. 2.
The matrix $n\mathbb{I}_{n}-\int_{0}^{T_{2}}C_{t}^{2}dt$ is positive
semi-definite and its rank is almost surely at most $1$.
* •
Stage 3: Starting from $\hat{\mu}_{T_{2}}$, and defining
$n\mathbb{I}_{n}-\int_{0}^{T_{2}}C_{t}^{2}dt=\lambda_{1}\theta\theta^{\top}$
where $|\theta|=1$, run SL with $C_{t}=\theta\theta^{\top}$ for a time period
$n-\lambda_{1}$.
Note that the driving matrix in Stage 2 is defined only implicitly. The exact
choice of driving matrix which satisfies the two conditions of this stage is
of no consequence for the rest of the proof; rather, it is only important to
establish the existence of such a choice. Roughly speaking, these two conditions
can be obtained by choosing $C_{t}=\text{Proj}_{H_{t}\cap v_{t}^{\perp}}$,
where $v_{t}=\int_{E}(x-\mathbf{b}(\hat{\mu}_{t}))d\hat{\mu}_{t}(x)$ and
$H_{t}$ is the image of the matrix $n\mathbb{I}_{n}-\int_{0}^{t}C_{s}^{2}ds$.
We refer the reader to [EKZ21, Lemma 2] for the exact construction.
Observe that it follows from the definition of the process that, at the end of
Stage $3$, one has almost surely
$\int_{0}^{T_{3}}C_{t}^{2}dt=n\mathbb{I}_{n}.$ (13)
Since $\hat{\mu}_{t}$ is a martingale and $T_{3}$ is a stopping time, applying
the optional stopping theorem yields
$\displaystyle{\mathbb{E}}[\hat{\mu}_{T_{3}}(x)]=\mu(x),~{}~{}\forall
x\in\mathbb{R}^{n}.$ (14)
According to Eq. (8), there exists a random variable
$\hat{\mathbf{y}}_{T_{3}}$ on $\mathbb{R}^{n}$ such that $\hat{\mu}_{T_{3}}$ takes the form
$\displaystyle\hat{\mu}_{T_{3}}(x)=\nu_{\hat{\mathbf{y}}_{T_{3}},n}(x)=\frac{1}{Z_{T_{3}}(\hat{\mathbf{y}}_{T_{3}})}\exp\left(-\frac{n}{2}\left|x-\hat{\mathbf{y}}_{T_{3}}\right|^{2}\right)\mu(x),~{}~{}~{}\forall
x\in\mathbb{R}^{n},$
where $Z_{T_{3}}(\hat{\mathbf{y}}_{T_{3}})$ is the normalizing constant.
Letting $p_{\hat{\mathbf{y}}_{T_{3}}}$ be the distribution of
$\hat{\mathbf{y}}_{T_{3}}$, Eq. (14) implies that
$\displaystyle\int_{\mathbb{R}^{n}}\nu_{y,n}(x)p_{\hat{\mathbf{y}}_{T_{3}}}(dy)=\mu(x),~{}~{}\forall
x\in\mathbb{R}^{n}.$
On the other hand, applying the same argument for $\mu_{T}$ which was obtained
via the SL process with driving matrix $C_{t}=\mathbb{I}_{n}$, there exists a
random variable $\mathbf{y}_{n}$ such that, almost surely
$\mu_{T}(x)=\frac{1}{Z(\mathbf{y}_{n})}\exp\left(-\frac{n}{2}\left|x-\mathbf{y}_{n}\right|^{2}\right)\mu(x)=\nu_{\mathbf{y}_{n},n}(x),~{}~{}\forall
x\in\mathbb{R}^{n}.$
Denoting the law of $\mathbf{y}_{n}$ by $p_{\mathbf{y}}$, the martingale
property yields
$\displaystyle\int_{\mathbb{R}^{n}}\nu_{y,n}(x)p_{\mathbf{y}}(dy)=\mu(x),~{}~{}\forall
x\in\mathbb{R}^{n}.$
We conclude that
$\int_{\mathbb{R}^{n}}\nu_{y,n}(x)p_{\hat{\mathbf{y}}_{T_{3}}}(dy)=\int_{\mathbb{R}^{n}}\nu_{y,n}(x)p_{\mathbf{y}}(dy)=\mu(x),~{}~{}\forall
x\in\mathbb{R}^{n}.$ (15)
The following lemma shows that $p_{\hat{\mathbf{y}}_{T_{3}}}=p_{\mathbf{y}}$,
which implies that $\mu_{T}$ and $\hat{\mu}_{T_{3}}$ have the same
distribution. Its proof is provided in Subsection 4.3.
###### Lemma 10.
Let $\mu\sim\mathbf{1}_{K}$ be the uniform distribution on a bounded convex
set $K\subset\mathbb{R}^{n}$. If there exists a density $p$ on $\mathbb{R}^{n}$ such that
$\displaystyle\mu(x)=\int\nu_{y,n}(x)p(y)dy,\quad\forall x\in\mathbb{R}^{n},$
then $p$ is uniquely defined almost everywhere.
Since the statement of Lemma 5 only depends on the random measure $\mu_{T}$
and is oblivious of the path leading to it, we can prove this lemma via the
analysis of the process $(\hat{\mu}_{t})_{t\in[0,T_{3}]}$. The rest of the
proof therefore boils down to the next lemma, proven in Subsection 4.2.
###### Lemma 11.
Suppose that $\mu=\mathbf{1}_{K}$ is isotropic and logconcave and that
$\mu(K_{r})\geq 1-\zeta/100$. Let $E\subset K_{r}$ satisfy
$\mu(E)=\xi\in(0,1/2]$ with $\zeta\leq\xi^{2}/\sqrt{\log(10^{4}/\xi)}/10^{8}$.
Let $(\hat{\mu}_{t})_{t}$ be the 3-stage process defined above. Then,
$\displaystyle{\mathbb{P}}\left(\hat{\mu}_{T_{3}}(E)(\hat{\mu}_{T_{3}}(K_{r})-\hat{\mu}_{T_{3}}(E))\geq\frac{1}{16}\cdot\mu(E)(\mu(K_{r})-\mu(E))\right)\geq\frac{c\xi^{3/2}}{\log(\frac{1}{\xi})\sqrt{n\kappa_{n}^{2}\log(n)}},$
for a universal constant $c>0$.
###### Proof of Lemma 5.
Lemma 5 directly follows from (15) combined with the two lemmas above.
### 4.2 Approximate conservation of variance for the process $\hat{\mu}_{t}$
To prove Lemma 11, we proceed with three lemmas which deal with each stage of
the stochastic process one by one. Consider the following events,
$\displaystyle W_{1}$
$\displaystyle:=\left\\{\hat{\mu}_{T_{1}}(E)\in\left[\xi-\frac{\xi}{4},\xi+\frac{\xi}{4}\right]\right\\}\cap\left\\{\hat{\mu}_{T_{1}}(K_{r})\geq
1-\frac{\zeta}{20}\right\\},$ $\displaystyle W_{2}$
$\displaystyle:=\left\\{\hat{\mu}_{T_{2}}(E)\in\left[\xi-\frac{\xi}{4},\xi+\frac{\xi}{4}\right]\right\\}\cap\left\\{\hat{\mu}_{T_{2}}(K_{r})\geq
1-\frac{\zeta}{10}\right\\},\text{ and }$ $\displaystyle W_{3}$
$\displaystyle:=\left\\{\hat{\mu}_{T_{3}}(E)\in\left[\frac{\xi}{2},\frac{3}{4}\right]\right\\}\cap\left\\{\hat{\mu}_{T_{3}}(K_{r})\geq\frac{7}{8}\right\\}.$
###### Lemma 12.
For $\mu(E)=\xi\in(0,1/2]$ and $\mu(K_{r})\geq 1-\zeta/100$, there exists a
universal constant $C>0$ such that at the end of Stage 1 (meaning, for
$T_{1}=\frac{\xi}{C\kappa_{n}^{2}\log n}$), we have
$\displaystyle{\mathbb{P}}\left(W_{1}\right)\geq 0.6.$
###### Lemma 13.
We have
$\displaystyle{\mathbb{P}}\left(W_{2}|~{}W_{1}\right)\geq 0.5.$
###### Lemma 14.
Under the assumption $\zeta\leq\xi^{2}/\sqrt{\log(10^{4}/\xi)}/10^{8}$, we
have
$\displaystyle{\mathbb{P}}\left(W_{3}|~{}W_{2}\right)\geq\frac{c}{\log(\frac{1}{\xi})}\sqrt{\frac{T_{1}}{n}},$
for a universal constant $c>0$.
#### 4.2.1 Analysis of Stage 1
The proof of Lemma 12 relies on the analysis developed in recent years around
the KLS conjecture (see e.g. [Che21, KL22]). We apply the following upper
bound on the operator norm of the covariance matrix $A_{t}$, proven by Klartag
and Lehec:
###### Lemma 15 (Lemma 5.2 in [KL22]).
For every $T\leq(C\kappa_{n}^{2}\log n)^{-1}$ we have
$\displaystyle{\mathbb{P}}(\left\|A_{t}\right\|_{2}\geq 2\text{ for }0\leq
t\leq T)\leq\exp\left(-\frac{1}{CT}\right),$
where $C$ is a universal constant.
While Lemma 5.2 in [KL22] only shows the result for a fixed $t\leq T$, it is
not hard to see that, using Doob’s inequality, the same proof can be
generalized to the case of all $t\in[0,T]$ with a small modification of the
constant $C$.
Equipped with Lemma 15, Lemma 12 then follows from a simple stochastic
calculus argument.
###### Proof of Lemma 12.
Let $g_{t}=\hat{\mu}_{t}(E)$. We have
$\displaystyle
dg_{t}=\int_{E}(x-\mathbf{b}(\hat{\mu}_{t}))^{\top}dW_{t}\hat{\mu}_{t}(x)dx.$
Its quadratic variation is
$\displaystyle d[g]_{t}$
$\displaystyle=\left|\int_{E}(x-\mathbf{b}(\hat{\mu}_{t}))\hat{\mu}_{t}(x)dx\right|^{2}dt$
$\displaystyle\leq\left\|A_{t}\right\|_{2}dt,$
where $A_{t}$ is the covariance matrix of $\mathbf{b}(\hat{\mu}_{t})$. We have
$\displaystyle{\mathbb{P}}\left(\hat{\mu}_{t}(E)\in\left[\xi-\frac{\xi}{4},\xi+\frac{\xi}{4}\right]\right)$
$\displaystyle={\mathbb{P}}\left(\tilde{W}_{[g]_{t}}\in\left[-\frac{\xi}{4},\frac{\xi}{4}\right]\right)$
$\displaystyle\geq
0.9-{\mathbb{P}}\left(\int_{0}^{t}\left\|A_{\tau}\right\|_{2}d\tau>\frac{\xi}{64}\right).$
Applying Lemma 15 on the operator norm of $A_{t}$ with
$t\leq\frac{\xi}{C\kappa_{n}^{2}\log n}$, it follows that
${\mathbb{P}}\left(\hat{\mu}_{T_{1}}(E)\in\left[\xi-\frac{\xi}{4},\xi+\frac{\xi}{4}\right]\right)\geq
0.8.$
Next, since $\hat{\mu}_{t}(K_{r})$ is a martingale, we have
$\mathbb{E}[\hat{\mu}_{T_{1}}(K_{r})]\geq 1-\zeta/100$. Since
$\hat{\mu}_{T_{1}}(K_{r})\leq 1$ almost surely, we can apply Markov's
inequality to $1-\hat{\mu}_{T_{1}}(K_{r})$ to conclude that
${\mathbb{P}}\left(\hat{\mu}_{T_{1}}(K_{r})\leq 1-\zeta/20\right)\leq\frac{\zeta/100}{\zeta/20}=0.2.$
This concludes the lemma.
#### 4.2.2 Analysis of Stage 2
In the second stage of the process, the measure of $E$ is kept constant almost
surely, so Lemma 13 boils down to a simple application of Markov’s inequality.
###### Proof of Lemma 13.
In the second stage, by construction, we have
$\displaystyle\hat{\mu}_{T_{2}}(E)=\hat{\mu}_{T_{1}}(E).$
Additionally, since $\hat{\mu}_{t}(K_{r})$ is a martingale, for
$t\in[T_{1},T_{2}]$, we have
$\displaystyle{\mathbb{E}}[\hat{\mu}_{t}(K_{r})\mid
W_{1}]={\mathbb{E}}[\hat{\mu}_{T_{1}}(K_{r})\mid W_{1}]\geq 1-\zeta/20.$
Then ${\mathbb{P}}(\hat{\mu}_{t}(K_{r})\geq
1-\zeta/10)+(1-\zeta/10)\left(1-{\mathbb{P}}(\hat{\mu}_{t}(K_{r})\geq
1-\zeta/10)\right)\geq 1-\zeta/20$, which results in
$\displaystyle{\mathbb{P}}(\hat{\mu}_{t}(K_{r})\geq 1-\zeta/10)\geq 0.5.$
We conclude by applying the union bound.
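For completeness, the Markov step used above can be spelled out: writing $p:={\mathbb{P}}(\hat{\mu}_{t}(K_{r})\geq 1-\zeta/10)$, the displayed inequality rearranges as

```latex
1-\frac{\zeta}{20}\;\leq\; p+(1-p)\Bigl(1-\frac{\zeta}{10}\Bigr)
\;=\;1-\frac{\zeta}{10}+p\,\frac{\zeta}{10}
\quad\Longrightarrow\quad
p\;\geq\;\frac{\zeta/10-\zeta/20}{\zeta/10}\;=\;\frac{1}{2}.
```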
#### 4.2.3 Analysis of Stage 3
In this section we prove Lemma 14. Here is the main observation: since the
driving matrix $C_{t}$ is fixed to be the rank-$1$ matrix $\theta\theta^{\top}$
between time $T_{2}$ and $T_{3}$, the SL process during that time only depends
on the marginal of the measure $\hat{\mu}_{T_{2}}$ onto the direction
$\theta$, which means that the proof boils down to the analysis of a
one-dimensional SL process. This analysis, however, is quite long and technical
and requires a few lemmas on properties of one-dimensional logconcave measures
summarized in Appendix A.
Recall that we condition on the event $W_{2}$ which amounts to
$\displaystyle\hat{\mu}_{T_{2}}(E)\in\left[\xi-\frac{\xi}{4},\xi+\frac{\xi}{4}\right]\text{
and }\hat{\mu}_{T_{2}}(K_{r})\geq 1-\zeta/10.$
For a unit vector $\theta$ (obtained in Stage 2), the third stage of the
process runs the SL process with driving matrix $C_{t}=\theta\theta^{\top}$ for a
time period $\alpha:=n-\lambda_{1}$. That is, for $t\geq T_{2}$,
$\displaystyle
d\hat{\mu}_{t}(x)=(x-\mathbf{b}(\hat{\mu}_{t}))^{\top}\theta\theta^{\top}dW_{t}\hat{\mu}_{t}(x).$
Define
$\displaystyle\sigma^{2}:=\theta^{\top}A_{T_{2}}\theta=\theta^{\top}\mathrm{Cov}(\hat{\mu}_{T_{2}})\theta,$
the variance of the starting measure in the direction of $\theta$. As a result
of the Brascamp–Lieb inequality [BL02], we have $A_{T_{2}}\preceq
B_{T_{2}}^{-1}$, and hence
$\displaystyle\sigma^{2}\leq\frac{1}{T_{1}}.$ (16)
For $t\in[0,\alpha]$, define $\omega_{t}$ to be the density on $\mathbb{R}$ obtained by
taking the push-forward of $\hat{\mu}_{T_{2}+t}$ via
$x\mapsto\frac{1}{\sigma}\cdot x^{\top}\theta$. That is,
$\displaystyle\omega_{t}(z):=\int_{H(z)}\hat{\mu}_{T_{2}+t}(x)dx,$ (17)
where, for $z\in\mathbb{R}$,
$H(z):=\\{x\in\mathbb{R}^{n};~{}\theta^{\top}x=z\sigma\\}$ is defined as the
fiber corresponding to the value $z$. For a subset $S\subseteq\mathbb{R}^{n}$, define
$\displaystyle
h_{S}(z):=\begin{cases}\displaystyle\frac{\int_{H(z)}\mathbf{1}_{\\{x\in
S\\}}\hat{\mu}_{T_{2}}(x)dx}{\int_{H(z)}\hat{\mu}_{T_{2}}(x)dx}&\text{ if
}\omega_{0}(z)\neq 0,\\\ 0&\text{otherwise.}\end{cases}$ (18)
Observe that $h_{S}(z)\in[0,1]$ for all $z$. Note that the value of $h_{S}$
remains unchanged if the term $\hat{\mu}_{T_{2}}$ is replaced with
$\hat{\mu}_{T_{2}+t}$ in the above formula, for any $t\in[0,\alpha]$. This is
because
$\displaystyle\hat{\mu}_{T_{2}+t}(x)\propto\hat{\mu}_{T_{2}}(x)\exp\left(-t(\theta^{\top}x)^{2}+\tilde{c}_{t}\theta^{\top}x\right),$
for some $\tilde{c}_{t}$. This means that for a fixed value of $z$ the points
in $H(z)$ are all multiplied by the same factor. The definition of $h_{S}$
allows us to identify $\omega_{t}(h_{S})$ with $\hat{\mu}_{T_{2}+t}(S)$ as
shown in the following lemma.
###### Lemma 16.
For the stochastic process $\omega_{t}$ defined above, for $t>0$, we have
$\displaystyle\mathbf{b}(\omega_{t})$
$\displaystyle=\theta^{\top}\mathbf{b}(\hat{\mu}_{T_{2}+t})/\sigma$
$\displaystyle\mathrm{Var}(\omega_{t})$
$\displaystyle=\theta^{\top}\operatorname{Cov}(\hat{\mu}_{T_{2}+t})\theta/\sigma^{2}$
$\displaystyle\omega_{t}(h_{S})$ $\displaystyle=\hat{\mu}_{T_{2}+t}(S).$
In particular, $\mathrm{Var}(\omega_{0})=1$. Additionally,
$(\omega_{t})_{t\geq 0}$ satisfies the following stochastic differential
equation
$\displaystyle
d\omega_{t}=\sigma(z-\mathbf{b}(\omega_{t}))d\bar{W}_{t}\omega_{t}(z),$ (19)
where $\bar{W}_{t}=\theta^{\top}W_{t}$ is a one-dimensional Brownian motion.
The proof of this lemma is straightforward, and is provided in Subsection
4.2.5 below. Based on the above stochastic differential equation, for $t>0$,
define
$\displaystyle\mathbf{y}_{t}:=\frac{1}{t}\int_{0}^{t}\left(\frac{1}{\sigma}d\bar{W}_{s}+\mathbf{b}(\omega_{s})ds\right).$
According to Lemma 2.1 in [Eld13], $\omega_{t}$ is uniquely determined by the
value of $\mathbf{y}_{t}$, for all $t\in[0,\alpha]$. Additionally, it takes
the form
$\displaystyle\omega_{t}(z)=\frac{\omega_{0}(z)\exp(-t\sigma^{2}(z-\mathbf{y}_{t})^{2})}{\int\omega_{0}(\varsigma)\exp(-t\sigma^{2}(\varsigma-\mathbf{y}_{t})^{2})d\varsigma}.$
An application of Theorem 2 in [EAM22] shows that $\mathbf{y}_{\alpha}$ has
the law $\rho$, where
$\displaystyle\rho:=\omega*\mathcal{N}\left(0,\frac{1}{\alpha\sigma^{2}}\right).$
(20)
With a slight abuse of notation, we introduce a deterministic density
$\omega_{y,t}$ with two subindices as
$\displaystyle\omega_{y,t}(z):=\frac{\omega_{0}(z)\exp\left(-t\sigma^{2}(z-y)^{2}\right)}{\int\omega_{0}(\varsigma)\exp\left(-t\sigma^{2}(\varsigma-y)^{2}\right)d\varsigma},\quad\forall
z\in\mathbb{R}.$ (21)
From here, we can identify $\omega_{t}$ with $\omega_{\mathbf{y}_{t},t}$. The
next lemma shows that the function $y\mapsto\omega_{y,\alpha}(h_{S})$ is
Lipschitz for a fixed $S\subset\mathbb{R}^{n}$.
###### Lemma 17.
Let $h:\mathbb{R}\to[0,1]$. Then for all $y,\tilde{y}\in\mathbb{R}$, we have
$\displaystyle\left|\omega_{y,\alpha}(h)-\omega_{\tilde{y},\alpha}(h)\right|\leq\sqrt{\alpha\sigma^{2}}\left|y-\tilde{y}\right|.$
Its proof is provided in Subsection 4.2.5. With the above notation and lemmas,
we are ready to prove Lemma 14. It is done by discussing the two cases based
on the value of $\alpha\sigma^{2}$:
1. 1.
$\alpha\sigma^{2}<\frac{1}{512}$
2. 2.
$\alpha\sigma^{2}\geq\frac{1}{512}$.
Define the events
$\displaystyle H_{1}$
$\displaystyle:=\left\\{\omega_{Y,\alpha}(h_{E})\in\left[\frac{1}{2}\xi,\frac{3}{4}\right]\right\\}\cap\left\\{\omega_{Y,\alpha}(h_{K_{r}})\geq\frac{7}{8}\right\\},\text{
and }$ $\displaystyle H_{0}$
$\displaystyle:=\left\\{\omega_{0}(h_{E})\in\left[\frac{3}{4}\xi,\frac{5}{4}\xi\right]\right\\}\cap\left\\{\omega_{0}(h_{K_{r}})\geq
1-\frac{\zeta}{10}\right\\}.$
The results in the two cases are summarized in the two lemmas below.
###### Lemma 18.
If $\alpha\sigma^{2}\leq\frac{1}{512}$ and $\zeta\leq 0.1\xi$, then
$\displaystyle{\mathbb{P}}_{Y\sim\rho}\left(H_{1}\mid H_{0}\right)\geq
0.1\xi.$
###### Lemma 19.
If $\alpha\sigma^{2}\geq\frac{1}{512}$,
$\zeta\leq\xi^{2}/\sqrt{\log(10^{4}/\xi)}/10^{8}$, then there exists a
universal constant $C>0$ such that
$\displaystyle{\mathbb{P}}_{Y\sim\rho}\left(H_{1}|H_{0}\right)\geq\frac{C}{\log(\frac{1}{\xi})}\sqrt{\frac{T_{1}}{n}}\xi.$
###### Proof of Lemma 14.
Using the identification in Lemma 16 between the terms in $\omega$ and those
in $\hat{\mu}$, it is clear that Lemma 14 follows from the two lemmas above.
#### 4.2.4 Proof of Lemma 18 and Lemma 19
First, we prove Lemma 18 where $\alpha\sigma^{2}\leq\frac{1}{512}$. The bound
is proven by direct analysis of the process $\omega_{t}(h_{E})$ via the
stochastic differential equation (19).
###### Proof of Lemma 18.
Let $g_{t}:=\omega_{t}(h_{E})$. According to Eq. (19), $g_{t}$ satisfies
$\displaystyle dg_{t}=\int\sigma(z-\mathbf{b}(\omega_{t}))\cdot
d\bar{W}_{t}h_{E}(z)\omega_{t}(z)dz.$
It is a martingale for $t\geq 0$, with
$g_{0}=\omega_{0}(h_{E})=\hat{\mu}_{T_{2}}(E)\in[\frac{3}{4}\xi,\frac{5}{4}\xi]$.
We first claim that for any $t_{1}\in[0,\alpha]$, we have almost
surely that
$\displaystyle
g_{t_{1}}\in[1/2,5/8]~{}~{}\Rightarrow~{}~{}{\mathbb{P}}(g_{\alpha}\in[3/8,6/8]\mid\omega_{t_{1}})\geq
0.4.$ (22)
The proof of the claim (22) is deferred to the end. Assuming the claim for
now, we complete the proof of Lemma 18.
If $g_{0}\in[1/2,5/8]$, then we directly apply the claim (22) from time $0$ to
$\alpha$ to obtain
$\displaystyle{\mathbb{P}}(g_{\alpha}\in[3/8,6/8])\geq 0.4.$
Otherwise, if $g_{0}\in[\frac{3}{4}\xi,1/2)$, we define the stopping time
$\displaystyle\boldsymbol{\tau}:=\min\left\\{t\in[0,\alpha]\mid
g_{t}=\frac{1}{2}\xi\text{ or }g_{t}=\frac{1}{2}\right\\}\wedge\alpha.$ (23)
Since $(g_{t})_{t\geq 0}$ is a martingale with
${\mathbb{E}}[g_{t}]=g_{0}\in[\tfrac{3}{4}\xi,1/2)$, applying the optional
stopping theorem for bounded martingales, we have
$\displaystyle{\mathbb{E}}[g_{\boldsymbol{\tau}}]=g_{0}\in[\frac{3}{4}\xi,1/2).$
Separating the above expectation into three cases
$g_{\boldsymbol{\tau}}=\frac{1}{2}\xi$, $g_{\boldsymbol{\tau}}=\frac{1}{2}$ or
$\boldsymbol{\tau}=\alpha$, we obtain
$\displaystyle{\mathbb{P}}\left(g_{\boldsymbol{\tau}}=\frac{1}{2}\text{ or
}\boldsymbol{\tau}=\alpha\right)\geq\frac{1}{2}\xi.$
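To spell out the case analysis: writing $p_{1},p_{2},p_{3}$ for the probabilities of the three cases $g_{\boldsymbol{\tau}}=\frac{1}{2}\xi$, $g_{\boldsymbol{\tau}}=\frac{1}{2}$ and $\boldsymbol{\tau}=\alpha$ (in the last case $g_{\boldsymbol{\tau}}\leq\frac{1}{2}$ as well, by the definition of $\boldsymbol{\tau}$),

```latex
\frac{3}{4}\xi\;\leq\;{\mathbb{E}}[g_{\boldsymbol{\tau}}]
\;\leq\;\frac{\xi}{2}\,p_{1}+\frac{1}{2}\,(p_{2}+p_{3})
\;\leq\;\frac{\xi}{2}+\frac{1}{2}\,(p_{2}+p_{3}),
```

so that $p_{2}+p_{3}\geq 2\left(\frac{3}{4}\xi-\frac{\xi}{2}\right)=\frac{\xi}{2}$.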
Under the above event, if $\boldsymbol{\tau}=\alpha$, we have
$g_{\alpha}\in(\frac{1}{2}\xi,\frac{1}{2})$. Otherwise, we apply the claim
(22) from time $\boldsymbol{\tau}$ to $\alpha$ to obtain
$\displaystyle{\mathbb{P}}(g_{\alpha}\in[3/8,6/8])$
$\displaystyle={\mathbb{P}}(g_{\alpha}\in[3/8,6/8]\mid
g_{\boldsymbol{\tau}}=\frac{1}{2}\text{ or
}\boldsymbol{\tau}=\alpha){\mathbb{P}}(g_{\boldsymbol{\tau}}=\frac{1}{2}\text{
or }\boldsymbol{\tau}=\alpha)$ $\displaystyle\geq
0.4{\mathbb{P}}(g_{\boldsymbol{\tau}}=\frac{1}{2}\text{ or
}\boldsymbol{\tau}=\alpha)$ $\displaystyle\geq 0.2\xi.$
Combining all the cases above, we conclude that
$g_{\alpha}\in[\frac{1}{2}\xi,\frac{3}{4}]$ given
$g_{0}\in[\frac{3}{4}\xi,\frac{5}{4}\xi]$ with probability at least $0.2\xi$.
On the other hand, since $(\omega_{t}(h_{K_{r}}))_{t\geq 0}$ is a martingale,
we have $\mathbb{E}[\omega_{\alpha}(h_{K_{r}})|H_{0}]\geq 1-\zeta/10$. Thus,
by Markov’s inequality and by the fact that $\omega_{\alpha}(h_{K_{r}})\leq 1$
almost surely,
$\displaystyle{\mathbb{P}}(\omega_{\alpha}(h_{K_{r}})\geq 7/8)\geq 1-\zeta.$
Recalling that $\zeta\leq 0.1\xi$, Lemma 18 follows from a union bound.
##### Proof of the claim (22):
The quadratic variation of $g_{t}$ is
$\displaystyle d[g]_{t}$
$\displaystyle=\left|\int\sigma(z-\mathbf{b}(\omega_{t}))h_{E}(z)\omega_{t}(z)dz\right|^{2}dt$
$\displaystyle\overset{(i)}{\leq}\int\sigma^{2}(z-\mathbf{b}(\omega_{t}))^{2}\omega_{t}(z)dz\cdot\int
h_{E}(z)^{2}\omega_{t}(z)dzdt$
$\displaystyle\leq\sigma^{2}\mathrm{Var}(\omega_{t})dt.$
(i) follows from the Cauchy-Schwarz inequality. To control
$\varphi_{t}:=\mathrm{Var}(\omega_{t})$, we observe that it satisfies
$\displaystyle
d\varphi_{t}=\int\sigma(z-\mathbf{b}(\omega_{t}))^{3}d\bar{W}_{t}\omega_{t}(z)dz-\sigma^{2}\varphi_{t}^{2}dt.$
We have ${\mathbb{E}}d\varphi_{t}\leq 0$, hence
${\mathbb{E}}[\varphi_{t}]\leq\varphi_{0}\leq 1$.
Consequently, ${\mathbb{E}}[g]_{\alpha}\leq\alpha\sigma^{2}\leq\frac{1}{512}$. Conditioned on
$g_{0}\in[\frac{1}{2},\frac{5}{8}]$, we have
$\displaystyle{\mathbb{P}}\left(\frac{3}{8}\leq
g_{\alpha}\leq\frac{6}{8}\right)$
$\displaystyle\geq{\mathbb{P}}\left(-\frac{1}{8}\leq\tilde{W}_{[g]_{\alpha}}\leq\frac{1}{8}\right)$
$\displaystyle=1-{\mathbb{P}}\left(\max_{0\leq\ell\leq\frac{1}{256}}\left|\tilde{W}_{\ell}\right|>\frac{1}{8}\right)-{\mathbb{P}}\left([g]_{\alpha}>\frac{1}{256}\right)$
$\displaystyle\overset{(i)}{\geq}1-4{\mathbb{P}}(\tilde{W}_{\frac{1}{256}}>\frac{1}{8})-{\mathbb{P}}([g]_{\alpha}>\frac{1}{256})$
$\displaystyle\overset{(ii)}{\geq}0.9-0.5=0.4$
(i) follows from the reflection principle. (ii) follows from the tail bound of
the normal distribution, ${\mathbb{P}}_{X\sim\mathcal{N}(0,1)}(X>2)\leq 0.023$,
and Markov's inequality.
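The numerical bookkeeping in (i)–(ii) can be checked directly (a minimal sketch; the Markov term uses ${\mathbb{E}}[g]_{\alpha}\leq\frac{1}{512}$ from above):

```python
import math

def normal_upper_tail(x):
    """P(Z > x) for Z ~ N(0, 1), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# W_{1/256} has standard deviation 1/16, so P(W_{1/256} > 1/8) = P(Z > 2).
reflection_term = 4 * normal_upper_tail(2.0)   # reflection-principle bound
markov_term = (1 / 512) / (1 / 256)            # P([g]_alpha > 1/256) <= 1/2
assert reflection_term < 0.1                   # 4 * 0.0228 is about 0.091
assert 1 - reflection_term - markov_term >= 0.4
```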
Next, we prove Lemma 19 which concerns the case that
$\frac{1}{512}\leq\alpha\sigma^{2}$. The main idea in the proof is to use the
Lipschitz bound established in Lemma 17: since the function
$y\mapsto\omega_{y,\alpha}(h_{E})$ is Lipschitz and since it must attain values
both close to $0$ and close to $1$, we conclude that there exists an interval
with non-negligible mass on which $\omega_{y,\alpha}(h_{E})$ is bounded away
from both $0$ and $1$. The main caveat is then to show that
$\omega_{y,\alpha}(h_{K_{r}})$ can be made large at the same time. This
crucially relies on the convexity of $K_{r}$, used in the following lemma,
whose proof is found in the next subsection.
###### Lemma 20.
Suppose that (i) $\frac{1}{512}\leq\alpha\sigma^{2}$, (ii)
$\zeta\leq\xi^{2}/\sqrt{\log(10^{4}/\xi)}/10^{8}$ and (iii)
$\omega_{0}(h_{K_{r}})\geq 1-\zeta/10$. Then there exists an interval
$I\subset\mathbb{R}$ with $\rho(I)\geq 1-\xi/10$ and of length at most
$100(4+\log\frac{1}{\xi})$, such that
$\displaystyle\omega_{y,\alpha}(h_{K_{r}})\geq 7/8,~{}~{}\forall y\in I.$
###### Proof of Lemma 19.
Let $I$ be the interval obtained by applying Lemma 20. We partition $I$ into
three subsets
$\displaystyle I_{1}$
$\displaystyle:=\left\\{y\in\mathbb{R}\mid\omega_{y,\alpha}(h_{E})\leq
1/2\xi\right\\}\cap I,$ $\displaystyle I_{2}$
$\displaystyle:=\left\\{y\in\mathbb{R}\mid\omega_{y,\alpha}(h_{E})\in[1/2\xi,3/4]\right\\}\cap
I,$ $\displaystyle I_{3}$
$\displaystyle:=\left\\{y\in\mathbb{R}\mid\omega_{y,\alpha}(h_{E})\geq 3/4\right\\}\cap
I.$
Since
${\mathbb{E}}[\omega_{Y,\alpha}(h_{E})|H_{0}]=\omega_{0}(h_{E})\in[3/4\xi,5/4\xi]$,
Markov’s inequality implies that
$\displaystyle 5/4\xi\geq
0+3/4\cdot{\mathbb{P}}(\omega_{Y,\alpha}(h_{E})>3/4|H_{0}).$
Recalling that the measure $\rho$ defined in Eq. (20) satisfies $\rho(I)\geq
1-\frac{1}{10}\xi$, we obtain by a union bound that
$\displaystyle{\mathbb{P}}(Y\in I_{1}\cup
I_{2}|H_{0})\geq{\mathbb{P}}(\omega_{Y,\alpha}(h_{E})\leq
3/4|H_{0})-\frac{1}{10}\xi\geq 1-\frac{5}{3}\xi-\frac{1}{10}\xi\geq 1/10,$
where $Y$ is $\rho$-distributed. If ${\mathbb{P}}(Y\in I_{2}|H_{0})\geq 1/20$,
then we are done. Otherwise, we obtain
$\displaystyle{\mathbb{P}}(Y\in I_{1})\geq 1/20.$ (24)
Similarly, since
$\displaystyle 3/4\xi\leq{\mathbb{E}}[\omega_{Y,\alpha}(h_{E})]\leq
1\cdot{\mathbb{P}}(\omega_{Y,\alpha}(h_{E})\geq
1/2\xi)+1/2\xi\cdot{\mathbb{P}}(\omega_{Y,\alpha}(h_{E})\leq 1/2\xi),$
we obtain
$\displaystyle{\mathbb{P}}(Y\in I_{2}\cup
I_{3})\geq{\mathbb{P}}(\omega_{Y,\alpha}(h_{E})\geq
1/2\xi)-\frac{1}{10}\xi\geq\frac{1}{10}\xi.$
If ${\mathbb{P}}(Y\in I_{2})\geq\frac{1}{20}\xi$, then we are done. Otherwise,
we obtain
$\displaystyle{\mathbb{P}}(Y\in I_{3})\geq\frac{1}{20}\xi.$ (25)
Observe that the distance between $I_{1}$ and $I_{3}$ is large, because for
any $y\in I_{1}$ and $\tilde{y}\in I_{3}$, we have
$\displaystyle\omega_{\tilde{y},\alpha}(h_{E})-\omega_{y,\alpha}(h_{E})\geq\frac{3}{4}-\frac{1}{2}\xi\geq\frac{1}{2}.$
And according to Lemma 17, $y\mapsto\omega_{y,\alpha}(h_{E})$ is
$\sqrt{\alpha\sigma^{2}}\leq\sqrt{n/T_{1}}$-Lipschitz, as $\alpha\leq n$ and
$\sigma^{2}\leq\frac{1}{T_{1}}$ by Eq. (16). Hence,
$\displaystyle\left|\tilde{y}-y\right|\geq\frac{1}{2}\sqrt{\frac{T_{1}}{n}}.$
(26)
Let $(\mathbf{1}_{I}\cdot\omega)$ be the measure $\omega$ restricted to the
set $I$. Finally, we consider
$(\mathbf{1}_{I}\cdot\omega)*\mathcal{N}\left(0,\frac{1}{\alpha\sigma^{2}}\right)$,
which satisfies a diameter isoperimetric inequality (see Theorem 4.2 in
[Vem05]). Hence,
$\displaystyle\rho(I_{2})$
$\displaystyle\geq\frac{2d(I_{1},I_{3})}{100\left(1+\log(\frac{1}{\xi})\right)}\min\left\\{\rho(I_{1}),\rho(I_{3})\right\\}$
$\displaystyle\geq\frac{C}{\log(\frac{1}{\xi})}\sqrt{\frac{T_{1}}{n}}\xi,$
where the last inequality follows from Eqs. (24), (25) and (26).
#### 4.2.5 Additional proofs in the third SL stage
In this subsection, we complete the missing proofs of the lemmas in the
previous subsection.
###### Proof of Lemma 16.
Based on the definition of $\omega_{t}$ in Eq. (17), we can express the mean and
variance of $\omega_{t}$ in terms of $\hat{\mu}_{T_{2}+t}$ as follows.
$\displaystyle\mathbf{b}(\omega_{t})$ $\displaystyle=\int z\omega_{t}(z)dz$
$\displaystyle=\int\int_{\theta^{\top}x/\sigma=z}\theta^{\top}x/\sigma\hat{\mu}_{T_{2}+t}(x)dxdz$
$\displaystyle=\theta^{\top}\mathbf{b}(\hat{\mu}_{T_{2}+t})/\sigma$
$\displaystyle\mathrm{Var}(\omega_{t})$
$\displaystyle=\int\left(z-\mathbf{b}(\omega_{t})\right)^{2}\omega_{t}(z)dz$
$\displaystyle=\int\int_{\theta^{\top}x/\sigma=z}\frac{1}{\sigma^{2}}\theta^{\top}(x-\mathbf{b}(\hat{\mu}_{T_{2}+t}))(x-\mathbf{b}(\hat{\mu}_{T_{2}+t}))^{\top}\theta\hat{\mu}_{T_{2}+t}(x)dxdz$
$\displaystyle=\theta^{\top}\operatorname{Cov}(\hat{\mu}_{T_{2}+t})\theta/\sigma^{2}.$
We deduce that $\mathrm{Var}(\omega_{0})=1$ as
$\sigma^{2}=\theta^{\top}\operatorname{Cov}(\hat{\mu}_{T_{2}})\theta$. For
any $z\in\mathbb{R}$, we have
$\displaystyle d\omega_{t}(z)$
$\displaystyle=\int_{\theta^{\top}x/\sigma=z}d\hat{\mu}_{T_{2}+t}(x)dx$
$\displaystyle=\int_{\theta^{\top}x/\sigma=z}(x-\mathbf{b}(\hat{\mu}_{T_{2}+t}))^{\top}\theta\theta^{\top}dW_{t}\hat{\mu}_{T_{2}+t}(x)dx$
$\displaystyle=\int_{\theta^{\top}x/\sigma=z}\sigma(z-\mathbf{b}(\omega_{t}))\theta^{\top}dW_{t}\hat{\mu}_{T_{2}+t}(x)dx$
$\displaystyle=\sigma(z-\mathbf{b}(\omega_{t}))\theta^{\top}dW_{t}\omega_{t}(z).$
Based on the above calculation, $(\omega_{t})_{t\geq 0}$ undergoes a
one-dimensional SL process, and the Gaussian factor it accumulates by time $t$
is $t\sigma^{2}$.
Based on the definition of $h_{S}$ in Eq. (18), we can express
$\hat{\mu}_{T_{2}+t}(S)$ using $\omega_{t}$ and $h_{S}$ as follows.
$\displaystyle\omega_{t}(h_{S})$ $\displaystyle=\int h_{S}(z)\omega_{t}(z)dz$
$\displaystyle=\int\frac{\int_{\theta^{\top}x/\sigma=z}\mathbf{1}_{x\in
S}\hat{\mu}_{T_{2}+t}(x)dx}{\omega_{t}(z)}\omega_{t}(z)dz$
$\displaystyle=\hat{\mu}_{T_{2}+t}(S).$
###### Proof of Lemma 17.
First, we look at the derivative of $\omega_{y,\alpha}$ with respect to $y$.
Recall that
$\displaystyle\omega_{y,\alpha}(x)=\frac{\exp(-\frac{\alpha\sigma^{2}}{2}\left|x-y\right|^{2})\omega_{0}(x)}{\int\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})\omega_{0}(z)dz}.$
Taking the derivative with respect to $y$, we obtain
$\displaystyle\frac{\partial\omega_{y,\alpha}(x)}{\partial y}$
$\displaystyle=-\alpha\sigma^{2}\left[(y-x)-\frac{\int(y-z)\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})\omega_{0}(z)dz}{\int\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})\omega_{0}(z)dz}\right]\omega_{y,\alpha}(x)$
$\displaystyle=\alpha\sigma^{2}[x-\mathbf{b}(\omega_{y,\alpha})]\omega_{y,\alpha}(x).$
Consequently,
$\displaystyle\frac{\partial\omega_{y,\alpha}(h)}{\partial y}=\int
h(x)\cdot\alpha\sigma^{2}[x-\mathbf{b}(\omega_{y,\alpha})]\omega_{y,\alpha}(x)dx.$
Second, let $v$ be the unit vector in the direction of $y-\tilde{y}$. Then
$y-\tilde{y}=\left|y-\tilde{y}\right|v$. Applying the mean value theorem,
there exists $\hat{y}$ such that
$\displaystyle\left|\omega_{y,\alpha}(h)-\omega_{\tilde{y},\alpha}(h)\right|$
$\displaystyle=\left|(y-\tilde{y})^{\top}\frac{\partial\omega_{z,\alpha}(h)}{\partial
z}|_{z=\hat{y}}\right|$
$\displaystyle=\alpha\sigma^{2}\left|y-\tilde{y}\right|\left|\int
h(x)v\cdot(x-\mathbf{b}(\omega_{\hat{y},\alpha}))\omega_{\hat{y},\alpha}(x)dx\right|$
$\displaystyle\leq\alpha\sigma^{2}\left|y-\tilde{y}\right|\left[\int\left(x-\mathbf{b}(\omega_{\hat{y},\alpha})\right)^{2}\omega_{\hat{y},\alpha}(x)dx\right]^{1/2}\left[\int
h(x)^{2}\omega_{\hat{y},\alpha}(x)dx\right]^{1/2}$
$\displaystyle\leq\alpha\sigma^{2}\left|y-\tilde{y}\right|\mathrm{Var}(\omega_{\hat{y},\alpha})^{1/2}\cdot
1$
$\displaystyle\leq\alpha\sigma^{2}\left|y-\tilde{y}\right|\frac{1}{(\alpha\sigma^{2})^{1/2}}$
$\displaystyle=\left(\alpha\sigma^{2}\right)^{1/2}\left|y-\tilde{y}\right|.$
The last inequality follows because $\omega_{y,\alpha}$ is
$\alpha\sigma^{2}$-strongly logconcave for any $y$. This completes the proof.
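The Lipschitz estimate above can be probed numerically. The sketch below assumes a concrete one-dimensional logconcave base density $\nu$ (uniform on $[0,1]$) and an arbitrary test function $h$ with $\left|h\right|\leq 1$; these concrete choices are ours, for illustration only, and the quadrature is a plain midpoint rule.

```python
import math

# Numerical probe of the Lipschitz bound in Lemma 17 (illustrative sketch;
# nu = uniform on [0, 1] and the test function h are our own choices).
a_sig2 = 4.0  # plays the role of alpha * sigma^2

def omega_h(y, h, n=4000):
    # omega_{y,alpha}(h) = int h(x) exp(-a_sig2/2 * (x-y)^2) nu(x) dx / Z,
    # computed with a midpoint rule on [0, 1], where nu(x) = 1.
    num = den = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        w = math.exp(-0.5 * a_sig2 * (x - y) ** 2)
        num += h(x) * w
        den += w
    return num / den

h = lambda x: math.cos(7 * x)   # any test function with |h| <= 1
lip = math.sqrt(a_sig2)         # claimed Lipschitz constant sqrt(alpha*sigma^2)
ys = [0.1 * k for k in range(11)]
vals = [omega_h(y, h) for y in ys]
worst = max(abs(vals[i] - vals[j]) / (ys[i] - ys[j])
            for j in range(len(ys)) for i in range(j + 1, len(ys)))
print(worst <= lip)  # prints True: difference quotients respect the bound
```

Here the observed difference quotients stay well below $\sqrt{\alpha\sigma^{2}}=2$, consistent with the covariance argument in the proof.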
Before we can prove Lemma 20, we need the following fact about two
logconcave densities.
###### Lemma 21.
Let $p,\tilde{p}$ be two logconcave densities on $\mathbb{R}$ such that $p$ is isotropic
and ${\rm{d}}_{{\rm{TV}}}(p,\tilde{p})<\epsilon$. Let $0<\delta<1$. Set $a$
and $b$ to be the $\delta$- and $(1-\delta)$-quantiles of $p$, respectively.
Suppose further that $\epsilon\leq\delta^{2}/10^{5}$. Then for
all $z\in[a,b]$, we have $\frac{\tilde{p}}{p}(z)>0.9$.
###### Proof of Lemma 21.
Without loss of generality (by convolving with a small Gaussian and taking
limits), we can assume that $\frac{\tilde{p}}{p}$ is well-defined on $\mathbb{R}$ and both
densities are $C^{2}$-smooth. Let $\tilde{a}$ and $\tilde{b}$ be the
$\delta/2$ and $(1-\delta/2)$-quantiles of $p$. Define
$f(\cdot):=\frac{\tilde{p}}{p}(\cdot)$.
Suppose there is $z_{0}\in[a,b]$ such that $f(z_{0})\leq 0.9$. Define
$w_{-}:=\sup\left\\{z\in[\tilde{a},z_{0}]\mid f(z)\geq 0.95\right\\}$, noting
that such $w_{-}\in[\tilde{a},z_{0}]$ must indeed exist, for otherwise we
would have $f(x)<0.95$ for all $x\in[\tilde{a},a]$, which implies
$\displaystyle\epsilon>{\rm{d}}_{{\rm{TV}}}(p,\tilde{p})\geq\int_{\tilde{a}}^{a}(1-f(x))p(x)dx\geq\frac{\delta}{2}\cdot
0.05,$
which contradicts the assumptions $\epsilon\leq\delta^{2}/10^{5}$ and
$\delta<1$. The same reasoning allows us to define
$\displaystyle w_{+}:=\inf\left\\{z\in[z_{0},\tilde{b}]\mid f(z)\geq
0.95\right\\}.$
By the definition of $w_{-}$ and $w_{+}$, we have $f(x)\leq 0.95$ for all
$x\in[w_{-},w_{+}]$, which implies that
$\displaystyle p([w_{-},w_{+}])\cdot
0.05\leq\int_{w_{-}}^{w_{+}}(1-f(x))p(x)dx\leq{\rm{d}}_{{\rm{TV}}}(p,\tilde{p})<\epsilon.$
Using Lemma 28 below, we deduce that $p(z)\geq\frac{1}{16e}\delta$ for
$z\in[w_{-},w_{+}]$ which implies that
$\displaystyle\left|w_{+}-w_{-}\right|\leq\frac{320e\cdot\epsilon}{\delta}.$
(27)
By the mean value theorem, there exists $u_{-}\in[w_{-},z_{0}]$ such that
$\displaystyle\left(\log\tilde{p}-\log p\right)^{\prime}(u_{-})$
$\displaystyle=\frac{\log f(z_{0})-\log f(w_{-})}{\left|z_{0}-w_{-}\right|}$
$\displaystyle\leq\frac{-\log\frac{0.95}{0.9}}{\left|z_{0}-w_{-}\right|}$
$\displaystyle\leq\frac{-\log\frac{0.95}{0.9}\cdot\delta}{320e\cdot\epsilon}=:-\mathfrak{D}.$
Similarly, there exists $u_{+}$ between $z_{0}$ and $w_{+}$ such that
$\displaystyle\left(\log\tilde{p}-\log
p\right)^{\prime}(u_{+})\geq\mathfrak{D}.$
Combining the two bounds above, we have
$\displaystyle\left(\log\tilde{p}-\log
p\right)^{\prime}(u_{+})-\left(\log\tilde{p}-\log p\right)^{\prime}(u_{-})\geq
2\mathfrak{D}.$
Since $\log\tilde{p}$ is concave, its derivative is non-decreasing, and
consequently we have
$\displaystyle\left(\log\tilde{p}\right)^{\prime}(u_{+})-\left(\log\tilde{p}\right)^{\prime}(u_{-})\leq
0.$
It follows that
$\displaystyle\left(\log p\right)^{\prime}(u_{-})-\left(\log
p\right)^{\prime}(u_{+})\geq 2\mathfrak{D}.$
Then either $\left(\log p\right)^{\prime}(u_{-})$ or $-\left(\log
p\right)^{\prime}(u_{+})$ has to be at least $\mathfrak{D}$. If
$-\left(\log p\right)^{\prime}(u_{+})\geq\mathfrak{D}$, then, because $\left(\log p\right)^{\prime}$ is
non-increasing by concavity, for all $z\geq u_{+}$,
$\displaystyle\left(\log p\right)^{\prime}(z)\leq-\mathfrak{D}.$
Integrating leads to the bound $p(z)\leq p(u_{+})e^{-\mathfrak{D}(z-u_{+})}$.
Integrating again, we have
$\displaystyle\int_{u_{+}}^{\infty}p(z)dz\leq
p(u_{+})\frac{1}{\mathfrak{D}}\leq\frac{1}{\mathfrak{D}}=\frac{320e\cdot\epsilon}{\log\frac{0.95}{0.9}\cdot\delta}\leq\frac{20000\epsilon}{\delta}\leq\frac{1}{5}\delta,$
where $p(u_{+})\leq 1$ follows from Lemma 25. This contradicts the fact that
the mass to the right of $u_{+}$ is at least $\frac{1}{2}\delta$ (recall that
$u_{+}\leq\tilde{b}$, the $(1-\delta/2)$-quantile of $p$). The other case,
$\left(\log p\right)^{\prime}(u_{-})\geq\mathfrak{D}$, is handled similarly.
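The constant chase closing this proof can be checked mechanically; the snippet below is a small sketch, with an arbitrary admissible choice of $\delta$ and $\epsilon$.

```python
import math

# Check: 1/D = 320*e*eps / (log(0.95/0.9)*delta) <= 20000*eps/delta, and with
# eps <= delta^2/1e5 the right-hand side is at most delta/5.
assert 320 * math.e / math.log(0.95 / 0.9) <= 20000
delta = 0.37                   # any 0 < delta < 1
eps = 0.5 * delta ** 2 / 1e5   # an admissible eps (<= delta^2 / 1e5)
assert 20000 * eps / delta <= delta / 5
```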
###### Proof of Lemma 20.
The proof is split into two disjoint cases:
$\frac{1}{512}\leq\alpha\sigma^{2}<\frac{400\log(10^{4}/\xi)}{\xi^{2}}$ and
$\alpha\sigma^{2}\geq\frac{400\log(10^{4}/\xi)}{\xi^{2}}$.
##### Case 1:
Assume that
$\frac{1}{512}\leq\alpha\sigma^{2}<\frac{400\log(10^{4}/\xi)}{\xi^{2}}$.
Let $\delta=\xi/20$, and let $a$ and $b$ be the $\delta$- and
$(1-\delta)$-quantiles of $\rho$. We show that the interval
$I=[a,b]$ satisfies the requirements of the lemma; since $\rho(I)=1-2\delta$,
we trivially have $\rho(I)\geq 1-\xi/10$.
Since $\rho$ is the convolution of $\omega$, which has variance $1$, and
$\mathcal{N}\left(0,\frac{1}{\alpha\sigma^{2}}\right)$, which has variance at most
$512$ (as $\alpha\sigma^{2}\geq\frac{1}{512}$), the variance of $\rho$ is upper-bounded by $513$.
Applying the tail bound for isotropic log-concave density in Lemma 27, we
obtain
$\displaystyle\left|a-b\right|\leq 2\sqrt{513}\log\frac{e}{\delta}\leq
100\left(4+\log\frac{1}{\xi}\right).$
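The second inequality in the last display can be verified over a grid of $\xi$; the following minimal check (with $\delta=\xi/20$, as above) is illustrative.

```python
import math

# Check the numeric step 2*sqrt(513)*log(e/delta) <= 100*(4 + log(1/xi))
# with delta = xi/20, over a grid of xi in (0, 1).
def quantile_gap_bound(xi):
    delta = xi / 20
    return 2 * math.sqrt(513) * math.log(math.e / delta)

assert all(quantile_gap_bound(xi) <= 100 * (4 + math.log(1 / xi))
           for xi in [1e-8, 1e-4, 1e-2, 0.3, 0.99])
```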
It remains to show that $\omega_{y,\alpha}(h_{K_{r}})\geq 7/8$ for all $y\in
I$. Let
$J:=\left\\{y\in[a,b]\mid\omega_{y,\alpha}(h_{K_{r}})\geq\frac{15}{16}\right\\}$.
Note that $J$ is not necessarily an interval. Since
${\mathbb{E}}_{Y\sim\rho}{\omega_{Y,\alpha}(h_{K_{r}})}=\omega_{0}(h_{K_{r}})\geq
1-\zeta/10$, we have by Markov’s inequality that
$\displaystyle{\mathbb{P}}_{Y\sim\rho}\left(\omega_{Y,\alpha}(h_{K_{r}})\geq\frac{15}{16}\right)\geq
1-2\zeta.$ (28)
Fix any $z_{0}\in[a,b]\setminus J$, and define the largest interval around $z_{0}$
contained in $[a,b]\setminus J$ via
$\displaystyle z_{+}=\sup\left\\{\tilde{z}\mid[z_{0},\tilde{z}]\subseteq[a,b]\setminus J\right\\},\qquad z_{-}=\inf\left\\{\tilde{z}\mid[\tilde{z},z_{0}]\subseteq[a,b]\setminus J\right\\}.$
For any $\tilde{z}\in[z_{-},z_{+}]$, we have
$\omega_{\tilde{z},\alpha}(h_{K_{r}})<\frac{15}{16}$ according to the
definition of $J$. It follows from Eq. (28) that $\rho([z_{-},z_{+}])\leq
2\zeta$. On the other hand, since the variance of $\rho$ is bounded by $513$,
applying Lemma 28, we obtain $\rho(z)\geq\frac{1}{8\sqrt{513}e}\delta$ for
$z\in[a,b]$. Hence,
$\displaystyle\left|z_{+}-z_{-}\right|\leq\frac{2\zeta}{\frac{1}{8\sqrt{513}e}\delta}\leq\frac{2000\zeta}{\delta}.$
Finally, we use the Lipschitz property of
$y\mapsto\omega_{y,\alpha}(h_{K_{r}})$ to lower bound
$\omega_{\tilde{z},\alpha}(h_{K_{r}})$ for $\tilde{z}\in[z_{-},z_{+}]$.
Since $\rho([a,b])=1-\xi/10>2\zeta$, the interval $[z_{-},z_{+}]$ cannot be all of
$[a,b]$, so at least one of its endpoints lies in the closure of $J$, where
$\omega_{\cdot,\alpha}(h_{K_{r}})\geq\frac{15}{16}$ by continuity.
According to Lemma 17, $y\mapsto\omega_{y,\alpha}(h_{K_{r}})$ is
$\sqrt{\alpha\sigma^{2}}\leq\sqrt{\frac{400\log(10^{4}/\xi)}{\xi^{2}}}$-Lipschitz.
Note that the assumption that
$\zeta\leq\xi^{2}/\sqrt{\log(10^{4}/\xi)}/10^{8}$ implies
$\frac{2000\zeta}{\delta}\cdot\frac{\sqrt{400\log(10^{4}/\xi)}}{\xi}\leq\frac{1}{16}$,
which in turn ensures that
$\displaystyle\omega_{\tilde{z},\alpha}(h_{K_{r}})\geq\frac{15}{16}-\frac{1}{16}=7/8,\text{ for
}\tilde{z}\in[z_{-},z_{+}].$
Hence, for any $y\in[a,b]$, we have $\omega_{y,\alpha}(h_{K_{r}})\geq 7/8$.
This concludes the case
$\frac{1}{512}\leq\alpha\sigma^{2}<\frac{400\log(10^{4}/\xi)}{\xi^{2}}$,
with the choice of the interval $I=[a,b]$.
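The key numeric implication used in this case (that the Lipschitz constant times the gap bound on $\left|z_{+}-z_{-}\right|$ is at most $1/16$) can be confirmed directly; a minimal check with the extremal admissible $\zeta$ and $\delta=\xi/20$:

```python
import math

# Sketch: with delta = xi/20 and zeta at its largest admissible value
# xi^2 / (sqrt(log(1e4/xi)) * 1e8), the product
# (2000*zeta/delta) * sqrt(400*log(1e4/xi)) / xi is at most 1/16.
def case1_margin(xi):
    delta = xi / 20
    zeta = xi ** 2 / (math.sqrt(math.log(1e4 / xi)) * 1e8)  # extremal zeta
    return (2000 * zeta / delta) * math.sqrt(400 * math.log(1e4 / xi)) / xi

assert all(case1_margin(xi) <= 1 / 16
           for xi in [1e-6, 1e-3, 0.1, 0.5, 0.99])
```

In fact the $\xi$-dependence cancels and the product is a fixed constant $8\cdot 10^{-3}$, comfortably below $1/16$.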
##### Case 2:
Assume that $\alpha\sigma^{2}\geq\frac{400\log(10^{4}/\xi)}{\xi^{2}}$.
Take $\delta=\xi/100$. Let $a$ and $b$ be the $\delta$ and
$(1-\delta)$-quantiles of $\omega$, respectively. Let
$I:=[a+\delta,b-\delta]$. Using the same reasoning as in the beginning of the
previous case, we know that its length satisfies
$\left|I\right|\leq\left|a-b\right|\leq 100\left(4+\log\frac{1}{\xi}\right)$.
Next, let us show that $\rho(I)\geq 1-\xi/10$. To that end, let $X\sim\omega$
and $Z\sim\mathcal{N}(0,\frac{1}{\alpha\sigma^{2}})$ be independent of each
other. Then
$\displaystyle\rho([a+\delta,b-\delta])$
$\displaystyle={\mathbb{P}}(X+Z\in[a+\delta,b-\delta])$
$\displaystyle\geq{\mathbb{P}}(X\in[a+2\delta,b-2\delta]\text{ and
}Z\in[-\delta,\delta])$
$\displaystyle={\mathbb{P}}(X\in[a+2\delta,b-2\delta])\cdot{\mathbb{P}}(Z\in[-\delta,\delta])$
$\displaystyle\overset{(v)}{\geq}(\omega([a,b])-4\delta\cdot
1)\cdot\left(1-\delta/500\right)$ $\displaystyle\geq 1-7\delta\geq 1-\xi/10.$
The step (v) follows because $\omega([a,b])=1-2\delta$, because
$\omega([a+2\delta,b-2\delta])\geq\omega([a,b])-4\delta\cdot 1$ by Lemma 25,
and by the Gaussian tail bound in Eq. (30).
To conclude the proof it remains to show that for $y\in I$ one has
$\omega_{y,\alpha}(h_{K_{r}})\geq 7/8$, where
$\displaystyle\omega_{y,\alpha}(h_{K_{r}})$
$\displaystyle=\frac{\int\omega(z)\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})h_{K_{r}}(z)dz}{\int\omega(z)\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})dz}$
$\displaystyle=\hat{\mu}_{T_{2}}(K_{r})\frac{\int\tilde{\omega}(z)\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})dz}{\int\omega(z)\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})dz},$
and where
$\tilde{\omega}(x)=\frac{\omega(x)h_{K_{r}}(x)}{\hat{\mu}_{T_{2}}(K_{r})}$.
Observe that $\tilde{\omega}$ is, by definition, the push-forward of the
measure
$\tilde{\mu}_{T_{2}}:=\hat{\mu}_{T_{2}}\cdot\mathbf{1}_{K_{r}}/\hat{\mu}_{T_{2}}(K_{r})$
via $x\mapsto\frac{1}{\sigma}\cdot x^{\top}\theta$. Since by construction
${\rm{d}}_{{\rm{TV}}}(\hat{\mu}_{T_{2}},\tilde{\mu}_{T_{2}})\leq
1-\hat{\mu}_{T_{2}}(K_{r})\leq\zeta/10$, we obtain
$\displaystyle{\rm{d}}_{{\rm{TV}}}(\omega,\tilde{\omega})\leq{\rm{d}}_{{\rm{TV}}}(\hat{\mu}_{T_{2}},\tilde{\mu}_{T_{2}})\leq\zeta/10.$
By the Prékopa-Leindler inequality, $\tilde{\omega}$ is
logconcave. Applying Lemma 21 with
$\epsilon=\zeta/10\leq\delta^{2}/10^{5}$, we deduce that
$\frac{\tilde{\omega}}{\omega}(z)\geq 0.9,~{}~{}\forall z\in[a,b].$ (29)
Recall that the standard Gaussian tail bound for
$Z\sim\mathcal{N}(0,\frac{1}{\alpha\sigma^{2}})$ implies for $\eta>0$,
$\displaystyle{\mathbb{P}}\left(\left|Z\right|\geq\frac{\eta}{\sqrt{\alpha\sigma^{2}}}\right)\leq
2e^{-\eta^{2}/2}.$ (30)
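Equation (30) can be verified exactly: the two-sided tail of $Z\sim\mathcal{N}(0,\frac{1}{\alpha\sigma^{2}})$ at $\eta/\sqrt{\alpha\sigma^{2}}$ equals $\operatorname{erfc}(\eta/\sqrt{2})$, which the sub-Gaussian bound dominates for every $\eta>0$. A quick numerical confirmation:

```python
import math

# For Z ~ N(0, 1/(alpha*sigma^2)), P(|Z| >= eta/sqrt(alpha*sigma^2)) is
# exactly erfc(eta/sqrt(2)), independent of alpha*sigma^2, and is dominated
# by the standard tail bound 2*exp(-eta^2/2).
def gaussian_tail(eta):
    return math.erfc(eta / math.sqrt(2))  # exact two-sided tail probability

assert all(gaussian_tail(eta) <= 2 * math.exp(-eta ** 2 / 2)
           for eta in [0.1, 0.5, 1.0, 2.0, 4.0, 8.0])
```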
Take $\eta=2\log^{1/2}(\frac{1000}{\delta})$, then
$e^{-\eta^{2}/2}\leq\delta/1000$. Note that the assumption
$\alpha\sigma^{2}\geq\frac{400\log(10^{4}/\xi)}{\xi^{2}}$ ensures that
$\frac{\eta}{\sqrt{\alpha\sigma^{2}}}\leq\delta$. We have
$\displaystyle M_{1}$
$\displaystyle:=\int_{[y-\delta,y+\delta]}\omega(z)\frac{1}{\sqrt{2\pi\frac{1}{\alpha\sigma^{2}}}}\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})dz$
$\displaystyle\overset{(i)}{\geq}\frac{1}{8e}\delta\int_{[y-\delta,y+\delta]}\frac{1}{\sqrt{2\pi\frac{1}{\alpha\sigma^{2}}}}\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})dz$
$\displaystyle\geq\frac{1}{8e}\delta(1-2e^{-\eta^{2}/2})\geq\delta/50,$
where (i) bounds $\omega(z)\geq\frac{1}{8e}\delta$ on $[y-\delta,y+\delta]\subseteq[a,b]$, which follows from Lemma 28.
$\displaystyle M_{2}$
$\displaystyle:=\int_{[y-\delta,y+\delta]^{c}}\omega(z)\frac{1}{\sqrt{2\pi\frac{1}{\alpha\sigma^{2}}}}\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})dz$
(31)
$\displaystyle\overset{(ii)}{\leq}\int_{[y-\delta,y+\delta]^{c}}\frac{1}{\sqrt{2\pi\frac{1}{\alpha\sigma^{2}}}}\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})dz$
(32) $\displaystyle\leq 2e^{-\eta^{2}/2}\leq\delta/500,$ (33)
where (ii) follows from Lemma 25. The two above displays imply that
$\int\omega(z)\exp\left(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2}\right)dz\leq\frac{10}{9}\int_{[y-\delta,y+\delta]}\omega(z)\exp\left(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2}\right)dz.$
(34)
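The arithmetic behind Eq. (34) reduces to the ratio $M_{2}/M_{1}\leq 1/10$; a short check of the constants, with an arbitrary $0<\delta<1$:

```python
import math

# M1 >= (delta/(8e)) * (1 - 2*exp(-eta^2/2)) >= delta/50 and M2 <= delta/500,
# so the full integral exceeds its restriction to [y-delta, y+delta] by a
# factor of at most 1 + (1/500)/(1/50) = 1.1 <= 10/9.
delta = 0.9                    # any 0 < delta < 1
eta_term = 2 * (delta / 1000) ** 2   # = 2*exp(-eta^2/2) at the chosen eta
m1 = (delta / (8 * math.e)) * (1 - eta_term)
assert m1 >= delta / 50
assert 1 + (delta / 500) / (delta / 50) <= 10 / 9
```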
Hence, for $y\in[a+\delta,b-\delta]$, we have
$\displaystyle\omega_{y,\alpha}(h_{K_{r}})$
$\displaystyle=\hat{\mu}_{T_{2}}(K_{r})\frac{\int\tilde{\omega}(z)\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})dz}{\int\omega(z)\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})dz}$
$\displaystyle\stackrel{{\scriptstyle\eqref{eq:wconc}}}{{\geq}}0.9\hat{\mu}_{T_{2}}(K_{r})\frac{\int_{[y-\delta,y+\delta]}\tilde{\omega}(z)\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})dz}{\int_{[y-\delta,y+\delta]}\omega(z)\exp(-\frac{\alpha\sigma^{2}}{2}\left|z-y\right|^{2})dz}$
$\displaystyle\stackrel{{\scriptstyle\eqref{eq:wtildew}}}{{\geq}}0.9^{2}\hat{\mu}_{T_{2}}(K_{r})\geq\frac{7}{8}.$
This completes the proof.
### 4.3 A uniqueness result for the distribution attained by stochastic
localization
In this subsection, we prove Lemma 10, which follows from the inversion of the
multidimensional Mellin transform [Ant07] (also related to the inverse Laplace
transform). To see how the problem is related to the inversion of the
multidimensional Mellin transform, we first provide some background.
The Mellin transform [Mel96] (see also [Ant07]) of a function $\Phi(x)$
defined on the positive orthant $\mathbb{R}^{n}_{+}$ is given by the integral
$\displaystyle\mathfrak{M}[\Phi](z)=\int_{\mathbb{R}^{n}_{+}}\Phi(x)x^{z-1}dx.$
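For intuition, in one dimension the Mellin transform of $\Phi(x)=e^{-x}$ is the Gamma function, $\mathfrak{M}[\Phi](z)=\Gamma(z)$; the crude quadrature below (our own illustration, not part of the argument) reproduces this.

```python
import math

# Approximate M[exp(-x)](z) = int_0^inf exp(-x) x^(z-1) dx by a midpoint
# rule on [0, 50]; for z >= 1 the truncated tail is negligible and the
# result should match Gamma(z).
def mellin_exp(z, upper=50.0, n=100000):
    h = upper / n
    return sum(math.exp(-(i + 0.5) * h) * ((i + 0.5) * h) ** (z - 1) * h
               for i in range(n))

for z in [1.0, 2.5, 4.0]:
    assert abs(mellin_exp(z) - math.gamma(z)) < 1e-3
```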
Suppose we know that for $t>0$,
$\displaystyle\mu(x)=\int_{\mathbb{R}^{n}}\nu_{y,t}(x)p(y)dy,\qquad\forall x\in\mathbb{R}^{n},$
where
$\nu_{y,t}(x)=\frac{\exp\left(-\frac{t}{2}\left|x-y\right|^{2}\right)\mu(x)}{\int\exp\left(-\frac{t}{2}\left|z-y\right|^{2}\right)\mu(z)dz}$.
Suppose $\tilde{p}$ also satisfies
$\displaystyle\mu(x)=\int_{\mathbb{R}^{n}}\nu_{y,t}(x)\tilde{p}(y)dy,\qquad\forall x\in\mathbb{R}^{n}.$
Taking the difference between the two and rearranging the terms that depend
on $y$ and $x$ in the integral separately, we obtain
$\displaystyle
0=\mu(x)\exp\left(-\frac{t}{2}\left|x\right|^{2}\right)\int_{\mathbb{R}^{n}}\frac{(p(y)-\tilde{p}(y))\exp\left(-\frac{t}{2}\left|y\right|^{2}\right)}{\int\exp\left(-\frac{t}{2}\left|z-y\right|^{2}\right)\mu(z)dz}\cdot\exp(tx\cdot
y)dy.$
Using the change of variables $w=\exp(ty)\in\mathbb{R}^{n}_{+}$ (applied componentwise), we obtain
$\displaystyle
0=\mu(x)\exp\left(-\frac{t}{2}\left|x\right|^{2}\right)\int_{\mathbb{R}^{n}_{+}}\frac{\left[p\left(\frac{1}{t}\log
w\right)-\tilde{p}\left(\frac{1}{t}\log
w\right)\right]\exp\left(-\frac{1}{2t}\left|\log
w\right|^{2}\right)}{\int\exp\left(-\frac{t}{2}\left|z-\frac{1}{t}\log
w\right|^{2}\right)\mu(z)dz}\cdot\frac{1}{t^{n}}w^{x-1}dw.$
Since $\mu(x)\neq 0$ for $x\in K$, we have that for $x\in K$,
$\displaystyle 0=\int_{{}^{n}_{+}}\frac{\left[p\left(\frac{1}{t}\log
w\right)-\tilde{p}\left(\frac{1}{t}\log
w\right)\right]\exp\left(-\frac{1}{2t}\left|\log
w\right|^{2}\right)}{\int\exp\left(-\frac{t}{2}\left|z-\frac{1}{t}\log
w\right|^{2}\right)\mu(z)dz}\cdot w^{x-1}dw.$ (35)
It is now clear that $x\mapsto 0$ is the Mellin transform of
$w\mapsto\frac{\left[p\left(\frac{1}{t}\log
w\right)-\tilde{p}\left(\frac{1}{t}\log
w\right)\right]\exp\left(-\frac{1}{2t}\left|\log
w\right|^{2}\right)}{\int\exp\left(-\frac{t}{2}\left|z-\frac{1}{t}\log
w\right|^{2}\right)\mu(z)dz}$. Since $w\mapsto\frac{1}{t}\log w$ is a one-to-
one mapping from $\mathbb{R}^{n}_{+}$ to $\mathbb{R}^{n}$, in order to show that $p$ is uniquely
determined, it suffices to show that the inversion of the multidimensional
Mellin transform is well-defined.
###### Proof of Lemma 10.
The main strategy is to verify the conditions of Theorem 2 in [Ant07] on the
inversion of the Mellin transform and then to apply it to the function
appearing in Eq. (35). In the notation of [Ant07], take the convex set $U:=K\subset\mathbb{R}^{n}$
and the convex set $\Theta:=\mathbb{B}^{n}(0,2\pi)$, and set
$\displaystyle\mathcal{F}(\cdot):=0$
on $U+i\mathbb{R}^{n}$. $\mathcal{F}$ is holomorphic on $U+i\mathbb{R}^{n}$ and bounded. Applying
Theorem 2 in [Ant07], its inverse Mellin transform is well-defined, and it is
$0$ almost everywhere. We conclude that the function
$\displaystyle w\mapsto\frac{\left[p\left(\frac{1}{t}\log
w\right)-\tilde{p}\left(\frac{1}{t}\log
w\right)\right]\exp\left(-\frac{1}{2t}\left|\log
w\right|^{2}\right)}{\int\exp\left(-\frac{t}{2}\left|z-\frac{1}{t}\log
w\right|^{2}\right)\mu(z)dz}$
is $0$ almost everywhere, which implies $p=\tilde{p}$ almost everywhere.
## 5 Conductance for the transformed density
In this section we first prove Lemma 9 and then prove Lemma 8.
### 5.1 Distance of a typical point from $\mu_{t}$ to its center
The main idea in the proof of Lemma 9 is to first use the observation in [EAM22] on
the distribution of $c_{n}$ and then apply standard Gaussian
concentration.
###### Proof of Lemma 9.
Recall that $\mu_{n}$ is uniquely determined given $c_{n}$. The first part of the
lemma is a direct application of Theorem 2 in [EAM22]. Additionally, given the
data-generating process $X\sim\mu$, $Z\sim\mathcal{N}(0,\frac{1}{n}\mathbb{I}_{n})$
independent, and $c_{n}/n$ distributed as $X+Z$, a point drawn from $\mu_{t}$ has the same
law as the conditional distribution
$\displaystyle X\mid c_{n}.$
We are interested in the conditional distribution $X-\frac{c_{n}}{n}\mid
c_{n}$. Applying the standard chi-square tail bound (see Lemma 1 in [LM00]),
we obtain the unconditional bound
$\displaystyle{\mathbb{P}}(\left|Z\right|^{2}\geq 2)\leq e^{-\frac{n}{16}}$
$\displaystyle{\mathbb{P}}(\left|Z\right|^{2}\leq\frac{1}{2})\leq
e^{-\frac{n}{16}}.$
Hence,
$\displaystyle{\mathbb{P}}(\left|Z\right|\geq\sqrt{2}\text{ or
}\left|Z\right|\leq\frac{\sqrt{2}}{2})\leq 2e^{-\frac{n}{16}}.$ (36)
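These two bounds follow from the chi-square tail inequalities of Lemma 1 in [LM00], $\mathbb{P}(\chi^{2}_{n}\geq n+2\sqrt{nx}+2x)\leq e^{-x}$ and $\mathbb{P}(\chi^{2}_{n}\leq n-2\sqrt{nx})\leq e^{-x}$, applied with $x=n/16$: since $n\left|Z\right|^{2}\sim\chi^{2}_{n}$, it suffices that the two thresholds bracket $2n$ and $n/2$. A sketch of the bookkeeping:

```python
import math

# With x = n/16, the Laurent-Massart thresholds satisfy
# n + 2*sqrt(n*x) + 2*x <= 2n  (upper tail, matches |Z|^2 >= 2) and
# n - 2*sqrt(n*x) >= n/2       (lower tail, matches |Z|^2 <= 1/2).
for n in [10, 100, 10000]:
    x = n / 16
    assert n + 2 * math.sqrt(n * x) + 2 * x <= 2 * n
    assert n - 2 * math.sqrt(n * x) >= n / 2
```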
Let $\mathfrak{E}$ denote the set
$\mathfrak{E}:=\left\\{a\in\mathbb{R}^{n}\mid{\mathbb{P}}(\left|Z\right|>\sqrt{2}\text{
or }\left|Z\right|\leq\frac{\sqrt{2}}{2}\mid
c_{n}=a)>e^{-\frac{n}{32}}\right\\}$. Writing out Eq. (36) with conditional
probability, we obtain
probability, we obtain
$\displaystyle
2e^{-\frac{n}{16}}\geq{\mathbb{P}}(\left|Z\right|\geq\sqrt{2}\text{ or
}\left|Z\right|\leq\frac{\sqrt{2}}{2})\geq{\mathbb{P}}(\left|Z\right|\geq\sqrt{2}\text{
or }\left|Z\right|\leq\frac{\sqrt{2}}{2}\mid
c_{n}\in\mathfrak{E})\cdot{\mathbb{P}}(c_{n}\in\mathfrak{E})>e^{-\frac{n}{32}}{\mathbb{P}}(c_{n}\in\mathfrak{E}).$
Hence, ${\mathbb{P}}(c_{n}\in\mathfrak{E})<2e^{-\frac{n}{32}}$.
### 5.2 Overlap bound: Proof of Lemma 8
To lower bound the $s$-conductance in Lemma 8, we first need to bound the
transition overlap for close points in $K_{r}$ defined in Eq. (3). This
concept of transition overlap was previously proposed in [Lov99]. In order to
introduce the transition overlap, we first define the notion of $1/8$-quantile
hit-and-run step-size.
###### Definition 22 ($1/8$-quantile hit-and-run step-size).
Given a target density $\nu$ supported on a convex set $K$ and a point $x\in K$,
define $F_{x}(\nu)$, the $1/8$-quantile step-size of hit-and-run with target
density $\nu$, as the step-size such that
$\displaystyle{\mathbb{P}}_{Y\sim
P_{x\to\cdot}(\nu)}\left(\left|Y-x\right|\leq F_{x}(\nu)\right)=\frac{1}{8}.$
(37)
The hit-and-run transition kernel $y\mapsto P_{x\to y}$ is continuous on $K$
for any $x\in K$, so the above quantity is well-defined.
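For concreteness, one step of hit-and-run with a target of this form can be sketched as follows. This is a minimal illustration, assuming $K$ is the unit Euclidean ball (so chord endpoints have a closed form) and using a simple rejection sampler for the one-dimensional restricted Gaussian along the chord; neither choice is prescribed by the paper.

```python
import math, random

# One hit-and-run step for a target proportional to exp(-n/2 |x - beta|^2)
# restricted to K = unit ball (illustrative assumption).
def hit_and_run_step(x, beta, n):
    # 1) uniform direction on the unit sphere
    d = [random.gauss(0.0, 1.0) for _ in range(n)]
    nd = math.sqrt(sum(v * v for v in d))
    d = [v / nd for v in d]
    # 2) chord {t : |x + t d| <= 1}: solve the quadratic for its endpoints
    b = sum(xi * di for xi, di in zip(x, d))
    c = sum(xi * xi for xi in x) - 1.0
    disc = math.sqrt(b * b - c)
    t_lo, t_hi = -b - disc, -b + disc
    # 3) along the chord the target is a 1D Gaussian with mean <beta - x, d>
    #    and standard deviation 1/sqrt(n), truncated to [t_lo, t_hi];
    #    sample it by rejection from the untruncated Gaussian
    m = sum((bi - xi) * di for bi, xi, di in zip(beta, x, d))
    while True:
        t = random.gauss(m, 1.0 / math.sqrt(n))
        if t_lo <= t <= t_hi:
            return [xi + t * di for xi, di in zip(x, d)]

random.seed(1)
n = 10
x, beta = [0.0] * n, [0.1] * n
for _ in range(200):
    x = hit_and_run_step(x, beta, n)
assert sum(v * v for v in x) <= 1.0 + 1e-9  # the walk never leaves K
```

The step draws a uniform direction, intersects the resulting line with $K$, and then samples the target restricted to that chord, which here is a truncated one-dimensional Gaussian.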
###### Lemma 23.
Fix $\beta\in\mathbb{R}^{n}$ and the target density $\nu_{\beta,n}$. Let $u,v\in K_{r}$
be such that $\left|u-\beta\right|\in\left[\frac{1}{\sqrt{2}},\sqrt{2}\right]$.
Suppose that
$\displaystyle\left|u-v\right|<\frac{2}{\sqrt{n}}\min\left\\{F_{u}(\nu_{\beta,n}),1\right\\}.$
Then there exists a universal constant $C\in(0,1)$ such that
$\displaystyle{\rm{d}}_{{\rm{TV}}}\left(P_{u\to\cdot}(\nu_{\beta,n}),P_{v\to\cdot}(\nu_{\beta,n})\right)<1-C\min\left\\{\sqrt{n}F_{u}(\nu_{\beta,n}),1\right\\}.$
To prove the transition overlap bound in Lemma 23, we need to obtain a rough
estimate of $F_{u}$.
###### Lemma 24.
Let $r>0$. For any $\beta\in\mathbb{R}^{n}$ and $u\in K_{r}$ such that
$\left|u-\beta\right|\leq\sqrt{2}$, we have
$\displaystyle
F_{u}(\nu_{\beta,n})\geq\frac{1}{128}\min\left\\{2r,\frac{1}{8\sqrt{n}}\right\\}.$
Deferring the proof of Lemma 24 to the end, we are ready to prove Lemma 23.
###### Proof of Lemma 23.
For the sake of brevity, throughout the proof we omit the expression
$\nu_{\beta,n}$ in our notation (abbreviating $P=P(\nu_{\beta,n})$ for
example), since $\nu_{\beta,n}$ is the only target density considered in this
proof. It is sufficient to prove that for any nonempty measurable subset
$A\subset K$, we have
$\displaystyle P_{u\to A}-P_{v\to A}<1-C\min\left\\{\sqrt{n}F_{u},1\right\\}.$
Since $P_{u\to A}\leq 1$, this is implied by showing that
$P_{v\to A}\geq C\min\left\\{\sqrt{n}F_{u},1\right\\}.$ (38)
We partition $A$ into four parts as follows
* •
$A_{1}$ is the part too close to $u$
$\displaystyle A_{1}:=\left\\{x\in A:\left|x-u\right|<F_{u}\right\\}.$
* •
$A_{2}$ is the part which is not almost orthogonal to $u-v$
$\displaystyle A_{2}:=\left\\{x\in
A:\left|(x-u)^{\top}(u-v)\right|>\frac{2}{\sqrt{n}}\left|x-u\right|\cdot\left|u-v\right|\right\\}.$
* •
$A_{3}$ is the part for which the angle $\angle p_{u}(x)\beta u$ satisfies
$\sin(\angle p_{u}(x)\beta u)>\frac{2}{\sqrt{n}}$, as illustrated in Figure 1
$\displaystyle A_{3}:=\left\\{x\in A:\sin\left(\angle p_{u}(x)\beta
u\right)>\frac{2}{\sqrt{n}}\right\\},$
where $p_{u}(x)$ is the projection of $\beta$ onto the line through $u$ and
$x$. We omit the dependency on $x$ and use $p_{u}$ when the dependency on $x$
is clear.
* •
$S:=A\setminus\left(A_{1}\cup A_{2}\cup A_{3}\right)$ is the rest.
The proof proceeds in two steps:
1. 1.
Show that $P_{u\to S}\geq\frac{1}{4}$.
2. 2.
Show that there exists a constant $C^{\prime}>0$ such that $P_{v\to S}\geq
C^{\prime}\cdot P_{u\to S}$. According to the definition of $P_{u\to\cdot}$ in
Eq. (1), we need to
* •
show that $\left|x-v\right|$ can be upper bounded via $\left|x-u\right|$,
* •
show that $\nu(\ell_{vx})$ can be upper bounded via $\nu(\ell_{ux})$.
Note that the two steps establish a lower bound on $P_{v\to A}$, which
concludes the proof in light of (38).
##### Step 1.
According to the definition (37) of $F_{u}$, we have
$\displaystyle P_{u\to A_{1}}\leq
P_{u\to\mathbb{B}^{n}_{u}(F_{u})}=\frac{1}{8}.$
Given $u,v$, $P_{u\to A_{2}}$ only depends on the uniform distribution on the
unit sphere. We invoke the following well-known result [Tko12, B+97] on the
area upper bound of a spherical cap of angle $\phi\in(0,\frac{\pi}{2})$,
$\displaystyle\frac{\mathcal{A}_{n}(\phi)}{2\mathcal{A}_{n}(\pi/2)}\leq
e^{-n\cos(\phi)^{2}/2},$ (39)
where $\mathcal{A}_{n}(\phi)$ denotes the area of the cap of angle $\phi$ of
the unit sphere in $\mathbb{R}^{n}$. Applying Eq. (39) with
$\cos(\phi)=\frac{2}{\sqrt{n}}$, we have
$\displaystyle P_{u\to A_{2}}\leq 0.3.$
Similarly, given $u$, $P_{u\to A_{3}}$ only depends on the uniform
distribution on the unit sphere and an application of Eq. (39) implies
$\displaystyle P_{u\to A_{3}}\leq 0.3.$
Combining the three displays above, we obtain
$\displaystyle P_{u\to S}\geq 1-\frac{1}{8}-0.3-0.3\geq\frac{1}{4}.$
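The two cap-bound applications above can be checked by Monte Carlo; the sketch below samples uniform directions and verifies that the empirical mass of the two caps $\left\\{\left|\langle\theta,e_{1}\rangle\right|\geq 2/\sqrt{n}\right\\}$ stays below $0.3$ (the dimension and sample size are our own illustrative choices).

```python
import math, random

# Monte Carlo check of the application of Eq. (39) with cos(phi) = 2/sqrt(n):
# for a uniform direction on the unit sphere in R^n, the two caps where
# |<theta, e_1>| >= 2/sqrt(n) carry mass at most 2*exp(-2) < 0.3.
random.seed(0)
n, trials = 50, 20000
hits = 0
for _ in range(trials):
    g = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(v * v for v in g))
    if abs(g[0] / norm) >= 2 / math.sqrt(n):
        hits += 1
assert hits / trials <= 0.3  # empirically well below the cap bound
```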
##### Step 2.
First, for $x\in S$, we have
$\displaystyle\left|x-u\right|\overset{(i)}{\geq}F_{u}\overset{(ii)}{\geq}\frac{\sqrt{n}}{2}\left|u-v\right|.$
(40)
(i) follows from $x\notin A_{1}$. (ii) follows from the assumption of the
lemma. Second, we show that $\left|x-v\right|$ can be upper bounded via
$\left|x-u\right|$ as follows
$\displaystyle\left|x-v\right|^{2}$
$\displaystyle=\left|x-u\right|^{2}+\left|u-v\right|^{2}+2(x-u)^{\top}(u-v)$
$\displaystyle\overset{(i)}{\leq}\left|x-u\right|^{2}+\left|u-v\right|^{2}+\frac{4}{\sqrt{n}}\left|x-u\right|\left|u-v\right|$
$\displaystyle\overset{(ii)}{\leq}\left|x-u\right|^{2}+\frac{4}{n}\left|x-u\right|^{2}+\frac{8}{n}\left|x-u\right|^{2}$
$\displaystyle\leq\left(1+\frac{12}{n}\right)\left|x-u\right|^{2}$
$\displaystyle\leq\left(\left(1+\frac{6}{n}\right)\left|x-u\right|\right)^{2}.$
(41)
(i) follows from $x\notin A_{2}$. (ii) follows from Eq. (40). Third, let
$q_{u}$ and $q_{v}$ be the unit vectors parallel to $u-x$ and $v-x$,
respectively. We then have
$\displaystyle\frac{\nu(\ell_{vx})}{\nu(\ell_{ux})}$
$\displaystyle=\frac{\int_{p_{v}+tq_{v}\in
K}\nu(p_{v}+tq_{v})dt}{\int_{p_{u}+tq_{u}\in K}\nu(p_{u}+tq_{u})dt}$
$\displaystyle=\frac{\int_{p_{v}+tq_{v}\in
K}e^{-\frac{n}{2}\left|p_{v}-\beta\right|^{2}-\frac{n}{2}t^{2}}dt}{\int_{p_{u}+tq_{u}\in
K}e^{-\frac{n}{2}\left|p_{u}-\beta\right|^{2}-\frac{n}{2}t^{2}}dt}$
$\displaystyle=\underbrace{\frac{e^{-\frac{n}{2}\left|p_{v}-\beta\right|^{2}}}{e^{-\frac{n}{2}\left|p_{u}-\beta\right|^{2}}}}_{Q_{1}(x)}\underbrace{\frac{\int_{p_{v}+tq_{v}\in
K}e^{-\frac{n}{2}t^{2}}dt}{\int_{p_{u}+tq_{u}\in
K}e^{-\frac{n}{2}t^{2}}dt}}_{Q_{2}(x)}.$ (42)
To derive an upper bound for the ratio $Q_{2}(x)$ we, roughly speaking, use
the fact that the chord $\ell_{ux}\cap K$ contains an interval, centered at
$p_{u}$, whose length is larger than $F_{u}$. From the definition of $A_{3}$,
we have $\left|p_{u}-u\right|=\sin(\angle p_{u}(x)\beta
u)\left|u-\beta\right|\leq\frac{2}{\sqrt{n}}\left|u-\beta\right|\leq\frac{2\sqrt{2}}{\sqrt{n}}$.
From the definition of $A_{1}$, we have $\left|x-u\right|\geq F_{u}$. If
$F_{u}\leq\frac{2}{\sqrt{n}}$, then
$\frac{1}{\sqrt{2\pi/n}}\int_{p_{u}+tq_{u}\in
K}e^{-\frac{n}{2}t^{2}}dt\geq\sqrt{n}F_{u}\cdot\gamma(2\sqrt{2}+2)$;
otherwise, $\frac{1}{\sqrt{2\pi/n}}\int_{p_{u}+tq_{u}\in
K}e^{-\frac{n}{2}t^{2}}dt\geq\Gamma(2\sqrt{2}+2)-\Gamma(2\sqrt{2})$.
Combining both cases, we conclude that $Q_{2}$ obeys the bound
$\displaystyle Q_{2}(x)=\frac{\frac{1}{\sqrt{2\pi/n}}\int_{p_{v}+tq_{v}\in
K}e^{-\frac{n}{2}t^{2}}dt}{\frac{1}{\sqrt{2\pi/n}}\int_{p_{u}+tq_{u}\in
K}e^{-\frac{n}{2}t^{2}}dt}\leq\frac{1}{\min\left\\{\sqrt{n}F_{u}\cdot\gamma(2\sqrt{2}+2),\Gamma(2\sqrt{2}+2)-\Gamma(2\sqrt{2})\right\\}}.$
(43)
To upper bound the first ratio $Q_{1}(x)$, it is sufficient to upper bound
$-\left|p_{v}-\beta\right|^{2}+\left|p_{u}-\beta\right|^{2}$. Let
$\delta_{1}:=\angle{u\beta v},\delta_{2}:=\angle{uxv}$. According to the
definition of $A_{1}$, we have
$\displaystyle\sin(\delta_{2})\leq\frac{\left|u-v\right|}{\left|u-x\right|}\leq\frac{2}{\sqrt{n}}.$
(44)
From the assumption, we have
$\left|u-v\right|\leq\frac{2}{\sqrt{n}},$ (45)
and $\left|u-\beta\right|\geq\frac{\sqrt{2}}{2}$. Hence,
$\displaystyle\sin(\delta_{1})\leq\frac{\left|u-v\right|}{\left|u-\beta\right|}\leq\frac{2\sqrt{2}}{n}.$
(46)
Figure 1: $u,v,\beta,x$ placed in a 3D plot
To lower-bound $\left|p_{v}-\beta\right|$, we need to upper-bound
$\left|p_{v}-v\right|$ and hence the angle $\angle{v\beta p_{v}}$. Note that
$\frac{\pi}{2}-\angle{v\beta p_{v}}=\angle{p_{v}v\beta}=\angle{v\beta
x}+\angle{vx\beta}$. The two angles $\angle{v\beta x}$ and $\angle{vx\beta}$
can be bounded via $\angle{u\beta x}$ and $\angle{ux\beta}$ respectively.
Let $\Delta_{1}=\angle{u\beta x}-\angle{v\beta x}$ and
$\Delta_{2}=\angle{ux\beta}-\angle{vx\beta}$. We claim that
$|\Delta_{i}|\leq\delta_{i}$ for $i=1,2$. Indeed, if
$w_{1}=\frac{u-\beta}{|u-\beta|}$, $w_{2}=\frac{v-\beta}{|v-\beta|}$ and
$w_{3}=\frac{x-\beta}{|x-\beta|}$ then by the spherical triangle inequality we
have
$|\arccos(\langle w_{1},w_{3}\rangle)-\arccos(\langle
w_{2},w_{3}\rangle)|\leq\arccos(\langle w_{1},w_{2}\rangle),$
which amounts to the fact that $|\Delta_{1}|\leq\delta_{1}$, and an analogous
argument shows that $|\Delta_{2}|\leq\delta_{2}$. We therefore have
$\displaystyle\left|p_{v}-v\right|$
$\displaystyle=\left|v-\beta\right|\sin(\angle{p_{v}\beta v})$
$\displaystyle=\left|v-\beta\right|\sin(\frac{\pi}{2}-\angle{v\beta
x}-\angle{vx\beta})$
$\displaystyle=\left|v-\beta\right|\sin(\frac{\pi}{2}-\angle{u\beta
x}-\angle{ux\beta}+\Delta_{1}+\Delta_{2})$
$\displaystyle=\left|v-\beta\right|\sin(\angle{p_{u}\beta
u}+\Delta_{1}+\Delta_{2})$
$\displaystyle\leq\left|v-\beta\right|\left(\sin(\angle{p_{u}\beta
u})+\sin(|\Delta_{1}|)+\sin(|\Delta_{2}|)\right)$
$\displaystyle\leq\left|v-\beta\right|\left(\frac{\left|p_{u}-u\right|}{\left|u-\beta\right|}+\left|\sin(\delta_{1})\right|+\left|\sin(\delta_{2})\right|\right).$
Plugging the bounds (44) and (46) into the above display, together with the
definition of $A_{3}$, we obtain
$\displaystyle\frac{\left|p_{v}-v\right|}{\left|v-\beta\right|}\leq\frac{\left|p_{u}-u\right|}{\left|u-\beta\right|}+\frac{2\sqrt{2}+2}{\sqrt{n}}\leq\frac{10}{\sqrt{n}}.$
Together with the assumption $\left|v-\beta\right|\leq\sqrt{2}$, we deduce
that
$\displaystyle\left|p_{v}-v\right|\leq\frac{10\sqrt{2}}{\sqrt{n}}.$ (47)
Using the fact that $p_{u}$ and $p_{v}$ are orthogonal projections, we have
$\displaystyle-\left|p_{v}-\beta\right|^{2}+\left|p_{u}-\beta\right|^{2}$
$\displaystyle=-\left|v-\beta\right|^{2}+\left|p_{v}-v\right|^{2}+\left|u-\beta\right|^{2}-\left|p_{u}-u\right|^{2}$
$\displaystyle\leq\left|u-v\right|\left(2\left|u-\beta\right|+\left|u-v\right|\right)+\left|p_{v}-v\right|^{2}-\left|p_{u}-u\right|^{2}$
$\displaystyle\leq\left|u-v\right|\left(2\left|u-\beta\right|+\left|u-v\right|\right)+\left|p_{v}-v\right|^{2}$
$\displaystyle\stackrel{{\scriptstyle\eqref{eq:uminusv}\wedge\eqref{eq:pvminusv}}}{{\leq}}\frac{4}{n}\left(2\sqrt{2}+\frac{4}{n}\right)+\frac{200}{n}$
$\displaystyle\leq\frac{230}{n},$
where the first inequality is obtained by the triangle inequality. We
therefore have
$\displaystyle
Q_{1}(x)=\frac{e^{-\frac{n}{2}\left|p_{v}-\beta\right|^{2}}}{e^{-\frac{n}{2}\left|p_{u}-\beta\right|^{2}}}\leq
e^{115}.$ (48)
Combining the bounds for the two ratios, we obtain
$\displaystyle P_{v\to S}$
$\displaystyle=\frac{2}{n\pi_{n}}\int_{S}\frac{f(x)}{\nu(\ell_{vx})\left|x-v\right|^{n-1}}dx$
$\displaystyle\overset{(i)}{\geq}\frac{2}{n\pi_{n}}\int_{S}\frac{f(x)}{Q_{1}(x)Q_{2}(x)\nu(\ell_{ux})\left|x-v\right|^{n-1}}dx$
$\displaystyle\overset{(ii)}{\geq}\frac{2}{e^{6}\cdot
n\pi_{n}}\int_{S}\frac{f(x)}{Q_{1}(x)Q_{2}(x)\nu(\ell_{ux})\left|x-u\right|^{n-1}}dx$
$\displaystyle\stackrel{{\scriptstyle\eqref{eq:R_2_bound}\wedge\eqref{eq:R_1_bound}}}{{\geq}}C\min\left\\{\sqrt{n}F_{u},1\right\\}\cdot
P_{u\to S},$
where (i) applies Eq. (42), and (ii) follows from Eq. (41) together with
$(1+\frac{6}{n})^{n}\leq e^{6}$. Here $C$ is a universal constant that depends
on the universal constants that appear in Eqs. (43) and (48).
Finally, we have
$\displaystyle P_{u\to A}-P_{v\to A}$ $\displaystyle\leq 1-P_{v\to S}$
$\displaystyle\leq 1-C\min\left\\{\sqrt{n}F_{u},1\right\\}\cdot P_{u\to S}$
$\displaystyle\leq 1-\frac{1}{4}C\min\left\\{\sqrt{n}F_{u},1\right\\}.$
This concludes the proof. We remark that the constants were not optimized, in
order to keep the derivation simple.
###### Proof of Lemma 24.
The lower bound proof proceeds similarly to that of Lemma 3.2 in [LV06b],
except that we have to deal with the Gaussian $\nu_{\beta,n}$ supported on the
convex set $K$. Define $s:K\to\mathbb{R}_{+}$ as
$\displaystyle
s(u):=\sup\left\\{t\in\mathbb{R}_{+}\middle|\lambda(u,t)\geq\frac{63}{64}\right\\},$
(49)
where $\lambda(u,t)$ is defined in Eq. (2). By the above definition of $s(u)$
and since $u\in K_{r}$, we have $s(u)\geq 2r$. To simplify notation when the
dependency on $u$ is clear, we simply write $s:=s(u)$. Let $\eta$ denote the
fraction of the surface of the ball $\mathbb{B}^{n}(u,s/2)$ that is not in
$K$. Then using the fact that $K$ is convex, we have
$\displaystyle\operatorname{vol}\left(\mathbb{B}^{n}\left(u,s\right)\setminus
K\right)\geq\eta\cdot\operatorname{vol}(\mathbb{B}^{n}\left(0,s\right))-\operatorname{vol}\left(\mathbb{B}^{n}\left(0,s/2\right)\right).$
On the other hand, the definition of $s$ implies that
$\displaystyle\operatorname{vol}\left(\mathbb{B}^{n}\left(u,s\right)\setminus
K\right)\leq\frac{1}{64}\operatorname{vol}(\mathbb{B}^{n}\left(0,s\right)).$
We deduce that for $n\geq 9$,
$\displaystyle\eta\leq\frac{1}{64}+2^{-n}\leq\frac{3}{128}.$
Take a line $\ell$ through $u$ with direction uniformly distributed on the
unit sphere. Then with probability at least $1-2\eta$,
$\ell\cap\mathbb{B}^{n}(u,s/2)\subseteq K$.
Now, let $p_{u}$ be the orthogonal projection of the point $\beta$ on the line
$\ell$. Then $|p_{u}-u|=|u-\beta|\cos(\alpha)$ where $\alpha$ is the angle
between $\ell$ and the line connecting $u$ and $\beta$. An application of the
spherical cap area upper bound as in Eq. (39) with
$\cos(\alpha)=\frac{2\sqrt{2}}{\sqrt{n}}$ and a union bound implies that with
probability at least $1-2\eta-\frac{1}{16}$, we have
$\ell\cap\mathbb{B}^{n}(u,s/2)\subseteq K$ and
$\left|p_{u}-u\right|\leq\sqrt{2}\cdot\frac{2\sqrt{2}}{\sqrt{n}}\leq\frac{4}{\sqrt{n}}$.
Define $\tau:=\min\left\\{s,\frac{1}{8\sqrt{n}}\right\\}$. Then
$\displaystyle{\mathbb{P}}_{Y\sim
P_{u\to\cdot}}\left(\left|Y-u\right|\leq\frac{\tau}{128}\mid Y\in\ell\right)$
$\displaystyle\overset{(i)}{\leq}\frac{\Gamma\left(\sqrt{n}\left(b+\frac{\tau}{256}\right)\right)-\Gamma\left(\sqrt{n}\left(b-\frac{\tau}{256}\right)\right)}{\Gamma\left(\sqrt{n}\left(b+\frac{s}{2}\right)\right)-\Gamma\left(\sqrt{n}\left(b-\frac{s}{2}\right)\right)}$
$\displaystyle\leq\frac{\Gamma\left(\sqrt{n}\left(b+\frac{\tau}{256}\right)\right)-\Gamma\left(\sqrt{n}\left(b-\frac{\tau}{256}\right)\right)}{\Gamma\left(\sqrt{n}\left(b+\frac{\tau}{2}\right)\right)-\Gamma\left(\sqrt{n}\left(b-\frac{\tau}{2}\right)\right)}$
$\displaystyle\overset{\phantom{(ii)}}{\leq}\frac{\frac{\sqrt{n}\tau}{128}\gamma(\sqrt{n}\left(b-\frac{\tau}{256}\right))}{\sqrt{n}\tau\gamma(\sqrt{n}(b+\frac{\tau}{2}))}$
$\displaystyle\overset{(ii)}{\leq}\frac{1}{64},$
where $b=\left|p_{u}-u\right|$, $\Gamma$ is the cumulative distribution function of
the standard Gaussian and $\gamma$ is the density function of the standard
Gaussian. (i) follows by reducing to the one-dimensional truncated Gaussian along $\ell$.
(ii) follows because $\sqrt{n}b\leq 4$, $\tau\leq\frac{1}{8\sqrt{n}}$, and a
numerical calculation shows the ratio
$\frac{\gamma(\sqrt{n}\left(b-\frac{\tau}{256}\right))}{\gamma(\sqrt{n}(b+\frac{\tau}{2}))}\leq
2$. For the unconditional probability, we have
$\displaystyle{\mathbb{P}}_{Y\sim
P_{u\to\cdot}}\left(\left|Y-u\right|\leq\frac{\tau}{128}\right)\leq(2\eta+\frac{1}{16})\cdot
1+(1-2\eta-\frac{1}{16})\cdot\frac{1}{64}<\frac{1}{8}.$
Hence,
$\displaystyle
F_{u}(\nu_{\beta,n})\geq\frac{\tau}{128}\geq\frac{1}{128}\min\left\\{2r,\frac{1}{8\sqrt{n}}\right\\}.$
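The ratio bound used in step (ii) is easy to confirm numerically. The following sketch (an illustration for the reader, not part of the proof) scans the stated parameter range $\sqrt{n}b\leq 4$, $\sqrt{n}\tau\leq\frac{1}{8}$ and checks that $\gamma(\sqrt{n}(b-\frac{\tau}{256}))/\gamma(\sqrt{n}(b+\frac{\tau}{2}))\leq 2$:

```python
import math

def gauss_pdf(x):
    """Standard Gaussian density, denoted gamma in the text."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def ratio(sn_b, sn_tau):
    # sn_b stands for sqrt(n)*b, sn_tau for sqrt(n)*tau
    return gauss_pdf(sn_b - sn_tau / 256.0) / gauss_pdf(sn_b + sn_tau / 2.0)

# Scan the stated range: sqrt(n)*b in [0, 4], sqrt(n)*tau in (0, 1/8].
worst = max(ratio(4.0 * i / 50, 0.125 * j / 50)
            for i in range(51) for j in range(1, 51))
print(worst)  # about 1.289, comfortably below 2
```

The worst case occurs at the corner $\sqrt{n}b=4$, $\sqrt{n}\tau=\frac{1}{8}$, where the ratio is $\exp\bigl((u_2^2-u_1^2)/2\bigr)\approx 1.29$.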
#### 5.2.1 Proof of Lemma 8
###### Proof of Lemma 8.
To simplify notation, let $\nu=\nu_{\beta,n}$. Consider the truncated density
$\nu_{\dagger}$
$\displaystyle\nu_{\dagger}(x):=\mathbf{1}_{K_{r}}(x)e^{-\frac{n}{2}\left|x-\beta\right|^{2}}\frac{1}{\nu(K_{r})}$
Note that since $K_{r}$ is convex, $\nu_{\dagger}$ is still $n$-strongly-logconcave. Consequently, it satisfies the isoperimetric inequality (see Theorem 5.4 in [CV18]): for $U_{1},U_{2},U_{3}$ a partition of $K_{r}$,
$\displaystyle\nu_{\dagger}(U_{3})\geq\log 2\cdot\sqrt{n}\cdot
d(U_{1},U_{2})\cdot\nu_{\dagger}(U_{1})\nu_{\dagger}(U_{2}).$
Figure 2: Illustration of the partition of $K$ in the conductance lower bound (showing the sets $S_{1}$, $S_{2}$, $S_{1}^{\prime}$, $S_{2}^{\prime}$, $K$, $K_{r}$, and $\Upsilon^{c}$).
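As a plausibility check (not part of the proof), this isoperimetric inequality can be verified numerically for a concrete $n$-strongly-logconcave measure, the Gaussian $N(0,1/n)$ on the real line, with the partition $U_{1}=(-\infty,-t)$, $U_{2}=(t,\infty)$, $U_{3}=[-t,t]$, so that $d(U_{1},U_{2})=2t$:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def holds(n, t):
    # nu = N(0, 1/n), which is n-strongly-logconcave;
    # partition: U1 = (-inf, -t), U2 = (t, inf), U3 = [-t, t], d(U1, U2) = 2t
    s = t * math.sqrt(n)
    lhs = Phi(s) - Phi(-s)                                    # nu(U3)
    rhs = math.log(2.0) * math.sqrt(n) * (2 * t) * (1 - Phi(s)) ** 2
    return lhs >= rhs

ok = all(holds(n, t)
         for n in (1, 9, 100, 10000)
         for t in (1e-4, 0.01, 0.1, 0.5, 1.0, 3.0))
print(ok)  # True
```

In the small-$t$ limit the two sides scale linearly in $t\sqrt{n}$ with a comfortable constant gap, and for large $t$ the right-hand side decays much faster.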
Let $\varrho=\frac{r\sqrt{n}}{64C}$ where $C$ is the constant from Lemma 23.
Define the sets
$\displaystyle S_{1}^{\prime}:=\left\\{u\in S_{1}\cap K_{r}\mid P_{u\to
S_{2}}(\nu)<\frac{\varrho}{2}\right\\},\quad S_{2}^{\prime}:=\left\\{u\in
S_{2}\cap K_{r}\mid P_{u\to S_{1}}(\nu)<\frac{\varrho}{2}\right\\}.$
There are two cases:
* •
Case 1: $\nu(S_{1}^{\prime})\leq\nu(S_{1}\cap K_{r})/2$ or
$\nu(S_{2}^{\prime})\leq\nu(S_{2}\cap K_{r})/2$.
* •
Case 2: $\nu(S_{i}^{\prime})\geq\nu(S_{i}\cap K_{r})/2$ for $i=1,2$.
##### Case 1:
Assuming that $\nu(S_{1}\cap K_{r}\setminus S_{1}^{\prime})\geq\nu(S_{1}\cap
K_{r})/2$, we obtain
$\displaystyle\int_{S_{1}}P_{x\to S_{2}}(\nu)d\nu(x)$
$\displaystyle\geq\int_{S_{1}\cap K_{r}\setminus S_{1}^{\prime}}P_{x\to
S_{2}}(\nu)d\nu(x)$
$\displaystyle\overset{(i)}{\geq}\frac{\varrho}{2}\nu(S_{1}\cap K_{r}\setminus
S_{1}^{\prime})$ $\displaystyle\geq\frac{\varrho}{4}\nu(S_{1}\cap K_{r}).$
(i) follows from the definition of $S_{1}^{\prime}$. The case where the roles of $S_{1}$ and $S_{2}$ are switched is handled similarly, using the reversibility of the kernel $P$, which implies that $\int_{S_{1}}P_{x\to S_{2}}(\nu)d\nu(x)=\int_{S_{2}}P_{x\to S_{1}}(\nu)d\nu(x)$.
##### Case 2:
For any $u\in S_{1}^{\prime}\cap\Upsilon$ and $v\in
S_{2}^{\prime}\cap\Upsilon$, we have
$\displaystyle{\rm{d}}_{{\rm{TV}}}\left(P_{u\to\cdot}(\nu),P_{v\to\cdot}(\nu)\right)\geq
P_{u\to S_{1}}(\nu)-P_{v\to S_{1}}(\nu)=1-P_{u\to S_{2}}(\nu)-P_{v\to
S_{1}}(\nu)>1-\varrho.$
Lemma 24 implies that for $u\in S_{1}^{\prime}\cap\Upsilon$ and
$r\leq\frac{1}{16\sqrt{n}}$, we have
$\displaystyle F_{u}(\nu_{\beta,n})\geq\frac{r}{64}.$ (50)
Together with Lemma 23, we have that
$\left|u-v\right|\geq\Delta:=\frac{r}{32\sqrt{n}}$. Since this holds for every pair $u\in S_{1}^{\prime}\cap\Upsilon$ and $v\in S_{2}^{\prime}\cap\Upsilon$, it follows that
$\displaystyle
d\left(S_{1}^{\prime}\cap\Upsilon,S_{2}^{\prime}\cap\Upsilon\right)\geq\Delta.$
We have
$\displaystyle\int_{S_{1}}P_{x\to S_{2}}(\nu)d\nu(x)$
$\displaystyle=\frac{1}{2}\left(\int_{S_{1}}P_{x\to
S_{2}}d\nu(x)+\int_{S_{2}}P_{x\to S_{1}}d\nu(x)\right)$
$\displaystyle\geq\frac{1}{2}\left(\int_{S_{1}\cap K_{r}\setminus
S_{1}^{\prime}}P_{x\to S_{2}}d\nu(x)+\int_{S_{2}\cap K_{r}\setminus
S_{2}^{\prime}}P_{x\to S_{1}}d\nu(x)\right)$
$\displaystyle\overset{(i)}{\geq}\frac{\varrho}{4}\left[\nu(S_{1}\cap
K_{r}\setminus S_{1}^{\prime})+\nu(S_{2}\cap K_{r}\setminus
S_{2}^{\prime})\right]$
$\displaystyle=\frac{\varrho}{4}\nu(K_{r}\setminus(S_{1}^{\prime}\cup
S_{2}^{\prime})).$
(i) follows from the definition of $S_{1}^{\prime}$ and $S_{2}^{\prime}$. Note
that the three sets $S_{1}^{\prime}\cap\Upsilon,S_{2}^{\prime}\cap\Upsilon$
and $K_{r}\setminus((S_{1}^{\prime}\cup S_{2}^{\prime})\cap\Upsilon)$ form a
partition of $K_{r}$. We have
$\displaystyle\nu(K_{r}\setminus(S_{1}^{\prime}\cup S_{2}^{\prime}))+\delta$
$\displaystyle\geq\nu(K_{r}\setminus((S_{1}^{\prime}\cup
S_{2}^{\prime})\cap\Upsilon))$
$\displaystyle\overset{(i)}{\geq}\frac{\log(2)\cdot\sqrt{n}}{\nu(K_{r})}\cdot
d(S_{1}^{\prime}\cap\Upsilon,S_{2}^{\prime}\cap\Upsilon)\cdot\nu(S_{1}^{\prime}\cap\Upsilon)\nu(S_{2}^{\prime}\cap\Upsilon)$
$\displaystyle\geq\frac{1}{2}\Delta\sqrt{n}\cdot\left(\nu(S_{1}^{\prime}\cap\Upsilon)\right)\nu(S_{2}^{\prime}\cap\Upsilon)$
$\displaystyle\geq\frac{1}{2}\Delta\sqrt{n}\cdot\left(\nu(S_{1}^{\prime})-\delta\right)\nu(S_{2}^{\prime}\cap\Upsilon)$
$\displaystyle\geq\frac{1}{2}\Delta\sqrt{n}\cdot\nu(S_{1}^{\prime})\nu(S_{2}^{\prime}\cap\Upsilon)-\frac{1}{2}\Delta\sqrt{n}\delta$
$\displaystyle\geq\frac{1}{2}\Delta\sqrt{n}\cdot\nu(S_{1}^{\prime})\nu(S_{2}^{\prime})-\Delta\sqrt{n}\delta$
$\displaystyle\overset{(ii)}{\geq}\frac{1}{8}\Delta\sqrt{n}\cdot\nu(S_{1}\cap
K_{r})\cdot\nu(S_{2}\cap K_{r})-\Delta\sqrt{n}\delta.$
(i) applies the isoperimetric inequality for $\nu_{\dagger}$. (ii) applies the condition of Case 2. Combining the above two displays, we conclude that there exists a universal constant $C^{\prime}>0$ such that
$\displaystyle\int_{S_{1}}P_{x\to S_{2}}(\nu)d\nu(x)\geq\frac{r^{2}\sqrt{n}}{C^{\prime}}\left[\nu(S_{1}\cap K_{r})\cdot\nu(S_{2}\cap K_{r})-8\left(1+\frac{32}{r}\right)\delta\right].$
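For completeness, the constant $C^{\prime}$ can be traced explicitly (a bookkeeping step, using $\varrho=\frac{r\sqrt{n}}{64C}$ and $\Delta\sqrt{n}=\frac{r}{32}$):
$\displaystyle\frac{\varrho}{4}\cdot\frac{\Delta\sqrt{n}}{8}=\frac{1}{32}\cdot\frac{r\sqrt{n}}{64C}\cdot\frac{r}{32}=\frac{r^{2}\sqrt{n}}{65536C},\qquad\frac{\varrho}{4}\left(1+\Delta\sqrt{n}\right)=\frac{r^{2}\sqrt{n}}{65536C}\cdot 8\left(1+\frac{32}{r}\right),$
so the stated bound holds with $C^{\prime}=65536C$.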
## References
* [AC93] James H Albert and Siddhartha Chib. Bayesian analysis of binary and polychotomous response data. Journal of the American statistical Association, 88(422):669–679, 1993.
* [AK91] David Applegate and Ravi Kannan. Sampling and integration of near log-concave functions. In Proceedings of the twenty-third annual ACM symposium on Theory of computing, pages 156–163, 1991.
* [Ant07] Irina A Antipova. Inversion of many-dimensional Mellin transforms and solutions of algebraic equations. Sbornik: Mathematics, 198(4):447, 2007.
* [B+97] Keith Ball et al. An elementary introduction to modern convex geometry. Flavors of geometry, 31(1-58):26, 1997.
* [BL02] Herm Jan Brascamp and Elliott H Lieb. On extensions of the Brunn-Minkowski and Prékopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. In Inequalities, pages 441–464. Springer, 2002.
* [CDWY18] Yuansi Chen, Raaz Dwivedi, Martin J Wainwright, and Bin Yu. Fast MCMC sampling algorithms on polytopes. The Journal of Machine Learning Research, 19(1):2146–2231, 2018.
* [CE22] Yuansi Chen and Ronen Eldan. Localization schemes: A framework for proving mixing bounds for Markov chains. arXiv preprint arXiv:2203.04163, 2022.
* [Che21] Yuansi Chen. An almost constant lower bound of the isoperimetric coefficient in the KLS conjecture. Geometric and Functional Analysis, 31(1):34–61, 2021.
* [Cou17] Benjamin Cousins. Efficient high-dimensional sampling and integration. PhD thesis, Georgia Institute of Technology, 2017.
* [CV18] Ben Cousins and Santosh Vempala. Gaussian cooling and ${O}^{*}(n^{3})$ algorithms for volume and Gaussian volume. SIAM Journal on Computing, 47(3):1237–1273, 2018.
* [DFK91] Martin Dyer, Alan Frieze, and Ravi Kannan. A random polynomial-time algorithm for approximating the volume of convex bodies. Journal of the ACM (JACM), 38(1):1–17, 1991.
* [EAM22] Ahmed El Alaoui and Andrea Montanari. An information-theoretic view of stochastic localization. IEEE Transactions on Information Theory, 2022.
* [EKZ21] Ronen Eldan, Frederic Koehler, and Ofer Zeitouni. A spectral condition for spectral gap: fast mixing in high-temperature Ising models. Probability Theory and Related Fields, pages 1–17, 2021.
* [Eld13] Ronen Eldan. Thin shell implies spectral gap up to polylog via a stochastic localization scheme. Geometric and Functional Analysis, 23(2):532–569, 2013.
* [Eld22] Ronen Eldan. Analysis of high-dimensional distributions using pathwise methods. In Proceedings of ICM, 2022.
* [HH06] Leonhard Held and Chris C Holmes. Bayesian auxiliary variable models for binary and multinomial regression. Bayesian analysis, 1(1):145–168, 2006.
* [JLLV21] He Jia, Aditi Laddha, Yin Tat Lee, and Santosh Vempala. Reducing isotropy and volume to KLS: an ${O}(n^{3}\psi^{2})$ volume algorithm. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 961–974, 2021.
* [JLV22] Arun Jambulapati, Yin Tat Lee, and Santosh S Vempala. A slightly improved bound for the KLS constant. arXiv preprint arXiv:2208.11644, 2022.
* [KL22] Bo’az Klartag and Joseph Lehec. Bourgain’s slicing problem and KLS isoperimetry up to polylog. arXiv preprint arXiv:2203.15551, 2022.
* [KLS95] Ravi Kannan, László Lovász, and Miklós Simonovits. Isoperimetric problems for convex bodies and a localization lemma. Discrete & Computational Geometry, 13(3):541–559, 1995.
* [KLS97] Ravi Kannan, László Lovász, and Miklós Simonovits. Random walks and an ${O}^{*}(n^{5})$ volume algorithm for convex bodies. Random Structures & Algorithms, 11(1):1–50, 1997.
* [KN09] Ravi Kannan and Hariharan Narayanan. Random walks on polytopes and an affine interior point method for linear programming. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 561–570, 2009.
* [LLV20] Aditi Laddha, Yin Tat Lee, and Santosh Vempala. Strong self-concordance and sampling. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 1212–1222, 2020.
* [LM00] Beatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic functional by model selection. Annals of Statistics, pages 1302–1338, 2000.
* [Lov99] László Lovász. Hit-and-run mixes fast. Mathematical programming, 86(3):443–461, 1999.
* [LS90] László Lovász and Miklós Simonovits. The mixing rate of Markov chains, an isoperimetric inequality, and computing the volume. In Proceedings [1990] 31st annual symposium on foundations of computer science, pages 346–354. IEEE, 1990.
* [LS93] László Lovász and Miklós Simonovits. Random walks in a convex body and an improved volume algorithm. Random structures & algorithms, 4(4):359–412, 1993.
* [LV06a] László Lovász and Santosh Vempala. Fast algorithms for logconcave functions: Sampling, rounding, integration and optimization. In 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS’06), pages 57–68. IEEE, 2006.
* [LV06b] László Lovász and Santosh Vempala. Hit-and-run from a corner. SIAM Journal on Computing, 35(4):985–1005, 2006.
* [LV06c] László Lovász and Santosh Vempala. Simulated annealing in convex bodies and an ${O}^{*}(n^{4})$ volume algorithm. Journal of Computer and System Sciences, 72(2):392–417, 2006.
* [LV07] László Lovász and Santosh Vempala. The geometry of logconcave functions and sampling algorithms. Random Structures & Algorithms, 30(3):307–358, 2007.
* [LV17] Yin Tat Lee and Santosh S Vempala. Geodesic walks in polytopes. In Proceedings of the 49th Annual ACM SIGACT Symposium on theory of Computing, pages 927–940, 2017.
* [LV18a] Yin Tat Lee and Santosh S Vempala. Convergence rate of Riemannian Hamiltonian Monte Carlo and faster polytope volume computation. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 1115–1121, 2018.
* [LV18b] Yin Tat Lee and Santosh S Vempala. The Kannan-Lovász-Simonovits conjecture. arXiv preprint arXiv:1807.03465, 2018.
* [Mel96] Hjalmar Mellin. Über die fundamentale Wichtigkeit des Satzes von Cauchy für die Theorien der Gamma-und der hypergeometrischen Functionen, volume 21. Societatis litterariae fennicae, 1896.
* [MV19] Oren Mangoubi and Nisheeth K Vishnoi. Faster polytope rounding, sampling, and volume computation via a sub-linear ball walk. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 1338–1357. IEEE, 2019.
* [Smi84] Robert L Smith. Efficient Monte Carlo procedures for generating points uniformly distributed over bounded regions. Operations Research, 32(6):1296–1308, 1984.
* [SW14] Adrien Saumard and Jon A Wellner. Log-concavity and strong log-concavity: a review. Statistics surveys, 8:45, 2014.
* [Tko12] Tomasz Tkocz. An upper bound for spherical caps. The American Mathematical Monthly, 119(7):606–607, 2012.
* [Tur71] Valentin F Turchin. On the computation of multidimensional integrals by the Monte-Carlo method. Theory of Probability & Its Applications, 16(4):720–724, 1971.
* [Vem05] Santosh Vempala. Geometric random walks: a survey. Combinatorial and computational geometry, 52(573-612):2, 2005.
## Appendix A Summary of properties of a logconcave distribution
Here is a list of well-known properties of logconcave distributions.
* •
The log-concavity of a measure is preserved by affine transformations and by
marginalization, see Proposition 3.1 and Theorem 3.3 in [SW14].
* •
The strong log-concavity of a measure is preserved by affine transformations,
by convolution (Theorem 3.7 in [SW14]) and by marginalization (Theorem 3.8 in
[SW14]).
* •
The isoperimetric constant of a 1-dimensional isotropic logconcave density is
lower bounded by $\log(2)/2\approx 0.34$ (see Theorem 4.3 in [Vem05]).
* •
The maximal value of a 1-dimensional isotropic logconcave density $p$ is
bounded by $1$ (Lemma 5.5 (a) in [LV07]). It is restated in Lemma 25.
* •
For an isotropic logconcave density $p$, $p(0)\geq 1/8$ (Lemma 5.5 (b) in
[LV07]). It is restated in Lemma 26.
* •
An isotropic logconcave density has an exponential tail (Lemma 5.17 in
[LV07]). It is restated in Lemma 27.
* •
Let $a,b$ be the $\delta$- and $(1-\delta)$-quantiles of $p$; then
$p(a)\geq\frac{1}{8e}\delta$ and $p(b)\geq\frac{1}{8e}\delta$, as shown in
Lemma 28.
* •
$\left|p^{\prime}(a)\right|<\frac{2}{\delta}$ as shown in Lemma 29.
###### Lemma 25 (Lemma 5.5 (a) in [LV07]).
Let $p$ be an isotropic logconcave density on $\mathbb{R}$. Then for any $x\in\mathbb{R}$,
$\displaystyle p(x)\leq 1.$
###### Lemma 26 (Lemma 5.5 (b) in [LV07]).
Let $p$ be an isotropic logconcave density on $\mathbb{R}$. Then $p(0)\geq\frac{1}{8}$.
See Lemma 5.5 in [LV07] for a proof of the above two lemmas.
###### Lemma 27.
For any isotropic logconcave density $p$ on $\mathbb{R}^{n}$ and any $t>0$, we have
$\displaystyle{\mathbb{P}}_{X\sim p}(\left|X\right|\geq t\sqrt{n})\leq
e^{-t+1}.$
See Lemma 5.17 in [LV07] for a proof.
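As an illustration (not a proof), the tail bound can be checked numerically in dimension $n=1$ for two standard isotropic logconcave densities: the standard Gaussian and the variance-one Laplace density $\frac{1}{\sqrt{2}}e^{-\sqrt{2}|x|}$, whose tails are available in closed form:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gauss_tail(t):
    """P(|X| >= t) for X ~ N(0, 1)."""
    return 2.0 * (1.0 - Phi(t))

def laplace_tail(t):
    """P(|X| >= t) for the variance-one Laplace density."""
    return math.exp(-math.sqrt(2.0) * t)

# Check both tails against the bound e^{-t+1} on a grid t in (0, 10].
ok = all(gauss_tail(t) <= math.exp(-t + 1.0) and
         laplace_tail(t) <= math.exp(-t + 1.0)
         for t in (0.1 * k for k in range(1, 101)))
print(ok)  # True
```

For the Laplace case the inequality $e^{-\sqrt{2}t}\leq e^{1-t}$ holds for all $t\geq 0$ since $\sqrt{2}t\geq t-1$.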
###### Lemma 28.
Let $p$ be an isotropic logconcave density on $\mathbb{R}$ and let $z$ be the
$(1-\delta)$-quantile (that is, $\int_{-\infty}^{z}p(x)dx=1-\delta$), with
$0<\delta\leq 1/e$. Then $p(z)\geq\frac{1}{8e}\delta$. Similarly, let $w$ be
the $\delta$-quantile; then $p(w)\geq\frac{1}{8e}\delta$. Additionally, for
any $\tilde{z}\in[w,z]$, we have $p(\tilde{z})\geq\frac{1}{8e}\delta$.
###### Proof of Lemma 28.
According to Lemma 5.4 in [LV07], we have
$\displaystyle z\geq 0.$
Based on the tail of the logconcave density in Lemma 5.7 in [LV07], if
$z\geq\log(e/\delta)$, then $z>1$ and
$\displaystyle\delta<e^{-z+1}\leq\delta,$
which leads to a contradiction. Hence, $z<\log(e/\delta)$.
Suppose $p(z)<\frac{p(0)}{e}\delta<p(0)$. Applying the mean value theorem,
there exists $y\in[0,z]$, such that
$\displaystyle(\log p)^{\prime}(y)=\frac{\log p(z)-\log p(0)}{z}\leq-1.$
The derivative is non-increasing, as $p$ is logconcave. Thus, for $x\geq z$,
we have
$\displaystyle(\log p)^{\prime}(x)\leq-1.$
Integrating from $z$ to $x$, we obtain
$\displaystyle p(x)\leq p(z)e^{-(x-z)}.$
Integrating again from $z$ to $\infty$, we obtain
$\displaystyle\delta=\int_{z}^{\infty}p(x)dx\leq p(z)<\frac{p(0)}{e}\delta\leq\frac{1}{e}\delta,$
which is a contradiction (the last inequality uses $p(0)\leq 1$ from Lemma 25). Hence,
$\displaystyle p(z)\geq\frac{p(0)}{e}\delta\geq\frac{1}{8e}\delta,$
where the last step follows from Lemma 26.
The proof of the $\delta$-quantile is similar. Additionally, since $(\log
p)^{\prime}(z)\leq-1$, $(\log p)^{\prime}(w)\geq 1$ and the derivative is non-
increasing, we conclude that for any $\tilde{z}\in[w,z]$, we have
$p(\tilde{z})\geq\frac{1}{8e}\delta$.
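A quick numerical illustration of the lemma for the standard Gaussian (which is isotropic and logconcave): invert $\Phi$ by bisection and check $p(z)\geq\frac{\delta}{8e}$ at several values of $\delta$. This is a sanity check of the statement, not part of the proof:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def quantile(p):
    """Invert Phi by bisection on [-20, 20]."""
    lo, hi = -20.0, 20.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# p(z) >= delta / (8e) at the (1 - delta)-quantile z
ok = all(phi(quantile(1.0 - d)) >= d / (8.0 * math.e)
         for d in (1.0 / math.e, 0.1, 0.01, 1e-4))
print(ok)  # True
```

For example, at $\delta=0.01$ the quantile is $z\approx 2.33$ with $\phi(z)\approx 0.027$, far above $\delta/(8e)\approx 4.6\times 10^{-4}$.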
###### Lemma 29.
Let $p$ be an isotropic logconcave density on $\mathbb{R}$ and let $z$ be the
$(1-\delta)$-quantile, that is, $\int_{-\infty}^{z}p(x)dx=1-\delta$. Suppose
$0<\delta\leq 1/e$. Then $p^{\prime}(z)\geq-\frac{2}{\delta}$. By symmetry, a
similar result holds for the $\delta$-quantile.
###### Proof of Lemma 29.
Suppose $p^{\prime}(z)<-\frac{2}{\delta}$. Then because $0\leq p(z)\leq 1$,
$\displaystyle(\log
p)^{\prime}(z)=\frac{p^{\prime}(z)}{p(z)}<-\frac{2}{\delta}.$
The derivative is non-increasing as $p$ is logconcave. Thus, for $x\geq z$, we
have
$\displaystyle(\log p)^{\prime}(x)<-\frac{2}{\delta}.$
Integrating twice, we obtain
$\displaystyle\delta=\int_{z}^{\infty}p(x)dx\leq\int_{z}^{\infty}e^{-\frac{2}{\delta}(x-z)}dx\leq\frac{\delta}{2},$
which is a contradiction.
# A Commonsense-Infused Language-Agnostic Learning Framework for Enhancing
Prediction of Political Polarity in Multilingual News Headlines
Swati Swati, Adrian Mladenić Grobelnik, Dunja Mladenić, Marko Grobelnik
Jožef Stefan Institute, Jamova cesta 39, 1000 Ljubljana, Slovenia
Jožef Stefan International Postgraduate School, Jamova cesta 39, 1000 Ljubljana, Slovenia
###### Abstract
Predicting the political polarity of news headlines is a challenging task that
becomes even more challenging in a multilingual setting with low-resource
languages. To deal with this, we propose to utilise the Inferential
Commonsense Knowledge via a Translate-Retrieve-Translate strategy to introduce
a learning framework. To begin with, we use the method of translation and
retrieval to acquire the inferential knowledge in the target language. We then
employ an attention mechanism to emphasise important inferences. We finally
integrate the attended inferences into a multilingual pre-trained language
model for the task of bias prediction. To evaluate the effectiveness of our
framework, we present a dataset of over 62.6K multilingual news headlines in
five European languages annotated with their respective political polarities.
We evaluate several state-of-the-art multilingual pre-trained language models
since their performance tends to vary across languages (low/high resource).
Evaluation results demonstrate that our proposed framework is effective
regardless of the models employed. Overall, the best performing model trained
with only headlines shows 0.90 accuracy and F1, and 0.83 Jaccard score. With
attended knowledge in our framework, the same model shows an increase of 2.2%
in accuracy and F1, and 3.6% in Jaccard score. Extending our experiments to
individual languages reveals that the models we analyze for Slovenian perform
significantly worse than other languages in our dataset. To investigate this,
we assess the effect of translation quality on prediction performance. It
indicates that the disparity in performance is most likely due to poor
translation quality. We release our dataset and scripts at:
https://github.com/Swati17293/KG-Multi-Bias for future research. Our framework
has the potential to benefit journalists, social scientists, news producers,
and consumers.
###### keywords:
News, Bias, NLP, Commonsense, Inferential commonsense knowledge, Multilingual,
Headline, Low-resource, Imbalanced sample distribution, Pre-trained language
models
## 1 Introduction
News plays a significant role in the functioning of a democratic society [1,
2]. Even though it is presumed to be a reliable source of information [3],
bias is inevitable [4]. As a result, research communities devote a great deal
of attention to the study of news bias [5, 6, 7]. However, the first step in
conducting such a study is to identify it [8, 9]. Although the task may appear
trivial, it is in fact challenging as bias can manifest itself at different
levels in complex ways [10]. When it comes to news headlines, this task
becomes even more challenging as headlines are inherently short, catchy or
appealing, context-deficient, and contain only subtle bias clues [11, 12].
With the rise of digital journalism and micro-blogging, the headline is
becoming the only part of a news item that people read [13]. Furthermore,
since it serves as an entry point of an article, people are more likely to
form an opinion by simply reading it without reading the rest of the article
[14, 15]. They seem to be swayed more by its creativity than its clarity [16].
Journalists often use this to their advantage by fabricating facts in a way
that expresses their intended point of view, which captures the readers’
emotions and interests [14, 17].
Such biased reporting has a direct impact on how the public perceives events
such as elections [18], protests [19], terrorism [20], and so on [21, 22].
Therefore, it is important to identify bias to help people form an unbiased
and well-informed opinion [23, 24]. Some studies deal with news bias, but most
of them are for High-Resource Languages (HRLs) such as English and German [25,
26]. Such research is especially scarce for Low-Resource Languages (LRLs)
[27], even though mitigating the effects of bias is equally important in
assisting readers of these languages [28].
With a scarcity of standard labelled data, existing studies, and external
knowledge to draw from, the task of news bias identification in these LRLs
becomes even more challenging [29, 27]. As a result, resolving these issues
necessitates understanding the narrative being presented [30]. This can be
accomplished by identifying connections between what is explicitly stated and
what is implied [31].
It is well-known that incorporating commonsense reasoning abilities can
facilitate the inference of such connections by identifying a set of unstated
causes and effects [32, 33]. Such additional knowledge has been proven to be
beneficial for several tasks [34, 35, 36], including the prediction of bias in
English news headlines [37]. To this end, we use the popular neural knowledge
model COMET [38] trained on ATOMIC2020 [38] to generate the Inferential
Commonsense knowledge (IC_Knwl). Since the textual descriptions of commonsense
in the ATOMIC2020 knowledge repository are composed in English, it creates a
language barrier.
Thus, to extend its capability beyond this barrier, we propose to leverage the
Translate-Retrieve-Translate (TRT) approach [39]. Specifically, given a
headline in the target language, TRT first translates it into English and then
acquires the associated knowledge in English. It then translates the knowledge
back into the target language. As illustrated in Figure 1, IC_Knwl in the
target language can help enhance the prediction accuracy.
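The TRT loop can be sketched as below. Note that `translate`, `comet_inferences`, the lookup table, and the canned output are all hypothetical stand-ins introduced purely for illustration; the actual system calls a machine-translation service and the COMET model trained on ATOMIC2020.

```python
# Hypothetical sketch of Translate-Retrieve-Translate (TRT); the translation
# table and the canned COMET output below are toy stand-ins, not real services.

def translate(text, src, tgt):
    # Stand-in for machine translation: a tiny lookup table; unknown
    # inputs pass through unchanged.
    toy = {
        ("sl", "en"): {"Hekerji so vdrli v banko": "Hackers broke into a bank"},
        ("en", "sl"): {"Hackers are seen as malicious": "Hekerji veljajo za zlonamerne"},
    }
    return toy.get((src, tgt), {}).get(text, text)

def comet_inferences(event_in_english):
    # Stand-in for COMET trained on ATOMIC2020: canned if-then inferences.
    return ["Hackers are seen as malicious"]

def trt(headline, lang):
    english = translate(headline, lang, "en")              # 1. translate to English
    inferences = comet_inferences(english)                 # 2. retrieve IC_Knwl in English
    return [translate(i, "en", lang) for i in inferences]  # 3. translate back

print(trt("Hekerji so vdrli v banko", "sl"))
# ['Hekerji veljajo za zlonamerne']
```

Swapping the stubs for real MT and COMET calls leaves the three-step structure unchanged.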
(a) Novinky.cz (Czech): Hackeři vyhlásili Rusku válku, vyřazují z provozu
jeden cíl za druhým (Hackers have declared war on Russia, decommissioning one
target after another)
IC_Knwl: Hackeři jsou vidět jako ‘agresivní’, který ‘chce zničit nepřítele’
(Hackers are seen as ‘aggressive’ who want to ‘take revenge on Russia’)
Political polarity: Left Center
(b) 24ur.com (Slovenian): Hekerska skupina Anonymous trdi, da je vdrla v rusko
centralno banko (The hacker group Anonymous claims to have hacked into
Russia’s central bank)
IC_Knwl: Hekerji veljajo za ‘zlonamerne’, ki želijo ‘dati izjavo’ (Hackers are
seen as ‘malicious’ who want to ‘make a statement’)
Political polarity: Least Biased
Figure 1: News headlines from (a) Czech and (b) Slovenian news outlets on the
“hacker attacks on Russia” with varying political polarities. Inferential
Commonsense Knowledge (IC_Knwl) can help improve prediction accuracy by
facilitating the acquisition of additional bias-cues.
(Note: this example shows only a subset of IC_Knwl relations. Image source:
24ur.com, novinky.cz, Translation: translate.google.com)
To finally predict the political polarity of multilingual news headlines, we
present a learning framework in Section 4.2.1. Given a multilingual headline,
we first utilise COMET with TRT to acquire IC_Knwl in the target language.
Next, we employ an attention mechanism to emphasise important inferences. We
finally integrate the attended IC_Knwl into a multilingual pre-trained
language model for bias prediction.
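The attention step can be sketched as dot-product weighting of the inference embeddings against the headline embedding. This is a simplified illustration with toy two-dimensional vectors; in the actual framework the embeddings come from the multilingual language model and the attention parameters are learned:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(headline_vec, inference_vecs):
    """Weight each inference embedding by its dot-product relevance to the
    headline embedding and return the attended knowledge vector."""
    scores = [sum(h * v for h, v in zip(headline_vec, vec))
              for vec in inference_vecs]
    weights = softmax(scores)
    dim = len(headline_vec)
    return [sum(w * vec[d] for w, vec in zip(weights, inference_vecs))
            for d in range(dim)]

h = [1.0, 0.0]                   # toy headline embedding
infs = [[1.0, 0.0], [0.0, 1.0]]  # toy inference embeddings
print(attend(h, infs))           # leans toward the first, more relevant inference
```

The attended vector is then concatenated with (or added to) the headline representation before the classification head, which is one common design for knowledge infusion.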
However, there are no standard labelled datasets available for evaluating our
framework [27]. Prior studies either restrict their scope to news in a single
language [11] or analyse news in different languages separately [40]. Even the
overall ratings for news outlets that publish in these languages are
unavailable on popular bias rating platforms such as allsides.com and
adfontesmedia.com.
Given the limited number of news outlets publishing in these LRLs for each
bias class [41], imbalanced data distribution poses another challenge.
Furthermore, no labelled data may exist for some LRLs. Especially for European
LRLs, data and knowledge resources are extremely scarce [29]. To this end, we
present our dataset of news headlines in five European LRLs annotated with
their respective political leanings (ref. Section 3). It is constructed to
mimic the challenges encountered by LRLs.
For a model to overcome the aforementioned challenges, cross-lingual transfer
learning is crucial [42, 43, 44]. It can be achieved with the help of
multilingual Pre-trained Language Models (PLMs) [45, 46, 47]. These models can
generate vector embeddings of texts in different languages that are aligned in
a single vector space, enabling few-shot/zero-shot learning. Advances in
multilingual PLMs have shown promise in numerous NLP tasks [48, 49]. However,
to use them effectively, systems must be fine-tuned to the task at hand [50].
Unfortunately, as stated previously, the majority of these LRLs lack large
enough data sets for such fine-tuning. They also suffer from the problem of
specificity in their vocabulary that focuses on their cultural heritage, which
further hinders the performance of these models [51]. Therefore, in this
study, we also evaluate several state-of-the-art multilingual PLMs for their
effectiveness (ref. Section 4.2.3).
### 1.1 Contributions
The key contributions of our work are summarised as follows:
* 1.
Proposing to leverage Inferential Commonsense Knowledge (IC_Knwl) through a
Translate-Retrieve-Translate (TRT) strategy to facilitate comprehension of the
overall narrative of the multilingual headlines.
* 2.
Introducing an IC_Knwl-infused language-agnostic learning framework for
enhancing the prediction of political polarity in multilingual news headlines
under imbalanced sample distribution.
* 3.
Presenting a dataset of multilingual news headlines in five European low-
resource languages annotated with their respective political polarities.
* 4.
Thorough experiments with several state-of-the-art multilingual pre-trained
language models to assess their effectiveness.
* 5.
Analysing the impact of IC_Knwl infusion on overall performance and across
languages with and without attention mechanism.
The remainder of this paper is structured as follows: After a brief review of
the key related works in Section 2, we introduce our dataset and provide an
overview of its data collection framework in Section 3. We then present the
materials and methods utilised in this study in Section 4. In Section 5, we
present the results and analysis of our experiments followed by research
implications in Section 6. Finally, in Section 7, we present the concluding
remarks and potential directions for future research.
## 2 Literature review
In our learning framework, we predict the political polarity of multilingual
news headlines by incorporating commonsense knowledge into a pre-trained
multilingual language model. Consequently, we organise the related work in
this section from these three perspectives as follows:
### 2.1 Prediction of polarity in multilingual news headlines
Researchers have long been interested in studying news articles and headlines
in order to address problems such as fake news detection [52, 53, 54],
sentiment analysis [55, 56], topic modelling [57, 58], and so on [59, 60].
While predicting the polarity of news articles is not a new problem [61, 62,
63], modelling it at the headline level has received less attention [37].
Earlier studies relied on predefined linguistic feature sets [64, 65] and
standard machine learning techniques [66]. Recent studies, on the other hand,
have advanced to deep-learning techniques [11, 67, 68]. In particular,
Transformers-based models have demonstrated remarkable performance
enhancements [69, 70]. However, the majority of these studies focus on
languages with abundant resources, with only a few exceptions studying
languages with limited resources [11]. Moreover, these studies are either
limited to a single language [71, 22] or analyse news in different languages
independently [40].
The lack of large-scale annotated gold-standard datasets for these languages
further complicates the task [27, 51]. Most existing datasets were generated
manually [11]. Manual annotation requires a substantial amount of time and
effort. Moreover, these small-scale datasets are not suitable for training
deep learning models [72]. There are also datasets generated using an approach
in the form of distant supervision, in which the polarity of a news outlet is
mapped to each of its articles [73, 64]. The polarity is typically obtained
from prominent bias rating platforms, such as allsides.com and
adfontesmedia.com where a team of domain experts employs specialised
guidelines for annotations. Even though distant supervision facilitates the
creation of large datasets, bias ratings are typically not available for all
outlets, especially those that publish in languages with limited resources
[41]. Another possibility is to combine the datasets available in different
languages. However, this strategy would result in an uneven distribution of
topics and events across polarity classes and languages.
To mitigate the aforementioned issues of data scarcity, we present a diverse
and scalable multilingual news headline dataset in five low-resource languages
to predict political leanings (ref. Section 3). Inspired by but distinct from
these related works, we then introduce our learning framework (ref. Section
4.2.1). We infuse it with inferential commonsense knowledge and explore its
application for the task of polarity prediction. Furthermore we propose a
language-agnostic learning framework which we utilise to evaluate the
effectiveness of several state-of-the-art multilingual pre-trained language
models.
### 2.2 Commonsense knowledge
Multiple studies have revealed that large-scale pre-trained language models
are implicitly capable of encoding some commonsense and factual knowledge [74,
75]. However, these models hardly acquire inferential commonsense knowledge,
especially in context-deficient settings [76, 77]. Consequently, recent
studies have investigated the application of such knowledge in a number of
NLP-related tasks [78, 79, 80]. It has been demonstrated that injecting such
knowledge improves output performance on a variety of tasks, including reading
comprehension [81], question answering [82], and story generation [83], among
others [84, 85, 86].
There exist several widely used commonsense knowledge resources such as
ConceptNet [87], SentiNet [88], GLUCOSE [89], ATOMIC2020 [38], etc. [90, 91,
92]. ConceptNet is a semantic network containing concept-level relational
commonsense knowledge as phrases and words in natural language. SentiNet is a
well-known resource used for sentiment analysis at the concept level. GLUCOSE
is a large-scale resource used for capturing implicit causal knowledge in
narrative contexts. Structured as if-then relations with an emphasis on
inferential knowledge, ATOMIC2020 is a resource composed of everyday
commonsense knowledge.
These knowledge resources are used to train generative models such as COMET
[38] and ParaCOMET [93]. Trained on ConceptNet and ATOMIC2020, COMET is
capable of generating a diverse range of context-relevant commonsense
descriptions. Motivated by the related studies, we thus use COMET trained on
the ATOMIC2020 knowledge base. However, different from these studies, we use
it to identify unstated causes and effects in context-deficient headlines.
### 2.3 Multilingual pre-trained language models
A number of language representation models, such as BERT [94], ELECTRA [95],
XLNet [96], etc. [97, 98], have emerged in recent years. The majority of them
are based on transformers, a non-sequential deep learning approach that
provides positional embeddings via a multi-headed attention technique [99].
Due to their many advantages [100], they are popular not only for solving a
wide range of NLP-related tasks [101, 102, 103, 104] but also for a variety of
other practical applications [105, 106, 107].
A number of their multilingual variants, such as Multilingual BERT (mBERT)
[94], XLM-RoBERTa (XLM-R) [108], and Multilingual Bidirectional Auto-
Regressive Transformers (mBART) [109], have shown promising results for text
processing across multiple languages [42, 110, 111]. If followed by task-
specific fine-tuning, they have proven to be effective [112]. However, they
are ineffective at generating sentence-level representations [113].
Several models designed to generate semantically meaningful sentence
representations, such as Sentence BERT (SBERT) [114], Universal Sentence
Encoder (USE) [113], and Language-Agnostic Sentence Representations (LASER)
[115], were proposed to address this limitation. They have proven useful in a
variety of NLP applications [116, 117]. Over the past few years, several
similar frameworks have been extended to support over 100 languages [113,
118]. Some even support low-resource languages such as Slovenian, Romanian,
and so on [115, 47].
Despite having millions of parameters and being trained on diverse datasets,
these models are not guaranteed to generalise to all tasks and domains [112].
As a result, we investigate and compare several state-of-the-art PLMs in this
study for their effectiveness.
## 3 Dataset
We introduce our dataset and describe its data collection framework in this
section. To begin with, we introduce two primary data sources that serve as
the foundation for our dataset. We then present a detailed description of our
framework for data collection followed by a description of our dataset.
### 3.1 Primary data sources
We present two primary data sources Media Bias/Fact Check (MBFC) and Event
Registry (ER) in this section. We use the bias rating portal MBFC to select
media outlets and retrieve their associated bias labels. We use ER to crawl
the headlines of articles published by these selected media outlets.
#### 3.1.1 Media Bias/Fact Check
Several well-known platforms, such as allsides.com, adfontesmedia.com, and
mediabiasfactcheck.com [119], publish bias ratings for media outlets. However,
due to the scarcity of such ratings for outlets in low-resource languages, we
choose to acquire labels exclusively from mediabiasfactcheck (MBFC). It is a
trustworthy bias rating and fact-checking platform with extensive coverage and
regular updates. It has been employed to predict and assess media bias in a
number of studies [120]. In addition, it has also been utilised to develop
tools such as ‘Iffy Quotient’ [121], which monitors the prevalence of fake
news and questionable sources on social media.
To assign bias ratings to media sources, it establishes five levels of
political bias: ‘left’, ‘left-center’, ‘center’, ‘right-center’, and ‘right’
[122]. It also assigns ratings based on their credibility and factual
accuracy. These ratings are assigned by a group of paid contractors and
volunteers who are instructed to adhere to a predetermined methodology [123].
Based on a quantifiable system, its methodology includes both objective and
subjective measures.
#### 3.1.2 Event Registry
To scrape news headlines, we use the Event Registry [124] platform. It has a
custom collection of over 150,000 diverse sources from around the world in
over 50 languages. It is widely used in studies involving news event analysis
[125, 126, 127]. Its primary objective is to cluster contents as events, but
it also facilitates the collection of news stories and articles. It offers a
Python API111https://eventregistry.org for accessing news content minutes
after it has been published online. It has several search options for
filtering out the desired content, such as searching by news outlet, keyword,
or language. Using this API, it is possible to extract
news content as well as metadata published by different publishers in
different languages.
### 3.2 Data collection framework
Figure 2: Data Collection Framework. We use Media Bias/Fact Check (MBFC) and
Event Registry (ER) as the primary data sources in the framework.
As illustrated in the data collection framework in Figure 2, we begin the
process by compiling a list of low-resource European languages
$L=\\{l_{1},l_{2},...l_{n}\\}$, with $n$ representing the total number of
languages in the list. $\forall l\in L$, we then compile a list of media
outlets ($O$) publishing in $l$ ranked by MBFC (ref. Section 3.1.1). We define
$O=\\{o_{1},o_{2},...o_{m}\\}$, with $m$ as the total number of outlets in
the list. $\forall o\in O$, we then check whether $o$ is ranked as a
questionable source or not. Since questionable sources are prone to promote
unfounded claims or theories as facts and offer little or no references to
credible sources of information, they may turn out to be untrustworthy.
Therefore, we discard such sources. $\forall$ unquestionable $o$, we extract
the political bias label $b$ assigned by MBFC.
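The outlet-selection step above can be sketched as a single pass over MBFC records; the record fields (`questionable`, `bias`) are our own illustrative names, not the MBFC schema.

```python
def select_outlets(mbfc_records):
    """Keep only unquestionable outlets together with their MBFC bias labels.

    `mbfc_records` is a hypothetical list of dicts such as
    {"outlet": "...", "questionable": False, "bias": "left-center"}.
    Returns a mapping outlet -> bias label b.
    """
    selected = {}
    for rec in mbfc_records:
        if rec.get("questionable"):
            continue  # discard sources MBFC ranks as questionable
        selected[rec["outlet"]] = rec["bias"]
    return selected
```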
We then define an explicit temporal query ($Q_{t}$):
$\displaystyle Q_{t}=\\{Q_{o},\;Q_{l},\;Q_{cat},\;Q_{dt}\\}$ (1)
where $Q_{o}$, $Q_{l}$, and $Q_{cat}$ denote the queried outlet $o$, language
$l$, and categories222https://eventregistry.org/documentation?tab=suggCategories
(for our dataset, we only use the categories defined by ER as ‘news’),
respectively, and $Q_{dt}$ defines the time constraint using $Q_{sd}$ and
$Q_{ed}$ as the start and end dates:
$\displaystyle Q_{dt}=[Q_{sd},Q_{ed}]$ (2)
To scrape all the article headlines ($H$) published by each unquestionable
$o$, we utilise $Q_{t}$ to query the Event Registry (ER) (ref. Section 3.1.2):
$\displaystyle H=ER\;(Q_{t})$ (3)
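In code, Eqs. (1)-(3) amount to assembling a per-outlet parameter set before querying ER; a minimal sketch in which the dictionary keys mirror the symbols above and are not the actual Event Registry API:

```python
def build_temporal_query(outlet, lang, start_date, end_date,
                         categories=("news",)):
    """Assemble the explicit temporal query Q_t of Eqs. (1)-(2)."""
    return {
        "Q_o": outlet,
        "Q_l": lang,
        "Q_cat": list(categories),       # only the ER 'news' category is used
        "Q_dt": (start_date, end_date),  # [Q_sd, Q_ed]
    }
```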
Finally, we assign the previously extracted bias label $b$ to the headlines in
$H$ to construct the dataset. To generate the train/valid/test splits, we
adopt a stratified split to simulate the imbalance in the collected data
across the languages.
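The stratified split can be sketched with the standard library alone; the 80/10/10 proportions below approximate the split sizes in Table 1 and are an assumption.

```python
import random
from collections import defaultdict

def stratified_split(samples, labels, fracs=(0.8, 0.1, 0.1), seed=0):
    """Split (sample, label) pairs into train/valid/test while preserving
    the per-stratum label distribution of the collected data."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for s, l in zip(samples, labels):
        by_label[l].append(s)
    splits = ([], [], [])  # train, valid, test
    for label, items in by_label.items():
        rng.shuffle(items)
        n = len(items)
        n_train = int(fracs[0] * n)
        n_valid = int(fracs[1] * n)
        parts = (items[:n_train],
                 items[n_train:n_train + n_valid],
                 items[n_train + n_valid:])
        for split, part in zip(splits, parts):
            split.extend((s, label) for s in part)
    return splits
```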
### 3.3 Dataset description
Our dataset consists of news headlines annotated with their respective
political leanings. We construct it to mimic the challenges encountered by
LRLs. We begin by selecting five low-resource European languages: Czech,
Finnish, Romanian, Slovenian, and Swedish. We then compile a list of media
outlets ranked by MBFC in these selected languages. We end up with seven news
outlets: 24ur, Dagens Nyheter, Delo, Digi24, Helsingin Sanomat, Hotnews, and
Novinky with bias labels: Left Center, Least Biased, and Right center. In the
end, we manage to generate $62,689$ news headlines with an average length of
$10.2$ words.
In Table 1, we list the statistics for each language in the dataset. It is
carefully documented and adheres to the requirements of the FAIR Data
Principles333https://www.nature.com/articles/sdata201618/.
| All | Czech | Finnish | Romanian | Slovenian | Swedish
---|---|---|---|---|---|---
Train | 50,157 | 9,992 | 7,120 | 5,829 | 15,557 | 11,659
Test | 6,269 | 1,237 | 940 | 756 | 1,879 | 1,457
Valid | 6,263 | 1,310 | 880 | 764 | 1,853 | 1,456
Total | 62,689 | 12,539 | 8,940 | 7,349 | 19,289 | 14,572
Len. | 10.2 | 9.4 | 10.2 | 12.8 | 8.8 | 8.9
Table 1: Dataset Statistics. Len: average number of words in the headline.
## 4 Materials and methods
In this section, we begin by stating the research objectives followed by
formally defining the task of predicting the political polarity of
multilingual news headlines. We then present our learning framework and its
key components, followed by a brief discussion of baseline models and the
evaluation metrics used in this study.
### 4.1 Research objectives
The primary objective of this study is to investigate the impact of our
proposed framework for predicting political polarity in multilingual news
headlines. It takes advantage of state-of-the-art pre-trained language models
and inferential commonsense knowledge in a multilingual setting. In this
context, we define the following research objectives:
* 1.
RO1: Introduce a knowledge-infused language-agnostic learning framework.
* 2.
RO2: Evaluate the impact of using inferential commonsense knowledge as a
source of additional information in a multilingual setting.
* 3.
RO3: Compare the effectiveness of several state-of-the-art multilingual pre-
trained language models.
* 4.
RO4: Investigate the influence of knowledge attention on prediction
performance.
### 4.2 Task definition
We denote a language by $l\in L$, a short news headline text by $H$, an
auxiliary piece of information as inferential commonsense knowledge by
$IC\\_Knwl$, a $H$ in $l$ as $H^{l}$, an $IC\\_Knwl$ in $l$ as
$IC\\_Knwl^{l}$, and a political bias label by $b\in B$. We define the sets
$L=\\{l_{1},l_{2},...l_{n}\\}$ and $B=\\{b_{1},b_{2},...b_{N}\\}$, where $n$
and $N$ represent the number of languages and bias labels in the respective
sets $L$ and $B$. Given $H^{l}$, its corresponding $IC\\_Knwl^{l}$ can be
acquired using the commonsense knowledge modelling function $C$ with the
appropriate model parameters $\alpha$, as shown in Eq. 4.
$\displaystyle IC\\_Knwl^{l}=C(H^{l},\alpha)$ (4)
$H^{l}$ can then be fused with the acquired $IC\\_Knwl^{l}$ to represent its
extended feature space $(H^{l},IC\\_Knwl^{l})$. Given $H^{l}$, the task aims
to train a classifier that maps its extended feature space to the bias set
$B$. It can be mathematically formulated using Eq. 5 with $f$ as the bias
prediction function and $\theta$ as the model parameters.
$\displaystyle b=f((H^{l},IC\\_Knwl^{l}),\theta)$ (5)
#### 4.2.1 Methodology
To fulfil RO1, we propose a framework which is primarily based on inferential
commonsense knowledge. It helps uncover contextual features that in turn can
help predict the polarity of multilingual news headlines. To facilitate
generalisation, our framework is compatible with any multilingual pre-trained
language model. Figure 3 depicts its overall architecture. Its key components
include Knowledge Acquisition, Feature Encoding, Knowledge Attention, and Bias
Prediction. Each of these components is described in detail in the following
subsections.
Figure 3: An overview of our proposed learning framework. To predict
political polarity of multilingual news headlines, it combines Inferential
Commonsense Knowledge retrieved via the Translate-Retrieve-Translate strategy
with multilingual pre-trained language models.
#### 4.2.2 Knowledge acquisition
The $\text{ATOMIC}^{20}_{20}$ (ATlas Of MachIne Commonsense
2020)444https://allenai.org/data/atomic-2020 [38] is a well-known, publicly
available commonsense knowledge resource that is “able to cover more correct
facts about more diverse types of commonsense knowledge than any existing,
publicly-available commonsense knowledge resource”. Its relations are composed
of textual descriptions containing more than one million tuples of everyday
inferential knowledge about entities and events. It is coded into different
relation types, which are categorised into different sub-types, such as nine
commonsense relations for social interaction, seven for physical entities, and
seven for events. Figure 4 illustrates a subset of these relations generated
in response to a sample news headline.
Figure 4: A small subset of $IC\\_Knwl$ relations generated using
$\text{ATOMIC}^{20}_{20}$ as the knowledge base in response to the news
headline ‘Musk sold Tesla shares for 110 billion’. Nodes in the colours red,
green, blue, and orange represent relations depicting social interactions,
events, physical entities, and category intersection, respectively.
Relations of the social-interaction type provide insight into socially
triggered states and behavioural patterns. As demonstrated by the examples in
Table 2, they are valuable for predicting people’s reactions and behaviour in
a given situation by assessing their intentions and goals. Motivated by their
effectiveness in enhancing the performance of models designed to handle short
news headlines in English [37], we utilise social interaction as the sole
relation type for $IC\\_Knwl$ in our work.
Relation | Interpretation | Examples
---|---|---
xAttr | X is seen as | lucky; competitive
xEffect | as a result, X | wins the game; personx wins the race
xIntent | because X wanted | to win; to be the best
xNeed | but before, X needed | to train hard; to enter the contest
xReact | as a result, X feels | happy; excited
xWant | as a result, X wants | to celebrate; to win
oEffect | as a result, others | loses the game; loses money
oReact | as a result, others feel | disappointed; sad
oWant | as a result, others want | to congratulate X; to win the game
Table 2: Examples of social interaction relation retrieved using
$\text{ATOMIC}^{20}_{20}$ as the knowledge base for the short news headline
‘Grit Won’. Each relation type is interpreted using the human-readable
template provided in [38].
To retrieve $IC\\_Knwl$, we use COMmonsensE Transformers (COMET)
555https://github.com/allenai/comet-atomic-2020/ [128, 38] trained on the
$\text{ATOMIC}^{20}_{20}$ knowledge graphs. COMET is a large pre-trained
neural-network model that generates $IC\\_Knwl$ in response to a query text.
Given $H$, the inference types ($I_{type}$), and the number of returned
inferences per type ($k$), $IC\\_Knwl$ can be retrieved using the following
equation:
$\displaystyle IC\\_Knwl=COMET(H,I_{type},k)$ (6)
where $I_{type}=[i_{1},i_{2},...i_{x}]$ with $i$ as an inference type defined
in Table 2 and $x$ as the total number of relations in the set. Since COMET
returns the $IC\\_Knwl$ as a list of inference results $\forall i\in
I_{type}$, we set $k=1$ to return only one inference result per $I_{type}$.
Furthermore, while retrieving $IC\\_Knwl$, we combine the returned pieces of
inference for each $I_{type}$ to make the result more meaningful. For example,
1. 1.
Headline: Grit Won
2. 2.
IC_Knwl: xAttr: lucky, xIntent: to win, xEffect: wins the game, xWant: to
celebrate, xReact: happy, oWant: to congratulate X, oEffect: loses the game,
oReact: disappointed
3. 3.
Processed IC_Knwl: PersonX is lucky, needed to train hard, intended to win,
wins the game, wants to celebrate, feels happy. Others want to congratulate X,
loses the game, feel disappointed.
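The post-processing of the single (k=1) inference per relation type can be sketched as template filling using the human-readable interpretations of Table 2; the template strings below are our paraphrase, not the exact ones used by the authors.

```python
# Human-readable templates, paraphrased from Table 2 for "PersonX".
TEMPLATES = {
    "xAttr":   "PersonX is {}",
    "xNeed":   "needed {}",
    "xIntent": "intended {}",
    "xEffect": "{}",
    "xWant":   "wants {}",
    "xReact":  "feels {}",
    "oWant":   "Others want {}",
    "oEffect": "{}",
    "oReact":  "feel {}",
}

# PersonX-centred relations first, then relations about others.
ORDER = ["xAttr", "xNeed", "xIntent", "xEffect", "xWant", "xReact",
         "oWant", "oEffect", "oReact"]

def process_ic_knwl(inferences):
    """Join one COMET inference per relation type into the
    'Processed IC_Knwl' string used as model input."""
    x_part = ", ".join(TEMPLATES[r].format(inferences[r])
                       for r in ORDER[:6] if r in inferences)
    o_part = ", ".join(TEMPLATES[r].format(inferences[r])
                       for r in ORDER[6:] if r in inferences)
    return f"{x_part}. {o_part}."
```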
Finally, to generate $IC\\_Knwl^{l}$ for $H^{l}$, we use the aforementioned
method along with the Translate-Retrieve-Translate (TRT) approach [39].
Specifically, given a $H^{l}$, we first translate it into English and retrieve
its associated $IC\\_Knwl$ in English. We then translate the retrieved
$IC\\_Knwl$ into the target language $l$ to finally get the $IC\\_Knwl^{l}$.
We use the Google Translate API666https://cloud.google.com/translate for our
translations.
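A minimal sketch of the TRT strategy follows, with a stub standing in for the Google Translate call (the stub only tags the text so the pipeline is visible; it is not a real translation):

```python
def translate(text, src, tgt):
    """Placeholder for a machine-translation call (the paper uses the
    Google Translate API); here it just tags the text for demonstration."""
    return f"[{src}->{tgt}] {text}" if src != tgt else text

def trt_ic_knwl(headline_l, lang, retrieve_ic_knwl):
    """Translate-Retrieve-Translate: translate the headline into English,
    retrieve IC_Knwl in English, then translate it back into `lang`."""
    headline_en = translate(headline_l, lang, "en")
    ic_knwl_en = retrieve_ic_knwl(headline_en)
    return translate(ic_knwl_en, "en", lang)
```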
#### 4.2.3 Feature encoding
To acquire feature vectors $H^{l}{}^{\prime}$ and $IC\\_Knwl^{l}{}^{\prime}$,
we use multilingual pre-trained language models (PLMs). For their optimal
performance, they are required to map embedding vectors of text written in
different languages into a single vector space. As a result, the degree of
vector alignment influences their performance. In this regard, we explore the
state-of-the-art multilingual PLMs defined in Section 4.3. These PLMs differ
from word-embedding models as they are trained on a wide range of tasks that
require modelling the meaning of word sequences as opposed to individual
words.
#### 4.2.4 Knowledge attention
In practice, not all retrieved inferences are equally relevant.
Consequently, we apply the Sigmoid function [129] to
$IC\\_Knwl^{l}{}^{\prime}$ to determine the relevance of each of them.
Following the work of Majumder et al. [130], we then multiply
$IC\\_Knwl^{l}{}^{\prime}$ by the resulting relevance scores to highlight the
most significant inferences. We use this vector in a Multi-Layer Perceptron
(MLP) network trained to mix inferences from different $I_{type}$ to finally
generate the attended vector $\widetilde{IC}\\_Knwl^{l}{}^{\prime}$:
$\displaystyle\widetilde{IC}\\_Knwl^{l}{}^{\prime}=MLP(Sigmoid(IC\\_Knwl^{l}{}^{\prime})\;\odot\;IC\\_Knwl^{l}{}^{\prime})$
(7)
where $\odot$ denotes element-wise multiplication.
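Eq. (7) can be sketched in plain Python, with a single linear layer standing in for the MLP (the paper's actual network is larger, so this is a minimal stand-in rather than the authors' implementation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def knowledge_attention(ic_knwl_vec, mlp_weight, mlp_bias):
    """Eq. (7): gate each dimension of the IC_Knwl embedding by its own
    sigmoid relevance score, then mix the gated vector with a linear layer."""
    gated = [sigmoid(v) * v for v in ic_knwl_vec]  # element-wise relevance gating
    return [sum(w * g for w, g in zip(row, gated)) + b
            for row, b in zip(mlp_weight, mlp_bias)]
```

Dimensions with near-zero activations are suppressed (sigmoid ≈ 0.5 times a small value), while strongly activated dimensions pass through almost unchanged.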
#### 4.2.5 Bias prediction
To predict the bias label $\hat{b}$, we first fuse the vectors
$H^{l}{}^{\prime}$ and $\widetilde{IC}\\_Knwl^{l}{}^{\prime}$ to generate $F$:
$\displaystyle
F=H^{l}{}^{\prime}\;\oplus\;\widetilde{IC}\\_Knwl^{l}{}^{\prime}$ (8)
where $\oplus$ represents the concatenation operation.
We then feed the fused vector $F$ to an MLP network and forward the resultant
vector to a Fully Connected layer (FC) having Softmax ($\sigma$) activation to
finally predict $\hat{b}$:
$\displaystyle\hat{b}=\sigma(FC(MLP(F)))$ (9)
We train our network using AdaMax [131] as the optimizer with its default
parameters. We use categorical cross-entropy as the loss function, which is
defined as follows:
$\displaystyle Loss=-\sum_{i=1}^{|B|}(b_{i}*\log(\hat{b}_{i}))$ (10)
where $b_{i}$ and $\hat{b}_{i}$ are the actual and predicted probabilities of
selecting the $i^{th}$ bias label in $B$.
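Eqs. (8)-(10) can be sketched as follows; the intermediate MLP is omitted and the FC layer reduced to one linear map, so this is a minimal stand-in under those assumptions, not the paper's architecture:

```python
import math

def softmax(z):
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def predict_and_loss(h_vec, ic_vec, fc_weight, fc_bias, b_true):
    """Eqs. (8)-(10): concatenate the headline and attended-knowledge
    vectors, apply a fully connected layer with Softmax, and compute the
    categorical cross-entropy loss against the one-hot label `b_true`."""
    f = h_vec + ic_vec  # Eq. (8): concatenation of the two feature vectors
    logits = [sum(w * x for w, x in zip(row, f)) + b
              for row, b in zip(fc_weight, fc_bias)]
    b_pred = softmax(logits)  # Eq. (9)
    eps = 1e-12  # guard against log(0)
    loss = -sum(t * math.log(p + eps)
                for t, p in zip(b_true, b_pred))  # Eq. (10)
    return b_pred, loss
```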
### 4.3 Baseline models
Based on their superior performance in a variety of related tasks in
multilingual settings [49, 132], we chose the following state-of-the-art
baseline models for a comprehensive evaluation of our proposed framework.
* 1.
ml-MiniLM [114] [paraphrase-multilingual-MiniLM-L12-v2]: a multilingual
version of the sentence transformer, paraphrase-MiniLM-L12-v2 [46]. It
generates 384-dimensional aligned dense vectors. It is pre-trained on parallel
data for more than 50 languages. It trades accuracy for speed and its reduced
dimension results in lower memory requirements.
* 2.
distil-mUSE [114] [distiluse-base-multilingual-cased-v2]: multilingual
Universal Sentence Encoder (mUSE) [133] is based on the transformer
architecture [99], which uses a multi-task trained dual-encoder to embed texts
into a single vector space. The multilingual knowledge distilled version of
mUSE (distil-mUSE) supports over 50 languages. It maps text to a
512-dimensional dense vector space.
* 3.
ml-mpnet [114] [paraphrase-multilingual-mpnet-base-v2]: a multilingual version
of the sentence transformer, paraphrase-mpnet-base-v2 [46]. It is pre-trained
on parallel data for over 50 languages and generates 768-dimensional aligned
dense vectors. It outperforms other multilingual models based on sentence
transformers. However, its increased computational complexity makes it time-
intensive.
* 4.
LaBSE [45][LaBSE/2]: a language-agnostic BERT based model that maps text into
a 768-dimensional dense vector space. To map single plain-text segments to
encoder inputs, it requires a separate preprocessor API built for the
universal-sentence-encoder-cmlm multilingual
models777https://tfhub.dev/google/universal-sentence-encoder-
cmlm/multilingual-preprocess/2. It is trained and optimised to generate
aligned vectors for bilingual sentence pairs, and it currently supports over
109 languages. Although the model, like other BERT models, can be fine-tuned,
the authors recommend that it be used as it is.
* 5.
cmlm-ml [47] [cmlm/multilingual-base/1]: a multilingual model trained with a
conditional masked language model (cmlm-ml). Its architecture is based on a
12-layer BERT transformer [134], but it is far more complex. Similar to LaBSE,
it also requires an additional preprocessor to map plain-text inputs to
encoder inputs. It transforms text into 768-dimensional aligned vectors and
supports more than 100 languages. Although its inference speed is
significantly slower than that of other comparable models, its performance is
far superior.
### 4.4 Evaluation metrics
To assess the performance of our proposed framework, we employ well-known
metrics used to evaluate prediction models [135], such as Accuracy (A) and
F1-score ($F_{1}$). However, in the case of an imbalanced dataset like ours,
where true negative instances outnumber true positive instances for several
languages, they are not reliable indicators. The Jaccard (J) score [136] is a
more reliable metric for evaluating models when some classes have no examples.
It disregards true negatives in favour of true positives, facilitating the
interpretation of the results. It is even more reliable when evaluating models
for individual languages since the imbalance is more apparent. As a result, we
also employ the Jaccard score to gain a deeper understanding. We compute these
metrics using the values of the confusion matrix defined in Table 3.
True Positive (TP): | label is present and is predicted.
---|---
True Negative (TN): | label is not present and is not predicted.
False Positive (FP): | label is not present but is predicted.
False Negative (FN): | label is present but is not predicted.
Table 3: Description of the values of the confusion matrix.
The metrics we use are defined as follows:
* 1.
Accuracy (A): fraction of correct predictions over the total.
$\displaystyle A=(TP+TN)/(TP+TN+FP+FN)$ (11)
* 2.
F1-score ($F_{1}$): harmonic mean of Precision ($P$) and Recall ($R$), where
$P$ is the fraction of relevant instances among the retrieved instances and
$R$ represents the fraction of relevant instances that were retrieved:
$\displaystyle F_{1}=2TP/(2TP+FP+FN)$ (12)
* 3.
Jaccard (J): fraction of correctly predicted instances over all instances
except those where a label is not present and is not predicted.
$\displaystyle J=TP/(TP+FP+FN)$ (13)
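The three metrics can be computed directly from the confusion-matrix counts of Table 3:

```python
def accuracy(tp, tn, fp, fn):
    """Eq. (11): fraction of correct predictions over the total."""
    return (tp + tn) / (tp + tn + fp + fn)

def f1_score(tp, fp, fn):
    """Eq. (12): harmonic mean of precision and recall."""
    return 2 * tp / (2 * tp + fp + fn)

def jaccard(tp, fp, fn):
    """Eq. (13): true negatives are deliberately excluded."""
    return tp / (tp + fp + fn)
```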
To ensure all bias classes are treated equally, we use the macro-averaged
$F_{1}$ and macro-averaged $J$ ($J_{m}$) scores to evaluate the overall
performance of the models.
To evaluate the performance of the models for each language, we use the
micro-averaged $J$ ($J_{\mu}$) score, which accounts for the problem of class
imbalance. Inspired by Nagle [137], we also report the Relative Performance
(RP) of the models for each language used in our study. RP is defined as the
ratio of the absolute performances of the models under consideration and can
be computed with any underlying evaluation metric (e.g., $J$, $A$, $F_{1}$).
In particular, we report the performance of models trained with additional
knowledge, with or without the attention mechanism, relative to models trained
with headlines only.
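RP is then simply a ratio of two scores computed under the same underlying metric:

```python
def relative_performance(score_b, score_a):
    """RP of setting B relative to setting A, e.g. a model trained with
    Headline+IC_Knwl versus the same model trained with headlines only.
    Both scores must come from the same metric (J, A, F1, ...)."""
    return score_b / score_a
```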
## 5 Results and discussion
We begin this section by analysing the experimental results of the models
trained across all reported languages. Following that, we examine the
performance of the models evaluated for individual languages. Finally, we
present the findings of a case study that investigates the effect of
translation quality on prediction accuracy.
### 5.1 Overall performance
We evaluate the baseline models and our proposed framework across all the
reported languages and present their performance in terms of accuracy ($A$),
macro-averaged F1 ($F_{1}$), and macro-averaged Jaccard ($J_{m}$) scores in
Table 4. As the results indicate, with $0.92$ $A$ and $F_{1}$, and $0.86$
$J_{m}$, our proposed framework trained with headlines and attended IC_Knwl
using cmlm-ml clearly outperforms the models trained with headlines only. It
surpasses the performance of the best such model (cmlm-ml) by $2.2\%$ in terms
of $A$ and $F_{1}$, and by $3.6\%$ in terms of $J_{m}$.
| ml-MiniLM | distil-mUSE | ml-mpnet | LaBSE | cmlm-ml | ours
---|---|---|---|---|---|---
$\bm{A}$ | 0.62 | 0.64 | 0.66 | 0.75 | 0.90 | 0.92
$\bm{F_{1}}$ | 0.57 | 0.61 | 0.63 | 0.74 | 0.90 | 0.92
$\bm{J_{m}}$ | 0.40 | 0.44 | 0.46 | 0.59 | 0.83 | 0.86
Table 4: Comparison between the baseline models and our proposed framework in
terms of Accuracy($A$), macro-averaged-F1 ($F_{1}$), and macro-averaged-
Jaccard($J_{m}$) scores across all the reported languages. Trained with
headlines and attended IC_Knwl using cmlm-ml, our framework outperforms the
baseline models trained with headlines only.
To determine whether the IC_Knwl contains knowledge useful for bias
prediction, we train the models with IC_Knwl as the only input. We report the
results in the first column of Table 5. We observe that models trained
exclusively with IC_Knwl achieve comparable results to models trained only
with headlines. In terms of $A$, $F_{1}$, and $J_{m}$ scores, models other
than cmlm-ml show an average improvement of $22\%$, $28\%$, and $47\%$,
whereas cmlm-ml shows a slight decrease in performance of $5\%$, $2\%$, and
$5\%$ respectively. The findings demonstrate that the IC_Knwl does provide
useful inferential information for the task of bias prediction (RO2).
| | IC_Knwl | | | Headline+IC_Knwl | | | Headline+Attn(IC_Knwl) | |
---|---|---|---|---|---|---|---|---|---
| | $\bm{A}$ | $\bm{F_{1}}$ | $\bm{J_{m}}$ | $\bm{A}$ | $\bm{F_{1}}$ | $\bm{J_{m}}$ | $\bm{A}$ | $\bm{F_{1}}$ | $\bm{J_{m}}$
ml-MiniLM | 0.78 | 0.78 | 0.64 | 0.81 | 0.81 | 0.68 | 0.86 | 0.87 | 0.77
distil-mUSE | 0.78 | 0.78 | 0.64 | 0.83 | 0.83 | 0.71 | 0.90 | 0.90 | 0.83
ml-mpnet | 0.81 | 0.81 | 0.69 | 0.83 | 0.84 | 0.72 | 0.89 | 0.89 | 0.81
LaBSE | 0.86 | 0.87 | 0.77 | 0.89 | 0.90 | 0.82 | 0.90 | 0.91 | 0.83
cmlm-ml | 0.86 | 0.88 | 0.79 | 0.91 | 0.92 | 0.85 | 0.92 | 0.92 | 0.86
Table 5: Accuracy($A$), macro-averaged-F1 ($F_{1}$), and macro-averaged-
Jaccard($J_{m}$) scores of the analysed models for all the reported languages.
Each model is trained using IC_Knwl, headlines with IC_Knwl
(Headline+IC_Knwl), and headlines with attended IC_Knwl
(Headline+Attn(IC_Knwl)) respectively.
Furthermore, as evident in column two of Table 5, integrating IC_Knwl with the
headline can significantly improve the performance of all models by enhancing
their reasoning abilities. In terms of $A$, $F_{1}$, and $J_{m}$ scores, these
models exhibit average performance improvements of $4\%$, $4\%$, and $7\%$,
respectively, over models trained exclusively with IC_Knwl.
Integration of IC_Knwl, on the other hand, may not always function as expected
and may introduce unwanted noise. Given that the inferences are generated
automatically rather than manually, noise is inevitable, which may weaken
their role in bias prediction. To minimise the impact of this noise, we
integrate IC_Knwl with an attention mechanism and present the results in
column three of Table 5. The introduction of attention results in an average
performance gain of $5\%$, $4\%$, and $9\%$ in terms of $A$, $F_{1}$, and
$J_{m}$ scores, respectively.
The good performance of the models can be attributed to their deep network
architectures, which enable them to learn rich universal text representations.
Furthermore, it demonstrates that integrating IC_Knwl significantly improves
their performance, while the introduction of attention improves it even
further (RO4). To summarise, the results indicate that our proposed framework
for bias prediction is effective regardless of the models used (RO3).
### 5.2 Language-wise performance
The models evaluated for individual languages present plausible results, as
shown in Table 6. However, the performance of models across languages varies
significantly due to an imbalanced number of samples per class.
| | Headline | | | Headline+IC_Knwl | | | Headline+Attn(IC_Knwl) | |
---|---|---|---|---|---|---|---|---|---|---
Language | Model | $\bm{A}$ | $\bm{F_{1}{}_{\mu}}$ | $\bm{J_{\mu}}$ | $\bm{A}$ | $\bm{F_{1}{}_{\mu}}$ | $\bm{J_{\mu}}$ | $\bm{A}$ | $\bm{F_{1}{}_{\mu}}$ | $\bm{J_{\mu}}$
Slovenian | ml-MiniLM | 0.53 | 0.53 | 0.36 | 1.05 | 1.03 | 1.05 | 1.67 | 1.16 | 1.23
Slovenian | distil-mUSE | 0.53 | 0.53 | 0.36 | 1.15 | 1.15 | 1.19 | 1.16 | 1.16 | 1.27
Slovenian | ml-mpnet | 0.55 | 0.54 | 0.37 | 1.05 | 1.07 | 1.10 | 1.12 | 1.10 | 1.14
Slovenian | LaBSE | 0.56 | 0.55 | 0.38 | 1.19 | 1.21 | 1.31 | 1.02 | 1.02 | 1.06
Slovenian | cmlm-ml | 0.54 | 0.70 | 0.71 | 1.01 | 1.01 | 1.01 | 1.02 | 1.04 | 1.05
Romanian | ml-MiniLM | 0.47 | 0.47 | 0.31 | 1.87 | 1.87 | 2.54 | 1.06 | 1.05 | 1.11
Romanian | distil-mUSE | 0.55 | 0.55 | 0.38 | 1.50 | 1.50 | 1.86 | 1.14 | 1.13 | 1.25
Romanian | ml-mpnet | 0.56 | 0.56 | 0.38 | 1.60 | 1.58 | 2.13 | 1.05 | 1.05 | 1.09
Romanian | LaBSE | 0.81 | 0.81 | 0.68 | 1.14 | 1.14 | 1.27 | 1.02 | 1.02 | 1.03
Romanian | cmlm-ml | 0.95 | 0.94 | 0.89 | 1.00 | 1.01 | 1.01 | 1.01 | 1.00 | 1.01
Swedish | ml-MiniLM | 0.52 | 0.51 | 0.34 | 1.71 | 1.74 | 2.35 | 1.08 | 1.08 | 1.17
Swedish | distil-mUSE | 0.56 | 0.56 | 0.38 | 1.60 | 1.58 | 2.23 | 1.10 | 1.11 | 1.20
Swedish | ml-mpnet | 0.58 | 0.58 | 0.41 | 1.58 | 1.58 | 2.07 | 1.07 | 1.07 | 1.15
Swedish | LaBSE | 0.78 | 0.78 | 0.64 | 1.26 | 1.26 | 1.54 | 1.00 | 1.00 | 1.00
Swedish | cmlm-ml | 0.98 | 0.98 | 0.96 | 1.01 | 1.01 | 1.03 | 1.00 | 1.00 | 1.00
Finnish | ml-MiniLM | 0.81 | 0.81 | 0.68 | 1.17 | 1.17 | 1.33 | 1.00 | 1.00 | 1.00
Finnish | distil-mUSE | 0.79 | 0.78 | 0.64 | 1.24 | 1.24 | 1.48 | 1.01 | 1.02 | 1.03
Finnish | ml-mpnet | 0.82 | 0.82 | 0.69 | 1.18 | 1.18 | 1.36 | 1.02 | 1.02 | 1.04
Finnish | LaBSE | 0.84 | 0.84 | 0.72 | 1.17 | 1.17 | 1.36 | 1.00 | 1.00 | 1.00
Finnish | cmlm-ml | 0.99 | 0.99 | 0.98 | 1.00 | 1.00 | 1.01 | 1.00 | 1.00 | 1.00
Czech | ml-MiniLM | 0.80 | 0.80 | 0.67 | 1.17 | 1.16 | 1.31 | 1.01 | 1.02 | 1.02
Czech | distil-mUSE | 0.87 | 0.86 | 0.76 | 1.10 | 1.11 | 1.21 | 1.02 | 1.02 | 1.04
Czech | ml-mpnet | 0.84 | 0.83 | 0.72 | 1.15 | 1.16 | 1.30 | 1.02 | 1.02 | 1.04
Czech | LaBSE | 0.92 | 0.91 | 0.84 | 1.07 | 1.08 | 1.17 | 1.00 | 1.00 | 1.00
Czech | cmlm-ml | 0.99 | 0.98 | 0.97 | 1.00 | 1.01 | 1.02 | 1.00 | 1.00 | 1.00
Table 6: Accuracy($A$), micro-averaged-F1($F_{1}{}_{\mu}$), and micro-
averaged-Jaccard($J_{\mu}$) scores of the analysed models for each language
used in the study. Each model is trained using headlines, headlines with
IC_Knwl (Headline+IC_Knwl), and headlines with attended IC_Knwl
(Headline+Attn(IC_Knwl)) respectively. For Headline+IC_Knwl, we report its
relative performance to the models trained with headlines only. For
Headline+Attn(IC_Knwl), we report its relative performance to the models
trained with headlines and IC_Knwl.
Among all the low-resource languages present in the dataset used for this
study, the models analysed for Czech demonstrate the most impressive
performance, with an average $A$, $F_{1}{}_{\mu}$, and $J_{\mu}$ of $0.88$,
$0.87$, and $0.79$ respectively for the models trained with headlines only.
Since this leaves little room for performance improvement, models trained
with additional IC_Knwl, with or without attention, improve the calculated
scores by an average factor of only $1.01$.
Following that, we have the models analysed for Finnish with the next best
average $A$, $F_{1}{}_{\mu}$, and $J_{\mu}$ of $0.85$, $0.84$, and $0.79$
respectively for the models trained with headlines only. With the additional
IC_Knwl, the average scores for $A$ and $F_{1}{}_{\mu}$ increase by $1.15$
times and $J_{\mu}$ by $1.30$ times. Nonetheless, the benefits of employing
attention are negligible.
The impressive performance for Czech and Finnish can be attributed to the
fact that all of their samples belong to the class ‘Left-Center’. Since all of
their bias labels are from the same class, it is
possible that the classifiers may end up modelling the language specifics and
writing style of the outlet in addition to the bias embedded in the headlines.
The models evaluated for Swedish and Romanian produce the next best results
that are nearly identical to each other, differing only by a small margin. For
Swedish, models trained with only headlines show an average
$A$/$F_{1}{}_{\mu}$ of $0.68$ and $J_{\mu}$ of $0.54$. These scores differ by
only $0.02$ points for Romanian. IC_Knwl provides a substantial performance
boost for both the languages. Swedish and Romanian have $A$/$F_{1}{}_{\mu}$
boosts of $1.43$ and $1.42$ times, and $J_{\mu}$ boosts of $1.84$ and $1.76$
times, respectively. They clearly benefit from the attention as well. Both the
languages exhibit a $1.05$ times boost in $A$/$F_{1}{}_{\mu}$ and a $1.09$
times boost in $J_{\mu}$.
In the case of the models analysed for Slovenian, one can notice a significant
performance gap when compared to others. It demonstrates the lowest
performance with an average $A$, $F_{1}{}_{\mu}$, and $J_{\mu}$ of $0.54$,
$0.57$, and $0.43$ respectively for the PLMs trained with headlines only. With
the additional IC_Knwl, the average scores for $A$/$F_{1}{}_{\mu}$ increase by
$1.09$ times and $J_{\mu}$ by $1.13$ times. Moreover, the benefits of
employing attention can be noticed by a performance increase of $1.19$,
$1.09$, and $1.15$ times in terms of $A$, $F_{1}{}_{\mu}$, and $J_{\mu}$
scores. Even with the highest number of examples ($\sim 30\%$; cf. Table 1),
its performance is low. To some extent, this could be attributed to the lower
average headline length and to language complexities that hinder the models’
ability to comprehend the text for the task of bias prediction.
Alternatively, it could be due to the limited embedding coverage of the
Slovenian language. Models trained with IC_Knwl show low $A$ but high
$J_{\mu}$. This implies that, with more true negatives than true positives,
performance evaluation using $A$ can be misleading. For instance, since no
Slovenian examples exist for the ‘Right-Center’ class, counting instances
where that class is neither present nor predicted (true negatives) would not
be credible. In such cases, $J_{\mu}$ is more reliable, since it disregards
true negatives in favour of true positives.
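The contrast between $A$ and $J_{\mu}$ can be made concrete with a small sketch (the labels below are illustrative toy data, not drawn from the dataset): micro-averaged Jaccard pools true positives, false positives, and false negatives across classes, so a class that never occurs in the ground truth, like ‘Right-Center’ for Slovenian, contributes only true negatives, which $J_{\mu}$ ignores entirely.

```python
def micro_jaccard(y_true, y_pred, classes):
    """Micro-averaged Jaccard: pooled TP / (TP + FP + FN).
    True negatives never enter the computation."""
    tp = fp = fn = 0
    for c in classes:
        tp += sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp += sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn += sum(t == c and p != c for t, p in zip(y_true, y_pred))
    return tp / (tp + fp + fn)

# Toy labels: 'Right-Center' never occurs in the ground truth, mirroring
# the Slovenian situation described above; it only produces true negatives.
classes = ["Left", "Left-Center", "Right-Center"]
y_true = ["Left-Center", "Left-Center", "Left-Center", "Left-Center", "Left", "Left"]
y_pred = ["Left-Center", "Left-Center", "Left-Center", "Left", "Left", "Left-Center"]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(round(accuracy, 3))                      # 0.667
print(micro_jaccard(y_true, y_pred, classes))  # 0.5
```

On this toy split, $A \approx 0.67$ while $J_{\mu} = 0.5$: the two correct labels per mistake inflate accuracy, whereas $J_{\mu}$ charges every miss twice (once as FP, once as FN) and gives no credit for the absent class.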
To sum up, across all languages analysed, cmlm-ml performed best among the
models, whereas ml-MiniLM performed worst. Overall, the results indicate that
models trained with only headlines are capable of predicting the bias
inherent in them, even for low-resource languages like the ones used in this
study. Moreover, IC_Knwl significantly enhances model performance, especially
when attention is employed.
### 5.3 Qualitative analysis
In this section, we assess the effect of translation quality on prediction
performance by analysing translation errors. We use the Slovenian language as
a case study since the models analysed for it exhibit a significant
performance gap when compared to other languages in our dataset. With the help
of native Slovenian speakers in our research group, we discover several
translation errors which we classify as follows:
1. Entity Detection Error: occurs when the translation engine misinterprets
the entities referenced in the headline.
2. Comprehension Error: arises when the translation engine fails to
comprehend the meaning of a headline, resulting in an unintelligible
translation.
3. Improper Sentence Formation: occurs when the translated headline grasps
the basic idea of the original headline but fails to form a coherent
sentence.
4. Inversion of Meaning: takes place when the translation engine inverts the
semantic meaning of a headline, resulting in a seemingly meaningful
translation with a dissimilar semantic meaning.
5. Miscellaneous Error: a category reserved for errors that do not fit into
any of the aforementioned categories.
Entity Detection Error
Slovenian Headline: Vodomec na 32. Liffu filmu Pohodi plin!
Generated Translation: Aquarius on the 32nd Liff movie Walk the Gas!
Correct Translation: Kingfisher on the 32nd Liff awarded to movie Walk the Gas!
Comment: The entity ‘Vodomec’, which means ‘Common Kingfisher’, is translated incorrectly as ‘Aquarius’. However, it refers to the name of an award in this context.

Comprehension Error
Slovenian Headline: Počivalšek: Janša SMC ni ničesar prepustil
Generated Translation: Resting place: Janša SMC did not leave anything
Correct Translation: Počivalšek: Janša left nothing for SMC
Comment: The surname ‘Počivalšek’ is mistranslated as ‘Resting place’. Furthermore, there exists no distinction between the surname ‘Janša’ and the political party ‘SMC’.

Improper Sentence Formation
Slovenian Headline: Nad zdravstvene delavce z grožnjami in žalitvami
Generated Translation: Above health professionals with threats and insults
Correct Translation: Threats, insults towards health professionals
Comment: Depending on the context, ‘Nad’ could mean ‘Above’ or ‘Towards’. The translation engine misinterprets ‘Nad’ in this case, resulting in an improper sentence formation.

Inversion of Meaning
Slovenian Headline: Na spletu podatki 533 milijonov Facebook uporabnikov, tudi 230.000 Slovencev
Generated Translation: There are 533 million Facebook users online, including 230,000 Slovenians
Correct Translation: Data of 533 million Facebook users leaked online, including 230,000 Slovenians
Comment: Although the translation is comprehensible, it refers to Facebook users instead of Facebook user data.

Miscellaneous Error
Slovenian Headline: Grujović naj bi streljal v silobranu, priča trdi drugače
Generated Translation: Grujović allegedly shot in the silobran, the witness claims otherwise
Correct Translation: Grujović allegedly shot in self-defense, the witness claims otherwise
Comment: Since ‘silobranu’ is misinterpreted as an entity, there is no attempt to translate ‘v silobranu’, which means ‘in self-defense’.

Table 7: Case study of Slovenian headlines to understand the translation error types (translation from Slovenian to English).
Table 7 provides an example with appropriate justifications for each of these
error types. In the majority of cases, the translation engine’s lack of
contextual awareness resulted in mistranslations. In some cases, the missing
context could be inferred from the headline alone, whereas in others, reading
the entire article or researching the entities mentioned in the headline
appears to be the only way to obtain adequate context. Errors linked to a lack
of vocabulary or other factors were less common.
Overall, the performance gap between Slovenian and other languages could be
attributed to the language’s poor translation quality relative to the other
languages, as evidenced by the relatively numerous instances of improper
translation. Given the complexity of the language and the small number of
native speakers, the conclusion seems plausible.
## 6 Research implications
Predicting the political polarity of news headlines has many positive
implications. It can not only help readers identify politically biased news
but also allow journalists and the individuals involved in the news production
process to assess their work objectively. Furthermore, such insights would
also be interesting for researchers and social scientists. In this section, we
further discuss the theoretical implications of our research and the ways in
which our proposed framework can enhance practical applications.
### 6.1 Theoretical implications
Our study proposes a new perspective by leveraging Inferential Commonsense
Knowledge (IC_Knwl) via a Translate-Retrieve-Translate strategy to facilitate
comprehension of the overall narrative of the multilingual headlines. Using
IC_Knwl, it introduces a language-agnostic learning framework to enhance the
prediction of political polarity in multilingual news headlines. To the best
of our knowledge, our proposed framework is one of the earliest attempts to
leverage IC_Knwl in a multilingual context for polarity prediction of news
headlines. Since existing work lacks annotated datasets for this task, our
study also presents a dataset of multilingual news headlines. The dataset
simulates the real-world challenge of imbalanced data distribution by
annotating headlines in five European low-resource languages with their
respective political polarities.
Our experimental investigation demonstrates the advantages of using IC_Knwl,
shedding light on the prospects of utilising it for downstream tasks. It
also demonstrates the effectiveness of multiple state-of-the-art multilingual
pre-trained language models.
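The Translate-Retrieve-Translate strategy can be sketched as follows. This is a minimal illustration, not the exact implementation: the function name `trt_inferences`, the stub callables, and the ATOMIC relation labels chosen (`xReact`, `xEffect`, `xIntent`) are our assumptions; in the full framework the translator would wrap the Google Translate API and the retriever would decode inferences from COMET, both abstracted here as injected callables.

```python
from typing import Callable, List, Sequence

def trt_inferences(
    headline: str,
    src_lang: str,
    translate: Callable[[str, str, str], str],   # (text, src, tgt) -> text
    retrieve: Callable[[str, str], List[str]],   # (text, relation) -> inferences
    relations: Sequence[str] = ("xReact", "xEffect", "xIntent"),
) -> List[str]:
    """Translate-Retrieve-Translate: move the headline into English, query a
    commonsense model for inferences per relation, and map the retrieved
    inferences back into the source language."""
    english = translate(headline, src_lang, "en")                          # Translate
    inferences = [i for rel in relations for i in retrieve(english, rel)]  # Retrieve
    return [translate(i, "en", src_lang) for i in inferences]              # Translate

# Stand-ins for the real translation API and the COMET decoder, used only
# to demonstrate the pipeline's data flow.
def stub_translate(text: str, src: str, tgt: str) -> str:
    return f"{text} [{src}->{tgt}]"

def stub_retrieve(text: str, relation: str) -> List[str]:
    return [f"{relation}: inference about '{text}'"]

ic_knwl = trt_inferences("Primer naslova", "sl", stub_translate, stub_retrieve)
```

The retrieved strings would then accompany the original headline as the IC_Knwl input to the attention layer and the classifier.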
### 6.2 Practical implications
Our study highlights the role of Inferential Commonsense Knowledge (IC_Knwl)
in facilitating the comprehension of short news headline text. It demonstrates
that the IC_Knwl, when used in conjunction with the translate-retrieve-
translate technique, can effectively aid in the comprehension of narratives in
a multilingual context. When fused with multilingual pre-trained language
models (PLMs), it enhances the political polarity prediction of multilingual
news headlines. Effective systems are expected to combine implicit and
explicit knowledge; the performance enhancement achieved by fusing the
implicit knowledge encoded in the PLMs with explicit knowledge in the form of
IC_Knwl supports this view.
Given that a system is expected to deal with low-resource situations in the
real world, our proposed framework is language-agnostic and thus adaptable to
such scenarios. Another common problem in real-world settings is the scarcity of
annotated data. Our proposed dataset, which focuses on low-resource languages
with an imbalanced distribution, addresses this issue. Furthermore, our
framework for data generation facilitates future expansion and the creation of
custom datasets for related tasks.
## 7 Conclusions and future work
In this paper, we introduced a language-agnostic learning framework infused
with Inferential Commonsense Knowledge (IC_Knwl) for enhancing the prediction
of political polarity in multilingual news headlines under imbalanced sample
distribution. We proposed to leverage IC_Knwl through a Translate-Retrieve-
Translate (TRT) strategy to help uncover contextual features for comprehension
of the overall narrative of the multilingual headlines. Since not all the
retrieved inferences are expected to be of equal relevance, we also employed
an attention mechanism to emphasise relevant inferences. We used the neural-
network model COMET trained on the $\text{ATOMIC}^{20}_{20}$ knowledge graphs
to retrieve IC_Knwl and employed the Google Translate API for translation.
Furthermore, we presented an annotated dataset of news headlines in five low-
resource European languages.
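The attention step over the retrieved inferences can be sketched in a few lines (a minimal NumPy sketch under our own assumptions: the scaled dot-product form, the embedding dimensions, and the concatenation-based fusion are illustrative, not the exact architecture): the headline embedding queries the inference embeddings, and the softmax weights down-weight less relevant inferences before their weighted sum is fused with the headline representation.

```python
import numpy as np

def attend_over_inferences(headline_emb: np.ndarray, inference_embs: np.ndarray):
    """headline_emb: shape (d,); inference_embs: shape (k, d).
    Returns softmax attention weights (k,) and the attended context (d,)."""
    d = headline_emb.shape[0]
    scores = inference_embs @ headline_emb / np.sqrt(d)  # relevance of each inference
    scores -= scores.max()                               # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()      # softmax over inferences
    context = weights @ inference_embs                   # weighted sum of inferences
    return weights, context

rng = np.random.default_rng(0)
h = rng.normal(size=8)        # headline embedding (illustrative dimension)
E = rng.normal(size=(5, 8))   # five retrieved inference embeddings
w, ctx = attend_over_inferences(h, E)
fused = np.concatenate([h, ctx])  # one plausible fusion before the classifier
```

Inferences whose embeddings align poorly with the headline receive small weights, so they contribute little to the fused representation, which is the intended emphasis on relevant inferences.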
We conducted an extensive evaluation of our framework with several
multilingual pre-trained language models (PLMs). The evaluation results
revealed their impressive performance, which can be attributed to their
complex network architectures. The results also demonstrated that
incorporating IC_Knwl and employing attention significantly enhanced their
performance. Overall, the results indicate that the proposed framework for
bias prediction is effective regardless of the models used. Even the models
evaluated for individual languages present plausible results. Furthermore, we
conducted a thorough case study on the Slovenian headlines to investigate
translation errors. The study uncovered numerous instances of improper
translation, indicating that the performance gap between Slovenian and other
languages may be attributable to the language’s poor translation quality.
In the future, we plan to diversify our additional knowledge sources. In
particular, we intend to investigate how knowledge sources such as Wiktionary
and ConceptNet influence the task of polarity prediction. Another possible
direction is to extend this study beyond polarity prediction to its
quantification and correction. It would also be interesting to experiment with
auxiliary tasks involving news headlines in a multitask learning paradigm.
## CRediT authorship contribution statement
Swati Swati: Conceptualization, Data curation, Investigation, Methodology,
Software, Validation, Visualization, Writing - original draft. Adrian Mladenić
Grobelnik: Investigation, Validation, Writing - review & editing. Dunja
Mladenić: Conceptualization, Supervision, Writing - review & editing. Marko
Grobelnik: Conceptualization, Funding acquisition, Supervision, Writing -
review & editing.
## Declaration of competing interest
The authors declare that they have no known competing financial interests or
personal relationships that could have appeared to influence the work reported
in this paper.
## Data availability
Dataset and scripts available at: https://github.com/Swati17293/KG-Multi-Bias
## Acknowledgements
This work was supported by the Slovenian Research Agency under the project
J2-1736 Causalify and the European Union’s Horizon 2020 research and
innovation program under the Marie Skłodowska-Curie grant agreement No 812997.
* [88] E. Cambria, S. Poria, D. Hazarika, K. Kwok, Senticnet 5: Discovering conceptual primitives for sentiment analysis by means of context embeddings, in: Proceedings of the AAAI conference on artificial intelligence, Vol. 32, 2018, https://ojs.aaai.org/index.php/AAAI/article/view/11559.
* [89] N. Mostafazadeh, A. Kalyanpur, L. Moon, D. Buchanan, L. Berkowitz, O. Biran, J. Chu-Carroll, Glucose: Generalized and contextualized story explanations, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, 2020, pp. 4569–4586. doi:10.18653/v1/2020.emnlp-main.370.
* [90] H. Rashkin, M. Sap, E. Allaway, N. A. Smith, Y. Choi, Event2mind: Commonsense inference on events, intents, and reactions, in: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, 2018, pp. 463–473. doi:10.18653/v1/P18-1043.
* [91] J. Romero, S. Razniewski, Inside quasimodo: Exploring construction and usage of commonsense knowledge, in: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020, pp. 3445–3448. doi:10.1145/3340531.3417416.
* [92] N. Tandon, G. De Melo, G. Weikum, Webchild 2.0: Fine-grained commonsense knowledge distillation, in: Proceedings of ACL 2017, System Demonstrations, Association for Computational Linguistics, 2017, pp. 115–120. doi:10.18653/v1/P17-4020.
* [93] S. Gabriel, C. Bhagavatula, V. Shwartz, R. Le Bras, M. Forbes, Y. Choi, Paragraph-level commonsense transformers with recurrent memory, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, 2021, pp. 12857–12865, https://ojs.aaai.org/index.php/AAAI/article/view/17521.
* [94] J. D. M.-W. C. Kenton, L. K. Toutanova, Bert: Pre-training of deep bidirectional transformers for language understanding, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1, Association for Computational Linguistics, 2019, pp. 4171–4186. doi:10.18653/v1/N19-1423.
* [95] K. Clark, M.-T. Luong, Q. V. Le, C. D. Manning, Electra: Pre-training text encoders as discriminators rather than generators, arXiv preprint arXiv:2003.10555doi:10.48550/arXiv.2003.10555.
* [96] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. R. Salakhutdinov, Q. V. Le, Xlnet: Generalized autoregressive pretraining for language understanding, Advances in neural information processing systems 32, http://papers.neurips.cc/paper/8812-xlnet-generalized-autoregressive-pretraining-for-language-understanding.pdf.
* [97] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, R. Soricut, Albert: A lite bert for self-supervised learning of language representations, arXiv preprint arXiv:1909.11942doi:10.48550/arXiv.1909.11942.
* [98] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, L. Zettlemoyer, Deep contextualized word representations, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,(Long Papers), Vol. 1, Association for Computational Linguistics, 2018, pp. 2227–2237. doi:10.18653/v1/N18-1202.
* [99] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is all you need, Advances in neural information processing systems 30 (2017) 6000–6010, https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
* [100] T. Lin, Y. Wang, X. Liu, X. Qiu, A survey of transformers, arXiv preprint arXiv:2106.04554doi:10.48550/arXiv.2106.04554.
* [101] B. Gain, D. Bandyopadhyay, T. Saikh, A. Ekbal, Iitp@ coliee 2019: legal information retrieval using bm25 and bert, arXiv preprint arXiv:2104.08653doi:10.48550/arXiv.2104.08653.
* [102] K. Mishra, M. Firdaus, A. Ekbal, Please be polite: Towards building a politeness adaptive dialogue system for goal-oriented conversations, Neurocomputing 494 (2022) 242–254. doi:10.1016/j.neucom.2022.04.029.
* [103] S. Yadav, M. Sarrouti, D. Gupta, Nlm at mediqa 2021: Transfer learning-based approaches for consumer question and multi-answer summarization, in: Proceedings of the 20th Workshop on Biomedical Language Processing, 2021, pp. 291–301. doi:10.18653/v1/2021.bionlp-1.34.
* [104] S. Yadav, D. Gupta, A. B. Abacha, D. Demner-Fushman, Question-aware transformer models for consumer health question summarization, Journal of Biomedical Informatics 128 (2022) 104040. doi:10.1016/j.jbi.2022.104040.
* [105] S. Pingali, S. Yadav, P. Dutta, S. Saha, Multimodal graph-based transformer framework for biomedical relation extraction, in: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2021, pp. 3741–3747, https://aclanthology.org/2021.findings-acl.328.pdf.
* [106] J. Shin, Y. Lee, K. Jung, Effective sentence scoring method using bert for speech recognition, in: Asian Conference on Machine Learning, PMLR, 2019, pp. 1081–1093, https://proceedings.mlr.press/v101/shin19a.html.
* [107] G. V. Singh, M. Firdaus, A. Ekbal, P. Bhattacharyya, Unity in diversity: Multilabel emoji identification in tweets, IEEE Transactions on Computational Social Systemsdoi:10.1109/TCSS.2022.3162865.
* [108] A. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, É. Grave, M. Ott, L. Zettlemoyer, V. Stoyanov, Unsupervised cross-lingual representation learning at scale, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, 2020, pp. 8440–8451. doi:10.18653/v1/2020.acl-main.747.
* [109] Y. Liu, J. Gu, N. Goyal, X. Li, S. Edunov, M. Ghazvininejad, M. Lewis, L. Zettlemoyer, Multilingual denoising pre-training for neural machine translation, Transactions of the Association for Computational Linguistics 8 (2020) 726–742. doi:10.1162/tacl_a_00343.
URL
https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00343/96484/Multilingual-
Denoising-Pre-training-for-Neural
* [110] A. Kumar, V. H. C. Albuquerque, Sentiment analysis using xlm-r transformer and zero-shot transfer learning on resource-poor indian language, Transactions on Asian and Low-Resource Language Information Processing 20 (5) (2021) 1–13. doi:10.1145/3461764.
* [111] E. Novak, L. Bizjak, D. Mladenić, M. Grobelnik, Why is a document relevant? understanding the relevance scores in cross-lingual document retrieval, Knowledge-Based Systems 244 (2022) 108545. doi:10.1016/j.knosys.2022.108545.
* [112] B. Muller, Y. Elazar, B. Sagot, D. Seddah, First align, then predict: Understanding the cross-lingual ability of multilingual bert, in: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, Association for Computational Linguistics, 2021, pp. 2214–2231. doi:10.18653/v1/2021.eacl-main.189.
* [113] D. Cer, Y. Yang, S.-y. Kong, N. Hua, N. Limtiaco, R. S. John, N. Constant, M. Guajardo-Cespedes, S. Yuan, C. Tar, et al., Universal sentence encoder, arXiv preprint arXiv:1803.11175doi:10.48550/arXiv.1803.11175.
* [114] N. Reimers, I. Gurevych, Making monolingual sentence embeddings multilingual using knowledge distillation, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, 2020, pp. 4512–4525. doi:10.18653/v1/2020.emnlp-main.365.
* [115] F. Feng, Y. Yang, D. Cer, N. Arivazhagan, W. Wang, Language-agnostic bert sentence embedding, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, 2022, pp. 878–891. doi:10.18653/v1/2022.acl-long.62.
* [116] V. Chaudhary, Y. Tang, F. Guzmán, H. Schwenk, P. Koehn, Low-resource corpus filtering using multilingual sentence embeddings, in: Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), Association for Computational Linguistics, 2019, pp. 261–266. doi:10.18653/v1/W19-5435.
* [117] A.-S. Mohammad, M. M. Hammad, A. Sa’ad, A.-T. Saja, E. Cambria, Gated recurrent unit with multilingual universal sentence encoder for arabic aspect-based sentiment analysis, Knowledge-Based Systems (2021) 107540doi:10.1016/j.knosys.2021.107540.
* [118] A. Conneau, D. Kiela, H. Schwenk, L. Barrault, A. Bordes, Supervised learning of universal sentence representations from natural language inference data, in: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2017, pp. 670–680. doi:10.18653/v1/D17-1070.
* [119] MBFC, Media bias/fact check - search and learn the bias of news media, https://mediabiasfactcheck.com, accessed: June 6, 2022.
* [120] R. Baly, G. Karadzhov, D. Alexandrov, J. Glass, P. Nakov, Predicting factuality of reporting and bias of news media sources, in: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2018, pp. 3528–3539. doi:10.18653/v1/D18-1389.
* [121] P. Resnick, A. Ovadya, G. Gilchrist, Iffy quotient: A platform health metric for misinformation, Center for Social Media Responsibility 17, http://umsi.info/iffy-quotient-whitepaper.
* [122] MBFC, Left vs. right bias: How we rate the bias of media sources - media bias/fact check, https://mediabiasfactcheck.com/left-vs-right-bias-how-we-rate-the-bias-of-media-sources/, accessed: June 6, 2022.
* [123] MBFC, methodology - media bias/fact check, https://mediabiasfactcheck.com/methodology/, accessed: June 6, 2022.
* [124] G. Leban, B. Fortuna, J. Brank, M. Grobelnik, Event registry: learning about world events from news, in: Proceedings of the 23rd International Conference on World Wide Web, 2014, pp. 107–110. doi:10.1145/2567948.257702.
* [125] S. Swati, D. Mladenić, T. Erjavec, Eveout: an event-centric news dataset to analyze an outlet’s event selection patterns, Informatica 45 (7). doi:10.31449/inf.v45i7.3410.
* [126] S. Swati, D. Mladenić, Are you following the right news-outlet? a machine learning based approach to outlet prediction, in: In Proceedings of the Slovenian KDD Conference on Data Mining and Data Warehouses (SiKDD), 2020, https://ailab.ijs.si/Dunja/SiKDD2020/Papers/08-swati_outlet_prediction.pdf.
* [127] S. Swati, D. Mladenić, Understanding the impact of geographical bias on news sentiment: A case study on london and rio olympics, in: In Proceedings of the Slovenian KDD Conference on Data Mining and Data Warehouses (SiKDD), 2021, https://ailab.ijs.si/dunja/SiKDD2021/Papers/Swati+Mladenic.pdf.
* [128] A. Bosselut, H. Rashkin, M. Sap, C. Malaviya, A. Celikyilmaz, Y. Choi, Comet: Commonsense transformers for automatic knowledge graph construction, in: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, 2019, pp. 4762–4779. doi:10.18653/v1/P19-1470.
* [129] K. Nantomah, On some properties of the sigmoid function, Asia Mathematikahttps://hal.archives-ouvertes.fr/hal-02635089/.
* [130] N. Majumder, P. Hong, S. Peng, J. Lu, D. Ghosal, A. Gelbukh, R. Mihalcea, S. Poria, Mime: Mimicking emotions for empathetic response generation, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, 2020, pp. 8968–8979. doi:10.18653/v1/2020.emnlp-main.721.
* [131] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, in: International Conference on Learning Representations (ICLR), 2015, pp. 1–13, https://hdl.handle.net/11245/1.505367.
* [132] U. N. Sayar Ghosh Roy, T. Raha, Z. Abid, V. Varma, Leveraging multilingual transformers for hate speech detection, CEUR Workshop Proceedings: FIRE 2020 \- Forum for Information Retrieval Evaluationhttp://ceur-ws.org/Vol-2826/T2-4.pdf.
* [133] Y. Yang, D. Cer, A. Ahmad, M. Guo, J. Law, N. Constant, G. H. Abrego, S. Yuan, C. Tar, Y.-H. Sung, et al., Multilingual universal sentence encoder for semantic retrieval, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2020, pp. 87–94. doi:10.18653/v1/2020.acl-demos.12.
* [134] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, Bert: Pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv:1810.04805doi:10.48550/arXiv.1810.04805.
* [135] K. Kowsari, K. Jafari Meimandi, M. Heidarysafa, S. Mendu, L. Barnes, D. Brown, Text classification algorithms: A survey, Information 10 (4) (2019) 150. doi:10.3390/info10040150.
* [136] R. Real, J. M. Vargas, The probabilistic basis of jaccard’s index of similarity, Systematic biology 45 (3) (1996) 380–385. doi:10.2307/2413572.
* [137] B. Nagle, A proposal for dealing with grade inflation: The relative performance index, Journal of Education for Business 74 (1) (1998) 40–43. doi:10.1080/08832329809601659.
|
# The viscous damping of a three-dimensional spherical gas bubble inside an unbounded compressible liquid
Lifeng Zhao111School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui, 230026, PR China and Liangchen Zou222School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui, 230026, PR China
###### Abstract
The present paper considers a homogeneous gas bubble inside an unbounded polytropic compressible liquid with viscosity. The system is governed by the Navier-Stokes equations with a free boundary, which is determined by the kinematic and dynamic boundary conditions on the bubble-liquid interface. The global existence of solutions is proved, and the $\dot{H}^{1}$ asymptotic stability of the spherical equilibrium under viscous damping, together with an explicit decay rate, is established by elementary energy methods.
## 1 Introduction
The bubble-liquid system is omnipresent in nature and has widespread applications across different fields. Examples include microbubble ultrasound contrast agents [17], the damage to ships caused by underwater explosions [7][12], bubble dynamics in magmas [16], and the influence of cavitation on ship propellers [9]. For a collection of bubble phenomena and applications, one can refer to the review article [11] by Leighton.
The study of bubble dynamics can be traced back to Rayleigh's work [14] on a spherical homogeneous gas bubble in an incompressible, inviscid liquid with surface tension, which investigated the pressure during cavity collapse. In the incompressible, spherically symmetric case, the dynamics of the bubble-liquid system reduces to the well-known Rayleigh-Plesset equation. However, the Rayleigh-Plesset equation fails to explain the damped oscillation of underwater explosion bubbles, which was later found to be caused by compressibility. To this end, Keller [8] modified the Rayleigh-Plesset equation by incorporating the propagation of sound waves in the compressible liquid. The Rayleigh-Plesset and Keller equations have been widely studied by both numerical and mathematical methods in a great variety of settings. For a systematic overview of the Rayleigh-Plesset equation, one may refer to Ohnawa and Suzuki [13] and the references therein, which investigate the Rayleigh-Plesset and Keller equations mathematically and present related numerical results.
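As background for the oscillation just described, the classical Rayleigh-Plesset dynamics can be integrated numerically. The sketch below is illustrative only (neither the parameter values nor the code are taken from the paper): it integrates the undamped equation $R\ddot{R}+\frac{3}{2}\dot{R}^{2}=(p_{g}(R)-p_{\infty})/\rho_{\ell}$ with a polytropic gas pressure, and the resulting radius oscillates without decay, consistent with the remark that compressibility (or viscosity) is needed to produce damping.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not taken from the paper)
rho_l = 1000.0        # liquid density [kg/m^3]
p_inf = 101325.0      # ambient pressure [Pa]
gamma = 1.4           # polytropic exponent of the gas
R_ref = 1e-4          # reference bubble radius [m]
p_g0 = 1.5 * p_inf    # gas pressure at R_ref (overpressured, so the bubble oscillates)

def rayleigh_plesset(t, y):
    """Undamped Rayleigh-Plesset: R*R'' + 1.5*R'^2 = (p_g(R) - p_inf)/rho_l."""
    R, Rdot = y
    p_gas = p_g0 * (R_ref / R) ** (3.0 * gamma)
    Rddot = ((p_gas - p_inf) / rho_l - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 1e-4), [R_ref, 0.0],
                rtol=1e-9, atol=1e-14, max_step=1e-7)
R = sol.y[0]
# Without compressibility or viscosity the radius oscillates between
# R_ref and a maximum radius without any decay of the amplitude.
print(R.min(), R.max())
```

Since the gas is overpressured at the start, the bubble expands, decelerates and returns, tracing a periodic orbit in the $(R,\dot R)$ plane.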
When compressibility, nonlinearity and asymmetry are taken into consideration, the analysis of the bubble-liquid system becomes complicated: the system is then described by the compressible Euler or Navier-Stokes equations (depending on whether viscosity is considered) posed on exterior domains with free boundaries. Shapiro and Weinstein [15] described the dynamics of a homogeneous bubble surrounded by a compressible, inviscid liquid with surface tension and proved exponential radiative decay in the linear approximation near the spherical equilibrium, using a spherical harmonics decomposition. For an inhomogeneous bubble, the recent work of Lai and Weinstein [10] proved the asymptotic stability of the spherical equilibrium provided the liquid exterior to the bubble is incompressible.
Compared to the incompressible, spherically symmetric case, where viscosity contributes nothing to the liquid exterior to the gas bubble, viscosity plays an important role in the compressible setting. It has long been known that the radius of a pulsating gas bubble in a liquid undergoes damping induced by various mechanisms, including thermal effects, energy radiated outward by sound waves, and energy lost to viscosity [3]; see also the recent book [19]. The present paper focuses on viscous damping, and we consider a homogeneous gas bubble surrounded by a compressible viscous liquid with surface tension under spherical symmetry. The bubble-liquid system consists of three parts: the exterior liquid, the gas bubble within, and the bubble-liquid interface. The liquid is governed by the Navier-Stokes equations, the pressure of the homogeneous bubble is assumed to satisfy the polytropic gas law, and the interface is determined by the kinematic and dynamic boundary conditions relating the liquid and bubble pressures together with the surface tension.
Therefore, as a whole, the bubble-liquid system is determined by the equation system
$$\begin{aligned}
&\partial_{t}\rho+\nabla\cdot(\rho u)=0, &&\xi\in\Omega(t)^{c},\;t>0,\quad(1.1)\\
&\rho\partial_{t}u+\rho u\cdot\nabla u+\nabla p=\mu\nabla\cdot D(u), &&\xi\in\Omega(t)^{c},\;t>0,\quad(1.2)\\
&\partial_{t}\Xi(z,t)\cdot n(\Xi(z,t),t)=u(\Xi(z,t),t)\cdot n(\Xi(z,t),t), &&z\in\mathbb{S},\;t>0,\quad(1.3)\\
&\big(p-\mu D(u)n\cdot n\big)(\Xi(z,t),t)=p_{b}(t)-2\sigma H[\Xi], &&z\in\mathbb{S},\;t>0,\quad(1.4)
\end{aligned}$$
where $u$, $\rho$, $p$ denote the velocity, density and pressure of the liquid exterior to the bubble; $\Omega(t)\subset\mathbb{R}^{3}$ is the region occupied by the bubble; $D(u):=\frac{1}{2}\left(\nabla u+\nabla u^{T}\right)$ is the symmetric strain-rate tensor; and the viscosity coefficient $\mu$ is assumed to be a positive constant. The bubble surface is assumed to be diffeomorphic to the unit sphere $\mathbb{S}$ through $\Xi$, and $n(\Xi,t)$ denotes the outer normal vector at $\Xi(z,t)$ on the bubble surface. $\sigma$ is the surface tension, and $H[\Xi]:=\frac{1}{2}\nabla\cdot n$ is the mean curvature at $\Xi$. The pressure of the liquid $p$ is assumed polytropic, namely $p=C_{0}\rho^{\gamma}$ with $\gamma>1$. As mentioned above, the bubble pressure $p_{b}$ is assumed homogeneous and satisfies the polytropic gas law $p_{b}=C_{1}|\Omega(t)|^{-\gamma_{0}}$ with $\gamma_{0}>1$.
Since we restrict the study to the spherically symmetric setting, suppose that
$\rho(\xi,t)=\rho(r,t),\;u(\xi,t)=u(r,t)\frac{\xi}{r},\;\Xi(z,t)=R(t)\frac{\xi}{r},\;\text{with }r=|\xi|.$
Then the outer normal vector $n=\frac{\xi}{r}$, and the mean curvature
$H[\Xi]=R^{-1}$. Hence in the spherically symmetric case, system (1.1)-(1.4) becomes
$$\begin{aligned}
&\partial_{t}\rho+r^{-2}\partial_{r}(r^{2}\rho u)=0, && r>R(t),\;t>0,\quad(1.5)\\
&\rho\partial_{t}u+\rho u\partial_{r}u+\partial_{r}p=\mu\partial_{r}\Big(\partial_{r}+\frac{2}{r}\Big)u, && r>R(t),\;t>0,\quad(1.6)\\
&\frac{dR}{dt}=u|_{r=R(t)}, && t>0,\quad(1.7)\\
&(p-\mu\partial_{r}u)|_{r=R(t)}=p_{b}-2\sigma R^{-1}, && t>0.\quad(1.8)
\end{aligned}$$
The system (1.5-1.8) admits an equilibrium state, and after
nondimensionalization [15, Appendix C], one can assume the equilibrium state
to be
$\rho=1,\;u=0,\;R=1,\;p=\frac{Ca}{2}\rho^{\gamma},\;p_{b}=\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}},$
where $Ca$ is called the cavitation number, and $We$ is the Weber number. The
equations (1.5)-(1.8) are then rewritten as
$$\begin{aligned}
&\partial_{t}\rho+r^{-2}\partial_{r}(r^{2}\rho u)=0, && r>R(t),\;t>0,\quad(1.9)\\
&\rho\partial_{t}u+\rho u\partial_{r}u+\frac{Ca}{2}\partial_{r}(\rho^{\gamma})=\mu\partial_{r}\Big(\partial_{r}+\frac{2}{r}\Big)u, && r>R(t),\;t>0,\quad(1.10)\\
&\frac{dR}{dt}=u|_{r=R(t)}, && t>0,\quad(1.11)\\
&\Big(\frac{Ca}{2}\rho^{\gamma}-\mu\partial_{r}u\Big)\Big|_{r=R(t)}=\Big(\frac{Ca}{2}+\frac{2}{We}\Big)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}, && t>0.\quad(1.12)
\end{aligned}$$
The system (1.9)-(1.12) is a free boundary problem with the nonlinear boundary condition (1.12), so it is natural to introduce Lagrangian coordinates. Namely, define the Lagrangian coordinate $x:=\int_{R(t)}^{r}\rho(s,t)s^{2}ds$. Physically, $x$ stands for the mass of liquid exterior to the bubble but inside the sphere of radius $r$. Then, using (1.9), a direct calculation gives
$$\left[\begin{matrix}\frac{\partial x}{\partial r}&\frac{\partial x}{\partial t}\\ \frac{\partial t}{\partial r}&\frac{\partial t}{\partial t}\end{matrix}\right]=\left[\begin{matrix}\rho r^{2}&-\rho r^{2}u\\ 0&1\end{matrix}\right],\qquad\left[\begin{matrix}\frac{\partial r}{\partial x}&\frac{\partial r}{\partial t}\\ \frac{\partial t}{\partial x}&\frac{\partial t}{\partial t}\end{matrix}\right]=\left[\begin{matrix}(\rho r^{2})^{-1}&u\\ 0&1\end{matrix}\right].\quad(1.13)$$
In view of (1.13), the system (1.9)-(1.12) is transformed into
$$\begin{aligned}
&\partial_{t}\rho+\rho^{2}\partial_{x}(r^{2}u)=0, && x>0,\;t>0,\quad(1.14)\\
&\partial_{t}u+\frac{Ca}{2}r^{2}\partial_{x}(\rho^{\gamma})=\mu r^{2}\partial_{x}\left(\rho\partial_{x}(r^{2}u)\right), && x>0,\;t>0,\quad(1.15)\\
&\frac{dR}{dt}=u|_{x=0}, && t>0,\quad(1.16)\\
&\Big(\frac{Ca}{2}\rho^{\gamma}-\mu\rho r^{2}\partial_{x}u\Big)\Big|_{x=0}=\Big(\frac{Ca}{2}+\frac{2}{We}\Big)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}, && t>0,\quad(1.17)\\
&r=\left(R(t)^{3}+3\int_{0}^{x}\rho^{-1}(y,t)dy\right)^{\frac{1}{3}}=r(x,0)+\int_{0}^{t}u(x,\tau)d\tau, && x>0,\;t>0,\quad(1.18)
\end{aligned}$$
with the initial value
$(u,\;\rho,\;R)|_{t=0}=(u_{0},\;\rho_{0},\;R_{0}),$ (1.19)
and the compatibility condition
$r_{0}(x)=\left(R_{0}^{3}+3\int_{0}^{x}\rho_{0}^{-1}(y)dy\right)^{\frac{1}{3}}.$
(1.20)
The first result is the global existence and uniqueness of the generalized solution to (1.14)-(1.19), which is defined as follows:
###### Definition 1.1.
$(u,\;\rho,\;R)$ is said to be a generalized solution to system (1.14-1.19) on
$[0,T]$ with initial value $(u_{0},\;\rho_{0},\;R_{0})$, if
$u\in C\left([0,T],\;L^{2}(0,+\infty)\right),\;r^{2}\partial_{x}u\in
C\left([0,T],\;L^{2}(0,+\infty)\right),$ $\partial_{t}u\in
L^{\infty}\left([0,T],\;L^{2}(0,+\infty)\right),\;r^{2}\partial_{t}\partial_{x}u\in
L^{2}\left([0,T],\;L^{2}(0,+\infty)\right),$ $\rho-1\in
C\left([0,T],\;L^{2}(0,+\infty)\right),\;r^{2}\partial_{x}(\log\rho)\in
C\left([0,T],\;L^{2}(0,+\infty)\right),$ $\partial_{t}\rho\in
L^{\infty}\left([0,T],\;L^{2}(0,+\infty)\right),\;r^{2}\partial_{t}\partial_{x}(\log\rho)\in
L^{\infty}\left([0,T],\;L^{2}(0,+\infty)\right),$
$\inf_{(x,t)\in(0,+\infty)\times[0,T]}\rho>0,\;\inf_{t\in[0,T]}R>0,\;\rho\in
L^{\infty}\left([0,T],\;L^{\infty}(0,+\infty)\right),\;R\in L^{\infty}[0,T],$
and (1.14)-(1.15) are satisfied in the $L^{\infty}\left([0,T],\;L^{2}(0,+\infty)\right)$ sense while (1.16)-(1.17) are satisfied in the trace sense.
Now we are in a position to state the main results:
###### Theorem 1.2 (Global existence and uniqueness).
Suppose that the initial value $(u_{0},\;\rho_{0},\;R_{0})$ satisfies
$u_{0}\in L^{2}(0,+\infty),\;r_{0}^{2}\partial_{x}u_{0}\in L^{2}(0,+\infty),\;r_{0}^{2}\partial_{x}\left(\rho_{0}\partial_{x}(r_{0}^{2}u_{0})\right)\in L^{2}(0,+\infty),$
$\int_{0}^{\infty}H(\rho_{0})dx<+\infty,\;r_{0}^{2}\partial_{x}(\log\rho_{0})\in L^{2}(0,+\infty),\;\text{where }H(\rho)=\rho^{\gamma-1}-\gamma+(\gamma-1)\rho^{-1},$
$\inf\rho_{0}>0,\;\sup\rho_{0}<+\infty,\;0<R_{0}<\infty,\;\text{and }r_{0}\text{ is given by (1.20).}$
Then there exists a unique global generalized solution to (1.14)-(1.19).
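The function $H(\rho)=\rho^{\gamma-1}-\gamma+(\gamma-1)\rho^{-1}$ appearing in the hypotheses acts as a relative entropy: $H(1)=0$, and since $H'(\rho)=(\gamma-1)(\rho^{\gamma-2}-\rho^{-2})$ changes sign only at $\rho=1$ (for $\gamma>1$), $H$ is nonnegative and vanishes exactly at the equilibrium density, so $\int H(\rho_{0})\,dx$ measures the distance of the initial density to equilibrium. A quick numerical check of this picture (illustrative, for a sample exponent $\gamma=1.4$):

```python
import numpy as np

def H(rho, gamma=1.4):
    """Relative-entropy density H(rho) = rho^(gamma-1) - gamma + (gamma-1)/rho."""
    return rho ** (gamma - 1.0) - gamma + (gamma - 1.0) / rho

rho = np.linspace(0.05, 5.0, 2000)
vals = H(rho)

assert abs(H(1.0)) < 1e-12          # vanishes at the equilibrium density
assert np.all(vals >= -1e-12)       # nonnegative on the whole grid
assert H(0.1) > 0 and H(3.0) > 0    # strictly positive away from rho = 1
```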
In contrast to free boundary problems for the Navier-Stokes equations on bounded domains with vacuum (for example, [4][6][18]), the positivity of the bubble pressure prevents the formation of vacuum. However, the unboundedness of the domain causes some difficulties in establishing the energy estimates: the elliptic estimates become delicate, and the lack of spatial decay of $u$ makes integration by parts ambiguous. To overcome these difficulties, we borrow an idea of Jiang [5], who considered a related initial-boundary value problem on bounded domains, and construct approximate solutions to (1.14)-(1.19) from the solutions of this initial-boundary problem on bounded domains.
###### Theorem 1.3 (Viscous damping).
Suppose that the initial value $(u_{0},\;\rho_{0},\;R_{0})$ is close enough to the equilibrium state in the sense that, for some small positive $\delta$,
$\|u_{0}\|_{L^{2}}^{2}+\int_{0}^{\infty}H(\rho_{0})dx+\|r_{0}^{2}\partial_{x}(\log\rho_{0})\|_{L^{2}}^{2}+(R_{0}-1)^{2}\leq\delta.$ (1.21)
Then the global generalized solution given by Theorem 1.2 satisfies
$\|r^{2}\partial_{x}u\|_{L^{2}}^{2}+\left\|\frac{u}{r}\right\|_{L^{2}}^{2}+\|r^{2}\partial_{x}\rho\|_{L^{2}}^{2}+(R-1)^{2}\leq C(1+t)^{-1},$ (1.22)
where $C$ is a constant depending on the initial data.
The proof of Theorem 1.3 requires more careful estimates to bound the density $\rho$ from above and below uniformly in time, using the Bresch-Desjardins entropy estimate [2] and making full use of the dissipation to cancel bad boundary terms. Note that, compared with the regularity assumptions of Theorem 1.2 on the initial data, the smallness assumption (1.21) only involves the low-regularity quantities $u_{0}$, $H(\rho_{0})$, $R_{0}$ and $r_{0}^{2}\partial_{x}(\log\rho_{0})$, which implies that a large gradient of the velocity in the initial data does not inhibit the resulting decay. The novelty is that system (1.14)-(1.19) involves the nonlinear boundary condition (1.17); we avoid linearizing the system and work entirely within the nonlinear framework.
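The rate $(1+t)^{-1}$ in (1.22) is the one produced by a Riccati-type differential inequality $y'(t)\le -c\,y(t)^{2}$, which integrates to $y(t)\le y(0)/(1+c\,y(0)\,t)$. The precise inequality used in Lemma 4.8 is not reproduced here; the sketch below only illustrates how such an inequality yields this decay rate, by solving the borderline case numerically:

```python
import numpy as np
from scipy.integrate import solve_ivp

c = 1.0
y0 = 1.0

# Borderline case y' = -c*y^2 of the differential inequality y' <= -c*y^2
sol = solve_ivp(lambda t, y: -c * y**2, (0.0, 100.0), [y0],
                rtol=1e-10, atol=1e-12, dense_output=True)

for t in (1.0, 10.0, 100.0):
    y = sol.sol(t)[0]
    exact = y0 / (1.0 + c * y0 * t)          # closed-form solution
    assert abs(y - exact) < 1e-6
    assert y * (1.0 + t) <= y0 * 1.01        # (1+t)^{-1} decay, as in (1.22)
```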
In the following sections, $Ca,\;We,\;\mu$ denote the corresponding fixed constants; $c,\;C$ denote constants depending only on the initial value and these fixed constants; and $c(T),\;C(T)$ denote constants depending additionally on the time span $[0,T]$. For simplicity, $\partial_{\alpha}f$ is sometimes written as $f_{\alpha}$ for $\alpha=x,\;t$ and $f=u,\;\rho$, etc. It should be noted that $c,\;c(T),\;C,\;C(T)$ are required to be independent of the size of the bounded domains.
The plan of the paper is as follows. In the next section, we state the related initial-boundary value problem on bounded domains and prove the existence of global solutions to it by a standard procedure: short-time existence, a priori estimates and a continuity argument. In the first part of Section 3, approximate solutions are constructed from the solutions on bounded domains, and weak compactness is employed to obtain the exact solution to (1.14)-(1.19); the uniqueness is proved in the second part of Section 3. Finally, the uniform-in-time estimates are given in Section 4 and applied to obtain the viscous damping with the help of the differential inequality in Lemma 4.8.
#### Acknowledgements
L. Zhao is supported by NSFC Grant of China No. 12271497 and the National Key
Research and Development Program of China No. 2020YFA0713100.
## 2 The bubble-liquid system on bounded domains
In this section, we temporarily abbreviate $L^{\infty}(0,k)$ as $L^{\infty}$ and $L^{2}(0,k)$ as $L^{2}$, and correspondingly $\|\cdot\|_{L^{\infty}(0,k)}$ as $\|\cdot\|_{L^{\infty}}$ and $\|\cdot\|_{L^{2}(0,k)}$ as $\|\cdot\|_{L^{2}}$. Now consider the bubble-liquid system on the bounded domain $[0,k]$ for $k>1$, namely
$$\begin{aligned}
&\partial_{t}\rho+\rho^{2}\partial_{x}(r^{2}u)=0, && 0<x<k,\;t>0,\quad(2.1)\\
&\partial_{t}u+\frac{Ca}{2}r^{2}\partial_{x}(\rho^{\gamma})=\mu r^{2}\partial_{x}\left(\rho\partial_{x}(r^{2}u)\right), && 0<x<k,\;t>0,\quad(2.2)\\
&\frac{dR}{dt}=u|_{x=0},\quad u|_{x=k}=0, && t>0,\quad(2.3)\\
&\Big(\frac{Ca}{2}\rho^{\gamma}-\mu\rho r^{2}\partial_{x}u\Big)\Big|_{x=0}=\Big(\frac{Ca}{2}+\frac{2}{We}\Big)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}, && t>0,\quad(2.4)\\
&r=\left(R(t)^{3}+3\int_{0}^{x}\rho^{-1}(y,t)dy\right)^{\frac{1}{3}}=r(x,0)+\int_{0}^{t}u(x,\tau)d\tau, && 0<x<k,\;t>0,\quad(2.5)\\
&(u,\;\rho,\;R)|_{t=0}=(u_{0},\;\rho_{0},\;R_{0}), && 0<x<k,\quad(2.6)\\
&r_{0}(x)=\left(R_{0}^{3}+3\int_{0}^{x}\rho_{0}^{-1}(y)dy\right)^{\frac{1}{3}}, && 0<x<k.\quad(2.7)
\end{aligned}$$
###### Proposition 2.1 (Global existence on bounded domains).
Suppose the initial data $(u_{0},\;\rho_{0},\;R_{0})$ satisfies
$u_{0}\in L^{2}(0,k),\;r_{0}^{2}\partial_{x}u_{0}\in L^{2}(0,k),\;r_{0}^{2}\partial_{x}\left(\rho_{0}\partial_{x}(r_{0}^{2}u_{0})\right)\in L^{2}(0,k),$
$\int_{0}^{k}H(\rho_{0})dx<+\infty,\;r_{0}^{2}\partial_{x}(\log\rho_{0})\in L^{2}(0,k),\;\inf_{x\in[0,k]}\rho_{0}>0,\;\sup_{x\in[0,k]}\rho_{0}<+\infty,\;0<R_{0}<\infty.$
Then there exists a unique global generalized solution to (2.1)-(2.7).
The proof of Proposition 2.1 consists of the short-time existence of solutions, a priori estimates, and a standard continuity argument. The proof of uniqueness is omitted here since it is the same as in the unbounded case, which is given in Section 3. The next several lemmas in this section are devoted to establishing the a priori estimates. Throughout, $(u,\;\rho,\;R)$ denotes any generalized solution of (2.1)-(2.7) on $[0,T]$. We begin with the following basic energy identity.
###### Lemma 2.2 (Basic energy).
Introduce the notations
$P(R)=\frac{1}{3\gamma_{0}-3}\left(\frac{Ca}{2}+\frac{2}{We}\right)\left(R^{-3\gamma_{0}+3}-1\right)+\frac{1}{We}\left(R^{2}-1\right)+\frac{Ca}{6}\left(R^{3}-1\right),$
and
$E_{0}:=\frac{1}{2}\int_{0}^{k}u_{0}^{2}dx+\frac{Ca}{2}\frac{1}{\gamma-1}\int_{0}^{k}H(\rho_{0})dx+P(R_{0}).$
Then for any $t\in[0,T]$,
$\frac{1}{2}\int_{0}^{k}u^{2}dx+\frac{Ca}{2}\frac{1}{\gamma-1}\int_{0}^{k}H(\rho)dx+P(R)+\mu\int_{0}^{t}\int_{0}^{k}\rho(r^{2}u_{x})^{2}dxd\tau+2\mu\int_{0}^{t}\int_{0}^{k}\rho^{-1}\frac{u^{2}}{r^{2}}dxd\tau=E_{0}.$
(2.8)
###### Proof.
Multiply (2.2) by $u$ to deduce that
$\frac{1}{2}\partial_{t}(u^{2})+\frac{Ca}{2}((\rho^{\gamma}-1)r^{2}u)_{x}-\frac{Ca}{2}(\rho^{\gamma}-1)(r^{2}u)_{x}-\mu\left(\rho(r^{2}u)_{x}r^{2}u\right)_{x}+\mu\rho(r^{2}u)_{x}^{2}=0.$
(2.9)
(2.1) yields that
$(\rho^{\gamma}-1)(r^{2}u)_{x}=-\frac{1}{\gamma-1}\partial_{t}H(\rho)$. From
(2.5), it holds that
$\rho(r^{2}u)_{x}=\rho r^{2}u_{x}+2r^{-1}u,$
$(\rho(r^{2})_{x}r^{2}u^{2})_{x}=(2ru^{2})_{x}=4\frac{u}{r}(r^{2}u_{x})+2\rho^{-1}\frac{u^{2}}{r^{2}}.$
Hence the cross term in $\rho(r^{2}u)_{x}^{2}$ is cancelled by the above
boundary term. Then, using the boundary conditions (2.3) and (2.4), the proof
is completed by integrating (2.9) over $[0,k]\times[0,t]$. ∎
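The identity $\rho(r^{2}u)_{x}=\rho r^{2}u_{x}+2r^{-1}u$ used in the proof follows from differentiating (2.5) in $x$, which gives $r_{x}=(\rho r^{2})^{-1}$. A symbolic check of this step (a sketch, not part of the proof, assuming SymPy is available):

```python
import sympy as sp

# Symbolic check of the identity used above: differentiating (2.5) in x
# gives r_x = 1/(rho*r^2), and hence rho*(r^2*u)_x = rho*r^2*u_x + 2*u/r.
x = sp.symbols('x')
rho = sp.Function('rho')(x)
u = sp.Function('u')(x)
r = sp.Function('r')(x)

lhs = rho * sp.diff(r**2 * u, x)
lhs = lhs.subs(sp.Derivative(r, x), 1 / (rho * r**2))   # r_x from (2.5)
rhs = rho * r**2 * sp.diff(u, x) + 2 * u / r
assert sp.simplify(lhs - rhs) == 0
```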
The most important part of the a-priori estimates is the control of both the
lower and upper bounds of $\rho$. This control is established through the
Bresch-Desjardins entropy estimate stated in Lemma 2.5, which requires the
control of $\|r^{-1}u\|_{L^{\infty}}$ and of a boundary term involving $\rho|_{x=0}$. To
this end, we first state the following two lemmas. In fact, using the
dissipation terms in the basic energy identity and the radial property, a
better $L^{\infty}$ control of $u$ can be proved:
###### Lemma 2.3 ($L^{\infty}$ control of $u$).
For any $t\in[0,T]$,
$\int_{0}^{t}\|ur^{\frac{1}{2}}\|_{L^{\infty}}^{2}d\tau\leq\mu^{-1}E_{0}$.
###### Proof.
A direct computation shows that
$\partial_{x}(u^{2}r)=\rho^{-1}\frac{u^{2}}{r^{2}}+2\left(\rho^{-\frac{1}{2}}\frac{u}{r}\right)\left(\rho^{\frac{1}{2}}r^{2}u_{x}\right)$,
and therefore $|\partial_{x}(u^{2}r)|\leq
2\rho^{-1}\frac{u^{2}}{r^{2}}+\rho(r^{2}u_{x})^{2}$. Hence
$\int_{0}^{t}\|ur^{\frac{1}{2}}\|_{L^{\infty}}^{2}d\tau\leq\int_{0}^{t}\int_{0}^{k}|\partial_{x}(u^{2}r)|dxd\tau\leq\int_{0}^{t}\int_{0}^{k}\rho(r^{2}u_{x})^{2}dxd\tau+2\int_{0}^{t}\int_{0}^{k}\rho^{-1}\frac{u^{2}}{r^{2}}dxd\tau\leq\mu^{-1}E_{0}.$
∎
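The first inequality in the display rests on the elementary fact that an absolutely continuous function vanishing at an endpoint satisfies $\sup|f|\leq\int_{0}^{k}|f^{\prime}|\,dx$, here applied to $f=u^{2}r$. A small numerical sanity check with a synthetic $f$ (an illustration only, not part of the proof):

```python
import numpy as np

# Illustration: an absolutely continuous f with f(0) = f(k) = 0 satisfies
# sup |f| <= \int_0^k |f'| dx.  In Lemma 2.3 this is applied to f = u^2 r,
# whose x-derivative is bounded by the dissipation terms of (2.8).
k, n = 1.0, 200000
x = np.linspace(0.0, k, n + 1)
f = (x * (k - x))**2 * np.sin(8.0 * x)        # synthetic f, vanishes at 0 and k
sup_f = np.max(np.abs(f))
# total variation sum |f(x_{i+1}) - f(x_i)| approximates \int |f'| dx
int_abs_fprime = np.sum(np.abs(np.diff(f)))
assert sup_f <= int_abs_fprime
```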
For the control of $\rho|_{x=0}$, we have the following estimate.
###### Lemma 2.4 (Control of $\rho|_{x=0}$).
There exist constants $0<c(T)<C(T)$ such that $c(T)\leq\rho|_{x=0}\leq C(T)$ for any
$t\in[0,T]$.
###### Proof.
For simplicity, denote $\rho|_{x=0}$ by $\tilde{\rho}$. (2.1) gives that
$\rho
r^{2}u_{x}=\rho(r^{2}u)_{x}-2r^{-1}u=-\rho^{-1}\partial_{t}\rho-2r^{-1}\partial_{t}r,$
and thus
$(\rho
r^{2}u_{x})|_{x=0}=-\partial_{t}\left(\log(\tilde{\rho}R^{2})\right)=\frac{1}{\gamma}(\tilde{\rho}R^{2})^{\gamma}\partial_{t}\left((\tilde{\rho}R^{2})^{-\gamma}\right).$
Divide (2.4) by $\mu(\tilde{\rho}R^{2})^{\gamma}$ to deduce that
$\frac{d}{dt}\left((\tilde{\rho}R^{2})^{-\gamma}\right)+\frac{\gamma}{\mu}\left[\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}\right](\tilde{\rho}R^{2})^{-\gamma}=\frac{Ca}{2}\frac{\gamma}{\mu}R^{-2\gamma}.$
(2.10)
Solving (2.10) as an ODE of $(\tilde{\rho}R^{2})^{-\gamma}$ yields that
$(\tilde{\rho}R^{2})^{-\gamma}(t)=(\tilde{\rho}(0)R_{0}^{2})^{-\gamma}S(t)+\frac{Ca}{2}\frac{\gamma}{\mu}\int_{0}^{t}R^{-2\gamma}(\tau)\frac{S(t)}{S(\tau)}d\tau,$
(2.11)
where
$S(t):=\exp\left\\{-\frac{\gamma}{\mu}\int_{0}^{t}\left[\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}\right]d\tau\right\\}.$
Remark that $P(R)$ is convex for $R\in(0,+\infty)$ and attains its minimum
$0$ at $R=1$. Therefore Lemma 2.2, which gives $P(R)\leq E_{0}$, implies that
there exist $0<c<C$ such that, for any $t\in[0,T]$,
$c\leq R(t)\leq
C,\;r(x,t)=\left(R(t)^{3}+3\int_{0}^{x}\rho^{-1}(y,t)dy\right)^{\frac{1}{3}}\geq
R(t)\geq c.$ (2.12)
Hence the proof is complete by (2.11). ∎
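The Duhamel representation (2.11) can be sanity-checked numerically; the sketch below treats the hypothetical special case where $R(t)$ is frozen at a constant, so (2.10) has constant coefficients, and the sample values of $Ca$, $We$, $\mu$, $\gamma$, $\gamma_{0}$, $R$ are illustrative, not taken from the paper.

```python
import math

# Sanity check of the Duhamel formula (2.11) for the linear ODE (2.10),
# in the constant-coefficient (frozen R) case.  All parameter values below
# are illustrative samples, not values from the paper.
Ca, We, mu, gamma, gamma0, R = 1.0, 2.0, 0.5, 1.4, 1.4, 0.8
a = (gamma / mu) * ((Ca / 2 + 2 / We) * R**(-3 * gamma0) - (2 / We) / R)
b = (Ca / 2) * (gamma / mu) * R**(-2 * gamma)

y0, T, n = 1.0, 1.0, 200000
dt = T / n
y = y0
for _ in range(n):                       # forward Euler for y' = -a*y + b
    y += dt * (-a * y + b)

S = math.exp(-a * T)                     # S(T) from the lemma, constant-R case
duhamel = y0 * S + (b / a) * (1.0 - S)   # closed form of (2.11)
assert abs(y - duhamel) < 1e-3
```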
Using the above two lemmas, we are in a position to state the BD entropy
estimate:
###### Lemma 2.5 (Bresch-Desjardins entropy estimate).
Define for $t\in[0,T]$ that
$E_{1}(t):=\frac{1}{2}\int_{0}^{k}\left(u+\mu
r^{2}(\log\rho)_{x}\right)^{2}dx+\frac{Ca}{2}\frac{1}{\gamma-1}\int_{0}^{k}H(\rho)dx+\frac{Ca}{2}\frac{4\mu}{\gamma}\int_{0}^{t}\int_{0}^{k}\left(r^{2}(\rho^{\frac{\gamma}{2}})_{x}\right)^{2}dxd\tau.$
(2.13)
There exists $C(T)>0$ such that for any $t\in[0,T]$, $E_{1}(t)\leq C(T)$.
###### Proof.
Using (2.1), the viscous term can be rewritten as
$r^{2}(\rho(r^{2}u)_{x})_{x}=-r^{2}(\log\rho)_{xt}=-\partial_{t}\left(r^{2}(\log\rho)_{x}\right)+2(\log\rho)_{x}ru.$
Substituting this into (2.2) shows that $\partial_{t}\left(u+\mu
r^{2}(\log\rho)_{x}\right)+\frac{Ca}{2}(\rho^{\gamma})_{x}r^{2}=2\mu(\log\rho)_{x}ru$.
Then multiplying the resulting equation by $\left(u+\mu
r^{2}(\log\rho)_{x}\right)$ and noting that
$(\rho^{\gamma})_{x}r^{2}u=\left((\rho^{\gamma}-1)r^{2}u\right)_{x}-(\rho^{\gamma}-1)(r^{2}u)_{x}=\left((\rho^{\gamma}-1)r^{2}u\right)_{x}+\frac{1}{\gamma-1}\partial_{t}H(\rho),$
it follows that
$\displaystyle\frac{1}{2}\partial_{t}\left(u+\mu
r^{2}(\log\rho)_{x}\right)^{2}+\frac{Ca}{2}\frac{1}{\gamma-1}\partial_{t}H(\rho)+\frac{Ca}{2}\frac{4\mu}{\gamma}\left(r^{2}(\rho^{\frac{\gamma}{2}})_{x}\right)^{2}$
(2.14) $\displaystyle=$ $\displaystyle
2\mu(\log\rho)_{x}ru(u+\mu(\log\rho)_{x}r^{2})+\frac{Ca}{2}\left((1-\rho^{\gamma})r^{2}u\right)_{x}.$
Integrating (2.14) on $[0,k]$ yields that
$\displaystyle\frac{1}{2}\frac{d}{dt}\int_{0}^{k}\left(u+\mu
r^{2}(\log\rho)_{x}\right)^{2}dx+\frac{Ca}{2}\frac{1}{\gamma-1}\frac{d}{dt}\int_{0}^{k}H(\rho)dx+\frac{Ca}{2}\frac{4\mu}{\gamma}\int_{0}^{k}\left(r^{2}(\rho^{\frac{\gamma}{2}})_{x}\right)^{2}dx$
(2.15) $\displaystyle=$ $\displaystyle
2\mu\int_{0}^{k}(\log\rho)_{x}ru(u+\mu(\log\rho)_{x}r^{2})dx+\frac{Ca}{2}(\tilde{\rho}^{\gamma}-1)R^{2}\frac{dR}{dt}.$
Then controlling the two terms on the right-hand side of (2.15) using Lemma
2.3 and Lemma 2.4, we find
$\displaystyle\frac{Ca}{2}(\tilde{\rho}^{\gamma}-1)R^{2}\frac{dR}{dt}\leq
C(T)\|ur^{\frac{1}{2}}\|_{L^{\infty}},$
and
$\displaystyle\mu\int_{0}^{k}(\log\rho)_{x}ru(u+\mu(\log\rho)_{x}r^{2})dx$
$\displaystyle\leq$ $\displaystyle
2\left\|\frac{u}{r}\right\|_{L^{\infty}}\int_{0}^{k}\mu
r^{2}(\log\rho)_{x}(u+\mu(\log\rho)_{x}r^{2})dx$ $\displaystyle\leq$
$\displaystyle C\|ur^{\frac{1}{2}}\|_{L^{\infty}}\left[\int_{0}^{k}\left(u+\mu
r^{2}(\log\rho)_{x}\right)^{2}dx+\int_{0}^{k}u^{2}dx\right],$
which, together with (2.15), completes the proof by Gronwall’s inequality. ∎
Lemma 2.5 in fact provides a control for the $x$-derivative of $\rho$. Hence
with the help of the radial property and Lemma 2.4, Lemma 2.2 and Lemma 2.5
give the lower and upper bounds of $\rho$:
###### Lemma 2.6 (Lower and upper bound of density).
There exist $\underline{\rho}(T)>0$, $\overline{\rho}(T)>0$ such that for any
$(x,t)\in[0,k]\times[0,T]$,
$\underline{\rho}(T)\leq\rho(x,t)\leq\overline{\rho}(T)$.
###### Proof.
Let $f_{i}(\alpha),\;i=1,2$ denote the two roots of $H(\rho)=\alpha$. Let
$\alpha\geq\frac{1}{k}\left(\frac{Ca}{2}\frac{1}{\gamma-1}\right)^{-1}E_{0}$,
and thus $\alpha\geq\frac{1}{k}\int_{0}^{k}H(\rho)dx$. Since $\forall
t\in[0,T]$,
$\displaystyle k>$ $\displaystyle
m\left\\{x\in(0,k):H(\rho)(x,t)>\alpha\right\\}$ $\displaystyle=$
$\displaystyle
m\left\\{x\in(0,k):\rho(x,t)<f_{1}(\alpha)\right\\}+m\left\\{x\in(0,k):\rho(x,t)>f_{2}(\alpha)\right\\},$
there exists $x_{0}=x_{0}(t)\in[0,k]$ for each $t\in[0,T]$ such that
$f_{1}(\alpha)\leq\rho(x_{0}(t),t)\leq f_{2}(\alpha)$. Then for any
$(x,t)\in[0,k]\times[0,T]$,
$\left|\log\frac{\rho(x,t)}{\rho(x_{0}(t),t)}\right|\leq\int_{0}^{k}|(\log\rho)_{x}|dx\leq\left(\int_{0}^{k}\left(r^{2}(\log\rho)_{x}\right)^{2}dx\right)^{\frac{1}{2}}\left(\int_{0}^{k}r^{-4}dx\right)^{\frac{1}{2}}.$
(2.16)
To control the term $\int_{0}^{k}r^{-4}dx$, use the definition of $r$ (2.5)
and (2.12) to calculate that
$\frac{d}{dt}\int_{0}^{k}r^{-4}dx=-4\int_{0}^{k}r^{-5}udx\leq
C\|ur^{\frac{1}{2}}\|_{L^{\infty}}\int_{0}^{k}r^{-4}dx.$ (2.17)
Applying Gronwall’s inequality to (2.17) with the initial data
$\int_{0}^{k}r_{0}^{-4}dx\leq\int_{0}^{k}\left(R_{0}^{3}+3x\inf_{[0,k]}\rho_{0}^{-1}\right)^{-\frac{4}{3}}dx\leq
R_{0}^{-1}\sup_{[0,k]}\rho_{0}$
shows that $\int_{0}^{k}r^{-4}dx\leq C(T)$. Therefore, in view of
(2.8), (2.13) and (2.16), there exists a positive constant $C(T)$ such that
$\left|\log\frac{\rho(x,t)}{\rho(x_{0}(t),t)}\right|\leq C(T)$, and thus the
proof is complete by the choice of $\rho(x_{0}(t),t)$. ∎
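The Cauchy-Schwarz step (2.16) can be illustrated numerically with synthetic stand-ins for $(\log\rho)_{x}$ and $r^{2}$ (an illustration only; the discrete inequality below holds exactly for Riemann sums with common weights):

```python
import numpy as np

# Numerical illustration of the Cauchy-Schwarz step (2.16):
# \int |f_x| dx <= ( \int (w f_x)^2 dx )^{1/2} ( \int w^{-2} dx )^{1/2},
# with f = log(rho) and w = r^2 in the lemma.  fx and w are synthetic.
k, n = 1.0, 10000
dx = k / n
x = (np.arange(n) + 0.5) * dx                   # midpoint grid on (0, k)
fx = np.sin(7.0 * x) + 0.3 * np.cos(13.0 * x)   # stand-in for (log rho)_x
w = 1.0 + x**2                                  # stand-in for r^2

lhs = np.sum(np.abs(fx)) * dx
rhs = np.sqrt(np.sum((w * fx)**2) * dx) * np.sqrt(np.sum(w**-2) * dx)
assert lhs <= rhs + 1e-12
```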
To complete the a-priori estimates for the generalized solution $(u,\;\rho,\;R)$,
it remains to control the $L^{\infty}_{t}L^{2}_{x}$ norms of the first-order
derivatives of $u$ and $\rho$, together with
$r^{2}\partial_{t}\partial_{x}(\log\rho)$ and the higher order dissipation
$r^{2}\partial_{t}\partial_{x}u$. Noting that
$r^{2}u_{x}=-\rho^{-2}\rho_{t}-2\rho^{-1}\frac{u}{r}$ and
$-\frac{Ca}{2}(\rho^{\gamma})_{x}r^{2}=u_{t}+\mu r^{2}(\log\rho)_{xt}$, it
suffices to establish the energy identity and the BD entropy estimate for
$(u_{t},\;\rho_{t},\;R_{t})$.
###### Lemma 2.7 (Energy estimate for first-order derivatives).
Define for $t\in[0,T]$ that
$\displaystyle E_{2}(t):=$
$\displaystyle\frac{1}{2}\int_{0}^{k}u_{t}^{2}dx+\frac{Ca}{2}\int_{0}^{k}\left[\frac{2\gamma}{(\gamma-1)^{2}}\left(\rho^{\frac{\gamma-1}{2}}\right)_{t}^{2}+\frac{4}{\gamma-1}\rho^{\frac{\gamma-1}{2}}\left(\rho^{\frac{\gamma-1}{2}}\right)_{t}\frac{u}{r}+3\rho^{\gamma-1}\frac{u^{2}}{r^{2}}\right]dx$
$\displaystyle+\frac{\mu}{2}\int_{0}^{t}\int_{0}^{k}\rho(r^{2}u_{tx})^{2}dxd\tau+\mu\int_{0}^{t}\int_{0}^{k}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dxd\tau$
$\displaystyle+\left[\frac{3\gamma_{0}-2}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}+1}+\frac{1}{We}\right]\left(\frac{dR}{dt}\right)^{2}.$
There exists $C(T)>0$ such that $E_{2}(t)\leq C(T)$, $\forall t\in[0,T]$.
###### Proof.
To establish the energy identity for $(u_{t},\;\rho_{t},\;R_{t})$, we
differentiate (2.2) with respect to $t$, multiply the resulting equation by
$u_{t}$, and compute each term.
Step 1. The treatment of $\left((\rho^{\gamma})_{x}r^{2}\right)_{t}u_{t}$.
Exchanging the $x,t$ derivatives and applying integration by parts yield that
$\displaystyle\left((\rho^{\gamma})_{x}r^{2}\right)_{t}u_{t}=\left[(\rho^{\gamma}r^{2})_{t}u_{t}\right]_{x}-(\rho^{\gamma})_{t}(r^{2}u)_{xt}+(\rho^{\gamma})_{t}\left((r^{2})_{t}u\right)_{x}-\rho^{\gamma}\left((r^{2})_{t}u_{t}\right)_{x}.$
Then using (2.1), the second term can be rewritten as
$-(\rho^{\gamma})_{t}(r^{2}u)_{xt}=(\rho^{\gamma})_{t}(\rho^{-2}\rho_{t})_{t}=\frac{2\gamma}{(\gamma-1)^{2}}\partial_{t}\left[(\rho^{\frac{\gamma-1}{2}})_{t}\right]^{2}-\frac{\gamma(\gamma+1)}{2}\rho^{\gamma-4}(\partial_{t}\rho)^{3}.$
Noting that $r_{t}=u$, exchanging the derivatives in the fourth term gives
$-\rho^{\gamma}\left((r^{2})_{t}u_{t}\right)_{x}=-\rho^{\gamma}(r(u^{2})_{t})_{x}=-\left[\rho^{\gamma}(ru^{2})_{x}\right]_{t}+\rho^{\gamma}(u^{3})_{x}+(\rho^{\gamma})_{t}(ru^{2})_{x}.$
Using (2.1) again, the first term on the right-hand side is
$-\left[\rho^{\gamma}(ru^{2})_{x}\right]_{t}=\left[3\rho^{\gamma-1}\frac{u^{2}}{r^{2}}+\frac{4}{\gamma-1}\rho^{\frac{\gamma-1}{2}}(\rho^{\frac{\gamma-1}{2}})_{t}\frac{u}{r}\right]_{t}.$
The remaining nonlinear terms are
$3(\rho^{\gamma})_{t}(ru^{2})_{x}=-3(\rho^{\gamma})_{t}\left(2\rho^{-2}\rho_{t}\frac{u}{r}+3\rho^{-1}\frac{u^{2}}{r^{2}}\right)=\frac{-18\gamma}{\gamma-1}\rho^{\frac{\gamma-1}{2}}(\rho^{\frac{\gamma-1}{2}})_{t}\frac{u^{2}}{r^{2}}-\frac{24\gamma}{(\gamma-1)^{2}}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}\frac{u}{r},$
and
$\rho^{\gamma}(u^{3})_{x}=-\rho^{\gamma}(3\rho^{-2}\rho_{t}\frac{u^{2}}{r^{2}}+6\rho^{-1}\frac{u^{3}}{r^{3}})=-6\rho^{\gamma-1}\frac{u^{3}}{r^{3}}-\frac{6}{\gamma-1}\rho^{\frac{\gamma-1}{2}}(\rho^{\frac{\gamma-1}{2}})_{t}\frac{u^{2}}{r^{2}}.$
Let $J$ collect all the nonlinear terms that have appeared, namely
$J:=-\frac{\gamma(\gamma+1)}{2}\rho^{\gamma-4}(\partial_{t}\rho)^{3}-\frac{24\gamma}{(\gamma-1)^{2}}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}\frac{u}{r}-\frac{6(3\gamma+1)}{\gamma-1}\rho^{\frac{\gamma-1}{2}}(\rho^{\frac{\gamma-1}{2}})_{t}\frac{u^{2}}{r^{2}}-6\rho^{\gamma-1}\frac{u^{3}}{r^{3}}.$
As a result,
$\displaystyle\left((\rho^{\gamma})_{x}r^{2}\right)_{t}u_{t}=\left[(\rho^{\gamma}r^{2})_{t}u_{t}\right]_{x}+\frac{2\gamma}{(\gamma-1)^{2}}\partial_{t}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}+\partial_{t}\left[3\rho^{\gamma-1}\frac{u^{2}}{r^{2}}+\frac{4}{\gamma-1}\rho^{\frac{\gamma-1}{2}}(\rho^{\frac{\gamma-1}{2}})_{t}\frac{u}{r}\right]+J.$
(2.18)
Step 2. The treatment of $\left[(\rho(r^{2}u)_{x})_{x}r^{2}\right]_{t}u_{t}$.
Exchanging $x,t$ derivatives and integrating by parts give
$\left[(\rho(r^{2}u)_{x})_{x}r^{2}\right]_{t}u_{t}=\left[(\rho(r^{2}u)_{x}r^{2})_{t}u_{t}\right]_{x}-\rho(r^{2}u)_{x}((r^{2})_{t}u_{t})_{x}-(\rho(r^{2}u)_{x})_{t}(r^{2}u_{t})_{x}.$
The boundary term can be rewritten as
$\left[(\rho(r^{2}u)_{x}r^{2})_{t}u_{t}\right]_{x}=\left[(\rho
r^{4}u_{x})_{t}u_{t}\right]_{x}+\left[(2ru)_{t}u_{t}\right]_{x}=\left[(\rho
r^{4}u_{x})_{t}u_{t}\right]_{x}+\left(2u^{2}u_{t}\right)_{x}+(2ru_{t}^{2})_{x}.$
The third term, which involves the dissipation, is
$-(\rho(r^{2}u)_{x})_{t}(r^{2}u_{t})_{x}=-\rho(r^{2}u_{t})_{x}^{2}+\rho^{2}(r^{2}u)_{x}^{2}(r^{2}u_{t})_{x}+6\frac{u^{2}}{r^{2}}(r^{2}u_{t})_{x}-4\rho\frac{u}{r}(r^{2}u)_{x}(r^{2}u_{t})_{x}.$
Using $r_{t}=u$, the second term is
$-\rho(r^{2}u)_{x}((r^{2})_{t}u_{t})_{x}=6(r^{2}u)_{x}\frac{u}{r}\frac{u_{t}}{r}-2\rho(r^{2}u)_{x}^{2}\frac{u_{t}}{r}-2\rho(r^{2}u)_{x}\frac{u}{r}(r^{2}u_{t})_{x}.$
Let $K$ collect all the nonlinear terms, namely
$K:=-6\rho\frac{u}{r}(r^{2}u)_{x}(r^{2}u_{t})_{x}+\rho^{2}(r^{2}u)_{x}^{2}(r^{2}u_{t})_{x}-2\rho(r^{2}u)_{x}^{2}\frac{u_{t}}{r}+6\frac{u^{2}}{r^{2}}(r^{2}u_{t})_{x}+6(r^{2}u)_{x}\frac{u}{r}\frac{u_{t}}{r}.$
Note that the cross term in $-\rho(r^{2}u_{t})_{x}^{2}$ is cancelled by
$(2ru_{t}^{2})_{x}$. Hence we find
$\displaystyle\left[(\rho(r^{2}u)_{x})_{x}r^{2}\right]_{t}u_{t}=\left[(\rho
r^{4}u_{x})_{t}u_{t}\right]_{x}+\left(2u^{2}u_{t}\right)_{x}-\rho(r^{2}u_{xt})^{2}-2\rho^{-1}\frac{u_{t}^{2}}{r^{2}}+K.$
(2.19)
Step 3. The boundary term
$\left.\left[\frac{Ca}{2}(\rho^{\gamma}r^{2})_{t}u_{t}-\mu(\rho
r^{4}u_{x})_{t}u_{t}-2\mu u^{2}u_{t}\right]\right|_{x=0}$.
Differentiating the boundary condition (2.4) with respect to $t$ gives
$\left.\left(\frac{Ca}{2}(\rho^{\gamma}r^{2})_{t}-\mu\rho(r^{4}u_{x})_{t}\right)\right|_{x=0}=\left[\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}+2}-\frac{2}{We}R\right]_{t}.$
(2.20)
Multiplying (2.20) by $u_{t}|_{x=0}=\frac{d^{2}R}{dt^{2}}$, we obtain
$\displaystyle\left.\left[\frac{Ca}{2}(\rho^{\gamma}r^{2})_{t}u_{t}-\mu(\rho
r^{4}u_{x})_{t}u_{t}\right]\right|_{x=0}$ (2.21) $\displaystyle=$
$\displaystyle\left[\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}+2}-\frac{2}{We}R\right]_{t}\frac{d^{2}R}{dt^{2}}$
$\displaystyle=$
$\displaystyle-\frac{3\gamma_{0}-2}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}+1}\frac{d}{dt}\left(\frac{dR}{dt}\right)^{2}-\frac{1}{We}\frac{d}{dt}\left(\frac{dR}{dt}\right)^{2}$
$\displaystyle=$
$\displaystyle-\frac{3\gamma_{0}-2}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)\frac{d}{dt}\left[R^{-3\gamma_{0}+1}\left(\frac{dR}{dt}\right)^{2}\right]-\frac{1}{We}\frac{d}{dt}\left(\frac{dR}{dt}\right)^{2}$
$\displaystyle-\frac{(3\gamma_{0}-2)(3\gamma_{0}-1)}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}\left(\frac{dR}{dt}\right)^{3}.$
Let $L$ collect the nonlinear terms, namely
$L=-\frac{(3\gamma_{0}-2)(3\gamma_{0}-1)}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}\left(\frac{dR}{dt}\right)^{3}-2\mu(u^{2}u_{t})|_{x=0}.$
Adding (2.18) and (2.19) with coefficients $\frac{Ca}{2}$ and $\mu$ respectively,
and integrating over $[0,k]$, one concludes with the help of (2.21) that
$\displaystyle\frac{1}{2}\frac{d}{dt}\int_{0}^{k}u_{t}^{2}dx+\frac{Ca}{2}\frac{d}{dt}\int_{0}^{k}\left[\frac{2\gamma}{(\gamma-1)^{2}}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}+\frac{4}{\gamma-1}\rho^{\frac{\gamma-1}{2}}(\rho^{\frac{\gamma-1}{2}})_{t}\frac{u}{r}+3\rho^{\gamma-1}\frac{u^{2}}{r^{2}}\right]dx$
(2.22)
$\displaystyle+\frac{d}{dt}\left[\frac{3\gamma_{0}-2}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}+1}\left(\frac{dR}{dt}\right)^{2}+\frac{1}{We}\left(\frac{dR}{dt}\right)^{2}\right]$
$\displaystyle+\mu\int_{0}^{k}\rho(r^{2}u_{tx})^{2}dx+2\mu\int_{0}^{k}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx$
$\displaystyle=$
$\displaystyle-\frac{Ca}{2}\int_{0}^{k}Jdx+\mu\int_{0}^{k}Kdx+L.$
Step 4. Control of the nonlinear terms.
First note that by (2.2),
$\displaystyle\int_{0}^{x}\frac{u_{t}}{r^{2}}dy=$
$\displaystyle\left(\mu\rho(r^{2}u)_{x}-\frac{Ca}{2}\rho^{\gamma}\right)(x,t)-\left(\mu\rho(r^{2}u)_{x}-\frac{Ca}{2}\rho^{\gamma}\right)(0,t)$
(2.23) $\displaystyle=$
$\displaystyle\left(\mu\rho(r^{2}u)_{x}-\frac{Ca}{2}\rho^{\gamma}\right)(x,t)+\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}-2\mu
R^{-1}\frac{dR}{dt}.$
In view of equation (2.1), $\rho(r^{2}u)_{x}=-\rho^{-1}\partial_{t}\rho$.
Hence replacing $\rho(r^{2}u)_{x}$ by $-\rho^{-1}\partial_{t}\rho$ in (2.23)
yields that
$\displaystyle\|\mu\rho^{-1}\rho_{t}\|_{L^{\infty}}\leq$
$\displaystyle\left\|\frac{Ca}{2}\rho^{\gamma}-\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}+\frac{2}{We}R^{-1}\right\|_{L^{\infty}}+2\mu
R^{-1}\left|\frac{dR}{dt}\right|+\int_{0}^{k}\left|\frac{u_{t}}{r^{2}}\right|dx$
(2.24) $\displaystyle\leq$ $\displaystyle C(T)+2\mu
R^{-\frac{3}{2}}\|ur^{\frac{1}{2}}\|_{L^{\infty}}+\left(\int_{0}^{k}u_{t}^{2}dx\right)^{\frac{1}{2}}\left(\int_{0}^{k}r^{-4}dx\right)^{\frac{1}{2}}$
$\displaystyle\leq$ $\displaystyle
C(T)\left(1+\left(\int_{0}^{k}u_{t}^{2}dx\right)^{\frac{1}{2}}+\|ur^{\frac{1}{2}}\|_{L^{\infty}}\right),$
where we used the boundedness of $\rho$ and $R$ from Lemma 2.6 and (2.12).
Using (2.24), the four terms in $J$ can be controlled as follows.
$\displaystyle\left|\int_{0}^{k}\rho^{\gamma-4}\rho_{t}^{3}dx\right|\leq$
$\displaystyle
C\|\rho^{-1}\rho_{t}\|_{L^{\infty}}\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx$
$\displaystyle\leq$ $\displaystyle
C(T)\left(1+\left(\int_{0}^{k}u_{t}^{2}dx\right)^{\frac{1}{2}}+\|ur^{\frac{1}{2}}\|_{L^{\infty}}\right)\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx.$
$\left|\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}\frac{u}{r}dx\right|\leq\left\|\frac{u}{r}\right\|_{L^{\infty}}\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx\leq
C\|ur^{\frac{1}{2}}\|_{L^{\infty}}\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx.$
$\left|\int_{0}^{k}\rho^{\frac{\gamma-1}{2}}(\rho^{\frac{\gamma-1}{2}})_{t}\frac{u^{2}}{r^{2}}dx\right|\leq
C\|ur^{\frac{1}{2}}\|_{L^{\infty}}\left(\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx+\int_{0}^{k}\rho^{\gamma-1}\frac{u^{2}}{r^{2}}dx\right).$
$\left|\int_{0}^{k}\rho^{\gamma-1}\frac{u^{3}}{r^{3}}dx\right|\leq
C\|ur^{\frac{1}{2}}\|_{L^{\infty}}\int_{0}^{k}\rho^{\gamma-1}\frac{u^{2}}{r^{2}}dx.$
Adding the above inequalities up gives the control of $\int Jdx$:
$\left|\int_{0}^{k}Jdx\right|\leq
C(T)\left(1+\left(\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx\right)^{\frac{1}{2}}+\|ur^{\frac{1}{2}}\|_{L^{\infty}}\right)\left(\int_{0}^{k}u_{t}^{2}dx+\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx+\int_{0}^{k}\rho^{\gamma-1}\frac{u^{2}}{r^{2}}dx\right).$
(2.25)
Let $\epsilon>0$ be a small constant. Using Lemma 2.6, inequality (2.24) and
equation (2.1), the terms in $K$ can be controlled as follows.
$\displaystyle\int_{0}^{k}\rho\frac{u}{r}(r^{2}u)_{x}(r^{2}u_{t})_{x}dx\leq$
$\displaystyle
C\left\|\frac{u}{r}\right\|_{L^{\infty}}\left(\int_{0}^{k}\rho(r^{2}u_{t})_{x}^{2}dx\right)^{\frac{1}{2}}\left(\int_{0}^{k}\rho(r^{2}u)_{x}^{2}dx\right)^{\frac{1}{2}}$
$\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{k}\rho(r^{2}u_{t})_{x}^{2}dx+C_{\epsilon}(T)\|ur^{\frac{1}{2}}\|_{L^{\infty}}^{2}\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx.$
$\displaystyle\int_{0}^{k}\rho^{2}(r^{2}u)_{x}^{2}(r^{2}u_{t})_{x}dx\leq$
$\displaystyle\left(\int_{0}^{k}\rho(r^{2}u_{t})_{x}^{2}dx\right)^{\frac{1}{2}}\left(\int_{0}^{k}\rho^{3}(r^{2}u)_{x}^{4}dx\right)^{\frac{1}{2}}$
$\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{k}\rho(r^{2}u_{t})_{x}^{2}dx+C_{\epsilon}\int_{0}^{k}\rho^{3}(r^{2}u)_{x}^{4}dx$
$\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{k}\rho(r^{2}u_{t})_{x}^{2}dx+C_{\epsilon}(T)\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx\left(1+\int_{0}^{k}u_{t}^{2}dx+\|ur^{\frac{1}{2}}\|_{L^{\infty}}^{2}\right).$
$\int_{0}^{k}\rho(r^{2}u)_{x}^{2}\frac{u_{t}}{r}dx\leq\epsilon\int_{0}^{k}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx+C_{\epsilon}(T)\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx\left(1+\int_{0}^{k}u_{t}^{2}dx+\|ur^{\frac{1}{2}}\|_{L^{\infty}}^{2}\right).$
$\int_{0}^{k}(r^{2}u)_{x}\frac{u}{r}\frac{u_{t}}{r}dx\leq\epsilon\int_{0}^{k}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx+C_{\epsilon}(T)\|ur^{\frac{1}{2}}\|_{L^{\infty}}^{2}\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx.$
$\int_{0}^{k}(r^{2}u_{t})_{x}\frac{u^{2}}{r^{2}}dx\leq\epsilon\int_{0}^{k}\rho(r^{2}u_{t})_{x}^{2}dx+C_{\epsilon}(T)\|ur^{\frac{1}{2}}\|_{L^{\infty}}^{2}\int_{0}^{k}\rho^{\gamma-1}\frac{u^{2}}{r^{2}}dx.$
Adding the above estimates up and noting that
$\int_{0}^{k}\rho(r^{2}u_{t})_{x}^{2}dx\leq
C\int_{0}^{k}\rho(r^{2}u_{xt})^{2}dx+C\int_{0}^{k}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx,$
one gets the control of $\int Kdx$:
$\displaystyle\left|\int_{0}^{k}Kdx\right|$ (2.26) $\displaystyle\leq$
$\displaystyle
C_{\epsilon}(T)\left(1+\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx+\|ur^{\frac{1}{2}}\|_{L^{\infty}}^{2}\right)\left(\int_{0}^{k}u_{t}^{2}dx+\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx+\int_{0}^{k}\rho^{\gamma-1}\frac{u^{2}}{r^{2}}dx\right)$
$\displaystyle+\epsilon\int_{0}^{k}\rho(r^{2}u_{xt})^{2}dx+\epsilon\int_{0}^{k}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx.$
At last, to estimate $L$, we use
$\partial_{x}(u_{t}^{2}r)=2(\rho^{-\frac{1}{2}}\frac{u_{t}}{r})(\rho^{\frac{1}{2}}r^{2}u_{xt})+\rho^{-1}\frac{u_{t}^{2}}{r^{2}}$
to obtain
$\left|R^{-3\gamma_{0}}(\frac{dR}{dt})^{3}\right|\leq
C\|ur^{\frac{1}{2}}\|_{L^{\infty}}\left(\frac{dR}{dt}\right)^{2},$
and
$|(u^{2}u_{t})|_{x=0}|\leq\epsilon\|u_{t}r^{\frac{1}{2}}\|^{2}_{L^{\infty}}+C_{\epsilon}(u|_{x=0})^{4}\leq\epsilon\int_{0}^{k}\rho(r^{2}u_{tx})^{2}dx+2\epsilon\int_{0}^{k}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx+C_{\epsilon}\|ur^{\frac{1}{2}}\|_{L^{\infty}}^{2}\left(\frac{dR}{dt}\right)^{2}.$
Now use the above two inequalities and (2.25), (2.26) in (2.22), and choose
$\epsilon$ small enough to deduce that
$\frac{dE_{2}}{dt}\leq
C(T)\left(1+\|ur^{\frac{1}{2}}\|_{L^{\infty}}^{2}+\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx\right)E_{2}.$
(2.27)
Note that
$(\rho^{\frac{\gamma-1}{2}})_{t}^{2}=C\rho^{\gamma-3}\rho_{t}^{2}=C\rho^{\gamma+1}(r^{2}u)_{x}^{2}\leq
C(T)\left(\rho(r^{2}u_{x})^{2}+2\rho^{-1}\frac{u^{2}}{r^{2}}\right),$
which is integrable in time by Lemma 2.2. Hence, applying Gronwall’s
inequality to (2.27) in view of Lemma 2.2 and Lemma 2.3, it follows that
$E_{2}(t)\leq C(T)$, $\forall t\in[0,T].$ ∎
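The Gronwall step that closes (2.27) can be illustrated with a short numeric sketch (synthetic data; the factor $g$ below is a stand-in for the integrable coefficient in (2.27)):

```python
import math

# Illustration of the Gronwall step used to close (2.27):
# if E'(t) <= g(t) E(t) with g integrable in time, then
# E(t) <= E(0) * exp(\int_0^t g(s) ds).
T, n = 1.0, 100000
dt = T / n
E = 1.0                                  # E(0)
G = 0.0                                  # running value of \int_0^t g
for i in range(n):
    g = 1.0 + math.sin(5.0 * i * dt)**2  # synthetic integrable factor
    E *= (1.0 + dt * g)                  # extremal case E' = g E (Euler product)
    G += dt * g
# discrete Gronwall: prod(1 + dt*g) <= exp(sum dt*g), since 1 + x <= e^x
assert E <= math.exp(G)
```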
###### Lemma 2.8 (Bresch-Desjardins entropy estimate for first-order derivatives).
Define for $t\in[0,T]$ that
$\displaystyle E_{3}(t):=$
$\displaystyle\frac{1}{2}\int_{0}^{k}\left(u_{t}+\mu
r^{2}(\log\rho)_{xt}\right)^{2}dx+\frac{Ca}{4}\mu\gamma\int_{0}^{t}\int_{0}^{k}\rho^{\gamma}(\log\rho)_{xt}^{2}dxd\tau$
$\displaystyle+\frac{Ca}{2}\int_{0}^{k}\left[\frac{2\gamma}{(\gamma-1)^{2}}\left(\rho^{\frac{\gamma-1}{2}}\right)_{t}^{2}+\frac{4}{\gamma-1}\rho^{\frac{\gamma-1}{2}}\left(\rho^{\frac{\gamma-1}{2}}\right)_{t}\frac{u}{r}+3\rho^{\gamma-1}\frac{u^{2}}{r^{2}}\right]dx.$
There exists $C(T)>0$ such that $E_{3}(t)\leq C(T),\;\forall t\in[0,T]$.
###### Proof.
By differentiating (2.2) with respect to $t$ and using
$(\rho(r^{2}u)_{x})_{x}=-(\log\rho)_{xt}$, it holds that
$\partial_{t}\left(u_{t}+\mu
r^{2}(\log\rho)_{xt}\right)+\frac{Ca}{2}\partial_{t}\left((\rho^{\gamma})_{x}r^{2}\right)=0.$
(2.28)
Multiply (2.28) by $(u_{t}+\mu r^{2}(\log\rho)_{xt})$ to deduce that
$\frac{1}{2}\partial_{t}\left(u_{t}+\mu
r^{2}(\log\rho)_{xt}\right)^{2}+\frac{Ca}{2}\left((\rho^{\gamma})_{x}r^{2}\right)_{t}u_{t}+\frac{Ca}{2}\mu\left((\rho^{\gamma})_{x}r^{2}\right)_{t}(\log\rho)_{xt}r^{2}=0.$
(2.29)
Treat the second term in the same way as in Step 1 of the proof of Lemma 2.7;
then (2.29) yields that
$\displaystyle\frac{1}{2}\partial_{t}\left(u_{t}+\mu
r^{2}(\log\rho)_{xt}\right)^{2}+\frac{Ca}{2}\partial_{t}\left[\frac{2\gamma}{(\gamma-1)^{2}}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}+3\rho^{\gamma-1}\frac{u^{2}}{r^{2}}+\frac{4}{\gamma-1}\rho^{\frac{\gamma-1}{2}}(\rho^{\frac{\gamma-1}{2}})_{t}\frac{u}{r}\right]$
(2.30)
$\displaystyle+\frac{Ca}{2}\left[(\rho^{\gamma}r^{2})_{t}u_{t}\right]_{x}+\frac{Ca}{2}\mu\gamma(\rho^{\gamma})_{x}(\log\rho)_{t}(\log\rho)_{xt}r^{4}+2\frac{Ca}{2}\mu(\rho^{\gamma})_{x}(\log\rho)_{xt}r^{3}u$
$\displaystyle+\frac{Ca}{2}\mu\gamma\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}+\frac{Ca}{2}J=0.$
Estimate $\left|\int_{0}^{k}Jdx\right|$ in the same way as in Step 4 in the
proof of Lemma 2.7, and note that Lemma 2.7 already shows that
$\int_{0}^{k}u_{t}^{2}dx\leq C(T)$. Then one gets the control of the $J$
terms:
$\left|\int_{0}^{k}Jdx\right|\leq
C(T)(1+\|ur^{\frac{1}{2}}\|_{L^{\infty}})\left(\int_{0}^{k}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx+\int_{0}^{k}\rho^{\gamma-1}\frac{u^{2}}{r^{2}}dx\right).$
(2.31)
Note that $-\frac{Ca}{2}(\rho^{\gamma})_{x}r^{2}=\partial_{t}u+\mu
r^{2}(\log\rho)_{xt}$. Using (2.24), the remaining two nonlinear terms are
estimated as follows:
$\displaystyle\left|\int_{0}^{k}(\rho^{\gamma})_{x}(\log\rho)_{t}(\log\rho)_{xt}r^{4}dx\right|$
(2.32) $\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{k}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}\int_{0}^{k}\rho^{-\gamma}(\rho^{\gamma})_{x}^{2}(\log\rho)_{t}^{2}r^{4}dx$
$\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{k}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}(T)(1+\|ur^{\frac{1}{2}}\|_{L^{\infty}}^{2})\int_{0}^{k}(u_{t}+\mu
r^{2}(\log\rho)_{xt})^{2}dx,$
$\displaystyle\left|\int_{0}^{k}(\rho^{\gamma})_{x}(\log\rho)_{xt}r^{3}udx\right|$
(2.33) $\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{k}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}\int_{0}^{k}\rho^{-\gamma}(\rho^{\gamma})_{x}^{2}r^{4}\frac{u^{2}}{r^{2}}dx$
$\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{k}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}(T)\|ur^{\frac{1}{2}}\|_{L^{\infty}}^{2}\int_{0}^{k}(u_{t}+\mu
r^{2}(\log\rho)_{xt})^{2}dx.$
To control the boundary term
$\left[(\rho^{\gamma}r^{2})_{t}u_{t}\right]|_{x=0}$, first by Lemma 2.4 and
(2.10), there exists $C(T)>0$ such that
$\left|\frac{d}{dt}(\tilde{\rho}R^{2})^{\gamma}\right|\leq C(T)$. Then using
Lemma 2.7, it follows that
$|(\rho^{\gamma}r^{2})_{t}|_{x=0}|=\left|\frac{d}{dt}(\tilde{\rho}R^{2})^{\gamma}R^{2-2\gamma}+(\tilde{\rho}R^{2})^{\gamma}\frac{d}{dt}R^{2-2\gamma}\right|\leq
C(T).$ (2.34)
Again by Lemma 2.7,
$\int_{0}^{t}|u_{t}|_{x=0}|^{2}d\tau\leq C\int_{0}^{t}\|u_{t}r^{\frac{1}{2}}\|_{L^{\infty}}^{2}d\tau\leq C\int_{0}^{t}\int_{0}^{k}\rho(r^{2}u_{tx})^{2}dxd\tau+2C\int_{0}^{t}\int_{0}^{k}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dxd\tau\leq
C(T).$ (2.35)
Finally, by choosing $\epsilon$ small enough, integrating (2.30) over $[0,k]$,
and using Gronwall’s inequality with the help of (2.31)-(2.35), the proof is
complete. ∎
Proof of Proposition 2.1. Proposition 2.1 is proved through short-time
existence, the a-priori estimates in Lemma 2.2-2.8, and a continuity argument.
The short-time existence under the assumptions of Proposition 2.1 can be shown
by using energy estimates and a Galerkin approximation as in [1, Chapter 2];
see also [6]. The equations
$r^{2}\partial_{x}(\log\rho)=(\gamma\rho^{\gamma})^{-1}r^{2}\partial_{x}(\rho^{\gamma})=-\left(\frac{Ca}{2}\gamma\rho^{\gamma}\right)^{-1}(u_{t}+\mu
r^{2}(\log\rho)_{xt})$
and $r^{2}\partial_{x}u=\partial_{x}(r^{2}u)-\frac{2u}{\rho
r}=-\rho^{-2}\partial_{t}\rho-\frac{2u}{\rho r}$ show that
$\|r^{2}\partial_{x}(\log\rho)\|_{L^{2}}$ and $\|r^{2}\partial_{x}u\|_{L^{2}}$
are controlled by the bounds given by Lemma 2.7 and Lemma 2.8 with
coefficients depending on $\sup_{x\in[0,k]}\rho$ and
$\left(\inf_{x\in[0,k]}\rho\right)^{-1}$, which are also bounded by Lemma 2.6.
Therefore the global existence of the generalized solution to (2.1-2.7) follows
from a standard continuity argument together with the a-priori estimates in
Lemma 2.2-2.8.
## 3 Construction of the global solution and uniqueness
This section is devoted to the construction and the uniqueness of the global
generalized solution. The construction follows the same framework as in [5]:
solutions on bounded domains are regarded as approximate solutions, and a
compactness argument is then applied to obtain the desired solution of the
original problem on the unbounded exterior domain. Since the problem
considered in this paper involves an additional free boundary compared with
[5], for the sake of rigor, we give in this section an explicit description
of the construction.
### 3.1 Construction of the approximate solutions
Let $\phi$ be a smooth cutoff function on $\mathbb{R}^{+}$ such that
$0\leq\phi\leq 1$, $\phi(z)=1$ for $z\in[0,\frac{1}{2}]$, $\phi(z)=0$ for
$z\geq 1$, $\left|\frac{d^{i}\phi}{dz^{i}}\right|\leq C$ for $i=1,2,3$ and
$z\in\mathbb{R}^{+}$. Define $\phi_{k}(z)=\phi(\frac{z}{k})$ for
$k\in\mathbb{N}$. Now for the initial value $(u_{0},\;\rho_{0},\;R_{0})$
satisfying the assumptions in Theorem 1.2, define for $k\in\mathbb{N}$ that
$u_{k,0}:=u_{0}\phi_{k},\;\rho_{k,0}^{-1}:=1+(\rho_{0}^{-1}-1)\phi_{k},\;R_{k,0}:=R_{0}.$
(3.1)
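The cutoff $\phi$ above is only required to exist with the stated properties; one concrete choice (an illustration, not the paper's construction) is the standard bump-function quotient sketched below, with $\phi_{k}(z)=\phi(z/k)$ as in (3.1):

```python
import numpy as np

# One concrete smooth cutoff with the listed properties: phi = 1 on [0, 1/2],
# phi = 0 for z >= 1, 0 <= phi <= 1, and all derivatives bounded.  Built from
# the C^infinity bump g(s) = exp(-1/s) for s > 0, g(s) = 0 otherwise.
def g(s):
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    pos = s > 0
    out[pos] = np.exp(-1.0 / s[pos])
    return out

def phi(z):
    num = g(1.0 - z)                                 # vanishes exactly for z >= 1
    den = num + g(np.asarray(z, dtype=float) - 0.5)  # second term vanishes for z <= 1/2
    return num / den                                 # den > 0 everywhere

z = np.linspace(0.0, 2.0, 2001)
vals = phi(z)
assert np.allclose(vals[z <= 0.5], 1.0)   # phi = 1 on [0, 1/2]
assert np.allclose(vals[z >= 1.0], 0.0)   # phi = 0 for z >= 1
assert np.all((0.0 <= vals) & (vals <= 1.0))
```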
Similarly, define
$r_{k,0}(x)=\left(R_{k,0}^{3}+3\int_{0}^{x}\rho_{k,0}^{-1}(y)dy\right)^{\frac{1}{3}}.$
Noting that $\rho_{0}$ is bounded from both above and below, it is easy to
check that
$\displaystyle\left(u_{k,0},\;\rho_{k,0},\;r_{k,0}^{2}(u_{k,0})_{x},\;r^{2}_{k,0}(\rho_{k,0})_{x},\;r_{k,0}^{2}(\rho_{k,0}(r^{2}_{k,0}u_{k,0})_{x})_{x}\right)$
(3.2) $\displaystyle\rightarrow$
$\displaystyle\left(u_{0},\;\rho_{0},\;r_{0}^{2}(u_{0})_{x},\;r_{0}^{2}(\rho_{0})_{x},\;r_{0}^{2}(\rho_{0}(r_{0}^{2}u_{0})_{x})_{x}\right)\text{
in }L^{2},\text{ as }k\rightarrow+\infty.$
Now let $(u_{k},\;\rho_{k},\;R_{k})$ be given by Proposition 2.1 with initial
data $(u_{k,0},\;\rho_{k,0},\;R_{k,0})$. Define $r_{k}$ as in (2.5) with
$(u,\;\rho,\;R)$ replaced by $(u_{k},\;\rho_{k},\;R_{k})$. Then the estimates
established by Lemma 2.2-2.8 hold for each $(u_{k},\;\rho_{k},\;R_{k})$ with
the initial data $(u_{k,0},\;\rho_{k,0},\;R_{k,0})$. Now define
$\tilde{u}_{k}=u_{k}\phi_{k}$ and
$\tilde{\rho}_{k}^{-1}=1+(\rho_{k}^{-1}-1)\phi_{k}$ for $k\in\mathbb{N}.$ Let
$T>0$ be arbitrary. Then by definition, for any $N>0$ and $k>2N$,
$(\tilde{u}_{k},\;\tilde{\rho}_{k})=(u_{k},\;\rho_{k}),\;\forall(x,t)\in[0,N]\times[0,T].$
(3.3)
Denote $Q_{T}:=[0,+\infty)\times[0,T]$, $Q_{k,T}:=[0,k]\times[0,T]$, and
abbreviate $\|\cdot\|_{L^{p}_{t}L^{q}_{x}(Q_{T})}$ as
$\|\cdot\|_{L^{p}_{t}L^{q}_{x}}$. Remark again that the estimates in Lemma
2.2-2.8 do not depend on $k$ but only on the norms of the initial data,
which are, by (3.1) and (3.2), uniformly bounded in $k$. We then check, by
applying Lemma 2.2-2.8 to $(u_{k},\;\rho_{k},\;R_{k})$ and using (3.2), that
$\|\tilde{u}_{k}\|_{L^{\infty}_{t}L^{2}_{x}}=\|u_{k}\phi_{k}\|_{L^{\infty}_{t}L^{2}_{x}}\leq\|u_{k}\|_{L^{\infty}_{t}L^{2}_{x}(Q_{k,T})}\leq
C,$
$\|\partial_{t}\tilde{u}_{k}\|_{L^{\infty}_{t}L^{2}_{x}}=\|\partial_{t}u_{k}\phi_{k}\|_{L^{\infty}_{t}L^{2}_{x}}\leq\|\partial_{t}u_{k}\|_{L^{\infty}_{t}L^{2}_{x}(Q_{k,T})}\leq
C(T),$ (3.4) $\inf_{(x,t)\in Q_{T}}\tilde{\rho}_{k}\geq\inf_{(x,t)\in
Q_{k,T}}\rho_{k}\geq c(T),\;\sup_{(x,t)\in
Q_{T}}\tilde{\rho}_{k}\leq\sup_{(x,t)\in Q_{k,T}}\rho_{k}\leq C(T),$ (3.5)
$\int_{0}^{\infty}H(\tilde{\rho}_{k})dx\leq
C(T)\int_{0}^{\infty}(\tilde{\rho}_{k}^{-1}-1)^{2}dx\leq
C(T)\int_{0}^{k}(\rho_{k}^{-1}-1)^{2}dx\leq C(T)\int_{0}^{k}H(\rho_{k})dx\leq
C(T),$ (3.6)
$\|\partial_{t}\tilde{\rho}_{k}^{-1}\|_{L^{\infty}_{t}L^{2}_{x}}=\|\partial_{t}\rho_{k}^{-1}\phi_{k}\|_{L^{\infty}_{t}L^{2}_{x}}\leq\|\partial_{t}\rho_{k}^{-1}\|_{L^{\infty}_{t}L^{2}_{x}(Q_{k,T})}\leq
C(T).$
To bound the norms involving $x$-derivatives, note first that, by the
definition (2.5) of $r_{k}$,
$c(T)(1+3x)\leq r_{k}^{3}\leq C(T)(1+3x).$ (3.7)
We then control the norm of $x$-derivative of $\tilde{u}_{k}$ by
$\displaystyle\|(1+3x)^{\frac{2}{3}}(\tilde{u}_{k})_{x}\|_{L^{\infty}_{t}L^{2}_{x}}\leq$
$\displaystyle\|(1+3x)^{\frac{2}{3}}(u_{k})_{x}\phi_{k}\|_{L^{\infty}_{t}L^{2}_{x}}+\|(1+3x)^{\frac{2}{3}}(\phi_{k})_{x}u_{k}\|_{L^{\infty}_{t}L^{2}_{x}}$
(3.8) $\displaystyle\leq$ $\displaystyle
C(T)\|r_{k}^{2}(u_{k})_{x}\|_{L^{\infty}_{t}L^{2}_{x}(Q_{k,T})}+\|(1+3x)^{\frac{2}{3}}(\phi_{k})_{x}u_{k}\|_{L^{\infty}_{t}L^{2}_{x}}$
$\displaystyle\leq$ $\displaystyle C(T),$
where in the last step we use the inequality
$\left|(1+3x)^{\frac{2}{3}}(\phi_{k})_{x}\right|\leq
C\chi_{\left\\{2^{-1}k\leq x\leq k\right\\}}k^{-1}(1+3x)^{\frac{2}{3}}\leq
C(1+3x)^{-\frac{1}{3}}.$
Similarly, the $x$-derivative of $\tilde{\rho}_{k}^{-1}$ is controlled by
$\displaystyle\|(1+3x)^{\frac{2}{3}}(\tilde{\rho}_{k}^{-1})_{x}\|_{L^{\infty}_{t}L^{2}_{x}}\leq$
$\displaystyle\|(1+3x)^{\frac{2}{3}}(\rho_{k}^{-1})_{x}\phi_{k}\|_{L^{\infty}_{t}L^{2}_{x}}+\|(1+3x)^{\frac{2}{3}}(\phi_{k})_{x}(\rho_{k}^{-1}-1)\|_{L^{\infty}_{t}L^{2}_{x}}$
(3.9) $\displaystyle\leq$ $\displaystyle
C(T)\|r_{k}^{2}(\rho_{k}^{-1})_{x}\|_{L^{\infty}_{t}L^{2}_{x}(Q_{k,T})}+\|(1+3x)^{\frac{2}{3}}(\phi_{k})_{x}(\rho_{k}^{-1}-1)\|_{L^{\infty}_{t}L^{2}_{x}}$
$\displaystyle\leq$ $\displaystyle C(T).$
The mixed derivative of $\tilde{u}_{k}$ can be bounded easily using (3.7):
$\displaystyle\|(1+3x)^{\frac{2}{3}}(\tilde{u}_{k})_{xt}\|_{L^{2}_{t}L^{2}_{x}}\leq$
$\displaystyle\|(1+3x)^{\frac{2}{3}}(u_{k})_{xt}\phi_{k}\|_{L^{2}_{t}L^{2}_{x}}+\|(1+3x)^{\frac{2}{3}}(\phi_{k})_{x}(u_{k})_{t}\|_{L^{2}_{t}L^{2}_{x}}$
(3.10) $\displaystyle\leq$ $\displaystyle
C(T)\|r_{k}^{2}(u_{k})_{xt}\|_{L^{2}_{t}L^{2}_{x}(Q_{k,T})}+C(T)\|\frac{(u_{k})_{t}}{r_{k}}\|_{L^{2}_{t}L^{2}_{x}(Q_{k,T})}$
$\displaystyle\leq$ $\displaystyle C(T).$
To control the mixed derivative of $\rho_{k}$, first note that (2.24) and the
inequality
$\displaystyle\|u_{k}^{2}r_{k}\|_{L^{\infty}_{x}(0,k)}\leq$
$\displaystyle\int_{0}^{k}\rho_{k}(r_{k}^{2}(u_{k})_{x})^{2}dx+2\int_{0}^{k}\rho_{k}^{-1}\frac{u_{k}^{2}}{r_{k}^{2}}dx$
$\displaystyle=$
$\displaystyle\int_{0}^{k}\rho_{k}(r_{k}^{2}u_{k})_{x}^{2}dx+2(r_{k}u_{k}^{2})|_{x=0}$
$\displaystyle\leq$ $\displaystyle
C(T)\left(\|\partial_{t}\rho_{k}^{-1}\|_{L^{2}_{x}(0,k)}^{2}+\left(\frac{dR_{k}}{dt}\right)^{2}\right)$
imply that $\|(\log\rho_{k})_{t}\|_{L^{\infty}_{t}L^{\infty}_{x}(Q_{k,T})}\leq
C(T)$. Therefore
$\displaystyle\|(1+3x)^{\frac{2}{3}}(\log\tilde{\rho}_{k})_{xt}\|_{L^{\infty}_{t}L^{2}_{x}}$
(3.11) $\displaystyle\leq$ $\displaystyle
C(T)\|r_{k}^{2}\left(\tilde{\rho}_{k}\rho_{k}^{-1}(\log\rho_{k})_{t}\phi_{k}\right)_{x}\|_{L^{\infty}_{t}L^{2}_{x}}$
$\displaystyle\leq$ $\displaystyle
C(T)\|r_{k}^{2}(\tilde{\rho}_{k})_{x}\rho_{k}^{-1}(\log\rho_{k})_{t}\phi_{k}\|_{L^{\infty}_{t}L^{2}_{x}}+C(T)\|r_{k}^{2}\tilde{\rho}_{k}(\rho_{k}^{-1})_{x}(\log\rho_{k})_{t}\phi_{k}\|_{L^{\infty}_{t}L^{2}_{x}}$
$\displaystyle+C(T)\|r_{k}^{2}\tilde{\rho}_{k}\rho_{k}^{-1}(\log\rho_{k})_{xt}\phi_{k}\|_{L^{\infty}_{t}L^{2}_{x}}+C(T)\|r_{k}^{2}\tilde{\rho}_{k}\rho_{k}^{-1}(\log\rho_{k})_{t}(\phi_{k})_{x}\|_{L^{\infty}_{t}L^{2}_{x}}$
$\displaystyle\leq$ $\displaystyle
C(T)\|(\log\rho_{k})_{t}\|_{L^{\infty}_{t}L^{\infty}_{x}(Q_{k,T})}\left(\|r_{k}^{2}(\rho_{k}^{-1})_{x}\|_{L^{\infty}_{t}L^{2}_{x}(Q_{k,T})}+\|r_{k}^{2}(\tilde{\rho}_{k}^{-1})_{x}\|_{L^{\infty}_{t}L^{2}_{x}}\right)$
$\displaystyle+C(T)\|r_{k}^{2}(\log\rho_{k})_{xt}\|_{L^{\infty}_{t}L^{2}_{x}(Q_{k,T})}+C(T)\|r_{k}^{2}(\phi_{k})_{x}\partial_{t}\rho_{k}^{-1}\|_{L^{\infty}_{t}L^{2}_{x}}$
$\displaystyle\leq$ $\displaystyle C(T).$
Summarizing the estimates (3.4)-(3.11), one concludes that
$\displaystyle\|\tilde{u}_{k},\;(\tilde{\rho}^{-1}_{k}-1),\;\partial_{t}\tilde{u}_{k},\;\partial_{t}\tilde{\rho}^{-1}_{k},\;(1+3x)^{\frac{2}{3}}\partial_{x}\tilde{u}_{k},\;(1+3x)^{\frac{2}{3}}\partial_{x}\tilde{\rho}^{-1}_{k},\;(1+3x)^{\frac{2}{3}}(\log\tilde{\rho}_{k})_{xt}\|_{L^{\infty}_{t}L^{2}_{x}}^{2}$
(3.12)
$\displaystyle+\int_{0}^{T}\|(1+3x)^{\frac{2}{3}}(\tilde{u}_{k})_{xt}\|_{L^{2}}^{2}d\tau\leq
C(T).$
Hence there exist functions $(u,\;\rho^{-1})$ and a subsequence of
$(\tilde{u}_{k},\;\tilde{\rho}^{-1}_{k})$ (still denoted by
$(\tilde{u}_{k},\;\tilde{\rho}^{-1}_{k})$) such that as $k\rightarrow+\infty$,
$\displaystyle\left(\tilde{u}_{k},\;(\tilde{\rho}^{-1}_{k}-1),\;\partial_{t}\tilde{u}_{k},\;\partial_{t}\tilde{\rho}^{-1}_{k},\;(1+3x)^{\frac{2}{3}}\partial_{x}\tilde{u}_{k},\;(1+3x)^{\frac{2}{3}}\partial_{x}\tilde{\rho}^{-1}_{k},\;(1+3x)^{\frac{2}{3}}(\log\tilde{\rho}_{k})_{xt}\right)$
(3.13) $\displaystyle\rightharpoonup$
$\displaystyle\left(u,\;(\rho^{-1}-1),\;\partial_{t}u,\;\partial_{t}\rho^{-1},\;(1+3x)^{\frac{2}{3}}\partial_{x}u,\;(1+3x)^{\frac{2}{3}}\partial_{x}\rho^{-1},\;(1+3x)^{\frac{2}{3}}(\log\rho)_{xt}\right)$
in the weak-$\ast$ sense of $L^{\infty}([0,T],L^{2})$, and that
$\displaystyle(1+3x)^{\frac{2}{3}}(\tilde{u}_{k})_{xt}\rightharpoonup(1+3x)^{\frac{2}{3}}u_{xt}\text{
in the weak sense of $L^{2}\left([0,T],L^{2}\right)$,}$
with $(u,\;\rho)$ satisfying
$\displaystyle\|u,\;(\rho^{-1}-1),\;\partial_{t}u,\;\partial_{t}\rho^{-1},\;(1+3x)^{\frac{2}{3}}\partial_{x}u,\;(1+3x)^{\frac{2}{3}}\partial_{x}\rho^{-1},\;(1+3x)^{\frac{2}{3}}(\log\rho)_{xt}\|_{L^{\infty}_{t}L^{2}_{x}}^{2}$
(3.14)
$\displaystyle+\int_{0}^{T}\|(1+3x)^{\frac{2}{3}}u_{xt}\|_{L^{2}}^{2}d\tau\leq
C(T).$
Moreover, for any $\psi\in C_{c}^{\infty}(Q_{T})$ with $\psi\geq 0$, since
$\lim_{k\rightarrow+\infty}\int_{Q_{T}}\tilde{\rho}_{k}\psi
dxdt=\int_{Q_{T}}\rho\psi dxdt$ and $c(T)\int_{Q_{T}}\psi
dxdt\leq\lim_{k\rightarrow+\infty}\int_{Q_{T}}\tilde{\rho}_{k}\psi dxdt\leq
C(T)\int_{Q_{T}}\psi dxdt,$ it holds that
$c(T)\leq\rho\leq C(T)\text{ on }Q_{T}.$ (3.15)
Define $r(x,t)=r_{0}(x)+\int_{0}^{t}ud\tau,\;R(t)=r(0,t).$ We now check that
$(u,\;\rho,\;R)$ is a generalized solution to (1.14-1.19). First, (1.16) holds
immediately by the construction of $R$. To check the initial value (1.19) that
$(u,\;\rho,\;R)|_{t=0}=(u_{0},\;\rho_{0},\;R_{0})$, let $\varphi\in
C_{c}^{\infty}[0,+\infty)$ with $\text{supp }\varphi\subset[0,N]$. Then for
$k>2N$,
$\displaystyle\left(u(0)-u_{0},\;\varphi\right)_{L^{2}}=$
$\displaystyle\left(u(0)-u_{k,0},\;\varphi\right)_{L^{2}}$ $\displaystyle=$
$\displaystyle\frac{1}{T}\int_{0}^{T}\left(u(t)-u_{k}(t),\;\varphi\right)_{L^{2}}dt+\frac{1}{T}\int_{0}^{T}(t-T)\left(\partial_{t}u-\partial_{t}u_{k},\;\varphi\right)_{L^{2}}dt$
$\displaystyle=$
$\displaystyle\frac{1}{T}\int_{0}^{T}\left(u(t)-\tilde{u}_{k}(t),\;\varphi\right)_{L^{2}}dt+\frac{1}{T}\int_{0}^{T}(t-T)\left(\partial_{t}u-\partial_{t}\tilde{u}_{k},\;\varphi\right)_{L^{2}}dt$
$\displaystyle\rightarrow$ $\displaystyle 0,\text{ as }k\rightarrow+\infty.$
Hence $u(t=0)=u_{0}$, and similarly $\rho(t=0)=\rho_{0}$. For any $N>0$, in
view of Rellich’s selection theorem, there exists a subsequence of
$(\tilde{u}_{k},\;\tilde{\rho}^{-1}_{k})$, still denoted by
$(\tilde{u}_{k},\;\tilde{\rho}^{-1}_{k})$, such that
$(\tilde{u}_{k},\;\tilde{\rho}^{-1}_{k})\rightarrow(u,\rho^{-1}),\;\text{strongly
in }L^{2}\left((0,N)\times(0,T)\right).$ (3.16)
Then by (3.3), $(u_{k},\;\rho_{k})$ also converges strongly to $(u,\;\rho)$ in
$L^{2}\left((0,N)\times(0,T)\right)$, and thus
$r_{k}\rightarrow r,\text{ strongly in }C\left([0,T],L^{2}(0,N)\right).$
(3.17)
Therefore, in view of (3.3)(3.13), for any $t\in[0,T]$,
$\displaystyle R_{k}(t)-R(t)=$
$\displaystyle\int_{0}^{t}(u_{k}-u)|_{x=0}d\tau$ (3.18) $\displaystyle=$
$\displaystyle\frac{1}{N}\int_{0}^{t}\int_{0}^{N}(u_{k}-u)dxd\tau+\frac{1}{N}\int_{0}^{t}\int_{0}^{N}(x-N)(\partial_{x}u_{k}-\partial_{x}u)dxd\tau$
$\displaystyle\rightarrow$ $\displaystyle 0,\;\text{as }k\rightarrow+\infty.$
In particular, $R(0)=\lim_{k\rightarrow+\infty}R_{k}(0)=R_{0}$, which verifies
(1.19).
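The averaging identity used twice above (once in $t$ to evaluate $u(0)-u_{0}$, and once in $x$ for $R_{k}-R$) is elementary; a sketch for $f\in C^{1}[0,T]$, by integration by parts:

```latex
\frac{1}{T}\int_{0}^{T}(t-T)f'(t)\,dt
=\frac{1}{T}\Big[(t-T)f(t)\Big]_{0}^{T}-\frac{1}{T}\int_{0}^{T}f(t)\,dt
=f(0)-\frac{1}{T}\int_{0}^{T}f(t)\,dt,
```

so that $f(0)=\frac{1}{T}\int_{0}^{T}f\,dt+\frac{1}{T}\int_{0}^{T}(t-T)f'\,dt$; applied weakly, this converts pointwise trace values into integrals where the convergences (3.13) and (3.16) can be used.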
To check (1.14), let $\psi\in C_{c}^{\infty}(Q_{T})$ with
$\text{supp}\psi\subset(0,N)\times(0,T)$. First by the weak convergence (3.13)
and (3.3), one has
$\int_{Q_{T}}\partial_{t}\rho^{-1}\psi
dxdt=\lim_{k\rightarrow+\infty}\int_{Q_{T}}\partial_{t}\tilde{\rho}^{-1}_{k}\psi
dxdt=\lim_{k\rightarrow+\infty}\int_{Q_{T}}\partial_{t}\rho^{-1}_{k}\psi
dxdt.$
Then, using equation (2.1) and integrating by parts, it holds that
$\lim_{k\rightarrow+\infty}\int_{Q_{T}}\partial_{t}\rho^{-1}_{k}\psi
dxdt=\lim_{k\rightarrow+\infty}\int_{Q_{T}}(r^{2}_{k}u_{k})_{x}\psi
dxdt=-\lim_{k\rightarrow+\infty}\int_{Q_{T}}r_{k}^{2}u_{k}\psi_{x}dxdt.$
Next, using the strong convergence (3.16)(3.17) and integrating by parts
again, it follows that
$\int_{Q_{T}}\partial_{t}\rho^{-1}\psi
dxdt=-\lim_{k\rightarrow+\infty}\int_{Q_{T}}r_{k}^{2}u_{k}\psi_{x}dxdt=-\lim_{k\rightarrow+\infty}\int_{Q_{T}}r^{2}u\psi_{x}dxdt=\int_{Q_{T}}(r^{2}u)_{x}\psi
dxdt.$
Hence $\partial_{t}\rho^{-1}=(r^{2}u)_{x}$, which is exactly (1.14). Moreover,
it follows from
$\frac{1}{3}\partial_{t}\partial_{x}r^{3}=\partial_{x}(r^{2}u)=\partial_{t}\rho^{-1}$
that
$\partial_{x}r^{3}=3\rho^{-1}-3\rho^{-1}_{0}+\partial_{x}r_{0}^{3}=3\rho^{-1},\;\partial_{x}r=\rho^{-1}r^{-2},$
(3.19) $r^{3}(x,t)=R^{3}(t)+3\int_{0}^{x}\rho^{-1}(y,t)dy,$
which, together with the construction of $r$, verifies (1.18). Since
$c(T)\leq\rho\leq C(T)$, it also follows that
$c(T)\leq(1+3x)^{-1}r^{3}\leq C(T).$ (3.20)
To check (1.15), write the inner product of the viscous term with $r_{k}^{2}\psi$, for $\psi$ as above, as
$\displaystyle\int_{Q_{T}}(\rho_{k}(r_{k}^{2}u_{k})_{x})_{x}r_{k}^{2}\psi
dxdt=$
$\displaystyle-\int_{Q_{T}}\rho_{k}(r_{k}^{2}u_{k})_{x}(r_{k}^{2}\psi)_{x}dxdt$
$\displaystyle=$
$\displaystyle\int_{Q_{T}}(\rho-\rho_{k})(r_{k}^{2}u_{k})_{x}(r_{k}^{2}\psi)_{x}dxdt-\int_{Q_{T}}\rho(r_{k}^{2}u_{k})_{x}(r^{2}\psi)_{x}dxdt$
$\displaystyle-\int_{Q_{T}}\rho(r_{k}^{2}u_{k})_{x}[(r_{k}^{2}-r^{2})\psi]_{x}dxdt.$
By the strong convergence of $\tilde{\rho}^{-1}_{k}$ (3.16), the bounds of
$\rho$ (3.15), $\rho_{k}$ (3.5), $r_{k}$ (3.7), $\tilde{u}_{k}$ (3.12) in view
of (3.3) and the compact support of $\psi$, the first term on the right-hand
side tends to 0 as $k\rightarrow+\infty$. Similarly, the third term vanishes
as $k\rightarrow+\infty$ by (3.3)(3.7)(3.12)(3.15)(3.16)(3.17) and the bounds
of $r$ (3.20), while the second term tends to
$-\int_{Q_{T}}\rho(r^{2}u)_{x}(r^{2}\psi)_{x}dxdt=\int_{Q_{T}}(\rho(r^{2}u)_{x})_{x}r^{2}\psi
dxdt$ by (3.13)(3.15)(3.19)(3.20) and the equation (1.14). Hence we find that
$\lim_{k\rightarrow+\infty}\int_{Q_{T}}(\rho_{k}(r_{k}^{2}u_{k})_{x})_{x}r_{k}^{2}\psi
dxdt=\int_{Q_{T}}(\rho(r^{2}u)_{x})_{x}r^{2}\psi dxdt.$ (3.21)
For the pressure term, write
$\displaystyle\int_{Q_{T}}(\rho_{k}^{\gamma})_{x}r_{k}^{2}\psi dxdt=$
$\displaystyle-\int_{Q_{T}}(\rho_{k}^{\gamma}-1)(r_{k}^{2}\psi)_{x}dxdt$
$\displaystyle=$
$\displaystyle\int_{Q_{T}}(\rho^{\gamma}-\rho_{k}^{\gamma})(r_{k}^{2}\psi)_{x}dxdt-\int_{Q_{T}}(\rho^{\gamma}-1)(r^{2}\psi)_{x}dxdt$
$\displaystyle-\int_{Q_{T}}(\rho^{\gamma}-1)((r_{k}^{2}-r^{2})\psi)_{x}dxdt.$
Estimates (3.3)(3.5)(3.7)(3.15)(3.16) imply that the first term tends to 0, and
(3.3)(3.7)(3.14)(3.15)(3.16)(3.17)(3.20) imply that the third term tends to
0. Hence
$\lim_{k\rightarrow+\infty}\int_{Q_{T}}(\rho_{k}^{\gamma})_{x}r_{k}^{2}\psi
dxdt=\int_{Q_{T}}(\rho^{\gamma})_{x}r^{2}\psi dxdt.$ (3.22)
Estimates (3.3)(3.13)(3.21)(3.22) then imply that
$\displaystyle 0=$
$\displaystyle\lim_{k\rightarrow+\infty}\int_{Q_{T}}\left[\partial_{t}u_{k}+\frac{Ca}{2}(\rho_{k}^{\gamma})_{x}r_{k}^{2}-\mu
r_{k}^{2}(\rho_{k}(r_{k}^{2}u_{k})_{x})_{x}\right]\psi dxdt$
$\displaystyle=\int_{Q_{T}}\left[\partial_{t}u+\frac{Ca}{2}(\rho^{\gamma})_{x}r^{2}-\mu
r^{2}(\rho(r^{2}u)_{x})_{x}\right]\psi dxdt,$
which verifies equation (1.15).
Finally, to check (1.17), take $\psi\in C_{c}^{\infty}(Q_{T})$ such that
$\text{supp}\psi\subset[0,N)\times(0,T)$. Then by (1.14),
$\displaystyle-\int_{0}^{T}\left.\left(\frac{Ca}{2}\rho^{\gamma}-\mu\rho
r^{2}u_{x}\right)\right|_{x=0}\psi|_{x=0}dt$ $\displaystyle=$
$\displaystyle\int_{Q_{T}}\left(\frac{Ca}{2}\rho^{\gamma}-\mu\rho
r^{2}u_{x}\right)\psi_{x}dxdt+\int_{Q_{T}}\left(\frac{Ca}{2}\rho^{\gamma}-\mu\rho
r^{2}u_{x}\right)_{x}\psi dxdt$ $\displaystyle=$
$\displaystyle\int_{Q_{T}}\left(\frac{Ca}{2}\rho^{\gamma}-\mu\rho
r^{2}u_{x}\right)\psi_{x}dxdt+\int_{Q_{T}}\left(\frac{Ca}{2}(\rho^{\gamma})_{x}+\mu(\log\rho)_{xt}+2\mu\frac{u_{x}}{r}+2\mu\rho^{-1}\frac{u}{r^{4}}\right)\psi
dxdt,$
and the same equation holds with $(u,\;\rho,\;r)$ replaced by
$(u_{k},\;\rho_{k},\;r_{k})$. By (3.3)(3.5)(3.7)(3.13)(3.15)
(3.16)(3.17)(3.20) and the convergence of $R_{k}$ (3.18), (1.17) is verified
by
$\displaystyle\int_{0}^{T}\left.\left(\frac{Ca}{2}\rho^{\gamma}-\mu\rho
r^{2}u_{x}\right)\right|_{x=0}\psi|_{x=0}dt$ $\displaystyle=$
$\displaystyle\lim_{k\rightarrow+\infty}\int_{0}^{T}\left.\left(\frac{Ca}{2}\rho_{k}^{\gamma}-\mu\rho_{k}r_{k}^{2}(u_{k})_{x}\right)\right|_{x=0}\psi|_{x=0}dt$
$\displaystyle=$
$\displaystyle\lim_{k\rightarrow+\infty}\int_{0}^{T}\left(\left(\frac{Ca}{2}+\frac{2}{We}\right)R_{k}^{-3\gamma_{0}}-\frac{2}{We}R_{k}^{-1}\right)\psi|_{x=0}dt$
$\displaystyle=$
$\displaystyle\int_{0}^{T}\left(\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}\right)\psi|_{x=0}dt.$
Hence $(u,\;\rho,\;R)$ is a generalized solution to (1.14-1.19) on $[0,T]$.
Since $T>0$ is chosen arbitrarily, we conclude that $(u,\;\rho,\;R)$ is in
fact a global generalized solution.
### 3.2 Uniqueness
Let $(u_{1},\;\rho_{1},\;R_{1})$, $(u_{2},\;\rho_{2},\;R_{2})$ be two global
generalized solutions to (1.14-1.19), namely, for $i=1,2$,
$\displaystyle\partial_{t}\rho_{i}+\rho_{i}^{2}\partial_{x}(r_{i}^{2}u_{i})=0,$
$x>0,\;t>0,$ (3.23)
$\displaystyle\partial_{t}u_{i}+\frac{Ca}{2}r_{i}^{2}\partial_{x}(\rho_{i}^{\gamma})=\mu
r_{i}^{2}\partial_{x}\left(\rho_{i}\partial_{x}(r_{i}^{2}u_{i})\right),$
$x>0,\;t>0,$ (3.24) $\displaystyle\frac{dR_{i}}{dt}=u_{i}|_{x=0},$ $t>0,$
(3.25)
$\displaystyle(\frac{Ca}{2}\rho_{i}^{\gamma}-\mu\rho_{i}r_{i}^{2}\partial_{x}u_{i})|_{x=0}=\left(\frac{Ca}{2}+\frac{2}{We}\right)R_{i}^{-3\gamma_{0}}-\frac{2}{We}R_{i}^{-1},$
$t>0,$ (3.26) $\displaystyle
r_{i}=\left(R_{i}(t)^{3}+3\int_{0}^{x}\rho_{i}^{-1}(y,t)dy\right)^{\frac{1}{3}}=r_{i}(x,0)+\int_{0}^{t}u_{i}(y,\tau)d\tau,$
$x>0,\;t>0,$ (3.27)
$\displaystyle(u_{i},\;\rho_{i},\;R_{i})|_{t=0}=(u_{0},\;\rho_{0},\;R_{0}),$
$x>0$, (3.28)
and the controls (3.14)(3.15)(3.20) hold. To prove the uniqueness, it suffices
to show $(u_{1},\;\rho_{1},\;R_{1})=(u_{2},\;\rho_{2},\;R_{2})$ on $[0,T]$ for
arbitrary $T>0$. To begin with, subtracting the equations (3.24) for $i=1,2$
and multiplying the resulting equation by $(u_{1}-u_{2})$ yields
$\displaystyle 0=$
$\displaystyle\frac{1}{2}\frac{d}{dt}(u_{1}-u_{2})^{2}+\frac{Ca}{2}\left[(\rho_{1}^{\gamma})_{x}r_{1}^{2}-(\rho_{2}^{\gamma})_{x}r_{2}^{2}\right](u_{1}-u_{2})$
(3.29)
$\displaystyle-\mu\left[(\rho_{1}(r_{1}^{2}u_{1})_{x})_{x}r_{1}^{2}-(\rho_{2}(r_{2}^{2}u_{2})_{x})_{x}r_{2}^{2}\right](u_{1}-u_{2})$
$\displaystyle=$
$\displaystyle\frac{1}{2}\frac{d}{dt}(u_{1}-u_{2})^{2}+\frac{Ca}{2}\left[(\rho_{1}^{\gamma}r_{1}^{2}-\rho_{2}^{\gamma}r_{2}^{2})(u_{1}-u_{2})\right]_{x}-\frac{Ca}{2}(\rho_{1}^{\gamma}r_{1}^{2}-\rho_{2}^{\gamma}r_{2}^{2})(u_{1}-u_{2})_{x}$
$\displaystyle-\frac{Ca}{2}(\rho_{1}^{\gamma}(r_{1}^{2})_{x}-\rho_{2}^{\gamma}(r_{2}^{2})_{x})(u_{1}-u_{2})-\mu\left[(\rho_{1}r_{1}^{4}(u_{1})_{x}-\rho_{2}r_{2}^{4}(u_{2})_{x})(u_{1}-u_{2})\right]_{x}$
$\displaystyle+\mu\rho_{2}r_{2}^{4}(u_{1}-u_{2})_{x}^{2}+2\mu\rho_{2}^{-1}r_{2}^{-2}(u_{1}-u_{2})^{2}$
$\displaystyle+\mu(\rho_{1}r_{1}^{4}-\rho_{2}r_{2}^{4})(u_{1})_{x}(u_{1}-u_{2})_{x}+2\mu(\rho_{1}^{-1}r_{1}^{-2}-\rho_{2}^{-1}r_{2}^{-2})u_{1}(u_{1}-u_{2}).$
Using (3.26) and integrating (3.29) over $(0,+\infty)$ yields
$\displaystyle\frac{1}{2}\frac{d}{dt}\int_{0}^{\infty}(u_{1}-u_{2})^{2}dx-\frac{Ca}{2}\int_{0}^{\infty}(\rho_{1}^{\gamma}(r_{1}^{2})_{x}-\rho_{2}^{\gamma}(r_{2}^{2})_{x})(u_{1}-u_{2})dx$
(3.30)
$\displaystyle-\frac{Ca}{2}\int_{0}^{\infty}(\rho_{1}^{\gamma}r_{1}^{2}-\rho_{2}^{\gamma}r_{2}^{2})(u_{1}-u_{2})_{x}dx+\mu\int_{0}^{\infty}\rho_{2}r_{2}^{4}(u_{1}-u_{2})_{x}^{2}dx+2\mu\int_{0}^{\infty}\rho_{2}^{-1}r_{2}^{-2}(u_{1}-u_{2})^{2}dx$
$\displaystyle+\mu\int_{0}^{\infty}(\rho_{1}r_{1}^{4}-\rho_{2}r_{2}^{4})(u_{1})_{x}(u_{1}-u_{2})_{x}dx+2\mu\int_{0}^{\infty}(\rho_{1}^{-1}r_{1}^{-2}-\rho_{2}^{-1}r_{2}^{-2})u_{1}(u_{1}-u_{2})dx$
$\displaystyle+\left[\frac{2}{We}(R_{1}-R_{2})-\left(\frac{Ca}{2}+\frac{2}{We}\right)(R_{1}^{-3\gamma_{0}+2}-R_{2}^{-3\gamma_{0}+2})\right]\left(\frac{dR_{1}}{dt}-\frac{dR_{2}}{dt}\right)=0.$
Since
$(R_{2}-R_{1})^{-1}\left(R_{2}^{-3\gamma_{0}+2}-R_{1}^{-3\gamma_{0}+2}\right)=\int_{0}^{1}(2-3\gamma_{0})\left(R_{1}+\lambda(R_{2}-R_{1})\right)^{1-3\gamma_{0}}d\lambda,$
the term involving the difference of the $R_{i}$ can be bounded by
$\displaystyle\left[\frac{2}{We}(R_{1}-R_{2})-\left(\frac{Ca}{2}+\frac{2}{We}\right)(R_{1}^{-3\gamma_{0}+2}-R_{2}^{-3\gamma_{0}+2})\right]\left(\frac{dR_{1}}{dt}-\frac{dR_{2}}{dt}\right)$
(3.31) $\displaystyle=$
$\displaystyle\frac{d}{dt}\left[\left(\frac{1}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)\frac{R_{2}^{-3\gamma_{0}+2}-R_{1}^{-3\gamma_{0}+2}}{R_{1}-R_{2}}+\frac{1}{We}\right)(R_{1}-R_{2})^{2}\right]$
$\displaystyle+\frac{1}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)(R_{1}-R_{2})^{2}\frac{d}{dt}\int_{0}^{1}(2-3\gamma_{0})\left(R_{1}+\lambda(R_{2}-R_{1})\right)^{1-3\gamma_{0}}d\lambda$
$\displaystyle=$
$\displaystyle\frac{d}{dt}\left[\left(\frac{1}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)\frac{R_{2}^{-3\gamma_{0}+2}-R_{1}^{-3\gamma_{0}+2}}{R_{1}-R_{2}}+\frac{1}{We}\right)(R_{1}-R_{2})^{2}\right]$
$\displaystyle+\frac{1}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)(R_{1}-R_{2})^{2}(2-3\gamma_{0})(1-3\gamma_{0})\int_{0}^{1}\left(R_{1}+\lambda(R_{2}-R_{1})\right)^{-3\gamma_{0}}(u_{1}+\lambda(u_{2}-u_{1}))d\lambda$
$\displaystyle\geq$
$\displaystyle\frac{d}{dt}\left[\left(\frac{1}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)\frac{R_{2}^{-3\gamma_{0}+2}-R_{1}^{-3\gamma_{0}+2}}{R_{1}-R_{2}}+\frac{1}{We}\right)(R_{1}-R_{2})^{2}\right]-C(T)(R_{1}-R_{2})^{2},$
where in the last step we use that $\|u_{i}\|_{L^{\infty}}\leq
C(T)\left(\|u_{i}\|_{L^{2}}+\|r_{i}^{2}(u_{i})_{x}\|_{L^{2}}\right)\leq C(T)$
in view of (3.14)(3.15)(3.20) and that $c\leq R_{i}\leq C$ for $i=1,2$. Then,
using the Cauchy-Schwarz inequality and (3.14)(3.15)(3.20), the two terms with
coefficient $\frac{Ca}{2}$ and the two cross terms with coefficient $\mu$ in (3.30) satisfy
$\displaystyle-\frac{Ca}{2}\int_{0}^{\infty}(\rho_{1}^{\gamma}(r_{1}^{2})_{x}-\rho_{2}^{\gamma}(r_{2}^{2})_{x})(u_{1}-u_{2})dx-\frac{Ca}{2}\int_{0}^{\infty}(\rho_{1}^{\gamma}r_{1}^{2}-\rho_{2}^{\gamma}r_{2}^{2})(u_{1}-u_{2})_{x}dx$
(3.32)
$\displaystyle+\mu\int_{0}^{\infty}(\rho_{1}r_{1}^{4}-\rho_{2}r_{2}^{4})(u_{1})_{x}(u_{1}-u_{2})_{x}dx+2\mu\int_{0}^{\infty}(\rho_{1}^{-1}r_{1}^{-2}-\rho_{2}^{-1}r_{2}^{-2})u_{1}(u_{1}-u_{2})dx$
$\displaystyle\geq$
$\displaystyle-\frac{\mu}{2}\int_{0}^{\infty}\rho_{2}r_{2}^{4}(u_{1}-u_{2})_{x}^{2}dx-\mu\int_{0}^{\infty}\rho_{2}^{-1}r_{2}^{-2}(u_{1}-u_{2})^{2}dx$
$\displaystyle-C\int_{0}^{\infty}(\rho_{1}^{\gamma}r_{1}^{2}-\rho_{2}^{\gamma}r_{2}^{2})^{2}\rho_{2}^{-1}r_{2}^{-4}dx-C\int_{0}^{\infty}(\rho_{1}^{\gamma}(r_{1}^{2})_{x}-\rho_{2}^{\gamma}(r_{2}^{2})_{x})^{2}\rho_{2}r_{2}^{2}dx$
$\displaystyle-C\int_{0}^{\infty}(\rho_{1}r_{1}^{4}-\rho_{2}r_{2}^{4})^{2}(u_{1})_{x}^{2}\rho_{2}^{-1}r_{2}^{-4}dx-C\int_{0}^{\infty}(\rho_{1}^{-1}r_{1}^{-2}-\rho_{2}^{-1}r_{2}^{-2})^{2}\rho_{2}r_{2}^{2}u_{1}^{2}dx$
$\displaystyle\geq$
$\displaystyle-\frac{\mu}{2}\int_{0}^{\infty}\rho_{2}r_{2}^{4}(u_{1}-u_{2})_{x}^{2}dx-\mu\int_{0}^{\infty}\rho_{2}^{-1}r_{2}^{-2}(u_{1}-u_{2})^{2}dx$
$\displaystyle-C(T)\int_{0}^{\infty}(\rho_{1}^{-1}-\rho_{2}^{-1})^{2}dx-C(T)\int_{0}^{\infty}(1-\frac{r_{1}}{r_{2}})^{2}dx-C(T)\int_{0}^{\infty}(1-\frac{r_{2}}{r_{1}})^{2}dx,$
where we also used that
$\|u_{1}\|_{L^{\infty}}^{2}\leq
C(T)\left(\|u_{1}\|_{L^{2}}^{2}+\|r_{1}^{2}(u_{1})_{x}\|_{L^{2}}^{2}\right)\leq
C(T),$ (3.33) $\|\rho_{1}(r_{1}^{2}u_{1})_{x}\|_{L^{\infty}}\leq
C(T)(1+\|(u_{1})_{t}\|_{L^{2}})\leq C(T).$ (3.34)
Estimate (3.34) is verified by dividing (3.24) by $r_{1}^{2}$ and integrating
over $[x,+\infty)$, namely
$\mu\rho_{1}(r_{1}^{2}u_{1})_{x}(x,t)=-\int_{x}^{\infty}\frac{(u_{1})_{t}}{r_{1}^{2}}dy+\frac{Ca}{2}(\rho_{1}^{\gamma}(x,t)-1).$
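The weighted bound (3.33) is the standard one-dimensional interpolation estimate; a sketch, assuming $u_{1}(x,t)\rightarrow 0$ as $x\rightarrow+\infty$ and using $r_{1}\geq c(T)>0$ from (3.20):

```latex
u_{1}^{2}(x)=-\int_{x}^{\infty}2u_{1}(u_{1})_{y}\,dy
\leq\int_{0}^{\infty}\Big(\frac{u_{1}^{2}}{r_{1}^{2}}+r_{1}^{2}(u_{1})_{y}^{2}\Big)dy
\leq C(T)\left(\|u_{1}\|_{L^{2}}^{2}+\|r_{1}^{2}(u_{1})_{x}\|_{L^{2}}^{2}\right),
```

since $r_{1}^{2}(u_{1})_{x}^{2}=r_{1}^{-2}\big(r_{1}^{2}(u_{1})_{x}\big)^{2}\leq c(T)^{-2}\big(r_{1}^{2}(u_{1})_{x}\big)^{2}$.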
The third line in (3.30) can be absorbed by using the Cauchy-Schwarz inequality
and (3.33)(3.34); therefore, using (3.31)(3.32) in (3.30) yields
$\displaystyle\frac{1}{2}\frac{d}{dt}\int_{0}^{\infty}(u_{1}-u_{2})^{2}dx+\frac{d}{dt}\left[\left(\frac{1}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)\frac{R_{2}^{-3\gamma_{0}+2}-R_{1}^{-3\gamma_{0}+2}}{R_{1}-R_{2}}+\frac{1}{We}\right)(R_{1}-R_{2})^{2}\right]$
(3.35)
$\displaystyle+\frac{\mu}{2}\int_{0}^{\infty}\rho_{2}r_{2}^{4}(u_{1}-u_{2})_{x}^{2}dx+\mu\int_{0}^{\infty}\rho_{2}^{-1}r_{2}^{-2}(u_{1}-u_{2})^{2}dx$
$\displaystyle\leq$ $\displaystyle
C(T)\left[\int_{0}^{\infty}(\rho_{1}^{-1}-\rho_{2}^{-1})^{2}dx+\int_{0}^{\infty}(1-\frac{r_{1}}{r_{2}})^{2}dx+\int_{0}^{\infty}(1-\frac{r_{2}}{r_{1}})^{2}dx+(R_{1}-R_{2})^{2}\right].$
To close (3.35), it remains to control
$\|\rho_{1}^{-1}-\rho_{2}^{-1}\|_{L^{2}}$ and
$\|1-r_{i}^{-1}r_{j}\|_{L^{2}}$. In fact, for the difference of
$\rho_{i}^{-1}$, we have the estimate
$\displaystyle\frac{d}{dt}\int_{0}^{\infty}(\rho_{1}^{-1}-\rho_{2}^{-1})^{2}dx=2\int_{0}^{\infty}(\rho_{1}^{-1}-\rho_{2}^{-1})\left((r_{1}^{2}u_{1})_{x}-(r_{2}^{2}u_{2})_{x}\right)dx$
(3.36) $\displaystyle\leq$
$\displaystyle\int_{0}^{\infty}(\rho_{1}^{-1}-\rho_{2}^{-1})^{2}dx+2\int_{0}^{\infty}\left(r_{1}^{2}(u_{1})_{x}-r_{2}^{2}(u_{2})_{x}\right)^{2}dx+2\int_{0}^{\infty}\left((r_{1}^{2})_{x}u_{1}-(r_{2}^{2})_{x}u_{2}\right)^{2}dx$
$\displaystyle\leq$
$\displaystyle\int_{0}^{\infty}(\rho_{1}^{-1}-\rho_{2}^{-1})^{2}dx+4\int_{0}^{\infty}(r_{1}^{2}(u_{1})_{x})^{2}\left(1-\frac{r_{2}}{r_{1}}\right)^{2}dx+4\int_{0}^{\infty}r_{2}^{4}(u_{1}-u_{2})_{x}^{2}dx$
$\displaystyle+16\int_{0}^{\infty}u_{1}^{2}(\rho_{1}^{-1}r_{1}^{-1}-\rho_{2}^{-1}r_{2}^{-1})^{2}dx+16\int_{0}^{\infty}\rho_{2}^{-2}r_{2}^{-2}(u_{1}-u_{2})^{2}dx$
$\displaystyle\leq$ $\displaystyle
C(T)\left[\int_{0}^{\infty}(\rho_{1}^{-1}-\rho_{2}^{-1})^{2}dx+\int_{0}^{\infty}\left(1-\frac{r_{2}}{r_{1}}\right)^{2}dx+\int_{0}^{\infty}(u_{1}-u_{2})^{2}dx+\int_{0}^{\infty}\rho_{2}r_{2}^{4}(u_{1}-u_{2})_{x}^{2}dx\right].$
Using $\partial_{t}r_{i}=u_{i}$ for $i=1,2$,
$\|1-r_{i}^{-1}r_{j}\|_{L^{2}}$ can be bounded easily:
$\displaystyle\frac{d}{dt}\int_{0}^{\infty}\left(1-\frac{r_{2}}{r_{1}}\right)^{2}dx$
(3.37) $\displaystyle=$ $\displaystyle
2\int_{0}^{\infty}\left(\frac{r_{2}}{r_{1}}-1\right)\left(\frac{r_{1}u_{2}-r_{2}u_{1}}{r_{1}^{2}}\right)dx$
$\displaystyle=$
$\displaystyle-2\int_{0}^{\infty}\left(1-\frac{r_{2}}{r_{1}}\right)^{2}\frac{u_{2}}{r_{1}}dx+2\int_{0}^{\infty}\frac{r_{2}}{r_{1}^{2}}\left(\frac{r_{2}}{r_{1}}-1\right)(u_{2}-u_{1})dx$
$\displaystyle\leq$ $\displaystyle
C(T)\int_{0}^{\infty}\left(1-\frac{r_{2}}{r_{1}}\right)^{2}dx+C(T)\int_{0}^{\infty}(u_{1}-u_{2})^{2}dx,$
and in the same way,
$\displaystyle\frac{d}{dt}\int_{0}^{\infty}\left(1-\frac{r_{1}}{r_{2}}\right)^{2}dx\leq
C(T)\int_{0}^{\infty}\left(1-\frac{r_{1}}{r_{2}}\right)^{2}dx+C(T)\int_{0}^{\infty}(u_{1}-u_{2})^{2}dx.$
(3.38)
To absorb the bad term
$\int_{0}^{\infty}\rho_{2}r_{2}^{4}(u_{1}-u_{2})_{x}^{2}dx$ in (3.36), multiply
(3.36) by a small enough $\epsilon(T)>0$; we then conclude by (3.35)(3.36)(3.37)(3.38) that
$\displaystyle\frac{d}{dt}\int_{0}^{\infty}(u_{1}-u_{2})^{2}dx+\frac{d}{dt}\left[\left(\frac{1}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)\frac{R_{2}^{-3\gamma_{0}+2}-R_{1}^{-3\gamma_{0}+2}}{R_{1}-R_{2}}+\frac{1}{We}\right)(R_{1}-R_{2})^{2}\right]$
$\displaystyle+\epsilon(T)\frac{d}{dt}\int_{0}^{\infty}(\rho_{1}^{-1}-\rho_{2}^{-1})^{2}dx+\frac{d}{dt}\int_{0}^{\infty}\left(1-\frac{r_{2}}{r_{1}}\right)^{2}dx+\frac{d}{dt}\int_{0}^{\infty}\left(1-\frac{r_{1}}{r_{2}}\right)^{2}dx$
$\displaystyle\leq$ $\displaystyle
C(T)\left[\int_{0}^{\infty}(u_{1}-u_{2})^{2}dx+\int_{0}^{\infty}(\rho_{1}^{-1}-\rho_{2}^{-1})^{2}dx+(R_{1}-R_{2})^{2}\right]$
$\displaystyle+C(T)\int_{0}^{\infty}\left((1-\frac{r_{1}}{r_{2}})^{2}+(1-\frac{r_{2}}{r_{1}})^{2}\right)dx.$
Gronwall’s inequality then shows that
$(u_{1},\;\rho_{1},\;R_{1})=(u_{2},\;\rho_{2},\;R_{2})$ on $[0,T]$; since $T>0$ is arbitrary, uniqueness follows.
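The closing Gronwall step can be made explicit; a sketch, assuming (as the derivation of (3.31) and the bounds $c(T)\leq R_{i}\leq C(T)$ suggest) that the coefficient of $(R_{1}-R_{2})^{2}$ in (3.35) is bounded above and below by positive constants:

```latex
Y(t):=\int_{0}^{\infty}\Big[(u_{1}-u_{2})^{2}
+\epsilon(T)(\rho_{1}^{-1}-\rho_{2}^{-1})^{2}
+\Big(1-\frac{r_{2}}{r_{1}}\Big)^{2}+\Big(1-\frac{r_{1}}{r_{2}}\Big)^{2}\Big]dx
+(R_{1}-R_{2})^{2}
```

satisfies $\frac{dY}{dt}\leq C(T)\,Y(t)$ with $Y(0)=0$, because both solutions share the initial data (3.28); hence $Y(t)\leq Y(0)e^{C(T)t}=0$ on $[0,T]$.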
## 4 Uniform estimates and asymptotic stability
In this section, uniform-in-time estimates are given for solutions with
initial data close to the equilibrium. Then Theorem 1.3 is proved by making
full use of the dissipation terms in the energy estimates. Let $(u,\;\rho,\;R)$ be
the global generalized solution to (1.14)-(1.19) with
$(u_{0},\;\rho_{0},\;R_{0})$ satisfying the assumption (1.21). Let
$(\tilde{u}_{k},\;\tilde{\rho}_{k},\;R_{k})$ be the same as in Section 3. We
first establish for $(u,\;\rho,\;R)$ the same basic energy identity as for
$(u_{k},\;\rho_{k},\;R_{k})$ in Section 2.
###### Lemma 4.1 (Basic energy).
Let $P(R)$ be the same as in Lemma 2.2, and let
$E_{0}(t):=\frac{1}{2}\int_{0}^{\infty}u^{2}dx+\frac{Ca}{2}\frac{1}{\gamma-1}\int_{0}^{\infty}H(\rho)dx+P(R).$
Then for any $t\in[0,T]$,
$E_{0}(t)+\mu\int_{0}^{t}\int_{0}^{\infty}\rho(r^{2}u_{x})^{2}dxd\tau+2\mu\int_{0}^{t}\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dxd\tau=E_{0}(0).$
(4.1)
###### Proof.
Using (1.14) and (1.15), a direct calculation gives that
$\displaystyle\frac{1}{2}\int_{0}^{\infty}u^{2}dx+\frac{Ca}{2}\frac{1}{\gamma-1}\int_{0}^{\infty}H(\rho)dx+P(R)+\mu\int_{0}^{t}\int_{0}^{\infty}\rho(r^{2}u_{x})^{2}dxd\tau+2\mu\int_{0}^{t}\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dxd\tau$
$\displaystyle=$
$\displaystyle\int_{0}^{t}\int_{0}^{\infty}\left\\{uu_{t}+\frac{Ca}{2}(\rho^{\gamma}-1)\rho^{-2}\rho_{t}+\mu\rho(r^{2}u_{x})^{2}+2\mu\rho^{-1}\frac{u^{2}}{r^{2}}\right\\}dxd\tau+\int_{0}^{t}\frac{d}{dt}P(R)d\tau+E_{0}(0)$
$\displaystyle=$
$\displaystyle-\frac{Ca}{2}\int_{0}^{t}\int_{0}^{\infty}\left[(\rho^{\gamma}-1)_{x}r^{2}u+(\rho^{\gamma}-1)(r^{2}u)_{x}\right]dxd\tau+\mu\int_{0}^{t}\int_{0}^{\infty}\left[(\rho(r^{2}u)_{x})_{x}r^{2}u+\rho(r^{2}u)_{x}^{2}\right]dxd\tau$
$\displaystyle-2\mu\int_{0}^{t}\int_{0}^{\infty}\left[2ruu_{x}+\rho^{-1}\frac{u^{2}}{r^{2}}\right]dxd\tau+\int_{0}^{t}\frac{d}{dt}P(R)d\tau+E_{0}(0)$
Note that by (3.13),
$-\frac{Ca}{2}\int_{0}^{t}\int_{0}^{\infty}\left[(\rho^{\gamma}-1)_{x}r^{2}u+(\rho^{\gamma}-1)(r^{2}u)_{x}\right]dxd\tau=-\frac{Ca}{2}\lim_{k\rightarrow+\infty}\int_{0}^{t}\int_{0}^{\infty}\left[(\rho^{\gamma}-1)r^{2}\tilde{u}_{k}\right]_{x}dxd\tau,$
$\mu\int_{0}^{t}\int_{0}^{\infty}\left[(\rho(r^{2}u)_{x})_{x}r^{2}u+\rho(r^{2}u)_{x}^{2}\right]dxd\tau=\mu\lim_{k\rightarrow+\infty}\int_{0}^{t}\int_{0}^{\infty}\left[(\rho(r^{2}u)_{x})r^{2}\tilde{u}_{k}\right]_{x}dxd\tau,$
$-2\mu\int_{0}^{t}\int_{0}^{\infty}\left[2ruu_{x}+\rho^{-1}\frac{u^{2}}{r^{2}}\right]dxd\tau=-2\mu\lim_{k\rightarrow+\infty}\int_{0}^{t}\int_{0}^{\infty}\left[ru\tilde{u}_{k}\right]_{x}dxd\tau,$
and that in view of the boundary conditions (1.16)(1.17) the sum of the right-
hand sides of the above three equations is
$\lim_{k\rightarrow+\infty}\int_{0}^{t}\left[\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}+2}-\frac{Ca}{2}R^{2}-\frac{2}{We}R\right]\frac{dR_{k}}{dt}d\tau,$
which is exactly $-\int_{0}^{t}\frac{d}{dt}P(R)d\tau$ since
$\frac{dR_{k}}{dt}=-\int_{0}^{\infty}(\tilde{u}_{k})_{x}dx$ and
$(1+3x)^{\frac{2}{3}}(\tilde{u}_{k})_{x}\rightharpoonup(1+3x)^{\frac{2}{3}}u_{x}$
in the weak-$\ast$ sense of $L^{\infty}([0,T],L^{2})$. Hence all the terms on
the right-hand side of the energy identity are cancelled except $E_{0}(0)$. ∎
From Lemma 4.1, we see that $P(R)\leq E_{0}$, and the convexity of $P(R)$
yields for some positive constant $C$ that
$(R-1)^{2}\leq CE_{0}.$ (4.2)
As in Lemma 2.3, a corollary is that
$\int_{0}^{\infty}\|u^{2}r\|_{L^{\infty}}dt\leq\mu^{-1}E_{0}(0).$ (4.3)
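The corollary (4.3) follows from (4.1) by the same computation used for $(u_{k},\rho_{k})$ in Section 3; a sketch, assuming sufficient decay of $u$ as $x\rightarrow+\infty$ and using $\partial_{x}r=\rho^{-1}r^{-2}$ from (3.19):

```latex
u^{2}r\,(x,t)\leq\int_{x}^{\infty}\big|2uu_{y}r+u^{2}r_{y}\big|\,dy
\leq\int_{0}^{\infty}\Big(\rho(r^{2}u_{y})^{2}+2\rho^{-1}\frac{u^{2}}{r^{2}}\Big)dy,
```

since $2|uu_{y}|r\leq\rho(r^{2}u_{y})^{2}+\rho^{-1}u^{2}r^{-2}$ and $u^{2}r_{y}=\rho^{-1}u^{2}r^{-2}$; taking the supremum in $x$, integrating in $t$, and using the energy identity (4.1) gives (4.3).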
As in Lemma 2.5, define for $t>0$ that
$E_{1}(t):=\frac{1}{2}\int_{0}^{\infty}\left(u+\mu
r^{2}(\log\rho)_{x}\right)^{2}dx+\frac{Ca}{2}\frac{1}{\gamma-1}\int_{0}^{\infty}H(\rho)dx+P(R).$
Then the same calculation gives the following.
###### Lemma 4.2 (Bresch-Desjardins entropy equation).
$\frac{d}{dt}E_{1}(t)+\frac{Ca}{2}\frac{4\mu}{\gamma}\int_{0}^{\infty}\left(r^{2}(\rho^{\frac{\gamma}{2}})_{x}\right)^{2}dx=2\mu\int_{0}^{\infty}(\log\rho)_{x}ru(u+\mu(\log\rho)_{x}r^{2})dx+\mu(\rho
r^{2}u_{x})|_{x=0}R^{2}\frac{dR}{dt}.$ (4.4)
Using Lemma 4.1 and Lemma 4.2, we derive a uniform-in-time estimate of $E_{1}$
provided that the initial data is close to the equilibrium in the sense that
$E_{0}(0)+E_{1}(0)\leq\delta$ for $\delta\lesssim 1$ sufficiently small.
###### Lemma 4.3.
There exists $\delta>0$ such that if the initial data
$(u_{0},\;\rho_{0},\;R_{0})$ is close to the equilibrium in the sense that
$E_{0}(0)+E_{1}(0)\leq\delta$, then there exists $C>0$ such that
(i) $E_{0}(t)+E_{1}(t)\leq C(E_{0}(0)+E_{1}(0))$ for any $t>0$,
(ii)
$\int_{0}^{\infty}\int_{0}^{\infty}\left(r^{2}(\rho^{\frac{\gamma}{2}})_{x}\right)^{2}dxdt\leq
C(E_{0}(0)+E_{1}(0)),$
(iii) $\rho\leq 1+C(E_{0}(0)+E_{1}(0))^{\frac{1}{2}},\;\rho^{-1}\leq
1+C(E_{0}(0)+E_{1}(0))^{\frac{1}{2}}$ for any
$(x,t)\in\mathbb{R}^{+}\times\mathbb{R}^{+}$.
###### Proof.
We begin with the estimates of the right-hand side of (4.4). Using (1.18) and
$\int_{0}^{\infty}r^{-4}dx=\int_{R(t)}^{\infty}\rho r^{-2}dr\leq
R(t)^{-1}\sup_{x\in\mathbb{R}^{+}}\rho$, it holds that
$\displaystyle\left|\int_{0}^{\infty}\mu ru^{2}(\log\rho)_{x}dx\right|\leq$
$\displaystyle\left(\frac{Ca\gamma}{4\mu}\right)^{-1}\int_{0}^{\infty}\rho^{-\gamma}\frac{u^{4}}{r^{2}}dx+\frac{Ca}{2}\frac{\mu}{2\gamma}\int_{0}^{\infty}(\rho^{\frac{\gamma}{2}})_{x}^{2}r^{4}dx$
(4.5) $\displaystyle\leq$
$\displaystyle\left(\frac{Ca\gamma}{4\mu}\right)^{-1}\|u^{2}r\|_{L^{\infty}}\|r^{-3}\rho^{-\gamma}\|_{L^{\infty}}\int_{0}^{\infty}u^{2}dx+\frac{Ca}{2}\frac{\mu}{2\gamma}\int_{0}^{\infty}(\rho^{\frac{\gamma}{2}})_{x}^{2}r^{4}dx,$
and
$\displaystyle\left|\int_{0}^{\infty}\frac{u}{r}(\mu(\log\rho)_{x}r^{2})^{2}dx\right|$
(4.6) $\displaystyle\leq$
$\displaystyle\left(\frac{Ca\gamma}{4\mu}\right)^{-1}\int_{0}^{\infty}\frac{u^{2}}{r^{2}}\rho^{-\gamma}(\mu(\log\rho)_{x}r^{2})^{2}dx+\frac{Ca}{2}\frac{\mu}{2\gamma}\int_{0}^{\infty}(\rho^{\frac{\gamma}{2}})_{x}^{2}r^{4}dx$
$\displaystyle\leq$
$\displaystyle\left(\frac{Ca\gamma}{4\mu}\right)^{-1}\|u^{2}r\|_{L^{\infty}}\|r^{-3}\rho^{-\gamma}\|_{L^{\infty}}\int_{0}^{\infty}(\mu(\log\rho)_{x}r^{2})^{2}dx+\frac{Ca}{2}\frac{\mu}{2\gamma}\int_{0}^{\infty}(\rho^{\frac{\gamma}{2}})_{x}^{2}r^{4}dx.$
To bound $\rho$ from below, note first that
$\left|\partial_{x}(1-\rho^{-\frac{1}{4}})^{2}\right|=\left|\frac{1}{2}(1-\rho^{-\frac{1}{4}})\rho^{-\frac{1}{4}}(\log\rho)_{x}\right|\leq\frac{1}{4}(\rho^{-\frac{1}{4}}-\rho^{-\frac{1}{2}})^{2}r^{-4}+\frac{1}{4}(\log\rho)_{x}^{2}r^{4}.$
In order to control the first term on the right-hand side, we use the
inequality
$(\rho^{\lambda_{1}}-\rho^{\lambda_{2}})^{2}\leq
C_{\lambda_{1},\lambda_{2}}H(\rho)\leq C_{\lambda_{1},\lambda_{2}}E_{0}$
for $\lambda_{1},\;\lambda_{2}$ with
$-\frac{1}{2}\leq\lambda_{2}\leq\lambda_{1}\leq\frac{\gamma-1}{2}$. Hence,
integrating in $x$, we obtain the lower bound
$\|1-\rho^{-\frac{1}{4}}\|^{2}_{L^{\infty}}\leq C(E_{0}+E_{1})$. Similarly,
for the upper bound, the inequality
$\left|\partial_{x}(\rho^{\frac{\gamma-1}{4}}-1)^{2}\right|\leq\left|\frac{\gamma-1}{2}(\rho^{\frac{\gamma-1}{4}}-1)\rho^{\frac{\gamma-1}{4}}(\log\rho)_{x}\right|\leq\frac{\gamma-1}{2}(\rho^{\frac{\gamma-1}{2}}-\rho^{\frac{\gamma-1}{4}})^{2}r^{-4}+\frac{\gamma-1}{2}(\log\rho)_{x}^{2}r^{4}$
yields that $\|\rho^{\frac{\gamma-1}{4}}-1\|^{2}_{L^{\infty}}\leq
C(E_{0}+E_{1})$. Hence
$\|\rho\|_{L^{\infty}}\leq\left(1+C(E_{0}+E_{1})^{\frac{1}{2}}\right)^{\frac{4}{\gamma-1}},\;\|\rho^{-\gamma}\|_{L^{\infty}}\leq\left(1+C(E_{0}+E_{1})^{\frac{1}{2}}\right)^{4\gamma}.$
(4.7)
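The integrations behind these pointwise bounds can be sketched as follows, assuming $\rho\rightarrow 1$ as $x\rightarrow+\infty$: for the lower bound, integrating the pointwise inequality for $\partial_{x}(1-\rho^{-\frac{1}{4}})^{2}$ over $(x,+\infty)$ gives

```latex
(1-\rho^{-\frac{1}{4}})^{2}(x)
\leq\frac{1}{4}\int_{0}^{\infty}(\rho^{-\frac{1}{4}}-\rho^{-\frac{1}{2}})^{2}r^{-4}\,dx
+\frac{1}{4}\int_{0}^{\infty}(\log\rho)_{x}^{2}r^{4}\,dx
\leq C\,\|r^{-4}\|_{L^{\infty}}E_{0}
+\frac{1}{2\mu^{2}}\int_{0}^{\infty}\Big[\big(u+\mu r^{2}(\log\rho)_{x}\big)^{2}+u^{2}\Big]dx,
```

where the first term uses $(\rho^{-\frac{1}{4}}-\rho^{-\frac{1}{2}})^{2}\leq CH(\rho)$, and the last integral is bounded by $C(E_{0}+E_{1})$ by the definitions of $E_{0}$ and $E_{1}$.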
Note that $(u,\;\rho,\;R)$ also satisfies (2.10), and thus
$(\rho|_{x=0}R^{2})^{-\gamma}$ can be represented by (2.11). Let $S(t)$ be as
in Lemma 2.4. Since $|R-1|^{2}\leq CP(R)\leq C\delta$, it holds for
sufficiently small $\delta$ that
$\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}\geq\frac{Ca}{2}-C|R-1|\geq\frac{Ca}{2}-C\delta^{\frac{1}{2}}>0,$
(4.8)
which guarantees that $\int_{0}^{\infty}S(t)dt<+\infty$. Introduce the
notation
$\mathfrak{R}(t)=\frac{Ca}{2}R^{-2\gamma}\left[\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}\right]^{-1}(t).$
It then follows from (4.2) that $(\mathfrak{R}-1)^{2}\leq CE_{0}$. Integrating
by parts in (2.11) yields that
$\displaystyle(\rho|_{x=0}R^{2})^{-\gamma}=$
$\displaystyle\mathfrak{R}(t)+\left((\rho_{0}|_{x=0}R_{0}^{2})^{-\gamma}-\mathfrak{R}(0)\right)S(t)-\int_{0}^{t}S(t-\tau)\frac{d\mathfrak{R}}{dt}(\tau)d\tau$
(4.9) $\displaystyle=$
$\displaystyle\mathfrak{R}(t)+\left((\rho_{0}|_{x=0}R_{0}^{2})^{-\gamma}-\mathfrak{R}(t)\right)S(t)$
$\displaystyle+\frac{\gamma}{\mu}\int_{0}^{t}(\mathfrak{R}(\tau)-\mathfrak{R}(t))\left[\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}\right]S(t-\tau)d\tau.$
Hence $1-C\delta^{\frac{1}{2}}\leq(\rho|_{x=0}R^{2})^{-\gamma}\leq
1+C\delta^{\frac{1}{2}}$ for all $t>0$ in view of the integrability and
boundedness of $S(t)$. Since
$\mu(\rho
r^{2}u_{x})|_{x=0}=-\mu(\rho^{-1}\rho_{t}-2ur^{-1})|_{x=0}=\frac{\mu}{\gamma}(\rho|_{x=0}R^{2})^{\gamma}\partial_{t}(\rho|_{x=0}R^{2})^{-\gamma},$
and from (2.10) and (4.9) that
$\displaystyle\frac{\mu}{\gamma}(\rho|_{x=0}R^{2})^{\gamma}\partial_{t}(\rho|_{x=0}R^{2})^{-\gamma}$
$\displaystyle=$
$\displaystyle-(\rho|_{x=0}R^{2})^{\gamma}\left[\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}\right]\left((\rho|_{x=0}R^{2})^{-\gamma}-\mathfrak{R}(t)\right)$
$\displaystyle=$
$\displaystyle-(\rho|_{x=0}R^{2})^{\gamma}\left[\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}\right]\left((\rho_{0}|_{x=0}R_{0}^{2})^{-\gamma}-\mathfrak{R}(0)\right)S(t)$
$\displaystyle+(\rho|_{x=0}R^{2})^{\gamma}\left[\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}\right]\int_{0}^{t}S(t-\tau)\frac{d\mathfrak{R}}{dt}(\tau)d\tau,$
it follows that
$\left|\mu(\rho r^{2}u_{x})|_{x=0}R^{2}\frac{dR}{dt}\right|\leq
C\left((E_{0}(0)+E_{1}(0))^{\frac{1}{2}}S(t)\left|\frac{dR}{dt}\right|+\int_{0}^{t}S(t-\tau)\left|\frac{d\mathfrak{R}}{dt}\right|(\tau)d\tau\left|\frac{dR}{dt}\right|\right),$
(4.10)
where we used that
$\left|(\rho_{0}|_{x=0}R_{0}^{2})^{-\gamma}-\mathfrak{R}(0)\right|\leq
C(E_{0}(0)+E_{1}(0))^{\frac{1}{2}}$ since $E_{0}(0)+E_{1}(0)\lesssim 1$.
Moreover, (4.8) implies
$S(t)\leq\exp\left\\{-\frac{\gamma}{\mu}\left(\frac{Ca}{2}-C\delta^{\frac{1}{2}}\right)t\right\\}$,
and thus $\int_{0}^{\infty}S(t)\left|\frac{dR}{dt}\right|dt\leq
C\left(\int_{0}^{\infty}\left|\frac{dR}{dt}\right|^{2}dt\right)^{\frac{1}{2}}$.
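The last bound is Cauchy–Schwarz together with the square-integrability of $S$, which follows from the exponential bound above:

```latex
\[
\int_{0}^{\infty}S(t)\left|\frac{dR}{dt}\right|dt
\leq\|S\|_{L^{2}}\left\|\frac{dR}{dt}\right\|_{L^{2}},
\qquad
\|S\|_{L^{2}}^{2}
\leq\int_{0}^{\infty}\exp\left\{-\frac{2\gamma}{\mu}\left(\frac{Ca}{2}-C\delta^{\frac{1}{2}}\right)t\right\}dt<+\infty.
\]
```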
Using Young’s inequality, it holds that
$\int_{0}^{\infty}\int_{0}^{t}S(t-\tau)\left|\frac{d\mathfrak{R}}{dt}\right|(\tau)d\tau\left|\frac{dR}{dt}\right|(t)dt\leq
C\int_{0}^{\infty}\left|\frac{dR}{dt}\right|^{2}dt.$
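In more detail, with $*$ denoting the convolution on $(0,\infty)$, Cauchy–Schwarz and Young's convolution inequality $\|S*f\|_{L^{2}}\leq\|S\|_{L^{1}}\|f\|_{L^{2}}$ give the following chain; here we use $\left|\frac{d\mathfrak{R}}{dt}\right|\leq C\left|\frac{dR}{dt}\right|$, which holds because $\mathfrak{R}$ is a smooth function of $R$ alone, the bracket in its definition stays bounded away from zero by (4.8), and $\frac{dR}{dt}=u|_{x=0}$:

```latex
\[
\int_{0}^{\infty}\left(S*\left|\frac{d\mathfrak{R}}{dt}\right|\right)(t)\left|\frac{dR}{dt}\right|(t)\,dt
\leq\left\|S*\left|\frac{d\mathfrak{R}}{dt}\right|\right\|_{L^{2}}\left\|\frac{dR}{dt}\right\|_{L^{2}}
\leq\|S\|_{L^{1}}\left\|\frac{d\mathfrak{R}}{dt}\right\|_{L^{2}}\left\|\frac{dR}{dt}\right\|_{L^{2}}
\leq C\left\|\frac{dR}{dt}\right\|_{L^{2}}^{2}.
\]
```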
Integrating (4.10) in $t$, we obtain the following control of the boundary
term $\mu(\rho r^{2}u_{x})|_{x=0}R^{2}\frac{dR}{dt}$:
$\displaystyle\int_{0}^{\infty}\left|\mu(\rho
r^{2}u_{x})|_{x=0}R^{2}\frac{dR}{dt}\right|dt$ (4.11) $\displaystyle\leq$
$\displaystyle
C\left[(E_{0}(0)+E_{1}(0))^{\frac{1}{2}}\left(\int_{0}^{\infty}\left|\frac{dR}{dt}\right|^{2}dt\right)^{\frac{1}{2}}+\int_{0}^{\infty}\left|\frac{dR}{dt}\right|^{2}dt\right]$
$\displaystyle\leq$ $\displaystyle
C\left[(E_{0}(0)+E_{1}(0))^{\frac{1}{2}}\left(\int_{0}^{\infty}\|u^{2}r\|_{L^{\infty}}dt\right)^{\frac{1}{2}}+\int_{0}^{\infty}\|u^{2}r\|_{L^{\infty}}dt\right].$
Now conclude from (4.4)–(4.7) that
$\displaystyle\frac{d}{dt}E_{1}+\frac{Ca}{2}\frac{2\mu}{\gamma}\int_{0}^{\infty}\left(r^{2}(\rho^{\frac{\gamma}{2}})_{x}\right)^{2}dx$
(4.12) $\displaystyle\leq$ $\displaystyle\left|\mu(\rho
r^{2}u_{x})|_{x=0}R^{2}\frac{dR}{dt}\right|+C\|u^{2}r\|_{L^{\infty}}\left(1+(E_{0}+E_{1})^{2\gamma}\right)\left(\int_{0}^{\infty}u^{2}dx\right)$
$\displaystyle+C\|u^{2}r\|_{L^{\infty}}\left(1+(E_{0}+E_{1})^{2\gamma}\right)\left(\int_{0}^{\infty}(\mu(\log\rho)_{x}r^{2})^{2}dx\right)$
$\displaystyle\leq$ $\displaystyle\left|\mu(\rho
r^{2}u_{x})|_{x=0}R^{2}\frac{dR}{dt}\right|+C\|u^{2}r\|_{L^{\infty}}\left(1+(E_{0}+E_{1})^{2\gamma}\right)(E_{0}+E_{1}).$
In view of (4.3), (4.11), $\frac{d}{dt}E_{0}\leq 0$, and
$E_{0}(0)+E_{1}(0)\leq\delta\lesssim 1$, adding $\frac{d}{dt}E_{0}$ to the
left-hand side of (4.12) and using Gronwall’s inequality yields
$E_{0}(t)+E_{1}(t)\leq
C(E_{0}(0)+E_{1}(0)),\;\int_{0}^{\infty}\int_{0}^{\infty}\left(r^{2}(\rho^{\frac{\gamma}{2}})_{x}\right)^{2}dxdt\leq
C(E_{0}(0)+E_{1}(0)).$
Finally, using (4.7) together with $E_{0}(0)+E_{1}(0)\leq\delta\lesssim 1$,
$\rho$ has uniform bounds from above and below in time as given in (iii). ∎
###### Remark 4.4.
From the proof of Lemma 4.3, one can see that the restriction
$E_{0}(0)+E_{1}(0)\leq\delta\lesssim 1$ is due to the requirement that
$\int_{0}^{\infty}S(t)dt<+\infty$ and to the applicability of
Gronwall’s inequality to (4.12).
In order to derive the viscous damping, it remains to establish the energy
estimates for the derivatives of the solution. To this end, we have the
following two energy identities.
###### Lemma 4.5 (Energy identities for 1-order derivatives).
Define for $t>0$,
$E_{2}(t)=\frac{1}{2}\int_{0}^{\infty}u_{t}^{2}dx+\frac{Ca}{2}\frac{2\gamma}{(\gamma-1)^{2}}\int_{0}^{\infty}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx+\left[\frac{3\gamma_{0}}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)-\frac{1}{We}\right]\left(\frac{dR}{dt}\right)^{2},$
$E_{3}(t)=\frac{1}{2}\int_{0}^{\infty}\left(u_{t}+\mu(\log\rho)_{xt}r^{2}\right)^{2}dx+\frac{Ca}{2}\frac{2\gamma}{(\gamma-1)^{2}}\int_{0}^{\infty}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx.$
Then
$\displaystyle\frac{d}{dt}E_{2}(t)+\mu\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\mu\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx=-2\frac{Ca}{2}\int_{0}^{\infty}(\rho^{\gamma})_{x}ruu_{t}dx$
(4.13)
$\displaystyle+\frac{Ca}{2}\frac{\gamma(\gamma+1)}{2}\int_{0}^{\infty}\rho^{\gamma-4}\rho_{t}^{3}dx+4\gamma\frac{Ca}{2}\int_{0}^{\infty}\rho^{\gamma-3}\rho_{t}^{2}\frac{u}{r}dx+6\gamma\frac{Ca}{2}\int_{0}^{\infty}\rho^{\gamma-2}\rho_{t}\frac{u^{2}}{r^{2}}dx$
$\displaystyle-\mu\int_{0}^{\infty}\rho_{t}(r^{2}u)_{x}(r^{2}u_{t})_{x}dx-\mu\int_{0}^{\infty}\rho\left((r^{2})_{t}u\right)_{x}(r^{2}u_{t})_{x}dx+\mu\int_{0}^{\infty}\left(\rho(r^{2}u)_{x}\right)_{x}(r^{2})_{t}u_{t}dx$
$\displaystyle+2\mu(u^{2}u_{t})|_{x=0}-3\gamma_{0}\left(\frac{Ca}{2}+\frac{2}{We}\right)(R^{-3\gamma_{0}+1}-1)\frac{dR}{dt}\frac{d^{2}R}{dt^{2}}.$
and
$\displaystyle\frac{d}{dt}E_{3}(t)+\frac{Ca}{2}\mu\gamma\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx=-2\frac{Ca}{2}\int_{0}^{\infty}(\rho^{\gamma})_{x}ruu_{t}dx+\frac{Ca}{2}(\rho^{\gamma})_{t}|_{x=0}R^{2}\frac{d^{2}R}{dt^{2}}$
(4.14)
$\displaystyle+\frac{Ca}{2}\frac{\gamma(\gamma+1)}{2}\int_{0}^{\infty}\rho^{\gamma-4}\rho_{t}^{3}dx+4\gamma\frac{Ca}{2}\int_{0}^{\infty}\rho^{\gamma-3}\rho_{t}^{2}\frac{u}{r}dx+6\gamma\frac{Ca}{2}\int_{0}^{\infty}\rho^{\gamma-2}\rho_{t}\frac{u^{2}}{r^{2}}dx$
$\displaystyle-2\frac{Ca}{2}\mu\int_{0}^{\infty}(\rho^{\gamma})_{x}r^{3}u(\log\rho)_{xt}dx-\frac{Ca}{2}\mu\gamma\int_{0}^{\infty}r^{4}(\rho^{\gamma})_{x}(\log\rho)_{t}(\log\rho)_{xt}dx.$
###### Proof.
To derive (4.13), we differentiate (1.15) with respect to $t$ and multiply the
resulting equation by $u_{t}$. Using integration by parts, equation (1.14) and
(1.18), the term involving pressure is
$\displaystyle\left[(\rho^{\gamma})_{x}r^{2}\right]_{t}u_{t}=$
$\displaystyle(\rho^{\gamma})_{x}(r^{2})_{t}u_{t}+(\rho^{\gamma})_{xt}r^{2}u_{t}$
(4.15) $\displaystyle=$
$\displaystyle\left[(\rho^{\gamma})_{t}r^{2}u_{t}\right]_{x}-(\rho^{\gamma})_{t}(r^{2}u_{t})_{x}+2(\rho^{\gamma})_{x}ruu_{t}$
$\displaystyle=$
$\displaystyle\left[(\rho^{\gamma})_{t}r^{2}u_{t}\right]_{x}+\frac{2\gamma}{(\gamma-1)^{2}}\partial_{t}((\rho^{\frac{\gamma-1}{2}})_{t}^{2})$
$\displaystyle-\frac{\gamma(\gamma+1)}{2}\rho^{\gamma-4}\rho_{t}^{3}-4\gamma\rho^{\gamma-3}\rho_{t}^{2}\frac{u}{r}-6\gamma\rho^{\gamma-2}\rho_{t}\frac{u^{2}}{r^{2}}+2(\rho^{\gamma})_{x}ruu_{t},$
while the term involving viscosity satisfies
$\displaystyle\left[(\rho(r^{2}u)_{x})_{x}r^{2}\right]_{t}u_{t}=$
$\displaystyle\left[(\rho(r^{2}u)_{x})_{t}r^{2}u_{t}\right]_{x}-\left[\rho(r^{2}u)_{x}\right]_{t}(r^{2}u_{t})_{x}+(\rho(r^{2}u)_{x})_{x}(r^{2})_{t}u_{t}$
$\displaystyle=$ $\displaystyle\left[(\rho
r^{2}u_{x})_{t}r^{2}u_{t}+2ru_{t}^{2}-2u^{2}u_{t}\right]_{x}-\rho(r^{2}u_{t})_{x}^{2}$
$\displaystyle-\rho_{t}(r^{2}u)_{x}(r^{2}u_{t})_{x}-\rho((r^{2})_{t}u)_{x}(r^{2}u_{t})_{x}+(\rho(r^{2}u)_{x})_{x}(r^{2})_{t}u_{t}$
$\displaystyle=$ $\displaystyle\left[(\rho
r^{2}u_{x})_{t}r^{2}u_{t}\right]_{x}-2(u^{2}u_{t})_{x}-\rho(r^{2}u_{xt})^{2}-2\rho^{-1}\frac{u_{t}^{2}}{r^{2}}$
$\displaystyle-\rho_{t}(r^{2}u)_{x}(r^{2}u_{t})_{x}-\rho((r^{2})_{t}u)_{x}(r^{2}u_{t})_{x}+(\rho(r^{2}u)_{x})_{x}(r^{2})_{t}u_{t}.$
The boundary term, in view of (1.17), is
$\displaystyle\left.\left[-\frac{Ca}{2}(\rho^{\gamma})_{t}+\mu(\rho
r^{2}u_{x})_{t}\right]\right|_{x=0}(r^{2}u_{t})|_{x=0}$ $\displaystyle=$
$\displaystyle-\left[\left(\frac{Ca}{2}+\frac{2}{We}\right)R^{-3\gamma_{0}}-\frac{2}{We}R^{-1}\right]_{t}R^{2}\frac{d^{2}R}{dt^{2}}$
$\displaystyle=$
$\displaystyle\left[\frac{3\gamma_{0}}{2}\left(\frac{Ca}{2}+\frac{2}{We}\right)-\frac{1}{We}\right]\frac{d}{dt}\left(\frac{dR}{dt}\right)^{2}+3\gamma_{0}\left(\frac{Ca}{2}+\frac{2}{We}\right)(R^{-3\gamma_{0}+1}-1)\frac{dR}{dt}\frac{d^{2}R}{dt^{2}}.$
Then, integrating the equation over $(x,t)\in(0,+\infty)\times(0,T)$ and
applying the same argument as in Lemma 4.1 (using $\tilde{u}_{k}$ and (3.13)
to justify the integration by parts), we arrive at (4.13). To derive (4.14), first
note that
$\left((\rho^{\gamma})_{x}r^{2}\right)_{t}(\log\rho)_{xt}r^{2}=\gamma\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}+2(\rho^{\gamma})_{x}r^{3}u(\log\rho)_{xt}+\gamma
r^{4}(\rho^{\gamma})_{x}(\log\rho)_{t}(\log\rho)_{xt}.$
Then (4.14) follows from multiplying (2.28) by
$(u_{t}+\mu(\log\rho)_{xt}r^{2})$, integrating on $(0,+\infty)$ and (4.15). ∎
With the help of Lemma 4.5, one can establish controls for the derivatives of
the solution.
###### Lemma 4.6.
Suppose $E_{0}(0)$ and $E_{1}(0)$ satisfy the same assumption as in Lemma 4.3,
then there exists $C>0$ such that
(i) $E_{2}(t)+E_{3}(t)\leq C(E_{2}(0)+E_{3}(0))$ for any $t>0$,
(ii)
$\int_{0}^{\infty}\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dxdt+2\int_{0}^{\infty}\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dxdt\leq
C(E_{2}(0)+E_{3}(0))$,
(iii)
$\int_{0}^{\infty}\int_{0}^{\infty}r^{4}\rho^{\gamma}(\log\rho)_{xt}^{2}dxdt\leq
C(E_{2}(0)+E_{3}(0))$.
###### Proof.
Lemma 4.6 is established by applying Gronwall’s inequality to (4.13) and
(4.14); the proof therefore reduces to estimating the nonlinear
terms in (4.13) and (4.14). According to (iii) of Lemma 4.3 and
$E_{0}(0)+E_{1}(0)\lesssim 1$, $\rho$ is uniformly bounded in the sense that
$\rho\leq C$ and $\rho^{-1}\leq C$, which will be used throughout the proof.
Step 1. Control of the boundary terms.
Similar to Lemma 2.3, we have the $L^{\infty}$ control for $u_{t}$ that
$\|u_{t}^{2}r\|_{L^{\infty}}\leq\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx.$
(4.16)
From
$\left|(\rho^{\gamma})_{t}|_{x=0}\right|\leq
C\|(\log\rho)_{t}\|_{L^{\infty}}\leq
C\|\rho^{\frac{\gamma}{2}}r^{2}(\log\rho)_{xt}\|_{L^{2}}\|\rho^{-\frac{\gamma}{2}}r^{-2}\|_{L^{2}}\leq
C\|\rho^{\frac{\gamma}{2}}r^{2}(\log\rho)_{xt}\|_{L^{2}},$ (4.17)
it follows for $\lambda>0$ to be determined that the boundary term of (4.14)
can be bounded by
$\displaystyle\left|(\rho^{\gamma})_{t}|_{x=0}R^{2}\frac{d^{2}R}{dt^{2}}\right|\leq
C\left|(\rho^{\gamma})_{t}|_{x=0}\right|\|u_{t}r^{\frac{1}{2}}\|_{L^{\infty}}$
(4.18) $\displaystyle\leq$
$\displaystyle\lambda\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\lambda}\left(\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx\right).$
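The first two inequalities in (4.17) can be justified as follows (a sketch, assuming $(\log\rho)_{t}\to 0$ as $x\to\infty$). Writing $(\log\rho)_{t}(x)=-\int_{x}^{\infty}(\log\rho)_{yt}\,dy$ and applying Cauchy–Schwarz,

```latex
\[
|(\log\rho)_{t}(x)|
\leq\int_{0}^{\infty}\rho^{\frac{\gamma}{2}}r^{2}\left|(\log\rho)_{yt}\right|\,\rho^{-\frac{\gamma}{2}}r^{-2}\,dy
\leq\|\rho^{\frac{\gamma}{2}}r^{2}(\log\rho)_{xt}\|_{L^{2}}\,\|\rho^{-\frac{\gamma}{2}}r^{-2}\|_{L^{2}},
\]
```

and $\|\rho^{-\frac{\gamma}{2}}r^{-2}\|_{L^{2}}$ is finite by the uniform bounds on $\rho$ and $r\geq R\geq c$: in the Lagrangian mass coordinate (with $\partial_{x}r=(\rho r^{2})^{-1}$, as is standard in this setting), $\int_{0}^{\infty}\rho^{-\gamma}r^{-4}\,dx$ is comparable to $\int_{R}^{\infty}r^{-2}\,dr<+\infty$.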
Using $\frac{dR}{dt}=u|_{x=0}$, $r\geq R$, and $C\geq R\geq c$, one has the
control for the nonlinear boundary terms in (4.13) that
$\displaystyle\left|(u^{2}u_{t})|_{x=0}\right|\leq\|u_{t}\|_{L^{\infty}}u^{2}|_{x=0}\leq\epsilon\left(\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx\right)+C_{\epsilon}\|u^{2}r\|_{L^{\infty}}\left(\frac{dR}{dt}\right)^{2}.$
(4.19)
Next, using the boundary condition (1.16),
$\mu\partial_{x}\left(\rho(r^{2}u)_{x}\right)=\frac{u_{t}}{r^{2}}+\frac{Ca}{2}(\rho^{\gamma})_{x}$
and $\left|\frac{dR}{dt}\right|=\left|u|_{x=0}\right|\leq\|u\|_{L^{\infty}}$,
we have
$\displaystyle\left|\left(R^{-3\gamma_{0}+1}-1\right)\frac{dR}{dt}\frac{d^{2}R}{dt^{2}}\right|$
(4.20) $\displaystyle\leq$
$\displaystyle\epsilon\|u_{t}\|_{L^{\infty}}^{2}+C_{\epsilon}\left(\frac{dR}{dt}\right)^{2}(R-1)^{2}$
$\displaystyle\leq$
$\displaystyle\epsilon\|u_{t}\|_{L^{\infty}}^{2}+C_{\epsilon}\left.\left(\frac{dR}{dt}\right)^{2}\left(\frac{Ca}{2}(\rho^{\gamma}-1)-\mu\rho
r^{2}u_{x}\right)^{2}\right|_{x=0}$ $\displaystyle\leq$
$\displaystyle\epsilon\|u_{t}\|_{L^{\infty}}^{2}+C_{\epsilon}\left(\frac{dR}{dt}\right)^{2}\left(\|\rho-1\|_{L^{\infty}}^{2}+\|\rho(r^{2}u)_{x}\|_{L^{\infty}}^{2}+\|u^{2}r\|_{L^{\infty}}\right)$
$\displaystyle\leq$
$\displaystyle\epsilon\|u_{t}\|_{L^{\infty}}^{2}+C_{\epsilon}\left(\frac{dR}{dt}\right)^{2}\left(\|(\rho^{\frac{\gamma}{2}})_{x}r^{2}\|_{L^{2}}^{2}+\|u_{t}\|_{L^{2}}^{2}+\|u^{2}r\|_{L^{\infty}}\right)$
$\displaystyle\leq$
$\displaystyle\epsilon\left(\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx\right)+C_{\epsilon}\left(\frac{dR}{dt}\right)^{2}\left(\|(\rho^{\frac{\gamma}{2}})_{x}r^{2}\|_{L^{2}}^{2}+\|u^{2}r\|_{L^{\infty}}\right)$
$\displaystyle+C_{\epsilon}\|u^{2}r\|_{L^{\infty}}\|u_{t}\|_{L^{2}}^{2}.$
Step 2. The remaining terms in (4.13).
We begin with the estimates of the terms with coefficient $\frac{Ca}{2}$.
Using (4.16) and Hölder’s inequality, one finds
$\displaystyle\left|\int_{0}^{\infty}(\rho^{\gamma})_{x}ruu_{t}dx\right|\leq$
$\displaystyle\epsilon\|u_{t}^{2}r\|_{L^{\infty}}+C_{\epsilon}\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dx\int_{0}^{\infty}(\rho^{\gamma})_{x}^{2}r^{4}dx$
(4.21) $\displaystyle\leq$
$\displaystyle\epsilon\left(\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx\right)$
$\displaystyle+C_{\epsilon}\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dx\int_{0}^{\infty}\left(u_{t}+\mu(\log\rho)_{xt}r^{2}\right)^{2}dx.$
Using (4.17) and (1.14), we find
$\displaystyle\left|\int_{0}^{\infty}\rho^{\gamma-4}\rho_{t}^{3}dx\right|$
(4.22) $\displaystyle\leq$
$\displaystyle\epsilon\|(\log\rho)_{t}\|_{L^{\infty}}^{2}+C_{\epsilon}\left(\int_{0}^{\infty}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx\right)^{2}$
$\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}\left(\int_{0}^{\infty}\rho(r^{2}u_{x})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dx\right)\int_{0}^{\infty}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx,$
and similarly
$\displaystyle\left|\int_{0}^{\infty}\rho^{\gamma-3}\rho_{t}^{2}\frac{u}{r}dx\right|\leq\epsilon\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dx\int_{0}^{\infty}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx.$
(4.23)
Using $c\leq R\leq C$ and the equation
$2(ru^{2})|_{x=0}+\int_{0}^{\infty}\rho(r^{2}u)_{x}^{2}dx=\int_{0}^{\infty}\rho(r^{2}u_{x})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dx,$
we have the inequality
$\displaystyle\int_{0}^{\infty}\rho(r^{2}u_{x})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dx\leq$
$\displaystyle
C\left(\int_{0}^{\infty}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx+\left(\frac{dR}{dt}\right)^{2}\right),$
(4.24)
and thus the last term is controlled by
$\displaystyle\left|\int_{0}^{\infty}\rho^{\gamma-2}\rho_{t}\frac{u^{2}}{r^{2}}dx\right|$
(4.25) $\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}\left(\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dx\right)^{2}$
$\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}\left(\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dx\right)\left(\int_{0}^{\infty}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx+\left(\frac{dR}{dt}\right)^{2}\right).$
For the terms with coefficient $\mu$, first note that by
$\mu\partial_{x}(\rho(r^{2}u)_{x})=\frac{u_{t}}{r^{2}}+\frac{Ca}{2}(\rho^{\gamma})_{x}$,
one has
$\|(\log\rho)_{t}\|^{2}_{L^{\infty}}=\|\rho(r^{2}u)_{x}\|^{2}_{L^{\infty}}\leq
C\left(\|u_{t}\|_{L^{2}}^{2}+\|(\rho^{\gamma})_{x}r^{2}\|_{L^{2}}^{2}\right).$
(4.26)
Then, using (4.26), it holds that
$\displaystyle\left|\int_{0}^{\infty}\rho_{t}(r^{2}u)_{x}(r^{2}u_{t})_{x}dx\right|$
(4.27) $\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{\infty}\rho(r^{2}u_{t})_{x}^{2}dx+C_{\epsilon}\|\rho(r^{2}u)_{x}\|_{L^{\infty}}^{2}\int_{0}^{\infty}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx$
$\displaystyle\leq$
$\displaystyle\epsilon\left(\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx\right)+C_{\epsilon}\left(\|u_{t}\|_{L^{2}}^{2}+\|(\rho^{\gamma})_{x}r^{2}\|_{L^{2}}^{2}\right)\int_{0}^{\infty}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx$
$\displaystyle\leq$
$\displaystyle\epsilon\left(\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx\right)+C_{\epsilon}\int_{0}^{\infty}(\rho^{\frac{\gamma}{2}})_{x}^{2}r^{4}dx\int_{0}^{\infty}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx$
$\displaystyle+C_{\epsilon}\left(\int_{0}^{\infty}\rho(r^{2}u_{x})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dx\right)\int_{0}^{\infty}u_{t}^{2}dx.$
Using $r_{t}=u$, (4.24) and
$(ru^{2})_{x}=-3\rho^{-1}\frac{u^{2}}{r^{2}}+2\frac{u}{r}(r^{2}u)_{x}$, it
follows that
$\displaystyle\left|\int_{0}^{\infty}\rho((r^{2})_{t}u)_{x}(r^{2}u_{t})_{x}dx\right|$
(4.28) $\displaystyle\leq$
$\displaystyle\epsilon\left(\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx\right)+C_{\epsilon}\int_{0}^{\infty}\rho(ru^{2})_{x}^{2}dx$
$\displaystyle\leq$
$\displaystyle\epsilon\left(\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx\right)+C_{\epsilon}\|u^{2}r\|_{L^{\infty}}\left(\int_{0}^{\infty}\rho(r^{2}u)_{x}^{2}dx+\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dx\right)$
$\displaystyle\leq$
$\displaystyle\epsilon\left(\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx\right)+C_{\epsilon}\|u^{2}r\|_{L^{\infty}}\left(\int_{0}^{\infty}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dx+\left(\frac{dR}{dt}\right)^{2}\right).$
Using equation (1.14), the last term can be controlled directly:
$\displaystyle\left|\int_{0}^{\infty}\left(\rho(r^{2}u)_{x}\right)_{x}(r^{2})_{t}u_{t}dx\right|=$
$\displaystyle
2\left|\int_{0}^{\infty}(\log\rho)_{xt}r^{2}\frac{u}{r}u_{t}dx\right|$ (4.29)
$\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}\int_{0}^{\infty}\frac{u^{2}}{r^{2}}u_{t}^{2}dx$
$\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}\|u^{2}r\|_{L^{\infty}}\int_{0}^{\infty}u_{t}^{2}dx.$
Step 3. The remaining terms in (4.14).
Since the terms with coefficient $\frac{Ca}{2}$ are the same as in (4.13), it
suffices to estimate the remaining two terms. Noting that
$\frac{Ca}{2}(\rho^{\gamma})_{x}r^{2}=u_{t}+\mu r^{2}(\log\rho)_{xt}$, it
holds by (4.26) that
$\displaystyle\left|\int_{0}^{\infty}r^{4}(\rho^{\gamma})_{x}(\log\rho)_{t}(\log\rho)_{xt}dx\right|$
(4.30) $\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}\|(\log\rho)_{t}\|_{L^{\infty}}^{2}\int_{0}^{\infty}(\rho^{\gamma})_{x}^{2}r^{4}dx$
$\displaystyle\leq$
$\displaystyle\epsilon\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}\int_{0}^{\infty}(\rho^{\frac{\gamma}{2}})_{x}^{2}r^{4}dx\left(\int_{0}^{\infty}u_{t}^{2}dx+\int_{0}^{\infty}\left(u_{t}+\mu
r^{2}(\log\rho)_{xt}\right)^{2}dx\right).$
The other term can be bounded directly:
$\displaystyle\left|\int_{0}^{\infty}(\rho^{\gamma})_{x}r^{3}u(\log\rho)_{xt}dx\right|\leq\epsilon\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx+C_{\epsilon}\|u^{2}r\|_{L^{\infty}}\int_{0}^{\infty}\left(u_{t}+\mu
r^{2}(\log\rho)_{xt}\right)^{2}dx.$ (4.31)
Now choose $\lambda>0$ and $A>0$ such that
$\lambda\leq\frac{\mu\gamma}{3}$ and $A\geq 3\mu^{-1}\frac{Ca}{2}C_{\lambda}$.
Then by collecting (4.18)-(4.31), we find that
$\displaystyle\frac{d}{dt}(AE_{2}+E_{3})+\frac{2A\mu}{3}\left(\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx\right)+\frac{Ca}{3}\mu\gamma\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx$
(4.32) $\displaystyle\leq$
$\displaystyle\epsilon(A+1)\left(\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx+\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx\right)$
$\displaystyle+C_{\epsilon}(A+1)\left(\int_{0}^{\infty}\rho(r^{2}u)_{x}^{2}dx+\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dx+\int_{0}^{\infty}r^{4}(\rho^{\frac{\gamma}{2}})_{x}^{2}dx\right)(AE_{2}+E_{3}).$
Choose $\epsilon>0$ small such that
$\epsilon(A+1)\leq\min\left\\{\frac{A\mu}{3},\frac{Ca}{6}\mu\gamma\right\\}$.
Then (4.32) becomes
$\displaystyle\frac{d}{dt}(AE_{2}+E_{3})+\frac{A\mu}{3}\left(\int_{0}^{\infty}\rho(r^{2}u_{xt})^{2}dx+2\int_{0}^{\infty}\rho^{-1}\frac{u_{t}^{2}}{r^{2}}dx\right)+\frac{Ca}{6}\mu\gamma\int_{0}^{\infty}\rho^{\gamma}r^{4}(\log\rho)_{xt}^{2}dx$
(4.33) $\displaystyle\leq$ $\displaystyle
C\left(\int_{0}^{\infty}\rho(r^{2}u)_{x}^{2}dx+\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dx+\int_{0}^{\infty}r^{4}(\rho^{\frac{\gamma}{2}})_{x}^{2}dx\right)(AE_{2}+E_{3}).$
Hence (i)–(iii) follow from applying Gronwall’s inequality to (4.33) in
view of Lemma 4.1 and Lemma 4.3. ∎
###### Corollary 4.7.
Suppose $E_{0}(0)$ and $E_{1}(0)$ satisfy the same assumptions as in Lemma
4.3. Then there exists $C>0$ such that
(i) $\int_{0}^{\infty}E_{3}(t)dt\leq C(E_{0}(0)+E_{1}(0))$,
(ii) $\int_{0}^{\infty}E_{2}(t)dt\leq C(E_{0}(0)+E_{1}(0)+E_{2}(0)+E_{3}(0))$.
###### Proof.
Equation (1.15) yields that
$\int_{0}^{\infty}\int_{0}^{\infty}(u_{t}+\mu
r^{2}(\log\rho)_{xt})^{2}dxdt=\left(\frac{Ca}{2}\right)^{2}\int_{0}^{\infty}\int_{0}^{\infty}(\rho^{\gamma})_{x}^{2}r^{4}dxdt.$
Hence by (ii)(iii) of Lemma 4.3,
$\int_{0}^{\infty}\int_{0}^{\infty}(u_{t}+\mu
r^{2}(\log\rho)_{xt})^{2}dxdt\leq
C\int_{0}^{\infty}\int_{0}^{\infty}(\rho^{\frac{\gamma}{2}})_{x}^{2}r^{4}dxdt\leq
C(E_{0}(0)+E_{1}(0)).$ (4.34)
Meanwhile, using Lemma 4.1 and $\rho\approx 1$ again, it follows from
$\int_{0}^{\infty}\int_{0}^{\infty}\rho(r^{2}u)_{x}^{2}dxdt\leq
C\left(\int_{0}^{\infty}\int_{0}^{\infty}\rho(r^{2}u_{x})^{2}dxdt+2\int_{0}^{\infty}\int_{0}^{\infty}\rho^{-1}\frac{u^{2}}{r^{2}}dxdt\right)$
that
$\displaystyle\int_{0}^{\infty}\int_{0}^{\infty}(\rho^{\frac{\gamma-1}{2}})_{t}^{2}dxdt\leq
C\int_{0}^{\infty}\int_{0}^{\infty}\rho(r^{2}u)_{x}^{2}dxdt\leq CE_{0}(0).$
(4.35)
(i) is a direct consequence of (4.34) and (4.35). To show (ii), write
$u_{t}=(u_{t}+\mu r^{2}(\log\rho)_{xt})-\mu r^{2}(\log\rho)_{xt}$. Then (ii)
follows from (4.34), (4.35), (iii) of Lemma 4.6 and $\rho\approx 1$. ∎
To finish the proof of Theorem 1.3, we need the following lemma.
###### Lemma 4.8.
Suppose that $E(t)>0$ and $\alpha(t)>0$ are such that
$\int_{0}^{\infty}E(t)dt<+\infty$, $\int_{0}^{\infty}\alpha dt<+\infty$,
$E(0)<+\infty$ and $\frac{d}{dt}E\leq\alpha E$. Then there exists a constant
$C>0$ such that $E(t)\leq C(1+t)^{-1}$.
###### Proof.
A direct computation gives
$\frac{d}{dt}\left((1+t)E\right)\leq\alpha(1+t)E+E$. Then Gronwall’s
inequality yields that $(1+t)E(t)\leq\exp(\int_{0}^{\infty}\alpha
dt)\left(E(0)+\int_{0}^{\infty}Edt\right)$. ∎
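Written out, the Gronwall computation in the proof reads:

```latex
\[
\frac{d}{dt}\big[(1+t)E\big]=E+(1+t)\frac{dE}{dt}\leq\alpha\,(1+t)E+E,
\]
\[
(1+t)E(t)\leq e^{\int_{0}^{t}\alpha\,ds}\left(E(0)+\int_{0}^{t}E(\tau)\,d\tau\right)
\leq e^{\int_{0}^{\infty}\alpha\,dt}\left(E(0)+\int_{0}^{\infty}E\,dt\right)<+\infty.
\]
```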
Proof of Theorem 1.3. With the help of Lemma 4.1, Lemma 4.3 and Corollary 4.7,
applying Lemma 4.8 to (4.33) yields that
$\|u_{t}\|_{L^{2}}^{2}+\|r^{2}\rho_{x}\|_{L^{2}}^{2}+\|\rho_{t}\|_{L^{2}}^{2}+\left(\frac{dR}{dt}\right)^{2}\leq
C(1+t)^{-1}.$
Hence the decay of $\|r^{2}\rho_{x}\|_{L^{2}}$, $\|r^{2}u_{x}\|_{L^{2}}$ and
$\left\|\frac{u}{r}\right\|_{L^{2}}$ follows from (4.24). To prove (1.22), it
remains to show $(R-1)^{2}\lesssim(1+t)^{-1}$. In view of $R\approx 1$ and
$\rho\approx 1$, (1.17) gives that
$(R-1)^{2}\leq
C\left(\|\rho-1\|_{L^{\infty}}^{2}+\|(\log\rho)_{t}\|_{L^{\infty}}^{2}+\left(\frac{dR}{dt}\right)^{2}\right).$
The first two terms on the right-hand side are controlled by using Sobolev
embedding $\|\rho-1\|_{L^{\infty}}^{2}\lesssim\|r^{2}\rho_{x}\|_{L^{2}}^{2}$
and
$\|(\log\rho)_{t}\|_{L^{\infty}}^{2}\lesssim\|r^{2}(\log\rho)_{xt}\|_{L^{2}}^{2}\lesssim\|r^{2}\rho_{x}\|_{L^{2}}^{2}+\|u_{t}\|_{L^{2}}^{2}$.
Since $\|r^{2}\rho_{x}\|_{L^{2}}^{2}$, $\|u_{t}\|_{L^{2}}^{2}$ and
$\left(\frac{dR}{dt}\right)^{2}$ all decay at the rate $(1+t)^{-1}$, it follows
that $(R-1)^{2}\lesssim(1+t)^{-1}$.
# Investigating the transition form factors of
$\Lambda_{b}\to\Lambda_{c}(2625)$ and $\Xi_{b}\to\Xi_{c}(2815)$ and the
corresponding weak decays with support from baryon spectroscopy
Yu-Shuai Li1,2 and Xiang Liu1,2,3 1School
of Physical Science and Technology, Lanzhou University, Lanzhou 730000, China
2Research Center for Hadron and CSR Physics, Lanzhou University and Institute
of Modern Physics of CAS, Lanzhou 730000, China
3Lanzhou Center for Theoretical Physics, Key Laboratory of Theoretical Physics
of Gansu Province, and Frontiers Science Center for Rare Isotopes, Lanzhou
University, Lanzhou 730000, China
###### Abstract
We calculate the form factors of the $\Lambda_{b}\to\Lambda_{c}(2625)$ and
$\Xi_{b}\to\Xi_{c}(2815)$ transitions, and additionally evaluate the
corresponding semileptonic decays and the color-allowed two-body nonleptonic
decays. To obtain these form factors, we use the three-body
light-front quark model with support from baryon spectroscopy. In this
work, as important physical inputs, the spatial wave functions of the relevant
baryons are obtained by the Gaussian expansion method with a semirelativistic
potential model. For the semileptonic processes, the branching ratios of the
electron and muon channels can reach the order of $1\%$,
where our result
$\mathcal{B}(\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)\mu^{-}\nu_{\mu})=(1.641\pm
0.113)\%$ is consistent with the current experimental data. As for the
nonleptonic processes, the decays to $\pi^{-}$, $\rho^{-}$, and $D_{s}^{(*)-}$
final states have considerable widths. These decay modes could be
accessible at the LHCb experiment.
## I Introduction
The investigation of bottom baryon weak decays is an active topic in heavy
flavor physics, which has drawn attention in both theory and experiment.
Owing to their large masses, bottom baryons possess abundant decay modes.
Bottom baryon decays therefore provide a superb platform to
test Quantum Chromodynamics (QCD) and to search for new physics beyond
the Standard Model (SM) by probing whether lepton flavor universality
(LFU) is violated BaBar:2012obs ; BaBar:2013mob ; Belle:2015qfa ;
LHCb:2015gmp ; Belle:2016dyj ; Belle:2019rba ; FermilabLattice:2021cdg .
Besides, such decays have also been helpful for discovering new exotic states, including the
hidden-charm pentaquark states $P_{c}(4312)$, $P_{c}(4380)$, $P_{c}(4440)$,
and $P_{c}(4457)$ in the $\Lambda_{b}\to J/\psi pK$ LHCb:2015yax ; LHCb:2019kea
mode, the $P_{c}(4337)$ in the $B_{s}^{0}\to J/\psi p\bar{p}$ LHCb:2021chn
mode, and the $P_{cs}(4459)$ in the $\Xi_{b}\to J/\psi\Lambda K$ LHCb:2020jpq
mode.
On the theoretical side, the decays of bottom baryons into the $J^{P}=1/2^{+}$
ground-state charmed baryons through both semileptonic and nonleptonic processes have
been widely studied via various theoretical approaches, including lattice QCD
(LQCD) Gottlieb:2003yb ; Detmold:2015aaa , QCD sum rules Huang:2005mea ;
Azizi:2018axf ; Zhao:2020mod , light-cone sum rules Wang:2009yma ;
Duan:2022uzm ; Miao:2022bga , and various phenomenological quark models
Pervin:2005ve ; Ebert:2006rp ; Ke:2007tg ; Gutsche:2015mxa ; Faustov:2016pal ;
Gutsche:2018nks ; Chua:2018lfa ; Ke:2019smy ; Chua:2019yqh ; Rahmani:2020kjd ;
Geng:2020ofy ; Li:2021qod ; Li:2021kfb . However, compared with the studies
mentioned above, the decays of bottom baryons into $P$-wave charmed baryons
have received far less attention. In the past years, several theoretical
groups have addressed this issue. For example, Pervin et
al. studied the semileptonic decays of $\Lambda_{b}$ into $\Lambda_{c}$ baryon
with $J^{P}=(1/2^{\pm},3/2^{-})$ by a constituent quark model with both
nonrelativistic and semirelativistic Hamiltonians Pervin:2005ve . Gutsche et
al. also studied the same channels by a covariant confined quark model (CCQM)
Gutsche:2018nks . The heavy quark spin symmetry (HQSS) was also applied to
estimate the semileptonic decays
$\Lambda_{b}\to\Lambda_{c}(2595,2625)\ell^{-}\nu_{\ell}$ Nieves:2019kdh .
Besides, Meinel and Rendon performed the first LQCD calculation of the
$\Lambda_{b}\to\Lambda_{c}(2595,2625)\ell^{-}\nu_{\ell}$ decays Meinel:2021rbm
; Meinel:2021mdj . For the nonleptonic processes, Chua calculated a series of color-allowed decays of bottom baryons into $P$-wave charmed baryons with the light-front quark model (LFQM) Chua:2019yqh . Liang et al. evaluated the nonleptonic $\Lambda_{b}\to\Lambda_{c}(2595,2625)\pi^{-}$ Liang:2016ydj and $\Lambda_{b}\to\Lambda_{c}(2595,2625)D_{s}^{-}$ Liang:2016ydj decays, as well as the semileptonic $\Lambda_{b}\to\Lambda_{c}(2625)\ell\nu_{\ell}$ decay Liang:2016exm , treating the $\Lambda_{c}(2595)$ and $\Lambda_{c}(2625)$ as resonances dynamically generated from the $DN$, $D^{*}N$ interaction and coupled channels Liang:2016ydj ; Liang:2016exm . Pavao et al. investigated the $\Xi_{b}^{-}\to\Xi_{c}^{0}(2815)\pi^{-}(D_{s}^{-})$ and $\Xi_{b}^{-}\to\Xi_{c}^{0}(2815)\ell^{-}\nu_{\ell}$ processes, treating the $\Xi_{c}(2815)$ as a resonance dynamically generated from the vector meson-baryon interaction Pavao:2017cpt .
In our previous work, we studied the form factors and the semileptonic decays into charmed baryons with $J^{P}=1/2^{-}$ within the LFQM Li:2021qod , with input supported by baryon spectroscopy. This treatment is different from that of Ref. Chua:2019yqh . We should point out that the results for these transitions differ when different frameworks are adopted Pervin:2005ve ; Liang:2016ydj ; Liang:2016exm ; Pavao:2017cpt ; Gutsche:2018nks ; Nieves:2019kdh ; Chua:2019yqh , a situation that should be clarified by further experimental measurements. In general, the issue of these weak transitions of bottom baryons is still open.
As a continuation of our study of the decays into charmed baryons with $J^{P}=1/2^{-}$ Li:2021qod , in this work we investigate the weak transitions involving the $J^{P}=3/2^{-}$ charmed baryons, namely the $\Lambda_{b}\to\Lambda_{c}(2625)$ and $\Xi_{b}\to\Xi_{c}(2815)$ processes, where the $\Lambda_{c}(2625)$ and $\Xi_{c}(2815)$ are treated as conventional $\lambda$-mode excited $P$-wave charmed baryons111In this work, we do not consider several other possible spin-3/2 $\Lambda_{c}$ and $\Xi_{c}$ resonances such as the $\Lambda_{c}(2860)$ LHCb:2017jym and $\Xi_{c}(2645)$ Belle:2016lhy , whose positive parity differs from that of the $\Lambda_{c}(2625)$ and $\Xi_{c}(2815)$. The $\Lambda_{c}(2860)$ and $\Xi_{c}(2645)$ are good candidates for $D$-wave Chen:2017aqm and $S$-wave charmed baryons Chen:2016iyi , respectively, while the discussed $\Lambda_{c}(2625)$ and $\Xi_{c}(2815)$ are assigned as $P$-wave charmed baryons.. This assignment is suitable since the measured masses of the $\Lambda_{c}(2625)$ and $\Xi_{c}(2815)$ can be reproduced by potential models Chen:2015kpa ; Guo:2019ytq ; Yu:2022ymb ; Li:2022xtj ; Capstick:1985xss
. Although we adopt an approach similar to that of Ref. Li:2021qod , we want to emphasize the improvements made in this work. First, the charmed baryons in the final state carry the $3/2^{-}$ quantum numbers, which makes the whole derivation framework more complicated. In particular, the production of $3/2^{-}$ charmed baryons has so far been studied much less than that of the low-lying charmed baryons, so our work is a timely investigation of this issue. Second, while Ref. Li:2021qod focused on the weak decays of the $\Lambda_{b}$ baryon into $1/2^{\pm}$ charmed baryons, in the present work we also study the $\Xi_{b}$ decays into $3/2^{-}$ charmed baryons, motivated by the experimental fact that $\Xi_{b}$ baryons are also copiously produced in $pp$ collisions at the Large Hadron Collider (LHC) LHCb:2019sxa . The present work is thus timely and may provide valuable hints for future experimental searches for these decays. In particular, with the high-luminosity upgrade of the LHC, the LHCb experiment will have great potential to explore these transitions.
As indicated in Ref. Li:2021qod , baryon spectroscopy provides important input for the spatial wave functions of the involved baryons when estimating the weak transition matrix elements or the corresponding form factors. In the realistic calculation of baryon spectroscopy, we adopt a genuine three-quark treatment, which differs from the quark-diquark approximation used in earlier theoretical works on weak decays Ke:2007tg ; Wang:2017mqp ; Zhao:2018zcb ; Zhao:2018mrg ; Chua:2018lfa ; Ke:2017eqo ; Zhu:2018jet ; Chua:2019yqh ; Wang:2022ias ; Zhao:2022vfr . With the support of baryon spectroscopy, the dependence of the results on the $\beta$ value, a parameter of the simple harmonic oscillator wave function, can be avoided, as indicated in Refs. Li:2021qod ; Li:2021kfb . More details of the derivation are introduced in the next section.
This paper is organized as follows. After the Introduction, the derivation of the eight transition form factors of the $\mathcal{B}_{b}(1/2^{+})\to\mathcal{B}_{c}(3/2^{-})$ process is given in Sec. II. To obtain the spatial wave functions of the involved baryons, we introduce the semirelativistic potential model and the Gaussian expansion method (GEM) in Sec. III. In Sec. IV we present the results for the form factors of the $\Lambda_{b}\to\Lambda_{c}(2625)$ and $\Xi_{b}\to\Xi_{c}(2815)$ transitions, and further evaluate the corresponding semileptonic decays and color-allowed two-body nonleptonic decays. Finally, the paper ends with a discussion and conclusion.
## II The transition form factors of the bottom baryon to the charmed baryon
The $b\to c$ weak decay depends on the hadronic structure encoded in the baryon-to-baryon weak transition matrix element $\langle\mathcal{B}_{c}|\bar{c}\gamma^{\mu}(1-\gamma^{5})b|\mathcal{B}_{b}\rangle$. In this section, we briefly introduce how to calculate this matrix element. Since the constituent quarks are confined inside hadrons, the matrix element cannot be calculated in perturbative QCD. Instead, it is parameterized in terms of a series of dimensionless form factors Chua:2019yqh , i.e.,
$\begin{split}\langle\mathcal{B}_{c}(&3/2^{-})|\bar{c}\gamma^{\mu}b|\mathcal{B}_{b}(1/2^{+})\rangle\\\
=&\bar{u}_{\alpha}(P^{\prime},J_{z}^{\prime})\bigg{[}g_{1}^{V}(q^{2})g^{\alpha\mu}+g_{2}^{V}(q^{2})\frac{P^{\alpha}}{M}\gamma^{\mu}+g_{3}^{V}(q^{2})\frac{P^{\alpha}P^{\prime\mu}}{MM^{\prime}}\\\
&+g_{4}^{V}(q^{2})\frac{P^{\alpha}P^{\mu}}{M^{2}}\bigg{]}u(P,J_{z}),\end{split}$
(2.1)
$\begin{split}\langle\mathcal{B}_{c}(&3/2^{-})|\bar{c}\gamma^{\mu}\gamma^{5}b|\mathcal{B}_{b}(1/2^{+})\rangle\\\
=&\bar{u}_{\alpha}(P^{\prime},J_{z}^{\prime})\bigg{[}f_{1}^{A}(q^{2})g^{\alpha\mu}+f_{2}^{A}(q^{2})\frac{P^{\alpha}}{M}\gamma^{\mu}+f_{3}^{A}(q^{2})\frac{P^{\alpha}P^{\prime\mu}}{MM^{\prime}}\\\
&+f_{4}^{A}(q^{2})\frac{P^{\alpha}P^{\mu}}{M^{2}}\bigg{]}\gamma^{5}u(P,J_{z}),\end{split}$
(2.2)
where $M$ and $M^{\prime}$ are the masses of the initial bottom baryon and the final charmed baryon, respectively, $P$ and $P^{\prime}$ are the corresponding four-momenta, and $J_{z}$ and $J_{z}^{\prime}$ are the third components of the spins. Here we suppress the spin quantum numbers in the state labels since they are definite.
In this work, we use the standard light-front quark model to calculate the relevant form factors. The light-front quark model, proposed by Terentev and Berestetsky as a relativistic quark model Terentev:1976jk ; Berestetsky:1977zk based on the light-front formalism and the light-front quantization of QCD, has been widely and successfully used in studies of weak decay form factors (see Ref. Chang:2019obq and references therein). We take the same framework as Refs. Ke:2019smy ; Ke:2019lcf ; Ke:2021pxk to calculate the relevant form factors. In the concrete calculation, the spatial wave functions of the discussed baryon states enter as input. Usually, one takes a simple harmonic oscillator (SHO) wave function, which inevitably makes the calculated physical quantities depend on the $\beta$ value, the parameter of the SHO wave function. To avoid this problem, we proposed to adopt directly the numerical spatial wave function obtained by solving the potential model with the help of the Gaussian expansion method Li:2021qod ; Li:2021kfb .
In analogy with Refs. Cheung:1995ub ; Cheng:1996if ; Geng:1997ws ; Cheng:2004cc ; Wang:2017mqp ; Ke:2019smy ; Ke:2019lcf ; Geng:2022xpn , the vertex function of a singly heavy baryon $\mathcal{B}_{Q}$ with spin $J$ and momentum $P$ can be written as
$\begin{split}|\mathcal{B}_{Q}(P,J&,J_{z})\rangle=\int\frac{d^{3}\tilde{p}_{1}}{2(2\pi)^{3}}\frac{d^{3}\tilde{p}_{2}}{2(2\pi)^{3}}\frac{d^{3}\tilde{p}_{3}}{2(2\pi)^{3}}2(2\pi)^{3}\\\
&\times\sum_{\lambda_{1},\lambda_{2},\lambda_{3}}\Psi^{J,J_{z}}(\tilde{p}_{i},\lambda_{i})C^{\alpha\beta\gamma}\delta^{3}(\tilde{P}-\tilde{p}_{1}-\tilde{p}_{2}-\tilde{p}_{3})\\\
&\times~{}F_{q_{1}q_{2}Q}~{}|q_{1\alpha}(\tilde{p}_{1},\lambda_{1})\rangle~{}|q_{2\beta}(\tilde{p}_{2},\lambda_{2})\rangle~{}|Q_{\gamma}(\tilde{p}_{3},\lambda_{3})\rangle,\end{split}$
(2.3)
where $C^{\alpha\beta\gamma}$ and $F_{q_{1}q_{2}Q}$ represent the color and flavor factors, respectively, and $\lambda_{i}$ and $\tilde{p}_{i}$ ($i=1,2,3$) are the helicities and light-front momenta of the on-mass-shell quarks, respectively, defined as
$\tilde{p}_{i}=(p_{i}^{+},\vec{p}_{i\bot}),\quad
p_{i}^{+}=p_{i}^{0}+p_{i}^{3},\quad\vec{p}_{i\bot}=(p_{i}^{1},p_{i}^{2}).$
(2.4)
To describe the motion of the constituents, we introduce the intrinsic variables $(x_{i},~{}\vec{k}_{i\bot})$ ($i=1,2,3$):
$p_{i}^{+}=x_{i}P^{+},~{}~{}\vec{p}_{i\bot}=x_{i}\vec{P}_{i\bot}+\vec{k}_{i\bot},~{}~{}\sum_{i=1}^{3}\vec{k}_{i\bot}=0,~{}~{}\sum_{i=1}^{3}x_{i}=1,$
(2.5)
where $x_{i}$ are the light-front momentum fractions constrained by
$0<x_{i}<1$.
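As a minimal numerical sketch (not from the paper; all momenta are hypothetical illustrative numbers in GeV), the intrinsic variables of Eq. (2.5) and their two constraints can be checked as follows:

```python
# Building the intrinsic light-front variables (x_i, k_i_perp) of Eq. (2.5)
# from hypothetical single-quark momenta that sum to the total momentum.
P_plus = 10.0                      # total light-front momentum P^+
P_perp = (0.3, -0.1)               # total transverse momentum

p_plus = [2.0, 3.0, 5.0]           # quark momenta p_i^+
p_perp = [(0.2, 0.1), (-0.4, 0.05), (0.5, -0.25)]

x = [pp / P_plus for pp in p_plus]                 # momentum fractions x_i
k_perp = [(px - xi * P_perp[0], py - xi * P_perp[1])
          for (px, py), xi in zip(p_perp, x)]      # intrinsic k_i_perp

assert abs(sum(x) - 1.0) < 1e-12                   # sum_i x_i = 1
assert abs(sum(k[0] for k in k_perp)) < 1e-12      # sum_i k_i_perp = 0
assert abs(sum(k[1] for k in k_perp)) < 1e-12
```

The constraints hold automatically because the single-quark light-front momenta were chosen to sum to the total $P^{+}$ and $\vec{P}_{\bot}$.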
In this work, the spin-spatial wave functions for anti-triplet single heavy
baryon $\mathcal{B}_{Q}(\bar{3}_{f},J^{P}=1/2^{+})$ and
$\mathcal{B}_{Q}(\bar{3}_{f},J^{P}=3/2^{-})$ are written as Korner:1994nh ;
Hussain:1995xs ; Tawfiq:1998nk
$\begin{split}\Psi^{1/2,J_{z}}(\tilde{p}_{i},\lambda_{i})=&A_{0}\bar{u}(p_{1},\lambda_{1})[(\not{P}+M_{0})\gamma^{5}]v(p_{2},\lambda_{2})\\\
&\times\bar{u}_{Q}(p_{3},\lambda_{3})u(P,J_{z})\psi(x_{i},\vec{k}_{i}),\\\
\Psi^{3/2,J_{z}}(\tilde{p}_{i},\lambda_{i})=&B_{0}\bar{u}(p_{1},\lambda_{1})[(\not{P}+M_{0})\gamma^{5}]v(p_{2},\lambda_{2})\\\
&\times\bar{u}_{Q}(p_{3},\lambda_{3})K^{\alpha}u_{\alpha}(P,J_{z})\psi(x_{i},\vec{k}_{i}),\end{split}$
(2.6)
respectively.
As fundamental inputs, the spatial wave functions $\psi$ should be discussed here. Usually, the singly heavy baryon is regarded as a quasi-two-body bound state of the light quark cluster and the heavy quark ($b\ (\text{or}\ c)$), whose relative motion carries the $\lambda$-mode excitation. The spatial wave function of a singly heavy baryon can be written as Ke:2019smy ; Ke:2019lcf ; Ke:2021pxk
$\begin{split}\psi(x_{i},\vec{k}_{i})=&N_{\psi}\sqrt{\frac{e_{1}e_{2}e_{3}}{x_{1}x_{2}x_{3}M_{0}}}\phi_{\rho}\Big{(}\frac{m_{1}\vec{k}_{2}-m_{2}\vec{k}_{1}}{m_{1}+m_{2}}\Big{)}\\\
&\times\phi_{\lambda}\Big{(}\frac{(m_{1}+m_{2})\vec{k}_{3}-m_{3}(\vec{k}_{1}+\vec{k}_{2})}{m_{1}+m_{2}+m_{3}}\Big{)},\end{split}$
(2.7)
where $\vec{k}_{i}=(\vec{k}_{i\bot},k_{iz})$ with Ke:2019smy
$k_{iz}=\frac{x_{i}M_{0}}{2}-\frac{m_{i}^{2}+\vec{k}_{i\bot}^{2}}{2x_{i}M_{0}}.$
(2.8)
Here $\phi_{\rho(\lambda)}$ is the spatial wave function of the $\rho(\lambda)$-mode excitation. The normalization factors in Eq. (2.6) are expressed as
$\begin{split}A_{0}&=\frac{1}{\sqrt{16P^{+}M_{0}^{3}(e_{1}+m_{1})(e_{2}+m_{2})(e_{3}+m_{3})}},\\\
B_{0}&=\frac{\sqrt{3}}{\sqrt{16P^{+}M_{0}^{3}(e_{1}+m_{1})(e_{2}+m_{2})(e_{3}-m_{3})(e_{3}+m_{3})^{2}}},\end{split}$
where the factor in Eq. (2.7) is $N_{\psi}=(4\pi^{3/2})^{2}$ for the ground
state and $N_{\psi}=(4\pi^{3/2})^{2}/\sqrt{3}$ for the $P$-wave state. These
factors are determined by the following normalizations:
$\begin{split}\sum_{J_{z},J_{z}^{\prime}}&\langle\mathcal{B}_{Q}(P^{\prime},J,J_{z}^{\prime})|\mathcal{B}_{Q}(P,J,J_{z})\rangle=\sum_{J_{z},J_{z}^{\prime}}2(2\pi)^{3}P^{+}\delta^{3}(\tilde{P}-\tilde{P}^{\prime})\delta_{J_{z},J_{z}^{\prime}},\end{split}$
(2.9)
and
$\begin{split}\int&\Bigg{(}\prod_{i=1}^{3}\frac{dx_{i}d^{2}\vec{k}_{i\bot}}{2(2\pi)^{3}}\Bigg{)}2(2\pi)^{3}\delta\Big{(}1-\sum_{i}x_{i}\Big{)}\\\
&\times\delta^{2}\Big{(}\sum_{i}\vec{k}_{i\bot}\Big{)}\psi^{*}(x_{i},\vec{k}_{i})\psi(x_{i},\vec{k}_{i})=1.\end{split}$
(2.10)
With the above vertex wave functions in the framework of LFQM, the general
expression of the weak transition matrix element can be expressed as
$\begin{split}\langle\mathcal{B}_{c}(P^{\prime},J_{z}^{\prime})|\bar{c}\Gamma^{\mu}_{i}b|\mathcal{B}_{b}(P,J_{z})\rangle=&\int\Big{(}\frac{dx_{1}d^{2}\vec{k}_{1\bot}}{2(2\pi)^{3}}\Big{)}\Big{(}\frac{dx_{2}d^{2}\vec{k}_{2\bot}}{2(2\pi)^{3}}\Big{)}\frac{\psi_{c}^{\ast}(x_{i}^{\prime},\vec{k}_{i\bot}^{\prime})\psi_{b}(x_{i},\vec{k}_{i\bot})}{(16/\sqrt{3})\sqrt{x_{3}x_{3}^{\prime}M_{0}^{3}M_{0}^{\prime
3}}}\\\
&\times\frac{\text{Tr}[(\not{P}^{\prime}-M_{0}^{\prime})\gamma^{5}(\not{p}_{1}+m_{1})(\not{P}+M_{0})\gamma^{5}(\not{p}_{2}-m_{2})]}{\sqrt{(e_{1}+m_{1})(e_{2}+m_{2})(e_{3}+m_{3})(e_{1}^{\prime}+m_{1}^{\prime})(e_{2}^{\prime}+m_{2}^{\prime})(e_{3}^{\prime}-m_{3}^{\prime})(e_{3}^{\prime}+m_{3}^{\prime})^{2}}}\\\
&\times\bar{u}_{\alpha}(P^{\prime},J_{z}^{\prime})K^{\prime\alpha}(\not{p}_{3}^{\prime}+m_{3}^{\prime})\Gamma^{\mu}_{i}(\not{p}_{3}+m_{3})u(P,J_{z}).\end{split}$
(2.11)
Here, the Lorentz structures are defined as $\Gamma^{\mu}_{i}=\big{\\{}\gamma^{\mu},\gamma^{\mu}\gamma^{5}\big{\\}}$, $K^{\prime}=\big{[}(m_{1}^{\prime}+m_{2}^{\prime})p_{3}^{\prime}-m_{3}^{\prime}(p_{1}^{\prime}+p_{2}^{\prime})\big{]}/\big{(}m_{1}^{\prime}+m_{2}^{\prime}+m_{3}^{\prime}\big{)}$ is the $\lambda$-mode momentum of the $P$-wave charmed baryon, and $\psi_{b}$ and $\psi_{c}$ are the spatial wave functions of the bottom baryon and the charmed baryon, respectively.
Next, we introduce how to extract the form factors under the condition $q^{+}=0$ and $\vec{q}_{\bot}\neq 0$. To extract the four form factors of the vector current, we multiply both sides of Eq. (2.11) by $\bar{u}(P,J_{z})\Gamma_{i}^{V,\mu\beta}u_{\beta}(P^{\prime},J_{z}^{\prime})$, setting $\Gamma_{i}^{\mu}=\gamma^{\mu}$, and then sum over the polarizations of the initial and final baryons. The left-hand side can be replaced by Eq. (2.1), and the right-hand side can be calculated by performing the traces and then the integrations. The Lorentz structures are chosen as
$\Gamma_{i}^{V,\mu\beta}=\big{\\{}g^{\beta\mu},P^{\beta}\gamma^{\mu},P^{\beta}P^{\prime\mu},P^{\beta}P^{\mu}\big{\\}}$
Wang:2022ias ; Zhao:2022vfr . The complete expressions of the form factors of
the vector current are
$\begin{split}g_{1}^{V}(q^{2})=&-\frac{1}{2\tilde{Q}_{+}}G_{1}^{V}(q^{2})-\frac{M_{0}^{\prime}}{2\tilde{Q}_{-}\tilde{Q}_{+}}G_{2}^{V}(q^{2})+\frac{M_{0}^{2}+M_{0}M_{0}^{\prime}+M_{0}^{\prime
2}-q^{2}}{\tilde{Q}_{-}\tilde{Q}_{+}^{2}}G_{3}^{V}(q^{2})-\frac{M_{0}^{\prime
2}}{\tilde{Q}_{-}\tilde{Q}_{+}^{2}}G_{4}^{V}(q^{2}),\\\
g_{2}^{V}(q^{2})=&-\frac{MM_{0}^{\prime}}{2\tilde{Q}_{-}\tilde{Q}_{+}}G_{1}^{V}(q^{2})-\frac{2MM_{0}^{\prime
2}}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}}G_{2}^{V}(q^{2})+\frac{MM_{0}^{\prime}(M_{0}^{2}+4M_{0}M_{0}^{\prime}+M_{0}^{\prime
2}-q^{2})}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}^{2}}G_{3}^{V}(q^{2})+\frac{2MM_{0}^{\prime
3}}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}^{2}}G_{4}^{V}(q^{2}),\\\
g_{3}^{V}(q^{2})=&\frac{MM^{\prime}(M_{0}^{2}+M_{0}M_{0}^{\prime}+M_{0}^{\prime
2}-q^{2})}{\tilde{Q}_{-}\tilde{Q}_{+}^{2}}G_{1}^{V}(q^{2})+\frac{MM^{\prime}M_{0}^{\prime}(M_{0}^{2}+4M_{0}M_{0}^{\prime}+M_{0}^{\prime
2}-q^{2})}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}^{2}}G_{2}^{V}(q^{2})\\\
&-\frac{2MM^{\prime}(M_{0}^{4}+2M_{0}^{3}M_{0}^{\prime}+2M_{0}M_{0}^{\prime}(M_{0}^{\prime
2}-q^{2})+(M_{0}^{\prime 2}-q^{2})^{2}+2M_{0}^{2}(6M_{0}^{\prime
2}-q^{2}))}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}^{3}}G_{3}^{V}(q^{2})\\\
&+\frac{4MM^{\prime}M_{0}^{\prime
2}(2M_{0}^{2}-M_{0}M_{0}^{\prime}+2(M_{0}^{\prime
2}-q^{2}))}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}^{3}}G_{4}^{V}(q^{2}),\\\
g_{4}^{V}(q^{2})=&-\frac{M^{2}M_{0}^{\prime
2}}{\tilde{Q}_{-}\tilde{Q}_{+}^{2}}G_{1}^{V}(q^{2})+\frac{2M^{2}M_{0}^{\prime
3}}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}^{2}}G_{2}^{V}(q^{2})+\frac{4M^{2}M_{0}^{\prime
2}(2M_{0}^{2}-M_{0}M_{0}^{\prime}+2(M_{0}^{\prime
2}-q^{2}))}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}^{3}}G_{3}^{V}(q^{2})-\frac{20M^{2}M_{0}^{\prime
4}}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}^{3}}G_{4}^{V}(q^{2}),\end{split}$ (2.12)
where $M$ and $M^{\prime}$ are the physical masses of the bottom and charmed
baryons, respectively, and $\tilde{Q}_{\pm}=(M_{0}\pm
M_{0}^{\prime})^{2}-q^{2}$ with
$M_{0}^{(\prime)2}=\frac{\vec{k}_{1\bot}^{(\prime)2}+m_{1}^{(\prime)2}}{x_{1}}+\frac{\vec{k}_{2\bot}^{(\prime)2}+m_{2}^{(\prime)2}}{x_{2}}+\frac{\vec{k}_{3\bot}^{(\prime)2}+m_{3}^{(\prime)2}}{x_{3}}$
(2.13)
being the squared invariant masses Ke:2019smy . Besides,
$\begin{split}G_{(1,2,3,4)}^{V}(q^{2})=&\int\bigg{(}\frac{dx_{1}d^{2}\vec{k}_{1\bot}}{2(2\pi)^{3}}\bigg{)}\bigg{(}\frac{dx_{2}d^{2}\vec{k}_{2\bot}}{2(2\pi)^{3}}\bigg{)}\frac{\psi_{b}(x_{i},\vec{k}_{i\bot})\psi_{c}^{\ast}(x_{i}^{\prime},\vec{k}^{\prime}_{i\bot})}{\sqrt{x_{3}x_{3}^{\prime}}}A_{0}B_{0}^{\prime}\text{Tr}[\cdots]\\\
&\times\text{Tr}\big{[}(G_{\mathcal{B}_{c}})_{\beta\alpha}K^{\prime\alpha}(\not{p}_{3}^{\prime}+m_{3}^{\prime})\gamma^{\mu}(\not{p}_{3}+m_{3})(\not{P}+M_{0})\Gamma_{(1,2,3,4),\mu}^{V,\beta}\big{]}\end{split}$
(2.14)
with
$\displaystyle A_{0}$ $\displaystyle=$ $\displaystyle
1\big{/}{\sqrt{16M_{0}^{3}(e_{1}+m_{1})(e_{2}+m_{2})(e_{3}+m_{3})}},$ (2.15)
$\displaystyle B_{0}^{\prime}$ $\displaystyle=$
$\displaystyle\sqrt{3}\big{/}{\sqrt{16M_{0}^{\prime
3}(e_{1}^{\prime}+m_{1}^{\prime})(e_{2}^{\prime}+m_{2}^{\prime})(e_{3}^{\prime}-m_{3}^{\prime})(e_{3}^{\prime}+m_{3}^{\prime})^{2}}},$
(2.16) $\displaystyle\text{Tr}[\cdots]$ $\displaystyle=$
$\displaystyle\text{Tr}[(\not{P}^{\prime}-M_{0}^{\prime})\gamma^{5}(\not{p}_{1}+m_{1})(\not{P}-M_{0})\gamma^{5}(\not{p}_{2}-m_{2})],$
(2.17) $\displaystyle(G_{\mathcal{B}_{c}})^{\mu\nu}$ $\displaystyle=$
$\displaystyle-(\not{P}^{\prime}+M_{0}^{\prime})\Big{[}g^{\mu\nu}-\frac{1}{3}\gamma^{\mu}\gamma^{\nu}-\frac{2}{3M_{0}^{\prime
2}}P^{\prime\mu}P^{\prime\nu}-\frac{1}{3M_{0}^{\prime}}\big{(}\gamma^{\mu}P^{\prime\nu}-\gamma^{\nu}P^{\prime\mu}\big{)}\Big{]}.$
(2.18)
Analogously, the form factors of the axial-vector current can be extracted
with the structures
$\bar{u}(P,J_{z})\Gamma^{A,\mu\beta}_{i}u_{\beta}(P^{\prime},J_{z}^{\prime})$,
where
$\Gamma_{i}^{A,\mu\beta}=\big{\\{}g^{\beta\mu}\gamma^{5},P^{\beta}\gamma^{\mu}\gamma^{5},P^{\beta}P^{\prime\mu}\gamma^{5},P^{\beta}P^{\mu}\gamma^{5}\big{\\}}$
is defined. The complete expressions of the form factors of the axial-vector
current are expressed as
$\begin{split}f_{1}^{A}(q^{2})=&\frac{1}{2\tilde{Q}_{-}}F_{1}^{A}(q^{2})-\frac{M_{0}^{\prime}}{2\tilde{Q}_{-}\tilde{Q}_{+}}F_{2}^{A}(q^{2})-\frac{M_{0}^{2}-M_{0}M_{0}^{\prime}+M_{0}^{\prime
2}-q^{2}}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}}F_{3}^{A}(q^{2})+\frac{M_{0}^{\prime
2}}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}}F_{4}^{A}(q^{2}),\\\
f_{2}^{A}(q^{2})=&\frac{MM_{0}^{\prime}}{2\tilde{Q}_{-}\tilde{Q}_{+}}F_{1}^{A}(q^{2})-\frac{2MM_{0}^{\prime
2}}{\tilde{Q}_{-}\tilde{Q}_{+}^{2}}F_{2}^{A}(q^{2})-\frac{MM_{0}^{\prime}(M_{0}^{2}-4M_{0}M_{0}^{\prime}+M_{0}^{\prime
2}-q^{2})}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}^{2}}F_{3}^{A}(q^{2})-\frac{2MM_{0}^{\prime
3}}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}^{2}}F_{4}^{A}(q^{2}),\\\
f_{3}^{A}(q^{2})=&-\frac{MM^{\prime}(M_{0}^{2}-M_{0}M_{0}^{\prime}+M_{0}^{\prime
2}-q^{2})}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}}F_{1}^{A}(q^{2})+\frac{MM^{\prime}M_{0}^{\prime}(M_{0}^{2}-4M_{0}M_{0}^{\prime}+M_{0}^{\prime
2}-q^{2})}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}^{2}}F_{2}^{A}(q^{2})\\\
&+\frac{2MM^{\prime}(M_{0}^{4}-2M_{0}^{3}M_{0}^{\prime}-2M_{0}M_{0}^{\prime}(M_{0}^{\prime
2}-q^{2})+(M_{0}^{\prime 2}-q^{2})^{2}+2M_{0}^{2}(6M_{0}^{\prime
2}-q^{2}))}{\tilde{Q}_{-}^{3}\tilde{Q}_{+}^{2}}F_{3}^{A}(q^{2})\\\
&-\frac{4MM^{\prime}M_{0}^{\prime
2}(2M_{0}^{2}+M_{0}M_{0}^{\prime}+2(M_{0}^{\prime
2}-q^{2}))}{\tilde{Q}_{-}^{3}\tilde{Q}_{+}^{2}}F_{4}^{A}(q^{2}),\\\
f_{4}^{A}(q^{2})=&\frac{M^{2}M_{0}^{\prime
2}}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}}F_{1}^{A}(q^{2})+\frac{2M^{2}M_{0}^{\prime
3}}{\tilde{Q}_{-}^{2}\tilde{Q}_{+}^{2}}F_{2}^{A}(q^{2})-\frac{4M^{2}M_{0}^{\prime
2}(2M_{0}^{2}+M_{0}M_{0}^{\prime}+2(M_{0}^{\prime
2}-q^{2}))}{\tilde{Q}_{-}^{3}\tilde{Q}_{+}^{2}}F_{3}^{A}(q^{2})+\frac{20M^{2}M_{0}^{\prime
4}}{\tilde{Q}_{-}^{3}\tilde{Q}_{+}^{2}}F_{4}^{A}(q^{2}),\\\ \end{split}$
(2.19)
where
$\begin{split}F_{(1,2,3,4)}^{A}(q^{2})=&\int\bigg{(}\frac{dx_{1}d^{2}\vec{k}_{1\bot}}{2(2\pi)^{3}}\bigg{)}\bigg{(}\frac{dx_{2}d^{2}\vec{k}_{2\bot}}{2(2\pi)^{3}}\bigg{)}\frac{\psi_{b}(x_{i},\vec{k}_{i\bot})\psi_{c}^{\ast}(x_{i}^{\prime},\vec{k}^{\prime}_{i\bot})}{\sqrt{x_{3}x_{3}^{\prime}}}A_{0}B_{0}^{\prime}\text{Tr}[\cdots]\\\
&\times\text{Tr}\big{[}(G_{\mathcal{B}_{c}})_{\beta\alpha}K^{\prime\alpha}(\not{p}_{3}^{\prime}+m_{3}^{\prime})\gamma^{\mu}\gamma^{5}(\not{p}_{3}+m_{3})(\not{P}+M_{0})\Gamma_{(1,2,3,4),\mu}^{A,\beta}\big{]}.\end{split}$
(2.20)
All the traces in Eqs. (2.14), (2.17), and (2.20) can be calculated with the help of the FeynCalc package Mertig:1990an ; Shtabovenko:2016sxi ; Shtabovenko:2020gxv , where the following relations
$\begin{split}P\cdot P&=M_{0}^{2},~{}~{}~{}~{}P^{\prime}\cdot
P^{\prime}=M_{0}^{\prime 2},\\\ P\cdot P^{\prime}&=(M_{0}^{2}+M_{0}^{\prime
2}-q^{2})/2,\\\ p_{1}\cdot P&=e_{1}M_{0},~{}~{}~{}~{}p_{2}\cdot
P=e_{2}M_{0},\\\ p_{1}\cdot
P^{\prime}&=e_{1}^{\prime}M_{0}^{\prime},~{}~{}~{}~{}p_{2}\cdot
P^{\prime}=e_{2}^{\prime}M_{0}^{\prime},\\\ p_{1}\cdot
p_{2}&=(M_{0}^{2}+m_{3}^{2}-m_{1}^{2}-m_{2}^{2}-2e_{3}M_{0})/2,\end{split}$
(2.21)
are used. We also have $p_{i}^{(\prime)2}=m_{i}^{(\prime)2}$, with $m_{i}^{(\prime)}$ being the mass of the corresponding quark. Moreover, $e_{i}^{(\prime)}$, the energy of the $i$-th quark, is defined as
$e_{i}^{(\prime)}=\frac{1}{2}\Big{(}x_{i}^{(\prime)}M_{0}^{(\prime)}+\frac{m_{i}^{(\prime)2}+\vec{k}_{i\bot}^{(\prime)2}}{x_{i}^{(\prime)}M_{0}^{(\prime)}}\Big{)}.$
(2.22)
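Eqs. (2.8), (2.13), and (2.22) are mutually consistent: since $e_{i}$ and $k_{iz}$ are the half-sum and half-difference of $x_{i}M_{0}$ and $(m_{i}^{2}+\vec{k}_{i\bot}^{2})/(x_{i}M_{0})$, one has $e_{i}^{2}=m_{i}^{2}+\vec{k}_{i\bot}^{2}+k_{iz}^{2}$ and $\sum_{i}e_{i}=M_{0}$. A numerical sketch (all inputs are hypothetical illustrative numbers, loosely inspired by the quark masses of Table 1, in GeV):

```python
import math

m  = [0.22, 0.22, 4.977]            # constituent quark masses
x  = [0.15, 0.20, 0.65]             # light-front momentum fractions
kx = [0.25, -0.10, -0.15]           # intrinsic transverse momenta
ky = [0.05, 0.20, -0.25]

kperp2 = [a*a + b*b for a, b in zip(kx, ky)]

# invariant mass, Eq. (2.13)
M0 = math.sqrt(sum((k2 + mi*mi) / xi for k2, mi, xi in zip(kperp2, m, x)))

# longitudinal momenta, Eq. (2.8), and quark energies, Eq. (2.22)
kz = [xi*M0/2 - (mi*mi + k2)/(2*xi*M0) for xi, mi, k2 in zip(x, m, kperp2)]
e  = [(xi*M0 + (mi*mi + k2)/(xi*M0))/2 for xi, mi, k2 in zip(x, m, kperp2)]

# on-shell check e_i^2 = m_i^2 + k_perp^2 + k_z^2, and sum(e_i) = M0
for ei, mi, k2, kzi in zip(e, m, kperp2, kz):
    assert abs(ei*ei - (mi*mi + k2 + kzi*kzi)) < 1e-10
assert abs(sum(e) - M0) < 1e-10
```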
## III The semirelativistic potential model for getting baryon wave function
In Refs. Chua:2018lfa ; Ke:2019smy ; Chua:2019yqh ; Ke:2019lcf , the spatial wave function of the baryon is usually taken to have a simple harmonic oscillator form with a phenomenological parameter $\beta$, which makes the calculated form factors $\beta$ dependent. To avoid this $\beta$ dependence, we instead take as input the numerical spatial wave function obtained by solving the three-body Schrödinger equation with a semirelativistic potential model. In the present section, we therefore introduce the semirelativistic potential model and the GEM.
In the study of baryon spectroscopy, the baryon wave function and its mass can
be obtained by solving the Schrödinger equation
$\mathcal{H}|\Psi_{\mathbf{J},\mathbf{M_{J}}}\rangle=E|\Psi_{\mathbf{J},\mathbf{M_{J}}}\rangle$
(3.1)
with the Rayleigh-Ritz variational principle, where $\mathcal{H}$ is the Hamiltonian and $E$ is the corresponding eigenvalue. In this calculation, the semirelativistic potential given in Ref. Capstick:1985xss is applied. The Hamiltonian Capstick:1985xss ; Li:2021qod ; Li:2021kfb
$\mathcal{H}=K+\sum_{i<j}(S_{ij}+G_{ij}+V^{\text{so(s)}}_{ij}+V^{\text{so(v)}}_{ij}+V^{\text{ten}}_{ij}+V^{\text{con}}_{ij})$
(3.2)
includes the kinetic energy $K=\sum_{i=1,2,3}\sqrt{m_{i}^{2}+p_{i}^{2}}$, the
linear confinement term $S_{ij}$:
$\displaystyle S_{ij}$ $\displaystyle=$
$\displaystyle-\frac{3}{4}\left(br_{ij}\left[\frac{e^{-\sigma_{ij}^{2}r_{ij}^{2}}}{\sqrt{\pi}\sigma_{ij}r_{ij}}+\left(1+\frac{1}{2\sigma_{ij}^{2}r_{ij}^{2}}\right)\frac{2}{\sqrt{\pi}}\right.\right.$
(3.3)
$\displaystyle\left.\left.\times\int_{0}^{\sigma_{ij}r_{ij}}e^{-x^{2}}dx\right]\right)\mathbf{F_{i}}\cdot\mathbf{F_{j}}+\frac{c}{3}$
with
$\sigma_{ij}^{2}=\sigma_{0}^{2}\left[\frac{1}{2}+\frac{1}{2}\bigg{(}\frac{4m_{i}m_{j}}{(m_{i}+m_{j})^{2}}\bigg{)}^{4}+s^{2}\bigg{(}\frac{2m_{i}m_{j}}{m_{i}+m_{j}}\bigg{)}^{2}\right],$
(3.4)
the Coulomb-like potential $G_{ij}$:
$G_{ij}=\sum_{k}\frac{\alpha_{k}}{r_{ij}}\left[\frac{2}{\sqrt{\pi}}\int_{0}^{\tau_{k}r_{ij}}e^{-x^{2}}dx\right]\mathbf{F_{i}}\cdot\mathbf{F_{j}},$
(3.5)
the scalar typed spin-orbit interaction $V^{\text{so}(s)}$:
$V^{\text{so}(s)}_{ij}=-\frac{\mathbf{r_{ij}}\times\mathbf{p_{i}}\cdot\mathbf{S_{i}}}{2m_{i}^{2}}\frac{1}{r_{ij}}\frac{\partial
S_{ij}}{\partial
r_{ij}}+\frac{\mathbf{r_{ij}}\times\mathbf{p_{j}}\cdot\mathbf{S_{j}}}{2m_{j}^{2}}\frac{1}{r_{ij}}\frac{\partial
S_{ij}}{\partial r_{ij}},$ (3.6)
the vector typed spin-orbit interaction $V^{\text{so}(v)}$:
$\begin{split}V^{\text{so}(v)}_{ij}=&\frac{\mathbf{r_{ij}}\times\mathbf{p_{i}}\cdot\mathbf{S_{i}}}{2m_{i}^{2}}\frac{1}{r_{ij}}\frac{\partial
G_{ij}}{\partial
r_{ij}}-\frac{\mathbf{r_{ij}}\times\mathbf{p_{j}}\cdot\mathbf{S_{j}}}{2m_{j}^{2}}\frac{1}{r_{ij}}\frac{\partial
G_{ij}}{\partial r_{ij}}\\\
&-\frac{\mathbf{r_{ij}}\times\mathbf{p_{j}}\cdot\mathbf{S_{i}}-\mathbf{r_{ij}}\times\mathbf{p_{i}}\cdot\mathbf{S_{j}}}{m_{i}~{}m_{j}}\frac{1}{r_{ij}}\frac{\partial
G_{ij}}{\partial r_{ij}},\end{split}$ (3.7)
the tensor potential $V^{\text{tens}}$:
$\begin{split}V^{\text{tens}}_{ij}=&-\frac{1}{m_{i}m_{j}}\left[\left(\mathbf{S_{i}}\cdot\mathbf{\hat{r}_{ij}}\right)\left(\mathbf{S_{j}}\cdot\mathbf{\hat{r}_{ij}}\right)-\frac{\mathbf{S_{i}}\cdot\mathbf{S_{j}}}{3}\right]\\\
&\times\left(\frac{\partial^{2}G_{ij}}{\partial r_{ij}^{2}}-\frac{\partial
G_{ij}}{r_{ij}\partial r_{ij}}\right),\end{split}$ (3.8)
and the spin-dependent contact potential $V^{\text{con}}$:
$V^{\text{con}}_{ij}=\frac{2\mathbf{S_{i}}\cdot\mathbf{S_{j}}}{3m_{i}m_{j}}\nabla^{2}G_{ij},$
(3.9)
where $m_{i}$ stands for the mass of constituent quark $i$, and $\mathbf{S_{i}}$ is the corresponding spin operator. $\langle\mathbf{F_{i}}\cdot\mathbf{F_{j}}\rangle=-2/3$ for the quark-quark interaction Godfrey:1985xj . It is worth noting that the running coupling constant $\alpha_{s}$ is defined as Godfrey:1985xj ; Capstick:1985xss
$\alpha_{s}(r)=\sum_{k=1}^{3}\alpha_{k}\frac{2}{\sqrt{\pi}}\int_{0}^{\gamma_{k}r}e^{-x^{2}}dx$
(3.10)
in Eq. (3.5). Here,
$\\{\alpha_{1},\alpha_{2},\alpha_{3}\\}=\\{0.25,0.15,0.20\\}$, and
$\tau_{k}=\frac{\gamma_{k}\sigma_{ij}}{\sqrt{\gamma_{k}^{2}+\sigma_{ij}^{2}}}$
(3.11)
with
$\\{\gamma_{1},\gamma_{2},\gamma_{3}\\}=\\{1/2,\sqrt{10}/2,\sqrt{1000}/2\\}$.
The remaining parameters are collected into Table 1.
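Since $\frac{2}{\sqrt{\pi}}\int_{0}^{z}e^{-x^{2}}dx=\operatorname{erf}(z)$, the running coupling of Eq. (3.10) can be evaluated directly. A minimal numerical sketch with the quoted $\alpha_{k}$ and $\gamma_{k}$ (the sample radii are only illustrative):

```python
import math

alpha = [0.25, 0.15, 0.20]                            # alpha_k of Eq. (3.10)
gamma = [0.5, math.sqrt(10)/2, math.sqrt(1000)/2]     # gamma_k

def alpha_s(r):
    """Parameterized running coupling, Eq. (3.10); r in GeV^-1."""
    return sum(a * math.erf(g * r) for a, g in zip(alpha, gamma))

# alpha_s grows with separation and saturates at sum(alpha_k) = 0.60
assert alpha_s(0.5) < alpha_s(2.0)
assert abs(alpha_s(50.0) - 0.60) < 1e-6
```

The three Gaussians give a coupling that freezes at large distances, mimicking the infrared behavior assumed in Refs. Godfrey:1985xj ; Capstick:1985xss .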
To partially compensate for relativistic effects in the non-relativistic limit, the following transformations Godfrey:1985xj ; Capstick:1985xss
$\begin{split}&G_{ij}\to\left(1+\frac{p^{2}}{E_{i}E_{j}}\right)^{1/2}G_{ij}\left(1+\frac{p^{2}}{E_{i}E_{j}}\right)^{1/2},\\\
&\frac{V^{k}_{ij}}{m_{i}m_{j}}\to\left(\frac{m_{i}m_{j}}{E_{i}E_{j}}\right)^{1/2+\epsilon_{k}}\frac{V^{k}_{ij}}{m_{i}m_{j}}\left(\frac{m_{i}m_{j}}{E_{i}E_{j}}\right)^{1/2+\epsilon_{k}}\end{split}$
(3.12)
should be made, where $E_{i}=\sqrt{p^{2}+m_{i}^{2}}$ is the energy of the $i$-th constituent quark, the subscript $k$ distinguishes the contributions from the contact, tensor, vector spin-orbit, and scalar spin-orbit terms, and the $\epsilon_{k}$ denote the relevant modification parameters.
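A minimal numerical sketch of the momentum-dependent factor $(m_{i}m_{j}/E_{i}E_{j})^{1/2+\epsilon_{k}}$ in Eq. (3.12); the quark masses and $\epsilon^{\text{so}(v)}$ are taken from Table 1, while the momentum value is only illustrative:

```python
import math

def damping(p, m_i, m_j, eps_k):
    """Momentum-dependent factor of Eq. (3.12), with p in GeV."""
    E_i = math.sqrt(p*p + m_i*m_i)
    E_j = math.sqrt(p*p + m_j*m_j)
    return (m_i*m_j / (E_i*E_j)) ** (0.5 + eps_k)

# in the non-relativistic limit p -> 0 the factor reduces to 1,
# recovering the unmodified potential
assert abs(damping(0.0, 0.22, 1.628, -0.1637) - 1.0) < 1e-12
# at larger momenta the factor suppresses the spin-dependent terms
assert damping(2.0, 0.22, 1.628, -0.1637) < 1.0
```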
By fitting the mass spectra of the singly charmed and singly bottom baryons, the parameters of the semirelativistic potential model are obtained as collected in Table 1.
Table 1: The parameters adopted in the semirelativistic potential model Li:2022nim . In addition, the quark masses are chosen as $m_{u}=220\ \text{MeV}$, $m_{d}=220\ \text{MeV}$, $m_{s}=419\ \text{MeV}$, $m_{c}=1628\ \text{MeV}$, and $m_{b}=4977\ \text{MeV}$ Godfrey:1985xj ; Capstick:1985xss . Parameters | Values | Parameters | Values
---|---|---|---
$b~{}(\text{GeV}^{2})$ | $0.1466\pm 0.0007$ | $\epsilon^{\text{so}(s)}$ | $0.5000\pm 0.0762$
$c~{}(\text{GeV})$ | $-0.3490\pm 0.0050$ | $\epsilon^{\text{so}(v)}$ | $-0.1637\pm 0.0131$
$\sigma_{0}~{}(\text{GeV})$ | $1.7197\pm 0.0304$ | $\epsilon^{\text{tens}}$ | $-0.3790\pm 0.5011$
$s$ | $0.5278\pm 0.0718$ | $\epsilon^{\text{con}}$ | $-0.1612\pm 0.0015$
The total wave function of a baryon can be written as
$\begin{split}\Psi_{\mathbf{J},\mathbf{M_{J}}}=&\sum_{\alpha}C^{(\alpha)}\Psi_{\mathbf{J},\mathbf{M_{J}}}^{(\alpha)},\\\
\Psi_{\mathbf{J},\mathbf{M_{J}}}^{(\alpha)}=&\chi^{\text{color}}\left\\{{\chi^{\text{spin}}}_{\mathbf{S},\mathbf{M_{S}}}\psi^{\text{spatial}}_{\mathbf{L},\mathbf{M_{L}}}\right\\}_{\mathbf{J},\mathbf{M_{J}}}\psi^{\text{flavor}},\end{split}$
(3.13)
which is composed of color, spin, spatial, and flavor parts, where $C^{(\alpha)}$ denotes the expansion coefficient and $\alpha$ labels the possible quantum numbers. The color wave function $\chi^{\text{color}}=(rgb-rbg+gbr-grb+brg-bgr)/\sqrt{6}$ is universal for all baryons. In the SU(2) flavor symmetry, the flavor wave function is $\psi_{\Lambda_{Q}}^{\text{flavor}}=(ud-du)Q/\sqrt{2}$ for a $\Lambda_{Q}$-type baryon, and $\psi_{\Xi_{Q}}^{\text{flavor}}=(ns-sn)Q/\sqrt{2}$ with $n=u\ (\text{or}\ d)$ and $Q=b\ (\text{or}\ c)$ for a $\Xi_{Q}$-type baryon. The subscripts $\mathbf{S}$ and $\mathbf{L}$ represent the total spin and total orbital angular momentum, respectively. $\psi^{\text{spatial}}_{\mathbf{L},\mathbf{M_{L}}}$ is the spatial wave function of the $\rho$-mode and $\lambda$-mode excitations
$\psi^{\text{spatial}}_{\mathbf{L},\mathbf{M_{L}}}=\left\\{\phi_{\boldsymbol{l_{\rho}},\boldsymbol{ml_{\rho}}}\phi_{\boldsymbol{l_{\lambda}},\boldsymbol{ml_{\lambda}}}\right\\}_{\mathbf{L},\mathbf{M_{L}}},$
(3.14)
where the subscripts $\boldsymbol{l_{\rho}}$ and $\boldsymbol{l_{\lambda}}$ are the orbital angular momenta of the $\rho$- and $\lambda$-mode excitations, respectively. The singly heavy baryon can be regarded as a bound state of the light quark cluster and the heavy quark. Here, the $\rho$-mode denotes the excitation between the two light quarks, while the $\lambda$-mode stands for the excitation between the light quark cluster and the heavy quark. For the concerned bottom and charmed baryons, the internal Jacobi coordinates can be chosen as
$\vec{\rho}=\vec{r}_{2}-\vec{r}_{1},~{}~{}~{}\vec{\lambda}=\vec{r}_{3}-\frac{m_{1}\vec{r}_{1}+m_{2}\vec{r}_{2}}{m_{1}+m_{2}}.$
(3.15)
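A minimal numerical sketch of the Jacobi coordinates in Eq. (3.15) for a $\Lambda_{c}$-like $(udc)$ configuration, with the quark masses of Table 1 (in GeV) and hypothetical quark positions (in fm):

```python
# quarks 1, 2 are the light (u, d) quarks, quark 3 is the heavy (c) quark
m1, m2, m3 = 0.22, 0.22, 1.628
r1, r2, r3 = (0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (0.25, 0.4, 0.0)

# rho = r2 - r1 : separation inside the light quark cluster
rho = tuple(b - a for a, b in zip(r1, r2))

# lambda = r3 - (m1 r1 + m2 r2)/(m1 + m2) : heavy quark relative to
# the center of mass of the light cluster
cluster = tuple((m1*a + m2*b) / (m1 + m2) for a, b in zip(r1, r2))
lam = tuple(c - d for d, c in zip(cluster, r3))

assert all(abs(a - b) < 1e-12 for a, b in zip(rho, (0.5, 0.0, 0.0)))
assert all(abs(a - b) < 1e-12 for a, b in zip(lam, (0.0, 0.4, 0.0)))
```

For equal light-quark masses the cluster center of mass is just the midpoint of the two light quarks, as the assertions confirm.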
To illustrate this, we take the $\Lambda_{c}$ resonance as an example; the definitions of the $\rho$-mode and $\lambda$-mode are displayed in Fig. 1.
Figure 1: The definitions of internal Jacobi coordinates $\vec{\rho}$ and
$\vec{\lambda}$ when taking the $\Lambda_{c}$ baryon as an example.
In the realistic calculation, the Gaussian basis Hiyama:2003cu ;
Hiyama:2012sma ; Yoshida:2015tia
$\begin{split}\phi_{nlm}^{G}(\vec{r})=&\phi^{G}_{nl}(r)~{}Y_{lm}(\hat{r})\\\
=&\sqrt{\frac{2^{l+2}(2\nu_{n})^{l+3/2}}{\sqrt{\pi}(2l+1)!!}}\lim_{\varepsilon\rightarrow
0}\frac{1}{(\nu_{n}\varepsilon)^{l}}\sum_{k=1}^{k_{\text{max}}}C_{lm,k}e^{-\nu_{n}(\vec{r}-\varepsilon\vec{D}_{lm,k})^{2}}\end{split}$
(3.16)
is adopted to expand the spatial wave functions
$\phi_{\boldsymbol{l_{\rho}},\boldsymbol{ml_{\rho}}}$ and
$\phi_{\boldsymbol{l_{\lambda}},\boldsymbol{ml_{\lambda}}}$
($n=1,2,\cdots,n_{max}$), where the Gaussian size parameters $\nu_{n}$ can be set as a geometric progression Luo:2022cun
$\nu_{n}=1/r^{2}_{n},~{}~{}~{}r_{n}=r_{min}~{}a^{n-1}$ (3.17)
with
$a=\left(\frac{r_{max}}{r_{min}}\right)^{\frac{1}{n_{max}-1}}.$ (3.18)
The Gaussian basis in the momentum space $\phi_{nlm}^{G}(\vec{k})$ can be
obtained by the replacement $\vec{r}\to\vec{k}$ and $\nu_{n}\to 1/(4\nu_{n})$
in Eq. (3.16). In our calculation, the values of $\rho_{min}$ and $\rho_{max}$
are set as $0.2$ fm and $2.0$ fm, respectively, with $n_{\rho_{max}}=6$. The
same Gaussian size parameters are also applied to the $\lambda$-mode
excitation.
With the above preparation, we can calculate the kinetic, potential, and
normalization matrix elements as
$\begin{split}T^{\alpha^{\prime},\alpha}=&\langle\Psi_{\mathbf{J},\mathbf{M_{J}}}^{(\alpha^{\prime})}|K|\Psi_{\mathbf{J},\mathbf{M_{J}}}^{(\alpha)}\rangle,\\\
V^{\alpha^{\prime},\alpha}=&\langle\Psi_{\mathbf{J},\mathbf{M_{J}}}^{(\alpha^{\prime})}|V|\Psi_{\mathbf{J},\mathbf{M_{J}}}^{(\alpha)}\rangle,\\\
N^{\alpha^{\prime},\alpha}=&\langle\Psi_{\mathbf{J},\mathbf{M_{J}}}^{(\alpha^{\prime})}|\Psi_{\mathbf{J},\mathbf{M_{J}}}^{(\alpha)}\rangle.\end{split}$
(3.19)
Then, the Schrödinger equation in Eq. (3.1) can be solved by the Rayleigh-Ritz
variational principle as
$\Big{(}T^{\alpha^{\prime},\alpha}+V^{\alpha^{\prime},\alpha}\Big{)}C^{(\alpha)}=EN^{\alpha^{\prime},\alpha}C^{(\alpha)}.$
(3.20)
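As a hedged illustration of this step, the generalized eigenvalue problem of Eq. (3.20) can be solved once the matrices are known; the sketch below uses toy $2\times 2$ matrices (not the real $36\times 36$ ones of the paper) and reduces the problem to an ordinary symmetric eigenproblem via a Cholesky factorization of the overlap matrix:

```python
import numpy as np

# Toy sketch of Eq. (3.20), (T + V) C = E N C, where N is the overlap
# (normalization) matrix of the non-orthogonal Gaussian basis.
T = np.array([[1.0, 0.2], [0.2, 1.5]])    # kinetic-energy matrix (toy)
V = np.array([[-0.5, 0.1], [0.1, -0.3]])  # potential matrix (toy)
N = np.array([[1.0, 0.4], [0.4, 1.0]])    # overlap matrix (toy)
H = T + V

# With N = L L^T (Cholesky), the generalized problem becomes an ordinary
# symmetric eigenproblem for L^{-1} H L^{-T}.
L = np.linalg.cholesky(N)
Linv = np.linalg.inv(L)
E, Y = np.linalg.eigh(Linv @ H @ Linv.T)
C = Linv.T @ Y  # eigenvectors expressed in the original Gaussian basis
```

The eigenvalues correspond to the masses and the eigenvectors to the wave-function coefficients listed in Table 2.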
For clarity, we take the $\Lambda_{c}(2625)$ as an example to illustrate the
matrix elements defined in Eq. (3.19). The quantum numbers of
$\Lambda_{c}(2625)$ are
$(\alpha)=(l_{\rho},l_{\lambda},L,S_{\rho},S,J)=(0,1,1,0,1/2,3/2)$. By
expanding the wave function in Eq. (3.13) with $n_{\rho_{\text{max}}}\times
n_{\lambda_{\text{max}}}=6\times 6=36$ Gaussian bases in Eq. (3.16), the
matrix element $T^{\alpha^{\prime},\alpha}$ can be written as
$T^{\alpha^{\prime},\alpha}=\begin{bmatrix}T_{1,1,1,1}&\cdots&\cdots&\cdots\\\
\vdots&\ddots&&\\\
\vdots&&T_{n_{\rho}^{\prime},n_{\lambda}^{\prime},n_{\rho},n_{\lambda}}&\\\
\vdots&&&\ddots\\\ \end{bmatrix}_{36\times 36},$ (3.21)
where
$\begin{split}T_{n_{\rho}^{\prime},n_{\lambda}^{\prime},n_{\rho},n_{\lambda}}=&\langle\chi^{\text{spin}}_{S^{\prime},M_{S}^{\prime}}\Big{\\{}\phi_{n_{\rho}^{\prime}l_{\rho}^{\prime}m_{l_{\rho}}^{\prime}}^{G}(\vec{p}_{\rho})\phi_{n_{\lambda}^{\prime}l_{\lambda}^{\prime}m_{l_{\lambda}}^{\prime}}^{G}(\vec{p}_{\lambda})\Big{\\}}_{L^{\prime},M_{L}^{\prime}}|K|\chi^{\text{spin}}_{S,M_{S}}\\\
&\times\Big{\\{}\phi_{n_{\rho}l_{\rho}m_{l_{\rho}}}^{G}(\vec{p}_{\rho})\phi_{n_{\lambda}l_{\lambda}m_{l_{\lambda}}}^{G}(\vec{p}_{\lambda})\Big{\\}}_{L,M_{L}}\rangle.\end{split}$
(3.22)
Here, the color and flavor wave functions are omitted, since their overlaps
equal 1. The matrix elements $V^{\alpha^{\prime},\alpha}$ and
$N^{\alpha^{\prime},\alpha}$ can be obtained in a similar manner.
Now, we can solve the Schrödinger equation to obtain the eigenvalues and
eigenvectors, which correspond to the baryon masses and wave functions,
respectively. In Table 2, we present our results for the masses and the radial
components of the spatial wave functions of the concerned baryons. The
calculated masses are consistent with the experimental values
ParticleDataGroup:2022pth , which indicates that the adopted potential model
reproduces the charmed and bottom baryon spectrum well. The obtained numerical
wave functions serve as input for calculating the form factors of the
discussed weak transitions.
Table 2: The comparison of the masses from our calculations with the PDG values ParticleDataGroup:2022pth , and the radial components of the spatial wave functions of the concerned bottom baryons $\Lambda_{b}$ and $\Xi_{b}$, as well as the $P$-wave charmed baryons $\Lambda_{c}(2625)$ and $\Xi_{c}(2815)$, from the GI model and GEM. The Gaussian bases $(n_{\rho},n_{\lambda})$ listed in the fourth column are arranged as $[(1,1),(1,2),\cdots,(1,n_{\lambda_{max}}),(2,1),(2,2),\cdots,(2,n_{\lambda_{max}}),\cdots,(n_{\rho_{max}},1),(n_{\rho_{max}},2),\cdots,(n_{\rho_{max}},n_{\lambda_{max}})]$. For the masses of the $\Xi_{b}$ and $\Xi_{c}(2815)$, the values for the neutral and charged states are degenerate in our calculation since the same mass is used for the $u$ and $d$ quarks in the potential model. States | This work (GeV) | Experiment (MeV) ParticleDataGroup:2022pth | Eigenvectors
---|---|---|---
$\Lambda_{b}^{0}$ | $5.621\pm 0.005$ | $5619.60\pm 0.17$ | $\Big{[}0.0068\pm 0.0007,0.0442\pm 0.0014,0.0732\pm 0.0016,0.0032\pm 0.0003,$
$0.0011\pm 0.0001,-0.0004\pm 0.0000,0.0270\pm 0.0012,0.0204\pm 0.0010,$
$0.0273\pm 0.0022,0.0067\pm 0.0004,-0.0027\pm 0.0001,0.0007\pm 0.0000,$
$-0.017\pm 0.0002,0.2541\pm 0.0058,0.2427\pm 0.0006,0.0005\pm 0.0002,$
$0.0060\pm 0.0001,-0.0017\pm 0.0000,-0.0037\pm 0.0003,-0.0426\pm 0.0010,$
$0.4052\pm 0.0028,0.0253\pm 0.0025,-0.0023\pm 0.0007,0.0004\pm 0.0002,$
$0.0071\pm 0.0001,-0.0052\pm 0.0008,0.0105\pm 0.0008,0.1224\pm 0.0015,$
$-0.0246\pm 0.0001,0.0054\pm 0.0000,-0.0020\pm 0.0000,0.0010\pm 0.0003,$
$-0.0112\pm 0.0003,-0.0139\pm 0.0001,0.0086\pm 0.0001,-0.0017\pm
0.0000\Big{]}$
$\Xi_{b}^{0,-}$ | $5.809\pm 0.004$ | $5791.9\pm 0.5$ $5797.0\pm 0.6$ | $\Big{[}0.0069\pm 0.0008,0.0293\pm 0.0012,0.0543\pm 0.0016,-0.0002\pm 0.0003,$
$0.0014\pm 0.0001,-0.0004\pm 0.0000,0.0231\pm 0.0013,0.0397\pm 0.0003,$
$0.0278\pm 0.0018,0.0114\pm 0.0003,-0.0037\pm 0.0000,0.0009\pm 0.0000,$
$-0.0093\pm 0.0003,0.2285\pm 0.0053,0.2601\pm 0.0007,-0.0165\pm 0.0004,$
$0.0100\pm 0.0000,-0.0026\pm 0.0000,-0.0043\pm 0.0005,-0.0094\pm 0.0001,$
$0.3992\pm 0.0037,0.0525\pm 0.0026,-0.0092\pm 0.0006,0.0019\pm 0.0001,$
$0.0048\pm 0.0001,-0.0108\pm 0.0005,0.0095\pm 0.0005,0.0813\pm 0.0015,$
$-0.0145\pm 0.0002,0.0033\pm 0.0000,-0.0011\pm 0.0000,0.0011\pm 0.0002,$
$-0.0052\pm 0.0002,-0.0070\pm 0.0001,0.0034\pm 0.0001,-0.0007\pm
0.0000\Big{]}$
$\Lambda_{c}^{+}(2625)$ | $2.623\pm 0.007$ | $2628.11\pm 0.19$ | $\Big{[}0.0012\pm 0.0001,0.0148\pm 0.0007,0.0760\pm 0.0021,0.0359\pm 0.0004,$
$-0.0044\pm 0.0001,0.0010\pm 0.0000,0.0066\pm 0.0003,0.0059\pm 0.0002,$
$0.0376\pm 0.0018,0.0183\pm 0.0015,-0.0034\pm 0.0002,0.0008\pm 0.0000,$
$-0.0027\pm 0.0000,0.0767\pm 0.0022,0.2861\pm 0.0039,0.1060\pm 0.0001,$
$-0.0126\pm 0.0001,0.0027\pm 0.0000,0.0031\pm 0.0002,-0.0383\pm 0.0002,$
$0.2926\pm 0.0013,0.2054\pm 0.0037,-0.0346\pm 0.0004,0.0082\pm 0.0001,$
$0.0028\pm 0.0001,-0.0030\pm 0.0001,-0.0008\pm 0.0012,0.1395\pm 0.0009,$
$-0.0074\pm 0.0003,0.0018\pm 0.0001,-0.0010\pm 0.0000,0.0017\pm 0.0000,$
$-0.0077\pm 0.0004,-0.0222\pm 0.0001,0.0072\pm 0.0000,-0.0013\pm
0.0000\Big{]}$
$\Xi_{c}^{0,+}(2815)$ | $2.811\pm 0.006$ | $2819.79\pm 0.30$ $2816.51\pm 0.25$ | $\Big{[}0.0012\pm 0.0001,0.0100\pm 0.0005,0.0553\pm 0.0020,0.0231\pm 0.0005,$
$-0.0029\pm 0.0001,0.0006\pm 0.0000,0.0065\pm 0.0003,0.0119\pm 0.0001,$
$0.0432\pm 0.0013,0.0252\pm 0.0011,-0.0043\pm 0.0001,0.0010\pm 0.0000,$
$-0.0017\pm 0.0000,0.0715\pm 0.0018,0.2859\pm 0.0038,0.0892\pm 0.0000,$
$-0.0110\pm 0.0000,0.0023\pm 0.0000,0.0042\pm 0.0001,-0.0307\pm 0.0000,$
$0.3138\pm 0.0014,0.2328\pm 0.0038,-0.0377\pm 0.0004,0.0089\pm 0.0001,$
$0.0017\pm 0.0000,-0.0036\pm 0.0001,-0.0046\pm 0.0008,0.1014\pm 0.0011,$
$-0.0049\pm 0.0002,0.0012\pm 0.0000,-0.0005\pm 0.0000,0.0011\pm 0.0000,$
$-0.0032\pm 0.0002,-0.0122\pm 0.0000,0.0031\pm 0.0000,-0.0006\pm
0.0000\Big{]}$
## IV The form factors and weak decays
### IV.1 The weak transition form factors
In the following, we calculate these involved form factors of the
$\Lambda_{b}\to\Lambda_{c}(2625)$ and $\Xi_{b}\to\Xi_{c}(2815)$ transitions
numerically. The masses of baryons are quoted from the Particle Data Group
(PDG) ParticleDataGroup:2022pth , and the spatial wave functions illustrated
in Sec. III are shown in Table 2.
Eqs. (2.14) and (2.20) are valid in the spacelike region ($q^{2}<0$), since we
have imposed the $q^{+}=0$ condition. We therefore need to extrapolate the
obtained form factors to the timelike region ($q^{2}>0$). For the
extrapolation, we adopt the $z$-series parameterization
$f(q^{2})=\frac{1}{1-q^{2}/(m_{\text{pole}}^{f})^{2}}\Big{[}a_{0}+a_{1}z^{f}(q^{2})\Big{]},$
(4.1)
where $a_{0}$ and $a_{1}$ are free parameters to be fitted in the
spacelike region, and we have Lellouch:1995yv ; Bourrely:2005hp ;
Bourrely:2008za ; Bharucha:2015bzk ; Huang:2022lfr ; Aliev:2022gxi
$z^{f}(q^{2})=\frac{\sqrt{t_{+}^{f}-q^{2}}-\sqrt{t_{+}^{f}-t_{0}}}{\sqrt{t_{+}^{f}-q^{2}}+\sqrt{t_{+}^{f}-t_{0}}}$
(4.2)
with $t_{\pm}^{f}=(M\pm M^{\prime})^{2}$. The parameter $t_{0}$ is chosen as
Bharucha:2015bzk ; Aliev:2022gxi
$0\leqslant t_{0}=t_{+}\bigg{(}1-\sqrt{1-\frac{t_{-}}{t_{+}}}\bigg{)}\leqslant
t_{-}.$ (4.3)
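The conformal mapping of Eqs. (4.2)-(4.3) can be sketched as below; the function name is our own, and the masses passed in the example are illustrative values for the $\Lambda_{b}\to\Lambda_{c}(2625)$ transition:

```python
import math

# Sketch of Eqs. (4.2)-(4.3): the conformal variable z^f(q^2), with
# t_+ = (M + M')^2, t_- = (M - M')^2 and t_0 fixed as in Eq. (4.3).
def z_of_q2(q2, M, M_prime):
    t_plus = (M + M_prime) ** 2
    t_minus = (M - M_prime) ** 2
    t0 = t_plus * (1.0 - math.sqrt(1.0 - t_minus / t_plus))  # Eq. (4.3)
    num = math.sqrt(t_plus - q2) - math.sqrt(t_plus - t0)
    den = math.sqrt(t_plus - q2) + math.sqrt(t_plus - t0)
    return num / den

z0 = z_of_q2(0.0, 5.620, 2.623)  # z at q^2 = 0 (illustrative masses in GeV)
```

By construction $z$ vanishes at $q^{2}=t_{0}$ and varies slowly over the physical region, which is what makes the two-parameter form in Eq. (4.1) sufficient.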
The pole masses are chosen as $m_{B_{c}}=6.275\ \text{GeV}$
ParticleDataGroup:2022pth for $g_{(1,3,4)}^{V}$, $m_{B_{c}^{\ast}}=6.338\
\text{GeV}$ Godfrey:2004ya for $g_{2}^{V}$, $m_{B_{c0}}=6.706\ \text{GeV}$
Godfrey:2004ya for $f_{(1,3,4)}^{A}$, and $m_{B_{c1}}=6.741\ \text{GeV}$
Godfrey:2004ya for $f_{2}^{A}$. In order to fix the free parameters
$a_{0}$ and $a_{1}$, we numerically compute 24 points for each form
factor from $q^{2}=-q_{\text{max}}^{2}$ to $q^{2}=-0.01\ \text{GeV}^{2}$ in
the spacelike region, and then fit them with the MINUIT program. The fitted
parameters are collected in Table 3, and the $q^{2}$ dependence of the form
factors of the $\Lambda_{b}\to\Lambda_{c}(2625)$ and $\Xi_{b}\to\Xi_{c}(2815)$
transitions is displayed in Fig. 2.
In Table 3, we also present the $\chi^{2}$ values, defined by
$\chi^{2}=\frac{1}{n(n-1)}\sum_{i=1}^{n}\Bigg{(}\frac{f^{cal}(q_{i}^{2})-f^{ana}(q_{i}^{2})}{\delta
f^{cal}(q_{i}^{2})}\Bigg{)}^{2},$ (4.4)
to characterize the quality of the analytic continuation, where $n=24$, and
$f^{cal}$ and $f^{ana}$ denote the value calculated in the quark model and the
value from the analytic continuation, respectively. The $\delta f^{cal}$ is
the uncertainty of $f^{cal}$. Considering that the $z$-series parameterization
has been widely used to perform analytic continuations, and that the
$\chi^{2}$ values of our fits are acceptable, we adopt the $z$-series form for
the parameterization in this work.
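The $\chi^{2}$ measure of Eq. (4.4) is a straightforward reduced sum; a minimal sketch (function name and toy inputs our own) is:

```python
# Sketch of Eq. (4.4): compares the quark-model points f_cal (with
# uncertainties delta_f) to the fitted z-series continuation f_ana
# evaluated at the same q^2 points.
def chi2(f_cal, f_ana, delta_f):
    n = len(f_cal)
    s = sum(((c - a) / d) ** 2 for c, a, d in zip(f_cal, f_ana, delta_f))
    return s / (n * (n - 1))
```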
Table 3: The fitted parameters for the form factors of the $\Lambda_{b}\to\Lambda_{c}(2625)$ and $\Xi_{b}\to\Xi_{c}(2815)$ transitions in Eq. (4.1). $f$ | $a_{0}$ | $a_{1}$ | $\chi^{2}$ | $f$ | $a_{0}$ | $a_{1}$ | $\chi^{2}$
---|---|---|---|---|---|---|---
$\Lambda_{b}\to\Lambda_{c}(2625)$
$g_{1}^{V}$ | $0.0409\pm 0.0002$ | $-0.3224\pm 0.0053$ | $0.015$ | $f_{1}^{A}$ | $-0.0555\pm 0.0003$ | $0.4855\pm 0.0074$ | $0.026$
$g_{2}^{V}$ | $1.0889\pm 0.0016$ | $-10.5808\pm 0.0459$ | $0.460$ | $f_{2}^{A}$ | $0.7205\pm 0.0005$ | $-6.8173\pm 0.0168$ | $1.240$
$g_{3}^{V}$ | $-0.0763\pm 0.0002$ | $0.8045\pm 0.0046$ | $0.310$ | $f_{3}^{A}$ | $0.1093\pm 0.0003$ | $-1.2327\pm 0.0075$ | $0.363$
$g_{4}^{V}$ | $-0.2360\pm 0.0004$ | $2.5599\pm 0.0127$ | $0.473$ | $f_{4}^{A}$ | $-0.2749\pm 0.0006$ | $3.1018\pm 0.0169$ | $0.455$
$\Xi_{b}\to\Xi_{c}(2815)$
$g_{1}^{V}$ | $0.0397\pm 0.0002$ | $-0.3688\pm 0.0065$ | $0.016$ | $f_{1}^{A}$ | $-0.0549\pm 0.0003$ | $0.5577\pm 0.0091$ | $0.028$
$g_{2}^{V}$ | $1.1750\pm 0.0016$ | $-13.1039\pm 0.0538$ | $0.564$ | $f_{2}^{A}$ | $0.7629\pm 0.0006$ | $-8.3589\pm 0.0203$ | $1.403$
$g_{3}^{V}$ | $-0.0884\pm 0.0002$ | $1.0549\pm 0.0064$ | $0.297$ | $f_{3}^{A}$ | $0.1281\pm 0.0003$ | $-1.6232\pm 0.0103$ | $0.361$
$g_{4}^{V}$ | $-0.2719\pm 0.0004$ | $3.3205\pm 0.0139$ | $0.710$ | $f_{4}^{A}$ | $-0.3167\pm 0.0006$ | $3.9975\pm 0.0189$ | $0.625$
Figure 2: The $q^{2}$-dependent form factors of the
$\Lambda_{b}\to\Lambda_{c}(2625)$ (top panels) and $\Xi_{b}\to\Xi_{c}(2815)$
(bottom panels) transitions. The uncertainties are included, but they are too
small to be visible.
In Table 4, we compare our results for the form factors
$g_{(1,2,3,4)}^{V}(q^{2})$ and $f_{(1,2,3,4)}^{A}(q^{2})$ at the $q^{2}=0$ and
$q^{2}=q_{\text{max}}^{2}$ endpoints of the $\Lambda_{b}\to\Lambda_{c}(2625)$
transition with other theoretical predictions from the LFQM Chua:2019yqh and
LQCD Meinel:2021mdj . The LQCD results are reproduced from Table IX of Ref.
Meinel:2021mdj . In particular, the central value $O$ and the corresponding
statistical uncertainty $\sigma_{O,\text{stat}}$ are reproduced by the so-
called “nominal-order” fit, i.e.,
$\begin{split}f(q^{2})&=F^{f}+A^{f}\big{(}\omega(q^{2})-1\big{)},\\\
\omega(q^{2})&=\frac{M^{2}+M^{\prime 2}-q^{2}}{2MM^{\prime}},\end{split}$
(4.5)
and the systematic uncertainty can be obtained by
$\sigma_{O,\text{syst}}=\text{max}\Big{(}|O_{\text{HO}}-O|,\sqrt{|\sigma_{O,\text{HO},\text{stat}}^{2}-\sigma_{O,\text{stat}}^{2}|}\Big{)},$
(4.6)
where $O_{\text{HO}}$ and $\sigma_{O,\text{HO},\text{stat}}$ are the central
value and the corresponding statistical uncertainty in the “higher-order”
fitting:
$f_{\text{HO}}(q^{2})=F_{\text{HO}}^{f}+A_{\text{HO}}^{f}\big{(}\omega(q^{2})-1\big{)}.$
(4.7)
Finally, the total uncertainty can be obtained by adding the systematic and
statistical uncertainties in quadrature as
$\sigma_{O,\text{total}}=\sqrt{\sigma_{O,\text{syst}}^{2}+\sigma_{O,\text{stat}}^{2}}.$
(4.8)
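The error-combination procedure of Eqs. (4.6) and (4.8) can be sketched as follows (function names our own; the prose states the total uncertainty is a quadrature sum, so a plus sign is used):

```python
import math

# Sketch of Eq. (4.6): systematic uncertainty from comparing the
# "nominal-order" fit (O, s_stat) with the "higher-order" fit
# (O_HO, s_HO_stat).
def sigma_syst(O, O_HO, s_stat, s_HO_stat):
    return max(abs(O_HO - O), math.sqrt(abs(s_HO_stat**2 - s_stat**2)))

# Sketch of Eq. (4.8): total uncertainty as the quadrature sum of the
# systematic and statistical uncertainties.
def sigma_total(s_syst, s_stat):
    return math.sqrt(s_syst**2 + s_stat**2)
```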
The definitions of the form factors used in LQCD Meinel:2021rbm ;
Meinel:2021mdj can be converted into the present forms by the relations in
the Appendix 2 of Ref. Meinel:2021rbm combined with
$\begin{split}g_{1}^{V}=&F_{1}^{V},~{}~{}~{}g_{2}^{V}=F_{2}^{V},~{}~{}~{}g_{3}^{V}=F_{3}^{V}-M^{\prime}/MF_{4}^{V},~{}~{}~{}g_{4}^{V}=F_{4}^{V},\\\
f_{1}^{A}=&F_{1}^{A},~{}~{}~{}f_{2}^{A}=F_{2}^{A},~{}~{}~{}f_{3}^{A}=F_{3}^{A}-M^{\prime}/MF_{4}^{A},~{}~{}~{}f_{4}^{A}=F_{4}^{A}.\end{split}$
(4.9)
We emphasize that only the $q^{2}_{\text{max}}$ endpoint values of LQCD are
presented, since the LQCD results are limited to a small kinematic region near
$q_{\text{max}}^{2}$. Our results for $g_{1,2}^{V}(q_{\text{max}}^{2})$ and
$f_{1,2}^{A}(q_{\text{max}}^{2})$ are comparable with the LQCD results, while
the others show some deviations. We expect more theoretical work on these
form factors to further enrich our knowledge of these weak decays.
Table 4: The theoretical predictions for the form factors $g_{(1,2,3,4)}^{V}(q^{2})$ and $f_{(1,2,3,4)}^{A}(q^{2})$ at $q^{2}=0$ and $q^{2}=q_{\text{max}}^{2}$ endpoints of the $\Lambda_{b}\to\Lambda_{c}(2625)$ transition using different approaches. | $g_{1}^{V}(0)$ | $g_{2}^{V}(0)$ | $g_{3}^{V}(0)$ | $g_{4}^{V}(0)$
---|---|---|---|---
This Work | $0.0352\pm 0.0002$ | $0.9023\pm 0.0018$ | $-0.0621\pm 0.0002$ | $-0.1909\pm 0.0005$
LFQM Chua:2019yqh | $-0.007^{+0.037}_{-0.026}$ | $0.509^{+0.184}_{-0.173}$ | $0.088^{+0.039}_{-0.043}$ | $0.004^{+0.058}_{-0.053}$
| $f_{1}^{A}(0)$ | $f_{2}^{A}(0)$ | $f_{3}^{A}(0)$ | $f_{4}^{A}(0)$
This Work | $-0.0469\pm 0.0003$ | $0.6003\pm 0.0006$ | $0.0876\pm 0.0003$ | $-0.2202\pm 0.0007$
LFQM Chua:2019yqh | $0.028^{+0.065}_{-0.032}$ | $0.545^{+0.111}_{-0.104}$ | $0.022^{+0.033}_{-0.091}$ | $-0.005^{+0.104}_{-0.068}$
| $g_{1}^{V}(q_{\text{max}}^{2})$ | $g_{2}^{V}(q_{\text{max}}^{2})$ | $g_{3}^{V}(q_{\text{max}}^{2})$ | $g_{4}^{V}(q_{\text{max}}^{2})$
This Work | $0.0603\pm 0.0003$ | $1.6412\pm 0.0023$ | $-0.1171\pm 0.0003$ | $-0.3639\pm 0.0006$
LFQM Chua:2019yqh | $-0.009^{+0.046}_{-0.033}$ | $0.737^{+0.267}_{-0.251}$ | $0.115^{+0.051}_{-0.056}$ | $0.005^{+0.072}_{-0.066}$
LQCD Meinel:2021mdj | $0.0692\pm 0.0045$ | $1.1340\pm 0.1556$ | $-0.7977\pm 0.3646$ | $0.2117\pm 0.2795$
| $f_{1}^{A}(q_{\text{max}}^{2})$ | $f_{2}^{A}(q_{\text{max}}^{2})$ | $f_{3}^{A}(q_{\text{max}}^{2})$ | $f_{4}^{A}(q_{\text{max}}^{2})$
This Work | $-0.0800\pm 0.0004$ | $1.0470\pm 0.0007$ | $0.1636\pm 0.0004$ | $-0.4115\pm 0.0008$
LFQM Chua:2019yqh | $0.035^{+0.082}_{-0.040}$ | $0.756^{+0.154}_{-0.144}$ | $0.027^{+0.041}_{-0.114}$ | $-0.006^{+0.131}_{-0.086}$
LQCD Meinel:2021mdj | $-0.0660\pm 0.0280$ | $0.8310\pm 0.0978$ | $1.3386\pm 3.4803$ | $0.1795\pm 2.9081$
Besides, within Heavy Quark Effective Theory (HQET), one can rewrite the weak
transition matrix element of the concerned
$\mathcal{B}_{b}(\bar{3}_{f},1/2^{+})\to\mathcal{B}_{c}(\bar{3}_{f},3/2^{-})$
transition as Chua:2019yqh
$\langle\mathcal{B}_{c}(v^{\prime})|j_{V-A}^{\mu}|\mathcal{B}_{b}(v)\rangle=-\sigma(\omega)\bar{u}_{\alpha}(v^{\prime})v^{\alpha}\gamma^{\mu}(1-\gamma^{5})u(v),$
(4.10)
where $v=P/M$ and $v^{\prime}=P^{\prime}/M^{\prime}$ are the velocities of the
initial and final baryons, respectively. Thus, in the heavy quark limit the
form factors take the simpler form Chua:2019yqh
$g_{2}^{V}=f_{2}^{A}=\sigma(\omega),~{}~{}~{}g_{1,3,4}^{V}=f_{1,3,4}^{A}=0.$
(4.11)
As shown in Fig. 2, our results for $g_{2}^{V}$ and $f_{2}^{A}$ are indeed
significantly larger than those for $g_{(1,3,4)}^{V}$ and $f_{(1,3,4)}^{A}$,
which is consistent with the expectation from HQET.
### IV.2 The semileptonic decays
In this section, we further calculate the semileptonic decays
$\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)\ell^{-}\nu_{\ell}$ and
$\Xi_{b}^{0,-}\to\Xi_{c}(2815)^{+,0}\ell^{-}\nu_{\ell}$
$(\ell^{-}=e^{-},\mu^{-},\tau^{-})$. The differential decay width of the
semileptonic decay can be obtained by
$\begin{split}\frac{d^{2}\Gamma}{dq^{2}d\cos\theta_{\ell}}=&\Big{|}\frac{G_{F}}{\sqrt{2}}V_{cb}\Big{|}^{2}\frac{\sqrt{Q_{+}Q_{-}}q^{2}(1-\hat{m_{\ell}}^{2})^{2}}{512\pi^{3}M^{3}}\\\
&\times\Big{(}L_{1}+L_{2}\cos\theta_{\ell}+L_{3}\cos
2\theta_{\ell}\Big{)},\end{split}$ (4.12)
where $Q_{\pm}=(M\pm M^{\prime})^{2}-q^{2}$,
$\hat{m_{\ell}}^{2}=m_{\ell}^{2}/q^{2}$ and the angular coefficients $L_{1}$,
$L_{2}$, and $L_{3}$ are given as
$\begin{split}L_{1}=&\frac{1}{2}(3+\hat{m_{\ell}}^{2})\Big{(}H_{-3/2,-1}^{2}+H_{-1/2,-1}^{2}+H_{+1/2,+1}^{2}+H_{+3/2,+1}^{2}\Big{)}\\\
&+(1+\hat{m_{\ell}}^{2})\Big{(}H_{+1/2,0}^{2}+H_{-1/2,0}^{2}\Big{)}\\\
&+2\hat{m_{\ell}}^{2}\Big{(}H_{+1/2,t}^{2}+H_{-1/2,t}^{2}\Big{)},\end{split}$
(4.13)
$\begin{split}L_{2}=&2\Big{(}H_{-3/2,-1}^{2}+H_{-1/2,-1}^{2}-H_{+1/2,+1}^{2}-H_{+3/2,+1}^{2}\Big{)}\\\
&-2\hat{m_{\ell}}^{2}\Big{(}\text{Re}[H_{+1/2,0}^{\dagger}H_{+1/2,t}]+\text{Re}[H_{-1/2,0}^{\dagger}H_{-1/2,t}]\\\
&+\text{Re}[H_{+1/2,0}H_{+1/2,t}^{\dagger}]+\text{Re}[H_{-1/2,0}H_{-1/2,t}^{\dagger}]\Big{)},\end{split}$
(4.14)
$\begin{split}L_{3}=&-(1-\hat{m_{\ell}}^{2})\Big{(}H_{+1/2,0}^{2}+H_{-1/2,0}^{2}\Big{)}\\\
&+\frac{1-\hat{m_{\ell}}^{2}}{2}\Big{(}H_{-3/2,-1}^{2}+H_{-1/2,-1}^{2}+H_{+1/2,+1}^{2}+H_{+3/2,+1}^{2}\Big{)}.\end{split}$
(4.15)
The helicity amplitude $H_{\lambda^{\prime},\lambda_{W}}$ is defined as
$H_{\lambda_{\mathcal{B}_{c}},\lambda_{W}}=\epsilon^{*}_{\mu}(\lambda_{W})\langle\mathcal{B}_{c}(P^{\prime},\lambda_{\mathcal{B}_{c}})|V^{\mu}-A^{\mu}|\mathcal{B}_{b}(P,\lambda_{\mathcal{B}_{b}})\rangle$
(4.16)
with $\lambda$, $\lambda^{\prime}$, and $\lambda_{W}$ denoting the helicities
of the initial state $\mathcal{B}_{b}$, the final state $\mathcal{B}_{c}$, and
the off-shell $W$ boson, respectively, which satisfy
$\lambda=\lambda^{\prime}-\lambda_{W}$. The explicit expressions are
Gutsche:2017wag
$\begin{split}H^{V}_{1/2,t}=&\sqrt{\frac{2}{3}\frac{Q_{-}}{q^{2}}}\frac{Q_{+}}{2MM^{\prime}}\bigg{(}g_{1}^{V}(q^{2})M+g_{2}^{V}(q^{2})M_{-}\\\
&+g_{3}^{V}(q^{2})\frac{M_{+}M_{-}-q^{2}}{2M^{\prime}}+g_{4}^{V}(q^{2})\frac{M_{+}M_{-}+q^{2}}{2M}\bigg{)},\\\
H^{V}_{1/2,0}=&\sqrt{\frac{2}{3}\frac{Q_{+}}{q^{2}}}\bigg{(}g_{1}^{V}(q^{2})\frac{M_{+}M_{-}-q^{2}}{2M^{\prime}}+g_{2}^{V}(q^{2})\frac{Q_{-}M_{+}}{2MM^{\prime}}\\\
&+g_{3}^{V}(q^{2})\frac{Q_{+}Q_{-}}{4MM^{\prime
2}}+g_{4}^{V}(q^{2})\frac{Q_{+}Q_{-}}{4M^{2}M^{\prime}}\bigg{)},\\\
H^{V}_{1/2,1}=&\sqrt{\frac{Q_{+}}{3}}\bigg{(}g_{1}^{V}(q^{2})-g_{2}^{V}(q^{2})\frac{Q_{-}}{MM^{\prime}}\bigg{)},\\\
H^{V}_{3/2,1}=&\sqrt{Q_{+}}g_{1}^{V}(q^{2}),\end{split}$ (4.17)
$\begin{split}H^{A}_{1/2,t}=&-\sqrt{\frac{2}{3}\frac{Q_{+}}{q^{2}}}\frac{Q_{-}}{2MM^{\prime}}\bigg{(}f_{1}^{A}(q^{2})M-f_{2}^{A}(q^{2})M_{+}\\\
&+f_{3}^{A}(q^{2})\frac{M_{+}M_{-}-q^{2}}{2M^{\prime}}+f_{4}^{A}(q^{2})\frac{M_{+}M_{-}+q^{2}}{2M}\bigg{)},\\\
H^{A}_{1/2,0}=&-\sqrt{\frac{2}{3}\frac{Q_{-}}{q^{2}}}\bigg{(}f_{1}^{A}(q^{2})\frac{M_{+}M_{-}-q^{2}}{2M^{\prime}}-f_{2}^{A}(q^{2})\frac{Q_{+}M_{-}}{2MM^{\prime}}\\\
&+f_{3}^{A}(q^{2})\frac{Q_{+}Q_{-}}{4MM^{\prime
2}}+f_{4}^{A}(q^{2})\frac{Q_{+}Q_{-}}{4M^{2}M^{\prime}}\bigg{)},\\\
H^{A}_{1/2,1}=&\sqrt{\frac{Q_{-}}{3}}\bigg{(}f_{1}^{A}(q^{2})-f_{2}^{A}(q^{2})\frac{Q_{+}}{MM^{\prime}}\bigg{)},\\\
H^{A}_{3/2,1}=&-\sqrt{Q_{-}}f_{1}^{A}(q^{2}),\end{split}$ (4.18)
for the vector and axial-vector currents, respectively, with
$M_{\pm}=M\pm M^{\prime}$. The negative-helicity amplitudes can be obtained
from the relations
$H^{V}_{-\lambda^{\prime},-\lambda_{W}}=+H^{V}_{\lambda^{\prime},\lambda_{W}},~{}~{}H^{A}_{-\lambda^{\prime},-\lambda_{W}}=-H^{A}_{\lambda^{\prime},\lambda_{W}},$
(4.19)
and the total helicity amplitudes can be obtained by
$H_{\lambda^{\prime},\lambda_{W}}=H^{V}_{\lambda^{\prime},\lambda_{W}}-H^{A}_{\lambda^{\prime},\lambda_{W}}.$
(4.20)
After integrating over the angle $\theta_{\ell}$, the differential
decay width reads
$\begin{split}\frac{d\Gamma}{dq^{2}}=&\Big{|}\frac{G_{F}}{\sqrt{2}}V_{cb}\Big{|}^{2}\frac{\sqrt{Q_{+}Q_{-}}q^{2}(1-\hat{m_{\ell}}^{2})^{2}}{512\pi^{3}M^{3}}\Big{(}2L_{1}-\frac{2}{3}L_{3}\Big{)}.\end{split}$
(4.21)
The total decay width is then obtained by integrating over $q^{2}$ from
$m_{\ell}^{2}$ to $q^{2}_{\text{max}}$.
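The final $q^{2}$ integration can be sketched numerically; the function name is our own and the spectrum used here is a toy constant, not the physical one of Eq. (4.21):

```python
import numpy as np

# Hedged sketch: total width from integrating a given dGamma/dq^2 over
# m_ell^2 <= q^2 <= q^2_max with the trapezoidal rule.
def total_width(dGamma_dq2, m_ell, q2_max, npts=201):
    q2 = np.linspace(m_ell**2, q2_max, npts)
    y = dGamma_dq2(q2)
    # trapezoidal rule, written out explicitly
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (q2[1:] - q2[:-1])))

# Toy spectrum: a constant dGamma/dq^2 = 2 over [0, 3] integrates to 6
width = total_width(lambda q2: 2.0 * np.ones_like(q2), 0.0, 3.0)
```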
Figure 3: The differential branching ratios of $\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)\ell^{-}\nu_{\ell}$ and $\Xi_{b}^{0}\to\Xi_{c}^{+}(2815)\ell^{-}\nu_{\ell}$ with $\ell^{-}=e^{-},\mu^{-},\text{or}\ \tau^{-}$. Table 5: The comparison of our numerical results with the experimental measurement and other theoretical results for the absolute branching ratios of $\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)\ell^{-}\nu_{\ell}$ and $\Xi_{b}^{0,-}\to\Xi_{c}^{+,0}(2815)\ell^{-}\nu_{\ell}$ with $\ell=e,\mu,\tau$, where the branching ratios outside and inside the brackets in the second column correspond to the $\Xi_{b}^{0}\to\Xi_{c}(2815)^{+}$ and $\Xi_{b}^{-}\to\Xi_{c}(2815)^{0}$ transitions, respectively. All values are given in percent $(\%)$. Mode | This work | Expt. ParticleDataGroup:2022pth | CCQM Gutsche:2018nks | HQSS Nieves:2019kdh | CQM Pervin:2005ve
---|---|---|---|---|---
$\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)e^{-}\nu_{e}$ | $1.653\pm 0.114$ | - | $0.17\pm 0.03$ | - | $(0.88-1.40)$
$\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)\mu^{-}\nu_{\mu}$ | $1.641\pm 0.113$ | $1.3^{+0.6}_{-0.5}$ | $0.17\pm 0.03$ | $3.5^{+1.3}_{-1.2}$ | $(0.88-1.40)$
$\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)\tau^{-}\nu_{\tau}$ | $0.1688\pm 0.0116$ | - | $0.018\pm 0.004$ | $0.38^{+0.09}_{-0.08}$ | $(0.18-0.22)$
$\Xi_{b}^{0(-)}\to\Xi_{c}^{+(0)}(2815)e^{-}\nu_{e}$ | $1.698\pm 0.122\ (1.803\pm 0.132)$ | - | - | - | -
$\Xi_{b}^{0(-)}\to\Xi_{c}^{+(0)}(2815)\mu^{-}\nu_{\mu}$ | $1.685\pm 0.121\ (1.789\pm 0.131)$ | - | - | - | -
$\Xi_{b}^{0(-)}\to\Xi_{c}^{+(0)}(2815)\tau^{-}\nu_{\tau}$ | $0.1758\pm 0.0126\ (0.1868\pm 0.0137)$ | - | - | - | -
Taking the form factors obtained by the light-front quark model as input, we
calculate the semileptonic decays of the $\Lambda_{b}\to\Lambda_{c}(2625)$ and
$\Xi_{b}\to\Xi_{c}(2815)$ processes. The masses of baryons and leptons are
taken from the PDG ParticleDataGroup:2022pth , and the lifetimes of
$\Lambda_{b}^{0}$ and $\Xi_{b}^{-,0}$ are fixed to be
$\begin{split}\tau_{\Lambda_{b}^{0}}=&(1.471\pm 0.009)\ \text{fs},\\\
\tau_{\Xi_{b}^{-}}=&(1.572\pm 0.040)\ \text{fs},\\\
\tau_{\Xi_{b}^{0}}=&(1.480\pm 0.030)\ \text{fs},\end{split}$
respectively, averaged by the PDG ParticleDataGroup:2022pth . Besides, the
involved CKM matrix element is $V_{cb}=(40.8\pm 1.4)\times 10^{-3}$
ParticleDataGroup:2022pth .
The $q^{2}$ dependence of the differential branching ratios is shown in Fig.
3. Since those of $\Xi_{b}^{-}\to\Xi_{c}^{0}(2815)\ell^{-}\nu_{\ell}$ behave
similarly to the neutral channel, we do not display them here. In the meantime,
we also present the branching ratios, and compare our results with the
experimental data and other theoretical results, including the CCQM
Gutsche:2018nks , the HQSS Nieves:2019kdh , and the constituent quark model
(CQM) Pervin:2005ve , in Table 5.
Obviously, our result
$\mathcal{B}(\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)\mu^{-}\nu_{\mu})=(1.641\pm
0.113)\%$ is consistent with the current experimental data
$(1.3^{+0.6}_{-0.5})\%$ ParticleDataGroup:2022pth . Moreover, the predicted
branching ratios of the electron and muon channels can reach up to the
magnitude of $1\%$, which are accessible at LHCb. From Table 5, we notice that
our results of $\Lambda_{b}\to\Lambda_{c}(2625)\ell^{-}\nu_{\ell}$ are
consistent with the estimate from the HQSS Nieves:2019kdh and the CQM
Pervin:2005ve , but are larger than the CCQM results Gutsche:2018nks .
Besides, we also find a difference between our result for
$\mathcal{B}(\Xi_{b}\to\Xi_{c}(2815)\ell^{-}\nu_{\ell})$ and that given in
Ref. Pavao:2017cpt , where the $\Xi_{c}(2815)$ resonance is assumed to be
dynamically generated. This shows that the branching ratios of the discussed
transitions depend on the structure assignments of the $\Lambda_{c}(2625)$ and
$\Xi_{c}(2815)$. We expect more theoretical studies and the ongoing
experiments to explore them, which will provide a crucial test of our results;
more importantly, the different structure assignments of the
$\Lambda_{c}(2625)$ and $\Xi_{c}(2815)$ could thereby be distinguished.
Additionally, other important physical observables, including the leptonic
forward-backward asymmetry ($A_{FB}$), the final hadron polarization
($P_{B}$), and the lepton polarization ($P_{\ell}$) are also investigated in
this work. The leptonic forward-backward asymmetry $A_{FB}$ can be obtained by
$\begin{split}A_{FB}(q^{2})=\frac{\Big{(}\int_{0}^{1}-\int_{-1}^{0}\Big{)}d\cos\theta_{\ell}\frac{d^{2}\Gamma}{dq^{2}d\cos\theta_{\ell}}}{\Big{(}\int_{0}^{1}+\int_{-1}^{0}\Big{)}d\cos\theta_{\ell}\frac{d^{2}\Gamma}{dq^{2}d\cos\theta_{\ell}}}=\frac{3L_{2}}{6L_{1}-2L_{3}}.\end{split}$
(4.22)
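Equation (4.22) reduces the angular integrals to a simple ratio of the angular coefficients; a direct transcription (function name our own) is:

```python
# Transcription of Eq. (4.22): the forward-backward asymmetry A_FB in
# terms of the angular coefficients L1, L2, L3 of Eq. (4.12). The
# denominator is proportional to the angle-integrated rate, 2L1 - (2/3)L3.
def forward_backward_asymmetry(L1, L2, L3):
    return 3.0 * L2 / (6.0 * L1 - 2.0 * L3)
```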
The final hadron polarization $P_{B}$ is defined as
$\begin{split}P_{B}(q^{2})=\frac{d\Gamma^{\lambda^{\prime}=(+3/2,+1/2)}/dq^{2}-d\Gamma^{\lambda^{\prime}=(-3/2,-1/2)}/dq^{2}}{d\Gamma/dq^{2}}\end{split}$
(4.23)
with $\lambda^{\prime}$ representing the polarization of the final charmed
hadron, and
$\begin{split}\frac{d\Gamma^{\lambda^{\prime}=(+3/2,+1/2)}}{dq^{2}}=&\frac{4}{3}\Bigg{(}(2+\hat{m_{\ell}}^{2})\Big{(}H_{1/2,0}^{2}+H_{1/2,1}^{2}+H_{3/2,1}^{2}\Big{)}\\\
&+3\hat{m_{\ell}}^{2}H_{1/2,t}^{2}\Bigg{)},\end{split}$ (4.24)
$\begin{split}\frac{d\Gamma^{\lambda^{\prime}=(-3/2,-1/2)}}{dq^{2}}=&\frac{4}{3}\Bigg{(}(2+\hat{m_{\ell}}^{2})\Big{(}H_{-1/2,0}^{2}+H_{-1/2,-1}^{2}\\\
&+H_{-3/2,-1}^{2}\Big{)}+3\hat{m_{\ell}}^{2}H_{-1/2,t}^{2}\Bigg{)}.\end{split}$
(4.25)
The lepton polarization $P_{\ell}$ can be obtained by
$\begin{split}P_{\ell}(q^{2})=\frac{d\Gamma^{\lambda_{\ell}=+1/2}/dq^{2}-d\Gamma^{\lambda_{\ell}=-1/2}/dq^{2}}{d\Gamma/dq^{2}},\end{split}$
(4.26)
with $\lambda_{\ell}$ denoting the polarization of the lepton $\ell^{-}$, and
$\begin{split}\frac{d\Gamma^{\lambda_{\ell}=+1/2}}{dq^{2}}=&\frac{4}{3}\hat{m_{\ell}}^{2}\Bigg{(}H_{+1/2,0}^{2}+H_{-1/2,0}^{2}+H_{1/2,1}^{2}+H_{3/2,1}^{2}\\\
&+H_{-1/2,-1}^{2}+H_{-3/2,-1}^{2}+3H_{1/2,t}^{2}+3H_{-1/2,t}^{2}\Bigg{)},\end{split}$
(4.27)
$\begin{split}\frac{d\Gamma^{\lambda_{\ell}=-1/2}}{dq^{2}}=&\frac{8}{3}\Bigg{(}H_{1/2,0}^{2}+H_{-1/2,0}^{2}+H_{1/2,1}^{2}+H_{3/2,1}^{2}\\\
&+H_{-1/2,-1}^{2}+H_{-3/2,-1}^{2}\Bigg{)}.\end{split}$ (4.28)
Here, for brevity, we have omitted the common factor
$\begin{split}\Big{|}\frac{G_{F}}{\sqrt{2}}V_{cb}\Big{|}^{2}\frac{\sqrt{Q_{+}Q_{-}}q^{2}(1-\hat{m_{\ell}}^{2})^{2}}{512\pi^{3}M^{3}}\end{split}$
(4.29)
in the above expressions.
The $q^{2}$ dependence of the leptonic forward-backward asymmetries
($A_{FB}$), the final hadron polarizations ($P_{B}$), and the lepton
polarizations ($P_{\ell}$) of the concerned semileptonic decays is displayed
in Figs. 4, 5, and 6, respectively. Future experimental measurements of these
observables may provide valuable information on the discussed weak decays.
Figure 4: The leptonic forward-backward asymmetries ($A_{FB}$) of
$\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)\ell^{-}\nu_{\ell}$ and
$\Xi_{b}^{0}\to\Xi_{c}^{+}(2815)\ell^{-}\nu_{\ell}$ with
$\ell^{-}=e^{-},\mu^{-},\text{or}\ \tau^{-}$. The uncertainties are included,
but they are too small to be visible.
Figure 5: The final hadron polarizations ($P_{B}$) of
$\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)\ell^{-}\nu_{\ell}$ and
$\Xi_{b}^{0}\to\Xi_{c}^{+}(2815)\ell^{-}\nu_{\ell}$ with
$\ell^{-}=e^{-},\mu^{-},\text{or}\ \tau^{-}$. The uncertainties are included,
but they are too small to be visible.
Figure 6: The lepton polarizations ($P_{\ell}$) of
$\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)\ell^{-}\nu_{\ell}$ and
$\Xi_{b}^{0}\to\Xi_{c}^{+}(2815)\ell^{-}\nu_{\ell}$ with
$\ell^{-}=e^{-},\mu^{-},\text{or}\ \tau^{-}$. The uncertainties are included,
but they are too small to be visible.
Moreover, we are also interested in the ratios of branching fractions
$\displaystyle
R_{\Lambda_{c}(2625)}=\frac{\mathcal{B}(\Lambda_{b}\to\Lambda_{c}(2625)\tau^{-}\nu_{\tau})}{\mathcal{B}(\Lambda_{b}\to\Lambda_{c}(2625)\ell^{-}\nu_{\ell})}\approx
0.10,$ $\displaystyle
R_{\Xi_{c}(2815)}=\frac{\mathcal{B}(\Xi_{b}\to\Xi_{c}(2815)\tau^{-}\nu_{\tau})}{\mathcal{B}(\Xi_{b}\to\Xi_{c}(2815)\ell^{-}\nu_{\ell})}\approx
0.10$
with $\ell^{-}=e^{-}\ \text{or}\ \mu^{-}$, which test lepton flavor
universality (LFU). Our result for $R_{\Lambda_{c}(2625)}$ is consistent with
the value $0.11\pm 0.02$ estimated by the CCQM Gutsche:2018nks .
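The quoted ratios follow directly from the branching fractions in Table 5; a trivial sketch (function name our own, central values only) is:

```python
# Sketch of the LFU ratio R: the tau-mode branching fraction divided by a
# light-lepton mode. The numbers are the central values from Table 5 for
# the Lambda_b -> Lambda_c(2625) channel (in percent).
def lfu_ratio(br_tau, br_light):
    return br_tau / br_light

R_Lambda_c = lfu_ratio(0.1688, 1.641)  # ~0.10, as quoted in the text
```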
### IV.3 The color-allowed two-body nonleptonic decays
In this subsection, we further evaluate the color-allowed two-body nonleptonic
decays $\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)M^{-}$ and
$\Xi_{b}^{0,-}\to\Xi_{c}^{+,0}(2815)M^{-}$ with $M^{-}$ being a pseudoscalar
meson ($\pi^{-}$, $K^{-}$, $D^{-}$, or $D_{s}^{-}$) or a vector meson
($\rho^{-}$, $K^{*-}$, $D^{*-}$, or $D_{s}^{*-}$). Based on the naïve
factorization assumption, the hadronic transition matrix element can be
factorized into a product of two independent matrix elements
$\begin{split}\langle\mathcal{B}_{c}(P^{\prime},&J_{z}^{\prime})M^{-}|\mathcal{H}_{\text{eff}}|\mathcal{B}_{b}(P,J_{z})\rangle\\\
=&\frac{G_{F}}{\sqrt{2}}V_{cb}V_{qq^{\prime}}^{\ast}\langle
M^{-}|\bar{q}^{\prime}\gamma_{\mu}(1-\gamma_{5})q|0\rangle\\\
&\times\langle\mathcal{B}_{c}(P^{\prime},J_{z}^{\prime})|\bar{c}\gamma^{\mu}(1-\gamma_{5})b|\mathcal{B}_{b}(P,J_{z})\rangle,\end{split}$
(4.30)
where the meson part is determined by a decay parameter as
$\begin{split}\langle
M|\bar{q}^{\prime}&\gamma_{\mu}(1-\gamma_{5})q|0\rangle=\left\\{\begin{array}[]{ll}if_{P}q_{\mu},&M\in\text{pseudoscalar\
meson}\\\ f_{V}\epsilon_{\mu}^{*}m_{V},&M\in\text{vector\
meson}\end{array}.\right.\end{split}$ (4.31)
The naïve factorization assumption works well for color-allowed dominated
decays. However, color-suppressed and penguin-dominated processes cannot be
explained by naïve factorization, which indicates important nonfactorizable
contributions to nonleptonic decays Zhu:2018jet . As shown in Refs. Lu:2009cm ;
Chua:2018lfa ; Chua:2019yqh , the nonfactorizable contributions in bottom
baryon decays are considerable compared with the factorized ones. Since a
precise study of nonfactorizable contributions is beyond the scope of the
present work, we adopt the naïve factorization approximation.
In our calculation, the decay constants of the involved pseudoscalar and
vector mesons are taken as Cheng:2003sm ; Chua:2019yqh ; Li:2021kfb
$\begin{split}&f_{\pi}=130.2,\ f_{K}=155.6,\ f_{D}=211.9,\ f_{D_{s}}=249.0,\\\
&f_{\rho}=216,\ f_{K^{*}}=210,\ f_{D^{*}}=220,\ f_{D_{s}^{*}}=230,\end{split}$
in units of MeV.
On the other hand, the decay amplitudes of the
$\mathcal{B}_{b}\to\mathcal{B}_{c}P$ and $\mathcal{B}_{b}\to\mathcal{B}_{c}V$
processes can be parameterized as
$\mathcal{A}\big{[}\mathcal{B}_{b}\to\mathcal{B}_{c}P\big{]}=iq_{\mu}\bar{u}^{\mu}(C+D\gamma_{5})u,$
(4.32)
$\begin{split}\mathcal{A}\big{[}\mathcal{B}_{b}\to\mathcal{B}_{c}V\big{]}=&\epsilon^{\ast\mu}\bar{u}^{\nu}\big{[}g_{\nu\mu}(C_{1}+D_{1}\gamma_{5})+q_{\nu}\gamma_{\mu}(C_{2}+D_{2}\gamma_{5})\\\
&+q_{\nu}P_{\mu}(C_{3}+D_{3}\gamma_{5})\big{]}u,\end{split}$ (4.33)
respectively, with $P(V)$ denoting the pseudoscalar (vector) meson, where the
parity-violating and parity-conserving amplitudes are written as
$\begin{split}C=&\frac{G_{F}}{\sqrt{2}}V_{cb}V_{qq^{\prime}}a_{1}f_{P}\bigg{[}g_{1}^{V}(m_{P}^{2})+(M-M^{\prime})\frac{g_{2}^{V}(m_{P}^{2})}{M}\\\
&+\frac{1}{2}(M^{2}-M^{\prime
2}-m_{P}^{2})\big{(}\frac{g_{3}^{V}(m_{P}^{2})}{MM^{\prime}}+\frac{g_{4}^{V}(m_{P}^{2})}{M^{2}}\big{)}-m_{P}^{2}\frac{g_{3}^{V}(m_{P}^{2})}{MM^{\prime}}\bigg{]},\\\
D=&-\frac{G_{F}}{\sqrt{2}}V_{cb}V_{qq^{\prime}}a_{1}f_{P}\bigg{[}f_{1}^{A}(m_{P}^{2})-(M+M^{\prime})\frac{f_{2}^{A}(m_{P}^{2})}{M}\\\
&+\frac{1}{2}(M^{2}-M^{\prime
2}-m_{P}^{2})\big{(}\frac{f_{3}^{A}(m_{P}^{2})}{MM^{\prime}}+\frac{f_{4}^{A}(m_{P}^{2})}{M^{2}}\big{)}-m_{P}^{2}\frac{f_{3}^{A}(m_{P}^{2})}{MM^{\prime}}\bigg{]},\\\
\end{split}$ (4.34)
$\begin{split}C_{1}=&\frac{G_{F}}{\sqrt{2}}V_{cb}V_{qq^{\prime}}a_{1}f_{V}g_{1}^{V}(m_{V}^{2}),\\\
D_{1}=&-\frac{G_{F}}{\sqrt{2}}V_{cb}V_{qq^{\prime}}a_{1}f_{V}f_{1}^{A}(m_{V}^{2}),\\\
C_{2}=&\frac{G_{F}}{\sqrt{2}}V_{cb}V_{qq^{\prime}}a_{1}f_{V}\frac{g_{2}^{V}(m_{V}^{2})}{M},\\\
D_{2}=&-\frac{G_{F}}{\sqrt{2}}V_{cb}V_{qq^{\prime}}a_{1}f_{V}\frac{f_{2}^{A}(m_{V}^{2})}{M},\\\
C_{3}=&\frac{G_{F}}{\sqrt{2}}V_{cb}V_{qq^{\prime}}a_{1}f_{V}\bigg{(}\frac{g_{3}^{V}(m_{V}^{2})}{MM^{\prime}}+\frac{g_{4}^{V}(m_{V}^{2})}{M^{2}}\bigg{)},\\\
D_{3}=&-\frac{G_{F}}{\sqrt{2}}V_{cb}V_{qq^{\prime}}a_{1}f_{V}\bigg{(}\frac{f_{3}^{A}(m_{V}^{2})}{MM^{\prime}}+\frac{f_{4}^{A}(m_{V}^{2})}{M^{2}}\bigg{)}.\end{split}$
(4.35)
Here $m_{P}$ ($m_{V}$) is the mass of the emitted pseudoscalar (vector) meson, and
$a_{1}=c_{1}+c_{2}/N\approx 1.018$ Chua:2019yqh . Besides, the CKM matrix
elements are ParticleDataGroup:2022pth
$\begin{split}&V_{cb}=(40.8\pm 1.4)\times 10^{-3},\ V_{ud}=0.97373\pm
0.00031,\\\ &V_{us}=0.2243\pm 0.0008,\ V_{cd}=0.221\pm 0.004,\\\
&V_{cs}=0.975\pm 0.006.\end{split}$
Finally, the decay width and asymmetry parameter can be evaluated by
$\begin{split}\Gamma=&\frac{|\vec{p}_{c}|^{3}}{12\pi}\bigg{[}\frac{(M+M^{\prime})^{2}-m_{P}^{2}}{M^{\prime
2}}|C|^{2}+\frac{(M-M^{\prime})^{2}-m_{P}^{2}}{M^{\prime
2}}|D|^{2}\bigg{]},\\\
\alpha=&-\frac{2\kappa\text{Re}[C^{\ast}D]}{|C|^{2}+\kappa^{2}|D|^{2}}\end{split}$
(4.36)
with $\kappa=|\vec{p}_{c}|/(E^{\prime}+M^{\prime})$, and
$\begin{split}\Gamma=&\frac{|\vec{p}_{c}|}{32\pi
M^{2}}\sum_{\lambda_{V}}\Big{(}|h_{\lambda_{V}+1/2,\lambda_{V};1/2}^{\text{PV}}|^{2}+|h_{\lambda_{V}+1/2,\lambda_{V};1/2}^{\text{PC}}|^{2}\Big{)},\\\
\alpha=&\frac{\sum_{\lambda_{V}}2h_{\lambda_{V}+1/2,\lambda_{V};1/2}^{\text{PV}}h_{\lambda_{V}+1/2,\lambda_{V};1/2}^{\text{PC}}}{\sum_{\lambda_{V}}(|h_{\lambda_{V}+1/2,\lambda_{V};1/2}^{\text{PV}}|^{2}+|h_{\lambda_{V}+1/2,\lambda_{V};1/2}^{\text{PC}}|^{2})}\end{split}$
(4.37)
with
$\begin{split}h_{3/2,1;1/2}^{\text{PV(PC)}}=&\mp\sqrt{2s_{\pm}}C_{1}(D_{1}),\\\
h_{-1/2,-1;1/2}^{\text{PV(PC)}}=&\mp\sqrt{\frac{2s_{\pm}}{3}}\Big{[}C_{1}(D_{1})-\frac{s_{\mp}}{M^{\prime}}C_{2}(D_{2})\Big{]},\\\
h_{1/2,0;1/2}^{\text{PV(PC)}}=&\mp\frac{\sqrt{s_{\pm}}}{2\sqrt{3}M^{\prime}m_{V}}\Big{[}2(M^{2}-M^{\prime
2}-m_{V}^{2})C_{1}(D_{1})\\\ &\pm 2s_{\mp}(M\pm
M^{\prime})C_{2}(D_{2})+s_{+}s_{-}C_{3}(D_{3})\Big{]},\end{split}$ (4.38)
for the cases associated with the pseudoscalar- and vector-meson-emission
processes, respectively. Here $\vec{p}_{c}$ is the three-momentum of the
daughter baryon (or meson) in the rest frame of the parent baryon,
$M$ ($M^{\prime}$) is the mass of the parent (daughter) baryon, and $E^{\prime}$
denotes the energy of the daughter baryon.
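As a numerical illustration of Eq. (4.36), the sketch below evaluates $\Gamma$ and $\alpha$ for a pseudoscalar-emission mode. The masses correspond to the physical $\Lambda_{b}$, $\Lambda_{c}(2625)$, and $\pi$, but the amplitudes $C$ and $D$ are placeholder values in arbitrary units, not our fitted results:

```python
import math

def two_body_momentum(M, Mp, m):
    # |p_c|: daughter three-momentum in the parent rest frame (Kallen function)
    s = M * M
    lam = (s - (Mp + m) ** 2) * (s - (Mp - m) ** 2)
    return math.sqrt(lam) / (2 * M)

def width_and_alpha(M, Mp, mP, C, D):
    """Eq. (4.36): decay width and up-down asymmetry for B_b -> B_c P."""
    pc = two_body_momentum(M, Mp, mP)
    Ep = math.sqrt(pc ** 2 + Mp ** 2)          # daughter-baryon energy E'
    kappa = pc / (Ep + Mp)
    gamma = pc ** 3 / (12 * math.pi) * (
        ((M + Mp) ** 2 - mP ** 2) / Mp ** 2 * abs(C) ** 2
        + ((M - Mp) ** 2 - mP ** 2) / Mp ** 2 * abs(D) ** 2
    )
    # Re[C* D]; .conjugate()/.real also work for plain floats
    alpha = (-2 * kappa * (C.conjugate() * D).real
             / (abs(C) ** 2 + kappa ** 2 * abs(D) ** 2))
    return gamma, alpha

# Lambda_b -> Lambda_c(2625) pi^- with placeholder amplitudes C, D (GeV units)
gamma, alpha = width_and_alpha(M=5.620, Mp=2.628, mP=0.140, C=1.0, D=-0.8)
```

By construction $\alpha$ stays within $[-1,1]$, as required for an up-down asymmetry parameter.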
Table 6: The absolute branching ratios and up-down asymmetry parameters of the $\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)M^{-}$ decays with $M$ denoting a pseudoscalar or vector meson. We also compare the branching ratios (in the unit of $10^{-3}$) with those given by Ref. Chua:2019yqh in the fourth column. Mode | $\mathcal{B}\ (\times 10^{-3})$ | $\alpha$ | Ref. Chua:2019yqh
---|---|---|---
$\Lambda_{b}^{0}\to\Lambda_{c}(2625)^{+}\pi^{-}$ | $3.12\pm 0.15$ | $-0.99\pm 0.07$ | $2.40^{+4.09}_{-1.82}$
$\Lambda_{b}^{0}\to\Lambda_{c}(2625)^{+}\rho^{-}$ | $4.25\pm 0.23$ | $-0.88\pm 0.07$ | $4.38^{+6.78}_{-3.17}$
$\Lambda_{b}^{0}\to\Lambda_{c}(2625)^{+}K^{-}$ | $0.232\pm 0.012$ | $-0.99\pm 0.07$ | $0.17^{+0.30}_{-0.13}$
$\Lambda_{b}^{0}\to\Lambda_{c}(2625)^{+}K^{*-}$ | $0.212\pm 0.011$ | $-0.85\pm 0.07$ | $0.22^{+0.33}_{-0.16}$
$\Lambda_{b}^{0}\to\Lambda_{c}(2625)^{+}D^{-}$ | $0.266\pm 0.016$ | $-0.92\pm 0.07$ | $0.13^{+0.22}_{-0.10}$
$\Lambda_{b}^{0}\to\Lambda_{c}(2625)^{+}D^{*-}$ | $0.161\pm 0.007$ | $-0.45\pm 0.05$ | $0.13^{+0.17}_{-0.08}$
$\Lambda_{b}^{0}\to\Lambda_{c}(2625)^{+}D_{s}^{-}$ | $6.60\pm 0.40$ | $-0.90\pm 0.07$ | $2.88^{+4.92}_{-2.16}$
$\Lambda_{b}^{0}\to\Lambda_{c}(2625)^{+}D_{s}^{*-}$ | $3.15\pm 0.13$ | $-0.41\pm 0.04$ | $2.41^{+2.98}_{-1.52}$
Table 7: The absolute branching ratios and up-down asymmetry parameters of the $\Xi_{b}^{0,-}\to\Xi_{c}^{+,0}(2815)M^{-}$ decays with $M$ denoting a pseudoscalar or vector meson, where the branching ratios out of or in brackets correspond to the $\Xi_{b}^{0}\to\Xi_{c}(2815)^{+}$ and $\Xi_{b}^{-}\to\Xi_{c}(2815)^{0}$ transitions, respectively. We also compare the branching ratios (in the unit of $10^{-3}$) with those given by Ref. Chua:2019yqh in the fourth column. Mode | $\mathcal{B}\ (\times 10^{-3})$ | $\alpha$ | Ref. Chua:2019yqh
---|---|---|---
$\Xi_{b}^{0,-}\to\Xi_{c}(2815)^{+,0}\pi^{-}$ | $3.13\pm 0.17\ (3.33\pm 0.18)$ | $-0.99\pm 0.07$ | $3.32^{+6.08}_{-2.63}\ (3.53^{+6.46}_{-2.80})$
$\Xi_{b}^{0,-}\to\Xi_{c}(2815)^{+,0}\rho^{-}$ | $4.29\pm 0.25\ (4.55\pm 0.27)$ | $-0.88\pm 0.07$ | $6.10^{+9.95}_{-4.55}\ (6.49^{+10.58}_{-4.84})$
$\Xi_{b}^{0,-}\to\Xi_{c}(2815)^{+,0}K^{-}$ | $0.233\pm 0.012\ (0.248\pm 0.014)$ | $-0.99\pm 0.07$ | $0.24^{+0.44}_{-0.19}\ (0.26^{+0.47}_{-0.20})$
$\Xi_{b}^{0,-}\to\Xi_{c}(2815)^{+,0}K^{\ast-}$ | $0.214\pm 0.012\ (0.227\pm 0.013)$ | $-0.85\pm 0.07$ | $0.30^{+0.48}_{-0.22}\ (0.32^{+0.51}_{-0.24})$
$\Xi_{b}^{0,-}\to\Xi_{c}(2815)^{+,0}D^{-}$ | $0.275\pm 0.017\ (0.292\pm 0.019)$ | $-0.92\pm 0.07$ | $0.19^{+0.33}_{-0.14}\ (0.20^{+0.35}_{-0.15})$
$\Xi_{b}^{0,-}\to\Xi_{c}(2815)^{+,0}D^{\ast-}$ | $0.167\pm 0.008\ (0.177\pm 0.009)$ | $-0.45\pm 0.05$ | $0.19^{+0.24}_{-0.12}\ (0.20^{+0.26}_{-0.13})$
$\Xi_{b}^{0,-}\to\Xi_{c}(2815)^{+,0}D_{s}^{-}$ | $6.80\pm 0.40\ (7.30\pm 0.40)$ | $-0.90\pm 0.07$ | $4.34^{+7.54}_{-3.25}\ (4.65^{+8.08}_{-3.48})$
$\Xi_{b}^{0,-}\to\Xi_{c}(2815)^{+,0}D_{s}^{\ast-}$ | $3.27\pm 0.015\ (3.47\pm 0.017)$ | $-0.41\pm 0.04$ | $3.51^{+4.30}_{-2.18}\ (3.74^{+4.58}_{-2.32})$
By substituting our numerical results for the form factors and the decay
parameters into Eq. (4.36) and Eq. (4.37), the branching ratios and asymmetry
parameters can be obtained; they are collected in Tables 6-7 for the
$\Lambda_{b}^{0}\to\Lambda_{c}(2625)^{+}M^{-}$ and
$\Xi_{b}^{0,-}\to\Xi_{c}^{+,0}(2815)M^{-}$ decays, respectively, with the
emission of a pseudoscalar meson ($\pi^{-}$, $K^{-}$, $D^{-}$, and $D_{s}^{-}$) or a
vector meson ($\rho^{-}$, $K^{*-}$, $D^{*-}$, and $D_{s}^{*-}$). Our results
show that the processes emitting a $\pi^{-}$, $\rho^{-}$, or $D_{s}^{(*)-}$ meson have
considerable branching ratios, which may be explored in future
experiments such as LHCb. For the other processes, the branching ratios are
suppressed by an order of magnitude due to the smaller values of the relevant
CKM matrix elements.
In experiment, the LHCb Collaboration measured ParticleDataGroup:2022pth ;
LHCb:2011poy
$\begin{split}\mathcal{B}&(\Lambda_{b}\to\Lambda_{c}(2625)\pi^{-},\Lambda_{c}(2625)\to\Lambda_{c}\pi^{+}\pi^{-})\\\
&=(3.3\pm 1.3)\times 10^{-4}.\end{split}$
Based on the narrow-width approximation and
$\mathcal{B}(\Lambda_{c}(2625)\to\Lambda_{c}\pi^{+}\pi^{-})\approx 67\%$
ParticleDataGroup:2022pth , we have
$\mathcal{B}(\Lambda_{b}\to\Lambda_{c}(2625)\pi^{-})=(4.9\pm 1.9)\times
10^{-4}$, which is notably smaller than our result. This discrepancy should be
clarified by more precise measurements in the future.
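The narrow-width arithmetic above amounts to dividing the measured chain branching fraction by the sub-decay branching fraction; a minimal check:

```python
# Narrow-width approximation:
#   B(chain) = B(Lb -> Lc(2625) pi-) * B(Lc(2625) -> Lc pi+ pi-)
b_chain, db_chain = 3.3e-4, 1.3e-4   # measured chain branching fraction
b_sub = 0.67                          # B(Lc(2625) -> Lc pi+ pi-)

b_parent = b_chain / b_sub            # ~4.9e-4
db_parent = db_chain / b_sub          # ~1.9e-4 (uncertainty scales identically)
```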
In addition, we compare our results with those in Ref. Chua:2019yqh , as
shown in the fourth column of Table 6 and Table 7 for the
$\mathcal{B}(\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)M^{-})$ and
$\mathcal{B}(\Xi_{b}^{0,-}\to\Xi_{c}^{+,0}(2815)M^{-})$ decays, respectively.
Our results are consistent with those in Ref. Chua:2019yqh but have
smaller uncertainties, which benefits from our improved treatment of the baryon
wave functions. By hypothesizing the $\Lambda_{c}(2625)$ and $\Xi_{c}(2815)$ to be
dynamically generated resonances from the vector meson-baryon
interactions, the authors of Refs. Liang:2016ydj ; Pavao:2017cpt calculated
the $\Lambda_{b}\to\Lambda_{c}(2625)D_{s}^{-}$ and
$\Xi_{b}\to\Xi_{c}(2815)\pi^{-}$ channels. Their results show significantly
smaller widths than those from the present work and Ref. Chua:2019yqh ,
which are based on the $udc$-scheme for the $\Lambda_{c}(2625)$ and
$\Xi_{c}(2815)$ states. We therefore encourage the LHCb Collaboration to measure
the corresponding $\pi^{-}$ and $D_{s}^{-}$ channels, which would not only help
reveal the inner structures of the $\Lambda_{c}(2625)$ and $\Xi_{c}(2815)$
but also enrich the observed modes of $b$-decay.
## V Discussion and conclusion
With the high-luminosity upgrade of the Large Hadron Collider and the
accumulation of experimental data, the exploration of bottom baryon decays into
$P$-wave excited charmed baryons has become a highlight. In this work, we study
the form factors of the $\Lambda_{b}\to\Lambda_{c}(2625)$ and
$\Xi_{b}\to\Xi_{c}(2815)$ transitions, and further discuss the corresponding
semileptonic decays and color-allowed two-body nonleptonic decays.
As the first step, the weak transition form factors are obtained via the
three-body LFQM, where the key inputs, the spatial wave functions of the
concerned baryons, are extracted by solving the Schrödinger equation with the
support of the GEM Hiyama:2003cu ; Hiyama:2012sma ; Yoshida:2015tia and by
adopting a semirelativistic three-body potential model Capstick:1985xss ;
Li:2021qod ; Li:2021kfb ; Li:2022nim . By fitting the mass spectra of the
single-bottom and single-charm baryons, the parameters in the semirelativistic
potential model can be fixed. This treatment differs from taking a simple
harmonic oscillator wave function with a phenomenological parameter $\beta$;
we thus avoid the $\beta$ dependence of the results, and the present work is
supported by baryon spectroscopy. Additionally, the form factors calculated in
this work are comparable with the results from LQCD and consistent with the
expectations from HQET.
With the obtained form factors, we further evaluate the weak decays. For the
semileptonic processes, our result of
$\mathcal{B}(\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)\mu^{-}\nu_{\mu})=(1.641\pm
0.113)\%$ is consistent with current experimental data, and the branching
ratios of the electron and muon channels can reach the magnitude of $1\%$,
which is accessible at the LHCb experiment in the future. Besides, other
important physical observables, including the leptonic forward-backward
asymmetry ($A_{FB}$), the final-hadron polarization ($P_{B}$), and the lepton
polarization ($P_{\ell}$), are also investigated. As for the nonleptonic
processes, the $\pi^{-}$-, $\rho^{-}$-, and $D_{s}^{(*)-}$-emission channels
have considerable widths, and they deserve the attention of LHCb.
In this work, our study shows that the $\Lambda_{b}\to\Lambda_{c}(2625)$ and
$\Xi_{b}\to\Xi_{c}(2815)$ weak transitions have sizable branching ratios,
which should be accessible in experiment. In particular, we notice that
different theoretical groups have obtained different results for these
transitions using different theoretical frameworks and different structure
assignments for the $\Lambda_{c}(2625)$ and $\Xi_{c}(2815)$ Pervin:2005ve ;
Liang:2016ydj ; Pavao:2017cpt ; Gutsche:2018nks ; Chua:2019yqh ;
Nieves:2019kdh , which can be tested by future experimental measurements. At
present, only the $\Lambda_{b}^{0}\to\Lambda_{c}^{+}(2625)\mu^{-}\nu_{\mu}$
channel has been measured ParticleDataGroup:2022pth . Considering the
high-luminosity upgrade of the LHC, the LHCb experiment has both the motivation
and the potential to carry out the measurements of the weak transitions
discussed in this work. Taking this opportunity, we suggest that LHCb measure
these channels; such measurements would enrich the observed $b$-decay modes
and could be used to distinguish among the different structure assignments for
the $\Lambda_{c}(2625)$ and $\Xi_{c}(2815)$ states.
## ACKNOWLEDGMENTS
This work is supported by the China National Funds for Distinguished Young
Scientists under Grant No. 11825503, National Key Research and Development
Program of China under Contract No. 2020YFA0406400, the 111 Project under
Grant No. B20063, the National Natural Science Foundation of China under Grant
No. 12247101, the project for top-notch innovative talents of Gansu province,
and by the Fundamental Research Funds for the Central Universities under Grant
No. lzujbky-2022-it17.
## References
* (1) J. P. Lees et al. [BaBar], Evidence for an excess of $\bar{B}\to D^{(*)}\tau^{-}\bar{\nu}_{\tau}$ decays, Phys. Rev. Lett. 109 (2012), 101802.
* (2) J. P. Lees et al. [BaBar], Measurement of an Excess of $\bar{B}\to D^{(*)}\tau^{-}\bar{\nu}_{\tau}$ Decays and Implications for Charged Higgs Bosons, Phys. Rev. D 88 (2013) no.7, 072012.
* (3) M. Huschle et al. [Belle], Measurement of the branching ratio of $\bar{B}\to D^{(\ast)}\tau^{-}\bar{\nu}_{\tau}$ relative to $\bar{B}\to D^{(\ast)}\ell^{-}\bar{\nu}_{\ell}$ decays with hadronic tagging at Belle, Phys. Rev. D 92 (2015) no.7, 072014.
* (4) R. Aaij et al. [LHCb], Measurement of the ratio of branching fractions $\mathcal{B}(\bar{B}^{0}\to D^{*+}\tau^{-}\bar{\nu}_{\tau})/\mathcal{B}(\bar{B}^{0}\to D^{*+}\mu^{-}\bar{\nu}_{\mu})$, Phys. Rev. Lett. 115 (2015) no.11, 111803 [erratum: Phys. Rev. Lett. 115 (2015) no.15, 159901].
* (5) S. Hirose et al. [Belle], Measurement of the $\tau$ lepton polarization and $R(D^{*})$ in the decay $\bar{B}\to D^{*}\tau^{-}\bar{\nu}_{\tau}$, Phys. Rev. Lett. 118 (2017) no.21, 211801.
* (6) G. Caria et al. [Belle], Measurement of $\mathcal{R}(D)$ and $\mathcal{R}(D^{*})$ with a semileptonic tagging method, Phys. Rev. Lett. 124 (2020) no.16, 161803.
* (7) A. Bazavov et al. [Fermilab Lattice and MILC], Semileptonic form factors for $B\to D^{\ast}\ell\nu$ at nonzero recoil from 2 + 1-flavor lattice QCD, [arXiv:2105.14019 [hep-lat]].
* (8) R. Aaij et al. [LHCb], Observation of $J/\psi p$ Resonances Consistent with Pentaquark States in $\Lambda_{b}^{0}\to J/\psi K^{-}p$ Decays, Phys. Rev. Lett. 115 (2015), 072001.
* (9) R. Aaij et al. [LHCb], Observation of a narrow pentaquark state, $P_{c}(4312)^{+}$, and of two-peak structure of the $P_{c}(4450)^{+}$, Phys. Rev. Lett. 122 (2019) no.22, 222001.
* (10) R. Aaij et al. [LHCb], Evidence for a new structure in the $J/\psi p$ and $J/\psi\bar{p}$ systems in $B_{s}^{0}\to J/\psi p\bar{p}$ decays, Phys. Rev. Lett. 128 (2022) no.6, 062001.
* (11) R. Aaij et al. [LHCb], Evidence of a $J/\psi\Lambda$ structure and observation of excited $\Xi^{-}$ states in the $\Xi^{-}_{b}\to J/\psi\Lambda K^{-}$ decay, Sci. Bull. 66 (2021), 1278-1287.
* (12) S. A. Gottlieb and S. Tamhankar, A Lattice study of $\Lambda_{b}$ semileptonic decay, Nucl. Phys. B Proc. Suppl. 119 (2003), 644-646.
* (13) W. Detmold, C. Lehner and S. Meinel, $\Lambda_{b}\to p\ell^{-}\bar{\nu}_{\ell}$ and $\Lambda_{b}\to\Lambda_{c}\ell^{-}\bar{\nu}_{\ell}$ form factors from lattice QCD with relativistic heavy quarks, Phys. Rev. D 92 (2015) no.3, 034503.
* (14) M. Q. Huang, H. Y. Jin, J. G. Korner and C. Liu, Note on the slope parameter of the baryonic $\Lambda_{b}\to\Lambda_{c}$ Isgur-Wise function, Phys. Lett. B 629 (2005), 27-32.
* (15) Z. X. Zhao, R. H. Li, Y. L. Shen, Y. J. Shi and Y. S. Yang, The semi-leptonic form factors of $\Lambda_{b}\to\Lambda_{c}$ and $\Xi_{b}\to\Xi_{c}$ in QCD sum rules, Eur. Phys. J. C 80 (2020) no.12, 1181.
* (16) K. Azizi and J. Y. Süngü, Semileptonic $\Lambda_{b}\rightarrow\Lambda_{c}{\ell}\bar{\nu}_{\ell}$ Transition in Full QCD, Phys. Rev. D 97 (2018) no.7, 074007.
* (17) H. H. Duan, Y. L. Liu and M. Q. Huang, Light-cone sum rule analysis of semileptonic decays $\varLambda_{b}^{0}\rightarrow\varLambda_{c}^{+}\ell^{-}{\overline{\nu}}_{\ell}$, Eur. Phys. J. C 82 (2022) no.10, 951.
* (18) Y. Miao, H. Deng, K. S. Huang, J. Gao and Y. L. Shen, $\Lambda_{b}\to\Lambda_{c}$ form factors from QCD light-cone sum rules*, Chin. Phys. C 46 (2022) no.11, 113107.
* (19) Z. G. Wang, Analysis of the Isgur-Wise function of the $\Lambda_{b}\to\Lambda_{c}$ transition with light-cone QCD sum rules, [arXiv:0906.4206 [hep-ph]].
* (20) M. Pervin, W. Roberts and S. Capstick, Semileptonic decays of heavy lambda baryons in a quark model, Phys. Rev. C 72 (2005), 035201.
* (21) D. Ebert, R. N. Faustov and V. O. Galkin, Semileptonic decays of heavy baryons in the relativistic quark model, Phys. Rev. D 73 (2006), 094002.
* (22) H. W. Ke, X. Q. Li and Z. T. Wei, Diquarks and $\Lambda_{b}\to\Lambda_{c}$ weak decays, Phys. Rev. D 77 (2008), 014020.
* (23) R. N. Faustov and V. O. Galkin, Semileptonic decays of $\Lambda_{b}$ baryons in the relativistic quark model, Phys. Rev. D 94 (2016) no.7, 073008.
* (24) T. Gutsche, M. A. Ivanov, J. G. Körner, V. E. Lyubovitskij, P. Santorelli and N. Habyl, Semileptonic decay $\Lambda_{b}\to\Lambda_{c}+\tau^{-}+\bar{\nu_{\tau}}$ in the covariant confined quark model, Phys. Rev. D 91 (2015) no.7, 074001 [erratum: Phys. Rev. D 91 (2015) no.11, 119907].
* (25) S. Rahmani, H. Hassanabadi and J. Kříž, Nonleptonic and semileptonic ${\Lambda_{b}}\rightarrow{\Lambda_{c}}$ transitions in a potential quark model, Eur. Phys. J. C 80 (2020) no.7, 636.
* (26) C. K. Chua, Color-allowed bottom baryon to charmed baryon nonleptonic decays, Phys. Rev. D 99 (2019) no.1, 014023.
* (27) H. W. Ke, N. Hao and X. Q. Li, Revisiting $\Lambda_{b}\rightarrow\Lambda_{c}$ and $\Sigma_{b}\rightarrow\Sigma_{c}$ weak decays in the light-front quark model, Eur. Phys. J. C 79 (2019) no.6, 540.
* (28) C. K. Chua, Color-allowed bottom baryon to $s$-wave and $p$-wave charmed baryon nonleptonic decays, Phys. Rev. D 100 (2019) no.3, 034025.
* (29) T. Gutsche, M. A. Ivanov, J. G. Körner, V. E. Lyubovitskij, P. Santorelli and C. T. Tran, Analyzing lepton flavor universality in the decays $\Lambda_{b}\to\Lambda_{c}^{(\ast)}(\frac{1}{2}^{\pm},\frac{3}{2}^{-})+\ell\,\bar{\nu}_{\ell}$, Phys. Rev. D 98 (2018) no.5, 053003.
* (30) C. Q. Geng, C. W. Liu and T. H. Tsai, Nonleptonic two-body weak decays of $\Lambda_{b}$ in modified MIT bag model, Phys. Rev. D 102 (2020) no.3, 034033.
* (31) Y. S. Li, X. Liu and F. S. Yu, Revisiting semileptonic decays of $\Lambda_{b(c)}$ supported by baryon spectroscopy, Phys. Rev. D 104 (2021) no.1, 013005.
* (32) Y. S. Li and X. Liu, Restudy of the color-allowed two-body nonleptonic decays of bottom baryons $\Xi_{b}$ and $\Omega_{b}$ supported by hadron spectroscopy, Phys. Rev. D 105 (2022) no.1, 013003.
* (33) J. Nieves, R. Pavao and S. Sakai, $\Lambda_{b}$ decays into $\Lambda_{c}^{*}\ell\bar{\nu}_{\ell}$ and $\Lambda_{c}^{*}\pi^{-}$ $[\Lambda_{c}^{*}=\Lambda_{c}(2595)$ and $\Lambda_{c}(2625)]$ and heavy quark spin symmetry, Eur. Phys. J. C 79 (2019) no.5, 417.
* (34) S. Meinel and G. Rendon, $\Lambda_{b}\to\Lambda_{c}^{*}(2595,2625)\ell^{-}\bar{\nu}$form factors from lattice QCD, Phys. Rev. D 103 (2021) no.9, 094516.
* (35) S. Meinel and G. Rendon, $\Lambda_{c}\to\Lambda^{*}(1520)$ form factors from lattice QCD and improved analysis of the $\Lambda_{b}\to\Lambda^{*}(1520)$ and $\Lambda_{b}\to\Lambda_{c}^{*}(2595,2625)$ form factors, Phys. Rev. D 105 (2022) no.5, 054511.
* (36) W. H. Liang, M. Bayar and E. Oset, $\Lambda_{b}\to\pi^{-}(D_{s}^{-})\Lambda_{c}(2595),~{}\pi^{-}(D_{s}^{-})\Lambda_{c}(2625)$ decays and $DN,~{}D^{*}N$ molecular components, Eur. Phys. J. C 77 (2017) no.1, 39.
* (37) W. H. Liang, E. Oset and Z. S. Xie, Semileptonic $\Lambda_{b}\to\bar{\nu}_{l}l\Lambda_{c}(2595)$ and $\Lambda_{b}\to\bar{\nu}_{l}l\Lambda_{c}(2625)$ decays in the molecular picture of $\Lambda_{c}(2595)$ and $\Lambda_{c}(2625)$, Phys. Rev. D 95 (2017) no.1, 014015.
* (38) R. P. Pavao, W. H. Liang, J. Nieves and E. Oset, Predictions for $\Xi_{b}^{-}\rightarrow\pi^{-}\left(D_{s}^{-}\right)\ \Xi_{c}^{0}(2790)\left(\Xi_{c}^{0}(2815)\right)$ and $\Xi_{b}^{-}\rightarrow\bar{\nu}_{l}l\ \Xi_{c}^{0}(2790)\left(\Xi_{c}^{0}(2815)\right)$, Eur. Phys. J. C 77 (2017) no.4, 265.
* (39) R. Aaij et al. [LHCb], Study of the $D^{0}p$ amplitude in $\Lambda_{b}^{0}\to D^{0}p\pi^{-}$ decays, JHEP 05, 030 (2017).
* (40) J. Yelton et al. [Belle], Study of Excited $\Xi_{c}$ States Decaying into $\Xi_{c}^{0}$ and $\Xi_{c}^{+}$ Baryons, Phys. Rev. D 94, no.5, 052011 (2016).
* (41) B. Chen, X. Liu and A. Zhang, Newly observed $\Lambda_{c}(2860)^{+}$ at LHCb and its \emphD-wave partners $\Lambda_{c}(2880)^{+}$, $\Xi_{c}(3055)^{+}$ and $\Xi_{c}(3080)^{+}$, Phys. Rev. D 95, no.7, 074022 (2017).
* (42) B. Chen, K. W. Wei, X. Liu and T. Matsuki, Low-lying charmed and charmed-strange baryon states, Eur. Phys. J. C 77, no.3, 154 (2017).
* (43) S. Capstick and N. Isgur, Baryons in a Relativized Quark Model with Chromodynamics, AIP Conf. Proc. 132 (1985), 267-271.
* (44) H. X. Chen, W. Chen, Q. Mao, A. Hosaka, X. Liu and S. L. Zhu, P-wave charmed baryons from QCD sum rules, Phys. Rev. D 91 (2015) no.5, 054034.
* (45) J. J. Guo, P. Yang and A. Zhang, Strong decays of observed $\Lambda_{c}$ baryons in the ${}^{3}P_{0}$ model, Phys. Rev. D 100 (2019) no.1, 014001.
* (46) G. L. Yu, Z. Y. Li, Z. G. Wang, J. Lu and M. Yan, Systematic analysis of single heavy baryons $\Lambda_{Q}$, $\Sigma_{Q}$ and $\Omega_{Q}$, [arXiv:2206.08128 [hep-ph]].
* (47) Z. Y. Li, G. L. Yu, Z. G. Wang, J. Lu and J. Z. Gu, Systematic analysis of strange single heavy baryons, [arXiv:2207.04167 [hep-ph]].
* (48) R. Aaij et al. [LHCb], Measurement of the mass and production rate of $\Xi_{b}^{-}$ baryons, Phys. Rev. D 99 (2019) no.5, 052006.
* (49) W. Wang, F. S. Yu and Z. X. Zhao, Weak decays of doubly heavy baryons: the $1/2\rightarrow 1/2$ case, Eur. Phys. J. C 77 (2017) no.11, 781.
* (50) Z. X. Zhao, Weak decays of heavy baryons in the light-front approach, Chin. Phys. C 42 (2018) no.9, 093101.
* (51) Z. X. Zhao, Weak decays of doubly heavy baryons: the $1/2\rightarrow 3/2$ case, Eur. Phys. J. C 78 (2018) no.9, 756.
* (52) H. W. Ke, N. Hao and X. Q. Li, $\Sigma_{b}\to\Sigma_{c}^{*}$ weak decays in the light-front quark model with two schemes to deal with the polarization of diquark, J. Phys. G 46 (2019) no.11, 115003.
* (53) J. Zhu, Z. T. Wei and H. W. Ke, Semileptonic and nonleptonic weak decays of $\Lambda_{b}^{0}$, Phys. Rev. D 99 (2019) no.5, 054020.
* (54) W. Wang and Z. P. Xing, Weak decays of triply heavy baryons in light front approach, Phys. Lett. B 834 (2022), 137402.
* (55) Z. X. Zhao, Weak decays of triply heavy baryons: the $3/2\to 1/2$ case, [arXiv:2204.00759 [hep-ph]].
* (56) M. V. Terentev, On the Structure of Wave Functions of Mesons as Bound States of Relativistic Quarks, Sov. J. Nucl. Phys. 24 (1976), 106 ITEP-5-1976.
* (57) V. B. Berestetsky and M. V. Terentev, Nucleon Form-Factors and Dynamics of the Light Front, Sov. J. Nucl. Phys. 25 (1977), 347-354.
* (58) Q. Chang, L. T. Wang and X. N. Li, Form factors of $V^{\prime}\to V^{\prime\prime}$ transition within the light-front quark models, JHEP 12 (2019), 102.
* (59) H. W. Ke, F. Lu, X. H. Liu and X. Q. Li, Study on $\Xi_{cc}\to\Xi_{c}$ and $\Xi_{cc}\to\Xi^{\prime}_{c}$ weak decays in the light-front quark model, Eur. Phys. J. C 80 (2020) no.2, 140.
* (60) H. W. Ke, Q. Q. Kang, X. H. Liu and X. Q. Li, Weak decays of $\Xi_{c}^{(\prime)}\to\Xi$ in the light-front quark model, Chin. Phys. C 45 (2021) no.11, 113103.
* (61) H. Y. Cheng, C. K. Chua and C. W. Hwang, Light front approach for heavy pentaquark transitions, Phys. Rev. D 70 (2004), 034007.
* (62) H. Y. Cheng, C. Y. Cheung and C. W. Hwang, Mesonic form-factors and the Isgur-Wise function on the light front, Phys. Rev. D 55 (1997), 1559-1577.
* (63) C. Y. Cheung, W. M. Zhang and G. L. Lin, Light front heavy quark effective theory and heavy meson bound states, Phys. Rev. D 52 (1995), 2915-2925.
* (64) C. Q. Geng, C. C. Lih and W. M. Zhang, Radiative leptonic B decays in the light front model, Phys. Rev. D 57 (1998), 5697-5702.
* (65) C. Q. Geng, C. W. Liu, Z. Y. Wei and J. Zhang, Weak radiative decays of antitriplet bottomed baryons in light-front quark model, Phys. Rev. D 105 (2022) no.7, 7.
* (66) J. G. Korner, M. Kramer and D. Pirjol, Heavy baryons, Prog. Part. Nucl. Phys. 33 (1994), 787-868.
* (67) F. Hussain, J. G. Korner, J. Landgraf and S. Tawfiq, $SU(2N_{f})\otimes O(3)$ light diquark symmetry and current induced heavy baryon transition form-factors, Z. Phys. C 69 (1996), 655-662.
* (68) S. Tawfiq, P. J. O’Donnell and J. G. Korner, Charmed baryon strong coupling constants in a light front quark model, Phys. Rev. D 58 (1998), 054010.
* (69) R. Mertig, M. Bohm and A. Denner, FEYN CALC: Computer algebraic calculation of Feynman amplitudes, Comput. Phys. Commun. 64 (1991), 345-359.
* (70) V. Shtabovenko, R. Mertig and F. Orellana, New Developments in FeynCalc 9.0, Comput. Phys. Commun. 207 (2016), 432-444.
* (71) V. Shtabovenko, R. Mertig and F. Orellana, FeynCalc 9.3: New features and improvements, Comput. Phys. Commun. 256 (2020), 107478.
* (72) S. Godfrey and N. Isgur, Mesons in a Relativized Quark Model with Chromodynamics, Phys. Rev. D 32 (1985), 189-231.
* (73) Y. S. Li, S. P. Jin, J. Gao and X. Liu, The angular analysis of $\Lambda_{b}\to\Lambda(1520)(\to N\bar{K})\ell^{+}\ell^{-}$ decay, [arXiv:2210.04640 [hep-ph]].
* (74) E. Hiyama, Y. Kino and M. Kamimura, Gaussian expansion method for few-body systems, Prog. Part. Nucl. Phys. 51 (2003), 223-307.
* (75) E. Hiyama, Gaussian expansion method for few-body systems and its applications to atomic and nuclear physics, PTEP 2012 (2012), 01A204.
* (76) T. Yoshida, E. Hiyama, A. Hosaka, M. Oka and K. Sadato, Spectrum of heavy baryons in the quark model, Phys. Rev. D 92 (2015) no.11, 114029.
* (77) S. Q. Luo, L. S. Geng and X. Liu, Double-charm heptaquark states composed of two charmed mesons and one nucleon, Phys. Rev. D 106 (2022), 014017.
* (78) R. L. Workman et al. [Particle Data Group], Review of Particle Physics, PTEP 2022 (2022), 083C01.
* (79) L. Lellouch, Lattice constrained unitarity bounds for anti-$\bar{B}^{0}\to\pi^{+}\ell^{-}\bar{\nu}_{\ell}$ decays, Nucl. Phys. B 479 (1996), 353-391.
* (80) C. Bourrely and I. Caprini, Bounds on the slope and the curvature of the scalar $K\pi$ form-factor at zero momentum transfer, Nucl. Phys. B 722 (2005), 149-165.
* (81) C. Bourrely, I. Caprini and L. Lellouch, Model-independent description of $B\to\pi\ell\nu$ decays and a determination of $|V_{cb}|$, Phys. Rev. D 79 (2009), 013008 [erratum: Phys. Rev. D 82 (2010), 099902].
* (82) A. Bharucha, D. M. Straub and R. Zwicky, $B\to V\ell^{+}\ell^{-}$ in the Standard Model from light-cone sum rules, JHEP 08 (2016), 098.
* (83) K. S. Huang, W. Liu, Y. L. Shen and F. S. Yu, $\Lambda_{b}\to p,N^{\ast}(1535)$ Form Factors from QCD Light-Cone Sum Rules, [arXiv:2205.06095 [hep-ph]].
* (84) T. M. Aliev, S. Bilmis and M. Savci, Charmed baryon $\Omega_{c}^{0}\to\Omega^{-}\ell\nu_{\ell}$ and $\Omega_{c}^{0}\to\Omega^{-}\pi^{+}(\rho^{+})$ decays in light cone sum rules, Phys. Rev. D 106 (2022) no.7, 074022.
* (85) S. Godfrey, Spectroscopy of $B_{c}$ mesons in the relativized quark model, Phys. Rev. D 70 (2004), 054017.
* (86) T. Gutsche, M. A. Ivanov, J. G. Körner, V. E. Lyubovitskij, V. V. Lyubushkin and P. Santorelli, Theoretical description of the decays $\Lambda_{b}\to\Lambda^{(\ast)}(\frac{1}{2}^{\pm},\frac{3}{2}^{\pm})+J/\psi$, Phys. Rev. D 96 (2017) no.1, 013003.
* (87) C. D. Lu, Y. M. Wang, H. Zou, A. Ali and G. Kramer, Anatomy of the pQCD Approach to the Baryonic Decays $\Lambda_{b}\to p\pi,~{}pK$, Phys. Rev. D 80 (2009), 034011.
* (88) H. Y. Cheng, C. K. Chua and C. W. Hwang, Covariant light front approach for s wave and p wave mesons: Its application to decay constants and form-factors, Phys. Rev. D 69 (2004), 074025.
* (89) R. Aaij et al. [LHCb], Measurements of the Branching fractions for $B_{(s)}\to D_{(s)}\pi\pi\pi$ and $\Lambda_{b}^{0}\to\Lambda_{c}^{+}\pi\pi\pi$, Phys. Rev. D 84 (2011), 092001 [erratum: Phys. Rev. D 85 (2012), 039904].
# Learning to Select from Multiple Options
Jiangshu Du1, Wenpeng Yin2, Congying Xia3, Philip S. Yu1
###### Abstract
Many NLP tasks can be regarded as a selection problem from a set of options,
e.g., classification tasks, multi-choice QA, etc. Textual entailment (TE) has
been shown as the state-of-the-art (SOTA) approach to dealing with those
selection problems. TE treats input texts as premises (P), options as
hypotheses (H), then handles the selection problem by modeling (P, H)
pairwise. This has two limitations: (i) the pairwise modeling is unaware of
other options, which is less intuitive since humans often determine the best
option by comparing competing candidates; (ii) the inference process of
pairwise TE is time-consuming, especially when the option space is large. To
deal with these two issues, this work first proposes a contextualized TE model
(Context-TE) that appends $k$ other options as the context of the current
(P, H) pair. Context-TE is able to learn more reliable decisions for H since
it considers various contexts. Second, we speed up Context-TE by proposing
Parallel-TE, which learns the decisions on multiple options simultaneously.
Parallel-TE significantly improves the inference speed while keeping
performance comparable with Context-TE. Our methods are evaluated on three
tasks (ultra-fine entity typing, intent detection and multi-choice QA) that
are typical selection problems with different sizes of option sets. Experiments
show our models set new SOTA performance; in particular, Parallel-TE is faster
than pairwise TE by $k$ times in inference. Our code is publicly available
at https://github.com/jiangshdd/LearningToSelect.
## Introduction
NLP consists of various tasks, many of which are essentially a selection
problem from a set of options. For instance, text classification selects the
correct labels for the input text from all label candidates; given a paragraph
and multiple answer candidates, multi-choice question answering (QA) selects
the correct answer among all the choices. Textual entailment (TE) has been
widely applied as the SOTA technique for solving these selection problems,
e.g., intent detection (Xia et al. 2021), ultra-fine entity typing (Li, Yin,
and Chen 2022), coreference resolution (Yin et al. 2020), relation extraction
(Xia et al. 2021; Sainz et al. 2021), event argument extraction (Sainz et al.
2022), etc.
For those selection tasks, traditional classifiers treat all labels as
indices, ignoring their semantics, which, however, is the core supervision in
zero- and few-shot learning. In contrast, TE preserves and exploits the
original label information by constructing premise-hypothesis pairs, where
premises (P) are texts and hypotheses (H) are transformed from labels. Then,
TE selects an option by predicting whether an option-oriented H can be
entailed by the input P. Beyond solving various selection problems with a
unified textual entailment framework, another benefit is that the availability
of large-scale entailment datasets, such as MNLI (Williams, Nangia, and Bowman
2018) and DocNLI (Yin, Radev, and Xiong 2021), can provide rich indirect
supervision for the target problems when task-specific supervision is
limited. However, there are two main limitations of the standard TE methods.
First, TE pairs one input text only with a single option. Thus, all options
are treated independently when the model makes decisions. In contrast, humans
usually perform the selection in a more intuitive way: comparing all options
and selecting the best one. Second, at the inference phase, the TE model needs to
compare a text with all possible options one by one, which is inefficient
especially when the option space is large. Take ultra-fine entity typing
(Choi et al. 2018) as an example: its 10k types take the TE model 35 seconds
per test instance and about 19.4 hours to infer the entire test set (Li, Yin,
and Chen 2022) (experiments run on an NVIDIA TITAN RTX).
To overcome the limitations of standard TE, we introduce two novel approaches
for option inference. First, inspired by the intuition that a model can be
more powerful if it can find the correct answers even under the disturbance
from other options, we propose a contextualized TE model (Context-TE). It
appends other $k$ options as an extra context of the standard (P, H) pairs
during training. In this way, the model makes the decision not only depending
on the current option H but also considering a more informative context. Thus,
the prediction on H is more reliable, but Context-TE is as slow as the
standard TE in terms of inference. Second, we improve the efficiency of
Context-TE by introducing a parallel TE method (Parallel-TE). Parallel-TE
learns an option’s representation based on other options and makes the
decisions of $k$ options simultaneously. Therefore, the inference time of
Parallel-TE is faster than Context-TE by $k$ times.
We evaluate our proposed models on three tasks: ultra-fine entity typing
(UFET; Choi et al. 2018), few-shot intent detection (BANKING77; Casanueva et
al. 2020), and multiple-choice QA (MCTest; Richardson, Burges, and Renshaw 2013).
The three tasks are representative selection problems: long texts and small-
size options (MCTest), medium-size options (77 intents in BANKING77), and
large-size options with multi-label selection (UFET has over 10,000 types and
multiple can be correct for a given entity mention). Our proposed Context-TE
sets the new state-of-the-art performance on all three tasks, and the
Parallel-TE sacrifices a little performance (except on UFET) while
showing clear efficiency gains.
Our contributions can be summarized as the following three points. First, we
discuss the limitations of the standard TE method in dealing with selection
problems and propose two novel learning to select models—Context-TE and
Parallel-TE—to overcome those issues. Second, our experiments show that both
Context-TE and Parallel-TE outperform the standard TE, and set the new state-
of-the-art performance on multiple benchmarks. Third, we provide a deep
analysis to better understand why the new models work.
## Related Work
Our work is mainly related to textual entailment and how it is applied to
solve other NLP tasks.
#### Textual entailment.
Dagan, Glickman, and Magnini (2006) first introduced the concept of TE and
released a challenging benchmark, _Recognizing Textual Entailment_ (RTE), for
it. The benchmark attracted many follow-up studies; early work on TE mainly
focused on lexical- and syntactic-level features (Androutsopoulos and
Malakasiotis 2010; Rei and Briscoe 2011). In recent years, many large-scale TE
datasets have been released, such as SNLI (Bowman et al. 2015), MNLI (Williams,
Nangia, and Bowman 2018), SciTail (Khot, Sabharwal, and Clark 2018), ANLI (Nie
et al. 2020) etc., which greatly advance the study of sentence-level TE. Yin,
Radev, and Xiong (2021) also introduced a document-level dataset named DocNLI,
which is constructed by reformatting and aggregating some NLP tasks. The
recent TE systems mainly rely on pairwise modeling over the premise and
hypothesis using attentive recurrent neural networks (Rocktäschel et al. 2016;
Wang and Jiang 2016; Wang, Hamza, and Florian 2017), attentive convolutional
neural networks (dos Santos et al. 2016; Yin and Schütze 2018), and pretrained
transformers (Devlin et al. 2019; Liu et al. 2019). Our work is related to TE
but more interested in applying TE to solve downstream selection problems.
#### Textual entailment solves other NLP tasks.
Since many NLP tasks can be converted into a TE problem, TE naturally can
provide indirect supervision for solving those tasks, especially when the
task-specific supervision is limited. Yin, Hay, and Roth (2019) presented the
first work that used TE supervision to solve zero-shot text classification.
The idea was then applied to handle few-shot intent identification (Zhang et
al. 2020; Xia et al. 2021), ultra-fine entity typing (Li, Yin, and Chen 2022),
coreference resolution (Yin et al. 2020), relation extraction (Xia et al.
2021; Sainz et al. 2021), event argument extraction (Sainz et al. 2022), zero-
shot machine comprehension (Yin, Radev, and Xiong 2021), etc. Wang et al.
(2021) directly claimed that TE is a few-shot learner for a wide range of NLP
tasks.
All the literature discussed above follows the standard TE method to solve
selection problems, i.e., inferring the options one by one. This method
suffers from two limitations, as stated in the Introduction. In this work, we
try to enhance the representation learning of TE and improve its inference
speed in dealing with large option spaces.
## Methods
Figure 1: The comparison among TE, Context-TE and Parallel-TE in terms of
their inputs and the representation ($h$) learning for hypotheses $H$.
This section presents our approaches to strengthening the options’
representation learning (Section “Context-TE”) and speeding up the selection
process among large size of options (Section “Parallel-TE”).
### Problem Definition
We formulate the selection problem as follows: given a text $t$ and an option
space $\mathbb{O}$ consisting of $n$ options $\\{o_{1},o_{2},\cdots,o_{n}\\}$,
select the best option (single-label) or multiple options (multi-label) that
match $t$ under the definition of the task. Standard TE methods treat $t$ as
the premise (P) and an option as a hypothesis (H). For each $t$, TE considers
each option $o_{i}$ without comparing it with other competing options. When
$n$ is large, each $t$ requires the pairwise modeling ($t$, $o_{i}$) a total of
$n$ times at inference. In Section “Context-TE”, we first present our method
Context-TE to encode the interactions among options when modeling a particular
option.
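The standard TE selection procedure described above can be sketched as a minimal illustration. Here `score_fn` is a stand-in for a trained entailment model (not part of the paper's code); the point is that every input text costs $n$ model calls:

```python
from typing import Callable, List


def te_select(text: str, options: List[str],
              score_fn: Callable[[str, str], float],
              multi_label: bool = False, tau: float = 0.5):
    # Standard TE: one forward pass per (premise, option) pair, so
    # each input text costs n model calls at inference time.
    scores = [score_fn(text, opt) for opt in options]
    if multi_label:
        # Multi-label tasks: keep every option scoring above threshold tau.
        return [o for o, s in zip(options, scores) if s > tau]
    # Single-label tasks: return the highest-scoring option.
    return options[max(range(len(options)), key=scores.__getitem__)]
```

With a toy word-overlap scorer in place of the entailment model, `te_select` picks "card arrival" for the utterance "my card has not arrived".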
### Context-TE
Context-TE is a contextualized representation learning paradigm for TE. It
adds other options as the context when modeling the pair (P, H).
#### Contextualizing TE pairs.
Context-TE appends $k$ options from $\mathbb{O}$ in
a random order after the original (P, H) as a competing context
($c_{1},c_{2},\cdots,c_{k}$), resulting in a contextualized pair: (P, H,
$c_{1}$, $c_{2}$, $\cdots$, $c_{k}$). Note that the options resulting in
$c_{i}$ ($i=1,\cdots,k$) should not be the same as the option generating H.
The competing context introduces the interactions between H and other $k$
candidates in the option space in a single training instance so the model can
better distinguish similar options. The gold entailment label (i.e.,
entailment/contradict/neutral) of the contextualized pair (P, H, $c_{1}$,
$c_{2}$, $\cdots$, $c_{k}$) is set the same as (P, H). For instance, in an
intent detection task, given an utterance text “What’s go on, where is my new
card?” and its gold intent “card arrival”, a standard TE pair (“What’s go on,
where is my new card?”, “card arrival”) can be contextualized as (“What’s go
on, where is my new card?”, “card arrival”, “lost or stolen card”, “atm
support”) if $k=2$, and the relation between the option “card arrival” and the
utterance “What’s go on, where is my new card?” should not change despite the
existence of competing context. The rationale of Context-TE is that if P can
entail H in various contexts, H is more likely to be true given P.
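The pair-contextualization step above can be sketched as follows. The function name and interface are illustrative, not the authors' code; the key constraints from the text are that the $k$ context options exclude the hypothesis itself, appear in random order, and leave the gold label of (P, H) unchanged:

```python
import random


def contextualize(premise: str, hypothesis: str,
                  option_space: list, k: int,
                  rng=random) -> tuple:
    # Sample k competing options, excluding the hypothesis itself,
    # and append them in random order after the original (P, H) pair.
    distractors = [o for o in option_space if o != hypothesis]
    context = rng.sample(distractors, k)
    # The gold entailment label of (P, H, c_1, ..., c_k) stays the
    # same as that of the plain (P, H) pair.
    return (premise, hypothesis, *context)
```

For example, with $k=2$ this turns ("Where is my new card?", "card arrival") into a 4-tuple whose last two entries are distractor intents.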
#### Training.
As Figure 1 (middle) illustrates, we feed contextualized pairs into
the encoder RoBERTa (Liu et al. 2019). The representation corresponding to the
CLS token is used to denote the H in the context of P as well as the competing
context ($c_{1},c_{2},\cdots,c_{k}$). The remaining classification
architecture and training process are the same as in standard TE, which works on
(P, H).
#### Inference.
The inference of Context-TE is the same as that of standard TE. First,
we convert the test set of the target task into (P, H) pairs by putting $t$
with each option $o_{i}$ together. Note that no competing context is needed
for inference. Next, all the pairs are fed into the trained model and an
entailment score for each pair will be given. For a single-label task, the
option with the highest score will be returned. For a multi-label task, all
the options with scores higher than a threshold $\tau$ are returned. Like
standard TE, Context-TE also needs to model $x\times n$ (P, H) pairs if there
are $x$ test inputs and $n$ options. This time-consuming process motivates our
second method Parallel-TE.
### Parallel-TE
Parallel-TE also works on the contextualized pairs (P, H, $c_{1}$, $c_{2}$,
$\cdots$, $c_{k}$) except that all options, including H and $c_{i}$
($i=1,\cdots,k$), are treated equally and optimized to learn their respective
labels simultaneously.
#### Training.
Different from Context-TE, all the options in the pair (P, H1,
$\cdots$, Hk) will keep their original labels ($y_{i}$ for Hi) for joint
training. Then the task is transformed to: given a sequence consisting of a
premise and $k$ hypotheses, select the correct hypotheses.
The input format of Parallel-TE is the same as Context-TE but they have
different representation learning processes. As shown in Figure 1 (bottom), we
first feed (P, H1, $\cdots$, Hk) into RoBERTa and obtain a series of token-
level representations on the top layer. Then for each Hi
($i=\\{1,\cdots,k\\}$), we average the representation vectors of all its
tokens element-wise as its representation $h_{i}$:
$h_{i}=\frac{1}{T}\sum_{j=1}^{T}\mathbf{RoBERTa}(\texttt{H}_{i}^{j}),$ (1)
where Hi has $T$ tokens, and H${}_{i}^{j}$ is the $j$-th one.
Next, each Hi will learn a score $s_{i}$ ranging from 0 to 1, indicating how
likely Hi is entailed by P, by feeding $h_{i}$ into a MLP:
$s_{i}=\mathrm{sigmoid}(\mathrm{MLP}(h_{i}))$ (2)
The loss $l$ of Parallel-TE is defined as the binary cross entropy (BCE) over
all Hi ($i=1,\cdots,k$):
$l=-\frac{1}{k}\sum_{i=1}^{k}\left[y_{i}\cdot\mathrm{log}\,s_{i}+(1-y_{i})\cdot\mathrm{log}(1-s_{i})\right]$
(3)
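Equations (1)–(3) can be sketched numerically as follows. This is an illustration under stated simplifications, not the authors' implementation: random matrices stand in for RoBERTa's top-layer token vectors, and the MLP head is reduced to a single linear layer (`W`, `b`):

```python
import numpy as np


def parallel_te_scores(token_reps, spans, W, b):
    """token_reps: (seq_len, d) top-layer vectors (stand-ins for RoBERTa);
    spans: (start, end) token ranges, one per hypothesis H_i;
    W, b: parameters of an illustrative single-layer MLP head."""
    scores = []
    for start, end in spans:
        h_i = token_reps[start:end].mean(axis=0)     # Eq. (1): mean pooling
        s_i = 1.0 / (1.0 + np.exp(-(W @ h_i + b)))   # Eq. (2): sigmoid(MLP)
        scores.append(float(s_i))
    return scores


def bce_loss(scores, labels):
    # Eq. (3): binary cross entropy averaged over the k hypotheses.
    s, y = np.asarray(scores), np.asarray(labels)
    return float(-np.mean(y * np.log(s) + (1 - y) * np.log(1 - s)))
```

One forward pass yields a score per hypothesis, so all $k$ options are trained (and later inferred) jointly instead of in $k$ separate passes.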
#### Inference.
For each hypothesis Hi in the input, Parallel-TE gives an
entailment score. Then we can select the option with the highest score for single-
label tasks, or the options scored higher than a certain threshold for multi-
label tasks. Because Parallel-TE models $k$ hypotheses simultaneously for a
single P, its inference is $k$ times faster than the standard TE and Context-
TE. More analyses are shown in Section “Inference speed of Parallel-TE”.
#### Context-TE vs. Parallel-TE.
They have the same input structures as illustrated in Figure 1. Both of them
learn to select the correct option upon comparing with other options. They
mainly differ in two aspects: i) Context-TE only models one hypothesis in each
contextualized pair while Parallel-TE treats all the options in a pair as
hypotheses and infers all of them at the same time, which brings a huge boost
in inference efficiency; ii) Parallel-TE adopts the BCE loss to deal
with multi-label selection tasks. In contrast, Context-TE uses the cross
entropy loss as each pair only needs to be classified as entailment or non-
entailment.
For real-world problems with many options, such as ultra-fine entity
typing and open-world intent detection, it is not feasible to embed all the
options into a single pair due to the input length limit of the encoder.
Therefore, we first retrieve top-$k$ options to reduce the option space for
those tasks. Details are discussed in Section “Experiments”.
## Experiments
Our experiments are conducted on three different tasks: _ultra-fine entity
typing_ , _few-shot intent detection_ and _multiple-choice QA_. We choose the
three tasks since they represent different selection problems in NLP. Ultra-
fine entity typing is a multi-label task with a large option space: over
10,000 entity types. The few-shot intent detection task evaluates our proposed
models in a few-shot selection scenario. Multiple-choice QA is a selection
problem that requires the model to understand long paragraphs.
#### Top-$k$ options generation.
For the tasks with a large option space, it is infeasible to encode all
options in the same input. We first find the top-$k$ options for each text
with an efficient method before the Context-TE and Parallel-TE steps, hoping
that the top-$k$ options are the most promising ones for the input text. The
kept $k$ options should have a high recall, as we do not want test inputs to
lose their gold options at this stage.
Concretely, for the ultra-fine entity typing task, we fine-tuned a bi-BERT
(Devlin et al. 2019) on the training data with one BERT encoding an entity
mention and another encoding a type candidate. For the intent detection task,
we directly use the Sentence-BERT (Reimers and Gurevych 2019) to match an
utterance and an intent candidate because utterances and their gold intents
usually have high sentence similarities. The top-$k$ model selection is based
on the recalls on tasks’ $dev$ set. This step does not apply to the multi-
choice QA task since there are only four answer candidates for each question.
The reason we choose bi-BERT or pretrained Sentence-BERT is that these
representation learners decouple the P and H and use separate encoders to
model them; as a result, they just need to generate representations for all
options once and our model can reuse the same options’ representations to
compare with all inputs. After this step, each text $t$ has $k$ options that
are most likely to be true. The top-$k$ recall and the analysis about the
influence of different $k$ values are reported in Section “Influence of $k$
values”.
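The retrieval argument above — embed every option once with a decoupled encoder, then rank options for each incoming text — can be sketched as follows. The function name is hypothetical and a plain cosine similarity stands in for the bi-BERT / Sentence-BERT scoring:

```python
import numpy as np


def top_k_options(text_vec, option_vecs, options, k):
    # Decoupled encoders let us embed all options once up front; each
    # new text then needs only one encoding plus a cheap cosine-
    # similarity ranking over the cached option matrix.
    t = text_vec / np.linalg.norm(text_vec)
    O = option_vecs / np.linalg.norm(option_vecs, axis=1, keepdims=True)
    sims = O @ t                     # cosine similarity to every option
    idx = np.argsort(-sims)[:k]      # indices of the k best options
    return [options[i] for i in idx]
```

Because the option matrix is reused across all inputs, retrieval cost per text is one encoder call plus a matrix-vector product, rather than $n$ cross-encoder passes.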
#### Model configurations.
We use the public pretrained RoBERTa-large-mnli model as our backbone for the
entity typing and intent detection tasks, and the pretrained RoBERTa-large on
DocNLI (Yin, Radev, and Xiong 2021) for the QA task for a fair comparison. The
hyperparameters, threshold $\tau$, and $k$ are searched on the $dev$ set for
each task.
### Ultra-Fine Entity Typing
#### Dataset.
In this task, we use the ultra-fine entity typing (UFET) benchmark (Choi et
al. 2018). It has 5,994 human-annotated examples and 10,331 labels; each
entity mention may have multiple types that are correct. The annotated
examples are equally split into $train$, $dev$ and $test$. In addition, UFET
provides distant supervision data as an extra training resource, but we only
train our models on the human-annotated dataset. The official evaluation
metric is F1.
#### Baselines.
The following prior systems are compared:
* •
LDET (Onoe and Durrett 2019) trains a learner to clean the distant supervision
data. The learner discards the unusable data and fixes the noisy data. The
denoised distant supervision data then is added to train a bi-LSTM (Hochreiter
and Schmidhuber 1997) incorporated with pretrained ELMo (Peters et al. 2018)
representations.
* •
Box (Onoe et al. 2021) leverages BERT to project entity mentions and types as
box embeddings, which can better capture the hierarchical relationships
between fine-grained entity types. Predictions are finalized according to the
embedding intersection in the box embedding space.
* •
LRN. Liu et al. (2021) proposed a label reasoning network, leveraging auto-
regressive networks and bipartite attribute graph to recognize the extrinsic
and intrinsic dependencies among entity types. By exploiting the dependency
knowledge and reasoning, more correct labels can be discovered at the
inference stage.
* •
MLMET (Dai, Song, and Wang 2021) leverages the inner knowledge retained by a
pretrained BERT model. The authors first used a BERT-based Masked Language
Model (MLM) to generate extra distant supervision data and then trained a BERT
model on three resources: the human-annotated, distant supervision, and MLM-
generated data.
* •
LITE (Li, Yin, and Chen 2022) is the previous SOTA model. It converts the
entity typing problem into a TE task and uses a RoBERTa-large model pretrained
on MNLI (Williams, Nangia, and Bowman 2018) as the backbone. LITE treats
entity-mentioning sentences as premises. Type options are first transformed to
statements based on the template “[ENTITY] is a [LABEL]” and then are treated
as hypotheses. It also adds label dependency in TE. A margin ranking loss is
used to rank the entailment pairs over the non-entailment ones. _All TE-
related models in this work, including Context-TE and Parallel-TE, use the
same template as the baseline “LITE”._
Model | P | R | F1
---|---|---|---
LDET (Onoe and Durrett 2019) | 51.5 | 33.0 | 40.1
Box (Onoe et al. 2021) | 52.8 | 38.8 | 44.8
LRN (Liu et al. 2021) | 54.5 | 38.9 | 45.4
MLMET (Dai, Song, and Wang 2021) | 53.6 | 45.3 | 49.1
LITE (Li, Yin, and Chen 2022) | 52.4 | 48.9 | 50.6
Context-TE | 53.7 | 49.4 | 51.5
Parallel-TE | 54.0 | 51.0 | 52.4
Table 1: Results on the UFET task.
#### Results.
Table 1 shows the performance of our models and baselines. The $k$
value is selected on the $dev$ set and set to 80 for both Context-TE and
Parallel-TE.
First, we notice that both our approaches, Context-TE and Parallel-TE, achieve
new SOTA performance on this task, with Parallel-TE performing best (52.4 F1).
This is particularly impressive given that those baselines make use of
either extra weak supervision data (e.g., LDET, MLMET) or the hierarchical
relationships among types (e.g., Box, LRN, LITE). Second, even compared with
LITE, the prior SOTA system that also adopts the TE framework, Context-TE
learns better representations for hypotheses since it encodes the
competing context.
### Intent Detection
#### Dataset.
We use the BANKING77 dataset (Casanueva et al. 2020) for the intent detection
task. It is in the domain of online banking queries and consists of 77
intents. BANKING77 contains 10,003 training and 3,080 testing examples. We
explore the few-shot learning ability of our models in this task. From the
training data, we randomly sample 5-shot and 10-shot instances per intent as
our $train$ respectively. We also sample a small portion of the training
dataset as our $dev$, following the previous setting (Zhang et al. 2021;
Mehri, Eric, and Hakkani-Tür 2020). All experiments are run three times with
distinctly sampled $train$ and we report the average performance, evaluated by
accuracy.
#### Baselines.
We compare with the following baselines:
* •
DualEncoder (Casanueva et al. 2020) is a dual sentence encoder model combined
with USE (Cer et al. 2018) and ConveRT (Henderson et al. 2020). Both USE and
ConveRT are strong sentence encoders pretrained on different conversational
response tasks. The combination of both yields better performance than each of
them.
* •
ConvBERT+ (Mehri and Eric 2021) is based on ConvBERT (Mehri, Eric, and
Hakkani-Tür 2020), a BERT model pretrained on a large dialogue corpus. By
combining with example-driven training, task-adaptive training and observers,
the model obtains a strong performance.
* •
DNNC (Zhang et al. 2020) is a discriminative nearest-neighbor model pretrained
on three different TE datasets: SNLI (Bowman et al. 2015), MNLI (Williams,
Nangia, and Bowman 2018), and WNLI (Levesque, Davis, and Morgenstern 2011).
* •
CPFT (Zhang et al. 2021) adopts RoBERTa as the backbone and first conducts
self-supervised contrastive pretraining on six intent detection datasets. The
model is then fine-tuned on few-shot data with supervised contrastive
learning. _For a fair comparison, we report their performance without the
contrastive pretraining on extra data._
* •
TE (Xia et al. 2021) converts intent detection to traditional TE by treating
utterances and intent options as premises and hypotheses, respectively.
Model | 5-shot | 10-shot
---|---|---
CPFT (Zhang et al. 2021) | 76.75 | 84.83
DualEnc. (Casanueva et al. 2020) | 77.75 | 85.19
ConvB. (Mehri and Eric 2021) | - | 85.95
DNNC (Zhang et al. 2020) | 80.40 | 86.71
TE (Xia et al. 2021) | 78.21 | 82.51
Context-TE | 80.76 | 85.53
Parallel-TE | 78.69 | 83.29
Table 2: Few-shot performance on BANKING77.
#### Model details.
Since the intent detection task is under the few-shot setting, data
augmentation can be helpful during the training stage. Context-TE is naturally
suitable for augmenting data because it expands the training data by
introducing the extra (P, H) pairs equipped with competing context. For
Parallel-TE, we perform data augmentation as follows: for each pair with $k$
hypotheses, we shuffle it $k$ times, making sure the positive option is at a
different position in each new sequence. This lets the model learn an
option’s gold label wherever the option is located in the input sequence.
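One reading of this shuffling scheme can be sketched as below. The helper name is illustrative, and it assumes a single positive option per instance; the invariant is that the positive option visits a different position in each of the $k$ copies, so the model cannot latch onto positional cues:

```python
import random


def augment_by_shuffling(premise, hypotheses, labels, rng=None):
    """Build k shuffled copies of one (P, H_1..H_k) instance, placing
    the (assumed single) positive option at a different position each
    time. Hypothetical helper, not the authors' code."""
    rng = rng or random.Random()
    k = len(hypotheses)
    pos = labels.index(1)                 # index of the positive option
    augmented = []
    for target in range(k):
        order = [i for i in range(k) if i != pos]
        rng.shuffle(order)                # randomize the negatives
        order.insert(target, pos)         # positive lands at `target`
        augmented.append((premise,
                          [hypotheses[i] for i in order],
                          [labels[i] for i in order]))
    return augmented
```

Each training instance thus expands into $k$ instances whose label sequences are permuted consistently with the hypotheses.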
In this task, Context-TE infers at the full label space $\mathbb{O}$
($|\mathbb{O}|=77$). Due to the max sequence length limitation of RoBERTa,
Parallel-TE infers only the top-$k$ label options where $k$ ranges from 10 to
60, selected on $dev$.
#### Results.
Test accuracy under 5-shot and 10-shot settings on BANKING77 benchmark is
reported in Table 2. Context-TE outperforms all the models under the 5-shot
setting and obtains competitive performance with the prior SOTA on 10-shot
(85.53 vs. 86.71). Parallel-TE shows a slight drop in performance on both
5-shot and 10-shot. Compared with the standard TE, however, both Parallel-TE and
Context-TE yield better results, suggesting that our proposed models are more
effective than the traditional TE.
### Multi-Choice Question Answering
#### Dataset.
We work on MCTest (Richardson, Burges, and Renshaw 2013), a multiple-choice
machine comprehension benchmark from a fictional story domain. One correct
answer must be selected among four candidates given a question and a paragraph
stating the background knowledge. Besides, Richardson, Burges, and Renshaw
(2013) released a TE version data by treating paragraphs as premises and
converting question-answer pairs into hypotheses. MCTest is a low-resource
task, consisting of two sets with 160 (MC160) and 500 (MC500) examples,
respectively. Our experiments are conducted on the TE version of MCTest. The
official evaluation metric is accuracy.
#### Baselines.
We compare with three latest methods:
* •
RDEC (Yu, Zha, and Yin 2019) designs an inferential network trained with
reinforcement learning to understand contexts. This method consists of
multiple attention-based reasoning steps and recursively constructs the
evidence chain to improve the reasoning skill of machines.
* •
TEDocNLI (Yin, Radev, and Xiong 2021) is the previous SOTA system that first
pretrains RoBERTa-large on DocNLI, a large corpus for document-level TE, then
finetunes on MCTest.
* •
TEvanilla directly trains a RoBERTa-large on the TE version data without
pretraining on any extra TE datasets such as MNLI, DocNLI.
#### Model details.
In this task, Parallel-TE also performs data augmentation with the strategy
elaborated in Section “Intent detection” because the training data is scarce.
In addition, Context-TE follows TEDocNLI to exploit the pretrained RoBERTa on
DocNLI as the encoder. Note that this task does not require the top-$k$ option
generation since the option size for each piece of data is merely four, but we
allow the system to truncate the end of premises if the input length exceeds
the max length limit of RoBERTa.
Model | MC160 | MC500
---|---|---
TEvanilla | 42.50 | 63.67
RDEC (Yu, Zha, and Yin 2019) | 80.00 | 75.50
TEDocNLI (Yin et al. 2021) | 90.83 | 90.66
Context-TE | 92.50 | 91.67
Parallel-TE | 81.25 | 82.50
Table 3: Test accuracy on MCTest.
#### Results.
Table 3 shows the experiment results on MCTest. Both Context-TE and TEDocNLI
rely on the indirect supervision from DocNLI, but Context-TE outperforms
TEDocNLI on both MC160 and MC500, achieving the new SOTA performance.
Parallel-TE performs worse than Context-TE because it is more comparable to
TEvanilla: both use the RoBERTa-large encoder and do not pretrain on extra
data. Nevertheless, Parallel-TE achieves multi-option joint training, which
leads to the improvement by large margins: 81.25 vs. 42.50 on MC160, and 82.50
vs. 63.67 on MC500.
## Analysis
### What Contributes to the Improvement?
Model | UFET P | UFET R | UFET F1 | BANKING77 1-shot | 3-shot | 5-shot | 10-shot
---|---|---|---|---|---|---|---
TE | - | - | - | 67.53 | 73.74 | 78.21 | 82.51
TEtop-k | 54.4 | 47.7 | 50.8 | 63.95 | 72.54 | 77.31 | 80.54
LITEtop-k | 49.6 | 51.0 | 50.3 | - | - | - | -
Context-TE | 53.7 | 49.4 | 51.5 | 69.40 | 77.42 | 80.76 | 85.53
Parallel-TE | 54.0 | 51.0 | 52.4 | 69.11 | 75.12 | 78.69 | 83.29
Table 4: Ablation study on both ultra-fine entity typing and intent detection
tasks. [MODEL]top-k refers to the model running on the retrieved top-$k$ options.
TE runs on the full option space.
The experimental results on the three benchmarks demonstrate the effectiveness
of our proposed methods. However, it is still unclear whether the system
benefits from the top-$k$ option generation, which reduces the option space, or
from the new inference strategies. We conduct an ablation study on both
ultra-fine entity typing and intent detection tasks, since our models need to
perform the top-$k$ option generation for them.
Specifically, given the top-$k$ selected options, we run other competitive
systems to see if the smaller option space helps them. For the entity typing
task, we run the prior SOTA model, LITE, on the same top-$k$ ($k=80$) as our
Parallel-TE model. For the intent detection task, we run the standard TE
method on the same top-$k$ ($k=25$). Note that the Context-TE model infers in
the entire option space on BANKING77. To gain a better insight into the impact
of the top-$k$ generation step, we also extend our experiments with 1-shot and
3-shot settings. The data sampling and evaluation strategies do not change.
The ablation results are shown in Table 4. Given top-$k$ options, both our
systems Context-TE and Parallel-TE reduce to TEtop-k if we discard the
competing context and the multi-option joint training. However, we notice that
TEtop-k performs very close to LITEtop-k on UFET (50.8 vs. 50.3 F1) and even
slightly worse than the standard TE on all four few-shot settings of
BANKING77. In contrast, given the same top-$k$ options, both Context-TE and
Parallel-TE surpass the competitors by large margins. This means that our
systems mainly benefit from the newly designed representation learning and
training strategy rather than from the smaller option space produced by
top-$k$ generation.
### Influence of $k$ Values
Figure 2: Top-$k$ recall and Parallel-TE performance on UFET and BANKING77
tasks when $k$ value varies. For the Parallel-TE performance, we report F1 and
accuracy on UFET and BANKING77, respectively.
A higher $k$ value brings a higher top-$k$ recall; however, it can also hurt
the model performance. In this section, we investigate how the $k$ value
affects the Parallel-TE performance.
As shown in Figure 2, we report top-$k$ recall, where $k$ varies from 10 to
100 for UFET and 10 to 70 for (3-shot) BANKING77 due to the max length limit
of RoBERTa. We also show the Parallel-TE performance under different $k$
values. The top-$k$ recall on BANKING77 is already high (91.62%) even when
$k=10$, and its increasing speed slows down when $k>30$. Parallel-TE achieves
the best performance with $k=25$ as reported in the previous section. After
that, a higher $k$ harms the Parallel-TE performance. UFET top-$k$ recall gets
more benefits from the increase of $k$. The highest F1 is achieved when
$k=80$, and a very close performance is obtained when $k=100$. These
observations demonstrate a trade-off between $k$ values and the model
performance. As $k$ increases, more correct options are captured in the
top-$k$ option space, but it also brings more difficulties for Parallel-TE to
select the correct one since more competing options exist. Nonetheless, it is
still beneficial to provide opportunities for those competing options to
interact with each other so that the model can make better decisions.
### Influence of Top-$k$ Orders
We study this factor in both training and inference.
#### Training.
As per our experiments, the order of top-$k$ options in an input instance is
essential to Parallel-TE at the training phase. We first start our experiments
on UFET and keep the original order of the top-$k$ options. After top-$k$
generation, most correct options are at the front part since they usually
receive higher scores. Training a model on pairs with unshuffled top-$k$
options leads to serious overfitting on positional information.
Thus, shuffling the order of top-$k$ is important for Parallel-TE during
training.
#### Inference.
We also investigate the influence of different top-$k$ orders at the inference
stage by conducting experiments on few-shot BANKING77. Given the same text,
different top-$k$ orders sometimes yield slightly different predictions, but
mostly the predictions are consistent. This indicates that our training
strategy in Parallel-TE can result in very robust model behavior. Duplicating
a pair with the shuffled top-$k$ orders and then making the final decision by
majority voting improves the performance slightly, as shown in Table 5, but
these negligible improvements are not worth the time cost.
Model | 1-shot | 3-shot | 5-shot | 10-shot
---|---|---|---|---
Parallel-TE | 69.11 | 75.12 | 78.69 | 83.29
w/ majority vote | 69.89 | 75.70 | 79.31 | 83.57
Table 5: The results of Parallel-TE and the majority voting on BANKING77.
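The shuffle-and-vote procedure just described can be sketched as follows. This is an illustrative helper, with `predict_fn` standing in for a trained (order-sensitive) Parallel-TE model:

```python
from collections import Counter
import random


def vote_predict(premise, hypotheses, predict_fn, n_votes=5, rng=None):
    # Run the order-sensitive model on several shuffled copies of the
    # same instance and return the most common prediction.
    rng = rng or random.Random()
    votes = []
    for _ in range(n_votes):
        order = list(range(len(hypotheses)))
        rng.shuffle(order)
        shuffled = [hypotheses[i] for i in order]
        votes.append(predict_fn(premise, shuffled))
    return Counter(votes).most_common(1)[0][0]
```

Each vote costs a full forward pass, which is why the small accuracy gains in Table 5 come at a multiplicative time cost.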
Figure 3: Inference speed (seconds/test case) of different TE models. TE /
LITE (Top-$k$) indicates that the model runs on the top-$k$ option space.
### Inference Speed of Parallel-TE
We compare the inference speed of Parallel-TE with other entailment-based
methods (TE, LITE, and their respective versions working on top-$k$ options) in
Figure 3. (The inference speed is measured on an NVIDIA GeForce RTX 3090
with an evaluation batch size of 256.)
Overall, the inference time of TE/LITE over the three tasks decreases in the
order of “UFET $\rightarrow$ BANKING77 $\rightarrow$ MCTest” since both
methods search the gold option from the whole space and each subsequent task
has a smaller size of options. The top-$k$ options can speed up TE/LITE (i.e.,
TE/LITE (top-$k$)) dramatically, which is within expectation. However, as we
discussed in Section “What contributes to the improvement?”, this operation
may degrade the model performance.
When applying our model Parallel-TE to the top-$k$ options, the inference speed
gets further boosted: on the UFET task, rerunning the prior SOTA model LITE
takes 15 seconds to predict a single test instance, while our model Parallel-
TE only spends 0.02 seconds.
### Why Does Context-TE Work?
Figure 4: Training loss curves and test accuracy of Context-TE and TE on MC160
task. Loss curves are processed with Gaussian smoothing.
By equipping (P, H) pairs with competing context, Context-TE outperforms the
standard TE model on all three tasks. We try to explain this by analyzing the
training loss curves and test accuracy of both models on the MC160 task, as
shown in Figure 4. We train both models for 5 epochs and report the test accuracy
at the end of each epoch. The training loss is recorded at every training step
and processed with Gaussian smoothing. As the figure illustrates, Context-TE
always outperforms TE after the second epoch while holding a higher training
loss. This is reasonable because Context-TE introduces more challenging data
(selecting the correct option under the disturbance of other options is
harder), which acts as a regularizer that mitigates the overfitting to some
extent.
## Conclusion
This work studied two issues of the popular TE framework in solving selection
tasks (i.e., _neglecting option-to-option comparison in representation
learning_ and _low inference speed_) and proposed Context-TE and Parallel-TE
to address them, respectively. Both new models outperform the standard TE
method and mostly set new state-of-the-art performance on three typical tasks:
ultra-fine entity typing, intent detection, and multi-choice machine
comprehension.
## Acknowledgments
The authors appreciate the reviewers for their insightful comments and
suggestions. This work is supported in part by NSF under grants III-1763325,
III-1909323, III-2106758, and SaTC-1930941.
## References
* Androutsopoulos and Malakasiotis (2010) Androutsopoulos, I.; and Malakasiotis, P. 2010. A Survey of Paraphrasing and Textual Entailment Methods. _J. Artif. Intell. Res._ , 38: 135–187.
* Bowman et al. (2015) Bowman, S. R.; Angeli, G.; Potts, C.; and Manning, C. D. 2015. A large annotated corpus for learning natural language inference. In _Proceedings of EMNLP_ , 632–642.
* Casanueva et al. (2020) Casanueva, I.; Temčinas, T.; Gerz, D.; Henderson, M.; and Vulić, I. 2020. Efficient Intent Detection with Dual Sentence Encoders. In _Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI_ , 38–45.
* Cer et al. (2018) Cer, D.; Yang, Y.; Kong, S.; Hua, N.; Limtiaco, N.; John, R. S.; Constant, N.; Guajardo-Cespedes, M.; Yuan, S.; Tar, C.; Sung, Y.; Strope, B.; and Kurzweil, R. 2018. Universal Sentence Encoder. _CoRR_ , abs/1803.11175.
* Choi et al. (2018) Choi, E.; Levy, O.; Choi, Y.; and Zettlemoyer, L. 2018. Ultra-Fine Entity Typing. In _Proceedings of ACL_ , 87–96.
* Dagan, Glickman, and Magnini (2006) Dagan, I.; Glickman, O.; and Magnini, B. 2006. The PASCAL Recognising Textual Entailment Challenge. In Quiñonero-Candela, J.; Dagan, I.; Magnini, B.; and d’Alché Buc, F., eds., _Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment_ , 177–190.
* Dai, Song, and Wang (2021) Dai, H.; Song, Y.; and Wang, H. 2021. Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model. In _Proceedings of ACL-IJCNLP_ , 1790–1799.
* Devlin et al. (2019) Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of NAACL-HLT_ , 4171–4186.
* dos Santos et al. (2016) dos Santos, C. N.; Tan, M.; Xiang, B.; and Zhou, B. 2016. Attentive Pooling Networks. _CoRR_ , abs/1602.03609.
* Henderson et al. (2020) Henderson, M.; Casanueva, I.; Mrkšić, N.; Su, P.-H.; Wen, T.-H.; and Vulić, I. 2020. ConveRT: Efficient and Accurate Conversational Representations from Transformers. In _Proceedings of EMNLP Findings_ , 2161–2174.
* Hochreiter and Schmidhuber (1997) Hochreiter, S.; and Schmidhuber, J. 1997. Long Short-Term Memory. _Neural Comput._ , 9(8): 1735–1780.
* Khot, Sabharwal, and Clark (2018) Khot, T.; Sabharwal, A.; and Clark, P. 2018. SciTaiL: A Textual Entailment Dataset from Science Question Answering. In _Proceedings of AAAI_ , 5189–5197.
* Levesque, Davis, and Morgenstern (2011) Levesque, H. J.; Davis, E.; and Morgenstern, L. 2011. The Winograd schema challenge. _AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning_ , 46: 47.
* Li, Yin, and Chen (2022) Li, B.; Yin, W.; and Chen, M. 2022. Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference. _Transactions of ACL_ , 10: 607–622.
* Liu et al. (2021) Liu, Q.; Lin, H.; Xiao, X.; Han, X.; Sun, L.; and Wu, H. 2021. Fine-grained Entity Typing via Label Reasoning. In _Proceedings of EMNLP_ , 4611–4622.
* Liu et al. (2019) Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. _CoRR_ , abs/1907.11692.
* Mehri and Eric (2021) Mehri, S.; and Eric, M. 2021. Example-Driven Intent Prediction with Observers. In _Proceedings of NAACL-HLT_ , 2979–2992.
* Mehri, Eric, and Hakkani-Tür (2020) Mehri, S.; Eric, M.; and Hakkani-Tür, D. 2020. DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue. _CoRR_ , abs/2009.13570.
* Nie et al. (2020) Nie, Y.; Williams, A.; Dinan, E.; Bansal, M.; Weston, J.; and Kiela, D. 2020. Adversarial NLI: A New Benchmark for Natural Language Understanding. In _Proceedings of ACL_ , 4885–4901.
* Onoe et al. (2021) Onoe, Y.; Boratko, M.; McCallum, A.; and Durrett, G. 2021. Modeling Fine-Grained Entity Types with Box Embeddings. In _Proceedings of ACL-IJCNLP_ , 2051–2064.
* Onoe and Durrett (2019) Onoe, Y.; and Durrett, G. 2019. Learning to Denoise Distantly-Labeled Data for Entity Typing. In _Proceedings of NAACL-HLT_ , 2407–2417.
* Peters et al. (2018) Peters, M. E.; Neumann, M.; Iyyer, M.; Gardner, M.; Clark, C.; Lee, K.; and Zettlemoyer, L. 2018. Deep Contextualized Word Representations. In _Proceedings of NAACL-HLT_ , 2227–2237.
* Rei and Briscoe (2011) Rei, M.; and Briscoe, T. 2011. Unsupervised Entailment Detection between Dependency Graph Fragments. In _Proceedings of BioNLP@ACL_ , 10–18.
* Reimers and Gurevych (2019) Reimers, N.; and Gurevych, I. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In _Proceedings EMNLP-IJCNLP_ , 3982–3992.
* Richardson, Burges, and Renshaw (2013) Richardson, M.; Burges, C. J.; and Renshaw, E. 2013. MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text. In _Proceedings of EMNLP_ , 193–203.
* Rocktäschel et al. (2016) Rocktäschel, T.; Grefenstette, E.; Hermann, K. M.; Kociský, T.; and Blunsom, P. 2016. Reasoning about Entailment with Neural Attention. In _Proceedings of ICLR_.
* Sainz et al. (2021) Sainz, O.; de Lacalle, O. L.; Labaka, G.; Barrena, A.; and Agirre, E. 2021. Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction. In _Proceedings of EMNLP_ , 1199–1212.
* Sainz et al. (2022) Sainz, O.; Gonzalez-Dios, I.; de Lacalle, O. L.; Min, B.; and Agirre, E. 2022. Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning. In _Proceedings of NAACL-HLT_.
* Wang et al. (2021) Wang, S.; Fang, H.; Khabsa, M.; Mao, H.; and Ma, H. 2021. Entailment as Few-Shot Learner. _CoRR_ , abs/2104.14690.
* Wang and Jiang (2016) Wang, S.; and Jiang, J. 2016. Learning Natural Language Inference with LSTM. In _Proceedings of NAACL-HLT_ , 1442–1451.
* Wang, Hamza, and Florian (2017) Wang, Z.; Hamza, W.; and Florian, R. 2017. Bilateral Multi-Perspective Matching for Natural Language Sentences. In _Proceedings of IJCAI_ , 4144–4150.
* Williams, Nangia, and Bowman (2018) Williams, A.; Nangia, N.; and Bowman, S. 2018. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. In _Proceedings of NAACL-HLT_ , 1112–1122.
* Xia et al. (2021) Xia, C.; Yin, W.; Feng, Y.; and Yu, P. 2021. Incremental Few-shot Text Classification with Multi-round New Classes: Formulation, Dataset and System. In _Proceedings of NAACL-HLT_ , 1351–1360.
* Yin, Hay, and Roth (2019) Yin, W.; Hay, J.; and Roth, D. 2019. Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach. In _Proceedings of EMNLP-IJCNLP_ , 3912–3921.
* Yin, Radev, and Xiong (2021) Yin, W.; Radev, D.; and Xiong, C. 2021. DocNLI: A Large-scale Dataset for Document-level Natural Language Inference. In _Proceedings of ACL-IJCNLP Findings_ , 4913–4922.
* Yin et al. (2020) Yin, W.; Rajani, N. F.; Radev, D.; Socher, R.; and Xiong, C. 2020. Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start. In _Proceedings of EMNLP_ , 8229–8239.
* Yin and Schütze (2018) Yin, W.; and Schütze, H. 2018. Attentive Convolution: Equipping CNNs with RNN-style Attention Mechanisms. _Transactions of ACL_ , 6: 687–702.
* Yu, Zha, and Yin (2019) Yu, J.; Zha, Z.; and Yin, J. 2019. Inferential Machine Comprehension: Answering Questions by Recursively Deducing the Evidence Chain from Text. In _Proceedings of ACL_ , 2241–2251.
* Zhang et al. (2021) Zhang, J.; Bui, T.; Yoon, S.; Chen, X.; Liu, Z.; Xia, C.; Tran, Q. H.; Chang, W.; and Yu, P. 2021. Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning. In _Proceedings of EMNLP_ , 1906–1912.
* Zhang et al. (2020) Zhang, J.; Hashimoto, K.; Liu, W.; Wu, C.-S.; Wan, Y.; Yu, P.; Socher, R.; and Xiong, C. 2020. Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference. In _Proceedings of EMNLP_ , 5064–5082.
# Twice epi-differentiability of a class of non-amenable composite functions
Yulan<EMAIL_ADDRESS>School of Mathematics and Statistics, Guangdong
University of Technology, Guangzhou, and Shaohua Pan (corresponding author,
shhpan@scut.edu.cn), School of Mathematics, South China University of
Technology, Guangzhou.
###### Abstract
This paper is concerned with the twice epi-differentiability of a class of
non-amenable functions, which are the composition of a piecewise twice
differentiable (PWTD) function and a parabolically semidifferentiable mapping.
Such composite functions frequently appear in constrained and composite
optimization problems, disjunctive optimization problems, and low-rank or/and
sparsity optimization problems. To achieve their twice epi-differentiability,
we first justify the proper twice epi-differentiability and parabolic epi-
differentiability of PWTD functions, and then derive an upper and lower
estimate for the second subderivatives of this class of composite functions in
terms of a chain rule of their parabolic subderivatives. We employ the
obtained upper and lower estimates to characterize the parabolic regularity,
the second subderivatives, and so the proper twice epi-differentiability for
several classes of popular non-amenable functions, including the compositions
of PWTD outer functions and twice differentiable inner mappings, the
regularized functions inducing group sparsity, the indicator functions of the
$q\,(q>1)$-order cone and the negative semidefinite cone.
## 1 Introduction
Let $\mathbb{X}$ represent a finite dimensional real vector space endowed with
the inner product $\langle\cdot,\cdot\rangle$ and its induced norm
$\|\cdot\|$. We are interested in the composite function
$f(x):=\vartheta(F(x))\quad{\rm for}\ \ x\in\mathbb{X},$ (1)
where the functions $F\\!:\mathbb{X}\to\mathbb{R}^{m}$ and
$\vartheta\\!:\mathbb{R}^{m}\to\overline{\mathbb{R}}\\!:=(-\infty,\infty]$
satisfy the following basic assumption:
###### Assumption 1
(i)
$F$ is parabolically semidifferentiable on an open set $\mathcal{O}\supset
F^{-1}({\rm dom}\vartheta)$;
(ii)
$\vartheta$ is piecewise twice differentiable and strictly continuous relative
to ${\rm dom}\vartheta\\!\neq\\!\emptyset$.
Here, “strictly continuous” rather than “locally Lipschitz continuous” is used
because in this paper we mainly adopt standard notation as utilized in [1]. By
combining Assumption 1 (ii) and Definition 2.1 from below, we assume that
${\rm dom}\,\vartheta=\bigcup_{i=1}^{s}C_{i}$ for polyhedral sets
$C_{1},\ldots,C_{s}$, and on each $C_{i}$, $\vartheta$ equals a function
$\vartheta_{i}$ that is twice differentiable on an open superset of $C_{i}$.
The function $f$ in (1) frequently appears in the following composite
optimization problem
$\min_{x\in\mathbb{X}}\Phi(x):=f_{0}(x)+f(x),$ (2)
where $f_{0}\\!:\mathbb{X}\to\overline{\mathbb{R}}$ is a lower semicontinuous
(lsc) function that is twice differentiable on an open set containing
$F^{-1}({\rm dom}\vartheta)$. While such a composite problem encompasses major
classes of constrained problems such as classical nonlinear programming,
second-order cone and semidefinite programming (see [2]), eigenvalue
optimization problems (see [3, 4, 5]), and amenable composite optimization
problems [6]. As the outer $\vartheta$ in $f$ is not required to be convex,
some new problems are also absored in model (2) such as disjunctive programs
[7, 8] and composite problems arising from low-rank or/and sparsity
optimization; for example, the loss function $\psi(\mathcal{A}(UV^{\top}\\!))$
appearing in the factorized form of low-rank optimization precisely takes the
form of (1), where $\psi\\!:\mathbb{R}^{m}\to\mathbb{R}$ is a piecewise
linear-quadratic (PWLQ) function such as the SCAD or MCP function (see [9,
10]) and $\mathcal{A}\\!:\mathbb{R}^{n_{1}\times n_{2}}\to\mathbb{R}^{m}$ is a
sampling operator. In addition, the group SCAD and MCP functions in group
sparsity optimization [11, 12], which are shown to be the equivalent DC
surrogates of group zero-norm in [13], also take the form of (1) with a
parabolically semidifferentiable $F$.
Composite functions of type (1) constitute a convenient framework in
variational analysis and continuous optimization for addressing theoretical
and algorithmic issues of constrained and composite optimization. Standard
assumptions under which this class of functions was investigated and applied
in constrained optimization require that the inner mapping $F$ is twice
continuously differentiable, that the outer function $\vartheta$ is lsc and
convex, and that the epigraphical multifunction associated to $\vartheta$ is
metrically regular around the point of of interest (see [1]). Compared with
metric regularity, metric subregularity of a multifunction or, equivalently,
calmness of its inverse mapping has more important application scenarios where
its robust counterpart is out of reach. We refer the reader to the reference
[14, 15, 16, 17, 18, 19, 20, 21] for various developments on metric
subregularity and calmness, and their applications to optimality conditions,
error bounds, and convergence of algorithms. Inspired by this, Mohammadi et
al. [22] carried out the first-order and second-order variational analysis for
a class of fully subamenable functions of the form (1), which are the same as
fully amenable functions [6] except that the metric regularity of epigraphical
multifunctions required by the latter is replaced by the metric subregularity
qualification condition (MSQC) for the system $F(x)\in{\rm dom}\vartheta$ at the
point of interest.
Twice epi-differentiability of extended real-valued functions, as a carrier of
vital second-order information, plays a significant role in second-order
variational analysis of nonconvex and nonsmooth optimization, but its
justification has been well recognized to be extremely difficult. Since
Rockafellar’s landmark paper [6], where this property was verified for
fully amenable functions, there have been few works on this topic except [23, 24,
25]. The papers [23, 24] extended Rockafellar’s results to the composition (1)
with a proper lsc convex $\vartheta$ and a twice differentiable $F$, but
required a restrictive assumption on the second subderivative that does not
hold for constrained optimization problems, while the work [25] obtained upper
and lower estimates for the second subderivative but did not discuss the twice
epi-differentiability. Until recently, Mohammadi et al. [26, 27, 22] conducted
a systematic study for the twice epi-differentiability of this class of
compositions under MSQCs by leveraging their parabolic epi-differentiability
and regularity. Among others, they fully analyzed in [26] parabolic regularity
for constraint system
$\Omega\cap\mathcal{O}=\big{\\{}x\in\mathcal{O}\,|\,g(x)\in\Theta\\}$ so as to
achieve the twice epi-differentiability of the indicator function of $\Omega$,
where $\mathcal{O}$ is a neighborhood of the point of interest,
$g\\!:\mathbb{R}^{n}\to\mathbb{R}^{m}$ is a mapping that is twice
differentiable at the point of interest, and $\Theta\subset\mathbb{R}^{m}$ is
a closed convex set; and later they showed in [27] that the twice epi-
differentiability of this class of compositions can be guaranteed under
parabolic regularity if the outer $\vartheta$ is strictly continuous relative
to its domain. Benko and Mehlitz [28] studied a chain rule and a marginal
function rule for the second subderivative, which yield lower estimates for
the second subderivative of $f$ with an lsc $\vartheta$ and a continuous
$F$, but did not touch its twice epi-differentiability.
In this work, we continue this line of research by investigating the twice
epi-differentiability for a class of non-amenable functions, i.e., the
composition (1) with $\vartheta$ and $F$ satisfying Assumption 1. The domain
of the outer $\vartheta$ has a certain polyhedrality, but the associated
optimization model (2) still involves nonpolyhedral conic optimization
problems because the inner $F$ is not required to be differentiable. In
section 3, we conduct a systematic second-order variational analysis of PWTD
functions, and confirm their proper twice epi-differentiability and parabolic
epi-differentiability. Using these properties of PWTD functions, we derive in
section 4 an upper estimate and a lower one for the second subderivatives of
this class of compositions. In section 5 these estimates are used to achieve
the parabolic regularity and then the twice epi-differentiability for several
classes of common $f$, including the compositions of PWTD outer functions and
twice differentiable inner mappings, the regularizers inducing group sparsity,
and the indicator functions of the $q\,(q>1)$-order cone and the positive
semidefinite cone. To the best of our knowledge, this work is the first to
explore the twice epi-differentiability of non-amenable functions. Mohammadi
[29] studied the first-order variational analysis of non-amenable functions,
but did not discuss their second-order properties.
Notation. For an extended real-valued function
$h\\!:\mathbb{X}\\!\to\\!\overline{\mathbb{R}}$, denote by ${\rm
dom}\,h:=\\{x\in\mathbb{X}\,|\,h(x)<\infty\\}$ its domain, by ${\rm
epi}\,h\\!:=\\{(x,\alpha)\in\mathbb{X}\times\mathbb{R}\ |\ h(x)\leq\alpha\\}$
its epigraph, and by $h^{*}$ its conjugate, i.e.
$h^{*}(x^{*}):=\sup_{x\in\mathbb{X}}\big{\\{}\langle
x,x^{*}\rangle-h(x)\big{\\}}$. Such $h$ is proper if $h(x)>-\infty$ for all
$x\in\mathbb{X}$ and ${\rm dom}\,h\neq\emptyset$. If a mapping
$g\\!:\mathbb{X}\to\mathbb{R}^{m}$ is differentiable at
$\overline{x}\in\mathbb{X}$, $\nabla g(\overline{x})$ denotes the transpose of
$g^{\prime}(\overline{x})$, the Jacobian of $g$ at $\overline{x}$, and if $g$
is twice differentiable at $\overline{x}$, $\nabla^{2}g(\overline{x})$ denotes
its second-order differential mapping at $\overline{x}$. For a closed set
$S\subset\mathbb{X}$, $\delta_{S}$ denotes the indicator function of $S$,
i.e., $\delta_{S}(x)=0$ if $x\in S$, otherwise $\delta_{S}(x)=\infty$, and
${\rm dist}(x,S)$ denotes the distance of $x$ from $S$ on the norm
$\|\cdot\|$. For a given $x\in\mathbb{X}$, $\mathbb{B}(x,\varepsilon)$ denotes
the closed ball of radius $\varepsilon$ centered at $x$ on the norm
$\|\cdot\|$, and write $\mathbb{B}_{\mathbb{X}}$ for $\mathbb{B}(0,1)$. For an
integer $k\geq 1$, write $[k]:=\\{1,\ldots,k\\}$. For every $q\in(1,\infty)$,
$\|\cdot\|_{q}$ represents the $\ell_{q}$-norm of vectors in $\mathbb{R}^{n}$.
For a closed set $C\subset\mathbb{R}^{m}$ and a point $y\in\mathbb{R}^{m}$,
${\rm dist}_{2}(y,C)$ means the distance of $y$ from $C$ on the
$\ell_{2}$-norm. Unless otherwise stated, we always write
$F(x):=(F_{1}(x),\ldots,F_{m}(x))^{\top}$, and for each
$y=(y_{1},\ldots,y_{m})^{\top}\in\mathbb{R}^{m}$, define the function
$(yF)\\!:\mathbb{X}\to\mathbb{R}$ by $(yF)(x):=\langle
y,F(x)\rangle=\sum_{i=1}^{m}y_{i}F_{i}(x)$.
## 2 Preliminaries
This section includes some basic concepts on variational analysis (see the
monographs [1, 30] for more details) and preliminary results that will be used
later. We first introduce the definitions of PWTD functions and basic
subdifferentials.
###### Definition 2.1
A function $h\\!:\mathbb{R}^{m}\to\overline{\mathbb{R}}$ is said to be PWTD if
its domain is nonempty and can be represented as a union of finitely many
polyhedral sets, say ${\rm dom}\,h:=\bigcup_{i=1}^{s}\Omega_{i}$ for
polyhedral sets $\Omega_{1},\ldots,\Omega_{s}$, and for each $i\in[s]$, there
is a function $h_{i}$, which is twice differentiable on an open superset of
$\Omega_{i}$, such that $h=h_{i}$ on $\Omega_{i}$.
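For intuition (an illustrative example, not from the paper), $h(x)=|x|$ is PWTD: its domain splits into the polyhedral pieces $\Omega_{1}=(-\infty,0]$ and $\Omega_{2}=[0,\infty)$, on which $h$ agrees with the smooth functions $h_{1}(x)=-x$ and $h_{2}(x)=x$. A quick numeric check:

```python
# PWTD example: h(x) = |x| with pieces Omega_1 = (-inf, 0], Omega_2 = [0, inf).

def h(x):
    return abs(x)

def h1(x):  # twice differentiable on all of R; equals h on Omega_1
    return -x

def h2(x):  # twice differentiable on all of R; equals h on Omega_2
    return x

grid = [i / 10 for i in range(-30, 31)]
assert all(h(x) == h1(x) for x in grid if x <= 0)
assert all(h(x) == h2(x) for x in grid if x >= 0)
```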
###### Definition 2.2
(see [1, Definition 8.3]) Consider a function
$h\\!:\mathbb{X}\to\overline{\mathbb{R}}$ and a point $x\in{\rm dom}\,h$. The
regular subdifferential of $h$ at $x$ is defined as
$\widehat{\partial}h(x):=\Big{\\{}v\in\mathbb{X}\ |\ \liminf_{x\neq
x^{\prime}\to x}\frac{h(x^{\prime})-h(x)-\langle
v,x^{\prime}-x\rangle}{\|x^{\prime}-x\|}\geq 0\Big{\\}},$
and the basic (also known as the limiting or Morduhovich) subdifferential of
$h$ at $x$ is defined as
$\partial h(x):=\Big{\\{}v\in\mathbb{X}\ |\ \exists\,x^{k}\to x\ {\rm with}\
h(x^{k})\to h(x)\ {\rm and}\ v^{k}\in\widehat{\partial}h(x^{k})\ {\rm s.t.}\
v^{k}\to v\Big{\\}}.$
When $h=\delta_{S}$ for a nonempty closed set $S\subset\mathbb{X}$, the
definitions of $\widehat{\partial}h(x)$ and $\partial h(x)$ reduce to those of
the regular and basic normal cones to $S$ at $x$, respectively, denoted by
$\widehat{\mathcal{N}}_{S}(x)$ and $\mathcal{N}_{S}(x)$. Furthermore, when
$\widehat{\mathcal{N}}_{S}(x)=\mathcal{N}_{S}(x)$, the set $S$ is said to be
Clarke regular at $x$.
For a multifunction
$\mathcal{F}\\!:\mathbb{X}\rightrightarrows\mathbb{R}^{m}$, we denote by
$\mathcal{F}^{-1}(y):=\\{x\in\mathbb{X}\ |\ y\in\mathcal{F}(x)\\}$ its inverse
mapping, and by ${\rm
gph}\mathcal{F}:=\\{(x,y)\in\mathbb{X}\times\mathbb{R}^{m}\ |\
y\in\mathcal{F}(x)\\}$ its graph. Recall that $\mathcal{F}$ is said to have
the (metric) subregularity property with modulus $\kappa>0$ at a point
$(\overline{x},\overline{y})\in{\rm gph}\mathcal{F}$ if there exists
$\varepsilon>0$ such that
${\rm dist}(x,\mathcal{F}^{-1}(\overline{y}))\leq\kappa\,{\rm
dist}_{2}(\overline{y},\mathcal{F}(x))\quad{\rm for\ all}\
x\in\mathbb{B}(\overline{x},\varepsilon).$
To characterize the tangent cones and second-order tangent sets to ${\rm
dom}f$ later, we need the MSQC for constraint system $F(x)\in{\rm
dom}\vartheta$, a mild condition also used in [31, 32, 33, 16].
###### Definition 2.3
The MSQC is said to hold for system $F(x)\in{\rm dom}\vartheta$ at a point
$\overline{x}\in{\rm dom}f$ if the multifunction $\mathcal{F}(x)\\!:=F(x)-{\rm
dom}\vartheta$ is metrically subregular at $(\overline{x},0)$.
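To see that metric subregularity is a genuine restriction (an illustrative failure case, not from the paper), take $F(x)=x^{2}$ with ${\rm dom}\,\vartheta=\{0\}$: the estimate would require $|x|\leq\kappa\,x^{2}$ near $0$, which no finite modulus $\kappa$ satisfies:

```python
# Failure of metric subregularity for F(x) = x^2, dom(theta) = {0}.
# Here F^{-1}(0) = {0}, so the required estimate is |x| <= kappa * x^2
# near 0; the ratio |x| / x^2 = 1/|x| is unbounded as x -> 0.

def ratio(x):
    dist_to_solutions = abs(x)   # dist(x, F^{-1}(0)) with F^{-1}(0) = {0}
    residual = x * x             # dist_2(0, F(x) - dom theta) = |x^2|
    return dist_to_solutions / residual

assert ratio(0.1) > 9
assert ratio(0.01) > 99
assert ratio(1e-6) > 1e5   # blows up: no finite kappa works
```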
### 2.1 Subderivative and tangent cones
Before stating the formal definition of subderivative of a function, we recall
the epi-convergence of a sequence $\\{h^{\nu}\\}_{\nu\in\mathbb{N}}$ of
functions on $\mathbb{X}$.
###### Definition 2.4
(see [1, Definition 7.1]) For any sequence $\\{h^{\nu}\\}_{\nu\in\mathbb{N}}$
of functions on $\mathbb{X}$, the lower epi-limit, denoted by
e-$\liminf_{\nu}h^{\nu}$, is the function having $\limsup_{\nu}({\rm
epi}\,h^{\nu})$ as its epigraph, i.e.,
${\rm
epi}(\textrm{e-}{\textstyle\liminf_{\nu}}h^{\nu}\big{)}:={\textstyle\limsup_{\nu}}({\rm
epi}\,h^{\nu}).$
The upper epi-limit, denoted by e-$\limsup_{\nu}h^{\nu}$, is the function
having $\liminf_{\nu}({\rm epi}\,h^{\nu})$ as its epigraph, i.e.,
${\rm
epi}(\textrm{e-}{\textstyle\limsup_{\nu}}h^{\nu}\big{)}:={\textstyle\liminf_{\nu}}({\rm
epi}\,h^{\nu}).$
If these two functions coincide, the epi-limit function e-$\lim_{\nu}h^{\nu}$
is said to exist and then
$\textrm{e-}{\textstyle\lim_{\nu}}h^{\nu}:=\textrm{e-}{\textstyle\liminf_{\nu}}h^{\nu}=\textrm{e-}{\textstyle\limsup_{\nu}}h^{\nu}$.
Thus, $h^{\nu}\xrightarrow[]{e}h$ if and only if ${\rm epi}\,h^{\nu}\to{\rm
epi}\,h$.
###### Definition 2.5
(see [1, Definitions 8.1 $\&$ 7.20]) Consider a function
$h\\!:\mathbb{X}\to\mathbb{\overline{R}}$ and a point $x\in{\rm dom}\,h$. Let
$\Delta_{\tau}h(x)\\!:\mathbb{X}\to\overline{\mathbb{R}}$ be the first-order
difference quotients of $h$ at $x$:
$\Delta_{\tau}h(x)(w^{\prime}):=\tau^{-1}\big{[}h(x+\tau
w^{\prime})-h(x)\big{]}\quad{\rm for}\ \tau>0.$
The subderivative function $dh(x)\\!:\mathbb{X}\to[-\infty,\infty]$ of $h$ at
$x$ is defined as
$dh(x)(w):=\liminf_{\tau\downarrow 0,w^{\prime}\to
w}\Delta_{\tau}h(x)(w^{\prime})\quad{\rm for}\ w\in\mathbb{X}.$
The function $h$ is called (properly) epi-differentiable at $x$ if
$\Delta_{\tau}h(x)$ epi-converges to the (proper) function $dh(x)$ as
$\tau\downarrow 0$. The function $h$ is said to be semidifferentiable at $x$
for $w$ if the (possibly infinite) limit $\lim_{\tau\downarrow 0,w^{\prime}\to
w}\Delta_{\tau}h(x)(w^{\prime})$ exists, and it is the semiderivative of $h$
at $x$ for $w$; and if this holds for every $w\in\mathbb{X}$, $h$ is
semidifferentiable at $x$.
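As a concrete check of Definition 2.5 (illustrative, not from the paper), for $h(x)=|x|$ at $x=0$ the difference quotient $\Delta_{\tau}h(0)(w)=|\tau w|/\tau=|w|$ is independent of $\tau$, so $dh(0)(w)=|w|$ and $h$ is semidifferentiable at $0$:

```python
# First-order difference quotient of h(x) = |x| at x = 0 (Definition 2.5).
# Delta_tau h(0)(w) = |tau * w| / tau = |w| for every tau > 0,
# so the subderivative is dh(0)(w) = |w|.

def diff_quotient(h, x, w, tau):
    return (h(x + tau * w) - h(x)) / tau

h = abs
for w in (-2.0, -0.5, 0.0, 1.0, 3.0):
    for tau in (1.0, 1e-3, 1e-8):
        assert abs(diff_quotient(h, 0.0, w, tau) - abs(w)) < 1e-9
```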
According to [1, Theorem 7.21], the semidifferentiability of $h$ at $x$
implies that $dh(x)(w)$ is finite for all $w\in\mathbb{X}$. Inspired by this,
we say that a mapping $g\\!:\mathbb{X}\to\mathbb{R}^{m}$ is semidifferentiable
at $x$ for $w\in\mathbb{X}$ if each of its component functions
$g_{i}\\!:\mathbb{X}\to\mathbb{R}$ is semidifferentiable at $x$ for $w$, in
which case we denote by $dg(x)(w):=(dg_{1}(x)(w),\ldots,dg_{m}(x)(w))^{\top}$ the
semiderivative of $g$ at $x$ for $w$, and $g$ is said to be semidifferentiable
at $x$ if it is semidifferentiable at $x$ for all $w\in\mathbb{X}$.
With the subderivative function, we follow the same line as in [26] to
introduce the critical cone to an extended real-valued
$h\\!:\mathbb{X}\to\overline{\mathbb{R}}$ at a point $(x,v)\in{\rm
gph}\partial h$:
$\mathcal{C}_{h}(x,v):=\big{\\{}w\in\mathbb{X}\ |\ dh(x)(w)=\langle
v,w\rangle\big{\\}}.$ (3)
When $h=\delta_{S}$ for a nonempty closed set $S\subset\mathbb{X}$, the
subderivative $dh(x)$ for $x\in{\rm dom}h$ is precisely the indicator function
of $\mathcal{T}_{S}(x)$, the tangent cone to $S$ at $x$, and now
$\mathcal{C}_{h}(x,v)=\mathcal{T}_{S}(x)\cap\\{v\\}^{\perp}$. The tangent and
inner tangent cones to $S$ at $x\in S$ are respectively defined as
$\displaystyle\mathcal{T}_{S}(x)$ $\displaystyle:=\big{\\{}w\in\mathbb{X}\ |\
\exists\,\tau_{k}\downarrow 0\ \ {\rm s.t.}\ \ {\rm
dist}(x+\tau_{k}w,S)=o(\tau_{k})\big{\\}},$
$\displaystyle\mathcal{T}_{S}^{i}(x)$ $\displaystyle:=\big{\\{}w\in\mathbb{X}\
|\ {\rm dist}(x+\tau w,S)=o(\tau)\ \ \forall\tau\geq 0\big{\\}},$
and we stipulate that $\mathcal{T}_{S}(x)=\mathcal{T}^{i}_{S}(x)=\emptyset$ if
$x\notin S$. The following lemma characterizes the tangent cone and inner
tangent cone to ${\rm dom}f$, where $dF(\overline{x})(w)$ is well defined
because the parabolic semidifferentiability of $F$ by Assumption 1 (i)
implies its semidifferentiability by Definition 2.8 later.
###### Lemma 2.1
Fix any $\overline{x}\in{\rm dom}f$. If the MSQC holds for system $F(x)\in{\rm
dom}\vartheta$ at $\overline{x}$, then
$\displaystyle\mathcal{T}_{{\rm dom}f}(\overline{x})$
$\displaystyle=\big{\\{}w\in\mathbb{X}\ |\
dF(\overline{x})(w)\in\mathcal{T}_{{\rm
dom}\vartheta}(F(\overline{x}))\big{\\}}$
$\displaystyle=\Big{\\{}w\in\mathbb{X}\ |\
dF(\overline{x})(w)\in\bigcup\limits_{i=1}^{s}\mathcal{T}_{C_{i}}(F(\overline{x}))\Big{\\}}=\mathcal{T}_{{\rm
dom}f}^{i}(\overline{x}).$ (4)
Proof: Since the MSQC holds for system $F(x)\in{\rm dom}\vartheta$ at
$\overline{x}$, the first equality is implied by [34, Proposition 1]. From
${\rm dom}\vartheta=\bigcup_{i=1}^{s}C_{i}$ and [2, Proposition 3.37], it
follows that
$\bigcup\limits_{i=1}^{s}\mathcal{T}_{C_{i}}^{i}(F(\overline{x}))\subset\mathcal{T}_{{\rm
dom}\vartheta}^{i}(F(\overline{x}))\subset\mathcal{T}_{{\rm
dom}\vartheta}(F(\overline{x}))=\bigcup\limits_{i=1}^{s}\mathcal{T}_{C_{i}}(F(\overline{x}))=\bigcup\limits_{i=1}^{s}\mathcal{T}_{C_{i}}^{i}(F(\overline{x})),$
which implies that the second equality holds and $\mathcal{T}_{{\rm
dom}\vartheta}^{i}(F(\overline{x}))=\mathcal{T}_{{\rm
dom}\vartheta}(F(\overline{x}))$. Together with the first equality in (4)
and $\mathcal{T}_{{\rm dom}f}^{i}(\overline{x})\subset\mathcal{T}_{{\rm
dom}f}(\overline{x})$, it remains only to prove that
$\big{\\{}w\in\mathbb{X}\ |\ dF(\overline{x})(w)\in\mathcal{T}^{i}_{{\rm
dom}\vartheta}(F(\overline{x}))\big{\\}}\subset\mathcal{T}_{{\rm
dom}f}^{i}(\overline{x}).$ (5)
Pick any $w$ from the set on the left hand side of (5). By the definition of
the inner tangent cone, for sufficiently small $\tau>0$, ${\rm
dist}_{2}(F(\overline{x})+\tau dF(\overline{x})(w),{\rm
dom}\vartheta)=o(\tau)$. Recall that the mapping $F$ is semidifferentiable at
$\overline{x}$ for $w$. By Definition 2.5, $F(\overline{x}+\tau
w)-F(\overline{x})-\tau dF(\overline{x})(w)=o(\tau)$ and then ${\rm
dist}_{2}(F(\overline{x}+\tau w),{\rm dom}\vartheta)=o(\tau)$. In addition,
since the MSQC holds for system $F(x)\in{\rm dom}\vartheta$ at $\overline{x}$,
for each sufficiently small $\tau>0$, there exist $x_{\tau}\in{\rm dom}f$ and
$\kappa>0$ such that $\|\overline{x}+\tau w-x_{\tau}\|\leq\kappa{\rm
dist}_{2}(F(\overline{x}+\tau w),{\rm dom}\vartheta)$. Together, these two facts show that
${\rm dist}(\overline{x}+\tau w,{\rm dom}f)=o(\tau)$ and
$w\in\mathcal{T}^{i}_{{\rm dom}f}(\overline{x})$. Consequently, the inclusion
in (5) holds. $\Box$
According to [1, Exercise 8.4], for a function
$h\\!:\mathbb{X}\to\mathbb{\overline{R}}$ and a point $x\in{\rm dom}\,h$,
there is a close relation between its subderivative at $x$ and its regular
subdifferential at $x$, i.e.,
$v\in\widehat{\partial}h(x)\ \Longleftrightarrow\ dh(x)(w)\geq\langle
v,w\rangle\ {\rm for\ all}\ w\in\mathbb{X}.$ (6)
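Relation (6) can be verified on the example $h(x)=|x|$ at $x=0$ (illustrative, not from the paper), where $dh(0)(w)=|w|$ and hence $\widehat{\partial}h(0)=[-1,1]$:

```python
# Check relation (6) for h(x) = |x| at x = 0: v is a regular subgradient
# iff dh(0)(w) = |w| >= v*w for all directions w, i.e. iff |v| <= 1.

dh0 = abs   # subderivative of |x| at 0: dh(0)(w) = |w|

def is_regular_subgradient(v, ws):
    return all(dh0(w) >= v * w for w in ws)

ws = [i / 7 for i in range(-21, 22)]   # sample test directions
assert is_regular_subgradient(1.0, ws)
assert is_regular_subgradient(-1.0, ws)
assert is_regular_subgradient(0.3, ws)
assert not is_regular_subgradient(1.2, ws)   # |v| > 1 fails the inequality
```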
### 2.2 Second and parabolic subderivatives
To introduce two kinds of second subderivatives for a function
$h\\!:\mathbb{X}\to\mathbb{\overline{R}}$ at a point $x\in{\rm dom}\,h$, we
need two forms of second-order difference quotients for $h$ at $x$:
$\displaystyle\Delta^{2}_{\tau}h(x)(w^{\prime})$
$\displaystyle:=\frac{h(x\\!+\\!\tau w^{\prime})-h(x)-\tau
dh(x)(w^{\prime})}{\tau^{2}/2}\quad{\rm for}\ \tau>0,$
$\displaystyle\Delta^{2}_{\tau}h(x|v)(w^{\prime})$
$\displaystyle:=\frac{h(x\\!+\\!\tau w^{\prime})-h(x)-\tau\langle
v,w^{\prime}\rangle}{\tau^{2}/2}\quad{\rm for}\ \tau>0,$
where $\Delta^{2}_{\tau}h(x)(w^{\prime}):=\infty$ whenever $h(x\\!+\\!\tau
w^{\prime})=dh(x)(w^{\prime})=\infty$ or $-\infty$.
###### Definition 2.6
(see [1, Definitions 13.3 & 13.6]) Consider a function
$h\\!:\mathbb{X}\to\mathbb{\overline{R}}$, a point $x\in{\rm dom}\,h$ and a
vector $v\in\mathbb{X}$. The second subderivative of $h$ at $x$ for $v$ and
$w$ is defined as
$d^{2}h(x|v)(w):=\liminf_{\tau\downarrow 0,w^{\prime}\to
w}\Delta^{2}_{\tau}h(x|v)(w^{\prime}),$
while the second subderivative of $h$ at $x$ for $w$ (without mention of $v$)
is defined as
$d^{2}h(x)(w):=\liminf_{\tau\downarrow 0,w^{\prime}\to
w}\Delta^{2}_{\tau}h(x)(w^{\prime}).$
The function $h$ is said to be (properly) twice epi-differentiable at $x$ for
$v$ if $\Delta^{2}_{\tau}h(x|v)$ epi-converges to the (proper) function
$d^{2}h(x|v)$ as $\tau\downarrow 0$.
From Definition 2.6 and [1, Proposition 7.2], the twice epi-differentiability
of $h$ at $x$ for $v$ can be equivalently described as follows: for every
$w\in\mathbb{X}$ and every sequence $\tau_{k}\downarrow 0$ there exists a
sequence $w^{k}\to w$ such that $\Delta^{2}_{\tau_{k}}h(x|v)(w^{k})\to
d^{2}h(x|v)(w)$ as $k\to\infty$. By [1, Proposition 13.5], the function
$d^{2}h(x|v)$ is lsc and positive homogenous of degree $2$, and if it is
proper, i.e. $d^{2}h(x|v)(w)>-\infty$ for all $w\in\\!\mathbb{X}$ and ${\rm
dom}\,d^{2}h(x|v)\\!\neq\emptyset$, then ${\rm
dom}\,d^{2}h(x|v)\subset\mathcal{C}_{h}(x,v)$.
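A sanity check of Definition 2.6 on a smooth example (illustrative, not from the paper): for $h(x)=x^{2}$ at $x=1$ with $v=h^{\prime}(1)=2$, the second-order difference quotient equals $2w^{2}=\langle w,h^{\prime\prime}(1)w\rangle$ for every $\tau>0$:

```python
# Second-order difference quotient of h(x) = x^2 at x = 1 with v = h'(1) = 2:
#   Delta2_tau(w) = ((1 + tau*w)^2 - 1 - 2*tau*w) / (tau^2/2) = 2*w^2,
# independent of tau, so d^2 h(1|2)(w) = 2*w^2 = <w, h''(1) w>.

def second_quotient(h, x, v, w, tau):
    return (h(x + tau * w) - h(x) - tau * v * w) / (tau ** 2 / 2)

h = lambda x: x * x
for w in (-1.5, 0.0, 0.5, 2.0):
    for tau in (1.0, 1e-2, 1e-4):
        assert abs(second_quotient(h, 1.0, 2.0, w, tau) - 2 * w * w) < 1e-6
```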
Next we recall the parabolic epi-differentiability of an extended real-valued
function $h\\!:\mathbb{X}\to\mathbb{\overline{R}}$, which plays a prominent
role in characterizing the expression of second subderivative $d^{2}h(x|v)$.
###### Definition 2.7
([1, Definition 13.59]) Consider a function
$h\\!:\mathbb{X}\to\mathbb{\overline{R}}$, a point $x\in{\rm dom}\,h$ and a
vector $w\in\mathbb{X}$ with $dh(x)(w)$ finite. Define the parabolic
difference quotients of $h$ at $x$ for $w$ by
$\Delta^{2}_{\tau}h(x)(w|z^{\prime}):=\frac{h(x+\tau
w+\frac{1}{2}\tau^{2}z^{\prime})-h(x)-\tau dh(x)(w)}{\tau^{2}/2}\quad{\rm
for}\ \tau>0.$
The parabolic subderivative of $h$ at $x$ for $w$ with respect to (w.r.t.) $z$
is defined as
$d^{2}h(x)(w|z):=\liminf_{\tau\downarrow 0,z^{\prime}\to
z}\Delta^{2}_{\tau}h(x)(w|z^{\prime}),$
and $h$ is said to be parabolically epi-differentiable at $x$ for $w$ if $\Delta^{2}_{\tau}h(x)(w|\cdot)$ epi-converges to $d^{2}h(x)(w|\cdot)$ as $\tau\downarrow 0$ and ${\rm dom}\,d^{2}h(x)(w|\cdot)\neq\emptyset$.
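The parabolic difference quotients of Definition 2.7 can likewise be probed numerically (again with the hypothetical choice $h(t)=|t|$): at $x=0$ for $w=1$, where $dh(0)(1)=1$, the quotients converge to $d^{2}h(0)(1|z)=z$, linear in $z$:

```python
# Illustrative sketch: parabolic difference quotients of h(t) = |t|
# at x = 0 for w = 1, where dh(0)(1) = 1 is passed in as dh_w.
def par_delta2(h, x, dh_w, w, z, tau):
    return (h(x + tau * w + 0.5 * tau**2 * z) - h(x) - tau * dh_w) / (tau**2 / 2)

# Once tau*w dominates tau^2*z/2, the argument stays positive and the
# quotient equals z, so d^2 h(0)(1|z) = z for every z.
for z in (-3.0, 0.0, 2.0):
    print(par_delta2(abs, 0.0, 1.0, 1.0, z, 1e-4))
```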
Similarly, by Definition 2.7 and [1, Proposition 7.2], the parabolic epi-
differentiability of $h$ at $x$ for $w$ can be equivalently described as
follows: ${\rm dom}\,d^{2}h(x)(w|\cdot)\neq\emptyset$, and for every
$z\in\mathbb{X}$ and every sequence $\tau_{k}\downarrow 0$ there exists a
sequence $z^{k}\to z$ such that
$\Delta^{2}_{\tau_{k}}h(x)(w|z^{k})\to d^{2}h(x)(w|z)\ \ {\rm as}\
k\to\infty.$ (7)
When $h=\delta_{S}$ for a nonempty closed set $S\subset\mathbb{X}$, for any
$x\in S$ and $w\in\mathcal{T}_{S}(x)$, $d^{2}h(x)(w|\cdot)$ reduces to the
indicator of the second-order tangent set to $S$ at $x$ for $w$. The second-
order tangent set and inner second-order tangent set to $S$ at $x\in S$ for
$w$ are respectively defined as
$\displaystyle\mathcal{T}^{2}_{S}(x,w)$
$\displaystyle:=\Big{\\{}z\in\mathbb{X}\ |\ \exists\,\tau_{k}\downarrow 0\ \
{\rm s.t.}\ \ {\rm
dist}(x+\tau_{k}w+(\tau_{k}^{2}/2)z,S)=o(\tau_{k}^{2})\Big{\\}},$
$\displaystyle\mathcal{T}^{i,2}_{S}(x,w)$
$\displaystyle:=\Big{\\{}z\in\mathbb{X}\ |\ {\rm dist}(x+\tau
w+(\tau^{2}/2)z,S)=o(\tau^{2})\quad\forall\tau\geq 0\Big{\\}}.$
One can check that $\mathcal{T}^{i,2}_{S}(x,w)=\emptyset$ if
$w\notin\mathcal{T}_{S}^{i}(x)$, and $\mathcal{T}^{2}_{S}(x,w)=\emptyset$ if
$w\notin\mathcal{T}_{S}(x)$. The set $S$ is called parabolically derivable at
$x$ for $w\in\mathcal{T}_{S}(x)$ if
$\mathcal{T}^{i,2}_{S}(x,w)=\mathcal{T}^{2}_{S}(x,w)\neq\emptyset$.
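For polyhedral sets these second-order tangent sets are easy to test numerically. The sketch below (our own example) takes $S$ to be the nonnegative quadrant, $x=(0,0)$ and $w=(1,0)$, and checks that ${\rm dist}(x+\tau w+(\tau^{2}/2)z,S)=o(\tau^{2})$ exactly when $z_{2}\geq 0$, i.e. $\mathcal{T}^{2}_{S}(x,w)=\\{z\,|\,z_{2}\geq 0\\}$; in particular $S$ is parabolically derivable there:

```python
# Illustrative sketch: second-order tangent set of the quadrant
# S = {(a,b) : a >= 0, b >= 0} at x = (0,0) for w = (1,0).
def dist_to_quadrant(p):
    return ((min(p[0], 0.0))**2 + (min(p[1], 0.0))**2) ** 0.5

def ratio(w, z, tau):
    p = (tau * w[0] + 0.5 * tau**2 * z[0], tau * w[1] + 0.5 * tau**2 * z[1])
    return dist_to_quadrant(p) / tau**2

# z = (-5, 1) is admissible (ratio -> 0), z = (0, -1) is not (ratio -> 1/2),
# consistent with T^2_S(x,w) = {z : z_2 >= 0} = T_{T_S(x)}(w).
for tau in (1e-2, 1e-3, 1e-4):
    print(ratio((1.0, 0.0), (-5.0, 1.0), tau), ratio((1.0, 0.0), (0.0, -1.0), tau))
```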
###### Definition 2.8
A mapping $g\\!:\mathbb{X}\to\mathbb{R}^{m}$ is said to be parabolically semidifferentiable at $x$ for $w\in\mathbb{X}$ if it is semidifferentiable at $x$ for $w$ and, for all $z\in\mathbb{X}$, the limit
$\lim_{\tau\downarrow 0,z^{\prime}\to z}\Delta_{\tau}^{2}g(x)(w|z^{\prime})\quad{\rm with}\quad\Delta_{\tau}^{2}g(x)(w|z^{\prime}):=\frac{g(x+\tau w+\frac{1}{2}\tau^{2}z^{\prime})-g(x)-\tau dg(x)(w)}{\tau^{2}/2}$
exists. We call this limit the parabolic semiderivative of $g$ at $x$ for $w$ w.r.t. $z$ and denote it by $g^{\prime\prime}(x;w,z)$; the mapping $g$ is parabolically semidifferentiable at $x$ if it is parabolically semidifferentiable at $x$ for all $w$.
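When $g$ is twice differentiable, a second-order Taylor expansion shows that the limit above exists with $g^{\prime\prime}_{i}(x;w,z)=\langle\nabla g_{i}(x),z\rangle+\langle w,\nabla^{2}g_{i}(x)w\rangle$ componentwise. The following sketch (with a hypothetical smooth $g$ of our own choosing) verifies this numerically:

```python
# Illustrative sketch: parabolic semiderivative of the smooth mapping
# g(x) = (x1^2, x1*x2) at x = (1,2), checked against the Taylor formula
# g''_i(x;w,z) = <grad g_i(x), z> + <w, Hess g_i(x) w>.
def g(x):
    return (x[0]**2, x[0] * x[1])

def par_quot(x, w, z, tau):
    p = tuple(x[i] + tau * w[i] + 0.5 * tau**2 * z[i] for i in range(2))
    dgw = (2 * x[0] * w[0], x[1] * w[0] + x[0] * w[1])  # dg(x)(w) for this g
    return tuple((g(p)[i] - g(x)[i] - tau * dgw[i]) / (tau**2 / 2) for i in range(2))

x, w, z = (1.0, 2.0), (1.0, -1.0), (0.5, 3.0)
# Closed-form limits: component 1: 2*x1*z1 + 2*w1^2 = 3.0;
#                     component 2: x2*z1 + x1*z2 + 2*w1*w2 = 2.0.
for tau in (1e-2, 1e-3, 1e-4):
    print(par_quot(x, w, z, tau))
```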
By comparing with Definition 2.7, when $m=1$, the parabolic semiderivative
$g^{\prime\prime}(x;w,z)$ coincides with $d^{2}g(x)(w|z)$ except that the
former is required to be finite. It is worth pointing out that the parabolic
semidifferentiability of $g$ at $x$ for $w$ is different from its Hadamard
second-order directional differentiability at $x$ in the direction $w$ (see
[2, Section 2.2.3]) because the latter does not need the semidifferentiability
of $g$ at $x$ for $w$.
To characterize the second-order tangent sets to ${\rm dom}f$, we need the following lemma, which weakens the twice differentiability of the inner mapping required in [26, Theorem 4.3] to parabolic semidifferentiability.
###### Lemma 2.2
Let $\varphi\\!:\mathbb{R}^{m}\to\overline{\mathbb{R}}$ be a proper lsc
function, and let $g\\!:\mathbb{X}\to\mathbb{R}^{m}$ be a mapping that is
parabolically semidifferentiable at $\overline{x}\in g^{-1}({\rm
dom}\varphi)$. For each $w$ with $dg(\overline{x})(w)\in\mathcal{T}_{{\rm
dom}\varphi}(g(\overline{x}))$, define
$\mathcal{S}_{w}(p):=\big{\\{}u\in\mathbb{X}\ |\
g^{\prime\prime}(\overline{x};w,u)+p\in\mathcal{T}_{{\rm
dom}\varphi}^{2}(g(\overline{x}),dg(\overline{x})(w))\big{\\}}\quad{\rm for}\
p\in\mathbb{R}^{m}.$ (8)
If the multifunction $\mathcal{H}(x):=g(x)-{\rm dom}\,\varphi$ is subregular
at $(\overline{x},0)\in{\rm gph}\mathcal{H}$ with modulus $\kappa$, then
$\mathcal{S}_{w}(p)\subset\mathcal{S}_{w}(0)+\kappa\|p\|_{2}\mathbb{B}_{\mathbb{X}}$
for all $p\in\mathbb{R}^{m}$ uniformly in $w$.
Proof: Fix any $p\in\mathbb{R}^{m}$. Pick any $u\in\mathcal{S}_{w}(p)$. Then, $g^{\prime\prime}(\overline{x};w,u)+p\in\mathcal{T}_{{\rm dom}\varphi}^{2}(g(\overline{x}),dg(\overline{x})(w))$. By the definition
of second-order tangent set, there exist $\tau_{k}\downarrow 0$ and $u^{k}\to
g^{\prime\prime}(\overline{x};w,u)+p$ such that
$g(\overline{x})+\tau_{k}dg(\overline{x})(w)+\frac{1}{2}\tau_{k}^{2}u^{k}\in{\rm
dom}\varphi\quad{\rm for\ all}\ k\in\mathbb{N}.$
For each $k\in\mathbb{N}$, let
$x^{k}\\!:=\overline{x}+\tau_{k}w+\frac{1}{2}\tau_{k}^{2}u$. The parabolic
semidifferentiability of $g$ at $\overline{x}$ implies that
$g(x^{k})=g(\overline{x})+\tau_{k}dg(\overline{x})(w)+\frac{1}{2}\tau_{k}^{2}g^{\prime\prime}(\overline{x};w,u)+o(\tau_{k}^{2}).$
From the subregularity of $\mathcal{H}$ at $(\overline{x},0)$ with modulus
$\kappa$, for each sufficiently large $k$, there exists $z^{k}\\!\in
g^{-1}({\rm dom}\varphi)$ such that $\|x^{k}\\!-\\!z^{k}\|\leq\kappa{\rm
dist}_{2}(g(x^{k}),{\rm dom}\varphi)$. Together with the last two equations,
$\displaystyle\|x^{k}\\!-\\!z^{k}\|\leq\frac{1}{2}\tau_{k}^{2}\kappa\|u^{k}-g^{\prime\prime}(\overline{x};w,u)\|_{2}+o(\tau_{k}^{2})\leq\frac{1}{2}\tau_{k}^{2}\kappa\|p\|_{2}+o(\tau_{k}^{2}),$
which implies that the sequence $d^{k}:=\frac{2(x^{k}-z^{k})}{\tau_{k}^{2}}$
is bounded. So there exists $d\in\mathbb{X}$ such that $d^{k}\to d$ (if
necessary by taking a subsequence) with $\|d\|\leq\kappa\|p\|_{2}$. On the
other hand, for all $k$ large enough,
$g^{-1}({\rm dom}\varphi)\ni
z^{k}=x^{k}-(\tau_{k}^{2}/2)d^{k}=\overline{x}+\tau_{k}w+(\tau_{k}^{2}/2)(u-d)+o(\tau_{k}^{2}),$
which along with the parabolic semidifferentiability of $g$ at $\overline{x}$
yields that
${\rm dom}\varphi\ni
g(z^{k})=g(\overline{x})+\tau_{k}dg(\overline{x})(w)+(\tau_{k}^{2}/2)g^{\prime\prime}(\overline{x};w,u\\!-\\!d)+o(\tau_{k}^{2}).$
This means that
$g^{\prime\prime}(\overline{x};w,u\\!-\\!d)\in\mathcal{T}_{{\rm
dom}\varphi}^{2}(g(\overline{x}),dg(\overline{x})(w))$ or
$u\\!-\\!d\in\mathcal{S}_{w}(0)$. Thus, ${\rm
dist}(u,\mathcal{S}_{w}(0))\leq\|d\|\leq\kappa\|p\|_{2}$. By the arbitrariness
of $p$ in $\mathbb{R}^{m}$ and $u\in\mathcal{S}_{w}(p)$, the result then
follows. $\Box$
The following proposition establishes the link between the second-order tangent set to ${\rm dom}(\varphi\circ g)$ and that to ${\rm dom}\,\varphi$ for $\varphi$ and $g$ from Lemma 2.2. It extends the conclusions of [26, Theorem 4.5], [27, Proposition 4.3 (ii)] and [35, Lemma 2.5] to the constraint system $g(x)\in{\rm dom}\,\varphi$ with a parabolically semidifferentiable rather than twice (continuously) differentiable $g$, and removes the Clarke regularity of ${\rm dom}\varphi$ required in [26, Theorem 4.5] and [27, Proposition 4.3 (ii)].
###### Proposition 2.1
Let $h=\varphi\circ g$ where
$\varphi\\!:\mathbb{R}^{m}\to\overline{\mathbb{R}}$ is a proper lsc function,
and $g\\!:\mathbb{X}\to\mathbb{R}^{m}$ is a mapping. Consider any
$\overline{x}\in{\rm dom}\,h$. Suppose that $g$ is parabolically
semidifferentiable at $\overline{x}$, and that the multifunction
$\mathcal{H}(x):=g(x)-{\rm dom}\,\varphi$ is subregular at $(\overline{x},0)$
with modulus $\kappa$. Then, for any $w\in\mathcal{T}_{{\rm
dom}\,h}(\overline{x})$, the following equivalence holds:
$z\in\mathcal{T}_{{\rm dom}\,h}^{2}(\overline{x},w)\ \Longleftrightarrow\
g^{\prime\prime}(\overline{x};w,z)\in\mathcal{T}_{{\rm
dom}\varphi}^{2}(g(\overline{x}),dg(\overline{x})(w)).$ (9)
If in addition ${\rm dom}\varphi$ is parabolically derivable at
$g(\overline{x})$ for $dg(\overline{x})(w)$, then ${\rm dom}\,h$ is
parabolically derivable at $\overline{x}$ for $w$.
Proof: Fix any $w\in\mathcal{T}_{{\rm dom}\,h}(\overline{x})$. Pick any
$z\in\mathcal{T}_{{\rm dom}\,h}^{2}(\overline{x},w)$. Then there exist
$\tau_{k}\downarrow 0$ and $z^{k}\to z$ such that
$\overline{x}+\tau_{k}w+\frac{1}{2}\tau_{k}^{2}z^{k}\in{\rm dom}\,h$ for each
$k\in\mathbb{N}$. By the parabolic semidifferentiability of $g$, for all
sufficiently large $k$, ${\rm dom}\varphi\ni
g(\overline{x}+\tau_{k}w+\frac{1}{2}\tau_{k}^{2}z^{k})=g(\overline{x})+\tau_{k}dg(\overline{x})(w)+\frac{1}{2}\tau_{k}^{2}g^{\prime\prime}(\overline{x};w,z)+o(\tau_{k}^{2})$.
This shows that $g^{\prime\prime}(\overline{x};w,z)\in\mathcal{T}_{{\rm
dom}\varphi}^{2}(g(\overline{x}),dg(\overline{x})(w))$, and the implication in
the direction $\Longrightarrow$ follows.
For the converse implication, pick any $z\in\mathbb{X}$ such that
$g^{\prime\prime}(\overline{x};w,z)\in\mathcal{T}_{{\rm
dom}\varphi}^{2}(g(\overline{x}),dg(\overline{x})(w))$. Then there exist
$\tau_{k}\downarrow 0$ and $\xi^{k}\to g^{\prime\prime}(\overline{x};w,z)$
such that
$g(\overline{x})+\tau_{k}dg(\overline{x})(w)+\frac{1}{2}\tau_{k}^{2}\xi^{k}\in{\rm
dom}\varphi$ for each $k\in\mathbb{N}$. Write
$x^{k}\\!:=\\!\overline{x}+\tau_{k}w+\frac{1}{2}\tau_{k}^{2}z$ for each
$k\in\mathbb{N}$. By the parabolic semidifferentiability of $g$ at
$\overline{x}$, we have
$g(x^{k})=g(\overline{x})+\tau_{k}dg(\overline{x})(w)+\frac{1}{2}\tau_{k}^{2}g^{\prime\prime}(\overline{x};w,z)+o(\tau_{k}^{2})$
for all sufficiently large $k$. From the subregularity of $\mathcal{H}$ at
$(\overline{x},0)$ with modulus $\kappa$, for each sufficiently large $k$,
there exists $z^{k}\in{\rm dom}\,h$ such that
$\|x^{k}\\!-\\!z^{k}\|\leq\kappa{\rm dist}_{2}(g(x^{k}),{\rm
dom}\varphi)\leq\kappa\big{\|}g(x^{k})\\!-\\![g(\overline{x})+\tau_{k}dg(\overline{x})(w)+\frac{1}{2}\tau_{k}^{2}\xi^{k}]\big{\|}_{2}=o(\tau_{k}^{2}).$
This implies that ${\rm dom}\,h\ni
z^{k}=\overline{x}+\tau_{k}w+\frac{1}{2}\tau_{k}^{2}(z+o(\tau_{k}^{2})/\tau_{k}^{2})$
for all $k$ large enough. Consequently, $z\in\mathcal{T}_{{\rm
dom}\,h}^{2}(\overline{x},w)$ and the implication in the direction
$\Longleftarrow$ holds.
Now assume that ${\rm dom}\varphi$ is parabolically derivable at $g(\overline{x})$ for $dg(\overline{x})(w)$. Then $\mathcal{T}_{{\rm dom}\,\varphi}^{2}(g(\overline{x}),dg(\overline{x})(w))\neq\emptyset$. Pick
any $u\in\mathcal{T}_{{\rm
dom}\varphi}^{2}(g(\overline{x}),dg(\overline{x})(w))$ and any
$z\in\mathbb{X}$. Then, $z\in\mathcal{S}_{w}(p)$ with
$p:=u-g^{\prime\prime}(\overline{x};w,z)$, where $\mathcal{S}_{w}$ is the
multifunction defined as in Lemma 2.2. By invoking Lemma 2.2, there exists
$\widetilde{z}\in\mathcal{S}_{w}(0)$ such that
$\|z-\widetilde{z}\|\leq\kappa\|p\|_{2}$. From
$\widetilde{z}\in\mathcal{S}_{w}(0)$ and the equivalence in (9),
$\widetilde{z}\in\mathcal{T}_{{\rm dom}\,h}^{2}(\overline{x},w)$.
Consequently, $\mathcal{T}_{{\rm dom}\,h}^{2}(\overline{x},w)\neq\emptyset$.
Pick any $z\in\mathcal{T}_{{\rm dom}\,h}^{2}(\overline{x},w)$. By the
subregularity of $\mathcal{H}$ at $(\overline{x},0)$, for any sufficiently
small $\tau>0$, we have ${\rm dist}(\overline{x}+\tau
w+\frac{1}{2}\tau^{2}z,{\rm dom}\,h)\leq\kappa{\rm
dist}_{2}(g(\overline{x}+\tau w+\frac{1}{2}\tau^{2}z),{\rm dom}\varphi),$ or
equivalently
${\rm dist}\Big{(}z,\frac{{\rm dom}\,h-\overline{x}-\tau
w}{\frac{1}{2}\tau^{2}}\Big{)}\leq\kappa{\rm
dist}_{2}\Big{(}\Delta_{\tau}^{2}g(\overline{x})(w|z),\frac{{\rm
dom}\varphi-g(\overline{x})-\tau
dg(\overline{x})(w)}{\frac{1}{2}\tau^{2}}\Big{)}.$ (10)
Clearly, $\Delta_{\tau}^{2}g(\overline{x})(w|z)\to
g^{\prime\prime}(\overline{x};w,z):=\zeta$ as $\tau\downarrow 0$. Furthermore,
since ${\rm dom}\varphi$ is parabolically derivable at $g(\overline{x})$ for
$dg(\overline{x})(w)$, by invoking [1, Corollary 4.7], when $\tau\downarrow
0$,
${\rm dist}_{2}\Big{(}\zeta,\frac{{\rm dom}\varphi-g(\overline{x})-\tau
dg(\overline{x})(w)}{\frac{1}{2}\tau^{2}}\Big{)}\ \to\ {\rm
dist}_{2}\Big{(}\zeta,\mathcal{T}_{{\rm
dom}\varphi}^{i,2}(g(\overline{x}),dg(\overline{x})(w))\Big{)}=0,$
where the equality is due to the equivalence in (9). Thus, the right-hand side of (10) tends to zero as $\tau\downarrow 0$, which implies that the left-hand side must approach $0$ as $\tau\downarrow 0$, so $z\in\mathcal{T}_{{\rm dom}\,h}^{i,2}(\overline{x},w)$. By the arbitrariness of
$z\in\mathcal{T}_{{\rm dom}\,h}^{2}(\overline{x},w)$, we have
$\emptyset\neq\mathcal{T}_{{\rm dom}\,h}^{2}(\overline{x},w)=\mathcal{T}_{{\rm
dom}\,h}^{i,2}(\overline{x},w)$. This shows that ${\rm dom}h$ is parabolically
derivable at $\overline{x}$ for $w$. $\Box$
###### Corollary 2.1
Let $h$ be the composition in Proposition 2.1. Consider any
$\overline{x}\in{\rm dom}\,h$. Suppose that $g$ is parabolically
semidifferentiable at $\overline{x}$, and that the multifunction
$\mathcal{G}(x,\alpha):=(g(x),\alpha)-{\rm epi}\,\varphi$ is subregular at
$((\overline{x},h(\overline{x})),(0,0))$. Then, for any $w\in\mathbb{X}$ with
$dh(\overline{x})(w)$ finite,
$d^{2}h(\overline{x})(w|z)=d^{2}\varphi(g(\overline{x}))\big{(}dg(\overline{x})(w)\
|\ g^{\prime\prime}(\overline{x};w,z)\big{)}\quad\ \forall z\in\mathbb{X}.$
(11)
If in addition $\varphi$ is parabolically epi-differentiable at
$g(\overline{x})$ for $dg(\overline{x})(w)$, then $h$ is parabolically epi-
differentiable at $\overline{x}$ for $w$.
Proof: Fix any $w\in\mathbb{X}$ with $dh(\overline{x})(w)$ finite. Let
$\alpha=h(\overline{x})$ and $\beta=dh(\overline{x})(w)$. Note that ${\rm
epi}\,h=G^{-1}({\rm epi}\,\varphi)$ with $G(x,\omega)\\!=(g(x),\omega)$ for
$(x,\omega)\in\mathbb{X}\times\mathbb{R}$. By following the proof of
equivalence (9),
$\displaystyle(z^{\prime},t)\in\mathcal{T}_{{\rm
epi}\,h}^{2}((\overline{x},\alpha),(w,\beta))$
$\displaystyle\Longleftrightarrow(g^{\prime\prime}(\overline{x};w,z^{\prime}),t)\in\mathcal{T}_{{\rm
epi}\varphi}^{2}\big{(}(g(\overline{x}),\alpha),(dg(\overline{x})(w),\beta)\big{)}$
$\displaystyle\Longleftrightarrow(g^{\prime\prime}(\overline{x};w,z^{\prime}),t)\in{\rm
epi}\,d^{2}\varphi(g(\overline{x}))(dg(\overline{x})(w)\,|\,\cdot).$ (12)
Pick any $z\in\mathbb{X}$. Let $\gamma(z):=d^{2}h(\overline{x})(w|z)$ and
$\mu(z):=d^{2}\varphi(g(\overline{x}))(dg(\overline{x})(w)\,|\,g^{\prime\prime}(\overline{x};w,z))$.
Obviously, $(z,\gamma(z))\in{\rm
epi}\,d^{2}h(\overline{x})(w\,|\,\cdot)=\mathcal{T}_{{\rm
epi}\,h}^{2}((\overline{x},\alpha),(w,\beta))$ where the equality is due to
[1, Example 13.62]. Together with (12), $(g^{\prime\prime}(\overline{x};w,z),\gamma(z))\in{\rm epi}\,d^{2}\varphi(g(\overline{x}))(dg(\overline{x})(w)\,|\,\cdot)$, which implies that $\gamma(z)\geq\mu(z)$. Note that $(g^{\prime\prime}(\overline{x};w,z),\mu(z))\in{\rm epi}\,d^{2}\varphi(g(\overline{x}))(dg(\overline{x})(w)\,|\,\cdot)$. From (12) we have $(z,\mu(z))\in\mathcal{T}_{{\rm epi}\,h}^{2}((\overline{x},\alpha),(w,\beta))={\rm epi}\,d^{2}h(\overline{x})(w\,|\,\cdot)$, which implies that $\gamma(z)\leq\mu(z)$. The two inequalities show that equality (11) holds. The second part follows by the same arguments as those for the second part of Proposition 2.1. $\Box$
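Equality (11) can be checked numerically on simple data. In the sketch below (our own hypothetical example) we take $\varphi=|\cdot|$ and $g(x)=x^{2}-1$ on $\mathbb{R}$, so $h(x)=|x^{2}-1|$; at $\overline{x}=1$ we have $g(\overline{x})=0$, $dg(1)(w)=2w$ and $g^{\prime\prime}(1;w,z)=2z+2w^{2}$, and since $\varphi$ is finite-valued the subregularity assumption is immediate. For $w=1$, formula (11) then predicts $d^{2}h(1)(1|z)=d^{2}\varphi(0)(2\,|\,2z+2)=2z+2$:

```python
# Illustrative sketch of the chain rule (11) with hypothetical data:
# phi(u) = |u|, g(x) = x^2 - 1, h = phi o g, at xbar = 1 where g(xbar) = 0.
def h(x):
    return abs(x * x - 1.0)

def par_delta2(w, z, tau):
    dh_w = abs(2.0 * w)  # dh(1)(w) = |2w| for this h
    return (h(1.0 + tau * w + 0.5 * tau**2 * z) - h(1.0) - tau * dh_w) / (tau**2 / 2)

# The quotients approach 2z + 2, as predicted by (11) with w = 1.
for z in (-3.0, 0.0, 4.0):
    print(par_delta2(1.0, z, 1e-4))
```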
For any $w\in\mathbb{X}$ with $dF(\overline{x})(w)\in\mathcal{T}_{{\rm
dom}\vartheta}(F(\overline{x}))$, by invoking [2, Proposition 3.37], it holds
that
$\displaystyle\bigcup_{i=1}^{s}\mathcal{T}_{C_{i}}^{i,2}(F(\overline{x}),dF(\overline{x})(w))$
$\displaystyle\subset\mathcal{T}_{{\rm
dom}\vartheta}^{i,2}(F(\overline{x}),dF(\overline{x})(w))\subset\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),dF(\overline{x})(w))$
$\displaystyle=\bigcup_{i=1}^{s}\mathcal{T}_{C_{i}}^{2}(F(\overline{x}),dF(\overline{x})(w))=\bigcup_{i=1}^{s}\mathcal{T}_{C_{i}}^{i,2}(F(\overline{x}),dF(\overline{x})(w)),$
which implies that $\mathcal{T}_{{\rm
dom}\vartheta}^{i,2}(F(\overline{x}),dF(\overline{x})(w))=\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),dF(\overline{x})(w))$. From the nonemptiness of
${\rm dom}\vartheta$ and the convex polyhedrality of each $C_{i}$, there is an
active component $C_{i}$ such that
$0\in\mathcal{T}_{C_{i}}^{i,2}(F(\overline{x}),dF(\overline{x})(w))$ by [2,
Page 168]. So, $0\in\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),dF(\overline{x})(w))$. Thus, ${\rm dom}\vartheta$ is parabolically derivable at $F(\overline{x})$ for $dF(\overline{x})(w)$. The parabolic derivability of ${\rm dom}\vartheta$ was also established in [35, Lemma 2.4]. Now by Proposition 2.1 we obtain the following corollary.
###### Corollary 2.2
Fix any $\overline{x}\in{\rm dom}f$. If the MSQC holds for constraint system
$F(x)\in{\rm dom}\vartheta$ at $\overline{x}$, then ${\rm dom}f$ is
parabolically derivable at $\overline{x}$ for any $w\in\\!\mathcal{T}_{{\rm
dom}f}(\overline{x})$, i.e., for any $w\in\mathcal{T}_{{\rm
dom}f}(\overline{x})$,
$\emptyset\neq\mathcal{T}^{2}_{{\rm
dom}f}(\overline{x},w)\\!=\\!\big{\\{}z\in\mathbb{X}\ |\
F^{\prime\prime}(\overline{x};w,z)\in\mathcal{T}^{2}_{{\rm
dom}\vartheta}\big{(}F(\overline{x}),dF(\overline{x})(w)\big{)}\big{\\}}\\!=\\!\mathcal{T}^{i,2}_{{\rm
dom}f}(\overline{x},w).$
Parabolic regularity of an extended real-valued function, as demonstrated in [26, 27], is a crucial property for relating its second subderivative to its parabolic subderivative. To close this section, we recall this important property.
###### Definition 2.9
A function $h\\!:\mathbb{X}\to\overline{\mathbb{R}}$ is parabolically regular
at a point $\overline{x}\in{\rm dom}\,h$ for $\overline{v}$ if for every $w$
having $dh(\overline{x})(w)=\langle\overline{v},w\rangle$,
$\inf\limits_{z\in\mathbb{X}}\big{\\{}d^{2}h(\overline{x})(w|z)-\langle\overline{v},z\rangle\big{\\}}=d^{2}h(\overline{x}|\overline{v})(w),$
or in other words, if for any $w\in\\{w\in\mathbb{X}\,|\,dh(\overline{x})(w)=\langle\overline{v},w\rangle\\}\cap{\rm dom}\,d^{2}h(\overline{x}|\overline{v})$, among the sequences $\tau_{k}\downarrow 0$ and $w^{k}\to w$ with $\Delta_{\tau_{k}}^{2}h(\overline{x}|\overline{v})(w^{k})\\!\to d^{2}h(\overline{x}|\overline{v})(w)$, there exist ones with the additional property ${\displaystyle\limsup_{k\to\infty}}\frac{\|w^{k}-w\|}{\tau_{k}}<\infty$.
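For instance (an illustrative computation of our own), $h=|\cdot|$ is parabolically regular at $\overline{x}=0$ for $\overline{v}=1$: for $w=1$ one has $dh(0)(w)=\langle\overline{v},w\rangle$ and $d^{2}h(0)(w|z)=z$, so $d^{2}h(0)(w|z)-\langle\overline{v},z\rangle=0$ for every $z$, whose infimum over $z$ equals $d^{2}h(0|1)(w)=0$. The sketch below checks the identity numerically:

```python
# Illustrative check of parabolic regularity for h = |.| at xbar = 0, vbar = 1.
# For w = 1 the parabolic quotient converges to z, so the quantity inside the
# infimum of Definition 2.9, d^2 h(0)(w|z) - vbar*z, vanishes for every z.
def par_delta2(w, z, tau):
    return (abs(tau * w + 0.5 * tau**2 * z) - 0.0 - tau * abs(w)) / (tau**2 / 2)

for z in (-4.0, 0.0, 7.0):
    print(par_delta2(1.0, z, 1e-4) - 1.0 * z)  # close to 0 for each z
```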
## 3 Second-order variational properties of PWTD functions
By Definition 2.1, a function $\psi\\!:\mathbb{R}^{m}\to\overline{\mathbb{R}}$
is PWTD provided that $\emptyset\neq{\rm
dom}\psi:=\bigcup_{i=1}^{s}\Omega_{i}$ for polyhedral sets
$\Omega_{1},\ldots,\Omega_{s}$, and for each $i\in[s]$, there is a function
$\psi_{i}$, which is twice differentiable on an open superset of $\Omega_{i}$,
such that $\psi=\psi_{i}$ on $\Omega_{i}$. Unless otherwise stated, the PWTD
function $\psi$ appearing in this section always has such a form, and for any
given $y\in{\rm dom}\,\psi$ and $w\in\mathbb{R}^{m}$, write
$J_{y}:=\big{\\{}i\in[s]\,|\,y\in\Omega_{i}\big{\\}}$ and $J_{y,w}\\!:=\\{j\in
J_{y}\,|\,w\in\mathcal{T}_{\Omega_{j}}(y)\\}$. Before characterizing second-order variational properties of PWTD functions, we take a closer look at their subderivatives. The following lemma, which extends [1, Proposition 10.21] to PWTD functions, characterizes them. Since the proof is similar to that of [1, Proposition 10.21], we omit it.
###### Lemma 3.1
For a PWTD function $\psi\\!:\mathbb{R}^{m}\to\overline{\mathbb{R}}$, the
following assertions hold.
* (i)
${\rm dom}\,\psi$ is closed, $\psi$ is continuous relative to ${\rm
dom}\,\psi$ and hence is lsc on $\mathbb{R}^{m}$;
* (ii)
for each $y\in{\rm dom}\,\psi$, ${\rm dom}\,d\psi(y)=\mathcal{T}_{{\rm
dom}\psi}(y)=\ {\textstyle\\!\bigcup_{j\in
J_{y}}}\mathcal{T}_{\Omega_{j}}(y)$;
* (iii)
for any $y\in{\rm dom}\,\psi$ and any $w\in\mathbb{R}^{m}$,
$d\psi(y)(w)=\lim_{\tau\downarrow
0}\Delta_{\tau}\psi(y)(w)=\left\\{\begin{array}[]{cl}\langle\nabla\psi_{k}(y),w\rangle\
{\rm for\ any}\ k\in J_{y,w}&{\rm if}\ J_{y,w}\neq\emptyset,\\\ \infty&{\rm
if}\ J_{y,w}=\emptyset.\end{array}\right.$
Consequently, $d\psi(y)$ for $y\in{\rm dom}\,\psi$ is a proper piecewise
linear function, and $\psi$ is a properly epi-differentiable function.
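Lemma 3.1 (iii) can be tested on a small hypothetical example: take $\psi(t)=t^{2}$ on $\Omega_{1}=[0,1]$ and $\psi(t)=2t-1$ on $\Omega_{2}=[1,2]$, a PWTD function with ${\rm dom}\,\psi=[0,2]$. At the boundary point $y=0$ we get $d\psi(0)(w)=\langle\nabla\psi_{1}(0),w\rangle=0$ for $w\geq 0$ and $d\psi(0)(w)=\infty$ for $w<0$, matching $\mathcal{T}_{{\rm dom}\psi}(0)=[0,\infty)$:

```python
# Illustrative PWTD function (our own choice): psi = t^2 on [0,1],
# psi = 2t - 1 on [1,2], psi = +inf outside dom psi = [0,2].
def psi(t):
    if 0.0 <= t <= 1.0:
        return t * t
    if 1.0 <= t <= 2.0:
        return 2.0 * t - 1.0
    return float("inf")

def delta1(y, w, tau):
    return (psi(y + tau * w) - psi(y)) / tau

# At y = 0: quotient tau*w^2 -> 0 for w > 0; +inf for w < 0.
for tau in (1e-1, 1e-2, 1e-3):
    print(delta1(0.0, 1.0, tau), delta1(0.0, -1.0, tau))
```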
Now we characterize the second subderivatives of PWTD functions and show that a PWTD function is twice epi-differentiable at any point of its domain where the regular subdifferential coincides with the basic one, which extends [1, Proposition 13.9] for PWLQ convex functions to PWTD functions.
###### Proposition 3.1
Let $\psi\\!:\mathbb{R}^{m}\to\overline{\mathbb{R}}$ be a PWTD function. Then,
the following assertions hold.
* (i)
Consider any $y\in{\rm dom}\psi$. For any $w\in\mathbb{R}^{m}$, it holds that
$d^{2}\psi(y)(w)=\lim_{\tau\downarrow
0}\Delta_{\tau}^{2}\psi(y)(w)=\left\\{\begin{array}[]{cl}\\!\langle
w,\nabla^{2}\psi_{k}(y)w\rangle\ {\rm for\ each}\ k\in J_{y,w}&{\rm if}\
J_{y,w}\neq\emptyset,\\\ \infty&{\rm if}\
J_{y,w}=\emptyset,\end{array}\right.$ (13)
and consequently $d^{2}\psi(y)$ is a PWLQ function.
* (ii)
Consider any $y\in{\rm dom}\psi$ with
$\widehat{\partial}\psi(y)=\partial\psi(y)$ and any $v\in\partial\psi(y)$. For
each $w\in\mathbb{R}^{m}$,
$\displaystyle d^{2}\psi(y|v)(w)$
$\displaystyle=\left\\{\begin{array}[]{cl}\\!\langle
w,\nabla^{2}\psi_{k}(y)w\rangle\ {\rm for\ any}\ k\in J_{y,w}&{\rm if}\
w\in\mathcal{C}_{\psi}(y,v),\\\ \infty&{\rm if}\
w\notin\mathcal{C}_{\psi}(y,v)\end{array}\right.$ (16)
$\displaystyle=d^{2}\psi(y)(w)+\delta_{\mathcal{C}_{\psi}(y,v)}(w)=\lim_{\tau\downarrow
0}\Delta_{\tau}^{2}\psi(y|v)(w),$ (17)
so $d^{2}\psi(y|v)$ is a proper PWLQ function and $\psi$ is properly twice
epi-differentiable at $y$ for $v$.
Proof: (i) Fix any $w\in\mathbb{R}^{m}$. It suffices to establish the
equalities in (13). We first consider the case that $J_{y,w}\neq\emptyset$.
Pick any $k\in J_{y,w}$. Clearly, $w\in\mathcal{T}_{\Omega_{k}}(y)$. By
Definition 2.6 and Lemma 3.1 (ii), there exist $\tau_{\nu}\downarrow 0$ and
$w^{\nu}\to w$ with $w^{\nu}\in\tau_{\nu}^{-1}[\,\bigcup_{i\in
J_{y}}\Omega_{i}\\!-\\!y]$ for each $\nu\in\mathbb{N}$ such that
$d^{2}\psi(y)(w)=\lim_{\nu\to\infty}\frac{\psi(y+\tau_{\\!\nu}w^{\nu})-\psi(y)-\tau_{\\!\nu}d\psi(y)(w^{\nu})}{\tau_{\\!\nu}^{2}/2}.$
Obviously, $y+\tau_{\\!\nu}w^{\nu}\in{\rm dom}\psi$ for each
$\nu\in\mathbb{N}$ and $w^{\nu}\in{\rm dom}\,d\psi(y)$ for sufficiently large
$\nu\in\mathbb{N}$. Then, there exist an index $\overline{j}\in J_{y}$ and an
infinite index set $N\subset\mathbb{N}$ such that for all $\nu\in N$,
$y+\tau_{\nu}w^{\nu}\in\Omega_{\overline{j}}$ and
$w^{\nu}\in\mathcal{T}_{\Omega_{\overline{j}}}(y)$, so that
$\overline{j}\in J_{y,w}\ \ {\rm and}\ \
d^{2}\psi(y)(w)=\lim_{\nu\xrightarrow[N]{}\infty}\frac{\psi_{\overline{j}}(y+\tau_{\\!\nu}w^{\nu})-\psi_{\overline{j}}(y)-\tau_{\\!\nu}\langle\nabla\psi_{\overline{j}}(y),w^{\nu}\rangle}{\tau_{\\!\nu}^{2}/2}=\langle
w,\nabla^{2}\psi_{\overline{j}}(y)w\rangle.$ (18)
On the other hand, by the polyhedrality of each $\Omega_{i}$ for $i\in[s]$ and
[1, Exercise 6.47], for sufficiently small $\tau>0$, $y+\tau w\in\bigcap_{i\in
J_{y,w}}\Omega_{i}$. Consequently, for each $i\in J_{y,w}$, it holds that
$\displaystyle\langle w,\nabla^{2}\psi_{i}(y)w\rangle$
$\displaystyle=\lim_{\tau\downarrow 0}\frac{\psi_{i}(y+\tau
w)-\psi_{i}(y)-\tau\langle\nabla\psi_{i}(y),w\rangle}{\tau^{2}/2}$
$\displaystyle=\lim_{\tau\downarrow 0}\frac{\psi(y+\tau w)-\psi(y)-\tau
d\psi(y)(w)}{\tau^{2}/2}.$ (19)
Recall that $k\in J_{y,w}$. Together with the above equations (18) and (19), it holds that
$d^{2}\psi(y)(w)=\langle
w,\nabla^{2}\psi_{\overline{j}}(y)w\rangle=\lim_{\tau\downarrow
0}\Delta_{\tau}^{2}\psi(y)(w)=\langle w,\nabla^{2}\psi_{k}(y)w\rangle.$
By the arbitrariness of $k\in J_{y,w}$, the conclusion holds for the case
$J_{y,w}\neq\emptyset$. When $J_{y,w}=\emptyset$, as $y\in{\rm dom}\psi$, we
deduce from Lemma 3.1 (ii) that $w\notin\mathcal{T}_{{\rm dom}\psi}(y)$.
Consequently, for any sufficiently small $\tau>0$ and any $w^{\prime}$
sufficiently close to $w$, $y+\tau w^{\prime}\notin{\rm dom}\,\psi$, which
implies that $d^{2}\psi(y)(w)=\infty=\lim_{\tau\downarrow
0}\Delta_{\tau}^{2}\psi(y)(w)$. Thus, $d^{2}\psi(y)$ has the expression as
stated in (13).
(ii) Fix any $w\in\mathbb{R}^{m}$. We first consider the case $w\in\mathcal{T}_{{\rm dom}\,\psi}(y)$. By Definition 2.6 and Lemma 3.1 (ii),
there exist $\tau_{\nu}\downarrow 0$ and $w^{\nu}\to w$ with
$w^{\nu}\in\tau_{\nu}^{-1}[\,\bigcup_{j\in J_{y}}\Omega_{j}-y]$ for each
$\nu\in\mathbb{N}$ such that
$d^{2}\psi(y|v)(w)=\lim_{\nu\to\infty}\frac{\psi(y+\tau_{\nu}w^{\nu})-\psi(y)-\tau_{\\!\nu}\langle
v,w^{\nu}\rangle}{\tau_{\nu}^{2}/2}.$
Obviously, for each $\nu\in\mathbb{N}$, $y+\tau_{\\!\nu}w^{\nu}\in{\rm
dom}\psi$. Then, there exist an index $\widetilde{j}\in J_{y}$ and an infinite
index set $N\subset\mathbb{N}$ such that for all $\nu\in N$,
$y+\tau_{\nu}w^{\nu}\in\Omega_{\widetilde{j}}$ and
$w^{\nu}\in\mathcal{T}_{\Omega_{\widetilde{j}}}(y)$, and consequently,
$\displaystyle\widetilde{j}\in\\!J_{y,w}\ \ {\rm and}\ \ d^{2}\psi(y|v)(w)$
$\displaystyle=\lim_{\nu\xrightarrow[N]{}\infty}\frac{\psi_{\widetilde{j}}(y+\tau_{\\!\nu}w^{\nu})-\psi_{\widetilde{j}}(y)-\tau_{\\!\nu}\langle
v,w^{\nu}\rangle}{\tau_{\nu}^{2}/2}$
$\displaystyle=\lim_{\nu\xrightarrow[N]{}\infty}\Big{[}\\!\langle
w^{\nu},\nabla^{2}\psi_{\widetilde{j}}(y)w^{\nu}\rangle+\frac{2(d\psi(y)(w^{\nu})-\langle
v,w^{\nu}\rangle)}{\tau_{\\!\nu}}+\frac{o(\tau_{\\!\nu}^{2})}{\tau_{\\!\nu}^{2}}\Big{]}$
$\displaystyle\geq\left\\{\begin{array}[]{cl}\langle
w,\nabla^{2}\psi_{\widetilde{j}}(y)w\rangle&{\rm if}\
w\in\mathcal{C}_{\psi}(y,v),\\\ \infty&{\rm if}\ w\in\mathcal{T}_{{\rm
dom}\psi}(y)\backslash\mathcal{C}_{\psi}(y,v),\end{array}\right.$ (22)
where the inequality for the case $w\in\mathcal{C}_{\psi}(y,v)$ is due to $v\in\widehat{\partial}\psi(y)$ and the equivalence in (6), and the inequality for the case $w\in\mathcal{T}_{{\rm dom}\psi}(y)\backslash\mathcal{C}_{\psi}(y,v)$ also uses the continuity of $d\psi(y)$ relative to its domain by Lemma 3.1 (iii). Note that $\widetilde{j}\in\\!J_{y,w}$. Combining (13) and (22) yields that
$d^{2}\psi(y|v)(w)=\left\\{\begin{array}[]{cl}\\!\langle
w,\nabla^{2}\psi_{k}(y)w\rangle\ {\rm for\ each}\ k\in J_{y,w}&{\rm if}\
w\in\mathcal{C}_{\psi}(y,v),\\\ \infty&{\rm if}\ w\in\mathcal{T}_{{\rm
dom}\psi}(y)\backslash\mathcal{C}_{\psi}(y,v).\end{array}\right.$
When $w\notin\mathcal{T}_{{\rm dom}\,\psi}(y)$, since $\mathcal{T}_{{\rm
dom}\psi}(y)=\big{\\{}w\in\mathbb{R}^{m}\,|\,\liminf_{\tau\downarrow 0}{\rm
dist_{2}}\big{(}w,\frac{{\rm dom}\psi-y}{\tau}\big{)}=0\big{\\}}$, there
exists $\varepsilon>0$ such that for any $\tau\\!\in(0,\varepsilon)$ and any
$w^{\prime}$ with $\|w^{\prime}\\!-w\|_{2}\leq\varepsilon$,
$w^{\prime}\notin\tau^{-1}[{\rm dom}\,\psi-y]$. By Definition 2.6,
$d^{2}\psi(y|v)(w)=\infty=d^{2}\psi(y)(w)$. Together with the last equation,
we get the desired result. $\Box$
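Formula (13) is easy to verify on the hypothetical PWTD function $\psi(t)=t^{2}$ on $\Omega_{1}=[0,1]$, $\psi(t)=2t-1$ on $\Omega_{2}=[1,2]$. At $y=1$ both pieces share the gradient $\nabla\psi_{1}(1)=\nabla\psi_{2}(1)=2$, so $d\psi(1)(w)=2w$, while $J_{1,w}=\\{1\\}$ for $w<0$ and $J_{1,w}=\\{2\\}$ for $w>0$; formula (13) then gives $d^{2}\psi(1)(w)=2w^{2}$ for $w<0$ and $0$ for $w>0$:

```python
# Illustrative check of (13) for a hypothetical PWTD function at y = 1,
# where d psi(1)(w) = 2w is used in the second difference quotient.
def psi(t):
    if 0.0 <= t <= 1.0:
        return t * t
    if 1.0 <= t <= 2.0:
        return 2.0 * t - 1.0
    return float("inf")

def delta2(y, w, tau):
    return (psi(y + tau * w) - psi(y) - tau * 2.0 * w) / (tau**2 / 2)

# Quotients tend to 2w^2 for w < 0 (quadratic piece) and 0 for w > 0
# (affine piece), matching formula (13).
for tau in (1e-2, 1e-3):
    print(delta2(1.0, -0.5, tau), delta2(1.0, 0.5, tau))
```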
In Proposition 3.1 (ii), the restriction $\widehat{\partial}\psi(y)=\partial\psi(y)$ on $y\in{\rm dom}\psi$ is crucial to the properness of $d^{2}\psi(y|v)$. For example, consider the ramp function
$r(t):=\max\\{0,\min\\{1,t\\}\\}$ for $t\in\mathbb{R}$. At $\overline{t}=1$,
we have $\widehat{\partial}r(\overline{t})=\emptyset,\,\partial
r(\overline{t})=\\{0,1\\}$, and $dr(\overline{t})(w)=w$ if $w\leq 0$,
otherwise $dr(\overline{t})(w)=0$. For $v=1\in\partial r(\overline{t})$, since
$vw>dr(\overline{t})(w)=0$ for all $w>0$, $d^{2}r(\overline{t}|v)(w)=-\infty$
by [1, Proposition 13.5], so $d^{2}r(\overline{t}|v)$ is not proper.
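The divergence in this counterexample is visible numerically (an illustrative check of our own): for $w>0$ one has $r(1+\tau w)=1$, so $\Delta^{2}_{\tau}r(1|1)(w)=-2w/\tau\to-\infty$, while for $w<0$ the quotient is identically $0$:

```python
# Illustrative check of the ramp counterexample: quotients of
# r(t) = max(0, min(1, t)) at tbar = 1 for v = 1 in the basic subdifferential.
def r(t):
    return max(0.0, min(1.0, t))

def delta2(w, tau):
    return (r(1.0 + tau * w) - r(1.0) - tau * 1.0 * w) / (tau**2 / 2)

# For w = 1 the quotient equals -2/tau, diverging to -inf as tau -> 0,
# so d^2 r(1|1) is not proper; for w = -1 it is 0 for every tau.
for tau in (1e-1, 1e-2, 1e-3):
    print(delta2(1.0, tau), delta2(-1.0, tau))
```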
Next we characterize the parabolic subderivatives of PWTD functions and show that, under the same condition as in Proposition 3.1, a PWTD function $\psi$ is parabolically epi-differentiable at any $y\in{\rm dom}\psi$ for each $w\in{\rm dom}\,d\psi(y)$.
###### Proposition 3.2
Let $\psi\\!:\mathbb{R}^{m}\to\overline{\mathbb{R}}$ be a PWTD function. Fix
any $y\in{\rm dom}\,\psi$ and $w\in{\rm dom}\,d\psi(y)$.
* (i)
$\mathcal{T}^{2}_{{\rm dom}\,\psi}(y,w)=\bigcup_{k\in
J_{y,w}}\\!\mathcal{T}^{2}_{\Omega_{k}}(y,w)=\mathcal{T}_{\mathcal{T}_{{\rm
dom}\psi}(y)}(w)$.
* (ii)
$d^{2}\psi(y)(w|z)<\infty$ if and only if $z\in\mathcal{T}^{2}_{{\rm
dom}\psi}(y,w)$.
* (iii)
For any $z\in\mathbb{R}^{m}$, it holds that
$-\infty<d^{2}\psi(y)(w|z)=\lim_{\tau\downarrow 0}\frac{\psi\big{(}y+\tau
w+\frac{1}{2}\tau^{2}z\big{)}-\psi(y)-\tau d\psi(y)(w)}{\tau^{2}/2},$ (23)
and hence $\psi$ is parabolically epi-differentiable at $y$ for $w$.
* (iv)
If $\psi$ is regular at $y$, i.e. its epigraph ${\rm epi}\,\psi$ is Clarke
regular at $(y,\psi(y))$, then for any $z\in\mathbb{R}^{m}$,
$d^{2}\psi(y)(w|z)=d^{2}\psi(y)(w)+\sup_{\xi\in\mathcal{A}_{\psi}(y,w)}\langle\xi,z\rangle$
(24)
where $\mathcal{A}_{\psi}(y,w)\\!:=\\!\big{\\{}\xi\in\partial\psi(y)\ |\
\langle\xi,w\rangle=d\psi(y)(w)\\}$.
Proof: (i) By Lemma 3.1 (ii), $w\in\mathcal{T}_{{\rm dom}\psi}(y)$. By [2,
Proposition 3.37], $\mathcal{T}^{2}_{{\rm
dom}\,\psi}(y,w)=\bigcup_{i=1}^{s}\\!\mathcal{T}^{2}_{\Omega_{i}}(y,w)$. For
each $\Omega_{i}$ with $i\in[s]$, by invoking [1, Proposition 13.12], we have
$\mathcal{T}^{2}_{\Omega_{i}}(y,w)=\mathcal{T}_{\mathcal{T}_{\Omega_{i}}(y)}(w)$.
Thus, for each $k\in[s]$, $z\in\mathcal{T}^{2}_{\Omega_{k}}(y,w)$ implies that
$y\in\Omega_{k}$ and $w\in\mathcal{T}_{\Omega_{k}}(y)$, i.e., $k\in J_{y,w}$.
Consequently,
$\bigcup_{i=1}^{s}\mathcal{T}^{2}_{\Omega_{i}}(y,w)=\bigcup_{k\in
J_{y,w}}\\!\mathcal{T}^{2}_{\Omega_{k}}(y,w)$. Together with [2, Proposition
3.37], it follows that
$\mathcal{T}^{2}_{{\rm
dom}\psi}(y,w)=\bigcup_{i=1}^{s}\mathcal{T}^{2}_{\Omega_{i}}(y,w)=\bigcup_{k\in
J_{y,w}}\\!\mathcal{T}^{2}_{\Omega_{k}}(y,w)=\mathcal{T}_{\bigcup_{k\in
J_{y,w}}\\!\mathcal{T}_{\Omega_{k}}(y)}(w)=\mathcal{T}_{\mathcal{T}_{{\rm
dom}\,\psi}(y)}(w).$
(ii) Pick any $z\in\mathbb{R}^{m}$ with $d^{2}\psi(y)(w|z)<\infty$. By
Definition 2.7, there exist sequences $\tau_{\nu}\downarrow 0$ and $z^{\nu}\to
z$ such that
$\infty>d^{2}\psi(y)(w|z)=\lim_{\nu\to\infty}\frac{\psi(y+\tau_{\\!\nu}w+\frac{1}{2}\tau_{\\!\nu}^{2}z^{\nu})-\psi(y)-\tau_{\\!\nu}d\psi(y)(w)}{\tau_{\nu}^{2}/2},$
which implies that
$y+\tau_{\\!\nu}w+\frac{1}{2}\tau_{\\!\nu}^{2}z^{\nu}\in{\rm dom}\psi$ for all
sufficiently large $\nu$. Then, there exist an index $\overline{i}\in[s]$ and
an infinite index set $N\subset\mathbb{N}$ such that
$\\{y+\tau_{\\!\nu}w+\frac{1}{2}\tau_{\\!\nu}^{2}z^{\nu}\\}_{\nu\in
N}\subset\Omega_{\overline{i}}$. Obviously,
$w\in\\!\mathcal{T}_{\Omega_{\overline{i}}}(y)$ and
$z\in\\!\mathcal{T}^{2}_{\Omega_{\overline{i}}}(y,w)\subset\mathcal{T}^{2}_{{\rm
dom}\psi}(y,w)$. Moreover, from the last equation,
$\displaystyle d^{2}\psi(y)(w|z)$
$\displaystyle=\lim_{\nu\xrightarrow[N]{}\infty}\frac{\psi_{\overline{i}}(y+\tau_{\\!\nu}w+\frac{1}{2}\tau_{\\!\nu}^{2}z^{\nu})-\psi_{\overline{i}}(y)-\tau_{\\!\nu}\langle\nabla\psi_{\overline{i}}(y),w\rangle}{\tau_{\nu}^{2}/2}$
$\displaystyle=\lim_{\nu\xrightarrow[N]{}\infty}\frac{\tau_{\\!\nu}^{2}\langle\nabla\psi_{\overline{i}}(y),z^{\nu}\rangle+\langle\tau_{\nu}w+\frac{1}{2}\tau_{\nu}^{2}z^{\nu},\nabla^{2}\psi_{\overline{i}}(y)(\tau_{\nu}w+\frac{1}{2}\tau_{\nu}^{2}z^{\nu})\rangle+o(\tau_{\nu}^{2})}{\tau_{\nu}^{2}}$
$\displaystyle=\langle\nabla\psi_{\overline{i}}(y),z\rangle+\langle
w,\nabla^{2}\psi_{\overline{i}}(y)w\rangle.$ (25)
Conversely, pick any $z\in\mathcal{T}^{2}_{{\rm dom}\psi}(y,w)$. There exists
$\widetilde{i}\in[s]$ such that
$z\in\\!\mathcal{T}^{2}_{\Omega_{\widetilde{i}}}(y,w)$. By the proof of part
(i), $\widetilde{i}\in J_{y,w}$. Since $\Omega_{\widetilde{i}}$ is polyhedral, for any sufficiently small $\tau>0$, $y+\tau w+\frac{1}{2}\tau^{2}z\in\Omega_{\widetilde{i}}$. Then,
$\displaystyle d^{2}\psi(y)(w|z)$ $\displaystyle\leq\lim_{\tau\downarrow
0}\frac{\psi(y+\tau w+\frac{1}{2}\tau^{2}z)-\psi(y)\\!-\\!\tau
d\psi(y)(w)}{\frac{1}{2}\tau^{2}}$ $\displaystyle=\lim_{\tau\downarrow
0}\frac{\psi_{\widetilde{i}}(y+\tau
w+\frac{1}{2}\tau^{2}z)-\psi_{\widetilde{i}}(y)\\!-\\!\tau
d\psi_{\widetilde{i}}(y)(w)}{\frac{1}{2}\tau^{2}}=\langle
w,\nabla^{2}\psi_{\widetilde{i}}(y)w\rangle+\langle\nabla\psi_{\widetilde{i}}(y),z\rangle.$
This shows that $d^{2}\psi(y)(w|z)<\infty$, and the desired equivalence then
follows.
(iii) Fix any $z\in\mathbb{R}^{m}$. We proceed by considering two cases: $z\notin\\!\mathcal{T}^{2}_{{\rm dom}\psi}(y,w)$ and $z\in\\!\mathcal{T}^{2}_{{\rm dom}\psi}(y,w)$. If
$z\notin\\!\mathcal{T}^{2}_{{\rm dom}\psi}(y,w)$, for sufficiently small
$\tau>0$, $y+\tau w+\frac{1}{2}\tau^{2}z\notin{\rm dom}\,\psi$. Along with
$w\in{\rm dom}\,d\psi(y)$,
$d^{2}\psi(y)(w|z)=\infty=\lim_{\tau\downarrow 0}\frac{\psi(y+\tau
w+\frac{1}{2}\tau^{2}z)-\psi(y)-\tau d\psi(y)(w)}{\tau^{2}/2}.$
That is, equation (23) holds in this case. Next, consider the case where
$z\in\mathcal{T}^{2}_{{\rm dom}\,\psi}(y,w)=\bigcup_{k\in
J_{y,w}}\\!\mathcal{T}^{2}_{\Omega_{k}}(y,w)$. For convenience, write
$J_{y,w,z}\\!:=\\{k\in\\!J_{y,w}\,|\,z\in\mathcal{T}^{2}_{\Omega_{k}}(y,w)\\}$.
For each $i\in\\!J_{y,w,z}$, from $z\in\mathcal{T}^{2}_{\Omega_{i}}(y,w)$, the
polyhedrality of $\Omega_{i}$ and [1, Proposition 13.12], for any sufficiently
small $\tau>0$, $y+\tau w+\frac{1}{2}\tau^{2}z\in\Omega_{i}$. Thus, for each
$i\in J_{y,w,z}$, by using Lemma 3.1 (ii) and an elementary calculation, it
holds that
$\displaystyle\langle
w,\nabla^{2}\psi_{i}(y)w\rangle+\langle\nabla\psi_{i}(y),z\rangle$
$\displaystyle=\lim_{\tau\downarrow 0}\frac{\psi_{i}(y+\tau
w+\frac{1}{2}\tau^{2}z)-\psi_{i}(y)-\tau\langle\psi_{i}(y),w\rangle}{\tau^{2}/2}$
$\displaystyle=\lim_{\tau\downarrow 0}\frac{\psi(y+\tau
w+\frac{1}{2}\tau^{2}z)-\psi(y)-\tau d\psi(y)(w)}{\tau^{2}/2}.$ (26)
In addition, by the necessity part of the proof of (ii), there exists an
index $\overline{i}\in J_{y,w,z}$. Combining (25) and (26) then yields (23).
Note that $0\in\mathcal{T}^{2}_{\Omega_{k}}(y,w)$ for each $k\in J_{y,w}$.
From part (ii), we have $d^{2}\psi(y)(w|0)<\infty$. This, along with (23),
shows that $\psi$ is parabolically epi-differentiable at $y$ for $w$.
(iv) Fix any $z\in\mathbb{R}^{m}$. We first consider that
$J_{y,w,z}\neq\emptyset$. Pick any $i\in\\!J_{y,w,z}$. From the polyhedrality
of $\Omega_{i}$, for any sufficiently small $\tau>0$, $y+\tau
w+\frac{1}{2}\tau^{2}z\in\Omega_{i}$. Together with part (iii), it follows
that
$\displaystyle d^{2}\psi(y)(w|z)$ $\displaystyle=\lim_{\tau\downarrow
0}\frac{\psi_{i}(y+\tau w+\frac{1}{2}\tau^{2}z)-\psi_{i}(y)-\tau
d\psi_{i}(y)(w)}{\tau^{2}/2}$ $\displaystyle=\langle
w,\nabla^{2}\psi_{i}(y)w\rangle+\langle\nabla\psi_{i}(y),z\rangle=d^{2}\psi(y)(w)+\langle\nabla\psi_{i}(y),z\rangle$
where the second equality uses
$d\psi(y)(w)=\langle\nabla\psi_{i}(y),w\rangle$ and the last one follows from
Proposition 3.1 (i). When $J_{y,w,z}=\emptyset$, by the definition of
$J_{y,w,z}$, for each $k\in J_{y,w}$,
$z\notin\mathcal{T}_{\Omega_{k}}^{2}(y,w)$. From the proof of part (i),
$z\notin\mathcal{T}_{\mathcal{T}_{{\rm dom}\,\psi}(y)}(w)={\rm
dom}\,d^{2}\psi(y)(w|\cdot)$, so $d^{2}\psi(y)(w|z)=\infty$. The above
discussions show that
$d^{2}\psi(y)(w|z)=d^{2}\psi(y)(w)+\left\\{\begin{array}[]{cl}\\!\langle\nabla\psi_{k}(y),z\rangle\
{\rm for\ any}\ k\in J_{y,w,z}&{\rm if}\ J_{y,w,z}\neq\emptyset,\\\
\infty&{\rm if}\ J_{y,w,z}=\emptyset.\end{array}\right.$ (27)
Let $\psi_{y}(w^{\prime}):=d\psi(y)(w^{\prime})$ for
$w^{\prime}\in\mathbb{R}^{m}$. By Lemma 3.1 (ii)-(iii), $\psi_{y}$ is a proper
piecewise linear function with ${\rm dom}\,\psi_{y}=\bigcup_{k\in
J_{y}}\\!\mathcal{T}_{\Omega_{k}}(y)$. For any $w\in\mathbb{R}^{m}$ and
$z\in\mathbb{R}^{m}$, define $I_{w}:=\\{i\in
J_{y}\,|\,w\in\mathcal{T}_{\Omega_{i}}(y)\\}$ and $I_{w,z}:=\\{i\in
I_{w}\,|\,z\in\mathcal{T}_{\mathcal{T}_{\Omega_{i}}(y)}(w)\\}$. By applying
the conclusion of Lemma 3.1 (iii) to the function $\psi_{y}$,
$d\psi_{y}(w)(z)=\left\\{\begin{array}[]{cl}\\!\langle\nabla\psi_{k}(y),z\rangle\
{\rm for\ any}\ k\in I_{w,z}&{\rm if}\ I_{w,z}\neq\emptyset,\\\ \infty&{\rm
if}\ I_{w,z}=\emptyset.\end{array}\right.$
Note that $I_{w,z}=J_{y,w,z}$ because
$\mathcal{T}^{2}_{\Omega_{i}}(y,w)=\mathcal{T}_{\mathcal{T}_{\Omega_{i}}(y)}(w)$
for each $i\in[s]$. Combining the last equation with (27) yields that
$d^{2}\psi(y)(w|z)=d^{2}\psi(y)(w)+d\psi_{y}(w)(z)$. Since $\psi$ is assumed
to be regular at $y$, by invoking [1, Theorem 8.30],
$\psi_{y}(w^{\prime})=\sup_{v\in\partial\psi(y)}\langle v,w^{\prime}\rangle$
for any $w^{\prime}\in\mathbb{R}^{m}$, which implies that $\psi_{y}$ is an lsc
convex function. Along with its properness,
$\partial\psi_{y}(w)=\mathop{\arg\max}_{v\in\partial\psi(y)}\langle
v,w\rangle$, and then
$d\psi_{y}(w)(z)=\sup_{\xi\in\partial\psi_{y}(w)}\langle\xi,z\rangle=\sup_{\xi\in\mathbb{R}^{m}}\Big{\\{}\langle\xi,z\rangle\
\ {\rm s.t.}\ \
\xi\in\partial\psi(y),\,\langle\xi,w\rangle=d\psi(y)(w)\Big{\\}},$
where the first equality is obtained by invoking [1, Theorem 8.30] for
$\psi_{y}$. Substituting this into
$d^{2}\psi(y)(w|z)=d^{2}\psi(y)(w)+d\psi_{y}(w)(z)$ and using the definition
of $\mathcal{A}_{\psi}(y,w)$ leads to the result. $\Box$
Proposition 3.2 extends the result of [1, Exercise 13.61] for PWLQ convex
functions to PWTD functions. From Proposition 3.2 (iv) and Proposition 3.1
(ii), we can establish the parabolic regularity of PWTD functions, which
partly extends the result of [1, Theorem 13.67].
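As a concrete illustration of formula (27), consider the simplest PWTD function $\psi(t)=|t|$, with pieces $\psi_{1}(t)=t$ on $\Omega_{1}=[0,\infty)$ and $\psi_{2}(t)=-t$ on $\Omega_{2}=(-\infty,0]$. At $y=0$ and $w=1$ one has $J_{y,w,z}=\{1\}$ for every $z$, so (27) predicts $d^{2}\psi(y)(w|z)=d^{2}\psi(y)(w)+\langle\nabla\psi_{1}(y),z\rangle=0+z$. The following Python sketch is our own numerical check, not part of the paper; it evaluates the parabolic second-order difference quotient at a small $\tau$.

```python
# Numerical sketch (ours, not from the paper): checking formula (27) of
# Proposition 3.2 (iv) for the PWTD function psi(t) = |t|, with pieces
# psi_1(t) = t on [0, inf) and psi_2(t) = -t on (-inf, 0].
# At y = 0, w = 1 the active index set is J_{y,w,z} = {1}, so (27) predicts
# d^2 psi(y)(w|z) = d^2 psi(y)(w) + <psi_1'(y), z> = 0 + z.

def second_order_quotient(psi, y, w, dpsi_y_w, z, tau):
    """Parabolic second-order difference quotient of psi at y for w, z."""
    return (psi(y + tau * w + 0.5 * tau**2 * z) - psi(y) - tau * dpsi_y_w) / (tau**2 / 2)

psi = abs
y, w = 0.0, 1.0
dpsi_y_w = 1.0          # d psi(0)(1) = |1| = 1
tau = 1e-4

for z in (-3.0, 0.5, 2.0):
    q = second_order_quotient(psi, y, w, dpsi_y_w, z, tau)
    assert abs(q - z) < 1e-6, (z, q)   # quotient is already z up to rounding
```

For this piecewise linear example the quotient equals $z$ exactly once $y+\tau w+\frac{1}{2}\tau^{2}z$ lands in the active piece $\Omega_{1}$.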
###### Proposition 3.3
Let $\psi\\!:\mathbb{R}^{m}\to\overline{\mathbb{R}}$ be a PWTD function.
Consider any $y\in{\rm dom}\,\psi$. Suppose that $\psi$ is regular at $y$.
Then, the following assertions hold.
* (i)
Fix any $w\in{\rm dom}\,d\psi(y)$ and define $\varphi(z):=d^{2}\psi(y)(w|z)$
for $z\in\mathbb{R}^{m}$. Then, $\varphi$ is a proper lsc convex function with
$\varphi^{*}(z^{*})=-d^{2}\psi(y)(w)+\delta_{\mathcal{A}_{\psi}(y,w)}(z^{*})$
for $z^{*}\in\mathbb{R}^{m}$.
* (ii)
The function $\psi$ is parabolically regular at $y$ for every
$v\in\partial\psi(y)$.
Proof: (i) Since $\psi$ is regular at $y$, by Proposition 3.2 (iv), $\varphi$
is a sum of the support function of the closed convex set
$\mathcal{A}_{\psi}(y,w)$ and the constant $d^{2}\psi(y)(w)$, so is a proper
lsc convex function. The stated expression for the conjugate $\varphi^{*}$
then follows directly from the definition of conjugate functions.
(ii) Fix any $v\in\partial\psi(y)$. Pick any $w\in\mathcal{C}_{\psi}(y,v)$.
From part (i), it immediately follows that
$-\varphi^{*}(v)=d^{2}\psi(y)(w)-\delta_{\mathcal{A}_{\psi}(y,w)}(v)\leq
d^{2}\psi(y)(w)=d^{2}\psi(y|v)(w),$
where the second equality is due to Proposition 3.1 (ii) and
$w\in\mathcal{C}_{\psi}(y,v)$. On the other hand, from the definition of
conjugate functions,
$-\varphi^{*}(v)=\inf_{z\in\mathbb{R}^{m}}\big{\\{}d^{2}\psi(y)(w|z)-\langle
v,z\rangle\\}\geq d^{2}\psi(y|v)(w)$, where the inequality is due to [1,
Proposition 13.64]. Combining the two bounds shows that
$\inf_{z\in\mathbb{R}^{m}}\big{\\{}d^{2}\psi(y)(w|z)-\langle
v,z\rangle\\}=d^{2}\psi(y|v)(w)$. By Definition 2.9, $\psi$ is parabolically
regular at $y$ for $v$. The result then follows. $\Box$
## 4 Estimates for the second subderivative of $f$
To derive tight lower and upper estimates for the second subderivative of $f$,
we need to characterize its critical cone and justify its parabolic
epi-differentiability. By Assumption 1, [29, Theorem 3.4] and Lemma 3.1, the
following result holds.
###### Lemma 4.1
Consider any $\overline{x}\in{\rm dom}f$. If the MSQC holds for constraint
system $F(x)\in{\rm dom}\vartheta$ at $\overline{x}$, then
$df(\overline{x})(w)=d\vartheta(F(\overline{x}))(dF(\overline{x})(w))$ for
$w\in\mathbb{X}$, and consequently, $df(\overline{x})$ is a proper and
piecewise positively homogeneous continuous function (i.e., ${\rm
dom}\,df(\overline{x})$ is nonempty and can be represented as a union of
finitely many polyhedral sets, say $\bigcup_{i=1}^{l}D_{i}$ for polyhedral
sets $D_{1},\ldots,D_{l}$, and for each $i\in[l]$, there is a function
$h_{i}$, which is a positively homogeneous continuous function on a superset
of $D_{i}$, such that $df(\overline{x})=h_{i}$ on $D_{i}$).
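To see the chain rule $df(\overline{x})(w)=d\vartheta(F(\overline{x}))(dF(\overline{x})(w))$ of Lemma 4.1 in action, the sketch below uses an assumed toy composite that is not taken from the paper: $\vartheta=|\cdot|$ (a PWTD outer function) and the smooth inner map $F(x)=x_{1}^{2}-x_{2}$ on $\mathbb{R}^{2}$, at $\overline{x}=(1,1)$ where $F(\overline{x})=0$. Since $d\vartheta(0)(u)=|u|$ and $dF(\overline{x})(w)=2w_{1}-w_{2}$, the chain rule predicts $df(\overline{x})(w)=|2w_{1}-w_{2}|$, which we compare against a first-order difference quotient.

```python
# Hypothetical worked example (ours, not from the paper) of Lemma 4.1's
# chain rule for f = theta o F with theta = |.| and F(x) = x1^2 - x2,
# at xbar = (1, 1) where F(xbar) = 0. Predicted: df(xbar)(w) = |2 w1 - w2|.

def F(x1, x2):
    return x1**2 - x2

def f(x1, x2):
    return abs(F(x1, x2))

def df_fd(w1, w2, tau=1e-6):
    """First-order difference quotient of f at xbar = (1, 1)."""
    return (f(1.0 + tau * w1, 1.0 + tau * w2) - f(1.0, 1.0)) / tau

for (w1, w2) in ((1.0, 0.0), (0.0, 1.0), (-2.0, 3.0)):
    predicted = abs(2 * w1 - w2)
    assert abs(df_fd(w1, w2) - predicted) < 1e-4, ((w1, w2), df_fd(w1, w2))
```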
###### Proposition 4.1
Consider any $\overline{x}\in{\rm dom}f$ and any
$\overline{v}\in\partial\\!f(\overline{x})$. Suppose that the MSQC holds for
constraint system $F(x)\in{\rm dom}\vartheta$ at $\overline{x}$. Then, the
following assertions hold.
* (i)
$w\in\mathcal{C}_{\\!f}(\overline{x},\overline{v})$ if and only if
$d\vartheta(F(\overline{x}))(dF(\overline{x})(w))=\langle\overline{v},w\rangle$,
which in particular implies that $w\in\mathcal{T}_{{\rm dom}f}(\overline{x})$.
* (ii)
For any $w\in\mathcal{T}_{{\rm dom}f}(\overline{x})$, $\mathcal{T}^{2}_{{\rm
epi}f}\big{(}(\overline{x},f(\overline{x})),(w,df(\overline{x})(w))\big{)}\neq\emptyset$.
* (iii)
$\mathcal{C}_{\\!f}(\overline{x},\overline{v})\subset{\rm
dom}\,d^{2}\\!f(\overline{x}|\overline{v})$, and the converse inclusion also
holds if $\overline{v}\in\widehat{\partial}f(\overline{x})$.
* (iv)
$\mathcal{C}_{\\!f}(\overline{x},\overline{v})\neq\emptyset$ whenever
$dF(\overline{x})(0)=0$.
Proof: (i) The equivalence follows from the definition of
$\mathcal{C}_{\\!f}(\overline{x},\overline{v})$ and Lemma 4.1. For any $w$
with
$d\vartheta(F(\overline{x}))(dF(\overline{x})(w))=\langle\overline{v},w\rangle$,
we have $dF(\overline{x})(w)\in{\rm
dom}\,d\vartheta(F(\overline{x}))=\mathcal{T}_{{\rm
dom}\vartheta}(F(\overline{x}))$, where the equality is due to Lemma 3.1 (ii)
with $\psi=\vartheta$. By invoking Lemma 2.1, $w\in\mathcal{T}_{{\rm
dom}f}(\overline{x})$.
(ii) Fix any $w\in\\!\mathcal{T}_{{\rm dom}f}(\overline{x})$. From Corollary
2.2, there exists a vector $z\in\mathbb{X}$ such that
$F^{\prime\prime}(\overline{x};w,z)\in\mathcal{T}_{{\rm
dom}\vartheta}^{2}(F(\overline{x}),dF(\overline{x})(w))$. Together with
Proposition 3.2 (ii)-(iii), it follows that
$\varpi=d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|F^{\prime\prime}(\overline{x};w,z))$
is finite, which implies that
$(F^{\prime\prime}(\overline{x};w,z),\varpi)\in{\rm
epi}\,d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)\,|\,\cdot)$. By the
equivalence (2.2), $(z,\varpi)\in\mathcal{T}_{{\rm
epi}f}^{2}((\overline{x},f(\overline{x})),(w,df(\overline{x})(w)))$.
(iii) Pick any $w\in\mathcal{C}_{\\!f}(\overline{x},\overline{v})$. From part
(i), $w\in\mathcal{T}_{{\rm dom}f}(\overline{x})$, while from part (ii) it
follows that $\mathcal{T}^{2}_{{\rm
epi}f}((\overline{x},f(\overline{x})),(w,df(\overline{x})(w)))\\!\neq\emptyset$.
Pick any $(u,\varpi)\in\\!\mathcal{T}^{2}_{{\rm
epi}f}((\overline{x},f(\overline{x})),(w,df(\overline{x})(w)))$. There exist
$\tau_{k}\downarrow 0$ and $(u^{k},\varpi_{k})\to(u,\varpi)$ such that
$(\overline{x},f(\overline{x}))+\tau_{k}(w,df(\overline{x})(w))+\frac{1}{2}\tau_{k}^{2}(u^{k},\varpi_{k})\in{\rm
epi}f$ for each $k\in\mathbb{N}$. Along with
$df(\overline{x})(w)=\langle\overline{v},w\rangle$,
$\displaystyle\varpi_{k}$
$\displaystyle\geq\frac{f(\overline{x}+\tau_{k}w+\frac{1}{2}\tau_{k}^{2}u^{k})\\!-f(\overline{x})\\!-\tau_{k}df(\overline{x})(w)}{\tau_{k}^{2}/2}$
$\displaystyle=\frac{f(\overline{x}+\\!\tau_{k}(w\\!+\frac{1}{2}\tau_{k}u^{k}))\\!-f(\overline{x})\\!-\tau_{k}\langle\overline{v},w\\!+\frac{1}{2}\tau_{k}u^{k}\rangle}{\tau_{k}^{2}/2}+\langle\overline{v},u^{k}\rangle\quad\
\forall k\in\mathbb{N}.$
Passing to the limit $k\to\infty$ in the last inequality yields $\varpi\geq
d^{2}\\!f(\overline{x}|\overline{v})(w)\\!+\\!\langle\overline{v},u\rangle$,
which means that $w\in{\rm dom}\,d^{2}\\!f(\overline{x}|\overline{v})$ and the
inclusion $\mathcal{C}_{\\!f}(\overline{x},\overline{v})\subset{\rm
dom}\,d^{2}\\!f(\overline{x}|\overline{v})$. For the converse inclusion, as
$\overline{v}\in\widehat{\partial}\\!f(\overline{x})$,
$df(\overline{x})(w^{\prime})\geq\langle\overline{v},w^{\prime}\rangle$ for
all $w^{\prime}\in\mathbb{X}$ by (6), while ${\rm
dom}\,d^{2}\\!f(\overline{x}|\overline{v})\subset\\{w\in\mathbb{X}\,|\,df(\overline{x})(w)\leq\langle\overline{v},w\rangle\\}$
by [1, Proposition 13.5]. Thus, ${\rm
dom}\,d^{2}\\!f(\overline{x}|\overline{v})\subset\\{w\in\mathbb{X}\,|\,df(\overline{x})(w)=\langle\overline{v},w\rangle\\}=\mathcal{C}_{\\!f}(\overline{x},\overline{v})$.
(iv) When $dF(\overline{x})(0)=0$, we have
$d\vartheta(F(\overline{x}))(dF(\overline{x})(0))=0$ by Lemma 3.1 (iii), which
along with part (i) implies that
$0\in\mathcal{C}_{\\!f}(\overline{x},\overline{v})$. Consequently,
$\mathcal{C}_{\\!f}(\overline{x},\overline{v})\neq\emptyset$. $\Box$
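A small numerical illustration of Proposition 4.1 (i), using the same assumed toy composite as before (ours, not the paper's): $f=|F|$ with $F(x)=x_{1}^{2}-x_{2}$ at $\overline{x}=(1,1)$, where $\nabla F(\overline{x})=(2,-1)$ and we take $\overline{v}=(2,-1)$ (i.e. $\xi=1\in\partial\vartheta(0)=[-1,1]$). Part (i) then says $w\in\mathcal{C}_{\\!f}(\overline{x},\overline{v})$ exactly when $d\vartheta(0)(dF(\overline{x})(w))=|2w_{1}-w_{2}|$ equals $\langle\overline{v},w\rangle=2w_{1}-w_{2}$, i.e. when $2w_{1}\geq w_{2}$.

```python
# Hypothetical illustration (ours, not from the paper) of the critical-cone
# test in Proposition 4.1 (i) for f = |F| with F(x) = x1^2 - x2 at
# xbar = (1, 1) and vbar = (2, -1):
#   w is critical  <=>  |2 w1 - w2| = 2 w1 - w2  <=>  2 w1 - w2 >= 0.

def in_critical_cone(w1, w2):
    u = 2 * w1 - w2                  # dF(xbar)(w)
    return abs(u) == u               # d theta(0)(u) = <vbar, w> iff u >= 0

assert in_critical_cone(1.0, 0.0)    # interior direction
assert in_critical_cone(1.0, 2.0)    # boundary direction: 2*1 - 2 = 0
assert not in_critical_cone(0.0, 1.0)
```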
Proposition 4.1 (iii) extends the conclusion of [22, Theorem 4.4] for the
fully subamenable function to a large class of nonconvex composite functions
$f$.
### 4.1 Parabolic subderivative of $f$
The following proposition characterizes the parabolic subderivative of $f$,
which extends the conclusion of [27, Theorem 4.4] to the composition of a PWTD
function and a parabolically semidifferentiable mapping.
###### Proposition 4.2
Consider any $\overline{x}\in{\rm dom}f$. Suppose that the MSQC holds for
constraint system $F(x)\in{\rm dom}\,\vartheta$ at $\overline{x}$. Fix any
$w\in\mathcal{T}_{{\rm dom}f}(\overline{x})$. Then, the following assertions
hold.
* (i)
For any $z\in\mathbb{X}$,
$-\infty<d^{2}\\!f(\overline{x})(w|z)=d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)\,|\,F^{\prime\prime}(\overline{x};w,z))$
with
${\rm dom}\,d^{2}f(\overline{x})(w\,|\,\cdot)=\mathcal{T}^{2}_{{\rm
dom}f}(\overline{x},w).$ (28)
* (ii)
The function $f$ is parabolically epi-differentiable at $\overline{x}$ for
$w$.
Proof: (i) Since $w\in\mathcal{T}_{{\rm dom}f}(\overline{x})$, from Lemma 2.1
it follows that $dF(\overline{x})(w)\in\mathcal{T}_{{\rm
dom}\vartheta}(F(\overline{x}))$, which by Lemma 3.1 (ii) for $\psi=\vartheta$
implies that $d\vartheta(F(\overline{x}))(dF(\overline{x})(w))<\infty$.
Together with the properness of $d\vartheta(F(\overline{x}))$ and Lemma 4.1,
$df(\overline{x})(w)$ is finite. Then, by Definition 2.7, it is not hard to
deduce that
${\rm dom}\,d^{2}\\!f(\overline{x})(w\,|\,\cdot)\subset\mathcal{T}^{2}_{{\rm
dom}f}(\overline{x},w).$ (29)
Fix any $z\in\mathbb{X}$. By Definition 2.7 and Lemma 4.1, there exist
sequences $\tau_{k}\downarrow 0$ and $z^{k}\to z$ such that
$d^{2}\\!f(\overline{x})(w|z)=\lim_{k\to\infty}\frac{\vartheta(F(\overline{x}+\tau_{k}w\\!+\\!\frac{1}{2}\tau_{k}^{2}z^{k}))-\vartheta(F(\overline{x}))\\!-\\!\tau_{k}d\vartheta(F(\overline{x}))(dF(\overline{x})(w))}{\tau_{k}^{2}/2},$
which together with the parabolic semidifferentiability of $F$ at
$\overline{x}$ implies that
$\displaystyle d^{2}\\!f(\overline{x})(w|z)$
$\displaystyle=\lim_{k\to\infty}\frac{\vartheta(F(\overline{x})+\tau_{k}dF(\overline{x})w\\!+\\!\frac{1}{2}\tau_{k}^{2}(F^{\prime\prime}(\overline{x};w,z)+o(\tau^{2}_{k})/\tau_{k}^{2}))}{\tau_{k}^{2}/2}$
$\displaystyle\quad\qquad-\frac{\vartheta(F(\overline{x}))+\tau_{k}d\vartheta(F(\overline{x}))(dF(\overline{x})(w))}{\tau_{k}^{2}/2}$
$\displaystyle\geq
d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|F^{\prime\prime}(\overline{x};w,z))>-\infty$
where the second inequality is due to Proposition 3.2 (iii). Next we shall
establish the converse inequality by two cases:
$z\notin\\!\mathcal{T}^{2}_{{\rm dom}f}(\overline{x},w)$ and
$z\in\\!\mathcal{T}^{2}_{{\rm dom}f}(\overline{x},w)$. For convenience, write
$u=F^{\prime\prime}(\overline{x};w,z)$.
Case 1: $z\notin\\!\mathcal{T}^{2}_{{\rm dom}f}(\overline{x},w)$. In this
case, by Corollary 2.2, $u\notin\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),dF(\overline{x})(w))$, which by Proposition 3.2
(ii) for $\psi=\vartheta$ implies that
$d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|u)\\!=\infty$. Meanwhile,
from (29), $d^{2}\\!f(\overline{x})(w|z)=\infty$. Thus,
$d^{2}\\!f(\overline{x})(w|z)\\!=d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)\,|\,F^{\prime\prime}(\overline{x};w,z))$.
Case 2: $z\in\\!\mathcal{T}^{2}_{{\rm dom}f}(\overline{x},w)$. By Corollary
2.2, $z\in\\!\mathcal{T}^{i,2}_{{\rm dom}f}(\overline{x},w)$ and
$u\in\\!\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),dF(\overline{x})(w))$. Pick any
$\tau_{k}\downarrow 0$. From the definition of inner second-order tangent
sets, there exists $z^{k}\to z$ such that for each $k\in\mathbb{N}$, ${\rm
dom}f\ni x^{k}\\!:=\overline{x}+\tau_{k}w+\frac{1}{2}\tau_{k}^{2}z^{k}$. Let
$y^{k}\\!:=F(\overline{x})\\!+\\!\tau_{k}dF(\overline{x})(w)\\!+\\!\frac{1}{2}\tau_{k}^{2}u$.
By Proposition 3.2 (ii)-(iii),
$\infty>d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|u)=\lim_{k\to\infty}\frac{\vartheta(y^{k})-\vartheta(F(\overline{x}))-\tau_{k}d\vartheta(F(\overline{x}))(dF(\overline{x})(w))}{\tau_{k}^{2}/2},$
which implies that $y^{k}\in{\rm dom}\vartheta$ for all sufficiently large
$k$. Consequently, it holds that
$\displaystyle d^{2}\\!f(\overline{x})(w|z)$
$\displaystyle\leq\liminf_{k\to\infty}\frac{f(x^{k})-f(\overline{x})-\tau_{k}df(\overline{x})(w)}{\tau_{k}^{2}/2}$
$\displaystyle\leq\limsup_{k\to\infty}\frac{\vartheta(F(x^{k}))-\vartheta(F(\overline{x}))-\tau_{k}d\vartheta(F(\overline{x}))(dF(\overline{x})(w))}{\tau_{k}^{2}/2}$
$\displaystyle=\lim_{k\to\infty}\frac{\vartheta(y^{k})-\vartheta(F(\overline{x}))\\!-\\!\tau_{k}d\vartheta(F(\overline{x}))(dF(\overline{x})(w))}{\tau_{k}^{2}/2}+\limsup_{k\to\infty}\frac{\vartheta(F(x^{k}))\\!-\\!\vartheta(y^{k})}{\tau_{k}^{2}/2}$
$\displaystyle=d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|u)+\limsup_{k\to\infty}\frac{\vartheta(F(x^{k}))\\!-\\!\vartheta(y^{k})}{\tau_{k}^{2}/2}$
$\displaystyle\leq
d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|u)+L_{\vartheta}\limsup_{k\to\infty}\frac{2\|F(x^{k})-y^{k}\|_{2}}{\tau_{k}^{2}}\
\ {\rm for\ some}\ L_{\vartheta}>0$
$\displaystyle=d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)\,|\,u),$
(30)
where the second inequality follows from the expression of $f$, the last
inequality uses the strict continuity of $\vartheta$ relative to its
domain (Assumption 1 (ii)), the second equality is due to Proposition 3.2
(iii), and the last equality uses the parabolic semidifferentiability of
$F$.
The above arguments show that the first equality of part (i) holds. To achieve
(28), it suffices to prove that the converse inclusion in (29) holds. Pick any
$z\in\mathcal{T}^{2}_{{\rm dom}f}(\overline{x},w)$ and set
$u=F^{\prime\prime}(\overline{x};w,z)$. From Corollary 2.2,
$u\in\\!\mathcal{T}_{{\rm
dom}\vartheta}^{2}(F(\overline{x}),dF(\overline{x})(w))$, which by Proposition
3.2 (ii) implies that
$d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|u)<\infty$. Together with
the first equality, $z\in{\rm dom}\,d^{2}\\!f(\overline{x})(w|\cdot)$. The
converse inclusion in (29) follows.
(ii) By Corollary 2.2 and part (i), $\emptyset\neq\mathcal{T}^{i,2}_{{\rm
dom}f}(\overline{x},w)=\mathcal{T}^{2}_{{\rm dom}f}(\overline{x},w)={\rm
dom}\,d^{2}f(\overline{x})(w\,|\,\cdot)$. Consider any $z\in\mathbb{X}$. Pick
any $\tau_{k}\downarrow 0$. From the discussions after Definition 2.7, it
suffices to argue that there exists a sequence $z^{k}\to z$ such that
$\Delta_{\tau_{k}}^{2}f(\overline{x})(w|z^{k})\to d^{2}f(\overline{x})(w|z)$
as $k\to\infty$. Indeed, when $z\notin\mathcal{T}^{i,2}_{{\rm
dom}f}(\overline{x},w)={\rm dom}\,d^{2}f(\overline{x})(w\,|\,\cdot)$, by the
definition of inner second-order tangent sets, for any $z^{k}\to z$,
$\overline{x}+\tau_{k}w+\frac{1}{2}\tau_{k}^{2}z^{k}\notin{\rm dom}f$. By the
definition of the parabolic difference quotients of $f$ at $\overline{x}$ for
$w$,
$\Delta_{\tau_{k}}^{2}f(\overline{x})(w|z^{k})=\frac{f(\overline{x}+\tau_{k}w+\frac{1}{2}\tau_{k}^{2}z^{k})-f(\overline{x})-\tau_{k}df(\overline{x})(w)}{\tau_{k}^{2}/2}=\infty=d^{2}f(\overline{x})(w|z)$
where the last equality is due to $z\notin{\rm
dom}\,d^{2}f(\overline{x})(w\,|\,\cdot)$. When $z\in\mathcal{T}^{i,2}_{{\rm
dom}f}(\overline{x},w)$, there is $z^{k}\to z$ such that ${\rm dom}f\ni
x^{k}=\overline{x}+\tau_{k}w+\frac{1}{2}\tau_{k}^{2}z^{k}$ for each $k$, and
using the same arguments as those for (30) leads to
$\displaystyle\limsup_{k\to\infty}\Delta_{\tau_{k}}^{2}f(\overline{x})(w|z^{k})$
$\displaystyle\leq\limsup_{k\to\infty}\frac{\vartheta(F(x^{k}))-\vartheta(F(\overline{x}))-\tau_{k}d\vartheta(F(\overline{x}))(dF(\overline{x})(w))}{\tau_{k}^{2}/2}$
$\displaystyle\leq
d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|F^{\prime\prime}(\overline{x};w,z))=d^{2}f(\overline{x})(w|z)$
where the equality is due to part (i). Note that
$d^{2}f(\overline{x})(w|z)\leq\liminf_{k\to\infty}\Delta_{\tau_{k}}^{2}f(\overline{x})(w|z^{k})$.
Hence
$\lim_{k\to\infty}\Delta_{\tau_{k}}^{2}f(\overline{x})(w|z^{k})=d^{2}f(\overline{x})(w|z)$.
The proof is completed. $\Box$
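The formula of Proposition 4.2 (i) can be checked numerically on the same assumed toy composite (ours, not from the paper): $f=|F|$ with $F(x)=x_{1}^{2}-x_{2}$ at $\overline{x}=(1,1)$. For a direction $w$ with $2w_{1}-w_{2}>0$ one has $F^{\prime\prime}(\overline{x};w,z)=2z_{1}-z_{2}+2w_{1}^{2}$ and $d^{2}\vartheta(0)(u|s)=s$ when $u>0$, so part (i) predicts $d^{2}\\!f(\overline{x})(w|z)=2z_{1}-z_{2}+2w_{1}^{2}$.

```python
# Numerical sketch (ours, not from the paper) of Proposition 4.2 (i) for
# f = |F| with F(x) = x1^2 - x2 at xbar = (1, 1). For w with 2 w1 - w2 > 0:
#   F''(xbar; w, z) = 2 z1 - z2 + 2 w1^2  and  d^2 theta(0)(u|s) = s (u > 0),
# so the predicted parabolic subderivative is 2 z1 - z2 + 2 w1^2.

def f(x1, x2):
    return abs(x1**2 - x2)

def parabolic_quotient(w, z, tau):
    """Second-order parabolic difference quotient of f at xbar = (1, 1)."""
    x1 = 1.0 + tau * w[0] + 0.5 * tau**2 * z[0]
    x2 = 1.0 + tau * w[1] + 0.5 * tau**2 * z[1]
    df_w = abs(2 * w[0] - w[1])      # df(xbar)(w) by the chain rule of Lemma 4.1
    return (f(x1, x2) - f(1.0, 1.0) - tau * df_w) / (tau**2 / 2)

w = (1.0, 0.5)                        # 2*1 - 0.5 > 0, so w is admissible
for z in ((0.3, -1.2), (-2.0, 1.0)):
    predicted = 2 * z[0] - z[1] + 2 * w[0]**2
    assert abs(parabolic_quotient(w, z, 1e-4) - predicted) < 1e-2
```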
### 4.2 Lower and upper estimates for the second subderivative of $f$
Now we are ready to establish the upper and lower estimates for the second
subderivative of $f$.
###### Theorem 4.1
Consider any $\overline{x}\in{\rm dom}f$ and any
$\overline{v}\in\partial\\!f(\overline{x})$. Suppose that the MSQC holds for
constraint system $F(x)\in{\rm dom}\,\vartheta$ at $\overline{x}$. Then, for
any $w\in\mathcal{C}_{\\!f}(\overline{x},\overline{v})$, it holds that
$\displaystyle d^{2}\\!f(\overline{x}|\overline{v})(w)$
$\displaystyle\leq\inf_{z\in\mathbb{X}}\big{\\{}d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|F^{\prime\prime}(\overline{x};w,z))-\langle\overline{v},z\rangle\big{\\}}<\infty,$
(31) $\displaystyle d^{2}\\!f(\overline{x}|\overline{v})(w)$
$\displaystyle\geq\sup_{\xi\in\Lambda_{\overline{x},\overline{v}}}\big{\\{}d^{2}\vartheta(F(\overline{x})|\xi)(dF(\overline{x})(w))+d^{2}(\xi
F)(\overline{x})(w)\big{\\}},$ (32)
where
$\Lambda_{\overline{x},\overline{v}}:=\\{\xi\in\partial\vartheta(F(\overline{x}))\,|\,\langle\xi,dF(\overline{x})(w^{\prime})\rangle\geq\langle\overline{v},w^{\prime}\rangle\
\forall w^{\prime}\in\mathbb{X}\\}$. If in addition
$\partial\\!f(\overline{x})\subset\partial(\xi F)(\overline{x})$ for any
$\xi\in\partial\vartheta(F(\overline{x}))$, with
$\Gamma_{\overline{x},\overline{v}}:=\big{\\{}(\xi,u_{1},\ldots,u_{m})\in\partial\vartheta(F(\overline{x}))\times\partial
F_{1}(\overline{x})\times\cdots\times\partial F_{m}(\overline{x})\ |\
\overline{v}=\sum_{i=1}^{m}\xi_{i}u_{i}\big{\\}}$,
$d^{2}\\!f(\overline{x}|\overline{v})(w)\geq\sup_{(\xi,u)\in\Gamma_{\overline{x},\overline{v}}}\Big{\\{}d^{2}\vartheta(F(\overline{x})|\xi)(dF(\overline{x})(w))+d^{2}(\xi
F)(\overline{x}\,|\,{\textstyle\sum_{i=1}^{m}}\xi_{i}u_{i})(w)\Big{\\}}.$ (33)
Proof: Fix any $w\in\mathcal{C}_{\\!f}(\overline{x},\overline{v})$. By
Proposition 4.1 (i), $w\in\mathcal{T}_{{\rm dom}f}(\overline{x})$. By invoking
Proposition 4.2 (i),
$d^{2}\\!f(\overline{x})(w|z)=d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|F^{\prime\prime}(\overline{x};w,z))$
for all $z\in\mathbb{X}$. Together with [1, Proposition 13.64], we obtain the
first inequality in (31). From (28) and Corollary 2.2, ${\rm
dom}\,d^{2}\\!f(\overline{x})(w|\cdot)\neq\emptyset$, so there exists
$z\in\mathbb{X}$ such that
$d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|F^{\prime\prime}(\overline{x};w,z))<\infty$.
The two inequalities in (31) then follow. To achieve inequality (32), it
suffices to consider that $\Lambda_{\overline{x},\overline{v}}\neq\emptyset$.
Pick any $\xi\in\Lambda_{\overline{x},\overline{v}}$. Recall that $F$ is
semidifferentiable. By [1, Theorem 7.21], for any $u\in\mathbb{X}$, it holds
that
$dF(\overline{x})(u)=\lim_{\tau\downarrow 0,u^{\prime}\to
u}\Delta_{\tau}F(\overline{x})(u^{\prime})\ \ {\rm with}\ \
\Delta_{\tau}F(\overline{x})(u^{\prime}):=\frac{F(\overline{x}+\tau
u^{\prime})-F(\overline{x})}{\tau},$
which implies that, for any $u\in\mathbb{X}$,
$\langle\xi,dF(\overline{x})(u)\rangle=d(\xi F)(\overline{x})(u)$. By the
second-order difference quotients of $f$ at $\overline{x}$, for any $\tau>0$
and any $w^{\prime}\in\mathbb{X}$,
$\displaystyle\Delta_{\tau}^{2}f(\overline{x}|\overline{v})(w^{\prime})$
$\displaystyle=\frac{\vartheta(F(\overline{x}+\tau
w^{\prime}))-\vartheta(F(\overline{x}))-\tau\langle\overline{v},w^{\prime}\rangle}{\tau^{2}/2}$
$\displaystyle\geq\frac{\vartheta(F(\overline{x})+\tau\Delta_{\tau}F(\overline{x})(w^{\prime}))-\vartheta(F(\overline{x}))-\tau
d(\xi F)(\overline{x})(w^{\prime})}{\tau^{2}/2}$
$\displaystyle=\frac{\vartheta(F(\overline{x})+\tau\Delta_{\tau}F(\overline{x})(w^{\prime}))-\vartheta(F(\overline{x}))-\tau\langle\xi,\Delta_{\tau}F(\overline{x})(w^{\prime})\rangle}{\tau^{2}/2}$
$\displaystyle\quad+\frac{(\xi F)(\overline{x}+\tau w^{\prime})-(\xi
F)(\overline{x})-\tau d(\xi F)(\overline{x})(w^{\prime})}{\tau^{2}/2}$
where the inequality is due to $\xi\in\Lambda_{\overline{x},\overline{v}}$ and
$\langle\xi,dF(\overline{x})(w^{\prime})\rangle=d(\xi
F)(\overline{x})(w^{\prime})$. From Definition 2.6 and $\lim_{\tau\downarrow
0,w^{\prime}\to
w}\Delta_{\tau}F(\overline{x})(w^{\prime})=dF(\overline{x})(w)$,
$d^{2}\\!f(\overline{x}|\overline{v})(w)\geq
d^{2}\vartheta(F(\overline{x})|\xi)(dF(\overline{x})(w))+d^{2}(\xi
F)(\overline{x})(w)$. This, by the arbitrariness of $\xi$ in
$\Lambda_{\overline{x},\overline{v}}$, implies that (32) holds. To achieve
(33), it suffices to consider that
$\Gamma_{\\!\overline{x},\overline{v}}\neq\emptyset$. Pick any
$(\xi,u)\in\Gamma_{\\!\overline{x},\overline{v}}$. Then,
$(\xi,u_{1},\ldots,u_{m})\in\partial\vartheta(F(\overline{x}))\times\partial
F_{1}(\overline{x})\times\cdots\times\partial F_{m}(\overline{x})$ and
$\overline{v}=\sum_{i=1}^{m}\xi_{i}u_{i}$. By the second-order difference
quotients for $f$ at $\overline{x}$, for any $\tau>0$ and
$w^{\prime}\in\mathbb{X}$,
$\displaystyle\Delta_{\tau}^{2}f(\overline{x}|\overline{v})(w^{\prime})$
$\displaystyle=\frac{\vartheta(F(\overline{x})+\tau\Delta_{\tau}F(\overline{x})(w^{\prime}))-\vartheta(F(\overline{x}))-\tau{\textstyle\sum_{i=1}^{m}}\xi_{i}\langle
u_{i},w^{\prime}\rangle}{\tau^{2}/2}$
$\displaystyle=\frac{\vartheta(F(\overline{x})+\tau\Delta_{\tau}F(\overline{x})(w^{\prime}))-\vartheta(F(\overline{x}))-\tau\langle\xi,\Delta_{\tau}F(\overline{x})(w^{\prime})\rangle}{\tau^{2}/2}$
$\displaystyle\quad+\frac{2}{\tau^{2}}\Big{[}(\xi F)(\overline{x}+\tau
w^{\prime})-(\xi
F)(\overline{x})-\tau\sum_{i=1}^{m}\langle\xi_{i}u_{i},w^{\prime}\rangle\Big{]}.$
(34)
Since $\xi\in\partial\vartheta(F(\overline{x}))$, the given assumption implies
that $\overline{v}=\sum_{i=1}^{m}\xi_{i}u_{i}\in\partial(\xi
F)(\overline{x})$. Recall that $\lim_{\tau\downarrow 0,w^{\prime}\to
w}\Delta_{\tau}F(\overline{x})(w^{\prime})=dF(\overline{x})(w)$. From equality
(34) and Definition 2.6, it follows that
$d^{2}f(\overline{x}|\overline{v})(w)\geq
d^{2}\vartheta(F(\overline{x})|\xi)(dF(\overline{x})(w))+d^{2}(\xi
F)(\overline{x}\,|\,{\textstyle\sum_{i=1}^{m}}\xi_{i}u_{i})(w).$
By the arbitrariness of $(\xi,u)\in\Gamma_{\overline{x},\overline{v}}$, we
obtain inequality (33). $\Box$
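On the same assumed toy composite (ours, not the paper's), the two estimates of Theorem 4.1 are tight: for $f=|F|$ with $F(x)=x_{1}^{2}-x_{2}$, $\overline{x}=(1,1)$, $\overline{v}=(2,-1)$ and $w$ with $2w_{1}-w_{2}>0$, the lower estimate (32) with $\xi=1$ gives $d^{2}\vartheta(0|1)(2w_{1}-w_{2})+\langle\xi,\nabla^{2}F(\overline{x})(w,w)\rangle=0+2w_{1}^{2}$, and the infimum in (31) takes the same value. The sketch below checks that the second-order difference quotient indeed approaches $2w_{1}^{2}$ (for this example the quotient already converges along fixed $w$).

```python
# Numerical sketch (ours, not from the paper): the estimates of Theorem 4.1
# are tight for f = |F| with F(x) = x1^2 - x2 at xbar = (1, 1),
# vbar = (2, -1). For w with 2 w1 - w2 > 0 both (31) and (32) give 2 w1^2.

def f(x1, x2):
    return abs(x1**2 - x2)

def second_subderivative_quotient(w1, w2, tau):
    """Delta^2_tau f(xbar | vbar)(w) at xbar = (1, 1), vbar = (2, -1)."""
    inner = 2 * w1 - w2               # <vbar, w>
    return (f(1.0 + tau * w1, 1.0 + tau * w2) - f(1.0, 1.0) - tau * inner) / (tau**2 / 2)

for (w1, w2) in ((1.0, 0.5), (0.5, -1.0)):
    assert abs(second_subderivative_quotient(w1, w2, 1e-5) - 2 * w1**2) < 1e-3
```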
###### Remark 4.1
Recently, Benko and Mehlitz derived a lower estimate of
$d^{2}f(\overline{x}|\overline{v})(\cdot)$ for a general lsc $\vartheta$ but a
continuous $F$ in [28, Theorem 4.1] by studying calculus rules for the second
subderivative. Their lower estimate has a concise form but requires that $F$
is calm at $\overline{x}$ in the direction of interest.
Theorem 4.1 extends the conclusion of [27, Proposition 5.1] for the second
subderivative of the composition (1) with a parabolically epi-differentiable
convex outer function and a twice differentiable inner mapping. The upper
estimate in (31) is essentially the same as the one in [27, Equation (5.3)],
but the lower estimate in (32) or (33) differs from the one in [27, Equation
(5.2)] because the inner mapping $F$ here is allowed to be nondifferentiable.
## 5 Twice epi-differentiability for several classes of $f$
The following theorem states that $f$ is twice epi-differentiable at
$\overline{x}\in{\rm dom}\,f$ for $\overline{v}\in\partial\\!f(\overline{x})$
if $f$ is parabolically regular at $\overline{x}$ for $\overline{v}$. Its
proof is similar to that of [27, Theorem 3.8], and we include it for
completeness.
###### Theorem 5.1
Consider any $\overline{x}\in{\rm dom}\,f$ and any
$\overline{v}\in\partial\\!f(\overline{x})$. Suppose that the MSQC holds for
constraint system $F(x)\in{\rm dom}\,\vartheta$ at $\overline{x}$, that
$\overline{v}\in\widehat{\partial}\\!f(\overline{x})$, and that $f$ is
parabolically regular at $\overline{x}$ for $\overline{v}$. Then, the function
$f$ is properly twice epi-differentiable at $\overline{x}$ for $\overline{v}$
with
$d^{2}f(\overline{x}|\overline{v})(w)=\left\\{\begin{array}[]{cl}\min_{z\in\mathbb{X}}\big{\\{}d^{2}\\!f(\overline{x})(w|z)-\langle\overline{v},z\rangle\\}&{\rm
if}\ w\in\mathcal{C}_{\\!f}(\overline{x},\overline{v}),\\\ \infty&{\rm
otherwise}.\end{array}\right.$ (35)
Proof: To prove that $f$ is twice epi-differentiable at $\overline{x}$ for
$\overline{v}$, fix any $w\in\mathbb{X}$ and pick any $\tau_{k}\downarrow 0$.
Case 1: $w\in\mathcal{C}_{\\!f}(\overline{x},\overline{v})$. In this case, as
$\overline{v}\in\widehat{\partial}\\!f(\overline{x})$, from Proposition 4.1
(iii), ${\rm
dom}\,d^{2}f(\overline{x}|\overline{v})=\mathcal{C}_{\\!f}(\overline{x},\overline{v})$.
Since $f$ is parabolically regular at $\overline{x}$ for $\overline{v}$, by
using the same arguments as those for the first part of the proof of [27,
Proposition 3.6], there exists $\overline{z}\in{\rm
dom}\,d^{2}\\!f(\overline{x})(w\,|\,\cdot)$ such that
$d^{2}f(\overline{x})(w|\overline{z})-\langle\overline{v},\overline{z}\rangle=d^{2}\\!f(\overline{x}|\overline{v})(w).$
(36)
Note that $w\in\mathcal{T}_{{\rm dom}f}(\overline{x})$. By Proposition 4.2
(ii), $f$ is parabolically epi-differentiable at $\overline{x}$ for $w$, so we
can find a sequence $z^{k}\to\overline{z}$ such that
$\displaystyle d^{2}\\!f(\overline{x})(w|\overline{z})$
$\displaystyle=\lim_{k\to\infty}\frac{f(\overline{x}+\tau_{k}w+\frac{1}{2}\tau_{k}^{2}z^{k})-f(\overline{x})-\tau_{k}df(\overline{x})(w)}{\tau_{k}^{2}/2}$
$\displaystyle=\lim_{k\to\infty}\frac{f(\overline{x}+\tau_{k}w+\frac{1}{2}\tau_{k}^{2}z^{k})-f(\overline{x})-\tau_{k}\langle\overline{v},w\rangle}{\tau_{k}^{2}/2}$
$\displaystyle=\lim_{k\to\infty}(\Delta_{\tau_{k}}^{2}f(\overline{x}|\overline{v})(w^{k})+\langle\overline{v},z^{k}\rangle)\
\ {\rm with}\ w^{k}=w+\frac{1}{2}\tau_{k}z^{k},$
where the second equality is due to
$df(\overline{x})(w)=\langle\overline{v},w\rangle$ implied by
$w\in\mathcal{C}_{f}(\overline{x},\overline{v})$. Together with (36), it
follows that
$d^{2}\\!f(\overline{x}|\overline{v})(w)=\lim_{k\to\infty}\Delta_{\tau_{k}}^{2}f(\overline{x}|\overline{v})(w^{k})$.
Case 2: $w\notin\mathcal{C}_{\\!f}(\overline{x},\overline{v})$. Now from
$\overline{v}\in\widehat{\partial}\\!f(\overline{x})$ and Proposition 4.1
(iii), $w\notin{\rm dom}\,d^{2}\\!f(\overline{x}|\overline{v})$, i.e.,
$d^{2}\\!f(\overline{x}|\overline{v})(w)=\infty$. Take $w^{k}=w$ for each
$k\in\mathbb{N}$. By Definition 2.6, it holds that
$\infty=d^{2}\\!f(\overline{x}|\overline{v})(w)\leq\liminf_{k\to\infty}\Delta_{\tau_{k}}^{2}f(\overline{x}|\overline{v})(w^{k})\leq\infty,$
which shows that the sequence $w^{k}$ is such that
$d^{2}\\!f(\overline{x}|\overline{v})(w)=\lim_{k\to\infty}\Delta_{\tau_{k}}^{2}f(\overline{x}|\overline{v})(w^{k})$.
By the discussion after Definition 2.6, the above arguments show that $f$ is
twice epi-differentiable at $\overline{x}$ for $\overline{v}$. Furthermore,
equality (35) is implied by the above proof and [1, Proposition 13.64], which
in turn shows that $d^{2}f(\overline{x}|\overline{v})$ is proper. The proof is
completed. $\Box$
Theorem 5.1 shows that under mild conditions the parabolic regularity of $f$
implies its proper twice epi-differentiability. Inspired by this, in the
subsequent sections, we use the lower and upper estimates in Theorem 4.1 to
prove the parabolic regularity of the following several classes of $f$:
* (I)
$F$ is twice differentiable on the open set $\mathcal{O}$ and $\vartheta$
satisfies Assumption 1 (ii);
* (II)
$F(x)\\!=(\|x_{J_{1}}\|_{q},\ldots,\|x_{J_{m}}\|_{q})^{\top}\ (q>1)$ for
$x\in\mathbb{R}^{n}$ with $\\{J_{1},\ldots,J_{m}\\}$ being a partition of
$[n]$, and $\vartheta(z)\\!=\\!\sum_{i=1}^{m}\rho_{\lambda}(z_{i})\
(\lambda>0)$ for $z\in\mathbb{R}^{m}$ with
$\rho_{\lambda}\\!:\mathbb{R}\to\mathbb{R}$ satisfying the following
conditions:
* (C.1)
$\rho_{\lambda}$ is a PWTD function with $\rho_{\lambda}(0)=0$;
* (C.2)
$\rho_{\lambda}$ is differentiable at all $t\neq 0$ and
$\partial\rho_{\lambda}(0)=\gamma[-\lambda,\lambda]$ for some $\gamma>0$;
* (C.3)
$\rho_{\lambda}$ is regular and strictly continuous.
* (III)
$F(x)\\!=\|x_{2}\|_{q}-x_{1}\ (q>1)$ for
$x=(x_{1},x_{2})\in\mathbb{R}\times\mathbb{R}^{n-1}$ and
$\vartheta(t)=\delta_{\mathbb{R}_{-}}(t)$ for $t\in\mathbb{R}$. Now
$f=\delta_{K}$ where
$K\\!:=\\{(x_{1},x_{2})\in\mathbb{R}\times\mathbb{R}^{n-1}\,|\,\|x_{2}\|_{q}\leq
x_{1}\\}$ is known as the $q$-order cone (see [36, 37]). When $q=2$, $K$ is
precisely the popular second-order cone (also called the ice-cream cone).
* (IV)
$F(x)\\!=\lambda_{\rm max}(x)$ for $x\in\mathbb{S}^{n}$ and
$\vartheta(t)=\delta_{\mathbb{R}_{-}}(t)$ for $t\in\mathbb{R}$, where
$\mathbb{S}^{n}$ is the space of all $n\times n$ real symmetric matrices,
endowed with the trace inner product $\langle\cdot,\cdot\rangle$, i.e.
$\langle x,y\rangle={\rm tr}(x^{\top}y)$ for $x,y\in\mathbb{S}^{n}$, and its
induced Frobenius norm $\|\cdot\|_{F}$, and $\lambda_{\rm max}(x)$ denotes the
maximum eigenvalue of $x\in\mathbb{S}^{n}$. Obviously, the associated $f$ is
the indicator function of the negative semidefinite cone $\mathbb{S}_{-}^{n}$.
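As a small numerical illustration of the data appearing in types (III) and (IV), the sketch below (function names are our own, not from the paper) tests membership in the $q$-order cone $K$ and evaluates the outer map $\lambda_{\rm max}$:

```python
import numpy as np

def in_q_order_cone(x, q):
    # K = {(x1, x2) in R x R^{n-1} : ||x2||_q <= x1}  (type III)
    x = np.asarray(x, dtype=float)
    return bool(np.linalg.norm(x[1:], q) <= x[0])

def lambda_max(X):
    # maximum eigenvalue of a real symmetric matrix  (type IV outer map F)
    return float(np.linalg.eigvalsh(X)[-1])

print(in_q_order_cone([2.0, 1.0, 1.0], 2))   # ||(1,1)||_2 = sqrt(2) <= 2 -> True
print(in_q_order_cone([1.0, 1.0, 1.0], 2))   # sqrt(2) > 1 -> False
print(lambda_max(np.diag([1.0, 3.0])))       # -> 3.0
```

For $q=2$ this is exactly the second-order (ice-cream) cone test.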
### 5.1 Parabolic regularity for function $f$ of type I
For this class of $f$, as $F$ is twice differentiable on the open set
$\mathcal{O}$, at any $\overline{x}\in{\rm dom}f$, for every
$w\in\mathbb{R}^{n}$ and $\xi\in\mathbb{R}^{m}$, it holds that
$dF(\overline{x})(w)=F^{\prime}(\overline{x})w\ \ {\rm and}\ \ d^{2}(\xi
F)(\overline{x})(w)=\langle\xi,\nabla^{2}F(\overline{x})(w,w)\rangle,$
and hence the set $\Lambda_{\overline{x},\overline{v}}$ in Theorem 4.1 is
specified as
$\Lambda_{\overline{x},\overline{v}}=\\{\xi\\!\in\\!\partial\vartheta(F(\overline{x}))\,|\,\nabla
F(\overline{x})\xi\\!=\\!\overline{v}\\}$.
###### Proposition 5.1
Consider any $\overline{x}\in{\rm dom}f$ and any
$\overline{v}\in\partial\\!f(\overline{x})$. Suppose that the MSQC holds for
constraint system $F(x)\in{\rm dom}\vartheta$, that
$\partial\\!f(\overline{x})\subset\nabla
F(\overline{x})\partial\vartheta(F(\overline{x}))$, and that $\vartheta$ is
regular at $F(\overline{x})$. Then, the set
$\Lambda_{\overline{x},\overline{v}}=\big{\\{}\xi\in\partial\vartheta(F(\overline{x}))\
|\ \nabla F(\overline{x})\xi=\overline{v}\big{\\}}$ is nonempty, and for every
$w\in\mathcal{C}_{f}(\overline{x},\overline{v})$,
$\displaystyle d^{2}\\!f(\overline{x}|\overline{v})(w)$
$\displaystyle=\inf_{z\in\mathbb{X}}\Big{\\{}d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,\nabla^{2}F(\overline{x})(w,w)\\!+\\!F^{\prime}(\overline{x})z)-\langle\overline{v},z\rangle\Big{\\}}$
$\displaystyle=d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})(w))+\sup_{\xi\in\Lambda_{\overline{x},\overline{v}}}\langle\xi,\nabla^{2}F(\overline{x})(w,w)\rangle,$
and consequently, the function $f$ is parabolically regular at $\overline{x}$
for $\overline{v}$.
Proof: The nonemptiness of $\Lambda_{\overline{x},\overline{v}}$ is immediate
from the assumption $\partial\\!f(\overline{x})\subset\nabla F(\overline{x})\partial\vartheta(F(\overline{x}))$.
Fix any $w\in\mathcal{C}_{f}(\overline{x},\overline{v})$. By Proposition 4.1
(i),
$F^{\prime}(\overline{x})w\in\mathcal{C}_{\vartheta}(F(\overline{x}),\xi)$ for
any $\xi\in\Lambda_{\overline{x},\overline{v}}$. From (32) and Proposition 3.1
(ii) with $\psi=\vartheta,y=F(\overline{x})$ and $v=\xi$,
$d^{2}\\!f(\overline{x}|\overline{v})(w)\geq
d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w)+\sup_{\xi\in\Lambda_{\overline{x},\overline{v}}}\langle\xi,\nabla^{2}F(\overline{x})(w,w)\rangle.$
(37)
To achieve the desired equalities, we introduce the following optimal value
function
$\Upsilon(p)\\!:=\inf_{z\in\mathbb{X}}\sup_{u\in\mathcal{A}_{\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})w)}\Big{\\{}\langle
u,\nabla^{2}F(\overline{x})(w,w)\\!+\\!p\rangle+\langle\nabla\\!F(\overline{x})u\\!-\\!\overline{v},z\rangle\Big{\\}},$
(38)
where
$\mathcal{A}_{\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})(w))\\!:=\\!\\{u\in\\!\partial\vartheta(F(\overline{x}))\,|\,d\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w)=\\!\langle
u,F^{\prime}(\overline{x})w\rangle\\}$. By Definition 2.8, we calculate that
$F^{\prime\prime}(\overline{x};w,z)=\nabla^{2}F(\overline{x})(w,w)\\!+\\!F^{\prime}(\overline{x})z$
for $z\in\mathbb{X}$. From inequality (31) and Proposition 3.2 (iv) with
$\psi=\vartheta,y=F(\overline{x})$, it follows that
$\displaystyle d^{2}\\!f(\overline{x}|\overline{v})(w)$
$\displaystyle\leq\inf_{z\in\mathbb{X}}\big{\\{}d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,\nabla^{2}F(\overline{x})(w,w)\\!+\\!F^{\prime}(\overline{x})z)-\langle\overline{v},z\rangle\big{\\}}$
$\displaystyle=\inf_{z\in\mathbb{X}}\sup_{u\in\mathcal{A}_{\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})w)}\Big{\\{}\langle
u,\nabla^{2}F(\overline{x})(w,w)\\!+F^{\prime}(\overline{x})z\rangle\\!-\\!\langle\overline{v},z\rangle\Big{\\}}\\!$
$\displaystyle\qquad+d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w)$
$\displaystyle=\Upsilon(0)+d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w).$
(39)
Note that $\mathcal{A}_{\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})w)$
is a closed convex set because
$\partial\vartheta(F(\overline{x}))\\!=\\!\widehat{\partial}\vartheta(F(\overline{x}))$.
By combining the definition of $\Upsilon$ in (38) and Lemma 1 in Appendix, it
holds that
$\displaystyle\Upsilon(0)$
$\displaystyle\geq\\!\sup_{\eta\in\mathbb{R}^{m}}\Big{\\{}\langle\eta,\nabla^{2}F(\overline{x})(w,w)\rangle\
\ {\rm s.t.}\ \ \nabla
F(\overline{x})\eta=\overline{v},\,\eta\in\mathcal{A}_{\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})w)\Big{\\}}$
$\displaystyle=\sup_{\xi\in\Lambda_{\overline{x},\overline{v}}}\langle\xi,\nabla^{2}F(\overline{x})(w,w)\rangle,$
where the equality follows by comparing the feasible set of the maximization
problem in the middle with the expression of $\Lambda_{\overline{x},\overline{v}}$, and
the inequality becomes an equality if $\partial\Upsilon(0)\neq\emptyset$.
Along with the above (37) and (5.1), to achieve the desired equalities, we
only need to argue that $\partial\Upsilon(0)\neq\emptyset$. From the
equalities in (5.1) and the second inequality in (31), $\Upsilon(0)<\infty$.
Recall that $\Lambda_{\overline{x},\overline{v}}$ is nonempty. From the last
inequality, we have $\Upsilon(0)>-\infty$. Thus, $\Upsilon(0)$ is finite. By
invoking [1, Proposition 8.32], it suffices to argue that there exist
$\varepsilon_{1}>0$ and $c_{1}>0$ such that
$\Upsilon(p)\geq\Upsilon(0)-c_{1}\|p\|_{2}\quad{\rm for\ all}\
p\in\mathbb{R}^{m}\ {\rm with}\ \|p\|_{2}\leq\varepsilon_{1}.$ (40)
Pick any small $\varepsilon>0$. Fix any $p\in\mathbb{R}^{m}$ with
$\|p\|_{2}\leq\varepsilon$. By Proposition 3.2 (iv),
$\Upsilon(p)+d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w)=\inf_{z\in\mathbb{X}}\big{\\{}d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,\nabla^{2}F(\overline{x})(w,w)+p\\!+\\!F^{\prime}(\overline{x})z)-\langle\overline{v},z\rangle\big{\\}}.$
Note that $w\in\mathcal{T}_{{\rm dom}f}(\overline{x})$ by Proposition 4.1,
which by Lemma 2.1 implies that $F^{\prime}(\overline{x})w\in\mathcal{T}_{{\rm
dom}\vartheta}(F(\overline{x}))$. Together with Proposition 3.1 (i) with
$\psi=\vartheta,y=F(\overline{x})$, we conclude that
$d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w)$ is finite. If
there is no vector $z\in\mathbb{X}$ such that
$\nabla^{2}F(\overline{x})(w,w)+p+F^{\prime}(\overline{x})z\in\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})w)$, by Proposition 3.2
(ii)
$d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,\nabla^{2}F(\overline{x})(w,w)+p\\!+\\!F^{\prime}(\overline{x})\,\cdot)\equiv\infty$,
which along with the last equality and the finiteness of
$d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w)$ shows that
$\Upsilon(p)=\infty$, and inequality (40) follows by taking
$\varepsilon_{1}=\varepsilon$ and any $c_{1}>0$. Thus, it suffices to consider
the case where there exists $z\in\mathbb{X}$ such that
$\nabla^{2}F(\overline{x})(w,w)+p+F^{\prime}(\overline{x})z\in\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})w)$, i.e.,
$\mathcal{S}_{w}(p)\neq\emptyset$, where $\mathcal{S}_{w}$ is the
multifunction defined in (8) with $\varphi$ and $g$ replaced by $\vartheta$
and $F$, respectively. From the above discussions, we have
$\Upsilon(p)+d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w)\\!=\inf_{z\in\mathcal{S}_{w}(p)}\big{\\{}d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,\nabla^{2}F(\overline{x})(w,w)+p\\!+\\!F^{\prime}(\overline{x})z)-\langle\overline{v},z\rangle\big{\\}}.$
(41)
Pick any $z\in\mathcal{S}_{w}(p)$. Let
$u_{p}\\!:=\nabla^{2}F(\overline{x})(w,w)+p+F^{\prime}(\overline{x})z$. Then,
$u_{p}\in\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})w)$. By Lemma 2.2,
there exist $z^{0}\in\mathcal{S}_{w}(0)$ and
$b\in\mathbb{B}_{\mathbb{R}^{n}}$, the unit ball centered at the origin of
$\mathbb{R}^{n}$, such that
$z=z^{0}+\kappa\|p\|_{2}b\ \ {\rm and}\ \
\nabla^{2}F(\overline{x})(w,w)\\!+\\!F^{\prime}(\overline{x})z^{0}\in\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})w).$ (42)
Note that $\mathcal{A}_{\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})w)$ is
nonempty. By Proposition 3.2 (ii) and (iv),
$d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,\cdot)$ is a
finite convex function on the set $\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})w)$, and hence
$d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,\cdot)$ is
strictly continuous relative to $\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})w)$. Denote by $L_{0}$
the Lipschitz modulus of
$d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,\cdot)$ at
$u_{0}:=\nabla^{2}F(\overline{x})(w,w)\\!+\\!F^{\prime}(\overline{x})z^{0}$
relative to $\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),F^{\prime}(\overline{x})w)$. Then, by the
equality in (42), by shrinking $\varepsilon$ if necessary,
$\big{|}d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,u_{p})-d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,u_{0})\big{|}\leq
L_{0}\|u_{p}-u_{0}\|_{2}$. Consequently,
$\displaystyle
d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,u_{p})-\langle\overline{v},z\rangle-d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w)$
$\displaystyle\geq
d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,u_{0})-d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w)$
$\displaystyle\quad-\langle\overline{v},z^{0}\rangle-
L_{0}(\|p\|_{2}+\|F^{\prime}(\overline{x})\|\|z-z^{0}\|)-\|\overline{v}\|\|z-z^{0}\|$
$\displaystyle\geq\Upsilon(0)-L_{0}(\|p\|_{2}+\kappa\|F^{\prime}(\overline{x})\|\|p\|_{2})-\kappa\|\overline{v}\|\|p\|_{2},$
which along with (41) implies that
$\Upsilon(p)\geq\Upsilon(0)-[\kappa\|\overline{v}\|+L_{0}(1\\!+\\!\kappa\|F^{\prime}(\overline{x})\|)]\|p\|_{2}$.
By the arbitrariness of $p\in\mathbb{R}^{m}$ with $\|p\|_{2}\leq\varepsilon$,
inequality (40) holds with $\varepsilon_{1}=\varepsilon$ and
$c_{1}=\kappa\|\overline{v}\|+L_{0}(1\\!+\\!\kappa\|F^{\prime}(\overline{x})\|)$.
By Definition 2.9, the parabolic regularity of $f$ follows from the first
equality and Proposition 4.2 (i). $\Box$
Note that
$\partial\\!f(\overline{x})\supset\widehat{\partial}\\!f(\overline{x})\supset\nabla\\!F(\overline{x})\widehat{\partial}\vartheta(F(\overline{x}))$
always holds. By combining Proposition 5.1 and Theorem 5.1, we obtain the
proper twice epi-differentiability of $f$, that is, the following conclusion
holds.
###### Theorem 5.2
Consider any $\overline{x}\in{\rm dom}\,f$ and any
$\overline{v}\in\partial\\!f(\overline{x})$. Suppose that the MSQC holds for
constraint system $F(x)\in{\rm dom}\,\vartheta$ at $\overline{x}$, that
$\partial\\!f(\overline{x})\subset\nabla
F(\overline{x})\partial\vartheta(F(\overline{x}))$, and that $\vartheta$ is
regular at $F(\overline{x})$. Then, $f$ is properly twice epi-differentiable
at $\overline{x}$ for $\overline{v}$.
###### Remark 5.1
The assumption $\partial\\!f(\overline{x})\subset\nabla
F(\overline{x})\partial\vartheta(F(\overline{x}))$ in Proposition 5.1 and
Theorem 5.2 holds by [22, Theorem 3.5] if, in addition, $\vartheta$ is convex
around $F(\overline{x})$. By combining Proposition 1 (ii) in the
Appendix with [16, Pages 118-120], this assumption also holds if $\vartheta$
is strictly continuous at $F(\overline{x})$.
By Theorem 5.2, [1, Example 13.18] and [1, Theorem 13.24], we have the
following second-order necessary and sufficient optimality conditions for
problem (2) with $\vartheta$ and $F$ from type I, which extends the result of
[22, Theorem 6.1] for a PWLQ convex $\vartheta$ to a PWTD $\vartheta$.
###### Corollary 5.1
Consider problem (2) with $\vartheta$ and $F$ from type I. Fix any
$\overline{x}\in{\rm dom}f$. Suppose that the MSQC holds for system
$F(x)\\!\in{\rm dom}\vartheta$ at $\overline{x}$, that
$\partial\\!f(\overline{x})\subset\nabla
F(\overline{x})\partial\vartheta(F(\overline{x}))$, and that $\vartheta$ is
regular at $F(\overline{x})$. Let $\overline{v}:=-\nabla f_{0}(\overline{x})$.
Then the following statements hold.
* (i)
If $\overline{x}$ is a local optimal solution of (2), then
$\overline{v}\in\partial\\!f(\overline{x})$ and for any
$w\in\mathcal{C}_{\\!f}(\overline{x},\overline{v})$,
$\langle
w,\nabla^{2}\\!f_{0}(\overline{x})w\rangle+d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w)+\sup_{\xi\in\Lambda_{\overline{x},\overline{v}}}\langle\xi,\nabla^{2}F(\overline{x})(w,w)\rangle\geq
0,$ (43)
where $\Lambda_{\overline{x},\overline{v}}$ is the same as the one in
Proposition 5.1.
* (ii)
If $\overline{x}\in\\!(\partial\\!f)^{-1}(\overline{v})$ and (43) holds with
“$>$” for all
$w\in\mathcal{C}_{\\!f}(\overline{x},\overline{v})\backslash\\{0\\}$, then
$\overline{x}$ is a strong local optimal solution of (2), i.e., there exist $\varepsilon>0$
and $c>0$ such that
$\Phi(x)\geq\Phi(\overline{x})+(c/2)\|x-\overline{x}\|^{2}\quad\ \forall
x\in\mathbb{B}(\overline{x},\varepsilon).$
### 5.2 Parabolic regularity for function $f$ of type II
For this class of $f$, the outer function $\vartheta$, induced by a $\rho_{\lambda}$
satisfying conditions (C.1)-(C.3), appears frequently in group sparsity
optimization. Two popular examples are the following SCAD and MCP functions
[9, 10]:
$\displaystyle\rho_{\lambda}(t)$
$\displaystyle:=\left\\{\begin{array}[]{cl}\lambda|t|&{\rm
if\;}|t|\leq\lambda,\\\ \frac{-t^{2}+2a\lambda|t|-\lambda^{2}}{2(a-1)}&{\rm
if\;}\lambda<|t|\leq a\lambda,\\\ \frac{(a+1)}{2}\lambda^{2}&{\rm
if\;}|t|>a\lambda\end{array}\right.\ {\rm with}\ a>2;$
$\displaystyle\rho_{\lambda}(t)$ $\displaystyle:=\lambda\,{\rm
sign}(t)\\!\int_{0}^{|t|}\Big{(}1-\frac{\omega}{\lambda
b}\Big{)}_{+}d\omega\quad{\rm with}\ \ b>0.$
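In closed form, both penalties are piecewise smooth and constant for large $|t|$; the minimal sketch below (parameter values are illustrative; the MCP branch uses the symmetric closed form obtained by integrating the integrand above) evaluates them and checks continuity at the breakpoints:

```python
def scad(t, lam=1.0, a=3.7):
    # SCAD: linear on [0, lam], quadratic on (lam, a*lam], constant beyond
    s = abs(t)
    if s <= lam:
        return lam * s
    if s <= a * lam:
        return (-t * t + 2 * a * lam * s - lam * lam) / (2 * (a - 1))
    return (a + 1) / 2 * lam ** 2

def mcp(t, lam=1.0, b=2.0):
    # MCP: lam*|t| - t^2/(2b) on [0, b*lam], constant b*lam^2/2 beyond
    s = abs(t)
    if s <= b * lam:
        return lam * s - t * t / (2 * b)
    return b * lam ** 2 / 2

# continuity at the breakpoints (lam = 1, a = 3.7, b = 2)
assert abs(scad(1.0) - 1.0) < 1e-12            # lam*|t| = lam^2 at t = lam
assert abs(scad(3.7) - (3.7 + 1) / 2) < 1e-12  # (a+1)/2 * lam^2 at t = a*lam
assert abs(mcp(2.0) - 1.0) < 1e-12             # b*lam^2/2 at t = b*lam
```

Both satisfy conditions (C.1)-(C.3): they vanish at the origin, are differentiable away from $0$, and behave like $\gamma\lambda|t|$ (with $\gamma=1$) near $0$.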
For convenience, in the rest of this section, let $F_{i}(z):=\|z\|_{q}$ for
each $i\in[m]$ with $z\in\mathbb{R}^{J_{i}}$, define ${\rm
gs}(x)\\!:=\\{i\in[m]\ |\ x_{J_{i}}\neq 0\\}$ and $\overline{\rm
gs}(x)\\!:=[m]\backslash{\rm gs}(x)$ for $x\in\mathbb{R}^{n}$, and write
$p=q/(q-1)$. Obviously, at any $x\in\mathbb{R}^{n}$, every $F_{i}$ for
$i\in{\rm gs}(x)$ is at least twice continuously differentiable at $x_{J_{i}}$
with
$\nabla F_{i}(x_{J_{i}})={\rm
sign}(x_{J_{i}})\circ|x_{J_{i}}|^{q-1}\|x_{J_{i}}\|_{q}^{1-q},$
where $\circ$ denotes the Hadamard product of two vectors. The mapping $F$ is
Lipschitz continuous and directionally differentiable on $\mathbb{R}^{n}$, and
for any $x,w\in\mathbb{R}^{n}$,
$dF(x)(w)=(dF_{1}(x_{J_{1}})(w_{J_{1}}),\ldots,dF_{m}(x_{J_{m}})(w_{J_{m}}))^{\top}$
with
$dF_{i}(x_{J_{i}})(w_{J_{i}})=\left\\{\begin{array}[]{cl}\langle\nabla\\!F_{i}(x_{J_{i}}),w_{J_{i}}\rangle&{\rm
if}\ i\in{\rm gs}(x),\\\ \|w_{J_{i}}\|_{q}&{\rm otherwise}\end{array}\right.\
\ {\rm for\ each}\ i\in[m],$ (44)
and for any $\xi\in\mathbb{R}^{m}$, $d^{2}(\xi
F)(x)(w)=d^{2}(\xi_{1}F_{1})(x_{J_{1}})(w_{J_{1}})+\cdots+d^{2}(\xi_{m}F_{m})(x_{J_{m}})(w_{J_{m}})$
with
$d^{2}(\xi_{i}F_{i})(x_{J_{i}})(w_{J_{i}})=\left\\{\begin{array}[]{cl}\xi_{i}\langle
w_{J_{i}},\nabla^{2}\\!F_{i}(x_{J_{i}})w_{J_{i}}\rangle&{\rm if}\ i\in{\rm
gs}(x),\\\ 0&{\rm otherwise}\end{array}\right.\ \ {\rm for\ each}\ i\in[m].$
(45)
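The block gradient formula $\nabla F_{i}(x_{J_{i}})={\rm sign}(x_{J_{i}})\circ|x_{J_{i}}|^{q-1}\|x_{J_{i}}\|_{q}^{1-q}$ can be checked against central finite differences; the vector and exponent below are arbitrary illustrative choices:

```python
import numpy as np

def grad_qnorm(x, q):
    # gradient of x -> ||x||_q at x != 0: sign(x) ∘ |x|^{q-1} * ||x||_q^{1-q}
    return np.sign(x) * np.abs(x) ** (q - 1) * np.linalg.norm(x, q) ** (1 - q)

q = 3.0
x = np.array([1.0, -2.0, 0.5])
g = grad_qnorm(x, q)

# central finite-difference approximation of the gradient
eps = 1e-6
fd = np.array([(np.linalg.norm(x + eps * e, q) - np.linalg.norm(x - eps * e, q)) / (2 * eps)
               for e in np.eye(len(x))])
assert np.allclose(g, fd, atol=1e-6)
```

As a byproduct, the gradient always lies on the dual unit sphere: $\|\nabla F_{i}(x_{J_{i}})\|_{p}=1$ with $p=q/(q-1)$.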
It is not hard to check that the mapping $F$ is parabolically
semidifferentiable, and at any $x,w\in\mathbb{R}^{n}$,
$F^{\prime\prime}(x;w,z)=\big{(}F_{1}^{\prime\prime}(x_{J_{1}};w_{J_{1}},z_{J_{1}}),\ldots,F_{m}^{\prime\prime}(x_{J_{m}};w_{J_{m}},z_{J_{m}})\big{)}^{\top}$
for $z\in\mathbb{R}^{n}$ with
$F_{i}^{\prime\prime}(x_{J_{i}};w_{J_{i}},z_{J_{i}})=\left\\{\begin{array}[]{cl}\|z_{J_{i}}\|_{q}&{\rm
if\;}i\in\overline{\rm gs}(x)\cap\overline{\rm gs}(w),\\\
\langle\nabla\\!F_{i}(w_{J_{i}}),z_{J_{i}}\rangle&{\rm if\;}i\in\overline{\rm
gs}(x)\cap{\rm gs}(w),\\\ \\!\langle
w_{J_{i}},\nabla^{2}\\!F_{i}(x_{J_{i}})w_{J_{i}}\rangle\\!+\\!\langle\nabla\\!F_{i}(x_{J_{i}}),z_{J_{i}}\rangle&{\rm
if\;}i\in{\rm gs}(x).\end{array}\right.$ (46)
By [1, Theorem 8.30], conditions (C.2)-(C.3) imply that
$d\rho_{\lambda}(0)(\omega)\\!=\\!\gamma\lambda|\omega|$ for any
$\omega\in\mathbb{R}$. Then, from conditions (C.1)-(C.3), one easily obtains
the following properties of $\vartheta$.
###### Lemma 5.1
Consider the function $\vartheta$ of type II. Fix any $y\in\mathbb{R}^{m}$
with ${\rm supp}(y):=\\{i\in[m]\,|\,y_{i}\neq 0\\}$.
* (i)
For any $w\in\mathbb{R}^{m}$, $d\vartheta(y)(w)=\sum_{i\in{\rm
supp}(y)}\rho_{\lambda}^{\prime}(y_{i})w_{i}+\gamma\lambda\sum_{i\notin{\rm
supp}(y)}|w_{i}|$.
* (ii)
The function $\vartheta$ is regular at $y$ with
$\emptyset\neq\widehat{\partial}\vartheta(y)=\partial\vartheta(y)=\partial\rho_{\lambda}(y_{1})\times\cdots\times\partial\rho_{\lambda}(y_{m})$
where
$\partial\rho_{\lambda}(y_{i})=\left\\{\begin{array}[]{cl}[-\gamma\lambda,\gamma\lambda]&\
{\rm if}\ i\notin{\rm supp}(y),\\\ \\{\rho_{\lambda}^{\prime}(y_{i})\\}&\ {\rm
if\;}\ i\in{\rm supp}(y)\end{array}\right.\ \ {\rm for\ each}\ i\in[m].$
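Part (i) can be confirmed with a one-sided difference quotient; the sketch below takes $\rho_{\lambda}$ to be the SCAD penalty with $\lambda=1$ (for which $\gamma=1$), an illustrative choice rather than anything mandated by the lemma:

```python
def scad(t, lam=1.0, a=3.7):
    # SCAD penalty; near 0 it equals lam*|t|, so gamma = 1
    s = abs(t)
    if s <= lam:
        return lam * s
    if s <= a * lam:
        return (-t * t + 2 * a * lam * s - lam * lam) / (2 * (a - 1))
    return (a + 1) / 2 * lam ** 2

def theta(z):                        # theta(z) = sum_i rho_lambda(z_i)
    return sum(scad(zi) for zi in z)

y = [0.5, 0.0]                       # supp(y) = {1}
w = [1.0, -2.0]
# Lemma 5.1 (i) predicts: rho'(0.5)*w_1 + gamma*lam*|w_2| = 1*1 + 1*2 = 3
t = 1e-7
fd = (theta([yi + t * wi for yi, wi in zip(y, w)]) - theta(y)) / t
assert abs(fd - 3.0) < 1e-4
```

The smooth term contributes $\rho_{\lambda}^{\prime}(y_{i})w_{i}$ on ${\rm supp}(y)$, while the kink at $0$ contributes the one-sided rate $\gamma\lambda|w_{i}|$ off the support.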
The following lemma uses Lemma 5.1 (ii) to characterize the subdifferential of
$f$.
###### Lemma 5.2
Consider any $\overline{x}\in\mathbb{R}^{n}$ and any
$v\in\partial\\!f(\overline{x})$. For $i\in{\rm gs}(\overline{x})$,
$v_{i}=\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})\nabla\\!F_{i}(\overline{x}_{J_{i}})$;
and for $i\in\overline{{\rm gs}}(\overline{x})$, there exist
$\eta_{i}\in[-\gamma\lambda,\gamma\lambda]$ and
$\zeta_{i}\in\mathbb{R}^{|J_{i}|}$ such that $v_{i}=\eta_{i}\zeta_{i}$, where
$\|\zeta_{i}\|_{p}\leq 1$ if $\eta_{i}\geq 0$, otherwise
$\|\zeta_{i}\|_{p}=1$.
Proof: From the strict continuity of $\vartheta$ and $F$ and [1, Theorem 10.49
& Proposition 9.24 (b)], there exists
$\eta\in\partial\vartheta(F(\overline{x}))$ such that
$v_{i}\in\partial(\eta_{i}F_{i})(\overline{x}_{J_{i}})$ for each $i\in[m]$. By
Lemma 5.1 (ii), for each $i\in{\rm gs}(\overline{x})$,
$\eta_{i}=\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})$, so
$v_{i}=\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})\nabla\\!F_{i}(\overline{x}_{J_{i}})$;
and for each $i\in\overline{{\rm gs}}(\overline{x})$,
$\eta_{i}\in[-\gamma\lambda,\gamma\lambda]$. Fix any $i\in\overline{{\rm
gs}}(\overline{x})$. If $\eta_{i}\geq 0$,
$\partial(\eta_{i}F_{i})(\overline{x}_{J_{i}})=\eta_{i}\partial
F_{i}(\overline{x}_{J_{i}})=\eta_{i}\\{d\in\mathbb{R}^{|J_{i}|}\,|\,\|d\|_{p}\leq
1\\}$, and if $\eta_{i}<0$, by [1, Corollary 9.21], a simple calculation yields
that
$\partial(\eta_{i}F_{i})(\overline{x}_{J_{i}})=\partial(-|\eta_{i}|\|\cdot\|_{q})(\overline{x}_{J_{i}})=\eta_{i}\\{d\in\mathbb{R}^{|J_{i}|}\,|\,\|d\|_{p}=1\\}$.
$\Box$
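The dual-norm facts behind Lemma 5.2 (the $p$-norm unit ball for $\eta_{i}\geq 0$, the unit sphere for $\eta_{i}<0$, and the vector attaining Hölder's equality) admit a quick numerical check; the vector below is illustrative:

```python
import numpy as np

q = 3.0
p = q / (q - 1)
xi = np.array([1.0, -2.0, 0.5])

# dual vector attaining <u, xi> = ||xi||_q over the p-norm unit ball
u = np.sign(xi) * np.abs(xi) ** (q - 1) * np.linalg.norm(xi, q) ** (1 - q)

assert abs(np.linalg.norm(u, p) - 1.0) < 1e-10          # lies on the dual unit sphere
assert abs(np.dot(u, xi) - np.linalg.norm(xi, q)) < 1e-10
```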
Now we are ready to establish that the function $f$ of type II is twice epi-
differentiable.
###### Theorem 5.3
Consider any $\overline{x}\in\mathbb{R}^{n}$ and any
$\overline{v}\in\partial\\!f(\overline{x})$. Then, for every
$w\in\mathcal{C}_{\\!f}(\overline{x},\overline{v})$,
$\displaystyle d^{2}\\!f(\overline{x}|\overline{v})(w)$
$\displaystyle=\min_{z\in\mathbb{R}^{n}}\big{\\{}d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)\,|\,F^{\prime\prime}(\overline{x};w,z))-\langle\overline{v},z\rangle\\}$
$\displaystyle=d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w))+d^{2}(\overline{\xi}F)(\overline{x})(w),$
(47)
where
$\overline{\xi}_{i}=\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})$ for
$i\in{\rm gs}(\overline{x})$ and $\overline{\xi}_{i}=\gamma\lambda$ for
$i\in\overline{{\rm gs}}(\overline{x})$, and hence $f$ is both parabolically
regular and properly twice epi-differentiable at $\overline{x}$ for
$\overline{v}$.
Proof: Note that ${\rm dom}\,\vartheta=\mathbb{R}^{m}$, so the MSQC holds for
the system $F(x)\in{\rm dom}\vartheta$ at $\overline{x}$. We first argue that
$\overline{\xi}\in\Lambda_{\overline{x},\overline{v}}$. By Lemma 5.1 (ii),
clearly,
$\overline{\xi}\in\partial\vartheta(F(\overline{x}))=\widehat{\partial}\vartheta(F(\overline{x}))$.
From $\overline{v}\in\partial\\!f(\overline{x})$ and Lemma 5.2,
$\overline{v}_{i}\\!=\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})\nabla\\!F_{i}(\overline{x}_{J_{i}})$
for each $i\in{\rm gs}(\overline{x})$, and for each $i\in\overline{{\rm
gs}}(\overline{x})$, there exist
$\overline{\eta}_{i}\in[-\gamma\lambda,\gamma\lambda]$ and
$\overline{\zeta}_{i}\in\mathbb{R}^{|J_{i}|}$ such that
$\overline{v}_{i}=\overline{\eta}_{i}\overline{\zeta}_{i}$, where
$\|\overline{\zeta}_{i}\|_{p}\leq 1$ if $\overline{\eta}_{i}\geq 0$, otherwise
$\|\overline{\zeta}_{i}\|_{p}=1$. Consequently,
$\langle\overline{v},w^{\prime}\rangle=\\!\sum_{i\in{\rm
gs}(\overline{x})}\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})\langle\nabla\\!F_{i}(\overline{x}_{J_{i}}),w_{J_{i}}^{\prime}\rangle+\\!\sum_{i\in\overline{{\rm
gs}}(\overline{x})}\overline{\eta}_{i}\langle\overline{\zeta}_{i},w_{J_{i}}^{\prime}\rangle\quad\
\forall w^{\prime}\in\mathbb{R}^{n}.$ (48)
On the other hand, from equation (44) and Lemma 5.1 (i), it follows that for
any $w^{\prime}\in\mathbb{R}^{n}$,
$\langle\overline{\xi},dF(\overline{x})(w^{\prime})\rangle\\!=\\!\sum_{i\in{\rm
gs}(\overline{x})}\rho_{\lambda}^{\prime}(F_{i}(\overline{x}_{J_{i}}))\langle\nabla\\!F_{i}(\overline{x}_{J_{i}}),w_{J_{i}}^{\prime}\rangle\\!+\\!\gamma\lambda\\!\sum_{i\in\overline{{\rm
gs}}(\overline{x})}\|w_{J_{i}}^{\prime}\|_{q}=d\vartheta(F(\overline{x}))(dF(\overline{x})(w^{\prime})).$
The last two equations imply that
$\langle\overline{\xi},dF(\overline{x})(w^{\prime})\rangle\geq\langle\overline{v},w^{\prime}\rangle$
for all $w^{\prime}\in\mathbb{R}^{n}$. Along with
$\overline{\xi}\in\partial\vartheta(F(\overline{x}))$,
$\overline{\xi}\in\Lambda_{\overline{x},\overline{v}}$. In particular, from
$\overline{\xi}\in\Lambda_{\overline{x},\overline{v}}$ and the equivalence in
(6), for any $w^{\prime}\in\mathbb{R}^{n}$,
$\langle\overline{v},w^{\prime}\rangle\leq\langle\overline{\xi},dF(\overline{x})(w^{\prime})\rangle\leq
d\vartheta(F(\overline{x}))(dF(\overline{x})(w^{\prime})),$
which by Proposition 4.1 (i) implies that
$dF(\overline{x})(w^{\prime})\in\mathcal{C}_{\vartheta}(F(\overline{x}),\overline{\xi})$
for those $w^{\prime}\in\mathcal{C}_{f}(\overline{x},\overline{v})$.
Fix any $w\in\mathcal{C}_{f}(\overline{x},\overline{v})$. From (32),
$\overline{\xi}\in\Lambda_{\overline{x},\overline{v}}$,
$dF(\overline{x})(w)\in\mathcal{C}_{\vartheta}(F(\overline{x}),\overline{\xi})$
and Proposition 3.1 (ii), we have
$\displaystyle d^{2}\\!f(\overline{x}|\overline{v})(w)$ $\displaystyle\geq
d^{2}\vartheta(F(\overline{x})\,|\,\overline{\xi})(dF(\overline{x})(w))+d^{2}(\overline{\xi}F)(\overline{x})(w)$
$\displaystyle=d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w))+d^{2}(\overline{\xi}F)(\overline{x})(w).$
In addition, from (31) and Proposition 3.2 (iv) with
$\psi=\vartheta,y=F(\overline{x})$, it follows that
$\displaystyle d^{2}\\!f(\overline{x}|\overline{v})(w)$
$\displaystyle\leq\inf_{z\in\mathbb{R}^{n}}\big{\\{}d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|F^{\prime\prime}(\overline{x};w,z))-\langle\overline{v},z\rangle\\}$
$\displaystyle=d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w))+\\!\inf_{z\in\mathbb{R}^{n}}\\!\Big{\\{}\sup_{u\in\mathcal{A}_{\vartheta}(F(\overline{x}),dF(\overline{x})(w))}\\!\langle
u,F^{\prime\prime}(\overline{x};w,z)\rangle\\!-\\!\langle\overline{v},z\rangle\Big{\\}}$
where
$\mathcal{A}_{\vartheta}(F(\overline{x}),dF(\overline{x})(w))\\!=\\!\big{\\{}u\in\partial\vartheta(F(\overline{x}))\,|\,d\vartheta(F(\overline{x}))(dF(\overline{x})(w))=\langle
u,dF(\overline{x})(w)\rangle\\}$. The last two equations demonstrate that to
achieve the equalities in (5.3), it suffices to establish that
$\Gamma_{0}\\!:=\\!\inf_{z\in\mathbb{R}^{n}}\\!\Big{\\{}\sup_{u\in\mathcal{A}_{\vartheta}(F(\overline{x}),dF(\overline{x})(w))}\\!\langle
u,F^{\prime\prime}(\overline{x};w,z)\rangle-\langle\overline{v},z\rangle\Big{\\}}=d^{2}(\overline{\xi}F)(\overline{x})(w).$
(49)
For this purpose, we first claim that
$u\in\mathcal{A}_{\vartheta}(F(\overline{x}),dF(\overline{x})(w))$ iff for
each $i\in[m]$,
$u_{i}\in\left\\{\begin{array}[]{cl}\\{\rho_{\lambda}^{\prime}(F_{i}(\overline{x}_{J_{i}}))\\}&{\rm
if}\ i\in{\rm gs}(\overline{x}),\\\ \\{\gamma\lambda\\}&{\rm if}\
i\in\overline{{\rm gs}}(\overline{x})\cap{\rm gs}(w),\\\ \
[-\gamma\lambda,\gamma\lambda]&{\rm if}\ i\in\overline{{\rm
gs}}(\overline{x})\cap\overline{{\rm gs}}(w).\end{array}\right.$ (50)
Indeed, from Lemma 5.1 (i) and equation (44),
$u\in\mathcal{A}_{\vartheta}(F(\overline{x}),dF(\overline{x})(w))$ if and only
if
$\displaystyle\sum_{i\in{\rm
gs}(\overline{x})}u_{i}\langle\nabla\\!F_{i}(\overline{x}_{J_{i}}),w_{J_{i}}\rangle+\sum_{i\in\overline{{\rm
gs}}(\overline{x})}u_{i}\|w_{J_{i}}\|_{q}=\langle
u,dF(\overline{x})(w)\rangle=d\vartheta(F(\overline{x}))(dF(\overline{x})(w))$
$\displaystyle\qquad\qquad=\sum_{i\in{\rm
gs}(\overline{x})}\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})\langle\nabla\\!F_{i}(\overline{x}_{J_{i}}),w_{J_{i}}\rangle+\gamma\lambda\sum_{i\in\overline{{\rm
gs}}(\overline{x})}\|w_{J_{i}}\|_{q}\ \ {\rm and}\ \
u\in\partial\vartheta(F(\overline{x})).$ (51)
From $w\in\mathcal{C}_{f}(\overline{x},\overline{v})$, Proposition 4.1 (i),
Lemma 5.1 (i) and equation (44), it holds that
$\langle\overline{v},w\rangle=\sum_{i\in{\rm
gs}(\overline{x})}\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})\langle\nabla\\!F_{i}(x_{J_{i}}),w_{J_{i}}\rangle\\!+\gamma\lambda\\!\sum_{i\in\overline{{\rm
gs}}(\overline{x})}\|w_{J_{i}}\|_{q},$
which along with the above (48) for $w^{\prime}=w$ implies that the following
equality holds
$\sum_{i\in\overline{{\rm
gs}}(\overline{x})}\gamma\lambda\|w_{J_{i}}\|_{q}\\!=\\!\sum_{i\in\overline{{\rm
gs}}(\overline{x})}\overline{\eta}_{i}\langle\overline{\zeta}_{i},w_{J_{i}}\rangle\
\ {\rm or}\ \sum_{i\in\overline{{\rm gs}}(\overline{x})\cap{\rm
gs}(w)}\\!\gamma\lambda\|w_{J_{i}}\|_{q}\\!=\\!\sum_{i\in\overline{{\rm
gs}}(\overline{x})\cap{\rm gs}(w)}\\!|\overline{\eta}_{i}|\langle{\rm
sign}(\overline{\eta}_{i})\overline{\zeta}_{i},w_{J_{i}}\rangle.$
Note that $\langle u,\xi\rangle\leq\|u\|_{p}\|\xi\|_{q}\leq\|\xi\|_{q}$ for any
$u,\xi\in\mathbb{R}^{|J_{i}|}$ with $\|u\|_{p}\leq 1$, and for $\xi\neq 0$ the equality
$\langle u,\xi\rangle=\|\xi\|_{q}$ holds iff $u={\rm sign}(\xi)\circ|\xi|^{q-1}\|\xi\|_{q}^{1-q}$. The last equality
implies that, for each $i\in\overline{{\rm gs}}(\overline{x})\cap{\rm gs}(w)$,
${\rm sign}(\overline{\eta}_{i})\overline{\zeta}_{i}={\rm
sign}(w_{J_{i}})\circ|w_{J_{i}}|^{q-1}\|w_{J_{i}}\|_{q}^{1-q}=\nabla\\!F_{i}(w_{J_{i}})\
\ {\rm and}\ \ |\overline{\eta}_{i}|=\gamma\lambda.$ (52)
By combining (52) with (5.2) and invoking Lemma 5.1 (ii), we obtain the
claimed relation in (50). For every $z\in\mathbb{R}^{n}$, from equalities in
(52) and equation (48) with $w^{\prime}=z$, we have
$\displaystyle\langle\overline{v},z\rangle$ $\displaystyle=\sum_{i\in{\rm
gs}(\overline{x})}\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})\langle\nabla\\!F_{i}(\overline{x}_{J_{i}}),z_{J_{i}}\rangle\\!+\\!\sum_{i\in\overline{{\rm
gs}}(\overline{x})}\overline{\eta}_{i}\langle\overline{\zeta}_{i},z_{J_{i}}\rangle$
$\displaystyle=\\!\sum_{i\in{\rm
gs}(\overline{x})}\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})\langle\nabla\\!F_{i}(\overline{x}_{J_{i}}),z_{J_{i}}\rangle\\!+\\!\sum_{i\in\overline{{\rm
gs}}(\overline{x})\cap\overline{{\rm
gs}}(w)}\overline{\eta}_{i}\langle\overline{\zeta}_{i},z_{J_{i}}\rangle$
$\displaystyle\qquad+\\!\sum_{i\in\overline{{\rm gs}}(\overline{x})\cap{\rm
gs}(w)}|\overline{\eta}_{i}|\langle{\rm
sign}(\overline{\eta}_{i})\overline{\zeta}_{i},z_{J_{i}}\rangle$
$\displaystyle=\sum_{i\in{\rm
gs}(\overline{x})}\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})\big{[}\langle\nabla\\!F_{i}(\overline{x}_{J_{i}}),z_{J_{i}}\rangle+\langle
w_{J_{i}},\nabla^{2}\\!F_{i}(\overline{x}_{J_{i}})w_{J_{i}}\rangle\big{]}+\\!\sum_{i\in\overline{{\rm
gs}}(\overline{x})\cap\overline{{\rm
gs}}(w)}\overline{\eta}_{i}\langle\overline{\zeta}_{i},z_{J_{i}}\rangle$
$\displaystyle\quad+\gamma\lambda\\!\sum_{i\in\overline{{\rm
gs}}(\overline{x})\cap{\rm gs}(w)}\langle\nabla
F_{i}(w_{J_{i}}),z_{J_{i}}\rangle-\sum_{i\in{\rm
gs}(\overline{x})}\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})\langle
w_{J_{i}},\nabla^{2}\\!F_{i}(\overline{x}_{J_{i}})w_{J_{i}}\rangle.$
Denote by $\Xi(z)$ the sum of the first three terms on the right-hand side. By
the definition of $\Gamma_{0}$,
$\displaystyle\Gamma_{0}$
$\displaystyle=\inf_{z\in\mathbb{R}^{n}}\sup_{u\in\mathcal{A}_{\vartheta}(F(\overline{x}),dF(\overline{x})(w))}\Big{\\{}\langle
u,F^{\prime\prime}(\overline{x};w,z)\rangle-\Xi(z)\Big{\\}}$
$\displaystyle\qquad+\sum_{i\in{\rm
gs}(\overline{x})}\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})\langle
w_{J_{i}},\nabla^{2}\\!F_{i}(\overline{x}_{J_{i}})w_{J_{i}}\rangle$
$\displaystyle=\inf_{z\in\mathbb{R}^{n}}\sup_{u\in\mathcal{A}_{\vartheta}(F(\overline{x}),dF(\overline{x})(w))}\Big{\\{}\langle
u,F^{\prime\prime}(\overline{x};w,z)\rangle-\Xi(z)\Big{\\}}+d^{2}(\overline{\xi}F)(\overline{x})(w),$
(53)
where the second equality uses equation (45) and
$\overline{\xi}_{i}=\rho_{\lambda}^{\prime}(\|\overline{x}_{J_{i}}\|_{q})$ for
each $i\in{\rm gs}(\overline{x})$. Now fix any $z\in\mathbb{R}^{n}$. For every
$u\in\mathcal{A}_{\vartheta}(F(\overline{x}),dF(\overline{x})(w))$, by using
(50) and (46) and comparing the expression of $\Xi(z)$ with that of $\langle
u,F^{\prime\prime}(\overline{x};w,z)\rangle$, we have
$\langle
u,F^{\prime\prime}(\overline{x};w,z)\rangle-\Xi(z)=\\!\sum_{i\in\overline{{\rm
gs}}(\overline{x})\cap\overline{{\rm
gs}}(w)}\big{[}u_{i}\|z_{J_{i}}\|_{q}-\overline{\eta}_{i}\langle\overline{\zeta}_{i},z_{J_{i}}\rangle\big{]}$
which, along with the fact that
$u\in\mathcal{A}_{\vartheta}(F(\overline{x}),dF(\overline{x})(w))$ iff (50)
holds, implies that
$\displaystyle\sup_{u\in\mathcal{A}_{\vartheta}(F(\overline{x}),dF(\overline{x})(w))}\Big{\\{}\langle
u,F^{\prime\prime}(\overline{x};w,z)\rangle-\Xi(z)\Big{\\}}\\!=\\!\sum_{i\in\overline{{\rm
gs}}(\overline{x})\cap\overline{{\rm
gs}}(w)}\sup_{t\in[-\gamma\lambda,\gamma\lambda]}\Big{\\{}t\|z_{J_{i}}\|_{q}-\overline{\eta}_{i}\langle\overline{\zeta}_{i},z_{J_{i}}\rangle\Big{\\}}.$
Together with the above equality (5.2), we immediately obtain
$\Gamma_{0}=\sum_{i\in\overline{{\rm gs}}(\overline{x})\cap\overline{{\rm
gs}}(w)}\inf_{z\in\mathbb{R}^{J_{i}}}\sup_{t\in[-\gamma\lambda,\gamma\lambda]}\Big{\\{}t\|z\|_{q}-\overline{\eta}_{i}\langle\overline{\zeta}_{i},z\rangle\Big{\\}}+d^{2}(\overline{\xi}F)(\overline{x})(w).$
Note that
$\inf_{z\in\mathbb{R}^{J_{i}}}\sup_{t\in[-\gamma\lambda,\gamma\lambda]}\big{\\{}t\|z\|_{q}-\overline{\eta}_{i}\langle\overline{\zeta}_{i},z\rangle\big{\\}}\geq
0$ and the equality holds when $z=0$. This means that the desired equality
(49) holds, so $f$ is parabolically regular at $\overline{x}$ for
$\overline{v}$.
From equation (44), $dF(\overline{x})(0)=0$, which along with Proposition 4.1
(iv) implies that
$\mathcal{C}_{\\!f}(\overline{x},\overline{v})\neq\emptyset$. Note that the
function $\mathbb{R}^{l}\ni z\mapsto\rho_{\lambda}(\|z\|_{q})$ is weakly
convex because $\rho_{\lambda}$ is a nondecreasing weakly convex function.
Hence, the function $f$ is weakly convex, which implies that
$\widehat{\partial}f(\overline{x})=\partial f(\overline{x})$. By invoking
Theorem 5.1 and equality (5.3), $f$ is properly twice epi-differentiable at
$\overline{x}$ for $\overline{v}$. $\Box$
Combining Theorem 5.3 and [1, Theorem 13.24], we have the following corollary.
###### Corollary 5.2
For problem (2) with $\vartheta$ and $F$ from type II, consider any
$\overline{x}\in\mathbb{R}^{n}$.
* (i)
If $\overline{x}$ is a local optimal solution, then
$-\\!\nabla\\!f_{0}(\overline{x})\in\partial\\!f(\overline{x})$, and for any
$w\in\mathcal{C}_{\\!f}(\overline{x},\\!-\\!\nabla\\!f_{0}(\overline{x}))$,
$\langle
w,\nabla^{2}\\!f_{0}(\overline{x})w\rangle+d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w))+d^{2}(\overline{\xi}F)(\overline{x})(w)\geq
0,$ (54)
where
$\overline{\xi}_{i}=\rho_{\lambda}^{\prime}(F_{i}(\overline{x}_{J_{i}}))$ for
$i\in{\rm gs}(\overline{x})$ and $\overline{\xi}_{i}=\gamma\lambda$ for
$i\in\overline{{\rm gs}}(\overline{x})$.
* (ii)
If $-\nabla\\!f_{0}(\overline{x})\in\partial f(\overline{x})$ and inequality
(54) holds with “$>$” for all
$w\in\mathcal{C}_{\\!f}(\overline{x},-\\!\nabla\\!f_{0}(\overline{x}))\backslash\\{0\\}$,
then $\overline{x}$ is a strong local optimal solution of problem (2).
### 5.3 Parabolic regularity for function $f$ of type III
In this part, we shall establish the parabolic regularity of $f=\delta_{K}$
and then its twice epi-differentiability, where $K$ is the $q$-order cone.
###### Theorem 5.4
Let $f$ be the composite function of type III. Consider any
$\overline{x}\in{\rm dom}f$ and $\overline{v}\in\partial\\!f(\overline{x})$.
Then, for every $w\in\mathcal{C}_{f}(\overline{x},\overline{v})$,
$\displaystyle d^{2}\\!f(\overline{x}|\overline{v})(w)$
$\displaystyle=\inf_{z\in\mathbb{R}^{n}}\big{\\{}d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)\,|\,d^{2}F(\overline{x})(w|z))-\langle\overline{v},z\rangle\\}$
$\displaystyle=\left\\{\begin{array}[]{cl}0&{\rm if}\ \overline{x}=0{\ \rm or\
}\overline{x}\in{\rm int}\,K,\\\
\frac{\|\overline{v}\|_{2}}{2^{1/p}}\nabla^{2}F(\overline{x})(w,w)&{\rm if\
}\overline{x}\in{\rm bd}\,K\end{array}\right.\ {\rm with}\ p=q/(q-1),$ (57)
and hence $f$ is both parabolically regular and properly twice epi-
differentiable at $\overline{x}$ for $\overline{v}$.
Proof: Note that ${\rm dom}\,\vartheta=\mathbb{R}_{-}$, $F$ is a convex
function, and the MSQC holds for the constraint system $F(x)\in{\rm dom}\vartheta$
at $\overline{x}$ because the Slater constraint qualification holds. Fix any
$w\in\mathcal{C}_{f}(\overline{x},\overline{v})$. Write
$w=\\!(w_{1},w_{2})\in\mathbb{R}\times\mathbb{R}^{n-1}$ and
$\overline{v}=(\overline{v}_{1},\overline{v}_{2})\in\mathbb{R}\times\mathbb{R}^{n-1}$.
By Proposition 4.1 (i), $w\in\mathcal{T}_{{\rm dom}\,f}(\overline{x})$ and
$d\vartheta(F(\overline{x}))(dF(\overline{x})(w))=\langle\overline{v},w\rangle$.
Hence, $dF(\overline{x})(w)\in{\rm
dom}\,d\vartheta(F(\overline{x}))=\\!\mathcal{T}_{\mathbb{R}_{-}}(F(\overline{x}))$
where the equality is due to Lemma 3.1 (ii). By invoking [1, Theorem 8.2], it
follows that
$0=\delta_{\mathcal{T}_{\mathbb{R}_{-}}(F(\overline{x}))}(dF(\overline{x})(w))=d\vartheta(F(\overline{x}))(dF(\overline{x})(w))=\langle\overline{v},w\rangle.$
(58)
We split the proof into the following three cases: $\overline{x}\in{\rm
int}\,K$, $\overline{x}=0$ and $\overline{x}\in{\rm bd}\,K\backslash\\{0\\}$.
Case 1: $\overline{x}\in{\rm int}\,K$. In this case, $F(\overline{x})<0$ and
$\partial\\!f(\overline{x})=\mathcal{N}_{K}(\overline{x})=\\{0\\}$. Hence,
$\overline{v}=0,\,\partial\vartheta(F(\overline{x}))=\\{0\\}$ and
$\Lambda_{\overline{x},\overline{v}}=\\{0\\}$. Together with (58),
$dF(\overline{x})(w)\in\mathcal{C}_{\vartheta}(F(\overline{x}),0)$. By
invoking inequality (32), it follows that $d^{2}\\!f(\overline{x}|0)(w)\geq
d^{2}\vartheta(F(\overline{x})|0)(dF(\overline{x})(w))=0$ where the equality
is by the polyhedrality of $\vartheta=\delta_{\mathbb{R}_{-}}$, while from
inequality (31) and Proposition 3.2 (iv) with
$\psi=\vartheta,y=F(\overline{x})$,
$d^{2}\\!f(\overline{x}|0)(w)\leq\inf_{z\in\mathbb{R}^{n}}d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|d^{2}F(\overline{x})(w|z))=0.$
Thus,
$d^{2}\\!f(\overline{x}|0)(w)=\inf_{z\in\mathbb{R}^{n}}\big{\\{}d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|d^{2}F(\overline{x})(w|z))-\langle\overline{v},z\rangle\\}=0$.
Case 2: $\overline{x}=0$. Now
$F(\overline{x})=0,\partial\vartheta(F(\overline{x}))=\mathbb{R}_{+}$ and
$dF(\overline{x})(w)\in\mathcal{T}_{\mathbb{R}_{-}}(F(\overline{x}))=\mathbb{R}_{-}$.
In addition, $\partial
f(\overline{x})=\mathcal{N}_{K}(\overline{x})=K^{\circ}\\!=\\{u=(u_{1},u_{2})\in\\!\mathbb{R}\times\mathbb{R}^{n-1}\,|\,-\\!u_{1}\geq\|u_{2}\|_{p}\\}$,
where $K^{\circ}$ denotes the negative polar cone of $K$. From
$\overline{v}\in K^{\circ}$, we have
$-\overline{v}_{1}\in\mathbb{R}_{+}=\partial\vartheta(F(\overline{x}))$ and
$\|\overline{v}_{2}\|_{p}\leq-\overline{v}_{1}$. For any
$z=(z_{1},z_{2})\in\mathbb{R}\times\mathbb{R}^{n-1}$, since
$dF(\overline{x})(z)=\|z_{2}\|_{q}-z_{1}$, it holds that
$(-\overline{v}_{1})dF(\overline{x})(z)=-\overline{v}_{1}(-z_{1}+\|z_{2}\|_{q})\geq\overline{v}_{1}z_{1}+\|\overline{v}_{2}\|_{p}\|z_{2}\|_{q}\geq\langle\overline{v},z\rangle,$
(59)
where the first inequality is using
$\|\overline{v}_{2}\|_{p}\leq-\overline{v}_{1}$. This shows that
$-\overline{v}_{1}\in\Lambda_{\overline{x},\overline{v}}$. In addition, from
(58)-(59), $-\overline{v}_{1}\in\mathbb{R}_{+}$ and
$dF(\overline{x})(w)\in\mathbb{R}_{-}$, we have
$0\geq(-\overline{v}_{1})dF(\overline{x})(w)\geq\langle\overline{v},w\rangle=0$,
which implies that
$dF(\overline{x})(w)\in\mathcal{C}_{\vartheta}(F(\overline{x}),-\overline{v}_{1})$.
By using (32), Proposition 3.1 (ii) and noting that
$d^{2}F(\overline{x})(w)=0$,
$d^{2}\\!f(\overline{x}|\overline{v})(w)\geq
d^{2}\vartheta(F(\overline{x})|-\\!\overline{v}_{1})(dF(\overline{x})(w))-\overline{v}_{1}d^{2}F(\overline{x})(w)=0.$
(60)
In addition, since ${\rm dom}\,f=K$, using [38, Lemma 2.7] leads to
$0\\!\in\\!{\mathcal{T}}_{K}(w)=\\!{\mathcal{T}}^{2}_{{\rm dom}\,f}(0,w)$, and
then $d^{2}F(\overline{x})(w|0)\in\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),dF(\overline{x})(w))$ follows from Corollary
2.2. From [1, Example 13.62],
$d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)\,|\,d^{2}F(\overline{x})(w|0))=\delta_{\mathcal{T}^{2}_{{\rm
dom}\vartheta}(F(\overline{x}),dF(\overline{x})(w))}(d^{2}F(\overline{x})(w|0))\\!=\\!0.$
Together with inequality (31), it holds that
$\displaystyle d^{2}\\!f(\overline{x}|\overline{v})(w)$
$\displaystyle\leq\inf_{z\in\mathbb{R}^{n}}\big{\\{}d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|d^{2}F(\overline{x})(w|z))\\!-\\!\langle\overline{v},z\rangle\\}$
$\displaystyle\leq
d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)\,|\,d^{2}F(\overline{x})(w|0))=0.$
Along with (60),
$d^{2}\\!f(\overline{x}|\overline{v})(w)={\displaystyle\inf_{z\in\mathbb{R}^{n}}}\big{\\{}d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)|d^{2}F(\overline{x})(w|z))-\langle\overline{v},z\rangle\\}=0$.
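The chain of inequalities in (59) rests on Hölder's inequality together with $\|\overline{v}_{2}\|_{p}\leq-\overline{v}_{1}$. A small numerical sanity check of this step is sketched below (our own illustration, not part of the proof; the sampler, the dimension $n=5$, the choice $q=3$ and the tolerance are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
q = 3.0
p = q / (q - 1.0)  # conjugate exponent, 1/p + 1/q = 1

def sample_polar(n):
    """Sample v = (v1, v2) in the negative polar cone K°: -v1 >= ||v2||_p."""
    v2 = rng.standard_normal(n - 1)
    v1 = -(np.linalg.norm(v2, ord=p) + rng.random())  # pushed inside K°
    return v1, v2

for _ in range(1000):
    v1, v2 = sample_polar(5)
    z = rng.standard_normal(5)
    z1, z2 = z[0], z[1:]
    # (-v̄₁) dF(x̄)(z) = -v̄₁(-z₁ + ||z₂||_q)  vs  <v̄, z>
    lhs = (-v1) * (-z1 + np.linalg.norm(z2, ord=q))
    rhs = v1 * z1 + np.dot(v2, z2)
    assert lhs >= rhs - 1e-9  # inequality (59)
```

The check combines $(-\overline{v}_{1})\|z_{2}\|_{q}\geq\|\overline{v}_{2}\|_{p}\|z_{2}\|_{q}\geq\langle\overline{v}_{2},z_{2}\rangle$, exactly as in the displayed estimate.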
Case 3: $\overline{x}\in{\rm bd}\,K\backslash\\{0\\}$. Now
$\|\overline{x}_{2}\|_{q}=\overline{x}_{1}\neq 0$ and
$\partial\\!f(\overline{x})=\mathcal{N}_{K}(\overline{x})=\\{tu_{q}(\overline{x})\
|\ t\in\mathbb{R}_{+}\\}$ with
$u_{q}(\overline{x})=\big{(}-1;\overline{x}_{1}^{1-q}{\rm
sign}(\overline{x}_{2})\circ|\overline{x}_{2}|^{q-1}\big{)}$. Clearly, $F$ is
twice continuously differentiable in a neighborhood of $\overline{x}$. By
invoking Proposition 5.1, for any
$w\in\mathcal{C}_{f}(\overline{x},\overline{v})$,
$\displaystyle d^{2}\\!f(\overline{x}|\overline{v})(w)$
$\displaystyle=\inf_{z\in\mathbb{R}^{n}}\big{\\{}d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})w\,|\,d^{2}F(\overline{x})(w|z))-\langle\overline{v},z\rangle\big{\\}}$
$\displaystyle=d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})(w))+\langle
w,\nabla^{2}\\!F(\overline{x})w\rangle\sup_{\xi\in\Lambda_{\overline{x},\overline{v}}}\xi$
where
$\Lambda_{\overline{x},\overline{v}}=\big{\\{}\xi\in\partial\vartheta(F(\overline{x}))\,|\,\nabla
F(\overline{x})\xi=\overline{v}\big{\\}}$. Note that
$\|\nabla\\!F(\overline{x})\|_{p}=\|u_{q}(\overline{x})\|_{p}=2^{1/{p}}$ and
$\partial\vartheta(F(\overline{x}))=\mathbb{R}_{+}$. We have
$\Lambda_{\overline{x},\overline{v}}=\big{\\{}\frac{\|\overline{v}\|_{2}}{2^{1/{p}}}\big{\\}}$.
Together with the last equation and
$d^{2}\vartheta(F(\overline{x}))(F^{\prime}(\overline{x})(w))=0$, the desired
equalities follow.
Combining the first equality with Proposition 4.2 (i) and Definition 2.9, we
obtain the parabolic regularity of $f$ at $\overline{x}$ for $\overline{v}$.
By Theorem 5.1, $f$ is properly twice epi-differentiable at $\overline{x}$ for
$\overline{v}$. $\Box$
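In Case 3, the key computation is $\|u_{q}(\overline{x})\|_{p}=\|\nabla F(\overline{x})\|_{p}=2^{1/p}$ for boundary points $\overline{x}$ with $\|\overline{x}_{2}\|_{q}=\overline{x}_{1}\neq 0$. The sketch below verifies this identity numerically (our own check; the choice $q=2.5$, the dimension and the tolerance are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
q = 2.5
p = q / (q - 1.0)  # conjugate exponent, so (q-1)p = q and (1-q)p = -q

for _ in range(100):
    x2 = rng.standard_normal(4)
    x1 = np.linalg.norm(x2, ord=q)  # place x̄ = (x1, x2) on bd K \ {0}
    # u_q(x̄) = (-1; x1^{1-q} sign(x2) ∘ |x2|^{q-1})
    grad = np.concatenate(([-1.0],
        x1 ** (1 - q) * np.sign(x2) * np.abs(x2) ** (q - 1)))
    # ||u_q(x̄)||_p^p = 1 + x1^{-q} Σ|x2_i|^q = 1 + 1 = 2
    assert abs(np.linalg.norm(grad, ord=p) - 2 ** (1 / p)) < 1e-8
```

The second comment records why the identity holds: raising each tail entry to the power $p$ turns the exponents $1-q$ and $q-1$ into $-q$ and $q$, so the tail contributes exactly $1$.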
###### Remark 5.2
When $q=2$, the parabolic regularity of $\delta_{K}$ is implied by the second-
order regularity of $K$ by [2, Propositions 3.103 & 3.136] and [39, Lemma 15],
and Mohammadi et al. also provided a direct proof for this fact in [26,
Example 5.8] by using the developed chain rule for parabolic regularity and
reformulating the second-order cone as the smooth constraint system
$(\|x_{2}\|^{2}\\!-x^{2}_{1},-x_{1})\in\mathbb{R}^{2}_{-}$ for
$(x_{1},x_{2})\in\\!\mathbb{R}\times\mathbb{R}^{n-1}$. Theorem 5.4 verifies
the parabolic regularity of $\delta_{K}$ for any $q>1$ by applying the
obtained upper and lower estimates to its composition form of (1).
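As a quick numerical illustration of the reformulation mentioned in the remark for $q=2$ (our own sanity check, not taken from [26]), one can sample random points and confirm that membership in the second-order cone agrees with the smooth constraint system:

```python
import numpy as np

rng = np.random.default_rng(2)

def in_K(x):
    # second-order cone: ||x2||_2 <= x1
    return np.linalg.norm(x[1:]) <= x[0]

def in_system(x):
    # smooth reformulation: (||x2||^2 - x1^2, -x1) in R^2_-
    return np.linalg.norm(x[1:]) ** 2 - x[0] ** 2 <= 0 and -x[0] <= 0

for _ in range(2000):
    x = rng.standard_normal(4)
    assert in_K(x) == in_system(x)
```

Both tests agree because $\|x_{2}\|\leq x_{1}$ is equivalent to $\|x_{2}\|^{2}\leq x_{1}^{2}$ together with $x_{1}\geq 0$.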
From Theorem 5.4, we immediately obtain the parabolic regularity of the
indicator of the Cartesian product of several $q$-order cones, which is stated
as follows.
###### Corollary 5.3
Let $K\\!=K_{1}\times\cdots\times K_{m}$ with
$K_{i}:=\\{(x_{i1},x_{i2})\,|\,\|x_{i2}\|_{q}\leq x_{i1}\\}$ for each
$i\in[m]$. Fix any
$\overline{x}=(\overline{x}_{1};\overline{x}_{2};\ldots;\overline{x}_{m})\in
K$ and
$\overline{v}=(\overline{v}_{1};\overline{v}_{2};\ldots;\overline{v}_{m})\in\mathcal{N}_{K}(\overline{x})$.
Then, $\delta_{K}$ is parabolically regular at $\overline{x}$ for
$\overline{v}$ with
$d^{2}\delta_{K}(\overline{x}|\overline{v})(w)=\sum_{i=1}^{m}d^{2}\delta_{K_{i}}(\overline{x}_{i}|\overline{v}_{i})(w_{i})$
for each $w\in\mathcal{T}_{K}(\overline{x})\cap\\{\overline{v}\\}^{\perp}$.
### 5.4 Parabolic regularity for function $f$ of type IV
We shall establish the parabolic regularity of $f=\delta_{\mathbb{S}_{-}^{n}}$
and then achieve its proper twice epi-differentiability. The parabolic
regularity of such $f$ was obtained in [27, Example 3.7] by calculating the
second subderivative directly, and we include it in Theorem 5.5 below to
demonstrate the usefulness of the estimates in Theorem 4.1.
###### Theorem 5.5
Let $f$ be the composite function of type IV. Consider any
$\overline{x}\in\mathbb{S}_{-}^{n}$ and any
$\overline{v}\in\partial\\!f(\overline{x})$. Then, for any
$w\in\mathcal{C}_{\\!f}(\overline{x},\overline{v})$,
$\displaystyle d^{2}\\!f(\overline{x}|\overline{v})(w)$
$\displaystyle=\inf_{z\in\mathbb{S}^{n}}\big{\\{}d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)\,|\,F^{\prime\prime}(\overline{x};w,z))-\langle\overline{v},z\rangle\big{\\}}$
$\displaystyle=\left\\{\begin{array}[]{cl}0&{\rm if}\;F(\overline{x})<0,\\\
-2\langle\overline{v},w\overline{x}^{\dagger}w\rangle&{\rm
if\;}F(\overline{x})=0\end{array}\right.$ (63)
where $\overline{x}^{\dagger}$ denotes the pseudo-inverse of $\overline{x}$.
Consequently, $f$ is both parabolically regular and properly twice epi-
differentiable at $\overline{x}$ for $\overline{v}$.
Proof: Note that ${\rm dom}\,\vartheta=\mathbb{R}_{-}$ and $F$ is a convex
function. The MSQC holds for system $F(x)\in{\rm dom}\vartheta$ at
$\overline{x}$ because the Slater constraint qualification holds. Fix any
$w\in\mathcal{C}_{\\!f}(\overline{x},\overline{v})$. By Proposition 4.1 (i),
$w\in\mathcal{T}_{{\rm dom}\,f}(\overline{x})$ and
$d\vartheta(F(\overline{x}))(dF(\overline{x})(w))=\langle\overline{v},w\rangle$.
The latter means that $dF(\overline{x})(w)\in{\rm
dom}\,d\vartheta(F(\overline{x}))=\mathcal{T}_{\mathbb{R}_{-}}(F(\overline{x}))$,
which along with $\vartheta(\cdot)=\delta_{\mathbb{R}_{-}}(\cdot)$ and [1,
Theorem 8.2 (b)] yields that
$0=\delta_{\mathcal{T}_{\mathbb{R}_{-}}(F(\overline{x}))}(dF(\overline{x})(w))=d\vartheta(F(\overline{x}))(dF(\overline{x})(w))=\langle\overline{v},w\rangle.$
(64)
We split the arguments into the following two cases: $F(\overline{x})<0$ and
$F(\overline{x})=0$.
Case 1: $F(\overline{x})<0$. Now
$\partial\\!f(\overline{x})=\mathcal{N}_{\mathbb{S}^{n}_{-}}(\overline{x})=\\{0\\}$.
Hence, $\overline{v}=0,\partial\vartheta(F(\overline{x}))=\\{0\\}$ and
$\Lambda_{\overline{x},\overline{v}}=\\{0\\}$. From (64),
$dF(\overline{x})(w)\in\mathcal{C}_{\vartheta}(F(\overline{x}),0)$. By
invoking (32) and Proposition 3.1 (ii) with $\psi=\delta_{\mathbb{R}_{-}}$,
$d^{2}\\!f(\overline{x}|0)(w)\geq
d^{2}\vartheta(F(\overline{x})|0)(dF(\overline{x})(w))=0,$
while from inequality (31), $\partial\vartheta(F(\overline{x}))=\\{0\\}$ and
Proposition 3.2 (iv) with $\psi=\delta_{\mathbb{R}_{-}}$,
$d^{2}\\!f(\overline{x}|0)(w)\leq\inf_{z\in\mathbb{S}^{n}}d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)\,|\,F^{\prime\prime}(\overline{x};w,z))=0.$
Thus, we show that
$d^{2}\\!f(\overline{x}|0)(w)\\!=\\!{\displaystyle\inf_{z\in\mathbb{S}^{n}}}\big{\\{}d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)\,|\,F^{\prime\prime}(\overline{x};w,z))-\langle\overline{v},z\rangle\\}=0$.
Case 2: $F(\overline{x})=0$. By [1, Theorem 10.49],
$\partial\\!f(\overline{x})\\!=\mathcal{N}_{\mathbb{S}^{n}_{-}}(\overline{x})=\\{v\in\xi\partial
F(\overline{x})\,|\,\xi\in\mathbb{R}_{+}\\}$. Since $\overline{v}\in\partial
f(\overline{x})$, there exist $\xi\in\mathbb{R}_{+}$ and $u\in\partial
F(\overline{x})$ such that $\overline{v}=\xi u$. Hence,
$(\xi,u)\in\Gamma_{\overline{x},\overline{v}}$, the set defined in (33). In
addition, from $u\in\partial F(\overline{x})$ and the equivalence in (6),
$dF(\overline{x})(w)\geq\langle u,w\rangle$. Together with
$dF(\overline{x})(w)\in\mathcal{T}_{\mathbb{R}_{-}}(F(\overline{x}))=\mathbb{R}_{-}$
and (64), we have $0\geq\xi dF(\overline{x})(w)\geq\xi\langle
u,w\rangle=\langle\overline{v},w\rangle=0$, which implies that
$dF(\overline{x})(w)\in\mathcal{C}_{\vartheta}(F(\overline{x}),\xi)$. By
invoking (33), Proposition 3.1 (ii) and $\xi\geq 0$, it follows that
$d^{2}f(\overline{x}|\overline{v})(w)\geq
d^{2}\vartheta(F(\overline{x})|\xi)(dF(\overline{x})(w))+\xi
d^{2}F(\overline{x}\,|\,u)(w)=-2\langle\overline{v},w\overline{x}^{\dagger}w\rangle,$
where the equality is using
$d^{2}\vartheta(F(\overline{x})|\xi)(dF(\overline{x})(w))=0$ implied by [1,
Proposition 13.9], and $\xi
d^{2}F(\overline{x}\,|\,u)(w)=-2\langle\overline{v},w\overline{x}^{\dagger}w\rangle$
implied by [5, Theorem 2.3]. In addition, by [1, Proposition 13.64],
$\displaystyle d^{2}\\!f(\overline{x}|\overline{v})(w)$
$\displaystyle\leq\inf_{z\in\mathbb{S}^{n}}\big{\\{}d^{2}f(\overline{x})(w|z)-\langle\overline{v},z\rangle\big{\\}}=\inf_{z\in\mathbb{S}^{n}}\big{\\{}\delta_{\mathcal{T}^{2}_{\mathbb{S}^{n}_{-}}(\overline{x},w)}(z)-\langle\overline{v},z\rangle\big{\\}}$
$\displaystyle=-\sup\limits_{z\in\mathcal{T}^{2}_{\mathbb{S}^{n}_{-}}(\overline{x},w)}\langle\overline{v},z\rangle=-2\langle\overline{v},w\overline{x}^{\dagger}w\rangle,$
where the first equality is due to [1, Example 13.62] and the last one is by
[2, Page 487]. From Proposition 4.2 (i),
$d^{2}f(\overline{x})(w|z)=d^{2}\vartheta(F(\overline{x}))(dF(\overline{x})(w)\,|\,F^{\prime\prime}(\overline{x};w,z))$.
Together with the last two inequalities, we obtain the desired equalities.
Combining the first equality with Proposition 4.2 (i) and Definition 2.9, we
obtain the parabolic regularity of $f$ at $\overline{x}$ for $\overline{v}$.
By Theorem 5.1, $f$ is properly twice epi-differentiable at $\overline{x}$ for
$\overline{v}$. $\Box$
## 6 Conclusions
For the composite function $f$ of the form (1) with $\vartheta$ and $F$
satisfying Assumption 1, by fully analyzing the second-order variational
properties of PWTD functions and establishing the calculus rule for the
parabolic subderivative, we have derived upper and lower estimates for the
second subderivative of $f$, and applied these estimates to establish the
parabolic regularity and twice epi-differentiability of several classes of
popular composite functions. An interesting but challenging topic is to
explore second-order variational properties for compositions $f$ of the form
(1) involving other types of nonconvex and nonsmooth outer functions.
###### Lemma 1
Let $\mathcal{B}:\mathbb{X}\to\mathbb{R}^{m}$ be a linear mapping, and let
$c\in\mathbb{R}^{m}$ and $\overline{v}\in\mathbb{X}$ be the given vectors. For
every $p\in\mathbb{R}^{m}$, define
$\upsilon(p)\\!:=\inf_{z\in\mathbb{X}}\sup_{u\in\Omega}\big{\\{}\langle
u,\mathcal{B}z+c+p\rangle-\langle\\!\overline{v},z\rangle\big{\\}}$ where
$\Omega\subset\mathbb{R}^{m}$ is a closed convex set. Then,
$\upsilon(0)\geq\sup_{u\in\mathbb{R}^{m}}\\{\langle u,c\rangle\ {\rm s.t.}\
\mathcal{B}^{*}u=\overline{v},\,u\in\Omega\\}$ where
$\mathcal{B}^{*}\\!:\mathbb{R}^{m}\to\mathbb{X}$ is the adjoint of
$\mathcal{B}$, and the equality holds if $\partial\upsilon(0)\neq\emptyset$.
Proof: For any $(z,p)\in\mathbb{X}\times\mathbb{R}^{m}$, define
$\Psi(z,p):=\sup_{u\in\Omega}\\{\langle
u,\mathcal{B}z+c+p\rangle-\langle\\!\overline{v},z\rangle\\}$. Obviously,
$\upsilon(p)=\inf_{z\in\mathbb{X}}\Psi(z,p)$, the value function of the
perturbation problem of the following convex program
$\upsilon(0)=\inf_{z\in\mathbb{X}}\sup_{u\in\Omega}\big{\\{}\langle
u,\mathcal{B}z+c\rangle-\langle\\!\overline{v},z\rangle\big{\\}}.$ (65)
From [2, Section 2.5.1], the dual problem of (65) takes the following form
$\sup_{\eta\in\mathbb{R}^{m}}\\{-\Psi^{*}(0,\eta)\\}.$ (66)
By the definition of the conjugate function, it is not difficult to calculate that
$\displaystyle\Psi^{*}(0,\eta)$
$\displaystyle=\sup_{(z,p)\in\mathbb{X}\times\mathbb{R}^{m}}\\{\langle
0,z\rangle+\langle\eta,p\rangle-\Psi(z,p)\\}$
$\displaystyle=\sup_{(z,p)\in\mathbb{X}\times\mathbb{R}^{m}}\Big{\\{}\langle\eta,p\rangle+\langle\overline{v},z\rangle-\sup\limits_{u\in\mathbb{R}^{m}}\\{\langle u,\mathcal{B}z+c+p\rangle-\delta_{\Omega}(u)\\}\Big{\\}}$
$\displaystyle=\sup_{(z,p)\in\mathbb{X}\times\mathbb{R}^{m}}\Big{\\{}\langle\eta,p\rangle+\langle\overline{v},z\rangle-\delta^{*}_{\Omega}(\mathcal{B}z+c+p)\Big{\\}}$
$\displaystyle=\sup_{z\in\mathbb{X}}\Big{\\{}\langle\overline{v},z\rangle+\sup\limits_{p^{\prime}\in\mathbb{R}^{m}}\\{\langle\eta,p^{\prime}-c-\mathcal{B}z\rangle-\delta^{*}_{\Omega}(p^{\prime})\\}\Big{\\}}$
$\displaystyle=\sup_{z\in\mathbb{X}}\Big{\\{}\langle\overline{v},z\rangle-\langle\eta,c+\mathcal{B}z\rangle+\sup\limits_{p^{\prime}\in\mathbb{R}^{m}}\\{\langle\eta,p^{\prime}\rangle-\delta^{*}_{\Omega}(p^{\prime})\\}\Big{\\}}$
$\displaystyle=\sup\limits_{z\in\mathbb{X}}\\{\langle\overline{v}-\mathcal{B}^{*}\eta,z\rangle\\}-\langle\eta,c\rangle+\delta_{\Omega}(\eta)$
$\displaystyle=\delta_{\\{0\\}}(\mathcal{B}^{*}\eta-\overline{v})-\langle\eta,c\rangle+\delta_{\Omega}(\eta).$
Then, from the weak duality theorem,
$\upsilon(0)\geq\sup_{u\in\mathbb{R}^{m}}\\{\langle u,c\rangle\ {\rm s.t.}\
$\mathcal{B}^{*}u=\overline{v},\,u\in\Omega\\}$. The inequality becomes an
equality when there is no duality gap between (65) and (66), which is
guaranteed to hold under $\partial\upsilon(0)\neq\emptyset$ by [2, Theorem
2.142 (i)].
The proof is completed. $\Box$
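The weak-duality inequality of Lemma 1 can be illustrated on a tiny instance (entirely our own construction: $\mathbb{X}=\mathbb{R}$, $m=2$, $\Omega=[-1,1]^{2}$, so that $\sup_{u\in\Omega}\langle u,y\rangle=\|y\|_{1}$). Here $\partial\upsilon(0)\neq\emptyset$, so equality holds as well:

```python
import numpy as np

# B: R -> R^2, c in R^2, and vbar = B^T u0 for u0 = (0.5, -0.3) in Omega.
B = np.array([[1.0], [-1.0]])
c = np.array([1.0, 0.0])
vbar = np.array([0.8])

def primal(z):
    # sup_{u in [-1,1]^2} <u, Bz + c> - <vbar, z> = ||Bz + c||_1 - vbar*z
    y = B @ np.array([z]) + c
    return np.abs(y).sum() - vbar[0] * z

# upsilon(0): minimize over a fine grid (the minimizer is z = 0 here).
upsilon0 = min(primal(z) for z in np.linspace(-3, 3, 60001))

# Dual: max <u, c> = u1  s.t.  u1 - u2 = 0.8, u in [-1,1]^2,
# attained at u = (1, 0.2) with value 1.
dual = 1.0
assert upsilon0 >= dual - 1e-6   # weak duality, as in Lemma 1
assert abs(upsilon0 - dual) < 1e-3  # strong duality holds on this instance
```

The dual value was computed by hand for this instance; the grid minimization is only a crude stand-in for the infimum over $z$.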
###### Proposition 1
Let $\varphi\\!:\mathbb{R}^{m}\\!\to\overline{\mathbb{R}}$ be a proper
function, and let $g\\!:\mathbb{X}\to\mathbb{R}^{m}$ be a mapping. Define the
multifunctions
$\mathcal{G}\\!:\mathbb{X}\\!\times\mathbb{R}\\!\rightrightarrows\mathbb{R}^{m}\\!\times\mathbb{R}$
and $\mathcal{H}\\!:\mathbb{X}\\!\rightrightarrows\mathbb{R}^{m}$ by
$\mathcal{G}(x,\alpha):=(g(x),\alpha)-{\rm epi}\,\varphi\ \ {\rm and}\ \
\mathcal{H}(x):=g(x)-{\rm dom}\,\varphi.$ (67)
Consider any $\overline{x}\in\mathbb{X}$ with
$\overline{\omega}=\varphi(g(\overline{x}))$ finite. The following assertions
hold.
* (i)
If $g$ is continuous at $\overline{x}$ and $\varphi$ is continuous at
$g(\overline{x})$ relative to its domain, the subregularity of $\mathcal{G}$
at $((\overline{x},\overline{\omega}),(0,0))$ implies that of $\mathcal{H}$ at
$(\overline{x},0)$.
* (ii)
If $g$ is strictly continuous at $\overline{x}$ and $\varphi$ is strictly
continuous at $g(\overline{x})$, the subregularity of $\mathcal{H}$ at
$(\overline{x},0)$ implies that of $\mathcal{G}$ at
$((\overline{x},\overline{\omega}),(0,0))$.
Proof: The proof of part (i) is similar to that of [22, Proposition 3.1], so
it suffices to prove part (ii). Since $\mathcal{H}$ is subregular at
$(\overline{x},0)$, there exist $\varepsilon_{0}>0$ and $c>0$ such that
${\rm dist}(x,\mathcal{H}^{-1}(0))\leq c\,{\rm dist}_{2}(g(x),{\rm
dom}\,\varphi)\quad{\rm for\ all}\
x\in\mathbb{B}(\overline{x},\varepsilon_{0}).$ (68)
Since $\varphi$ is strictly continuous at $g(\overline{x})$, there exist
$\varepsilon_{1}>0$ and $L_{\varphi}>0$ such that
$|\varphi(z)-\varphi(z^{\prime})|\leq L_{\varphi}\|z-z^{\prime}\|_{2}\quad\
\forall z,z^{\prime}\in\mathbb{B}(g(\overline{x}),\varepsilon_{1}).$ (69)
From the strict continuity of $g$ at $\overline{x}$, there exist
$\varepsilon_{2}>0$ and $L_{g}>0$ such that
$\|g(x)-g(\overline{x})\|_{2}\leq\varepsilon_{1}/3\ \ {\rm and}\
\|g(x)-g(x^{\prime})\|_{2}\leq L_{g}\|x-x^{\prime}\|\quad\ \forall
x,x^{\prime}\in\mathbb{B}(\overline{x},\varepsilon_{2}).$
Take
$\varepsilon=\min\\{\varepsilon_{0},\varepsilon_{1},\varepsilon_{2}\\}/3$.
Consider any
$(x,\alpha)\in\mathbb{B}((\overline{x},\overline{\omega}),\varepsilon)$. Let
$(y^{*},\alpha^{*})\in{\rm epi}\,\varphi$ be such that ${\rm
dist}_{2}((g(x),\alpha),{\rm
epi}\,\varphi)=\sqrt{\|g(x)-y^{*}\|_{2}^{2}+(\alpha-\alpha^{*})^{2}}$.
Clearly, it holds that
$\|g(x)-y^{*}\|_{2}\leq\sqrt{\|g(x)-g(\overline{x})\|_{2}^{2}+(\alpha-\overline{\omega})^{2}}\leq
2\varepsilon_{1}/3,$
and consequently
$\|y^{*}\\!-\\!g(\overline{x})\|_{2}\leq\|g(x)\\!-\\!y^{*}\|_{2}+\|g(x)-g(\overline{x})\|_{2}\leq\varepsilon_{1}$.
Case 1: $\alpha^{*}=\varphi(y^{*})$. Let $\widehat{x}\in\mathcal{H}^{-1}(0)$
be such that ${\rm dist}(x,\mathcal{H}^{-1}(0))=\|x-\widehat{x}\|$. Clearly,
$g(\widehat{x})\in{\rm dom}\,\varphi$ and $\|\widehat{x}-\overline{x}\|\leq
2\|x-\overline{x}\|\leq\varepsilon_{2}$. Then,
$\|g(x)-g(\widehat{x})\|_{2}\leq L_{g}\|x-\widehat{x}\|$ and
$\|g(\widehat{x})\\!-\\!g(\overline{x})\|_{2}\leq\varepsilon_{1}/3$. The
latter, along with $\|y^{*}\\!-\\!g(\overline{x})\|_{2}\leq\varepsilon_{1}$
and (69), means that
$|\varphi(y^{*})-\varphi(g(\widehat{x}))|\leq
L_{\varphi}\|y^{*}-g(\widehat{x})\|_{2}.$ (70)
From (68) we have $\|x-\widehat{x}\|\leq c\,{\rm dist}_{2}(g(x),{\rm
dom}\,\varphi)$. Since $y^{*}\in{\rm dom}\,\varphi$, it holds that
$\|x-\widehat{x}\|\leq c\|g(x)-y^{*}\|_{2}.$ (71)
By combining this with
$(\widehat{x},\varphi(g(\widehat{x})))\in\mathcal{G}^{-1}(0,0)$, it is
immediate to obtain that
$\displaystyle{\rm
dist}((x,\alpha),\mathcal{G}^{-1}(0,0))\leq\sqrt{\|x-\widehat{x}\|^{2}+|\alpha-\varphi(g(\widehat{x}))|^{2}}$
$\displaystyle\leq\sqrt{\|x-\widehat{x}\|^{2}+2|\alpha-\alpha^{*}|^{2}+2|\varphi(y^{*})\\!-\\!\varphi(g(\widehat{x}))|^{2}}$
$\displaystyle\leq\sqrt{\|x-\widehat{x}\|^{2}+2|\alpha-\alpha^{*}|^{2}+2L_{\varphi}^{2}\|y^{*}-g(\widehat{x})\|_{2}^{2}}$
$\displaystyle\leq\sqrt{\|x-\widehat{x}\|^{2}+2|\alpha-\alpha^{*}|^{2}+4L_{\varphi}^{2}\|y^{*}-g(x)\|_{2}^{2}+4L_{\varphi}^{2}\|g(x)-g(\widehat{x})\|_{2}^{2}}$
$\displaystyle\leq\sqrt{(1+4L_{\varphi}^{2}L_{g}^{2})\|x-\widehat{x}\|^{2}+2|\alpha-\alpha^{*}|^{2}+4L_{\varphi}^{2}\|y^{*}-g(x)\|_{2}^{2}}$
$\displaystyle\leq\kappa\sqrt{|\alpha-\alpha^{*}|^{2}+\|y^{*}-g(x)\|_{2}^{2}}=\kappa{\rm
dist}_{2}((0,0),\mathcal{G}(x,\alpha))$
with
$\kappa=\max\big{(}\sqrt{2},\sqrt{(1+4L_{\varphi}^{2}L_{g}^{2})c^{2}+4L_{\varphi}^{2}}\big{)}$,
where the third inequality is by (70).
Case 2: $\alpha^{*}>\varphi(y^{*})$. In this case, we have
$\mathcal{N}_{\mathbb{R}_{+}}(\alpha^{*}\\!-\\!\varphi(y^{*}))=\\{0\\}$.
Recall that
$(y^{*},\alpha^{*})\in\mathop{\arg\min}_{(z,\omega)\in\mathbb{R}^{m}\times\mathbb{R}}\Big{\\{}\frac{1}{2}\|(z,\omega)-(g(x),\alpha)\|_{2}^{2}+\delta_{\mathbb{R}_{+}}(h(z,\omega))\Big{\\}}$
where $h(z,\omega):=\omega-\varphi(z)$ for
$(z,\omega)\in\mathbb{R}^{m}\times\mathbb{R}$. Combining [1, Theorem 10.1 &
10.49] and $\mathcal{N}_{\mathbb{R}_{+}}(h(y^{*},\alpha^{*}))=\\{0\\}$ leads
to
$(0,0)\in(y^{*},\alpha^{*})-(g(x),\alpha)+D^{*}h(y^{*},\alpha^{*})\big{[}\mathcal{N}_{\mathbb{R}_{+}}(\alpha^{*}\\!-\\!\varphi(y^{*}))\big{]}$,
which by
$\mathcal{N}_{\mathbb{R}_{+}}(\alpha^{*}\\!-\\!\varphi(y^{*}))=\\{0\\}$ means
that $(g(x),\alpha)=(y^{*},\alpha^{*})\in{\rm epi}\,\varphi$, and then
$(x,\alpha)\in\mathcal{G}^{-1}(0,0)$. So ${\rm
dist}((x,\alpha),\mathcal{G}^{-1}(0,0))\\!\leq\kappa{\rm
dist}_{2}((0,0),\mathcal{G}(x,\alpha))$ for any $\kappa>0$. $\Box$
## References
* [1] R. T. Rockafellar, R. J.-B. Wets, Variational Analysis, Vol. 317, Springer Science & Business Media, 2009.
* [2] J. F. Bonnans, A. Shapiro, Perturbation Analysis of Optimization Problems, Springer Science & Business Media, 2000.
* [3] A. S. Lewis, M. L. Overton, Eigenvalue optimization, Acta Numerica 5 (1996) 149–190.
* [4] C. Kan, W. Song, Second-order conditions for existence of augmented Lagrange multipliers for eigenvalue composite optimization problems, Journal of Global Optimization 63 (1) (2015) 77–97.
* [5] M. Torki, First-and second-order epi-differentiability in eigenvalue optimization, Journal of Mathematical Analysis and Applications 234 (2) (1999) 391–416.
* [6] R. T. Rockafellar, Second-order optimality conditions in nonlinear programming obtained by way of epi-derivatives, Mathematics of Operations Research 14 (3) (1989) 462–484.
* [7] E. Balas, Disjunctive Programming, Springer, 2018.
* [8] H. Gfrerer, Optimality conditions for disjunctive programs based on generalized differentiation with application to mathematical programs with equilibrium constraints, SIAM Journal on Optimization 24 (2) (2014) 898–931.
* [9] J. Fan, R. Li, Variable selection via nonconcave penalized likelihood and its oracle properties, Journal of the American Statistical Association 96 (456) (2001) 1348–1360.
* [10] C.-H. Zhang, Nearly unbiased variable selection under minimax concave penalty, The Annals of Statistics 38 (2) (2010) 894–942.
* [11] X. Guo, H. Zhang, Y. Wang, J.-L. Wu, Model selection and estimation in high dimensional regression models with group SCAD, Statistics & Probability Letters 103 (2015) 86–92.
* [12] J. O. Ogutu, H.-P. Piepho, Regularized group regression methods for genomic prediction: bridge, MCP, SCAD, group bridge, group lasso, sparse group lasso, group MCP and group SCAD, in: BMC Proceedings, Vol. 8, BioMed Central, 2014, pp. 1–9.
* [13] Y. Liu, S. Bi, S. Pan, Equivalent Lipschitz surrogates for zero-norm and rank optimization problems, Journal of Global Optimization 72 (4) (2018) 679–704.
* [14] F. J. A. Artacho, M. H. Geoffroy, Metric subregularity of the convex subdifferential in Banach spaces, Journal of Nonlinear and Convex Analysis 15 (2014) 35–47.
* [15] H. Gfrerer, First order and second order characterizations of metric subregularity and calmness of constraint set mappings, SIAM Journal on Optimization 21 (4) (2011) 1439–1474.
* [16] A. D. Ioffe, J. V. Outrata, On metric and calmness qualification conditions in subdifferential calculus, Set-Valued Analysis 16 (2) (2008) 199–227.
* [17] H. Gfrerer, J. V. Outrata, On computation of generalized derivatives of the normal-cone mapping and their applications, Mathematics of Operations Research 41 (4) (2016) 1535–1556.
* [18] A. Y. Kruger, Error bounds and Hölder metric subregularity, Set-Valued Analysis 23 (4) (2015) 705–736.
* [19] J. Ye, X. Ye, Necessary optimality conditions for optimization problems with variational inequality constraints, Mathematics of Operations Research 22 (4) (1997) 977–997.
* [20] G. Li, B. S. Mordukhovich, Hölder metric subregularity with applications to proximal point method, SIAM Journal on Optimization 22 (4) (2012) 1655–1684.
* [21] X. Y. Zheng, K. F. Ng, Metric subregularity and calmness for nonconvex generalized equations in Banach spaces, SIAM Journal on Optimization 20 (5) (2010) 2119–2136.
* [22] A. Mohammadi, B. S. Mordukhovich, M. E. Sarabi, Variational analysis of composite models with applications to continuous optimization, Mathematics of Operations Research 47 (1) (2022) 397–426.
* [23] A. Ioffe, Variational analysis of a composite function: A formula for the lower second order epi-derivative, Journal of Mathematical Analysis and Applications 160 (2) (1991) 379–405.
* [24] R. Cominetti, On pseudo-differentiability, Transactions of the American Mathematical Society 324 (2) (1991) 843–865.
* [25] A. B. Levy, Second-order epi-derivatives of composite functionals, Annals of Operations Research 101 (1) (2001) 267–281.
* [26] A. Mohammadi, B. S. Mordukhovich, M. Sarabi, Parabolic regularity in geometric variational analysis, Transactions of the American Mathematical Society 374 (3) (2021) 1711–1763.
* [27] A. Mohammadi, M. E. Sarabi, Twice epi-differentiability of extended-real-valued functions with applications in composite optimization, SIAM Journal on Optimization 30 (3) (2020) 2379–2409.
* [28] M. Benko, P. Mehlitz, Why second-order sufficient conditions are, in a way, easy–or–revisiting calculus for second subderivatives, arXiv preprint arXiv:2206.03918 (2022).
* [29] A. Mohammadi, First-order variational analysis of non-amenable composite functions, arXiv preprint arXiv:2204.01191 (2022).
* [30] B. S. Mordukhovich, Variational Analysis and Applications, Vol. 30, Springer, 2018.
* [31] H. Gfrerer, On directional metric regularity, subregularity and optimality conditions for nonsmooth mathematical programs, Set-Valued and Variational Analysis 21 (2) (2013) 151–176.
* [32] H. Gfrerer, J. V. Outrata, On Lipschitzian properties of implicit multifunctions, SIAM Journal on Optimization 26 (4) (2016) 2160–2189.
* [33] M. Benko, H. Gfrerer, J. Ye, J. Zhang, J. Zhou, Second-order optimality conditions for general nonconvex optimization problems and variational analysis of disjunctive systems, arXiv preprint arXiv:2203.10015 (2022).
* [34] R. Henrion, J. V. Outrata, Calmness of constraint systems with applications, Mathematical Programming 104 (2) (2005) 437–464.
* [35] P. Mehlitz, On the linear independence constraint qualification in disjunctive programming, Optimization 69 (10) (2020) 2241–2277.
* [36] G. Xue, Y. Ye, An efficient algorithm for minimizing a sum of p-norms, SIAM Journal on Optimization 10 (2) (2000) 551–579.
* [37] E. D. Andersen, C. Roos, T. Terlaky, Notes on duality in second order and $p$-order cone optimization (2002).
* [38] J. F. Bonnans, H. Ramírez C, et al., Perturbation analysis of second-order cone programming problems, Mathematical Programming 104 (2) (2005) 205–227.
* [39] J. F. Bonnans, H. Ramírez C, et al., Perturbation analysis of second-order cone programming problems, Mathematical Programming 104 (2) (2005) 205–227.
# Elliptic ruled surfaces over arbitrary characteristic fields
Takato Togashi and Hokuto Uehara
###### Abstract
Atiyah classifies vector bundles on elliptic curves $E$ over an algebraically
closed field of any characteristic. On the other hand, a rank $2$ vector
bundle on $E$ defines a surface $S$ with a $\mathbb{P}^{1}$-bundle structure
on $E$. We study when $S$ has an elliptic fibration according to Atiyah’s
classification, and what kinds of singular fibers appear.
## 1 Introduction
Kodaira initiated the study of elliptic surfaces over $\mathbb{C}$ about 60
years ago; since then, a satisfactory theory of this subject has been
developed. Bombieri and Mumford studied in [BM76] elliptic surfaces over an
algebraically closed field $k$ of arbitrary characteristic $p$, and they
introduced the notion of wild fibers in the case $p>0$, which turns out to be
a main difficulty in developing a theory of elliptic surfaces in the case
$p>0$ analogous to that in the case $p=0$. Ueno and Katsura studied in [KU85]
to what extent the theory in the case $p=0$ can be extended, or has a nice
analogue, in the case $p>0$. In this article, we study elliptic fibrations on
elliptic ruled surfaces in arbitrary $p$ with the aid of [KU85] and Atiyah’s
study of vector bundles on elliptic curves.
Let $S$ be a smooth projective surface defined over $k$. Suppose that $S$
admits a relatively minimal elliptic fibration $\pi\colon S\rightarrow C$, and
$\pi^{*}(p_{i})=m_{i}D_{i}\mbox{ ($m_{i}$ is the multiplicity, $p_{i}\in
C,i=1,2,\dots,\lambda$)}$
are the multiple fibers of $\pi$. We know that
$R^{1}\pi_{*}\mathcal{O}_{S}\simeq\mathcal{L}_{\pi}\oplus\mathcal{T}_{\pi}$,
where $\mathcal{L}_{\pi}$ is an invertible sheaf and $\mathcal{T}_{\pi}$ a
torsion sheaf, which is known to be $0$ in the case $p=0$. The multiple fiber
$\pi^{*}(p_{i})$ is said to be _wild_ for a point
$p_{i}\in\operatorname{Supp}\mathcal{T}_{\pi}$, and _tame_ for a point
$p_{i}\not\in\operatorname{Supp}\mathcal{T}_{\pi}$. The canonical bundles on
$S$ and $C$ are related by _the canonical bundle formula_ , which states that
there is an isomorphism
$\omega_{S}\simeq\pi^{*}(\omega_{C}\otimes\mathcal{L}_{\pi}^{-1})\otimes\mathcal{O}_{S}(\sum_{i=1}^{\lambda}a_{i}D_{i})$
for some integers $a_{i}$ with $0\leq a_{i}\leq m_{i}-1$. If $a_{i}\neq
m_{i}-1$, then $\pi^{*}p_{i}$ is known to be a wild fiber.
If $\kappa(S)=-\infty$, $S$ is either a rational surface obtained by blowing
up $9$ points of $\mathbb{P}^{2}$ or an _elliptic ruled surface_ , that is, a
surface with a $\mathbb{P}^{1}$-bundle structure over an elliptic curve. Harbourne
and Lang classified multiple fibers on rational elliptic surfaces in [HL88].
It turns out that rational elliptic surfaces have reducible fibers, but no
wild fibers. On the other hand, we can readily see that an elliptic fibration
on an elliptic ruled surface has no reducible fibers, and moreover all
multiple fibers are of type ${}_{m}\mathrm{I}_{0}$, namely the reductions of
all multiple fibers are smooth elliptic curves (Lemma 2.7). One of the aim of
this article is to determine when a given elliptic ruled surface has an
elliptic fibration $\pi$, and to determine how many wild and tame fibers
appear in $\pi$ (Theorem 1.1).
Let $\mathcal{E}$ be a normalized rank $2$ vector bundle on an elliptic curve
$E$ and
$f\colon S=\mathbb{P}(\mathcal{E})\rightarrow E$
be the $\mathbb{P}^{1}$-bundle over $E$ determined by $\mathcal{E}$. Let us set
$e=-\deg\mathcal{E}$. Then we see that $e=0$ or $-1$ if $S$ has an elliptic
fibration. If $e=0$ and $\mathcal{E}$ is indecomposable, it is easy to see
that the isomorphism class of such a vector bundle is unique. If $e=0$ and
$\mathcal{E}$ is decomposable, we see that
$\mathcal{E}\cong\mathcal{O}_{E}\oplus\mathcal{L}$ for some
$\mathcal{L}\in\mathop{\mathrm{Pic}}\nolimits^{0}E$. On the other hand, if
$e=-1$, $\mathcal{E}$ is always indecomposable. We also note that there are
many isomorphism classes of such vector bundles $\mathcal{E}$ on $E$, but all
of them give the same isomorphism class of $\mathbb{P}(\mathcal{E})$
(see [Har77, Theorem V.2.15]).
Theorem 1.1 below is the first main result in this article. In the tables
contained therein, the symbol ∗ stands for a wild fiber. Moreover, as
mentioned above, if a multiple fiber $mD$ is tame, then $a=m-1$, hence we omit
the value of $a$ in the list. For example, $(2,0/2^{*})$ in the case (ii-3)
stands for one tame fiber of type ${}_{2}\mathrm{I}_{0}$ with $a_{1}=1$ and
one wild fiber of type ${}_{2}\mathrm{I}_{0}$ with $a_{2}=0$.
###### Theorem 1.1.
Let $\mathcal{E}$ be a normalized rank $2$ vector bundle on an elliptic curve
$E$, and $S=\mathbb{P}(\mathcal{E})$ the $\mathbb{P}^{1}$-bundle
over $E$ associated with $\mathcal{E}$.
1. (i)
For $e=0$, we have the following:
| $\mathcal{E}$ | $(a_{i}/m_{i})$ if $S$ has an elliptic fibration | $p$
---|---|---|---
(i-1) | $\mathcal{O}_{E}\oplus\mathcal{O}_{E}$ | no multiple fibers | $p\geq 0$
(i-2) | $\mathcal{O}_{E}\oplus\mathcal{L}$, $\operatorname{ord}\mathcal{L}=m>1$ | $(m,m)$ | $p\geq 0$
(i-3) | $\mathcal{O}_{E}\oplus\mathcal{L}$, $\operatorname{ord}\mathcal{L}=\infty$ | no elliptic fibrations | $p\geq 0$
(i-4) | indecomposable | no elliptic fibrations | $p=0$
(i-5) | indecomposable | $(p-2/{p}^{*})$ | $p>0$
Here $\mathcal{L}$ is an element of $\mathop{\mathrm{Pic}}\nolimits^{0}E$.
2. (ii)
Suppose that $e=-1$. Then $S$ has an elliptic fibration. The list of singular
fibers is the following:
| $(a_{i}/m_{i})$ | $E$ | $p$
---|---|---|---
(ii-1) | $(2,2,2)$ | | $p\neq 2$
(ii-2) | $(1/2^{*})$ | supersingular | $p=2$
(ii-3) | $(2,0/2^{*})$ | ordinary | $p=2$
Maruyama also considered a condition for elliptic ruled surfaces to have an
elliptic fibration [Mar71, Theorem 4], in terms of elementary transformations
of ruled surfaces (see Remark 2.16). Suwa also considered a similar condition
in [Suw69, Theorem 5] in the case $p=0$. In the case $p\neq 2$, the result in
Theorem 1.1 was obtained in the first author’s master thesis [Tog].
We also notice that the elliptic fibration in the case (ii-2) has a _wild
fiber of strange type_ (see Remark 5.1).
In the proof of Theorem 1.1, we construct a _resolution of singular
fibers_ on elliptic ruled surfaces. More precisely, we have the following:
###### Theorem 1.2.
Let $f\colon S\to E$ be a $\mathbb{P}^{1}$-bundle over an elliptic curve $E$
such that $S$ also has an elliptic fibration $\pi\colon S\to\mathbb{P}^{1}$.
Then there is a finite surjective morphism $\varphi\colon F\to E$ from an
elliptic curve $F$, fitting into the following diagram:
$\begin{array}{ccccc}F&\xleftarrow{\ f^{\prime}\ }&F\times\mathbb{P}^{1}&\xrightarrow{\ \pi^{\prime}\ }&\mathbb{P}^{1}\\ {\scriptstyle\varphi}\downarrow&{\scriptstyle\square}&\downarrow{\scriptstyle q}&&\downarrow{\scriptstyle\psi}\\ E&\xleftarrow{\ f\ }&S&\xrightarrow{\ \pi\ }&\mathbb{P}^{1}.\end{array}$
Here, $f^{\prime}$ and $\pi^{\prime}$ are the natural projections, the left
square is a fiber product diagram and the right square is obtained by the
Stein factorization of $\pi\circ q$.
We observe in Theorem 1.2 that by taking a suitable finite cover of $S$, we
obtain an elliptic fibration $\pi^{\prime}$ with milder singular fibers (in
Theorem 1.2, $\pi^{\prime}$ actually has no singular fibers). The existence of
such resolutions was already observed in some special situations; see, for
example, [KU85, §6, §7] and [Kaw00, Theorem A]. It is notable that the notion
of a _wild fiber of strange type_ was introduced in [KU85] as an obstruction
to the existence of such resolutions in their constructions. However, we can
actually find a resolution even in the case (ii-2), where $\pi$ has a wild
fiber of strange type.
The organization of this article is as follows. In §2 we recall and prove
several results on elliptic surfaces and vector bundles on elliptic curves. In
§3, we narrow down the candidates for singular fibers of elliptic fibrations
on elliptic ruled surfaces. We prove Theorem 1.1 in §4 and Theorem 1.2 in §5.
In [Ueh17], we apply the result in [Tog], which is the same as Theorem 1.1 in
the case $p\neq 2$, to study the set $\operatorname{FM}(S)$ of Fourier–Mukai
partners of elliptic ruled surfaces $S$ in the case $p=0$. In the forthcoming
paper [UW], we apply Theorem 1.1 to study $\operatorname{FM}(S)$ for elliptic
ruled surfaces $S$ for arbitrary $p$.
#### Notation and conventions
All varieties $X$ are defined over an algebraically closed field $k$ of
characteristic $p\geq 0$. A point $Q\in X$ always means a closed point. If we
denote
$D_{1}\sim D_{2}\quad(\mbox{resp. }D_{1}\equiv D_{2})$
for divisors $D_{1}$ and $D_{2}$ on a normal variety $X$, we mean that $D_{1}$
and $D_{2}$ are linearly equivalent (resp. numerically equivalent). We denote
the dimension of the cohomology $H^{i}(X,\mathcal{F})$ of a sheaf
$\mathcal{F}$ on $X$ by $h^{i}(X,\mathcal{F})$.
In the case $p>0$, we denote a relative Frobenius morphism by
$\operatorname{Fr}\colon X_{p}\to X.$
By an _elliptic surface_ , we will always mean a smooth projective surface $S$
together with a smooth projective curve $C$ and a relatively minimal
projective morphism $\pi\colon S\to C$ whose general fiber is an elliptic
curve.
Take a point $Q$ on an elliptic curve $E$. Since we have
$\mathop{\mathrm{Ext}}\nolimits^{1}_{E}(\mathcal{O}_{E},\mathcal{O}_{E})\cong\mathop{\mathrm{Ext}}\nolimits^{1}_{E}(\mathcal{O}_{E}(Q),\mathcal{O}_{E})\cong
k,$
there is a unique isomorphism class of rank $2$ vector bundles
$\mathcal{E}_{2,0}$ (resp. $\mathcal{E}_{Q}$) fitting into the non-split exact
sequence:
$0\to\mathcal{O}_{E}\to\mathcal{E}_{2,0}\to\mathcal{O}_{E}\to
0\quad(\mbox{resp.
}0\to\mathcal{O}_{E}\to\mathcal{E}_{Q}\to\mathcal{O}_{E}(Q)\to 0).$
#### Acknowledgments.
We would like to thank Hiroyuki Ito, Kentaro Mitsui, Shigeru Mukai, Toshiyuki
Katsura, and Noboru Nakayama for invaluable suggestions. H.U. is supported by the
Grants-in-Aid for Scientific Research (No. 18K03249).
## 2 Preliminaries
### 2.1 Elliptic surfaces
In this subsection, we recall several useful results on elliptic surfaces from
[BM76] and [KU85]. Let $\pi:S\rightarrow C$ be an elliptic surface. Suppose
that
$\bigl{\\{}\pi^{*}(Q_{i})=m_{i}D_{i}\bigm{|}i=1,2,\dots,\lambda\bigr{\\}}$
is the set of multiple fibers for $Q_{i}\in C$. Then, we have a decomposition
$R^{1}\pi_{*}\mathcal{O}_{S}\simeq\mathcal{L}_{\pi}\oplus\mathcal{T}_{\pi}$
where $\mathcal{L}_{\pi}$ is an invertible sheaf and $\mathcal{T}_{\pi}$ is a
torsion sheaf. It is known that $\mathcal{T}_{\pi}=0$ when $p=0$. If
$Q_{i}\notin\operatorname{Supp}\mathcal{T}_{\pi}$, $\pi^{*}(Q_{i})$ is said to
be a _tame fiber_ , and a _wild fiber_ otherwise. Note that for a point
$Q\in\operatorname{Supp}\mathcal{T}_{\pi}$, $\pi^{*}(Q)$ is a multiple fiber,
since $h^{0}(\pi^{*}(Q),\mathcal{O}_{\pi^{*}(Q)})\geq 2$.
Let us define $d:=\deg\mathcal{L}_{\pi}$ and $g:=g(C)$.
###### Proposition 2.1 (Theorem 2 in [BM76]).
Let $\pi\colon S\longrightarrow C$ be a smooth projective elliptic surface.
For the decomposition
$\mathbb{R}^{1}\pi_{*}\mathcal{O}_{S}\simeq\mathcal{L}_{\pi}\oplus\mathcal{T}_{\pi},$
we have
$\omega_{S}\simeq\pi^{*}(\omega_{C}\otimes\mathcal{L}_{\pi}^{-1})\otimes\mathcal{O}_{S}(\sum_{i=1}^{\lambda}a_{i}D_{i}).$
Moreover we have the following:
1. (i)
$0\leq a_{i}\leq m_{i}-1$ for all $i=1,2,\dots,\lambda$.
2. (ii)
If $m_{i}D_{i}$ is a tame fiber, then $a_{i}=m_{i}-1$.
3. (iii)
$d=-\chi(\mathcal{O}_{S})-h^{0}(C,\mathcal{T}_{\pi})$ and $d\leq 0$.
For each $i$, $\mathcal{O}_{S}(D_{i})|_{D_{i}}$ is known to be a torsion
element in $\mathop{\mathrm{Pic}}\nolimits^{0}D_{i}$. Let us define
$\nu_{i}:=\operatorname{ord}\mathcal{O}_{S}(D_{i})|_{D_{i}}$. Then it is known
that
$m_{i}=p^{\alpha}\nu_{i}$ (1)
for some $\alpha\geq 0$ and that $\nu_{i}=m_{i}$ if and only if $m_{i}D_{i}$
is a tame fiber. If $D_{i}$ is a supersingular elliptic curve,
$\mathop{\mathrm{Pic}}\nolimits^{0}D_{i}$ has no torsion elements whose order
is divisible by $p$. Thus we have the following.
###### Lemma 2.2.
Suppose that $mD$ is a multiple fiber and $D$ is a supersingular elliptic
curve. Then $mD$ is tame if and only if $m$ is not divisible by $p$.
The following is useful.
###### Theorem 2.3 (Corollary to Proposition 4 in [BM76]).
Let $\pi\colon S\rightarrow\mathbb{P}^{1}$ be a smooth projective elliptic
surface satisfying $h^{1}(S,\mathcal{O}_{S})\leq 1$. Then we have
$a_{i}+1=m_{i}$ or $a_{i}+\nu_{i}+1=m_{i}$.
###### Definition 2.4.
In the above notation, assume furthermore that $C=\mathbb{P}^{1}$ and
$\chi(\mathcal{O}_{S})=0$. Such an elliptic surface $S$ is said to be of type
$(m_{1},\dots,m_{\lambda}|\nu_{1},\dots,\nu_{\lambda})$. Assume furthermore
that all singular fibers are tame. Then $S$ is said to be of type
$(m_{1},\dots,m_{\lambda})$.
The following is used as a necessary condition for algebraicity of elliptic
surfaces in [KU85].
###### Theorem 2.5 (Theorem 3.3 in [KU85]).
Let $\pi:S\rightarrow\mathbb{P}^{1}$ be an elliptic surface with
$\chi(\mathcal{O}_{S})=0$ and suppose that $S$ is of type
$(m_{1},\dots,m_{\lambda}|\nu_{1},\dots,\nu_{\lambda})$. Then for each
$i\in\\{1,2,\dots,\lambda\\}$, there are integers
$n_{1},n_{2},\dots,n_{\lambda}$ satisfying
* •
$n_{i}\equiv 1\ (\bmod\ \nu_{i})$, and
* •
$n_{1}/m_{1}+n_{2}/m_{2}+\dots+n_{\lambda}/m_{\lambda}\in\mathbb{Z}$.
###### Remark 2.6.
Let $S$ be an elliptic surface satisfying $\chi(\mathcal{O}_{S})=0$, and
$C=\mathbb{P}^{1}$. In the following cases, the information on $m$ and $\nu$
is easily deduced from Theorem 2.5.
1. (i)
([KU85, Corollary 4.2]) Suppose that $S$ has a unique multiple fiber, and the
type of $S$ is $(m|\nu)$. Then the multiple fiber is wild, $m=p^{\alpha}$ for
some integer $\alpha>0$, and $\nu=1$.
2. (ii)
([KU85, Corollary 4.3]) Suppose that $S$ is of type $(m_{1},m_{2})$. Then
there are integers $n_{2},n^{\prime}_{1}$ satisfying
$1/m_{1}+n_{2}/m_{2},\quad n^{\prime}_{1}/m_{1}+1/m_{2}\in\mathbb{Z}.$
These conditions imply the equality $m_{1}=m_{2}$.
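The last implication can be seen directly: the condition $1/m_{1}+n_{2}/m_{2}\in\mathbb{Z}$ is solvable in $n_{2}$ exactly when $m_{1}\mid m_{2}$, and symmetrically, so the two conditions together force $m_{1}=m_{2}$. As a sanity check (ours, not part of the original argument), this arithmetic can be verified by brute force in Python for small multiplicities:

```python
def solvable(m1, m2):
    # does an integer n exist with 1/m1 + n/m2 in Z,
    # i.e. with m1*m2 | (m2 + n*m1)?  (n only matters mod m1*m2)
    return any((m2 + n * m1) % (m1 * m2) == 0 for n in range(m1 * m2))

# pairs (m1, m2) admitting both n_2 and n'_1 as in Remark 2.6 (ii)
pairs = [(m1, m2)
         for m1 in range(1, 16) for m2 in range(1, 16)
         if solvable(m1, m2) and solvable(m2, m1)]

assert all(m1 == m2 for m1, m2 in pairs)           # forces m1 = m2
assert all((m, m) in pairs for m in range(1, 16))  # every m1 = m2 does occur
```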
### 2.2 Atiyah’s result
Atiyah classified indecomposable vector bundles on elliptic curves [Ati57]. We
summarize the results we need below.
Let $\mathcal{M}_{E}(r,d)=\mathcal{M}(r,d)$ be the set of isomorphism classes
of indecomposable vector bundles of rank $r$ and degree $d$ over an elliptic
curve $E$. For $\mathcal{E}\in\mathcal{M}(r,d)$, every other vector bundle in
$\mathcal{M}(r,d)$ is of the form $\mathcal{E}\otimes\mathcal{L}$ for some
$\mathcal{L}\in\mathop{\mathrm{Pic}}\nolimits^{0}E=\mathcal{M}(1,0)$. There is
a unique element $\mathcal{E}_{r,0}$ in $\mathcal{M}(r,0)$ such that
$h^{0}(\mathcal{E}_{r,0})\neq 0$. Furthermore for
$\mathcal{E}\in\mathcal{M}(r,0)$, we have
$h^{0}(E,\mathcal{E})=h^{1}(E,\mathcal{E})=0$ when
$\mathcal{E}\neq\mathcal{E}_{r,0}$, and
$h^{0}(E,\mathcal{E}_{r,0})=h^{1}(E,\mathcal{E}_{r,0})=1.$
Actually we can define $\mathcal{E}_{r,0}$ by setting
$\mathcal{E}_{1,0}=\mathcal{O}_{E}$ and taking the unique non-trivial extension
$0\to\mathcal{E}_{r,0}\to\mathcal{E}_{r+1,0}\to\mathcal{O}_{E}\to 0$ (2)
inductively. We can also see that
$\mathcal{E}_{r,0}\cong(\mathcal{E}_{r,0})^{\vee}$.
Next, let us define the unique indecomposable vector bundle $\mathcal{E}_{P}$
of rank $2$ for a point $P\in E$, which fits into the following non-split
exact sequence
$0\to\mathcal{O}_{E}\to\mathcal{E}_{P}\to\mathcal{O}_{E}(P)\to 0.$ (3)
Although $\mathcal{E}_{P}\not\cong\mathcal{E}_{Q}$ for distinct points $P,Q\in
E$, we have $\mathbb{P}(\mathcal{E}_{P})\cong\mathbb{P}(\mathcal{E}_{Q})$
[Har77, Theorem V.2.15]. We can also see that
$\mathcal{M}(2,1)=\\{\mathcal{E}_{P}\mid P\in E\\}.$
### 2.3 Elliptic ruled surfaces
We use the terminology on ruled surfaces in [Har77, V.2], unless otherwise
specified. Let $f\colon S\rightarrow E$ be a $\mathbb{P}^{1}$-bundle structure
over an elliptic curve $E$. Denote a general fiber of $f$ by $F_{f}$. Then
there is a rank $2$ vector bundle $\mathcal{E}$ on $E$ such that
$S\cong\mathbb{P}(\mathcal{E})$ and we assume that $\mathcal{E}$ is
normalized. Put $e=-\deg\mathcal{E}$ and take a minimal section $C_{0}$ with
${C_{0}}^{2}=-e$. Let us take a divisor $D$ on $E$ which satisfies
$\mathcal{O}_{E}(-D)\cong\det\mathcal{E}$, so that $\deg D=e$. Then we have
$K_{S}\sim-2C_{0}-f^{*}D\equiv-2C_{0}-eF_{f}.$
Note that if $S$ has an elliptic fibration $\pi\colon S\to\mathbb{P}^{1}$,
then $-K_{S}$ is nef. Furthermore, we can easily see that $-K_{S}$ is nef if
and only if $e=0,-1$. We can also deduce from [Har77, V.2] that
$\mathcal{E}\cong\begin{cases}\mathcal{O}_{E}\oplus\mathcal{L}\ \mbox{ or }\
\mathcal{E}_{2,0}\quad&\mbox{in the case $e=0$}\\\
\mathcal{E}_{Q}\quad&\mbox{in the case $e=-1$}\end{cases}$ (4)
for some $\mathcal{L}\in\mathop{\mathrm{Pic}}\nolimits^{0}E$ and $Q\in E$.
Moreover, no rational curve on $S$ can dominate the elliptic curve $E$ via
$f$. This fact immediately implies the following.
###### Lemma 2.7.
An elliptic ruled surface $S$ has no quasi-elliptic fibrations. Moreover, if
$S$ has an elliptic fibration, every singular fiber is of type
${}_{m}\mathrm{I}_{0}$ for some $m>1$, that is, a multiple fiber whose
reduction is a smooth elliptic curve.
### 2.4 Non-existence of elliptic fibrations
In this subsection, we show the non-existence of elliptic fibrations on
certain elliptic ruled surfaces. First we prove the following two lemmas.
###### Lemma 2.8.
For $m\geq 0$, we have
$h^{0}(E,\operatorname{Sym}^{m}\mathcal{E}_{2,0})=\begin{cases}1&\text{$p=0$
or $m<p$}\\\ 2&\text{$m=p>0$}.\end{cases}$
###### Proof.
There is a natural exact sequence
$0\to\operatorname{Sym}^{r-1}\mathcal{E}_{2,0}\to\operatorname{Sym}^{r}\mathcal{E}_{2,0}\to\mathcal{O}_{E}\to
0,$ (5)
and Atiyah observed in [Ati57, Theorem 9] that in the case $p=0$ or $1\leq
r<p$, $\operatorname{Sym}^{r-1}\mathcal{E}_{2,0}\cong\mathcal{E}_{r,0}$, and
we can see that (2) is isomorphic to (5).
In the case $m=p>0$, we can see that the exact sequence (5) splits, as
follows. Let us consider an affine open covering $\\{U_{i}\\}_{i}$ of $E$.
Note that $\mathcal{E}_{2,0}$ is trivialized on each $U_{i}$. The transition
functions of $\mathcal{E}_{2,0}$ on $U_{i}\cap U_{j}$ can be described as
$\begin{pmatrix}1&f_{ij}\\ 0&1\end{pmatrix}$
by (5), where $f_{ij}$ is a regular function on $U_{i}\cap U_{j}$. Then the
transition functions of $\operatorname{Sym}^{m}\mathcal{E}_{2,0}$ are
$\begin{pmatrix}1&mf_{ij}&{}_{m}C_{2}f_{ij}^{2}&{}_{m}C_{3}f_{ij}^{3}&\cdots&mf_{ij}^{m-1}&f_{ij}^{m}\\\
0&1&(m-1)f_{ij}&{}_{m-1}C_{2}f_{ij}^{2}&\cdots&(m-1)f_{ij}^{m-2}&f_{ij}^{m-1}\\\
0&0&1&(m-2)f_{ij}&\cdots&(m-2)f_{ij}^{m-3}&f_{ij}^{m-2}\\\
\vdots&\vdots&\vdots&\vdots&\cdots&\vdots&\vdots\\\ 0&0&0&0&\ldots&1&f_{ij}\\\
0&0&0&0&\ldots&0&1\end{pmatrix}.$
We call this $(m+1)\times(m+1)$ matrix $A_{ij}^{(m)}$. When $m=p$, it is of
the following form
$A_{ij}^{(p)}=\left(\begin{array}{c|c}1&\begin{matrix}0&\cdots&0&f_{ij}^{p}\end{matrix}\\ \hline\mathbf{0}&A_{ij}^{(p-1)}\end{array}\right),$
since ${}_{p}C_{k}$ is divisible by $p$. When $E$ is an ordinary (resp.
supersingular) elliptic curve, the Frobenius morphism $\operatorname{Fr}$
induces a bijection (resp. a zero map)
$\operatorname{Fr}^{*}\colon H^{1}(E_{p},\mathcal{O}_{E_{p}})\to
H^{1}(E,\mathcal{O}_{E}).$
Thus we may write
$\\{f_{ij}^{p}\\}=\lambda\\{f_{ij}\\}\in H^{1}(E,\mathcal{O}_{E})$
for some $\lambda\in k^{*}$ (resp. $\lambda=0$). In particular, there are
regular functions $g_{i}$ on $U_{i}$ such that $f_{ij}^{p}-\lambda
f_{ij}=g_{i}-g_{j}$.
Take a $(p+1)\times(p+1)$ matrix $P_{i}$ as
$P_{i}=\left(\begin{array}{c|c}1&\begin{matrix}0&\cdots&0&\lambda&-g_{i}\end{matrix}\\ \hline\mathbf{0}&I_{p}\end{array}\right).$
Here $I_{p}$ is the $p\times p$ identity matrix. Then we can see that
$A_{ij}^{(p)}P_{i}=P_{j}\widetilde{A_{ij}}^{(p)},$
where we put
$\widetilde{A_{ij}}^{(p)}:=\left(\begin{array}{c|c}1&\mathbf{0}\\ \hline\mathbf{0}&A_{ij}^{(p-1)}\end{array}\right).$
This means that the vector bundles defined by the transition functions
$\\{A_{ij}^{(p)}\\}$ and $\\{\widetilde{A_{ij}}^{(p)}\\}$ are isomorphic to
each other. This completes the proof. ∎
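The matrix identity $A_{ij}^{(p)}P_{i}=P_{j}\widetilde{A_{ij}}^{(p)}$ holds modulo $p$ once the relation $f_{ij}^{p}-\lambda f_{ij}=g_{i}-g_{j}$ is imposed. The following Python/SymPy sketch (our own sanity check, not part of the original argument) builds these matrices for $p=3$ and confirms that every coefficient of every entry of the difference is divisible by $p$:

```python
import sympy as sp

p = 3  # any small prime gives the same sanity check
f, lam, gj = sp.symbols('f lambda g_j')
gi = gj + f**p - lam * f  # impose the relation f^p - lam*f = g_i - g_j

def A(m):
    # transition matrix of Sym^m applied to the unipotent matrix ((1, f), (0, 1))
    return sp.Matrix(m + 1, m + 1,
                     lambda r, c: sp.binomial(m - r, c - r) * f**(c - r)
                     if c >= r else 0)

def P(g):
    # the matrix P_i: identity except for lambda and -g in the first row
    M = sp.eye(p + 1)
    M[0, p - 1] = lam
    M[0, p] = -g
    return M

Atilde = sp.zeros(p + 1, p + 1)  # block diagonal (1, A^(p-1))
Atilde[0, 0] = 1
Atilde[1:, 1:] = A(p - 1)

D = sp.expand(A(p) * P(gi) - P(gj) * Atilde)
# every coefficient of every nonzero entry is divisible by p,
# i.e. A^(p) P_i = P_j Atilde^(p) in characteristic p
ok = all(c % p == 0
         for entry in D if entry != 0
         for c in sp.Poly(entry, f, lam, gj).coeffs())
```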
###### Lemma 2.9.
Take $\mathcal{L}\in\mathop{\mathrm{Pic}}\nolimits^{0}E$ and
$m\in\mathbb{Z}_{\geq 0}$. Then we have
$h^{0}(E,\operatorname{Sym}^{m}(\mathcal{O}_{E}\oplus\mathcal{L}))=\begin{cases}1&m<\operatorname{ord}\mathcal{L}\\\
2&m=\operatorname{ord}\mathcal{L}.\end{cases}$
###### Proof.
The assertion follows from the isomorphism
$\displaystyle\operatorname{Sym}^{m}(\mathcal{O}_{E}\oplus\mathcal{L})\cong\mathcal{O}_{E}\oplus\mathcal{L}\oplus\cdots\oplus{\mathcal{L}}^{\otimes
m}.$
∎
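The count in Lemma 2.9 is elementary: a summand $\mathcal{L}^{\otimes k}$ of the decomposition contributes to $h^{0}$ exactly when $\operatorname{ord}\mathcal{L}$ divides $k$. A small Python sketch of this counting (an illustration, not part of the proof):

```python
def h0_sym(m, ord_L):
    # h^0(E, L^k) is 1 iff L^k is trivial, i.e. ord_L | k, and 0 otherwise;
    # summing over the m+1 summands L^0, L^1, ..., L^m of Sym^m(O + L)
    return sum(1 for k in range(m + 1) if k % ord_L == 0)

for d in range(2, 8):        # d plays the role of ord(L)
    for m in range(0, d):    # m < ord(L)
        assert h0_sym(m, d) == 1
    assert h0_sym(d, d) == 2  # m = ord(L)
```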
###### Proposition 2.10.
Let $\mathcal{E}$ be a vector bundle of rank $2$ on an elliptic curve $E$. In
the following cases, $S:=\mathbb{P}_{E}(\mathcal{E})$ has no elliptic
fibrations.
1. (i)
$p=0$ and $\mathcal{E}=\mathcal{E}_{2,0}$.
2. (ii)
$p\geq 0$ and $\mathcal{E}=\mathcal{O}_{E}\oplus\mathcal{L}$ for
$\mathcal{L}\in\mathop{\mathrm{Pic}}\nolimits^{0}E$ with
$\operatorname{ord}\mathcal{L}=\infty$.
###### Proof.
In both cases, we have $e=0$ and hence $-K_{S}\sim 2C_{0}+f^{*}D$, where
$f\colon S\to E$ is the $\mathbb{P}^{1}$-bundle structure and $D$ is a divisor
on $E$ satisfying $\mathcal{O}_{E}(-D)\cong\det\mathcal{E}$. If $S$ has an elliptic
fibration, the complete linear system $|-mK_{S}|$ for some $m>0$ defines it,
and hence $h^{0}(S,\omega_{S}^{\otimes{-m}})\geq 2$. On the other hand, we
have
$h^{0}(S,\omega_{S}^{\otimes{-m}})=h^{0}(S,\mathcal{O}_{S}(2mC_{0}+mf^{*}D))=h^{0}(E,\operatorname{Sym}^{2m}(\mathcal{E})\otimes\mathcal{O}_{E}(mD)),$
which equals $1$ in the case (i) by Lemma 2.8, and in the case (ii) by Lemma
2.9. Thus the assertion follows. ∎
### 2.5 Existence of elliptic fibrations
In this subsection, we show the existence of an elliptic fibration on a given
elliptic ruled surface $S=\mathbb{P}(\mathcal{E})$. If $S$ has an elliptic
fibration, then $-K_{S}$ is nef, hence $\mathcal{E}$ is one of the vector
bundles appearing in (4).
To achieve our purpose, first we study the structure of
$\varphi^{*}\mathcal{E}$, where $\varphi\colon F\to E$ is a finite isogeny of
elliptic curves. For a given such $\varphi$, there is a factorization
$\varphi\colon F\xrightarrow{\ \psi\ }E^{\prime}\xrightarrow{\ \phi\ }E,$
where $\psi$ is purely inseparable and $\phi$ is separable. It is known that
$\psi$ is a composition of Frobenius morphisms. If $\ker\phi$ is non-trivial,
it is a finite abelian group, which contains a cyclic subgroup of prime order.
Thus $\phi$ is decomposed into isogenies of prime degree. Hence it is
essential to study the structure of $\varphi^{*}\mathcal{E}$ for an isogeny
$\varphi$ of prime degree.
Suppose that $p>0$. Let $E$ be an ordinary elliptic curve. Then, $E$ contains
a subgroup $\mathbb{Z}/p\mathbb{Z}$ in a unique way, and let us call
$q_{E}\colon E\to E/(\mathbb{Z}/p\mathbb{Z})$
the quotient morphism. Recall that we have
$\widehat{q_{E}}=\operatorname{Fr}\colon E/(\mathbb{Z}/p\mathbb{Z})\cong
E_{p}\to E.$
To the contrary if $E$ is supersingular, we have
$\widehat{\operatorname{Fr}}=\operatorname{Fr}\colon E_{p}\to E.$
For an isogeny $\varphi\colon F\to E$ of degree $p$, note that
$\varphi=\begin{cases}\operatorname{Fr}\text{ if $\varphi$ is purely
inseparable},\\\ q_{F}\text{ if $\varphi$ is separable. (In this case, $E$ is
necessarily ordinary.)}\end{cases}$
We use the following lemma to prove Lemma 2.12.
###### Lemma 2.11 (Corollaries 1.7 and 2.6 in [Oda71]).
Let $\varphi\colon F\to E$ be an isogeny of elliptic curves of degree $n$.
Then we have
$\varphi_{*}\mathcal{O}_{F}=\begin{cases}\mathcal{E}_{p,0}&\text{ if $n=p$ and
$\widehat{\varphi}$ is purely inseparable}\\\
\bigoplus_{\varphi^{*}\mathcal{L}\cong\mathcal{O}_{F}}\mathcal{L}&\text{ if
$\widehat{\varphi}$ is separable.}\end{cases}$
###### Lemma 2.12.
Let $F$ and $E$ be elliptic curves. Take an indecomposable vector bundle
$\mathcal{E}_{Q}\in\mathcal{M}_{E}(2,1)$ for some $Q\in E$.
1. (i)
Suppose that $p>0$, and let $\varphi\colon F\to E$ be an isogeny of degree $p$
with $\widehat{\varphi}$ purely inseparable. Then we have
$\varphi^{*}\mathcal{E}_{2,0}\cong\mathcal{O}_{F}\oplus\mathcal{O}_{F}.$
2. (ii)
Suppose that $p=2$. Then we have
$\operatorname{Fr}^{*}\mathcal{E}_{Q}\cong\mathcal{E}_{2,0}\otimes\mathcal{O}_{E_{2}}(Q).$
3. (iii)
Suppose that $p\geq 0$. Let $\varphi\colon F\to E$ be a separable isogeny of
degree $2$. Then
$\varphi^{*}\mathcal{E}_{Q}\cong\mathcal{O}_{F}(Q_{1})\oplus\mathcal{O}_{F}(Q_{2}),$
where $Q_{i}\in F$ and $\mathcal{O}_{F}(Q_{1}-Q_{2})$ is an order $2$ element
of $\mathop{\mathrm{Pic}}\nolimits^{0}F$.
###### Proof.
Let $\varphi\colon F\to E$ be an isogeny of prime degree $n:=\deg\varphi$.
Note that Lemma 2.11 implies
$\mathop{\mathrm{Coker}}\nolimits\operatorname{adj}=\begin{cases}\mathcal{E}_{p-1,0}&\text{
if $n=p$ and $\widehat{\varphi}$ is purely inseparable, }\\\
\bigoplus_{\operatorname{ord}\mathcal{L}=n}\mathcal{L}&\text{ if
$\widehat{\varphi}$ is separable,}\end{cases}$
where
$\operatorname{adj}\colon\mathcal{O}_{E}\to\varphi_{*}\varphi^{*}\mathcal{O}_{E}=\varphi_{*}\mathcal{O}_{F}$
is the adjunction morphism.
(i) By the assumption, the connecting morphism
$H^{0}(\mathop{\mathrm{Coker}}\nolimits\operatorname{adj})\to
H^{1}(\mathcal{O}_{E})$
is an isomorphism, and hence the morphism
$\varphi^{*}=H^{1}(\operatorname{adj})\colon\mathop{\mathrm{Ext}}\nolimits_{E}^{1}(\mathcal{O}_{E},\mathcal{O}_{E})\cong
H^{1}(\mathcal{O}_{E})\to\mathop{\mathrm{Ext}}\nolimits^{1}_{F}(\varphi^{*}\mathcal{O}_{E},\varphi^{*}\mathcal{O}_{E})\cong
H^{1}(\varphi_{*}\mathcal{O}_{F})$
is zero. (Note that if $E$ is supersingular, this fact immediately follows
from the definition.) Hence the short exact sequence obtained by applying
$\varphi^{*}$ to the sequence (2) for $r=1$ splits. Then the result follows.
(ii), (iii) In both cases, we have
$h^{0}(\mathop{\mathrm{Coker}}\nolimits(\operatorname{adj}\otimes\mathcal{O}_{E}(-Q)))=0,$
and hence the morphism
$H^{1}(\operatorname{adj}\otimes\mathcal{O}_{E}(-Q))\colon
H^{1}(\mathcal{O}_{E}(-Q))\to
H^{1}(\varphi_{*}\varphi^{*}\mathcal{O}_{E}\otimes\mathcal{O}_{E}(-Q))=H^{1}(\varphi^{*}\mathcal{O}_{E}(-Q))$
is injective. Since it coincides with the morphism
$\varphi^{*}\colon\mathop{\mathrm{Ext}}\nolimits^{1}_{E}(\mathcal{O}_{E}(Q),\mathcal{O}_{E})\to\mathop{\mathrm{Ext}}\nolimits^{1}_{F}(\varphi^{*}\mathcal{O}_{E}(Q),\varphi^{*}\mathcal{O}_{E}),$
the short exact sequence
$0\to\mathcal{O}_{F}\to\varphi^{*}\mathcal{E}_{Q}\to\varphi^{*}\mathcal{O}_{E}(Q)\to
0$ (6)
obtained by applying $\varphi^{*}$ to the sequence (3) does not split.
Assume that $p=2$, $\varphi=\operatorname{Fr}$ and
$\operatorname{Fr}^{*}\mathcal{E}_{Q}$ is decomposable. Since
$\operatorname{Fr}^{*}Q=2Q$, we have
$\operatorname{Fr}^{*}\mathcal{E}_{Q}=\mathcal{O}_{E_{2}}(Q)\oplus\mathcal{O}_{E_{2}}(Q).$
Then it follows from (6) that there is an exact sequence
$0\to\mathcal{O}_{E_{2}}(-Q)\to\mathcal{O}_{E_{2}}\oplus\mathcal{O}_{E_{2}}\to\mathcal{O}_{E_{2}}(Q)\to
0.$
But this is absurd, since $h^{0}(\mathcal{O}_{E_{2}}(Q))=1$. Hence,
$\operatorname{Fr}^{*}\mathcal{E}_{Q}$ is indecomposable of degree $2$ with
$\det\operatorname{Fr}^{*}\mathcal{E}_{Q}\cong\mathcal{O}_{E_{2}}(2Q)$. This
completes the proof of (ii).
Next, let us give the proof of (iii). Set $\varphi^{*}Q=Q_{1}+Q_{2}$ for
distinct points $Q_{1}$ and $Q_{2}$ on $F$. Then we see that
$\det\varphi^{*}\mathcal{E}_{Q}\cong\mathcal{O}_{F}(Q_{1}+Q_{2})$ by (6). Note
that $\mathcal{O}_{F}(Q_{1}-Q_{2})$ is an order $2$ element of
$\mathop{\mathrm{Pic}}\nolimits^{0}F$, since $\varphi$ is a separable isogeny
of degree $2$. Assume that $\varphi^{*}\mathcal{E}_{Q}$ is indecomposable.
Then there is an element $\mathcal{L}\in\mathop{\mathrm{Pic}}\nolimits^{0}F$
such that
$\varphi^{*}\mathcal{E}_{Q}\otimes\mathcal{O}_{F}(-Q_{2})\cong\mathcal{E}_{2,0}\otimes\mathcal{L}$,
and thus
$\mathcal{O}_{F}(Q_{1}-Q_{2})\cong\det(\mathcal{E}_{2,0}\otimes\mathcal{L})\cong\mathcal{L}^{\otimes
2}.$
This contradicts the equality
$\operatorname{ord}\mathcal{O}_{F}(Q_{1}-Q_{2})=2$. Therefore, we obtain the
conclusion. ∎
###### Lemma 2.13.
Let $E$ be an elliptic curve, and take a line bundle
$\mathcal{L}\in\mathop{\mathrm{Pic}}\nolimits^{0}E$ with
$m:=\operatorname{ord}\mathcal{L}<\infty$. Then there is an isogeny
$\varphi\colon F\to E$ of degree $m$ such that
$\varphi^{*}\mathcal{L}\cong\mathcal{O}_{F}$.
###### Proof.
Define $\varphi$ to be the dual morphism of the quotient morphism
$E\cong\mathop{\mathrm{Pic}}\nolimits^{0}E\to
F:=\mathop{\mathrm{Pic}}\nolimits^{0}E/\left<\mathcal{L}\right>.$
Then, $\varphi$ satisfies the desired property. ∎
###### Lemma 2.14.
Let $\varphi\colon F\to E$ be a finite morphism of elliptic curves, and
$\mathcal{E}$ be a rank $2$ normalized vector bundle on $E$ with $e=-1$ or
$0$. Suppose that $S^{\prime}:=\mathbb{P}(\varphi^{*}\mathcal{E})$ has an
elliptic fibration $\pi^{\prime}$. Then we have the following.
1. (i)
We have $q^{*}\omega_{S}\cong\omega_{S^{\prime}}$, where $q\colon
S^{\prime}\cong F\times_{E}S\to S:=\mathbb{P}(\mathcal{E})$ is the second
projection.
2. (ii)
The elliptic ruled surface $S$ also has an elliptic fibration $\pi\colon
S\to\mathbb{P}^{1}$ fitting into the following commutative diagram:
$\begin{array}{ccccc}F&\xleftarrow{\ f^{\prime}\ }&S^{\prime}&\xrightarrow{\ \pi^{\prime}\ }&\mathbb{P}^{1}\\ {\scriptstyle\varphi}\downarrow&{\scriptstyle\square}&\downarrow{\scriptstyle q}&&\downarrow{\scriptstyle\psi}\\ E&\xleftarrow{\ f\ }&S&\xrightarrow{\ \pi\ }&\mathbb{P}^{1}\end{array}$
(7)
Here the left square is a fiber product diagram, and $\psi\circ\pi^{\prime}$
is the Stein factorization of $\pi\circ q$.
###### Proof.
(i) If $\varphi$ is étale, the result is obvious. Thus we may assume that
$\varphi$ is purely inseparable of degree $p$, that is
$\varphi=\operatorname{Fr}$. In this case,
$\Omega_{F/E}=\Omega_{F}=\mathcal{O}_{F}$, hence $\Omega_{S^{\prime}/S}\cong
f^{\prime*}\Omega_{F/E}=\mathcal{O}_{S^{\prime}}$. Then we obtain the result
by [Eke87, Corollary 3.4].
(ii) Take a sufficiently large integer $m$, so that
$h^{0}(S^{\prime},\omega_{S^{\prime}}^{\otimes-m})$ is sufficiently large.
Now, we have a series of equalities
$\displaystyle h^{0}(S^{\prime},\omega_{S^{\prime}}^{\otimes-m})$
$\displaystyle=h^{0}(E,f_{*}q_{*}q^{*}\omega_{S}^{\otimes-m})=h^{0}(E,\varphi_{*}f^{\prime}_{*}q^{*}\omega_{S}^{\otimes-m})$
$\displaystyle=h^{0}(E,\varphi_{*}\varphi^{*}f_{*}\omega_{S}^{\otimes-m})=h^{0}(E,f_{*}\omega_{S}^{\otimes-m}\otimes\varphi_{*}\mathcal{O}_{F}).$
Here, the third equality comes from the flat base change theorem, and the last
one from the projection formula.
If $\widehat{\varphi}$ is separable, it follows from Lemma 2.11 that there
exists an invertible sheaf $\mathcal{L}\in\mathop{\mathrm{Pic}}\nolimits^{0}E$
such that $\varphi^{*}\mathcal{L}\cong\mathcal{O}_{F}$ and
$h^{0}(S,\omega_{S}^{\otimes-m}\otimes
f^{*}\mathcal{L})=h^{0}(E,f_{*}\omega_{S}^{\otimes-m}\otimes\mathcal{L})\geq
2.$
Next, suppose that $\widehat{\varphi}$ is purely inseparable of degree $p$.
Since $\varphi_{*}\mathcal{O}_{F}\cong\mathcal{E}_{p,0}$ by Lemma 2.11 and
$\mathcal{E}_{p,0}$ has a filtration into $\mathcal{O}_{E}$ by (2), we have
$h^{0}(S,\omega_{S}^{\otimes-m})=h^{0}(E,f_{*}\omega_{S}^{\otimes-m})\geq 2.$
Since, in general, $\varphi$ is a composition of separable morphisms and
purely inseparable morphisms of degree $p$, we can find a divisor $H$ on $S$
satisfying $H^{2}=K_{S}\cdot H=0$ and $h^{0}(S,\mathcal{O}_{S}(H))\geq 2$. Let
us consider the moving part $|M|$ of the complete linear system $|H|.$ Note
that every effective divisor on $S$ has a non-negative self-intersection,
hence $M^{2}=0$ and $h^{0}(S,\mathcal{O}_{S}(M))\geq 2$. Thus we know that the
linear system $|M|$ is base point free, and it satisfies $M\cdot K_{S}=0$. Let
us take the Stein factorization of the projective morphism defined by $|M|$,
and then we obtain a morphism $\pi\colon S\to\mathbb{P}^{1}$ with connected
fibers. [Mac40, Theorem 2] forces the general fiber of $\pi$ to be reduced. Lemma
2.7 rules out the possibility that $\pi$ is quasi-elliptic, hence $\pi$ is
elliptic.
Taking the Stein factorization of $\pi\circ q$, we obtain a finite morphism
$\psi$ in the diagram. ∎
We apply Lemma 2.14 to show the following.
###### Proposition 2.15.
Let $\mathcal{E}$ be a vector bundle of rank $2$ on an elliptic curve $E$. In
the following cases, $\mathbb{P}(\mathcal{E})$ has an elliptic fibration.
1. (i)
$p>0$ and $\mathcal{E}=\mathcal{E}_{2,0}$.
2. (ii)
$p\geq 0$ and $\mathcal{E}=\mathcal{O}_{E}\oplus\mathcal{L}$ for
$\mathcal{L}\in\mathop{\mathrm{Pic}}\nolimits^{0}E$ with
$\operatorname{ord}\mathcal{L}<\infty$.
3. (iii)
$p\geq 0$ and $\mathcal{E}=\mathcal{E}_{Q}$.
###### Proof.
(i) Take an isogeny $\varphi\colon F\to E$ of degree $p$ such that
$\widehat{\varphi}$ is purely inseparable, namely
$\varphi:=\widehat{\operatorname{Fr}}$. Then we have an isomorphism
$\mathbb{P}(\varphi^{*}\mathcal{E}_{2,0})\cong\mathbb{P}^{1}\times E$ by Lemma
2.12 (i). The result then follows from Lemma 2.14.
(ii) Lemma 2.13 tells us that
$\varphi^{*}(\mathcal{O}_{E}\oplus\mathcal{L})\cong\mathcal{O}_{F}\oplus\mathcal{O}_{F}$
for a suitable isogeny $\varphi\colon F\to E$. Hence the result follows from
Lemma 2.14.
(iii) Suppose that $p=2$. Then Lemma 2.12 (ii) assures the existence of an
isomorphism
$\mathbb{P}(\operatorname{Fr}^{*}\mathcal{E}_{Q})\cong\mathbb{P}(\mathcal{E}_{2,0})$.
Then, combining (i) with Lemma 2.14, we obtain the conclusion.
Suppose that $p\neq 2$. Then Lemma 2.12 (iii) implies that there is an
isomorphism
$\mathbb{P}(\varphi^{*}\mathcal{E}_{Q})\cong\mathbb{P}(\mathcal{O}_{F}\oplus\mathcal{L})$,
where $\varphi\colon F\to E$ is an isogeny of degree $2$, and
$\mathcal{L}\in\mathop{\mathrm{Pic}}\nolimits^{0}F$ with
$\operatorname{ord}\mathcal{L}=2$. Combining (ii) with Lemma 2.14, we obtain
the conclusion. ∎
###### Remark 2.16.
Maruyama showed in [Mar71, Lemma 9] that $\mathbb{P}(\mathcal{E}_{Q})$
($\mathbf{P}_{1}$ in his notation) has a base point free linear pencil whose
generic member is an elliptic curve if $p\neq 2$. In addition, he stated in
[Mar71, Remark 7] that a similar result is true in the case $p=2$
“by reduction”. The authors do not understand what this means.
Using this result, he showed which elliptic ruled surfaces have an elliptic
fibration in [Mar71, Theorem 4]. But the authors feel that the proof in the
case $p=2$ is quite unsatisfactory.
### 2.6 Ramifications and singular fibers
Let us consider the situation in Lemma 2.14 and the diagram (7). An elliptic
curve isogenous to an ordinary elliptic curve is again ordinary. Therefore, we
see that all of $E$, $F$, the general fibers of $\pi$ and $\pi^{\prime}$, and
the reductions of multiple fibers of $\pi$ and $\pi^{\prime}$ are
simultaneously either ordinary or supersingular elliptic curves. In this
subsection, we extract information on the multiple fibers of $\pi$ from those
of $\pi^{\prime}$, and vice versa.
###### Lemma 2.17.
In the diagram (7), fix a point $Q\in\mathbb{P}^{1}$ and write $\psi^{*}Q=\sum
e_{i}Q_{i}^{\prime}$, where $Q_{i}^{\prime}\neq Q_{j}^{\prime}$ for $i\neq j$.
Denote the fiber of $\pi$ over $Q$ by $mD$, where $m$ is its multiplicity, and
denote the fiber $m^{\prime}_{i}D^{\prime}_{i}$ of $\pi^{\prime}$ over the
point $Q_{i}^{\prime}$ similarly.
1. (i)
For each $i$, $m^{\prime}_{i}e_{i}$ is divisible by $m$. Moreover we have
$\sum\frac{m_{i}^{\prime}e_{i}}{m}\leq\deg q.$
2. (ii)
If $m=1$ holds, then we have $m_{i}^{\prime}=1$ for all $i$. Conversely if
$e_{i}=m_{i}^{\prime}=1$ holds for some $i$, then we have $m=1$.
3. (iii)
Suppose that $\psi$ is not ramified at $Q^{\prime}_{i}$, that is $e_{i}=1$.
Then $\pi$ has a multiple fiber over $Q$ if and only if $\pi^{\prime}$ has a
multiple fiber over $Q^{\prime}_{i}$.
###### Proof.
(i) The assertions are a direct consequence of the equalities
$q^{*}(mD)=q^{*}\pi^{*}Q=\pi^{\prime*}\psi^{*}Q=\sum
m_{i}^{\prime}e_{i}D_{i}^{\prime}.$ (8)
(ii) First assume that $m=1$. Then the inequality
$\sum m_{i}^{\prime}e_{i}\leq\deg q=\deg\psi=\sum e_{i}$
forces $m_{i}^{\prime}=1$ for all $i$. The second assertion is a direct
consequence of (i).
(iii) The assertion is a direct consequence of (ii). ∎
###### Lemma 2.18.
In the notation in Lemma 2.17, we assume furthermore that $\deg q=\deg\psi=2$.
Let us consider the morphism $q|_{D_{i}^{\prime}}\colon D_{i}^{\prime}\to D$.
1. (i)
Suppose that $e_{1}=1$ (equivalently, $e_{2}=1$) holds. Then we have
$m_{1}^{\prime}=m_{2}^{\prime}=m$, and the multiple fibers $mD$, $mD_{1}^{\prime}$
and $mD_{2}^{\prime}$ are simultaneously either tame or wild.
2. (ii)
Suppose that $p=m=2$ and $e_{1}=2$, $m_{1}^{\prime}=1$, and
$q|_{D_{1}^{\prime}}$ is separable. Then $2D$ is a wild fiber.
3. (iii)
Suppose that $p=m=2$ and $e_{1}=2$, $m_{1}^{\prime}=2$, and
$q|_{D_{1}^{\prime}}$ is an isomorphism. Then $2D$ is a wild fiber.
###### Proof.
Note that
$q|_{D_{i}^{\prime}}^{*}\mathcal{O}_{D}(D)\cong\mathcal{O}_{D_{i}^{\prime}}(q^{*}D)$.
(i) Lemma 2.17 (i) tells us that $m_{1}^{\prime}=m_{2}^{\prime}=m$. We can see
that $q|_{D_{i}^{\prime}}$ is an isomorphism and
$q|_{D_{i}^{\prime}}^{*}\mathcal{O}_{D}(D)\cong\mathcal{O}_{D_{i}^{\prime}}({D_{i}^{\prime}})$.
Hence,
$\operatorname{ord}\mathcal{O}_{D}(D)=\operatorname{ord}\mathcal{O}_{D_{i}^{\prime}}({D_{i}^{\prime}})$,
and the second assertion follows.
(ii) Note that $q^{*}D=D_{1}^{\prime}$ in this case. The morphism
$q|_{D_{1}^{\prime}}$ is either the quotient morphism by
$\mathbb{Z}/2\mathbb{Z}$ or an isomorphism, hence the kernel
$\mathop{\mathrm{Ker}}\nolimits\widehat{q|_{D_{1}^{\prime}}}$ of the dual
morphism is $\mu_{2}$ in the former case, and trivial in the latter case.
On the other hand, we have
$q|_{D_{1}^{\prime}}^{*}\mathcal{O}_{D}(D)\cong\mathcal{O}_{D_{1}^{\prime}}({D_{1}^{\prime}})\cong\mathcal{O}_{D_{1}^{\prime}}$,
and hence $\operatorname{ord}\mathcal{O}_{D}(D)=1$. In particular, $2D$ is a
wild fiber.
(iii) In this case, we have $q^{*}D=2D_{1}^{\prime}$, and hence we see
$q|_{D_{1}^{\prime}}^{*}\mathcal{O}_{D}(D)\cong\mathcal{O}_{D_{1}^{\prime}}(2D_{1}^{\prime})\cong\mathcal{O}_{D_{1}^{\prime}}$.
Thus $\operatorname{ord}\mathcal{O}_{D}(D)=1$, and therefore, $2D$ is a wild
fiber. ∎
## 3 Multiple fibers on elliptic ruled surfaces
We study multiple fibers of an elliptic fibration $\pi\colon
S\to\mathbb{P}^{1}$ on an elliptic ruled surface $S$ in this section. We use
the notation in §2.1.
###### Lemma 3.1.
We have
$\sum_{i=1}^{\lambda}\displaystyle\frac{a_{i}}{m_{i}}<2+d.$
###### Proof.
By Proposition 2.1, we have
$h^{0}(S,\omega_{S}^{\otimes
n})=h^{0}(S,\pi^{*}\mathcal{M}\otimes\mathcal{O}_{S}(\sum_{i=1}^{\lambda}(na_{i}-m_{i}[\frac{na_{i}}{m_{i}}])D_{i})),$
(9)
where we set
$\mathcal{M}:=\omega_{\mathbb{P}^{1}}^{\otimes
n}\otimes\mathcal{L}_{\pi}^{\otimes-n}\otimes\mathcal{O}_{\mathbb{P}^{1}}(\sum_{i=1}^{\lambda}\displaystyle[\frac{na_{i}}{m_{i}}]).$
Then since $\sum_{i=1}^{\lambda}(na_{i}-m_{i}[\frac{na_{i}}{m_{i}}])D_{i}$ is
a fixed part of the linear system $|nK_{S}|$, we have
$\displaystyle 0=h^{0}(S,\omega_{S}^{\otimes
n})=h^{0}(S,\pi^{*}\mathcal{M})=h^{0}(\mathbb{P}^{1},\mathcal{M}).$
Hence we obtain $\sum_{i=1}^{\lambda}[\frac{na_{i}}{m_{i}}]<n(d+2)$ for all
$n>0$. This is equivalent to the desired inequality
$\sum_{i=1}^{\lambda}\displaystyle\frac{a_{i}}{m_{i}}<2+d.$
∎
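To spell out the last step of the proof (an elementary sketch): choosing $n$ divisible by every $m_{i}$, the floor brackets in $\sum_{i}[\frac{na_{i}}{m_{i}}]<n(d+2)$ disappear, so that

```latex
% take n = m_1 m_2 \cdots m_\lambda, so that [na_i/m_i] = na_i/m_i for every i:
\sum_{i=1}^{\lambda}\frac{n a_{i}}{m_{i}} \;<\; n(d+2)
\qquad\Longleftrightarrow\qquad
\sum_{i=1}^{\lambda}\frac{a_{i}}{m_{i}} \;<\; d+2 .
```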
Now we can prove the following lemma.
###### Lemma 3.2.
The quantities $d,m_{i},a_{i}$ satisfy the following.
Case | $d$ | $m_{i}$ | $a_{i}$ | $p$
---|---|---|---|---
$(I)$ | $0$ | no multiple fibers | | $p\geq 0$
$(II)$ | $0$ | $(m,m)$, $m>1$ | | $p\geq 0$
$(III)$ | $0$ | $(2,2,2)$ | | $p\geq 0$
$(IV)$ | $-1$ | $({p^{\alpha}}^{*})$, $\alpha>0$ | $a_{1}=p^{\alpha}-1$ | $p>0$
$(V)$ | $-1$ | $({p^{\alpha}}^{*})$, $\alpha>0$ | $a_{1}=p^{\alpha}-2$ | $p>0$
$(VI)$ | $-1$ | $(2,2^{*})$ | $a_{2}=0$ | $p=2$
The symbol ∗ denotes a wild fiber.
###### Proof.
We divide into the following two cases: (1) $d=0$ and (2) $d=-1$
(equivalently, by Lemma 3.1, $\sum_{i=1}^{\lambda}\frac{a_{i}}{m_{i}}<2$ and
$<1$, respectively).
In the case (1), it follows from Proposition 2.1 (iii) that $\pi$ has no wild
fibers. In particular, we have $a_{i}=m_{i}-1$ for all $i$ by Proposition 2.1
(ii). Then $\pi$ has no multiple fibers, _or_ $\pi$ has multiple fibers of
type ($m$), ($m_{1},m_{2}$), ($2,2,m$) or ($2,3,m_{3}$), where $m$ and $m_{i}$
satisfy that
$2\leq m,m_{1},m_{2},\quad 3\leq m_{3}\leq 5.$
In the latter case, apply Theorem 2.5 (and Remark 2.6), and then we see that
the multiple fibers are of type ($m,m$) or ($2,2,2$).
In the case (2), Proposition 2.1 implies that
$h^{0}(\mathbb{P}^{1},\mathcal{T}_{\pi})=1,$ which means that $\pi$ has a
single wild fiber. Then the inequality $\sum_{i=1}^{\lambda}{a_{i}}/{m_{i}}<1$
implies that the multiple fiber is unique (2-1), _or_ that $\pi$ has one tame
multiple fiber and one wild fiber (2-2).
(2-1) In this case, Remark 2.6 (i) implies that the unique multiple fiber has
multiplicity $p^{\alpha}$ ($\alpha>0$) and $\nu=1$, and Theorem 2.3
implies that $a=m-1$ or $m-2$.
(2-2) Suppose that the type of $\pi$ is $(m_{1},m_{2}^{*})$. Then the
inequality
$\frac{m_{1}-1}{m_{1}}+\frac{a_{2}}{m_{2}}<1$
and Theorem 2.3 yield that $a_{2}=m_{2}-\nu_{2}-1$. Since $m_{2}$ is of the
form $p^{\alpha}\nu_{2}$ with $\alpha>0$, we can see that $p=2$, the type of
$\pi$ is $(2,2^{*})$ and $a_{2}=0$. ∎
## 4 Proof of Theorem 1.1
Let $S$ be an elliptic ruled surface admitting a $\mathbb{P}^{1}$-bundle
structure $f\colon S\rightarrow E$ over an elliptic curve $E$. We use the
notation in §2.3.
###### Lemma 4.1.
Suppose that $S$ has an elliptic fibration $\pi\colon
S\rightarrow\mathbb{P}^{1}$. Then the equality $e=0$ holds in the cases (I),
(II) and (V) in Lemma 3.2, and the equality $e=-1$ holds in the cases (III),
(IV) and (VI) in Lemma 3.2.
###### Proof.
First of all, we have $e=0$ or $-1$, as is mentioned in §2.2. Note that the
multiplicities of all multiple fibers of $\pi$ coincide by Lemma 3.2. Hence
$D_{i}\equiv D_{j}$ for each $i,j$ in the notation in §2.1. We denote the
common multiplicity by $m$. If $\pi$ has no multiple fibers, we put $m=1$
for simplicity.
Proposition 2.1 implies that
$K_{S}\sim(-2-d)F_{\pi}+\sum_{i}a_{i}D_{i}\equiv((-2-d)m+\sum_{i}a_{i})D,$
where $D$ is a reduction of a multiple fiber of $\pi$. By a direct
computation, we have
$(-2-d)m+\sum_{i}a_{i}=\begin{cases}-2&\text{in the cases (I),(II) and
(V),}\\\ -1&\text{in the cases (III), (IV) and (VI).}\end{cases}$
Consequently in the cases (I), (II) and (V), the integer $e=K_{S}\cdot C_{0}$
should be even, i.e. $e=0$. In the cases (III), (IV) and (VI), we have $D\cdot
F_{f}=-K_{S}\cdot F_{f}=2.$ Combining this with the equality $C_{0}\cdot
F_{f}=1$, we conclude that $C_{0}$ cannot be contained in a fiber of $\pi$,
equivalently $e=-(C_{0})^{2}\neq 0$. This means $e=-1$. ∎
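For concreteness, the direct computation in the proof can be checked case by case against the table in Lemma 3.2 (a sketch; tame fibers have $a_{i}=m_{i}-1$):

```latex
% Case (II): d = 0, type (m,m), a_1 = a_2 = m-1:
(-2-0)m + 2(m-1) = -2
% Case (III): d = 0, type (2,2,2), m = 2, a_i = 1:
(-2-0)\cdot 2 + 3\cdot 1 = -1
% Case (IV): d = -1, m = p^\alpha, a_1 = p^\alpha - 1:
(-2+1)p^{\alpha} + (p^{\alpha}-1) = -1
% Case (V): d = -1, m = p^\alpha, a_1 = p^\alpha - 2:
(-2+1)p^{\alpha} + (p^{\alpha}-2) = -2
```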
###### Remark 4.2.
In the proof of Lemma 4.1, we show that
$K_{S}\equiv\begin{cases}-2D\equiv-2C_{0}&\mbox{ if }e=0,\\\
-D\equiv-2C_{0}+F_{f}&\mbox{ if }e=-1.\end{cases}$
In the diagram (7), take a point $Q\in\mathbb{P}^{1}$ and put
$Q^{\prime}:=\psi(Q)$. Denote the fiber over the point $Q^{\prime}$ by
$m^{\prime}D^{\prime}$, where $m^{\prime}$ is its multiplicity, and the fiber
over the point $Q$ by $mD$ similarly. Suppose that $e=0$ for $S^{\prime}$, and
$e=-1$ for $S$. Since $q^{*}K_{S}=K_{S^{\prime}}$ holds by Lemma 2.14 (i), we
obtain
$2D^{\prime}\equiv q^{*}D.$ (10)
We now give the proof of Theorem 1.1.
_Proof of Theorem 1.1._ (i) In the case $e=0$, we start with the following
claim.
###### Claim 4.3.
Assume that $S$ has an elliptic fibration $\pi$ admitting at least one
multiple fiber. Then $\pi$ is of type $(m,m)$ if and only if $\mathcal{E}$ is
decomposable.
###### Proof.
Recall first that $\mathcal{E}$ is decomposable if and only if there exist two
sections $C_{1}$ and $C_{2}$ of $f$ such that $C_{1}\cap C_{2}=\emptyset$
([Har77, Exercise V.2.2]). Moreover, Remark 4.2 says that in the case $e=0$,
$D\equiv C_{0}$, where $mD$ is a multiple fiber with multiplicity $m$.
Conversely, if some irreducible curve $D$ satisfies $D\equiv C_{0}$, then $D$
is the reduction of some multiple fiber of $\pi$, since $-K_{S}\cdot D=0$.
Suppose that $\pi$ has two multiple fibers $mD_{1}$ and $mD_{2}$ with
multiplicity $m$. Then $D_{1}$ and $D_{2}$ are sections of $f$, and hence
$\mathcal{E}$ is decomposable.
Next, suppose that $C_{1}$ and $C_{2}$ are two sections of $f$ satisfying
$C_{1}\cap C_{2}=\emptyset$. Then we can put $C_{1}\equiv C_{0}+b_{1}F_{f}$
and $C_{2}\equiv C_{0}+b_{2}F_{f}$ for some $b_{1},b_{2}\geq 0$. The
equalities
$0=C_{1}\cdot C_{2}=C_{0}^{2}+(b_{1}+b_{2})C_{0}\cdot F_{f}=b_{1}+b_{2}$
yield $b_{1}=b_{2}=0$. Hence $C_{1}$ and $C_{2}$ are the reductions of some
multiple fibers of $\pi$. Therefore, Lemma 4.1 says that this situation fits
into the case where $\pi$ is of type $(m,m)$. ∎
Let us consider the case where $\mathcal{E}$ is decomposable. Then we can take
$\mathcal{L}\in\mathop{\mathrm{Pic}}\nolimits^{0}E$ with
$\operatorname{ord}\mathcal{L}=m>1$ such that
$\mathcal{E}=\mathcal{O}_{E}\oplus\mathcal{L}$. In this case, since
$h^{0}(S,\mathcal{O}_{S}(nC_{0}))=h^{0}(E,\operatorname{Sym}^{n}(\mathcal{O}_{E}\oplus\mathcal{L}))$
holds, we can see by Lemma 2.9 that the elliptic fibration $\pi$ has a
multiple fiber $mC_{0}$, and thus we can apply Claim 4.3 to conclude that
$\pi$ is of type $(m,m)$.
Next we consider the case where $\mathcal{E}$ is indecomposable, that is,
$\mathcal{E}=\mathcal{E}_{2,0}$. Then combining Lemma 2.8 with Lemma 2.9, we
know that $S$ has an elliptic fibration $\pi$ in the case $p>0$, and no
elliptic fibrations in the case $p=0$. We know from Claim 4.3 that $\pi$ is of
type $(p^{\alpha*})$. Recall that we have the following diagram:
[Diagram: $q\colon F\times\mathbb{P}^{1}\to\mathbb{P}(\mathcal{E}_{2,0})$, covering $\varphi\colon F\to E$ (via the bundle maps $f^{\prime}$, $f$) and $\psi\colon\mathbb{P}^{1}\to\mathbb{P}^{1}$ (via the fibrations $\pi^{\prime}$, $\pi$).]
Here, $\widehat{\varphi}\colon E\to F$ is a purely inseparable isogeny of
degree $p$. Applying Lemma 2.17 (i) for $m=p^{\alpha}$, $m_{i}^{\prime}=1$,
$e_{i}\leq p=\deg\psi$, we obtain $\alpha=1$.
(ii) In the case $e=-1$, $S$ has an elliptic fibration by Proposition 2.15 (iii).
When $p\neq 2$, we apply Lemma 2.17 (i) for the diagram
[Diagram (11): $q\colon\mathbb{P}(\mathcal{O}_{F}\oplus\mathcal{L})\to\mathbb{P}(\mathcal{E}_{Q})$, covering $\varphi\colon F\to E$ (via the bundle maps $f^{\prime}$, $f$) and $\psi\colon\mathbb{P}^{1}\to\mathbb{P}^{1}$ (via the fibrations $\pi^{\prime}$, $\pi$).]
where $\varphi$ is a separable isogeny of degree $2$ and $\mathcal{L}$ is an
order $2$ element in $\mathop{\mathrm{Pic}}\nolimits^{0}F$. It turns out that
the case (IV) is impossible because $\pi^{\prime}$ is of type $(2,2)$. Assume
that $\pi$ has a wild fiber $2D$. Then
$\mu:=\operatorname{ord}\mathcal{O}_{D}(D)=1$, which contradicts $p\neq 2$
and (1). Hence, we conclude that the case (VI) is also impossible, and thus
the case (III) occurs.
When $p=2$ and $E$ is supersingular, then Lemma 2.2 implies that the cases
(III) and (VI) in Lemma 3.2 do not occur. Hence, the unique possibility is the
case (IV) by Lemma 4.1, that is, $\pi$ is of type
$(2^{\alpha*})$. We have the following diagram:
[Diagram: $q\colon\mathbb{P}(\mathcal{E}_{2,0})\to\mathbb{P}(\mathcal{E}_{Q})$, covering $\operatorname{Fr}\colon E_{2}\to E$ (via the bundle maps $f^{\prime}$, $f$) and $\psi\colon\mathbb{P}^{1}\to\mathbb{P}^{1}$ (via the fibrations $\pi^{\prime}$, $\pi$).]
Since $\mathbb{P}(\mathcal{E}_{2,0})$ has a unique multiple fiber of
multiplicity $2$, this forces $\alpha=1$ by (10).
When $p=2$ and $E$ is ordinary, we want to show that the case (VI) occurs. It
suffices to exclude the cases (III) and (IV) by Lemma 4.1. Let us consider the
diagram (11) again. To obtain a contradiction, first assume that the case (IV)
occurs, that is, $\pi$ has the unique multiple fiber $2^{\alpha}D$ over a
point $Q$, and $2^{\alpha}D$ is a wild fiber. Since $\pi^{\prime}$ has no wild
fibers, Lemma 2.18 (i) yields that $\psi$ is branched over $Q$, thus
$\psi^{*}Q=2Q^{\prime}$ for some $Q^{\prime}\in\mathbb{P}^{1}$. But in this
case, it follows from Lemma 2.17 (ii) that $\pi^{\prime}$ has no multiple
fibers over points besides $Q^{\prime}$, which yields a contradiction since
$\pi^{\prime}$ is of type $(2,2)$.
Next, assume that the case (III) occurs, that is, $\pi$ has three multiple
fibers $2D_{i}$ over points $Q_{i}$ ($i=1,2,3$). Since $\psi$ is separable,
its ramification divisor has degree $2$ by the Hurwitz formula. Because $\psi$
is wildly ramified, [Har77, Proposition IV.2.2(c)] yields that there is a
unique branch point in $\mathbb{P}^{1}$. Hence, we may assume that $Q_{1}$ and
$Q_{2}$ are not branch points of $\psi$, and then
$\mathbb{P}(\mathcal{O}_{F}\oplus\mathcal{L})$ has at least $4$ multiple
fibers by Lemma 2.17 (iii). This is absurd. ∎
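The Hurwitz computation used in the last part of the proof can be spelled out (a sketch, for the separable degree $2$ morphism $\psi\colon\mathbb{P}^{1}\to\mathbb{P}^{1}$):

```latex
% Hurwitz formula with ramification divisor R:
2g(\mathbb{P}^{1})-2 \;=\; (\deg\psi)\bigl(2g(\mathbb{P}^{1})-2\bigr) + \deg R,
\qquad\text{i.e.}\qquad
-2 \;=\; 2\cdot(-2) + \deg R,
```

so $\deg R=2$; in characteristic $2$ the ramification of $\psi$ is wild, so each ramification point contributes at least $2$ to $R$, which is why the branch point is unique.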
###### Remark 4.4.
It seems worthwhile to summarize the connection between known examples in the
literature and the result in Theorem 1.1.
1. (i)
When $p>0$ and a vector bundle $\mathcal{E}$ is of the form
$\mathcal{O}_{E}\oplus\mathcal{L}$ with $\operatorname{ord}\mathcal{L}=p$ on
an ordinary elliptic curve $E$, then the elliptic surface in [KU85, Example
4.9] is isomorphic to $\mathbb{P}(\mathcal{E})$.
2. (ii)
When $p>0,e=0$ and $\mathcal{E}$ is an indecomposable vector bundle on an
ordinary (resp. supersingular) elliptic curve $E$, then the elliptic surface
in [KU85, Example 4.7] (resp. [KU85, Example 4.8]) is isomorphic to
$\mathbb{P}(\mathcal{E})$. Here we use the fact that an elliptic curve which
is isogenous to an ordinary (resp. supersingular) elliptic curve is again
ordinary (resp. supersingular).
3. (iii)
Let $E$ be an elliptic curve, and consider the morphism
$a\colon E\times E\to E\qquad(x_{1},x_{2})\mapsto x_{1}+x_{2}$
defined by the addition, and the morphism
$i\colon E\times E\to E\qquad(x_{1},x_{2})\mapsto x_{1}-x_{2}$
defined by the subtraction. Then we obtain the following commutative diagram:
Here we denote the symmetric product $\operatorname{Sym}^{2}(E)$ by $S$, and
define $q^{\prime}$ to be the quotient morphism by the involution
$(x_{1},x_{2})\mapsto(x_{2},x_{1})$, $q$ to be the quotient morphism by the
action $x\mapsto-x$.
Then we know that $q$ is ramified at the origin $O$ of $E$ and at the points of
order $2$ in $E$. We can also see that $\pi$ is an elliptic fibration, and $f$
is a $\mathbb{P}^{1}$-bundle. Furthermore the equality
$\bigl\{p\in\mathbb{P}^{1}\mid\pi^{*}(p)\mbox{ is a multiple fiber}\bigr\}=\bigl\{q(p^{\prime})\in\mathbb{P}^{1}\mid\operatorname{ord}p^{\prime}=2\mbox{ in }E\bigr\}$
holds. We also see that every multiple fiber has multiplicity $2$.
Suppose first that $p\neq 2$. Then there are exactly $3$ points of order $2$
in $E$, hence $S$ fits into the case in Theorem 1.1 (ii-1).
Secondly suppose that $p=2$ and $E$ is an ordinary elliptic curve. Then since
there is a unique point of order $2$ in $E$, $S$ fits into the case in Theorem
1.1 (i-5). (We can actually check that the section $C_{0}$ of $f$ is the
reduction of the multiple fiber of $\pi$.)
Finally suppose that $p=2$ and $E$ is a supersingular elliptic curve. Then
since there are no points of order $2$ in $E$, $S$ fits into the case in
Theorem 1.1 (i-1).
In [Ati57, page 451], it is mentioned that $\operatorname{Sym}^{n}(E)$ is
given as a projective bundle $\mathbb{P}(\mathcal{E})$ on $E$, where
$\mathcal{E}$ is a vector bundle in $\mathcal{M}_{E}(n,n-1)$. This statement
seems incompatible with the above results in the case $p=2$.
## 5 Proof of Theorem 1.2
In this section we prove Theorem 1.2. For this purpose, we illustrate the
diagram (7) in Lemma 2.14 when $S$ has an elliptic fibration with multiple
fibers, namely in the cases (i-2), (i-5), (ii-1), (ii-2) and (ii-3) in Theorem
1.1. We also study the ramification of $\psi$, if $\psi$ is separable.
### 5.1 Case (i-2): $\mathbb{P}(\mathcal{O}_{E}\oplus\mathcal{L})$, $p\geq
0$, $m=\operatorname{ord}\mathcal{L}<\infty$
Take the quotient morphism
$E\cong\mathop{\mathrm{Pic}}\nolimits^{0}E\to
F:=\mathop{\mathrm{Pic}}\nolimits^{0}E/\left<\mathcal{L}\right>$
and define $\varphi$ to be its dual. Then we obtain the following diagram:
[Diagram: $q\colon F\times\mathbb{P}^{1}\to\mathbb{P}(\mathcal{O}_{E}\oplus\mathcal{L})$, covering $\varphi\colon F\to E$ (via the bundle maps $f^{\prime}$, $f$) and $\psi\colon\mathbb{P}^{1}\to\mathbb{P}^{1}$ (via the fibrations $\pi^{\prime}$, $\pi$).]
Theorem 1.1 says that $\pi$ is of type $(m,m)$. Suppose that $\pi$ has
multiple fibers over points $Q_{1},Q_{2}$. When $m$ is not divisible by $p$,
then Lemma 2.17 (ii) implies that $\psi$ is branched over $Q_{1},Q_{2}$. When
$E$ is ordinary and $m=p^{n}$ ($n>0$), then all of $\varphi$, $q$ and $\psi$
are purely inseparable of degree $p^{n}$.
### 5.2 Case (i-5): $\mathbb{P}(\mathcal{E}_{2,0})$, $p>0$
Take $\varphi\colon F\to E$ such that $\varphi=\widehat{\operatorname{Fr}}$.
Then we have the following diagram.
[Diagram: $q\colon F\times\mathbb{P}^{1}\to\mathbb{P}(\mathcal{E}_{2,0})$, covering $\varphi\colon F\to E$ (via the bundle maps $f^{\prime}$, $f$) and $\psi\colon\mathbb{P}^{1}\to\mathbb{P}^{1}$ (via the fibrations $\pi^{\prime}$, $\pi$).]
We see in Theorem 1.1 that $\pi$ has a unique (wild) multiple fiber $pD$
over a point $Q_{1}\in\mathbb{P}^{1}$. If $E$ is ordinary, $\varphi$, $q$ and
$\psi$ are separable of degree $p$, and we can see from Lemma 2.17 (ii) that
$\psi$ is wildly ramified and $Q_{1}$ is the unique branch point of $\psi$. If
$E$ is supersingular, all of $\varphi$, $q$ and $\psi$ are purely inseparable
of degree $p$.
### 5.3 Cases (ii-1), (ii-3): $\mathbb{P}(\mathcal{E}_{Q})$, except $p=2$ and
$F$ is supersingular
In this case, there is a separable isogeny $\varphi\colon F\to E$ of degree
$2$. Then there is an order $2$ element
$\mathcal{L}\in\mathop{\mathrm{Pic}}\nolimits^{0}F$ and the following diagram:
[Diagram: $q\colon\mathbb{P}(\mathcal{O}_{F}\oplus\mathcal{L})\to\mathbb{P}(\mathcal{E}_{Q})$, covering $\varphi\colon F\to E$ (via the bundle maps $f^{\prime}$, $f$) and $\psi\colon\mathbb{P}^{1}\to\mathbb{P}^{1}$ (via the fibrations $\pi^{\prime}$, $\pi$).]
When $E$ is ordinary and $p=2$, $\pi$ has one wild fiber $2D_{1}$ over a point
$Q_{1}$ and one tame multiple fiber $2D_{2}$ over a point $Q_{2}$. Since
$\psi$ is wildly ramified at a single point and $\pi^{\prime}$ is of type
$(2,2)$, Lemma 2.18 (i) (or (ii)) implies that $Q_{1}$ is the unique branch
point of $\psi$.
When $p\neq 2$, $\pi$ has $3$ multiple fibers over points
$Q_{1},Q_{2},Q_{3}\in\mathbb{P}^{1}$. It follows from Lemma 2.18 (i) that
$\psi$ is branched at two points of $Q_{i}$’s.
### 5.4 Cases (ii-2), (ii-3): $\mathbb{P}(\mathcal{E}_{Q})$, $p=2$
In this case, we have the following:
[Diagram: $q\colon\mathbb{P}(\mathcal{E}_{2,0})\to\mathbb{P}(\mathcal{E}_{Q})$, covering $\operatorname{Fr}\colon E_{2}\to E$ (via the bundle maps $f^{\prime}$, $f$) and $\operatorname{Fr}\colon\mathbb{P}^{1}\to\mathbb{P}^{1}$ (via the fibrations $\pi^{\prime}$, $\pi$).]
Recall that $\pi^{\prime}$ has a unique wild fiber over a point
$Q^{\prime}\in\mathbb{P}^{1}$.
When $E$ is supersingular, $\pi$ also has a unique wild fiber, over the point
$\operatorname{Fr}(Q^{\prime})$. When $E$ is ordinary, $\pi$ has one wild
fiber $2D_{1}$ over a point $Q_{1}$ and one tame multiple fiber $2D_{2}$ over
a point $Q_{2}$. By Lemma 2.18 (iii), we can see
$\operatorname{Fr}(Q^{\prime})=Q_{1}$.
### 5.5 Proof of Theorem 1.2
_Proof of Theorem 1.2._ Define $q\colon F\times\mathbb{P}^{1}\to S$ to be a
suitable composition of finite étale covers and purely inseparable morphisms
of degree $p$, as constructed in the previous subsections. Then we obtain
Theorem 1.2. ∎
###### Remark 5.1.
A wild fiber $mD$ is said to be _of strange type_ if $a=m-1$. Katsura and Ueno
showed how to reduce a wild multiple fiber to a tame multiple fiber by a
finite cover, provided that no wild fibers of strange type appear in the
reduction procedure (see [KU85, §6 and p. 330]).
Recall that if $E$ is supersingular, the elliptic fibration
$\pi\colon\mathbb{P}_{E}(\mathcal{E}_{Q})\to\mathbb{P}^{1}$ has a multiple
fiber satisfying $(a/m)=(1/2^{*}),$ namely $\pi$ has a wild fiber of strange
type. In §5.4, we obtain a reduction of a wild multiple fiber of strange type
to a tame multiple fiber.
## References
* [Ati57] M. F. Atiyah, _Vector bundles over an elliptic curve_ , Proc. London Math. Soc. (3) 7 (1957), 414–452. MR 131423
* [BM76] E. Bombieri and D. Mumford, _Enriques’ classification of surfaces in char. $p$. III_, Invent. Math. 35 (1976), 197–232. MR 491720
* [Eke87] Torsten Ekedahl, _Foliations and inseparable morphisms_ , Algebraic geometry, Bowdoin, 1985 (Brunswick, Maine, 1985), Proc. Sympos. Pure Math., vol. 46, Amer. Math. Soc., Providence, RI, 1987, pp. 139–149. MR 927978
* [Har77] Robin Hartshorne, _Algebraic geometry_ , Springer-Verlag, New York-Heidelberg, 1977, Graduate Texts in Mathematics, No. 52. MR 0463157
* [HL88] Brian Harbourne and William E. Lang, _Multiple fibers on rational elliptic surfaces_ , Trans. Amer. Math. Soc. 307 (1988), no. 1, 205–223. MR 936813
* [Kaw00] Mitsuru Kawazoe, _Multiple fibers on elliptic surfaces in positive characteristic_ , J. Math. Kyoto Univ. 40 (2000), no. 1, 185–201. MR 1753506
* [KU85] Toshiyuki Katsura and Kenji Ueno, _On elliptic surfaces in characteristic $p$_, Math. Ann. 272 (1985), no. 3, 291–330. MR 799664
* [Mac40] Saunders MacLane, _Modular fields_ , Amer. Math. Monthly 47 (1940), 259–274. MR 1969
* [Mar71] Masaki Maruyama, _On automorphism groups of ruled surfaces_ , J. Math. Kyoto Univ. 11 (1971), 89–112. MR 280493
* [Oda71] Tadao Oda, _Vector bundles on an elliptic curve_ , Nagoya Math. J. 43 (1971), 41–72. MR 318151
* [Suw69] Tatsuo Suwa, _On ruled surfaces of genus $1$_, J. Math. Soc. Japan 21 (1969), 291–311. MR 242198
* [Tog] Takato Togashi, _Daen fibration wo motsu seihyousuu no sensiki kyokumen ni tsuite (in Japanese)_ , Master’s Thesis, Tokyo Metropolitan University, (2011).
* [Ueh17] Hokuto Uehara, _Fourier-Mukai partners of elliptic ruled surfaces_ , Proc. Amer. Math. Soc. 145 (2017), no. 8, 3221–3232. MR 3652778
* [UW] Hokuto Uehara and Tomonobu Watanabe, _Fourier-Mukai partners of elliptic ruled surfaces over arbitrary characteristic fields_ , in preparation.
Takato Togashi
TOHO GIRLS’ JUNIOR AND SENIOR HIGH SCHOOL, 1-41-1 Wakaba-cho, Chofu, Tokyo,
182-8510, Japan
e-mail address<EMAIL_ADDRESS>
Hokuto Uehara
Department of Mathematical Sciences, Graduate School of Science, Tokyo
Metropolitan University, 1-1 Minamiohsawa, Hachioji, Tokyo, 192-0397, Japan
e-mail address<EMAIL_ADDRESS>
# Multilingual Communication System with Deaf Individuals Utilizing Natural and Visual Languages
Tuan-Luc Huynh†, Khoi-Nguyen Nguyen-Ngoc†, Chi-Bien Chu†, Minh-Triet Tran, Trung-Nghia Le‡
Faculty of Information Technology, University of Science, VNU-HCMC, Vietnam
Vietnam National University, Ho Chi Minh City, Vietnam
†These authors contributed equally. ‡Corresponding author.
###### Abstract
According to the World Federation of the Deaf, more than two hundred sign
languages exist. Therefore, it is challenging to understand deaf individuals,
even for proficient sign language users, resulting in a barrier between the
deaf community and the rest of society. To bridge this language barrier, we
propose a novel multilingual communication system, namely MUGCAT, to improve
the communication efficiency of sign language users. By converting recognized
hand gestures into expressive pictures, which are universally understandable
and language independent, our MUGCAT system significantly helps deaf people
convey their thoughts. To overcome the limitation that sign language can
rarely be translated into complete sentences for ordinary people, we propose
to reconstruct meaningful sentences from the incomplete translation of sign
language. We also measure the semantic similarity between the generated
sentences and the fragments of recognized hand gestures to preserve the
original meaning. Experimental results show that the proposed system can work
in real time and synthesize stunning illustrations and meaningful sentences
from a few hand gestures of sign language. This demonstrates that MUGCAT has
promising potential in assisting deaf communication.
###### Index Terms:
Sign Language Recognition, Text-to-Image Synthesis, Image Captioning
## I Introduction
Communication with deaf people is mainly based on sign language, a combination
of hand gestures, facial expressions, and postures that conveys semantic
information. However, these visual communication systems are difficult to
learn and remember, leading to barriers between the deaf community and the
rest of society; this problem remains largely unsolved.
Although technologies have been developed to understand the behaviors of deaf
people, such as sign language translation via cameras and sensory gloves, they
still have several issues. Sign language translation via camera systems [1, 2]
needs a fixed camera and a simple background to recognize sign gestures
accurately. Besides that, translating sign languages into human-understandable
languages often leads to unnatural results, which causes difficulties in tasks
that require fluent wording, such as explaining new concepts or telling
stories. The advent of sensory devices like smart gloves [3, 4] is a big step
forward in this field. However, they still do not solve the problem of
unnatural translations. In addition, more modern sensors come at high prices,
which makes them difficult for the deaf community to access.
Combined with natural language, which can be expressed as text or voice,
visual language can reshape communication between ordinary people and deaf
people. Indeed, visual cues (_e.g_., images, videos, 3D models) are the best
aid for expressing new concepts intuitively. Visual cues play an important
role in deaf communication, especially in literacy education for deaf
children. Visual communication efficiently bridges ordinary people with the
deaf community, regardless of nationality or language.
To assist communication with deaf individuals, we propose a MUltilinGual
CommunicATion system (MUGCAT). Inspired by the adage ”A picture is worth a
thousand words,” our system supports diverse cues, such as sign languages,
natural languages, and visual languages, to help deaf people express their
thoughts more clearly. The proposed MUGCAT system consists of two main phases:
converting sign language into an intermediate language that ordinary people
can understand, and enriching the translated information by reconstructing a
meaningful sentence aided by illustrations. We first recognize and translate
the hand gestures of deaf individuals (_i.e_., sign language) into human-
understandable language (_i.e_., textual words or phrases). Illustrations are
then synthesized via a text-to-image model for visual communication. By
transforming sign languages into pictures - universal mediums of
expressiveness - our system significantly helps deaf individuals convey their
thoughts. Due to the limitations of sign languages, it is challenging to
translate hand gestures into complete, meaningful sentences for ordinary
people. Therefore, we propose using an image captioning method to complete the
incomplete translated text. Furthermore, our MUGCAT system measures the
semantic similarity of generated image captions with the intermediate
translated text to preserve the original meaning of the sign language. In this
way, our MUGCAT system helps express the intentions of deaf communicators more
intuitively and clearly.
Experimental results on the WLASL dataset [5] show the potential of our MUGCAT
system for assisting natural communication with deaf individuals. The proposed
system can recognize sign gestures with an accuracy of $46.8\%$ in real time.
In addition, meaningful sentences are generated together with corresponding
expressive illustrations. We expect our MUGCAT system to benefit both the deaf
community and the sign language research community.
Our main contributions are summarized as follows:
* •
We propose a novel system, namely MUGCAT, to support multilingual
communication for deaf individuals. Our simple yet efficient system utilizes
both natural and visual languages to enhance the interpretation of deaf
communicators.
* •
Our MUGCAT system accurately recognizes and translates sign languages into
human-understandable text. The proposed system can also transform the
translated text into illustrative and expressive images in real time.
* •
The synthesized images might be misleading; hence, we propose using an image
captioning model to select the image that best fits the translated text,
further improving the efficiency of MUGCAT.
## II Related Work
### II-A Deaf Communication
Communication with deaf individuals mainly occurs through speech-based (_e.g_.,
lip reading) and visual (_e.g_., sign language) modes. However, sign language
is more popular than lip reading because understanding speech by visually
interpreting the movements of the lips, face, and tongue is extremely
challenging, even for deaf people.
With the development of modern technology, many devices have been invented to
translate sign language into text, speech, etc. Special sensors were made to
detect hand movements. Translation glove products (EnableTalk [3] and
SignAloud [4]) rely on sensors attached to the fingers that record hand
posture and movements and then convert the sensor signals into speech through
an independent processing unit. However, these products are difficult to
access widely due to their high cost and the difficulty of using them in daily
life.
### II-B Sign Language Recognition
Recently, many computer vision algorithms have been proposed to recognize sign
language from video only, thus avoiding the dependence on costly sensor
devices. Given a video, besides RGB frames, we can also obtain other
modalities of input such as image depth [6, 7] and optical flow [8, 9]
(pixel-wise motions between consecutive video frames). For RGB-only input, 3D
ConvNets have been widely applied [10, 11] to extract spatio-temporal information
from videos. Lin _et al_. [12] inserted a Temporal Shift Module into 2D
ConvNets to get an accuracy commensurate with 3D ConvNets while keeping the
complexity of 2D ConvNets. Komkov _et al_. [13] combined the learned
knowledge from multiple single-modality models with mutual learning technique
[14] to obtain the best model on each input modality.
### II-C Text-to-Image
In the last couple of years, text-to-image models [15] have attracted big tech
companies’ attention and thus have received rapid and massive improvements.
Classifier-free guided diffusion models have recently been shown to be highly
effective at high-resolution image generation, and they have been widely used
in large-scale diffusion frameworks, including GLIDE [16], DALL-E 2 [17], and
Imagen [18]. Nevertheless, the latest diffusion-based text-to-image model,
namely Stable Diffusion [19], has had the most significant impact since its
release. Stable Diffusion offers excellent image quality while significantly
lowering the computation cost. What makes Stable Diffusion exceptionally
attractive compared to its competitors is that it is open-source; in contrast,
Google and OpenAI do not intend to open-source Imagen [20] and DALL-E 2 [17],
respectively.
While the artificial intelligence community has predominantly used text-to-image
models to create beautiful artworks, little attention has been paid to applying
these models to real-world problems. In this work, we customized Stable
Diffusion to generate meaningful and expressive images from the translated
sign language text in real time, to help visualize conversations with or
between sign language users.
### II-D Image Captioning
Research on image captioning in recent years generally uses the encoder-
decoder architecture. The encoder extracts the visual information from images
for the decoder, which generates an appropriate description. Early on, the
encoder was a CNN backbone [21, 22]. Later, it was replaced by an object
detector such as Faster R-CNN to extract object-level features [23]. This
proved more efficient and improved performance because object information and
object relationships are very useful in describing an image. However, due to
the high computational cost of the object detection model, it is hard to apply
to problems that require high speed, such as communication. Besides that,
Transformers used in the encoder to extract features or in the decoder for
caption generation [24, 25] have also demonstrated surprising efficiency
improvements.
In this study, with the aim of balancing accuracy and efficiency, we used a
recent state-of-the-art method [26] which proposed a Transformer-only neural
architecture utilizing dual visual features to improve performance and
increase speed.
## III Proposed System
### III-A Overview
Figure 1: Pipeline of our multilingual communication (MUGCAT) system
Figure 1 illustrates the pipeline of our proposed multilingual communication
(MUGCAT) system, which consists of two main components: sign language
recognition and translation (SLRT), and natural and visual data synthesis.
First, the words obtained from SLRT are illustrated by the text-to-image
module, resulting in several images. Then, image captioning is carried out to
obtain complete descriptions of all synthesized images. Finally, for each
image and its description, we compare its semantic similarity with the
translated keywords from SLRT to choose the most suitable image and
description. Unlike conventional SLRT systems, which need to correctly
recognize the whole sentence to express its meaning, our system generates a
suggested image and a complete description to represent the keywords made in
sign language, overcoming the disadvantage of misrecognized or missed signs.
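The two-phase flow just described can be sketched end to end with stub components. Everything below is an illustrative placeholder of ours: the real system plugs in I3D/TSM/MML for recognition, Stable Diffusion for synthesis, GRIT for captioning, and Sentence Transformers for matching at the corresponding points.

```python
# Hypothetical sketch of the MUGCAT pipeline; every function body is a stub.

def recognize_signs(video_clips):
    # SLRT stub: map each clip to a keyword (real system: I3D/TSM/MML).
    return [clip["label"] for clip in video_clips]

def synthesize_images(keywords, k=4):
    # Text-to-image stub: returns k placeholder "images" for the prompt.
    prompt = " ".join(keywords)
    return [f"image_{i}<{prompt}>" for i in range(k)]

def caption(image):
    # Image-captioning stub (real system: GRIT).
    return f"a picture of {image.split('<')[1].rstrip('>')}"

def jaccard(a, b):
    # Toy word-overlap similarity standing in for sentence-embedding cosine.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def mugcat(video_clips):
    keywords = recognize_signs(video_clips)
    images = synthesize_images(keywords)
    captions = [caption(im) for im in images]
    query = " ".join(keywords)
    best = max(range(len(images)), key=lambda i: jaccard(captions[i], query))
    return images[best], captions[best]

clips = [{"label": "flower"}, {"label": "garden"}]
img, cap = mugcat(clips)  # most query-consistent image and its caption
```

The word-overlap similarity is only there to make the sketch self-contained; Section III-C3 uses learned sentence embeddings instead.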
### III-B Sign Language Recognition and Translation
Sign language recognition aims to predict the sequence of signs performed in a
video, while sign language translation further translates the signs into
spoken/written languages. To synthesize images that fully convey the meaning
of a signer, we only need to identify some keywords from the hand gesture
sequence. Therefore, the recognition task is more suitable for our MUGCAT
system because it is simpler while keeping the system responsive. Due to the
lack of suitable datasets in this domain, we treat the problem as an action
recognition task, where the objective is to identify single words from short
clips. This simplifies the problem and meets our requirement of indicating
visual keywords.
In this work, we employed and compared several action recognition methods [10,
12, 13] on WLASL dataset [5], the largest video dataset of word-level American
sign language (ASL). The main ideas of employed methods are summarized as
follows:
Two-Stream Inflated 3D ConvNets (I3D) [10] combines two 3D ConvNets (one for
RGB image stream, one for optical-flow stream) to both take advantage of pre-
trained ImageNet weights and force the model to learn motion features
directly. Two 3D ConvNets are trained separately, and their predictions are
averaged at test time.
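The test-time fusion of the two streams can be sketched as follows (illustrative numpy with made-up logits over a toy 3-class vocabulary; the real model averages over the 2,000 WLASL classes):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical logits from the RGB and optical-flow 3D ConvNets for one clip.
rgb_logits = np.array([3.0, 0.5, 0.1])
flow_logits = np.array([0.2, 1.0, 0.1])

# Two-stream fusion: average the per-stream class probabilities at test time.
avg_probs = (softmax(rgb_logits) + softmax(flow_logits)) / 2
predicted_class = int(np.argmax(avg_probs))  # here the RGB stream dominates
```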
Temporal Shift Module (TSM) [12] inserts a shift module into 2D ConvNets to
capture temporal relationships between video frames. Feature maps are shifted
along the temporal dimension to maintain the 2D ConvNet’s complexity while
approaching the performance of 3D ConvNets.
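The shift operation itself is simple; below is a toy numpy version of it, our simplification of the module described in [12]: one channel fold moves one step forward in time, one moves backward, and the rest stay put, so subsequent 2D convolutions mix information from adjacent frames.

```python
import numpy as np

def temporal_shift(x, shift_div=4):
    """Shift a fraction of channels along the time axis, zero-padding the ends.

    x has shape (T, C) for one spatial location: the first C//shift_div
    channels move forward in time, the next C//shift_div move backward,
    and the remaining channels are untouched.
    """
    t, c = x.shape
    fold = c // shift_div
    out = np.zeros_like(x)
    out[1:, :fold] = x[:-1, :fold]                   # shift forward in time
    out[:-1, fold:2 * fold] = x[1:, fold:2 * fold]   # shift backward in time
    out[:, 2 * fold:] = x[:, 2 * fold:]              # untouched channels
    return out

x = np.arange(12, dtype=float).reshape(3, 4)  # T=3 frames, C=4 channels
y = temporal_shift(x)
```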
Mutual Modality Learning (MML) [13] ensembles knowledge from several
single-modality models to obtain the best model for each input modality. The
algorithm can be summarized in three steps: train two separate
networks $A_{1}$, $A_{2}$ on the RGB modality; respectively initialize two
networks $B_{1}$, $B_{2}$ with the weights of $A_{1}$, $A_{2}$, then train
$B_{1}$, $B_{2}$ together using mutual learning technique on RGB modality;
from $B_{1}$’s weights, initialize $N$ models $C_{1}$, $C_{2}$, …, $C_{N}$
corresponding to $N$ different modalities (RGB, optical flow, and depth), then
train these $N$ models together using mutual learning.
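The coupling in steps 2 and 3 uses the deep mutual learning objective [14]: each network minimizes its own cross-entropy plus a KL term pulling it toward its peer's predicted distribution. A toy numpy sketch of one network's loss (our illustration, with hypothetical prediction vectors):

```python
import numpy as np

def kl(p, q):
    # KL divergence D_KL(p || q) between two discrete distributions.
    return float(np.sum(p * np.log(p / q)))

def mutual_loss(probs_net, probs_peer, label):
    # Deep mutual learning [14]: cross-entropy on the true label plus a
    # KL term toward the peer network's predictions.
    cross_entropy = -float(np.log(probs_net[label]))
    return cross_entropy + kl(probs_peer, probs_net)

p1 = np.array([0.7, 0.2, 0.1])   # hypothetical predictions of network B1
p2 = np.array([0.6, 0.3, 0.1])   # hypothetical predictions of network B2
loss_b1 = mutual_loss(p1, p2, label=0)
```

When the two networks agree exactly, the KL term vanishes and the loss reduces to plain cross-entropy.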
### III-C Natural and Visual Data Synthesis
#### III-C1 Text-To-Image Synthesis
Stable Diffusion [19], a state-of-the-art diffusion-based text-to-image model,
is the core component of our system, which strives to actualize the adage ”A
picture is worth a thousand words.” This method can offer excellent image
quality while significantly lowering computation costs.
However, the sequential sampling process of diffusion-based models is time-
consuming. As a result, the text-to-image module is also the bottleneck of our
system. To overcome this limitation, we customized the hyperparameters of
Stable Diffusion to retain high-quality images while significantly reducing
sampling time.
Another issue that affects our system performance is the relevancy of
synthesized images. Prompt engineering (_i.e_., prompt modifiers) is necessary
for guiding the text-to-image models to generate superior-quality art.
However, in our proposed system, the prompt text for Stable Diffusion is
limited keywords from the SLRT module. Therefore, it is unavoidable that the
prompt’s quality is limited, which leads to potential drops in generated image
relevancy. We addressed this issue by introducing the image captioning model
in the system’s next stage, which serves as a filter to select the most
relevant image.
#### III-C2 Image Captioning
Image captioning methods are classified into two main approaches: grid
features and region features. Methods based on grid features directly extract
object features from high-layer feature maps of the whole image. Thus,
generated captions can contain information about the whole image. Meanwhile,
methods based on region features [23] rely on detecting objects in the image
and then extracting local features of image regions to infer results. However,
detected objects represent neither the overall context of the image nor the
relationships between objects, both of which affect the generated captions.
In this work, we used Grid- and Region-based Image captioning Transformer
(GRIT) [26], a state-of-the-art image captioning method, which uses both of
the aforementioned feature types to enhance both contextual and object-level
information. Grid features are extracted using a standard self-attention
Transformer, and region features are extracted by the Deformable DETR detector
[27]. The extracted features are then fed to a Transformer-based caption
generator to produce the final caption. In this step, we employed Parallel
Cross-Attention [26] to relate the dual visual features to the caption words.
#### III-C3 Matching and Selection
SLRT generates incomplete keywords; thus, the images synthesized in the
previous step are inevitably not completely consistent with each other or with
the communicator’s expression. Therefore, we propose an extra step of matching
and selecting the caption whose meaning is closest to the input keywords. From
there, our system can recover the complete sentence that the communicator
wanted to express from just the discrete words.
Concretely, given $K$ results $\\{I_{1},I_{2},\dots,I_{K}\\}$ of the previous
step, the goal of the matching and selection is to find the most suitable pair
of image $\widehat{I}$ and its caption $q_{\widehat{I}}$. For each caption
sentence, we measure its semantic similarity with keywords obtained from SLRT.
We used Sentence Transformers [28], denoted by $\psi(\cdot)$, to compute
sentence embeddings and evaluate them with cosine similarity. Mathematically,
it performs a maximization expressed as:
$\\{\widehat{I},q_{\widehat{I}}\\}=\operatorname*{argmax}_{i\in\\{1,2,\dots,K\\}}D(\psi(q_{I_{i}}),\psi(Q)),$
(1)
where $Q$ is a sentence that includes the keywords received from SLRT, and
$D(\cdot,\cdot)$ is the cosine similarity function.
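The selection in Eq. (1) can be sketched as follows (illustrative; a toy bag-of-words embedding stands in for the Sentence Transformers model $\psi$, which produces learned dense embeddings in the real system):

```python
import numpy as np

def embed(sentence, vocab):
    # Toy bag-of-words embedding standing in for Sentence Transformers.
    words = sentence.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def select_best(captions, query):
    # Eq. (1): pick the caption whose embedding is closest to the keywords Q.
    vocab = sorted({w for s in captions + [query] for w in s.lower().split()})
    scores = [cosine(embed(c, vocab), embed(query, vocab)) for c in captions]
    return int(np.argmax(scores))

captions = ["a garden full of flowers on a sunny day",
            "a dog running on a beach"]
best = select_best(captions, "flowers garden sunny")  # index of first caption
```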
## IV Experiments
In this section, we elaborate on the extensive experiments conducted on our
proposed system. All experiments were tested on a machine with a single Nvidia
V100 GPU.
### IV-A Sign Language Recognition
TABLE I: Accuracy and efficiency on the WLASL [5] test set. All the compared
methods utilize a backbone pre-trained on ImageNet and were then fine-tuned on
the WLASL dataset. The FPS was measured on an Nvidia V100 GPU.
Method | Pretraining Dataset | Accuracy (%) | FPS (infer only) | FPS (infer $\&$ load data)
---|---|---|---|---
I3D [10] | BSL1K [29] | 46.8 | 1429 | 95
| Kinetics [10] | 32.5 | |
TSM [12] | ✗ | 20.8 | 357 | 60
| Kinetics [10] | 13.9 | |
MML [13] | ✗ | 20.8 | 323 | 104
As shown in Table I, we compared several action classification methods [10,
12, 13] on the WLASL [5] test set. In detail, for TSM [12], we trained a model
from scratch and fine-tuned another model, which was pre-trained on the
Kinetics [10] dataset. For MML [13], we trained a model from scratch with only
RGB input as WLASL [5] dataset does not provide optical flow or depth
annotations. All the networks above use an ImageNet-pre-trained ResNet50 as
the backbone. Lastly, we reused two public I3D [10] models with different
pre-training datasets [5, 29] for our experiments.
We first evaluated top-1 accuracy on the WLASL [5] test set. The main
challenge of this dataset is that the number of words to classify is up to
2,000, while the number of videos in the training set is just over 14,000.
Therefore, the compared methods achieve only moderate accuracy. I3D achieved a
top-1 accuracy of $46.8\%$, which is also the state-of-the-art top-1 accuracy
on the WLASL test set.
We then evaluated the efficiency of the methods by measuring the execution
time and the number of processed frames on the whole test set to obtain the
average FPS. All methods can run in real time. In particular, MML [13]
achieves 104 FPS even counting all initialization steps, such as loading the
model, preparing the dataset, etc. We also computed FPS in the practical
scenario, where SLRT methods directly process the video stream and the
initialization steps are ignored. All methods achieve more than 300 FPS, and
I3D [10] achieves a surprising speed of 1429 FPS. These results show that the
models have the potential to be deployed on mobile devices and embedded
systems while still achieving real-time speed.
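Concretely, the reported rates are just frame counts divided by elapsed wall-clock time, with and without initialization. The frame count and timings below are hypothetical, chosen only so that the resulting numbers land in the same range as the I3D row of Table I:

```python
def average_fps(total_frames, seconds_infer_only, seconds_init):
    # Inference-only FPS vs. end-to-end FPS including one-time setup
    # (model loading, data preparation, ...).
    infer_only = total_frames / seconds_infer_only
    end_to_end = total_frames / (seconds_infer_only + seconds_init)
    return infer_only, end_to_end

# Hypothetical numbers: 30,000 test frames, 21 s of pure inference,
# 295 s of initialization overhead.
infer, e2e = average_fps(30000, 21.0, 295.0)
```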
### IV-B Stable Diffusion Hyperparameters Adjustment
The default settings of Stable Diffusion [19] hardly achieve near real-time
performance. This section discusses our extensive experiments in various
settings to discover the optimal trade-off point between execution time and
image quality. Specifically, we focus on the number of sampling steps, the
desired resolution, and the number of samples. We used the public checkpoint
sd-v1-4.ckpt in our experiments.
The number of sampling steps is the most crucial hyperparameter that directly
controls the quality of generated images and positively correlates with the
execution duration. The default hyperparameter of 50 sampling steps using PLMS
sampler [30] generates high-quality images. Since our system should ideally
run in real time, we found that 20 sampling steps speed generation up 2.4
times with only subtle drops in contextual information, as demonstrated in
Fig. 2. Indeed, Table II shows that setting the number of sampling steps to 20
is optimal, with an FID of 33.5, approximately equivalent to that of higher
step counts, while processing a batch of 128 images in only 15 seconds. Going
below 20 sampling steps results in a surge of FID scores.
TABLE II: Performance of Stable Diffusion on a single Nvidia V100 GPU. The prompt is ”A beautiful flower garden on a sunny day with a valley background.” The resolution is $512\times 512$. The FID score [31] was calculated using 50 sampling steps as the real distribution, with 128 images per distribution. The optimal hyperparameter is 20 sampling steps, which keeps the image quality while speeding up generation 2.4 times.
Sampling steps | FID Score $\downarrow$ | Seconds per Batch $\downarrow$
---|---|---
50 | 0 | 35.50
45 | 33.43 | 32.05
40 | 30.44 | 28.66
35 | 31.70 | 25.24
30 | 31.55 | 21.79
25 | 33.19 | 18.39
20 | 33.51 | 14.97
15 | 40.33 | 12.25
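The FID used in Table II compares Gaussians fitted to Inception features of two image sets. A minimal numpy sketch of the formula (ours; it takes the matrix square root via eigendecomposition, which is valid when the covariance product is symmetric PSD, e.g. for diagonal covariances; a full implementation would use `scipy.linalg.sqrtm`):

```python
import numpy as np

def sqrtm_psd(a):
    # Square root of a symmetric PSD matrix via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def fid(mu1, sigma1, mu2, sigma2):
    # FID [31] between Gaussians fitted to two feature sets:
    # ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2)).
    diff = mu1 - mu2
    covmean = sqrtm_psd(sigma1 @ sigma2)
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

mu, sigma = np.zeros(3), np.eye(3)
fid_same = fid(mu, sigma, mu, sigma)          # identical distributions -> 0
fid_shifted = fid(mu, sigma, mu + 1.0, sigma) # unit mean shift per dim -> 3
```

Identical feature distributions give FID 0, which is why the 50-step reference row of Table II reads 0 by construction.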
Figure 2: The renowned prompt ”a photograph of an astronaut riding a horse.”
The top and bottom rows use 20 and 50 sampling steps, respectively.
Figure 3: Stable Diffusion’s execution time at different resolutions using a
single Nvidia V100 GPU. The batch size is maximized for each respective
resolution, and the number of sampling steps is 20. The optimal number of
synthesized images is from 8 to 16.
Figure 4: Images generated by Stable Diffusion using the same prompt ”A
beautiful flower garden on a sunny day with a valley background” in decreasing
resolution order (from left to right, top to bottom).
Image resolution is another element that significantly affects Stable
Diffusion’s running time. Following tips from Patil _et al_. [32], we tried
reducing the resolution and came up with seven different resolutions in order
of decreasing execution time, namely $512\times 512$, $512\times 448$,
$448\times 448$, $512\times 384$, $448\times 384$, $512\times 320$, and
$384\times 384$. As illustrated in Fig. 4, the first two images on the top row
have a valley background. As the resolution decreases, the generated images
adhere less and less to the contextual information in the prompt.
Fully utilizing the GPU’s capability is another technique to enhance the
performance of the model. We tried setting the largest batch size for each
respective resolution and recorded the run time accordingly. Figure 3
illustrates the benchmark results for the highest resolution, the lowest
resolution, and the median one. The experimental results show that the optimal
number of synthesized images (_i.e_., $K$ in Eq. 1) is either 8 or 16. The
reason is twofold: first, a small batch size can easily fit into a
conventional GPU; second, since the image captioning module’s execution time
scales linearly with the number of generated images, selecting a small batch
size improves both modules’ performance.
### IV-C Image Captioning Visualization
We employed the GRIT [26] model, which uses an object detector pre-trained on
four datasets (COCO [33], Visual Genome, Open Images [34], and Object365
[35]), and applied Parallel Cross-Attention [26] for image captioning. We
achieve a per-batch inference time of about 0.75 s with a batch size of 16 and
0.87 s with a batch size of 8 on a single Nvidia V100 GPU.
Example results are visualized in Fig. 5. With the developed text refinement
mechanism, our MUGCAT system generates an illustration and a complete caption
with high semantic similarity to the original sentence from just the keywords.
Figure 5: Example of text refinement on a complete sentence (below) and only
on keywords (above) giving similar results.
## V Conclusion
We have proposed a MUltilinGual CommunicATion system (MUGCAT), which
integrates sign language recognition and translation, text-to-image synthesis,
and image captioning. The proposed system harmonizes three different
methodologies to help overcome the difficulty of communicating with deaf
individuals. By leveraging the latest developments in text-to-image synthesis
and image captioning to transform written text into visual images, we strive
to lift the language barrier that has always existed around the sign language
community. Experiments show the potential of our proposed system in practice.
In the future, it would be interesting to adapt our system to a first-person
camera. We want to explore sign language recognition and translation from a
first-person perspective, since this would remove the requirement of standing
in front of a fixed camera.
Acknowledgment. This research is funded by University of Science, VNU-HCM,
under grant number CNTT 2022-15.
## References
* [1] A. Hao, Y. Min, and X. Chen, “Self-mutual distillation learning for continuous sign language recognition,” in _ICCV_ , 2021.
* [2] Y. Min, A. Hao, X. Chai, and X. Chen, “Visual alignment constraint for continuous sign language recognition,” in _ICCV_ , 2021, pp. 11 542–11 551.
* [3] F. Lardinois, “Ukrainian students develop gloves that translate sign language into speech,” https://techcrunch.com/2012/07/09/enable-talk-imagine-cup, 2012.
* [4] “UW undergraduate team wins $10,000 lemelson-mit student prize for gloves that translate sign language,” https://www.washington.edu/news/2016/04/12/uw-undergraduate-team-wins-10000-lemelson-mit-student-prize-for-gloves-that-translate-sign-language, 2016.
* [5] D. Li, C. Rodriguez, X. Yu, and H. Li, “Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison,” in _WACV_ , 2020, pp. 1459–1469.
* [6] A. Gurram, A. F. Tuna, F. Shen, O. Urfalioglu, and A. M. López, “Monocular depth estimation through virtual-world supervision and real-world sfm self-supervision,” _IEEE T-PAMI_ , 2021.
* [7] J. Wang, Y. Zhong, Y. Dai, S. Birchfield, K. Zhang, N. Smolyanskiy, and H. Li, “Deep two-view structure-from-motion revisited,” _CVPR_ , 2021.
* [8] Z. Huang, X. Shi, C. Zhang, Q. Wang, K. C. Cheung, H. Qin, J. Dai, and H. Li, “FlowFormer: A transformer architecture for optical flow,” _ECCV_ , 2022.
* [9] S. Bai, Z. Geng, Y. Savani, and J. Z. Kolter, “Deep equilibrium optical flow estimation,” in _CVPR_ , 2022.
* [10] J. Carreira and A. Zisserman, “Quo vadis, action recognition? a new model and the kinetics dataset,” in _CVPR_ , 2017.
* [11] S. Ji, W. Xu, M. Yang, and K. Yu, “3d convolutional neural networks for human action recognition,” _IEEE T-PAMI_ , vol. 35, no. 1, pp. 221–231, 2013.
* [12] J. Lin, C. Gan, and S. Han, “Tsm: Temporal shift module for efficient video understanding,” in _ICCV_ , 2019.
* [13] S. Komkov, M. Dzabraev, and A. Petiushko, “Mutual modality learning for video action classification,” _arXiv preprint arXiv:2011.02543_ , 2020.
* [14] Y. Zhang, T. Xiang, T. M. Hospedales, and H. Lu, “Deep mutual learning,” in _CVPR_ , 2018, pp. 4320–4328.
* [15] S. S. Baraheem, T.-N. Le, and T. V. Nguyen, “Text-to-image synthesis via aesthetic layout,” in _ACM MM_ , 2020.
* [16] A. Nichol, P. Dhariwal, A. Ramesh, P. Shyam, P. Mishkin, B. McGrew, I. Sutskever, and M. Chen, “Glide: Towards photorealistic image generation and editing with text-guided diffusion models,” _arXiv preprint arXiv:2112.10741_ , 2021.
* [17] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen, “Hierarchical text-conditional image generation with clip latents,” _arXiv preprint arXiv:2204.06125_ , 2022.
* [18] C. Meng, R. Gao, D. P. Kingma, S. Ermon, J. Ho, and T. Salimans, “On distillation of guided diffusion models,” _arXiv preprint arXiv:2210.03142_ , 2022.
* [19] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, “High-resolution image synthesis with latent diffusion models,” in _CVPR_ , 2022.
* [20] Google, “Imagen,” https://imagen.research.google/, 2022.
* [21] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio, “Show, attend and tell: Neural image caption generation with visual attention,” in _ICML_ , 2015.
* [22] S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel, “Self-critical sequence training for image captioning,” in _CVPR_ , 2017.
* [23] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in _CVPR_ , 2018.
* [24] G. Li, L. Zhu, P. Liu, and Y. Yang, “Entangled transformer for image captioning,” in _ICCV_ , 2019.
* [25] Y. Pan, T. Yao, Y. Li, and T. Mei, “X-linear attention networks for image captioning,” in _CVPR_ , 2020.
* [26] V.-Q. Nguyen, M. Suganuma, and T. Okatani, “Grit: Faster and better image captioning transformer using dual visual features,” _ArXiv preprint arXiv:2207.09666_ , 2022.
* [27] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, “Deformable DETR: Deformable transformers for end-to-end object detection,” in _ICLR_ , 2021.
* [28] N. Reimers and I. Gurevych, “Sentence-bert: Sentence embeddings using siamese bert-networks,” _ArXiv preprint arXiv:1908.10084_ , 2019.
* [29] S. Albanie, G. Varol, L. Momeni, T. Afouras, J. S. Chung, N. Fox, and A. Zisserman, “BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues,” in _ECCV_ , 2020.
* [30] L. Liu, Y. Ren, Z. Lin, and Z. Zhao, “Pseudo numerical methods for diffusion models on manifolds,” in _ICLR_ , 2022.
* [31] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, “Gans trained by a two time-scale update rule converge to a local nash equilibrium,” _NeurIPS_ , vol. 30, 2017.
* [32] S. Patil, P. Cuenca, N. Lambert, and P. von Platen, “Stable diffusion with diffusers,” https://huggingface.co/blog/stable_diffusion.
* [33] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in _ECCV_ , 2014, pp. 740–755.
* [34] A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, A. Kolesnikov, T. Duerig, and V. Ferrari, “The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale,” _IJCV_ , 2020.
* [35] S. Shao, Z. Li, T. Zhang, C. Peng, G. Yu, X. Zhang, J. Li, and J. Sun, “Objects365: A large-scale, high-quality dataset for object detection,” in _ICCV_ , 2019.
# Differentially Private Adaptive Optimization with Delayed Preconditioners
Tian Li♠, Manzil Zaheer♡, Ziyu Liu♠, Sashank Reddi♢, Brendan McMahan♢,
Virginia Smith♠
♠Carnegie Mellon University, ♡Google DeepMind, ♢Google Research
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
Privacy noise may negate the benefits of using adaptive optimizers in
differentially private model training. Prior works typically address this
issue by using auxiliary information (e.g., public data) to boost the
effectiveness of adaptive optimization. In this work, we explore techniques to
estimate and efficiently adapt to gradient geometry in private adaptive
optimization without auxiliary data. Motivated by the observation that
adaptive methods can tolerate stale preconditioners, we propose differentially
private adaptive training with delayed preconditioners (DP2), a simple method
that constructs delayed but less noisy preconditioners to better realize the
benefits of adaptivity. Theoretically, we provide convergence guarantees for
our method for both convex and non-convex problems, and analyze trade-offs
between delay and privacy noise reduction. Empirically, we explore DP2 across
several real-world datasets, demonstrating that it can improve convergence
speed by as much as 4$\times$ relative to non-adaptive baselines and match the
performance of state-of-the-art optimization methods that require auxiliary
data.
## 1 Introduction
Adaptive optimizers such as AdaGrad (McMahan & Streeter, 2010, Duchi et al.,
2011) and RMSProp (Hinton et al., 2012) are commonly used to improve
convergence speed in machine learning training. However, in privacy-sensitive
applications, the benefits of adaptivity may degrade as a result of noise
added to the preconditioners to guarantee differential privacy (Li et al.,
2022). Prior works typically address this issue by using non-sensitive
auxiliary data to approximate the underlying structures of private gradients
(Asi et al., 2021, Kairouz et al., 2021a, Li et al., 2022). While this can
boost performance, assuming access to informative public data may be
unrealistic in many privacy-sensitive applications. In this work, we instead
ask: Can we improve privacy/utility trade-offs in private adaptive
optimization without accessing auxiliary data?
Figure 1: Preconditioner values do not change drastically during optimization
(IMDB dataset).
A key insight we have in addressing this question is that for many machine
learning problems, the gradient geometry may not change drastically during
successive steps of optimization (e.g., see Figure 1, which plots successive
distributions of preconditioner values). This presents an opportunity to
estimate the preconditioners used by adaptive optimizers with smaller noise,
by averaging across previous iterates. To this end, we propose DP2, a
differentially private adaptive method that uses historical gradients to
construct delayed preconditioners with reduced noise. Despite the simplicity
of this approach, we find that it can significantly improve performance in
practice—improving convergence speed by as much as 4$\times$ relative to non-
adaptive baselines, all without the need to access auxiliary data. To better
understand these performance gains, we theoretically and empirically analyze
the method to study the effect of using delayed preconditioners, including
trade-offs that emerge between the noise reduction and staleness.
Contributions. We propose DP2 as a method for differentially private adaptive
optimization with delayed preconditioners. Unlike prior work, DP2 does not
rely on auxiliary data to improve privacy/utility trade-offs in private
training. We provide convergence guarantees for DP2 in both convex and non-
convex settings, and analyze the trade-offs between delay and privacy noise.
We conduct extensive experiments to showcase the effectiveness of DP2, which
can significantly improve model utility for a given privacy budget across text
and recommendation benchmarks.
## 2 Background and Related Work
In this section we discuss closely related works and set up some
preliminaries. We start by discussing prior work in differentially private
optimization, considering the classic framework of
$(\varepsilon,\delta)$-differential privacy (DP) (Dwork et al., 2006), defined
as follows.
###### Definition 1 (Differential privacy (Dwork et al., 2006)).
A randomized algorithm $\mathcal{M}$ is $(\varepsilon,\delta)$-differentially
private if for all neighboring datasets $D,D^{\prime}$ differing by one
element, and every possible subset of outputs $O$,
$\displaystyle\Pr\left(\mathcal{M}(D)\in O\right)\leq
e^{\varepsilon}\Pr\left(\mathcal{M}(D^{\prime})\in O\right)+\delta.$
Differentially Private SGD. Informally, DP in machine learning offers
protection by masking the influence of individual examples (example-level DP,
e.g. (Song et al., 2013, Bassily et al., 2014, Abadi et al., 2016)) or all of
the examples from one user (user-level DP, e.g. (McMahan et al., 2018, Kairouz
et al., 2021b)) on the trained model. In this work, we consider example-level
DP using the popular subsampled Gaussian mechanism (Dwork et al., 2014,
Mironov et al., 2019) to perturb gradients to ensure DP. Unless much larger
batch sizes and possibly larger datasets are used, DP mechanisms often lead to
a significant utility drop. Extensive research has thus been devoted to
investigating improved privacy/utility/computation trade-offs for DP-SGD,
including various training techniques (e.g., data augmentation and large-batch
training) (De et al., 2022), leveraging public data (Amid et al., 2022, Zhou
et al., 2021), and releasing gradient statistics via tree aggregation to
reduce the amount of noise (Kairouz et al., 2021b, Denisov et al., 2022, Chan
et al., 2011). These prior works are orthogonal to and could be applied in
conjunction with our proposed method, which focuses specifically on privacy in
the context of adaptive optimization.
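The DP-SGD privatization step described above (per-example clipping followed by Gaussian noise calibrated to the clipping threshold) can be sketched as follows; the function name and constants are our own illustrative choices, not a reference implementation.

```python
import numpy as np

def dp_sgd_step(per_example_grads, C, sigma, rng):
    """One DP-SGD gradient release: clip each per-example gradient to
    L2 norm at most C, sum, add N(0, sigma^2 C^2) noise, and average."""
    b, d = per_example_grads.shape
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma * C, size=d)
    return noisy_sum / b
```

With `sigma = 0` and a large `C` this reduces to the ordinary mini-batch average gradient, which is a convenient sanity check.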
Differentially Private Adaptive Optimization. To reduce privacy cost in
iterative DP algorithms, it is natural to consider applying adaptive
optimizers (e.g., AdaGrad (McMahan & Streeter, 2010, Duchi et al., 2011),
RMSProp (Hinton et al., 2012), AMSGrad (Reddi et al., 2018), and Yogi (Zaheer
et al., 2018)) to speed up convergence. A straightforward approach is to first
privatize mini-batch gradients and then plug in noisy gradients to any
adaptive updating rules (Zhou et al., 2020). However, estimating gradient
moments in this way may yield preconditioners with too much noise, resulting
in adaptive methods that may not have meaningful improvements over DP-SGD (Li
et al., 2022). As we discuss in Section 1, more recent works suggest the use
of non-sensitive public information to estimate the preconditioners (or other
gradient structures) (Li et al., 2022, Kairouz et al., 2021a, Asi et al.,
2021), which may not always be available in practice. In Section 5.2, we
empirically benchmark two baselines along this line of work and demonstrate
that DP2 can perform comparably to these state-of-the-art methods, even though
it does not require access to auxiliary data. Finally, we note that previous
works have explored the high-level direction of delayed preconditioners, but
mainly as a compromise for computational considerations in non-private
training (Gupta et al., 2018). In this work, we instead show that staleness
can be leveraged to improve privacy/utility trade-offs in private adaptive
optimization, and propose and analyze a novel method for delaying
preconditioner computation in the context of private training.
Notation. In this work, we consider using adaptive optimization methods to
solve the classic empirical risk minimization objective, i.e.,
$\min_{w}F(w)=\frac{1}{n}\sum_{i=1}^{n}f(x^{i};w)$, where
$w\in\mathbb{R}^{d}$ and $\{f(x^{i};w)\}_{i\in[n]}$ are individual loss
functions on training sample $i\in[n]$. For vectors $u,v\in\mathbb{R}^{d}$, we
use $u+v$ for coordinate-wise addition, and $\frac{u}{v}$ for coordinate-wise
division. For any vector $v$, $v_{j}$ denotes the $j$-th coordinate of $v$.
For example, $g^{i,t}_{j}$ refers to the $j$-th coordinate of gradient
$g^{i,t}$. Finally, $\left|v\right|\in\mathbb{R}^{d}$ denotes taking
coordinate-wise absolute values, and $\|\cdot\|_{M}$ denotes the matrix norm
defined as $\|\cdot\|_{M}:=\sqrt{\langle\cdot,M\cdot\rangle}$ for a symmetric
and positive definite matrix $M\in\mathbb{R}^{d\times d}$, or a diagonal
matrix with non-negative diagonal entries populated by a vector
$M\in\mathbb{R}^{d}$.
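As a small illustration of the notation, the diagonal form of the matrix norm used throughout the paper can be computed as follows (a sketch; the helper name is ours):

```python
import numpy as np

def diag_matrix_norm(x, M):
    # ||x||_M = sqrt(<x, M x>) with M given as a vector of non-negative
    # entries representing a diagonal matrix, as in the paper's notation
    return float(np.sqrt(np.dot(x, M * x)))
```

For instance, with `M` the identity diagonal, this reduces to the ordinary Euclidean norm.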
## 3 DP2: Delayed Preconditioners for Differentially Private Adaptive
Optimization
We now introduce our DP2 framework. While we discuss DP2 in the context of a
particular adaptive method (RMSProp), we note that the approach is method-
agnostic in that it can generally be applied to any private adaptive
optimization method where preconditioners are calculated at each iteration. As
an initial step towards understanding the algorithm, we first investigate the
effects of delayed preconditioners in non-private training in Section 3.1. We
then explain how to apply this idea to construct less noisy preconditioners
from prior gradients in private training in Section 3.2.
### 3.1 Delayed preconditioners in non-private settings
Adaptive methods use preconditioners to adapt to gradient geometry,
effectively resulting in coordinate-wise learning rates. This can be
advantageous for many applications, especially those with sparse gradients or
non-uniform stochastic noise (e.g., Zhang et al., 2020, Reddi et al., 2021,
Hinton et al., 2012, McMahan & Streeter, 2010). One of the key design choices
of DP2 is to update preconditioners less frequently and use the average of
past gradients to reduce noise. Our observation is that a wide range of
learning problems are tolerant to the staleness of preconditioners. In this
subsection, we validate this empirically on the benchmark datasets considered
throughout this paper.
There are potentially many ways that one could instantiate the idea of delayed
preconditioner computation in adaptive optimization. Here we consider a
specific algorithm, which is the exact non-private version of our proposed DP2
framework (Algorithm 1) introduced in later sections. The basic idea is to
alternate between $s$ steps of SGD and $s$ steps of an adaptive method (for
simplicity we assume RMSProp as the adaptive algorithm), where $s$ is a
constant larger than 1. Each time we switch from SGD to RMSProp, we average
$s$ past SGD gradients and use the average to update the preconditioner. The
preconditioner will be used in subsequent RMSProp updates (thus being stale).
As motivation for DP2, we empirically show that RMSProp with delayed
preconditioners achieves almost the same optimization performance as RMSProp
(Figure 2).
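The alternating scheme described above can be sketched as follows (a non-private sketch; the function name and hyperparameter defaults are illustrative, and `grad_fn` is a stand-in for a stochastic gradient oracle):

```python
import numpy as np

def delayed_rmsprop(grad_fn, w0, s=32, steps=512, lr=0.01, beta=0.9,
                    eps=1e-8, seed=0):
    """Alternate s SGD steps and s RMSProp steps; each switch to RMSProp
    refreshes the (delayed) preconditioner from the average of the s
    preceding SGD gradients. grad_fn(w, rng) returns a stochastic gradient."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    v = np.zeros_like(w)
    acc = np.zeros_like(w)
    for t in range(steps):
        phase = t % (2 * s)
        if phase == 0:
            acc[:] = 0.0
        if phase == s:
            v = beta * v + (1 - beta) * (acc / s) ** 2  # delayed preconditioner
            acc[:] = 0.0
        g = grad_fn(w, rng)
        if phase < s:                 # SGD phase: accumulate gradients
            acc += g
            w = w - lr * g
        else:                         # RMSProp phase: use stale preconditioner
            w = w - lr * g / (np.sqrt(v) + eps)
    return w
```

On a simple quadratic this behaves like RMSProp with a preconditioner that lags by up to `s` steps.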
Figure 2: In non-private training, RMSProp with delayed preconditioners
achieves similar training loss as standard RMSProp across all datasets. Final
test accuracies are presented in Section 5.1. This observation provides
motivation for our proposed DP2 framework for private training (Section 3.2).
As discussed in Section 2, we note that the idea of delayed preconditioning
has been briefly discussed in prior work (Gupta et al., 2018) for the purpose
of speeding up the computation of adaptive optimization in non-private
training. Unlike this prior work, we focus on the goal of reducing noise in
private training, propose an alternative method for using stale
preconditioners that is more amenable to differential privacy, and analyze our
method in both convex and non-convex settings.
### 3.2 Constructing delayed preconditioners with reduced noise
Without access to public data or other side information, prior works typically
update preconditioners based on noisy gradients at each iteration (Zhou et
al., 2020). For instance, a natural way to privatize RMSProp is to update the
preconditioner $v\in\mathbb{R}^{d}$ as $v\leftarrow\beta
v+(1-\beta)(\tilde{g})^{2}$ where $\beta\in(0,1)$ is a moving average
constant, and $\tilde{g}\in\mathbb{R}^{d}$ is the noisy gradient output by
some standard privacy mechanism (e.g., the Gaussian mechanism). (We consider
the practical diagonal, as opposed to matrix, form of adaptive methods
throughout the paper.) However, a drawback to this is that the noise gets
accumulated at each iteration, making adaptive methods significantly less
effective (Li et al., 2022).
Inspired by the observation that problems can be tolerant to the staleness of
preconditioners (Figure 2), we propose to update the preconditioners less
frequently to reduce noise. For instance, we update $v$ every $s$ steps using
some aggregate function of $s$ recent private gradients from DP-SGD. During
iterations where $v$ is not updated, we simply apply the most recent (stale)
$v$ to precondition the gradients. In order to mitigate the noise, we average
over these $s$ gradients to form a pseudo-gradient $g$, which can be plugged
into arbitrary adaptive optimization algorithms. Note that the privacy noise
variance will be reduced $s$ times if we average $s$ Gaussian random variables
(i.e., the DP noise).
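The $s$-fold variance reduction from averaging the DP noise is easy to verify numerically (a standalone sketch with illustrative constants):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, s, trials = 2.0, 8, 200_000
# Average s i.i.d. N(0, sigma^2) draws (the per-step DP noise) and
# measure the empirical variance of the average
avg = rng.normal(0.0, sigma, size=(trials, s)).mean(axis=1)
print(avg.var())  # close to sigma**2 / s = 0.5
```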
Input: $T$, batch size $b$, noise multiplier $\sigma$, clipping threshold
$C$, initial model $w^{0}\in\mathbb{R}^{d}$, $v=\mathbf{0}$, constant
$\epsilon\in\mathbb{R}_{+}$, learning rate schedule $\alpha^{t}$, moving
average parameter $\beta$, number of consecutive SGD steps $s_{1}$, number of
consecutive RMSProp steps $s_{2}$
1: for $t=0,\cdots,T-1$ do
2:     if $t\ \mathrm{mod}\ (s_{1}+s_{2})=0$ then
3:         Reset accumulator $G^{t}\leftarrow\mathbf{0}$
4:     if $t\ \mathrm{mod}\ (s_{1}+s_{2})=s_{1}$ then
5:         Update moment estimates as $v\leftarrow\beta v+(1-\beta)\left(G^{t}/s_{1}\right)^{2}$
6:         Reset accumulator $G^{t}\leftarrow\mathbf{0}$
7:     Uniformly randomly sample a mini-batch $B$ of size $b$ from the private training data
8:     Get individual gradients for each sample $i\in B$: $g^{i,t}\leftarrow\nabla f(x^{i};w^{t})$
9:     Privatize the (preconditioned) gradients using the Gaussian mechanism:
       $\displaystyle\tilde{g}^{t}\leftarrow\frac{1}{b}\left(\sum_{i\in B}\text{clip}\left(\frac{g^{i,t}}{D^{t}},C\right)+\mathcal{N}\left(\mathbf{0},\sigma^{2}C^{2}\right)\right)$,
       where $\displaystyle D^{t}\leftarrow\begin{cases}\mathbf{1}&\text{if }t\ \mathrm{mod}\ (s_{1}+s_{2})<s_{1}\\ \sqrt{v}+\epsilon&\text{otherwise.}\end{cases}$
10:    Accumulate the private gradients: $G^{t+1}\leftarrow G^{t}+\tilde{g}^{t}$
11:    Update model parameters: $w^{t+1}\leftarrow w^{t}-\alpha^{t}\tilde{g}^{t}$
return $w^{T}$
Algorithm 1 DP2-RMSprop: Delayed Preconditioners for Differentially Private
RMSprop
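For concreteness, the pseudocode above can be sketched as runnable Python (our own simplification; hyperparameter defaults are illustrative, and `grad_fn` stands in for mini-batch sampling plus per-example gradient computation):

```python
import numpy as np

def dp2_rmsprop(grad_fn, w0, T, b, sigma, C, s1, s2,
                lr=0.05, beta=0.9, eps=1e-3, seed=0):
    """Sketch of DP2-RMSProp. grad_fn(w, rng) -> (b, d) array of
    per-example gradients for a freshly sampled mini-batch."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    d = w.shape[0]
    v = np.zeros(d)
    G = np.zeros(d)
    for t in range(T):
        cyc = t % (s1 + s2)
        if cyc == 0:
            G[:] = 0.0                                   # Line 3: reset accumulator
        if cyc == s1:
            v = beta * v + (1 - beta) * (G / s1) ** 2    # Line 5: delayed preconditioner
            G[:] = 0.0                                   # Line 6: reset accumulator
        D = np.ones(d) if cyc < s1 else np.sqrt(v) + eps
        g = grad_fn(w, rng)                              # Line 8: per-example gradients
        pre = g / D                                      # precondition before privatizing
        norms = np.linalg.norm(pre, axis=1, keepdims=True)
        clipped = pre * np.minimum(1.0, C / np.maximum(norms, 1e-12))
        g_priv = (clipped.sum(axis=0)
                  + rng.normal(0.0, sigma * C, size=d)) / b  # Line 9: Gaussian mechanism
        G += g_priv                                      # Line 10: accumulate private grads
        w = w - lr * g_priv                              # Line 11: model update
    return w
```

Note that the accumulator only ever sees already-privatized gradients, which is what lets the preconditioner update avoid a second privacy charge.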
DP2 is summarized in Algorithm 1. For simplicity of presentation, we assume
RMSProp as the adaptive method (denoted as DP2-RMSProp) throughout this
section. However, our framework can be generally applied to other common
adaptive methods (see Appendices C.3 and D). The high-level idea is to
alternate between $s_{1}$ steps of private SGD and $s_{2}$ private RMSProp
steps, and use averages of $s_{1}$ SGD gradients (i.e., average of the
accumulator $G\in\mathbb{R}^{d}$) to update the preconditioner $v$. Next, we
discuss some key components of our algorithm.
Order of privatization and preconditioning. Given a private preconditioner
$v$, there are generally two choices to perform adaptive optimization over the
raw gradients $\{g^{i,t}\}_{i\in B}$ generated from mini-batch $B$ at the
$t$-th iteration.
1. 1.
First privatize gradients with clipping threshold $C_{1}$, then precondition
noisy gradients with $\sqrt{v}+\epsilon$ where $\epsilon$ is a small constant:
$\displaystyle\tilde{g}^{t}\leftarrow\frac{1}{b}\left(\sum_{i\in
B}\text{clip}\left(g^{i,t},C_{1}\right)+\mathcal{N}\left(\mathbf{0},\sigma^{2}C_{1}^{2}\right)\right)/\left(\sqrt{v}+\epsilon\right)$
2. 2.
First precondition gradients with $\sqrt{v}+\epsilon$, then privatize the
output with clipping threshold $C_{2}$:
$\displaystyle\tilde{g}^{t}\leftarrow\frac{1}{b}\left(\sum_{i\in
B}\text{clip}\left({g}^{i,t}/\left(\sqrt{v}+\epsilon\right),C_{2}\right)+\mathcal{N}\left(\mathbf{0},\sigma^{2}C_{2}^{2}\right)\right)$
The difference is that in the first choice the privacy noise may be scaled in
an undesired direction: the effective noise becomes
$\frac{\mathcal{N}(\mathbf{0},\sigma^{2}C^{2})}{\sqrt{v}+\epsilon}$, so a
less noisy estimate of $\sqrt{v}$ (in the extreme case, a perfect estimate
that removes all privacy noise from the preconditioner) would amplify the
noise $\mathcal{N}(\mathbf{0},\sigma^{2}C^{2})$ on informative coordinates
(i.e., coordinates with smaller preconditioner values). This is consistent
with the argument made in Li et al. (2022). We empirically compare the two options and
show that the latter gives better performance (Section 5.3).
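The scaling issue with the first choice can be seen directly: dividing already-added noise by $\sqrt{v}+\epsilon$ inflates it exactly on the coordinates with small preconditioner values (the values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_C = 1.0
v = np.array([1e-4, 1.0])   # coordinate 0 is "informative" (small preconditioner)
eps = 1e-3
noise = rng.normal(0.0, sigma_C, size=(100_000, 2))
# Choice 1: privatize first, then precondition. The noise is divided by
# sqrt(v) + eps, which amplifies it on the small-v coordinate.
scaled = noise / (np.sqrt(v) + eps)
print(scaled.std(axis=0))   # coordinate 0's noise is ~90x larger than coordinate 1's
```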
It is critical to average noisy gradients to construct a cleaner estimate of
the preconditioner (Lines 5 and 10 in Algorithm 1) and apply it for adaptive
optimization (Line 9). As these two steps access raw gradients twice, we need
to privatize them separately. Unfortunately, the privacy budget would
accumulate with each query to the raw training data. Hence, we use the private
SGD gradients for both the model update and the preconditioner estimation.
This results in a hybrid method that alternates between private SGD and
private adaptive optimization steps. Note that to get an unbiased estimate of
the true delayed preconditioners, we can correct the bias in
$(G^{t}/s_{1})^{2}$ (Line 5) by subtracting the privacy noise variance term
$\frac{\sigma^{2}C^{2}}{s_{1}b^{2}}$ from $(G^{t}/s_{1})^{2}$; in practice,
however, this term is usually small enough to be negligible. While in principle,
non-adaptive and adaptive updates can take different numbers of consecutive
iterations, in our empirical evaluation, we simply set $s_{1}=s_{2}$, and find
that this works reasonably well across all datasets (Section 5).
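The bias correction mentioned above is a one-liner (a sketch; it assumes clipping is inactive, so the per-coordinate noise variance in the averaged accumulator is exactly $\sigma^{2}C^{2}/(s_{1}b^{2})$):

```python
import numpy as np

def debiased_preconditioner(G, s1, sigma, C, b):
    # E[(G/s1)^2] = (true average gradient)^2 + sigma^2 C^2 / (s1 b^2)
    # per coordinate, so subtracting the noise-variance term debiases it
    return (G / s1) ** 2 - (sigma ** 2 * C ** 2) / (s1 * b ** 2)
```

With `sigma = 0` this reduces to the uncorrected estimate $(G/s_{1})^{2}$.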
#### Privacy guarantees.
From Algorithm 1, we see that at each iteration, we access raw data and pass
them through the privacy barrier once (Line 9) to generate private gradients
$\tilde{g}^{t}$ with the same noise multiplier $\sigma$ and batch size $b$,
and the preconditioner only accumulates already differentially private
gradients. Since the final model is a composition of these private releases
(noisy gradients), Algorithm 1 (or DP2 in general) achieves the same privacy
guarantees as standard DP-SGD training under the same training settings. For
completeness, we formally state the privacy guarantee below.
###### Theorem 1 (Privacy guarantee of Algorithm 1 (Abadi et al., 2016)).
There exist constants $c_{1}$ and $c_{2}$ such that for any
$\varepsilon<c_{1}b^{2}T/n^{2}$, Algorithm 1 is
$(\varepsilon,\delta)$-differentially private for any $\delta>0$ if
$\sigma\geq c_{2}\frac{b\sqrt{T\log(1/\delta)}}{n\varepsilon}$.
In practice, we use the Rényi differential privacy (RDP) accountant for the
subsampled Gaussian mechanism (Mironov et al., 2019) to compute the actual
$\varepsilon$’s reported in the experiments (Section 5).
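Theorem 1's sufficient condition on the noise multiplier can be evaluated directly (a sketch; the constant `c2` is unspecified in the theorem, so `c2=1.0` below is a placeholder, not the paper's value):

```python
import math

def sigma_lower_bound(b, T, n, epsilon, delta, c2=1.0):
    # Theorem 1: sigma >= c2 * b * sqrt(T * log(1/delta)) / (n * epsilon)
    return c2 * b * math.sqrt(T * math.log(1.0 / delta)) / (n * epsilon)
```

The bound grows like $\sqrt{T}$ and shrinks linearly in $n$ and $\varepsilon$, matching the usual DP-SGD scaling.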
## 4 Convergence Analysis
In this section, we analyze Algorithm 1 for both convex and non-convex
problems. We aim to study the convergence properties of DP2 and investigate
the trade-offs between delay and privacy noise. In doing so, key challenges
are introduced by alternating between adaptive and non-adaptive updating and
through the staleness of preconditioners.
### 4.1 Convex Cases
For convex functions, we define the optimal model $w^{*}$ as
$w^{*}\in\arg\min_{w}F(w)$. First, we state some assumptions (apart from
convexity) that are used in the analysis.
###### Assumption 1.
There exists a constant $R$ such that $\|w^{t}-w^{*}\|_{2}\leq R$ for any
iteration $t$.
###### Assumption 2 (Bounded stochastic gradient norm).
There exists a constant $C$ such that $\left\|g^{i,t}\right\|_{2}\leq C$ for
any $i\in[n]$ and iteration $t$.
Assumption 1 (bounded domain across all iterations) is commonly used in
adaptive optimization literature (Levy et al., 2018, Reddi et al., 2018, Asi
et al., 2021, Li et al., 2022). Assumption 2 aims to bound the $L_{2}$ norm of
the stochastic gradient, thus helping bound the $L_{2}$ sensitivity of the
operation of calculating and averaging individual gradients from a mini-batch.
Assuming bounded stochastic gradient norm is standard in prior works on convex
and non-convex private optimization (e.g., Kairouz et al., 2021a, Zhou et al.,
2020, Li et al., 2022). Under this assumption, supposing that clipping is not
active, we have $\tilde{g}^{t}\leftarrow
g^{t}+\mathcal{N}(0,\sigma^{2}C^{2}/b^{2})$, where
$g^{t}:=\frac{1}{b}\sum_{i\in B}g^{i,t}$. Without loss of generality, let
$s_{1}=s_{2}$ in Algorithm 1. Our main convergence result is as follows
(assuming $t$ starts from 1).
###### Theorem 2 (Convergence of Algorithm 1 for convex problems).
Let Assumptions 1 and 2 hold. Assume $F$ is a convex function. Let the
learning rate $\alpha^{t}$ be set as
$\alpha^{t}\leftarrow\frac{\alpha^{\left\lfloor\frac{t}{2s}\right\rfloor+\left\lfloor\frac{t+s}{2s}\right\rfloor+1}}{\sqrt{t}}$.
After running Algorithm 1 for $T$ iterations with $s=\upsilon T$ for a small
constant $\upsilon\in(0,1]$, we obtain
$\displaystyle\min_{t\in[T]}\mathbb{E}\left[F(w^{t})\right]-F(w^{*})\leq\frac{R^{2}+\kappa}{\alpha^{\left\lfloor\frac{1}{2\upsilon}\right\rfloor+\left\lfloor\frac{1+\upsilon}{2\upsilon}\right\rfloor}}\frac{1}{\sqrt{T}}\sum_{t\in T_{\upsilon}}\mathbb{E}\left[\left\|D^{t}\right\|_{1}\right]+\frac{1}{T}\sum_{t=1}^{T}\frac{\alpha^{\left\lfloor\frac{t}{2\upsilon T}\right\rfloor+\left\lfloor\frac{t+\upsilon T}{2\upsilon T}\right\rfloor}}{\sqrt{t}}\mathbb{E}\left[\|N^{t}\|^{2}_{D^{t}}\right],$
where $T_{\upsilon}$ denotes the iteration indices where we switch from
private RMSProp steps to private SGD steps plus the last iteration, with
cardinality $|T_{\upsilon}|=\lceil\frac{1}{2\upsilon}\rceil$,
$N^{t}\sim\mathcal{N}(\mathbf{0},\sigma^{2}C^{2}/b^{2})$, and
$\displaystyle\kappa\geq\max\left\{\alpha^{2}C^{2},\frac{Ch(s)}{\epsilon\sqrt{1-\beta}}\right\},\quad\alpha=\min\left\{\epsilon,\frac{1}{\sqrt{M}+\epsilon},1\right\},\quad\text{where }M:=C^{2}+\frac{\sigma^{2}C^{2}}{sb^{2}}.$
We defer all proofs to Appendix A and state simplified convergence results in
Corollary 1. As we can see, the above upper bound relies on a critical metric
$h(s)$ which is related to temporal gradient similarity and the amount of
staleness $s$, formally defined as:
$\displaystyle
h(s)\geq\max_{t\in[T]}\frac{\mathbb{E}\left[\left\|g^{t}\right\|_{1}\right]}{\mathbb{E}\left[\left\|\frac{1}{s}G^{\left\lfloor\frac{t}{s}\right\rfloor
s}\right\|_{1}\right]+d\epsilon}=\max_{t\in[T]}\frac{\mathbb{E}\left[\left\|g^{t}\right\|_{1}\right]}{\mathbb{E}\left[\frac{1}{s}\left\|\sum_{i=\left\lfloor\frac{t}{s}\right\rfloor
s-s}^{\left\lfloor\frac{t}{s}\right\rfloor
s-1}\tilde{g}^{i}\right\|_{1}\right]+d\epsilon},$
where the expectation is taken with respect to all randomness in the
algorithm, and $G^{\left\lfloor\frac{t}{s}\right\rfloor s}\in\mathbb{R}^{d}$
refers to the latest accumulator that is used to update $v$ (Line 5 in
Algorithm 1).
Figure 3: Visualization of $h(s)$ versus $s$ on IMDB.
A smaller $h(s)$ indicates better convergence. We see that the
denominator of $h(s)$ can be decomposed into the average of past raw gradients
and the average of random Gaussian noise. Intuitively, $h(s)$ tends to be
smaller as gradients across the $s$ iterations in
$G^{\left\lfloor\frac{t}{s}\right\rfloor s}$ are more similar with the current
gradient $g^{t}$ in terms of the gradient norms. In Appendix A.2, we show that
an upper bound of $h(s)$ can be expressed as $c_{1}+c_{2}s$ where
$c_{1},c_{2}$ are two constants. We also visualize the value of $h(s)$ on the
IMDB dataset in Figure 3, and show that (1) the values of $h(s)$ are
consistently small across all delays, and (2) $h(s)$ increases as $s$ gets
larger, which is consistent with the upper bound $c_{1}+c_{2}s$.
Trade-offs between delay and noise. Here we discuss how $s$ affects
convergence based on our analysis. Intuitively, larger $s$ (larger delay)
results in staler preconditioners, but introduces less noise due to private
gradient averaging. In our convergence bound, there are several terms that
depend on $s$ (or $\upsilon$). Although this makes it difficult to derive a
closed-form characterization of an optimal $s$, we can analyze the effects of
$s$ in simplified settings. In particular, examining the first term on the RHS
of the convergence bound, let
$\alpha=\frac{1}{\sqrt{M}+\epsilon}=\frac{1}{\sqrt{c_{3}+\frac{c_{4}}{\upsilon}}+\epsilon}$
(where $c_{3},c_{4}$ are two constants), and assume
$\left\lfloor\frac{1}{2\upsilon}\right\rfloor+\left\lfloor\frac{1+\upsilon}{2\upsilon}\right\rfloor=\frac{1}{2\upsilon}+\frac{1+\upsilon}{2\upsilon}=\frac{2+\upsilon}{2\upsilon}$.
Combined with $h(s)$, the dependence on $\upsilon$ in
$\frac{R^{2}+\kappa}{\alpha^{\left\lfloor\frac{1}{2\upsilon}\right\rfloor+\left\lfloor\frac{1+\upsilon}{2\upsilon}\right\rfloor}}$
can be expressed as
$(c_{1}+c_{2}\upsilon)\left(\sqrt{c_{3}+\frac{c_{4}}{\upsilon}}+\epsilon\right)^{\frac{2+\upsilon}{2\upsilon}}$.
This suggests that there exists an optimal $\upsilon$ that achieves the
minimal value. In Section 5.1, we empirically study the effects of $s$ across
real-world datasets, and demonstrate that there exist specific ranges of $s$
that provide favorable trade-offs between delay and noise (Figure 6).
###### Corollary 1.
Let Assumptions 1 and 2 hold. Assume $F$ is a convex function. Ignoring the
constants, the convergence rate under learning rate
$\alpha^{t}=O\left(\frac{1}{\sqrt{t}}\right)$ simplifies to
$\displaystyle\min_{t\in[T]}\mathbb{E}[F(w^{t})]-F(w^{*})\leq
O\left(\frac{1}{\sqrt{T}}\max_{t\in
T_{s}}\mathbb{E}\left[\|D^{t}\|_{1}\right]\right)+O\left(\frac{1}{T}\sum_{t=1}^{T}\frac{1}{\sqrt{t}}\mathbb{E}\left[\|N^{t}\|_{D^{t}}^{2}\right]\right),$
where $T_{s}$ denotes the iteration indices where we switch from private
RMSProp steps to private SGD steps plus the last iteration (thus having a
constant cardinality) and
$N^{t}\sim\mathcal{N}(\mathbf{0},\sigma^{2}C^{2}/b^{2})$.
At a high level, the first term is due to adaptive optimization using RMSProp,
and the second term corresponds to the added privacy noise. Our
$O\left(\frac{1}{\sqrt{T}}\right)$ rate is the same as previous results for
SGD (or DP-SGD) in convex cases with decaying learning rates (Nemirovski et
al., 2009, Bassily et al., 2014). Compared with DP-SGD, the added privacy
noise would be reduced from
$\frac{1}{T}\sum_{t=1}^{T}\frac{1}{\sqrt{t}}\mathbb{E}[\|N^{t}\|^{2}]$ to
$\frac{1}{T}\sum_{t=1}^{T}\frac{1}{\sqrt{t}}\mathbb{E}[\|N^{t}\|^{2}_{D^{t}}]$
when the gradients are sparse (so that $\|D^{t}\|_{1}<d$ in adaptive
iterations). Hence, this theorem suggests some constant improvements relative
to DP-SGD when we switch for a constant number of times.
### 4.2 Non-Convex Cases
We make the following additional common assumptions in non-convex convergence
analyses.
###### Assumption 3 (Smoothness).
Each $f(x^{i};w)~{}(i\in[n])$ is $L$-smooth with respect to
$w\in\mathbb{R}^{d}$.
###### Assumption 4.
Stochastic gradient variance is bounded, i.e.,
$\mathbb{E}[\|g^{i,t}-\mathbb{E}[g^{i,t}]\|_{2}^{2}]\leq\tau^{2}$ for all
$i,t$.
###### Theorem 3 (Convergence of Algorithm 1 for non-convex problems).
Let Assumptions 1-4 hold. Define constant $M$ as
$M:=C^{2}+\frac{\sigma^{2}C^{2}}{sb^{2}}$. Under any delay parameter $s$,
after running Algorithm 1 with constant learning rates $\alpha^{t}=\alpha$
such that $\frac{L\alpha}{\epsilon}\leq 1$, we have
$\displaystyle\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}[\|\nabla
F(w^{t})\|^{2}]\leq\frac{2(\sqrt{M}+1)F(w^{1})}{\alpha T}+2\alpha
L(\sqrt{M}+1)\left(\frac{\tau^{2}}{2\epsilon^{2}b}+\frac{d\sigma^{2}C^{2}}{2b^{2}}\right).$
The proof is deferred to Appendix B. Compared with Theorem 2, here we do not
have constraints on $s$. Note that to guarantee $(\varepsilon,\delta)$-DP by
running $T$ iterations, we can set
$\sigma^{2}=O\left(\frac{b^{2}T\log(1/\delta)}{n^{2}\varepsilon^{2}}\right)$,
$\alpha=O\left(\frac{1}{\sqrt{d}}\right)$, and
$T=O\left(\frac{n\varepsilon}{\log(1/\delta)}\right)$, to arrive at a
convergence bound
$O\left(\frac{\sqrt{d}}{n\varepsilon}+\frac{\tau^{2}}{\sqrt{d}b}\right)$.
Under any $s$, our rate (with and without noise) is the same as previous
results on DP-SGD and (DP) adaptive methods for non-convex problems (Zaheer et
al., 2018, Li et al., 2022). We note that our non-convex analysis does not
directly highlight the benefits of adaptivity or trade-offs around $s$; hence
the optimal choice of $s$ according to this result is $s=T$, to maximize the
goal of reducing privacy noise. However, the practical performance can be
better than the upper bound derived here, as shown in our experiments (Section
5). Most previous works studying stochastic non-convex adaptive
optimization do not prove improvements relative to SGD (e.g., Zaheer et al.,
2018, Ward et al., 2020, De et al., 2018, Alacaoglu et al., 2020). It is still
an open problem to rigorously characterize the benefits of adaptivity for non-
convex problems, which we leave for future work.
## 5 Empirical Evaluation
In this section we report empirical results on a range of learning tasks. In
Section 5.1, we compare DP2 with the baselines of DP-SGD and vanilla DP
adaptive methods across various privacy budgets, and investigate the effects
of delay on all datasets. We additionally compare DP2 with recent more
advanced private adaptive methods in Section 5.2, and conduct ablation studies
to validate the effectiveness of different DP2 components in Section 5.3.
In all experiments, we use the Rényi differential privacy (RDP) accountant for
the subsampled Gaussian mechanism (Mironov et al., 2019) for privacy accounting.
We focus on the RMSProp optimizer (Hinton et al., 2012) and provide results
relating to other adaptive methods such as AdaGrad (Duchi et al., 2011,
Streeter & McMahan, 2010) in Appendix C. Our experiments are implemented in
JAX (Bradbury et al., 2018) with Haiku (Hennigan et al., 2020) to auto-
vectorize over the per-example operations (e.g. per-example clipping) for
substantial speedups (Subramani et al., 2021). Unless explicitly stated, we
report results with the best grid-searched hyperparameters. Note that for DP2
we tune the learning rates and clipping thresholds separately for private SGD
iterations and private adaptive (RMSProp) iterations. See Appendix C.2 for
hyperparameter details. Our code is publicly available at
github.com/kenziyuliu/DP2.
Tuning $s$. In all experiments, we tune the delay parameter ($s$) via grid
search. For convex tasks, we choose $s$ from $\{0.025,0.5,0.1,0.5,1,2\}$
epochs. For the non-convex model, we choose $s$ from $\{0.5,3,10,25\}$
epochs. We explore the sensitivity of DP2 to $s$ in Section 5.2, and show that
there exists a wide range of $s$ values that yields superior performance
compared with the baseline methods.
Datasets and Tasks. We pick datasets and tasks where adaptivity is crucial
(e.g., those involving sparse gradients). For such tasks, adaptive methods
have major benefits relative to SGD in non-private training, and we expect DP2
to retain the benefits in private training. See Appendix C.1 for a detailed
description. For all datasets, we explore the effects of several noise
multiplier ($\sigma$) values, and set $\delta=10^{-k}$ where $k$ is the
smallest integer that satisfies $10^{-k}\leq 1/n$ for the training dataset
size $n$.
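The choice of $\delta$ above is a one-liner (a sketch):

```python
import math

def delta_for_dataset(n):
    # delta = 10^{-k} with the smallest integer k such that 10^{-k} <= 1/n,
    # i.e. k = ceil(log10(n))
    k = math.ceil(math.log10(n))
    return 10.0 ** (-k)
```

For example, a dataset of 25,000 training examples (such as IMDB) gets $\delta=10^{-5}$.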
### 5.1 DP2 compared with DP-SGD and vanilla DP adaptive methods
We consider two popular baselines: DP-SGD (Abadi et al., 2016) and vanilla DP-
RMSProp (Zhou et al., 2020). In vanilla DP adaptive methods, private gradients
are plugged into adaptive updating rules to approximate the preconditioners at
each iteration. Figure 4 compares DP2-RMSProp with DP-SGD and DP-RMSProp. We
observe that across all datasets, DP2 consistently and substantially
outperforms the baselines in terms of both convergence and absolute
performance.
Figure 4: Test performance of DP2 compared to DP-SGD and DP-RMSProp on IMDB
(left), StackOverflow (middle), and MovieLens-100k (right) for a fixed privacy
budget. For all datasets, we calculate the privacy loss ($\varepsilon$) under
fixed $\delta$’s, noise multipliers {1.0, 1.0, 0.5}, and batch size 64. All
runs are repeated over 5 random seeds. Dotted lines correspond to training
metrics.
Privacy/utility trade-offs. Figure 4 reports learning curves under specific
privacy budgets determined by the batch size and the number of epochs. Here,
we additionally explore privacy/utility trade-offs across a range of privacy
parameters, where $\varepsilon$ ranges are consistent with prior works (e.g.,
Kairouz et al., 2021b). Results are shown in Figure 5. We observe that similar
to the results in Figure 4, DP2 significantly outperforms DP-SGD and DP-
RMSProp under each privacy budget. For reference, the non-private RMSProp
method achieves 87% accuracy, 62% accuracy, and 0.88 mean square error (MSE)
on IMDB, StackOverflow, and MovieLens, respectively. Indeed, with weaker
privacy (larger $\varepsilon$), we expect smaller utility gaps between private
and non-private optimization. In Appendix C.4, we additionally explore how
increasing the computational budget may affect the privacy-utility trade-off.
Figure 5: Privacy/utility trade-offs of DP2-RMSProp (Algorithm 1) compared
with DP-SGD and DP-RMSProp for a range of privacy budgets. We see that
DP2-RMSProp consistently achieves more favorable privacy/utility trade-offs
than the baseline methods. Figure 6: Effect of the delay parameter $s$. We
show trade-offs between delay and noise in the first three subplots. The
rightmost subfigure showcases convergence curves under different delays
($s$=10000 corresponds to delaying for $\approx 3$ epochs) where DP2 achieves
a $4\times$ convergence speedup over DP-SGD. Privacy settings follow those of
Figure 4. Although a specific value of $s$ achieves the greatest improvements,
we observe that nearly all instantiations of DP2 improve upon the baselines.
Effects of $s$. Finally, we empirically study the effect of the delay
parameter $s$. Intuitively, there exists a trade-off between the amount of
delay and the privacy noise in the preconditioner: averaging over more
historical gradients (larger $s$) could yield less noisy preconditioners,
while introducing more staleness. In Figure 6, we report test performance
versus the delay $s$ across all datasets on the first three subplots. In the
last subplot, we additionally show the convergence behavior under different
values of $s$. These results suggest that there is a “sweet spot” for $s$: small
delays gradually improve over DP-RMSProp; moderate delays perform best in terms
of convergence and absolute performance; and large delays may slow down
convergence (although it is possible to reach similar performance with
sufficient training). These empirical results are
consistent with the implications of our convergence analysis discussed in
Section 4.1.
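The noise/staleness trade-off behind this sweet spot can be illustrated with a toy simulation. This is a sketch under simplified assumptions, not the experimental setup: a scalar gradient that drifts slowly over time, and Gaussian DP noise with the per-step standard deviation $\sigma C/b$ used in the analysis; the drift rate and all other numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, C, b = 1.0, 1.0, 64        # noise multiplier, clipping threshold, batch size
noise_std = sigma * C / b         # per-step Gaussian DP noise std
T, drift = 20_000, 1e-4
true_grad = 1.0 + drift * np.arange(T)          # slowly drifting clean gradient
noisy_grad = true_grad + rng.normal(0.0, noise_std, size=T)

def preconditioner_rmse(s):
    """RMS error of the s-step delayed average vs the current clean gradient."""
    errs = [(noisy_grad[t - s:t].mean() - true_grad[t]) ** 2
            for t in range(s, T, s)]
    return float(np.sqrt(np.mean(errs)))

for s in [1, 10, 100, 1_000, 10_000]:
    print(s, preconditioner_rmse(s))
```

With these numbers, very small $s$ is dominated by the DP noise, very large $s$ by staleness, and small-to-moderate $s$ gives the smallest error, mirroring the qualitative "sweet spot" behavior in Figure 6.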
### 5.2 DP2 compared with recent methods for private optimization
As discussed in Section 2, beyond DP-SGD and vanilla DP adaptive methods,
another line of work uses auxiliary, public data to improve private (adaptive)
optimization. While not directly comparable to DP2 since DP2 does not require
any side/public information, we compare DP2 to two state-of-the-art methods
along this direction: (1) AdaDPS (Li et al., 2022), which uses public data or
their statistics to estimate gradient geometry, and (2) PDA-DPMD (Amid et al.,
2022), which uses the loss on public data as a mirror map to learn the
underlying gradient geometry. (We do not directly compare with the prior work
of Asi et al. (2021), as the code is not publicly available and implementation
details are missing from the paper; however, the more recent PDA-DPMD work of
Amid et al. (2022), which we do compare with, reports superior performance to
Asi et al. (2021). We also implemented the diagonal variant of the method
proposed in the theoretically-focused work of Kairouz et al. (2021a), but
observed that accuracy improves only marginally beyond random guessing; see
Figure 12 in Section C.6.) Results are reported in Table 1, which shows that
DP2 has comparable performance to state-of-the-art baselines, but without the
need to access auxiliary data. See Appendix C.6 for full details and
convergence curves.
Dataset | DP-SGD | DP-RMSProp | PDA-DPMD | AdaDPS (w/ RMSProp) | DP2-RMSProp
---|---|---|---|---|---
IMDB $\boldsymbol{\uparrow}$ | .687 $\pm$ .018 | .713 $\pm$ .005 | .703 $\pm$ .005 | .826 $\pm$ .003 | .815 $\pm$ .011
StackOverflow $\boldsymbol{\uparrow}$ | .330 $\pm$ .002 | .328 $\pm$ .002 | .353 $\pm$ .001 | .406 $\pm$ .027 | .391 $\pm$ .001
MovieLens $\boldsymbol{\downarrow}$ | 3.02 $\pm$ .068 | 2.96 $\pm$ .062 | 3.74 $\pm$ .053 | 2.86 $\pm$ .042 | 2.78 $\pm$ .054
Table 1: DP2 compared with other private (adaptive) methods that use public
data (Li et al., 2022, Amid et al., 2022). Even though DP2 does not require
auxiliary information, we find that it achieves performance comparable to
these state-of-the-art approaches that require additional public data.
Corresponding convergence plots are presented in Figure 11 in Section C.6.
### 5.3 Ablation Studies
Figure 7: Different ablation variants of DP2 on IMDB. The dotted lines
correspond to training accuracy.
Finally, we also study the effectiveness of different components of DP2.
Recall that in Algorithm 1, we use noisy gradients from DP-SGD iterations to
update both the model parameters and the preconditioner such that the total
privacy cost is identical to that of DP-SGD. The first variant considers
accumulating DP-SGD gradients in the same way, but runs private adaptive
methods with delayed preconditioners in almost all iterations. This requires
adding independent noise twice in most iterations (once when accumulating the
preconditioner and once when noising the preconditioned update), thus increasing
the total privacy budget. The second variant is identical to DP2 except that
it applies the delayed preconditioner after noising the clean gradient; this
is to study the order of preconditioning as discussed in Section 3. As
illustrated in Figure 7, both variants indeed significantly underperform our
proposed method on the IMDB dataset, thus validating the design choices of
DP2. We defer complete results to Figure 10 and Table 4 in Appendix C.5. See
also Appendix D for the exact algorithms of both variants.
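The weakness of the second variant can be seen in a toy calculation. This is a sketch, not Algorithm 1 itself; the preconditioner entries and gradient values are illustrative. Preconditioning the clean gradient before noising keeps the added noise isotropic, whereas preconditioning after noising amplifies the noise along coordinates with small preconditioner entries:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, C = 1.0, 1.0
D = np.array([10.0, 0.1])      # hypothetical delayed preconditioner entries
g = np.array([1.0, 0.01])      # hypothetical clean (already clipped) gradient
target = g / D                 # the ideal (noise-free) preconditioned update

trials = 100_000
N = rng.normal(0.0, sigma * C, size=(trials, 2))
# DP2-style ordering: precondition the clean gradient, then add noise
err_precondition_first = float(np.mean(np.sum((target + N - target) ** 2, axis=1)))
# ablation-variant ordering: add noise to the gradient, then precondition
err_noise_first = float(np.mean(np.sum(((g + N) / D - target) ** 2, axis=1)))
print(err_precondition_first, err_noise_first)
```

With a small preconditioner entry (here $0.1$), the noise-then-precondition ordering inflates the error of that coordinate by roughly $1/D_j^{2}$, which is one intuition for the large gap observed in Figure 7.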
## 6 Conclusion and Future Work
In this work, we proposed DP2, a private adaptive optimization framework that
uses historical gradients to construct delayed but less noisy preconditioners,
yielding improved privacy/utility trade-offs without the need to access
auxiliary data. We demonstrated the effectiveness of DP2 both theoretically
and empirically. In the future, it would be interesting to extend the
techniques developed herein to other privacy-sensitive applications such as
federated learning (McMahan et al., 2017, Reddi et al., 2021). It is also
worth exploring the interplay between DP2 and private online optimization with
tree aggregation, which similarly releases cumulative statistics with reduced
noise (Chan et al., 2011).
## Acknowledgments
The work of TL, ZL, and VS was supported in part by the National Science
Foundation Grant IIS1838017, a Google Faculty Award, a Meta Faculty Award, the
Private AI Collaborative Research Institute, and the CONIX Research Center.
Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the author(s) and do not necessarily reflect the views
of the National Science Foundation or any other funding agency.
## References
* Abadi et al. (2016) Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In _Conference on Computer and Communications Security_ , 2016.
* Alacaoglu et al. (2020) Ahmet Alacaoglu, Yura Malitsky, Panayotis Mertikopoulos, and Volkan Cevher. A new regret analysis for adam-type algorithms. In _International Conference on Machine Learning_ , 2020.
* Amid et al. (2022) Ehsan Amid, Arun Ganesh, Rajiv Mathews, Swaroop Ramaswamy, Shuang Song, Thomas Steinke, Vinith M Suriyakumar, Om Thakkar, and Abhradeep Thakurta. Public data-assisted mirror descent for private model training. In _International Conference on Machine Learning_ , 2022.
* Asi et al. (2021) Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, and Kunal Talwar. Private adaptive gradient methods for convex optimization. In _International Conference on Machine Learning_ , 2021.
* Bassily et al. (2014) Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In _IEEE Symposium on Foundations of Computer Science_ , 2014.
* Bradbury et al. (2018) James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
* Chan et al. (2011) T-H Hubert Chan, Elaine Shi, and Dawn Song. Private and continual release of statistics. _ACM Transactions on Information and System Security_ , 2011.
* De et al. (2018) Soham De, Anirbit Mukherjee, and Enayat Ullah. Convergence guarantees for rmsprop and adam in non-convex optimization and an empirical comparison to nesterov acceleration. _arXiv preprint arXiv:1807.06766_ , 2018.
* De et al. (2022) Soham De, Leonard Berrada, Jamie Hayes, Samuel L Smith, and Borja Balle. Unlocking high-accuracy differentially private image classification through scale. _arXiv preprint arXiv:2204.13650_ , 2022.
* Denisov et al. (2022) Sergey Denisov, Brendan McMahan, Keith Rush, Adam Smith, and Abhradeep Thakurta. Improved differential privacy for sgd via optimal private linear operators on adaptive streams. In _Advances in Neural Information Processing Systems_ , 2022.
* Duchi et al. (2011) John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. _Journal of Machine Learning Research_ , 2011.
* Dwork et al. (2006) Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In _Theory of Cryptography Conference_ , 2006.
* Dwork et al. (2014) Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. _Foundations and Trends® in Theoretical Computer Science_ , 2014.
* Gupta et al. (2018) Vineet Gupta, Tomer Koren, and Yoram Singer. Shampoo: Preconditioned stochastic tensor optimization. In _International Conference on Machine Learning_ , 2018.
* Harper & Konstan (2015) F. Maxwell Harper and Joseph A. Konstan. The movielens datasets: History and context. _ACM Transactions on Interactive Intelligent Systems_ , 2015.
* Hennigan et al. (2020) Tom Hennigan, Trevor Cai, Tamara Norman, and Igor Babuschkin. Haiku: Sonnet for JAX, 2020. URL http://github.com/deepmind/dm-haiku.
* Hinton et al. (2012) Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Rmsprop: Divide the gradient by a running average of its recent magnitude. _Neural Networks for Machine Learning, Coursera Lecture 6e_ , 2012.
* Kaggle (2022) Kaggle. Stack Overflow Data on Kaggle. https://www.kaggle.com/datasets/stackoverflow/stackoverflow, 2022.
* Kairouz et al. (2021a) Peter Kairouz, Monica Ribero Diaz, Keith Rush, and Abhradeep Thakurta. (nearly) dimension independent private erm with adagrad rates via publicly estimated subspaces. In _Conference on Learning Theory_ , 2021a.
* Kairouz et al. (2021b) Peter Kairouz, Brendan McMahan, Shuang Song, Om Thakkar, Abhradeep Thakurta, and Zheng Xu. Practical and private (deep) learning without sampling or shuffling. In _International Conference on Machine Learning_ , 2021b.
* Levy et al. (2018) Kfir Y Levy, Alp Yurtsever, and Volkan Cevher. Online adaptive methods, universality and acceleration. _Advances in Neural Information Processing Systems_ , 2018.
* Li et al. (2022) Tian Li, Manzil Zaheer, Sashank Reddi, and Virginia Smith. Private adaptive optimization with side information. In _International Conference on Machine Learning_ , 2022.
* Maas et al. (2011) Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In _Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_ , 2011.
* McMahan et al. (2017) Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In _International Conference on Artificial Intelligence and Statistics_ , 2017.
* McMahan et al. (2018) Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private recurrent language models. In _International Conference on Learning Representations_ , 2018.
* McMahan & Streeter (2010) H. Brendan McMahan and Matthew J. Streeter. Adaptive bound optimization for online convex optimization. In _Conference on Learning Theory_ , 2010.
* Mironov et al. (2019) Ilya Mironov, Kunal Talwar, and Li Zhang. Rényi differential privacy of the sampled gaussian mechanism. _arXiv preprint arXiv:1908.10530_ , 2019.
* Nemirovski et al. (2009) Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. _SIAM Journal on Optimization_ , 2009.
* Reddi et al. (2021) Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečnỳ, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. In _International Conference on Learning Representations_ , 2021.
* Reddi et al. (2018) Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. In _International Conference on Learning Representations_ , 2018.
* Song et al. (2013) Shuang Song, Kamalika Chaudhuri, and Anand D Sarwate. Stochastic gradient descent with differentially private updates. In _IEEE Global Conference on Signal and Information Processing_ , 2013.
* Streeter & McMahan (2010) Matthew Streeter and H Brendan McMahan. Less regret via online conditioning. _arXiv preprint arXiv:1002.4862_ , 2010.
* Subramani et al. (2021) Pranav Subramani, Nicholas Vadivelu, and Gautam Kamath. Enabling fast differentially private sgd via just-in-time compilation and vectorization. In _Advances in Neural Information Processing Systems_ , 2021.
* TensorFlow Federated (2022) TensorFlow Federated. TensorFlow Federated Stack Overflow Dataset. https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/stackoverflow/load_data, 2022\.
* Ward et al. (2020) Rachel Ward, Xiaoxia Wu, and Leon Bottou. Adagrad stepsizes: Sharp convergence over nonconvex landscapes. _Journal of Machine Learning Research_ , 2020.
* Zaheer et al. (2018) Manzil Zaheer, Sashank Reddi, Devendra Sachan, Satyen Kale, and Sanjiv Kumar. Adaptive methods for nonconvex optimization. In _Advances in Neural Information Processing Systems_ , 2018.
* Zhang et al. (2020) Jingzhao Zhang, Sai Praneeth Karimireddy, Andreas Veit, Seungyeon Kim, Sashank Reddi, Sanjiv Kumar, and Suvrit Sra. Why are adaptive methods good for attention models? In _Advances in Neural Information Processing Systems_ , 2020.
* Zhou et al. (2020) Yingxue Zhou, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu, and Arindam Banerjee. Private stochastic non-convex optimization: Adaptive algorithms and tighter generalization bounds. _arXiv preprint arXiv:2006.13501_ , 2020.
* Zhou et al. (2021) Yingxue Zhou, Zhiwei Steven Wu, and Arindam Banerjee. Bypassing the ambient dimension: Private sgd with gradient subspace identification. In _International Conference on Learning Representations_ , 2021.
## Appendix A Proofs
###### Lemma 1.
Under Assumption 2 and with $s_{1}=s_{2}=s$ in Algorithm 1, for any
$j\in[d]$ we have $\mathbb{E}[v_{j}]\leq C^{2}+\frac{\sigma^{2}C^{2}}{sb^{2}}$.
###### Proof.
Recall that $C$ is the gradient norm bound (Assumption 2). Let the clipping
threshold be $C$ as well. We have for $j\in[d]$,
$\displaystyle\mathbb{E}\left[\left(\frac{1}{s}G_{j}\right)^{2}\right]$
$\displaystyle=\mathbb{E}\left[\left(\frac{1}{s}\left({g}_{j}^{i_{1}}+\cdots+{g}_{j}^{i_{s}}\right)+\frac{1}{s}\left(N_{j}^{i_{1}}+\cdots+N_{j}^{i_{s}}\right)\right)^{2}\right]$
(1)
$\displaystyle=\mathbb{E}\left[\frac{1}{s^{2}}\left({g}_{j}^{i_{1}}+\cdots+{g}_{j}^{i_{s}}\right)^{2}\right]+\mathbb{E}\left[\frac{1}{s^{2}}\left({N}_{j}^{i_{1}}+\cdots+{N}_{j}^{i_{s}}\right)^{2}\right]$
(2) $\displaystyle\leq C^{2}+\frac{\sigma^{2}C^{2}}{sb^{2}},$ (3)
where $\\{i_{1},\dots,i_{s}\\}$ denotes the indices of $s$ noisy gradients
used to obtain $G_{j}$, and $\\{N_{j}^{i_{1}},\dots,N_{j}^{i_{s}}\\}$ are
random zero-mean Gaussian variables with variance
$\frac{\sigma^{2}C^{2}}{b^{2}}$ under noise multiplier $\sigma$, clipping
threshold $C$, and mini-batch size $b$. Hence for any $j\in[d]$ and $t\in[T]$,
$\displaystyle\mathbb{E}\left[\left(\frac{1}{s}G_{j}\right)^{2}\right]$
$\displaystyle\leq C^{2}+\frac{\sigma^{2}C^{2}}{sb^{2}}:=M,$ (4)
$\displaystyle\mathbb{E}[v_{j}]$ $\displaystyle\leq M,$ (5)
$\displaystyle\mathbb{E}\left[\sqrt{v_{j}}\right]$
$\displaystyle\leq\sqrt{\mathbb{E}[v_{j}]}\leq\sqrt{M}$ (6)
$\displaystyle\mathbb{E}\left[D_{j}^{t}\right]$
$\displaystyle\leq\max\left\\{\sqrt{M}+\epsilon,1\right\\}.$ (7)
∎
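As a sanity check on the bound, a quick Monte Carlo simulation of the worst case (every per-coordinate gradient equal to the bound $C$, plus Gaussian DP noise of variance $\sigma^{2}C^{2}/b^{2}$) reproduces $\mathbb{E}[(G_{j}/s)^{2}]=C^{2}+\sigma^{2}C^{2}/(sb^{2})$ with equality; the parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, C, b, s = 1.0, 1.0, 4, 8
trials = 200_000
g = np.full((trials, s), C)                        # worst case: g_j^i = C at every step
N = rng.normal(0.0, sigma * C / b, size=(trials, s))
lhs = float(np.mean(((g + N).mean(axis=1)) ** 2))  # estimate of E[(G_j / s)^2]
rhs = C ** 2 + sigma ** 2 * C ** 2 / (s * b ** 2)  # Lemma 1 bound
print(lhs, rhs)
```

The cross term between gradients and noise vanishes in expectation, exactly as in the step from Eq. (1) to Eq. (2).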
### A.1 Proof of Theorem 2
Based on the updating rule, we have
$\displaystyle\quad\left\|w^{t+1}-w^{*}\right\|^{2}_{D^{t}}$ (8)
$\displaystyle=\left\|w^{t}-\alpha^{t}\frac{g^{t}}{D^{t}}-\alpha^{t}N^{t}-w^{*}\right\|^{2}_{D^{t}}$
(9)
$\displaystyle=\left\|w^{t}-w^{*}\right\|_{D^{t}}^{2}+\left\|\alpha^{t}\frac{g^{t}}{D^{t}}+\alpha^{t}N^{t}\right\|^{2}_{D^{t}}-2\left\langle
w^{t}-w^{*},\alpha^{t}g^{t}+\alpha^{t}D^{t}N^{t}\right\rangle$ (10)
$\displaystyle=\left\|w^{t}-w^{*}\right\|_{D^{t}}^{2}-2\alpha^{t}\left\langle
g^{t},w^{t}-w^{*}\right\rangle+(\alpha^{t})^{2}\left\langle
g^{t},\frac{g^{t}}{D^{t}}\right\rangle$ $\displaystyle\quad-2\alpha^{t}\langle
w^{t}-w^{*},D^{t}N^{t}\rangle+(\alpha^{t})^{2}\|N^{t}\|^{2}_{D^{t}}+2(\alpha^{t})^{2}\langle
g^{t},N^{t}\rangle.$ (11)
Rearranging terms gives
$\displaystyle\left\langle g^{t},w^{t}-w^{*}\right\rangle$
$\displaystyle=\frac{\|w^{t}-w^{*}\|^{2}_{D^{t}}-\|w^{t+1}-w^{*}\|^{2}_{D^{t}}}{2\alpha^{t}}+\frac{\alpha^{t}}{2}\left\langle
g^{t},\frac{g^{t}}{D^{t}}\right\rangle$ $\displaystyle\quad-\langle
w^{t}-w^{*},D^{t}N^{t}\rangle+\frac{\alpha^{t}}{2}\|N^{t}\|^{2}_{D^{t}}+\alpha^{t}\langle
g^{t},N^{t}\rangle.$ (12)
Taking the expectation on both sides conditioned on $w^{t}$,
$\displaystyle\left\langle\nabla F(w^{t}),w^{t}-w^{*}\right\rangle$
$\displaystyle=\frac{\mathbb{E}_{t}\left[\|w^{t}-w^{*}\|^{2}_{D^{t}}\right]-\mathbb{E}_{t}[\|w^{t+1}-w^{*}\|^{2}_{D^{t}}]}{2\alpha^{t}}$
$\displaystyle\quad+\frac{\alpha^{t}}{2}\mathbb{E}_{t}\left[\left\langle
g^{t},\frac{g^{t}}{D^{t}}\right\rangle\right]+\frac{\alpha^{t}}{2}\mathbb{E}_{t}\left[\|N^{t}\|^{2}_{D^{t}}\right],$
(13)
where we have used the fact that $N$ is a zero-mean Gaussian variable
independent of $g^{t},w^{t}$. Taking the expectation on both sides and using
the convexity of $F(\cdot)$:
$\displaystyle\mathbb{E}[F(w^{t})]-F(w^{*})$
$\displaystyle\leq\frac{\mathbb{E}[\|w^{t}-w^{*}\|^{2}_{D^{t}}]-\mathbb{E}[\|w^{t+1}-w^{*}\|^{2}_{D^{t}}]}{2\alpha^{t}}+\frac{\alpha^{t}}{2}\mathbb{E}\left[\left\langle
g^{t},\frac{g^{t}}{D^{t}}\right\rangle\right]+\frac{\alpha^{t}}{2}\mathbb{E}\left[\|N^{t}\|^{2}_{D^{t}}\right].$
(14)
Applying telescope sum, we have
$\displaystyle\sum_{t=1}^{T}\left(\mathbb{E}[F(w^{t})]-F(w^{*})\right)$
$\displaystyle\leq\frac{\|w^{1}-w^{*}\|^{2}_{D^{1}}}{2\alpha^{1}}+\sum_{t=2}^{T}\left(\frac{\mathbb{E}\left[\|w^{t}-w^{*}\|^{2}_{D^{t}}\right]}{2\alpha^{t}}-\frac{\mathbb{E}\left[\|w^{t}-w^{*}\|^{2}_{D^{t-1}}\right]}{2\alpha^{t-1}}\right)$
$\displaystyle\quad+\sum_{t=1}^{T}\frac{\alpha^{t}}{2}\mathbb{E}\left[\left\langle
g^{t},\frac{g^{t}}{D^{t}}\right\rangle\right]+\sum_{t=1}^{T}\frac{\alpha^{t}}{2}\mathbb{E}\left[\|N^{t}\|^{2}_{D^{t}}\right].$
(15)
Hence, we need to bound the RHS:
$\displaystyle\frac{\left\|w^{1}-w^{*}\right\|^{2}_{D^{1}}}{2\alpha^{2}}+\underbrace{\sum_{t=2}^{T}\left(\frac{\mathbb{E}\left[\|w^{t}-w^{*}\|^{2}_{D^{t}}\right]}{2\alpha^{t}}-\frac{\mathbb{E}\left[\|w^{t}-w^{*}\|^{2}_{D^{t-1}}\right]}{2\alpha^{t-1}}\right)}_{T_{1}}$
$\displaystyle+\underbrace{\sum_{t=1}^{T}\frac{\alpha^{t}}{2}\mathbb{E}\left[\left\langle
g^{t},\frac{g^{t}}{D^{t}}\right\rangle\right]}_{T_{2}}+\sum_{t=1}^{T}\frac{\alpha^{t}}{2}\mathbb{E}\left[\|N^{t}\|^{2}_{D^{t}}\right],$
(16)
where the vector $D^{t}\in\mathbb{R}^{d}$ satisfies that $D^{t}=\mathbf{1}$
when running private SGD steps, and $D^{t}=\sqrt{v}+\epsilon$ when running
private RMSProp steps.
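For concreteness, the alternating structure analyzed below can be sketched in a few lines. This is a simplified, illustrative rendering under the assumptions of this section, not the exact Algorithm 1: clipping is omitted, `grad_fn` is assumed to return an already-clipped gradient, and the $v$ update is taken to be $v\leftarrow\beta v+(1-\beta)(G/s)^{2}$ as suggested by Eq. (30); `noise_std` plays the role of $\sigma C/b$.

```python
import numpy as np

def dp2_sketch(grad_fn, w0, T, s, alpha, noise_std, beta=0.9, eps=0.1, seed=0):
    """Alternate s DP-SGD steps (D = 1) with s delayed-RMSProp steps (D = sqrt(v) + eps)."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    d = w.size
    G = np.zeros(d)                 # accumulator of historical noisy gradients
    v = np.zeros(d)                 # delayed second-moment estimate
    D = np.ones(d)                  # preconditioner: all-ones during SGD phases
    for t in range(1, T + 1):
        g = grad_fn(w)
        N = rng.normal(0.0, noise_std, size=d)
        G += g + N                  # noisy gradients feed the next preconditioner
        w -= (alpha / np.sqrt(t)) * (g / D + N)
        if t % s == 0:              # phase boundary every s steps
            v = beta * v + (1 - beta) * (G / s) ** 2
            in_rmsprop_phase = (t // s) % 2 == 1
            D = np.sqrt(v) + eps if in_rmsprop_phase else np.ones(d)
            G = np.zeros(d)
    return w

# toy quadratic F(w) = 0.5 * ||w||^2, so grad F(w) = w
w_final = dp2_sketch(lambda w: w.copy(), [5.0, -5.0], T=400, s=50,
                     alpha=0.3, noise_std=0.01)
print(np.linalg.norm(w_final))
```

On this toy quadratic the iterates contract toward the optimum in both phases, which is the behavior the telescoping argument below quantifies.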
Let the delay parameter be scheduled as
$\displaystyle s=\upsilon T~{}(0<\upsilon<1)$ (17)
and the learning rate $\alpha^{t}$ be
$\displaystyle\alpha^{t}\leftarrow\frac{\alpha^{\left\lfloor\frac{t}{2s}\right\rfloor+\left\lfloor\frac{t+s}{2s}\right\rfloor+1}}{\sqrt{t}},$
(18)
where $\alpha=\min\left\\{\epsilon,\frac{1}{\sqrt{M}+\epsilon},1\right\\}$,
and $M$ is the upper bound of $\mathbb{E}\left[v_{j}\right]$ for $j\in[d]$, as
defined and proved in Lemma 1.
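Two properties of this schedule are used repeatedly below: it is non-increasing in $t$ (cases 1 and 2), and at every switch into a private RMSProp phase the extra factor of $\alpha\leq\epsilon$ gives $\alpha^{t}/\epsilon\leq\alpha^{t-1}$ (case 3). A quick numeric check, with illustrative values of $s$, $\alpha$, and $\epsilon$:

```python
import math

def lr(t, s, alpha):
    """Eq. (18): alpha^(floor(t/(2s)) + floor((t+s)/(2s)) + 1) / sqrt(t)."""
    exponent = t // (2 * s) + (t + s) // (2 * s) + 1
    return alpha ** exponent / math.sqrt(t)

s, alpha, eps = 100, 0.5, 0.5                  # the proof requires alpha <= eps
rates = [lr(t, s, alpha) for t in range(1, 10 * s + 1)]
# the exponent increments exactly at multiples of s, so the schedule never increases
assert all(a >= b for a, b in zip(rates, rates[1:]))
# at phase boundaries (t % s == 0), lr(t) / eps <= lr(t - 1)
assert all(lr(t, s, alpha) / eps <= lr(t - 1, s, alpha)
           for t in range(s, 10 * s + 1, s))
print("schedule checks passed")
```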
We next consider the $T_{1}$ term. There are four cases.
1. 1.
DP-SGD at the $(t-1)$-th iteration, and DP-SGD at the $t$-th iteration: As
$D^{t}=D^{t-1}$, there is no requirement other than that the learning rates
satisfy $\alpha^{t}\leq\alpha^{t-1}$, which holds for our choice.
2. 2.
Private RMSProp at the $t-1$-th iteration, and private RMSProp at the $t$-th
iteration: Similar to previous case, the learning rates need to satisfy
$\alpha^{t}\leq\alpha^{t-1}$, which holds for our choice.
3. 3.
DP-SGD at the $t-1$-th iteration, and private RMSProp at the $t$-th iteration:
We require
$\displaystyle\frac{\alpha^{t}}{\epsilon}\leq\alpha^{t-1}\implies\frac{\sqrt{v^{t}}+\epsilon}{\alpha^{t}}\geq\frac{1}{\alpha^{t-1}}$
(19)
But in this case we must have $t\bmod s=0$. So this is satisfied by our choice
as long as $\alpha\leq\epsilon$.
4. 4.
Private RMSProp at the $(t-1)$-th iteration, and DP-SGD at the $t$-th iteration: this transition occurs only at pattern boundaries and is handled by the per-pattern telescoping below.
The first three cases form an updating pattern of DP-SGD $\to$ $\cdots$ $\to$
DP-SGD $\to$ DP-RMSProp$\to$ $\cdots$ $\to$ DP-RMSProp, where every pattern
takes $2s$ iterations, except for the first pattern, because the telescope sum
starts from $t=2$. For the first pattern, we have
$\displaystyle\frac{\left\|w^{1}-w^{*}\right\|^{2}_{D^{1}}}{2\alpha^{2}}+\sum_{t=2}^{2s}\left(\frac{\mathbb{E}\left[\|w^{t}-w^{*}\|^{2}_{D^{t}}\right]}{2\alpha^{t}}-\frac{\mathbb{E}\left[\|w^{t}-w^{*}\|^{2}_{D^{t-1}}\right]}{2\alpha^{t-1}}\right)$
(20)
$\displaystyle=\frac{\left\|w^{1}-w^{*}\right\|^{2}_{D^{1}}}{2\alpha^{2}}+\sum_{t=2}^{2s}\left(\mathbb{E}\left[\left\|w^{t}-w^{*}\right\|^{2}_{\frac{D^{t}}{2\alpha^{t}}-\frac{D^{t-1}}{2\alpha^{t-1}}}\right]\right)$
(21)
$\displaystyle\leq\frac{\left\|w^{1}-w^{*}\right\|^{2}_{D^{1}}}{2\alpha^{2}}+R^{2}\sum_{t=2}^{2s}\left(\frac{\mathbb{E}\left[\|D^{t}\|_{1}\right]}{2\alpha^{t}}-\frac{\mathbb{E}\left[\|D^{t-1}\|_{1}\right]}{2\alpha^{t-1}}\right)\leq\frac{R^{2}}{2\alpha^{2s}}\mathbb{E}\left[\|D^{2s}\|_{1}\right],$
(22)
where $D^{2s}=\sqrt{v}+\epsilon$.
For $k\geq 1$, we have
$\displaystyle\sum_{t=2sk+1}^{2sk+2s}\left(\frac{\mathbb{E}\left[\|w^{t}-w^{*}\|^{2}_{D^{t}}\right]}{2\alpha^{t}}-\frac{\mathbb{E}\left[\|w^{t}-w^{*}\|^{2}_{D^{t-1}}\right]}{2\alpha^{t-1}}\right)$
$\displaystyle=\frac{\mathbb{E}\left[\|w^{2sk+1}-w^{*}\|^{2}_{D^{2sk+1}}\right]}{2\alpha^{2sk+1}}-\frac{\mathbb{E}\left[\|w^{2sk+1}-w^{*}\|^{2}_{D^{2sk}}\right]}{2\alpha^{2sk}}+\sum_{t=2sk+2}^{2sk+2s}\left(\mathbb{E}\left[\left\|w^{t}-w^{*}\right\|^{2}_{\frac{D^{t}}{2\alpha^{t}}-\frac{D^{t-1}}{2\alpha^{t-1}}}\right]\right)$
$\displaystyle\leq\frac{\mathbb{E}\left[\|w^{2sk+1}-w^{*}\|^{2}_{D^{2sk+1}}\right]}{2\alpha^{2sk+1}}-\frac{\mathbb{E}\left[\|w^{2sk+1}-w^{*}\|^{2}_{D^{2sk}}\right]}{2\alpha^{2sk}}+R^{2}\left(\frac{\mathbb{E}[\|D^{2sk+2s}\|_{1}]}{2\alpha^{2sk+2s}}-\frac{\mathbb{E}[\|D^{2sk+1}\|_{1}]}{2\alpha^{2sk+1}}\right)$
$\displaystyle\leq\frac{\mathbb{E}\left[\|w^{2sk+1}-w^{*}\|^{2}_{D^{2sk+1}}\right]}{2\alpha^{2sk+1}}+R^{2}\left(\frac{\mathbb{E}[\|D^{2sk+2s}\|_{1}]}{2\alpha^{2sk+2s}}-\frac{\mathbb{E}[\|D^{2sk+1}\|_{1}]}{2\alpha^{2sk+1}}\right)$
$\displaystyle\leq\frac{R^{2}}{2\alpha^{2sk+2s}}\mathbb{E}\left[\|D^{2sk+2s}\|_{1}\right],$
(23)
where $D^{2sk+2s}=\sqrt{v}+\epsilon$ corresponds to a DP-RMSProp update.
We now turn to the $T_{2}$ term, and prove by induction that there exists a
constant $\kappa$ such that
$\displaystyle\sum_{t=1}^{T}\frac{\alpha^{t}}{2}\mathbb{E}\left[\left\langle
g^{t},\frac{g^{t}}{D^{t}}\right\rangle\right]\leq\frac{\kappa}{\alpha^{T}}\mathbb{E}\left[\|D^{T}\|_{1}\right].$
(24)
When $T=1$ ($\alpha^{1}=\alpha$ and $D^{1}=\mathbf{1}$),
$\frac{\alpha}{2}\mathbb{E}[\|g^{1}\|^{2}]\leq\frac{\kappa{d}}{\alpha}$ holds
if $\kappa\geq\alpha^{2}C^{2}$. At each step $t$, the goal is to get
$\displaystyle\frac{\kappa}{\alpha^{t-1}}\mathbb{E}\left[\|D^{t-1}\|_{1}\right]+\frac{\alpha^{t}}{2}\mathbb{E}\left[\left\langle
g^{t},\frac{g^{t}}{D^{t}}\right\rangle\right]\leq\frac{\kappa}{\alpha^{t}}\mathbb{E}\left[\|D^{t}\|_{1}\right]$
(25)
1. 1.
DP-SGD at the $t-1$-th iteration, and DP-SGD at the $t$-th iteration: We
require
$\displaystyle\frac{\kappa{d}}{\alpha^{t-1}}+\frac{\alpha^{t}}{2}\mathbb{E}\left[\left\|g^{t}\right\|^{2}\right]\leq\frac{\kappa{d}}{\alpha^{t}}$
(26)
which holds for our choice of $\alpha^{t}$, as gradients are bounded and
$\kappa\geq\alpha^{2}C^{2}$.
2. 2.
Private RMSProp at the $t-1$-th iteration, and private RMSProp at the $t$-th
iteration:
We need
$\displaystyle\frac{\kappa\mathbb{E}\left[\|\sqrt{v^{t-1}}+\epsilon\|_{1}\right]}{\alpha^{t-1}}+\frac{\alpha^{t}}{2}\mathbb{E}\left[\left\langle
g^{t},\frac{g^{t}}{\sqrt{v^{t-1}}+\epsilon}\right\rangle\right]$
$\displaystyle\leq\frac{\kappa}{\alpha^{t}}\mathbb{E}\left[\|\sqrt{v^{t}}+\epsilon\|_{1}\right],$
(27) $\displaystyle\frac{\alpha^{t}}{2}\mathbb{E}\left[\left\langle
g^{t},\frac{g^{t}}{\sqrt{v^{t-1}}+\epsilon}\right\rangle\right]$
$\displaystyle\leq\left(\frac{\kappa}{\alpha^{t}}-\frac{\kappa}{\alpha^{t-1}}\right)\mathbb{E}\left[\left\|\sqrt{v^{t-1}}+\epsilon\right\|_{1}\right].$
(28)
Let
$\displaystyle
h(s)\geq\max_{t\in[T]}\left\\{\frac{\mathbb{E}\left[\|g^{t}\|_{1}\right]}{\mathbb{E}\left[\left\|\frac{1}{s}\left|G^{\lfloor\frac{t}{s}\rfloor
s}\right|+\epsilon\right\|_{1}\right]}\right\\}.$ (29)
Based on our updating rule,
$\displaystyle\mathbb{E}\left[\left\|\sqrt{v^{t}}+\epsilon\right\|_{1}\right]\geq\sqrt{1-\beta}\
\mathbb{E}\left[\left\|\frac{1}{s}\left|G^{\lfloor\frac{t}{s}\rfloor
s}\right|+\epsilon\right\|_{1}\right].$ (30)
Note that
$\displaystyle\frac{\alpha^{t}}{2}\mathbb{E}\left[\left\langle
g^{t},\frac{g^{t}}{\sqrt{v^{t-1}}+\epsilon}\right\rangle\right]\leq\frac{\alpha^{t}}{2}\mathbb{E}\left[\frac{\|g^{t}\|^{2}}{\epsilon}\right]\leq\frac{\alpha^{t}C}{2\epsilon}\mathbb{E}[\|g^{t}\|]\leq\frac{\alpha^{t}C}{2\epsilon}\mathbb{E}[\|g^{t}\|_{1}],$
(31)
where we have used the assumption that $\|g^{t}\|\leq C$. Combining the above
two,
$\displaystyle\frac{\alpha^{t}C}{2\epsilon}\mathbb{E}[\|g^{t}\|]$
$\displaystyle\leq\frac{\alpha^{t}C}{2\epsilon}h(s)\mathbb{E}\left[\left\|\frac{1}{s}\left|G^{\lfloor\frac{t}{s}\rfloor
s}\right|+\epsilon\right\|_{1}\right]$ (32)
$\displaystyle\leq\frac{\alpha^{t}C}{2\epsilon}\frac{h(s)}{\sqrt{1-\beta}}\mathbb{E}\left[\left\|\sqrt{v^{t-1}}+\epsilon\right\|_{1}\right]$
(33)
$\displaystyle\leq\kappa\left(\frac{1}{\alpha^{t}}-\frac{1}{\alpha^{t-1}}\right)\mathbb{E}\left[\left\|\sqrt{v^{t-1}}+\epsilon\right\|_{1}\right].$
(34)
This implies the condition holds as long as $\kappa$ satisfies
$\displaystyle\kappa\geq\frac{Ch(s)}{\epsilon\sqrt{1-\beta}}.$ (35)
3. 3.
DP-SGD at the $t-1$-th iteration, and private RMSProp at the $t$-th iteration.
We want to prove
$\displaystyle\frac{\kappa{d}}{\alpha^{t-1}}+\frac{\alpha^{t}}{2}\mathbb{E}\left[\left\langle
g^{t},\frac{g^{t}}{D^{t}}\right\rangle\right]\leq\frac{\kappa}{\alpha^{t}}\mathbb{E}\left[\left\|D^{t}\right\|_{1}\right].$
(36)
As $\left\|g^{t}\right\|\leq C$, it holds that
$\displaystyle\frac{\alpha^{t}}{2}\mathbb{E}\left[\left\langle
g^{t},\frac{g^{t}}{\sqrt{v^{t}}+\epsilon}\right\rangle\right]\leq\frac{\alpha^{t}}{2\epsilon}\mathbb{E}[\|g^{t}\|^{2}]\leq\frac{\alpha^{t}C}{2\epsilon}\mathbb{E}[\|g^{t}\|]\leq\frac{\alpha^{t}C}{2\epsilon}\mathbb{E}[\|g^{t}\|_{1}].$
(37)
Therefore,
$\displaystyle\frac{\alpha^{t}}{2}\mathbb{E}\left[\left\langle
g^{t},\frac{g^{t}}{\sqrt{v^{t}}+\epsilon}\right\rangle\right]$
$\displaystyle\leq\frac{Ch(s)}{2\epsilon\sqrt{1-\beta}}\alpha^{t}\mathbb{E}\left[\left\|\sqrt{v^{t}}+\epsilon\right\|_{1}\right].$
(38)
Based on our learning rate set in Eq. (18),
$\displaystyle\sqrt{t}\,\alpha^{t}$
$\displaystyle=\alpha\sqrt{t-1}\,\alpha^{t-1}\leq\epsilon\sqrt{t-1}\,\alpha^{t-1}$ (39)
$\displaystyle\implies\frac{\alpha^{t}}{2}$
$\displaystyle\leq\frac{1}{\alpha^{t}}-\frac{1}{\alpha^{t-1}\epsilon}\leq\frac{1}{\alpha^{t}}-\frac{{d}}{\alpha^{t-1}\mathbb{E}\left[\left\|D^{t}\right\|_{1}\right]}.$
(40)
Hence,
$\displaystyle\frac{Ch(s)}{2\epsilon\sqrt{1-\beta}}\alpha^{t}\mathbb{E}\left[\left\|\sqrt{v^{t}}+\epsilon\right\|_{1}\right]$
$\displaystyle\leq\frac{Ch(s)}{\epsilon\sqrt{1-\beta}}\mathbb{E}\left[\left\|D^{t}\right\|_{1}\right]\left(\frac{1}{\alpha^{t}}-\frac{{d}}{\alpha^{t-1}\mathbb{E}\left[\left\|D^{t}\right\|_{1}\right]}\right)$
(41)
$\displaystyle\leq\kappa\left(\frac{\mathbb{E}[\|D^{t}\|_{1}]}{\alpha^{t}}-\frac{{d}}{\alpha^{t-1}}\right),$
(42)
where we require
$\displaystyle\kappa\geq\frac{Ch(s)}{\epsilon\sqrt{1-\beta}}.$ (43)
4. 4.
Private RMSProp at the $t-1$-th iteration, and DP-SGD at the $t$-th iteration.
We need
$\displaystyle\frac{\kappa}{\alpha^{t-1}}\mathbb{E}\left[\left\|\sqrt{v^{t-1}}+\epsilon\right\|_{1}\right]+\frac{\alpha^{t}}{2}\mathbb{E}\left[\|g^{t}\|^{2}\right]\leq\frac{\kappa{d}}{\alpha^{t}}.$
(44)
Plugging in $\mathbb{E}\left[\|\sqrt{v^{t-1}}\|_{1}\right]\leq d\sqrt{M}$ (Lemma
1) and $\|g^{t}\|^{2}\leq C^{2}$, we have
$\displaystyle\frac{\kappa}{\alpha^{t-1}}\mathbb{E}\left[\|\sqrt{v^{t-1}}+\epsilon\|_{1}\right]+\frac{\alpha^{t}}{2}\mathbb{E}\left[\|g^{t}\|^{2}\right]\leq\frac{\kappa}{\alpha^{t-1}}\left(d\sqrt{M}+{d}\right)+\frac{\alpha^{t}}{2}C^{2}.$
(45)
Based on our learning rate set in Eq. (18), for some constant $\gamma$,
$\displaystyle\alpha^{t-1}$
$\displaystyle=\frac{\gamma}{\sqrt{t-1}},~{}\alpha^{t}\leq\frac{\gamma}{\sqrt{t}(\sqrt{M}+1)}$
(46) $\displaystyle\implies\frac{\alpha^{t}}{2}$
$\displaystyle\leq\frac{{1}}{\alpha^{t}}-\frac{\sqrt{M}+{1}}{\alpha^{t-1}}\leq\frac{{d}}{\alpha^{t}}-\frac{d\sqrt{M}+{d}}{\alpha^{t-1}}.$
(47)
Therefore
$\displaystyle\frac{\alpha^{t}}{2}C^{2}\leq\kappa\left(\frac{{d}}{\alpha^{t}}-\frac{d\sqrt{M}+{d}}{\alpha^{t-1}}\right)$
(48)
holds as long as $\kappa\geq\alpha^{2}C^{2}$. To sum up, the requirement on
$\kappa$ is
$\displaystyle\kappa\geq\max\left\\{\alpha^{2}C^{2},\frac{Ch(s)}{\epsilon\sqrt{1-\beta}}\right\\}.$
(49)
Final convergence results:
$\displaystyle\min_{t\in[T]}\mathbb{E}\left[F(w^{t})\right]-F(w^{*})$ (50)
$\displaystyle\leq\frac{R^{2}+\kappa}{\alpha^{\left\lfloor\frac{1}{2\upsilon}\right\rfloor+\left\lfloor\frac{1+\upsilon}{2\upsilon}\right\rfloor}}\frac{1}{\sqrt{T}}\sum_{t\in
T_{\upsilon}}\mathbb{E}\left[\left\|D^{t}\right\|_{1}\right]+\frac{1}{T}\sum_{t=1}^{T}\frac{\alpha^{\left\lfloor\frac{t}{2\upsilon
T}\right\rfloor+\left\lfloor\frac{t+\upsilon T}{2\upsilon
T}\right\rfloor}}{\sqrt{t}}\mathbb{E}[\|N^{t}\|^{2}_{D^{t}}],$ (51)
where $T_{\upsilon}$ denotes the iteration indices where we switch from private
RMSProp steps to private SGD steps plus the last iteration, and its
cardinality is $|T_{\upsilon}|=\lceil\frac{1}{2\upsilon}\rceil$, and
$\kappa\geq\max\left\\{\alpha^{2}C^{2},\frac{Ch(s)}{\epsilon\sqrt{1-\beta}}\right\\}$,
$\alpha=\min\left\\{\epsilon,\frac{1}{\sqrt{M}+\epsilon},1\right\\}$.
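To read off a rate, one can bound each factor: Lemma 1 and Eq. (7) give $\mathbb{E}[\|D^{t}\|_{1}]\leq d\max\{\sqrt{M}+\epsilon,1\}$, the noise term satisfies $\mathbb{E}[\|N^{t}\|^{2}_{D^{t}}]\leq d\max\{\sqrt{M}+\epsilon,1\}\,\sigma^{2}C^{2}/b^{2}$, and $\sum_{t=1}^{T}t^{-1/2}\leq 2\sqrt{T}$. Treating $\upsilon$, $\alpha$, $\kappa$, $R$, and $d$ as constants, a rough simplification (our reading of Eq. (51), not a statement from the original bound) is

```latex
\min_{t\in[T]}\mathbb{E}\left[F(w^{t})\right]-F(w^{*})
\;\lesssim\;
\left(\frac{(R^{2}+\kappa)\,|T_{\upsilon}|}
{\alpha^{\lfloor\frac{1}{2\upsilon}\rfloor+\lfloor\frac{1+\upsilon}{2\upsilon}\rfloor}}
+\frac{2\alpha\,\sigma^{2}C^{2}}{b^{2}}\right)
\frac{d\max\{\sqrt{M}+\epsilon,1\}}{\sqrt{T}}
\;=\;O\!\left(\frac{1}{\sqrt{T}}\right),
```

i.e., both the optimization term and the accumulated privacy-noise term decay at the $1/\sqrt{T}$ rate.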
### A.2 A closer look at $h(s)$
We closely examine $h(s)$, defined as
$\displaystyle
h(s)\geq\max_{t\in[T]}\left\\{\frac{\mathbb{E}\left[\|g^{t}\|_{1}\right]}{\mathbb{E}\left[\left\|\frac{1}{s}\left|G^{\lfloor\frac{t}{s}\rfloor
s}\right|+\epsilon\right\|_{1}\right]}\right\\}.$ (52)
Let us assume that mini-batch gradients at consecutive time steps are not very
different, i.e., $\|g^{t}-g^{t-1}\|_{1}\leq M$ (with slight abuse of notation,
this $M$ is a drift bound, distinct from the bound in Lemma 1). This means
consecutive gradients cannot drift too far from one another, which can be used
to show the dependence of $h(s)$ on the delay parameter $s$. Denote the gap between the
current iteration $t$ and the iteration where $v$ gets updated as $k$, i.e.,
$k:=t-\lfloor\frac{t}{s}\rfloor s$. Hence,
$\displaystyle\frac{\left\|g^{t}\right\|_{1}}{\left\|\frac{1}{s}\left({g}^{t-k-1}+\cdots+{g}^{t-k-s}\right)+\frac{1}{s}\left(N^{t-k-1}+\cdots+N^{t-k-s}\right)\right\|_{1}+d\epsilon}$
(53)
$\displaystyle=\frac{\left\|g^{t}-\frac{1}{s}\left({g}^{t-k-1}+\cdots+{g}^{t-k-s}\right)+\frac{1}{s}\left({g}^{t-k-1}+\cdots+{g}^{t-k-s}\right)\right\|_{1}}{\left\|\frac{1}{s}\left({g}^{t-k-1}+\cdots+{g}^{t-k-s}\right)+\frac{1}{s}\left(N^{t-k-1}+\cdots+N^{t-k-s}\right)\right\|_{1}+d\epsilon}$
(54)
$\displaystyle=\frac{\left\|\frac{1}{s}\left((g^{t}-{g}^{t-k-1})+\cdots+(g^{t}-{g}^{t-k-s})\right)+\frac{1}{s}\left({g}^{t-k-1}+\cdots+{g}^{t-k-s}\right)\right\|_{1}}{\left\|\frac{1}{s}\left({g}^{t-k-1}+\cdots+{g}^{t-k-s}\right)+\frac{1}{s}\left(N^{t-k-1}+\cdots+N^{t-k-s}\right)\right\|_{1}+d\epsilon}$
(55)
$\displaystyle\leq\frac{\left\|\frac{1}{s}\left({g}^{t-k-1}+\cdots+{g}^{t-k-s}\right)\right\|_{1}}{\left\|\frac{1}{s}\left({g}^{t-k-1}+\cdots+{g}^{t-k-s}\right)+\frac{1}{s}\left(N^{t-k-1}+\cdots+N^{t-k-s}\right)\right\|_{1}+d\epsilon}+\frac{\frac{1}{s}(sM+\cdots+(2s)M)}{d\epsilon}$
(56)
Denote $a:=\frac{1}{s}\left(N^{t-k-1}+\cdots+N^{t-k-s}\right)$, and
$b:=\frac{1}{s}\left({g}^{t-k-1}+\cdots+{g}^{t-k-s}\right)$. Then
$\displaystyle h(s)$
$\displaystyle\leq\frac{\mathbb{E}[\left\|b\right\|_{1}]}{\mathbb{E}[\|a+b\|_{1}]+d\epsilon}+\frac{sM}{d\epsilon}$
(57)
$\displaystyle\leq\frac{1}{\left|\frac{\mathbb{E}[\|a\|_{1}]}{\mathbb{E}[\|b\|_{1}]}-1\right|+\frac{d\epsilon}{\mathbb{E}[\|b\|_{1}]}}+\frac{sM}{d\epsilon}$
(58)
In the special case where gradients are sparse, i.e.,
$\mathbb{E}[\|b\|_{1}]<\mathbb{E}[\|a\|_{1}]$, we have
$\displaystyle h(s)$
$\displaystyle\leq\frac{1}{\frac{\mathbb{E}[\|a\|_{1}]}{\mathbb{E}[\|b\|_{1}]}+\frac{d\epsilon}{\mathbb{E}[\|b\|_{1}]}-1}+\frac{sM}{d\epsilon}$
(59)
It is easy to see that the RHS is $O\left(s\right)$ and increases with $s$.
We can informally express it as $c_{1}s+c_{2}$, where $c_{1}$ and $c_{2}$ are
two constants.
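As a quick sanity check, the bound in (59) can be evaluated numerically to confirm its linear growth in $s$; the constants below ($\mathbb{E}[\|a\|_{1}]$, $\mathbb{E}[\|b\|_{1}]$, $d$, $\epsilon$, $M$) are arbitrary illustrative values, not quantities from our experiments.

```python
def h_upper_bound(s, Ea, Eb, d, eps, M):
    # RHS of Eq. (59): 1 / (E||a||/E||b|| + d*eps/E||b|| - 1) + s*M/(d*eps)
    return 1.0 / (Ea / Eb + d * eps / Eb - 1.0) + s * M / (d * eps)

# arbitrary illustrative constants
Ea, Eb, d, eps, M = 4.0, 1.0, 100, 1e-2, 0.5
vals = [h_upper_bound(s, Ea, Eb, d, eps, M) for s in (1, 2, 3, 4)]
gaps = [b - a for a, b in zip(vals, vals[1:])]
# consecutive gaps are identical: the bound is exactly c1*s + c2
# with slope c1 = M / (d * eps)
```

Here the constant gap between consecutive values makes the informal claim $c_{1}s+c_{2}$ concrete, with $c_{1}=M/(d\epsilon)$.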
## Appendix B Proof of Theorem 3
First we introduce a result that will be used in this section. Under the
bounded stochastic gradient variance assumption (Assumption 4), we have that
conditioned on $w^{t}$,
$\displaystyle\mathbb{E}_{t}\left[\|g^{t}\|^{2}\right]\leq\frac{\tau^{2}}{b}+\|\nabla
F(w^{t})\|^{2},$ (60)
where $b$ refers to the mini-batch size to obtain gradient $g^{t}$, i.e.,
$g^{t}\leftarrow\frac{1}{b}\sum_{i\in B}g^{i,t}$. This lemma is proved in
Zaheer et al. (2018). The per-coordinate version of this result is that for
$j\in[d]$,
$\displaystyle\mathbb{E}_{t}\left[(g_{j}^{t})^{2}\right]\leq\frac{\tau_{j}^{2}}{b}+\left(\nabla_{j}F(w^{t})\right)^{2},$
(61)
and $\sum_{j\in[d]}\tau_{j}^{2}=\tau^{2}$.
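A small Monte Carlo sketch illustrates why averaging $b$ per-example gradients with total variance $\tau^{2}$ gives $\mathbb{E}_{t}[\|g^{t}\|^{2}]\leq\frac{\tau^{2}}{b}+\|\nabla F(w^{t})\|^{2}$; the synthetic gradients and all constants below are illustrative, not from our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, b, tau, trials = 10, 8, 2.0, 20000
grad = rng.normal(size=d)                    # stands in for nabla F(w^t)
acc = 0.0
for _ in range(trials):
    # per-example noise with total variance tau^2 spread over d coordinates
    noise = rng.normal(scale=tau / np.sqrt(d), size=(b, d))
    g = (grad + noise).mean(axis=0)          # mini-batch gradient
    acc += float((g ** 2).sum())
estimate = acc / trials
bound = tau ** 2 / b + float((grad ** 2).sum())
# for unbiased per-example gradients the lemma holds with equality,
# so the Monte Carlo estimate lands on the bound
```

The averaging divides the variance contribution by $b$ while leaving the squared mean intact, which is exactly the structure of the bound.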
As we assume $F(w)$ is $L$-smooth, at each iteration $t$,
$\displaystyle F(w^{t+1})\leq F(w^{t})+\langle\nabla
F(w^{t}),w^{t+1}-w^{t}\rangle+\frac{L}{2}\left\|w^{t+1}-w^{t}\right\|^{2}.$
(62)
Based on the updating rule of Algorithm 1, we have
$\displaystyle F(w^{t+1})$ $\displaystyle\leq F(w^{t})+\langle\nabla
F(w^{t}),w^{t+1}-w^{t}\rangle+\frac{L}{2}\left\|w^{t+1}-w^{t}\right\|^{2}$
(63) $\displaystyle=F(w^{t})-\alpha^{t}\left\langle\nabla
F(w^{t}),\frac{g^{t}}{D^{t}}+N^{t}\right\rangle+\frac{(\alpha^{t})^{2}L}{2}\left\|\frac{g^{t}}{D^{t}}+N^{t}\right\|^{2},$
(64)
where $N\in\mathbb{R}^{d}$ and
$N_{j}\sim\mathcal{N}\left(0,\frac{\sigma^{2}C^{2}}{b^{2}}\right)$ with noise
multiplier $\sigma$ and clipping threshold $C$, and $D^{t}$ satisfies that
$\displaystyle D^{t}\leftarrow\begin{cases}\mathbf{1}&\text{if }t\
\mathrm{mod}\ 2s\leq s,\\\ \sqrt{v}+\epsilon&\text{otherwise.}\end{cases}$
(65)
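The alternating schedule in (65) is straightforward to implement; the sketch below is a minimal illustration of the switching rule, not the code used in our experiments.

```python
import numpy as np

def preconditioner(t, s, v, eps):
    # Eq. (65): plain DP-SGD phases use D = 1, private-RMSProp
    # phases use the delayed moment estimate D = sqrt(v) + eps
    if t % (2 * s) <= s:
        return np.ones_like(v)
    return np.sqrt(v) + eps
```

Each window of $2s$ steps thus spends its first half running DP-SGD (accumulating private gradients for $v$) and its second half running private RMSProp with the delayed preconditioner.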
Taking expectation with respect to the samples at the $t$-th iteration and the noise $N^{t}$,
$\displaystyle\mathbb{E}_{t}[F(w^{t+1})]$ $\displaystyle\leq
F(w^{t})-\alpha^{t}\left\langle\nabla F(w^{t}),\frac{\nabla
F(w^{t})}{D^{t}}\right\rangle+\frac{(\alpha^{t})^{2}L}{2}\mathbb{E}_{t}\left[\left\|\frac{g^{t}}{D^{t}}\right\|^{2}\right]+\frac{d(\alpha^{t})^{2}L}{2b^{2}}\sigma^{2}C^{2}$
$\displaystyle=F(w^{t})-\alpha^{t}\sum_{j\in[d]}\frac{(\nabla_{j}F(w^{t}))^{2}}{D_{j}^{t}}+\frac{(\alpha^{t})^{2}L}{2}\sum_{j\in[d]}\frac{\mathbb{E}_{t}\left[(g_{j}^{t})^{2}\right]}{(D_{j}^{t})^{2}}+\frac{d(\alpha^{t})^{2}L}{2b^{2}}\sigma^{2}C^{2},$
(66)
where we have used the fact that $N^{t}$ is a zero-mean random variable
independent of $w^{t}$, and $D^{t}$ is independent of samples at time $t$. We
need to consider two cases.
1.
DP-SGD at the $t$-th iteration
In this case, $D^{t}=\mathbf{1}$. Hence plugging in
$\displaystyle\mathbb{E}_{t}\left[(g_{j}^{t})^{2}\right]\leq\frac{\tau_{j}^{2}}{b}+\left(\nabla_{j}F(w^{t})\right)^{2},$
(67)
we have
$\displaystyle\mathbb{E}_{t}\left[F(w^{t+1})\right]\leq
F(w^{t})-\left(\alpha^{t}-\frac{(\alpha^{t})^{2}L}{2}\right)\left\|\nabla
F(w^{t})\right\|^{2}+(\alpha^{t})^{2}L\left(\frac{\tau^{2}}{2b}+\frac{\sigma^{2}C^{2}d}{2b^{2}}\right).$
(68)
Under a constant learning rate, letting $\alpha^{t}=\alpha\leq\frac{1}{L}$,
$\displaystyle\mathbb{E}_{t}\left[F(w^{t+1})\right]\leq
F(w^{t})-\frac{\alpha}{2}\|\nabla
F(w^{t})\|^{2}+(\alpha^{t})^{2}L\left(\frac{\tau^{2}}{2b}+\frac{\sigma^{2}C^{2}d}{2b^{2}}\right).$
(69)
Taking expectation on both sides gives
$\displaystyle\frac{\alpha}{2}\mathbb{E}\left[\|\nabla
F(w^{t})\|_{2}^{2}\right]\leq\mathbb{E}[F(w^{t})]-\mathbb{E}[F(w^{t+1})]+(\alpha^{t})^{2}L\left(\frac{\tau^{2}}{2b}+\frac{\sigma^{2}C^{2}d}{2b^{2}}\right).$
(70)
2.
Private RMSProp at the $t$-th iteration
We have
$\displaystyle\mathbb{E}_{t}[F(w^{t+1})]\leq
F(w^{t})-\alpha^{t}\sum_{j\in[d]}\frac{[\nabla
F(w^{t})]^{2}_{j}}{\sqrt{v^{t}_{j}}+\epsilon}+\frac{(\alpha^{t})^{2}L}{2\epsilon}\sum_{j\in[d]}\frac{\mathbb{E}_{t}[(g_{j}^{t})^{2}]}{\sqrt{v_{j}^{t}}+\epsilon}+\frac{d(\alpha^{t})^{2}L\sigma^{2}C^{2}}{2b^{2}}.$
(71)
Plugging in
$\mathbb{E}_{t}\left[(g_{j}^{t})^{2}\right]\leq\frac{\tau_{j}^{2}}{b}+\left(\nabla_{j}F(w^{t})\right)^{2}$
results in
$\displaystyle\mathbb{E}_{t}[F(w^{t+1})]$ (72) $\displaystyle\leq
F(w^{t})-\alpha^{t}\sum_{j\in[d]}\frac{[\nabla
F(w^{t})]^{2}_{j}}{\sqrt{v^{t}_{j}}+\epsilon}+\frac{(\alpha^{t})^{2}L}{2\epsilon}\sum_{j\in[d]}\frac{\tau_{j}^{2}}{\left(\sqrt{v_{j}^{t}}+\epsilon\right)b}$
$\displaystyle\quad+\frac{(\alpha^{t})^{2}L}{2\epsilon}\sum_{j\in[d]}\frac{[\nabla
F(w^{t})]_{j}^{2}}{\sqrt{v_{j}^{t}}+\epsilon}+\frac{d(\alpha^{t})^{2}L\sigma^{2}C^{2}}{2b^{2}}$
(73)
$\displaystyle=F(w^{t})-\left(\alpha^{t}-\frac{(\alpha^{t})^{2}L}{2\epsilon}\right)\sum_{j\in[d]}\frac{[\nabla
F(w^{t})]^{2}_{j}}{\sqrt{v^{t}_{j}}+\epsilon}+\frac{(\alpha^{t})^{2}L}{2\epsilon}\sum_{j\in[d]}\frac{\tau_{j}^{2}}{\left(\sqrt{v_{j}^{t}}+\epsilon\right)b}+\frac{d(\alpha^{t})^{2}L\sigma^{2}C^{2}}{2b^{2}}$
(74) $\displaystyle\leq
F(w^{t})-\left(\alpha^{t}-\frac{(\alpha^{t})^{2}L}{2\epsilon}\right)\sum_{j\in[d]}\frac{[\nabla
F(w^{t})]^{2}_{j}}{\sqrt{v^{t}_{j}}+\epsilon}+(\alpha^{t})^{2}L\left(\frac{\tau^{2}}{2\epsilon^{2}b}+\frac{d\sigma^{2}C^{2}}{2b^{2}}\right).$
(75)
Taking expectation on both sides yields
$\displaystyle\mathbb{E}[F(w^{t+1})]\leq\mathbb{E}[F(w^{t})]-\left(\alpha^{t}-\frac{(\alpha^{t})^{2}L}{2\epsilon}\right)\sum_{j\in[d]}\mathbb{E}\left[\frac{[\nabla
F(w^{t})]^{2}_{j}}{\sqrt{v^{t}_{j}}+\epsilon}\right]+(\alpha^{t})^{2}L\left(\frac{\tau^{2}}{2\epsilon^{2}b}+\frac{d\sigma^{2}C^{2}}{2b^{2}}\right).$
(76)
We need to lower bound $\sum_{j\in[d]}\mathbb{E}\left[\frac{[\nabla
F(w^{t})]^{2}_{j}}{\sqrt{v^{t}_{j}}+\epsilon}\right]$. We know from Hölder’s
inequality that $\mathbb{E}[\langle
u,v\rangle]\leq\mathbb{E}[\|u\|_{1}]\mathbb{E}[\|v\|_{\infty}]$. Now note that
$\displaystyle\mathbb{E}\left[\|\nabla
F(w^{t})\|^{2}\right]=\mathbb{E}\left[\left\langle\frac{|\nabla
F(w^{t})|^{2}}{D^{t}},D^{t}\right\rangle\right]$
$\displaystyle\leq\mathbb{E}\left[\left\|\frac{(\nabla
F(w^{t}))^{2}}{D^{t}}\right\|_{1}\right]\mathbb{E}\left[\|D^{t}\|_{\infty}\right]$
(77) $\displaystyle\leq\mathbb{E}\left[\left\|\frac{(\nabla
F(w^{t}))^{2}}{D^{t}}\right\|_{1}\right](\sqrt{M}+\epsilon).$ (78)
Hence
$\displaystyle\sum_{j\in[d]}\mathbb{E}\left[\frac{(\nabla_{j}F(w^{t}))^{2}}{D_{j}^{t}}\right]\geq\frac{\mathbb{E}[\|\nabla
F(w^{t})\|^{2}]}{\sqrt{M}+\epsilon}$ (79)
and
$\displaystyle\mathbb{E}[F(w^{t+1})]\leq\mathbb{E}[F(w^{t})]-\left(\alpha^{t}-\frac{(\alpha^{t})^{2}L}{2\epsilon}\right)\frac{\mathbb{E}\left[\|\nabla
F(w^{t})\|^{2}\right]}{\sqrt{M}+\epsilon}+(\alpha^{t})^{2}L\left(\frac{\tau^{2}}{2\epsilon^{2}b}+\frac{d\sigma^{2}C^{2}}{2b^{2}}\right).$
(80)
Letting $\alpha^{t}=\alpha\leq\frac{\epsilon}{L}$, we obtain
$\displaystyle\mathbb{E}[F(w^{t+1})]\leq\mathbb{E}[F(w^{t})]-\frac{\alpha}{2(\sqrt{M}+\epsilon)}\mathbb{E}\left[\|\nabla
F(w^{t})\|^{2}\right]+(\alpha^{t})^{2}L\left(\frac{\tau^{2}}{2\epsilon^{2}b}+\frac{d\sigma^{2}C^{2}}{2b^{2}}\right).$
(81)
Combining the two cases (using $\sqrt{M}+\epsilon\leq\sqrt{M}+1$ and $1\leq\sqrt{M}+1$, assuming $\epsilon\leq 1$), for any $t$, we have
$\displaystyle\mathbb{E}[\|\nabla F(w^{t})\|^{2}]$ (82)
$\displaystyle\leq\frac{2(\sqrt{M}+1)}{\alpha}\left(\mathbb{E}[F(w^{t})]-\mathbb{E}[F(w^{t+1})]\right)+2\alpha
L(\sqrt{M}+1)\left(\frac{\tau^{2}}{2\epsilon^{2}b}+\frac{d\sigma^{2}C^{2}}{2b^{2}}\right).$
(83)
Taking a telescoping sum results in
$\displaystyle\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}[\|\nabla
F(w^{t})\|^{2}]\leq\frac{2(\sqrt{M}+1)F(w^{1})}{\alpha T}+2\alpha
L(\sqrt{M}+1)\left(\frac{\tau^{2}}{2\epsilon^{2}b}+\frac{d\sigma^{2}C^{2}}{2b^{2}}\right),$
(84)
where $M:=C^{2}+\frac{\sigma^{2}C^{2}}{sb^{2}}$.
## Appendix C Experimental Details and Additional Results
### C.1 Datasets
IMDB (Maas et al., 2011) is a binary classification dataset on sentiment
analysis for movie reviews that includes 25,000/25,000 training/test samples.
Each sample is a review under a vocabulary size of 10,000. We train a logistic
regression model with 10,001 parameters.
StackOverflow (TensorFlow Federated, 2022, Kaggle, 2022) is a large-scale text
dataset containing questions and answers from Stack Overflow. We focus on the
task of classifying the tag(s) of a given sentence described in TensorFlow
Federated (2022), though we focus on the usual centralized training setting
instead of a federated setting. We randomly sample 246,092 sentences for
training and 61,719 for testing, where each sentence is described by 10,000
features. We format the task as a 500-class classification problem, and the
resulting model has roughly 5 million parameters.
MovieLens-100k (Harper & Konstan, 2015) is a movie review dataset commonly
used for recommendation systems. It contains 100,000 movie ratings from 943
users on 1,682 items ($\approx 6\%$ non-zero entries). We study a (non-convex)
matrix factorization task with embedding size 100, thus totaling 262,500
parameters. We treat each non-zero entry as a ‘record’ for differential
privacy, and randomly partition them for training and evaluation.
### C.2 Hyperparameters
Unless otherwise stated, we fix the following hyperparameters in our
experiments: for IMDB, StackOverflow, and MovieLens respectively, we train for
100/50/50 epochs with batch size 64 and privacy
$\delta=10^{-5}$/$10^{-6}$/$10^{-6}$. We then perform a grid search on other
hyperparameters:
* •
Learning rates: We grid search over {0.03, 0.1, 0.3, 1, 3, 5} for SGD /
AdaGrad update rules and from {0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3} for
the RMSProp update rule.
* •
Per-example clipping thresholds: We grid search over {0.1, 0.25, 0.5, 1} when
performing per-example clipping on clean gradients without preconditioning
(e.g. for DP-SGD updates), and over {0.1, 0.25, 0.5, 1, 2, 3, 5} when clipping
preconditioned clean gradients (e.g. for DP2 updates in adaptive iterations).
The rationale is that, in general, the preconditioned gradient norms are
usually larger than those without preconditioning (recall from Section 3.2
that we apply preconditioning before privatization in DP2). For AdaDPS and
DP2-RMSProp, we also tried a few even larger clipping thresholds ($\geq$ 10),
though we did not perform a full sweep over the other hyperparameters at those
values due to computational constraints.
* •
Delay parameter $s$: For all datasets, $s$ (measured in number of optimization
steps) is chosen heuristically as a function of the number of steps in an
epoch. When reporting the best results (e.g. Figure 4, Figure 5), we search
over $s\in$ {195, 390, 780} (roughly 0.5, 1, 2 epochs respectively) for IMDB
(390 steps/epoch); $s\in$ {100, 300, 1000, 3000} for StackOverflow (3845
steps/epoch); and $s\in$ {1250, 15625, 31250, 50000} for MovieLens (1250
steps/epoch).
* •
Adaptivity $\epsilon$: In our settings, the adaptivity parameter $\epsilon$
for RMSProp/AdaGrad (in the denominator $D^{t}=\sqrt{v}+\epsilon$) would
affect the amount of adaptivity as well as the norms of preconditioned
gradients, which may in turn influence the privacy-utility trade-off under
per-example clipping. We tune $\epsilon$ over a small grid of
{$10^{-2},10^{-3},10^{-5},10^{-7}$}.
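The sweeps above amount to an exhaustive product over the per-dataset grids; a minimal sketch is shown below, where `train_metric` is a hypothetical stand-in for a full DP2 training run (its quadratic surface is illustrative only), and the grids are the IMDB values listed above.

```python
from itertools import product

lrs    = [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3]  # RMSProp learning rates
clips  = [0.1, 0.25, 0.5, 1, 2, 3, 5]                # preconditioned-grad clips
delays = [195, 390, 780]                             # IMDB: ~0.5/1/2 epochs
adapts = [1e-2, 1e-3, 1e-5, 1e-7]                    # adaptivity eps

def train_metric(lr, clip, s, eps):
    # hypothetical placeholder for a full training run; returns a
    # training-set score used for selection (higher is better)
    return -(lr - 0.1) ** 2 - (clip - 3) ** 2

best = max(product(lrs, clips, delays, adapts),
           key=lambda cfg: train_metric(*cfg))
```

Selection on training-set metrics, as described next, simply replaces the placeholder score with the observed training accuracy or loss.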
All reported results use the best hyperparameter configurations, which are
selected using training set metrics (as overfitting generally does not occur
under DP noise). To facilitate reproducibility, we summarize the tuned
hyperparameters for the main experiments and the ablation studies in Table 2
and Table 3 below respectively.
Dataset | DP-SGD | DP-RMSProp | PDA-DPMD | AdaDPS | DP2-RMSProp
---|---|---|---|---|---
(w/ RMSProp)
IMDB | (5, 0.5) | (0.3, 0.1, 10-3) | (5, 0.5) | (1, 5, 10-3) | (0.1, 3, 0.5, 5, 10-7, 195)
StackOverflow | (3, 0.25) | (0.03, 0.1, 10-3) | (3, 0.25) | (0.4, 5, 10-3) | (0.3, 0.3, 0.25, 5, 10-5, 1000)
MovieLens | (0.1, 1) | (0.001, 0.5, 10-3) | (0.1, 1) | (0.01, 10, 10-2) | (0.1, 0.03, 1, 5, 10-3, 31250)
Table 2: Tuned hyperparameters for different methods across three datasets.
For DP-SGD and PDA-DPMD, the values refer to (LR, clip); for DP-RMSProp and
AdaDPS, the values refer to (LR, clip, adaptivity $\epsilon$); and for DP2,
the values refer to (LR for SGD iters, LR for RMSProp iters, clip for SGD
iters, clip for RMSProp iters, adaptivity $\epsilon$, delay $s$). Bold values
indicate settings that lie on the edges of the hyperparameter grids.
Dataset | Ablation Variant 1 | Ablation Variant 2
---|---|---
IMDB | (3.0, 0.1, 0.5, 2.0, 10-7, 780) | (0.3, 0.3, 0.25, 10-3, 780)
StackOverflow | (1.0, 1.0, 1.0, 1.0, 10-5, 1000) | (0.3, 0.001, 0.25, 10-5, 1000)
Table 3: Tuned hyperparameters for ablation studies (Section 5.3) on IMDB and
StackOverflow. Both variants use the RMSProp update rule for the adaptive
steps. Bold values indicate settings that lie on the edges of the hyperparameter grids.
For Variant 1 and 2 respectively, the values refer to (LR for SGD iters, LR
for RMSProp iters, clip for SGD iters, clip for RMSProp iters, adaptivity
$\epsilon$, delay $s$) and (LR for SGD iters, LR for RMSProp iters, clip for
both SGD/RMSProp iters, adaptivity $\epsilon$, delay $s$). Note that for
Variant 2 the clipping threshold does not need to be tuned separately for
SGD/RMSProp iters, as it applies to preconditioned gradients in both cases.
### C.3 Results for DP2-AdaGrad
The DP2 framework can be applied to a range of adaptive methods beyond the
RMSProp variant mostly discussed in the main text. We extend DP2 to the AdaGrad
update rule (with only one line of code changed, see Section D), and benchmark
its convergence and privacy-utility trade-offs. In Figure 8 and Figure 9, the
results indicate that DP2-AdaGrad, like DP2-RMSProp, can consistently and
substantially improve over the baselines in terms of both convergence and
absolute performance, demonstrating the generality of DP2 across adaptive
optimizers.
Figure 8: (Extension of Figure 4 to the AdaGrad update rule) Test accuracy of
DP2 compared to DP-SGD, DP-RMSProp, and DP-AdaGrad on IMDB and StackOverflow.
Dotted lines denote training performance.
### C.4 Effects of Increasing Computational Budgets
When differential privacy introduces a large utility gap between private and
non-private training, one approach to improving the privacy-utility trade-off
is to increase computational costs by using larger batch sizes under fixed
numbers of steps. The noise multiplier needs to increase to achieve the same
privacy target, while the overall privacy noise may still be reduced due to
the larger batch size. This technique may be adopted in practice when we want
to prioritize the utility of private optimization under fixed privacy budgets.
In Figure 9 (right), we explore the effect of such increased computation on
StackOverflow. With a 4$\times$ factor increase in computational cost
(4$\times$ larger batch sizes with the same number of training iterations), we
observe that the privacy/utility trade-off of all methods can be substantially
improved, narrowing the utility gap to non-private training. In particular,
observe that the absolute performance improvement of DP2 over the vanilla DP
baselines remains similar.
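The arithmetic behind this trade-off is that the per-coordinate standard deviation of the privacy noise in the averaged gradient is $\sigma C/b$, so even though the accountant demands a larger $\sigma$ at $4\times$ the batch size, the effective noise can still shrink. The $\sigma$ values below are hypothetical placeholders, not accountant outputs from our experiments.

```python
def effective_noise_std(sigma, C, b):
    # std of each coordinate of N^t added to the averaged gradient
    return sigma * C / b

base   = effective_noise_std(sigma=1.0, C=0.25, b=64)   # hypothetical sigma
scaled = effective_noise_std(sigma=2.5, C=0.25, b=256)  # hypothetical 4x batch
# the noise multiplier grew 2.5x while the batch grew 4x,
# so the effective per-step noise still drops
```

In practice the new $\sigma$ must be recalibrated with a privacy accountant for the target $(\varepsilon,\delta)$; the point here is only that $\sigma$ typically grows sublinearly in $b$.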
Figure 9: (Extension of Figure 5 to the AdaGrad update rule and increased
computational cost) Privacy/utility trade-offs of DP2 compared to DP-SGD, DP-
RMSProp, and DP-AdaGrad on IMDB and StackOverflow. “(4$\times$)” denotes
increasing the batch size and the number of epochs simultaneously by a factor
of 4 and picking the appropriate noise multiplier to arrive at similar privacy
costs $(\varepsilon)$.
### C.5 Additional Results for Ablation Studies
Table 4 summarizes the results for ablation studies on IMDB, StackOverflow,
and MovieLens, and Figure 10 reports test accuracies on IMDB and StackOverflow
during optimization. The variants are discussed in Section 5.3 and complete
algorithms are presented in Appendix D. We observe that DP2 indeed
consistently outperforms the two (weaker) variants on all datasets, thus
verifying our design choices for DP2. In particular, note that the utility
drop of variant 2 (adding noise before preconditioning) on StackOverflow is
more significant compared to that on IMDB; we argue that this is due to
StackOverflow being a high-dimensional learning task (roughly 5 million model
parameters) and thus the detrimental effect of preconditioning per-coordinate
noise is larger.
Dataset | Variant 1 | Variant 2 | DP2-RMSProp
---|---|---|---
IMDB $\boldsymbol{\uparrow}$ | .799 $\pm$ .006 | .643 $\pm$ .007 | .815 $\pm$ .011
StackOverflow $\boldsymbol{\uparrow}$ | .382 $\pm$ .002 | .265 $\pm$ .004 | .391 $\pm$ .001
MovieLens $\boldsymbol{\downarrow}$ | 3.32 $\pm$ .088 | 3.18 $\pm$ .066 | 2.78 $\pm$ .054
Table 4: Summary of ablation studies on all three datasets.
Figure 10: Test accuracies for ablation studies on DP2. Dotted lines
correspond to training metrics.
### C.6 Additional Results for Comparison with Public Data-Assisted Methods
Figure 11 extends the results in Section 5.2 with convergence plots on IMDB
and StackOverflow. On IMDB, we observe that despite not using any auxiliary
information, the convergence of DP2-RMSProp is comparable with that of AdaDPS-
RMSProp (Li et al., 2022) which uses 1% of training data as the public data
(250 examples) to approximate the preconditioner. On StackOverflow where the
same public split of 1% corresponds to 2460 examples, we observe that AdaDPS-
RMSProp can outperform DP2. On the other hand, the extra public data do not
help PDA-DPMD outperform DP2.
Figure 11: Test accuracies of DP2 compared against recent private (adaptive)
methods that leverage public data (Li et al., 2022, Amid et al., 2022). Dotted
lines correspond to training metrics. Figure 12: Comparing DP2 against a
noisy AdaGrad variant based on Kairouz et al. (2021a) where the gradients and
the preconditioner are privatized separately.
In Figure 12, we additionally implement a private AdaGrad method proposed in
Kairouz et al. (2021a) that also leverages public data. Specifically, in each
iteration, the algorithm clips and adds independent noise to both the clean
gradients and the preconditioner estimated using clean gradients; it then uses
public data to estimate a gradient subspace onto which to project the
clipped/noised preconditioner in order to reduce the effect of noise; finally,
it preconditions the noisy gradient with the noisy preconditioner and takes an
update step. Our implementation differs from Kairouz et al. (2021a) in that we
use the diagonal form of the preconditioner instead of the full matrix form.
To estimate the gradient subspace, we follow the approach described in Zhou et
al. (2021): the projection matrix $V\in\mathbb{R}^{d\times k}$, where $d$
is the number of parameters and $k$ is the dimension of the subspace, is
obtained by taking the top-$k$ eigenspace of $M^{t}$ with
$M^{t}=\frac{1}{|X_{\text{pub}}|}\sum_{x^{i}\in
X_{\text{pub}}}\nabla_{w^{t}}f\left(x^{i};w^{t}\right)\nabla_{w^{t}}f\left(x^{i};w^{t}\right)^{\top}$
where $X_{\text{pub}}$ is the set of public examples. Unfortunately, we have
not obtained a satisfactory result for this noisy AdaGrad algorithm. We remark
that since the method is extremely computationally expensive (involves
computing the eigendecomposition of a $d\times d$ matrix with $d=10001$ at
every iteration), further hyperparameter tuning may help improve the
performance. However, our ablation studies (Section 5.3 and Appendix C.5) may
shed light on the current observations since this method privatizes gradients
before preconditioning.
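The subspace step described above can be sketched as follows; this is a minimal illustration assuming `pub_grads` stacks the public per-example gradients row-wise, not the implementation we benchmarked.

```python
import numpy as np

def project_onto_gradient_subspace(pub_grads, noisy_vec, k):
    # M^t = average outer product of public gradients (Zhou et al., 2021)
    M = pub_grads.T @ pub_grads / pub_grads.shape[0]
    # np.linalg.eigh returns eigenvalues in ascending order,
    # so the top-k eigenvectors are the last k columns
    _, vecs = np.linalg.eigh(M)
    V = vecs[:, -k:]                     # d x k basis of the subspace
    return V @ (V.T @ noisy_vec)         # project the noisy quantity
```

Note that forming and decomposing the $d\times d$ matrix $M^{t}$ at every iteration is precisely the cost bottleneck mentioned above.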
## Appendix D Algorithms
For completeness, we present all algorithms mentioned in the main text in
detail.
* •
Non-private version of DP2: only changing Line 9 in Algorithm 1 to
$\displaystyle\tilde{g}^{t}\leftarrow\frac{1}{b}\sum_{i\in
B}\frac{g^{i,t}}{D^{t}}$
* •
DP2 with the AdaGrad update rule (DP2-AdaGrad): only changing Line 5 in
Algorithm 1 to
$\displaystyle v\leftarrow v+\left(G^{t}/s_{1}\right)^{2}$
* •
DP2 with Yogi’s additive update rule (DP2-Yogi): only changing Line 5 in
Algorithm 1 to
$\displaystyle v\leftarrow
v+(1-\beta)\,\text{sign}\left(\left(G^{t}/s_{1}\right)^{2}-v\right)\left(G^{t}/s_{1}\right)^{2}$
* •
Ablation variant 1 (extra query) with delayed preconditioners: see Algorithm
2. Observe that the clean batch gradients $\\{g^{i,t}\\}_{i\in B}$ get
privatized twice in most iterations (when $(t-1)\bmod s\neq 0$), increasing
the total privacy cost.
* •
Ablation variant 2 (noise before preconditioning) with delayed
preconditioners: in Line 9 of Algorithm 1, privatize the batch gradients with
the following replacement:
$\displaystyle\tilde{g}^{t}\leftarrow\frac{1}{b}\left(\sum_{i\in
B}\text{clip}\left(g^{i,t},C\right)+\mathcal{N}\left(\mathbf{0},\sigma^{2}C^{2}\right)\right)/D^{t}$
Input: $T$, batch size $b$, noise multiplier $\sigma$, clipping thresholds
$C_{1}$, $C_{2}$, initial model $w^{0}\in\mathbb{R}^{d}$, $v=\mathbf{0}$,
constant $\epsilon\in\mathbb{R}_{+}$, learning rate schedule $\alpha^{t}$,
moving average parameters $\beta$, delay steps $s$
1 Set accumulator $G^{0}\leftarrow\mathbf{0}$
2 for _$t=1,\cdots,T$_ do
3 Uniformly randomly sample a mini-batch $B$ with size $b$ from private
training data
4 Get individual gradients for sample $i\in B$: $g^{i,t}\leftarrow\nabla
f(x^{i};w^{t-1})$
5 Privatize the gradients using the Gaussian mechanism:
$\displaystyle\tilde{g}^{t}\leftarrow\frac{1}{b}\left(\sum_{i\in
B}\text{clip}\left(g^{i,t},C_{1}\right)+\mathcal{N}\left(\mathbf{0},\sigma^{2}C_{1}^{2}\right)\right)$
6 Accumulate the private gradients $\tilde{g}^{t}$: $G^{t}\leftarrow
G^{t-1}+\tilde{g}^{t}$
7 if _$(t-1)\ \mathrm{mod}\ s=0$_ then
8 Update moment estimates: $v\leftarrow\beta
v+(1-\beta)\left(G^{t}/s\right)^{2}$
9 Reset accumulator: $G^{t}\leftarrow\mathbf{0}$
10 Set final gradient: $\bar{g}^{t}\leftarrow\tilde{g}^{t}$
11 else
12 Privatize the clean, preconditioned gradients using the Gaussian mechanism:
$\displaystyle\hat{g}^{t}\leftarrow\frac{1}{b}\left(\sum_{i\in
B}\text{clip}\left(\frac{g^{i,t}}{\sqrt{v}+\epsilon},C_{2}\right)+\mathcal{N}\left(\mathbf{0},\sigma^{2}C_{2}^{2}\right)\right)$
Set final gradient: $\bar{g}^{t}\leftarrow\hat{g}^{t}$
13 Update model parameters $w$: $\displaystyle w^{t}\leftarrow
w^{t-1}-\alpha^{t}\bar{g}^{t}$
return _$w^{T}$_
Algorithm 2 Ablation variant 1 (extra query) using delayed preconditioners
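The privatization primitive shared by all these variants (clip each per-example gradient to $L_{2}$ norm $C$, sum, add $\mathcal{N}(\mathbf{0},\sigma^{2}C^{2})$, and average) can be sketched as follows; function and variable names are illustrative, not from our codebase.

```python
import numpy as np

def gaussian_mechanism(per_example_grads, C, sigma, rng):
    # per_example_grads: (b, d) array of clean per-example gradients
    b, d = per_example_grads.shape
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # L2 clipping: scale any row with norm > C down to norm C
    clipped = per_example_grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    noise = rng.normal(scale=sigma * C, size=d)
    return (clipped.sum(axis=0) + noise) / b
```

The same routine serves Lines 5 and 12 of Algorithm 2 with thresholds $C_{1}$ and $C_{2}$ respectively; only the input (clean vs. preconditioned gradients) changes.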
# Inverse soft limit construction of QCD amplitude
Andriniaina Narindra Rasoanaivo Sciences Expérimentales et des Mathématiques,
Ecole Normale Supérieure Ampefiloha,
Antananarivo - BP 881, Université d’Antananarivo, Madagascar
###### Abstract
Abstract: QCD amplitudes are one of the most important ingredients for the
understanding of the early universe. In this work we present how the knowledge
of the asymptotic states can be used to calculate the scattering amplitudes of
the underlying QCD events at high energy. To do so, we show how the
understanding of the soft factorization and the soft structure of amplitudes
can provide all the necessary information to fix the kinematics of amplitudes
using inverse soft limit techniques.
## I Introduction
Quantum field theory is known to be the main framework to study the behaviour
of elementary particles and to describe the fundamental interactions between
them. The computation of scattering amplitudes is the central tool connecting
such theoretical descriptions to experimental measurements. The standard
method to compute scattering amplitudes is the Feynman diagrammatic approach;
however, with this technique the computation tends to become very complicated
as the number of particles involved in the process gets large. In quantum
chromodynamics (QCD), the number of gluons in a process tends to grow very
fast, which makes the computation of amplitudes challenging to match with
high-precision experimental results.
Much progress has been made toward more efficient computations; the focal
point of such progress is to understand the mathematical structure of
scattering amplitudes. The standard Feynman approach is known to make the
locality and unitarity structure of the amplitude manifest at every step of
the computation. The idea of making both locality and unitarity manifest at
the same time is known to be the source of all the complexity in the standard
computation. A modern approach, like the BCFW recursion of Britto, Cachazo,
Feng and Witten Britto:2004ap , only makes unitarity manifest alongside the
on-shell property of the particles involved in the process. Despite the
non-conventional variables (on-shell variables) used in the recursive
approach, the computation turns out to be very efficient for large-point
amplitudes. Modern approaches also make various properties and mathematical
structures of amplitudes manifest and easy to study.
In our previous work Rasoanaivo:2022kkm , we showed how scattering
amplitudes can be decomposed around the softness of one of their particles,
an interesting structure related to the soft limit theorem Weinberg:1965nx ;
Jackiw:1968zza ; Elvang:2016qvq . The two components of such a decomposition
are first the soft core, which is the main subject of the universal Weinberg
soft factorisation, and second the soft shell, which needs to be computed in
order to fully reconstruct a larger-point amplitude from the inverse soft
limit (ISL) of a lower-point amplitude. Many works have been done toward
building full amplitudes from ISL, and such a construction can be fully fixed
from the asymptotic symmetry of the individual particles involved in the
process.
In this work, our main objective is to investigate the possibility of
constructing QCD amplitudes from ISL. Building on our experience with the soft
decomposition of known amplitudes, we will exhibit an algorithmic pattern to
reconstruct bipolar amplitudes, a configuration class of amplitudes, from ISL.
The algorithm will be presented as an oriented graph showing the different
operations needed to reconstruct full amplitudes.
## II Inverse soft limit for MHV amplitudes
Before we talk about the inverse soft limit, it is important to recall the
Weinberg theorem Weinberg:1965nx . This theorem shows that a scattering
amplitude has a universal factorisation behaviour as the momentum of a
massless boson tends to zero Rasoanaivo:2020yii ; Campoleoni:2017mbt
:
$\lim_{k\to
0}A_{n+1}(p_{1},p_{2},\ldots,p_{n},k)=\hat{S}(k)A_{n}(p_{1},p_{2},\ldots,p_{n}),$
(1)
where $A_{n}$ is an $n$-point amplitude and $\hat{S}(k)$ is the soft operator
related to the soft momentum $k$. This factorisation is known to be universal,
and it allows us to connect an $(n+1)$-point amplitude to an $n$-point one. As
shown in Rasoanaivo:2020yii , the soft operator $\hat{S}$ can be derived
independently of the amplitude by solving the following equation
$\left[\hat{H}_{i},\hat{S}(k_{j})\right]=h_{i}\hat{S}(k_{j})\delta_{ij}.$ (2)
In this equation, $\hat{S}(k_{j})$ is the soft operator associated to the
$j$-th particle taken to be soft, $h_{i}$ is the helicity of the $i$-th
particle, $\delta_{ij}$ is the Kronecker symbol, and $\hat{H}_{i}$ is the
helicity operator associated to the $i$-th particle, which in the spinor
helicity variables reads Rasoanaivo:2020yii
$\hat{H}_{i}=-\frac{1}{2}\left(\lambda^{a}_{i}\frac{\partial\;}{\partial\lambda^{a}_{i}}-\bar{\lambda}^{\dot{a}}_{i}\frac{\partial\;}{\partial\bar{\lambda}^{\dot{a}}_{i}}\right).$
(3)
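The helicity operator (3) can be checked numerically on spinor monomials: by Euler's identity for homogeneous functions, a monomial with $m$ powers of $\lambda$ and $n$ powers of $\bar{\lambda}$ has $\hat{H}$-eigenvalue $-(m-n)/2$. The sketch below (illustrative names, finite-difference derivatives) verifies this for one monomial.

```python
import numpy as np

def helicity_eigenvalue(f, lam, lamb, h=1e-6):
    # H = -1/2 (lambda . d/dlambda - lambdabar . d/dlambdabar),
    # with each Euler operator evaluated by a central difference
    # along the overall scaling of lambda (resp. lambdabar)
    dl  = (f((1 + h) * lam, lamb) - f((1 - h) * lam, lamb)) / (2 * h)
    dlb = (f(lam, (1 + h) * lamb) - f(lam, (1 - h) * lamb)) / (2 * h)
    return -0.5 * (dl - dlb) / f(lam, lamb)

# a monomial with 3 powers of lambda and 1 of lambda-bar:
# expected helicity eigenvalue -(3 - 1)/2 = -1
f = lambda lam, lamb: lam[0] ** 3 * lamb[1]
lam  = np.array([0.7, -0.2])
lamb = np.array([0.3, 1.1])
```

Any monomial basis element of an amplitude can be tested the same way, which is how the commutation relation (2) constrains the form of $\hat{S}$.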
The inverse soft limit is the opposite action, where a soft operator is
applied to a lower-point amplitude while its momentum is taken to be hard, so
that the $n$-point amplitude gives rise to an $(n+1)$-point one. However, as
shown in Rasoanaivo:2022kkm , such a process is not possible with one single
ISL. The soft limit can be represented as
$A_{n+1}\xrightarrow[SL]{\;\text{soft limit}\;}A_{n}$ (4)
while the inverse soft limit gives
$A_{n}\xrightarrow[ISL]{\;\text{inverse soft
limit}\;}A_{n+1}^{ISL}=A_{n+1}-R_{n+1},$ (5)
where $R_{n+1}$ is the missing part of the higher-point amplitude, which can
be completed from an ISL of another soft sector provided by a different
particle. In this soft decomposition, the main objective of the ISL
reconstruction is to find an algorithm to calculate $R_{n}$ so we can complete
the amplitude. Around this idea of soft reconstruction, it is worth exploring
the decomposition of an amplitude around the soft behaviour of a given
particle. Let us consider the $i$-th particle in the process; an amplitude can
be decomposed into the core of the soft theorem of the $i$-th particle
$A^{[i]}_{n}$ (the part that can be recovered by ISL), and the soft shell
which is lost in this soft limit, $R_{n}^{[i]}$,
$A_{n}=A^{[i]}_{n}+R^{[i]}_{n}\Longleftrightarrow\left\\{\begin{aligned}
&A_{n}\xrightarrow[]{\;k_{i}\to 0\;}A_{n-1}\xrightarrow{\;ISL\>}A_{n}^{[i]}\\\
&R^{[i]}_{n}\xrightarrow[]{\;k_{i}\to 0\;}0.\end{aligned}\right.$ (6)
One of the main problems of the standard approach is that amplitudes are
computed in a generalised way where the helicities of the particles are only
specified at the very last step of the computation. In the modern approaches,
especially after Parke and Taylor introduced a formula for some $n$-point
amplitudes Parke:1986gb , the computation of amplitudes simplifies depending
on their helicity configuration.
Figure 1: Six-point amplitude from one MHV and one $\overline{\text{MHV}}$
amplitude.
In order to better understand this classification, let us consider only pure
gluon amplitudes where all the particles are outgoing. Depending on the
helicity configuration, for any $n$-point amplitude, gluon amplitudes are
classified as follows:
* •
Vanishing amplitudes: the amplitude vanishes when all the particles have the
same helicity, or when there is only one negative or positive helicity.
* •
MHV amplitudes: these are the first non-vanishing amplitudes, in the most
helicity-violating configuration, in which we only have two negative-helicity
particles.
* •
$\overline{\text{MHV}}$ amplitudes: the MHV-bar amplitudes are similar to the
MHV amplitudes, except that we only have two positive-helicity particles.
* •
N${}^{k}\\!$MHV amplitudes: amplitudes that have $k+2$ negative-helicity
particles, the $k$-th step beyond the MHV amplitudes.
The best understood among these amplitudes are the MHV and the
$\overline{\text{MHV}}$ amplitudes. Their expressions can be derived directly
from the Parke–Taylor formula Parke:1986gb . The remarkable part is that in a
soft limit where an $A_{n+1}^{MHV}$ amplitude is factorized into a soft
operator and an MHV sub-amplitude $A_{n}^{MHV}$, a single inverse soft limit
is enough to recover the original amplitude,
$A_{n+1}^{MHV}\xrightarrow{\;\text{ soft limit
}\;}A_{n}^{MHV}\xrightarrow{\;\text{ inverse soft limit }\;}A_{n+1}^{MHV}.$
(7)
This stability of the MHV and $\overline{\text{MHV}}$ amplitudes and their
simplicity are the main reasons we choose MHV amplitudes as the central
building blocks. For example, the 6-point amplitude
$A_{6}(1^{-}2^{-}3^{-}4^{+}5^{+}6^{+})$ [2205.13873] can be constructed from
two inverse soft limits as shown by the graph in Fig. 1.
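This stability can be verified numerically with the Parke–Taylor formula: appending a positive-helicity soft leg $s$ between legs $n$ and $1$ multiplies the MHV amplitude by exactly the Weinberg soft factor $\langle n1\rangle/(\langle ns\rangle\langle s1\rangle)$. The sketch below (random complex spinors, illustrative only) checks this for $n=5$.

```python
import numpy as np

rng = np.random.default_rng(1)

def ang(li, lj):
    # angle bracket <ij> built from two-component spinors
    return li[0] * lj[1] - li[1] * lj[0]

def parke_taylor(spinors, i, j):
    # MHV amplitude <ij>^4 / (<12><23>...<n1>) for negative legs i, j
    n = len(spinors)
    denom = np.prod([ang(spinors[k], spinors[(k + 1) % n]) for k in range(n)])
    return ang(spinors[i], spinors[j]) ** 4 / denom

hard = [rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(5)]
soft = rng.normal(size=2) + 1j * rng.normal(size=2)

A5 = parke_taylor(hard, 0, 1)
A6 = parke_taylor(hard + [soft], 0, 1)
# Weinberg soft factor for a positive-helicity gluon between legs 5 and 1
S = ang(hard[4], hard[0]) / (ang(hard[4], soft) * ang(soft, hard[0]))
assert np.isclose(A6, S * A5)
```

For Parke–Taylor amplitudes the relation is exact, not only asymptotic, which is precisely why a single ISL suffices in Eq. (7).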
## III Bipolar amplitudes
Figure 2: Seven-point bipolar amplitude reconstruction.
In our experiments, we analysed the soft decomposition of many different
helicity amplitudes. From these decompositions, we explored the possibility of
inverting the process using ISL calculations. It is important to mention that
we are searching for a pattern to generalise the ISL reconstruction of
any-point amplitudes.
Out of these experiments, we found that there seems to be a pattern in a
certain class of helicity configurations. These amplitudes have all of their
positive helicities grouped in one region. A typical form of that is when all
the positive helicities are grouped to the right and all the negative ones to
the left, such as $A_{5}(1^{-}2^{-}3^{+}4^{+}5^{+})$,
$A_{6}(1^{-}2^{-}3^{-}4^{+}5^{+}6^{+})$,
$A_{7}(1^{-}2^{-}3^{-}4^{+}5^{+}6^{+}7^{+})$, …Here we can see that a bipolar
amplitude can be either MHV or any of the N${}^{k}\\!$MHV amplitudes, since we
do not care about the number of negative helicities but rather about how they
are grouped as bipolar objects.
For simplicity we call these amplitudes bipolar amplitudes. In the soft
decomposition experiments we found that any $n$-point bipolar amplitude can be
constructed directly from two ISLs of some other $(n-1)$-point bipolar amplitudes.
This finding allows us to build an ISL algorithm that reconstructs those amplitudes
from MHV and/or $\overline{\text{MHV}}$ amplitudes. The algorithm can be
represented as a bipartite graph in which:
* •
the root: the bipolar amplitude that we aim to reconstruct;
* •
the leaves: MHV/$\overline{\text{MHV}}$ amplitudes, which are the seeds of
the construction;
* •
the oriented edges: the ISL operations that connect the $n$-point to the
$(n+1)$-point amplitudes;
* •
the intermediate vertices: the intermediate bipolar amplitudes that
connect the MHV's to the root.
Fig. 2 and Fig. 3 show two examples of the calculations we performed using the
bipartite-graph algorithm above. It is worth mentioning that the ISL operation
applied at every edge of the graph consists of multiplying the amplitude by the
corresponding soft operator of the particle labelled on the arrow and applying a
momentum shift as presented in Boucher-Veronneau:2011rwd ; Nandan:2012rk .
Figure 3: Eight-point bipolar amplitude reconstruction.
## IV Conclusion
In conclusion, we aim to construct QCD amplitudes from the asymptotic
behaviour of the individual particles using the inverse soft limit. We found
that such a process is well established for MHV amplitudes, since their soft
decomposition is particularly simple. This decomposition is the main
ingredient of the reconstruction, and the analysis of the decompositions of many
different amplitudes allowed us to observe a pattern in the soft structure of
certain amplitudes, which we called bipolar amplitudes. In the present work,
presented at HEPMAD22, we only gave the ISL algorithm; the formal proof will be
presented in a separate forthcoming paper. The method can be extended to any
helicity configuration; the only complication for some helicity configurations
is the appearance of loops in the graph associated with the ISL algorithm.
## References
* (2) R. Britto, F. Cachazo and B. Feng, Nucl. Phys. B 715, 499-522 (2005) [arXiv:hep-th/0412308 [hep-th]].
* (3) A. N. Rasoanaivo, [arXiv:2205.13873 [hep-th]].
* (4) S. Weinberg, Phys. Rev. 140, B516-B524 (1965)
* (5) R. Jackiw, Phys. Rev. 168, 1623-1633 (1968)
* (6) H. Elvang, C. R. T. Jones and S. G. Naculich, Phys. Rev. Lett. 118, no.23, 231601 (2017) [arXiv:1611.07534 [hep-th]].
* (7) A. N. Rasoanaivo, [arXiv:2002.02120 [hep-th]].
* (8) A. Campoleoni, D. Francia and C. Heissenberg, JHEP 05, 120 (2017) [arXiv:1703.01351 [hep-th]].
* (9) S. J. Parke and T. R. Taylor, Phys. Rev. Lett. 56, 2459 (1986)
* (10) C. Boucher-Veronneau and A. J. Larkoski, JHEP 09, 130 (2011) [arXiv:1108.5385 [hep-th]].
* (11) D. Nandan and C. Wen, JHEP 08, 040 (2012) [arXiv:1204.4841 [hep-th]].
# ASTROPHYSICAL S(0)-FACTORS FOR THE ${}^{3}{\rm He}(\alpha,\gamma)^{7}{\rm
Be}$, ${}^{3}{\rm H}(\alpha,\gamma)^{7}{\rm Li}$ and ${}^{7}{\rm
Be}(p,\gamma)^{8}{\rm B}$ DIRECT CAPTURE PROCESSES IN A POTENTIAL MODEL
††thanks: Presented at the IV International Scientific Forum "Nuclear Science
and Technologies", Almaty, Kazakhstan, 26-30 September, 2022.
S.A. Turakulov, E.M. Tursunov
Institute of Nuclear Physics, 100214, Tashkent, Uzbekistan
National University of Uzbekistan, 100174, Tashkent, Uzbekistan
###### Abstract
Astrophysical S-factors at zero energy for the direct nuclear capture
reactions ${}^{3}{\rm He}(\alpha,\gamma)^{7}{\rm Be}$, ${}^{3}{\rm
H}(\alpha,\gamma)^{7}{\rm Li}$ and ${}^{7}{\rm Be}(p,\gamma)^{8}{\rm B}$ are
estimated within the framework of the two-body potential cluster model, on the
basis of the extranuclear capture approximation of D. Baye and E. Brainis. The
values of the S(0)-factors have been calculated using two different potential
models for each process, adjusted to the binding energies and to the empirical
values of the asymptotic normalization coefficients from the literature. New
values of the S(0)-factors have been obtained.
21.60.Gx, 24.10.-i
## 1 Introduction
Determination of the low-energy values of the astrophysical S-factor for the
direct radiative capture reactions $d(\alpha,\gamma)^{6}{\rm Li}$, ${}^{3}{\rm
He}(\alpha,\gamma)^{7}{\rm Be}$, ${}^{3}{\rm H}(\alpha,\gamma)^{7}{\rm Li}$
and ${}^{7}{\rm Be}(p,\gamma)^{8}{\rm B}$, especially at E=0, plays an
important role in nuclear astrophysics in both the Standard Solar and Big Bang
nucleosynthesis (BBN) models [1, 2]. The calculation of S(0) is carried out
only with the help of theoretical approaches, since direct experimental
measurement of the cross-section at ultralow energies is not possible because of
its very small values. In particular, for the first capture reaction at energies
of 10 keV, the cross-section is of the order of nanobarns. As is well known, the
$\alpha+d\rightarrow^{6}$Li$+\gamma$ synthesis process is the main source of the
6Li isotope during primordial nucleosynthesis. For this reason, the
$\alpha+d\rightarrow^{6}$Li$+\gamma$ synthesis is of great interest both in
experimental [3, 4, 5, 6, 7] and in theoretical studies [8, 9, 10]. The
direct capture processes ${}^{3}{\rm He}(\alpha,\gamma)^{7}{\rm Be}$ and
${}^{3}{\rm H}(\alpha,\gamma)^{7}{\rm Li}$ present the main source of the
primordial 7Li element [11]. All the above mentioned reactions are directly
related to the cosmological lithium problem [12]. Moreover, the processes
${}^{3}{\rm He}(\alpha,\gamma)^{7}{\rm Be}$ and ${}^{7}{\rm
Be}(p,\gamma)^{8}{\rm B}$ are essential for the estimation of the neutrino
fluxes from the Sun [1, 2]. In the present work, we extend the theoretical model
previously developed in Refs. [11, 13, 14, 15, 16] for the determination of the
zero-energy astrophysical S-factor on the basis of the extranuclear capture
approximation proposed in Refs. [17, 18]. The aim of the present work is to
determine the values of the S(0)-factor for the ${}^{3}{\rm
He}(\alpha,\gamma)^{7}{\rm Be}$, ${}^{3}{\rm H}(\alpha,\gamma)^{7}{\rm Li}$
and ${}^{7}{\rm Be}(p,\gamma)^{8}{\rm B}$ direct nuclear capture reactions
within the framework of the two-body potential cluster model. The method can be
extended to the direct capture process $d(\alpha,\gamma)^{6}{\rm Li}$ within
the "exact-mass prescription". However, the latter case is only of
methodological interest, since the isospin-forbidden E1 astrophysical S-factor
of the process $d(\alpha,\gamma)^{6}{\rm Li}$ can be described only within the
three-body model, not in the two-body model [9].
## 2 Theoretical model
The cross-section of a capture process is defined both by experimental
measurements and by theoretical approaches. However, for capture processes
involving light nuclei, the cross-section decreases exponentially as the energy
tends to zero. Therefore, in low-energy nuclear astrophysics the astrophysical
$S$-factors are used. This quantity is expressed in terms of the cross
section as [13, 19]
$\displaystyle
S(E)=\sum_{l_{f}J_{f}}S_{l_{f}J_{f}}(E)=E\,\exp(2\pi\eta)\sum_{l_{f}J_{f}}\sum_{l_{i}J_{i}}\sum_{\lambda}\sigma^{\Omega\lambda}_{l_{i}J_{i}\rightarrow
l_{f}J_{f}}(E),$ (1)
where $l_{i},J_{i}$ ($l_{f},J_{f}$) are the orbital and the total angular
momenta of the initial (final) states, respectively, $\eta$ is the Sommerfeld
parameter, $\Omega=$ E or M (electric or magnetic transition), and $\lambda$ is
the multipolarity of the transition. For the above radiative capture reactions
the electric dipole E1 and quadrupole E2 transition contributions are dominant in
the cross-section. Thus, the cross section for the electric transitions of
the radiative capture process is expressed as [17, 19]
$\displaystyle\sigma^{E\lambda}_{l_{i}J_{i}\rightarrow
l_{f}J_{f}}(E)=\frac{8\pi e^{2}\mu}{\hbar
c}\frac{k_{\gamma}^{2\lambda+1}}{k^{3}}\cdot
N_{E\lambda}\cdot\left[I_{if}(E)\right]^{2}$ (2)
where $k=\sqrt{2\mu E}/\hbar c$ is the wave number of the relative motion of
the colliding particles, $\mu$ is the reduced mass of the clusters involved in
the capture process, $k_{\gamma}=E_{\gamma}/\hbar c$ is the wave number of the
photon corresponding to the energy $E_{\gamma}=E_{\rm th}+E$, where $E_{\rm th}$
is the threshold energy, and $N_{E\lambda}$ is
$\displaystyle
N_{E\lambda}=\left[Z_{1}\left(\frac{A_{2}}{A}\right)^{\lambda}+Z_{2}\left(\frac{-A_{1}}{A}\right)^{\lambda}\right]^{2}\frac{\lambda(\lambda+1)[\lambda][l_{i}][J_{i}][J_{f}]}{\left(\left[\lambda\right]!!\right)^{2}[S_{1}][S_{2}]}$
(3) $\displaystyle\times\left(C^{l_{f}0}_{\lambda
0l_{i}0}\right)^{2}\left\\{\begin{array}[]{ccc}J_{i}&l_{i}&S\\\
l_{f}&J_{f}&\lambda\end{array}\right\\}^{2}.$ (6)
The parameters $A_{1}$, $A_{2}$ are the mass numbers of the clusters in the
entrance channel, $A=A_{1}+A_{2}$, $S_{1}$, $S_{2}$ are the spins of the
clusters, and $S$ is the spin of the reaction channel. We also use the short-hand
notations $[S]=(2S+1)$ and $[\lambda]!!=(2\lambda+1)!!$. The overlap integral is given as
$\displaystyle
I_{if}(E)=\int^{\infty}_{0}u_{E}^{(l_{f}SJ_{f})}(r)r^{\lambda}u^{(l_{i}SJ_{i})}(E,r)dr.$
(7)
where $u_{E}^{(l_{f}SJ_{f})}(r)$ and $u^{(l_{i}SJ_{i})}(E,r)$ are the final
bound-state and the initial scattering wave functions, respectively.
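The conversion in Eq. (1) between the cross section and the astrophysical $S$-factor can be sketched numerically. The snippet below is an illustrative sketch only, not code from this work; it assumes the standard nonrelativistic form of the Sommerfeld parameter, $\eta=Z_{1}Z_{2}\,\alpha\sqrt{\mu c^{2}/(2E)}$, and uses rounded masses for the $p+{}^{7}{\rm Be}$ system taken from Section 3.

```python
import math

ALPHA = 1.0 / 137.036          # fine-structure constant
AMU   = 931.494                # MeV per atomic mass unit

def sommerfeld(Z1, Z2, mu_c2, E):
    """Sommerfeld parameter eta = Z1*Z2*alpha*sqrt(mu c^2 / (2E)),
    with mu_c2 the reduced-mass energy and E the c.m. energy, both in MeV."""
    return Z1 * Z2 * ALPHA * math.sqrt(mu_c2 / (2.0 * E))

def s_factor(E, sigma, Z1, Z2, mu_c2):
    """Astrophysical S-factor S(E) = E * exp(2*pi*eta) * sigma(E), as in
    Eq. (1); the factor exp(2*pi*eta) removes the Coulomb-barrier
    suppression of the cross section sigma(E)."""
    eta = sommerfeld(Z1, Z2, mu_c2, E)
    return E * math.exp(2.0 * math.pi * eta) * sigma

# p + 7Be: Z1 = 1, Z2 = 4, reduced mass from m_p ~ 1.0073 and
# m_7Be ~ 7.0147 a.m.u. (rounded values from Section 3)
mu_c2 = (1.0073 * 7.0147 / (1.0073 + 7.0147)) * AMU
eta = sommerfeld(1, 4, mu_c2, 0.1)   # eta at E = 0.1 MeV, roughly 1.87
```

At ultralow energies $\eta$ grows like $E^{-1/2}$, so the exponential factor dominates the energy dependence of the cross section, which is why the slowly varying $S(E)$ is the quantity extrapolated to $E=0$.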
In the next step, we determine the zero-energy astrophysical S(0)-factor. In
order to distinguish the energy-dependent parts of the astrophysical $S$-factor
we introduce modified scattering functions [17]
$\displaystyle\tilde{u}^{(l_{i}SJ_{i})}(E,r)=E^{1/2}\exp{(\pi\eta)}u^{(l_{i}SJ_{i})}(E,r),$
(8)
where $E^{1/2}=\frac{k\cdot\hbar c}{\sqrt{2\mu}}$, with $E$ the kinetic energy
of the relative motion. When $E$ tends to zero, $\eta$ tends to
infinity and the regular $F_{l}$ and irregular $G_{l}$ Coulomb wave functions
become unusable. Therefore, the radial scattering wave function is normalized
with the help of the rescaled Coulomb functions $\mathcal{F}_{l}$ and
$\mathcal{G}_{l}$ [18] as
$\displaystyle\tilde{u}^{(l_{i}SJ_{i})}(E,r)\mbox{$\mathop{\rightarrow}\limits_{r\rightarrow\infty}$}\cos\delta_{(l_{i}SJ_{i})}(E)\mathcal{F}_{l}(E,r)+\frac{2}{\pi}\exp(2\pi\eta)\sin\delta_{(l_{i}SJ_{i})}(E)\mathcal{G}_{l}(E,r),$
(9)
where $\delta_{l_{i}SJ_{i}}(E)$ is the phase shift in the $(l,S,J)$th partial
wave. This normalization ensures that $\tilde{u}^{(l_{i}SJ_{i})}(E,r)$ has a
finite limit when $E$ tends to zero [17, 18]. It will be convenient to make
use of a function of the phase shift $\delta_{l_{i}SJ_{i}}(E)$ defined as
$\displaystyle\mathcal{D}_{l_{i}SJ_{i}}(E)=\frac{2}{\pi}\left[\exp(2\pi\eta)-1\right]\tan\delta_{l_{i}SJ_{i}}(E)$
(10)
which also has a finite limit when $E\rightarrow 0$ [18]. Taking into
account that at ultralow energies the phase shift
$\delta_{l_{i}SJ_{i}}(E)$ is very small and that
$\exp(-2\pi\eta)\ll 1$, the asymptotic form (9) of the radial wave function
becomes
$\displaystyle\tilde{u}^{(l_{i}SJ_{i})}(E,r)\mbox{$\mathop{\rightarrow}\limits_{r\rightarrow\infty}$}\mathcal{F}_{l}(E,r)+\mathcal{D}_{l_{i}SJ_{i}}(E)\mathcal{G}_{l}(E,r),$
(11)
which remains finite at $E=0$.
Finally, we can rewrite the expression for the zero energy astrophysical
S(0)-factor in the form [17, 18]
$\displaystyle S(0)=\frac{1}{2}\alpha\hbar c\left(\frac{E_{th}}{\hbar
c}\right)^{2\lambda+1}\cdot N_{E\lambda}\cdot\left[I_{if}(0)\right]^{2}.$ (12)
where $\alpha$ is the fine-structure constant. The zero energy overlap
integral is given as
$\displaystyle
I_{if}(0)=\int^{\infty}_{0}u_{E}^{(l_{f}SJ_{f})}(r)r^{\lambda}\tilde{u}^{(l_{i}SJ_{i})}(0,r)dr.$
(13)
where
$\displaystyle\tilde{u}^{(l_{i}SJ_{i})}(0,r)\mbox{$\mathop{\rightarrow}\limits_{r\rightarrow\infty}$}\mathcal{F}_{l}^{0}(r)+\mathcal{D}_{l_{i}SJ_{i}}(0)\mathcal{G}_{l}^{0}(r).$
(14)
Using the above equations one can estimate the astrophysical S(0)-factor of
the above capture processes within the two-body cluster model.
## 3 Results and discussion
Calculations of the cross section and of the astrophysical S(0)-factor have been
performed under the same conditions as in Refs. [11, 13, 14, 15, 16]. The
Schrödinger equation in the entrance and exit channels is solved with two-body
central nuclear potentials of Gaussian form [8], together with the corresponding
point-like Coulomb potential for the $\alpha+d$ and $p+^{7}{\rm Be}$ systems
[8, 20]. For the ${}^{3}{\rm He}+\alpha$ and ${}^{3}{\rm H}+\alpha$ systems, a
spherical form of the Coulomb potential has been used. For consistency we use the
same model parameters as in the aforementioned paper [13]:
$\hbar^{2}/2$[a.m.u.]=20.7343 MeV fm${}^{2}$, and the Coulomb parameter
${\rm R}_{c}$ = 3.095 fm for the spherical form of the potential [11]. The mass
number $A_{1}$ of the first particle corresponds to $m_{\rm d}=$2.0 a.m.u.,
$m_{{}^{3}{\rm He}}=m_{{}^{3}{\rm H}}=$3.0 a.m.u., $m_{p}=$1.007 276 4669
a.m.u., and the mass number $A_{2}$ of the second particle to $m_{\alpha}=$ 4.0
a.m.u., $m_{{}^{7}{\rm Be}}=$ 7.014735 a.m.u., respectively.
It should be noted that for the calculations of the $\alpha+d$ capture
reaction we use the "exact mass" prescription in the two-body model. As was
noted in the Introduction, this case is only of methodological interest, since
realistic estimates of the isospin-forbidden E1 S-factor can be obtained only
within the three-body model [9, 11, 14, 15]. Thus, within the
"exact mass" prescription the exact experimental mass values $m_{\rm
d}=A_{1}=$2.013553212724 a.m.u. and $m_{\alpha}=A_{2}=$ 4.001506179127 a.m.u. [8]
are used.
Table 1: The values of ANC for the $\alpha+d\to^{6}{\rm Li}$ virtual transition and corresponding astrophysical S(0)-factors of the direct $d(\alpha,\gamma)^{6}{\rm Li}$ capture reaction. Model | $C_{\alpha d}$, fm-1/2 | S(0), MeV nb
---|---|---
${\rm V}_{\rm D}$ | 2.53 | 1.53
${\rm V}_{\rm M}$ | 2.31 | 1.26
The obtained S(0)-factor values for the above capture processes depend strongly
on the value of the asymptotic normalization coefficient (ANC).
The values of the ANC of the $\alpha+d\to^{6}{\rm Li}$ virtual transition and the
astrophysical S(0)-factors of the direct $d(\alpha,\gamma)^{6}{\rm Li}$
capture process are presented in Table 1 for the two potential models ${\rm
V}_{\rm D}$ [8] and ${\rm V}_{\rm M}$ [13]. The initial potential ${\rm
V}_{\rm D}$ yields a value $C_{\alpha d}=2.53$ fm-1/2 for the ANC. The
modified potential ${\rm V}_{\rm M}$ yields $C_{\alpha d}=2.31$ fm-1/2, which
is more consistent with the empirical value of the ANC, $C_{\alpha d}=2.32\pm
0.11$ fm-1/2, extracted from the experimental data in Ref. [21].
Table 2: Values of ANC for the ground $p_{3/2}$ and first excited $p_{1/2}$ bound states of the ${}^{7}{\rm Be}$ and ${}^{7}{\rm Li}$ nuclei and corresponding astrophysical S(0)-factors. Reaction | Model | $C_{p_{3/2}}$, fm-1/2 | $C_{p_{1/2}}$, fm-1/2 | S(0), keV b
---|---|---|---|---
${}^{3}{\rm He}(\alpha,\gamma)^{7}{\rm Be}$ | ${\rm V}_{\rm D}^{n}$ | 4.34 | 3.71 | 0.56
${\rm V}_{\rm M1}^{n}$ | 4.79 | 4.24 | 0.58
${}^{3}{\rm H}(\alpha,\gamma)^{7}{\rm Li}$ | ${\rm V}_{\rm D}^{n}$ | 3.72 | 3.12 | 0.10
${\rm V}_{\rm M1}^{n}$ | 4.10 | 3.55 | 0.09
In Table 2 the values of ANC for the ground $p_{3/2}$ and the first excited
$p_{1/2}$ bound states of the ${}^{7}{\rm Be}(=^{3}{\rm He}+\alpha)$ and
${}^{7}{\rm Li}(=^{3}{\rm H}+\alpha)$ nuclei and calculated results for
corresponding astrophysical S(0)-factors are given within two potential models
${\rm V}_{\rm D}^{n}$ and ${\rm V}_{\rm M1}^{n}$ [11, 15]. The parameters of
these potential models are given in Ref.[15].
These models differ from each other only in the values of the ANC for the bound
states of the 7Be and 7Li nuclei. They have been adjusted to the values of the ANC
extracted from the analysis of the low-energy experimental astrophysical
S-factors of the ${}^{3}{\rm He}(\alpha,\gamma)^{7}{\rm Be}$ and ${}^{3}{\rm
H}(\alpha,\gamma)^{7}{\rm Li}$ reactions in Refs. [22, 23]. The obtained
theoretical estimates of the zero-energy S(0)-factor for the ${}^{3}{\rm
He}(\alpha,\gamma)^{7}{\rm Be}$ process with the potentials ${\rm V}_{\rm
D}^{n}$ and ${\rm V}_{\rm M1}^{n}$ are fully consistent with the new datum
S(0)=0.56$\pm$0.04 keV b of the Solar Fusion II Collaboration [1]. In
addition, the obtained results for the ${}^{3}{\rm H}(\alpha,\gamma)^{7}{\rm Li}$
capture process within the proposed pair of potential models are in
good agreement with the result S(0)=0.10 $\pm$0.02 keV b of the NACRE
Collaboration [19].
Table 3: The values of ANC for the $p+^{7}{\rm Be}\to^{8}{\rm B}$ virtual transition and corresponding astrophysical S(0)-factors of the direct ${}^{7}{\rm Be}(p,\gamma)^{8}{\rm B}$ capture reaction. Model | $C_{p^{7}{\rm Be}}$, fm-1/2 | S(0), eV b
---|---|---
${\rm V}_{\rm D}$ | 0.70 | 18.32
${\rm V}_{\rm M}$ | 0.73 | 19.61
In Table 3 the values of ANC for the bound state of the ${}^{8}{\rm
B}(=p+^{7}{\rm Be})$ nucleus and calculated results for corresponding
astrophysical S(0)-factors are given for the two potential models ${\rm V}_{\rm
D}$ and ${\rm V}_{\rm M}$ from Ref. [16], where the full set of parameters of
these models is given. As can be seen from the table, the obtained
theoretical estimates for the zero energy S(0)-factor of the ${}^{7}{\rm
Be}(p,\gamma)^{8}{\rm B}$ process are consistent with the new data
S(0)=20.8$\pm$2.10 eV b of the Solar Fusion II Collaboration [1].
## 4 Conclusions
In the present work, the zero-energy astrophysical S-factors for the
${}^{3}{\rm He}(\alpha,\gamma)^{7}{\rm Be}$, ${}^{3}{\rm
H}(\alpha,\gamma)^{7}{\rm Li}$ and ${}^{7}{\rm Be}(p,\gamma)^{8}{\rm B}$
radiative capture reactions have been estimated in the framework of the two-body
potential cluster model on the basis of the extranuclear capture
approximation [17, 18]. The obtained estimates of the astrophysical
S(0)-factors of these capture reactions are consistent with the new data sets of
the Solar Fusion II and NACRE Collaborations.
## 5 Acknowledgements
The authors acknowledge Prof. Daniel Baye for useful discussions and valuable
advice.
## References
* [1] E.G. Adelberger et al., Rev. Mod. Phys. 83, 195 (2011).
* [2] B.D. Fields, Annual Review of Nuclear and Particle Science 61, 47 (2011).
* [3] R.G.H. Robertson et al. Phys. Rev. Lett. 47, 18 (1981).
* [4] J.Kiener et al. Phys. Rev. C 44, 21 (1991).
* [5] P. Mohr et al. Phys. Rev. C 50, 15 (1994).
* [6] M. Anders et al. (LUNA Collaboration), Phys. Rev. Lett. 113, 042501 (2014).
* [7] D. Trezzi et al. (LUNA Collaboration), Astroparticle Physics 89, 57 (2017).
* [8] S.B.Dubovichenko, Phys. Atomic Nucl. 73, 1526 (2010).
* [9] D.Baye and E.M.Tursunov, J. Phys. G: Nucl. Part. Phys. 45, 085102 (2018).
* [10] A.S.Solovyev, Phys. Rev. C 106, 014610 (2022).
* [11] E.M.Tursunov, S.A.Turakulov, A.S.Kadyrov, Phys. Rev. C 97, 035802 (2018).
* [12] M. Asplund, D. L. Lambert, P. E. Nissen, F. Primas, and V. V. Smith, The Astrophysical Journal 644, 229 (2006).
* [13] E.M.Tursunov, S.A.Turakulov, P.Descouvemont. Phys. Atom. Nucl. 78, 193 (2015).
* [14] E.M.Tursunov, S.A.Turakulov, A.S.Kadyrov, Nuclear Physics A 1000, 121884 (2020).
* [15] E.M.Tursunov, S.A.Turakulov, A.S.Kadyrov, Nuclear Physics A 1006, 122108 (2021).
* [16] E.M.Tursunov, S.A.Turakulov, A.S.Kadyrov, L.D.Blokhintsev, Phys. Rev. C 104, 045806 (2021).
* [17] D.Baye, P.Descouvemont, M.Hesse, Phys. Rev. C 58, 545 (1998).
* [18] D.Baye, E.Brainis, Phys. Rev. C 61, 025801 (2000).
* [19] C. Angulo et al. (NACRE), Nuclear Physics A 656, 3 (1999).
* [20] S. B.Dubovichenko, N. A.Burkova, A. V.Dzhazairov-Kakhramanov, A. S.Tkachenko, Nucl. Phys. A 983, 175 (2019).
* [21] L.D.Blokhintsev, V.I.Kukulin, A.A.Sakharuk, D.A.Savin, E.V.Kuznetsova, Phys. Rev. C 48, 2390 (1993).
* [22] Q.I. Tursunmahatov, R. Yarmukhamedov, Phys. Rev. C 85, 045807 (2012).
* [23] S.B. Igamov, R. Yarmukhamedov, Nucl. Phys. A 781, 247 (2007).
# An interlacing result for Hermitian matrices in Minkowski space
D.B. Janse van Rensburg111School of Mathematical and Statistical Sciences,
North-West University, Research Focus: Pure and Applied Analytics, Private Bag
X6001, Potchefstroom 2520, South Africa., A.C.M.
Ran222Department of Mathematics, Vrije Universiteit Amsterdam, De Boelelaan
1111, 1081 HV Amsterdam, The Netherlands and Research Focus: Pure and Applied
Analytics, North-West University, Potchefstroom, South Africa., M. van Straaten11footnotemark: 1
###### Abstract
In this paper we revisit the well-known interlacing problem, but here we
consider the result for Hermitian matrices in the Minkowski space, an
indefinite inner product space with one negative square. More specifically, we
consider the $n\times n$ matrix $A=\begin{bmatrix}J&u\\\
-u^{*}&a\end{bmatrix}$ with $a\in\mathbb{R}$, $J=J^{*}$ and
$u\in\mathbb{C}^{n-1}$. Then $A$ is $H$-selfadjoint with respect to the matrix
$H=I_{n-1}\oplus(-1)$. The canonical form for the pair $(A,H)$ plays an
important role, and the sign characteristic coupled to the pair is also
discussed.
_Keywords_ interlacing, Minkowski space
_MSC_ 15A18, 15A42, 47B50
## 1 Introduction
The Hermitian matrix $H=I_{n-1}\oplus(-1)$ defines an indefinite inner product
space with one negative square; this is sometimes called the Minkowski space.
The formula $[x,y]=\langle Hx,y\rangle$, with $x,y\in\mathbb{C}^{n}$ and where
$\langle\cdot\,,\cdot\rangle$ denotes the standard inner product, defines an
indefinite inner product on $\mathbb{C}^{n}$. The function $[x,y]$ satisfies
all the properties of the standard inner product, with the exception that
$[x,x]$ may be nonpositive for $x\neq 0$. Basic elements of the theory of
indefinite inner product spaces are summarised in [2].
An $n\times n$ matrix $A$ is called $H$-selfadjoint if it is selfadjoint in
the indefinite inner product given by $H$, or equivalently, if $HA=A^{*}H$.
The spectrum $\sigma(A)$ of an $H$-selfadjoint matrix $A$ is symmetric
relative to the real axis. The sizes of the Jordan blocks in the Jordan normal
form of $A$ corresponding to eigenvalue $\lambda$ are equal to the sizes of
the Jordan blocks corresponding to eigenvalue $\bar{\lambda}$, see Proposition
4.2.3 in [2].
Canonical forms exist for pairs of matrices $(A,H)$ where $A$ is
$H$-selfadjoint, see for example [1]. However, for the specific pair of
matrices discussed in this paper, namely
$A=\begin{bmatrix}J&u\\\ -u^{*}&a\end{bmatrix}\quad\text{and}\quad
H=I_{n-1}\oplus(-1),$
the possible canonical forms are restricted to the ones below, see Section 5.6
in [2]. There exists an invertible matrix $S$ such that
$S^{-1}AS=A_{1}\oplus\cdots\oplus A_{k}$ and $S^{*}HS=H_{1}\oplus\cdots\oplus
H_{k}$, where the blocks in the canonical form are one of the following types:
1. 1.
$A_{j}=\lambda$, $H_{j}=\pm 1$, with $\lambda\in\mathbb{R}$, and only for one
real eigenvalue the sign can be negative,
2. 2.
$A_{j}=\begin{bmatrix}a+bi&0\\\ 0&a-bi\end{bmatrix}$,
$H_{j}=\begin{bmatrix}0&1\\\ 1&0\end{bmatrix}$, with $a\in\mathbb{R}$ and
$b>0$,
3. 3.
$A_{j}=J_{2}(\lambda)$, $H_{j}=\pm\begin{bmatrix}0&1\\\ 1&0\end{bmatrix}$,
with $\lambda\in\mathbb{R}$,
4. 4.
$A_{j}=J_{3}(\lambda)$, $H_{j}=\begin{bmatrix}0&0&1\\\ 0&1&0\\\
1&0&0\end{bmatrix}$, with $\lambda\in\mathbb{R}$.
In addition, only one block of the forms in either 2, 3, or 4 can occur.
Note that for the specific matrices $A$ and $H$, we have $J^{*}=J$ and
$a\in\mathbb{R}$. The goal of this paper is to find the relationship between
the eigenvalues of the matrix $A$ and the matrix $J$ and how they are
interlaced.
Interlacing problems for Hermitian matrices in a definite inner product space
are well known, and results can be found in Section 4.3 in [5]. Theorem 4.3.17
in [5] gives Cauchy’s interlacing theorem for a bordered Hermitian matrix. For
an application to graphs and subgraphs, see for example the paper by Haemers
[3] and references there.
Returning to the indefinite case, a different point of view is the inverse
eigenvalue problem, and more precisely the periodic Jacobi inverse eigenvalue
problem. See for example the paper by Xu, Bebiano and Chen, [6] and references
mentioned there. The paper [6] is concerned with the reconstruction of a
Jacobi matrix. We were inspired by some results in this paper and we were
curious about the sign associated to the specific eigenvalues. We therefore
explore this, applied to general $H$-selfadjoint matrices with one negative
square.
In our paper we are concerned with how the eigenvalues of the selfadjoint
matrix $J$ interlace with those of the $H$-selfadjoint matrix $A$, and with
the sign corresponding to the eigenvalues of $A$ in the canonical form of the
pair $(A,H)$. The proof for the interlacing of the eigenvalues follows the
same line of argument as Lemma 3.3 in [6].
We consider the characteristic polynomial of $A$. If $\lambda\notin\sigma(J)$,
we use a standard argument to write $\lambda I-A$ as
$\lambda I-A=\begin{bmatrix}\lambda I-J&-u\\\
u^{*}&\lambda-a\end{bmatrix}=\begin{bmatrix}\lambda I-J&0\\\
u^{*}&1\end{bmatrix}\begin{bmatrix}I&-(\lambda I-J)^{-1}u\\\
0&\lambda-a+u^{*}(\lambda I-J)^{-1}u\end{bmatrix}.$
Hence the characteristic polynomial of $A$ becomes
$p(\lambda)=\det(\lambda I-A)=\det(\lambda I-J)\cdot(\lambda-a+u^{*}(\lambda
I-J)^{-1}u).$ (1)
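As a numerical sanity check (our own illustration, not part of the paper's argument), the factorization (1) can be verified on a small example with a diagonal $J$, where $u^{*}(\lambda I-J)^{-1}u$ reduces to the sum $\sum_{j}|u_{j}|^{2}/(\lambda-\mu_{j})$:

```python
# Check the factorization (1) for n = 3 with J = diag(1, 2), u = (1, 1), a = 0,
# so that A = [[1, 0, 1], [0, 2, 1], [-1, -1, 0]] and H = diag(1, 1, -1).
mu = [1.0, 2.0]          # eigenvalues of the diagonal J
u  = [1.0, 1.0]
a  = 0.0

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def p_direct(lam):
    """det(lam*I - A) computed directly from the block matrix A."""
    A = [[mu[0], 0.0, u[0]],
         [0.0, mu[1], u[1]],
         [-u[0], -u[1], a]]
    M = [[lam*(i == j) - A[i][j] for j in range(3)] for i in range(3)]
    return det3(M)

def p_factored(lam):
    """det(lam*I - J) * (lam - a + u*(lam*I - J)^{-1}u), valid off sigma(J)."""
    detJ = (lam - mu[0]) * (lam - mu[1])
    resolvent = sum(ui*ui / (lam - mi) for ui, mi in zip(u, mu))
    return detJ * (lam - a + resolvent)
```

For instance, at $\lambda=5$ both expressions give $12\cdot(5+\tfrac14+\tfrac13)=67$.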
A part of the second term in (1) is a realization of the general form
$D+C(\lambda I_{n}-A)^{-1}B.$
This realization is minimal if $n$ is as small as possible, or
equivalently, if the pair $(A,C)$ is observable and the pair $(A,B)$ is
controllable. For our case this reduces to the observability of the pair
$(J,u^{*})$.
Recall, see Theorem 3.2.1 in [4], that a pair $(A,C)$ with $C$ an $m\times n$
matrix and $A$ an $n\times n$ matrix is observable if
$\mathcal{N}:=\textup{Ker}\begin{bmatrix}C\\\ CA\\\ CA^{2}\\\ \vdots\\\
CA^{n-1}\end{bmatrix}=\\{0\\}.$
The Hautus test for observability, see Theorem 3.2.2 in [4], states the
following: the matrix pair $(A,C)$ is observable if and only if
$\textup{rank}\begin{bmatrix}\lambda I-A\\\ C\end{bmatrix}=n\,\,\,\textrm{for
all }\,\,\lambda\in\sigma(A).$
The Hautus test for the pair $(J,u^{*})$ ensures that all eigenvalues of $J$
appear as poles of the expression $u^{*}(\lambda I-J)^{-1}u$,
multiplicities included.
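The Hautus test above can be illustrated numerically. The following sketch is our own real-valued example (the data are not from [4]): it computes the rank of $\begin{bmatrix}\lambda I-J\\ u^{*}\end{bmatrix}$ by Gaussian elimination for a diagonal $J$, whose eigenvalues are simply its diagonal entries.

```python
def rank(rows, tol=1e-12):
    """Rank of a real matrix (list of row lists) via Gaussian elimination."""
    rows = [r[:] for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][c]) > tol), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][c] / rows[r][c]
            rows[i] = [x - f*y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def hautus_observable(mu, u):
    """Hautus test for (J, u^*) with J = diag(mu): the pair is observable
    iff rank([lam*I - J; u^*]) = n-1 for every eigenvalue lam of J."""
    n1 = len(mu)
    for lam in mu:
        M = [[(lam - mu[j]) * (i == j) for j in range(n1)] for i in range(n1)]
        M.append(list(u))
        if rank(M) < n1:
            return False
    return True
```

The test fails exactly in the two situations of Proposition 2.1: when $J$ has a repeated eigenvalue, or when some component of $u$ in the eigenbasis of $J$ vanishes.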
Section 2 of this paper describes the relationships between observability and
the eigenvalues and Section 3 is concerned with the sign connected to the
eigenvalues in the pair $(A,H)$. Finally, in the last section a few examples
are given to verify and clarify some of the theory.
## 2 Observability
In this section we will prove several results following from the observability
of the matrix pair $(J,u^{*})$.
###### Proposition 2.1
Let $J=J^{*}\in\mathbb{C}^{n-1\times n-1}$ and $u\in\mathbb{C}^{n-1}$. If the
pair $(J,u^{*})$ is observable, i.e., $\textup{rank}\begin{bmatrix}\lambda
I-J\\\ u^{*}\end{bmatrix}=n-1$ then:
* (i)
$J$ has $n-1$ distinct eigenvalues;
* (ii)
$\sigma(J)\cap\sigma(A)=\emptyset$;
* (iii)
$A$ is nonderogatory.
Proof. (i) This follows immediately from the Hautus test. Indeed, since
$J=J^{*}$ it only needs to be shown that for no eigenvalue $\lambda_{0}$ the
corresponding eigenspace ${\rm Ker\,}(\lambda_{0}I-J)$ has dimension two or
higher. However, if this were the case, then
$\begin{bmatrix}\lambda_{0}I-J\\\ u^{*}\end{bmatrix}$ could have rank $n-2$ at
most, thus violating the assumption of observability.
(ii) From (1) and the fact that $(\lambda I-J)$ is invertible
($\lambda\notin\sigma(J))$, we have
$\frac{\det(\lambda I-A)}{\det(\lambda I-J)}=\lambda-a+u^{*}(\lambda
I-J)^{-1}u.$
Thus, if $\sigma(J)\cap\sigma(A)\neq\emptyset$, then for some
$\lambda_{0}\in\sigma(J)\cap\sigma(A)$, there will be a
$(\lambda-\lambda_{0})$ cancelling on the left, leaving us with a polynomial
of degree $n-2$ in the denominator. Hence, there are at most $n-2$ poles but
on the right-hand side we know there should be exactly $n-1$ poles, by the
observability. This contradiction proves (ii).
(iii) Let $\lambda_{0}\in\sigma(A)$, i.e., $Ax=\lambda_{0}x$, with
$x=\begin{bmatrix}x_{1}\\\ x_{2}\end{bmatrix}\neq 0$. For
$A=\begin{bmatrix}J&u\\\ -u^{*}&a\end{bmatrix}$ one obtains
$(\lambda_{0}I-J)x_{1}-ux_{2}=0\quad\textup{and}\quad
u^{*}x_{1}+(\lambda_{0}-a)x_{2}=0.$
If $x_{2}=0$, we have from the first equation that $(\lambda_{0}I-J)x_{1}=0$,
and since $\lambda_{0}\notin\sigma(J)$ by (ii), the matrix $(\lambda_{0}I-J)$ is
invertible, which implies $x_{1}=0$. This is a contradiction, since $x$
is an eigenvector of $A$ belonging to $\lambda_{0}$; hence $x_{2}\neq 0$.
Thus one can solve for $x_{1}$ in terms of $x_{2}$ from the first equation,
which means that the eigenvector corresponding to $\lambda_{0}$ is determined by
$x_{2}$. Therefore $\dim\textup{Ker}(\lambda_{0}I-A)=1$ and hence $A$ is
nonderogatory.
$\Box$
If $(J,u^{*})$ is not observable, the problem can be reduced to a situation
where observability is satisfied.
###### Proposition 2.2
Assume $(J,u^{*})$ is not observable, and let
$\mathcal{N}=\cap_{j=0}^{n-2}{\rm Ker\,}u^{*}J^{j}$ be the unobservable
subspace. With respect to
$\mathbb{C}^{n-1}=\mathcal{N}\oplus\mathcal{N}^{\perp}$, write $J=J_{1}\oplus
J_{2}$ as well as $u^{*}=\begin{bmatrix}0&u_{2}^{*}\end{bmatrix}$. Then
$\sigma(A)\cap\sigma(J)=\sigma(J_{1})$, and
$\sigma(A)=\sigma(J_{1})\cup\sigma\left(\begin{bmatrix}J_{2}&u_{2}\\\
-u_{2}^{*}&a\end{bmatrix}\right).$
Proof. Using the Kalman decomposition into the unobservable space and its
orthogonal complement we can reduce to a situation where observability is
satisfied, as the pair $(J_{2},u_{2}^{*})$ is observable. Writing
$A=\begin{bmatrix}J_{1}&0&0\\\ 0&J_{2}&u_{2}\\\ 0&-u_{2}^{*}&a\end{bmatrix}$
the statements in the proposition easily follow.
$\Box$
## 3 Interlacing
In this section the main result of this article is presented. It describes how
the eigenvalues of the matrices $A$ and $J$ interlace with each other,
together with the sign corresponding to the eigenvalues of the matrix $A$.
From Proposition 2.1 we have that the eigenvalues of $A$ are precisely the $n$
complex zeros of the function $\lambda-a+u^{*}(\lambda I-J)^{-1}u$ and this
function has $n-1$ real distinct poles. Denote the eigenvalues of $J$ by
$\mu_{n-1}<\mu_{n-2}<\cdots<\mu_{1}$ and introduce
$g(\lambda)=-u^{*}(\lambda I-J)^{-1}u.$
Since $J=J^{*}$, there is a unitary matrix $V$ such that $V^{*}JV=D={\rm
diag\,}(\mu_{j})_{j=1}^{n-1}$. Hence, one can write
$g(\lambda)=-u^{*}V^{*}(\lambda
I-D)^{-1}Vu=-\sum_{j=1}^{n-1}\frac{d_{j}}{\lambda-\mu_{j}},$
where $d_{j}=|(Vu)_{j}|^{2}>0$ by the observability of the pair $(J,u^{*})$. Note that
$g^{\prime}(\lambda)=\sum_{j=1}^{n-1}\frac{d_{j}}{(\lambda-\mu_{j})^{2}}$ is
positive where it is defined, and that
$\lim_{\lambda\to\pm\infty}g(\lambda)=0$. Finally, the eigenvalues of $A$,
which we shall denote by $\lambda_{j}$, $j=1,\ldots,n$, are the solutions of
$h(\lambda)-g(\lambda)=0$, where $h(\lambda)=\lambda-a$.
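Before stating the main theorem, this mechanism can be explored numerically: the real eigenvalues of $A$ are the solutions of $h(\lambda)=g(\lambda)$, and a sign change of $h-g$ between consecutive poles of $g$ locates them. The sketch below is our own toy example with $J={\rm diag\,}(0,2)$, $u=(1,1)^{T}$ and $a=1$, for which $\det(\lambda I-A)=(\lambda-1)(\lambda^{2}-2\lambda+2)$: there is one real eigenvalue between the two poles and a complex conjugate pair $1\pm i$, matching the pattern of case 2a below.

```python
mu = [0.0, 2.0]                 # eigenvalues of J (the poles of g)
d  = [1.0, 1.0]                 # d_j = |(Vu)_j|^2, here V = I and u = (1, 1)
a  = 1.0

def f(lam):
    """h(lam) - g(lam) = lam - a + sum_j d_j/(lam - mu_j); its real zeros
    are the real eigenvalues of A."""
    return lam - a + sum(dj / (lam - mj) for dj, mj in zip(d, mu))

def bisect(lo, hi, steps=200):
    """Bisection for a root of f on (lo, hi), assuming a sign change."""
    flo = f(lo)
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)   # keep the bracket around the sign change
        else:
            hi = mid
    return 0.5 * (lo + hi)

# f -> +inf as lam -> 0^+ and f -> -inf as lam -> 2^-, so exactly one real
# eigenvalue of A lies between the two poles of g.
lam_real = bisect(1e-6, 2.0 - 1e-6)
```

Bisection on $(0,2)$ converges to $\lambda=1$, in agreement with the factorization of the characteristic polynomial.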
###### Theorem 3.1
Let $A$ be an $H$-selfadjoint matrix with $H=I_{n-1}\oplus(-1)$ (Hermitian and
invertible) and let $A=\begin{bmatrix}J&u\\\ -u^{*}&a\end{bmatrix}$ with
$a\in\mathbb{R}$, $u\in\mathbb{C}^{n-1}$ and $J=J^{*}$. If the pair
$(J,u^{*})$ is observable, then the conditions of Proposition 2.1 hold.
Furthermore, let $\mu_{n-1}<\mu_{n-2}<\cdots<\mu_{1}$ denote the $n-1$
distinct real eigenvalues of $J$ and let $\lambda_{1},\ldots,\lambda_{n}$ be
the eigenvalues of $A$. Then the eigenvalues of $A$ and $J$ interlace in the
following possible ways coupled with the appropriate sign for $\varepsilon$.
* 1a
$\lambda_{n}<\lambda_{n-1}<\mu_{n-1}<\lambda_{n-2}<\cdots<\lambda_{1}<\mu_{1}$,
where the sign $\varepsilon=-1$ is associated with the Jordan block of size
$1$ for the eigenvalue $\lambda_{n}$;
* 1b
$\lambda_{n}=\lambda_{n-1}<\mu_{n-1}<\lambda_{n-2}<\cdots<\lambda_{1}<\mu_{1}$,
where the sign $\varepsilon=-1$ is associated with a Jordan block of size $2$
with eigenvalue $\lambda_{n}=\lambda_{n-1}$;
* 2a
$\mu_{n-1}<\lambda_{n-2}<\cdots<\lambda_{1}<\mu_{1}$, and
$\lambda_{n}=\overline{\lambda_{n-1}}\notin\mathbb{R}$;
* 3a
$\mu_{n-1}<\lambda_{n}<\mu_{n-2}<\cdots<\lambda_{3}<\mu_{1}<\lambda_{2}<\lambda_{1}$,
where the sign $\varepsilon=-1$ is associated with the Jordan block of size
$1$ for eigenvalue $\lambda_{1}$;
* 3b
$\mu_{n-1}<\lambda_{n}<\mu_{n-2}<\cdots<\lambda_{3}<\mu_{1}<\lambda_{2}=\lambda_{1}$,
where the sign $\varepsilon=1$ is associated with a Jordan block of size $2$
with eigenvalue $\lambda_{1}=\lambda_{2}$;
* 4a
$\mu_{n-1}<\lambda_{n}<\mu_{n-2}<\cdots<\mu_{j+1}<\lambda_{j+2}<\lambda_{j+1}<\lambda_{j}<\mu_{j}<\cdots<\mu_{2}<\lambda_{1}<\mu_{1}$,
where the sign $\varepsilon=-1$ is associated with the Jordan block of size
$1$ for eigenvalue $\lambda_{j+1}$;
* 4b
$\mu_{n-1}<\lambda_{n}<\mu_{n-2}<\cdots<\mu_{j+1}<\lambda_{j+2}=\lambda_{j+1}<\lambda_{j}<\mu_{j}<\cdots<\mu_{2}<\lambda_{1}<\mu_{1}$,
where the sign $\varepsilon=1$ is associated with the Jordan block of size $2$
with eigenvalue $\lambda_{j+2}=\lambda_{j+1}$;
* 4c
$\mu_{n-1}<\lambda_{n}<\mu_{n-2}<\cdots<\mu_{j+1}<\lambda_{j+2}<\lambda_{j+1}=\lambda_{j}<\mu_{j}<\cdots<\mu_{2}<\lambda_{1}<\mu_{1}$,
where the sign $\varepsilon=-1$ is associated with the Jordan block of size
$2$ with eigenvalue $\lambda_{j+1}=\lambda_{j}$;
* 4d
$\mu_{n-1}<\lambda_{n}<\mu_{n-2}<\cdots<\mu_{j+1}<\lambda_{j+2}=\lambda_{j+1}=\lambda_{j}<\mu_{j}<\cdots<\mu_{2}<\lambda_{1}<\mu_{1}$
where the sign $\varepsilon=1$ is associated with a Jordan block of size $3$
with eigenvalue $\lambda_{j+2}=\lambda_{j+1}=\lambda_{j}$.
Proof. The interlacing of the eigenvalues of $A$ and $J$, i.e., the way
$h(\lambda)$ and $g(\lambda)$ intersect, follows from a result similar to
Lemma 3.3 in [6]. Since $A$ is nonderogatory, in cases 1b, 3b, 4b and
4c there is a Jordan block of size two corresponding to the two eigenvalues
that coincide, while in case 4d there is a Jordan block of size three
corresponding to the three eigenvalues that coincide. See Section 5.6 in [2].
We would like to know in the cases 1a, 3a and 4a in the list, which one of the
eigenvalues has the negative sign in the sign characteristic, and in cases 1b,
3b, 4b and 4c what the sign corresponding to the Jordan block of size two is.
In order to answer these questions we first recall one of the descriptions of
the sign characteristic, see Section 5.1 in [2]. First note that for every
real $\lambda$, the matrix $\lambda H-HA$ is an $n\times n$ Hermitian matrix,
and hence has $n$ real eigenvalues, which are denoted by
$\nu_{1}(\lambda),\ldots,\nu_{n}(\lambda)$. It turns out that these can be
chosen to be analytic functions of the real variable $\lambda$, and this will
be done for now. Let $\lambda_{1},\ldots,\lambda_{r}$ be the real eigenvalues
of $A$, and write for every $i=1,\ldots,r$ and $j=1,\ldots,n$ the function
$\nu_{j}(\lambda)$ as
$\nu_{j}(\lambda)=(\lambda-\lambda_{i})^{m_{ij}}\rho_{ij}(\lambda),$
where $\rho_{ij}(\lambda_{i})$ is a nonzero real number. Then the nonzero
numbers among $m_{i1},\ldots,m_{in}$ are the sizes of the Jordan blocks of $A$
corresponding to $\lambda_{i}$, and the sign in the sign characteristic of the
pair $(A,H)$ corresponding to the block of size $m_{ij}\not=0$ is the sign of
the real number $\rho_{ij}(\lambda_{i})$. In particular,
if $m_{ij}=1$, then $\rho_{ij}(\lambda_{i})=\nu_{j}^{\prime}(\lambda_{i})$,
if $m_{ij}=2$, then
$\rho_{ij}(\lambda_{i})=\frac{1}{2}\nu_{j}^{\prime\prime}(\lambda_{i})$.
In order to find the signs in the particular situation we have at hand, we
argue as follows. The fact that $\nu(\lambda)$ is an eigenvalue of $\lambda
H-HA$ implies that
$\displaystyle 0$ $\displaystyle=\det(\nu I-(\lambda
H-HA))=\det\begin{bmatrix}\nu I-\lambda I+J&u\\\
u^{*}&\nu+\lambda-a\end{bmatrix}$
$\displaystyle=\det\left(\begin{bmatrix}(\nu-\lambda)I+J&0\\\
u^{*}&1\end{bmatrix}\begin{bmatrix}I&((\nu-\lambda)I+J)^{-1}u\\\
0&(\nu+\lambda)-a-u^{*}((\nu-\lambda)I+J)^{-1}u\end{bmatrix}\right)$
$\displaystyle=\det((\nu-\lambda)I+J)\cdot\left((\nu+\lambda)-a-u^{*}((\nu-\lambda)I+J)^{-1}u\right)$
$\displaystyle=\det((\nu-\lambda)I+J)\cdot\left((\nu+\lambda)-a+u^{*}((\lambda-\nu)I-J)^{-1}u\right).$
It follows that $\nu$ satisfies
$(\nu+\lambda)-a+u^{*}((\lambda-\nu)I-J)^{-1}u=0$, in other words,
$h(\nu+\lambda)-g(\lambda-\nu)=0,$
or more explicitly,
$\nu+\lambda-a=\sum_{j=1}^{n-1}\frac{d_{j}}{\lambda-\nu-\mu_{j}}.$
This determines $\nu$ implicitly as a function of $\lambda$. For fixed
$\lambda$ we know that there have to be $n$ real solutions. Introduce
$H(\lambda,\nu)=h(\lambda+\nu)-g(\lambda-\nu).$
When $m_{ij}\not=0$, we have $\nu_{j}(\lambda_{i})=0$ and
$H(\lambda_{i},0)=0$. Applying the implicit function theorem we obtain
$\nu_{j}^{\prime}(\lambda_{i})=-\left({\frac{\partial
H}{\partial\lambda}}/{\frac{\partial
H}{\partial\nu}}\right)\rfloor_{\lambda=\lambda_{i}\atop\nu=0}.$
Now
$\displaystyle\frac{\partial
H}{\partial\lambda}\rfloor_{\lambda=\lambda_{i}\atop\nu=0}=1-g^{\prime}(\lambda-\nu)\rfloor_{\lambda=\lambda_{i}\atop\nu=0}=1-g^{\prime}(\lambda_{i}),$
$\displaystyle\frac{\partial
H}{\partial\nu}\rfloor_{\lambda=\lambda_{i}\atop\nu=0}=1+g^{\prime}(\lambda-\nu)\rfloor_{\lambda=\lambda_{i}\atop\nu=0}=1+g^{\prime}(\lambda_{i}),$
so
$\nu_{j}^{\prime}(\lambda_{i})=-\left(\frac{1-g^{\prime}(\lambda_{i})}{1+g^{\prime}(\lambda_{i})}\right).$
Recall that $g^{\prime}(\lambda)>0$ whenever $\lambda$ is not one of the
$\mu_{j}$’s. Thus, if $m_{ij}=1$ the sign of $\nu_{j}^{\prime}(\lambda_{i})$,
and therefore also the sign attached to $\lambda_{i}$, is equal to the sign of
$g^{\prime}(\lambda_{i})-1$. We conclude that for an eigenvalue of
multiplicity one, the sign in the sign characteristic is determined by how $h$
and $g$ intersect at $\lambda_{i}$ as follows:
if $g^{\prime}(\lambda_{i})>1$ then the sign is $+1$ at eigenvalue
$\lambda_{i}$;
if $g^{\prime}(\lambda_{i})<1$ then the sign is $-1$ at eigenvalue
$\lambda_{i}$.
It remains to consider the signs of the Jordan blocks of order two. For that
we first recall that $\nu(\lambda)$ satisfies $H(\lambda,\nu(\lambda))=0$.
Taking the first derivative we have
$\frac{\partial H}{\partial\lambda}+\frac{\partial
H}{\partial\nu}\cdot\nu^{\prime}(\lambda)=0,$
and differentiating this again gives
$\frac{\partial^{2}H}{\partial\lambda^{2}}+2\frac{\partial^{2}H}{\partial\lambda\partial\nu}\cdot\nu^{\prime}(\lambda)+\frac{\partial^{2}H}{\partial\nu^{2}}\cdot\nu^{\prime}(\lambda)^{2}+\frac{\partial
H}{\partial\nu}\cdot\nu^{\prime\prime}(\lambda)=0.$
A Jordan block of size two corresponds to a situation where
$\nu_{j}(\lambda_{i})=0$ and $\nu^{\prime}_{j}(\lambda_{i})=0$, because $h$
and $g$ touch at $\lambda_{i}$. Solving the above equation for
$\nu_{j}^{\prime\prime}(\lambda_{i})$, we have
$\nu_{j}^{\prime\prime}(\lambda_{i})=-\left(\frac{\partial^{2}H}{\partial\lambda^{2}}/\frac{\partial
H}{\partial\nu}\right)\rfloor_{\lambda=\lambda_{i}\atop\nu=0}=-\frac{g^{\prime\prime}(\lambda_{i})}{1+g^{\prime}(\lambda_{i})}.$
Hence the sign corresponding to a block of size two with eigenvalue
$\lambda_{i}$ is the sign of $-g^{\prime\prime}(\lambda_{i})$. A little
analysis shows that if the graph of $g$ locally around $\lambda_{i}$ lies
above the graph of $h$, then the sign is $-1$ at eigenvalue $\lambda_{i}$,
while if the graph of $g$ locally around $\lambda_{i}$ lies below the graph of
$h$, then the sign is $+1$ at eigenvalue $\lambda_{i}$. $\Box$
## 4 Examples
As an example, consider the case where $J$ has eigenvalues $1,2,3,4$, where
the $d_{j}$’s are given by $0.01,0.02,0.001,1$, respectively. Thus,
$g(\lambda)=-\left(\frac{0.01}{\lambda-1}+\frac{0.02}{\lambda-2}+\frac{0.001}{\lambda-3}+\frac{1}{\lambda-4}\right).$
In Figure 1, the functions $g(\lambda)$ and $h(\lambda)=\lambda-a$ are plotted
for several values of $a$, namely $a=0,\,0.4591,\ 0.8319,\ 1.2631,\ 1.7485,\
2.0087,\ 6.0097,\ 6.5$.
Figure 1: On the left, some possible configurations of $h(\lambda)$ and
$g(\lambda)$, on the right $g^{\prime}(\lambda)$ and the line $y=1$ to
determine the points where there is a Jordan block of size two.
We can draw the following conclusions regarding the sign of the eigenvalues
depending on the choice of $a$:
1. 1.
In case 1a ($a=0$) $\lambda_{n}$ comes with a negative sign, in case 3a
($a=6.5$) $\lambda_{1}$ comes with a negative sign, and in case 4a (this would
occur, e.g., for $a=1$) $\lambda_{j+1}$, the middle one of the three crossings
in $(\mu_{j+1},\mu_{j})$, comes with a negative sign.
2. 2.
In case 1b ($a\approx 0.4591$) the sign corresponding to the block of order
two with eigenvalue $\lambda_{n}=\lambda_{n-1}\approx 0.8934$ is $-1$, in case
3b ($a\approx 6.0097$) the sign corresponding to the block of order two with
eigenvalue $\lambda_{1}=\lambda_{2}\approx 5.00155$ is $+1$, in case 4b
($a\approx 0.8319$ and $a\approx 1.7485$) the sign corresponding to the block
of order two with eigenvalue $\lambda_{j+2}=\lambda_{j+1}\approx 1.10815$,
respectively, $\lambda_{j+2}=\lambda_{j+1}\approx 2.1699$, is $+1$, and
finally, in case 4c ($a\approx 1.2631$ and $a\approx 2.0087$) the sign
corresponding to the block of order two with eigenvalue
$\lambda_{j+1}=\lambda_{j}\approx 1.83895$, respectively,
$\lambda_{j+1}=\lambda_{j}\approx 2.91185$, is $-1$.
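These sign claims can be spot-checked directly from the derivative formulas of the proof: at a double eigenvalue the sign is that of $-g^{\prime\prime}(\lambda_{i})$. A quick independent check at the approximate double eigenvalues quoted above (not the paper's computation):

```python
# Spot-check of the double-eigenvalue sign rule for the Section 4 example,
# using the quoted approximate eigenvalue locations.
mu = [1.0, 2.0, 3.0, 4.0]
d = [0.01, 0.02, 0.001, 1.0]

def gpp(lam):
    """g''(lambda) = -2 * sum_j d_j / (lambda - mu_j)**3."""
    return -2.0 * sum(dj / (lam - mj) ** 3 for dj, mj in zip(d, mu))

# Case 1b (a ~ 0.4591): double eigenvalue ~ 0.8934, claimed sign -1.
assert -gpp(0.8934) < 0
# Case 3b (a ~ 6.0097): double eigenvalue ~ 5.00155, claimed sign +1.
assert -gpp(5.00155) > 0
```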
In general, we can formulate the following when we view $a$ as a parameter.
There is a value $a^{-}_{1}$ such that for $a<a^{-}_{1}$ we are in situation
1a, while for $a=a^{-}_{1}$ we are in situation 1b. Then there is a value
$a^{-}_{2}$ such that for $a^{-}_{1}<a<a^{-}_{2}$ we are in case 2, while for
$a=a^{-}_{2}$ we have either 4b for some $j$ between $n-2$ and $1$, or 4d, or
3b. From the other end, there is an $a^{+}_{1}$ such that for $a>a^{+}_{1}$ we
are in the situation 3a, while for $a=a^{+}_{1}$ we are in the situation 3b.
Finally, there is an $a^{+}_{2}$ such that for $a^{+}_{2}<a<a^{+}_{1}$ we are
in the situation 2, while for $a=a^{+}_{2}$ we are either in the situation 4c,
or in the situation 4d, or in 1b. Eventually, for large positive $a$ there is
one eigenvalue moving to $+\infty$, while there are $n-1$ eigenvalues
approximating the eigenvalues of $J$ from the right.
To check the latter statement, consider the equation
$G(\lambda,a)=h(\lambda)-g(\lambda)=0$ as an equation determining $\lambda$
locally as a function of $a$. Then by the implicit function theorem
$\lambda_{i}^{\prime}(a)=-\left(\frac{\partial G}{\partial a}/\frac{\partial
G}{\partial\lambda}\right)\rfloor_{\lambda=\lambda_{i}}=\frac{1}{1-g^{\prime}(\lambda_{i})}.$
For large values of $a$ the derivative satisfies
$g^{\prime}(\lambda_{i})>1$ for all but the largest eigenvalue (as can be seen
from the graph of $g$), so $\lambda_{i}$ is decreasing for $i=2,\ldots,n$ and
increasing for $i=1$. Using
the main result of [7] we obtain that the eigenvalues of $A$, as functions of
$a$, approximate the eigenvalues of $J$ as $a\to\infty$, with the exception of
one, which goes to plus infinity.
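This limiting behaviour is easy to confirm numerically for the Section 4 data (an independent check, not the paper's computation): for large $a$, all but one eigenvalue sit just to the right of the $\mu_{j}$, while the remaining one is close to $a$.

```python
# Numerical check of the large-a behaviour for mu = (1,2,3,4),
# d = (0.01, 0.02, 0.001, 1): n-1 eigenvalues of A approach the
# eigenvalues of J from the right, and one eigenvalue grows with a.
mu = [1.0, 2.0, 3.0, 4.0]
d = [0.01, 0.02, 0.001, 1.0]

def f(lam, a):
    """Secular function whose zeros are the eigenvalues of A."""
    return lam - a + sum(dj / (lam - mj) for dj, mj in zip(d, mu))

def bisect(lo, hi, a, tol=1e-12):
    """Root of f(., a) in (lo, hi), assuming a sign change on the bracket."""
    flo = f(lo, a)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid, a) * flo <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid, a)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

a = 1000.0
# One root just right of each pole mu_j (f -> +inf at mu_j+, f < 0 nearby),
# and one large root near lam = a.
near = [bisect(mj + 1e-12, mj + 0.5, a) for mj in mu]
big = bisect(a - 1.0, a + 1.0, a)

assert all(0 < r - mj < 1e-2 for r, mj in zip(near, mu))
assert abs(big - a) < 1.0
```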
The following example demonstrates a third order block when we take
$\mu_{1}=1,\mu_{2}=-1$ and $u=[1/\sqrt{2},1/\sqrt{2}]$. In this case we have
$A=\begin{bmatrix}1&0&1/\sqrt{2}\\\ 0&-1&1/\sqrt{2}\\\
-1/\sqrt{2}&-1/\sqrt{2}&a\end{bmatrix}.$
For $a=0$ there is a Jordan block of order three at the eigenvalue $0$. In the
left-hand graph in Figure 2 the eigenvalues of $A$ are plotted for varying
values of $a$. The dots in the circular shape are the complex eigenvalues that
occur as the graph of $h(\lambda)$ moves from left to right over the curve of
$g(\lambda)$ as a function of $a$.
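The triple eigenvalue at $a=0$ can be confirmed by a hand expansion of the characteristic polynomial in this $3\times 3$ case, $\det(\lambda I-A)=(\lambda-\mu_{1})(\lambda-\mu_{2})(\lambda-a)+d_{2}(\lambda-\mu_{1})+d_{1}(\lambda-\mu_{2})$ with $d_{j}=u_{j}^{2}$, which collapses to $\lambda^{3}$. A quick numerical sanity check:

```python
# Check that for a = 0, mu = (1, -1), u = (1/sqrt2, 1/sqrt2) the
# characteristic polynomial of A equals lambda**3, i.e. A has a triple
# eigenvalue at 0 (carried by a single Jordan block of order three,
# since A is nonderogatory).
import math

mu = [1.0, -1.0]
s = 1.0 / math.sqrt(2.0)
d = [s * s, s * s]        # d_j = u_j**2 = 1/2
a = 0.0

def charpoly(lam):
    # det(lambda I - A) expanded by hand for the 3x3 case.
    return ((lam - mu[0]) * (lam - mu[1]) * (lam - a)
            + d[1] * (lam - mu[0]) + d[0] * (lam - mu[1]))

# A cubic agreeing with lambda**3 at four points equals lambda**3 identically.
for lam in (-2.0, -1.5, 0.5, 3.0):
    assert abs(charpoly(lam) - lam ** 3) < 1e-12
```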
Figure 2: The situation with a Jordan block of order three.
Acknowledgement. The work of the first and third author is supported in part
by the DSI-NRF Centre of Excellence in Mathematical and Statistical Sciences
(CoE-MaSS, Grant Number 2022-012-ALG-ILAS). The work of the second author is
based on research supported in part by the National Research Foundation of
South Africa (Grant Number 145688).
## References
* [1] Y. Bolshakov, C.V.M. van der Mee, A.C.M. Ran, B. Reichstein and L. Rodman. Polar decompositions in finite dimensional indefinite scalar product spaces: General theory, Linear Algebra Appl., 216, (1997), 91–141.
* [2] I. Gohberg, P. Lancaster and L. Rodman. Indefinite Linear Algebra and Applications, Birkhäuser Verlag, Basel, 2005.
* [3] W.H. Haemers. Interlacing Eigenvalues and Graphs. Linear Algebra Appl., 226–228, (1995), 593–616.
* [4] C. Heij, A.C.M. Ran, F. van Schagen. _Introduction to Mathematical Systems Theory,_ 2nd edition, Birkhäuser Verlag, 2020.
* [5] R.A. Horn, C.R. Johnson. _Matrix Analysis_ , 2nd edition, Cambridge University Press, Cambridge, 2013.
* [6] W.-R. Xu, N. Bebiano, G.-L. Chen. A reduction algorithm for reconstructing periodic Jacobi matrices in Minkowski spaces. _Applied Math. and Comp_. 419, (2022), 126853.
* [7] A.C.M. Ran, M. Wojtylak. Global properties of eigenvalues of parametric rank one perturbations for unstructured and structured matrices II. _Complex Anal. Oper. Theory_ 16:91, (2022).
# Security and Privacy-Preservation of IoT Data in Cloud-Fog Computing
Environment
Jatinder Kumar
Department of Computer Applications
National Institute of Technology
Kurukshetra, India
<EMAIL_ADDRESS>
Ashutosh Kumar Singh
Department of Computer Applications
National Institute of Technology
Kurukshetra, India
<EMAIL_ADDRESS>
The authors would like to thank the National Institute of Technology
Kurukshetra, India for financially supporting this research work.
###### Abstract
IoT is the fastest-growing technology with a wide range of applications in
various domains. IoT devices generate data from a real-world environment every
second and transfer it to the cloud because of limited storage at the edge. An
outsourced cloud is a solution to this storage problem, but storing data on it
can expose users' privacy. Therefore, we propose a Private Data Storage model
that stores IoT data on the outsourced cloud with privacy preservation. Fog
nodes at the edge are used for data partitioning and encryption. The
partitioned and encrypted data is aggregated on the outsourced cloud with the
help of homomorphic encryption. The introduced model also supports secure
query processing and data access on the outsourced cloud.
_Keywords_ IoT (Internet of Things), HE (Homomorphic Encryption), Fog
computing, privacy, security, outsourced cloud
## 1 Introduction
With the advancement and deployment of the internet all over the globe, the
Internet of Things (IoT) revolutionized our daily lifestyle by providing
flexibility and convenience [1, 2, 3, 4, 5]. IoT connects billions of machines
with the internet and is capable of collecting and sharing real-time data [6,
7, 8]. Major applications of IoT devices include smart homes [9, 10], smart
cities [11], smart healthcare [12, 13, 14], and smart grid [15, 16, 17, 18].
Sensors of IoT devices collect data from the environment and make them
digitally intelligent devices [19, 20, 21, 22, 23, 24, 25]. For example, a smart
vehicle’s sensors sense the surrounding traffic every millisecond [26,
27, 28]. By analyzing this information, the vehicle detects an empty road and
decides whether to speed up or slow down [29, 30]. IoT makes the world more responsive
and smarter by merging the digital and physical worlds [31, 32, 33, 34].
There is no doubt that IoT devices bring great benefits [35, 36]. However,
they also come with issues such as low computation power and limited storage
[37, 38, 39]. Local machines are not able to store all the data that IoT
generates [40, 41, 42]. This problem may be solved by cloud computing after
outsourcing the data. But the cloud is not secure from
the point of privacy. Many attackers may access the data from the clouds [43,
44]. One of the famous attacks was the Apple iCloud breach [45], where private
photos of 500 celebrities leaked. Due to such attacks, outsourced clouds are
not always trustworthy [46, 47, 48, 49, 50, 51]. The sheer number of IoT
devices means that more security and protection are needed [52, 53, 54]. When
the whole dataset is stored on the cloud, attackers who breach its security
gain access to the complete data [55, 56, 57, 58, 59, 60]. Tian Wang et al.
[61] proposed a scheme in which the data is not stored only on the cloud: it
is partitioned and stored at different levels so that an attacker cannot
access the complete data. IoT devices cannot partition data before storing it
on the outsourced cloud because of their low computation power. We introduce
fog nodes between the cloud and IoT
devices to accomplish this task. The only factor differentiating cloud and fog
computing is the availability of fog nodes close to the edge device. The
concept of fog computing was introduced by Cisco [62] and can be abbreviated
as “From cOre to edGe” [63, 64, 65, 66]. Sitting between edge devices and
cloud nodes, fog nodes can compute and transmit data to different levels.
Providing fog computing near IoT devices improves latency and enables
real-time processing of the data these devices generate. The fog node’s
computational capabilities can be used for data partitioning and encryption
before sending them to the outsourced cloud [67]. Storing data on the
outsourced cloud in encrypted form, however, makes computing statistics a
problematic task. Some recent works proposed schemes to analyze the data on
the outsourced cloud [68, 69, 70, 71, 72], but they have not solved these
issues technically.
Cloud computing is defined as the provision of computing resources over the
internet [73, 74, 75, 76, 77, 78]. These resources can be hardware or
software, according to the users’ requirements; the architecture of cloud
computing, organized by services and deployment, is shown in Figure 1 [79, 80,
81, 82]. According to its services, cloud computing can be divided into three
main categories [83, 84, 85]: Infrastructure-as-a-Service (IaaS),
Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). On the basis
of deployment, cloud computing is broadly classified into four main
categories, i.e., public, private, hybrid, and community cloud [86, 87, 88,
89, 90, 91, 92, 93, 94, 95].
Figure 1: Cloud computing classification
The benefits of cloud computing include availability, scalability, cost
efficiency, mobility, disaster recovery, security, and so on [96, 97, 98, 99,
100, 101, 102, 103, 104, 105]. These benefits are provided on a pay-as-you-go
basis [106, 107]: users pay only for the services they actually use [108, 109,
110, 111, 112, 113, 114, 115]. Alongside these benefits, cloud computing also
faces challenges [116, 117, 118, 119, 120], among them data privacy, data
security, load balancing, power utilization, resource utilization, and
resource migration [121, 122, 123, 124, 125, 126, 127, 128].
The smart grid uses the storage and real-time analysis services of cloud
computing. Since the cloud is considered semi-trusted, i.e., honest-but-curious
[129, 130, 131, 132], encryption algorithms are used to protect the privacy of
customers at the third party (the outsourced cloud) [133]. Homomorphic
encryption is used for secure and privacy-preserving aggregation of IoT device
data: the public key of an IoT device is used to encrypt the private data,
whereas the private key is used to recover the original data from the
ciphertext.
In this work, our objective is to achieve the privacy-preservation of data
generated by IoT devices and secure query processing on the outsourced cloud.
We propose a privacy-preserving Private Data Storage (PDS) model for storing
data of IoT devices on outsourced clouds. We use fog nodes for the computation
and transmission of data at the edge side. Two outsourced clouds are used to
address the privacy and storage problems. Homomorphic encryption is used to
aggregate the encrypted data at the clouds. Then a secure device data query scheme
is proposed for query processing at the cloud on encrypted data.
## 2 Related Work
Andriopoulou et al. [134] proposed a fog-IoT and cloud-integrated architecture
that provides computing services at the end-device side. Healthcare IoT
devices collect the sensor data and aggregate it at fog nodes. Fog computing
improves latency for the transmission of data to the cloud servers, so the
architecture is well suited to healthcare services. Usman et al. [135]
proposed a secure IoT architecture having a lightweight encryption algorithm
to encrypt IoT data. It uses a 64-bit long key and generates a 64-bit block
cipher after 5 rounds of encryption to provide security. An Attribute-Based
Encryption (ABE) scheme over data of IoT devices is presented in [136]. Key-
Policy Attribute-Based Encryption (KP-ABE) and Ciphertext-Policy Attribute-
Based Encryption (CP-ABE) schemes are evaluated on different mobile IoT
devices that achieve data privacy. Bhandari and Kirubanand [137] presented an
advanced cryptographic solution with both asymmetric and symmetric encryption.
The keys are generated using elliptic curve cryptography. This model makes IoT
data transmission to the cloud secure. Gentry [138] introduced a homomorphic
encryption (HE) scheme that allows an arbitrary number of addition and
multiplication operations on ciphertexts. However, the computational
complexity of such schemes is too high. A partial HE scheme that supports
ECC is used by [139, 140] on the integrated framework of IoT and cloud.
ECC-based HE uses a small key size and improves security and ambiguity. Ramesh
and Govindarasu [141] presented a framework called proxy reciphering as a
service, which uses FHE and chameleon hash functions for privacy preservation
even after device keys are compromised. In this work, data is encrypted first
with the AES algorithm and then transformed into homomorphic ciphertexts. For
data integration at the cloud, an addition operation alone can be sufficient.
The Paillier cryptosystem, proposed by Pascal Paillier [142], supports an
arbitrary number of addition operations and has lower computational complexity
than FHE.
Several authors proposed secure query processing in WSNs to access users’ data
on the cloud [143, 144, 145, 146]. Yuan et al. present an architecture that
enables encrypted query processing over IoT data [147]. Homomorphic encryption
with random diagonal elliptic curve cryptography, integrated with a
Multinomial Smoothing Naive Bayes (MSNB) model, is proposed by [140] for the
IoT health cloud system. The MSNB model is used for disease prediction with
less processing time and low computation cost. In our proposed work, we use
Paillier HE for adding the ciphertexts at the outsourced cloud in the proposed
PDS model. Our work provides both a privacy-preserving model and query
processing over encrypted data.
## 3 Proposed Model
The architecture of the proposed model is depicted in Figure 2. The proposed
model consists of three main entities, i.e., IoT devices, fog nodes, and the
cloud. IoT devices are distributed into $n$ regions ($R$), and each region
contains IoT devices, such as $R_{1}$ = {$I_{11}$, $I_{12}$, …, $I_{1p}$},
$R_{2}$ = {$I_{21}$, $I_{22}$, …, $I_{2q}$}, and $R_{n}$ = {$I_{n1}$,
$I_{n2}$, …, $I_{nr}$}.
Figure 2: Proposed Model
These devices generate data after some period of time ($t$); for example,
$I_{11}$ generates data ($D_{11}^{t}$) at time $t$. The data report of these
devices ($D^{t}_{Ri}$) = {$D_{i1}^{t}$, $D_{i2}^{t}$, …, $D_{ix}^{t}$} is
encrypted with Paillier homomorphic encryption, and the encrypted report
($Enc(D^{t}_{Ri})$) = {$Enc(D_{i1}^{t})$, $Enc(D_{i2}^{t})$, …,
$Enc(D_{ix}^{t})$} of the IoT devices is forwarded to the corresponding fog
node ($FN$). The fog node aggregates the IoT device data at different time
intervals ($t$ and $\Delta t$) using the homomorphic property and computes
$Enc(D^{t+\Delta t}_{Ri})$ = $Enc(D^{t}_{Ri})$ + $Enc(D^{\Delta t}_{Ri})$.
Thereafter, $Enc(D^{t+\Delta t}_{Ri})$ is transferred to the cloud for storage
and analysis. In addition, the proposed model can be used for query
processing. If the data of any IoT device is required, a request for the data
of that particular device is made to the cloud by providing its identity
number. The cloud uses the identity number to fetch the required data from
storage and sends it to the requesting authority. The returned data is
ciphertext under the public key of the device owner, so the private key of the
corresponding device is required to recover the result. By applying the
private key, the requesting authority obtains the required output.
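The aggregation step $Enc(D^{t+\Delta t}_{Ri})=Enc(D^{t}_{Ri})+Enc(D^{\Delta t}_{Ri})$ relies on the additive homomorphism of Paillier, where the ciphertext product decrypts to the plaintext sum. A minimal textbook sketch follows; the tiny fixed primes are for illustration only, and a real deployment needs large random primes and a vetted library:

```python
import math
import random

# Toy Paillier keypair; these primes are far too small for real security.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1)          # phi(n); a valid private exponent here
g = n + 1                        # standard generator choice
mu = pow(lam, -1, n)             # precomputed decryption factor (Python 3.8+)

def encrypt(m):
    """Enc(m) = g^m * r^n mod n^2 with random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Dec(c) = L(c^lam mod n^2) * mu mod n, with L(x) = (x - 1) / n."""
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# Fog-node aggregation of two interval reports D^t and D^{Delta t}:
enc_t, enc_dt = encrypt(41), encrypt(17)
enc_total = (enc_t * enc_dt) % n2    # ciphertext product = plaintext sum
assert decrypt(enc_total) == 41 + 17
```

The fog node and the cloud only ever handle `enc_t`, `enc_dt` and `enc_total`; the private exponent `lam` stays with the device owner, matching the query-processing flow described above.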
## 4 Conclusion
This paper focuses on the privacy-preserving data aggregation of IoT devices.
The IoT devices collect real-time data from the environment at different time
intervals. The collected data is encrypted with Paillier homomorphic
encryption and forwarded to the fog nodes, where the ciphertexts are
aggregated at various times and transferred to the outsourced cloud. The
proposed model can also be used for secure query processing. Future work can
include the authentication of data at the fog node with digital signatures.
## References
* [1] Smruti Rekha Swain, Ashutosh Kumar Singh, and Chung Nan Lee. Efficient resource management in cloud environment. arXiv preprint arXiv:2207.12085, 2022.
* [2] Jitendra Kumar, Ashutosh Kumar Singh, and Rajkumar Buyya. Ensemble learning based predictive framework for virtual machine resource request prediction. Neurocomputing, 397:20–30, 2020.
* [3] Niharika Singh, Jitendra Kumar, Ashutosh Kumar Singh, and Anand Mohan. Privacy-preserving multi-keyword hybrid search over encrypted data in cloud. Journal of Ambient Intelligence and Humanized Computing, pages 1–14, 2022.
* [4] Deepika Saxena, Ishu Gupta, Ashutosh Kumar Singh, and Chung-Nan Lee. A fault tolerant elastic resource management framework towards high availability of cloud services. IEEE Transactions on Network and Service Management, 2022.
* [5] Ishu Gupta, Preetesh K Yadav, Sourav Pareek, Saif Shakeel, and Ashutosh Kumar Singh. Auxiliary informatics system: an advancement towards a smart home environment, 2022.
* [6] Sakshi Chhabra and Ashutosh Kumar Singh. A secure vm allocation scheme to preserve against co-resident threat. International Journal of Web Engineering and Technology, 15(1):96–115, 2020.
* [7] Deepika Saxena and Ashutosh Kumar Singh. Energy aware resource efficient-(eare) server consolidation framework for cloud datacenter. In Advances in communication and computational technology, pages 1455–1464. Springer, 2021.
* [8] I Gupta and AK Singh. A framework for malicious agent detection in cloud computing environment. Int J Adv Sci Technol (IJAST), 135:49–62, 2020.
* [9] Murad Khan, Bhagya Nathali Silva, and Kijun Han. Internet of things based energy aware smart home control system. Ieee Access, 4:7556–7566, 2016.
* [10] Sakshi Chhabra and Ashutosh Kumar Singh. Security enhancement in cloud environment using secure secret key sharing. Journal of Communications Software and Systems, 16(4):296–307, 2020.
* [11] Maryam Pouryazdan, Burak Kantarci, Tolga Soyata, and Houbing Song. Anchor-assisted and vote-based trustworthiness assurance in smart city crowdsensing. IEEE Access, 4:529–541, 2016.
* [12] Anjali Tripathi, Upasana Singh, Garima Bansal, Rishabh Gupta, and Ashutosh Kumar Singh. A review on emotion detection and classification using speech. In Proceedings of the International Conference on Innovative Computing & Communications (ICICC), 2020.
* [13] Xiaodong Lin, Rongxing Lu, Xuemin Shen, Yoshiaki Nemoto, and Nei Kato. Sage: a strong privacy-preserving scheme against global eavesdropping for ehealth systems. IEEE journal on selected areas in communications, 27(4):365–378, 2009.
* [14] Anjali Tripathi, Upasana Singh, Garima Bansal, Rishabh Gupta, and Ashutosh Kumar Singh. Hedcm: Human emotions detection and classification model from speech using cnn. Workshop on Advances in Computational Intelligence at ISIC, 2021.
* [15] Rongxing Lu. Privacy-enhancing aggregation techniques for smart grid communications. Springer, 2016.
* [16] Sukhman Singh, Tarun Kumar Madan, Jitendra Kumar, and Ashutosh Kumar Singh. Stock market forecasting using machine learning: Today and tomorrow. In 2019 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT), volume 1, pages 738–745. IEEE, 2019.
* [17] Ishu Gupta, Niharika Singh, and Ashutosh Kumar Singh. Layer-based privacy and security architecture for cloud data sharing. Journal of Communications Software and Systems, 15(2):173–185, 2019.
* [18] Priya Agarwal, Sloni Mittal, Ankit Tiwari, Ishu Gupta, Ashutosh Kumar Singh, and Bharti Sharma. Authenticating cryptography over network in data. In 2019 International Conference on Intelligent Computing and Control Systems (ICCS), pages 632–636. IEEE, 2019.
* [19] Ishu Gupta and Ashutosh Kumar Singh. Seli: statistical evaluation based leaker identification stochastic scheme for secure data sharing. IET Communications, 14(20):3607–3618, 2020.
* [20] Jatinder Kumar and Ashutosh Kumar Singh. A discussion and comparative study on security and privacy of smart meter data. arXiv preprint arXiv:2111.09227, 2021.
* [21] Ashutosh Kumar Singh and Ishu Gupta. Online information leaker identification scheme for secure data sharing. Multimedia Tools and Applications, 79(41):31165–31182, 2020.
* [22] Ishu Gupta and Ashutosh Kumar Singh. Dynamic threshold based information leaker identification scheme. Information Processing Letters, 147:69–73, 2019.
* [23] Preshi Godha, Swati Jadon, Anshi Patle, Ishu Gupta, Bharti Sharma, and Ashutosh Kumar Singh. Architecture, an efficient routing, applications, and challenges in delay tolerant network. In 2019 International Conference on Intelligent Computing and Control Systems (ICCS), pages 824–829. IEEE, 2019.
* [24] J Nader, A Alsadoon, PWC Prasad, AK Singh, and A Elchouemi. Designing touch-based hybrid authentication method for smartphones. Procedia Computer Science, 70:198–204, 2015.
* [25] Jitendra Kumar and Ashutosh Kumar Singh. Decomposition based cloud resource demand prediction using extreme learning machines. Journal of Network and Systems Management, 28(4):1775–1793, 2020.
* [26] Jitendra Kumar and Ashutosh Kumar Singh. Performance evaluation of metaheuristics algorithms for workload prediction in cloud environment. Applied Soft Computing, 113:107895, 2021.
* [27] Sakshi Chhabra and Ashutosh Kumar Singh. Dynamic resource allocation method for load balance scheduling over cloud data center networks. Journal of Web Engineering, pages 2269–2284, 2021.
* [28] Deepika Saxena and Ashutosh Kumar Singh. Communication cost aware resource efficient load balancing (care-lb) framework for cloud datacenter. Recent Advances in Computer Science and Communications, 12:1–00, 2020.
* [29] Gurdeep Singh Hura, Ashutosh Kumar Singh, and Lau Siong Hoe. Advances in Communication and Computational Technology: Select Proceedings of ICACCT 2019. Springer, 2020.
* [30] Ishu Gupta and Ashutosh Kumar Singh. Guim-smd: guilty user identification model using summation matrix-based distribution. IET Information Security, 14(6):773–782, 2020.
* [31] Ishu Gupta and Ashutosh Kumar Singh. An integrated approach for data leaker detection in cloud environment. Journal of Information Science & Engineering, 36(5), 2020.
* [32] Pooja Tiwari, Simran Mehta, Nishtha Sakhuja, Jitendra Kumar, and Ashutosh Kumar Singh. Credit card fraud detection using machine learning: A study. arXiv preprint arXiv:2108.10005, 2021.
* [33] Harsh Mittal, Deepak Rikhari, Jitendra Kumar, and Ashutosh Kumar Singh. A study on machine learning approaches for player performance and match results prediction. arXiv preprint arXiv:2108.10125, 2021.
* [34] Jitendra Kumar and Ashutosh Kumar Singh. Adaptive learning based prediction framework for cloud datacenter networks’ workload anticipation. Journal of Information Science & Engineering, 36(5), 2020.
* [35] Himanshu Taneja, Ashutosh Kumar Singh, et al. Preserving privacy of patients based on re-identification risk. Procedia Computer Science, 70:448–454, 2015.
* [36] Ishu Gupta, Tarun Kumar Madan, Sukhman Singh, and Ashutosh Kumar Singh. Hisa-smfm: Historical and sentiment analysis based stock market forecasting model. arXiv preprint arXiv:2203.08143, 2022.
* [37] S Chhabra and AK Singh. Dynamic hierarchical load balancing model for cloud data centre networks. Electronics Letters, 55(2):94–96, 2019.
* [38] Ishu Gupta, Vartika Sharma, Sizman Kaur, and Ashutosh Kumar Singh. Pca-rf: An efficient parkinson’s disease prediction model based on random forest classification. arXiv preprint arXiv:2203.11287, 2022.
* [39] Preetesh K Yadav, Sourav Pareek, Saif Shakeel, Jitendra Kumar, and Ashutosh Kumar Singh. Advancements and security issues of iot & cyber physical systems. In 2019 International Conference on Intelligent Computing and Control Systems (ICCS), pages 940–945. IEEE, 2019.
* [40] Deepika Saxena, Ishu Gupta, Jitendra Kumar, Ashutosh Kumar Singh, and Xiaoqing Wen. A secure and multiobjective virtual machine placement framework for cloud data center. IEEE Systems Journal, 2021.
* [41] Ishu Gupta, Sloni Mittal, Ankit Tiwari, Priya Agarwal, and Ashutosh Kumar Singh. Tidf-dlpm: Term and inverse document frequency based data leakage prevention model. arXiv preprint arXiv:2203.05367, 2022.
* [42] Ashutosh Kumar Singh, Deepika Saxena, Jitendra Kumar, and Vrinda Gupta. A quantum approach towards the adaptive prediction of cloud workloads. IEEE Transactions on Parallel and Distributed Systems, 32(12):2893–2905, 2021.
* [43] Deepika Saxena and Ashutosh Kumar Singh. An intelligent traffic entropy learning-based load management model for cloud networks. IEEE Networking Letters, 4(2):59–63, 2022.
* [44] Kamaljeet Kaur, Ishu Gupta, Ashutosh Kumar Singh, et al. A comparative evaluation of data leakage/loss prevention systems (dlps). In Proc. 4th Int. Conf. Computer Science & Information Technology (CS & IT-CSCP), pages 87–95, 2017.
* [45] Dave Lewis. icloud data breach: Hacking and celebrity photos. Forbes Online, 2014.
* [46] Jitendra Kumar and Ashutosh Kumar Singh. Cloud resource demand prediction using differential evolution based learning. In 2019 7th International Conference on Smart Computing & Communications (ICSCC), pages 1–5. IEEE, 2019.
* [47] Ben Martini and Kim-Kwang Raymond Choo. Cloud storage forensics: owncloud as a case study. Digital Investigation, 10(4):287–299, 2013.
* [48] Ishu Gupta and Ashutosh Kumar Singh. A confidentiality preserving data leaker detection model for secure sharing of cloud data using integrated techniques. In 2019 7th International Conference on Smart Computing & Communications (ICSCC), pages 1–5. IEEE, 2019.
* [49] Deepika Deepika, Rajnesh Malik, Saurabh Kumar, Rishabh Gupta, and Ashutosh Kumar Singh. A review on data privacy using attribute-based encryption. In Proceedings of the International Conference on Innovative Computing & Communications (ICICC), 2020.
* [50] Aman Singh Chauhan, Dikshika Rani, Akash Kumar, Rishabh Gupta, and Ashutosh Kumar Singh. A survey on privacy-preserving outsourced data on cloud with multiple data providers. In Proceedings of the International Conference on Innovative Computing & Communications (ICICC), 2020.
* [51] Sakshi Chhabra and Ashutosh Kumar Singh. Optimal vm placement model for load balancing in cloud data centers. In 2019 7th International Conference on Smart Computing & Communications (ICSCC), pages 1–5. IEEE, 2019.
* [52] Sabrina Sicari, Alessandra Rizzardi, Luigi Alfredo Grieco, and Alberto Coen-Porisini. Security, privacy and trust in internet of things: The road ahead. Computer networks, 76:146–164, 2015.
* [53] Constantinos Kolias, Georgios Kambourakis, Angelos Stavrou, and Jeffrey Voas. Ddos in the iot: Mirai and other botnets. Computer, 50(7):80–84, 2017.
* [54] Yuchen Yang, Longfei Wu, Guisheng Yin, Lijie Li, and Hongbin Zhao. A survey on security and privacy issues in internet-of-things. IEEE Internet of Things Journal, 4(5):1250–1258, 2017.
* [55] Surender Singh and Ashutosh Kumar Singh. Web-spam features selection using cfs-pso. Procedia computer science, 125:568–575, 2018.
* [56] Ishu Gupta, Harsh Mittal, Deepak Rikhari, and Ashutosh Kumar Singh. Mlrm: A multiple linear regression based model for average temperature prediction of a day. arXiv preprint arXiv:2203.05835, 2022.
* [57] Vartika Sharma, Sizman Kaur, Jitendra Kumar, and Ashutosh Kumar Singh. A fast parkinson’s disease prediction technique using pca and artificial neural network. In 2019 International conference on intelligent computing and control systems (ICCS), pages 1491–1496. IEEE, 2019.
* [58] Alex Goh Kwang Leng, P Ravi Kumar, Ashutosh Kumar Singh, and Anand Mohan. Link-based spam algorithms in adversarial information retrieval. Cybernetics and Systems, 43(6):459–475, 2012.
* [59] SV Tagliacane, PWC Prasad, George Zajko, A Elchouemi, and Ashutosh Kumar Singh. Network simulations and future technologies in teaching networking courses: Development of a laboratory model with cisco virtual internet routing lab (virl). In 2016 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), pages 644–649. IEEE, 2016.
* [60] Kwang Leng Goh and Ashutosh Kumar Singh. Comprehensive literature review on machine learning structures for web spam classification. Procedia Computer Science, 70:434–441, 2015.
* [61] Tian Wang, Jiyuan Zhou, Xinlei Chen, Guojun Wang, Anfeng Liu, and Yang Liu. A three-layer privacy preserving cloud storage scheme based on computational intelligence in fog computing. IEEE Transactions on Emerging Topics in Computational Intelligence, 2(1):3–12, 2018.
* [62] Ivan Stojmenovic and Sheng Wen. The fog computing paradigm: Scenarios and security issues. In 2014 federated conference on computer science and information systems, pages 1–8. IEEE, 2014.
* [63] Rishabh Gupta and Ashutosh Kumar Singh. Privacy-preserving cloud data model based on differential approach. In 2022 Second International Conference on Power, Control and Computing Technologies (ICPC2T), pages 1–6. IEEE, 2022.
* [64] Subhadeep Sarkar, Subarna Chatterjee, and Sudip Misra. Assessment of the suitability of fog computing in the context of internet of things. IEEE Transactions on Cloud Computing, 6(1):46–59, 2015.
* [65] Rishabh Gupta and Ashutosh Kumar Singh. A privacy-preserving model for cloud data storage through fog computing. International Journal of Computer Aided Engineering and Technology, 17(3):348–359, 2022.
* [66] Niharika Singh and Ashutosh Kumar Singh. Sql-injection vulnerabilities resolving using valid security tool in cloud. Pertanika Journal of Science & Technology, 27(1), 2019.
* [67] Jitendra Kumar and Ashutosh Kumar Singh. Cloud datacenter workload estimation using error preventive time series forecasting models. Cluster Computing, 23(2):1363–1379, 2020.
* [68] Ashutosh Kumar Singh, Smruti Rekha Swain, and Chung Nan Lee. A metaheuristic virtual machine placement framework toward power efficiency of sustainable cloud environment. Soft Computing, pages 1–12, 2022.
* [69] Nikhil Raj, Ashutosh Kumar Singh, and Anil Kumar Gupta. Low voltage high output impedance bulk-driven quasi-floating gate self-biased high-swing cascode current mirror. Circuits, Systems, and Signal Processing, 35(8):2683–2703, 2016\.
* [70] Kin Suntana Tep, Ben Martini, Ray Hunt, and Kim-Kwang Raymond Choo. A taxonomy of cloud attack consequences and mitigation strategies: The role of access control and privileged access management. In 2015 IEEE Trustcom/BigDataSE/ISPA, volume 1, pages 1073–1080. IEEE, 2015.
* [71] Darren Quick, Ben Martini, and Raymond Choo. Cloud storage forensics. Syngress, 2013.
* [72] Kamal Nayan Kaur, Ishu Gupta, Ashutosh Kumar Singh, et al. Digital image watermarking using (2, 2) visual cryptography with dwt-svd based watermarking. In Computational intelligence in data mining, pages 77–86. Springer, 2019.
* [73] Z Ge, PWC Prasad, N Costadopoulos, Abeer Alsadoon, AK Singh, and A Elchouemi. Evaluating the accuracy of wearable heart rate monitors. In 2016 2nd International Conference on Advances in Computing, Communication, & Automation (ICACCA)(Fall), pages 1–6. IEEE, 2016.
* [74] Ishu Gupta, Rishabh Gupta, Ashutosh Kumar Singh, and Rajkumar Buyya. Mlpam: A machine learning and probabilistic analysis based model for preserving security and privacy in cloud environment. IEEE Systems Journal, 15(3):4248–4259, 2020.
* [75] Rishabh Gupta, Deepika Saxena, and Ashutosh Kumar Singh. Data security and privacy in cloud computing: concepts and emerging trends. arXiv preprint arXiv:2108.09508, 2021.
* [76] Utkarsh Saini, Prachee Atmapoojya, Rohit Patidar, Rishabh Gupta, Sakshi Chhabra, and Ashutosh Kumar Singh. Data privacy preservation model using noise concept in cloud. INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT), 11(3):326–329, 2022.
* [77] Divyanshu Varshney, Burhanuddin Babukhanwala, Javed Khan, Deepika Saxena, and Ashutosh kumar Singh. Machine learning techniques for plant disease detection. In 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI), pages 1574–1581. IEEE, 2021.
* [78] Dongri Yang, Abeer Alsadoon, PW Chandana Prasad, Ashutosh Kumar Singh, and Amr Elchouemi. An emotion recognition model based on facial recognition in virtual learning environment. Procedia Computer Science, 125:2–10, 2018.
* [79] Deepika Saxena and Ashutosh Kumar Singh. Workload forecasting and resource management models based on machine learning for cloud computing environments. arXiv preprint arXiv:2106.15112, 2021.
* [80] Prachee Atmapoojya, Utkarsh Saini, Rohit Patidar, Rishabh Gupta, and Ashutosh Kumar Singh. Differential privacy based preserving data on cloud environment. INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT), 10(5):780–785, 2021.
* [81] Deepika Saxena and Ashutosh Kumar Singh. Vm failure prediction based intelligent resource management model for cloud environments. In 2022 Second International Conference on Power, Control and Computing Technologies (ICPC2T), pages 1–6. IEEE, 2022.
* [82] Ishu Gupta and Ashutosh Kumar Singh. A holistic view on data protection for sharing, communicating, and computing environments: Taxonomy and future directions. arXiv preprint arXiv:2202.11965, 2022.
* [83] Archana Yadav, Shivam Kushwaha, Jyoti Gupta, Deepika Saxena, and Ashutosh Kumar Singh. A survey of the workload forecasting methods in cloud computing. In Proceedings of 3rd International Conference on Machine Learning, Advances in Computing, Renewable Energy and Communication, pages 539–547. Springer, 2022.
* [84] Deepika Saxena, Rishabh Gupta, and Ashutosh Kumar Singh. A survey and comparative study on multi-cloud architectures: emerging issues and challenges for cloud federation. arXiv preprint arXiv:2108.12831, 2021.
* [85] Aaisha Makkar, Tae Woo Kim, Ashutosh Kumar Singh, Jungho Kang, and Jong Hyuk Park. Secureiiot environment: Federated learning empowered approach for securing iiot from data breach. IEEE Transactions on Industrial Informatics, 2022.
* [86] Ramkrishna Patel, Vikas Choudhary, Deepika Saxena, and Ashutosh Kumar Singh. Review of stock prediction using machine learning techniques. In 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI), pages 840–846. IEEE, 2021.
* [87] Ishu Gupta, Deeksha Gurnani, Neha Gupta, Caffy Singla, Prateek Thakral, and Ashutosh Kumar Singh. Compendium of data security in cloud storage by applying hybridization of encryption algorithm. TechRxiv, 2022.
* [88] Kamaljeet Kaur, Ishu Gupta, and Ashutosh Kumar Singh. Data leakage prevention: e-mail protection via gateway. In Journal of Physics: Conference Series, volume 933, page 012013\. IOP Publishing, 2017.
* [89] Ramkrishna Patel, Vikas Choudhary, Deepika Saxena, and Ashutosh Kumar Singh. Lstm and nlp based forecasting model for stock market analysis. In 2021 First International Conference on Advances in Computing and Future Communication Technologies (ICACFCT), pages 52–57. IEEE, 2021.
* [90] Sakshi Chhabra and Ashutosh Kumar Singh. Oph-lb: Optimal physical host for load balancing in cloud environment. Pertanika Journal of Science & Technology, 26(3), 2018.
* [91] Niharika Singh and Ashutosh Kumar Singh. Data privacy protection mechanisms in cloud. Data Science and Engineering, 3(1):24–39, 2018.
* [92] Sakshi Chhabra and Ashutosh Kumar Singh. A probabilistic model for finding an optimal host framework and load distribution in cloud environment. Procedia Computer Science, 125:683–690, 2018.
* [93] Deepika Saxena and Ashutosh Kumar Singh. A high availability management model based on vm significance ranking and resource estimation for cloud applications. IEEE Transactions on Services Computing, 2022.
* [94] Ishu Gupta and Ashutosh Kumar Singh. A probabilistic approach for guilty agent detection using bigraph after distribution of sample data. Procedia Computer Science, 125:662–668, 2018.
* [95] Divyanshu Varshney, Burhanuddin Babukhanwala, Javed Khan, Deepika Saxena, and Ashutosh Kumar Singh. Plant disease detection using machine learning techniques. In 2022 3rd International Conference for Emerging Technology (INCET), pages 1–5. IEEE, 2022.
* [96] Deepika Saxena, Kunwar Singh Vaisla, and Manmohan Singh Rauthan. Abstract model of trusted and secure middleware framework for multi-cloud environment. In International Conference on Advanced Informatics for Computing Research, pages 469–479. Springer, 2018.
* [97] Rishabh Gupta, Deepika Saxena, Ishu Gupta, and Ashutosh Kumar Singh. Differential and triphase adaptive learning-based privacy-preserving model for medical data in cloud environment. IEEE Networking Letters, 4(4):217–221, 2022.
* [98] Rishabh Gupta, Deepika Saxena, Ishu Gupta, Aaisha Makkar, and Ashutosh Kumar Singh. Quantum machine learning driven malicious user prediction for cloud network communications. IEEE Networking Letters, 2022.
* [99] Shilpi Saxena and Deepika Saxena. Ewsa: An enriched workflow scheduling algorithm in cloud computing. In 2015 International Conference on Computing, Communication and Security (ICCCS), pages 1–5. IEEE, 2015.
* [100] Rishabh Gupta and Ashutosh Kumar Singh. Differential and access policy based privacy-preserving model in cloud environment. Journal of Web Engineering, pages 609–632, 2022.
* [101] Deepika Saxena and Ashutosh Kumar Singh. Communication cost aware resource efficient load balancing (care-lb) framework for cloud datacenter. Recent Advances in Computer Science and Communications, 12:1–00, 2020.
* [102] Jitendra Kumar and Ashutosh Kumar Singh. Performance assessment of time series forecasting models for cloud datacenter networks’ workload prediction. Wireless Personal Communications, 116(3):1949–1969, 2021.
* [103] Deepika Saxena and Ashutosh Kumar Singh. OSC-MC: Online secure communication model for cloud environment. IEEE Communications Letters, 2021.
* [104] Deepika Saxena and Shilpi Saxena. Highly advanced cloudlet scheduling algorithm based on particle swarm optimization. In 2015 Eighth International Conference on Contemporary Computing (IC3), pages 111–116. IEEE, 2015.
* [105] Deepika Saxena and Ashutosh Kumar Singh. Auto-adaptive learning-based workload forecasting in dynamic cloud environment. International Journal of Computers and Applications, pages 1–11, 2020.
* [106] Rishabh Gupta, Ishu Gupta, Deepika Saxena, and Ashutosh Kumar Singh. A differential approach and deep neural network based data privacy-preserving model in cloud environment. Journal of Ambient Intelligence and Humanized Computing, pages 1–16, 2022.
* [107] Vardaan Sharma, Sahil Jalwa, Abdur Rehman Siddiqi, Ishu Gupta, and Ashutosh Kumar Singh. A lightweight effective randomized caesar cipher algorithm for security of data. In Evolutionary Computing and Mobile Sustainable Networks, pages 411–419. Springer, 2021.
* [108] Deepika Saxena, RK Chauhan, and Ramesh Kait. Dynamic fair priority optimization task scheduling algorithm in cloud computing: concepts and implementations. International Journal of Computer Network and Information Security, 8(2):41, 2016.
* [109] Saxena Deepika and Singh Ashutosh Kumar. A proactive autoscaling and energy-efficient vm allocation framework using online multi-resource neural network for cloud data center. Neurocomputing, 426:248–264, 2021.
* [110] Ashutosh Kumar Singh and Deepika Saxena. A cryptography and machine learning based authentication for secure data-sharing in federated cloud services environment. Journal of Applied Security Research, pages 1–24, 2021.
* [111] Deepika Saxena, Ashutosh Kumar Singh, and Rajkumar Buyya. OP-MLB: An online vm prediction based multi-objective load balancing framework for resource management at cloud datacenter. IEEE Transactions on Cloud Computing, 2021.
* [112] Jitendra Kumar, Deepika Saxena, Ashutosh Kumar Singh, and Anand Mohan. Biphase adaptive learning-based neural network model for cloud datacenter workload forecasting. Soft Computing, pages 1–18, 2020.
* [113] Sakshi Chhabra and Ashutosh Kumar Singh. Dynamic data leakage detection model based approach for mapreduce computational security in cloud. In 2016 Fifth International Conference on Eco-friendly Computing and Communication Systems (ICECCS), pages 13–19. IEEE, 2016.
* [114] Jitendra Kumar and Ashutosh Kumar Singh. Dynamic resource scaling in cloud using neural network and black hole algorithm. In 2016 Fifth International Conference on Eco-friendly Computing and Communication Systems (ICECCS), pages 63–67. IEEE, 2016.
* [115] Yaman Goel, Ishu Gupta, Pooja Rani, and Ashutosh Kumar Singh. A Facial Recognition and Detection System using openVC. TechRxiv, 11 2022.
* [116] Sahil Jalwa, Vardaan Sharma, Abdur Rehman Siddiqi, Ishu Gupta, and Ashutosh Kumar Singh. Comprehensive and comparative analysis of different files using cp-abe. In Advances in Communication and Computational Technology, pages 189–198. Springer, 2021.
* [117] Pooja Tiwari, Simran Mehta, Nishtha Sakhuja, Ishu Gupta, and Ashutosh Kumar Singh. Hybrid method in identifying the fraud detection in the credit card. In Evolutionary Computing and Mobile Sustainable Networks, pages 27–35. Springer, 2021.
* [118] Badal Pradhan, Bhagwan Singh, Abhishek Bhoria, and Ashutosh Kumar Singh. A comparative study on cipher text policy attribute based encryption schemes. International Journal of Engineering Research & Technology, 2021\.
* [119] Murari Choudhary, Shashank Jha, Deepika Saxena, Ashutosh Kumar Singh, et al. A review of fake news detection methods using machine learning. In 2021 2nd International Conference for Emerging Technology (INCET), pages 1–5. IEEE, 2021.
* [120] Preshi Godha, Swati Jadon, Anshi Patle, Ishu Gupta, Bharti Sharma, and Ashutosh Kumar Singh. Flooding and forwarding based on efficient routing protocol. In International Conference on Innovative Computing and Communications, pages 215–223. Springer, 2021.
* [121] D Saxena and AK Singh. Security embedded dynamic resource allocation model for cloud data centre. Electronics Letters, 56(20):1062–1065, 2020.
* [122] Jitendra Kumar, Ashutosh Kumar Singh, and Rajkumar Buyya. Self directed learning based workload forecasting model for cloud resource management. Information Sciences, 543:345–366, 2021.
* [123] Divyanshu Varshney, Burhanuddin Babukhanwala, Javed Khan, Deepika Saxena, and Ashutosh kumar Singh. Machine learning techniques for plant disease detection. In 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI), pages 1574–1581. IEEE, 2021.
* [124] Ayushi Acharya, Hari Prasad, Vinod Kumar, Ishu Gupta, and Ashutosh Kumar Singh. Host platform security and mobile agent classification: A systematic study. In Computer Networks and Inventive Communication Technologies, pages 1001–1010. Springer, 2021.
* [125] Ashutosh Kumar Singh and Rishabh Gupta. A privacy-preserving model based on differential approach for sensitive data in cloud environment. Multimedia Tools and Applications, pages 1–24, 2022.
* [126] Anand Kesharwani, Animesh Nag, Abhishek Tiwari, Ishu Gupta, Bharti Sharma, and Ashutosh Kumar Singh. Real-time human locator and advance home security appliances. In Evolutionary Computing and Mobile Sustainable Networks, pages 37–49. Springer, 2021.
* [127] Rishabh Gupta and Ashutosh Kumar Singh. A differential approach for data and classification service-based privacy-preserving machine learning model in cloud environment. New Generation Computing, 40(3):737–764, 2022.
* [128] Jitendra Kumar, Ashutosh Kumar Singh, and Anand Mohan. Resource-efficient load-balancing framework for cloud data center networks. ETRI Journal, 43(1):53–63, 2021.
* [129] Nikhil Raj, Ashutosh Kumar Singh, and AK Gupta. Low-voltage bulk-driven self-biased cascode current mirror with bandwidth enhancement. Electronics letters, 50(1):23–25, 2014.
* [130] P Ravi Kumar and Ashutosh K Singh. Web structure mining: Exploring hyperlinks and algorithms for information retrieval. American Journal of applied sciences, 7(6):840, 2010.
* [131] Lenin Gopal, Nor Syahira Mohd Mahayadin, Adib Kabir Chowdhury, Alpha Agape Gopalai, and Ashutosh Kumar Singh. Design and synthesis of reversible arithmetic and logic unit (alu). In 2014 International Conference on Computer, Communications, and Control Technology (I4CT), pages 289–293. IEEE, 2014.
* [132] B Devkota, Abeer Alsadoon, PWC Prasad, AK Singh, and A Elchouemi. Image segmentation for early stage brain tumor detection using mathematical morphological reconstruction. Procedia Computer Science, 125:115–123, 2018.
* [133] Ashutosh Kumar Singh and Ravi Kumar. A comparative study of page ranking algorithms for information retrieval. International Journal of Computer and Information Engineering, 3(4):1154–1165, 2009.
* [134] Foteini Andriopoulou, Tasos Dagiuklas, and Theofanis Orphanoudakis. Integrating iot and fog computing for healthcare service delivery. In Components and services for IoT platforms, pages 213–232. Springer, 2017.
* [135] Muhammad Usman, Irfan Ahmed, M Imran Aslam, Shujaat Khan, and Usman Ali Shah. Sit: a lightweight encryption algorithm for secure internet of things. arXiv preprint arXiv:1704.08688, 2017.
* [136] Xinlei Wang, Jianqing Zhang, Eve M Schooler, and Mihaela Ion. Performance evaluation of attribute-based encryption: Toward data privacy in the iot. In 2014 IEEE International Conference on Communications (ICC), pages 725–730. IEEE, 2014.
* [137] Rupesh Bhandari and VB Kirubanand. Enhanced encryption technique for secure iot data transmission. International Journal of Electrical and Computer Engineering, 9(5):3732, 2019.
* [138] Craig Gentry. Fully homomorphic encryption using ideal lattices. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 169–178, 2009.
* [139] T Daisy Premila Bai, A Vimal Jerald, and S Albert Rabara. An adaptable and secure intelligent smart card framework for internet of things and cloud computing. In Big Data Analytics, pages 19–28. Springer, 2018.
* [140] M Vedaraj and P Ezhumalai. Herde-msnb: a predictive security architecture for iot health cloud system. Journal of Ambient Intelligence and Humanized Computing, 12(7):7333–7342, 2021.
* [141] Shruthi Ramesh and Manimaran Govindarasu. An efficient framework for privacy-preserving computations on encrypted iot data. IEEE Internet of Things Journal, 7(9):8700–8708, 2020.
* [142] Pascal Paillier. Public-key cryptosystems based on composite degree residuosity classes. In International conference on the theory and applications of cryptographic techniques, pages 223–238. Springer, 1999.
* [143] Jing Shi, Rui Zhang, and Yanchao Zhang. A spatiotemporal approach for secure range queries in tiered sensor networks. IEEE transactions on wireless communications, 10(1):264–273, 2010\.
* [144] Dongjing Miao, Jiguo Yu, and Zhipeng Cai. The hardness of resilience for nested aggregation query. Theoretical Computer Science, 803:152–159, 2020.
* [145] Jing Shi, Rui Zhang, and Yanchao Zhang. Secure range queries in tiered sensor networks. In IEEE INFOCOM 2009, pages 945–953. IEEE, 2009.
* [146] Ashutosh Kumar Singh and Jatinder Kumar. A privacy-preserving multidimensional data aggregation scheme with secure query processing for smart grid. The Journal of Supercomputing, pages 1–21, 2022.
* [147] Xingliang Yuan, Chengjun Cai, Cong Wang, and Qian Wang. A scalable ledger-assisted architecture for secure query processing over distributed iot data. CCF Transactions on Networking, 3(2):97–111, 2020.
|
# Shift current response in elemental two-dimensional ferroelectrics
Zhuang Qian (Zhejiang University, Hangzhou, Zhejiang 310058, China; Key Laboratory for Quantum Materials of Zhejiang Province, Department of Physics, School of Science and Research Center for Industries of the Future, Hangzhou, Zhejiang 310030, China)
Jian Zhou (Center for Alloy Innovation and Design, State Key Laboratory for Mechanical Behavior of Materials, Xi’an Jiaotong University, Xi’an 710049, China)
Hua Wang (Zhejiang University, Hangzhou, Zhejiang 310058, China)
Shi Liu<EMAIL_ADDRESS>(Key Laboratory for Quantum Materials of Zhejiang Province, Department of Physics, School of Science and Research Center for Industries of the Future, Hangzhou, Zhejiang 310030, China; Institute of Natural Sciences, Westlake Institute for Advanced Study, Hangzhou, Zhejiang 310024, China)
###### Abstract
A bulk material without inversion symmetry can generate a direct current under
illumination. This interface-free current generation mechanism, referred to as
the bulk photovoltaic effect (BPVE), does not rely on $p$-$n$ junctions. Here,
we explore the shift current generation, a major mechanism responsible for the
BPVE, in single-element two-dimensional (2D) ferroelectrics represented by
phosphorene-like monolayers of As, Sb, and Bi. The strong covalency, small
band gap, and large joint density of states afforded by these elemental 2D
materials give rise to large shift currents, outperforming many state-of-the-art materials. We find that the shift current, due to its topological nature,
depends sensitively on the details of the Bloch wave functions. It is crucial
to consider the electronic exchange-correlation potential beyond the
generalized gradient approximation as well as the spin-orbit interaction in
density functional theory calculations to obtain reliable frequency-dependent
shift current responses.
## I Introduction
High-performance photoelectric conversion is essential to solar cell technology. Traditional photovoltaic cells based on $p$-$n$ junctions need the
built-in electric field at the interface to separate electron-hole pairs, and their efficiency is constrained by the Shockley–Queisser limit [1]. The bulk photovoltaic effect (BPVE), the direct conversion of solar energy into direct current (DC), has been considered a promising alternative source of photocurrent [2, 3, 4, 5, 6, 7]. As the name suggests, the presence of the BPVE does not require a complicated interface that often demands a precisely controlled heterostructure fabrication process to minimize unwanted impurities and electrical resistance. Single-phase bulk materials with broken
inversion symmetry can generate steady photocurrent and above-band-gap
photovoltage under uniform illumination in the absence of external bias [8, 9,
10, 11, 12], potentially enabling the use of the entire bulk of the material for photoelectric conversion [13, 14, 15, 16].
In the clean limit, the BPVE response can be obtained from the quadratic
response theory using the density matrix method [17, 18] or from the
perspective of divergent susceptibilities [19, 20, 21] where the light is
treated as a classical electromagnetic field interacting with Bloch electrons.
Ferroelectrics intrinsically exhibit the BPVE because of the fundamental
requirement of spontaneous inversion symmetry breaking. Shift current is one
of the most important mechanisms responsible for the BPVE [22, 23, 24], and
was first observed in ferroelectrics experimentally [25]. As a zero-bias
topological photocurrent [26, 27], shift current is intimately related to the
change in the phases of Bloch wave functions during the photoexcitation of an
electron from the valence to the conduction band [28]. A large shift current
is desirable for photoelectric conversion. Previous experimental and
theoretical investigations of shift current in a wide range of
noncentrosymmetric materials systems such as bulk perovskite ferroelectrics
[29, 30, 31, 32], two-dimensional (2D) materials [33, 34, 35, 36, 37, 38],
nanotubes [39], conjugated polymers [40], and topological insulators [41, 42,
43, 44, 45] have led to two general design principles for enhancing the
current density. First, low-dimensional materials with large joint density of
states (JDOS) tend to present large shift current responses upon
photoexcitation [46]. Second, the delocalization of the electronic states also strongly affects the shift current magnitude: covalently bonded materials characterized by large long-range electron hopping amplitudes can give rise to large shift currents [29, 6, 40].
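For orientation, it is useful to recall the standard clean-limit form of the shift current response, in the Sipe–Shkrebtii formulation (a conventional expression from the literature, reproduced here for context; its conventions may differ slightly from the quantities defined later in the paper):

$$\sigma^{abb}(0;\omega,-\omega)=\frac{\pi e^{3}}{\hbar^{2}}\sum_{n,m}\int\frac{d\mathbf{k}}{(2\pi)^{3}}\,f_{nm}\left|r^{b}_{nm}\right|^{2}R^{a,b}_{nm}\,\delta(\omega_{mn}-\omega),\qquad R^{a,b}_{nm}=-\frac{\partial\arg r^{b}_{nm}}{\partial k_{a}}+A^{a}_{nn}-A^{a}_{mm},$$

where $f_{nm}=f_{n}-f_{m}$ is the occupation difference, $r^{b}_{nm}$ the interband position matrix element, $A^{a}_{nn}$ the intraband Berry connection, and $R^{a,b}_{nm}$ the shift vector. The explicit dependence on $\arg r^{b}_{nm}$ and on the Berry connections makes clear why the response is sensitive to the phases of the Bloch wave functions.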
Recently, a family of phosphorene-like 2D elemental group-V (As, Sb, and Bi)
monolayers was predicted to possess spontaneous switchable in-plane
polarization [47] arising from the out-of-plane atomic-layer buckling. We
propose that single-element 2D ferroelectrics are promising candidates to
support large shift currents because of the strong covalency intrinsic to the
homoelement bonding and pronounced singularities in the density of states
common to 2D materials. In this work, we explore the shift current in these
newly predicted elemental ferroelectrics with density functional theory (DFT)
calculations and find that they can generate large shift currents over a wide
range of wavelengths including the visible light spectrum. In addition, our
work highlights the importance of spin-orbit coupling (SOC) and electronic correlation effects for the shift current spectrum, even in 2D systems containing light elements.
## II Results and Discussion
### II.1 Structure
The structure of a single-element group-V monolayer is displayed in Fig. 1. The
puckered lattice structure without centrosymmetry (space group $Pmn2_{1}$) can
be viewed as a distorted phosphorene-like structure. The out-of-plane atomic
buckling causes charge accumulation at the outermost group-V atoms, leading to
spontaneous in-plane polarization along the $y$ axis [47]. We compute the Born
effective charges (see inset table in Fig. 1) and confirm the buckling-induced
charge accumulation/depletion mechanism. The dynamical stability of the ferroelectric group-V monolayers has been confirmed by computed phonon spectra, which show no imaginary phonon modes over the whole Brillouin zone
[47]. Xu et al. recently simulated the polarization-electric field hysteresis
loop for monolayer As [48], and the predicted in-plane spontaneous
polarization is $0.42\times 10^{-10}$ C/m, comparable with ferroelectric monolayer
SnTe [49]. We compute the energy evolution as a function of buckling parameter
$h$ (Fig. 1a) in monolayer As and Sb, each revealing a double-well potential landscape typical of a ferroelectric phase (Fig. 1c). It is worth noting that, like phosphorene, group-V monolayers of Sb and Bi have already been synthesized experimentally [50, 51, 52]. We only consider
in-plane shift currents ($\sigma^{xbc}$ and $\sigma^{ybc}$) in this work.
Since the mirror symmetry $\mathcal{M}_{x}:x\to-x$ leaves the monolayer
invariant, only three components of the response tensor, $\sigma^{yxx}$,
$\sigma^{yyy}$, and $\sigma^{yzz}$, can be nonzero; this symmetry constraint holds at any photon frequency. Here, we focus on the responses
under incident light perpendicular to the 2D sheet, namely, $\sigma^{yxx}$ and
$\sigma^{yyy}$.
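The mirror-symmetry counting above can be checked mechanically. The sketch below (an illustrative script, not part of the paper's workflow) enumerates the $\sigma^{abb}$ components probed by linearly polarized light along a Cartesian axis $b$, keeps those even under $\mathcal{M}_{x}$, and discards out-of-plane currents:

```python
import itertools

axes = "xyz"
# Mirror M_x (x -> -x) of the Pmn2_1 monolayer: parity of each Cartesian axis.
mirror = {"x": -1, "y": 1, "z": 1}

# For linearly polarized light along b, only sigma^{abb} enters the response.
# A polar rank-3 tensor component picks up the factor M_a * M_b^2 under the
# (diagonal) mirror; invariance forces components with factor -1 to vanish.
allowed = [
    a + b + b
    for a, b in itertools.product(axes, repeat=2)
    if mirror[a] * mirror[b] ** 2 > 0
]

# Keep only in-plane photocurrents (drop a = z, irrelevant for a 2D sheet).
in_plane = sorted(c for c in allowed if not c.startswith("z"))
print(in_plane)  # ['yxx', 'yyy', 'yzz']
```

The three surviving components are exactly the $\sigma^{yxx}$, $\sigma^{yyy}$, and $\sigma^{yzz}$ quoted in the text; any component carrying an odd number of $x$ indices is forbidden by the mirror.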
### II.2 Monolayer arsenic
The DFT band structures and the corresponding shift current spectra for
monolayer As computed with PBE, PBE+SOC, HSE, and HSE+SOC are presented in
Fig. 2, revealing several intriguing features. The band gap predicted by PBE is
0.15 eV, and the peak in the $\sigma^{yyy}$ spectrum exceeds 1000 $\mu$A/V$^{2}$
for photon energies near the band edge. In comparison, HSE predicts a larger
band gap of 0.47 eV, whereas the peak value of $\sigma^{yyy}$ drops
considerably to only 150 $\mu$A/V$^{2}$. Similarly, the band-edge value of
$\sigma^{yxx}$ estimated by PBE is more than 3000 $\mu$A/V$^{2}$, but HSE
predicts a much smaller value of 250 $\mu$A/V$^{2}$. Such a pronounced reduction in
the peak response is unexpected, as the HSE band structure looks like a rigid
shift of the PBE band structure without substantial changes in the band
dispersions. This highlights the importance of treating the exchange-correlation potential beyond the semilocal approximation in DFT calculations.
Moreover, the inclusion of the SOC effect, though having little impact on
the band gap, causes drastic changes in the shift current spectra for both PBE
and HSE. At the PBE level, the band gap reduction due to SOC is merely 0.01
eV, but the band-edge values of $\sigma^{yyy}$ and $\sigma^{yxx}$ computed
with SOC are reduced to 450 and 2200 $\mu$A/V$^{2}$ from their PBE values of 1000 and
3000 $\mu$A/V$^{2}$, respectively; the PBE and PBE+SOC spectra are almost identical
at higher photon frequencies. Interestingly, compared to the HSE values of
150 and 250 $\mu$A/V$^{2}$, the peak values of $\sigma^{yyy}$ and $\sigma^{yxx}$
estimated with HSE+SOC increase to 300 and 400 $\mu$A/V$^{2}$, respectively. In
addition, the HSE and HSE+SOC spectrum profiles are notably different for both
$\sigma^{yyy}$ and $\sigma^{yxx}$ in the frequency range between 1.5 and 3.0
eV.
To get a better understanding of the subtle changes in the shift current
responses due to SOC, we analyze the spectrum of $\sigma^{yyy}$ by plotting
two BZ-integrated quantities: the energy-averaged shift vector $\bar{R}^{a,b}$
and the transition intensity $\varepsilon_{2}^{bb}$, defined as
$\bar{R}^{a,b}(\omega)=\sum_{k}\sum_{nm}f_{nm}R_{nm}^{a,b}\delta(\omega_{mn}-\omega)$
(1)
and
$\varepsilon_{2}^{bb}(\omega)=\sum_{k}\sum_{nm}r_{nm}^{b}r_{mn}^{b}\delta(\omega_{mn}-\omega),$
(2)
respectively. For a given photon frequency $\omega$, $\bar{R}^{a,b}(\omega)$
is a measure of the aggregate contributions of the shift vectors $R_{nm}^{a,b}$ at all
$k$ points in the BZ, while $\varepsilon_{2}^{bb}(\omega)$ reflects the photon
absorption strength. By comparing the spectra of $\bar{R}^{y,y}$ and
$\varepsilon_{2}^{yy}$ computed with PBE, PBE+SOC, HSE, and HSE+SOC (Fig. 3),
we find that the integrated shift vectors are comparable in the low-frequency
region, but the four methods predict rather different transition intensities
near the band edge. Specifically, PBE predicts a larger magnitude of
$\varepsilon_{2}^{yy}$ than PBE+SOC, whereas HSE yields a lower transition
intensity than HSE+SOC. Therefore, the SOC effect suppresses the transition at
the PBE level but promotes it at the HSE level. According to eq. 2, the
magnitude of $\varepsilon_{2}^{yy}$ depends on the interband Berry connections
between every conduction and valence band pair that has the energy difference
matching the energy of the incident light. The presence of SOC, regardless of its
strength, will lift the spin degeneracy and lead to level anticrossings at some
$k$ points. These changes in the electronic bands, albeit localized in momentum
space, could strongly affect the interband Berry connections. Therefore, the
pronounced SOC effect on the spectrum of $\varepsilon_{2}$, and hence on $\sigma$,
is a manifestation of the topological nature of the shift current, which exhibits
a highly nontrivial dependence on the Berry connections of a bundle of Bloch
bands (the valence and conduction bands relevant to photon excitation).
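Numerically, both BZ-integrated quantities in eqs. 1 and 2 are accumulated on a discrete $k$-grid with the $\delta$ function replaced by a finite-width Gaussian, as is done for the spectra in this work. A minimal sketch, assuming precomputed arrays of transition energies, occupation differences, shift vectors, and dipole matrix elements (all array names are hypothetical):

```python
import numpy as np

def gaussian_delta(x, sigma=0.02):
    # Finite-width Gaussian standing in for delta(x); sigma in eV.
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def bz_integrated_spectra(omega_grid, w_mn, f_nm, R_ab, r_b, weights):
    """Eqs. (1)-(2): energy-averaged shift vector R_bar and transition
    intensity eps2, summed over k points and band pairs (n, m).

    w_mn    : (Nk, Npairs) transition energies omega_{mn} in eV
    f_nm    : (Nk, Npairs) occupation differences f_n - f_m
    R_ab    : (Nk, Npairs) shift vectors R_{nm}^{a,b}
    r_b     : (Nk, Npairs) dipole matrix elements r_{nm}^{b} (complex)
    weights : (Nk,) k-point integration weights
    """
    R_bar = np.zeros_like(omega_grid)
    eps2 = np.zeros_like(omega_grid)
    for i, w in enumerate(omega_grid):
        d = gaussian_delta(w_mn - w)                      # (Nk, Npairs)
        R_bar[i] = np.sum(weights[:, None] * f_nm * R_ab * d)
        # r^b_{nm} r^b_{mn} = |r^b_{nm}|^2 since r_{mn} = (r_{nm})*
        eps2[i] = np.sum(weights[:, None] * np.abs(r_b) ** 2 * d)
    return R_bar, eps2
```

With this decomposition, a peak in $\sigma^{yyy}$ can be traced back to either a large integrated shift vector or a strong transition intensity at that frequency, which is the diagnostic used for the PBE vs. HSE and $\pm$SOC comparisons in Fig. 3.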
Previous studies have demonstrated that the shift current response of 2D
materials can exhibit a significant dependence on the layer number [53, 54, 15].
We examine the layer stacking impact on the response functions. Our HSE+SOC
calculations indicate that bilayer As remains polar and possesses a small band
gap of 0.36 eV (Fig. 4a), whereas trilayer As becomes centrosymmetric and
metallic. The shift current spectrum is presented in Fig. 4b, revealing a peak
value of 125 $\mu$A/V$^{2}$ for $\sigma^{yyy}$ and $-135$ $\mu$A/V$^{2}$ for
$\sigma^{yxx}$; these magnitudes are smaller than those in the monolayer,
consistent with the experimental observations in CuInP2S6, where the
photocurrent density decreases drastically when the thickness exceeds
$\approx$40 nm [15].
We believe HSE+SOC is the most reliable method among those employed in the
current work and most previous works; the predicted peak shift current
response in monolayer As is 400 $\mu$A/V$^{2}$ for $\sigma^{yxx}$, much higher than
previous reports for 2D materials, e.g., 100 $\mu$A/V$^{2}$ in GeS. This highlights
the potential of elemental ferroelectric 2D materials for photoelectric
conversion.
### II.3 Monolayer antimony
The band structures and shift current spectra of monolayer Sb computed with
four different methods are displayed in Fig. 5. The direct band gap values
based on the band structures plotted along high-symmetry lines are 0.31, 0.32,
0.41, and 0.29 eV for PBE, PBE+SOC, HSE, and HSE+SOC, respectively. Despite
yielding comparable band structures, these four methods predict distinct shift
current spectra. Similar to the case of monolayer As, we observe a reduction
of the band-edge response magnitude of $\sigma^{yyy}$ ($\sigma^{yxx}$) from
$-1000$ (4000) $\mu$A/V$^{2}$ in PBE (Fig. 5c, e) to $-300$ (1700) $\mu$A/V$^{2}$ in HSE
(Fig. 5d, f), reaffirming the importance of including exact exchange in DFT
calculations.
The inclusion of the SOC effect completely changes the spectrum profiles of
$\sigma^{yyy}$ and $\sigma^{yxx}$ at the PBE level (Fig. 5c, e). The most
striking result is the reversal of the current direction for low-frequency
excitations. For example, PBE predicts a current running against the
polarization ($\sigma^{yyy}<0$) but PBE+SOC predicts a current flowing along
the polarization ($\sigma^{yyy}>0$). The sign of the shift current is
determined by the integrated shift vector as defined in eq. 1. Indeed, as
shown in Fig. 6a, the sign of $\bar{R}^{y,y}$ is the same as the sign of
$\sigma^{yyy}$, and SOC causes a sign change. For a given photon energy
$\omega$, the value of $\bar{R}^{a,b}$ depends on the topological quantity
$R_{nm}^{a,b}$ at every $k$ point in the BZ that supports resonant excitation.
Following our previous argument regarding the effect of SOC on the transition
intensity, the intraband Berry connection ($\mathcal{A}$) of some bands may be
altered drastically by SOC, particularly around the $k$ point where SOC
induces a hybridization gap. Therefore, it is physically plausible to have a
sign-changing situation due to SOC, as demonstrated in the case of monolayer
Sb when treated with PBE. We further compute the $k$-resolved $\bar{R}^{y,y}$ at a
photon energy of 0.2 eV with PBE and PBE+SOC, respectively, with results
plotted in Fig. 6e and f. We find that all $k$-resolved values of $\bar{R}^{y,y}$
computed with PBE are negative, whereas those computed with PBE+SOC
become mostly positive.
At the HSE level, SOC causes a redshift of the band-edge peak to $\approx 0.1$
eV for both $\sigma^{yyy}$ and $\sigma^{yxx}$ (Fig. 5d, f), though the band
gap value taken from the HSE+SOC band structures is 0.29 eV, higher than the
onset photon energy. We further decompose $\sigma^{yyy}$ into $\bar{R}^{y,y}$
and $\varepsilon_{2}^{yy}$ and find that the magnitudes of $\bar{R}^{y,y}$
computed with HSE and HSE+SOC are comparable in the low-frequency region (Fig.
6c). However, the $\varepsilon_{2}^{yy}$ spectrum of HSE+SOC has a sharp peak
at a much lower frequency of 0.1 eV (Fig. 6d), consistent with the HSE+SOC
spectrum of $\sigma^{yyy}$ (Fig. 5d). This seemingly indicates optical
excitations at nominally in-gap photon frequencies. To resolve this puzzle, we perform
a diagnostic analysis on the electronic structures of monolayer Sb obtained
with HSE and HSE+SOC. The zoomed-in band structures along $\Gamma$-Y-S
presented in Fig. 7a show that the inclusion of SOC gives a smaller band gap
of 0.29 eV than the HSE band gap of 0.4 eV, and breaks the spin degeneracy. We
compute the JDOS defined as
$\rho(\omega)=\int\frac{d\bm{k}}{8\pi^{3}}\delta(\omega_{nm}-\omega)$ (3)
with results plotted in Fig. 7b. The onset frequency of the HSE JDOS is 0.4
eV, as expected from the HSE band gap. However, the JDOS computed with HSE+SOC
acquires nonzero values starting at a lower frequency of 0.1 eV. As the JDOS is
obtained by integrating over the whole BZ while the band structure only shows
the band energies along the high-symmetry paths, the JDOS of
HSE+SOC hints at a smaller gap at a generic $k$ point. We thus map out the
energy difference between the highest valence band and the lowest conduction
band over the whole BZ (Fig. 7c-d) with HSE and HSE+SOC, respectively. Indeed,
HSE gives a gap of 0.38 eV at a $k$ point very close to the high-symmetry line
$\Gamma$-Y (Fig. 7c); HSE+SOC yields a band gap of 0.1 eV at a generic
$k$-point ([0.04, 0.35] in reduced coordinates) away from the zone boundary
(Fig. 7d), consistent with the onset photon frequency for $\sigma^{yyy}$ and
$\sigma^{yxx}$ predicted by HSE+SOC (Fig. 5d, f). It is noted that the peak
value of $\sigma^{yxx}$ in monolayer Sb estimated with HSE+SOC reaches 2000
$\mu$A/V$^{2}$, even higher than that of monolayer As. Moreover, the value of
$\sigma^{yyy}$ reaches 600 $\mu$A/V$^{2}$ at $\omega=1.8$ eV, suitable for visible
light absorption.
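The gap-mapping diagnosis above can be mimicked with eq. 3 on a toy model: histogramming the valence-conduction energy difference over a dense 2D $k$-grid yields a JDOS onset set by the true gap minimum, even when that minimum sits at a generic $k$ point missed by the high-symmetry lines. A sketch with an illustrative model dispersion (all parameters hypothetical):

```python
import numpy as np

def jdos(gap_fn, nk=120, n_omega=200, sigma=0.02):
    """Eq. (3) for a single valence-conduction pair: integrate a
    Gaussian-broadened delta(omega_cv(k) - omega) over a uniform 2D k-grid."""
    ks = np.linspace(-0.5, 0.5, nk, endpoint=False)
    kx, ky = np.meshgrid(ks, ks)
    w_cv = gap_fn(kx, ky).ravel()
    omega = np.linspace(0.0, w_cv.max(), n_omega)
    d = np.exp(-0.5 * ((w_cv[None, :] - omega[:, None]) / sigma) ** 2)
    rho = d.sum(axis=1) / (sigma * np.sqrt(2.0 * np.pi) * w_cv.size)
    return omega, rho

# Toy direct gap of 0.1 eV whose minimum sits at a generic k point
# (0.1, 0.35), away from the high-symmetry lines kx = 0 and ky = 0.
gap = lambda kx, ky: 0.1 + 2.0 * ((kx - 0.1) ** 2 + (ky - 0.35) ** 2)
omega, rho = jdos(gap)
# The JDOS becomes nonzero near 0.1 eV, below the larger gap seen
# along the high-symmetry lines through the zone center.
```

This is the same mechanism as in Fig. 7: the band structure along $\Gamma$-Y-S overstates the gap, while the BZ-integrated JDOS exposes the smaller gap at a generic $k$ point.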
We have also compared the shift current spectrum and JDOS of
monolayer GeS, a model material for investigating BPVE in 2D, with those of monolayer
Sb. The peak shift current in GeS is $\approx$100 $\mu$A/V$^{2}$, one order of
magnitude smaller than that in monolayer Sb. We find that monolayer Sb
exhibits a much larger JDOS than GeS. Moreover, because GeS has a much
larger band gap than Sb and the shift current scales inversely with the band
gap according to the velocity gauge formalism [17], we suggest that the giant
shift current in monolayer Sb can be attributed to multiple factors,
including the small band gap and large JDOS.
### II.4 Monolayer bismuth
Since Bi has a larger atomic number, the SOC effect is more pronounced in
monolayer Bi, as demonstrated by the notably different band structures
between PBE (HSE) and PBE+SOC (HSE+SOC), as shown in Fig. 8a, b. Without SOC,
PBE and HSE predict similar spectrum profiles of $\sigma^{yyy}$ and
$\sigma^{yxx}$ and large band-edge responses. The magnitude of the peak
response of $\sigma^{yxx}$ computed with HSE is close to 6500 $\mu$A/V$^{2}$. In
comparison, the spectra obtained with SOC are qualitatively different. In
general, the spectra of PBE+SOC and HSE+SOC share similar peak structures, with
the latter predicting lower peak values (Fig. 8c-f). For example, PBE+SOC
predicts a main peak of $-2000$ $\mu$A/V$^{2}$ at 1.0 eV for $\sigma^{yyy}$, while
the HSE+SOC spectrum of $\sigma^{yyy}$ has its highest peak of $-500$
$\mu$A/V$^{2}$ located at 1.4 eV. We note that the band-edge response is no longer
the strongest. The spectrum of $\varepsilon_{2}^{yy}$ (Fig. 9b) confirms that
the SOC interaction reduces the band-edge photon absorption relative to HSE.
Given the substantially different band structures predicted by HSE and
HSE+SOC, it is not surprising to obtain drastically different spectra of
$\bar{R}^{y,y}$ (Fig. 9a). For monolayer Bi, the inclusion of SOC is crucial
for obtaining correct shift current spectra.
### II.5 Strain effect
Because reduced-dimensional structures can sustain much larger strains than
their bulk counterparts, it has become common to use strain to modulate the
structural, electronic, and optical properties of 2D materials [55, 56, 57,
58]. Here, we explore the effect of uniaxial strain ($\eta_{x}$) on the shift
current response. Taking monolayer As for example, we find that a tensile
strain along the $x$ direction can effectively reduce the band gap and promote
the current density. As shown in Fig. 10, a 3% tensile strain enhances both
$\sigma^{yyy}$ and $\sigma^{yxx}$ by nearly 4-fold. In contrast, a compressive
strain ($\eta_{x}=-3$%) increases the band gap and hence reduces the band-edge
response. Interestingly, the peak of $\sigma^{yyy}$ at a higher photon energy
of 1.6 eV is enhanced by the $-3$% compressive strain. A similar tensile-strain-promoted response is also found in monolayer Sb. In particular, the
magnitude of $\sigma^{yxx}$ is enhanced to 2800 $\mu$A/V$^{2}$ when stretching
the monolayer by 2% along $x$ (Fig. 10d).
Finally, we compare the shift current responses of different materials
including bulk polar materials (e.g., PbTiO3, BaTiO3, and GaAs) and 2D
materials (e.g., GeSe and CrI3). As summarized in Fig. 11a, the magnitude of
the response tensor roughly scales inversely with the band gap, and the
stretched monolayer Sb has the highest response of 2800 $\mu$A/V$^{2}$. Note that
most previous results were based on PBE calculations, which generally
overestimate the response. To compare with experimental data, we further
estimate the shift photocurrent density based on the computed response tensor
and a light intensity of 0.1 W/cm$^{2}$ following the method in ref. [33]. It is
evident from Fig. 11b that elemental 2D ferroelectrics represented by Sb, As,
and Bi outperform conventional bulk ferroelectrics and 2D layered materials
such as CuInP2S6 over a wide range of wavelengths, including the visible
spectrum. It is worth noting that TaAs exhibits a strong response at a
wavelength of $\approx$10000 nm, far longer than the wavelengths of
visible light. Given that TaAs is a Weyl semimetal, it is not surprising that its
shift current response is large due to the gapless band dispersion and
topological nature. The mid-infrared response is likely more relevant to
applications such as optical detectors and sensors. In comparison, monolayers
As and Sb exhibit large shift current response across a wide range of
wavelengths including the visible light spectrum, which would be advantageous
for utilizing most of the solar spectrum. Additionally, since the peak
responses of group-V elemental ferroelectrics are consistent with light-
induced terahertz emission, the large current magnitude suggests their
potential applications in terahertz source platforms.
In this work, we investigate the shift current responses in single-element
two-dimensional ferroelectrics represented by monolayer As, Sb, and Bi in the
space group $Pmn2_{1}$ using first-principles density functional theory
calculations. We find that PBE and HSE yield qualitatively different shift
current spectra, demonstrating the importance of exact exchange potential for
reliable predictions of optical responses. Moreover, the spin-orbit coupling,
largely overlooked in previous studies, can substantially affect the
magnitude, sign, and spectral profile of shift current even for light elements
such as arsenic. This highlights the topological nature of shift current that
has nontrivial dependence on both intraband and interband Berry connections of
a bundle of valence and conduction bands. Regarding the computational
design of new solar materials that can generate large shift currents, we
suggest that it is essential to treat the electronic exchange-correlation
interaction beyond the generalized gradient approximation and to include the
spin-orbit interaction in density functional theory calculations. Based on the
results predicted by HSE+SOC, we propose that elemental 2D ferroelectrics can
support large shift currents, outperforming many state-of-the-art materials.
This work unravels the potential of elemental 2D ferroelectrics for
photovoltaic and optoelectronic applications.
## III Method
We follow the formalism adopted in Refs. [59, 60] to compute the photon
frequency-dependent shift current spectrum. The shift current density
($J_{2}$) is regarded as a second-order optical response to the
electromagnetic field $E$ of frequency $\omega$,
$J_{2}^{a}=2\sum_{bc}\sigma^{abc}(0;\omega,-\omega)E^{b}(\omega)E^{c}(-\omega),$
(4)
where the third-rank tensor $\sigma^{abc}(0;\omega,-\omega)$ is the shift
current response tensor; $a$, $b$, and $c$ are Cartesian indices, the
index $a$ specifies the direction of the generated DC current, and $b$ and
$c$ are the polarization directions of the incident light. Within the length gauge,
the shift current tensor is derived as
$\sigma^{abc}(0;\omega,-\omega)=-\frac{i\pi
e^{3}}{2\hbar^{2}}\int\frac{d\bm{k}}{8\pi^{3}}\sum_{nm}f_{nm}(r^{b}_{mn}r^{c}_{nm;a}+r^{c}_{mn}r^{b}_{nm;a})\delta(\omega_{mn}-\omega),$
(5)
where $n$ and $m$ are band indices, and $\bm{k}$ is the wavevector of the
Bloch wave function. $f_{nm}=f_{n}-f_{m}$ is the difference in the Fermi-Dirac
occupation number between bands $n$ and $m$;
$\omega_{nm}=\omega_{n}-\omega_{m}$ represents the band energy difference.
$r^{b}_{nm}=i\langle n|\partial_{k^{b}}|m\rangle$ is the dipole matrix element
(interband Berry connection). $r^{b}_{nm;a}$ is the generalized derivative,
expressed as $r^{b}_{nm;a}=\frac{\partial r^{b}_{nm}}{\partial
k^{a}}-ir^{b}_{nm}(\mathcal{A}^{a}_{n}-\mathcal{A}^{a}_{m})$, with
$\mathcal{A}^{a}_{n}=i\langle n|\partial_{k^{a}}|n\rangle$ the intraband Berry
connection.
Under linearly polarized light ($b=c$), the shift current response tensor
can be reformulated into a more compact expression,
$\sigma^{abb}(0;\omega,-\omega)=\frac{\pi
e^{3}}{\hbar^{2}}\int\frac{d\bm{k}}{8\pi^{3}}\sum_{nm}f_{nm}R_{nm}^{a,b}r^{b}_{nm}r^{b}_{mn}\delta(\omega_{mn}-\omega),$
(6)
where $R_{nm}^{a,b}=\frac{\partial\phi^{b}_{nm}}{\partial
k^{a}}+\mathcal{A}^{a}_{n}-\mathcal{A}^{a}_{m}$ is the shift vector with
$\phi_{nm}$ being the phase factor of the dipole matrix element
$r_{nm}^{b}=|r_{nm}^{b}|e^{-i\phi_{nm}^{b}}$. The shift vector has units of
length and represents the average displacement of the coherent photoexcited
carriers during their lifetimes. The $r^{b}_{nm}r^{b}_{mn}$ term measures the
transition intensity, which describes the optical absorption strength for the
transition from band $m$ to band $n$. Therefore, the response tensor can be
viewed as the product of the shift vector and the optical transition intensity.
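On a discrete $k$-grid, the shift vector in eq. 6 can be assembled directly from the phase of the dipole matrix element and the intraband Berry connections. The sketch below uses naive finite differences along one $k$ direction with phase unwrapping (since $\phi_{nm}^{b}$ is only defined modulo $2\pi$); production codes such as the Wannier-interpolation approach of ref. [59] evaluate this gauge-covariantly instead. Array names are hypothetical:

```python
import numpy as np

def shift_vector(r_b_nm, A_a_n, A_a_m, dk):
    """R_{nm}^{a,b} = d(phi_{nm}^{b})/dk^{a} + A^{a}_{n} - A^{a}_{m}
    along a 1D line of k points.

    r_b_nm       : (Nk,) complex dipole matrix elements r_{nm}^{b}
    A_a_n, A_a_m : (Nk,) intraband Berry connections of bands n and m
    dk           : grid spacing along k^{a}
    """
    # r_{nm}^{b} = |r| e^{-i phi}, so phi = -arg(r); unwrap to remove
    # spurious 2*pi jumps before differentiating.
    phi = np.unwrap(-np.angle(r_b_nm))
    return np.gradient(phi, dk) + A_a_n - A_a_m
```

For instance, a dipole element with linear phase $r=e^{-2ik}$ and constant connections $\mathcal{A}_{n}=0.5$, $\mathcal{A}_{m}=0.1$ gives a constant shift vector of 2.4 in the inverse units of $k$.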
The structural parameters of monolayer As, Sb, and Bi are optimized using
Quantum Espresso [61, 62] with Garrity-Bennett-Rabe-Vanderbilt (GBRV)
ultrasoft pseudopotentials [63]. The exchange-correlation functional is
treated within the generalized gradient approximation of the Perdew-Burke-Ernzerhof (PBE) type [64]. We use a plane-wave kinetic energy cutoff of 50 Ry,
a charge density cutoff of 250 Ry, a 12$\times$12$\times$1 Monkhorst-Pack
$k$-point mesh for Brillouin zone (BZ) integration, an ionic energy
convergence threshold of $10^{-5}$ Ry, and a force convergence threshold of $10^{-4}$ Ry
in structural optimizations. Maximally localized Wannier functions fitting the DFT
electronic structures are obtained using the Wannier90 code [65], and then the
shift current response tensor is calculated in the Wannier basis as described
in ref. [59]. We use a numerical smearing parameter of 20 meV and a dense
$k$-point grid of 1000$\times$1000$\times$1 to ensure the spectrum
convergence. In addition, the 3D-like response is estimated by rescaling the
calculated 2D response with the effective thickness of the monolayer [33]. The
spin-orbit coupling (SOC) is taken into account at the fully relativistic
level with norm-conserving pseudopotentials provided by the PseudoDojo project
[66]. We also compute the shift current using the Heyd-Scuseria-Ernzerhof
(HSE) hybrid functional [67] with an $8\times 8\times 1$ $q$-point grid during
the self-consistent-field cycles. The HSE band structure is obtained via
Wannier interpolation [68] using Wannier90 interfaced with Quantum Espresso.
We employ the Wannier Berri code [69, 70] to compute the shift vector and
transition intensity (eqs. 1 and 2) used to analyze the shift current
spectrum. It is noted that all shift current calculations are based on the
independent-particle approximation and do not take the exciton effect into
account.
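For reference, the relaxation settings listed above map onto a Quantum Espresso `pw.x` input along these lines; the structure cards and pseudopotential filename are placeholders, not the exact inputs used in this work:

```
&CONTROL
  calculation   = 'vc-relax'
  etot_conv_thr = 1.0d-5    ! ionic energy threshold, Ry
  forc_conv_thr = 1.0d-4    ! force threshold (QE atomic units)
/
&SYSTEM
  ibrav = 0, nat = 4, ntyp = 1
  ecutwfc = 50.0            ! plane-wave kinetic energy cutoff, Ry
  ecutrho = 250.0           ! charge density cutoff, Ry
/
&ELECTRONS
/
&IONS
/
&CELL
/
ATOMIC_SPECIES
  As  74.9216  As.pbe.upf   ! placeholder GBRV pseudopotential file
K_POINTS automatic
  12 12 1  0 0 0
```

The `ATOMIC_POSITIONS` and `CELL_PARAMETERS` cards (and the vacuum spacing needed for a 2D slab) are omitted here since they depend on the relaxed monolayer geometry.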
###### Acknowledgements.
Z.Q. and S.L. acknowledge the support from Westlake Education Foundation. We
acknowledge Dr. Jae-Mo Lihm for useful suggestions regarding the usage of
Wannier90 and Wannier Berri. Z.Q. acknowledges the help from Yudi Yang
during the preparation of the manuscript. The computational resource is
provided by Westlake HPC Center. J.Z. acknowledges National Natural Science
Foundation of China under Grant No. 11974270.
Author Contributions S.L. conceived and led the project. Z.Q. performed
calculations and data analysis. All authors contributed to the discussion and
the manuscript preparation.
Competing Interests The authors declare no competing financial or non-
financial interests.
Data Availability The data that support the findings of this study are
included in this article and are available from the corresponding author upon
reasonable request.
## References
* Shockley and Queisser [1961] W. Shockley and H. J. Queisser, Detailed balance limit of efficiency of $p-n$ junction solar cells, J. Appl. Phys. 32, 510 (1961).
* Sturman and Fridkin [2021] B. I. Sturman and V. M. Fridkin, _The Photovoltaic and Photorefractive Effects in Noncentrosymmetric Materials_ (Routledge, 2021).
* Fridkin [2001] V. M. Fridkin, Bulk photovoltaic effect in noncentrosymmetric crystals, Crystallogr. Rep. 46, 654 (2001).
* Grinberg _et al._ [2013] I. Grinberg _et al._ , Perovskite oxides for visible-light-absorbing ferroelectric and photovoltaic materials, Nature 503, 509 (2013).
* Butler _et al._ [2015] K. T. Butler, J. M. Frost, and A. Walsh, Ferroelectric materials for solar energy conversion: photoferroics revisited, Energy Environ. Sci. 8, 838 (2015).
* Tan _et al._ [2016] L. Z. Tan _et al._ , Shift current bulk photovoltaic effect in polar materials—hybrid and oxide perovskites and beyond, npj Comput. Mater. 2, 16026 (2016).
* Spanier _et al._ [2016] J. E. Spanier _et al._ , Power conversion efficiency exceeding the shockley–queisser limit in a ferroelectric insulator, Nat. Photonics 10, 611 (2016).
* Zenkevich _et al._ [2014] A. Zenkevich _et al._ , Giant bulk photovoltaic effect in thin ferroelectric ${BaTiO_{3}}$ films, Phys. Rev. B 90, 161409 (2014).
* Nakamura _et al._ [2017] M. Nakamura _et al._ , Shift current photovoltaic effect in a ferroelectric charge-transfer complex, Nat. Commun. 8, 281 (2017).
* Pal _et al._ [2018] S. Pal _et al._ , Giant photovoltaic response in band engineered ferroelectric perovskite, Sci. Rep. 8, 8005 (2018).
* Osterhoudt _et al._ [2019] G. B. Osterhoudt _et al._ , Colossal mid-infrared bulk photovoltaic effect in a type-i weyl semimetal, Nat. Mater. 18, 471 (2019).
* Huangfu _et al._ [2020] G. Huangfu _et al._ , Visible or near-infrared light self-powered photodetectors based on transparent ferroelectric ceramics, ACS Appl. Mater. Interfaces 12, 33950 (2020).
* Zhang _et al._ [2019a] Y. J. Zhang _et al._ , Enhanced intrinsic photovoltaic effect in tungsten disulfide nanotubes, Nature 570, 349 (2019a).
* Yang _et al._ [2010] S. Y. Yang _et al._ , Above-bandgap voltages from ferroelectric photovoltaic devices, Nat. Nanotechnol. 5, 143 (2010).
* Li _et al._ [2021] Y. Li _et al._ , Enhanced bulk photovoltaic effect in two-dimensional ferroelectric CuInP2S6, Nat. Commun. 12, 5896 (2021).
* Burger _et al._ [2019] A. M. Burger _et al._ , Direct observation of shift and ballistic photovoltaic currents, Sci. Adv. 5, eaau5588 (2019).
* Kraut and von Baltz [1979] W. Kraut and R. von Baltz, Anomalous bulk photovoltaic effect in ferroelectrics: A quadratic response theory, Phys. Rev. B 19, 1548 (1979).
* von Baltz and Kraut [1981] R. von Baltz and W. Kraut, Theory of the bulk photovoltaic effect in pure crystals, Phys. Rev. B 23, 5590 (1981).
* Aversa and Sipe [1995] C. Aversa and J. E. Sipe, Nonlinear optical susceptibilities of semiconductors: Results with a length-gauge analysis, Phys. Rev. B 52, 14636 (1995).
* Sipe and Shkrebtii [2000] J. E. Sipe and A. I. Shkrebtii, Second-order optical response in semiconductors, Phys. Rev. B 61, 5337 (2000).
* Fregoso [2019] B. M. Fregoso, Bulk photovoltaic effects in the presence of a static electric field, Phys. Rev. B 100, 064301 (2019).
* Sturman [2020] B. I. Sturman, Ballistic and shift currents in the bulk photovoltaic effect theory, Phys.–Usp. 63, 407 (2020).
* Panday _et al._ [2019] S. R. Panday, S. Barraza-Lopez, T. Rangel, and B. M. Fregoso, Injection current in ferroelectric group-IV monochalcogenide monolayers, Phys. Rev. B 100, 195305 (2019).
* Dai _et al._ [2021] Z. Dai, A. M. Schankler, L. Gao, L. Z. Tan, and A. M. Rappe, Phonon-assisted ballistic current from first-principles calculations, Phys. Rev. Lett. 126, 177403 (2021).
* Glass _et al._ [1974] A. M. Glass, D. von der Linde, and T. J. Negran, High‐voltage bulk photovoltaic effect and the photorefractive process in LiNbO3, Appl. Phys. Lett. 25, 233 (1974).
* Barik and Sau [2020] T. Barik and J. D. Sau, Nonequilibrium nature of nonlinear optical response: Application to the bulk photovoltaic effect, Phys. Rev. B 101, 045201 (2020).
* Ahn _et al._ [2020] J. Ahn, G.-Y. Guo, and N. Nagaosa, Low-frequency divergence and quantum geometry of the bulk photovoltaic effect in topological semimetals, Phys. Rev. X 10, 041041 (2020).
* Nastos and Sipe [2006] F. Nastos and J. E. Sipe, Optical rectification and shift currents in GaAs and GaP response: Below and above the band gap, Phys. Rev. B 74, 035201 (2006).
* Young and Rappe [2012] S. M. Young and A. M. Rappe, First principles calculation of the shift current photovoltaic effect in ferroelectrics, Phys. Rev. Lett. 109, 116601 (2012).
* Young _et al._ [2012] S. M. Young, F. Zheng, and A. M. Rappe, First-principles calculation of the bulk photovoltaic effect in bismuth ferrite, Phys. Rev. Lett. 109, 236601 (2012).
* Young _et al._ [2015] S. M. Young, F. Zheng, and A. M. Rappe, First-principles materials design of high-performing bulk photovoltaics with the LiNbO3 structure, Phys. Rev. Appl. 4, 054004 (2015).
* Tan and Rappe [2019] L. Z. Tan and A. M. Rappe, Upper limit on shift current generation in extended systems, Phys. Rev. B 100, 085102 (2019).
* Rangel _et al._ [2017] T. Rangel _et al._ , Large bulk photovoltaic effect and spontaneous polarization of single-layer monochalcogenides, Phys. Rev. Lett. 119, 067402 (2017).
* Wang and Qian [2019] H. Wang and X. Qian, Ferroicity-driven nonlinear photocurrent switching in time-reversal invariant ferroic materials, Sci. Adv. 5, eaav9743 (2019).
* Tiwari [2022] R. P. Tiwari, Enhanced shift current bulk photovoltaic effect in ferroelectric rashba semiconductor $\alpha$-GeTe: ab initio study from three- to two-dimensional van der waals layered structures, J. Phys. Condens. Matter 34, 435404 (2022).
* Zhang _et al._ [2019b] Y. Zhang _et al._ , Switchable magnetic bulk photovoltaic effect in the two-dimensional magnet CrI3, Nat. Commun. 10, 3783 (2019b).
* Xu _et al._ [2021a] H. Xu, H. Wang, J. Zhou, and J. Li, Pure spin photocurrent in non-centrosymmetric crystals: bulk spin photovoltaic effect, Nat. Commun. 12, 4330 (2021a).
* Xiao _et al._ [2022] R. C. Xiao _et al._ , Non-synchronous bulk photovoltaic effect in two-dimensional interlayer-sliding ferroelectrics, npj Comput. Mater. 8, 138 (2022).
* Kim _et al._ [2022] B. Kim, N. Park, and J. Kim, Giant bulk photovoltaic effect driven by the wall-to-wall charge shift in WS2 nanotubes, Nat. Commun. 13, 3237 (2022).
* Liu _et al._ [2017] S. Liu, F. Zheng, and A. M. Rappe, Giant bulk photovoltaic effect in vinylene-linked hybrid heterocyclic polymer, J. Phys. Chem. C 121, 6500 (2017).
* Tan and Rappe [2016] L. Z. Tan and A. M. Rappe, Enhancement of the bulk photovoltaic effect in topological insulators, Phys. Rev. Lett. 116, 237402 (2016).
* Xu _et al._ [2021b] H. Xu _et al._ , Colossal switchable photocurrents in topological janus transition metal dichalcogenides, npj Comput. Mater. 7, 31 (2021b).
* Xu _et al._ [2022a] H. Xu, H. Wang, and J. Li, Abnormal nonlinear optical responses on the surface of topological materials, npj Comput. Mater. 8, 111 (2022a).
* Xu _et al._ [2021c] H. Xu, J. Zhou, and J. Li, Light-induced quantum anomalous hall effect on the 2D surfaces of 3D topological insulators, Adv. Sci. 8, 2101508 (2021c).
* Ji _et al._ [2019] Z. Ji _et al._ , Spatially dispersive circular photogalvanic effect in a weyl semimetal, Nat. Mater. 18, 955 (2019).
* Cook _et al._ [2017] A. M. Cook, B. M. Fregoso, F. de Juan, S. Coh, and J. E. Moore, Design principles for shift current photovoltaics, Nat. Commun. 8, 14176 (2017).
* Xiao _et al._ [2018] C. Xiao _et al._ , Elemental ferroelectricity and antiferroelectricity in group-V monolayer, Adv. Funct. Mater. 28, 1707383 (2018).
* Xu _et al._ [2022b] F. Xu _et al._ , Controllable ferroelectricity and bulk photovoltaic effect in elemental group-v monolayers through strain engineering, Phys. Rev. B 106, 195418 (2022b).
* Wan _et al._ [2017] W. Wan, C. Liu, W. Xiao, and Y. Yao, Promising ferroelectricity in 2D group-IV tellurides: a first-principles study, Appl. Phys. Lett. 111, 132904 (2017).
* Wang _et al._ [2006] X. S. Wang, S. S. Kushvaha, Z. Yan, and W. Xiao, Self-assembly of antimony nanowires on graphite, Appl. Phys. Lett. 88, 233105 (2006).
* Bianchi _et al._ [2012] M. Bianchi _et al._ , Surface states on a topologically nontrivial semimetal: The case of Sb(110), Phys. Rev. B 85, 155431 (2012).
* Nagao _et al._ [2004] T. Nagao _et al._ , Nanofilm allotrope and phase transformation of ultrathin bi film on Si(111)-7×7, Phys. Rev. Lett. 93, 105501 (2004).
* Strasser _et al._ [2022] A. Strasser, H. Wang, and X. Qian, Nonlinear optical and photocurrent responses in janus MoSSe monolayer and MoS2-MoSSe van der waals heterostructure, Nano Lett. 22, 4145 (2022).
* Mu _et al._ [2023] X. Mu, Q. Xue, Y. Sun, and J. Zhou, Magnetic proximity enabled bulk photovoltaic effects in van der waals heterostructures, Phys. Rev. Res. 5, 013001 (2023).
* Xu _et al._ [2020] R. Xu _et al._ , Strain-induced room-temperature ferroelectricity in SrTiO3 membranes, Nat. Commun. 11, 3141 (2020).
* Li _et al._ [2020] Z. Li _et al._ , Efficient strain modulation of 2D materials via polymer encapsulation, Nat. Commun. 11, 1151 (2020).
* Zhao _et al._ [2020] C. Zhao _et al._ , Strain tunable semimetal–topological-insulator transition in monolayer 1-T’-WTe2, Phys. Rev. Lett. 125, 046801 (2020).
* Wei _et al._ [2018] Y. Wei _et al._ , A rhombohedral ferroelectric phase in epitaxially strained Hf0.5Zr0.5O2 thin films, Nature Mater. 17, 1095 (2018).
* Ibañez-Azpiroz _et al._ [2018] J. Ibañez-Azpiroz, S. S. Tsirkin, and I. Souza, Ab initio calculation of the shift photocurrent by wannier interpolation, Phys. Rev. B 97, 245143 (2018).
* Wang _et al._ [2017] C. Wang _et al._ , First-principles calculation of nonlinear optical responses by wannier interpolation, Phys. Rev. B 96, 115147 (2017).
* Giannozzi _et al._ [2009] P. Giannozzi _et al._ , QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials, J. Phys. Condens. Matter 21, 395502 (2009).
* Giannozzi _et al._ [2017] P. Giannozzi _et al._ , Advanced capabilities for materials modelling with QUANTUM ESPRESSO, J. Phys. Condens. Matter 29, 465901 (2017).
* Garrity _et al._ [2014] K. F. Garrity, J. W. Bennett, K. M. Rabe, and D. Vanderbilt, Pseudopotentials for high-throughput DFT calculations, Comput. Mater. Sci. 81, 446 (2014).
* Perdew _et al._ [1996] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77, 3865 (1996).
* Pizzi _et al._ [2020] G. Pizzi _et al._ , Wannier90 as a community code: new features and applications, J. Phys. Condens. Matter. 32, 165902 (2020).
* van Setten _et al._ [2018] M. van Setten _et al._ , The PseudoDojo: Training and grading a 85 element optimized norm-conserving pseudopotential table, Comput. Phys. Commun. 226, 39 (2018).
* Krukau _et al._ [2006] A. V. Krukau, O. A. Vydrov, A. F. Izmaylov, and G. E. Scuseria, Influence of the exchange screening parameter on the performance of screened hybrid functionals, J. Chem. Phys. 125, 224106 (2006).
* Marzari _et al._ [2012] N. Marzari, A. A. Mostofi, J. R. Yates, I. Souza, and D. Vanderbilt, Maximally localized Wannier functions: Theory and applications, Rev. Mod. Phys. 84, 1419 (2012).
* Tsirkin [2021] S. S. Tsirkin, High performance Wannier interpolation of Berry curvature and related quantities with WannierBerri code, npj Comput. Mater. 7, 33 (2021).
* Destraz _et al._ [2020] D. Destraz _et al._ , Magnetism and anomalous transport in the Weyl semimetal PrAlGe: possible route to axial gauge fields, npj Quant. Mater. 5, 5 (2020).
Figure 1: Ferroelectric single-element group-V monolayers. Schematics of (a)
side view and (b) top view of crystal structures of single-element group-V
monolayer with in-plane polarization along the $y$ axis. The inversion
symmetry breaking results from the spontaneous atomic layer buckling denoted
as $h$. The atoms are colored based on the sign of the Born effective charge
($Z_{11}$) reported in the table (blue for negative and red for positive). (c)
Energy evolution as a function of buckling height $h$ in monolayer As (left)
and monolayer Sb (right), revealing a double-well potential.
Figure 2: Electronic band structures and shift current spectra of monolayer
As. Band structures computed with (a) PBE and PBE+SOC and (b) HSE and HSE+SOC.
$\sigma^{yyy}$ spectra computed with (c) PBE and PBE+SOC and (d) HSE and
HSE+SOC. $\sigma^{yxx}$ spectra computed with (e) PBE and PBE+SOC and (f) HSE
and HSE+SOC.
Figure 3: Shift vector and transition intensity in monolayer As. BZ-integrated
shift vector ($\bar{R}^{y,y}$) and transition intensity
($\varepsilon_{2}^{yy}$) for $\sigma^{yyy}$ in monolayer As. $\bar{R}^{y,y}$
estimated with (a) PBE and PBE+SOC and (c) HSE and HSE+SOC.
$\varepsilon_{2}^{yy}$ estimated with (b) PBE and PBE+SOC and (d) HSE and
HSE+SOC.
Figure 4: Band structure and shift current in bilayer As. (a) Electronic band
structure and (b) shift current $\sigma^{yyy}$ and $\sigma^{yxx}$ spectra for
bilayer As calculated with HSE+SOC. The inset in (a) shows the structure of
bilayer As.
Figure 5: Electronic band structures and shift current spectra of monolayer
Sb. Band structures computed with (a) PBE and PBE+SOC and (b) HSE and HSE+SOC.
$\sigma^{yyy}$ spectra computed with (c) PBE and PBE+SOC and (d) HSE and
HSE+SOC. $\sigma^{yxx}$ spectra computed with (e) PBE and PBE+SOC and (f) HSE
and HSE+SOC.
Figure 6: Shift vector and transition intensity in monolayer Sb. BZ-integrated
shift vector ($\bar{R}^{y,y}$) and transition intensity
($\varepsilon_{2}^{yy}$) for $\sigma^{yyy}$ in monolayer Sb. $\bar{R}^{y,y}$
estimated with (a) PBE and PBE+SOC and (c) HSE and HSE+SOC.
$\varepsilon_{2}^{yy}$ estimated with (b) PBE and PBE+SOC and (d) HSE and
HSE+SOC. (e) and (f) are the $k$-resolved shift vectors calculated at
$\omega=0.2$ for PBE and PBE+SOC.
Figure 7: Analysis of the electronic structure of monolayer Sb. (a) Zoomed-in
band structures calculated with HSE and HSE+SOC. (b) Joint density of states.
Contour plots of the energy difference between the highest valence band and
the lowest conduction band over the whole 2D BZ obtained with (c) HSE and (d)
HSE+SOC. The star symbol labels the direct band gap.
Figure 8: Electronic band structures and shift current spectra of monolayer
Bi. Band structures computed with (a) PBE and PBE+SOC and (b) HSE and HSE+SOC.
$\sigma^{yyy}$ spectra computed with (c) PBE and PBE+SOC and (d) HSE and
HSE+SOC. $\sigma^{yxx}$ spectra computed with (e) PBE and PBE+SOC and (f) HSE
and HSE+SOC.
Figure 9: Shift vector and transition intensity in monolayer Bi. BZ-integrated
(a) shift vector and (b) transition intensity for $\sigma^{yyy}$ in monolayer
Bi computed with HSE and HSE+SOC.
Figure 10: Strain dependence of the shift current. Uniaxial strain dependence
of (a)(c) $\sigma^{yyy}$ and (b)(d) $\sigma^{yxx}$ of monolayer As and Sb.
Figure 11: Comparison of shift current response and current density of
different materials. (a) Peak value of shift current response [29, 59, 33, 60,
36] and (b) current density [15, 7, 8, 4, 12, 10, 11] of different materials
assuming a light intensity of 0.1 W/cm2. Besides strong band-edge responses to
long-wavelength light, monolayer Sb and As can generate large shift currents
in response to visible light illumination.
Present address: Department of Physics and Astronomy, Texas A&M University,
College Station, 77843-4242, Texas, USA and Cyclotron Institute, Texas A&M
University, College Station, 77843-3636, Texas, USA.
# Isoscalar giant monopole strength in 58Ni, 90Zr, 120Sn and 208Pb
A. Bahini<EMAIL_ADDRESS>School of Physics, University of the
Witwatersrand, Johannesburg 2050, South Africa iThemba Laboratory for
Accelerator Based Sciences, Somerset West 7129, South Africa R. Neveling
<EMAIL_ADDRESS>iThemba Laboratory for Accelerator Based Sciences,
Somerset West 7129, South Africa P. von Neumann-Cosel Institut für
Kernphysik, Technische Universität Darmstadt, D-64289 Darmstadt, Germany J.
Carter School of Physics, University of the Witwatersrand, Johannesburg 2050,
South Africa I. T. Usman School of Physics, University of the Witwatersrand,
Johannesburg 2050, South Africa P. Adsley School of Physics, University
of the Witwatersrand, Johannesburg 2050, South Africa iThemba Laboratory for
Accelerator Based Sciences, Somerset West 7129, South Africa Department of
Physics, Stellenbosch University, Matieland Stellenbosch 7602, South Africa
Irene Joliot Curie Lab, UMR8608, IN2P3-CNRS, Université Paris Sud 11, 91406
Orsay, France N. Botha School of Physics, University of the Witwatersrand,
Johannesburg 2050, South Africa J. W. Brümmer iThemba Laboratory for
Accelerator Based Sciences, Somerset West 7129, South Africa Department of
Physics, Stellenbosch University, Matieland Stellenbosch 7602, South Africa L.
M. Donaldson iThemba Laboratory for Accelerator Based Sciences, Somerset West
7129, South Africa S. Jongile iThemba Laboratory for Accelerator Based
Sciences, Somerset West 7129, South Africa Department of Physics,
Stellenbosch University, Matieland Stellenbosch 7602, South Africa T. C.
Khumalo School of Physics, University of the Witwatersrand, Johannesburg
2050, South Africa iThemba Laboratory for Accelerator Based Sciences,
Somerset West 7129, South Africa Department of Physics, University of
Zululand, Richards Bay, 3900, South Africa M. B. Latif School of Physics,
University of the Witwatersrand, Johannesburg 2050, South Africa iThemba
Laboratory for Accelerator Based Sciences, Somerset West 7129, South Africa
K. C. W. Li iThemba Laboratory for Accelerator Based Sciences, Somerset West
7129, South Africa Department of Physics, Stellenbosch University, Matieland
Stellenbosch 7602, South Africa P. Z. Mabika Department of Physics and
Astronomy, University of the Western Cape, Bellville 7535, South Africa P. T.
Molema School of Physics, University of the Witwatersrand, Johannesburg 2050,
South Africa iThemba Laboratory for Accelerator Based Sciences, Somerset West
7129, South Africa C. S. Moodley School of Physics, University of the
Witwatersrand, Johannesburg 2050, South Africa iThemba Laboratory for
Accelerator Based Sciences, Somerset West 7129, South Africa S. D.
Olorunfunmi School of Physics, University of the Witwatersrand, Johannesburg
2050, South Africa iThemba Laboratory for Accelerator Based Sciences,
Somerset West 7129, South Africa P. Papka iThemba Laboratory for Accelerator
Based Sciences, Somerset West 7129, South Africa Department of Physics,
Stellenbosch University, Matieland Stellenbosch 7602, South Africa L. Pellegri
School of Physics, University of the Witwatersrand, Johannesburg 2050, South
Africa iThemba Laboratory for Accelerator Based Sciences, Somerset West 7129,
South Africa B. Rebeiro Department of Physics and Astronomy, University of
the Western Cape, Bellville 7535, South Africa E. Sideras-Haddad School of
Physics, University of the Witwatersrand, Johannesburg 2050, South Africa F.
D. Smit iThemba Laboratory for Accelerator Based Sciences, Somerset West
7129, South Africa S. Triambak Department of Physics and Astronomy,
University of the Western Cape, Bellville 7535, South Africa M. Wiedeking
School of Physics, University of the Witwatersrand, Johannesburg 2050, South
Africa iThemba Laboratory for Accelerator Based Sciences, Somerset West 7129,
South Africa J. J. van Zyl Department of Physics, Stellenbosch University,
Matieland Stellenbosch 7602, South Africa
###### Abstract
Background: Inelastic $\alpha$-particle scattering at energies of a few
hundred MeV and very-forward scattering angles including $0^{\circ}$ has been
established as a tool for the study of the isoscalar giant monopole (IS0)
strength distributions in nuclei. This compressional mode of nuclear
excitation can be used to derive the incompressibility of nuclear matter.
Objective: An independent investigation of the IS0 strength in nuclei across a
wide mass range was performed using the $0^{\circ}$ facility at iThemba
Laboratory for Accelerator Based Sciences (iThemba LABS), South Africa, to
understand differences observed between IS0 strength distributions in previous
experiments performed at the Texas A&M University (TAMU) Cyclotron Institute,
USA and the Research Center for Nuclear Physics (RCNP), Japan.
Methods: The isoscalar giant monopole resonance (ISGMR) was excited in 58Ni,
90Zr, 120Sn and 208Pb using $\alpha$-particle inelastic scattering with a $196$
MeV $\alpha$ beam and scattering angles $\theta_{\text{Lab}}=0^{\circ}$ and
$4^{\circ}$. The K$600$ magnetic spectrometer at iThemba LABS was used to
detect and momentum analyze the inelastically scattered $\alpha$ particles.
The IS0 strength distributions in the nuclei studied were deduced with the
difference-of-spectra (DoS) technique including a correction factor for the
$4^{\circ}$ data based on the decomposition of $L>0$ cross sections in
previous experiments.
Results: IS0 strength distributions for 58Ni, 90Zr, 120Sn and 208Pb are
extracted in the excitation-energy region $E_{\rm x}=9-25$ MeV. Using
correction factors extracted from the RCNP experiments, there is a fair
agreement with their published IS0 results. Good agreement for IS0 strength in
58Ni is also obtained with correction factors deduced from the TAMU results,
while marked differences are found for 90Zr and 208Pb.
Conclusions: Previous measurements show significant differences in the IS0
strength distributions of 90Zr and 208Pb. This work demonstrates clear
structural differences in the energy region of the main resonance peaks with
possible impact on the determination of the nuclear matter incompressibility
presently based on the IS0 centroid energies of these two nuclei. The results
also suggest that for an improved determination of the incompressibility,
theoretical approaches should aim at a description of the full strength
distributions rather than the centroid energy only.
## I Introduction
The isoscalar giant monopole resonance (ISGMR) is a nuclear collective
excitation that can provide information on the bulk properties of the nucleus
Harakeh . It was first identified in the late 1970s harakeh1977mn ;
youngblood1977isoscalar and has since then been extensively studied due to
its role in constraining the incompressibility of uniform nuclear matter
($K_{\infty}$) Blaizot ; Harakeh ; GC_review2018 . Current knowledge of the
ISGMR in stable nuclei depends largely on experimental studies performed at
the Texas A&M University (TAMU) Cyclotron Institute and the Research Center
for Nuclear Physics (RCNP) over the past three decades through small-angle
(including $0^{\circ}$) inelastic $\alpha$-particle scattering measurements at
$240$ MeV and $386$ MeV, respectively GC_review2018 .
There are well-known examples where different systematic trends of the
incompressibility of nuclei ($K_{A}$) are extracted from datasets obtained at
these two facilities. The possibility of nuclear structure contributions to
$K_{A}$ was considered by Youngblood et al. youngblood2013 following the
investigation of the ISGMR strength in 90,92,94Zr and 92,96,98,100Mo at TAMU.
Such a suggestion would have considerable consequences, since it contradicts
the generally held notion that the ISGMR and nuclear incompressibility are
collective phenomena and hence insensitive to details of the internal
structure of the nucleus. The ISGMR centroid energy for 90Zr was reported to
be $1.22$ MeV and $2.80$ MeV lower than that for 92Zr and 92Mo, respectively,
resulting in a value for $K_{A}$ that increases with mass number. This
unexpected result was subsequently attributed to the high excitation-energy
tail of the isoscalar giant monopole (IS0) strengths that were substantially
larger in 92Zr and 92Mo than for the other Zr and Mo isotopes
krishichayan2015g . However, these differences were not observed in
independent measurements performed at RCNP. Using both the difference-of-
spectra (DoS) and multipole decomposition analysis (MDA) techniques, it was
shown that the ISGMR strengths and energies in 90,92Zr and 92Mo are
practically identical gupta2016 . The study was expanded to include 94,96Mo
howard2019 , which resulted in the same conclusion based on moment ratios and
extracted scaling-model incompressibilities.
Different trends for $K_{A}$ were also observed for the Ca isotope chain.
Results from ISGMR studies at TAMU for 40,44,48Ca youngblood2001 ; lui2011 ;
button2017 showed an increase of the ISGMR centroid energy with increasing
mass number button2017 . In contrast, Howard et al. howard2020 used the
experimental facilities at RCNP to study the evolution of the ISGMR strength
in 40,42,44,48Ca and found the generally expected trend of a decrease of the
ISGMR centroid energy with increasing mass number. Recently, Olorunfunmi et
al. sunday presented a third dataset for the Ca isotope chain, obtained at
iThemba LABS, and demonstrated that the moment ratios extracted from the three
facilities agree when considering an excitation-energy range covering the
resonance peak. It was observed that different trends in the nuclear
incompressibility for these nuclei are most likely caused by contributions to
the IS0 strength outside of the region covering the resonance peak, and in
particular for high excitation energies.
Much of the discussion regarding the source of the different trends in $K_{A}$
centers around the different background subtraction methods employed by TAMU
and RCNP groups howard2020 ; gupta2016 ; GC_review2018 prior to the MDA of
the excitation-energy spectra. The background subtraction methodology used in
the TAMU experiments makes assumptions about both the instrumental background
and the physical continuum youngblood2002isoscalar . On the other hand,
experimental methods employed at RCNP eliminate the instrumental background
from the excitation-energy spectra, but contributions from the physical
continuum are not distinguished from the IS0 strength in the analysis
GC_review2018 . In both the Ca and Zr/Mo cases discussed above, comparisons in
literature were only made on the basis of trends observed in $K_{A}$, which is
a single number obtained from the ratio of moments of the IS0 strength
distribution, even though the distribution itself can in some cases display
quite a variation in structural character between different studies. The existence of such
differences led Colo et al. colo2020 to conclude that one should rather use
the overall shape of the strength distributions in the analysis of the ISGMR
instead of the extracted values of the ISGMR energy centroids. It is,
therefore, very important to be aware of the structural variations in the IS0
strength distributions across all available datasets before commenting on the
value of, as well as possible trends in $K_{A}$.
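Since $K_{A}$ boils down to a single moment ratio of the strength distribution, the point can be illustrated numerically. The sketch below (Python, with purely illustrative Gaussian shapes standing in for measured strengths) constructs two IS0 distributions of visibly different structural character whose centroids $m_{1}/m_{0}$ are nonetheless identical:

```python
import numpy as np

def moment(E, S, k):
    """k-th moment m_k = integral of E^k S(E) dE on a uniform energy grid."""
    dE = E[1] - E[0]
    return np.sum(E**k * S) * dE

E = np.linspace(9.0, 25.0, 321)  # excitation-energy grid (MeV)

# Two toy IS0 strength shapes: a single peak and a double-humped distribution.
single = np.exp(-0.5 * ((E - 16.5) / 1.5) ** 2)
double = (np.exp(-0.5 * ((E - 14.5) / 1.2) ** 2)
          + np.exp(-0.5 * ((E - 18.5) / 1.2) ** 2))

for name, S in [("single peak", single), ("double hump", double)]:
    centroid = moment(E, S, 1) / moment(E, S, 0)  # m1/m0, the quantity behind K_A
    print(f"{name}: centroid m1/m0 = {centroid:.2f} MeV")
```

Both shapes yield a centroid near 16.5 MeV, so a centroid comparison alone cannot distinguish them, which is precisely the argument for comparing full strength distributions.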
Here, we aim to provide a third measurement of the shape of the IS0 strength
distribution in a few medium-to-heavy nuclei in order to extend the
comparisons provided in Refs. Armand_PRC2022 ; sunday for lighter nuclei.
## II Experimental details
The experimental procedure followed in this study is fully described elsewhere
sunday ; Armand_PRC2022 . As such, only salient details are provided here. The
experiment was performed at the Separated Sector Cyclotron (SSC) facility of
the iThemba Laboratory for Accelerator Based Sciences (iThemba LABS) in South
Africa. A beam of $196$-MeV $\alpha$ particles was inelastically scattered off
self-supporting 58Ni, 90Zr, 120Sn and 208Pb targets with areal densities
ranging from $0.7$ mg/cm2 to $1.43$ mg/cm2 and isotopically enriched to values
greater than $96\%$. The reaction products were momentum analyzed by the
K$600$ magnetic spectrometer nev11 . The horizontal and vertical positions of
the scattered $\alpha$ particles in the focal plane of the spectrometer were
measured using two multiwire drift chambers. Energy deposition in the plastic
scintillators in the focal plane as well as time-of-flight measurements
relative to the cyclotron radio frequency were used for particle
identification.
Spectra were acquired with the spectrometer positioned at angles
$\theta_{\text{K600}}=0^{\circ}$ and $4^{\circ}$. In the former, scattering
angles of $\theta_{\text{Lab}}=0^{\circ}\pm 1.91^{\circ}$ and in the latter,
scattering angles from $\theta_{\text{Lab}}=2^{\circ}-6^{\circ}$ were covered
by a circular spectrometer aperture. The procedures for particle
identification, calibration of the measured focal-plane angles, as well as
background subtraction followed those described in Ref. Armand_PRC2022 . The
momentum calibration was based on well-known states in 24Mg kaw2013 ; bor1981
, and an energy resolution of $\approx 70$ keV (full width at half maximum,
FWHM) was obtained. Figure 1 shows the inelastic scattering cross sections
extracted at $\theta_{\text{K600}}=0^{\circ}$ for 58Ni, 90Zr, 120Sn and 208Pb.
Fine structure is clearly observed in the ISGMR region. The cross sections
shown in Fig. 2 are for angle ranges that represent a subset of the accessible
angle range for the $\theta_{\text{K600}}=4^{\circ}$ measurements, as required
to extract monopole strengths. See Sect. III.1 for details.
Figure 1: Double-differential cross sections (binned to $30$ keV) for the
($\alpha,\alpha^{\prime}$) reaction at $E_{\alpha}=196$ MeV on
208Pb,120Sn,90Zr, and 58Ni for the angular range
$\theta_{\text{Lab}}=0^{\circ}-1.91^{\circ}$.
Figure 2: Same as Fig. 1, but for angle cuts as implemented in the
$\theta_{\text{K600}}=4^{\circ}$ dataset as summarized in Table 3.
While the 58Ni, 90Zr and 120Sn target foils were free of contaminants, the
208Pb target showed signs of surface oxidation. The $12.049$ MeV
$J^{\pi}=0^{+}$ and $11.520$ MeV $J^{\pi}=2^{+}$ states of 16O were observed
in the 208Pb spectra measured at $0^{\circ}$ and $4^{\circ}$, respectively. While the
identifiable peaks of 16O sit at the lower energy side of the excitation-
energy spectrum, 16O also contributes to the background underneath the ISGMR
region. Therefore, it is essential to remove the contribution from 16O across
the full excitation energy range prior to the calculation of the differential
cross sections. Accurate 16O($\alpha,\alpha^{\prime}$) spectra at $0^{\circ}$
and $4^{\circ}$ were produced as follows. Inelastic $\alpha$-particle scattering data
from Mylar ($\textrm{C}_{10}\textrm{H}_{8}\textrm{O}_{4}$) and
${}^{\text{nat}}\textrm{C}$ targets were acquired at $\theta_{\text{Lab}}=0^{\circ}$ and $4^{\circ}$.
An excitation-energy spectrum for the
${}^{16}\textrm{O}$($\alpha,\alpha^{\prime}$) reaction at each angle was then
produced by subtracting the 12C data from the Mylar spectrum, normalized to
the 9.641 MeV $J^{\pi}=3^{-}$ state and the broad resonance strength in this
energy region. Contributions of the ${}^{16}\textrm{O}$ contaminant to the
excitation-energy spectrum of 208Pb were then removed by subtracting a
normalized ${}^{16}\textrm{O}$ spectrum. In the case of the $0^{\circ}$ dataset the
normalization was based on the integrated yield of the ${}^{16}\textrm{O}$,
12.049 MeV, $J^{\pi}=0^{+}$ peak and for the $4^{\circ}$ dataset on the
${}^{16}\textrm{O}$, 11.520 MeV, $J^{\pi}=2^{+}$ peak.
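The contaminant-removal steps above amount to repeated peak-normalized spectrum subtraction. A minimal sketch of that bookkeeping, with hypothetical spectra and with normalization windows chosen only for illustration (the actual analysis windows are not given in the text):

```python
import numpy as np

def window_yield(E, counts, lo, hi):
    """Integrated yield of a spectrum inside an excitation-energy window (MeV)."""
    sel = (E >= lo) & (E < hi)
    return counts[sel].sum()

def subtract_normalized(E, spec, contaminant, lo, hi):
    """Subtract `contaminant` from `spec`, scaled so that its yield inside the
    normalization window [lo, hi) matches the yield found in `spec` there."""
    scale = window_yield(E, spec, lo, hi) / window_yield(E, contaminant, lo, hi)
    return spec - scale * contaminant

# Hypothetical usage mirroring the text: derive the 16O spectrum from Mylar
# minus carbon (normalized near the 9.641 MeV 3- state of 12C), then clean the
# 208Pb spectrum (normalized near the 12.049 MeV 0+ state of 16O):
# o16      = subtract_normalized(E, mylar, carbon, 9.4, 9.9)
# pb_clean = subtract_normalized(E, pb208, o16, 11.9, 12.2)
```

The same two-line recipe applies at both spectrometer angles; only the reference peak used for the normalization changes.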
## III Analysis
### III.1 DoS technique
The MDA technique was employed in numerous studies to extract multipole
strength distributions in nuclei, including the IS0 strength distributions
GC_review2018 ; gupta2018isoscalar . However, due to the limited number of
angular data points in this study, the IS0 strength distributions were
determined by means of the DoS technique DoSpaper . This relies on the
assumption that the sum of all multipolarity contributions $L>0$ is
essentially the same close to $0^{\circ}$ as at the first minimum of the $L=0$
angular distribution, and can be removed by subtraction of the spectra
measured at the two scattering angles. The method, therefore, requires the
determination of suitable angle cuts for the different nuclei from the
measurement at $\theta_{\text{Lab}}=2^{\circ}-6^{\circ}$, which can be
assessed from distorted-wave Born approximation (DWBA) calculations.
The DoS method requires the prior subtraction of contributions to the spectra
due to relativistic Coulomb excitation of the isovector giant dipole resonance
(IVGDR). The Coulomb cross sections are strongly forward peaked and thus
violate the basic DoS assumption. These contributions are determined using
photonuclear cross sections in conjunction with DWBA calculations based on the
Goldhaber-Teller model satchler1987isospin to estimate the IVGDR differential
cross sections as a function of excitation energy. Lorentzian parameters for
the photonuclear cross sections (relative strength $\sigma_{m}$, peak energy
$E_{m}^{\text{photo}}$ and width $\Gamma^{\text{photo}}$) used in the present
study were taken from Ref. plujko and are presented in Table 1.
Table 1: Lorentzian parameters of the photonuclear cross sections from Ref. plujko used for the estimation of Coulomb cross sections at $E_{\alpha}=196$ MeV. Nucleus | $\sigma_{m}$ (mb) | $E_{m}^{\text{photo}}$ (MeV) | $\Gamma^{\text{photo}}$ (MeV)
---|---|---|---
58Ni | $0.294$ | $18.26$ | $6.95$
90Zr | $0.861$ | $16.84$ | $3.99$
120Sn | $1.219$ | $15.40$ | $4.86$
208Pb | $1.121$ | $13.46$ | $3.58$
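The text does not spell out the functional form used for the photonuclear cross sections; assuming the standard Lorentzian parameterization of GDR photoabsorption (the formula below is that assumption, not a quotation from the paper), the Table 1 parameters can be turned into cross-section curves directly:

```python
import numpy as np

def lorentzian_xs(E, sigma_m, E_m, Gamma):
    """Standard GDR Lorentzian (all energies in MeV):
    sigma(E) = sigma_m * (E*Gamma)^2 / ((E^2 - E_m^2)^2 + (E*Gamma)^2)."""
    return sigma_m * (E * Gamma) ** 2 / ((E**2 - E_m**2) ** 2 + (E * Gamma) ** 2)

# Table 1 parameters (sigma_m, E_m^photo, Gamma^photo) for the nuclei studied.
params = {
    "58Ni": (0.294, 18.26, 6.95),
    "90Zr": (0.861, 16.84, 3.99),
    "120Sn": (1.219, 15.40, 4.86),
    "208Pb": (1.121, 13.46, 3.58),
}

E = np.linspace(5.0, 30.0, 501)
for nucleus, (sm, em, g) in params.items():
    xs = lorentzian_xs(E, sm, em, g)
    print(f"{nucleus}: peak sigma = {xs.max():.3f} at E = {E[xs.argmax()]:.2f} MeV")
```

With this form the curve peaks at $E=E_{m}^{\text{photo}}$ with value $\sigma_{m}$, which is why $\sigma_{m}$ is quoted as a (relative) peak strength in Table 1.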
In the present study, the DWBA calculations were performed according to the
method described in Ref. satchler1997missing . A density-dependent single-
folding model for the real part of the potential $U(r)$, obtained with a
Gaussian $\alpha$-nucleon potential, and a phenomenological Woods-Saxon
potential for the imaginary term of $U(r)$ were used, so that the
$\alpha$-nucleus potential can be written as
$U(r)=V_{\text{fold}}(r)+i\dfrac{W}{1+\exp\left[\left(r-R_{\text{I}}\right)/a_{\text{I}}\right]}\,,$
(1)
with radius
$R_{\text{I}}=r_{\text{0I}}(A_{\text{p}}^{1/3}+A_{\text{t}}^{1/3})$ and
diffuseness $a_{\text{I}}$. The subscripts p and t refer to projectile and
target, respectively, and $A$ denotes the mass number. The potential
$V_{\text{fold}}(r)$ is obtained by folding the ground-state density with a
density-dependent $\alpha$-nucleon interaction
$V_{\text{fold}}(r)=-V\int
d^{3}r^{\prime}\rho(r^{\prime})\left[1-\beta\rho(r^{\prime})^{2/3}\right]\exp(-\mathit{z}^{2}/t^{2})~{},$
(2)
where $\mathit{z}=|r-r^{\prime}|$ is the distance between the centre of mass
of the $\alpha$ particle and a target nucleon, and $\rho(r^{\prime})$ is the
ground-state density of the target nucleus at the position $r^{\prime}$ of the
target nucleon. The parameters $\beta=1.9$ fm2 and range $t=1.88$ fm were
taken from Ref. satchler1997missing . The ground-state density $\rho(r)$ of
the target nucleus at the position $r$ is given by
$\rho(r)=\dfrac{\rho_{0}}{1+\exp\left(\frac{r-c}{a}\right)}~{},$ (3)
where the Fermi-distribution parameters $c$ and $a$ describe the half-density
radius and the diffuseness, respectively; their numerical values were taken
from Ref. fricke1995nuclear .
The calculations were carried out using the computer code PTOLEMY Mac1978 ;
rhoades1980techniques2 . Optical model parameters used in the DWBA
calculations were taken for each nucleus from the studies of the TAMU group on
58Ni, 90Zr and 116Sn. Here, the 116Sn nucleus is considered because no result
has been published by the group on 120Sn. For 208Pb, elastic scattering cross
sections calculated with the parameters quoted in Ref. youngblood2004isoscalar
could not reproduce the experimental data of Ref. clark2001isoscalar from
which they were said to be derived. Thus, we have performed an independent fit
guided by the systematic mass dependence (decrease of real and imaginary
depth, increase of imaginary radius) observed for the other nuclei. All
parameters are shown in Table 2.
Table 2: Optical model parameters used in the present study. Nucleus | $V$ (MeV) | $W$ (MeV) | $r_{\text{0I}}$ (fm) | $a_{\text{I}}$ (fm) | Refs.
---|---|---|---|---|---
58Ni | $41.19$ | $40.39$ | $0.821$ | $0.974$ | lui2006
90Zr | $40.02$ | $40.9$ | $0.786$ | $1.242$ | krishichayan2015g
120Sn | $36.7$ | $23.94$ | $0.998$ | $1.047$ | youngblood2004isoscalar
208Pb | $33.3$ | $31.4$ | $1.032$ | $1.057$ | See text
Table 3: Angle cuts implemented in the $\theta_{\text{K600}}=4^{\circ}$ dataset to define the angular region around the first minimum of the $L=0$ component ($\theta^{L=0}_{\text{c.m.}}$). Nucleus | $\theta_{\text{Lab}}$ | $\theta_{\text{c.m.}}$ | $\theta^{L=0}_{\text{c.m.}}$
---|---|---|---
58Ni | — | — | —
90Zr | — | — | —
120Sn | — | — | —
208Pb | — | — | —
Consider as an example the DWBA results for multipoles $L=0-3$ as well as the
IVGDR cross sections for the case of 120Sn at an excitation energy of 16.5
MeV, as shown in Fig. 3. The theoretical angular distributions for excitation
of the isoscalar modes are normalized to the corresponding strengths deduced
in Ref. li2010isoscalar . An angular region around the first minimum of the
$L=0$ angular distribution, indicated by the yellow area, is chosen for the
subtraction procedure from the zero degree spectrum. The angular ranges chosen
for the nuclei studied here are summarized in Table 3.
Figure 3: DWBA calculations of the differential cross sections for the
120Sn($\alpha$,$\alpha^{\prime}$) reaction at $E_{\alpha}=196$ MeV for various
isoscalar electric multipoles. The calculations were done for an excitation
energy of $16.5$ MeV, representative of the maximum of the IS0 strength
distributions, and scaled using fraction energy-weighted sum rule (FEWSR)
strengths from Ref. li2010isoscalar . The black dashed line represents the sum
of all multipoles except $L=0$. Figure 4: Double-differential cross sections
for 120Sn($\alpha$,$\alpha^{\prime}$) at $E_{\alpha}=196$ MeV. Top: The blue
and red spectra represent the data acquired at
$$$\leq\theta_{\text{Lab}}\leq$$$ and at $$$\leq\theta_{\text{Lab}}\leq$$$,
respectively. The green spectrum shows the latter spectrum corrected with
excitation-energy-dependent factors as outlined in Fig. 5 and in the text. The
IVGDR contributions (shown in dark brown and cyan) were subtracted from the
blue and red spectra, respectively, prior to the application of the DoS
technique. Bottom: The magenta (black) spectrum represents the difference
spectra when applying the DoS technique with (without) the correction factors
shown in Fig. 5.
A comparison of the spectra extracted from the $0^{\circ}$ data and the angle
cut around the minimum of the $L=0$ angular distribution for 120Sn is
presented in the upper part of Fig. 4 as blue and red histograms,
respectively. The figure also shows the cross sections due to Coulomb
excitation of the IVGDR for the two angle settings (brown and cyan lines,
respectively). They are small but non-negligible. The black histogram in the
lower part of Fig. 4 is the difference spectrum before the correction
procedure described in Sect. III.2.
### III.2 DoS with excitation-energy-dependent corrections
The central premise of the DoS technique is that the sum of the cross sections
of all multipoles $L>0$ is constant at small scattering angles including the
region of the first minimum of the $L=0$ angular distribution GC_review2018 ;
Armand_PRC2022 ; DoSpaper . Hence, the subtraction of the inelastic spectrum
at the angle where $L=0$ is at a minimum from the $0^{\circ}$ spectrum is
assumed to represent essentially the IS0 component excited in
$\alpha$-inelastic scattering close to $0^{\circ}$. However, as was
demonstrated in Ref. sunday , the cross sections from the small-angle
measurement can deviate from the sum of all $L>0$ multipoles in the $0^{\circ}$
measurement. This is also clear from Fig. 3 for the case of 120Sn. As such, an
excitation-energy-dependent correction factor (CF) to be applied to the small-
angle spectrum prior to the application of the DoS technique was introduced,
and is written as:
$\text{CF}(E_{\text{x}})=\dfrac{\sum_{L=1,2,3}\frac{d\sigma^{\text{DWBA}}}{d\Omega}(E_{\text{x}},\theta^{\text{av.}}_{\text{c.m.}})\Big{|}_{L}}{\sum_{L=0,1,2,3}\frac{d\sigma^{\text{DWBA}}}{d\Omega}(E_{\text{x}},\theta^{L=0}_{\text{c.m.}})\Big{|}_{L}}~{},$
(4)
where $\theta^{\text{av.}}_{\text{c.m.}}$ represents the angle corresponding
to the average cross sections between
$\theta_{\text{c.m.}}=0^{\circ}-2^{\circ}$, and $\theta^{L=0}_{\text{c.m.}}$
is given in the rightmost column of Table 3. The method relies on the
availability of information about the relative strengths of the $L>0$
multipoles from previous measurements on the same nucleus. Although this makes
the procedure model dependent, the results of Ref. sunday indicate that the
dependence on the chosen inputs is weak.
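Equation (4) and the subsequent corrected subtraction reduce to a few array operations once the DWBA cross sections are tabulated on a common excitation-energy grid. A minimal sketch with hypothetical inputs (the dictionaries below stand in for actual DWBA output):

```python
import numpy as np

def correction_factor(dwba_zero_deg, dwba_small_angle):
    """Eq. (4): CF(Ex) = sum_{L=1,2,3} sigma_L(Ex, theta_av)
                         / sum_{L=0..3} sigma_L(Ex, theta_{L=0}).

    `dwba_zero_deg` maps L to a cross-section array on a common Ex grid at the
    averaged 0-2 degree angle; `dwba_small_angle` likewise at the L=0 minimum.
    """
    num = sum(dwba_zero_deg[L] for L in (1, 2, 3))
    den = sum(dwba_small_angle[L] for L in (0, 1, 2, 3))
    return num / den

def dos_is0(spec0, spec4, cf):
    """Corrected difference-of-spectra estimate of the IS0 cross section
    (both spectra already have the IVGDR Coulomb contribution removed)."""
    return spec0 - cf * spec4
```

In the idealized limit where the $L>0$ sum is truly angle-independent and $L=0$ vanishes exactly at its first minimum, CF is 1 and the corrected DoS spectrum recovers the monopole component of the zero-degree spectrum.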
Figure 5: Outline of the procedure to establish an excitation-energy-dependent
correction factor for the small-angle cross sections, taking 120Sn as an
example. (a) FEWSR results for different multipoles from RCNP li2010isoscalar
. Corresponding DWBA cross sections at $196$ MeV representative of the zero-
degree and the small-angle measurements are shown in Panels (b) and (c),
respectively. Panel (d) shows correction factors (black dot-dashed line)
determined by the ratio of the $L=1+2+3$ results in panel (b) to the
$L=0+1+2+3$ results in panel (c). The red dot-dashed line is the result when
$L=3$ is excluded from the procedure.
The method is illustrated again for the case of 120Sn in Fig. 5. Isoscalar
$L=0-3$ strength distributions given in Ref. li2010isoscalar in terms of
FEWSR as a function of excitation energy, shown in Fig. 5(a), are used as
inputs. The corresponding DWBA cross sections for $L=1-3$, averaged over the
two angular regions of the iThemba LABS experiment, are shown in Figs. 5(b)
and (c), respectively. The summed cross sections are shown as black solid and
dashed lines. We note that Fig. 5(c) also contains an $L=0$ contribution, as
required per Eq. (4). However, this contribution is negligibly small (less
than $0.1$% of the summed cross section), which is to be expected as the
relevant angle was specifically chosen to be around the minimum of the $L=0$
distribution. Finally, Fig. 5(d) shows the correction factor as a function of
excitation energy, determined from the ratio of the black solid and dashed
curves in Figs. 5(b) and (c), respectively. Application of the correction
factors on the small-angle spectrum is shown in the top panel of Fig. 4 as a
green histogram, and the modified DoS spectrum appears in the bottom panel of
Fig. 4 as a magenta histogram. One can see that the correction is particularly
strong on the low-energy side of the ISGMR.
The correction factors obtained by applying the same procedure to the other
nuclei measured in this study are summarized in Fig. 6. Unlike the case of
120Sn, where only a single previous measurement was reported, here we have two
(58Ni and 90Zr) or even three (208Pb) data sets as input, representing both
RCNP (solid and dashed lines) and TAMU (dash-dotted lines). For the case of
90Zr, correction factors were also determined based on the FEWSR results for
the neighboring 94Mo nucleus howardphd assuming that the contributions from
different multipolarities change very slowly as a function of nuclear mass
number. Differences between the deduced correction factors are sizable for
90Zr and 208Pb when comparing TAMU and RCNP results. However, the two
different experimental results available for both nuclei from RCNP experiments
lead to very similar factors. Thus, only the corrections obtained with Refs.
howardphd (90Zr) and patelphd (208Pb) were used to create the RCNP corrected
spectra presented and discussed in the next section.
Figure 6: Correction factors extracted using FEWSR from RCNP and TAMU
datasets, as discussed in the text. For 58Ni (top panel), data were taken from
nayak2006 (solid line) and lui2006 (dashed line); for 90Zr (middle panel)
from gupta2018isoscalar (solid line), howardphd (dashed line) and
krishichayan2015g (dash-dotted line); and for 208Pb (bottom panel) from
patelphd (solid line), uchida2004 (dashed line) and youngblood2004isoscalar
(dash-dotted line).
We have investigated to what extent these correction-factor differences may
depend on the assumptions in the MDA results of the different experiments,
specifically regarding the maximum value of $L$ for which FEWSR results are
determined. This question is discussed in Ref. nayak2006 for the case of
58Ni. It was found that a variation of $L_{\rm max}$ from $6$ to $8$ had no
impact on the FEWSR strengths for $L=0-3$. A similar conclusion was drawn by
Gupta et al. gupta2018isoscalar for their measurements of $A\approx 90$
nuclei including 90Zr. Furthermore, in some of the previously published
results, information on the $L=3$ component is missing. In general, one
expects it to have a minor impact on the correction factors. The main part of
the octupole strength is of a $3\hbar\omega$ nature and therefore expected at
high excitation energies, while its $1\hbar\omega$ component fujita1985 is
located at excitation energies below the ISGMR. Nevertheless, we have tested
the influence for the case of 120Sn. The correction factors obtained by
including only $L=1+2$ components are displayed in Fig. 5(d) as a red dot-dashed
line. They fully coincide with the correction factors obtained including $L=3$
cross sections. Information on the octupole strength from the investigation of
90Zr at RCNP gupta2018isoscalar is also lacking. The impact of the $L=3$
cross sections for the correction factors in this case was estimated from the
detailed information on the multipole decomposition analysis provided by Ref.
howardphd for the neighboring nucleus 94Mo and again found to be negligible.
## IV Results and discussion
The corrected difference cross sections can be converted to fractions
$a_{0}(E_{\text{x}})$ of the isoscalar monopole EWSR by comparing with DWBA
calculations assuming $100\%$ EWSR, as shown in Ref. sunday . The IS0 strength
was determined using a $1$ MeV bin size for all nuclei except 58Ni, which was
binned to $800$ keV, to facilitate direct comparison with previous
experiments, as shown in Figs. 7-10, using the following equation
GC_review2018 :
$S_{0}(E_{\text{x}})=\frac{\text{EWSR(IS0)}}{E_{\text{x}}}\,a_{0}(E_{\text{x}})=\frac{2\hbar^{2}A\langle r^{2}\rangle}{mE_{\text{x}}}\,a_{0}(E_{\text{x}})\,.$ (5)
Here, $m$ represents the nucleon mass, $E_{\text{x}}$ is the excitation
energy, and $\langle r^{2}\rangle$ is the second moment of the ground-state
density. The values for $\langle r^{2}\rangle$ for 58Ni, 90Zr, 120Sn, and
208Pb were derived from Ref. fricke1995nuclear and found to be $14.3$,
$18.2$, $21.7$, and $30.3$ fm$^{2}$, respectively. Note that all results from TAMU
were originally presented as fractions $a_{0}(E_{\text{x}})$, and were
therefore converted to IS0 strength following the same procedure.
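Eq. (5) can be evaluated numerically as below. The function name and the constants are assumptions of this sketch (not quoted from the text): $\hbar c = 197.327$ MeV fm and an average nucleon rest energy of $mc^{2}\approx 938.9$ MeV, which give $\hbar^{2}/m\approx 41.5$ MeV fm$^{2}$ and hence, for 120Sn ($A=120$, $\langle r^{2}\rangle=21.7$ fm$^{2}$), EWSR(IS0) $\approx 2.2\times 10^{5}$ MeV fm$^{4}$.

```python
import numpy as np

HBARC = 197.327   # MeV*fm (assumed constant)
MC2 = 938.9       # MeV, average nucleon rest energy (assumed constant)

def is0_strength(Ex, a0, A, r2):
    """Eq. (5): S0(Ex) = 2*hbar^2*A*<r^2> / (m*Ex) * a0(Ex).

    Ex : excitation energy in MeV (scalar or array)
    a0 : fraction of the IS0 EWSR exhausted in the corresponding bin(s)
    A  : mass number
    r2 : second moment <r^2> of the ground-state density in fm^2
    Returns S0 in fm^4/MeV.
    """
    ewsr = 2.0 * HBARC**2 * A * r2 / MC2   # EWSR(IS0) in MeV*fm^4
    return ewsr / np.asarray(Ex, dtype=float) * a0
```

Since EWSR(IS0) is a constant for a given nucleus, $S_{0}$ scales as $a_{0}/E_{\text{x}}$, which is also how the TAMU fractions were converted to IS0 strength.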
Figure 7: IS0 strength distributions in 58Ni. The present iThemba LABS data
are shown as black histograms. Also shown are the ($\alpha,\alpha^{\prime}$)
data from RCNP nayak2006 (blue filled circles) and TAMU lui2006 (red filled
circles) groups. The top panel shows results when FEWSR from RCNP are used to
correct the small-angle spectrum while the bottom panel displays results when
FEWSR from TAMU are used to correct the small-angle spectrum.
The IS0 strength distributions for 58Ni are presented in Fig. 7, where the
iThemba LABS results shown in the upper and lower panels were extracted using
correction factors derived from RCNP nayak2006 and TAMU lui2006 experiments,
respectively. Here, as is the case for the other results from iThemba LABS,
the errors associated with the strength distributions include both systematic
and statistical uncertainties. The IS0 strength distributions from the two
previous experiments agree within error bars. The iThemba LABS strength
distribution is in reasonable agreement with the two previous datasets,
regardless of the choice of correction factor. Slightly weaker strengths are
seen in the lower excitation-energy region $9\leq E_{\text{x}}\leq 16$ MeV
when utilizing the RCNP-based correction factor. On the other hand, using the
TAMU-based correction factor, the distribution is somewhat stronger than the
TAMU and RCNP distributions in the high excitation-energy region $20\leq
E_{\text{x}}\leq 25$ MeV.
Figure 8: Same as Fig. 7 but for 90Zr. Also shown are the
($\alpha,\alpha^{\prime}$) data from RCNP gupta2018isoscalar (blue filled
circles) and TAMU krishichayan2015g (red filled circles).
The case for 90Zr is summarized in Fig. 8. The original controversy in the
mass $90$ region was attributed to the high excitation-energy tail of the IS0
strength that is substantially larger in 92Zr and 92Mo in the TAMU experiment
than for the other Zr and Mo isotopes krishichayan2015g . However, here we
clearly see that there are also significant structural differences at the peak
of the resonance between data from RCNP gupta2018isoscalar and TAMU
krishichayan2015g . The IS0 strength from the present experiment utilizing the
RCNP correction factors is in very good agreement with the results from RCNP.
On the other hand, when the correction factors are based on the results from
TAMU, the centroid of the IS0 distribution shifts to a lower excitation
energy. While the absolute value of the strength at its peak undergoes a
noteworthy increase, the overall agreement with previous datasets
deteriorates.
Figure 9: Same as Fig. 7 but for 120Sn. Also shown are the
($\alpha,\alpha^{\prime}$) data from RCNP li2010isoscalar (blue filled
circles). Here, only FEWSR from RCNP are used to correct the small-angle
spectrum.
The comparison of the IS0 strength distribution in 120Sn from the RCNP
experiment li2010isoscalar with the present analysis is presented in Fig. 9.
There is good agreement for the main part of the ISGMR up to about $18$ MeV
and at higher excitation energies. Between $19$ and $22$ MeV, the present
results indicate a larger strength, just outside the $1\sigma$ error bars.
Figure 10: Same as Fig. 7 but for 208Pb. Also shown are the ($\alpha,\alpha^{\prime}$) data from RCNP uchida2004 ; patel2013testing (blue and dark green filled circles) and TAMU youngblood2004isoscalar (red filled circles) groups.
Table 4: Parameters extracted from the ISGMR strength distributions from previous ($\alpha,\alpha^{\prime}$) measurements through Lorentzian or Gaussian peak fitting as well as moment-ratio calculations, established over different excitation-energy ranges.
Nucleus | Centroid (MeV) | Width (MeV) | $m_{1}/m_{0}$ (MeV) | $\sqrt{m_{1}/m_{-1}}$ (MeV) | $\sqrt{m_{3}/m_{1}}$ (MeV) | Energy range (MeV) | Reference
---|---|---|---|---|---|---|---
58Ni | | | $19.9^{+0.7}_{-0.8}$ | | | $10.5-32.5$ | RCNP nayak2006
58Ni | $18.43\pm 0.15$ (a) | $7.41\pm 0.13$ (a) | $19.20^{+0.44}_{-0.19}$ | $18.70^{+0.34}_{-0.17}$ | $20.81^{+0.90}_{-0.28}$ | $10-35$ | TAMU lui2006
90Zr | $16.76\pm 0.12$ (b) | $4.96^{+0.31}_{-0.32}$ (b) | $19.17^{+0.21}_{-0.20}$ | $18.65\pm 0.17$ | $20.87^{+0.34}_{-0.33}$ | $10-30$ | RCNP gupta2018isoscalar
90Zr | $17.1$ (a) | $4.4$ (a) | $17.88^{+0.13}_{-0.11}$ | $17.58^{+0.06}_{-0.04}$ | $18.86^{+0.23}_{-0.14}$ | $10-35$ | TAMU krishichayan2015g
120Sn | $15.4\pm 0.2$ (b) | $4.9\pm 0.5$ (b) | $15.7\pm 0.1$ | $15.5\pm 0.1$ | $16.2\pm 0.2$ | $10.5-20.5$ | RCNP li2010isoscalar
208Pb | $13.7\pm 0.1$ (b) | $3.3\pm 0.2$ (b) | | | | $9.5-19.5$ | RCNP patel2013testing
208Pb | $13.4\pm 0.2$ (b) | $4.0\pm 0.4$ (b) | | | | $8-33$ | RCNP uchida2004
208Pb | | $2.88\pm 0.20$ (c) | $13.96\pm 0.20$ | $13.5\pm 0.1$ | | $10-35$ | TAMU youngblood2004isoscalar
(a) Peak positions and widths (FWHM) from Gaussian fits. (b) Peak positions and widths (FWHM) from Lorentzian fits. (c) The equivalent Gaussian FWHM.
Table 5: Lorentzian parameters and moment ratios for the ISGMR strength distributions in 58Ni, 90Zr, 120Sn, and 208Pb, where $m_{k}=\int E_{\text{x}}^{k}S(E_{\text{x}})dE_{\text{x}}$ is the $k$th moment of the strength distribution for the excitation-energy range $10-24.5$ MeV ($10-17$ MeV for 208Pb) from the present work, compared to values extracted for the TAMU and RCNP data sets over the same excitation-energy range.
Nucleus | Centroid (MeV) | Width (MeV) | $m_{1}/m_{0}$ (MeV) | $\sqrt{m_{1}/m_{-1}}$ (MeV) | $\sqrt{m_{3}/m_{1}}$ (MeV) | Reference
---|---|---|---|---|---|---
58Ni | $17.8\pm 0.4$ | $5.4\pm 0.4$ | $18.40\pm 0.15$ | $18.14\pm 0.14$ | $19.12\pm 0.17$ | Present, CF from nayak2006
58Ni | $17.8\pm 0.4$ | $5.4\pm 0.4$ | $18.22\pm 0.13$ | $17.94\pm 0.13$ | $18.98\pm 0.15$ | Present, CF from lui2006
58Ni | $17.9\pm 0.3$ | $5.3\pm 0.3$ | $18.15\pm 0.11$ | $17.85\pm 0.11$ | $19.00\pm 0.12$ | RCNP nayak2006
58Ni | $17.9\pm 0.3$ | $5.3\pm 0.3$ | $18.14\pm 0.06$ | $17.81\pm 0.06$ | $19.00\pm 0.06$ | TAMU lui2006
90Zr | $16.7\pm 0.2$ | $4.4\pm 0.2$ | $17.06\pm 0.35$ | $16.80\pm 0.32$ | $17.84\pm 0.48$ | Present, CF from howardphd
90Zr | $16.2\pm 0.2$ | $4.2\pm 0.2$ | $16.02\pm 0.36$ | $15.79\pm 0.32$ | $16.69\pm 0.57$ | Present, CF from krishichayan2015g
90Zr | $16.8\pm 0.2$ | $4.8\pm 0.3$ | $17.59\pm 0.11$ | $17.31\pm 0.11$ | $18.41\pm 0.11$ | RCNP gupta2018isoscalar
90Zr | $16.9\pm 0.2$ | $3.9\pm 0.3$ | $17.23\pm 0.03$ | $17.03\pm 0.03$ | $17.81\pm 0.04$ | TAMU krishichayan2015g
120Sn | $15.5\pm 0.4$ | $5.6\pm 0.4$ | $16.24\pm 0.39$ | $15.92\pm 0.35$ | $17.21\pm 0.54$ | Present, CF from li2010isoscalar
120Sn | $15.4\pm 0.2$ | $4.6\pm 0.3$ | $16.54\pm 0.23$ | $16.20\pm 0.22$ | $17.61\pm 0.25$ | RCNP li2010isoscalar
208Pb | $13.8\pm 0.3$ | $3.1\pm 0.2$ | $13.39\pm 0.27$ | $13.25\pm 0.26$ | $13.80\pm 0.29$ | Present, CF from patel2013testing
208Pb | $13.3\pm 0.3$ | $3.2\pm 0.3$ | $12.44\pm 0.45$ | $12.29\pm 0.42$ | $12.90\pm 0.57$ | Present, CF from youngblood2004isoscalar
208Pb | $13.7\pm 0.2$ | $3.4\pm 0.2$ | $13.47\pm 0.22$ | $13.32\pm 0.22$ | $13.86\pm 0.22$ | RCNP patel2013testing
208Pb | $13.4\pm 0.2$ | $4.0\pm 0.3$ | $13.78\pm 0.29$ | $13.59\pm 0.27$ | $14.32\pm 0.35$ | RCNP uchida2004
208Pb | $13.9\pm 0.3$ | $2.3\pm 0.4$ | $13.64\pm 0.08$ | $13.56\pm 0.08$ | $13.85\pm 0.07$ | TAMU youngblood2004isoscalar
For the case of 208Pb, results from three different previous experiments
patel2013testing ; uchida2004 ; youngblood2004isoscalar are available. The
IS0 strength distributions from these studies are compared with one another in
Fig. 10. Upon inspection of the different strength distributions, it is clear
that there are distinct structural differences between the different datasets.
Youngblood et al. youngblood2004isoscalar produced a very narrow IS0
distribution that is not nearly as asymmetric as the results from both Uchida
et al. uchida2004 and Patel et al. patel2013testing . The TAMU study also
reported the highest value for the monopole strength at the peak of the
distribution; the strength at the peak is almost a factor of two lower
in Ref. uchida2004 , while the results from Ref. patel2013testing lie in
between. The latter two distributions reach their maximum at a slightly lower
excitation energy. The iThemba LABS results corrected using the FEWSR results
available from Ref. patel2013testing are in fair agreement with the IS0
distribution from that paper. On the other hand, it better agrees with the IS0
distribution from Ref. uchida2004 when the TAMU-based correction factors are
employed. The strength visible above $17$ MeV in the present data (and
possibly also in the high excitation region of the 120Sn data) might be
attributed to a less than perfect subtraction of the low-energy flank of the
ISGDR uchida2003 that dominates the background cross sections.
A total of $83\pm 5\%$ $(96\pm 5\%)$, $84\pm 9\%$ $(88\pm 9\%)$, $112\pm
11\%$, and $124\pm 14\%$ $(85\pm 14\%)$ of the IS0 EWSR was identified for
58Ni, 90Zr, 120Sn, and 208Pb using the RCNP- (TAMU)-based correction factors.
The quoted EWSR fractions have been calculated over the excitation-energy
range $10-24.5$ MeV ($10-17$ MeV for 208Pb), encompassing the main ISGMR peak
and the errors associated include both systematic and statistical
uncertainties. While a comparison to previously quoted values is difficult
because they strongly depend on the chosen energy interval, they illustrate
that most of the ISGMR strength is found in the energy range covered by the
present data.
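The quoted totals are obtained by accumulating the EWSR fractions over the stated excitation-energy window; a minimal sketch, with a hypothetical function name and under the assumption that $a_{0}$ holds the fraction of the EWSR exhausted per bin:

```python
import numpy as np

def ewsr_fraction(Ex, a0, lo, hi):
    """Total fraction of the IS0 EWSR found inside [lo, hi] MeV,
    assuming a0 gives the EWSR fraction exhausted in each Ex bin."""
    mask = (Ex >= lo) & (Ex <= hi)
    return np.sum(a0[mask])
```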
There are clear structural differences between results originating from TAMU
and RCNP in the case of 90Zr and 208Pb, but not for 58Ni. These differences
are within the main region of the ISGMR, and not confined to high excitation
energies where background subtraction effects might be expected to dominate.
In the case of 58Ni, the iThemba LABS data show fair agreement with previous
datasets regardless of the source of the correction factors, but the picture
is unfortunately not so clear for the heavier nuclei. This is due to the
reliance in this study on the $L>0$ strength distributions sourced from the
very experiments with which we wish to compare IS0 results. Consider that for
90Zr there is, at best, agreement between iThemba LABS and RCNP results when
using the RCNP-based correction factor and, at worst, a situation of three
distinct IS0 strength distributions. For the case of 208Pb, the iThemba LABS
data agree either with the strength distribution from Uchida et al.
uchida2004 or Patel et al. patel2013testing depending on the use of the
TAMU- or RCNP-based correction factors, respectively.
It is interesting to consider the various values used to characterize the
energy of the ISGMR reported in the literature for the data shown in Figs.
7-10, originating either from peak fitting or from moment ratio calculations
Lipp89 . The results are summarized in Table 4. Clearly, the value assigned to
the ISGMR centroid depends on the calculation method. Peak fitting with
Gaussian or Lorentzian distributions is not very satisfactory, as the real
shape of the ISGMR rarely conforms to these simplistic peak shapes. Values for
the various moment ratios, on the other hand, depend heavily on the
excitation-energy range over which they are calculated, and in the absence of
clear guidelines, one finds quite a variation in the integration ranges
utilized. The behavior of the scaling model energies ($\sqrt{m_{3}/m_{1}}$)
for the case of 58Ni and 90Zr as compared to 120Sn confirms the impact of
large integration ranges on the extracted centroid values. The higher values
of the RCNP results gupta2018isoscalar in the case of 90Zr, even for a
smaller excitation-energy range covered than in the TAMU results
krishichayan2015g , stem from possible contributions due to the physical
continuum at high excitation energies GC_review2018 .
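The moment-ratio estimators follow from $m_{k}=\int E_{\text{x}}^{k}S(E_{\text{x}})dE_{\text{x}}$ (Table 5 caption). A minimal numpy sketch with hypothetical function names, assuming a binned strength distribution sampled on an energy grid:

```python
import numpy as np

def moment(Ex, S, k):
    """m_k = integral of Ex^k * S(Ex) dEx, trapezoidal rule on the grid."""
    f = Ex**k * S
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(Ex))

def isgmr_energies(Ex, S):
    """The three centroid estimators compared in Tables 4 and 5."""
    m = {k: moment(Ex, S, k) for k in (-1, 0, 1, 3)}
    return {"m1/m0": m[1] / m[0],
            "sqrt(m1/m-1)": np.sqrt(m[1] / m[-1]),
            "sqrt(m3/m1)": np.sqrt(m[3] / m[1])}
```

For a distribution sharply peaked at a single energy all three estimators coincide there; broad or asymmetric high-energy tails pull $\sqrt{m_{3}/m_{1}}$ up and $\sqrt{m_{1}/m_{-1}}$ down, which is exactly why the integration range matters so much in the comparisons above.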
It is important to be aware of these complications, as differences of several
hundred keV impact the extraction of nuclear matter incompressibility from
theoretical calculations. For example, in Ref. Shl2006 the value for
$K_{\infty}$ was constrained by using the $m_{1}/m_{0}$ ratio of the TAMU data
to represent the energies of the ISGMR in 208Pb and 90Zr. The experimental
centroid energies typically change by more than 400 keV if the results from
RCNP studies are used instead, as done in a recent study by Li et al. Li2022 ,
where the centroid for 208Pb originates from the moment ratio
$\sqrt{m_{1}/m_{-1}}$.
While the structural differences in the strength distributions highlighted in
previous paragraphs will also contribute towards the range of values reported
in Table 4, the large variations in applicable energy ranges make it
impossible to compare these results on an even footing. For this reason we
calculated, for all the strength distributions shown in Figs. 7-10, the three
moment ratios over the same excitation-energy range, and present the results
in Table 5. In addition, we fitted the IS0 strength distributions with a
Lorentzian
$S\left(E_{\text{x}}\right)=\frac{\sigma_{0}}{\left(E_{\text{x}}^{2}-E^{2}_{0}\right)^{2}+E_{\text{x}}^{2}\Gamma^{2}}\,,$ (6)
in order to extract characteristic centroid and width parameters. Here,
$E_{0}$ and $\Gamma$ represent the peak energy and width of the resonance, and
$\sigma_{0}$ denotes the strength value at $E_{0}$. The fitted parameters show large
deviations from the moment ratios, demonstrating again that the ISGMR strength
distributions are not well approximated by a Lorentzian shape. The results in
Table 5 confirm that differences of up to several hundred keV in centroid
energies calculated through any of the moment-ratio methods can be observed
between the available datasets.
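A crude numpy-only illustration of extracting $(E_{0},\Gamma)$ from Eq. (6): since $\sigma_{0}$ enters linearly, it can be solved in closed form for each trial shape, leaving a two-parameter grid search. This is a sketch with hypothetical function names, not the fitting procedure actually used in the analysis (which presumably relies on a proper nonlinear least-squares routine).

```python
import numpy as np

def lorentzian(Ex, sigma0, E0, Gamma):
    """Eq. (6)."""
    return sigma0 / ((Ex**2 - E0**2)**2 + Ex**2 * Gamma**2)

def fit_lorentzian(Ex, S, E0_grid, Gamma_grid):
    """Grid search over (E0, Gamma); sigma0 is linear and solved exactly."""
    best = None
    for E0 in E0_grid:
        for Gamma in Gamma_grid:
            shape = 1.0 / ((Ex**2 - E0**2)**2 + Ex**2 * Gamma**2)
            sigma0 = np.dot(S, shape) / np.dot(shape, shape)  # least-squares amplitude
            resid = np.sum((S - sigma0 * shape) ** 2)
            if best is None or resid < best[0]:
                best = (resid, sigma0, E0, Gamma)
    return best[1:]  # (sigma0, E0, Gamma)
```

Note that for this parametrization the maximum of $S$ sits at $E_{\text{x}}=\sqrt{E_{0}^{2}-\Gamma^{2}/2}$, slightly below $E_{0}$, one more reason the fitted peak parameter and the moment-ratio centroids need not coincide.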
It is, therefore, clear that the comparison between different experimental
studies as well as theory should not be based only on a single number, i.e.,
the centroid energy dependent on energy integration ranges or calculation
methods, but also on the full strength distributions. This view is supported
by recent studies GC_review2018 ; colo2020 . Theoretically, this requires
going beyond the mean-field level in calculations and including at least
particle-vibration coupling (PVC). A current study of the ISGMR in the chain
of stable tin isotopes including PVC Li2022 demonstrates centroid shifts of
several hundred keV, potentially resolving the longstanding problem that the
random-phase approximation (RPA) calculations require a significantly lower
value of $K_{\infty}$ to describe the Sn isotopes than 208Pb.
## V Conclusions
We present IS0 strength distributions on nuclei over a wide mass range
obtained with the DoS method modified to allow for excitation-energy-dependent
correction factors. These were deduced from available information on $L>0$
isoscalar strengths. The need for input from other experiments introduces a
model dependence in the analysis. When using input from various previous
studies the effects were found to be negligible for 58Ni, but large for 90Zr
and 208Pb. In general, when taking the $L>0$ strengths from RCNP experiments,
fair to good agreement with the IS0 strength distributions from those
experiments is achieved.
There is quite a variation in values of ISGMR centroids reported in
literature. Besides the much discussed problems of the subtraction of an
empirical background (containing physical and instrumental parts) favored by
the TAMU group and the possible inclusion of $L=0$ strength unrelated to the
ISGMR at high excitation energies in the analysis of RCNP data, we show that
the structural differences in the main ISGMR peak in results from previous
experiments impact on the centroid energy. This is particularly true in the
cases of 90Zr and 208Pb, which have been used to extract the nuclear matter
incompressibility from the comparison to RPA calculations with different
forces.
While the present data cannot resolve the experimental issues, because of the
model-dependent method of extraction of the IS0 strength, they underline a
need for new high-precision data on key nuclei for the determination of
$K_{\infty}$ combined with an improved theoretical treatment aiming at a
description of the full strength distributions rather than the ISGMR centroids
only. Theoretically, this requires the inclusion of complex configurations
beyond the level of RPA. As an example, a current study of the ISGMR in the
chain of stable tin isotopes demonstrates centroid shifts of several hundred
keV when PVC is included Li2022 , allowing for a consistent description with
forces reproducing the centroid in 208Pb.
## ACKNOWLEDGEMENTS
The authors thank the Accelerator Group at iThemba LABS for the high-quality
dispersion-matched beam provided for this experiment. We are indebted to G.
Colò and U. Garg for useful discussions. This work was supported by the
Deutsche Forschungsgemeinschaft under contract SFB $1245$ (Project ID No.
$79384907$) and by an NRF-JINR grant JINR200401510986. A.B. acknowledges
financial support through iThemba LABS, NRF South Africa. R.N. acknowledges
support from the NRF through Grant No. $85509$. P.A. acknowledges support from
the Claude Leon Foundation in the form of a postdoctoral fellowship. This work
is based on the research supported in part by the National Research Foundation
of South Africa (Grant Number: $118846$).
## References
* (1) M. N. Harakeh and A. van der Woude, Giant Resonances: Fundamental High-Frequency Modes of Nuclear Excitation Oxford Studies in Nuclear Physics Vol. 24 (Oxford University Press, New York, 2001).
* (2) M. N. Harakeh, K. van der Borg, T. Ishimatsu, H. P. Morsch, A. van der Woude, and F. E. Bertrand, Phys. Rev. Lett. 38, 676 (1977).
* (3) D. H. Youngblood, C. M. Rozsa, J. M. Moss, D. R. Brown, and J. D. Bronson, Phys. Rev. Lett. 39, 1188 (1977).
* (4) J. Blaizot, Phys. Rep. 64, 171 (1980).
* (5) U. Garg and G. Colò, Prog. Part. Nucl. Phys. 101, 55 (2018).
* (6) D. H. Youngblood, Y.-W. Lui, Krishichayan, J. Button, M. R. Anders, M. L. Gorelik, M. H. Urin, and S. Shlomo, Phys. Rev. C 88, 021301 (2013).
* (7) Krishichayan, Y.-W. Lui, J. Button, D. H. Youngblood, G. Bonasera, and S. Shlomo, Phys. Rev. C 92, 044323 (2015).
* (8) Y. K. Gupta, U. Garg, K. B. Howard, J. T. Matta, M. Şenyiğit, M. Itoh, S. Ando, T. Aoki, A. Uchiyama, S. Adachi, M. Fujiwara, C. Iwamoto, A. Tamii, H. Akimune, C. Kadono, Y. Matsuda, T. Nakahara, T. Furuno, T. Kawabata, M. Tsumura, M. N. Harakeh, and N. Kalantar-Nayestanaki, Phys. Lett. B 760, 482 (2016).
* (9) K. B. Howard, U. Garg, Y. K. Gupta, and M. N. Harakeh, Eur. Phys. J. A 55, 228 (2019).
* (10) D. H. Youngblood, Y.-W. Lui, and H. L. Clark, Phys. Rev. C 63, 067301 (2001).
* (11) Y.-W. Lui, D. H. Youngblood, S. Shlomo, X. Chen, Y. Tokimoto, Krishichayan, M. Anders, and J. Button, Phys. Rev. C 83, 044327 (2011).
* (12) J. Button, Y.-W. Lui, D. H. Youngblood, X. Chen, G. Bonasera, and S. Shlomo, Phys. Rev. C 96, 054330 (2017).
* (13) K. Howard, U. Garg, M. Itoh, H. Akimune, S. Bagchi, T. Doi, Y. Fujikawa, M. Fujiwara, T. Furuno, M. N. Harakeh, Y. Hijikata, K. Inaba, S. Ishida, N. Kalantar-Nayestanaki, T. Kawabata, S. Kawashima, K. Kitamura, N. Kobayashi, Y. Matsuda, A. Nakagawa ,S. Nakamura, K. Nosaka, S. Okamoto, S. Ota, S. Weyhmiller, and Z. Yangh, Phys. Lett. B 801, 135185 (2020).
* (14) S. D. Olorunfunmi, R. Neveling, J. Carter, P. von Neumann-Cosel, I. T. Usman, P. Adsley, A. Bahini, L. P. L. Baloyi, J. W. Brümmer, L. M. Donaldson, H. Jivan, N. Y. Kheswa, K. C. W. Li, D. J. Marín-Lámbarri, P. T. Molema, C. S. Moodley, G. G. O’Neill, P. Papka, L. Pellegri, V. Pesudo, E. Sideras-Haddad, F. D. Smit, G. F. Steyn, A. A. Aava, F. Diel, F. Dunkel, P. Jones, and V. Karayonchev, Phys. Rev. C 105, 054319 (2022).
* (15) D. H. Youngblood, H. L. Clark, and Y.-W. Lui, Phys. Rev. C 65, 034302 (2002).
* (16) G. Colò, D. Gambacurta, W. Kleinig, J. Kvasil, V. O. Nesterenko, and A. Pastore, Phys. Lett. B 811, 135940 (2020).
* (17) A. Bahini, V. O. Nesterenko, I. T. Usman, P. von Neumann-Cosel, R. Neveling, J. Carter, J. Kvasil, A. Repko, P. Adsley, N. Botha, J. W. Brümmer, L. M. Donaldson, S. Jongile, T. C. Khumalo, M. B. Latif, K. C. W. Li, P. Z. Mabika, P. T. Molema, C. S. Moodley, S. D. Olorunfunmi, P. Papka, L. Pellegri, B. Rebeiro, E. Sideras-Haddad, F. D. Smit, S. Triambak, and J. J. van Zyl, Phys. Rev. C 105, 024311 (2022).
* (18) R. Neveling, H. Fujita, F. D. Smit, T. Adachi, G. P. A. Berg, E. Z. Buthelezi, J. Carter, J. L. Conradie, M. Couder, R. W. Fearick, S. V. Förtsch, D. T. Fourie, Y. Fujita, J. Görres, K. Hatanaka, M. Jingo, A. M. Krumbholz, C. O. Kureba, J. P. Mira, S. H. T. Murray, P. von Neumann-Cosel, S. O′Brien, P. Papka, I. Poltoratska, A. Richter, E. Sideras-Haddad, J. A. Swartz, A. Tamii, I. T. Usman, and J. J. van Zyl, Nucl. Instrum. Methods Phys. Res. Sect. A 654, 29 (2011).
* (19) T. Kawabata, Few-Body Syst. 54, 1457 (2013).
* (20) K. V. D. Borg, M. Harakeh, and A. V. D. Woude, Nucl. Phys. A 365, 243 (1981).
* (21) Y. K. Gupta, K.B. Howard, U. Garg, J. T. Matta, M. Şenyiğit, M. Itoh, S. Ando, T. Aoki, A. Uchiyama, S. Adachi, M. Fujiwara, C. Iwamoto, A. Tamii, H. Akimune, C. Kadono, Y. Matsuda, T. Nakahara, T. Furuno, T. Kawabata, M. Tsumura, M. N. Harakeh, and N. Kalantar-Nayestanaki, Phys. Rev. C 97, 064323 (2018).
* (22) S. Brandenburg, R. De Leo, A. G. Drentje, M. N. Harakeh, H. Sakai and A. van der Woude, Phys. Lett. B 130, 9 (1983).
* (23) G. R. Satchler, Nucl. Phys. A 472, 215 (1987).
* (24) V. Plujko, O. Gorbachenko, R. Capote, and P. Dimitriou, At. Data Nucl. Data Tables 123-124, 1 (2018).
* (25) G. R. Satchler and D. T. Khoa, Phys. Rev. C 55, 285 (1997).
* (26) G. Fricke and C. Bernhardt, At. Data Nucl. Data Tables 60, 177 (1995).
* (27) M. H. Macfarlane and S. C. Pieper, Argonne National Laboratory Report No. ANL-76-11, (1978) (unpublished).
* (28) M. Rhoades-Brown, M. H. Macfarlane, and S. C. Pieper, Phys. Rev. C 21, 2436 (1980).
* (29) D. H. Youngblood, Y.-W. Lui, H. L. Clark, B. John, Y. Tokimoto, and X. Chen, Phys. Rev. C 69, 034315 (2004).
* (30) H. L. Clark, Y.-W. Lui, and D. H. Youngblood, Nucl. Phys. A 687, 80c (2001).
* (31) Y.-W. Lui, D. H. Youngblood, H. L. Clark, Y. Tokimoto, and B. John, Phys. Rev. C 73, 014314 (2006).
* (32) T. Li, U. Garg, Y. Liu, R. Marks, B. K. Nayak, P. V. Madhusudhana Rao, M. Fujiwara, H. Hashimoto, K. Nakanishi, S. Okumura, M. Yosoi, M. Ichikawa, M. Itoh, R. Matsuo, T. Terazono, M. Uchida, Y. Iwao, T. Kawabata, T. Murakami, H. Sakaguchi, Z. Terashima, Y. Yasuda, J. Zenihiro, H. Akimune, K. Kawase, and M. N. Harakeh, Phys. Rev. C 81, 034309 (2010).
* (33) K. B. Howard, Structure effects on the giant monopole resonance and determinations of the nuclear incompressibility, Ph.D. thesis, University of Notre Dame, 2020.
* (34) D. C. Patel, A Study of the Isoscalar Giant Monopole Resonance: The Role of Symmetry Energy in Nuclear Incompressibility in the Open-Shell Nuclei, Ph.D. thesis, University of Notre Dame, 2020.
* (35) B. K. Nayak, U. Garg, M. Hedden, M. Koss, T. Li, Y. Liu, P. V. Madhusudhana Rao, S. Zhu, M. Itoh, H. Sakaguchi, H. Takeda, M. Uchida, Y. Yasuda, M. Yosoi, H. Fujimura, M. Fujiwara, K. Hara, T. Kawabata, H. Akimune, and M. N. Harakeh, Phys. Lett. B 637, 43 (2006).
* (36) Y. Fujita, M. Fujiwara, S. Morinobu, I. Katayama, T. Yamazaki, T. Itahashi, H. I kegami, and S. I. Hayakawa, Phys. Rev. C 32, 425 (1985).
* (37) M. Uchida, H. Sakaguchi, M. Itoh, M. Yosoi, T. Kawabata, Y. Yasuda, H. Takeda, T. Murakami, S. Terashima, S. Kishi, U. Garg, P. Boutachkov, M. Hedden, B. Kharraja, M. Koss, B. K. Nayak, S. Zhu, M. Fujiwara, H. Fujimura, H. P. Yoshida, K. Hara, H. Akimune, and M. N. Harakeh, Phys. Rev. C 69, 051601 (2004).
* (38) D. C. Patel, U. Garg, M. Fujiwara, T. Adachi, H. Akimune, G. P. A. Berg, M. N. Harakeh, M. Itoh, C. Iwamoto, A. Long, J. T. Matta, T. Murakami, A. Okamoto, K. Sault, R. Talwar, M. Uchida, and M. Yosoi, Phys. Lett. B 726, 178 (2013).
* (39) M. Uchida, H. Sakaguchi , M. Itoh, M. Yosoi, T. Kawabata, H. Takeda, Y. Yasuda, T. Murakami, T. Ishikawa, T. Taki, N. Tsukahara, S. Terashima, U. Garg, M. Hedden, B. Kharraja, M. Koss, B.K. Nayak, S. Zhu, M. Fujiwara, H. Fujimura, K. Hara, E. Obayashi, H. P. Yoshida, H. Akimune, M. N. Harakeh, and M. Volkerts, Phys. Lett. B 557, 12 (2003).
* (40) E. Lipparini and S. Stringari, Phys. Rep. 175, 103 (1989).
* (41) S.Shlomo, V. M. Kolomietz, and G. Colò, Eur. Phys. J. A 30, 23 (2006).
* (42) Z. Z. Li, Y. F. Niu, and G. Colò, arXiv:2211.01264.
# Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images
Meng Wang, Kai Yu, Chun-Mei Feng, Ke Zou, Yanyu Xu, Qingquan Meng,
Rick Siow Mong Goh, Yong Liu, and Huazhu Fu Meng Wang and Kai Yu contributed
equally to this work. Corresponding author: Huazhu Fu (hzfu@ieee.org). Meng
Wang, Kai Yu, Chun-Mei Feng, Yanyu Xu, Rick Siow Mong Goh, Yong Liu, and
Huazhu Fu are with the Institute of High Performance Computing, Agency for
Science, Technology and Research (A*STAR), Singapore 138632, Singapore. Ke Zou
is with National Key Laboratory of Fundamental Science on Synthetic Vision and
the College of Computer Science, Sichuan University, Chengdu 610065, China.
Qingquan Meng is with Soochow University, Suzhou 215006, China.
###### Abstract
The joint segmentation of retinal edema lesions in OCT images is complicated
by pathological features such as blurred boundaries, severe scale differences
between symptoms, and background noise interference, and its results should
also be reliable. In this paper, we propose a novel reliable multi-scale
wavelet-enhanced transformer network, which can provide accurate segmentation
results together with a reliability assessment. Specifically, aiming at
improving the model’s ability to learn the complex pathological features of
retinal edema lesions in OCT images, we develop a novel segmentation backbone
that integrates a wavelet-enhanced feature extractor network with our newly
designed multi-scale transformer module.
Meanwhile, to make the segmentation results more reliable, a novel uncertainty
segmentation head based on the subjective logical evidential theory is
introduced to generate the final segmentation results with a corresponding
overall uncertainty evaluation score map. We conduct comprehensive experiments
on the public database of AI-Challenge 2018 for retinal edema lesions
segmentation, and the results show that our proposed method achieves better
segmentation accuracy with a high degree of reliability as compared to other
state-of-the-art segmentation approaches. The code will be released on:
https://github.com/LooKing9218/ReliableRESeg.
###### Index Terms:
Reliable segmentation, Multi-scale transformer, Wavelet, Retinal edema lesions
## 1 Introduction
Retinal edema lesions are symptoms highly correlated with diabetic retinopathy
(DR), including retinal edema (RE), sub-retinal fluid (SRF), and
pigment epithelial detachment (PED) [1]. Retinal optical coherence tomography
(OCT) is a non-invasive imaging technology that can visualize the cross-
sectional structure of the retina, and it has been widely used in the
diagnosis of retinal diseases [2]. Ophthalmologists can evaluate retinal edema
lesions efficiently through retinal OCT images [3]. Therefore, accurate
segmentation of retinal edema lesions from OCT images can greatly aid ophthalmologists
in evaluating diseases and specifying treatment plans [4]. Recently, many
segmentation methods based on the convolutional neural networks (CNNs) [5, 6,
7, 8] have been proposed. While these CNN-based methods achieved excellent
performance, there is a limitation in modeling explicit long-range dependent
global features due to the inherent locality of convolutional operations.
To address this limitation, many studies have attempted to introduce the
transformer to improve the model’s ability to learn long-range dependent
global information [9, 10]. Several transformer-based segmentation models
have also been proposed for image segmentation and achieved comparable
performance with CNN-based approaches [11, 12, 13]. However, few previous
studies have explored the challenges in the task of retinal edema lesions
joint segmentation from OCT images.
Figure 1: Examples of retinal edema lesions. Red, blue, and green represent
RE, SRF, and PED, respectively. Retinal edema lesions often co-occur in
multiple morphologies, and as the disease progresses, the same symptom lesion
usually shows serious scale differences in different OCT slices.
As shown in Fig. 1, retinal edema lesions in OCT images have some specific
challenging characteristics. For example, PEDs usually occupy a small
proportion of the whole OCT image, which may cause the loss of small-size
feature information during down-sampling and thus reduce the segmentation
accuracy of PEDs. In contrast to PEDs, RE and SRF are usually larger, but they
show large morphological differences across OCT slices, and RE is often
characterized by blurred borders, which degrades the performance of
segmentation methods designed for other tasks. In addition, most previous
segmentation methods mainly focus on improving segmentation accuracy while
ignoring the assessment of the reliability of the segmentation results, which
makes them unreliable when deployed in evidence-critical retinal disease
evaluation applications. In summary, there are still
three main challenges in the task of retinal edema lesions joint segmentation
from OCT images: 1) How to avoid the loss of small-sized features during down-
sampling, while enhancing the model’s ability to represent local detailed
features, such as retinal structure information and tiny PED lesion features.
2) How to improve the ability of the model to learn complex multi-scale global
semantic information of retinal edema lesions in OCT images. 3) How to
evaluate the confidence of the segmentation results to make the segmentation
results reliable without accuracy loss.
Therefore, in this paper, we propose a novel reliable multi-scale wavelet-
enhanced transformer network addressing these challenges in the task of joint
segmentation of retinal edema lesions from OCT images. The main contributions
of this paper can be summarized as follows:
* •
We design a novel feature extractor network by replacing the down-sampling
operation in pre-trained ResNet with our newly proposed adaptive wavelet down-
sampling (AWDS) module, which can generate a wavelet representation to avoid
small feature loss while enhancing the ability of the network to represent
local multi-resolution detailed features.
* •
A novel multi-scale transformer (AMsTrans) module is developed to combine with
our proposed wavelet-enhanced feature extractor network, which aims to further
improve the model’s capacity of extracting the multi-scale long-range
dependent global features of the retinal edema lesions.
* •
To make the segmentation results more reliable, an uncertainty-aware
segmentation head based on the subjective logic evidence theory is introduced
to generate the final segmentation result for retinal edema lesions, while
giving a corresponding overall uncertainty evaluation score map to evaluate
the confidence of the segmentation result without accuracy loss.
* •
We conduct comprehensive experiments on the public database of AI-Challenge
2018 for retinal edema lesions segmentation, and the results show that our
proposed method outperforms other state-of-the-art segmentation approaches.
Meanwhile, with the estimated uncertainty, our method could produce a reliable
segmentation result, and potentially avoid disasters from out-of-distribution
(OOD) samples.
## 2 Related works
Image segmentation methods based on CNN: Deep learning, especially
convolutional neural networks (CNNs), has demonstrated strong feature
representation ability and achieved excellent performance in many computer
vision tasks [14, 15, 16, 17, 18]. Recently, numerous end-to-end CNN-based
segmentation methods have been proposed and achieved promising performance
[19, 5, 20, 6, 7, 8]. Currently, CNN-based methods can be grouped into
two architectures: the fully convolutional network (FCN) architecture [19, 20, 21]
and the U-shaped (UNet) framework [5, 7, 6, 8, 22]. Usually, the FCN architecture
uses a classic baseline model such as VGG [15] or ResNet [17] as the backbone
network to extract the rich semantic information of the input data, and then
directly restore the top-level features to the input image size to segment the
target. Segmentation approaches based on the U-shaped architecture are
mainly composed of three core components: an encoder for extracting rich
features of the input image, a decoder for gradually restoring the resolution
information of the top-level features, and skip-connection to fuse the
information from different levels of the encoder with the corresponding
decoder’s features. Although these CNN-based methods have achieved promising
results in their respective specific segmentation tasks, the direct
application of these methods to the segmentation of retinal edema lesions in
OCT images still suffers from two limitations: 1) Simple down-sampling
operations often result in the loss of feature information of small lesions,
leading to low segmentation accuracy of tiny PED lesions with very small
regional proportions. 2) These methods are incapable of modeling explicit
long-range global features due to the inherent locality of convolution
operations.
Image segmentation methods based on Transformer: The transformer was
originally proposed for modeling the long-range dependencies of sequential
signals in natural language processing (NLP) tasks [9]. With the
impressive achievements of Transformer in the field of NLP [23, 24, 25],
researchers have extended it to the field of computer vision [10]. Recently,
transformer-based methods [26, 27, 13] have also achieved comparable
performance to CNNs in many vision tasks as the transformer layer has
unlimited receptive fields and can capture long-range dependency global
features. Several segmentation methods based on transformer have also been
proposed for medical image segmentation [11, 12, 28]. However, the performance
of these methods in the task of retinal edema lesions segmentation with
complex pathological features is degraded due to the weaker local detail
feature representation ability of the transformer. In addition, most of these
methods explore long-range dependent features at a single scale, which is
another important reason for the performance degradation when these methods
are applied to the segmentation of retinal edema lesions with complex scale
features. Therefore, how to improve the ability of the model to learn complex
multi-scale global contextual information of retinal edema lesions is crucial
for improving the segmentation performance of the model. Recently, some
previous works have improved the model’s ability to represent multi-scale
global features by introducing a multi-scale learning module when designing
the model. [29] proposed CoTr for accurate 3D medical image segmentation.
Under this framework, the CNN is constructed to extract feature
representations and an efficient deformable Transformer (DeTrans) is built to
model the long-range dependency on the extracted feature maps. [30] designed
M2TR to capture the subtle manipulation artifacts at different scales using
transformer models. [31] introduced adaptive attention multi-scale fusion
transformer (AFTrans) to capture region attention without box annotations and
compensate for ViT shortcomings in fine-grained visual recognition. However,
most previous multi-scale methods still explore multi-scale features within a
feature map of a single size, ignoring feature maps of different sizes and
semantic levels. Different from previous
multi-scale approaches [29, 30, 31, 8], our proposed multi-scale transformer
module explores the multi-scale features by adaptively aggregating the global
long-dependent information in multi-scale feature maps from different levels
of feature extractor network.
Wavelet for local detailed feature information representation: Wavelet
transform has a good local detailed feature representation capacity in the
time-frequency domain, and can present any local details in the image, so it
is widely used to deal with various image problems [32, 33, 34]. Inspired by
the resemblance between multi-resolution analysis and the convolutional
filtering and pooling operations in CNNs, there are several previous works
dedicated to developing wavelet convolutional neural networks for computer
vision tasks [35, 36, 37]. Although wavelet networks can model local multi-
resolution detail features well, their ability to learn complex texture
features is weaker than that of classic CNN-based baselines such as ResNet
[17] and DenseNet [38]. Enhancing the model's ability to learn complex
texture features is therefore crucial for applying wavelet theory to the
joint segmentation of retinal edema lesions from OCT images. To this end, in
this paper, we construct the feature extractor network to explore local
multi-resolution detailed features by integrating our
proposed adaptive wavelet down-sampling module with the pre-trained ResNet
blocks.
Figure 2: The framework of our proposed reliable multi-scale wavelet enhanced
transformer network. Our proposed method mainly consists of four components: a
novel wavelet-enhanced feature extractor network which integrates the pre-
trained ResNet blocks with our proposed adaptive wavelet down-sampling (AWDS)
module, it aims to extract the complicated feature information in the retinal
OCT image; the adaptive multi-scale transformer (AMsTrans) module is appended
on the top layer of feature extractor network to guide the model to explore
the multi-scale long-range dependent global features of retinal lesions; the
decoder path to restore the spatial information with strong multi-scale global
features generated by AMsTrans module and gradually fuse the multi-semantic
contextual information from different stages of feature extractor network; the
uncertainty-aware segmentation head is used to generate the final segmentation
results with corresponding uncertainty score map, where regions with high
uncertainty scores indicate that the segmentation result in that region may be
incorrect and may require double-check by the physician, while regions with
low uncertainty scores indicate a high level of confidence in the segmentation
result.
Uncertainty-based learning: In the past decade, deep learning-based methods
have achieved excellent results in various computer vision tasks [14, 17, 16, 18, 10,
13]. However, these deep learning models are essentially deterministic
functions and thus cannot provide uncertainty estimation for final decisions.
Therefore, the uncertainty learning method has gradually attracted the
attention of researchers. Bayesian Neural Networks (BNNs) [39, 40, 41] obtain
deep model uncertainty by substituting model deterministic weight parameters.
However, BNNs usually come with a high computational cost; therefore, Gal et
al. [42] proposed a more scalable and practical approach, MC-dropout, in which
the inference decision is obtained by dropout sampling of weights during
training and testing. Lakshminarayanan et al. [43] proposed ensemble-based
uncertainty methods by training and integrating multiple deep networks, which
also achieved promising performance. Unlike these methods, which obtain
uncertainty estimates by modeling uncertainty through network weights, Sensoy
et al. [44] proposed an uncertainty method based on subjective logic
evidential theory that directly models uncertainty without ensembles or Monte
Carlo sampling. Building upon RBF networks, Amersfoort et al. [45] adopted the
distance between test samples and prototypes as a proxy for deterministic
uncertainty. Furthermore, Han et al. [46] proposed a unified multi-view
uncertainty classification method by combining subjective logical theory with
the Dirichlet distribution, which achieved excellent performance in the task
of multi-view classification. However, unlike the previous studies on
single/multi-view classification, our joint segmentation task for retinal
edema lesions is not a simple classification task, and it also needs to
consider the spatial continuity of the lesion area to give high uncertainty
scores to incorrect segmented regions.
## 3 Methods
### 3.1 Overall architecture
As shown in Fig. 2, our proposed reliable multi-scale wavelet-enhanced
transformer network is designed based on the U-shaped architecture and mainly
consists of four components: a wavelet-enhanced feature extractor network,
which integrates the pre-trained ResNet blocks with our proposed adaptive
wavelet down-sampling (AWDS) module and aims to extract the complicated
feature information in the retinal OCT image; the AMsTrans module, which is
appended on the top layer of the feature extractor network to guide the model
to explore the multi-scale long-range dependent global features of retinal
lesions; the decoder path, which restores the spatial information using the
strong multi-scale global features generated by the AMsTrans module and
gradually fuses the multi-semantic contextual information from different
stages of the feature extractor network; and the uncertainty-aware
segmentation head, which generates the final segmentation results with a
corresponding uncertainty score map. Regions with high uncertainty scores
indicate that the segmentation result there may be incorrect and may require a
double-check by the physician, while regions with low uncertainty scores
indicate a high level of confidence in the segmentation result.
### 3.2 Wavelet-enhanced feature extractor network
As shown in Fig. 3, to extract the complex feature information in OCT images
while enhancing the model’s ability to extract the local multi-resolution
detailed information, we propose a novel feature extractor network, which
mainly consists of the pre-trained ResNet blocks and the AWDS module.
Previous studies have shown that pre-trained ResNet has good feature
representation capability and is widely used as a feature extraction backbone
in different vision tasks [19, 7, 8]. While these methods have achieved
remarkable performance, they are essentially spatial approaches and usually
ignore spectral information that is crucial for representing the multi-
resolution local detailed features in OCT images. In addition, the down-
sampling operation in ResNet may also result in the loss of small-sized
symptomatic features of retinal edema lesions, leading to poor segmentation
performance for PEDs with small-size features. Furthermore, it has been
demonstrated that wavelet transform has a good local multi-resolution detailed
feature representation capacity in the time-frequency domain, thus can present
any local details in the image [32, 34, 35, 37].
Therefore, we introduce wavelet theory and develop a novel AWDS module to
replace the down-sampling operation in ResNet, which can supplement the
missing spectral information in ResNet and improve the model’s ability to
extract local multi-resolution detailed features. The architecture of the
proposed AWDS module is shown in Fig. 3. In the AWDS module, we adopt a 2D
adaptive lifting scheme to perform a multi-resolution wavelet transform on the
input feature maps ${\rm{\textbf{Input}}}\in{{\rm{R}}^{\left({C,H,W}\right)}}$ to
generate four wavelet sub-bands feature maps
(${\rm{\textbf{LL}}}\in{{\rm{R}}^{\left({C,H/2,W/2}\right)}}$,
${\rm{\textbf{LH}}}\in{{\rm{R}}^{\left({C,H/2,W/2}\right)}}$,
${\rm{\textbf{HL}}}\in{{\rm{R}}^{\left({C,H/2,W/2}\right)}}$, and
${\rm{\textbf{HH}}}\in{{\rm{R}}^{\left({C,H/2,W/2}\right)}}$), and a
Conv3$\times$3 operation is used to fuse the features of different sub-bands
to achieve down-sampling without any feature loss.
#### 3.2.1 The adaptive horizontal lifting scheme
The input 2D feature map is first split into the even
(${I_{e}}\left[{n,:}\right]=I\left[{2n,:}\right]$) and odd
(${I_{o}}\left[{n,:}\right]=I\left[{2n+1,:}\right]$) horizontal components.
Then, a horizontal updater (${U_{h}}$) and a horizontal predictor (${P_{h}}$)
are performed on the split components to generate the approximation ($L_{H}$)
and detail ($H_{H}$) sub-bands of the wavelet transformation as follows:
$\begin{split}L_{H}\left[{n,:}\right]&={I_{e}}\left[{n,:}\right]+{\mathop{\rm
U_{h}}\nolimits}\left({{I_{o}}\left[{n,:}\right]}\right),\\\
H_{H}\left[{n,:}\right]&={I_{o}}\left[{n,:}\right]-P_{h}\left({{L_{H}}\left[{n,:}\right]}\right),\end{split}$
(1)
where $U_{h}$ and $P_{h}$ are two learnable blocks consisting of convolutional
operations as shown in Fig.3, both of which can adaptively optimize their
coefficients during training by gradient back-propagation.
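As a concrete illustration, the horizontal lifting step of Eq. (1) can be sketched in a few lines of NumPy. The updater and predictor below are fixed linear placeholders standing in for the learnable convolutional blocks $U_{h}$ and $P_{h}$; a useful property of the lifting structure is that it is exactly invertible for any choice of these operators, so no information is lost.

```python
import numpy as np

def lift_horizontal(I, U, P):
    """One horizontal lifting step (Eq. 1): split rows into even/odd,
    then update -> approximation L and predict -> detail H."""
    I_e, I_o = I[0::2, :], I[1::2, :]   # even / odd rows
    L = I_e + U(I_o)                    # approximation sub-band
    H = I_o - P(L)                      # detail sub-band
    return L, H

def unlift_horizontal(L, H, U, P):
    """Exact inverse: the lifting structure is invertible for any U, P."""
    I_o = H + P(L)
    I_e = L - U(I_o)
    I = np.empty((I_e.shape[0] + I_o.shape[0], L.shape[1]))
    I[0::2, :], I[1::2, :] = I_e, I_o
    return I

# Fixed linear placeholders for the learnable conv blocks U_h, P_h.
U = lambda x: 0.5 * x
P = lambda x: 0.25 * x

I = np.arange(32, dtype=float).reshape(8, 4)
L, H = lift_horizontal(I, U, P)
I_rec = unlift_horizontal(L, H, U, P)
assert np.allclose(I, I_rec)  # the down-sampling step loses no information
```

In the paper, $U_{h}$ and $P_{h}$ are small convolutional blocks whose coefficients are learned by back-propagation; the invertibility above holds regardless of what they learn.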
#### 3.2.2 The adaptive vertical lifting scheme
Similar to the adaptive horizontal lifting scheme, taking $H_{H}$ as an
example, the input 2D feature map $H_{H}$ is first split into the even
(${H_{He}}\left[{:,n}\right]={H_{H}}\left[{:,2n}\right]$) and odd
(${H_{Ho}}\left[{:,n}\right]={H_{H}}\left[{:,2n+1}\right]$) vertical
components. Then, a vertical updater (${U_{v}}$) and a vertical predictor
(${P_{v}}$) are performed on the split components to generate the
approximation ($HL$) and detail ($HH$) sub-bands of the wavelet
transformation as follows:
$\begin{split}HL\left[{:,n}\right]&={H_{He}}\left[{:,n}\right]+{\mathop{\rm
U_{v}}\nolimits}\left({{H_{Ho}}\left[{:,n}\right]}\right),\\\
HH\left[{:,n}\right]&={H_{Ho}}\left[{:,n}\right]-P_{v}\left({{HL}\left[{:,n}\right]}\right).\end{split}$
(2)
Like $U_{h}$ and $P_{h}$, both $U_{v}$ and $P_{v}$ are learnable blocks
consisting of convolutional operations, as shown in Fig. 3, and both can
adaptively optimize their coefficients during training by gradient back-
propagation.
It can be seen from Fig. 3 and Eqs. 1 and 2 that the input feature map can be
down-sampled without losing any feature information by the 2D adaptive lifting
scheme. Finally, ${\rm{\textbf{LL}}}\in{{\rm{R}}^{\left({C,H/2,W/2}\right)}}$,
${\rm{\textbf{LH}}}\in{{\rm{R}}^{\left({C,H/2,W/2}\right)}}$,
${\rm{\textbf{HL}}}\in{{\rm{R}}^{\left({C,H/2,W/2}\right)}}$, and
${\rm{\textbf{HH}}}\in{{\rm{R}}^{\left({C,H/2,W/2}\right)}}$ are concatenated
and fed into a Conv${3\times 3}$ layer to adaptively fuse the features of the
different wavelet sub-bands.
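The full AWDS decomposition applies the horizontal lifting step and then the vertical step on each branch, yielding the four $(H/2, W/2)$ sub-bands. The sketch below assumes a single channel, uses fixed linear placeholders for the learnable updaters and predictors, and omits the final Conv3$\times$3 fusion, checking only that the sub-band shapes come out as described.

```python
import numpy as np

def lift(x, axis, U, P):
    """Generic lifting step along one axis: even/odd split, update, predict."""
    e = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    o = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    approx = e + U(o)        # update -> approximation
    detail = o - P(approx)   # predict -> detail
    return approx, detail

U = lambda t: 0.5 * t    # placeholder for a learnable updater
P = lambda t: 0.25 * t   # placeholder for a learnable predictor

x = np.random.rand(64, 32)                # one channel of the input, (H, W)
L_H, H_H = lift(x, axis=0, U=U, P=P)      # horizontal step -> two (H/2, W) maps
LL, LH = lift(L_H, axis=1, U=U, P=P)      # vertical step on the approximation
HL, HH = lift(H_H, axis=1, U=U, P=P)      # vertical step on the detail
subbands = np.stack([LL, LH, HL, HH])     # what the Conv3x3 would then fuse
assert subbands.shape == (4, 32, 16)      # four (H/2, W/2) sub-bands
```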
Figure 3: The framework of the adaptive wavelet down-sampling module. In the
AWDS module, we adopt a 2D adaptive lifting scheme to perform a multi-
resolution wavelet transform on the input feature maps
${\rm{\textbf{Input}}}\in{{\rm{R}}^{\left({C,H,W}\right)}}$ to generate four
wavelet sub-bands feature maps
(${\rm{\textbf{LL}}}\in{{\rm{R}}^{\left({C,H/2,W/2}\right)}}$,
${\rm{\textbf{LH}}}\in{{\rm{R}}^{\left({C,H/2,W/2}\right)}}$,
${\rm{\textbf{HL}}}\in{{\rm{R}}^{\left({C,H/2,W/2}\right)}}$, and
${\rm{\textbf{HH}}}\in{{\rm{R}}^{\left({C,H/2,W/2}\right)}}$), and a
Conv3$\times$3 operation is used to fuse the features of different sub-bands
to achieve down-sampling without any feature loss.
### 3.3 The adaptive multi-scale transformer module
As shown in Fig. 1, retinal edema lesions often co-occur in multiple
morphologies, and as the disease progresses, the same symptom lesion usually
shows serious scale differences in different OCT slices. Therefore, it is
crucial for improving the segmentation performance to enhance the model’s
capacity to learn the global features of the retinal edema lesions with
different scales. The transformer, with its excellent ability to model
long-range dependent features, has been applied in many fields of computer
vision. Therefore, inspired by the transformer and dedicated to multi-scale
global long-range dependent feature modeling in retinal OCT images, we propose
a novel AMsTrans module, as shown in Fig. 4. It can be seen from Fig. 2 and
Fig. 4 that, unlike the common transformer, which focuses on features of a
single scale, the proposed AMsTrans module takes feature maps of different
scales from different stages of the encoder path as input. First, the feature
maps from level-1 ($F_{1}$), level-2 ($F_{2}$), level-3 ($F_{3}$), and level-4
($F_{4}$) are fed into a bilinear interpolation down-sampling module followed
by a Conv3$\times$3 layer to normalize the resolution and channels to the top
layer feature map ($F_{T}$). Then, the normalized feature maps and the top
layer feature map are respectively fed into the corresponding scaled dot-
product attention block, so as to learn the long-range dependent global
features at different scales. This process is analogous to the multi-head
self-attention operation in the common transformer structure, i.e., the scaled
dot-product attention block of each scale branch corresponds to one head of
the multi-head self-attention in the common transformer. Meanwhile, inspired
by the artificial neuron (AN) [47], a weighted sum operation followed by a
Conv3$\times$3 feature fusion layer is adopted to adaptively fuse the long-
range dependent global features from the multi-scale feature maps,
${F_{Ms}}=Conv3\times
3\left({\textbf{1}*{F_{T}}+\sum\limits_{i=1}^{4}{{w_{i}}{F_{i}}}}\right),$ (3)
where the fixed weight $\textbf{1}$ on $F_{T}$ plays the role of the bias in
an AN, while the $w_{i}$ are learnable weights obtained by a Conv1$\times$1
layer followed by Sigmoid normalization, as shown in Fig. 4,
$W={\mathop{\rm Sigmoid}\nolimits}\left({Conv1\times 1\left({{\mathop{\rm
Concat}\nolimits}\left({{F_{T}},{F_{1}},{F_{2}},{F_{3}},{F_{4}}}\right)}\right)}\right)$
(4)
where $W=\left[{w_{1},w_{2},w_{3},w_{4}}\right]$ collects the learnable fusion
weights. Finally, the residual architecture is constructed by summing
$F_{T}$ with $F_{Ms}$ to further enhance the model’s ability to model strong
semantic abstract features contained in the top layer, while avoiding the
gradient vanishing. Different from previous multi-scale approaches [29, 30,
31, 8], our proposed AMsTrans module adaptively weights the global features
learned from feature maps of multiple scales, which enhances the model's
ability to represent the multi-scale global features of the different retinal
lesions while avoiding interference from the weak semantic information of the
shallow layers with the high-level semantic features contained in the top
layer.
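A minimal NumPy sketch of the adaptive fusion in Eqs. (3)-(4): the per-scale attention blocks are omitted, and the Conv1$\times$1-plus-Sigmoid weight branch is approximated by a linear map on pooled statistics. `amstrans_fuse` and `w_proj` are hypothetical names, and the level maps are assumed already resized to the top-layer resolution.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def amstrans_fuse(F_T, F_levels, w_proj):
    """Adaptive weighted fusion of Eqs. (3)-(4): the top-level map F_T keeps
    a fixed weight of 1 (the 'bias'), each resized level map F_i gets a
    learned weight w_i in (0, 1)."""
    # Stand-in for Conv1x1(Concat(...)) + Sigmoid: linear map on pooled means.
    stats = np.array([F_T.mean()] + [F.mean() for F in F_levels])
    w = sigmoid(w_proj @ stats)            # one weight per level map
    fused = F_T + sum(wi * Fi for wi, Fi in zip(w, F_levels))
    return fused, w

F_T = np.random.rand(16, 8)                           # top-layer feature map
F_levels = [np.random.rand(16, 8) for _ in range(4)]  # resized F_1..F_4
w_proj = np.random.rand(4, 5) * 0.1                   # stand-in learnable map
F_Ms, w = amstrans_fuse(F_T, F_levels, w_proj)
assert F_Ms.shape == F_T.shape and np.all((w > 0) & (w < 1))
```

The design point the sketch preserves is that only the shallow-level contributions are gated, so weak shallow semantics cannot overwhelm the top-layer features.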
Figure 4: Illustration of the adaptive multi-scale transformer module. Our
proposed AMsTrans module takes feature maps of different scales from
different stages of the encoder path as input. First, the feature maps from
level-1 ($F_{1}$), level-2 ($F_{2}$), level-3 ($F_{3}$), and level-4 ($F_{4}$)
are fed into a bilinear interpolation down-sampling module followed by a
Conv3$\times$3 layer to normalize the resolution and channels to the top layer
feature map ($F_{T}$). Then, the normalized feature maps and the top layer
feature map are respectively fed into the corresponding scale dot-product
attention block, so as to learn the long-dependent global features in
different scale-size. Meanwhile, inspired by an artificial neuron, the
weighted sum operation followed by the Conv3$\times$3 feature fusion layer is
adopted to adaptively fuse long-dependent global features from multi-scale
feature maps, where the weights are obtained by Conv1$\times$1 followed by
Sigmoid normalization layer.
### 3.4 Uncertainty-aware segmentation head
Figure 5: Overview of uncertainty-aware segmentation head. Step 1: Obtaining
the evidence $E$ of the three retinal edema lesions by applying the Softplus
activation function to ensure the feature values are larger than 0; Step 2:
Parameterizing $E$ to a Dirichlet distribution; Step 3: Calculating the belief
masses and corresponding uncertainty scores.
How to make the segmentation result more reliable without losing accuracy is
crucial for the joint segmentation of retinal edema lesions from OCT images.
To this end, as shown in Fig. 2 and Fig. 5, we introduce an uncertainty-aware
segmentation head based on subjective logic evidential uncertainty theory,
which generates retinal edema lesion segmentation results with a corresponding
uncertainty assessment map from the feature evidence of the last layer of the
main segmentation framework. Specifically, assume a $K$-target segmentation
task with $K+1$ mass maps: $K$ belief mass maps corresponding to the target
lesions and one overall uncertainty mass map. All masses are non-negative and
sum to 1 at each coordinate, i.e., $u_{i,j}+\sum^{K}_{k=1}b^{k}_{i,j}=1$,
where $b^{k}_{i,j}\geq 0$ is the belief mass of the $k$-th target lesion at
coordinate $\left(i,j\right)$ and $u_{i,j}\geq 0$ is the corresponding overall
uncertainty score. In this paper, the joint segmentation of retinal edema
lesions is a 3-target segmentation task, thus $K$ is set to 3. As shown in
Fig. 5, the belief masses and overall uncertainty scores can be calculated in
the following three steps:
Step 1: Obtaining the evidence
$E_{i,j}=[e_{i,j}^{1},e_{i,j}^{2},...,e_{i,j}^{K}]$ of the three retinal edema
lesions by applying the Softplus activation function to ensure the feature
values are larger than 0:
$\begin{split}E&=Softplus\left(F_{Out}\right)\in R^{B,K,H,W},\end{split}$ (5)
where $F_{Out}$ denotes the final feature maps obtained from the last layer of
our proposed segmentation backbone, while $B$, $H$, and $W$ indicate the batch
size, height, and width of $F_{Out}$, respectively.
Step 2: Parameterizing $E$ to Dirichlet distribution, as:
$\begin{split}\bm{\alpha}_{i,j}=&E_{i,j}+1,\;i.e.,\;\alpha^{k}_{i,j}=e^{k}_{i,j}+1,\\\
\end{split}$ (6)
where $\alpha^{k}$ and $e^{k}$ are the Dirichlet distribution parameter and
the evidence of the $k$-th category, respectively.
Step 3: Calculating the belief masses and corresponding uncertainty scores
as:
$\begin{split}b^{k}_{i,j}=\frac{e^{k}_{i,j}}{S}=\frac{\alpha^{k}_{i,j}-1}{S},\;u_{i,j}=\frac{K}{S},\\\
\end{split}$ (7)
where
$S=\sum\nolimits^{K}_{k=1}\left(e^{k}_{i,j}+1\right)=\sum\nolimits^{K}_{k=1}\alpha^{k}_{i,j}$
is the Dirichlet strength. It can be seen from Eq. (7) that the belief mass
assigned to category $k$ is proportional to the observed evidence for category
$k$; conversely, the less total evidence is observed, the greater the overall
uncertainty. The belief assignment can be considered a subjective opinion.
Given an opinion, the probability of the $k$-th target lesion at coordinate
$\left(i,j\right)$ is computed as $p^{k}_{i,j}=\frac{\alpha^{k}_{i,j}}{S}$
based on the Dirichlet distribution [48].
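Steps 1-3 can be sketched directly in NumPy for one image; `uncertainty_head` is a hypothetical name, and $F_{Out}$ here is a single $(K, H, W)$ feature map rather than a batch. The final assertion checks the defining property of the opinion: belief masses and uncertainty sum to 1 at every pixel.

```python
import numpy as np

def uncertainty_head(F_out):
    """Steps 1-3 of the uncertainty-aware head for a (K, H, W) feature map:
    Softplus evidence -> Dirichlet parameters -> belief masses + uncertainty."""
    E = np.log1p(np.exp(F_out))        # Step 1: Softplus, evidence e >= 0
    alpha = E + 1.0                    # Step 2: Dirichlet parameters (Eq. 6)
    S = alpha.sum(axis=0)              # Dirichlet strength per pixel
    b = (alpha - 1.0) / S              # Step 3: belief mass per class (Eq. 7)
    u = F_out.shape[0] / S             # overall uncertainty score map, K / S
    return b, u

K, H, W = 3, 4, 4
F_out = np.random.randn(K, H, W)
b, u = uncertainty_head(F_out)
# Belief masses and uncertainty form a valid subjective opinion at each pixel.
assert np.allclose(b.sum(axis=0) + u, 1.0)
assert np.all(b >= 0) and np.all(u > 0)
```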
### 3.5 Loss function
In previous segmentation studies, a joint loss function ($\ell_{O}$)
consisting of cross-entropy loss ($\ell_{CE}$) and Dice loss ($\ell_{Dice}$)
has been widely used for model optimization in segmentation tasks, as follows:
$\ell_{O}=\ell_{CE}+\ell_{Dice},$ (8)
where
$\begin{split}\ell_{CE}&=-\sum^{K}_{k=1}\sum\nolimits_{\left(i,j\right)}Y^{k}_{i,j}log\left(p^{k}_{i,j}\right),\\\
\ell_{Dice}&=\sum^{K}_{k=1}\sum\nolimits_{\left(i,j\right)}\left(1-\frac{2Y^{k}_{i,j}p^{k}_{i,j}}{Y^{k}_{i,j}+p^{k}_{i,j}}\right),\end{split}$
(9)
where $Y^{k}_{i,j}$ and $p^{k}_{i,j}$ indicate the ground truth and the
segmentation result of the $k$-th class at $\left(i,j\right)$, respectively.
In this paper, however, subjective logic (SL) is adopted to associate the
evidence features used to generate the final segmentation results with the
parameters of a Dirichlet distribution, i.e., given the evidence features
$E_{i,j}=\left[e^{1}_{i,j},e^{2}_{i,j},...,e^{K}_{i,j}\right]$, the Dirichlet
distribution parameters $\bm{\alpha}_{i,j}=E_{i,j}+1$ and the target lesion
probabilities of
$P_{i,j}=\left[p^{1}_{i,j},p^{2}_{i,j},...,p^{K}_{i,j}|\bm{\alpha}\right]$ are
obtained. Therefore, to guide the model optimization at the pixel level, the
cross-entropy loss is re-formalized as follows:
$\footnotesize\begin{split}\ell_{UCE}\\!&=\\!\int-\sum^{K}_{k=1}\sum\nolimits_{\left(i,j\right)}Y^{k}_{i,j}log\left(p^{k}_{i,j}\right)\frac{1}{B\left(\bm{\alpha}_{i,j}\right)}\prod^{K}_{k=1}\left(p^{k}_{i,j}\right)^{\alpha^{k}_{i,j}-1}dP_{i,j}\\\
&=\sum^{K}_{k=1}\sum\nolimits_{\left(i,j\right)}Y^{k}_{i,j}\left(\psi\left(S_{i,j}\right)-\psi\left(\alpha^{k}_{i,j}\right)\right),\end{split}$
(10)
where $\psi(\cdot)$ denotes the digamma function, while
$B\left(\bm{\alpha}_{i,j}\right)$ is the multinomial beta function of the
concentration parameters $\bm{\alpha}_{i,j}$ at $\left(i,j\right)$. Meanwhile,
we further introduce a KL divergence term to ensure that incorrect labels
yield less evidence:
$\scriptsize\begin{split}\ell_{KL}&=\sum\nolimits_{\left(i,j\right)}\log\left(\dfrac{\Gamma\left(\sum^{K}_{k=1}\tilde{\alpha}^{k}_{i,j}\right)}{\Gamma\left(K\right)\prod^{K}_{k=1}\Gamma\left(\tilde{\alpha}^{k}_{i,j}\right)}\right)\\\
&+\sum\nolimits_{\left(i,j\right)}\sum^{K}_{k=1}\left(\tilde{\alpha}^{k}_{i,j}-1\right)\left[\psi\left(\tilde{\alpha}^{k}_{i,j}\right)-\psi\left(\sum^{K}_{m=1}\tilde{\alpha}^{m}_{i,j}\right)\right],\end{split}$
(11)
where $\Gamma(\cdot)$ is the gamma function, while
$\bm{\tilde{\alpha}}_{i,j}=Y_{i,j}+\left(1-Y_{i,j}\right)\odot\bm{\alpha}_{i,j}$
denotes the adjusted parameters of the Dirichlet distribution, which avoid
penalizing the evidence of the ground-truth class toward 0. Therefore, the
objective function for model optimization at the pixel level, based on the
feature distribution parameterized by the Dirichlet concentration, is as
follows:
$\ell_{Un}=\ell_{UCE}+\lambda_{u}\ast\ell_{KL},$ (12)
where $\lambda_{u}$ is the balance factor weighting $\ell_{KL}$. To prevent
the model from focusing too much on the KL loss at the early stage of
training, which may lead to poor exploration of the parameter space and a
flat, uniform output distribution, we initialize $\lambda_{u}$ as 0 and
gradually increase it with the number of training iterations.
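A minimal NumPy/SciPy sketch of the pixel-level objective of Eqs. (10)-(12), assuming one-hot ground truth of shape $(K, H, W)$ and a fixed balance factor rather than the annealed schedule; `uce_loss` and `kl_to_uniform` are hypothetical names.

```python
import numpy as np
from scipy.special import digamma, gammaln

def uce_loss(Y, alpha):
    """Evidential cross-entropy of Eq. (10): Y is one-hot (K, H, W),
    alpha are the Dirichlet parameters of the same shape."""
    S = alpha.sum(axis=0, keepdims=True)
    return np.sum(Y * (digamma(S) - digamma(alpha)))

def kl_to_uniform(Y, alpha):
    """KL term of Eq. (11): KL(Dir(alpha_tilde) || Dir(1)), with the
    ground-truth evidence masked out so it is not penalized."""
    at = Y + (1.0 - Y) * alpha                     # adjusted parameters
    S = at.sum(axis=0)
    term1 = gammaln(S) - gammaln(at.shape[0]) - gammaln(at).sum(axis=0)
    term2 = ((at - 1.0) * (digamma(at) - digamma(S))).sum(axis=0)
    return np.sum(term1 + term2)

K, H, W = 3, 2, 2
Y = np.zeros((K, H, W)); Y[0] = 1.0                # all pixels are class 0
alpha_good = np.ones((K, H, W)); alpha_good[0] = 20.0  # evidence on class 0
alpha_bad = np.ones((K, H, W)); alpha_bad[1] = 20.0    # evidence on class 1
lam = 0.5  # fixed stand-in for the annealed balance factor lambda_u
loss_good = uce_loss(Y, alpha_good) + lam * kl_to_uniform(Y, alpha_good)
loss_bad = uce_loss(Y, alpha_bad) + lam * kl_to_uniform(Y, alpha_bad)
assert loss_good < loss_bad  # strong correct evidence -> smaller loss
```

Note that after masking, the adjusted parameters of a correctly classified pixel reduce to the uniform Dirichlet, so its KL contribution is exactly zero.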
Meanwhile, we also propose a new uncertainty-aware loss function based on the
Dice loss, aiming to guide the model optimization at the image level, as
follows:
$\begin{split}\ell_{U-Dice}&=\sum^{K}_{k=1}\sum\nolimits_{\left(i,j\right)}\left(1-\frac{2Y^{k}_{i,j}p^{k}_{i,j}}{Y^{k}_{i,j}+p^{k}_{i,j}}\right)\\\
&+\left(1-\sum\nolimits_{\left(i,j\right)}\left(\frac{2\bar{Y}_{i,j}U_{i,j}}{\bar{Y}_{i,j}+U_{i,j}}\right)\right).\end{split}$
(13)
where $\bar{Y}_{i,j}$ represents the error mask used to guide the model to
assign high uncertainty scores to incorrectly segmented regions during
training:
$\begin{split}\bar{Y}_{i,j}=1-\bm{1}\left\\{P_{i,j},Y_{i,j}\right\\},\;{\rm where}\\\
\bm{1}\left\\{P_{i,j},Y_{i,j}\right\\}=\begin{cases}1&{\rm if}\ P_{i,j}=Y_{i,j}\\\
0&{\rm otherwise}\end{cases}.\end{split}$ (14)
As can be seen from Eq. 13 and Eq. 14, our proposed uncertainty-aware dice
loss can not only guide the model to focus on the accuracy of lesion region
segmentation, but also enable the model to assign high uncertainty scores to
incorrectly segmented regions. Overall, in this paper, the objective function
for model optimization is calculated as:
$\ell_{overall}=\ell_{Un}+\ell_{U-Dice}.$ (15)
As shown in Eq. 15, $\ell_{Un}$ is employed to guide the model optimization at
the pixel level, while the uncertainty-aware Dice loss $\ell_{U-Dice}$ is used
to optimize the model at the image level.
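The uncertainty-aware Dice loss of Eqs. (13)-(14) can be sketched as follows; note the sketch uses the conventional aggregated Dice form per class rather than the per-pixel sum written in Eq. (13), and `u_dice_loss` is a hypothetical name. The error mask $\bar{Y}$ is 1 exactly where the hard prediction disagrees with the ground truth, so minimizing the second term pushes the uncertainty map toward highlighting wrongly segmented regions.

```python
import numpy as np

def u_dice_loss(Y, p, u, eps=1e-6):
    """Uncertainty-aware Dice: per-class Dice terms plus a Dice term that
    matches the uncertainty map u to the error mask Y_bar (Eq. 14)."""
    loss = 0.0
    for k in range(Y.shape[0]):                 # standard Dice per class
        inter = (Y[k] * p[k]).sum()
        loss += 1.0 - 2.0 * inter / (Y[k].sum() + p[k].sum() + eps)
    # Error mask: 1 where the hard prediction disagrees with the ground truth.
    Y_bar = (p.argmax(axis=0) != Y.argmax(axis=0)).astype(float)
    inter_u = (Y_bar * u).sum()                 # u should be high on errors
    loss += 1.0 - 2.0 * inter_u / (Y_bar.sum() + u.sum() + eps)
    return loss

# Toy 2-class, 2x2 example: a correct and a completely wrong prediction.
Y = np.zeros((2, 2, 2)); Y[0, 0, :] = 1.0; Y[1, 1, :] = 1.0
p_good = Y * 0.9 + 0.05
p_bad = (1.0 - Y) * 0.9 + 0.05
u = np.full((2, 2), 0.1)
assert u_dice_loss(Y, p_good, u) < u_dice_loss(Y, p_bad, u)
```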
## 4 Experiments and Results
### 4.1 Dataset and implementation detail
We systematically evaluate the proposed method on the public AI-Challenge 2018
database for retinal edema lesion segmentation, which covers the segmentation
of retinal edema (RE), sub-retinal fluid (SRF), and pigment epithelial
detachment (PED) with severely imbalanced regional proportions. The regional
ratio of RE, SRF, and PED in this database is 0.8441:0.1493:0.0066; the
proportion of PED is much smaller than that of RE and SRF, which makes
accurately segmenting the PED lesion particularly challenging. The dataset
contains 85 retinal OCT cubes (1024$\times$512$\times$128) with ground truth.
We randomly divide the dataset into three exclusive subsets for training,
validation, and testing based on the 3D cubes, with a ratio of 6:2:2. The
training set therefore contains 6528 B-Scan OCT images, while the validation
and testing sets each contain 2176 B-Scan OCT images. In addition, to
comprehensively evaluate the performance of the different methods, three
metrics, the Jaccard coefficient, the Dice similarity index, and accuracy
(ACC), are used to quantitatively analyze the segmentation results.
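For a single binary lesion mask, the three evaluation metrics can be sketched as follows (a minimal NumPy illustration; the function and variable names are our own):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Jaccard, Dice, and pixel accuracy for one binary mask pair."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    jaccard = inter / union if union else 1.0   # intersection over union
    dice = 2.0 * inter / total if total else 1.0  # 2|A∩B| / (|A|+|B|)
    acc = (pred == gt).mean()                    # fraction of matching pixels
    return jaccard, dice, acc
```

In the multi-class setting of Table I, such metrics would be computed per lesion class (PED, SRF, RE) and then averaged.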
For data preprocessing, we resize each retinal OCT B-Scan to 512$\times$256 to
improve computational efficiency while avoiding excessive loss of detail and
maintaining the average aspect ratio. To ensure fairness, all experiments in
this paper were performed with the public PyTorch platform on an RTX 3090 GPU
(24 GB). All networks are optimized with Adam using a batch size of 8 and a
maximum of 100 training epochs. The initial learning rate and weight decay
were set to 0.0005 and 0.0001, respectively.
TABLE I: The Jaccard, Dice, and Accuracy of different methods
Methods | Jaccard (PED) | Jaccard (SRF) | Jaccard (RE) | Jaccard (Avg.) | Dice (PED) | Dice (SRF) | Dice (RE) | Dice (Avg.) | Acc (PED) | Acc (SRF) | Acc (RE) | Acc (Avg.)
---|---|---|---|---|---|---|---|---|---|---|---|---
UNet | 0.603 | 0.893 | 0.783 | 0.759 | 0.696 | 0.940 | 0.872 | 0.836 | 0.998 | 0.993 | 0.906 | 0.966
AttUNet | 0.661 | 0.839 | 0.766 | 0.755 | 0.772 | 0.896 | 0.863 | 0.844 | 0.998 | 0.988 | 0.894 | 0.960
CE-Net | 0.474 | 0.885 | 0.778 | 0.712 | 0.585 | 0.936 | 0.870 | 0.797 | 0.997 | 0.993 | 0.901 | 0.964
CPFNet | 0.571 | 0.888 | 0.787 | 0.749 | 0.681 | 0.936 | 0.875 | 0.831 | 0.998 | 0.993 | 0.907 | 0.966
UNet++ | 0.583 | 0.888 | 0.787 | 0.753 | 0.695 | 0.937 | 0.874 | 0.835 | 0.998 | 0.993 | 0.909 | 0.967
GLFRNet | 0.630 | 0.888 | 0.794 | 0.771 | 0.739 | 0.936 | 0.880 | 0.852 | 0.998 | 0.994 | 0.909 | 0.967
TransUNet | 0.611 | 0.887 | 0.776 | 0.758 | 0.718 | 0.938 | 0.870 | 0.842 | 0.998 | 0.993 | 0.901 | 0.964
UTNet | 0.450 | 0.864 | 0.710 | 0.675 | 0.571 | 0.923 | 0.820 | 0.771 | 0.996 | 0.991 | 0.876 | 0.954
MsTGANet | 0.629 | 0.889 | 0.782 | 0.767 | 0.742 | 0.937 | 0.871 | 0.850 | 0.998 | 0.993 | 0.906 | 0.966
Bayesian | 0.723 | 0.888 | 0.792 | 0.801 | 0.817 | 0.937 | 0.879 | 0.878 | 0.999 | 0.993 | 0.910 | 0.967
TBraTS | 0.721 | 0.890 | 0.793 | 0.802 | 0.820 | 0.938 | 0.879 | 0.879 | 0.999 | 0.993 | 0.911 | 0.968
Proposed | 0.760 | 0.901 | 0.802 | 0.821 | 0.856 | 0.947 | 0.885 | 0.896 | 0.999 | 0.995 | 0.917 | 0.970
Figure 6: Segmentation results of different models, where red represents RE,
blue represents SRF, and green represents PED.
### 4.2 Comparison results
To comprehensively evaluate the performance of the proposed method, we compare
it with other state-of-the-art segmentation methods, including CNN-based
models [5, 7, 49, 6, 8, 22, 50], transformer-based networks [11, 12], and
uncertainty-based approaches [42, 51]. As shown in Table I, the proposed
method achieves the highest performance on all segmentation targets, with
average Jaccard, Dice, and Acc of 0.821, 0.896, and 0.970, respectively. As
also seen from Table I, both CE-Net [7] and CPFNet [8] adopt the original
pre-trained ResNet [17] as the feature extraction network to capture rich
feature information in medical images.
TABLE II: The Jaccard, Dice, and Accuracy of ablation experiments
Wavelet | AMsTrans | Uncertainty | Jaccard (PED) | Jaccard (SRF) | Jaccard (RE) | Jaccard (Avg.) | Dice (PED) | Dice (SRF) | Dice (RE) | Dice (Avg.) | ACC (PED) | ACC (SRF) | ACC (RE) | ACC (Avg.)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
/ | / | / | 0.604 | 0.882 | 0.779 | 0.755 | 0.726 | 0.935 | 0.870 | 0.844 | 0.998 | 0.992 | 0.904 | 0.965
✓ | / | / | 0.662 | 0.888 | 0.793 | 0.781 | 0.764 | 0.937 | 0.880 | 0.860 | 0.998 | 0.993 | 0.907 | 0.966
✓ | ✓ | / | 0.725 | 0.888 | 0.793 | 0.802 | 0.818 | 0.937 | 0.879 | 0.878 | 0.999 | 0.993 | 0.910 | 0.967
✓ | ✓ | ✓ | 0.760 | 0.901 | 0.802 | 0.821 | 0.856 | 0.947 | 0.885 | 0.896 | 0.999 | 0.995 | 0.917 | 0.970
Figure 7: Wavelet feature representation visualization. Red arrows indicate
PED lesions. The retinal structure information at different levels of the
feature extractor network is preserved and enhanced, especially for the PED
lesion with small-size features.
However, the down-sampling operation in ResNet may cause feature loss for
small targets, degrading their performance on our retinal edema lesion
segmentation task, whose feature information is sparse and complex. In
addition, GLFRNet [50] improves segmentation performance by introducing a
global feature reconstruction module and a local feature reconstruction
module; however, the feature loss caused by down-sampling still limits its
performance when segmenting PED lesions, whose regional proportion is very
small.
Figure 8: Variation of the Dice index with the intensity of added Gaussian
noise for different methods.
Compared with these CNN-based methods, the transformer-based methods TransUNet
[11] and MsTGANet [22] both improve segmentation performance by combining
transformers with CNNs to strengthen the model's ability to learn long-range
global dependencies. As shown in Table I, both achieve performance comparable
to most CNN-based methods. In contrast to these CNN-based and
transformer-based approaches, our proposed method alleviates feature loss and
improves the capture of multi-resolution local features through the designed
wavelet-enhanced feature extractor network, and further improves the capture
of multi-scale, long-range global features of retinal edema lesions by
combining the proposed AMsTrans module with the wavelet-enhanced extractor.
Meanwhile, we also introduce a novel uncertainty-aware segmentation head that
generates segmentation results together with uncertainty evaluation maps,
which makes our proposed method more reliable while retaining high
segmentation accuracy. Therefore, as seen in Table I, whether for PED with its
small area proportion, SRF with a clear boundary but large scale variance, or
RE with a blurred boundary and a large area proportion, our proposed method
achieves higher segmentation metrics for all these retinal edema lesions. The
Jaccard scores for PED, SRF, and RE are improved by 20.6%, 1.5%, and 1.0%,
respectively, over GLFRNet [50], which achieves the highest average
performance among all comparison methods. Meanwhile, we also compare our
proposed method with other uncertainty-based approaches, namely the
Bayesian-based method [42] and TBraTS [51]. As shown in Table I, our proposed
method achieves better segmentation performance: its average Jaccard improves
by 2.5% and 2.4% over the Bayesian-based method and TBraTS, respectively. In
addition, Fig. 6 shows the segmentation results of the different methods. The
proposed method obtains better segmentation results than the other models; the
over-segmentation and missing-segmentation problems are significantly
alleviated, especially for the PED lesion (Fig. 6(c) and Fig. 6(d)). These
quantitative and qualitative results show that our proposed method
significantly improves the joint segmentation of retinal edema lesions from
OCT images, demonstrating its effectiveness.
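The relative improvements quoted above can be reproduced directly from the Jaccard columns of Table I (a small sanity-check script; the dictionaries simply transcribe the table):

```python
# Jaccard scores transcribed from Table I
proposed = {"PED": 0.760, "SRF": 0.901, "RE": 0.802}
glfrnet = {"PED": 0.630, "SRF": 0.888, "RE": 0.794}

# Relative improvement (%) of the proposed method over GLFRNet
gains = {k: round(100.0 * (proposed[k] - glfrnet[k]) / glfrnet[k], 1)
         for k in proposed}
print(gains)  # {'PED': 20.6, 'SRF': 1.5, 'RE': 1.0}
```

The same computation applied to the average Jaccard column (0.821 vs. 0.801 for the Bayesian method and 0.802 for TBraTS) yields the 2.5% and 2.4% figures cited in the text.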
Figure 9: Segmentation results of the proposed method on noisy OCT images.
Regions with high uncertainty scores indicate that the segmentation there may
be incorrect and may require a double-check by the physician, while regions
with low uncertainty scores indicate high confidence in the segmentation
result.
### 4.3 Ablation study
We conducted a variety of ablation studies to demonstrate the effectiveness of
the proposed method; the results are shown in Table II. Here, the modified
U-shaped network that adopts pre-trained ResNet as the extractor network
serves as the ‘Backbone’ model. Table II shows that the model with the
proposed wavelet-enhanced extractor network outperforms the Backbone model,
especially for the tiny PED lesion, whose Jaccard and Dice are improved by
9.6% and 5.23%, respectively. Furthermore, Fig. 7 shows the feature
reconstruction results of the different sub-bands generated by the lifting
scheme at different levels; they closely resemble those of a traditional
wavelet transform. As shown in Fig. 7, the retinal structure information at
the different levels of the feature extractor network is preserved and
enhanced, especially for the PED lesion with its small-size features. These
reconstruction results show that our proposed wavelet-enhanced feature
extractor network (‘Wavelet’ in Table II) generates a wavelet representation
that avoids feature loss while enhancing the network's ability to represent
local multi-resolution details. Meanwhile, the average Jaccard of
Wavelet+AMsTrans improves by 6.2% and 2.7% over the Backbone and Wavelet
models, respectively, which demonstrates that segmentation performance can be
further improved by combining AMsTrans with Wavelet, proving the effectiveness
of the AMsTrans module. Finally, the full method
(Wavelet+AMsTrans+Uncertainty) achieves the highest metrics for all retinal
edema lesions; in particular, the Jaccard of PED is 25.8% higher than that of
the Backbone. In general, these ablation results demonstrate the effectiveness
of each component of our proposed method.
### 4.4 The reliability analysis
We also conducted experiments to further verify the reliability of our
proposed method. Speckle noise is a main cause of poor OCT image quality [3,
52, 53]. Therefore, we degraded the OCT images by adding Gaussian noise with
different variances $\sigma^{2}$. As shown in Fig. 8, the performance of all
comparison methods decreases as the level of added noise increases, whereas
our proposed method still maintains higher segmentation performance than the
other segmentation models and uncertainty approaches, demonstrating that it
achieves higher segmentation accuracy with better robustness. Meanwhile, Fig.
9 shows the segmentation results of the proposed method on noisy OCT images.
The histogram distributions of the original and noisy images differ
significantly; the quality of the noisy images is degraded, and the boundary
information of the retinal lesions is more blurred, which is the main reason
for the degraded segmentation performance of all methods. Our proposed method,
however, provides a corresponding uncertainty evaluation, which makes the
segmentation results more reliable. As shown in Fig. 9, although the proposed
method produces incorrect segmentation results on noisy images, it also
assigns high uncertainty scores in the corresponding uncertainty map to
reflect the low confidence in those results; i.e., it implicitly indicates
that segmented regions with high uncertainty scores may be unreliable and need
to be double-checked by the ophthalmologist. This makes our proposed method
more reliable and can potentially avoid failures on out-of-distribution (OOD)
samples.
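The noise-degradation protocol of this experiment can be sketched as follows (a minimal NumPy illustration; the function name, the assumed [0, 1] intensity range, and the fixed seed are our own choices):

```python
import numpy as np

def add_gaussian_noise(img, sigma2, rng=None):
    """Degrade an image with intensities in [0, 1] by adding zero-mean
    Gaussian noise of variance sigma2, then clip back to the valid range."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = img + rng.normal(0.0, np.sqrt(sigma2), img.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Sweeping `sigma2` over a range of values and re-evaluating each model on the degraded images produces robustness curves of the kind shown in Fig. 8.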
## 5 Conclusion and discussion
In this paper, focusing on the challenges of joint retinal edema lesion
segmentation from OCT images, we propose a novel, reliable multi-scale
wavelet-enhanced transformer network that integrates CNNs, the wavelet
transform, the transformer concept, and an uncertainty mechanism. To the best
of our knowledge, this is the first work aiming to develop a reliable method
for the joint segmentation of retinal edema lesions from OCT images. We
conducted comprehensive experiments on the AI-Challenge 2018 retinal edema
lesion segmentation dataset to validate our method. The experimental results
show that it achieves better segmentation performance with higher robustness
than other state-of-the-art segmentation approaches. Meanwhile, unlike
previous segmentation methods, our method produces reliable segmentation
results with an estimated uncertainty and without loss of accuracy, which
makes the model more trustworthy.
Our proposed method achieves promising performance on the challenging task of
joint retinal edema lesion segmentation from OCT images. However, the
following issues remain to be explored. 1) The performance should be further
verified on datasets with large variances in data distribution, such as data
collected from different OCT scanners operating in different acquisition
modes. In future work, we will therefore collect more data with retinal edema
lesions from different OCT scanners and acquisition modes to build a larger
and more comprehensive database for evaluating our method. 2) Due to the
complex pathological features, large morphological differences, and blurred
boundaries of retinal edema lesions in retinal OCT images, annotating
pixel-level labels for them is time-consuming and expensive. How to leverage a
large amount of unlabeled data to further improve the joint segmentation of
retinal edema lesions is therefore a meaningful research topic, and developing
a semi- or weakly-supervised reliable segmentation method is one of our future
research directions. In addition, we will continue to improve our proposed
method to make it suitable for more complex medical image segmentation tasks.
## Acknowledgment
This work was supported by A*STAR Advanced Manufacturing and Engineering (AME)
Programmatic Fund (A20H4b0141) and the Exploration Project of Natural Science
Foundation of Zhejiang Province (LQ22F010003).
## References
* [1] S. Wild, G. Roglic, A. Green, R. Sicree, and H. King, “Global prevalence of diabetes: estimates for the year 2000 and projections for 2030,” _Diabetes care_ , vol. 27, no. 5, pp. 1047–1053, 2004.
* [2] D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito _et al._ , “Optical coherence tomography,” _Science_ , vol. 254, no. 5035, pp. 1178–1181, 1991.
* [3] X. Xi, X. Meng, Z. Qin, X. Nie, Y. Yin, and X. Chen, “Ia-net: informative attention convolutional neural network for choroidal neovascularization segmentation in oct images,” _Biomedical Optics Express_ , vol. 11, no. 11, pp. 6122–6136, 2020.
* [4] D. Xiang, H. Tian, X. Yang, F. Shi, W. Zhu, H. Chen, and X. Chen, “Automatic segmentation of retinal layer in oct images with choroidal neovascularization,” _IEEE Transactions on Image Processing_ , vol. 27, no. 12, pp. 5880–5891, 2018.
* [5] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in _International Conference on Medical image computing and computer-assisted intervention_. Springer, 2015, pp. 234–241.
* [6] O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, K. Mori, S. McDonagh, N. Y. Hammerla, B. Kainz _et al._ , “Attention u-net: Learning where to look for the pancreas,” _arXiv preprint arXiv:1804.03999_ , 2018.
* [7] Z. Gu, J. Cheng, H. Fu, K. Zhou, H. Hao, Y. Zhao, T. Zhang, S. Gao, and J. Liu, “Ce-net: Context encoder network for 2d medical image segmentation,” _IEEE transactions on medical imaging_ , vol. 38, no. 10, pp. 2281–2292, 2019.
* [8] S. Feng, H. Zhao, F. Shi, X. Cheng, M. Wang, Y. Ma, D. Xiang, W. Zhu, and X. Chen, “Cpfnet: Context pyramid fusion network for medical image segmentation,” _IEEE transactions on medical imaging_ , vol. 39, no. 10, pp. 3008–3018, 2020.
* [9] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” _Advances in neural information processing systems_ , vol. 30, 2017.
* [10] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly _et al._ , “An image is worth 16x16 words: Transformers for image recognition at scale,” _arXiv preprint arXiv:2010.11929_ , 2020.
* [11] J. Chen, Y. Lu, Q. Yu, X. Luo, E. Adeli, Y. Wang, L. Lu, A. L. Yuille, and Y. Zhou, “Transunet: Transformers make strong encoders for medical image segmentation,” _arXiv preprint arXiv:2102.04306_ , 2021.
* [12] Y. Gao, M. Zhou, and D. N. Metaxas, “Utnet: a hybrid transformer architecture for medical image segmentation,” in _International Conference on Medical Image Computing and Computer-Assisted Intervention_. Springer, 2021, pp. 61–71.
* [13] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, “Swin transformer: Hierarchical vision transformer using shifted windows,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2021, pp. 10 012–10 022.
* [14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” _Advances in neural information processing systems_ , vol. 25, 2012.
* [15] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” _arXiv preprint arXiv:1409.1556_ , 2014.
* [16] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2015, pp. 1–9.
* [17] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778.
* [18] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 1492–1500.
* [19] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2015, pp. 3431–3440.
* [20] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu, “Dual attention network for scene segmentation,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2019, pp. 3146–3154.
* [21] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 2881–2890.
* [22] M. Wang, W. Zhu, F. Shi, J. Su, H. Chen, K. Yu, Y. Zhou, Y. Peng, Z. Chen, and X. Chen, “Mstganet: Automatic drusen segmentation from retinal oct images,” _IEEE Transactions on Medical Imaging_ , 2021.
* [23] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” _arXiv preprint arXiv:1810.04805_ , 2018.
* [24] Z. Dai, Z. Yang, Y. Yang, J. Carbonell, Q. V. Le, and R. Salakhutdinov, “Transformer-xl: Attentive language models beyond a fixed-length context,” _arXiv preprint arXiv:1901.02860_ , 2019.
* [25] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. R. Salakhutdinov, and Q. V. Le, “Xlnet: Generalized autoregressive pretraining for language understanding,” _Advances in neural information processing systems_ , vol. 32, 2019.
* [26] A. Srinivas, T.-Y. Lin, N. Parmar, J. Shlens, P. Abbeel, and A. Vaswani, “Bottleneck transformers for visual recognition,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2021, pp. 16 519–16 529.
* [27] D. Zhang, H. Zhang, J. Tang, M. Wang, X. Hua, and Q. Sun, “Feature pyramid transformer,” in _European Conference on Computer Vision_. Springer, 2020, pp. 323–339.
* [28] H. Cao, Y. Wang, J. Chen, D. Jiang, X. Zhang, Q. Tian, and M. Wang, “Swin-unet: Unet-like pure transformer for medical image segmentation,” _arXiv preprint arXiv:2105.05537_ , 2021.
* [29] Y. Xie, J. Zhang, C. Shen, and Y. Xia, “Cotr: Efficiently bridging cnn and transformer for 3d medical image segmentation,” in _International conference on medical image computing and computer-assisted intervention_. Springer, 2021, pp. 171–180.
* [30] J. Wang, Z. Wu, J. Chen, and Y.-G. Jiang, “M2tr: Multi-modal multi-scale transformers for deepfake detection,” _arXiv preprint arXiv:2104.09770_ , 2021.
* [31] Y. Zhang, J. Cao, L. Zhang, X. Liu, Z. Wang, F. Ling, and W. Chen, “A free lunch from vit: adaptive attention multi-scale fusion transformer for fine-grained visual recognition,” in _ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2022, pp. 3234–3238.
* [32] N. Chervyakov, P. Lyakhov, D. Kaplun, D. Butusov, and N. Nagornov, “Analysis of the quantization noise in discrete wavelet transform filters for image processing,” _Electronics_ , vol. 7, no. 8, p. 135, 2018.
* [33] N. Sathiyanathan, “Medical image compression using view compensated wavelet transform,” _Journal of Global Research in Computer Science_ , vol. 9, no. 9, pp. 01–04, 2018.
* [34] A. A. Abdulrahman, M. Rasheed, and S. Shihab, “The analytic of image processing smoothing spaces using wavelet,” in _Journal of Physics: Conference Series_ , vol. 1879, no. 2. IOP Publishing, 2021, p. 022118.
* [35] S. Fujieda, K. Takayama, and T. Hachisuka, “Wavelet convolutional neural networks,” _arXiv preprint arXiv:1805.08620_ , 2018.
* [36] T. Williams and R. Li, “Advanced image classification using wavelets and convolutional neural networks,” in _2016 15th IEEE international conference on machine learning and applications (ICMLA)_. IEEE, 2016, pp. 233–239.
* [37] M. X. B. Rodriguez, A. Gruson, L. Polania, S. Fujieda, F. Prieto, K. Takayama, and T. Hachisuka, “Deep adaptive wavelet network,” in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_ , 2020, pp. 3111–3119.
* [38] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 4700–4708.
* [39] J. Denker and Y. LeCun, “Transforming neural-net output levels to probability distributions,” _Advances in neural information processing systems_ , vol. 3, 1990.
* [40] D. J. MacKay, “A practical bayesian framework for backpropagation networks,” _Neural computation_ , vol. 4, no. 3, pp. 448–472, 1992.
* [41] R. M. Neal, _Bayesian learning for neural networks_. Springer Science & Business Media, 2012, vol. 118.
* [42] Y. Gal and Z. Ghahramani, “Dropout as a bayesian approximation: Representing model uncertainty in deep learning,” in _international conference on machine learning_. PMLR, 2016, pp. 1050–1059.
* [43] B. Lakshminarayanan, A. Pritzel, and C. Blundell, “Simple and scalable predictive uncertainty estimation using deep ensembles,” _Advances in neural information processing systems_ , vol. 30, 2017.
* [44] M. Sensoy, L. Kaplan, and M. Kandemir, “Evidential deep learning to quantify classification uncertainty,” _Advances in Neural Information Processing Systems_ , vol. 31, 2018.
* [45] J. Van Amersfoort, L. Smith, Y. W. Teh, and Y. Gal, “Uncertainty estimation using a single deep deterministic neural network,” in _International conference on machine learning_. PMLR, 2020, pp. 9690–9700.
* [46] Z. Han, C. Zhang, H. Fu, and J. T. Zhou, “Trusted multi-view classification,” _arXiv preprint arXiv:2102.02051_ , 2021.
* [47] L. D. Harmon, “Artificial neuron,” _Science_ , vol. 129, no. 3354, pp. 962–963, 1959.
* [48] B. A. Frigyik, A. Kapila, and M. R. Gupta, “Introduction to the dirichlet distribution and related processes,” 2010.
* [49] Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, “Unet++: A nested u-net architecture for medical image segmentation,” in _Deep learning in medical image analysis and multimodal learning for clinical decision support_. Springer, 2018, pp. 3–11.
* [50] J. Song, X. Chen, Q. Zhu, F. Shi, D. Xiang, Z. Chen, Y. Fan, L. Pan, and W. Zhu, “Global and local feature reconstruction for medical image segmentation,” _IEEE Transactions on Medical Imaging_ , 2022.
* [51] K. Zou, X. Yuan, X. Shen, M. Wang, and H. Fu, “Tbrats: Trusted brain tumor segmentation,” in _International Conference on Medical Image Computing and Computer-Assisted Intervention_. Springer, 2022, pp. 503–513.
* [52] M. Wang, W. Zhu, K. Yu, Z. Chen, F. Shi, Y. Zhou, Y. Ma, Y. Peng, D. Bao, S. Feng _et al._ , “Semi-supervised capsule cgan for speckle noise reduction in retinal oct images,” _IEEE Transactions on Medical Imaging_ , vol. 40, no. 4, pp. 1168–1183, 2021.
* [53] Y. Zhou, K. Yu, M. Wang, Y. Ma, Y. Peng, Z. Chen, W. Zhu, F. Shi, and X. Chen, “Speckle noise reduction for oct images based on image style transfer and conditional gan,” _IEEE Journal of Biomedical and Health Informatics_ , 2021.
# Aerodynamic characterization of two tandem wind turbines under yaw
misalignment control using actuator line model
Yu Tu
School of Naval Architecture, Ocean and Civil Engineering
Shanghai Jiao Tong University
Shanghai 200240, China
& Kai Zhang
School of Naval Architecture, Ocean and Civil Engineering
Shanghai Jiao Tong University
Shanghai 200240, China
<EMAIL_ADDRESS>
Zhaolong Han
School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong
University
Shanghai 200240, China
& Dai Zhou
School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong
University
Shanghai 200240, China
& Onur Bilgen
Department of Mechanical and Aerospace Engineering, Rutgers University
Piscataway, 08854, New Jersey, USA
(February 3, 2023)
###### Abstract
Yaw control has proven to be promising in alleviating the wake effects that
plague the efficiency of wind farms. In this work, the actuator line modeling
(ALM) method is adopted to simulate the flows over two tandem turbines
distanced by $3-7$ rotor diameters, with the yaw angle of the upstream rotor
varying from $\gamma_{1}=0^{\circ}$ to $50^{\circ}$. The aim is to provide a
comprehensive aerodynamic characterization of this simple wind farm under yaw
misalignment control. With increasing yaw angle, the power generated by the
downstream rotor increases, compensating the power loss in the upstream rotor,
and resulting in significantly higher total power of the two turbines than
that without yaw control. The maximum power output is achieved as the upstream
wake of the yawed rotor is redirected away from the downstream rotor plane.
Behind the downstream rotor, the secondary steering phenomenon is observed,
where the wake is also redirected from the centerline. The use of the actuator
line model also reveals unsteady aerodynamic characteristics that cannot be
captured by lower-fidelity models. For the upstream rotor, the yaw
misalignment results in time-varying change in the local angle of attack on
the blade, giving rise to unsteady loading. The downstream rotor is partially
submerged in the deflected wake incurred by the yawed upstream rotor. As the
blade revolves into and out of the wake deficit, the blade experiences cyclic
loading, leading to even stronger fluctuations in the aerodynamic loads than
the upstream rotor. These analyses provide a comprehensive understanding of
the yaw control effects on the two tandem rotors from the perspectives of
aerodynamic performance, wake profiles, and unsteady characteristics. The
insights gained from the present study can aid the design of collective yaw
control strategies of wind farms, and lay the foundation for assessing the
fatigue damage associated with yaw misalignment.
_Keywords_ Wake interaction $\cdot$ Yaw control $\cdot$ Wind turbines $\cdot$
Unsteady aerodynamics
## 1 Introduction
Over the years, wind power has proven to be a viable alternative to the
conventional fossil-based energy sources (Veers et al., 2019). The offshore
environment is more attractive for wind energy development because the air
flow is typically stronger and more consistent compared to onshore flow.
However, the high cost of energy still presents an urgent problem for the
deployment of large-scale offshore wind farms. Specifically, the efficiency
and service life of offshore wind farms are plagued by the wake interactions
between turbines (Vermeer et al., 2003; Troldborg et al., 2011; Sun et al.,
2020). As the incoming wind passes through the upstream turbine, the wake
forms with reduced wind speed and increased turbulence intensity. Submerged in
the wakes, the downstream turbines not only generate less power but also
suffer from higher fatigue loading. According to Barthelmie et al. (2009), the
annual average loss caused by wake effects in large-scale wind farms accounts
for about $10\%-20\%$ of the total power generation.
Wind farm control is a new area of research that has rapidly become a key
enabler for the development of large wind farm projects (Andersson et al.,
2021). Conventional wind farm control strategy seeks maximizing the energy
output of each individual turbines. However, such strategy does not take the
entirety of wind farm as well as the spatial correlations into account. On the
other hand, the holistic wake control strategy aims to improve the overall
performance of wind farm by operating some wind turbines at sub-optimum
conditions (Wagenaar et al., 2012; Andersson et al., 2021; Houck, 2022; Meyers
et al., 2022). Among various wind farm control strategies, wake steering via
yaw control emerges as particularly effective. In traditional way, the yaw
system of wind turbine needs to track the variable ambient wind direction
(Chen et al., 2021) and turn the rotor to align with it. Unlike that, the yaw
control strategy is realized by applying intentional yaw misalignment between
the incoming flow and upstream turbines, which induces reduced power output on
these units but direct the velocity deficit away from the downstream turbines.
The implementation of wake steering for wind farm requires thorough
understanding of the wake dynamics associated with yawed turbines. The most
notable feature of a yawed turbine wake is arguably the formation of a
counter-rotating vortex pair (Howland et al., 2016; Bastankhah and Porté-Agel,
2016; Fleming et al., 2018), which resembles the wingtip vortices in the wake
of finite-aspect-ratio wings (Anderson, 2011; Zhang et al., 2020). The
resulting velocity deficit in the wake of the yawed turbine takes the form of
a curled kidney-like shape. In recent years, several advanced wake models have
been proposed to depict the highly three-dimensional flow features behind the
yawed turbines. Recognizing the similarity of the wakes behind the yawed
turbine and finite-span wings, Shapiro et al. (2018) treated the yawed rotor
disk as a lifting body, and the Prandtl lifting line approach is used to
correlate the transverse velocity induced by the vortex system with the
lateral thrust force. Zong and Porté-Agel (2020) expressed the rates of
vorticity shedding at rotor blade tips using vortex cylinder theory to
determine the trailing vorticity distribution behind a yawed rotor. Bastankhah
et al. (2022) considered the wake edge of the yawed turbine as an ideally thin
vortex sheet, and solved the vortex sheet equation for evolution in the wake.
Based on these calculations, they modified the Gaussian wake model by
incorporating the predicted shape and deflection of the curled wake to predict
the wake profiles behind yawed turbines. King et al. (2021) developed the
Gaussian curl hybrid (GCH) wake model, which is able to reproduce the
secondary effects of wake steering in large arrays of turbines. This model is
implemented in the open-source wind farm optimization toolbox FLORIS (NREL,
2022).
yawed turbines, and are essential in the design of wake mitigation strategies
for wind farms.
The efficacy of yaw control in improving the wind farm efficiency has been
demonstrated in a number of experimental studies. Adaramola and Krogstad
(2011) studied yaw control of two aligned wind turbines with a streamwise
spacing of four rotor diameters through wind tunnel measurements, and found
that yaw control can enhance the total power generation by 12%. Campagnolo et
al. (2016) experimentally studied the effects of yaw misalignment on three
turbines. It was revealed that yawing the front row by $20^{\circ}$ and the second
row by $16^{\circ}$ improved the total farm power production by 15%.
Bastankhah and Porté-Agel (2019) performed wind tunnel experiments to study
the performance of a model wind farm with five turbine rows under a wide
variety of yaw angle distributions. Their results showed that yaw angle
control can increase the overall wind farm efficiency as much as 17% with
respect to fully non-yawed conditions. The most successful yaw angle
distributions were found to be those with a relatively large yaw angle for the
first turbine row, with the yaw angle then decreasing progressively for
downwind rows until it eventually becomes zero for the last one. Bartl et al.
(2018a) investigated the effects of intentional yaw misalignment on the power
production and loads of a downstream turbine for full and partial wake
overlap. They showed that the increase in combined power comes at the
expense of increased yaw moments on both the upstream and downstream turbines.
Aju et al. (2022) performed systematic wind tunnel experiments to quantify the
power output fluctuations and unsteady aerodynamic loads of model wind farms
with three rows and three columns across various yaw angles. Apart from these
wind tunnel experiments, wake steering via yaw control has also been performed
in a number of field tests (Fleming et al., 2017; Howland et al., 2019; Simley
et al., 2021; Howland et al., 2022a). These studies have shown the great
potential of wake steering in enhancing the power generation efficiency of
commercial wind farms in realistic operating conditions.
Computational fluid dynamics (CFD) methods have also been instrumental in
bridging the gap between analytical wake models and wind tunnel and field
tests. Due to the high computational cost in resolving the boundary layers of
the rotating blades at high Reynolds numbers (Mittal et al., 2016; Lawson et
al., 2019; de Oliveira et al., 2022; Miao et al., 2017), most of the studies
have used simplified turbine models. The actuator disk model (ADM) is the
simplest rotor modeling technique in CFD applications. The blade swept area
forms a full disc, which represents the rotor. The most basic form of ADM
assumes uniform load distribution on the disc. Thus, it is necessary to
provide the aerodynamic coefficients as input to the model, which can be
nontrivial in yawed conditions (Hur et al., 2019; Howland et al., 2020a,
2022b; Heck et al., 2022). In a more advanced version of ADM, the blade
element theory is incorporated into the model, allowing it to account for the
radial distribution of the loads as well as the tangential forces. Lin and
Porté-Agel (2022) showed that this version of ADM yields flow statistics that
are in better agreement with the wind-tunnel measurements for both nonyawed
and yawed conditions compared to the traditional ADM. The actuator line model
(ALM) developed by Shen et al. (2005) computes the turbine-induced forces on
line elements distributed on the moving turbine blades, which introduces
temporal dependency into the turbine model. Thus, ALM is well-suited for
studying the unsteady aerodynamics of wind turbines, and has been the _de
facto_ tool for state-of-the-art wind farm simulation (Stevens and Meneveau,
2017; Stevens et al., 2018; Shapiro et al., 2022).
The objective of this paper is to understand the effects of yaw misalignment
on the aerodynamics of two tandem turbines. Although this simple wind farm
layout has been investigated extensively in the literature as reviewed above,
the wake profiles and the unsteady flow physics associated with yaw control
have not been elucidated in detail. In addition, previous studies have
mostly employed relatively complex setups involving atmospheric boundary layers,
wind shear, inlet turbulence, etc., which can hinder a clear comprehension
of the yaw misalignment effects. To address the above questions, this paper
presents large-eddy simulations with the actuator line model to simulate the flow
over two tandem wind turbines, and characterizes the aerodynamic performance,
wake profiles, and unsteady flow physics over a wide range of yaw angles. We
assume the flow to be uniform and free of ground effects to isolate the
effects of yaw misalignment from other factors. In what follows, we introduce
the numerical methods, computational setup and validation in section 2. The
results are presented in section 3, in which we discuss the aerodynamic
performance, wake profiles, and unsteady characteristics of the two tandem
rotors. We conclude this paper by summarizing our findings in section 4.
## 2 Computational setup
### 2.1 Problem description
In this study, a 5 MW reference turbine designed by Jonkman et al. (2009) at
the National Renewable Energy Laboratory (referred to as NREL 5 MW turbine
hereafter) is used as the model rotor. This horizontal-axis, upwind turbine
has three blades with a rotor diameter of $D=126$ m. The rated wind speed is
11.4 m/s. The turbine hub, tower and ground effects are not considered in this
study.
Figure 1: Description of the problem. ($a$) Perspective view of the two
turbines and ($b$) $x$-$y$ plane view. The thick gray lines in $(b)$ represent
the rotor planes.
Two NREL 5 MW rotors distanced by $L_{x}$ are placed in tandem along the
streamwise direction, as shown in figure 1. These two rotors are subjected to
uniform inflow velocity, which is set to the rated wind speed
$U_{\infty}=11.4$ m/s. Since the current study focuses on the aerodynamic
characterization of the two rotors, we ignore the rotation speed control and
set the tip speed ratios (defined as $\lambda=\Omega D/(2U_{\infty})$, where
$\Omega$ is the angular velocity) of both turbines to be 8, unless otherwise
stated. The yaw control is actuated around the $z$ axis of the upstream rotor,
while the downstream rotor remains perpendicular to the freestream. We
characterize the aerodynamics of this simple wind farm configuration with the
upstream yaw angle varying from $\gamma_{1}=0^{\circ}$ to $50^{\circ}$.
### 2.2 Numerical methods
We use large eddy simulation (LES) to solve for the flows over two tandem
turbines. The filtered incompressible Navier-Stokes equations read
$\frac{\partial\tilde{u}_{i}}{\partial x_{i}}=0,\quad
\frac{\partial\tilde{u}_{i}}{\partial
t}+\tilde{u}_{j}\frac{\partial\tilde{u}_{i}}{\partial
x_{j}}=-\frac{1}{\rho}\frac{\partial\tilde{p}}{\partial
x_{i}}+\nu\frac{\partial^{2}\tilde{u}_{i}}{\partial x_{j}\partial
x_{j}}+\frac{\partial\tau_{ij}}{\partial x_{j}}+f_{i},$ (1)
where $u_{i}$ and $p$ are the velocity and pressure, $\tilde{\cdot}$
represents the resolved flow quantities, and $\rho$ and $\nu$ are the air
density and kinematic viscosity, respectively. $\tau_{ij}$ is the
subgrid-scale (SGS) stress tensor, which is expressed according to the Boussinesq
approximation with the introduction of a turbulent eddy viscosity $\nu_{t}$
$\tau_{ij}=\frac{2}{3}k_{t}\delta_{ij}-2\nu_{t}\tilde{S}_{ij}.$ (2)
Here, $k_{t}=\tau_{kk}/2$ is the SGS turbulent kinetic energy and
$\tilde{S}_{ij}=\left({\partial\tilde{u}_{i}}/{\partial
x_{j}}+{\partial\tilde{u}_{j}}/{\partial x_{i}}\right)/2$ is the rate-of-strain
tensor computed from the resolved scales. The $k$-equation model is
selected to compute the SGS kinetic energy $k_{t}$:
$\frac{\partial k_{t}}{\partial t}+\frac{\partial}{\partial
x_{j}}\left(\tilde{u}_{j}k_{t}\right)=2\nu_{t}\tilde{S}_{ij}\tilde{S}_{ij}+\frac{\partial}{\partial
x_{j}}\left[\left(\nu+\nu_{t}\right)\frac{\partial k_{t}}{\partial
x_{j}}\right]-C_{\epsilon}k_{t}^{1.5}\Delta^{-1},$ (3)
where the SGS viscosity is given by $\nu_{t}=C_{k}\Delta k_{t}^{0.5}$. The
model coefficients $C_{\epsilon}$ and $C_{k}$ are dynamically computed as part
of the solution based on the Germano identity (Germano et al., 1991) with test
filter $\widehat{\Delta}=2\Delta$
($\Delta=\sqrt{\Delta_{x}\Delta_{y}\Delta_{z}}$ is the nominal grid size) by
the least square minimization procedure proposed by Lilly (1992).
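The closure above can be sketched compactly. The paper computes $C_{k}$ and $C_{\epsilon}$ dynamically via the Germano identity; the sketch below instead assumes fixed representative values ($C_{k}=0.094$, $C_{\epsilon}=1.048$, common static defaults) purely for illustration:

```python
# Minimal sketch of the k-equation SGS closure (equations 2-3).
# NOTE: C_k and C_eps are assumed static here; the paper computes them
# dynamically via the Germano identity with a test filter of 2*Delta.
C_K, C_EPS = 0.094, 1.048

def sgs_viscosity(k_t, delta):
    """Eddy viscosity nu_t = C_k * Delta * k_t^0.5."""
    return C_K * delta * k_t ** 0.5

def sgs_dissipation(k_t, delta):
    """Dissipation term of equation (3): C_eps * k_t^1.5 / Delta."""
    return C_EPS * k_t ** 1.5 / delta
```

With these two pieces, the production term $2\nu_{t}\tilde{S}_{ij}\tilde{S}_{ij}$ in equation (3) follows directly from the resolved strain rate.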
The $f_{i}$ term in equation (1) represents the body force imposed by the wind
turbine. The actuator line model (ALM) is used to calculate these body forces.
As shown in figure 2, the ALM discretizes the blade into a series of 2D
airfoil sections along the radial direction; the point at the quarter chord
of each section is called the actuator point. The two-dimensional
sectional lift and drag forces are calculated as
$f_{l}=\frac{1}{2}C_{l}\rho cU_{rel}^{2},\quad f_{d}=\frac{1}{2}C_{d}\rho
cU_{rel}^{2},$ (4)
where $c$ is the chord length of the local airfoil, and
$U_{rel}=\sqrt{U_{\Omega}^{2}+U_{\textrm{wind}}^{2}}$ is the local wind
velocity relative to the blade ($U_{\Omega}$ and $U_{\textrm{wind}}$ are the
velocity of blade rotation and wind velocity at the actuator point,
respectively). $C_{l}$ and $C_{d}$ are the lift and drag coefficients of the
local 2D airfoil profile, which are precalculated and tabulated with respect to the
angle of attack. The local angle of attack $\phi$ of the 2D airfoil section is
taken to be the angle between the chord and the velocity at the actuator
segment. To account for the tip effects, the tip correction factor proposed in
Shen et al. (2005) is implemented in the ALM. The lift and drag forces are
projected into the flow field by taking the convolution with a 3D Gaussian
kernel $\eta$ for each blade element
$\eta=\frac{1}{\epsilon^{3}\pi^{3/2}}\exp(-(d/\epsilon)^{2}),$ (5)
where $d$ is the distance between the measured point and the actuator point on
the blade. $\epsilon$ is the Gaussian width that determines the concentration
of the distributed load, and is set as twice the local grid size, as suggested
by Troldborg (2008).
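Equations (4)–(5) translate into a few lines of code. The sketch below is a minimal illustration; the numerical inputs ($C_{l}=1.2$, $C_{d}=0.01$, chord, velocities) are hypothetical placeholders, not values from the NREL 5 MW airfoil tables:

```python
import math

def sectional_forces(C_l, C_d, rho, c, U_omega, U_wind):
    """Per-unit-span lift and drag on a 2D blade section, equation (4)."""
    U_rel = math.hypot(U_omega, U_wind)   # velocity relative to the blade
    q = 0.5 * rho * c * U_rel ** 2        # dynamic pressure times chord
    return C_l * q, C_d * q

def gaussian_kernel(d, eps):
    """3D Gaussian projection kernel, equation (5)."""
    return math.exp(-(d / eps) ** 2) / (eps ** 3 * math.pi ** 1.5)

# Hypothetical section near the tip: C_l = 1.2, C_d = 0.01.
f_l, f_d = sectional_forces(1.2, 0.01, 1.225, 3.5, 60.0, 11.4)
```

The kernel peaks at $1/(\epsilon^{3}\pi^{3/2})$ at the actuator point and decays over a few cells, which is why $\epsilon$ is tied to the local grid size following Troldborg (2008).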
Figure 2: Computational setup and illustration of ALM.
The two tandem NREL 5 MW rotors are placed in a rectangular computational
domain, which covers $(x,y,z)\in[-10D,35D]\times[-5D,5D]\times[-5D,5D]$, as
shown in figure 2. The resulting blockage ratio is $0.78\%$. The center of the
upstream rotor is placed at $(x,y,z)=(0,0,0)$, and the downstream one at
$(x,y,z)=(L_{x},0,0)$, where $L_{x}$ is the spacing between the two rotors.
The actuator line representing the turbine blade is discretized into 19
segments along the span.
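The quoted blockage ratio follows directly from the rotor and domain cross-sections; as a quick check:

```python
import math

D = 126.0                             # rotor diameter [m]
A_rotor = math.pi * (D / 2.0) ** 2    # swept area of one rotor
A_domain = (10.0 * D) * (10.0 * D)    # cross-section spans [-5D, 5D] in y and z

blockage = A_rotor / A_domain         # = pi/400, i.e. about 0.785%
```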
The flows over the turbines are simulated using the solver pimpleFoam from the
open-source CFD toolbox OpenFOAM (Weller et al., 1998). We use the actuator
line model implemented in the turbinesFoam library (Bachant et al., 2016),
which has seen application in a number of studies (Zhang and Bilgen, 2020;
Onel and Tuncer, 2021; Liu et al., 2022). A uniform mesh is used to resolve the
flows near and in the wake of the rotors. A detailed mesh dependency test is
presented in section 2.3. The simulations employ a fixed Courant number of
$CFL_{\max}=0.1$, as suggested by Troldborg (2008). At the inlet boundary, a
uniform velocity at the rated wind speed $U_{\infty}=11.4$ m/s is prescribed.
A zero-gradient condition is applied to the velocity at the outlet, where a
reference pressure $p_{\infty}=0$ is specified. The rest of the boundaries are
set as slip.
The FLORIS v3.0 code (FLOw Redirection and Induction in Steady State, NREL
(2022)), which is a control-oriented wind farm simulation software, is used as
a low-fidelity reference to the ALM results in this study. FLORIS incorporates
several steady-state engineering wake models to account for the wake
interaction effects between turbines, and is widely accepted in wind farm
control and layout studies (Doekemeijer et al., 2019; Gebraad et al., 2017).
We use the Gaussian curl hybrid (GCH) wake model (King et al., 2021),
which better reproduces the secondary effects of yawed wake, including the
yaw-added wake recovery as well as the secondary wake steering.
By default, FLORIS calculates the aerodynamic performance of the turbine using
the input table which contains the steady-state responses as a function of
inflow velocity defined in Jonkman et al. (2009). This approach implicitly
dictates the tip speed ratio based on the upstream velocity, suggesting that
the downstream rotor operates at a lower rotational speed to maintain the optimum
tip speed ratio (based on the wake velocity of the upstream rotor). This is
different from the settings for ALM, in which both rotors operate at the same
rotational speed as mentioned in section 2.1. To ensure a fair comparison of
the results obtained from the two models, the input table of FLORIS is
modified to tabulate power and thrust coefficients (precalculated by ALM)
under the same rotational speed regardless of the inflow velocity. In
addition, both the wind shear and the turbulence intensity are set as zero in
FLORIS. The hub height is modified to be high enough to avoid the ground
effect. The other parameters are set as default.
### 2.3 Validation
Table 1: Aerodynamic parameters of a single turbine at three grid sizes. The tested tip speed ratio is $\lambda=8$.

 | coarse | medium | fine
---|---|---|---
$R/\Delta_{g}$ | $24$ | $32$ | $42$
grid number | $2.78\times 10^{6}$ | $5.94\times 10^{6}$ | $1.27\times 10^{7}$
$C_{P}$ | $0.534$ | $0.523$ | $0.515$
$C_{T}$ | $0.792$ | $0.786$ | $0.781$
Figure 3: Wake velocity profiles of the NREL 5 MW rotor at $\lambda=8$. ($a$)
$x=1D$, ($b$) $x=3D$, ($c$) $x=5D$. Figure 4: ($a$) Power coefficients and
($b$) thrust coefficients of the NREL 5 MW rotor at different tip speed ratios
(TSR). The square line represents the LES with ALM by Onel and Tuncer (2021).
The pentagram line uses unsteady Reynolds-averaged simulations (URANS) coupled
with a fully resolved model (FRM) by Dose et al. (2018) and Make and Vaz (2015). The hollow
circle line is from the blade element momentum theory in Qblade by Marten et
al. (2013). The red circle line is the results from the present study.
In this section, we carry out a detailed mesh dependency test and compare our
results with the literature. Three different resolutions,
$R/\Delta_{g}=24,32,42$ (where $\Delta_{g}$ is the grid size around the blade,
$R$ is the rotor radius) are employed to validate the grid independence. For a
single NREL 5 MW rotor simulation at tip speed ratio of $\lambda=8$, the grid
numbers and results for the three resolutions are given in table 1. Here, the
power and thrust coefficients of the rotor are defined as
$C_{P}=\frac{P}{\rho U_{\infty}^{3}A/2},\quad C_{T}=\frac{T}{\rho
U_{\infty}^{2}A/2},$ (6)
where $P$ and $T$ are the power and thrust force on the rotor, and $A=\pi
R^{2}$ is the swept rotor area. With increasing grid resolution, the power
coefficients and thrust coefficients vary only slightly. The wake velocity
profiles at three downstream positions $x=1D$, $3D$, $5D$ are shown in figure
3. Except for the region near $y/D=0$ where the hub is not modeled, the three
grid resolutions result in similar wake velocity distributions. With the
convergence of the wake velocity profiles in the far wake, accurate results
can be expected in the two tandem turbines cases. For the rest of the paper,
the simulations are carried out using the medium grid resolution.
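Equation (6) is easy to verify numerically. The sketch below assumes standard sea-level air density $\rho=1.225$ kg/m³ (not stated in the paper) to recover the dimensional power implied by the medium-grid $C_{P}$ in table 1:

```python
import math

rho = 1.225        # ASSUMED air density [kg/m^3] (not given in the paper)
U_inf = 11.4       # rated wind speed [m/s]
R = 63.0           # rotor radius of the NREL 5 MW turbine [m]
A = math.pi * R ** 2   # swept rotor area

def power_coefficient(P):
    """C_P = P / (rho * U_inf^3 * A / 2), equation (6)."""
    return P / (0.5 * rho * U_inf ** 3 * A)

def thrust_coefficient(T):
    """C_T = T / (rho * U_inf^2 * A / 2), equation (6)."""
    return T / (0.5 * rho * U_inf ** 2 * A)

# Dimensional power implied by the medium-grid C_P = 0.523 (table 1),
# roughly 5.9 MW under these assumptions:
P_medium = 0.523 * 0.5 * rho * U_inf ** 3 * A
```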
To further validate our numerical setup, we compute the power and thrust
coefficients at different tip speed ratios for the single turbine case, and
compare them with the literature in figure 4. The power coefficient of the
rotor increases with tip speed ratio initially and reaches peak at
$\lambda=8$, while the thrust coefficient increases monotonically with
$\lambda$. Overall, both the power and thrust coefficients predicted by the
present ALM simulations agree well with the literature, confirming the
validity of the present numerical setup.
## 3 Results
In this section, we present the results from the ALM simulations of the two
tandem turbines. In section 3.1, we discuss the aerodynamic performance of the
two rotors under yaw control. The wake profiles are then shown in section 3.2.
At last, the unsteady aerodynamic properties of the tandem turbines are
presented in section 3.3. In these discussions, we also incorporate the
results from the low-fidelity modeling tool FLORIS (NREL, 2022) as a
comparison where possible.
### 3.1 Aerodynamic performance
Figure 5: The power coefficients of ($a$) upstream rotor, ($b$) downstream
rotor, ($c$) two rotor combined. The gray dashed line in ($a$) is suggested by
Burton et al. (2002). Figure 6: The thrust coefficients of ($a$) upstream
rotor and ($b$) downstream rotor. The other two lines are covered by the yellow
dashed line in ($a$). Figure 7: Power coefficients in torque control. ($a$)
upstream turbine, ($b$) downstream turbine, and ($c$) two turbines combined
for $L_{x}=5D$.
The power coefficients of the two turbines under varying yaw angles of the
upstream rotor ($\gamma_{1}$) and at different spacings ($L_{x}=3D$, $5D$ and
$7D$) are shown in figure 5. For the upstream rotor, the power coefficient
decreases with $\gamma_{1}$, since the component of wind velocity which is
normal to the rotor decreases. Most analytical wind farm power models assume
that the power of a yawed rotor follows
$P(\gamma_{1})=P(0)\cos^{P_{p}}(\gamma_{1})$. Based on the classical one-
dimensional momentum theory with an incoming axial freestream wind speed of
$U_{\infty}\cdot\cos(\gamma_{1})$ perpendicular to the rotor, Burton et al.
(2002) showed that the power production of a yawed wind turbine decreases
following $\cos^{3}(\gamma_{1})$, i.e., $P_{p}=3$, as shown by the dashed line
in figure 5($a$). This is clearly not in agreement with the current ALM
simulations. In fact, the momentum theory neglects the dependence of the
rotor induction on the yaw angle. The reported values of $P_{p}$ vary
widely depending on the turbine model, typically in the range $1<P_{p}<3$
(Schreiber et al., 2017; Liew et al., 2020; Howland et al., 2020b). We observe
that the relationship between $C_{P_{1}}$ and $\gamma_{1}$ calculated by ALM
is close to that predicted by FLORIS, which employs a default value of
$P_{p}=1.88$ (Annoni et al., 2018) for the NREL 5 MW turbine. As expected, the
spacing between the two rotors has negligible effects on the power generation
of the upstream one.
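The cosine power law is a one-liner. The sketch below compares the momentum-theory exponent $P_{p}=3$ with the FLORIS default $P_{p}=1.88$ at an illustrative $30^{\circ}$ yaw (the angle is arbitrary, chosen only to show the spread between the two exponents):

```python
import math

def yawed_power_ratio(gamma_deg, p):
    """P(gamma)/P(0) under the power law cos^p(gamma)."""
    return math.cos(math.radians(gamma_deg)) ** p

r_momentum = yawed_power_ratio(30.0, 3.0)    # momentum theory, ~0.65
r_floris = yawed_power_ratio(30.0, 1.88)     # FLORIS default, ~0.76
```

At $30^{\circ}$ the two exponents differ by roughly 11 percentage points of retained power, which is why the choice of $P_{p}$ matters for yaw optimization.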
For the downstream rotor, the power coefficient generally increases with the
yaw angle of the upstream rotor, as depicted in figure 5($b$). With small
spacing $L_{x}=3D$, the increment of power coefficients of the downstream
rotor is not significant for $\gamma_{1}\lesssim 15^{\circ}$. For larger yaw
angles, the $C_{P_{2}}$-$\gamma_{1}$ curve exhibits almost linear growth up to
$\gamma_{1}=50^{\circ}$. With larger spacings $L_{x}=5D$ and $7D$, the
downstream rotor generates more power than with $L_{x}=3D$, but the growth
rate of $C_{P_{2}}$ with respect to $\gamma_{1}$ gradually saturates at higher
yaw angles. This is due to the fact that the deflected upstream wake bypasses
the downstream rotor, as will be discussed in section 3.2.
The power coefficient of the downstream rotor predicted by the low-fidelity
model FLORIS is also presented in figure 5($b$). The FLORIS code calculates
the aerodynamic performance by looking up the input table of the NREL 5 MW
turbine (Jonkman et al., 2009), based on the averaged velocity over the
downstream rotor area (Annoni et al., 2018). Since the freestream velocity is
fixed at the rated wind speed $U_{\infty}=11.4$ m/s in this study, the
averaged wind velocity on the downstream rotor is always smaller than
$U_{\infty}$ due to wake effects. In the below-rated operating condition
(region 2), the blade pitch angle is fixed at zero, and the rotor speed
increases linearly with the wind speed to maintain a constant optimal tip speed
ratio around $\lambda=8$, the same setting as in the ALM. Although an exact
match with the ALM results is not achieved, the power coefficients of the
downstream rotor calculated by FLORIS also feature an increasing trend with
growing $\gamma_{1}$. The positive effect of $L_{x}$ on the power generation
of the downstream rotor is also predicted in FLORIS.
Table 2: Comparison of the optimal performance in yaw misalignment cases between the ALM and FLORIS results. $\Delta C_{P_{tol}}=\frac{C_{P_{tol}}(\gamma_{1}^{opt})-C_{P_{tol}}(0^{\circ})}{C_{P_{tol}}(0^{\circ})}$. The left three data columns are ALM results; the right three are FLORIS results.

$L_{x}/D$ | $3$ | $5$ | $7$ | $3$ | $5$ | $7$
---|---|---|---|---|---|---
$\gamma_{1}^{opt}$ | $40^{\circ}$ | $35^{\circ}$ | $30^{\circ}$ | $25^{\circ}$ | $30^{\circ}$ | $25^{\circ}$
$C_{P_{tol}}(\gamma_{1}^{opt})$ | $0.637$ | $0.754$ | $0.804$ | $0.604$ | $0.702$ | $0.782$
$C_{P_{tol}}(0^{\circ})$ | $0.549$ | $0.555$ | $0.556$ | $0.559$ | $0.587$ | $0.613$
$\Delta C_{P_{tol}}$ | $16.0\%$ | $35.9\%$ | $45.0\%$ | $8.0\%$ | $19.6\%$ | $27.7\%$
The total power coefficients, $C_{P_{tol}}=C_{P_{1}}+C_{P_{2}}$, exhibit a
nonmonotonic relationship with $\gamma_{1}$. This is a result of the
decreasing $C_{P_{1}}$ and increasing $C_{P_{2}}$, as the yaw angle of the
upstream rotor increases. For $L_{x}=3D$, maximum total power output is
reached when the upstream rotor is yawed at $\gamma_{1}\approx 40^{\circ}$.
With increasing spacing between the two rotors, the optimal yaw angle
decreases, and the maximum total power increases. Compared to the unyawed
cases, the maximum total power output with yaw control increases by 16.0%,
35.9% and 45.0% for $L_{x}=3D,5D$ and $7D$, respectively. It is noted that
with $L_{x}=5D$ and $7D$, the optimal yaw angle is close to the $\gamma_{1}$
at which the growth rate of $C_{P_{2}}$ saturates. The total power curves
predicted by FLORIS also exhibit a bell shape, similar to the ALM results. While
the optimal yaw angle of the $L_{x}=3D$ case predicted by FLORIS is far from
that obtained by ALM, the agreement is closer for cases with $L_{x}=5D$ and
$7D$.
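The relative gains in table 2 follow directly from the tabulated total power coefficients. Recomputing them (the third ALM case yields 44.6% rather than the quoted 45.0%, consistent with the coefficients being rounded to three digits):

```python
def relative_gain(cp_opt, cp_zero):
    """Delta C_P_tol = (C_P_tol(opt) - C_P_tol(0)) / C_P_tol(0), table 2."""
    return (cp_opt - cp_zero) / cp_zero

# ALM values from table 2 for L_x = 3D, 5D and 7D:
gains = [relative_gain(cp_opt, cp0) for cp_opt, cp0 in
         [(0.637, 0.549), (0.754, 0.555), (0.804, 0.556)]]
```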
The thrust coefficients of the two rotors are presented in figure 6. Although
both ALM and FLORIS predict downward trend of $C_{T_{1}}$ with growing
$\gamma_{1}$, the agreement between these two methods is not satisfactory
compared with the power coefficients. The ALM results show that the thrust
coefficients of the yawed upstream rotor follow
$C_{T_{1}}(\gamma_{1})=C_{T_{1}}(0)\cdot\cos^{2}(\gamma_{1})$, while in FLORIS
the scaling factor is set as $\cos(\gamma_{1})$ by default. The thrust
coefficients of the downstream rotor increase with the upstream yaw angle in a
similar fashion with the power coefficients.
Let us compare the effectiveness of yaw control against induction control.
Here, the induction control is realized by changing the rotating speed of the
two rotors, while the yaw angle and blade pitch angle are fixed at zero. The
spacing between the two rotors is fixed at $L_{x}=5D$. The tip speed ratio of
the upstream rotor ranges from $\lambda_{1}=4$ – 9. For each $\lambda_{1}$,
the tip speed ratio of the downstream rotor $\lambda_{2}$ is also varied to
locate the optimal operating condition that results in maximum power output.
The power coefficients of the two rotors are shown in figure 7($a,b$). As the
upstream rotor is derated from $\lambda_{1}=9$ to 4, the optimal $\lambda_{2}$
shifts to higher values, and the power generation of the downstream rotor is
enhanced. Nevertheless, the maximum power coefficient of the two rotors
combined is only 0.58, which is much lower than what can be achieved with
yaw control. This comparison suggests greater effectiveness of yaw control
over induction control, as also noted in other studies (Nash et al., 2021;
Houck, 2022; Li et al., 2022).
We note in passing that additional simulations with negative yaw angles have
also been carried out, arriving at the conclusion that both positive and
negative yaw angles yield the same power production. This contradicts previous
findings that the direction of the yaw angle has a noticeable influence on the
total power (Schottler et al., 2016; Fleming et al., 2018; Bartl et al.,
2018a). Archer and Vasel-Be-Hagh (2019) hypothesized that the difference of
power production between positive and negative yaw angles is related to the
Coriolis effect, and recommended that only positive yaw misalignment angles
should be considered for wake steering purposes in the northern hemisphere.
Another explanation put forward by Fleming et al. (2018) suggests that the
difference is due to the ground effects and wind shear. As will be discussed
in §3.2, a yawed turbine creates a highly three-dimensional wake featuring a
pair of counter-rotating vortices at the top and bottom of the rotor,
respectively. When a turbine is positively yawed, the top vortex rotates in the
same direction as the wake itself (opposite to the rotor rotation), which
strengthens that vortex. When the turbine is negatively yawed, the lower vortex
is enhanced in the same way, but it also experiences lower wind speeds and
ground shear. In the positively yawed case, by contrast, the top vortex sits in
higher wind speeds and is unencumbered by the ground, which allows it to have a
greater effect on the shape of the wake. Since neither the Coriolis force nor
ground effects are considered in this study, it is reasonable that the power
generated by the two rotors is insensitive to the sign of the yaw angle.
### 3.2 Wake profiles
Figure 8: Instantaneous vortical structures visualized by iso-surfaces of the
$Q$-criterion for ($a$) $L_{x}=5D,\gamma_{1}=0^{\circ}$ and ($b$)
$L_{x}=5D,\gamma_{1}=35^{\circ}$ cases. The black circles denote the position
of two rotors. Figure 9: Time-averaged streamwise velocity fields calculated
by ALM ($a,b,c$) and FLORIS ($d,e,f$) on $z=0$ plane. Shown are the cases of
optimal yaw angle for $L_{x}=3D$, $5D$ and $7D$, respectively. Figure 10:
Time-averaged streamwise velocity fields calculated by ALM ($a,b,c$) and
FLORIS ($d,e,f$) on $y=0$ plane. Shown are the cases of optimal yaw angle for
$L_{x}=3D$, $5D$ and $7D$, respectively. Figure 11: Time-averaged streamwise
velocity fields calculated by ALM in optimal cases of two tandem turbines.
($a$) $L_{x}=3D$, $\gamma_{1}=40^{\circ}$, ($b$) $L_{x}=5D$,
$\gamma_{1}=35^{\circ}$, ($c$) $L_{x}=7D$, $\gamma_{1}=30^{\circ}$. The cloud
maps are positioned at $x=1D$, $3D$, $5D$, $7D$, $10D$, $13D$. The black
circle denotes the position of a non-yawed rotor. Figure 12: Time-averaged
streamwise velocity fields calculated by FLORIS in optimal cases of two tandem
turbines. ($a$) $L_{x}=3D$, $\gamma_{1}=40^{\circ}$, ($b$) $L_{x}=5D$,
$\gamma_{1}=35^{\circ}$, ($c$) $L_{x}=7D$, $\gamma_{1}=30^{\circ}$. The cloud
maps are positioned at $x=1D$, $3D$, $5D$, $7D$, $10D$, $13D$. The black circle
denotes the position of a non-yawed rotor.
We analyze the wake profiles with the aim of shedding light on the aerodynamic
performance described above. Figure 8 shows instantaneous vortical structures
visualized by isosurfaces of $Q$ (second invariant of velocity gradient
tensor) for the cases with $\gamma_{1}=0^{\circ}$ and $35^{\circ}$ at
$L_{x}=5D$. For the unyawed case, the near wake of the upstream rotor is
dominated by helical tip vortices shed from the blades. As the upstream wake
impinges on the downstream rotor, torus-like vortices form around the rotor,
and then break down in the far wake. On the other hand, the wake of the yawed
upstream rotor features a pair of counter-rotating vortices that trails into
the far wake. These streamwise vortices interact with a part of the vortical
structures generated at the outskirt of the downstream rotor, while the
remaining part (which is less affected by the deflected upstream wake) still
sheds helical tip vortices. As a result, the wake of the downstream rotor
appears less axisymmetric compared to the unyawed case.
The time-averaged streamwise velocity fields of the optimal cases for
$L_{x}=3D$, $5D$ and $7D$ are shown in figures 9 and 10 on $x$-$y$ planes and
$x$-$z$ planes, respectively. The introduction of yaw on the upstream rotor
deflects its wake away from the centerline on the $x$-$y$ planes, as is clear
from both the ALM and FLORIS calculations. For $L_{x}=3D$, the optimal yaw
angle is achieved at $\gamma_{1}=40^{\circ}$. Under this condition, a
significant portion of the upstream wake still impinges on the downstream
rotor. Although it is possible to increase the yaw angle of upstream rotor to
further steer its wake, the power gain of the downstream rotor can no longer
compensate the loss in the upstream one. With $L_{x}=5D$ and $7D$, the
deflected wakes almost bypass the entire downstream rotor at the optimal yaw
angles. With further increase in $\gamma_{1}$, the gain of power in downstream
rotor becomes less significant, as evidenced by the saturated growth rate of
$C_{P_{2}}$ at high yaw angles shown in figure 5($b$). On the $x$-$z$ planes,
the wakes of both turbines are not deflected, as shown in figure 10. The
velocity deficit predicted by ALM exhibits a hollow area downstream of the yawed
turbine, which is associated with the thinning of the wake shown on the
$x$-$y$ plane. This feature is not predicted by FLORIS, which assumes self-
similarity in the wake model.
One of the key differences between the flow fields predicted by ALM and FLORIS
is that in the former, the upstream wake appears to become increasingly narrow
along the streamwise direction, while for the latter the wake is more
dispersed. To understand this difference, we show the time-averaged
$\overline{U_{x}}$ fields on slices cut at different streamwise locations in
figures 11 and 12 for ALM and FLORIS, respectively. With increasing downstream
distance, the wake deficit of the yawed turbine not only deflects in the $y$
direction, but also curls into a kidney-like shape, particularly for cases
with large yaw angles. This type of wake is characterized by a pair of
counter-rotating vortices as shown in figure 8($b$), and has been discussed
extensively in literature (Medici and Alfredsson, 2006; Howland et al., 2016;
Bartl et al., 2018b; Bastankhah and Porté-Agel, 2016; Kleusberg et al., 2020).
It is thus clear that the narrow wake observed on the $x$-$y$ slices in figure
9 corresponds to the thin connecting part of the two counter-rotating
vortices. The wakes calculated by FLORIS use the Gauss-curl hybrid (GCH)
model, which assumes self-similarity for fast computation. As a result, the
velocity deficit remains isotropic in shape while shifting in the direction of
yaw, and no counter-rotating vortex pair is reproduced.
In the immediate wake of the downstream rotor, an isotropic velocity deficit
is formed, and is engulfed by the kidney-like wake incurred by the yawed
upstream rotor. Further downstream, the combined wake gradually becomes
diffused as the velocity recovers. Even though the downstream rotor is aligned
perpendicular to the freestream, its wake also exhibits a certain degree of
deflection due to the yawed incoming flow. This phenomenon is referred to as
“secondary steering” (Fleming et al., 2018), and is important for unlocking
the full potential of yaw control (Rak and Pereira, 2022). As shown in figure
12, the secondary steering phenomenon is also captured by FLORIS, although the
detailed wake shape differs from that predicted by ALM. It is noted that the
Gauss-curl hybrid (GCH) wake model (King et al., 2021) employed here is
tailored for reproducing secondary steering. This phenomenon cannot be
captured using conventional Jensen or Gaussian wake models (Fleming et al.,
2018).
### 3.3 Unsteady aerodynamics
Even in the steady wind considered herein, both the yawed upstream rotor and
the unyawed downstream rotor experience unsteady loading, which negatively
affects the power quality and fatigue life of the turbines. For the upstream rotor,
the angle of attack on each blade is continuously changing as it rotates in
the yawed condition, resulting in fluctuating thrust and torque as shown in
figure 13. For the downstream rotor, the thrust and torque also exhibit
periodic variation, as shown in figure 14, and the fluctuations in these
aerodynamic quantities are stronger than those for the upstream rotor. Each rotation period is associated
with three waves in the aerodynamic loading. As shown in the spectrum of the
power coefficients $C_{P_{2}}$ in figure 15, the unsteady aerodynamic
performance of the downstream rotor, regardless of the yaw angle, is dominated
by a peak frequency of $f_{C_{P}}=0.69$ Hz with a superharmonic at
$2f_{C_{P}}=1.38$ Hz. This is consistent with the operating tip speed ratio of
$\lambda=8$, which translates to a rotational frequency of $f_{0}=\lambda U_{\infty}/(2\pi
R)=0.23$ Hz. Since the considered turbine is three-bladed, the dominant
frequency in the aerodynamic loading of the rotor is related to the rotational
frequency by $f_{C_{P}}=3f_{0}$.
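This frequency bookkeeping can be reproduced directly from the turbine parameters. The values of $R$ and $U_{\infty}$ below are assumptions consistent with the NREL 5 MW reference turbine and with the reported $f_{0}=0.23$ Hz; the paper does not state them at this point.

```python
import math

# Assumed operating point (NREL 5 MW reference turbine values, chosen to
# be consistent with the reported 0.23 Hz rotational frequency):
R = 63.0        # rotor radius [m]
U_inf = 11.4    # freestream velocity [m/s]
lam = 8.0       # tip speed ratio lambda

omega = lam * U_inf / R              # rotor angular speed [rad/s]
f0 = omega / (2.0 * math.pi)         # rotational frequency f_0 [Hz]
f_cp = 3.0 * f0                      # blade-passing frequency, 3 blades

print(f"f0 = {f0:.2f} Hz, f_CP = {f_cp:.2f} Hz")  # -> f0 = 0.23 Hz, f_CP = 0.69 Hz
```

The dominant peak at 0.69 Hz and its superharmonic at 1.38 Hz thus follow from the blade-passing frequency $3f_{0}$ and $6f_{0}$.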
Figure 13: Variations of ($a$) thrust and ($b$) torque of upstream rotor
during one rotation period under different $\gamma_{1}$. The shown case is
with $L_{x}=5D$. Figure 14: Variations of ($a$) thrust and ($b$) torque of
downstream rotor during one rotation period under different $\gamma_{1}$. The
shown case is with $L_{x}=5D$. Figure 15: The power spectra of $C_{P_{2}}$ at
different yaw angles
$\gamma_{1}=0^{\circ},25^{\circ},35^{\circ},40^{\circ},50^{\circ}$, with $L_{x}=5D$.
Figure 16: Variations of ($a$) local attack angle $\alpha_{2}$, ($b$) axial
and ($c$) tangential force per unit span along blade 1 of WT2 during the
rotation period under different yaw angles. Shown is the case with $L_{x}=5D$.
$\theta_{2}$ is the rotational angle of blade 1. In ($a$), the insets show the
zoomed view of the angle of attack for $r/R=0.4-1$. In ($b$), the insets show
the wake deficit (gray) at $x=4D$ and blade positions at different azimuthal
angles. Figure 17: The standard deviation of bending moment at the blade root
of the upstream rotor ($a$) and downstream rotor ($b$).
The highly unsteady aerodynamic response of the downstream rotor is a result
of the non-uniform wake profiles incurred by the yawed upstream rotor. Figure
16 shows the variations of the angle of attack, axial and tangential forces
along the blade span during one rotation period of the rotor. For the non-
yawed case, since the wake profile on the $y$-$z$ plane is isotropic, the
angle of attack along the blade span remains fixed over time, leading to
constant aerodynamic loading. For the yawed cases, the wake of the upstream
rotor is directed away from the centerline, and deforms into a kidney-like
shape, as described in §3.2. As the blades revolve into and out of the wake
deficit, the angle of attack on the blade section changes, leading to
variations in the sectional forces. It is observed that at moderate yaw
angles, the variations in the sectional forces commence near the blade root.
For high yaw angles, since the upstream wake is almost completely deflected
from the downstream rotor, only the tip region is affected by this unsteady
effect, while the loading on the rest of the span remains constant.
We further present the fluctuating bending moments at the blade root of the
rotors in figure 17. The bending moment is calculated as
$M_{r}=\sqrt{M_{n}^{2}+M_{t}^{2}}$, where
$M_{n,t}=\int_{0}^{R}F_{n,t}r\mathrm{d}r$ is the moment associated with the
normal and tangential forces, and $r$ denotes the spatial coordinate along the
span. The fluctuating bending moments of both rotors exhibit a nonmonotonic
relationship with $\gamma_{1}$. For the upstream rotor, $M_{r_{1}}^{\prime}$
increases with yaw angle initially but gradually saturates at higher
$\gamma_{1}$. The fluctuating bending moment of the downstream rotor is
significantly higher than that of the upstream one. While the maximum
$M_{r_{2}}^{\prime}$ is similar among cases with different streamwise spacing,
the yaw angle at which the maximum value is achieved shifts to lower values
with increasing spacing. By comparing with figure 5, it is noticed that the
peak in $M_{r_{2}}^{\prime}$ occurs at smaller $\gamma_{1}$ than that for the
combined power of the two rotors. This is expected since the maximum total
power generation is reached when the deflected upstream wake bypasses the
whole downstream rotor area, while the maximum fluctuating loads occur when
the upstream wake covers approximately half of the downstream rotor, which is
achieved at a smaller yaw angle. The analysis of the unsteady aerodynamic
performance of the two rotors serves as a precursor for assessing the fatigue
loads of wind turbine blades, which is critical for the lifespan of turbines
and their maintenance cost.
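The root bending moment defined above, $M_{r}=\sqrt{M_{n}^{2}+M_{t}^{2}}$ with $M_{n,t}=\int_{0}^{R}F_{n,t}\,r\,\mathrm{d}r$, can be sketched numerically as follows. The spanwise loading used in the example is purely illustrative, not data from the paper.

```python
import math

def root_bending_moment(r, f_n, f_t):
    """Root bending moment M_r = sqrt(M_n^2 + M_t^2), where
    M_{n,t} = int_0^R F_{n,t}(r) * r dr is evaluated by the trapezoidal rule.
    r: spanwise stations [m]; f_n, f_t: normal/tangential force per unit
    span [N/m] at those stations."""
    def trapz(vals):
        return sum(0.5 * (vals[i] + vals[i + 1]) * (r[i + 1] - r[i])
                   for i in range(len(r) - 1))
    m_n = trapz([f * ri for f, ri in zip(f_n, r)])   # flapwise moment [N*m]
    m_t = trapz([f * ri for f, ri in zip(f_t, r)])   # edgewise moment [N*m]
    return math.sqrt(m_n ** 2 + m_t ** 2)

# Hypothetical spanwise loading on a 63 m blade (illustrative numbers only):
r = [i * 6.3 for i in range(11)]                 # stations from 0 to 63 m
f_n = [5000.0 * (ri / 63.0) for ri in r]         # N/m, linear in r
f_t = [800.0 * (ri / 63.0) for ri in r]
print(root_bending_moment(r, f_n, f_t))          # root moment in N*m
```

Evaluating this at each azimuthal position of the blade yields the time series whose standard deviation gives the fluctuating bending moment $M_{r}^{\prime}$ reported in figure 17.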
## 4 Conclusion
This paper presented extensive numerical simulations to characterize the yaw
control effects on the aerodynamics of two tandem turbines in uniform inflow
condition. The simulations are performed using the mid-fidelity actuator line
model, with turbulence closure by large eddy simulation. The results from the
low-fidelity modeling tool FLORIS are also included for comparison.
With increasing yaw angle, the power coefficient of the upstream rotor
decreases following the $\cos^{1.88}(\gamma_{1})$ relationship, and that of
the downstream rotor increases more significantly, resulting in higher
combined power generation. For rotor spacings of $L_{x}=3D$, $5D$ and $7D$,
the maximum total power increases by 16.0%, 35.9% and 45.0%, respectively,
compared with the cases without yaw control. The optimal yaw angle at
which the maximum power is achieved occurs when the upstream wake is deflected
away from the downstream rotor. The wake of the yawed rotor is highly three-
dimensional and features a pair of counter-rotating vortices resembling a
kidney shape. Although the wake shapes predicted by the low-fidelity tool
FLORIS do not reveal such three-dimensionality, the secondary steering
phenomenon, where the wake of the downstream rotor also exhibits deflection,
is captured by both models.
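The upstream penalty in this trade-off can be evaluated directly from the fitted cosine law; the exponent 1.88 is the fit reported above, while the sample yaw angles are arbitrary.

```python
import math

def upstream_power_ratio(gamma_deg, p=1.88):
    """C_P(gamma) / C_P(0) for the yawed upstream rotor, using the fitted
    cos^p law with the reported exponent p = 1.88."""
    return math.cos(math.radians(gamma_deg)) ** p

for gamma in (0, 25, 35, 50):
    print(gamma, round(upstream_power_ratio(gamma), 3))
```

At $\gamma_{1}=35^{\circ}$, for instance, the upstream rotor retains only about 69% of its unyawed power, so the downstream gain must exceed this deficit for yaw control to pay off in total.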
The use of ALM also reveals unsteady aerodynamic characteristics that cannot
be captured by lower-fidelity models. Yawing the upstream rotor introduces a
time-varying angle of attack on the rotor blades, giving rise to the unsteady
aerodynamic performance of the turbine. For the downstream rotor, as the
blades revolve into and out of the redirected upstream velocity deficit, the
blades experience fluctuating aerodynamic loads with a dominant frequency
dictated by the rotational speed of the rotors. The fluctuating bending moment
at the blade root of the downstream rotor is significantly higher than that of
the upstream one, raising concerns of structural fatigue damage associated
with yaw control.
To sum up, this paper has presented the aerodynamic performance, wake
profiles, and unsteady characteristics of two tandem turbines under yaw
control. The fundamental insights obtained here improve the understanding of
yaw misalignment effects on the aerodynamics of two tandem turbines, and can
aid the design of collective yaw control strategies for large wind farms.
## Acknowledgments
KZ, ZLH and DZ acknowledge financial support from the Innovation Program of
Shanghai Municipal Education Commission (no. 2019-01-07-00-02-E00066),
National Science Foundation of China (grant numbers: 12202271, 52122110,
42076210), Program for Intergovernmental International S&T Cooperation
Projects of Shanghai Municipality, China (grant no. 22160710200), and the
Oceanic Interdisciplinary Program of Shanghai Jiao Tong University (grant no.
SL2020PT201). KZ is also grateful for the computing resources at Amarel
cluster provided through the Office of Advanced Research Computing (OARC) at
Rutgers University, on which some of the simulations were carried out. OB is
supported by the Department of Energy Advanced Research Projects Agency-Energy
Program award DE-AR0001186.
## References
* Veers et al. [2019] Paul Veers, Katherine Dykes, Eric Lantz, Stephan Barth, Carlo L Bottasso, Ola Carlson, Andrew Clifton, Johney Green, Peter Green, Hannele Holttinen, et al. Grand challenges in the science of wind energy. _Science_ , 366(6464):eaau2027, 2019.
* Vermeer et al. [2003] L.J. Vermeer, J.N. Sorensen, and A. Crespo. Wind turbine wake aerodynamics. _Progress in Aerospace Sciences_ , 39(6):467–510, 2003. ISSN 0376-0421.
* Troldborg et al. [2011] Niels Troldborg, Gunner C Larsen, Helge A Madsen, Kurt S Hansen, Jens N Sørensen, and Robert Mikkelsen. Numerical simulations of wake interaction between two wind turbines at various inflow conditions. _Wind Energy_ , 14(7):859–876, 2011.
* Sun et al. [2020] Haiying Sun, Xiaoxia Gao, and Hongxing Yang. A review of full-scale wind-field measurements of the wind-turbine wake effect and a measurement of the wake-interaction effect. _Renewable and Sustainable Energy Reviews_ , 132:110042, 2020.
* Barthelmie et al. [2009] Rebecca Jane Barthelmie, K Hansen, Sten Tronæs Frandsen, Ole Rathmann, JG Schepers, W Schlez, J Phillips, K Rados, A Zervos, ESa Politis, et al. Modelling and measuring flow and wind turbine wakes in large wind farms offshore. _Wind Energy: An International Journal for Progress and Applications in Wind Power Conversion Technology_ , 12(5):431–444, 2009.
* Andersson et al. [2021] Leif Erik Andersson, Olimpo Anaya-Lara, John Olav Tande, Karl Otto Merz, and Lars Imsland. Wind farm control-part I: A review on control system concepts and structures. _IET Renewable Power Generation_ , 15(10):2085–2108, 2021.
* Wagenaar et al. [2012] Jan Willem Wagenaar, L Machielse, and J Schepers. Controlling wind in ECN’s scaled wind farm. _Proc. Europe Premier Wind Energy Event_ , 1(01), 2012.
* Houck [2022] Daniel R Houck. Review of wake management techniques for wind turbines. _Wind Energy_ , 25(2):195–220, 2022.
* Meyers et al. [2022] Johan Meyers, Carlo Bottasso, Katherine Dykes, Paul Fleming, Pieter Gebraad, Gregor Giebel, Tuhfe Göçmen, and Jan-Willem van Wingerden. Wind farm flow control: prospects and challenges. _Wind Energy Science Discussions_ , pages 1–56, 2022.
* Chen et al. [2021] Yaoran Chen, Yan Wang, Zhikun Dong, Jie Su, Zhaolong Han, Dai Zhou, Yongsheng Zhao, and Yan Bao. 2-d regional short-term wind speed forecast based on cnn-lstm deep learning model. _Energy Conversion and Management_ , 244:114451, 2021.
* Howland et al. [2016] Michael F Howland, Juliaan Bossuyt, Luis A Martínez-Tossas, Johan Meyers, and Charles Meneveau. Wake structure in actuator disk models of wind turbines in yaw under uniform inflow conditions. _Journal of Renewable and Sustainable Energy_ , 8(4):043301, 2016.
* Bastankhah and Porté-Agel [2016] Majid Bastankhah and Fernando Porté-Agel. Experimental and theoretical study of wind turbine wakes in yawed conditions. _Journal of Fluid Mechanics_ , 806:506–541, 2016.
* Fleming et al. [2018] Paul Fleming, Jennifer Annoni, Matthew Churchfield, Luis Martínez Tossas, Kenny Gruchalla, Michael Lawson, and Patrick Moriarty. A simulation study demonstrating the importance of large-scale trailing vortices in wake steering. _Wind Energy Science_ , 3:243–255, 05 2018.
* Anderson [2011] John Anderson. _Fundamentals of Aerodynamics_. McGraw Hill, 2011.
* Zhang et al. [2020] Kai Zhang, Shelby Hayostek, Michael Amitay, Wei He, Vassilios Theofilis, and Kunihiko Taira. On the formation of three-dimensional separated flows over wings under tip effects. _Journal of Fluid Mechanics_ , 895, 2020.
* Shapiro et al. [2018] Carl R Shapiro, Dennice F Gayme, and Charles Meneveau. Modelling yawed wind turbine wakes: a lifting line approach. _Journal of Fluid Mechanics_ , 841, 2018.
* Zong and Porté-Agel [2020] Haohua Zong and Fernando Porté-Agel. A point vortex transportation model for yawed wind turbine wakes. _Journal of Fluid Mechanics_ , 890, 2020.
* Bastankhah et al. [2022] Majid Bastankhah, Carl R Shapiro, Sina Shamsoddin, Dennice F Gayme, and Charles Meneveau. A vortex sheet based analytical model of the curled wake behind yawed wind turbines. _Journal of Fluid Mechanics_ , 933, 2022.
* King et al. [2021] Jennifer King, Paul Fleming, Ryan King, Luis A Martínez-Tossas, Christopher J Bay, Rafael Mudafort, and Eric Simley. Control-oriented model for secondary effects of wake steering. _Wind Energy Science_ , 6(3):701–714, 2021.
* NREL [2022] NREL. Floris. version 3.0, 2022. URL https://github.com/NREL/floris.
* Adaramola and Krogstad [2011] MS Adaramola and P-Å Krogstad. Experimental investigation of wake effects on wind turbine performance. _Renewable energy_ , 36(8):2078–2086, 2011.
* Campagnolo et al. [2016] Filippo Campagnolo, Vlaho Petrović, Johannes Schreiber, Emmanouil M. Nanos, Alessandro Croce, and Carlo L. Bottasso. Wind tunnel testing of a closed-loop wake deflection controller for wind farm power maximization. _Journal of Physics: Conference Series_ , 753:032006, sep 2016.
* Bastankhah and Porté-Agel [2019] Majid Bastankhah and Fernando Porté-Agel. Wind farm power optimization via yaw angle control: A wind tunnel study. _Journal of Renewable and Sustainable Energy_ , 11(2):023301, 2019.
* Bartl et al. [2018a] Jan Bartl, Franz Mühle, and Lars Sætran. Wind tunnel study on power output and yaw moments for two yaw-controlled model wind turbines. _Wind Energy Science_ , 3(2):489–502, 2018a.
* Aju et al. [2022] Emmanuvel Joseph Aju, Devesh Kumar, Melissa Leffingwell, Mario A Rotea, and Yaqing Jin. The influence of yaw misalignment on turbine power output fluctuations and unsteady aerodynamic loads within wind farms. _SSRN 4194363_ , 2022.
* Fleming et al. [2017] Paul Fleming, Jennifer Annoni, Andrew Scholbrock, Eliot Quon, Scott Dana, Scott Schreck, Steffen Raach, Florian Haizmann, and David Schlipf. Full-scale field test of wake steering. In _Journal of Physics: Conference Series_ , volume 854, page 012013. IOP Publishing, 2017.
* Howland et al. [2019] Michael F Howland, Sanjiva K Lele, and John O Dabiri. Wind farm power optimization through wake steering. _Proceedings of the National Academy of Sciences_ , 116(29):14495–14500, 2019.
* Simley et al. [2021] Eric Simley, Paul Fleming, Nicolas Girard, Lucas Alloin, Emma Godefroy, and Thomas Duc. Results from a wake-steering experiment at a commercial wind plant: investigating the wind speed dependence of wake-steering performance. _Wind Energy Science_ , 6(6):1427–1453, 2021.
* Howland et al. [2022a] Michael F Howland, Jesús Bas Quesada, Juan Jose Pena Martinez, Felipe Palou Larrañaga, Neeraj Yadav, Jasvipul S Chawla, Varun Sivaram, and John O Dabiri. Collective wind farm operation based on a predictive model increases utility-scale energy production. _Nature Energy_ , 7:818–827, 2022a.
* Mittal et al. [2016] Anshul Mittal, Kidambi Sreenivas, Lafayette K Taylor, Levi Hereth, and Christopher B Hilbert. Blade-resolved simulations of a model wind turbine: effect of temporal convergence. _Wind Energy_ , 19(10):1761–1783, 2016.
* Lawson et al. [2019] Michael J Lawson, Jeremy Melvin, Shreyas Ananthan, Kenny M Gruchalla, Jonathan S Rood, and Michael A Sprague. Blade-resolved, single-turbine simulations under atmospheric flow. Technical report, National Renewable Energy Lab.(NREL), Golden, CO (United States), 2019.
* de Oliveira et al. [2022] Marielle de Oliveira, Rodolfo Curci Puraca, and Bruno Souza Carmo. Blade-resolved numerical simulations of the NREL offshore 5 MW baseline wind turbine in full scale: A study of proper solver configuration and discretization strategies. _Energy_ , page 124368, 2022.
* Miao et al. [2017] Weipao Miao, Chun Li, Giorgio Pavesi, Jun Yang, and Xiaoyun Xie. Investigation of wake characteristics of a yawed hawt and its impacts on the inline downstream wind turbine using unsteady CFD. _Journal of Wind Engineering and Industrial Aerodynamics_ , 168:60–71, 2017.
* Hur et al. [2019] Chihoon Hur, Tom Berdowski, Carlos Simao Ferreira, Koen Boorsma, and Gerard Schepers. A review of momentum models for the actuator disk in yaw. In _AIAA Scitech 2019 Forum_ , page 1799, 2019.
* Howland et al. [2020a] Michael F Howland, Aditya S Ghate, Sanjiva K Lele, and John O Dabiri. Optimal closed-loop wake steering–part 1: Conventionally neutral atmospheric boundary layer conditions. _Wind Energy Science_ , 5(4):1315–1338, 2020a.
* Howland et al. [2022b] Michael F Howland, Aditya S Ghate, Jesús Bas Quesada, Juan José Pena Martínez, Wei Zhong, Felipe Palou Larrañaga, Sanjiva K Lele, and John O Dabiri. Optimal closed-loop wake steering–part 2: Diurnal cycle atmospheric boundary layer conditions. _Wind Energy Science_ , 7(1):345–365, 2022b.
* Heck et al. [2022] Kirby S Heck, Hannah M Johlas, and Michael F Howland. Modeling the induction, thrust, and power of a yaw misaligned actuator disk. _arXiv preprint arXiv:2209.00111_ , 2022.
* Lin and Porté-Agel [2022] Mou Lin and Fernando Porté-Agel. Large-eddy simulation of a wind-turbine array subjected to active yaw control. _Wind Energy Science Discussions_ , pages 1–22, 2022.
* Shen et al. [2005] Wen Zhong Shen, Robert Mikkelsen, Jens Nørkær Sørensen, and Christian Bak. Tip loss corrections for wind turbine computations. _Wind Energy: An International Journal for Progress and Applications in Wind Power Conversion Technology_ , 8(4):457–475, 2005.
* Stevens and Meneveau [2017] Richard J.A.M. Stevens and Charles Meneveau. Flow structure and turbulence in wind farms. _Annual Review of Fluid Mechanics_ , 49:311–339, 2017.
* Stevens et al. [2018] Richard JAM Stevens, Luis A Martínez-Tossas, and Charles Meneveau. Comparison of wind farm large eddy simulations using actuator disk and actuator line models with wind tunnel experiments. _Renewable energy_ , 116:470–478, 2018.
* Shapiro et al. [2022] Carl R Shapiro, Genevieve M Starke, and Dennice F Gayme. Turbulence and control of wind farms. _Annual Review of Control, Robotics, and Autonomous Systems_ , 5:579–602, 2022.
* Jonkman et al. [2009] Jason Jonkman, Sandy Butterfield, Walter Musial, and George Scott. Definition of a 5-MW reference wind turbine for offshore system development. Technical report, National Renewable Energy Lab.(NREL), Golden, CO (United States), 2009.
* Germano et al. [1991] Massimo Germano, Ugo Piomelli, Parviz Moin, and William H Cabot. A dynamic subgrid-scale eddy viscosity model. _Physics of Fluids A: Fluid Dynamics (1989-1993)_ , 3(7):1760–1765, 1991.
* Lilly [1992] Douglas K Lilly. A proposed modification of the Germano subgrid-scale closure method. _Physics of Fluids A: Fluid Dynamics (1989-1993)_ , 4(3):633–635, 1992.
* Troldborg [2008] Niels Troldborg. _Actuator line modeling of wind turbine wakes_. PhD thesis, Technical University of Denmark, 2008.
* Weller et al. [1998] Henry G Weller, Gavin Tabor, Hrvoje Jasak, and Christer Fureby. A tensorial approach to computational continuum mechanics using object-oriented techniques. _Computers in physics_ , 12(6):620–631, 1998.
* Bachant et al. [2016] Peter Bachant, Anders Goude, and Martin Wosnik. Actuator line modeling of vertical-axis turbines. _arXiv preprint arXiv:1605.01449_ , 2016.
* Zhang and Bilgen [2020] Kai Zhang and Onur Bilgen. Multi-fidelity aerodynamic modeling of a floating offshore wind turbine rotor. In _ASME International Mechanical Engineering Congress and Exposition_ , volume 84584, page V010T10A061. American Society of Mechanical Engineers, 2020.
* Onel and Tuncer [2021] Huseyin C Onel and Ismail H Tuncer. Investigation of wind turbine wakes and wake recovery in a tandem configuration using actuator line model with les. _Computers & Fluids_, 220:104872, 2021.
* Liu et al. [2022] Luoqin Liu, Lucas Franceschini, Daniel F Oliveira, Flavio CC Galeazzo, Bruno S Carmo, and Richard JAM Stevens. Evaluating the accuracy of the actuator line model against blade element momentum theory in uniform inflow. _Wind Energy_ , 2022.
* Doekemeijer et al. [2019] Bart M Doekemeijer, Jan-Willem Van Wingerden, and Paul A Fleming. A tutorial on the synthesis and validation of a closed-loop wind farm controller using a steady-state surrogate model. In _2019 American Control Conference (ACC)_ , pages 2825–2836. IEEE, 2019.
* Gebraad et al. [2017] Pieter Gebraad, Jared J Thomas, Andrew Ning, Paul Fleming, and Katherine Dykes. Maximization of the annual energy production of wind power plants by optimization of layout and yaw-based wake control. _Wind Energy_ , 20(1):97–107, 2017.
* Dose et al. [2018] Bastian Dose, Hamid Rahimi, Iván Herráez, Bernhard Stoevesandt, and Joachim Peinke. Fluid-structure coupled computations of the NREL 5 MW wind turbine by means of CFD. _Renewable Energy_ , 129, 05 2018.
* Make and Vaz [2015] Michel Make and Guilherme Vaz. Analyzing scaling effects on offshore wind turbines using CFD. _Renewable Energy_ , 83:1326–1340, 11 2015.
* Marten et al. [2013] David Marten, Juliane Peukert, Georgios Pechlivanoglou, Christian Nayeri, and Christian Paschereit. Qblade: An open source tool for design and simulation of horizontal and vertical axis wind turbines. _International Journal of Emerging Technology and Advanced Engineering_ , 3:264–269, 03 2013.
* Burton et al. [2002] Tony Burton, Nick Jenkins, David Sharpe, and Ervin Bossanyi. _Wind energy handbook_. John Wiley & Sons, 04 2002.
* Schreiber et al. [2017] Johannes Schreiber, EM Nanos, Filippo Campagnolo, and Carlo L Bottasso. Verification and calibration of a reduced order wind farm model by wind tunnel experiments. In _Journal of Physics: Conference Series_ , volume 854, page 012041. IOP Publishing, 2017.
* Liew et al. [2020] Jaime Liew, Albert M Urbán, and Søren Juhl Andersen. Analytical model for the power–yaw sensitivity of wind turbines operating in full wake. _Wind Energy Science_ , 5(1):427–437, 2020.
* Howland et al. [2020b] Michael F Howland, Carlos Moral González, Juan José Pena Martínez, Jesús Bas Quesada, Felipe Palou Larranaga, Neeraj K Yadav, Jasvipul S Chawla, and John O Dabiri. Influence of atmospheric conditions on the power production of utility-scale wind turbines in yaw misalignment. _Journal of Renewable and Sustainable Energy_ , 12(6):063307, 2020b.
* Annoni et al. [2018] Jennifer Annoni, Paul Fleming, Andrew Scholbrock, Jason Roadman, Scott Dana, Christiane Adcock, Fernando Porte-Agel, Steffen Raach, Florian Haizmann, and David Schlipf. Analysis of control-oriented wake modeling tools using lidar field results. _Wind Energy Science_ , 3(2):819–831, 2018.
* Nash et al. [2021] Ryan Nash, Reza Nouri, and Ahmad Vasel-Be-Hagh. Wind turbine wake control strategies: A review and concept proposal. _Energy Conversion and Management_ , 245:114581, 2021.
* Li et al. [2022] Baoliang Li, Jia He, Mingwei Ge, Hongliang Ma, Bowen Du, Haoze Yang, and Yongqian Liu. Study of three wake control strategies for power maximization of offshore wind farms with different layouts. _Energy Conversion and Management_ , 268:116059, 2022.
* Schottler et al. [2016] Jannik Schottler, Agnieszka Hölling, Joachim Peinke, and Michael Hölling. Wind tunnel tests on controllable model wind turbines in yaw. In _34th wind energy symposium_ , page 1523, 2016.
* Archer and Vasel-Be-Hagh [2019] Cristina L Archer and Ahmad Vasel-Be-Hagh. Wake steering via yaw control in multi-turbine wind farms: Recommendations based on large-eddy simulation. _Sustainable Energy Technologies and Assessments_ , 33:34–43, 2019.
* Medici and Alfredsson [2006] Davide Medici and PH Alfredsson. Measurements on a wind turbine wake: 3d effects and bluff body vortex shedding. _Wind Energy: An International Journal for Progress and Applications in Wind Power Conversion Technology_ , 9(3):219–236, 2006.
* Bartl et al. [2018b] Jan Bartl, Franz Mühle, Jannik Schottler, Lars Sætran, Joachim Peinke, Muyiwa Adaramola, and Michael Hölling. Wind tunnel experiments on wind turbine wakes in yaw: effects of inflow turbulence and shear. _Wind Energy Science_ , 3(1):329–343, 2018b.
* Kleusberg et al. [2020] Elektra Kleusberg, Philipp Schlatter, and Dan S Henningson. Parametric dependencies of the yawed wind-turbine wake development. _Wind Energy_ , 23(6):1367–1380, 2020.
* Rak and Pereira [2022] Bartłomiej P Rak and RB Santos Pereira. Impact of the wake deficit model on wind farm yield: A study of yaw-based control optimization. _Journal of Wind Engineering and Industrial Aerodynamics_ , 220:104827, 2022.
# Automated Market Makers: Mean-Variance Analysis of LPs Payoffs and Design of
Pricing Functions
Philippe Bergault111Université Paris Dauphine-PSL, Ceremade, 75116, Paris, France.<EMAIL_ADDRESS> Louis Bertucci222Institut Louis Bachelier, 75002, Paris, France.<EMAIL_ADDRESS> David Bouba333Swaap Labs.<EMAIL_ADDRESS> Olivier Guéant444Université Paris 1 Panthéon-Sorbonne, Centre d’Economie de la Sorbonne, 106 Boulevard de l’Hôpital, 75642 Paris Cedex 13, France.<EMAIL_ADDRESS>
###### Abstract
With the emergence of decentralized finance, new trading mechanisms called
Automated Market Makers have appeared. The most popular Automated Market
Makers are Constant Function Market Makers. They have been studied both
theoretically and empirically. In particular, the concept of impermanent loss
has emerged and explains part of the profit and loss of liquidity providers in
Constant Function Market Makers. In this paper, we propose another mechanism
in which price discovery does not solely rely on liquidity takers but also on
an external exchange rate or price oracle. We also propose to compare the
different mechanisms from the point of view of liquidity providers by using a
mean / variance analysis of their profit and loss compared to that of agents
holding assets outside of Automated Market Makers. In particular, inspired by
Markowitz’ modern portfolio theory, we manage to obtain an efficient frontier
for the performance of liquidity providers in the idealized case of a perfect
oracle. Beyond that idealized case, we show that even when the oracle is
lagged and in the presence of adverse selection by liquidity takers and
systematic arbitrageurs, optimized oracle-based mechanisms perform better than
popular Constant Function Market Makers.
Key words: Automated market makers, cryptocurrencies, DeFi, oracles,
stochastic optimal control.
## 1 Introduction
Since the early days of the Decentralized Finance (DeFi) era, Automated Market
Makers (AMMs) have been some of the largest DeFi protocols on public
blockchains. Although it is nothing more than a smart contract on a
blockchain, an AMM should be regarded, from a financial point of view, as a
liquidity pool of two assets555There exist AMMs functioning with pools
involving more than two assets, but we focus on the two-asset case throughout
this paper. involving two types of agents: liquidity providers (referred to as
LPs) and liquidity takers (referred to as LTs or, more simply, traders). LPs
supply reserves to the pool, usually in both assets of the pool. In exchange,
they become entitled to a share of the pool that is in line with their supply.
LTs, in turn, use AMMs to trade, that is to swap a given quantity of one asset
against the other. The exchange rate proposed by an AMM is typically based on
a pre-defined and public formula that depends on the reserves, the transaction
size and the direction of the swap. This function is often called the pricing
function, and, although it must be defined in the contract, it can use
external data through what is called oracles.
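One concrete example of such a pricing function is the constant-product rule used by Uniswap-v2-style CFMMs (discussed further below). A minimal sketch, where the 0.3% fee is the usual Uniswap v2 value and is otherwise an illustrative parameter:

```python
def constant_product_swap(x, y, dx, fee=0.003):
    """Amount of asset Y received when swapping dx of asset X into a
    constant-product pool with reserves (x, y): the invariant x*y is
    preserved on the fee-adjusted input (Uniswap-v2-style rule)."""
    dx_eff = dx * (1.0 - fee)
    return y * dx_eff / (x + dx_eff)

# Small trades execute near the marginal price y/x; larger trades get a
# worse effective rate, i.e. the pricing function embeds price impact:
x, y = 1000.0, 2000.0                              # marginal price = y/x = 2.0
print(constant_product_swap(x, y, 1.0) / 1.0)      # close to 2.0
print(constant_product_swap(x, y, 100.0) / 100.0)  # noticeably below 2.0
```

Note that this pricing function depends only on the reserves, the transaction size and the direction of the swap, with no external data; oracle-based designs, by contrast, also condition the quote on an imported exchange rate.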
AMMs constitute a new paradigm beyond (i) that of dealer markets (like the
global FX market, see [62, 63]) where specific agents – usually banks –
provide liquidity to clients and hold risk on their balance sheet, (ii) that
of classical order-driven markets organized around limit order books (like
most stock markets), and (iii) that of dark pools introduced in the last
decades (see [51]). Unlike in dealer markets, any agent can provide liquidity
through an AMM. Unlike in limit order books, prices are automatically set by
the protocol. Unlike in dark pools, the available liquidity is visible and the
price is not defined solely by importing that of another venue. Most
importantly, LPs do not provide liquidity to LTs in the case of AMMs: LPs
provide liquidity to the pool and LTs take liquidity from the pool.666In
particular (like for most DeFi applications) AMMs are fully collateralized and
neither LPs nor LTs are exposed to any sort of counterparty risk. To be more
precise, LPs are exposed to a technological – or cyber – risk, but as long as
the smart contract works as expected, they will be able to withdraw their
share of the reserves. The novelty of AMMs raises a lot of theoretical
questions from an economic point of view, in relation to the classical market
microstructure literature, but also optimization and quantitative questions
related to the quantitative finance and financial engineering literature.
This paper is a contribution to the quantitative finance literature on AMMs
focusing on the relation between the design of AMMs and the performance of
LPs. Using standard convex analysis, we recover the now well-known result
that, in the absence of transaction fees, posting liquidity into a Constant
Function Market Maker777We refer to [59] for a pedagogical introduction to
CFMMs and the classical pricing functions. Examples of popular CFMMs include
Uniswap (see [2, 3, 4]), Balancer (see [56]), Curve (see [35, 36]), etc. An
interesting empirical analysis of Uniswap v3 is [54]. (CFMM) exposes to a
concave payoff that is inferior to that of holding coins outside of the
pool.888This led to the now classical concept of impermanent loss that is
widely used in the DeFi world. The “impermanent” trait of such loss lies in
the fact that losses vanish when the exchange rate reverts back to its
original value (at the time the liquidity was provided). This nonpositive and
concave payoff comes from the fact that price discovery in CFMMs is left to
LTs who act as arbitrageurs and make money out of the pool. Transaction fees
should therefore compensate a risk that is fundamentally related to the design
of CFMMs and competition between CFMMs cannot reduce transaction fees below a
threshold that depends on market conditions. We therefore plead in this paper
for the design of oracle-based AMMs that are more complicated than CFMMs in
that they do not solely rely on LTs for price discovery.
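The concave, nonpositive payoff relative to holding can be checked in closed form for a constant-product pool: with invariant $xy=k$ and arbitraged price $S$, the pool is worth $2\sqrt{kS}$ while the initial reserves held outside the pool are worth $\sqrt{kS_0}(1+S/S_0)$, so if the rate is multiplied by $r=S/S_0$ the relative loss is $2\sqrt{r}/(1+r)-1\leqslant 0$. A quick numerical check:

```python
import math

def impermanent_loss(r):
    """Relative loss of a fee-less constant-product LP versus holding the
    initial reserves, when the exchange rate is multiplied by r.
    Always <= 0, with equality iff r = 1: the 'impermanent' trait is that
    the loss vanishes if the rate reverts to its initial value."""
    return 2.0 * math.sqrt(r) / (1.0 + r) - 1.0

for r in (0.5, 1.0, 2.0, 4.0):
    print(r, round(impermanent_loss(r), 4))
```

By the AM-GM inequality $2\sqrt{r}\leqslant 1+r$, the loss is indeed nonpositive for every $r>0$, which is the payoff concavity exploited by arbitrageurs and which fees must compensate.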
In this paper, we indeed explore AMMs in which the pricing function uses
external information about current market prices. Even if a blockchain only
has knowledge about on-chain activity, it is possible to feed a smart contract
with external data through oracles (see [27] for a discussion). This situation
is quite similar to that of a traditional market maker in dealer markets who
provides quotes based on an estimation of the price or exchange rate
(typically a mid-price imported from an electronic platform or a composite
price) that is available at the same time. Instead of being entirely passive,
the AMM can update its bid and offer not only after a trade or a provision or
redemption of liquidity, but also after the oracle has fired a price update.
One difference with the traditional finance case, however, is that a dealer in
FX or corporate bond markets can have, and frequently has, short positions,
whereas liquidity must always be present in the pool in the case of an AMM.
In order to compare different AMMs (from the point of view of LPs999Due to the
challenges posed by impermanent loss, much of the existing literature has
sought to optimize AMM designs with a focus on LPs. Our work aligns with this
perspective, drawing additional inspiration from research works on optimal
market making in OTC markets, which predominantly consider the dealers’
standpoint. It is worth noting, however, that an economic perspective on AMM
design should ideally encompass both LTs and LPs. In the below literature
review, there are a few papers tackling equilibrium issues encompassing both
LPs and LTs but not with designs like ours.), we build in this paper a simple
mean-variance framework inspired by the so-called modern portfolio theory of
the 1950s. However, unlike in the Markowitz case, we always consider the
Hodl strategy as a benchmark (Hodl is slang in the cryptocurrency community
for holding a cryptocurrency; in traditional finance, this would be called
“buy and hold”). An agent indeed faces an alternative: providing
liquidity by posting coins into an AMM or holding them. The Hodl benchmark is
also intimately linked to the notion of impermanent loss that is ubiquitous in
the field.
We also borrow from the modern portfolio theory the concept of efficient
portfolios which becomes in our case efficient market making strategies, i.e.
efficient pricing functions. Indeed, in addition to comparing the risk /
return profile of different existing AMMs, we are interested in computing the
maximum extra return that an LP could expect for a given level of tracking
error with respect to Hodl.
Although the efficient frontier cannot be computed in closed form, it can
easily be approximated using a model inspired by the modern literature on
market making. Interestingly, we show that popular CFMMs exhibit poor
performances relatively to our approximation of the theoretical efficient
frontier in a complete information framework. However, even in the presence of
a lagged oracle and adverse selection, an optimized oracle-based AMM is able
to get close to that theoretical efficient frontier.
In recent years, many research works have been carried out on AMMs, especially
CFMMs, with a special focus on Constant Product Market Makers (CPMMs) and
their generalizations. Motivated by the emergence of Uniswap and the volume
exchanged through it, the authors of [11] studied the main properties of
CPMMs. In particular, they showed that, in the presence of arbitrageurs, the
exchange rate proposed by a CPMM for small transactions should be in a range
around that of competing venues – the width of the range being determined by
transaction fees. They then established further structural properties of
CPMMs: the no-splitting property, the no-depletion property, etc. They also
studied the
payoff of LPs in a CPMM as a function of an external asset price when there
are no transaction fees (see also [33] and an extension to the case of liquidity
concentration as in Uniswap v3 in [34]). The returns of LPs are also studied
in [24] and [37] for pricing functions that are geometric means. In
particular, impermanent loss is studied for several pricing functions.
Beyond CPMMs, CFMMs have been studied by the authors of [7] in a general
multi-coin setting. They introduced a natural notion of trading set and showed
that its convexity is key to obtain desirable properties. In the no-
transaction-fee case, they also obtained a formula for the value of the pool
that involves Legendre-Fenchel transforms (see also [9] and Section 2 for a
discussion). In particular, they showed that the returns of LPs between two
liquidity provision or redemption events suffered from impermanent loss for
all CFMMs. The same group of authors with additional coauthors then shed a new
light on their previous works in [6] by embedding several problems involving
CFMMs in a convex setting.
A recent and important work to better understand the payoff of LPs in CFMMs is
[57]. Because the value of a CFMM is a concave function of the exchange rate
between the two assets, Itô’s formula leads to a decomposition of the payoff
of LPs (when the price process is a martingale, this is the Doob-Meyer
decomposition of the payoff) into two terms: a stochastic integral
corresponding to the payoff of a self-financing strategy and a nonincreasing
and nonpositive term. They use this decomposition to claim that, at least
theoretically, part of the risk can be hedged away. In particular, if
continuous-time hedging with no friction was possible and if an AMM or an LP
implemented the hedging strategy, only part of the impermanent loss would
remain, which they call Loss-Versus-Rebalancing (LVR). (In the very recent
paper [58], the same authors added transaction costs and discrete-time trading
and obtained asymptotic results with respect to the frequency of blocks.)
Beyond the properties of CFMMs and the returns of LPs, several questions have
also been addressed in the literature. The question of the optimal fees is
discussed in [38]. That of the take rate (the proportion of the fees kept by
the protocol) is addressed in [41]. Strategic liquidity provision is the topic
of several papers. With a viewpoint rooted into the classical Kyle/Glosten-
Milgrom literature on market microstructure, [12] studies liquidity provision
in a CPMM with a focus on equilibrium issues. Equilibrium questions are also
discussed in [48]. The authors of [30] and [60] discuss the impact of
liquidity concentration (as in Uniswap v3) on liquidity provision strategies.
A microstructural and game-theoretical approach is proposed in [26] in which
the authors discuss, among many interesting topics, the influence of
volatility on the adoption of AMMs. The coexistence of limit order books and
AMMs and the competition between them are discussed in [13], [16] and [52].
The execution of orders in CFMMs has also been studied in several papers: [10]
tackles the static problem of optimal routing across a set of several CFMMs
while [29] studies a dynamic problem of optimal execution à la Almgren-Chriss
on a CFMM.
In this paper, we go beyond CFMMs and propose an optimized oracle-based AMM.
Our ideas and the model we use are inspired by the recent literature on market
making. The quantitative literature on market making initially started in the
1980s with the seminal works [49, 50] (see also the papers [5] and [61] from
the 1980s). It was revived in 2008 in [14] where the authors use stochastic
optimal control tools. Following the latter paper, market making has become an
important research strand in quantitative finance over the last decade. The
authors of [44] presented a thorough analysis of the problem introduced in
[14], and proposed closed-form approximations of the optimal quotes in the
case of exponential intensities. Instead of the expected utility framework of
[14], our model uses the objective function introduced in [32]. Since then, a
lot of new features have been progressively added to the initial models: the
impact of parameters ambiguity is studied in [28], in [21] the authors
considered a framework with various trade sizes, etc. Also, specific models
for different asset classes have been proposed: for stock options in [15], for
foreign exchange currencies in [17], [18] and [19], and for bonds in
[45] (see the books [31] and [42] for a detailed bibliography on market
making). Our model extends the recent market making models tackling the
problem faced by FX dealers, in which exchange rate dynamics are geometric
rather than arithmetic, by considering, instead of the total profit and loss
(PnL) of the liquidity provider, only the extra money (positive or negative)
beyond Hodl.
The remainder of the paper is organized as follows. In Section 2, we introduce
notation and recall classical results about the returns of LPs in CFMMs in a
concise manner. Section 3 goes beyond the case of CFMMs and derives
approximations of efficient pricing functions for an AMM with complete
information (in particular a perfect exchange rate oracle). Section 4 relaxes
the assumptions of Section 3 to be closer to reality by considering
misspecification of the parameters, a lagged oracle and adverse selection by
arbitrageurs.
## 2 The payoff of LPs in CFMMs: a primer
### 2.1 Notation and definition of the protocol
CFMMs constitute one of the simplest forms of AMMs. In CFMM protocols, the
exchange rate for a given pair of coins or tokens is determined as a function
of (i) the reserves in the liquidity pool and (ii) the size (and side) of the
prospective transaction. In what follows, we recall the
functioning of two-currency CFMMs and the now classical analysis of the PnL of
LPs in these protocols. It is noteworthy that, because ownership over the CFMM
reserves is defined proportionally to the amounts deposited by LPs, one can
consider – without loss of generality – the special case of a 1-LP-only
system. (As in most papers in the literature, our analysis is valid between
any two liquidity provision/redemption events; in other words, we assume that
liquidity evolves according to the swaps placed by LTs only.)
In what follows, we shall denote by $(q_{t}^{0})_{t\geq 0}$ and
$(q_{t}^{1})_{t\geq 0}$ the two processes for the reserves corresponding
respectively to the number of coins of currency 0 and currency 1 in the pool
(time $0$ corresponds to the initial posting of reserves). In the case of a
CFMM with no transaction fees, the proposed exchange rate is typically such
that the quantity $f(q^{0}_{t},q^{1}_{t})$ remains constant before and after
a trade (of course, in the case of provision/redemption of liquidity, this
quantity changes, hence the previous remark), where the function
$f:\mathbb{R}_{+}^{*}\times\mathbb{R}_{+}^{*}\rightarrow\mathbb{R}$ is
typically increasing with respect to both of its variables, reflecting the
intent to swap one currency for the other. For what follows, it is more
practical to use an alternative formulation based on level sets. We assume
that reserves always satisfy the equation $q_{t}^{0}=\psi(q_{t}^{1})$ where
the function $\psi:\mathbb{R}_{+}^{*}\to\mathbb{R}_{+}^{*}$ satisfies the
following properties:
* •
$\psi$ is decreasing – this reflects that one currency is swapped for the
other
* •
$\lim_{q^{1}\to 0^{+}}\psi(q^{1})=+\infty$ and
$\lim_{q^{1}\to+\infty}\psi(q^{1})=0$ – to prevent depletion of one of the two
currencies
* •
$\psi$ is strictly convex – convexity guarantees the absence of arbitrage
opportunities (the strictness is important only to differentiate the Legendre
transform of $\psi$ below)
* •
$\psi$ is continuously differentiable – to simplify the analysis (it is the
case in practice).
In this setting, if a client wants to sell $\Delta q^{1}>0$ coins of currency
1 to the pool at date $t$, he/she will receive $\Delta q^{0}$ coins of
currency 0 where
$q_{t}^{0}-\Delta q^{0}=\psi(q_{t}^{1}+\Delta
q^{1}),\quad\text{i.e.}\quad\frac{\Delta q^{0}}{\Delta
q^{1}}=-\frac{\psi(q_{t}^{1}+\Delta q^{1})-\psi(q_{t}^{1})}{\Delta q^{1}}.$
Symmetrically, if a client wants to buy $\Delta q^{1}>0$ coins of currency 1
from the pool at date $t$, he/she will pay $\Delta q^{0}$ coins of currency 0
where
$q_{t}^{0}+\Delta q^{0}=\psi(q_{t}^{1}-\Delta
q^{1}),\quad\text{i.e.}\quad\frac{\Delta q^{0}}{\Delta
q^{1}}=\frac{\psi(q_{t}^{1}-\Delta q^{1})-\psi(q_{t}^{1})}{\Delta q^{1}}.$
Of course, the convexity of $\psi$ ensures that
$-\frac{\psi(q_{t}^{1}+\Delta q^{1})-\psi(q_{t}^{1})}{\Delta
q^{1}}\leq\frac{\psi(q_{t}^{1}-\Delta q^{1})-\psi(q_{t}^{1})}{\Delta q^{1}},$
thereby precluding the existence of arbitrage opportunities within the pool.
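For concreteness, these two quotes can be sketched in code for the
constant-product choice $\psi:q\mapsto k/q$ (a CPMM); the reserve figures
below are purely illustrative and are not taken from any specific pool:

```python
def sell_quote(q0: float, q1: float, dq1: float, psi) -> float:
    """Coins of currency 0 received for selling dq1 coins of currency 1 to the pool."""
    return q0 - psi(q1 + dq1)

def buy_quote(q0: float, q1: float, dq1: float, psi) -> float:
    """Coins of currency 0 paid for buying dq1 coins of currency 1 from the pool."""
    return psi(q1 - dq1) - q0

# Constant-product example: psi(q) = k / q with k = q0 * q1 (illustrative reserves).
q0, q1 = 2_000_000.0, 1_250.0
k = q0 * q1
psi = lambda q: k / q

received = sell_quote(q0, q1, 1.0, psi)   # currency 0 received for 1 unit of currency 1
paid = buy_quote(q0, q1, 1.0, psi)        # currency 0 paid for 1 unit of currency 1
mid = k / q1**2                           # -psi'(q1), the no-arbitrage exchange rate
assert received <= mid <= paid            # convexity precludes round-trip arbitrage
```

The inequality in the last line is exactly the convexity bound stated above,
evaluated for a unit trade.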
### 2.2 No-arbitrage assumption and PnLs
Assuming there exists at time $t$ an external market exchange rate $S_{t}$ for
currency 1 (in terms of currency $0$) at which infinitesimal quantities could
be traded, we clearly see from the above equations that there would be an
arbitrage opportunity at time $t$ if $S_{t}$ was not equal to
$-\psi^{\prime}(q_{t}^{1})$. Neglecting the existence of bid-ask spreads even
for tiny transactions, we write $S_{t}=-\psi^{\prime}(q_{t}^{1})$ and decide
to evaluate (in currency 0 terms) any position in currency 1 with exchange
rate $S_{t}$.
In such an idealized setting, the PnL at time $t$ of the representative LP,
expressed in currency 0 terms (the PnL needs to be accounted in a given
currency; we arbitrarily choose currency 0 in what follows), is therefore
$\text{PnL}_{t}=\left(q_{t}^{0}+S_{t}q_{t}^{1}\right)-\left(q_{0}^{0}+S_{0}q_{0}^{1}\right)$
while that of the same agent who would not have posted reserves in the pool
would be
$\text{PnL}^{\text{Hodl}}_{t}=\left(q_{0}^{0}+S_{t}q_{0}^{1}\right)-\left(q_{0}^{0}+S_{0}q_{0}^{1}\right)=(S_{t}-S_{0})q_{0}^{1}.$
To compare those two PnLs, let us define $\psi^{*}$, the Legendre-Fenchel
transform of $\psi$ (we restrict the function to the interior of its domain),
i.e.
$\psi^{*}:p\in\mathbb{R}^{*}_{-}\mapsto\sup_{q\in\mathbb{R}^{*}_{+}}pq-\psi(q).$
The maximizer $q$ in the definition of $\psi^{*}(p)$ is uniquely defined by
$\psi^{\prime}(q)=p$ and it is hence $q_{t}^{1}$ when $p=-S_{t}$. We therefore
obtain
$\psi^{*}(-S_{t})=-q_{t}^{1}S_{t}-\psi(q_{t}^{1})=-q_{t}^{1}S_{t}-q_{t}^{0}\quad\text{and}\quad\psi^{*^{\prime}}(-S_{t})=q_{t}^{1}.$
In particular, we have,
$\text{PnL}_{t}=\psi^{*}(-S_{0})-\psi^{*}(-S_{t})\quad\text{and}\quad\text{PnL}^{\text{Hodl}}_{t}=(S_{t}-S_{0})\psi^{*^{\prime}}(-S_{0}).$
Because $\psi^{*}$ is strictly convex, its graph lies above the tangent line
at all points and we have therefore
$\text{PnL}_{t}-\text{PnL}^{\text{Hodl}}_{t}=\psi^{*}(-S_{0})-\psi^{*}(-S_{t})-(S_{t}-S_{0})\psi^{*^{\prime}}(-S_{0})\leq
0,$
with equality if and only if $S_{t}=S_{0}$. This result corresponds to the
notion of impermanent loss since the loss vanishes if the exchange rate goes
back to its value when reserves were added to the pool.
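To make the inequality tangible, one can check it numerically in the
constant-product case $\psi:q\mapsto k/q$, for which
$\psi^{*}(-S)=-2\sqrt{kS}$ and the pool value is therefore $2\sqrt{kS}$; the
figures below are illustrative:

```python
import math

def pool_value(S: float, k: float) -> float:
    """Mark-to-market value q0 + S*q1 = 2*sqrt(k*S) of a constant-product pool."""
    return 2.0 * math.sqrt(k * S)

def excess_pnl(S0: float, St: float, k: float) -> float:
    """PnL_t - PnL_t^Hodl for a constant-product pool (nonpositive: impermanent loss)."""
    q1_0 = math.sqrt(k / S0)                 # initial currency-1 reserves, psi*'(-S0)
    pnl = pool_value(St, k) - pool_value(S0, k)
    pnl_hodl = (St - S0) * q1_0
    return pnl - pnl_hodl

k = 2_000_000.0 * 1_250.0                    # illustrative constant q0 * q1
assert excess_pnl(1600.0, 1600.0, k) == 0.0  # equality iff S_t = S_0
assert excess_pnl(1600.0, 2000.0, k) < 0.0   # loss whether the rate rises...
assert excess_pnl(1600.0, 1200.0, k) < 0.0   # ...or falls
```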
### 2.3 Analysis and discussion
The above inequality is fundamental as it claims that, under mild assumptions,
there is no benefit in providing liquidity through a CFMM with no fees. In
other words, a minimal amount of fees is necessary to encourage liquidity
provision. In particular, competition between CFMMs cannot decrease fees to
zero.
The above computations prove that the payoff of an LP in a CFMM with no fees is
not only nonpositive but also concave, i.e. similar to that of an option
seller. Assuming that $\psi^{*}$ is twice continuously differentiable and
applying Itô’s formula as in [57], we see that
$\text{PnL}_{t}-\text{PnL}^{\text{Hodl}}_{t}=\int_{0}^{t}(\psi^{*^{\prime}}(-S_{s})-\psi^{*^{\prime}}(-S_{0}))dS_{s}-\frac{1}{2}\int_{0}^{t}\psi^{*^{\prime\prime}}(-S_{s})d\langle
S\rangle_{s}$
where $\langle\cdot\rangle$ denotes quadratic variation. The first term can be
(theoretically) hedged away through a self-financing strategy with
$-(\psi^{*^{\prime}}(-S_{s})-\psi^{*^{\prime}}(-S_{0}))$ coins of asset $1$ at
time $s$, while the second term, called LVR in [57], is always nonpositive
because $\psi^{*}$ is convex.
Regarding the first term, continuous-time hedging with no friction is a
classical theoretical assumption. However, in practice, hedging raises a lot
of questions given the high frequency at which trades happen in AMMs: when
should we hedge? on which venue? should we cross the spread in limit order
books or post limit orders? what about our market impact? etc. Furthermore,
hedging requires trading on another venue and may be complicated to implement
inside an AMM: LPs should carry out the hedging process by themselves, all the
more since LPs might not want to be hedged at the level of each AMM but rather
at the level of their portfolio to benefit from netting effects. In any case,
one can argue that part of the risk could be hedged away, leading therefore to
a reduction of the fees charged by CFMMs to LTs in order to compensate LPs.
This is an important route for future research in the field.
Regarding the second term, it is intrinsically related to the main problem of
CFMM protocols. In a CFMM, the price discovery process indeed occurs solely
thanks to trades with LTs, and LTs therefore extract value from LPs. To avoid
this value extraction, an interesting idea consists in using a price oracle.
This is the purpose of this paper.
## 3 Efficient pricing functions in the complete information case
### 3.1 Modeling framework
In this section, we consider a filtered probability space
$\left(\Omega,\mathcal{F},\mathbb{P};\mathbb{F}=(\mathcal{F}_{t})_{t\geq
0}\right)$ satisfying the usual conditions.
In the previous section, the automated market making protocol did not rely on
any external information to propose exchange rates. It indeed proposed
exchange rates for various transaction sizes based only on the reserves lying
in the pool at a given time. We now consider the theoretical and idealized
case where an exogenous market exchange rate is known, for instance the mid-
price on a centralized exchange based on a limit order book, like those of
Binance, Kraken or Coinbase. This exchange rate is indicative rather than
tradable, even for infinitesimal sizes (equivalently, we assume that the AMM
is not able to trade on other venues), but we still denote by
$(S_{t})_{t\in\mathbb{R}_{+}}$ the market exchange rate process, stating the
value of currency 1 in terms of currency 0. We assume for it the following
dynamics:
$dS_{t}=\mu S_{t}dt+\sigma S_{t}dW_{t},$
where $\mu\in\mathbb{R}$ is a known deterministic drift, $\sigma>0$ a known
deterministic volatility and $\left(W_{t}\right)_{t\in\mathbb{R}_{+}}$ a
standard Brownian motion.
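These dynamics admit the exact discretization
$S_{t+\Delta t}=S_{t}\exp\big((\mu-\tfrac{1}{2}\sigma^{2})\Delta t+\sigma\sqrt{\Delta t}\,Z\big)$
with $Z\sim\mathcal{N}(0,1)$, which can be sketched as follows (time is
measured in days and the parameter values are illustrative):

```python
import math
import random

def simulate_gbm(S0, mu, sigma, T, n_steps, rng):
    """Exact simulation of dS = mu*S dt + sigma*S dW on a uniform grid of [0, T]."""
    dt = T / n_steps
    path = [S0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma**2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

# Illustrative values: S0 = 1600 USD/ETH, mu = 0, sigma = 1/sqrt(365) per sqrt(day).
path = simulate_gbm(1600.0, 0.0, 1.0 / math.sqrt(365.0), 0.5, 1000, random.Random(0))
assert len(path) == 1001 and min(path) > 0.0   # a GBM path stays positive
```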
To study the PnL of LPs, we need assumptions on the demand of LTs. To build a
parsimonious model, we consider as in [19] that transaction sizes are labelled
in the accounting currency (currency 0) and that transactions are decomposed
into two parts: one part corresponding to the market exchange rate and another
part corresponding to a markup (that might very rarely be a discount) that is
accounted in currency 0, whatever the side of the transaction, for the sake of
simplicity. More precisely, if a client wants to buy $z$ coins of currency 0
at time $t$, then $z/S_{t}$ coins of currency 1 will be asked and
$z\delta^{1,0}(t,z)$ out of the total of $z$ coins of currency 0 will not be
transferred to him/her. Symmetrically, if a client wants to sell $z$ coins of
currency 0 at time $t$, then $z/S_{t}$ coins of currency 1 will be offered to
him/her and $z\delta^{0,1}(t,z)$ extra coins of currency 0 will be asked as a
markup. ($\delta^{0,1}(t,z)$ and $\delta^{1,0}(t,z)$, converted into basis
points, can be regarded as “mid”-to-bid and ask-to-“mid” spreads in basis
points.) Everything indeed works almost as if the prices proposed for swapping
were respectively $S_{t}(1-\delta^{1,0}(t,z))$ (bid) and
$S_{t}(1+\delta^{0,1}(t,z))$ (ask). While $S_{t}$ serves as an indicative
price independent of the size $z$, the ultimate exchange rate depends on the
size (and direction) of the transaction.
We assume that the markups $\left(\delta^{0,1},\delta^{1,0}\right)$ belong to
$\begin{split}\mathcal{A}:=\Bigg{\\{}\delta=\left(\delta^{0,1},\delta^{1,0}\right):\Omega\times[0,T]&\times\mathbb{R}_{+}^{*}\mapsto\mathbb{R}^{2}\bigg{|}\delta\text{
is }\mathcal{P}\otimes\mathcal{B}(\mathbb{R}_{+}^{*})\text{-measurable }\\\
\qquad\qquad\qquad\qquad&\text{and
}\delta^{0,1}(t,z)\wedge\delta^{1,0}(t,z)\geq-C\ \mathbb{P}\otimes dt\otimes
dz\ a.e.\Bigg{\\}},\end{split}$
for a given (large) constant $C>0$. Here, $\mathcal{P}$ denotes the
$\sigma$-algebra of $\mathbb{F}$-predictable subsets of $\Omega\times[0,T]$
and $\mathcal{B}(\mathbb{R}_{+}^{*})$ denotes the Borelian sets of
$\mathbb{R}_{+}^{*}$.
In the model, to simplify the analysis, we accumulate these markups in a
process $(X_{t})_{t\in[0,T]}$, separated from the reserves. Its dynamics are
$dX_{t}=\int_{z\in\mathbb{R}_{+}^{*}}z\delta^{0,1}(t,z)J^{0,1}(dt,dz)+\int_{z\in\mathbb{R}_{+}^{*}}z\delta^{1,0}(t,z)J^{1,0}(dt,dz),$
with $X_{0}=0$, where $J^{0,1}(dt,dz)$ and $J^{1,0}(dt,dz)$ are two
$\mathbb{R}_{+}^{*}$-marked point processes modelling transactions through
which the AMM sells currency 1 and receives currency 0 (for $J^{0,1}(dt,dz)$)
and transactions through which the AMM sells currency 0 and receives currency
1 (for $J^{1,0}(dt,dz)$).
These marked point processes also allow us to write the dynamics of the reserves:
$dq^{0}_{t}=\int_{z\in\mathbb{R}_{+}^{*}}z\left(J^{0,1}(dt,dz)-J^{1,0}(dt,dz)\right)\quad\text{and}\quad
dq^{1}_{t}=\int_{z\in\mathbb{R}_{+}^{*}}\frac{z}{S_{t}}\left(J^{1,0}(dt,dz)-J^{0,1}(dt,dz)\right).$
Because we are interested in the PnL of an LP compared to that of an agent who
would have held the coins outside of the AMM, we introduce the following two
processes:
$\left(Y^{0}_{t}\right)_{t\in\mathbb{R}_{+}}=\left((q^{0}_{t}-q^{0}_{0})\right)_{t\in\mathbb{R}_{+}}\quad\text{and}\quad\left(Y^{1}_{t}\right)_{t\in\mathbb{R}_{+}}=\left((q^{1}_{t}-q^{1}_{0})S_{t}\right)_{t\in\mathbb{R}_{+}}.$
Their dynamics are given by:
$dY^{0}_{t}=\int_{z\in\mathbb{R}_{+}^{*}}\\!\\!z\left(J^{0,1}(dt,dz)-J^{1,0}(dt,dz)\right)\text{
and }dY^{1}_{t}=\mu Y^{1}_{t}dt+\sigma
Y^{1}_{t}dW_{t}+\int_{z\in\mathbb{R}_{+}^{*}}\\!\\!z\left(J^{1,0}(dt,dz)-J^{0,1}(dt,dz)\right).$
We assume that the processes $J^{0,1}(dt,dz)$ and $J^{1,0}(dt,dz)$ have known
intensity kernels given respectively by
$(\nu^{0,1}_{t}(dz))_{t\in\mathbb{R}_{+}}$ and
$(\nu^{1,0}_{t}(dz))_{t\in\mathbb{R}_{+}}$, verifying
$\nu^{0,1}_{t}(dz)=\Lambda^{0,1}\left(z,\delta^{0,1}(t,z)\right)\mathds{1}_{\\{q^{1}_{t-}\geq\frac{z}{S_{t}}\\}}m(dz)\quad\text{and}\quad\nu^{1,0}_{t}(dz)=\Lambda^{1,0}\left(z,\delta^{1,0}(t,z)\right)\mathds{1}_{\\{q^{0}_{t-}\geq
z\\}}m(dz),$
where $m$ is a measure (typically Lebesgue or discrete) and $\Lambda^{0,1}$
and $\Lambda^{1,0}$ are called the intensity functions of the processes
$J^{0,1}(dt,dz)$ and $J^{1,0}(dt,dz)$ respectively. In the standard literature
on market making (see [18] or [21], for instance), these intensity functions
(which correspond to the demand curve of LTs) are assumed of the logistic
type, i.e.
$\Lambda^{0,1}(z,\delta)=\lambda^{0,1}(z)\frac{1}{1+e^{\alpha^{0,1}(z)+\beta^{0,1}(z)\delta}}\quad\text{and}\quad\Lambda^{1,0}(z,\delta)=\lambda^{1,0}(z)\frac{1}{1+e^{\alpha^{1,0}(z)+\beta^{1,0}(z)\delta}},$
where $\lambda^{0,1}(z)m(dz)$ and $\lambda^{1,0}(z)m(dz)$ describe the maximum
number of transactions of size in $[z,z+dz]$ per unit of time (the height of
the demand curve) and $\alpha^{0,1}(z)$, $\beta^{0,1}(z)$, $\alpha^{1,0}(z)$,
and $\beta^{1,0}(z)$ the shape of the demand curve, in particular the
sensitivity to the markups. It is noteworthy that indicator functions
represent the impossibility for the AMM to propose exchange rates for
transactions that cannot occur because reserves are too low in the demanded
currency.
The PnL minus the PnL of Hodl at time $T$, hereafter the excess PnL, of an LP
is therefore given by
$\displaystyle X_{T}+Y^{0}_{T}+Y^{1}_{T}$
$\displaystyle=\int_{0}^{T}\int_{z\in\mathbb{R}_{+}^{*}}z\delta^{0,1}(t,z)J^{0,1}(dt,dz)+\int_{0}^{T}\int_{z\in\mathbb{R}_{+}^{*}}z\delta^{1,0}(t,z)J^{1,0}(dt,dz)$
$\displaystyle\qquad+\int_{0}^{T}\int_{z\in\mathbb{R}_{+}^{*}}\\!\\!z\left(J^{0,1}(dt,dz)-J^{1,0}(dt,dz)\right)+\int_{0}^{T}\mu
Y^{1}_{t}dt+\int_{0}^{T}\sigma Y^{1}_{t}dW_{t}$
$\displaystyle\qquad+\int_{0}^{T}\int_{z\in\mathbb{R}_{+}^{*}}\\!\\!z\left(J^{1,0}(dt,dz)-J^{0,1}(dt,dz)\right)$
$\displaystyle=\int_{0}^{T}\int_{z\in\mathbb{R}_{+}^{*}}z\delta^{0,1}(t,z)J^{0,1}(dt,dz)+\int_{0}^{T}\int_{z\in\mathbb{R}_{+}^{*}}z\delta^{1,0}(t,z)J^{1,0}(dt,dz)$
$\displaystyle\qquad+\int_{0}^{T}\mu Y^{1}_{t}dt+\int_{0}^{T}\sigma
Y^{1}_{t}dW_{t}.$ (1)
We see that the excess PnL can be decomposed into two parts: one corresponding
to the accumulated markups and the other one representing the deviation from
the Hodl strategy reserves. In particular, it contains jump terms and a
Brownian term. (As in the recent article [57], one can argue that the term
$\int_{0}^{T}\mu Y^{1}_{t}dt+\int_{0}^{T}\sigma
Y^{1}_{t}dW_{t}=\int_{0}^{T}(q^{1}_{t}-q^{1}_{0})dS_{t}$ could be hedged, at
least theoretically.)
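As a sanity check on this decomposition, the excess PnL of a naive strategy
with constant markups $\delta^{0,1}=\delta^{1,0}=\delta$ can be simulated in
discrete time. The sketch below assumes $\mu=0$, a single trade size $z$ and a
constant per-side trade intensity; all numerical values are illustrative:

```python
import math
import random

def simulate_excess_pnl(delta, trades_per_day=96.0, z=4000.0,
                        sigma=1.0 / math.sqrt(365.0), T=0.5,
                        n_steps=5000, seed=0):
    """One sample of X_T + Y^0_T + Y^1_T for constant markups (mu = 0, time in days)."""
    rng = random.Random(seed)
    dt = T / n_steps
    p_trade = trades_per_day * dt            # per-side trade probability per step
    X, Y0, Y1 = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        # (q^1_t - q^1_0) S_t inherits the exchange rate's multiplicative return.
        Y1 *= math.exp(-0.5 * sigma**2 * dt
                       + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        if rng.random() < p_trade:           # J^{0,1}: the AMM sells currency 1
            X += z * delta; Y0 += z; Y1 -= z
        if rng.random() < p_trade:           # J^{1,0}: the AMM sells currency 0
            X += z * delta; Y0 -= z; Y1 += z
    return X + Y0 + Y1

# With sigma = 0 the jump terms cancel in Y^0 + Y^1 and only the markups remain.
assert simulate_excess_pnl(0.0, sigma=0.0) == 0.0
assert simulate_excess_pnl(0.001, sigma=0.0) > 0.0
```

With $\sigma>0$, averaging `simulate_excess_pnl` over many seeds recovers the
mean / standard deviation trade-off of the constant-markup strategies.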
### 3.2 Towards an efficient frontier
We now derive the optimal strategy in this framework with complete information
for an objective function that is not exactly a mean-variance one but a more
practical one for us in order to use the tools of stochastic optimal control
theory. In fact, we only consider the variance of $\int_{0}^{T}\sigma
Y^{1}_{t}dW_{t}$ which is a very reasonable proxy for the variance of the
excess PnL, as most of the risk comes from the Brownian term and not from the
drift or the jump terms.
More precisely, for each $\gamma>0$ (the parameter $\gamma$ can be interpreted
in two ways: as a risk aversion parameter or as a Lagrange multiplier, as in
the Markowitz framework), we introduce the following stochastic optimal
control problem:
$\sup_{(\delta^{0,1},\delta^{1,0})\in\mathcal{A}}\mathbb{E}\Bigg{[}\int\limits_{0}^{T}\Bigg{\\{}\int_{z\in\mathbb{R}_{+}^{*}}\Big{(}z\delta^{0,1}(t,z)\Lambda^{0,1}(z,\delta^{0,1}(t,z))\mathds{1}_{\\{q^{1}_{t-}\geq\frac{z}{S_{t}}\\}}$
$\qquad\qquad\qquad\qquad+z\delta^{1,0}(t,z)\Lambda^{1,0}(z,\delta^{1,0}(t,z))\mathds{1}_{\\{q^{0}_{t-}\geq
z\\}}\Big{)}m(dz)+\mu
Y^{1}_{t}-\frac{\gamma}{2}\sigma^{2}(Y^{1}_{t})^{2}\Bigg{\\}}dt\Bigg{]}.$
This stochastic optimal control problem has $4$ state variables and is
therefore hardly tractable, even numerically. Nevertheless, it is noteworthy
that for moderate values of $\mu$, the quadratic penalty provides an incentive
to keep the composition of the pool close to the initial one. (In practice,
the choice of $\gamma$ should be contingent on the pool size and the level of
liquidity. For example, a higher value of $\gamma$ might be suitable when the
pool size is smaller and/or liquidity demand is high; conversely, a lower
$\gamma$ could be adopted when the pool is larger and liquidity demand is
lower. Notably, the value of $\gamma$ could be adjusted at each time of
liquidity provision/redemption. In this paper, as in most of the literature,
we analyse the PnL of LPs between two times of liquidity provision/redemption
and $\gamma$ is fixed.) Therefore, the no-depletion constraints (which
translate into the indicator functions in the above formula) can be regarded
as superfluous, and we subsequently approximate the problem by removing the
latter terms; in fact, in our numerical examples, we observe that, with the
markups obtained using this approach, the reserves remain positive at all
times, i.e. the constraints are not binding. In other words, we consider the
following modified objective function:
$\displaystyle\mathbb{E}\Bigg{[}\int\limits_{0}^{T}\Bigg{\\{}\int_{z\in\mathbb{R}_{+}^{*}}\Big{(}z\delta^{0,1}(t,z)\Lambda^{0,1}(z,\delta^{0,1}(t,z))+z\delta^{1,0}(t,z)\Lambda^{1,0}(z,\delta^{1,0}(t,z))\Big{)}m(dz)+\mu
Y^{1}_{t}-\frac{\gamma}{2}\sigma^{2}(Y^{1}_{t})^{2}\Bigg{\\}}dt\Bigg{]}.$
It is noteworthy that this new problem only depends on one state variable,
$Y^{1}$, whose dynamic is Markovian. It can be addressed with classical tools
of stochastic optimal control. For that purpose, we introduce the value
function $\theta:[0,T]\times\mathbb{R}\rightarrow\mathbb{R}$ associated with
this stochastic optimal control problem. It is well known that $\theta$ solves
the following Hamilton-Jacobi-Bellman equation (if continuous-time hedging
with no friction was possible as in [57], the HJB equation would be Eq. (2)
with $\mu=0$ and $\sigma=0$; this would lead to $\theta(t,y)=C(T-t)$ for some
constant $C$ independent of $y$ and an optimal strategy independent of
$Y^{1}_{t}$. A softer way to account for the possibility of hedging could be,
as in [17], to add a term $\sup_{v}v\partial_{y}\theta-L(v)$ with
$L:v\mapsto\psi|v|+\eta|v|^{1+\phi}$ an execution cost function as in the
optimal execution literature; in particular, the cost of liquidity and the
need to cross the spread would then be modeled):
$\begin{cases}&0=\partial_{t}\theta(t,y)+\mu
y\left(1+\partial_{y}\theta(t,y)\right)-\frac{\gamma}{2}\sigma^{2}y^{2}+\frac{1}{2}\sigma^{2}y^{2}\partial^{2}_{yy}\theta(t,y)\\\
&\qquad+\int_{\mathbb{R}_{+}^{*}}\left(zH^{0,1}\left(z,\frac{\theta(t,y)-\theta(t,y-z)}{z}\right)+zH^{1,0}\left(z,\frac{\theta(t,y)-\theta(t,y+z)}{z}\right)\right)m(dz),\\\
&\theta(T,y)=0,\end{cases}$ (2)
where
$H^{0,1}:(z,p)\in\mathbb{R}_{+}^{*}\times\mathbb{R}\mapsto\underset{\delta\geq-C}{\sup}\
\Lambda^{0,1}(z,\delta)(\delta-p)\text{ and
}H^{1,0}:(z,p)\in\mathbb{R}_{+}^{*}\times\mathbb{R}\mapsto\underset{\delta\geq-C}{\sup}\
\Lambda^{1,0}(z,\delta)(\delta-p).$
Using the same ideas as in [43], we see that for $i\neq j\in\\{0,1\\}$, the
supremum in the definition of $H^{i,j}(z,p)$ is reached at a unique
$\bar{\delta}^{i,j}(z,p)$ given by
$\bar{\delta}^{i,j}(z,p)=(\Lambda^{i,j})^{-1}\left(z,-\partial_{p}{H^{i,j}}(z,p)\right)$
where for all $z$, $(\Lambda^{i,j})^{-1}(z,.)$ denotes the inverse of the
function $\Lambda^{i,j}(z,.)$. Moreover, the markups that maximize our
modified objective function are obtained in the following form
$\displaystyle\delta^{0,1*}(t,z)=\bar{\delta}^{0,1}\left(z,\frac{\theta(t,Y^{1}_{t-})-\theta(t,Y^{1}_{t-}-z)}{z}\right)$
(3)
and
$\displaystyle\delta^{1,0*}(t,z)=\bar{\delta}^{1,0}\left(z,\frac{\theta(t,Y^{1}_{t-})-\theta(t,Y^{1}_{t-}+z)}{z}\right).$
(4)
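Before solving the full HJB equation, the pointwise optimization behind
$H^{i,j}$ and $\bar{\delta}^{i,j}$ can be sanity-checked by direct numerical
maximization over $\delta$ for a logistic intensity; the parameter values
below are illustrative, not calibrated:

```python
import math

def intensity(delta, lam=100.0, alpha=-1.8, beta=1300.0):
    """Logistic intensity Lambda(z, delta) for a single trade size (z dropped)."""
    return lam / (1.0 + math.exp(alpha + beta * delta))

def hamiltonian(p, C=0.01, lo=None, hi=0.05, n=20001):
    """H(z, p) = sup_{delta >= -C} Lambda(delta) * (delta - p), by grid search."""
    lo = -C if lo is None else lo
    best_value, best_delta = -float("inf"), lo
    for i in range(n):
        d = lo + (hi - lo) * i / (n - 1)
        v = intensity(d) * (d - p)
        if v > best_value:
            best_value, best_delta = v, d
    return best_value, best_delta

H0, d0 = hamiltonian(0.0)
H1, d1 = hamiltonian(0.001)
assert d1 > d0    # a larger "reservation price" p shifts the optimal markup up
assert H0 > H1    # while the value of the Hamiltonian decreases in p
```

In the actual scheme, the analytic characterization
$\bar{\delta}^{i,j}(z,p)=(\Lambda^{i,j})^{-1}(z,-\partial_{p}H^{i,j}(z,p))$
is of course preferable to a grid search.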
In practice, we proceed in two steps to solve the above stochastic optimal
control problem. First, we approximate numerically the solution to the HJB
equation (2). For that purpose, we employ operator splitting in order to deal
separately with the differential terms and the non-local terms. For the
differential terms, we use an implicit scheme with Neumann conditions at the
boundaries ($\pm\bar{y}$ for $\bar{y}$ chosen large). As for the non-local
terms, we use a discrete measure $m$ and apply a Newton-Raphson algorithm to
solve the implicit scheme at each time step (we assume that the AMM does not
accept trades that would cross the boundaries $\pm\bar{y}$). This approach
proved to be computationally efficient. Once the value function $\theta$ is
approximated, the second step consists in plugging the approximation of
$\theta$ into Equations (3) and (4) to obtain the associated markups.
### 3.3 Numerical example
In what follows, we are going to illustrate the excess PnL associated with
various strategies. In order to illustrate our findings, we use throughout the
paper a market simulator based on a discrete-time version of the above
framework, with the following realistic parameters:
* •
currency 0: USD, currency 1: ETH
* •
Initial exchange rate: 1600 USD per ETH
* •
Drift: $\mu=0\text{ day}^{-1}$
* •
Volatility: $\sigma=1\text{ year}^{-\frac{1}{2}}$
* •
Single transaction size: 4000 USD (i.e. $m$ is a Dirac mass)
* •
Intensity functions: $\lambda^{0,1}=\lambda^{1,0}=100\text{ day}^{-1}$,
$\alpha^{0,1}=\alpha^{1,0}=-1.8$, $\beta^{0,1}=\beta^{1,0}=1300\text{
bps}^{-1}$ (this corresponds to an average of 86 trades per day when the
proposed exchange rate always equals the market exchange rate, 96 trades per
day when the proposed exchange rate is the market exchange rate improved by 10
bps, and 62 trades per day when the proposed exchange rate is the market
exchange rate worsened by 10 bps)
* •
Initial inventory: $2000000$ USD and $1250$ ETH
* •
Time horizon: $T=0.5\text{ day}.$
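The trade counts quoted in the intensity bullet point above can be reproduced
from the logistic demand curve, reading the markup $\delta$ as a decimal
fraction (10 bps = 0.001):

```python
import math

def trades_per_day(delta, lam=100.0, alpha=-1.8, beta=1300.0):
    """One-side expected number of trades per day, Lambda(z, delta), for the
    single 4000-USD transaction size."""
    return lam / (1.0 + math.exp(alpha + beta * delta))

assert round(trades_per_day(0.0)) == 86       # quoting exactly at the market rate
assert round(trades_per_day(-0.001)) == 96    # rate improved by 10 bps
assert round(trades_per_day(0.001)) == 62     # rate worsened by 10 bps
```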
We plot in Figure 1 the performance of AMMs following different strategies:
* •
Naive oracle-based strategies consisting in choosing $\delta^{0,1}$ and
$\delta^{1,0}$ constant
* •
CPMM strategies with various transaction fees
* •
CFMM strategies as described in [35, 36]
* •
An oracle-based strategy documented in [23]
* •
Oracle-based strategies associated with the markups we derived from the above
stochastic optimal control problem.
Figure 1: Performance of strategies in terms of mean / standard deviation of
excess PnL. In blue: naive strategies with constant
$\delta^{0,1},\delta^{1,0}$ (the number next to the point corresponds to the
value of $\delta^{0,1}$ and $\delta^{1,0}$ in bps). In grey: CPMM with fees
(the numbers next to the points correspond to the transaction fees in bps). In
pink: CFMM without market exchange rate oracle as described in [35, 36] for
different sets of realistic parameters. In purple $(\star)$: AMM with market
exchange rate oracle as described in [23]. In green: approximation of the
efficient frontier, obtained using the optimal markups for different levels of
risk aversion (the numbers next to the points correspond to the value of
$\gamma$).
We used Monte-Carlo simulations with 1000 trajectories. The (approximation of
the) efficient frontier is parameterized by $\gamma$. Unsurprisingly, when
$\gamma$ is large (i.e. large risk aversion), the optimal strategy consists in
providing liquidity at a very high cost, and the resulting risk / return
profile gets close to the origin $(0,0)$. As $\gamma$ decreases, the efficient
frontier describes an increasing curve that stops (unlike in the Markowitz
case) at $\gamma=0$, at a point corresponding to (optimized) constant markups
and to the maximum expected excess PnL.
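Each point on such a plot is simply the (standard deviation, mean) pair of the simulated excess-PnL sample. A minimal sketch with a made-up PnL distribution (the paper's simulator, not reproduced here, generates the actual trajectories):

```python
import random
import statistics

random.seed(0)
# Hypothetical excess-PnL outcomes over 1000 Monte-Carlo trajectories
# (the Gaussian parameters here are illustrative, not the paper's results).
excess_pnl = [random.gauss(400.0, 1500.0) for _ in range(1000)]

risk = statistics.pstdev(excess_pnl)  # x-coordinate: standard deviation of excess PnL
ret = statistics.mean(excess_pnl)     # y-coordinate: mean excess PnL
```

Repeating this for each value of $\gamma$ traces out the approximation of the efficient frontier.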
Although this first graph corresponds to an idealized world where the market
exchange rate is perfectly known at all times (and where there is therefore no
arbitrage), it sheds light on the intrinsic limitations of (oracle-free) CFMMs
compared to oracle-based AMMs. In our setting, CFMMs indeed underperform
optimal strategies, as expected, but also naive strategies, both in terms of
expected excess PnL and in terms of standard deviation. It is noteworthy that
implementing a hedging strategy for CFMMs would reduce the standard deviation
but leave negative expected excess PnLs in our setting.
These results, however, need to be qualified. Naive strategies with constant
transaction fees lack robustness: in the presence of an arbitrage
opportunity due to information asymmetry or mispricing, the pool would indeed
be depleted of one of the two assets! In the next section, we show that our
optimized oracle-based strategies are, however, robust to misspecifications,
lags in price oracles and the introduction of adverse selection.
## 4 Beyond the idealized case: misspecification, incomplete information and
introduction of adverse selection
In practice, an AMM can be designed with some values of the drift, volatility
and liquidity parameters, but different values may realize. Furthermore, the
drift is highly unpredictable, volatility is not constant but clustered, while
liquidity varies depending on market conditions (for models taking
account of the stochastic nature of volatility and liquidity, see [22]).
Consequently, it is of the utmost importance to study the impact of parameter
misspecification on the risk / return profile of allegedly efficient
strategies, i.e. what happens if the realized drift, volatility and liquidity
parameters do not match those used in the smart contract.
In Figure 2, we consider the case where the actual liquidity parameters
$\lambda^{0,1}$ and $\lambda^{1,0}$ one inputs in the simulator are the same
as in the above section, but the liquidity parameters of the strategy are set
to $\lambda^{0,1}=\lambda^{1,0}=50\text{ day}^{-1}$. We compare the results to
the efficient frontier and observe that the misspecified strategies remain
almost exactly on the efficient frontier, with a shift toward higher risk
aversions. Similarly, we consider in Figure 3 the case where the actual
volatility one inputs in the simulator is equal to 100% ($\sigma=1\text{
year}^{-\frac{1}{2}}$) as above, but the volatility used to compute the
strategy is 120% ($\sigma=1.2\text{ year}^{-\frac{1}{2}}$). We compare the
results to the efficient frontier and observe the same phenomenon as for
liquidity parameters. These results can be explained theoretically. Indeed, in
the absence of drift and ignoring the Laplacian term
$\frac{1}{2}\sigma^{2}y^{2}\partial^{2}_{yy}\theta(t,y)$, it can be shown that
when $\lambda^{0,1}=\lambda^{1,0}=:\lambda$, misspecifying $\lambda$ or
$\sigma$ has the same effect as choosing another $\gamma$, because the
solution to Equation (2) depends only on the ratio
$\frac{\lambda}{\gamma\sigma^{2}}$. This is in line with the observation made
in [18].
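This equivalence can be made concrete: if the control depends only on $\frac{\lambda}{\gamma\sigma^{2}}$, then running a strategy computed with misspecified $(\lambda_{\text{used}},\sigma_{\text{used}})$ behaves like the correctly specified strategy run with a different risk aversion. A sketch (the helper name is ours, not the paper's):

```python
def effective_gamma(gamma, lam_true, lam_used, sigma_true, sigma_used):
    """Risk aversion gamma' such that lam_used/(gamma*sigma_used^2)
    equals lam_true/(gamma'*sigma_true^2): the misspecified strategy
    behaves like the true-parameter strategy run with this gamma'."""
    return gamma * (lam_true / lam_used) * (sigma_used / sigma_true) ** 2

# Halving lambda (50 instead of 100) doubles the effective risk aversion,
print(effective_gamma(1.0, 100.0, 50.0, 1.0, 1.0))            # 2.0
# and overstating volatility by 20% scales it by 1.2^2 = 1.44.
print(round(effective_gamma(1.0, 100.0, 100.0, 1.0, 1.2), 2))  # 1.44
```

Both effective risk aversions exceed the nominal one, matching the observed shift of the misspecified strategies toward higher risk aversions along the frontier.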
Figure 2: Performance of strategies in terms of mean / standard deviation of
excess PnL when $\lambda^{0,1}$ and $\lambda^{1,0}$ are misspecified. In
green: efficient frontier, obtained with the efficient strategy for different
levels of risk aversion with complete information. In pink: performance of the
misspecified strategy obtained with $\lambda^{0,1}=\lambda^{1,0}=50\text{
day}^{-1}$ for different levels of risk aversion. The numbers next to the
points correspond to the value of $\gamma$.
Figure 3: Performance of strategies in terms of mean / standard deviation of
excess PnL when $\sigma$ is misspecified. In green: efficient frontier,
obtained with the efficient strategy for different levels of risk aversion
with complete information. In pink: performance of the misspecified strategy
obtained with $\sigma=1.2\text{ year}^{-\frac{1}{2}}$ for different levels of
risk aversion. The numbers next to the points correspond to the value of
$\gamma$.
Finally, we consider in Figure 4 the case where the simulated parameters
correspond to those used in the previous section (in particular, no drift), but
the drift one inputs to compute the strategy is equal to 40% ($\mu=0.4\text{
year}^{-1}$). We compare the results to the efficient frontier and observe
that the misspecified strategies remain close to the efficient frontier. It is
noteworthy that we did not compute the point corresponding to no risk aversion
($\gamma=0$) because the optimal strategy of any risk-neutral agent with a
view on the drift is to get an infinite directional position: here, this means
that the AMM should sell one of the two assets to buy the other.
Figure 4: Performance of strategies in terms of mean / standard deviation of
excess PnL when $\mu$ is misspecified. In green: efficient frontier, obtained
with the efficient strategy for different levels of risk aversion with
complete information. In pink: performance of the misspecified strategy
obtained with $\mu=0.4\text{ year}^{-1}$ for different levels of risk
aversion. The numbers next to the points correspond to the value of $\gamma$.
Misspecification can be a problem, but the main issue faced by AMMs is
adverse selection. It is worth recalling that the central flaw of CFMMs is
that value is extracted by LTs. Of course, in the above model with complete
information, the market exchange rate is known at all times and adverse
selection does not exist. To introduce adverse selection, we assume that the
AMM cannot observe the market exchange rate perfectly but rather with a lag,
while the demand curves of LTs are centered on the right (current) price. We
show the results in Figure 5 and clearly see that performances move away from
the efficient frontier as the delay increases. However, the performance
remains good for reasonable values of the lag.
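A minimal sketch of the lag mechanism (the step-indexed price path and function name are illustrative, not the paper's simulator):

```python
def lagged_oracle(price_path, t, lag):
    """The AMM quotes around the rate observed `lag` steps ago, while
    liquidity takers' demand stays centered on the current price_path[t]."""
    return price_path[max(0, t - lag)]

path = [100.0, 100.5, 101.0, 101.5]
print(lagged_oracle(path, 3, 2))  # quotes around 100.5 while the market is at 101.5
```

The gap between the lagged quote and the current price is what creates adverse selection against the AMM.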
Partial information regarding the market exchange rate can sometimes result in
arbitrage opportunities for LTs. These arbitrage opportunities are already
taken into account in the demand curves modeled by the intensity functions,
though not in a systematic way. In other words, if a price appears to be very
good for LTs, the probability that a transaction occurs is very high in our
model. In practice however, there exists a category of agents who
systematically exploit arbitrage opportunities: they trade with the AMM until
arbitrage opportunities disappear. (Of course, these trades come with a
cost for arbitrageurs, who still have to pay the fees associated with a swap
on another (centralized) exchange; in our case, we assume a proportional cost
of $7.5\text{ bps}$. Moreover, note that in our simulations, arbitrageurs
trade a size that is optimal for them, and thus does not necessarily
correspond to the size of the other trades.) The above analysis regarding
market exchange rate oracles can then only be complete if we take into account
these arbitrageurs. In practice, arbitrageurs can represent a significant
volume as soon as the market exchange rate moves outside of the spread offered
by the AMM, and we compare in Figure 6 the performance of the efficient
strategies in the presence of a 10-second delayed oracle and with
arbitrageurs. We also provide in Figure 7 a complete risk / return analysis of
the different strategies studied in this paper, in the presence of
arbitrageurs and incomplete information (for those relying on oracles).
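The arbitrage trigger can be sketched as follows; the 7.5 bps proportional cost is the one stated above, while the decision rule itself is our schematic reading (the simulator's sizing logic is richer):

```python
HEDGING_COST = 7.5e-4  # proportional cost of hedging on a centralized exchange (7.5 bps)

def arbitrage_opportunity(market_rate, quoted_rate):
    """An arbitrageur trades with the AMM whenever its quote deviates
    from the market rate by more than the cost of hedging elsewhere."""
    return abs(quoted_rate / market_rate - 1.0) > HEDGING_COST

print(arbitrage_opportunity(2000.0, 2002.0))  # True: 10 bps off, above the 7.5 bps cost
print(arbitrage_opportunity(2000.0, 2001.0))  # False: 5 bps off, not worth trading
```

Arbitrageurs trading on this condition keep the offered exchange rate pinned in a narrow band around the market rate, which is the "protective" effect discussed below.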
Figure 5: Performance of the efficient strategies in terms of mean / standard
deviation of excess PnL, obtained by playing the efficient strategies for
different levels of risk aversion with different oracle delays: complete
information (in green), 10-second delay (in yellow), 30-second delay (in
blue), 1-minute delay (in purple), 5-minute delay (in pink), 30-minute delay
(in red). The number next to the point corresponds to the value of $\gamma$.
Figure 6: Performance of the previous optimal strategy in terms of mean /
standard deviation of excess PnL. In green: efficient frontier, obtained by
playing the optimal strategy for different levels of risk aversion with
complete information. In yellow: performance of the same optimal strategy for
different levels of risk aversion with a lagged oracle. In orange: performance
of the same optimal strategy for different levels of risk aversion with a
lagged oracle and with arbitrage flow. The number next to the point
corresponds to the value of $\gamma$.
Figure 7: Performance of strategies in terms of mean / standard deviation of
excess PnL with a lagged oracle and with arbitrage flow. In grey: CPMM with
fees. In pink: CFMM without market exchange rate oracle as described in [35,
36] for different sets of realistic parameters. In purple ($\star$): AMM with
market exchange rate oracle as described in [23]. In orange: performance of
the efficient strategies for different levels of risk aversion.
What we observe in Figure 6 is that the presence of arbitrageurs who
systematically and continuously keep the offered exchange rate in a narrow
range around the market exchange rate tends to increase the expected excess
PnL and reduce risk. This may sound counter-intuitive, but it comes from the
fact that those arbitrageurs actually “protect” the AMM against trades with
traditional LTs at prices even further away from the market exchange rate
(this could be related to the findings of [58]). Of course, the efficient
frontier documented in Figure 6 does not take into account this additional
flow. Building a model that would internalize both delayed oracles and
(systematic) arbitrageurs and allow one to derive optimal strategies in this
context remains an open problem.
Figure 7 confirms that the use of a market exchange rate oracle, even delayed,
really makes a difference in terms of risk / return profile. Nevertheless, it
is important to note that introducing an oracle creates a fundamentally
different protocol which relies on external data. One of the issues with
oracles (as noted in [65] and then in [55]), is that they could be
manipulated. Using an oracle in an AMM protocol should therefore be performed
carefully in order for LPs to really achieve the promised improved risk-
adjusted performance (for a detailed explanation of the next generation of
decentralized oracles see [25]).
## 5 Conclusion
In this paper, we provide an analysis of AMMs in a mean / standard deviation
framework inspired by both Markowitz’ modern portfolio theory and the recent
literature on optimal market making. We show that traditional CFMMs (including
CPMMs) with different levels of transaction fees perform poorly relative to
the theoretical efficient frontier and very often exhibit negative excess PnL.
We also show that allowing an AMM to get information about the current market
exchange rate (through an oracle) can significantly improve performance. Such
an oracle-based AMM would quote a bid / ask spread around the current market
exchange rate based on its reserves. This design significantly reduces the
volatility of the excess PnL while delivering a positive excess PnL on
average. Our results are robust to the presence of a lagged oracle and to the
introduction of adverse selection by LTs and arbitrageurs. Nevertheless, while
introducing an oracle in the AMM design can significantly improve the risk-
adjusted performance of LPs, it comes at the cost that the oracle itself
should be carefully designed to avoid introducing additional attack vectors.
## Statement and acknowledgment
The research carried out for this paper benefited from the support of the
Research Program “Decentralized Finance and Automated Market Makers”, a
Research Program under the aegis of Institut Europlace de Finance, in
partnership with Apesu / Swaap Labs.
The content of this article has been presented at several conferences and
seminars: BlockSem seminar (Paris, France), FFEA 2nd Spring Workshop on
FinTech (Ghent, Belgium), The 3rd workshop on Decentralized Finance (Bol,
Croatia), DeFOx seminar (Oxford, UK), Euro Working Group on Commodities and
Financial Modelling meeting (Rome, Italy), SIAM Conference on Financial
Mathematics and Engineering (Philadelphia, USA), AMaMeF conference (Bielefeld,
Germany), Florence-Paris Workshop on Mathematical Finance (Florence, Italy),
Frontiers in Quantitative Finance Seminar (London, UK), Séminaire Bachelier
(Paris, France), 16th Financial Risks International Forum (Paris, France),
London-Paris Bachelier Workshop on Mathematical Finance (London, UK),
Blockchain@X-OMI Workshop on Blockchain and Decentralized Finance (Palaiseau,
France), Apéro DeFi (Paris, France). The discussions that took place during
these events have substantially contributed to improving the presentation of
our results.
We extend our sincere gratitude to the two anonymous referees for their
insights and constructive feedback, which greatly enhanced the quality of our
article.
## References
* [1] Frédéric Abergel, Côme Huré, and Huyên Pham. Algorithmic trading in a microstructural limit order book model. Quantitative Finance, 20(8):1263–1283, 2020.
* [2] Hayden Adams. Uniswap whitepaper. 2018.
* [3] Hayden Adams, Noah Zinsmeister, and Dan Robinson. Uniswap v2 core. 2020.
* [4] Hayden Adams, Noah Zinsmeister, Moody Salem, River Keefer, and Dan Robinson. Uniswap v3 core, 2021.
* [5] Yakov Amihud and Haim Mendelson. Dealership market: Market-making with inventory. Journal of Financial Economics, 8(1):31–53, 1980.
* [6] Guillermo Angeris, Akshay Agrawal, Alex Evans, Tarun Chitra, and Stephen Boyd. Constant function market makers: Multi-asset trades via convex optimization. arXiv preprint arXiv:2107.12484, 2021.
* [7] Guillermo Angeris and Tarun Chitra. Improved price oracles: Constant function market makers. In Proceedings of the 2nd ACM Conference on Advances in Financial Technologies, pages 80–91, 2020.
* [8] Guillermo Angeris, Alex Evans, and Tarun Chitra. When does the tail wag the dog? Curvature and market making. arXiv preprint arXiv:2012.08040, 2020.
* [9] Guillermo Angeris, Alex Evans, and Tarun Chitra. Replicating market makers. arXiv preprint arXiv:2103.14769, 2021.
* [10] Guillermo Angeris, Alex Evans, Tarun Chitra, and Stephen Boyd. Optimal routing for constant function market makers. In Proceedings of the 23rd ACM Conference on Economics and Computation, pages 115–128, 2022.
* [11] Guillermo Angeris, Hsien-Tang Kao, Rei Chiang, Charlie Noyes, and Tarun Chitra. An analysis of Uniswap markets. arXiv preprint arXiv:1911.03380, 2019.
* [12] Jun Aoyagi. Liquidity provision by automated market makers. Available at SSRN 3674178, 2020.
* [13] Jun Aoyagi and Yuki Ito. Coexisting exchange platforms: Limit order books and automated market makers. 2021.
* [14] Marco Avellaneda and Sasha Stoikov. High-frequency trading in a limit order book. Quantitative Finance, 8(3):217–224, 2008.
* [15] Bastien Baldacci, Philippe Bergault, and Olivier Guéant. Algorithmic market making for options. Quantitative Finance, 21(1):85–97, 2021.
* [16] Andrea Barbon and Angelo Ranaldo. On the quality of cryptocurrency markets: Centralized versus decentralized exchanges. arXiv preprint arXiv:2112.07386, 2021.
* [17] Alexander Barzykin, Philippe Bergault, and Olivier Guéant. Algorithmic market making in foreign exchange cash markets: a new model for active market makers. Mathematical Finance, 33(1):41–79, 2021.
* [18] Alexander Barzykin, Philippe Bergault, and Olivier Guéant. Market-making by a foreign exchange dealer. Risk Magazine, August 2022.
* [19] Alexander Barzykin, Philippe Bergault, and Olivier Guéant. Dealing with multi-currency inventory risk in foreign exchange cash markets. Risk Magazine, March 2023.
* [20] Philippe Bergault, David Evangelista, Olivier Guéant, and Douglas Vieira. Closed-form approximations in multi-asset market making. Applied Mathematical Finance, 28(2):101–142, 2021.
* [21] Philippe Bergault and Olivier Guéant. Size matters for otc market makers: general results and dimensionality reduction techniques. Mathematical Finance, 31(1):279–322, 2021.
* [22] Philippe Bergault, Louis Bertucci, David Bouba, Olivier Guéant, and Julien Guilbert. Enhancing oracle-based automated market makers: advanced price and liquidity models. Working paper, 2023.
* [23] David Bouba. Swaap.finance: Introducing the matrix-mm. 2021\.
* [24] Nassib Boueri. G3M impermanent loss dynamics. arXiv preprint arXiv:2108.06593, 2021.
* [25] Lorenz Breidenbach, Christian Cachin, Benedict Chan, Alex Coventry, Steve Ellis, Ari Juels, Farinaz Koushanfar, Andrew Miller, Brendan Magauran, Daniel Moroz, et al. Chainlink 2.0: Next steps in the evolution of decentralized oracle networks. Chainlink Labs, 2021.
* [26] Agostino Capponi and Ruizhe Jia. The adoption of blockchain-based decentralized exchanges. arXiv preprint arXiv:2103.08842, 2021.
* [27] Agostino Capponi, Garud Iyengar, and Jay Sethuraman. Decentralized Finance: Protocols, Risks, and Governance. Foundations and Trends in Privacy and Security, 5(3):144–188, 2023.
* [28] Álvaro Cartea, Ryan Donnelly, and Sebastian Jaimungal. Algorithmic trading with model uncertainty. SIAM Journal on Financial Mathematics, 8(1):635–671, 2017.
* [29] Álvaro Cartea, Fayçal Drissi, and Marcello Monga. Decentralised finance and automated market making: Execution and speculation. Available at SSRN, 2022.
* [30] Álvaro Cartea, Fayçal Drissi, and Marcello Monga. Decentralised finance and automated market making: predictable loss and optimal liquidity provision. Available at SSRN, 2022.
* [31] Álvaro Cartea, Sebastian Jaimungal, and José Penalva. Algorithmic and high-frequency trading. Cambridge University Press, 2015.
* [32] Álvaro Cartea, Sebastian Jaimungal, and Jason Ricci. Buy low, sell high: A high frequency trading perspective. SIAM Journal on Financial Mathematics, 5(1):415–444, 2014.
* [33] Joseph Clark. The replicating portfolio of a constant product market. Available at SSRN 3550601, 2020.
* [34] Joseph Clark. The replicating portfolio of a constant product market with bounded liquidity. Available at SSRN, 2021.
* [35] Michael Egorov. Stableswap – efficient mechanism for stablecoin liquidity. 2019.
* [36] Michael Egorov. Automatic market-making with dynamic peg. 2021.
* [37] Alex Evans. Liquidity provider returns in geometric mean markets. arXiv preprint arXiv:2006.08806, 2020.
* [38] Alex Evans, Guillermo Angeris, and Tarun Chitra. Optimal fees for geometric mean market makers. In International Conference on Financial Cryptography and Data Security, pages 65–79. Springer, 2021.
* [39] Pietro Fodra and Huyên Pham. High frequency trading and asymptotics for small risk aversion in a markov renewal model. SIAM Journal on Financial Mathematics, 6(1):656–684, 2015.
* [40] Pietro Fodra and Huyên Pham. Semi-markov model for market microstructure. Applied Mathematical Finance, 22(3):261–295, 2015.
* [41] Robin Fritsch, Samuel Käser, and Roger Wattenhofer. The economics of automated market makers. arXiv preprint arXiv:2206.04634, 2022.
* [42] Olivier Guéant. The Financial Mathematics of Market Liquidity: From optimal execution to market making, volume 33. CRC Press, 2016.
* [43] Olivier Guéant. Optimal market making. Applied Mathematical Finance, 24(2):112–154, 2017.
* [44] Olivier Guéant, Charles-Albert Lehalle, and Joaquin Fernandez-Tapia. Dealing with the inventory risk: a solution to the market making problem. Mathematics and Financial Economics, 7(4):477–507, 2013.
* [45] Olivier Guéant and Iuliia Manziuk. Deep reinforcement learning for market making in corporate bonds: beating the curse of dimensionality. Applied Mathematical Finance, 26(5):387–452, 2019.
* [46] Fabien Guilbaud and Huyen Pham. Optimal high-frequency trading with limit and market orders. Quantitative Finance, 13(1):79–94, 2013.
* [47] Fabien Guilbaud and Huyên Pham. Optimal high-frequency trading in a pro rata microstructure with predictive information. Mathematical Finance, 25(3):545–575, 2015.
* [48] Joel Hasbrouck, Thomas J. Rivera, and Fahad Saleh. The need for fees at a dex: How increases in fees can increase dex trading volume. Available at SSRN, 2022.
* [49] Thomas Ho and Hans R Stoll. Optimal dealer pricing under transactions and return uncertainty. Journal of Financial Economics, 9(1):47–73, 1981.
* [50] Thomas SY Ho and Hans R Stoll. The dynamics of dealer markets under competition. The Journal of Finance, 38(4):1053–1074, 1983.
* [51] Charles-Albert Lehalle and Sophie Laruelle. Market microstructure in practice. World Scientific, 2018.
* [52] Alfred Lehar and Christine A. Parlour. Decentralized exchanges. Technical report, Working paper, 2021.
* [53] Alex Lipton and Artur Sepp. Automated Market-Making for Fiat Currencies. arXiv preprint arXiv:2109.12196, 2021.
* [54] Stefan Loesch, Nate Hindman, Mark B. Richardson, and Nicholas Welch. Impermanent loss in uniswap v3. arXiv preprint arXiv:2111.09192, 2021.
* [55] Torgin Mackinga, Tejaswi Nadahalli, and Roger Wattenhofer. Twap oracle attacks: Easier done than said? Cryptology ePrint Archive, 2022.
* [56] Fernando Martinelli and Nikolai Mushegian. A non-custodial portfolio manager, liquidity provider, and price sensor. 2019.
* [57] Jason Milionis, Ciamac Moallemi, Tim Roughgarden, and Antony Lee Zhang. Automated market making and loss-versus-rebalancing. 2023.
* [58] Jason Milionis, Ciamac Moallemi, and Tim Roughgarden. Automated Market Making and Arbitrage Profits in the Presence of Fees. arXiv preprint arXiv:2305.14604, 2023.
* [59] Vijay Mohan. Automated market makers and decentralized exchanges: a DeFi primer. Financial Innovation, 8(1):1–48, 2022.
* [60] Michael Neuder, Rithvik Rao, Daniel J Moroz, and David C Parkes. Strategic liquidity provision in uniswap v3. arXiv preprint arXiv:2106.12033, 2021.
* [61] Maureen O’Hara and George S. Oldfield. The microeconomics of market making. Journal of Financial and Quantitative Analysis, pages 361–376, 1986.
* [62] Andreas Schrimpf and Vladyslav Sushko. FX trade execution: complex and highly fragmented. BIS Quarterly Review, December, 2019.
* [63] Andreas Schrimpf and Vladyslav Sushko. Sizing up global foreign exchange markets. BIS Quarterly Review, December, 2019.
* [64] Hans R Stoll. Market microstructure. In Handbook of the Economics of Finance, volume 1, pages 553–604. 2003.
* [65] Kevin Tjiam, Rui Wang, Huanhuan Chen, and Kaitai Liang. Your smart contracts are not secure: Investigating arbitrageurs and oracle manipulators in ethereum. In CYSARM@CCS, pages 25–35, 2021.
# Anger Breeds Controversy: Analyzing Controversy and Emotions on Reddit
Kai Chen (USC Information Sciences Institute, 4676 Admiralty Way, Marina del Rey, CA, USA),
Zihao He (USC Information Sciences Institute, Marina del Rey, CA, USA),
Rong-Ching Chang (University of California, Davis, CA, USA),
Jonathan May (USC Information Sciences Institute, Marina del Rey, CA, USA),
and Kristina Lerman (USC Information Sciences Institute, Marina del Rey, CA, USA)
(2023)
###### Abstract.
Emotions play an important role in interpersonal interactions and social
conflict, yet their function in the development of controversy and
disagreement in online conversations has not been explored. To address this
gap, we study controversy on Reddit, a popular network of online discussion
forums. We collect discussions from a wide variety of topical forums and use
emotion detection to recognize a range of emotions from text, including anger,
fear, joy, admiration, etc. Our study has three main findings. First,
controversial comments express more anger and less admiration, joy and
optimism than non-controversial comments. Second, controversial comments
affect emotions of downstream comments in a discussion, usually resulting in
long-term increase in anger and a decrease in positive emotions, although the
magnitude and direction of emotional change depends on the forum. Finally, we
show that emotions help better predict which comments will become
controversial. Understanding emotional dynamics of online discussions can help
communities to better manage conversations.
Controversy, emotion, Reddit, comment, discussion
Copyright: ACM, 2023. DOI: XXXXXXX.XXXXXXX. Conference: WebScience, April 30–May 01, 2023, Austin, TX. Price: 15.00. ISBN: 978-1-4503-XXXX-X/18/06.
CCS concepts: Information systems → Social networking sites; Human-centered computing → Empirical studies in collaborative and social computing; Applied computing → Sociology.
## 1. Introduction
The social web has linked millions of people worldwide, creating “digital town
squares” for exchanging ideas, opinions, and beliefs. On platforms like Reddit
and Twitter, among many others, people post messages or respond to the
messages posted by others. The low barriers to entry into global online
conversations offer society many benefits, such as democratizing the
production and distribution of information, reducing the power of traditional
gatekeepers to decide what information gets attention, creating better ways
for people to learn from the diverse experiences of others, and catalyzing
mass protest movements. Unfortunately, the same mechanisms that lead to
societal benefits are also responsible for creating unique new
vulnerabilities. Exchanging diverse viewpoints within global online
communities invites disagreement, which malicious actors and anti-social
trolls exploit to derail conversations, spread misinformation, and inflame
polarization. The rise in anti-social online behaviors has had profound
consequences on society, undermining collective trust in institutions and in
democracy itself (Haidt, 2022).
Online communities have tried to reduce harmful speech by mediating
discussions to remove messages that violate community norms due to toxicity,
harassment, or personal attacks (Park et al., 2021). However, manual
moderation does not scale to the volume and speed of online conversations.
Although machine moderation has improved in recent years, with tools that
automatically recognize harassment, hate speech, and other types of toxic
speech (MacAvaney et al., 2019; Poletto et al., 2021; Plaza-del Arco et al.,
2021), these methods treat the symptoms, rather than causes of the problem. In
order to better identify and mediate controversy, we need to understand how
controversy develops and derails conversations in open online communities
before we can effectively—and automatically—moderate them.
Researchers have attempted to identify controversial discussions in online
communities using network approaches (Garimella et al., 2018) or features
derived from user activity (Koncar et al., 2021). Others have trained models
to learn language cues associated with controversial comments (Zayats and
Ostendorf, 2018; Park et al., 2021). With advances in natural language
processing, we are now able to move beyond these works to explore
psycholinguistic dimensions of controversy. Specifically, we study how
controversy and disagreement develop within online discussions through the
lens of emotions. We focus on emotions because they are the cornerstone of
interpersonal interactions (Van Kleef et al., 2016) and shape the social
response to conflict (Bar-Tal et al., 2007). Emotions are also important in
online interactions and have been shown to contribute to the viral spread of
topics (Brady et al., 2021; Coviello et al., 2014; Bi, 2022). However, the
role of emotions in the development of controversy or disagreement in online
discussions has not been explored.
We study Reddit, a popular network of online communities. Within Reddit’s many
topical forums, or ‘subreddits,’ members post new topics for discussion, and
others comment on these submissions or respond to the comments of others.
Community members can upvote or downvote any comment, expressing their
agreement or disagreement with it. Reddit automatically flags a comment as
_controversial_ if it has a large and similar number of upvotes and downvotes.
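Reddit's exact thresholds are not public; a schematic version of the flag, with cutoff values that are our guesses purely for illustration, is:

```python
def is_controversial(upvotes, downvotes, min_votes=10, balance=0.8):
    """Flag a comment when it attracts many votes that are nearly
    evenly split between upvotes and downvotes (thresholds hypothetical)."""
    total = upvotes + downvotes
    if total < min_votes:
        return False
    return min(upvotes, downvotes) / max(upvotes, downvotes) >= balance

print(is_controversial(50, 45))  # True: large, balanced vote counts
print(is_controversial(50, 2))   # False: lopsided voting
print(is_controversial(4, 4))    # False: too few votes overall
```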
To study how emotions affect the development of controversy and disagreement
in online interactions, we pose the following research questions:
RQ1: Are controversial comments more emotional than non-controversial comments?
RQ2: How do controversial comments change emotions in the discussion?
RQ3: Can we identify controversial comments at time of creation, i.e., before they
become controversial?
To answer these research questions, we use a state-of-the-art emotion
detection method (Alhuzali and Ananiadou, 2021) to measure a range of emotions
expressed in text. We find that controversial comments on Reddit express
substantially more anger and less joy, love and optimism than non-
controversial comments. Although controversial comments represent a small
fraction of all comments in a discussion—typically, only 3% of comments are
controversial—we show that discussions with at least one controversial comment
also express more anger and less positive emotions like joy and love. To
explain this observation, we investigate how controversial comments change
emotions of the subsequent discussion by comparing the emotions expressed in
comments that follow a controversial comment to the emotions expressed in the
comments preceding it. We find that controversial comments set the long-term
emotional tone of discussions. Finally, we show that adding emotions as
features to a state-of-the-art controversial comment classification method
leads to significant performance improvement. This enables us to predict
whether a comment will become controversial, potentially allowing a moderator
to step in to keep the conversation from becoming overheated.
We argue that, besides focusing on simple metrics like the number of upvotes
and downvotes of comments, public media watchdogs and social media platforms
should pay attention to the emotional tone of online discussions.
Understanding the emotional dynamics of online conversations could help
communities engage in more constructive dialog and prevent disagreement and
controversy from derailing conversations.
## 2. Related Work
In this section, we briefly introduce controversy detection on different
social media platforms, emotion detection, and a few recent works that have
laid the groundwork by combining emotions and controversy detection.
### 2.1. Controversy Detection on Social Platforms
Previous work has explored different methods to detect and predict controversy
in online platforms. Some works leverage the network structures between the
users, submissions, and text features for detection of controversial comments.
For example, Garimella et al. (2018) construct conversation graphs on Twitter
and characterize controversy based on graph structures, such as random walks,
betweenness centrality, and low dimensional embeddings. They define a measure
of controversy based on random walks, which measures how likely a random user
joining a controversial discussion is to be exposed to the dominant authority
of each side in the debate. They show that this method using graph structural
features is better at identifying controversial topics than those using
content-based features. Zayats and Ostendorf (2018) train a graph-structured
bidirectional LSTM to predict the popularity of comments in Reddit discussions
and further use language cues to help identify controversial comments.
Similarly, Park et al. (2021) detect norm violations on Reddit, leveraging
LSTM and pretrained language models, which are more suitable for sequential
data.
Koncar et al. (2021) identify controversial comments in multilingual
discussions on Reddit by training a linear classifier on features of comments
and discussions. They explore different types of features, including lexical
features of the comments themselves, of their predecessors, and of their
successors. They find that user activities (such as the rate at which people
comment before and after a controversial comment, and the number of preceding
comments) produce
the most discriminating features. One confounding factor is that discussions
with controversial comments receive more attention since they are flagged by
Reddit, making them easier to find through its user interface. The increased
attention affects the evolution of controversial discussions. Jang and Allan
(2018) summarize controversy through stance-indicativeness, articulation, and
topic relevance, and their evaluation shows that summaries based on these
lexical features better capture the controversy.
Besides detecting individual controversial comments, some works have focused
on identifying controversial submissions, which initiate discussions on
Reddit. Hessel and Lee (2019) and Zhong et al. (2020) define controversial
submissions by the ratio of upvotes to all votes. Hessel and Lee (2019) focus on
both the textual contents of comments as well as the discussion hierarchy.
They conclude that the textual contents are significantly more helpful for
detecting submission controversy but fail to generalize to different forums
(subreddits); however, structural features are more generalizable.
### 2.2. Emotion Recognition
To understand emotions in human text, early research uses dictionary-based
methods to measure the sentiment expressed in messages by counting positive or
negative words they contain (Golder and Macy, 2011; Bollen et al., 2011; Chen
and Skiena, 2014; Mejova et al., 2014; Dori-Hacohen and Allan, 2013). Another
popular approach measures emotions in text along the dimensions of valence and
arousal, with the former capturing the level of pleasure, or positive
sentiment, expressed in text, and the latter capturing the level of activation
induced by the emotion. This approach relies on lexicons, e.g., the WKB
lexicon (Warriner et al., 2013), that include valence and arousal scores of
common English words. After lemmatizing the input text, these methods average
the scores of terms that match the lexicon. Using these methods, researchers
find that the sentiment of tweets displays characteristic diurnal and weekly
patterns of mood variation (Golder and Macy, 2011) and can be used to track
the geographic distribution of emotional wellbeing (Jaidka et al., 2020).
These lexicon-based approaches, however, do not account for context and are
difficult to extend to multilingual data due to the effort required to label
words. To address these challenges, a new generation of methods based on large
language models enables a wider range of emotional expressions to be
quantified at scale (Alhuzali and Ananiadou, 2021). These methods benefit from
the availability of large-scale datasets of sentences that have associated
emotion labels. For example, Mohammad et al. (2018) provide a corpus of tweets
annotated with emotion labels in English, Arabic and Spanish. Multilingual
transformers like XLM-T (Barbieri et al., 2021; Wolf et al., 2020) have been trained
on this data, extending emotion detection capability to multilingual settings.
In addition, GoEmotions (Demszky et al., 2020) is a dataset of 58k English
Reddit comments with up to 28 different categories of emotion.
### 2.3. Emotions and Controversy Online
The role of emotions in online controversy remains largely under-explored at
scale. Bi (2022) studies the role of emotions in the
diffusion of posts on Facebook. She finds evidence that both positive and
negative emotions are directly associated with the diffusion of highly
controversial topics. Mejova et al. (2014) categorize news into controversial
and non-controversial and find that controversial news tends to use more
negative emotional words. Similarly, Stieglitz and Dang-Xuan (2013) and Brady
et al. (2017) discover that emotional tweets tend to spread wider and faster
based on analysis of political discussions on Twitter.
Building upon prior works, we focus on psycholinguistic indicators,
specifically emotions expressed in comments. We further extend the
contribution by studying the relationship and dynamics between emotions and
controversial comments in different granularity, covering individual comments,
how controversial comments impact the succeeding comments in discussions, and
the ability to detect controversial comments using emotional cues.
## 3\. DATA
Reddit is a popular social platform for user discussions. It consists of a
wide range of topical forums, or subreddits. In each subreddit, a user can
start a discussion by posting a new submission, which other users can comment
on or respond to the comments of others. We use Pushshift API (Baumgartner et
al., 2020) to collect Reddit discussions. Pushshift has archives of Reddit
data dating back to 2005, which includes complete discussions and metadata,
such as the controversial tag. A comment is automatically flagged by Reddit as
controversial when its numbers of upvotes and downvotes are both high and very
similar. We further define a controversial discussion as one that includes at
least one controversial comment.
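As a minimal sketch (with hypothetical toy records, not the actual Pushshift schema), the discussion-level controversy flag can be derived from per-comment flags like this:

```python
def controversial_discussions(comments):
    """Return the ids of discussions containing at least one comment
    that Reddit flagged as controversial (the paper's definition)."""
    flagged = set()
    for discussion_id, is_controversial in comments:
        if is_controversial:
            flagged.add(discussion_id)
    return flagged

# toy data: (discussion_id, per-comment controversial flag)
comments = [("d1", False), ("d1", True), ("d2", False), ("d2", False)]
print(controversial_discussions(comments))  # {'d1'}
```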
From the collected discussions, we create two datasets – Dataset I: Popular
Forums (used in Sec. 4.2 and Sec. 4.3) and Dataset II: Multilingual Forums
(used in Sec. 4.4).
### 3.1. Dataset I: Popular Forums
We collect data from the 100 most popular subreddits on Reddit, ranked by
number of subscribers. For very large subreddits, we randomly under-sample
discussions so that the numbers of discussions from all subreddits are roughly
similar. We then filter out discussions with fewer than five comments and
discard ten subreddits that disallow user comments after 2022, such as
_r/announcement_. The remaining 90 subreddits cover a large variety of
topics, such as art (r/Art, r/pics), music (r/Music, r/listentothis), sports
(r/sports, r/nba), politics (r/politics, r/news), science (r/science,
r/space), humor (r/jokes, r/Animalsbeingbros, r/facepalm), gender
(r/TwoXChromosomes), advice (r/lifehacks, r/LifeProTips), gaming (r/PS4,
r/Minecraft), and emotional reactions (r/aww, r/wholesomememes,
r/mademesmile), among many others. The complete list of the 90 subreddits is
shown on the y-axis of Figure 2. We call this dataset of popular forums
Dataset I and show its statistics in Table 1.
| | min | max | mean | median
---|---|---|---|---
avg. # of comments per discussion | 7.9 | 557.6 | 104.4 | 82
ratio of controversial comments | 0.3% | 9.7% | 3% | 2.8%
ratio of moderated comments | 3.1% | 57.9% | 12.7% | 9.5%
Table 1. Statistics of discussions in Dataset I: Popular Forums.
### 3.2. Dataset II: Multilingual Forums
For controversy detection, we sample six subreddits from the aforementioned
popular forums, covering three categories: science (_r/science_ and
_r/technology_), question & answer (_r/AskScience_ and _r/AskReddit_), and news
(_r/news_ and _r/worldnews_). To further demonstrate our approach’s
generalizability to discussions in languages other than English, we add
multilingual discussions from subreddits _r/france_ (in French) and _r/de_ (in
German). We call this dataset of multilingual forums Dataset II and report the
statistics in Table 2.
| r/science | r/technology | r/news | r/worldnews | r/AskReddit | r/AskScience | r/france | r/de
---|---|---|---|---|---|---|---|---
number of discussions | 32,744 | 102,246 | 8,691 | 17,858 | 188,177 | 73,665 | 50,558 | 63,058
number of comments | 1,681,039 | 1,482,271 | 2,116,989 | 2,693,907 | 6,615,721 | 1,681,039 | 1,628,475 | 1,845,356
average discussion length | 51.3 | 14.4 | 243.5 | 150.8 | 35.2 | 5.7 | 32.2 | 29.2
ratio of controversial comments | 4.4% | 5.8% | 7.4% | 7.9% | 1.2% | 1.1% | 5.9% | 5.1%
ratio of removed comments | 42.5% | 10.8% | 18.5% | 11.2% | 5.8% | 47.3% | 6.3% | 6.3%
Table 2. Statistics of Dataset II: Multilingual Forums.
## 4\. Methods and Results
We answer our research questions by analyzing emotions and controversy of
online discussions. We define a discussion to be controversial if it has at
least one comment that has been tagged as controversial by Reddit. (Results do
not differ qualitatively when using a higher threshold on the number of
controversial comments to define controversial discussions.) As a reminder, a
comment is tagged as “controversial” if it has the same (or a similar) number
of upvotes and downvotes, and both numbers are large.
### 4.1. Overview of Emotions on Reddit
To recognize emotions expressed in text, we use a multilingual emotion
detection model from Chochlakis et al. (2022b, a). The model is based on
SpanEmo (Alhuzali and Ananiadou, 2021), the state of the art in emotion
detection, with the original backbone language model BERT (Devlin et al., 2019)
replaced by the multilingual XLM-T (Barbieri et al., 2021; Wolf et al., 2020),
which is better suited to handling text in a large number of languages. The
model was finetuned on SemEval 2018 Task 1 E-c data (Mohammad et al., 2018) and
GoEmotions (Demszky et al., 2020), with ten simplified emotion clusters:
“Anger, Hate, Contempt, Disgust,” “Embarrassment, Guilt, Shame, Sadness,”
“Admiration, Love,” “Optimism, Hope,” “Joy, Happiness,” “Pride, National
Pride,” “Fear, Pessimism,” “Amusement,” “Other Positive Emotions,” and “Other
Negative Emotions.” Further details of the model and its performance are
described in Chochlakis et al. (2022b, a); the implementation is available at
https://github.com/gchochla/Demux-MEmo. Given input text, the model
returns one scalar value per emotion, indicating the _confidence_ that the
emotion is present. Since it is a multi-label classification setting, the
model can assign multiple (or no) emotions to a text input. Therefore, for each
input Reddit comment, we obtain a 10-dimensional confidence vector over the ten
emotion clusters.
Consider a discussion of length $L$ (i.e. it has $L$ comments). We measure the
confidence $f_{i}(e)$ of emotion $e$ expressed in comment $i$ of the
discussion using the emotion detection model. By averaging over all $L$
comments in a discussion, we obtain $\hat{f}(e)$, its average emotion
confidence. Figure 1 shows the distribution of $\hat{f}(e)$ for five emotions
of controversial discussions (orange curve) and non-controversial discussions
(blue curve) on four subreddits. We observe that controversial and non-
controversial discussions have markedly different distributions for some
emotions. Anger/Hate/Contempt/Disgust, for example, is systematically higher
in controversial discussions on r/france and r/news than in non-controversial
discussions. Conversely, the confidence of the Admiration/Love emotion is
higher in non-controversial discussions on r/art and r/science. Our research
quantifies these differences and helps explain how they arise.
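The per-discussion averaging just described can be sketched as follows; the confidence values below are made up for illustration (the real values come from the emotion model):

```python
def discussion_emotion_profile(comment_confidences):
    """Average per-comment emotion confidence vectors (one dict per
    comment, emotion -> confidence) into the discussion-level profile
    f_hat(e) used in Figure 1."""
    if not comment_confidences:
        return {}
    emotions = comment_confidences[0].keys()
    n = len(comment_confidences)
    return {e: sum(c[e] for c in comment_confidences) / n for e in emotions}

# toy discussion with two comments and two emotion clusters
comments = [
    {"anger": 0.75, "joy": 0.125},
    {"anger": 0.25, "joy": 0.375},
]
profile = discussion_emotion_profile(comments)
print(profile)  # {'anger': 0.5, 'joy': 0.25}
```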
Figure 1. Distributions of five emotion clusters of discussions on four
subreddits. The emotion confidence values of each discussion are averaged over
those of each comment within it.
### 4.2. RQ1: Emotions in Controversial Comments
To answer our first research question, we compare emotions expressed in
controversial comments to emotions in non-controversial comments. Let ${CC}$
be the set of controversial comments in a subreddit, and $NC$ be the set of
non-controversial comments in the same subreddit. We define emotion gap
$\delta_{e}$ as the difference between the mean confidence of emotion $e$ in
controversial comments and its mean confidence in non-controversial comments:
$\delta_{e}=\frac{1}{|CC|}\sum_{i\in CC}f_{i}(e)-\frac{1}{|NC|}\sum_{i\in NC}f_{i}(e).$
We calculate $\delta_{e}$ separately for each emotion and subreddit.
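A minimal sketch of the emotion gap $\delta_{e}$ for a single emotion, assuming per-comment confidence scores and controversy flags are given (the values below are hypothetical):

```python
def emotion_gap(confidences, controversial_flags):
    """delta_e: mean confidence of an emotion in controversial comments
    minus its mean confidence in non-controversial comments."""
    cc = [c for c, flag in zip(confidences, controversial_flags) if flag]
    nc = [c for c, flag in zip(confidences, controversial_flags) if not flag]
    return sum(cc) / len(cc) - sum(nc) / len(nc)

scores = [0.75, 0.25, 0.5, 0.5]    # hypothetical anger confidences
flags = [True, False, True, False]  # which comments Reddit flagged
print(emotion_gap(scores, flags))  # 0.25: anger is higher in controversial comments
```

A positive gap corresponds to a red cell in Figure 2, a negative gap to a blue cell.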
Figure 2 shows the emotion gaps $\delta_{e}$ for all ten emotion clusters and
across all 90 subreddits in Dataset I. We observe strong global trends. The
Anger/Hate/Contempt/Disgust emotion cluster is consistently stronger in
controversial comments than in non-controversial comments (as indicated by
bright red colors in Fig. 2). The Other Negative Emotions cluster is also
higher in controversial comments, though there are no strong differences for
Embarrassment/Guilt/Shame/Sadness and Fear/Pessimism. In contrast, positive emotions
are stronger in non-controversial comments than in controversial comments. For
example, Admiration/Love, Joy/Happiness, and Positive-other are all much less
common in controversial comments than in non-controversial comments (as
indicated by bright blue color in Fig. 2). The emotion Amusement appears to be
stronger in controversial comments in half of the forums, but rarely weaker.
This emotion is usually used to denote text that is funny, but sometimes also
captures sarcasm. This suggests that controversial comments are often funny or
sarcastic, though not in all forums.
There are many differences across subreddits in the strength of the emotion
gap. For example, compared to other forums, the subreddits r/Music,
r/listentothis, and r/Art show the strongest differences across all emotions.
Controversial comments on these forums have much more Anger than non-
controversial comments, but also much less Admiration and Joy than non-
controversial comments. Surprisingly, the forums that we expected to have more
controversy, like r/politics and r/AmItheAsshole, show smaller emotional
differences between controversial and non-controversial comments.
Figure 2. Emotion gaps between controversial and non-controversial comments
for all subreddits in Dataset I. Red indicates higher emotion confidence for
controversial comments, blue indicates higher emotion confidence for non-
controversial comments, and white indicates equal emotion confidence between
them. Color saturation denotes the magnitude of emotion gap.
These results answer our first research question. Controversial comments are
angrier than non-controversial comments and express less positive emotion,
like love, joy and optimism. These differences also apply to controversial
discussions, though the magnitude of the emotional gap is reduced: compared to
non-controversial discussions, controversial discussions express more anger
and less admiration and joy. Controversial comments alone do not explain the
difference in the emotionality of controversial discussions, since they
represent a small share of all comments (see Table 1). Instead, controversial
comments change the emotional tone of subsequent comments, which we explore
next.
### 4.3. RQ2: Controversial Comments Change Emotions in Discussions
In this section, we explore how controversial comments shape emotions in the
discussions. To quantify the impact of a controversial comment $i$ on emotion
$e$ in a discussion, we calculate the difference between the average
confidence of $e$ in comments posted after the comment $i$ and comments that
came before it. Let $L$ be the length of a discussion, and $i$ the position of
a controversial comment within the discussion. We refer to comments in
positions $\{1,\ldots,i-1\}$ as predecessors of the controversial comment,
and comments in positions $\{i+1,\ldots,L\}$ as successors of comment $i$.
The emotional impact $\theta^{i}_{e}$ of the controversial comment $i$ is:
$\theta^{i}_{e}=\frac{1}{(L-i)}\sum_{j>i}f_{j}(e)-\frac{1}{(i-1)}\sum_{j<i}f_{j}(e).$
If $\theta^{i}_{e}>0$, the controversial comment $i$ leads to more emotion $e$
in subsequent comments; otherwise, if $\theta^{i}_{e}<0$, the comment $i$
reduces that emotion in the discussion.
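The emotional impact $\theta^{i}_{e}$ can be sketched for a single emotion as follows, with 1-based positions matching the formula above (confidence values are hypothetical; the sketch assumes the comment has at least one predecessor and one successor, as the formula requires):

```python
def emotional_impact(confidences, i):
    """theta_e^i: mean emotion confidence of the successors of the
    comment at 1-based position i, minus the mean confidence of its
    predecessors."""
    pred = confidences[: i - 1]  # positions 1..i-1
    succ = confidences[i:]       # positions i+1..L
    return sum(succ) / len(succ) - sum(pred) / len(pred)

# hypothetical anger confidences; controversial comment at position 3
scores = [0.25, 0.25, 0.5, 0.75, 0.75]
print(emotional_impact(scores, 3))  # 0.5: anger rises after the comment
```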
To measure the overall impact of controversial comments on emotions, we
average the impact of all controversial comments within each subreddit’s
discussions. Figure 3 shows this quantity across all emotions and all 90
subreddits. Most subreddits follow the same pattern: negative emotions like
Anger/Hate/Contempt/Disgust, Fear/Pessimism and other negative emotions rise
after a controversial comment and positive emotions generally, though not
always, fall. There are some exceptions to this trend. In _r/nosleep_ and
_r/AmItheAsshole_ subreddits, anger decreases after a controversial comment.
Moreover, positive emotions, such as Admiration/Love and Joy/Happiness, in
_r/photoshopbattle_ and _r/WritingPrompts_ increase after a controversial
comment. Amusement rises systematically in almost all subreddits, on par with
negative emotions. This suggests rising sarcasm in controversial discussions.
These findings answer our second research question and suggest that
controversial comments lead to long-term changes in the emotional tone of
discussions, typically not only raising the anger of downstream comments but
also reducing their positive emotions. However, different communities respond
differently to controversy, producing some deviations from this pattern.
Automatic tools for moderating controversial comments will therefore need to
take the varying community context into account.
Figure 3. Emotional impact of controversial comments. Cells show the
difference between the average emotion confidence of successors of a
controversial comment and its predecessors in Dataset I. Red indicates a long-
term increase in emotions following controversial comments, blue indicates
long-term decrease in emotions, and white indicates no change in emotions
following controversial comments. Color saturation denotes the magnitude of
the impact.
Figure 4. Performance (ROC AUC) of controversial comment prediction using ten-
fold cross-validation on eight subreddits with thirteen feature sets,
including user activity (UA) features and emotion features of the current
comment, its predecessors, and its successors. Results using successor
features are shaded. Overall, the models that include emotion features achieve
better performance. We use user activity (UA) features as the baseline and
test different combinations of UA and emotion features. All comments are in
English except for the _r/france_ (in French) and _r/de_ (in German)
subreddits.
### 4.4. RQ3: Predicting Controversy from Emotions
Koncar et al. (2021) address the task of predicting whether a comment $i$ in a
discussion on Reddit is controversial using a variety of lexical and user
activity features. They find that the most predictive features were ones
related to user activity, which they calculate based on comments that preceded
the comment $i$ in a discussion (i.e., predecessors) and comments that
succeeded it (i.e., successors):
* •
Predecessor features: number of comments preceding the current comment $i$,
number of unique authors of preceding comments, elapsed time from the first
predecessor till the current comment, and the average elapsed time between
predecessors.
* •
Successor features: number of comments posted after the comment $i$, number of
unique authors of succeeding comments, elapsed time from the current comment
to the last successor, and average time between successors.
We supplement these features with emotions, and use the emotions expressed in
the current comment, and those expressed in predecessors or successors, as
features.
* •
All emotions: confidence of the emotion clusters in predecessor comments, in
successor comments, or in the comment $i$ itself.
* •
Positive emotions: subset of emotions that includes Admiration/Love,
Optimism/Hope, Joy/Happiness, Pride, other positive emotions.
* •
Negative emotions: subset of emotions that includes
Anger/Hate/Contempt/Disgust, Embarrassment/Guilt/Shame/Sadness,
Fear/Pessimism, and other negative emotions.
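The predecessor activity features listed above can be sketched as follows; the `(author, timestamp)` tuple representation and the feature names are our own illustrative assumptions, not the implementation of Koncar et al. (2021):

```python
def predecessor_features(comments, i):
    """User-activity features of the comments preceding the comment at
    0-based index i: count, unique authors, elapsed time from the first
    predecessor to the current comment, and mean gap between predecessors."""
    pred = comments[:i]
    if not pred:
        return {"n": 0, "authors": 0, "elapsed": 0.0, "avg_gap": 0.0}
    times = [t for _, t in pred]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return {
        "n": len(pred),
        "authors": len({a for a, _ in pred}),
        "elapsed": comments[i][1] - times[0],
        "avg_gap": sum(gaps) / len(gaps) if gaps else 0.0,
    }

# toy discussion: (author, timestamp in seconds); current comment at index 3
disc = [("u1", 0.0), ("u2", 60.0), ("u1", 120.0), ("u3", 300.0)]
print(predecessor_features(disc, 3))
# {'n': 3, 'authors': 2, 'elapsed': 300.0, 'avg_gap': 60.0}
```

Successor features follow symmetrically by slicing `comments[i + 1:]` instead.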
To better understand the impact of emotions on predicting controversial
comments, we construct different feature sets to be used by the classification
model (as shown in Fig. 4). Following Koncar et al. (2021), we use Gradient
Boosted Decision Trees with ten-fold cross-validation for the prediction task.
(We also tried a pretrained language model (Devlin et al., 2019) that takes
the original comments as inputs without feature engineering, but it failed to
converge.) We conducted a grid search to select the hyperparameters that
achieve the best performance on the validation set.
We applied the model to predict whether a comment is controversial using
Dataset II (Sec. 3.2). This data includes discussions from six popular
subreddits (in English) and also discussions in French and German,
demonstrating the utility of our approach to multilingual settings. The data
is highly imbalanced in that controversial comments make up only 5% of all
comments; therefore, we under-sample non-controversial comments to create a
balanced dataset for testing. We use ROC-AUC to measure classification
performance.
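The class-balancing step can be sketched as follows (a minimal sketch assuming binary controversy labels; the fixed seed is for reproducibility of the illustration only):

```python
import random

def balance_by_undersampling(items, labels, seed=0):
    """Under-sample the majority (non-controversial) class so that both
    classes have equal size, as done before evaluating the classifier."""
    pos = [x for x, y in zip(items, labels) if y == 1]
    neg = [x for x, y in zip(items, labels) if y == 0]
    rng = random.Random(seed)
    neg = rng.sample(neg, len(pos))  # keep as many negatives as positives
    return pos + neg, [1] * len(pos) + [0] * len(neg)

X = list(range(10))
y = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]  # 2 controversial comments out of 10
Xb, yb = balance_by_undersampling(X, y)
print(sum(yb), len(yb))  # 2 4
```

On the balanced data, a chance classifier scores an AUC of 0.5, which makes the reported AUC values directly interpretable.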
Figure 4 shows the results of the controversial comment classification
separately for the eight subreddits. Overall, adding emotions to the model
consistently improves performance compared to using only activity features.
Such non-trivial improvement demonstrates the effectiveness of emotions in
controversial comment detection.
The best performance is achieved on _r/AskReddit_, with an AUC of 78.6%,
followed by r/france and r/de with AUCs of 77.3% and 74.7%, respectively.
These scores represent improvements of 4.2%, 6.2%, and 5.7%, respectively,
over using activity features alone (comparing feature set m with feature set
i). Interestingly, these three subreddits are also the least moderated forums
in our data (see Table 2), with the lowest ratios of comments removed by
moderators. This suggests that it is easier to identify controversial comments
in less-moderated or unmoderated forums.
In addition to the overall comparison between different feature sets and
different subreddits, there are some intriguing observations on feature
selection. First, models that use the comment’s own emotions as features
(feature sets e, f, g) outperform models that use the emotions of predecessor
comments (feature sets b, c, d). Taking a deeper look, we observe that the
average length of discussions on _r/news_ and _r/worldnews_ is extremely high
(on average 243.5 comments per discussion). As a result, the emotion features
of predecessors are too noisy due to many non-controversial comments among the
predecessors. Second, negative emotions are more effective than positive
emotions in predicting controversy, suggesting that controversy is better
conveyed by negative emotions. Third, features of successors (feature sets i,
j, k, l, m) are more helpful in strongly moderated subreddits, such as
_r/askscience_, because these features still leverage structural information
from removed comments.
Finally, although using features of predecessors and the current comment does
not perform as well as using all features (including successor features), it
represents a more realistic prediction scenario. On this task, adding emotions
to the feature set significantly improves classification performance across
all subreddits. Our results show that emotions help identify controversial
comments before they are flagged as controversial.
## 5\. Discussion and Conclusion
Emotions, or feelings, are fundamental to human experience, and play a
critical role in the formation of beliefs, social interactions (Van Kleef et
al., 2016), and interpersonal conflict (Bar-Tal et al., 2007). Our study
demonstrates that emotions also shape the evolution of controversial
discussions in online communities. Leveraging emotional cues present in
language, we identify a range of positive and negative emotions expressed in
the text of comments. Our large-scale study of discussions on more than 90
subreddits shows that controversial comments have stronger negative emotions,
especially anger, and fewer positive emotions than non-controversial comments.
Although controversial comments represent a small share of all comments in a
discussion, they shift the emotional tone of the entire discussion, leading to
angrier and less positive subsequent comments. We also show that an
emotionally aware classification model could better recognize comments that
will become controversial, even in multilingual discussions.
Our work suggests that moderating controversial comments may help improve the
emotional tone of a discussion. It is possible to catch such comments at the
time of their creation, and step in to help the author regulate the negative
emotions such comments express. Reducing the long-term impact of controversial
comments on the discussion will help improve the overall quality of the
discussions.
## References
* Alhuzali and Ananiadou (2021) Hassan Alhuzali and Sophia Ananiadou. 2021. SpanEmo: Casting Multi-label Emotion Classification as Span-prediction. In _Proc. European Chapter of the ACL_. ACL, Online, 1573–1584.
* Bar-Tal et al. (2007) Daniel Bar-Tal, Eran Halperin, and Joseph De Rivera. 2007\. Collective emotions in conflict situations: Societal implications. _Journal of Social Issues_ 63, 2 (2007), 441–460.
* Barbieri et al. (2021) F. Barbieri, L. Anke, and J. Camacho-Collados. 2021\. XLM-T: Multilingual Language Models in Twitter for Sentiment Analysis and Beyond.
* Baumgartner et al. (2020) Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. The pushshift reddit dataset. In _Proceedings of the international AAAI conference on web and social media_ , Vol. 14. 830–839.
* Bi (2022) Nicky Chang Bi. 2022\. How emotions and issue controversy influence the diffusion of societal issues with imagined audience on Facebook. _Behaviour & Information Technology_ 41, 6 (2022), 1245–1257.
* Bollen et al. (2011) Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. _Journal of computational science_ 2, 1 (2011), 1–8.
* Brady et al. (2021) William J Brady, Killian McLoughlin, Tuan N Doan, and Molly J Crockett. 2021. How social learning amplifies moral outrage expression in online social networks. _Science Advances_ 7, 33 (2021), eabe5641.
* Brady et al. (2017) William J Brady, Julian A Wills, John T Jost, Joshua A Tucker, and Jay J Van Bavel. 2017. Emotion shapes the diffusion of moralized content in social networks. _Proceedings of the National Academy of Sciences_ 114, 28 (2017), 7313–7318.
* Chen and Skiena (2014) Yanqing Chen and Steven Skiena. 2014. Building sentiment lexicons for all major languages. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_. 383–389.
* Chochlakis et al. (2022a) Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, and Shrikanth Narayanan. 2022a. Leveraging Label Correlations in a Multi-label Setting: A Case Study in Emotion. _arXiv preprint arXiv:2210.15842_ (2022).
* Chochlakis et al. (2022b) Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, and Shrikanth Narayanan. 2022b. Using Emotion Embeddings to Transfer Knowledge Between Emotions, Languages, and Annotation Formats. _arXiv preprint arXiv:2211.00171_ (2022).
* Coviello et al. (2014) Lorenzo Coviello, Yunkyu Sohn, Adam DI Kramer, Cameron Marlow, Massimo Franceschetti, Nicholas A Christakis, and James H Fowler. 2014\. Detecting emotional contagion in massive social networks. _PloS one_ 9, 3 (2014), e90315.
* Demszky et al. (2020) Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A dataset of fine-grained emotions. _arXiv preprint arXiv:2005.00547_ (2020).
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. Association for Computational Linguistics, Minneapolis, Minnesota, 4171–4186. https://doi.org/10.18653/v1/N19-1423
* Dori-Hacohen and Allan (2013) Shiri Dori-Hacohen and James Allan. 2013. Detecting controversy on the web. In _Proceedings of the 22nd ACM international conference on Information & Knowledge Management_. 1845–1848.
* Wolf et al. (2020) Thomas Wolf et al. 2020. Transformers: State-of-the-Art Natural Language Processing. In _EMNLP_. ACL, Online, 38–45.
* Garimella et al. (2018) Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2018. Quantifying Controversy on Social Media. _Trans. Soc. Comput._ 1, 1, Article 3 (jan 2018), 27 pages. https://doi.org/10.1145/3140565
* Golder and Macy (2011) Scott A Golder and Michael W Macy. 2011. Diurnal and seasonal mood vary with work, sleep, and daylength across diverse cultures. _Science_ 333, 6051 (2011), 1878–1881.
* Haidt (2022) Jonathan Haidt. 2022\. Why the Past 10 Years of American Life Have Been Uniquely Stupid. _The Atlantic_ (May 2022).
* Hessel and Lee (2019) Jack Hessel and Lillian Lee. 2019. Something’s Brewing! Early Prediction of Controversy-causing Posts from Discussion Features. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. Association for Computational Linguistics, Minneapolis, Minnesota, 1648–1659. https://doi.org/10.18653/v1/N19-1166
* Jaidka et al. (2020) Kokil Jaidka, Salvatore Giorgi, H Andrew Schwartz, Margaret L Kern, Lyle H Ungar, and Johannes C Eichstaedt. 2020. Estimating geographic subjective well-being from Twitter: A comparison of dictionary and data-driven language methods. _Proceedings of the National Academy of Sciences_ 117, 19 (2020), 10165–10171.
* Jang and Allan (2018) Myungha Jang and James Allan. 2018. Explaining controversy on social media via stance summarization. In _The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval_. 1221–1224.
# A HIGHLY ROBUST SPARSE FRACTAL ARRAY
Kretika Goel
Research Scholar
SENSE
IIT Delhi
<EMAIL_ADDRESS>
&Monika Aggarwal
Professor
CARE
IIT Delhi
<EMAIL_ADDRESS>
&Subrat Kar
Professor
Dept of Electrical Engg
IIT Delhi
<EMAIL_ADDRESS>
###### Abstract
The term fractal refers to geometries of fractional dimension that have a recursive nature and exhibit favorable array factor properties. In this article, we present a new class of sparse array, called a sparse fractal array, in which the recursive nature of a fractal is used in antenna array design by combining the sparsity properties of various sparse arrays with the recursive structure of a fractal array. The most important property of the proposed array is its hole-free difference coarray, which makes it a good choice for DOA estimation, since algorithms such as coarray MUSIC demand a hole-free difference coarray. However, the performance of any array depends on the essential and nonessential sensors it contains, which govern whether the difference coarray is affected by sensor failure. Hence, in this paper, a rigorous analysis is carried out for various combinations of sparse fractal arrays to test their robustness in a faulty sensor environment.
Keywords: Sparse array $\cdot$ fractal array $\cdot$ essential sensors $\cdot$ fragility $\cdot$ robustness
## 1 Introduction
Sparse arrays have attracted considerable attention in fields such as radar, array signal processing [1], beamforming, direction-of-arrival (DOA) estimation [2], ultrasound imaging, and 5G communications [3]. Their key property, namely nonuniform element spacing, enables them to resolve more sources than the number of available physical sensors. This property arises from the virtual counterpart of the physical array, called the difference coarray, which consists of the pairwise differences of the sensor locations in the physical array. The larger the contiguous region of the difference coarray, the higher the achievable degrees of freedom [4].
The 1D sparse arrays proposed so far include minimum redundancy arrays (MRA) [5], minimum hole arrays (MHA) [6], nested arrays (NA) [7], coprime arrays (CP) [8],[9], CADIS [10], CACIS [11], augmented nested arrays [12], and super nested arrays [13], among others, leading to numerous studies on diverse sparse configurations. The importance of all these arrays lies in the increased size of their virtual coarray, which gives them degrees of freedom of order $O(N^{2})$, compared with $O(N)$ for a ULA, where N is the number of physical sensors. Hence, the larger the virtual coarray, the more sources can be resolved and the better the resolution of the array. Array design is therefore of prime importance in signal processing, because the size of the contiguous difference coarray is a key metric for achieving better array performance.
In some applications, arrays with symmetric geometry are highly preferred because they not only reduce the computational complexity but also improve DOA estimation performance [14]. Moreover, such symmetric and recursive geometries also facilitate array calibration in the presence of mutual coupling [15]. Hence, in this paper we consider one such symmetric array, the fractal array. The introduction of various sparse arrays has sparked great interest in non-uniform arrays, leading to numerous studies. The performance of an antenna array can be further improved by another class of array design, namely fractal geometric techniques. The term fractal, meaning irregular fragments, was originally coined by Mandelbrot [16] and has contributed much to the new and rapidly growing research field of fractal antenna engineering, which includes the study of fractal-shaped antenna elements as well as the use of fractals in antenna arrays.
A wide variety of applications of fractal arrays is found in fractal electrodynamics [17], in which fractal geometry is combined with electromagnetic theory for the analysis and design of antenna systems. Fractal arrays are recursive arrays formed through the repeated application of a generating subarray: a small array at scale one (P = 1) used to build larger arrays at higher scales (P > 1). The generating subarray has elements that are turned on and off in a certain pattern; copying, scaling, and translating it produces the fractal array, which can therefore be viewed as a sequence of self-similar subarrays.
Fractal arrays are of two types, deterministic and random (natural) fractal structures [18]. Deterministic fractal structures are geometric structures with exact dimensions for the expansion, whereas random fractals are distinguished by the crudest measure of size, namely dimension; all fractals found in nature, such as mountains, coastlines, and fractal-shaped flowers, are random fractals. The advantage of a fractal antenna array (FA) is that it improves multi-beam and multi-band characteristics as well as array factor behavior. One example is the Cantor array used for DOA estimation [19]: a recursively generated array whose difference coarray is hole-free, has a large aperture, is symmetric, and achieves maximal economy. Since a sparse array resolves more uncorrelated sources with fewer elements by providing a much larger virtual aperture, this paper proposes a novel sparse fractal array that combines the properties of sparse arrays and fractal arrays [20],[21] to attain extended aperture, high DOF, symmetry, recursiveness, a hole-free virtual coarray, robustness, and array economy.
Sparse arrays are very valuable in terms of cost reduction and effective performance with a limited number of sensors. The array geometry plays an important role in estimating the directions of arrival of the sources impinging on the array; hence it should be robust to a faulty sensor environment. Faulty sensors change the pattern of the difference coarray, which leads to the failure of coarray MUSIC, the algorithm applied to estimate the DOAs of the sources impinging on the antenna array. In general, sensors can fail randomly and may cause a breakdown of the overall system. It is sometimes observed that, for some sparse arrays such as the MRA, sensor failure can shrink the ULA segment in the difference coarray, degrading the performance of the system. According to the literature, the issue of sensor failure can be addressed in two ways [22]. The first is to develop new algorithms that make the system self-calibrate upon sensor failure. The second is to analyze array geometries in the presence of a faulty sensor environment and then design the system so that the cost of the design is governed by the presence of essential and inessential sensors in it [23]. The first approach is validated in [24], where DOA estimation is carried out based on a minimal resource allocation network, and in [25], where the array is diagnosed based on Bayesian compressive sensing. However, these papers do not fully exploit the interplay between the array configuration and the exact conditions under which the algorithms are applicable. For the second approach, various measures are proposed in [26] to quantify the robustness of arrays, and in [27] the beampattern is studied for a given sensor failure probability. But the impact of faulty sensors on the difference coarray, especially for sparse arrays, still needs to be investigated. This paper does so by examining the proposed sparse fractal arrays in a faulty sensor environment and comparing the results with the most common existing sparse arrays.
## 2 ARRAY DATA MODEL
Figure 1: Array with m Elements
Let M narrowband, uncorrelated, far-field sources of wavelength $\lambda$ impinge on a 1D antenna array whose elements are located at $nd$, where $n\in L$ and $d$ is the inter-element spacing, as shown in Fig. 1. The $i$th source impinges on the array with azimuth angle $\theta_{i}$, the directions being $\{\theta_{1},\theta_{2},\theta_{3},\cdots,\theta_{M}\}$ with $\theta_{i}\in[-\pi/2,\pi/2]$.
The array output at time t can be expressed as:
$Y(t)=\sum_{i=1}^{M}a(\theta_{i}^{\prime})s_{i}(t)+N(t)$ (1)
where $\theta_{i}^{\prime}$ is the normalized DOA, $\theta_{i}^{\prime}=(d/\lambda)\sin(\theta_{i})$, and $d=\lambda/2$ is the inter-sensor spacing. The columns of the steering matrix $A=[a(\theta_{1}^{\prime}),\cdots,a(\theta_{M}^{\prime})]$ correspond to the sensors at locations $nd$, $n\in L$, with entries $e^{2\pi jn\theta_{i}^{\prime}}$. $S(t)=[s_{1}(t),\cdots,s_{M}(t)]^{T}$ is the source signal vector and N(t) is the white Gaussian noise vector with zero mean and variance $\sigma^{2}$. Here $t=1,2,\cdots,T$ indexes the sampling times, where T is the total number of snapshots.
The covariance matrix of Y(t) can be written as:
$R=E[Y(t)Y(t)^{H}]=\sum_{i=1}^{M}\sigma_{i}^{2}\,a(\theta_{i}^{\prime})a^{H}(\theta_{i}^{\prime})+\sigma^{2}I$ (2)
where $\sigma_{i}^{2}$ is the power of the $i$th source and $\sigma^{2}$ is the noise power.
Note that the entries of $a(\theta_{i}^{\prime})a^{H}(\theta_{i}^{\prime})$ in the covariance matrix of eq. (2) are of the form $e^{2\pi j\theta_{i}^{\prime}(s_{m}-s_{n})}$, where $(s_{m}-s_{n})\in D$, the difference coarray, with $s_{m}-s_{n}$ the difference between the $m$th and $n$th sensor locations.
Applying the vectorization operation to eq. (2) and reshaping yields the autocorrelation vector defined on the difference coarray:
$Y_{d}(t)=\sum_{i=1}^{M}\sigma_{i}^{2}\,A_{D}(\theta_{i}^{\prime})+\sigma^{2}I^{\prime}$ (3)
where $I^{\prime}=[E_{1}^{T}\,E_{2}^{T}\cdots E_{N}^{T}]^{T}$, with $E_{i}$ a column vector of all zeros except a 1 at the $i$th position. The autocorrelation vector $Y_{d}$ can be regarded as the sensor output on the difference coarray D. If D contains a contiguous ULA segment $D_{U}$, then the DOAs can be estimated via coarray MUSIC applied to the autocorrelation evaluated on $D_{U}$, which ensures that coarray MUSIC can resolve $O(|L|^{2})$ uncorrelated sources using $|L|$ physical sensors, thereby enhancing the degrees of freedom.
Sparse arrays are designed so that the difference coarray D contains a ULA segment around the origin, denoted $D_{U}$. Based on the relationship between D and $D_{U}$, the signal on $D_{U}$ is constructed as:
$Y_{d_{U}}(t)=\sum_{i=1}^{M}\sigma_{i}^{2}\,A_{D_{U}}(\theta_{i}^{\prime})+\sigma^{2}I^{\prime}$ (4)
DOA estimation from finite snapshots is carried out by computing the finite-snapshot versions of Y(t), $Y_{d}(t)$, $Y_{d_{U}}(t)$, and R, denoted $Y^{\prime}(k)$, $Y^{\prime}_{d}(k)$, $Y^{\prime}_{d_{U}}(k)$, and $R^{\prime}$ respectively, where $k=1,2,3,\ldots,K$ indexes the K realizations of eq. (1) and K is the total number of snapshots. The covariance matrix is estimated as $R^{\prime}=\sum_{k=1}^{K}Y^{\prime}(k)Y^{\prime}(k)^{H}/K$. From $R^{\prime}$, the finite-snapshot autocorrelation vector on $D_{U}$ is formulated as $Y^{\prime}_{d_{U}}(k)$. Moreover, for a sparse array the covariance matrix R is not Hermitian Toeplitz in general, but for the coarray a Hermitian Toeplitz matrix can be constructed by the method explained in [28]. Partitioning this matrix into signal and noise subspaces and then applying coarray MUSIC yields the desired spectrum of the signal, from which the DOAs are estimated.
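The lag-averaging step that maps the covariance matrix of eq. (2) to the coarray signal of eq. (3) can be sketched as follows. This is a minimal illustration with our own helper names (not the authors' code), assuming sensor positions in units of d and the steering entries $e^{2\pi jn\theta^{\prime}}$ defined above:

```python
import numpy as np

def steering(positions, theta_norm):
    # a(theta') with entries e^{j 2 pi n theta'}, n ranging over the sensor set L
    return np.exp(2j * np.pi * np.asarray(positions) * theta_norm)

def coarray_autocorrelation(R, positions):
    # Average the entries of R over all sensor pairs sharing the same lag
    # s_m - s_n; the result is the signal "seen" on the difference coarray D.
    lags = {}
    for i, si in enumerate(positions):
        for j, sj in enumerate(positions):
            lags.setdefault(si - sj, []).append(R[i, j])
    return {k: np.mean(v) for k, v in lags.items()}

# Noiseless single unit-power source at theta' = 0.25 on a 6-sensor ULA,
# so R = a a^H and the coarray signal at lag k is exactly e^{j pi k / 2}.
pos = [0, 1, 2, 3, 4, 5]
a = steering(pos, 0.25)
R = np.outer(a, a.conj())
y_d = coarray_autocorrelation(R, pos)
```

In the noiseless single-source case each entry of R with lag k already equals $e^{2\pi j\theta^{\prime}k}$, so the averaging is exact; with finite snapshots and noise it reduces the variance of the per-lag estimates.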
## 3 PROPOSED SPARSE FRACTAL ANTENNA ARRAY
We first recall the basic Cantor arrays [6], which are fractal arrays defined recursively as:
$C_{r+1}=C_{r}\cup(C_{r}+3^{r}),\quad r\in\mathbb{N}$ (5)
The Cantor array definition follows [2]: the basic Cantor arrays are symmetric, and $C_{r}$ has $N=2^{r}$ physical elements. With $C_{1}=\{0,1\}$, $C_{r}$ has a hole-free difference coarray $D_{r}$ with aperture $3^{r}$. Its difference coarray is of order $O(N^{\log_{2}3})\equiv O(N^{1.585})$, which results in degraded performance compared with sparse arrays whose difference coarrays satisfy the $O(N^{2})$ property. To overcome this limitation of a fractal array, we propose a new architecture in which the Cantor array is combined with a sparse array, generating an array with increased DOF and reduced RMSE, and thus better performance than other arrays.
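The recursion in eq. (5) is straightforward to implement. The following sketch (function names are ours) generates $C_{r}$ and its difference coarray:

```python
def cantor_array(r):
    """Basic Cantor array C_r via C_{k+1} = C_k ∪ (C_k + 3^k), C_0 = {0}."""
    c = {0}
    for k in range(r):
        c |= {x + 3 ** k for x in c}
    return sorted(c)

def difference_coarray(s):
    # All pairwise differences of the sensor locations
    return sorted({a - b for a in s for b in s})
```

For instance, `cantor_array(2)` yields [0, 1, 3, 4], whose difference coarray is the hole-free set {-4, ..., 4} of size $3^{2}$, and $C_{4}$ has $2^{4}=16$ elements.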
###### Definition 3.1 (Proposed 1D SFA).
The proposed SFA (sparse fractal array) consists of two sub-arrays, sub-array 1 and sub-array 2. Let sub-array 1 be an M-element sparse array and let sub-array 2 be an N-element fractal array created from the Cantor set. The element positions in sub-array 1 and sub-array 2 are defined by $\widetilde{S_{1}}$ and $\widetilde{S_{2}}$, where
$\widetilde{S_{1}}=[m_{1},m_{2},\ldots,m_{M}]d_{1},\quad m_{1}=1$ (6)
$\widetilde{S_{2}}=[n_{1},n_{2},\ldots,n_{N}]d_{2},\quad n_{1}=0$ (7)
where the $m_{i}$ and $n_{i}$ are integers and $d_{1}$ and $d_{2}$ are inter-element spacings such that $d_{2}=(2M+1)d_{1}$. The proposed array is denoted:
$\widetilde{S}=\widetilde{S_{1}}\oplus\widetilde{S_{2}}$ (8)
where the element positions of the proposed SFA are given by the cross sum $\oplus$ of $\widetilde{S_{1}}$ and $\widetilde{S_{2}}$, and the SFA has MN elements.
Example 1: Let sub-array 1 be a nested array whose sensor locations for M=6 are given by the set $\widetilde{S_{1}}$. With $d_{1}=1$, the normalized sensor positions of the first sub-array are $\widetilde{S_{1}}$ = [1 2 3 4 8 12]. To construct $\widetilde{S_{2}}$ we use the basic fractal array (FA) = [0,1] throughout the paper. Then $\widetilde{S_{2}}$ = [0 1]$d_{2}$ with $d_{2}=(2M+1)d_{1}=(2\cdot 6+1)d_{1}=13d_{1}$, i.e. $\widetilde{S_{2}}$ = [0 13]$d_{1}$. Using the relation $\widetilde{S}=\widetilde{S_{1}}\oplus\widetilde{S_{2}}$ we get $\widetilde{S}$ = [1 2 3 4 8 12 14 15 16 17 21 25].
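Example 1 can be reproduced in a few lines (a sketch; the function name is ours):

```python
def cross_sum(s1, s2):
    # S = S1 ⊕ S2: every pairwise sum of positions from the two sub-arrays
    return sorted({m + n for m in s1 for n in s2})

s1 = [1, 2, 3, 4, 8, 12]   # nested array, M = 6, d1 = 1
s2 = [0, 13]               # basic fractal array [0 1] scaled by d2 = 13 d1
sfa = cross_sum(s1, s2)
# sfa == [1, 2, 3, 4, 8, 12, 14, 15, 16, 17, 21, 25]
```

The set-based cross sum also makes the MN element count explicit whenever no sums coincide.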
###### Definition 3.2 (Difference coarray of the proposed SFA).
The difference coarray $\widetilde{D}$ of an array $\widetilde{S}$ is defined as the set of differences between the sensor locations in $\widetilde{S}$: $\widetilde{D}=\{n_{1}-n_{2}:n_{1},n_{2}\in\widetilde{S}\}$. Let $\widetilde{D_{u}}$ denote the contiguous range of virtual sensors in the difference coarray $\widetilde{D}$, which is hole-free and the most useful when applying estimation algorithms. The virtual sensor positions in the difference coarray are also called lags; for Example 1 the virtual aperture of the proposed SFA extends from -24 to 24. In this example $\widetilde{D_{u}}$ also spans -24 to 24, as there are no holes in the difference coarray of the proposed SFA, so all the information in the difference coarray can be exploited. Fig. 2 shows the physical array of the proposed SFA and its difference coarray.
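Definition 3.2 can be checked directly on the array of Example 1 (a sketch with our helper names):

```python
def difference_coarray(s):
    return sorted({n1 - n2 for n1 in s for n2 in s})

def central_ula(d):
    # Largest contiguous lag segment around the origin, i.e. D_u
    dset = set(d)
    m = 0
    while m + 1 in dset:
        m += 1
    return list(range(-m, m + 1))

sfa = [1, 2, 3, 4, 8, 12, 14, 15, 16, 17, 21, 25]
d = difference_coarray(sfa)
```

Here the coarray equals its central ULA segment, confirming the hole-free range -24 to 24 with 49 lags.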
###### Definition 3.3 (Weights Function of the proposed SFA).
The weight function w(k) is the number of sensor pairs with separation $k\in\mathbb{Z}$. It can be written as w(k) = $|\{(n_{1},n_{2})\in L^{2}:n_{1}-n_{2}=k\}|$. By definition, w(k) is an integer-valued even function, i.e. w(-k) = w(k). Designing any sparse array requires the optimal utilization of the difference coarray D and the weight function w(k). For example, both the MRA and the MHA have difference coarrays of size $O(N^{2})$, but the MRA has no closed-form expression for its sensor locations; hence the works [29] and [30] lack closed-form expressions for dealing with sparse and fractal arrays. Sparse arrays with closed-form expressions for the sensor locations, such as the nested array, coprime array, generalized coprime array, and super nested array, are simple to compute. Hence, we make use of such sparse arrays, whose closed-form expressions can be used for designing sparse fractal arrays.
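The weight function can be tabulated directly with a counter (a sketch; naming is ours):

```python
from collections import Counter

def weight_function(s):
    # w(k) = number of sensor pairs (n1, n2) with n1 - n2 = k
    return Counter(n1 - n2 for n1 in s for n2 in s)

sfa = [1, 2, 3, 4, 8, 12, 14, 15, 16, 17, 21, 25]
w = weight_function(sfa)
```

For this 12-sensor SFA, w(0) = 12 (every sensor paired with itself), the extreme lag ±24 is reached by exactly one pair (25, 1), and w is even, as the definition requires.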
## 4 NUMERICAL EXAMPLES
When designing any array configuration, the basic arrays must satisfy certain criteria laid down in the literature. First, the sensor locations should be expressible in closed form. Second, the difference coarray of the basic array should be hole-free, because the estimation performance of DOA methods such as MUSIC and ESPRIT depends on the cardinality of the central ULA segment of the difference coarray; hence a hole-free difference coarray is required for proper estimation of the number of sources. Third, the difference coarray should be large enough to achieve increased DOF with respect to the number of sensors. Keeping these criteria in mind, various configurations of sparse fractal arrays have been designed and tested using the nested array, coprime array, augmented nested arrays, and super nested arrays as basic arrays, presented one by one below.
### 1.
[Nested Fractal Array (NFA)]
A two-level nested array consists of two uniform linear arrays, an inner and an outer ULA, where the inner ULA has N1 elements with spacing d1 and the outer ULA has N2 elements with spacing d2 such that d2 = (N1+1)d1. The sensor locations of the physical array are given by $\{md_{1},(m=1,2,3\cdots N_{1})\}\cup\{nd_{2},(n=1,2,3\cdots N_{2})\}$, where N1 = N2 = N/2 if N is even, and N1 = (N-1)/2, N2 = (N+1)/2 if N is odd. Using a nested array with N=6 as the basic sparse array and taking the cross sum with the basic fractal array [0 1] as defined in Example 1, we obtain the NFA shown in Fig. 2 along with its difference coarray, which has no holes. The central ULA segment of the difference coarray of the proposed NFA is $\widetilde{D_{u}}$; hence it can resolve up to $(|\widetilde{D_{u}}|-1)/2$ sources. Thus, with 12 physical sensors and $|\widetilde{D_{u}}|=49$, it can estimate a maximum of 24 sources. To check its effectiveness, assume 24 uncorrelated sources impinge on the NFA with normalized DOAs picked randomly from $\theta^{\prime}\in[-0.5,0.5]$. Let SNR = 0 dB and the number of snapshots be 500. Applying MUSIC to the finite-snapshot version of the above model gives the results shown in Fig. 2, with an RMSE of 0.0014.
Figure 2: (a) Physical and virtual array of the NFA; (b) normalized MUSIC spectrum of 24 signals for DOA estimation, with RMSE.
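The two-level nested array used as sub-array 1 above can be generated as follows (a sketch implementing the split rule stated in the text; the function name is ours):

```python
def nested_array(n):
    # Inner ULA: N1 sensors at spacing 1; outer ULA: N2 sensors at spacing N1+1
    n1 = n // 2 if n % 2 == 0 else (n - 1) // 2
    n2 = n - n1
    inner = list(range(1, n1 + 1))
    outer = [m * (n1 + 1) for m in range(1, n2 + 1)]
    return sorted(set(inner + outer))
```

`nested_array(6)` returns [1, 2, 3, 4, 8, 12], matching $\widetilde{S_{1}}$ of Example 1; for odd N the outer ULA gets the extra sensor.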
### 2.
[Coprime Fractal Array (CFA)] Following the proposed method, let sub-array 1 be a conventional coprime array for the coprime pair M=2, N=3, with $2M+N-1=6$ elements defined by the set $\widetilde{S_{1}}$. With $d_{1}=1$, the normalized sensor positions of the first sub-array are $\widetilde{S_{1}}$ = [0 2 3 4 6 9]. To construct $\widetilde{S_{2}}$ we use the basic fractal array (FA) = [0,1], so $\widetilde{S_{2}}$ = [0 1]$d_{2}$ with $d_{2}=(2\cdot 6+1)d_{1}=13d_{1}$ (here 6 is the size of sub-array 1), i.e. $\widetilde{S_{2}}$ = [0 13]$d_{1}$. Using the relation $\widetilde{S}=\widetilde{S_{1}}\oplus\widetilde{S_{2}}$ we get $\widetilde{S}$ = [0 2 3 4 6 9 13 15 16 17 19 22], and the virtual aperture of the proposed CFA extends from -22 to 22. In this example $\widetilde{D_{u}}$ lies in the range -22 to 22, as no holes are present in the difference coarray of the proposed CFA. Let 22 uncorrelated sources impinge on the CFA with normalized DOAs picked randomly from $\theta^{\prime}\in[-0.5,0.5]$. Let SNR = 0 dB and the number of snapshots be 500. Applying MUSIC to the finite-snapshot version of the above model gives the results shown in Fig. 3, with an RMSE of 0.0027.
Figure 3: (a) Physical and virtual array of the CFA; (b) normalized MUSIC spectrum of 22 signals for DOA estimation, with RMSE.
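The prototype coprime array used as sub-array 1 here can be generated with the standard construction (a sketch; naming is ours):

```python
def coprime_array(m, n):
    # Prototype coprime array with 2M + N - 1 sensors:
    # {M*k : 0 <= k < N}  ∪  {N*k : 0 <= k < 2M}
    return sorted({m * k for k in range(n)} | {n * k for k in range(2 * m)})
```

`coprime_array(2, 3)` gives [0, 2, 3, 4, 6, 9] with 2·2 + 3 − 1 = 6 sensors, matching $\widetilde{S_{1}}$ above; the two subarrays share only the sensor at the origin.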
### 3.
[Augmented GenI Fractal Array (AUGGENIFA)]
An augmented nested array (ANA) is formed by first splitting the dense sub-array of the nested array into several parts, which are then rearranged in a particular manner on the sides of the sparse sub-array. Using this concept, the ANA is designed by splitting the dense ULA of the nested array into a few left/right sub-arrays arranged on either side of the low-density sub-array of the nested array. Following the proposed method, let sub-array 1 be AUGGENI for N=6, whose sensor locations are defined by the set $\widetilde{S_{1}}$. With $d_{1}=1$, the normalized sensor positions of the first sub-array are $\widetilde{S_{1}}$ = [1 4 8 12 13 14]. Constructing $\widetilde{S_{2}}$ from the basic fractal array (FA) = [0,1] gives $\widetilde{S}$ = [1 4 8 12 13 14 17 21 25 26 27], and the difference coarray of the proposed AUGGENIFA extends from -26 to 26, which means it can estimate a maximum of 26 sources. To check the array we take the maximum possible number of sources, i.e. 26 uncorrelated sources impinging on the AUGGENIFA with normalized DOAs picked randomly from $\theta^{\prime}\in[-0.5,0.5]$. Let SNR = 0 dB and the number of snapshots be 500. Applying MUSIC to the finite-snapshot version of the above model gives the results shown in Fig. 4, with an RMSE of 0.00156.
Figure 4: (a) Physical and virtual array of the AUGGENIFA; (b) normalized MUSIC spectrum of 26 signals for DOA estimation, with RMSE.
### 4.
[Augmented GenII Fractal Array (AUGGENIIFA)]
Another type of augmented nested array is ANAGENII, formed by splitting the dense sub-array of the original nested array into odd and even parts and arranging them on either side of the less dense sub-array of the nested array. For N=6, AUGGENII has sensor locations $\widetilde{S_{1}}$ = [1 2 4 8 12 13], and using the basic fractal array (FA) = [0,1] we get $\widetilde{S}$ = [1 2 4 8 12 13 14 15 17 21 25 26]. To check the array we take the maximum possible number of sources, i.e. 26 uncorrelated sources impinging on the AUGGENIIFA with normalized DOAs picked randomly from $\theta^{\prime}\in[-0.5,0.5]$. Let SNR = 0 dB and the number of snapshots be 500. Applying MUSIC to the finite-snapshot version of the above model gives the results shown in Fig. 5, with an RMSE of 0.00127.
Figure 5: (a) Physical and virtual array of the AUGGENIIFA; (b) normalized MUSIC spectrum of 26 signals for DOA estimation, with RMSE.
### 5.
[Super Nested Fractal Array (SNFA)]
Super nested arrays have the same number of sensors and the same difference coarray as nested arrays, but a unique arrangement of the sensors leads to reduced mutual coupling compared with nested arrays. For N=6, the super nested array has sensor locations $\widetilde{S_{1}}$ = [1 3 6 8 11 12], and using the basic fractal array (FA) = [0,1] we get the SNFA as $\widetilde{S}$ = [1 3 6 8 11 12 14 16 19 21 24 25]. To validate the proposed array we take the maximum possible number of sources, i.e. 25 uncorrelated sources impinging on the SNFA with normalized DOAs picked randomly from $\theta^{\prime}\in[-0.5,0.5]$. Let SNR = 0 dB and the number of snapshots be 500. Applying MUSIC to the finite-snapshot version of the above model gives the results shown in Fig. 6, with an RMSE of 0.00174.
Figure 6: (a) Physical and virtual array of the super nested fractal array; (b) normalized MUSIC spectrum of 25 signals for DOA estimation, with RMSE.
## 5 ROBUSTNESS OF SPARSE FRACTAL ARRAY TO SENSOR FAILURE
It is well known in the literature that the larger the difference coarray, the less robust the array. Stated differently, given two arrays with the same number of physical elements and the same difference coarray, one may be much more robust than the other. Hence, the sparse array should be designed to maintain a balance between the size of the difference coarray and its robustness, because when such an array is used for DOA estimation, the performance is affected both by the size of the difference coarray and by its robustness properties.
###### Definition 5.1 (Essential Sensors).
A sensor in an array is said to be essential if its presence or absence affects the structure of the difference coarray. In other words, an essential sensor is, as its name signifies, essential because its failure alters the difference coarray, which may then no longer be hole-free. The presence of holes in the difference coarray degrades the performance of estimation algorithms, and the array is said to be less robust. Studying the essentialness of the sensors helps in system design, because the cost of sensor assignment depends on their essential or inessential behavior. For example, let there be two sensor types with different costs and qualities: the first is costly but has a low failure probability, while the second is cheaper but fails easily. To balance the budget and the robustness of an array, the first type can be used for the essential sensors and the second for the inessential sensors.
###### Definition 5.2 ( The fragility of a sensor array ).
Fragility characterizes whether an array is robust to sensor failure: an array is more robust if it is less fragile, and vice versa. The fragility F varies from 0 to 1, where values near 0 indicate a less fragile, more robust array and values near 1 a more fragile, less robust array. Fragility is computed as the number of essential sensors divided by the total number of sensors in the array.
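Definitions 5.1 and 5.2 translate directly into a brute-force check (a sketch; helper names are ours). As a reference point, in a ULA with at least 4 sensors only the two end sensors are essential, since every interior lag can still be formed by another pair:

```python
def difference_coarray(s):
    return {n1 - n2 for n1 in s for n2 in s}

def essential_sensors(s):
    # A sensor is essential iff removing it changes the difference coarray
    d = difference_coarray(s)
    return [x for x in s
            if difference_coarray([y for y in s if y != x]) != d]

def fragility(s):
    # F = (number of essential sensors) / (total number of sensors)
    return len(essential_sensors(s)) / len(s)
```

For the 4-element ULA [0, 1, 2, 3] only the endpoints are essential, giving F = 0.5; for a 6-element ULA, F = 2/6 = 1/3.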
###### Definition 5.3 (The R-essentialness property of a sensor array).
R-essentialness captures the fact that it is not always the case that only one sensor fails at a time; R sensors may fail simultaneously, and all such possibilities must be considered when assessing the robustness of an array. A sub-array $\widetilde{Z}$ of $\widetilde{S}$ is said to be R-essential with respect to the array $\widetilde{S}$ if it has the following properties: first, $\widetilde{Z}$ has size exactly R; second, the difference coarray changes when $\widetilde{Z}$ is removed from $\widetilde{S}$. Note that R-essentialness is an attribute of a sub-array $\widetilde{Z}$ of $\widetilde{S}$, and the fragility computed over all such sub-arrays is called the R-fragility; e.g., if two sensors fail at a time, 2-essentialness and the corresponding 2-fragility define the robustness of the array.
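R-essentialness and R-fragility follow by iterating over all size-R sub-arrays (a brute-force sketch, feasible for small arrays; naming is ours):

```python
from itertools import combinations

def difference_coarray(s):
    return {n1 - n2 for n1 in s for n2 in s}

def r_fragility(s, r):
    # Fraction of size-r sub-arrays Z whose removal changes the coarray
    d = difference_coarray(s)
    subsets = list(combinations(s, r))
    essential = sum(
        1 for z in subsets
        if difference_coarray([x for x in s if x not in z]) != d)
    return essential / len(subsets)
```

For the 4-element ULA [0, 1, 2, 3], every 2-element sub-array is 2-essential (even removing the interior pair {1, 2} leaves only the lag ±3), so the 2-fragility is 1.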
Fig. 7 illustrates the concept of essential and inessential sensors: the sensors at $\{0,2,4,9,17,22\}$ are essential because their removal alters the difference coarray, while the sensors at $\{3,6,13,15,16,19\}$ are inessential, as their removal does not alter the original difference coarray of the proposed CFA. Hence the proposed CFA has 6 essential sensors, giving a fragility of $F_{1}=0.5$, which satisfies $0\leq F\leq 1$. The proposed fractal design is therefore robust to sensor failure.
Figure 7: (a) The original array and difference coarray of the proposed coprime fractal array. The array configurations and difference coarrays after deletion of (b) the sensor at 0, (c) the sensor at 2, (d) the sensor at 3, (e) the sensor at 4, (f) the sensor at 6, (g) the sensor at 9, (h) the sensor at 13, (i) the sensor at 15, (j) the sensor at 16, (k) the sensor at 17, (l) the sensor at 19, (m) the sensor at 22 from the original array in (a). Sensors are denoted by red dots and empty spaces represent holes.
## 6 SIMULATION RESULTS
To study the robustness of the proposed sparse fractal arrays we consider the failure of 1 sensor (1-essentialness), 2 sensors (2-essentialness), and 3 sensors (3-essentialness) at a time, with corresponding fragilities F1, F2, and F3. Due to space limitations, Table 1 lists only the 1-essential sensor sets explicitly, together with all fragilities F1, F2, and F3, which are used for the comparisons that follow.
Table 1: Robustness analysis of the proposed sparse fractal arrays S.No. | Proposed sparse fractal arrays with 12 physical sensors | 1-Essentialness | Fragility
---|---|---|---
1 | Nested Fractal Array | {1,2,3,4,17,21,25} | F1=0.5833 F2=0.8788 F3=0.9909
2 | Coprime Fractal Array | {0,2,4,9,17,22} | F1=0.5000 F2=0.8182 F3=0.9727
3 | AUGGENI Fractal Array | {1,4,8,12,17,21,25,26,27} | F1=0.8182 F2=1
4 | AUGGENII Fractal Array | {1,2,3,4,17,21,25} | F1=0.5833 F2=0.8788 F3=0.9909
5 | Super Nested Fractal Array | {1,3,6,8,11,12,21,24,25} | F1=0.7500 F2=1
Plotting the fragility F1 against the normalized difference coarray size and comparing with existing sparse arrays gives Fig. 8, which shows that among the proposed fractal arrays the coprime fractal array is the most robust and least fragile, whereas the existing MRA, MHA, and nested array have F = 1, meaning they are the least robust to sensor failure.
Figure 8: Robustness analysis of all proposed arrays in terms of F1, with 12 physical sensors in each sparse fractal array.
Fig. 9 shows the R-fragility of several array configurations with 12 physical sensors. Considering 1-, 2-, and 3-essentialness with the corresponding F1, F2, and F3, we observe that the ULA is the most robust array, since it has the smallest F for all R. The nested array and the super nested array are the least robust, as $F_{R}=1$ for all $1\leq R\leq|S|$. The prototype coprime array and the proposed NFA, CFA, AUGGENII, and AUGGENI are less robust than the ULA but more robust than the nested array and the SNFA.
Figure 9: The R-fragility of several array configurations with 12 physical
sensors
## 7 CONCLUDING REMARKS
In this paper, we have proposed various sparse fractal array configurations that generate the hole-free difference coarrays crucial for the functionality of coarray MUSIC and other estimation algorithms. We have further studied the robustness of the proposed arrays to sensor failure by considering various combinations of failing sensors, which characterizes whether the difference coarray changes when particular sensors fail. The essentialness property of the sensors in an array thus forms the basis for analyzing the robustness of sparse arrays. In the future, it would be of considerable interest to design 2D sparse fractal arrays and study their robustness.
## ACKNOWLEDGMENT
The authors would like to thank the Center for Sensors, Instrumentation and Cyber-physical Systems Engineering (SENSE), the Center for Applied Research in Electronics (CARE), and the Department of Electrical Engineering, Indian Institute of Technology Delhi (IIT Delhi), for the facilities used in this research.
# Quantized relativistic time-of-arrival operators for spin-0 particles and the quantum tunneling time problem

P.C.M. Flores and E.A. Galapon

Theoretical Physics Group, National Institute of Physics, University of the Philippines Diliman, 1101 Quezon City, Philippines

(P.C.M. Flores is currently at Max-Born-Institute, Max-Born-Str. 2A, 12489 Berlin, Germany.)
###### Abstract
We provide a full account of our recent report [EPL, 141 (2023) 10001] which
constructed a quantized relativistic time-of-arrival operator for spin-0
particles using a modified Weyl-ordering rule to calculate the traversal time
across a square barrier. It was shown that the tunneling time of a
relativistic spin-0 particle is instantaneous under the condition that the
barrier height $V_{o}$ is less than the rest mass energy. This implies that
instantaneous tunneling is an inherent quantum effect in the context of
arrival times.
## I Introduction
Tunneling is one of the most well-known quantum effects and has been a long
standing important subject of quantum mechanics. The simplest tunneling
phenomenon is demonstrated by a square potential barrier wherein the
Schrödinger equation predicts a non-zero probability that a particle initially
on the far left of the barrier is transmitted to the far right even if its
energy is less than the barrier height. However, tunneling becomes problematic when one asks how long it takes a wavepacket to traverse the classically forbidden region [1, 2], because the question is compounded with the quantum time problem (QTP) and with superluminality. Standard quantum mechanics treats time only as a parameter; as such, the quantum tunneling time problem may be ill-defined because there is no canonical formalism in standard quantum mechanics to answer questions regarding time durations [3, 4]. Moreover, a dynamical
treatment of time, e.g., a time operator, has been met with pessimism because
of Pauli’s no-go theorem [5] on the existence of a time operator. This has led
to several definitions of tunneling time using a parametric approach, e.g.
Wigner phase time [6], Büttiker-Landauer time [7], Larmor time [8, 9, 10],
Pollak-Miller time [11], dwell time [12], among many others [13, 14, 15, 16,
17, 18, 19, 20, 21, 22]. However, one of us has shown that Pauli’s no-go
theorem does not hold in the single Hilbert space formulation of quantum
mechanics [23] and constructed a corresponding barrier traversal time operator
to calculate the tunneling time [15]. By doing so, tunneling time was treated
as a dynamical observable which addresses any contentions on tunneling time
being an ill-defined problem.
There are still debates on the validity of the various proposals and their corresponding physical meaning when they predict apparent superluminal velocities [24]. Several experiments [25, 26, 27, 28, 29, 30, 31, 32, 33] to
measure the tunneling time have confirmed the superluminal behavior of a
tunneling particle but there is no consensus on whether the particle is
transmitted instantaneously or if it spends a finite time inside the barrier.
Moreover, the relation between these various proposed tunneling times is still
unclear but it has recently been argued that these tunneling times can be
classified into two distinct categories [34], i.e., arrival time and
interaction time. The former is concerned with the appearance of the tunneled
particle at the far side of the barrier while the latter determines the time
duration spent inside the barrier. Tunneling time as an “arrival time” is
demonstrated by attoclock experiments while “interaction time” by Larmor clock
experiments [34]. Some attoclock experiments have reported instantaneous [25,
26, 27, 28, 29, 30] tunneling while others reported finite tunneling times
[31, 32]. Moreover, a recent Larmor clock experiment has also reported finite
tunneling time [33]. Now, whether tunneling is instantaneous or not, the crux
of the problem is that both results imply that the particle exhibits
superluminal behavior below the barrier. This now raises the question on
whether the superluminality is a consequence of using non-relativistic quantum
mechanics, i.e., could there be a fundamental difference if one uses a relativistic theory?
There have been several studies to extend the analysis of tunneling times to
the relativistic case in order to adequately address the superluminal behavior
[35, 36, 37, 38]. It was shown by de Leo and Rotelli [35], and then separately by de Leo [36], who used the phase time via the Dirac equation in a step potential, that superluminal tunneling times are still present. Petrillo
and Janner [37] obtained similar results for a square barrier via the Dirac
equation. Krekora, Su, and Grobe [38] also used the Dirac equation for various
potential barriers of the form $V(x)=V_{o}e^{-(2x/w)^{n}}$ with an effective
width $w$, and defined an “instantaneous tunneling speed” to show superluminal
tunneling under the condition that the barrier height $V_{o}$ is less than
twice the rest mass energy. This apparent superluminal behavior despite the
relativistic treatment implies that the superluminal behavior is an inherent
quantum effect.
In this paper, we give a full account of our recent report [39] which proposed
a formalism on the construction of quantized relativistic TOA-operators for
spin-0 particles in the presence of an interaction potential. This was then
used to construct a corresponding barrier traversal time operator. By doing so, the formalism simultaneously addresses the compounding problems of superluminality and the QTP in tunneling times. Now, it is well-known that
relativistic quantum mechanics is not a well-defined one-particle theory since
relativistic effects can lead to spontaneous pair-creation and annihilation
which might render the concept of TOA meaningless, i.e., we are not sure if
the particle that tunneled and arrived is the same particle we initially had.
To address this, we will impose the condition that the barrier height is less
than the rest mass energy.
The rest of the paper is structured as follows. In Sec. II we review the
construction of quantized non-relativistic TOA-operators in coordinate
representation using Weyl, Born-Jordan, and simple symmetric ordering [40]
which will then be modified to construct the corresponding relativistic
counterpart for spin-0 particles in Sec. III. The barrier traversal time
operator is then constructed in Sec. IV and will be shown to reduce to the
correct classical limit as $\hbar\rightarrow 0$ in Sec. V. Next, we establish
the expected barrier traversal time and show that tunneling is instantaneous
in Sec. VI, regardless of the ordering rule used. A single Gaussian wavepacket
is then used as an example in Sec. VII. Last, we conclude in Sec. VIII.
## II Review of quantized non-relativistic TOA-operators
The rigorous mathematical framework of quantum mechanics was developed by von
Neumann using the Hilbert space $\mathcal{H}$ as its underlying linear
topological space wherein physical observables are generally identified with
maximally symmetric densely defined operators $\mathsf{\hat{A}}$ in
$\mathcal{H}$ while physical states are represented by the set of unit rays
$\ket{\psi}$ in $\mathcal{H}$. The eigenvalues of these operators then
represent the possible measurement outcomes of the corresponding observable
and its spectrum may be discrete, continuous, or a combination of both.
However, operators in quantum mechanics are generally unbounded with a
continuous spectrum corresponding to non-normalizable eigenfunctions, e.g. the
position and momentum operator whose eigenfunctions are the Dirac-delta
function $\delta(q-q_{o})$ and the plane wave
$\exp(ipq/\hbar)/\sqrt{2\pi\hbar}$, respectively.
In order to deal with these non-square integrable functions that are outside
the Hilbert space, one can use Dirac’s bra-ket notation which is made
mathematically rigorous by using the rigged Hilbert space (RHS) that utilizes
the theory of distributions [41, 42, 43, 44, 40]. In our case, we choose the
fundamental space of our RHS to be the space of infinitely continuously
differentiable complex valued functions with compact supports $\Phi$ such that
the RHS is $\Phi\subset L^{2}(\mathbb{R})\subset\Phi^{\times}$, where
$\Phi^{\times}$ is the space of all continuous linear functionals on $\Phi$.
The standard Hilbert space formulation of quantum mechanics is recovered by
taking the closures on $\Phi$ with respect to the metric of
$L^{2}(\mathbb{R})$.
In coordinate representation, a quantum observable $\mathsf{\hat{A}}$ is a
mapping from $\Phi$ to $\Phi^{\times}$, and is given by the formal integral
operator
$(\mathsf{\hat{A}}\varphi)(q)=\int_{-\infty}^{\infty}dq^{\prime}\matrixelement{q}{\mathsf{\hat{A}}}{q^{\prime}}\varphi(q^{\prime})$
(1)
where the kernel satisfies
$\matrixelement{q}{\mathsf{\hat{A}}}{q^{\prime}}=\matrixelement{q^{\prime}}{\mathsf{\hat{A}}}{q}^{*}$,
to ensure Hermiticity such that the eigenvalues of Eq. (1) are real-valued.
The integral Eq. (1) is interpreted in the distributional sense, i.e. it is a
functional on $\Phi$ wherein the kernel
$\matrixelement{q}{\mathsf{\hat{A}}}{q^{\prime}}$ is a distribution. As an
example, the position and momentum operators are now given as
$\displaystyle(\mathsf{\hat{q}}\varphi)(q)=$
$\displaystyle\int_{-\infty}^{\infty}dq^{\prime}\delta(q-q^{\prime})\varphi(q^{\prime})=q\varphi(q)$
(2) $\displaystyle(\mathsf{\hat{p}}\varphi)(q)=$
$\displaystyle\int_{-\infty}^{\infty}dq^{\prime}i\hbar\dfrac{d\delta(q-q^{\prime})}{dq^{\prime}}\varphi(q^{\prime})=-i\hbar\dfrac{d\varphi(q)}{dq}.$
(3)
There is still no consensus on how TOA-operators in the presence of an interaction potential are constructed [45, 46, 40]. One possible method is by
canonical quantization but it has been deemed not meaningful because the
classical TOA can be multiple and/or complex-valued. Moreover, canonical
quantization suffers from ordering ambiguities, obstruction to quantization
[47, 48], and circularity when imposing the correspondence principle [49, 50].
To overcome these problems, the method of “supraquantization” was proposed for
non-relativistic TOA-operators [50], i.e., constructing TOA-operators from
first principles of quantum mechanics. It turns out that for linear systems,
$V(q)=\alpha q^{2}+\beta q+\gamma$, the “supraquantized” TOA-operator is equal
to the canonically quantized TOA-operator using Weyl-ordering. Meanwhile, the
“supraquantized” TOA-operator for non-linear systems can be expressed as a
perturbation series wherein the leading term is the Weyl-ordered TOA-operator
[50, 40, 51]. This relation shows that canonical quantization is sufficient as
a leading order approximation of the canonical TOA-operator.
Now, the non-relativistic TOA-operators constructed by Galapon and Magadan
[40] quantized the corresponding classical non-relativistic TOA
$t_{x}(q,p)=-\text{sgn}(p)\sqrt{\dfrac{\mu_{o}}{2}}\int_{x}^{q}dq^{\prime}\left[\dfrac{p^{2}}{2\mu_{o}}+V(q)-V(q^{\prime})\right]^{-1/2}$
(4)
in coordinate representation. The function $\text{sgn}(p)$ is the sign of the
initial momentum $p$ which accounts for the particles moving from the left or
right. Meanwhile, $x$ is the arrival point and $\mu_{o}$ is the rest mass of
the particle. The expression Eq. (4) was obtained by treating energy as a
constant of motion and inverting the corresponding Hamilton equation of
motion. To quantize Eq. (4), it has been argued [40] that objections can be
addressed on physical grounds. First, the TOA of a quantum particle is always
real-valued because it can tunnel to the classically forbidden region. Second,
it is only meaningful to quantize the first TOA because the wavefunction will
collapse after a detector registers the TOA of the quantum particle. The
quantization of Eq. (4) was done by first expanding around the free TOA [40],
i.e.,
$t_{0}(q,p)=-\sum_{j=0}^{\infty}(-1)^{j}\mu_{o}^{j+1}\dfrac{(2j-1)!!}{j!p^{2j+1}}\int_{0}^{q}dq^{\prime}\left(V(q)-V(q^{\prime})\right)^{j}.$
(5)
It is then assumed that the potential is analytic at the origin wherein it admits the expansion $V(q)=\sum_{n=0}^{\infty}\nu_{n}q^{n}$ such that
$\displaystyle\int_{0}^{q}dq^{\prime}\left(V(q)-V(q^{\prime})\right)^{j}=\sum_{n=1}^{\infty}a_{n}^{(j)}q^{n}.$
(6)
This then yields the local time of arrival (LTOA)
$t_{0}(q,p)=-\sum_{j=0}^{\infty}(-1)^{j}\mu_{o}^{j+1}\dfrac{(2j-1)!!}{j!}\sum_{n=1}^{\infty}a_{n}^{(j)}\dfrac{q^{n}}{p^{2j+1}},$
(7)
which is now amenable to quantization because it is single and real-valued in
its region of convergence in the phase space. Now, the LTOA converges
absolutely and uniformly only in some local neighborhood
$\omega=\omega_{q}\times\omega_{p}$ determined by
$\absolutevalue{V(q)-V(q^{\prime})}<p^{2}/2\mu_{o}$ for $p\neq 0$ and
continuous $V(q)$, and will diverge outside this region which signifies that
the particle has not classically arrived at $q=0$, i.e. the classically
forbidden region. However, the classical TOA Eq. (4) holds in the region
$\Omega=\Omega_{q}\times\Omega_{p}$ where $\omega\subset\Omega$. This means
that Eq. (4) is the analytic continuation of the LTOA in the region
$\Omega\backslash\omega$ [51].
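As an illustrative numerical check (a sketch of ours, not from [40]): for a linear potential $V(q)=\nu q$ the classical TOA Eq. (4) has a closed form, while the inner integral of the expansion Eq. (5) evaluates to $\nu^{j}q^{j+1}/(j+1)$, so the truncated LTOA can be compared with Eq. (4) directly at a phase-space point inside $\omega$:

```python
import math

def classical_toa_linear(mu, nu, q, p):
    # Eq. (4) for V(q) = nu*q, arrival point x = 0; the integral is elementary.
    e = p**2 / (2 * mu)                       # kinetic energy p^2 / 2mu
    return -math.copysign(1.0, p) * math.sqrt(mu / 2) * (2 / nu) * (
        math.sqrt(e + nu * q) - math.sqrt(e))

def ltoa_series_linear(mu, nu, q, p, jmax=20):
    # Truncated Eq. (5); for V(q) = nu*q the inner integral equals
    # int_0^q (nu*(q - q'))^j dq' = nu^j * q^(j+1) / (j+1).
    t = 0.0
    for j in range(jmax + 1):
        dfac = math.factorial(2 * j) // (2**j * math.factorial(j))  # (2j-1)!!
        t += ((-1)**j * mu**(j + 1) * dfac / math.factorial(j)
              * nu**j * q**(j + 1) / ((j + 1) * p**(2 * j + 1)))
    return -t

# point well inside the convergence region: |V(q) - V(q')| << p^2 / 2mu
mu, nu, q, p = 1.0, 0.1, 0.5, 2.0
print(classical_toa_linear(mu, nu, q, p))   # ≈ -0.2485
print(ltoa_series_linear(mu, nu, q, p))     # agrees to machine precision
```

As $\nu\rightarrow 0$ the series reduces to the free TOA $-\mu_{o}q/p$, as it should.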
The monomials $q^{n}p^{-m}$ were then quantized by generalizing the Bender-
Dunne basis operators [52, 53],
$\mathsf{\hat{t}_{-m,n}}=\dfrac{\sum_{k=0}^{n}\beta_{k}^{(n)}\mathsf{\hat{q}}^{k}\mathsf{\hat{p}}^{-m}\mathsf{\hat{q}}^{n-k}}{\sum_{k=0}^{n}\beta_{k}^{(n)}},$
(8)
where the coefficients satisfy the condition $\beta_{k}^{(n)}=\beta_{n-k}^{(n)*}$ to ensure Hermiticity. Now, the most well-studied [54, 55, 56, 57, 58, 59] ordering rules are Weyl, Born-Jordan, and simple-symmetric, each having its own advantage. Specifically, Weyl ordering preserves the covariant property of Hamiltonian dynamics with respect to linear canonical transforms [57, 60], while Born-Jordan preserves the equivalence of the Schrödinger and Heisenberg formulations of quantum mechanics [57, 61, 62]. On the other hand, simple-symmetric ordering just provides the easiest possible ordering by using the “average rule” [54, 63].
These ordering rules are imposed on the basis operators
$\mathsf{\hat{t}_{-m,n}}$ by choosing the coefficients
$\displaystyle\beta_{k}^{(n)}=\begin{cases}\dfrac{n!}{k!(n-k)!}\quad&,\quad\text{Weyl}\\\
1\quad&,\quad\text{Born-Jordan}\\\
\delta_{k,0}+\delta_{k,n}\quad&,\quad\text{simple-symmetric}.\end{cases}$ (9)
It easily follows that in coordinate representation, the non-relativistic TOA-
operator admits the expansion
$(\mathsf{\hat{T}_{0}}\varphi)(q)=-\int_{-\infty}^{\infty}dq^{\prime}\sum_{j=0}^{\infty}(-1)^{j}\mu_{o}^{j+1}\dfrac{(2j-1)!!}{j!}\sum_{n=1}^{\infty}a_{n}^{(j)}\matrixelement{q}{\mathsf{\hat{t}_{-2j-1,n}}}{q^{\prime}}\varphi(q^{\prime}).$
(10)
wherein
$\displaystyle\matrixelement{q}{\mathsf{\hat{t}_{-m,n}}}{q^{\prime}}=\dfrac{i(-1)^{\frac{1}{2}(m-1)}}{2\hbar^{m}(m-1)!}P_{n}(q|q^{\prime})(q-q^{\prime})^{m-1}\text{sgn}(q-q^{\prime}),\quad
m=1,2,\dots$ (11) $\displaystyle
P_{n}(q|q^{\prime})=\begin{cases}\left(\dfrac{q+q^{\prime}}{2}\right)^{n}\quad&,\quad\text{Weyl}\\\
\\\ \dfrac{1}{n+1}\left(\dfrac{q^{n+1}-q^{\prime
n+1}}{q-q^{\prime}}\right)\quad&,\quad\text{Born-Jordan}\\\ \\\
\dfrac{q^{n}+q^{\prime n}}{2}\quad&,\quad\text{simple-symmetric}.\end{cases}$
(12)
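The kernel factors in Eq. (12) follow from Eq. (8) because the matrix element of $\mathsf{\hat{q}}^{k}\mathsf{\hat{p}}^{-m}\mathsf{\hat{q}}^{n-k}$ pulls out the scalar factor $q^{k}q^{\prime\,n-k}$, so that $\sum_{k}\beta_{k}^{(n)}q^{k}q^{\prime\,n-k}/\sum_{k}\beta_{k}^{(n)}=P_{n}(q|q^{\prime})$. This identity can be checked numerically; a short sketch of ours (function names are illustrative):

```python
import math

def P_from_betas(n, q, qp, beta):
    """Scalar counterpart of Eq. (8): sum_k beta_k q^k q'^(n-k) / sum_k beta_k."""
    num = sum(beta(k, n) * q**k * qp**(n - k) for k in range(n + 1))
    den = sum(beta(k, n) for k in range(n + 1))
    return num / den

# the three coefficient choices of Eq. (9)
weyl = lambda k, n: math.comb(n, k)
born_jordan = lambda k, n: 1
simple_sym = lambda k, n: 1 if k in (0, n) else 0

q, qp = 1.7, 0.4
for n in range(1, 8):
    assert math.isclose(P_from_betas(n, q, qp, weyl), ((q + qp) / 2)**n)
    assert math.isclose(P_from_betas(n, q, qp, born_jordan),
                        (q**(n + 1) - qp**(n + 1)) / ((n + 1) * (q - qp)))
    assert math.isclose(P_from_betas(n, q, qp, simple_sym),
                        (q**n + qp**n) / 2)
print("Eq. (9) coefficients reproduce the kernel factors P_n(q|q') of Eq. (12)")
```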
The summation over $n$ in Eq. (10) is then evaluated using the following identities
$\displaystyle\sum_{n=1}^{\infty}a_{n}^{(j)}P_{n}(q|q^{\prime})=\begin{cases}\int_{0}^{(q+q^{\prime})/2}ds\left[V\left(\dfrac{q+q^{\prime}}{2}\right)-V(s)\right]^{j}\quad&,\quad\text{Weyl}\\\ \\\
\dfrac{1}{q-q^{\prime}}\left[\int_{0}^{q}ds\int_{0}^{s}du\left(V(s)-V(u)\right)^{j}-\int_{0}^{q^{\prime}}ds\int_{0}^{s}du\left(V(s)-V(u)\right)^{j}\right]\quad&,\quad\text{Born-Jordan}\\\ \\\
\dfrac{1}{2}\int_{0}^{q}ds\left(V(q)-V(s)\right)^{j}+\dfrac{1}{2}\int_{0}^{q^{\prime}}ds\left(V(q^{\prime})-V(s)\right)^{j}\quad&,\quad\text{simple-symmetric},\end{cases}$ (13)
which follows from the assumed analyticity of the potential at the origin Eq.
(6). The resulting expression is further evaluated by taking the summation
over $j$.
Performing these operations yield the non-relativistic TOA-operators of the
form
$(\mathsf{\hat{T}_{0}}\varphi)(q)=\int_{-\infty}^{\infty}dq^{\prime}\dfrac{\mu_{o}}{i\hbar}T_{0}(q,q^{\prime})\text{sgn}(q-q^{\prime})\varphi(q^{\prime}),$
(14)
where $T_{0}(q,q^{\prime})$ is referred to as the time kernel factor (TKF), which depends on the ordering rule used, i.e.,
$\displaystyle T_{0}^{(W)}(q,q^{\prime})=$
$\displaystyle\dfrac{1}{2}\int_{0}^{\frac{q+q^{\prime}}{2}}ds{{}_{0}}F_{1}\left[;1;\dfrac{\mu_{o}}{2\hbar^{2}}(q-q^{\prime})^{2}\left\\{V\left(\frac{q+q^{\prime}}{2}\right)-V(s)\right\\}\right]$
(15) $\displaystyle T_{0}^{(BJ)}(q,q^{\prime})=$
$\displaystyle\dfrac{1}{2(q-q^{\prime})}\int_{0}^{q}ds\int_{0}^{s}du{{}_{0}}F_{1}\left[;1;\dfrac{\mu_{o}}{2\hbar^{2}}(q-q^{\prime})^{2}\left\\{V(s)-V(u)\right\\}\right]$
$\displaystyle-\dfrac{1}{2(q-q^{\prime})}\int_{0}^{q^{\prime}}ds\int_{0}^{s}du{{}_{0}}F_{1}\left[;1;\dfrac{\mu_{o}}{2\hbar^{2}}(q-q^{\prime})^{2}\left\\{V(s)-V(u)\right\\}\right]$
(16) $\displaystyle T_{0}^{(SS)}(q,q^{\prime})=$
$\displaystyle\dfrac{1}{4}\int_{0}^{q}ds{{}_{0}}F_{1}\left[;1;\dfrac{\mu_{o}}{2\hbar^{2}}(q-q^{\prime})^{2}\left\\{V(q)-V(s)\right\\}\right]$
$\displaystyle+\dfrac{1}{4}\int_{0}^{q^{\prime}}ds{{}_{0}}F_{1}\left[;1;\dfrac{\mu_{o}}{2\hbar^{2}}(q-q^{\prime})^{2}\left\\{V(q^{\prime})-V(s)\right\\}\right]$
(17)
where ${{}_{0}}F_{1}(;a;z)$ is the confluent hypergeometric limit function. The superscripts “W”, “BJ”, and “SS” refer to the Weyl, Born-Jordan, and simple symmetric ordering, respectively.
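These kernels are straightforward to evaluate numerically. As a minimal sketch of ours (not from [40]): ${}_{0}F_{1}(;1;z)=\sum_{k\geq 0}z^{k}/(k!)^{2}$, and for a free particle the argument of ${}_{0}F_{1}$ vanishes, so Eq. (15) collapses to the known free-particle kernel factor $T_{0}^{(W)}(q,q^{\prime})=(q+q^{\prime})/4$:

```python
def hyp0f1(b, z, terms=60):
    # 0F1(; b; z) = sum_k z^k / ((b)_k * k!), summed term by term
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= z / ((b + k) * (k + 1))
    return s

def T0_weyl(q, qp, V, mu=1.0, hbar=1.0, N=2000):
    # Eq. (15) by midpoint quadrature over s in [0, (q + q')/2]
    upper = (q + qp) / 2
    h = upper / N
    return 0.5 * h * sum(
        hyp0f1(1.0, mu / (2 * hbar**2) * (q - qp)**2
               * (V(upper) - V(h * (i + 0.5))))
        for i in range(N))

q, qp = 1.2, 0.4
print(T0_weyl(q, qp, lambda s: 0.0))   # free particle: (q + q')/4 = 0.4
```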
## III Non-analytic quantization of the relativistic LTOA in coordinate
representation
Supposing that the relation between the non-relativistic “supraquantized” and
quantized TOA-operators also holds for the relativistic case, it should be
enough for now to consider the simplest approach by developing the method of
canonical quantization of relativistic TOA-operators, and leave the method of
“supraquantization” open for future studies. We follow the steps outlined in
Sec. II to construct the relativistic TOA-operator by quantizing the
corresponding “classical” relativistic time-of-arrival (CRTOA) obtained from
inverting the equation of motion from the Hamiltonian of special relativity
[64], i.e.,
$\displaystyle t_{x}(q,p)=$
$\displaystyle-\text{sgn}p\int_{x}^{q}\dfrac{dq^{\prime}}{c}\left(1-\dfrac{\mu_{o}^{2}c^{4}}{(H(q,p)-V(q^{\prime}))^{2}}\right)^{-1/2}$
(18)
wherein
$H(q,p)=\sqrt{p^{2}c^{2}+\mu_{o}^{2}c^{4}}+V(q)$ (19)
is the total energy of the positive energy solutions generated by the Klein-
Gordon equation. Similar to Eq. (4), the expression Eq. (18) was also obtained
by treating energy as a constant of motion and inverting the corresponding
Hamilton equation of motion. Without loss of generality, we assume the arrival
point to be the origin $x=0$ and impose that the potential is analytic at the
origin such that Eq. (18) has the expansion around the relativistic free TOA
given by
$\displaystyle t_{0}(q,p)=$
$\displaystyle-\mu_{o}\sum_{j=0}^{\infty}\sum_{k=0}^{j}\binom{-\frac{1}{2}}{j}\binom{j}{k}\dfrac{(2\mu_{o})^{j}}{(2\mu_{o}c^{2})^{j-k}}\sum_{n=1}^{\infty}a_{n}^{(2j-k)}\dfrac{\gamma_{p}^{k+1}}{p^{2j+1}}q^{n}$
(20)
where, $\gamma_{p}=\sqrt{1+p^{2}/\mu_{o}^{2}c^{2}}$. For consistency with Sec.
II, we shall also refer to Eq. (20) as the relativistic LTOA since it is also
single and real-valued within its region of convergence in the phase space.
That is, Eq. (20) will only converge absolutely and uniformly in some local neighborhood $\omega=\omega_{q}\times\omega_{p}$ determined by $\absolutevalue{V(q)-V(q^{\prime})}<\sqrt{p^{2}c^{2}+\mu_{o}^{2}c^{4}}-\mu_{o}c^{2}$ for $p\neq 0$ and continuous $V(q)$. Meanwhile Eq. (18) holds in the region
$\Omega=\Omega_{q}\times\Omega_{p}$ where $\omega\subset\Omega$ and is the
analytic continuation of the relativistic LTOA in the region
$\Omega\backslash\omega$.
Figure 1: Contours of integration for Eq. (25) for (a) $q-q^{\prime}>0$ and
(b) $q-q^{\prime}<0$.
The relativistic LTOA Eq. (20) is now amenable to quantization by promoting
the position and momentum $(q,p)$ into operators
$(\mathsf{\hat{q}},\mathsf{\hat{p}})$. There is still no consensus on the
existence of a position operator in relativistic quantum mechanics [65] but
the most suitable candidate is the Newton-Wigner position operator [66]. In
our case, we will only use the non-relativistic position operator
$\mathsf{\hat{q}}$ in quantizing Eq. (20) which is motivated by Razavi’s
relativistic free TOA operator [67, 68]
$\mathsf{\hat{T}_{Ra}}=-\dfrac{\mu_{o}}{2}\left(\mathsf{\hat{q}}\mathsf{\hat{p}^{-1}}\sqrt{1+\dfrac{\mathsf{\hat{p}^{2}}}{\mu_{o}^{2}c^{2}}}+\mathsf{\hat{p}^{-1}}\sqrt{1+\dfrac{\mathsf{\hat{p}^{2}}}{\mu_{o}^{2}c^{2}}}\mathsf{\hat{q}}\right).$
(21)
To quantize Eq. (20), we extend the Bender-Dunne basis operators [52, 53] to separable classical functions $f(q,p)=g(q)^{n}h(p)^{m}$, i.e.,
$f(q,p)\Rightarrow\mathsf{\hat{f}_{\hat{q},\hat{p}}}=\dfrac{\sum_{k=0}^{n}\alpha_{k}^{(n)}\mathsf{\hat{g}_{\hat{q}}}^{k}\mathsf{\hat{h}_{\hat{p}}}^{m}\mathsf{\hat{g}_{\hat{q}}}^{n-k}}{\sum_{k=0}^{n}\alpha_{k}^{(n)}},$
(22)
where the coefficients $\alpha_{k}^{(n)}$ are given by Eq. (9). This now leads
to the quantization
$\displaystyle
Q\left[q^{n}p^{-2j-1}\gamma_{p}^{k+1}\right]=\begin{cases}\dfrac{1}{2^{n}}\sum_{r=0}^{n}\binom{n}{r}\mathsf{\hat{q}}^{r}\mathsf{\hat{p}}^{-2j-1}\mathsf{\gamma_{\hat{p}}^{k+1}}\mathsf{\hat{q}}^{n-r}\quad&,\quad\text{Weyl}\\\
\\\
\dfrac{1}{n+1}\sum_{r=0}^{n}\mathsf{\hat{q}}^{r}\mathsf{\hat{p}}^{-2j-1}\mathsf{\gamma_{\hat{p}}^{k+1}}\mathsf{\hat{q}}^{n-r}\quad&,\quad\text{Born-
Jordan}\\\ \\\
\dfrac{1}{2}\left(\mathsf{\hat{q}}^{n}\mathsf{\hat{p}}^{-2j-1}\mathsf{\gamma_{\hat{p}}^{k+1}}+\mathsf{\hat{p}}^{-2j-1}\mathsf{\gamma_{\hat{p}}^{k+1}}\mathsf{\hat{q}}^{n}\right)\quad&,\quad\text{simple-
symmetric}\end{cases}$ (23)
It follows from Eq. (23) that in coordinate representation, the quantized
relativistic TOA Eq. (20) now has the expansion
$\displaystyle(\mathsf{\hat{T}_{c}}\varphi)(q)=-\mu_{o}\sum_{j=0}^{\infty}\sum_{k=0}^{j}$
$\displaystyle\binom{-\frac{1}{2}}{j}\binom{j}{k}\dfrac{(2\mu_{o})^{j}}{(2\mu_{o}c^{2})^{j-k}}$
$\displaystyle\times\sum_{n=1}^{\infty}a_{n}^{(2j-k)}\int_{-\infty}^{\infty}dq^{\prime}P_{n}^{(Q)}(q|q^{\prime})\matrixelement{q}{\mathsf{\hat{p}}^{-2j-1}\mathsf{\gamma_{\hat{p}}^{k+1}}}{q^{\prime}}\varphi(q^{\prime})$
(24)
where, $P_{n}^{(Q)}(q|q^{\prime})$ is given by Eq. (12) and the superscript $(Q)$ refers to the quantization rule used. The momentum kernel
$\matrixelement{q}{\mathsf{\hat{p}}^{-2j-1}\mathsf{\gamma_{\hat{p}}^{k+1}}}{q^{\prime}}$
is evaluated by inserting the resolution of the identity
$\mathsf{1}=\int_{-\infty}^{\infty}dp\ket{p}\bra{p}$, and using the plane wave
expansion $\innerproduct{q}{p}=e^{iqp/\hbar}/\sqrt{2\pi\hbar}$, i.e.,
$\displaystyle\matrixelement{q}{\mathsf{\hat{p}}^{-2j-1}\mathsf{\gamma_{\hat{p}}^{k+1}}}{q^{\prime}}=$
$\displaystyle\int_{-\infty}^{\infty}\frac{dp}{2\pi\hbar}\exp[\frac{i}{\hbar}(q-q^{\prime})p]\frac{1}{p^{2j+1}}\left(\sqrt{1+\frac{p^{2}}{\mu_{o}^{2}c^{2}}}\right)^{k+1}.$
(25)
The integral on the right hand side of Eq. (25) diverges because of the pole
with order $2j+1$ at $p=0$. Moreover, it has branch points at $\pm i\mu_{o}c$
for even values of $k$. Now, this has already been evaluated [68] for the case
when $j=k=0$ and can be similarly evaluated as a distributional Fourier
transform using the contours shown in Fig. 1. The evaluation of Eq. (25) is
done by taking its complex extension and taking the average of the integrals $\int_{\Gamma^{\pm}}dzf(z)z^{-2j-1}$, where the path $\Gamma^{+}$ ($\Gamma^{-}$) passes above (below) the pole at $z=0$. Performing this integration assigns a value to Eq. (25) which coincides with the Hadamard finite part [69], and is explicitly given as
$\displaystyle\matrixelement{q}{\mathsf{\hat{p}}^{-2j-1}\mathsf{\gamma_{\hat{p}}^{k+1}}}{q^{\prime}}=-\dfrac{1}{2i\hbar}(f_{j,k}(q,q^{\prime})+g_{j,k}(q,q^{\prime}))\text{sgn}(q-q^{\prime})$
(26)
where,
$\displaystyle f_{j,k}(q,q^{\prime})=$
$\displaystyle\dfrac{1}{(2j)!}\left(\dfrac{i}{\hbar}(q-q^{\prime})\right)^{2j}\int_{0}^{\infty}dye^{-y}\oint_{R}\dfrac{dz}{2\pi
i}\dfrac{1}{z}\sqrt{1+\dfrac{z^{2}}{\mu_{o}^{2}c^{2}}}^{k+1}\left(1-i\dfrac{\hbar}{q-q^{\prime}}\dfrac{y}{z}\right)^{2j}$
(27) $\displaystyle g_{j,k}(q,q^{\prime})=$
$\displaystyle\dfrac{(-1)^{j}i^{k}}{(\mu_{o}c)^{2j}}\left(\dfrac{1-(-1)^{k+1}}{2}\right)\dfrac{2}{\pi}\int_{1}^{\infty}dy\exp[-\dfrac{\mu_{o}c}{\hbar}\absolutevalue{q-q^{\prime}}y]\dfrac{\sqrt{y^{2}-1}^{k+1}}{y^{2j+1}}.$
(28)
The function $f_{j,k}(q,q^{\prime})$ is the contribution of the residue at $z=0$ and is rewritten in integral form using the residue theorem, wherein the contour $R$ is a circle in the complex plane with radius $r<\mu_{o}c$.
Meanwhile, $g_{j,k}(q,q^{\prime})$ is the contribution of the branch cut which
vanishes for odd values of $k$. Thus, the relativistic TOA-operator Eq. (24)
now has the expansion
$\displaystyle(\mathsf{\hat{T}_{c}}\varphi)(q)=\int_{-\infty}^{\infty}dq^{\prime}\dfrac{\mu_{o}}{i\hbar}T^{\\{Q\\}}(q,q^{\prime})\text{sgn}(q-q^{\prime})\varphi(q^{\prime})$
(29)
where, $T^{\\{Q\\}}(q,q^{\prime})$ is the relativistic TKF and has the
expansion
$\displaystyle T^{\\{Q\\}}(q,q^{\prime})=\dfrac{1}{2}\sum_{j=0}^{\infty}$
$\displaystyle\sum_{k=0}^{j}\binom{-\frac{1}{2}}{j}\binom{j}{k}\dfrac{(2\mu_{o})^{j}}{(2\mu_{o}c^{2})^{j-k}}(f_{j,k}(q,q^{\prime})+g_{j,k}(q,q^{\prime}))\sum_{n=1}^{\infty}a_{n}^{(2j-k)}P_{2j-k}^{\\{Q\\}}(q|q^{\prime}).$
(30)
An integral form of Eq. (30) is obtained by series rearrangement and
using the identities in Eq. (13).
### Modified Weyl-ordered TOA operator
Performing the summation yields
$\displaystyle
T^{\\{W\\}}(q,q^{\prime})=\dfrac{1}{2}\int_{0}^{\frac{q+q^{\prime}}{2}}ds\mathsf{W}_{s}(q,q^{\prime})$
(31)
where,
$\displaystyle\mathsf{W}_{s}(q,q^{\prime})=\mathsf{W}_{s}^{(1)}(q,q^{\prime})+\dfrac{2}{\pi}\int_{1}^{\infty}dz\exp[-\dfrac{\mu_{o}c}{\hbar}\absolutevalue{q-q^{\prime}}z]\dfrac{\sqrt{z^{2}-1}}{z}\mathsf{W}_{s,z}^{(2)}(q,q^{\prime})$
(32)
in which
$\displaystyle\mathsf{W}_{s}^{(1)}(q,q^{\prime})=\int_{0}^{\infty}dye^{-y}$
$\displaystyle\oint_{R}\dfrac{dz}{2\pi
i}\dfrac{1}{z}\sqrt{1+\dfrac{z^{2}}{\mu_{o}^{2}c^{2}}}$
$\displaystyle\times{{}_{0}}F_{1}\left[;1;\dfrac{\mu_{o}V_{s}^{(W)}(q,q^{\prime})}{2\hbar^{2}}\left(\left(q-q^{\prime}\right)-i\hbar\dfrac{y}{z}\right)^{2}\mathsf{P_{W}}(s,z,q,q^{\prime})\right]$
(33) $\displaystyle\mathsf{W}_{s,z}^{(2)}(q,q^{\prime})=$
$\displaystyle\dfrac{1}{2}\left[1-\dfrac{1}{z^{2}}\left(\dfrac{V_{s}^{(W)}(q,q^{\prime})}{\mu_{o}c^{2}}\right)^{2}+2i\dfrac{\sqrt{z^{2}-1}}{z^{2}}\left(\dfrac{V_{s}^{(W)}(q,q^{\prime})}{\mu_{o}c^{2}}\right)\right]^{-1/2}+f_{i\rightarrow-i}$
(34) $\displaystyle\mathsf{P_{W}}(s,z,q,q^{\prime})=$
$\displaystyle\left(\sqrt{1+\dfrac{z^{2}}{\mu_{o}^{2}c^{2}}}+\dfrac{V_{s}^{(W)}(q,q^{\prime})}{2\mu_{o}c^{2}}\right)$
(35) $V_{s}^{(W)}(q,q^{\prime})=V\left(\frac{q+q^{\prime}}{2}\right)-V(s).$
(36)
The factor ${{}_{0}}F_{1}(;a;z)$ in Eq. (33) is the confluent hypergeometric
limit function, and the contour $R$ is a circle of radius $r<\mu_{o}c$ that encloses
the pole at $z=0$, while $f_{i\rightarrow-i}$ denotes changing $i$ to $-i$ in
the first term of Eq. (34). The TKF given by Eqs. (31)-(36) reduces to the
known kernel for the Weyl-quantized non-relativistic TOA operator, Eq. (15), in the
limit $c\rightarrow\infty$. See Appendix A for details.
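As an aside, the ${{}_{0}}F_{1}(;1;\cdot)$ factor appearing in Eq. (33) reduces to a Bessel function: ${{}_{0}}F_{1}(;1;-x^{2}/4)=J_{0}(x)$ for oscillatory arguments and ${{}_{0}}F_{1}(;1;x^{2}/4)=I_{0}(x)$ for growing ones. A quick numerical check of these standard identities (illustration only, using scipy):

```python
import numpy as np
from scipy.special import hyp0f1, j0, i0

x = np.linspace(0.1, 10.0, 50)
# 0F1(;1;-x^2/4) = J_0(x): oscillatory regime of the kernel factor
assert np.allclose(hyp0f1(1.0, -x**2 / 4.0), j0(x), atol=1e-12)
# 0F1(;1;+x^2/4) = I_0(x): exponentially growing regime
assert np.allclose(hyp0f1(1.0, x**2 / 4.0), i0(x), rtol=1e-12)
```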
### Modified Born-Jordan-ordered TOA-operator
Repeating the same steps yields
$\displaystyle T^{\\{BJ\\}}(q,q^{\prime})=$
$\displaystyle\dfrac{1}{2(q-q^{\prime})}\int_{0}^{q}ds\int_{0}^{s}du\mathsf{B}_{s,u}(q,q^{\prime})-\dfrac{1}{2(q-q^{\prime})}\int_{0}^{q^{\prime}}ds\int_{0}^{s}du\mathsf{B}_{s,u}(q,q^{\prime})$
(37)
where,
$\displaystyle\mathsf{B}_{s,u}(q,q^{\prime})=\mathsf{B}_{s,u}^{(1)}(q,q^{\prime})+\dfrac{2}{\pi}\int_{1}^{\infty}dz\exp[-\dfrac{\mu_{o}c}{\hbar}\absolutevalue{q-q^{\prime}}z]\dfrac{\sqrt{z^{2}-1}}{z}\mathsf{B}_{s,u,z}^{(2)}(q,q^{\prime})$
(38)
in which
$\displaystyle\mathsf{B}_{s,u}^{(1)}(q,q^{\prime})=\int_{0}^{\infty}dye^{-y}$
$\displaystyle\oint_{R}\dfrac{dz}{2\pi
i}\dfrac{1}{z}\sqrt{1+\dfrac{z^{2}}{\mu_{o}^{2}c^{2}}}$
$\displaystyle\times{{}_{0}}F_{1}\left[;1;\dfrac{\mu_{o}V_{s}^{(BJ)}(u)}{2\hbar^{2}}\left(\left(q-q^{\prime}\right)-i\hbar\dfrac{y}{z}\right)^{2}\mathsf{P_{BJ}}(s,u,z,q,q^{\prime})\right]$
(39) $\displaystyle\mathsf{B}_{s,u,z}^{(2)}(q,q^{\prime})=$
$\displaystyle\dfrac{1}{2}\left[1-\dfrac{1}{z^{2}}\left(\dfrac{V_{s}^{(BJ)}(u)}{\mu_{o}c^{2}}\right)^{2}+2i\dfrac{\sqrt{z^{2}-1}}{z^{2}}\left(\dfrac{V_{s}^{(BJ)}(u)}{\mu_{o}c^{2}}\right)\right]^{-1/2}+f_{i\rightarrow-i}$
(40) $\displaystyle\mathsf{P_{BJ}}(s,u,z,q,q^{\prime})=$
$\displaystyle\left(\sqrt{1+\dfrac{z^{2}}{\mu_{o}^{2}c^{2}}}+\dfrac{V^{(BJ)}(s,u)}{2\mu_{o}c^{2}}\right)$
(41) $V^{(BJ)}(s,u)=V(s)-V(u).$ (42)
The TKF given by Eqs. (37)-(42) also reduces to the known kernel for the
Born-Jordan-quantized non-relativistic TOA operator, Eq. (16), in the limit
$c\rightarrow\infty$.
### Modified simple-symmetric-ordered TOA-operator
Last, we have
$\displaystyle T^{\\{SS\\}}(q,q^{\prime})=$
$\displaystyle\dfrac{1}{4}\int_{0}^{q}ds\mathsf{S}(s,q)+\dfrac{1}{4}\int_{0}^{q^{\prime}}ds\mathsf{S}(s,q^{\prime})$
(43)
where
$\displaystyle\mathsf{S}(s,x)=\mathsf{S}^{(1)}(s,x)+\dfrac{2}{\pi}\int_{1}^{\infty}dz\exp[-\dfrac{\mu_{o}c}{\hbar}\absolutevalue{q-q^{\prime}}z]\dfrac{\sqrt{z^{2}-1}}{z}\mathsf{S}_{z}^{(2)}(s,x)$
(44)
in which
$\displaystyle\mathsf{S}^{(1)}(s,x)=\int_{0}^{\infty}dye^{-y}$
$\displaystyle\oint_{R}\dfrac{dz}{2\pi
i}\dfrac{1}{z}\sqrt{1+\dfrac{z^{2}}{\mu_{o}^{2}c^{2}}}$
$\displaystyle\times{{}_{0}}F_{1}\left[;1;\dfrac{\mu_{o}V^{(SS)}(s,x)}{2\hbar^{2}}\left(\left(q-q^{\prime}\right)-i\hbar\dfrac{y}{z}\right)^{2}\mathsf{P_{SS}}(s,z,x)\right]$
(45) $\displaystyle\mathsf{S}_{z}^{(2)}(s,x)=$
$\displaystyle\dfrac{1}{2}\left[1-\dfrac{1}{z^{2}}\left(\dfrac{V^{(SS)}(s,x)}{\mu_{o}c^{2}}\right)^{2}+2i\dfrac{\sqrt{z^{2}-1}}{z^{2}}\left(\dfrac{V^{(SS)}(s,x)}{\mu_{o}c^{2}}\right)\right]^{-1/2}+f_{i\rightarrow-i}$
(46) $\displaystyle\mathsf{P_{SS}}(s,z,x)=$
$\displaystyle\left(\sqrt{1+\dfrac{z^{2}}{\mu_{o}^{2}c^{2}}}+\dfrac{V^{(SS)}(s,x)}{2\mu_{o}c^{2}}\right)$
(47) $V^{(SS)}(s,x)=V(x)-V(s).$ (48)
The TKF given by Eqs. (43)-(48) also reduces to the known kernel for the
simple-symmetric-quantized non-relativistic TOA-operator, Eq. (17), in the limit
$c\rightarrow\infty$.
In general, a closed form expression for the relativistic TKFs
$T^{\\{W\\}}(q,q^{\prime})$, $T^{\\{BJ\\}}(q,q^{\prime})$, and
$T^{\\{SS\\}}(q,q^{\prime})$ may be intractable because of how we assigned a
finite value to the divergent integral Eq. (26). It is possible that a
tractable form may be obtained using a different assignment to the divergent
integral. However, we justify the use of these TKFs because each of them
reduces to the corresponding non-relativistic time kernel [40].
## IV Barrier traversal time operator
We use the measurement scheme shown in Fig. 2. Two detectors $D_{T}$ and
$D_{R}$ are placed at the arrival point $q=0$ and at the far left,
respectively. A square potential barrier of height $V(q)=V_{o}>0$ and length
$L=b-a$ is then placed between the detectors, occupying $a<q<b<0$. Next, a wavepacket
$\psi(q)$ initially centered at $q=q_{o}$ with momentum $p_{o}$ is placed
between $D_{R}$ and the barrier such that the tail of $\psi(q)$ does not
initially ‘leak’ into the barrier. The wavepacket is then launched at $t=0$
towards $D_{T}$, which records the arrival of the particle, while the detector
$D_{R}$ does not record any data. This avoids altering the
propagation of $\psi(q)$ and provides an indirect but realistic way
of obtaining the TOA of the particle at the origin [15, 70, 71]. The same
measurement scheme is employed in the absence of the barrier.
The measurement is repeated several times for an ensemble of identically
prepared particles to obtain a TOA distribution at $D_{T}$. We assume that the
measured TOA distribution coincides with the ideal distribution generated by the
spectral resolution of a corresponding TOA-operator, $\mathsf{\hat{T}_{F}}$ or
$\mathsf{\hat{T}_{B}}$ in the absence or presence of the potential barrier,
respectively. In the succeeding expressions, the subscript $F$ ($B$) will
indicate the case when the barrier is absent (present). The traversal time
across the barrier is then deduced from the difference of the average value of
the measured TOA
$\Delta\bar{\tau}=\bar{\tau}_{F}-\bar{\tau}_{B}=\matrixelement{\psi}{\mathsf{\hat{T}_{F}}}{\psi}-\matrixelement{\psi}{\mathsf{\hat{T}_{B}}}{\psi}$
(49)
and is assumed to be the expectation value of the TOA-operator.
Figure 2: Measurement scheme for the traversal time of a particle in the (a)
absence of a barrier, and (b) presence of a barrier. The wavepacket
$\psi_{o}(q)$ is prepared between the detectors $D_{R}$ and $D_{T}$ such that
its tail does not extend into the barrier region.
In the absence of the barrier, the relativistic TKFs
$T^{\\{W\\}}_{F}(q,q^{\prime})$, $T^{\\{BJ\\}}_{F}(q,q^{\prime})$, and
$T^{\\{SS\\}}_{F}(q,q^{\prime})$ are obtained by substituting $V(q)=0$ into
Eqs. (31), (37) and (43), respectively. All ordering rules yield the same
TKF
$\tilde{T}_{F}(\eta,\zeta)=\frac{\eta}{2}\mathsf{T}_{F}(\zeta),$ (50)
where,
$\mathsf{T}_{F}(\zeta)=1+\dfrac{2}{\pi}\int_{1}^{\infty}dz\dfrac{\sqrt{z^{2}-1}}{z}\exp(-\dfrac{\mu_{o}c}{\hbar}\absolutevalue{\zeta}z).$
(51)
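The kernel factor $\mathsf{T}_{F}(\zeta)$ of Eq. (51) can be evaluated directly by quadrature. As a numerical illustration (not part of the derivation; units chosen so that $\mu_{o}c/\hbar=1$), it exceeds the non-relativistic value $1$ for all finite $\zeta$ and decays monotonically toward $1$ as $\absolutevalue{\zeta}$ grows:

```python
import numpy as np
from scipy.integrate import quad

def T_F(zeta, alpha=1.0):
    """Free kernel factor, Eq. (51), with alpha = mu_o c / hbar (assumed units)."""
    integrand = lambda z: np.sqrt(z**2 - 1.0) / z * np.exp(-alpha * abs(zeta) * z)
    val, _ = quad(integrand, 1.0, np.inf)
    return 1.0 + (2.0 / np.pi) * val

vals = [T_F(z) for z in (0.5, 1.0, 2.0, 5.0)]
# relativistic correction exceeds 1 and decays monotonically with |zeta|
assert all(v > 1.0 for v in vals)
assert all(a > b for a, b in zip(vals, vals[1:]))
```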
The operator corresponding to the TKF $\tilde{T}_{F}(\eta,\zeta)$ coincides
with the rigged-Hilbert-space extension of Razavi’s relativistic free
TOA-operator [68]
$\displaystyle(\mathsf{\hat{T}_{\text{Ra}}}\phi)(q)=\int_{-\infty}^{\infty}dq^{\prime}\matrixelement{q}{\mathsf{\hat{T}_{\text{Ra}}}}{q^{\prime}}\phi(q^{\prime})$
(52)
wherein it was shown that the physical quantities associated with Eq. (52)
are consistent with special relativity. Now, the TKFs
$T^{\\{W\\}}(q,q^{\prime})$, $T^{\\{BJ\\}}(q,q^{\prime})$, and
$T^{\\{SS\\}}(q,q^{\prime})$ were derived under the assumption that the
interaction potential $V(q)$ is analytic. However, they can still be applied to
piecewise potentials such as the square barrier because the TKFs are in
integral form. We will justify this assumption later by establishing that, in
the classical limit $\hbar\rightarrow 0$, the operators for the square
potential barrier corresponding to these TKFs reduce to the “classical”
relativistic TOA.
The TKF using the modified Weyl ordering, Eq. (31), may be obtained by mapping
the potential $V(q)$ from the $(q,q^{\prime})$ coordinates into three non-
overlapping regions in the $(\eta,\zeta)$ coordinates, where
$\eta=(q+q^{\prime})/2$ and $\zeta=q-q^{\prime}$. In this coordinate system,
the arrival point is now at $\eta=0$, and $V(\eta)=V_{o}$ for $a<\eta<b<0$ and
zero outside the interval $(a,b)$. For Region I, it is easy to see that
$V(\eta)=0$ over the entire integration region of Eq. (31). Meanwhile, for
Region II, we now have $V(\eta)=V_{o}$, and it is necessary to split the
integral Eq. (31) into two parts, since $V(s)=0$ for $b<s<0$ while $V(s)=V_{o}$
for $\eta<s<b$. Last, for Region III we have $V(\eta)=0$ and split the
integral Eq. (31) into three parts, since $V(s)=V_{o}$ for $a<s<b$ while $V(s)=0$
outside this interval. Performing these operations will yield
$\displaystyle\tilde{T}_{B}^{(I)}(\eta,\zeta)=$
$\displaystyle\dfrac{\eta}{2}\mathsf{T}_{F}(\zeta)$
$\displaystyle\tilde{T}_{B}^{(II)}(\eta,\zeta)=$
$\displaystyle\left(\dfrac{\eta+b}{2}\right)\mathsf{T}_{F}(\zeta)-\dfrac{b}{2}\mathsf{T}_{B}(V_{o},\zeta)$
(53) $\displaystyle\tilde{T}_{B}^{(III)}(\eta,\zeta)=$
$\displaystyle\left(\dfrac{\eta+L}{2}\right)\mathsf{T}_{F}(\zeta)-\dfrac{L}{2}\mathsf{T}_{B}(-V_{o},\zeta)$
in which $\mathsf{T}_{F}(\zeta)$ is given by Eq. (51) and
$\displaystyle\mathsf{T}_{B}(V_{o},\zeta)=$
$\displaystyle\mathsf{F}_{B}(V_{o},\zeta)+\dfrac{2}{\pi}\int_{1}^{\infty}dz\exp[-\dfrac{\mu_{o}c}{\hbar}\absolutevalue{\zeta}z]\dfrac{\sqrt{z^{2}-1}}{z}\mathsf{G}_{B}(V_{o},z).$
(54) $\displaystyle\mathsf{F}_{B}(V_{o},\zeta)=$
$\displaystyle\int_{0}^{\infty}dye^{-y}\oint_{R}\dfrac{dz}{2\pi
i}\dfrac{1}{z}\sqrt{1+\dfrac{z^{2}}{\mu_{o}^{2}c^{2}}}{{}_{0}}F_{1}\left[;1;\dfrac{\mu_{o}V_{o}}{2\hbar^{2}}\left(\zeta-i\hbar\dfrac{y}{z}\right)^{2}\left(\sqrt{1+\dfrac{z^{2}}{\mu_{o}^{2}c^{2}}}+\dfrac{V_{o}}{2\mu_{o}c^{2}}\right)\right]$
(55) $\displaystyle\mathsf{G}_{B}(V_{o},z)=$
$\displaystyle\dfrac{1}{2}\left\\{\left[1-\dfrac{1}{z^{2}}\left(\dfrac{V_{o}}{\mu_{o}c^{2}}\right)^{2}+2i\dfrac{\sqrt{z^{2}-1}}{z^{2}}\left(\dfrac{V_{o}}{\mu_{o}c^{2}}\right)\right]^{-1/2}+g_{i\rightarrow-i}\right\\}.$
(56)
We now work in the original $(q,q^{\prime})$-coordinate of our system to
evaluate the TKF $T^{\\{BJ\\}}(q,q^{\prime})$ given by Eqs. (37)-(41) and
later transform to the coordinates $(\eta,\zeta)$. For Region I, $V(q)=0$ for
the entire integration region of Eq. (37). Meanwhile, for Region II, it is
necessary to split the integral over $u$ of Eq. (37) into two parts, since
$V(u)=0$ for $b<u<0$ while $V(u)=V_{o}$ for $s<u<b$, with $V(s)=V_{o}$ over
the whole range of $s$. We again repeat the same steps for Region III, and
split the integral over $u$ of Eq. (37) into three parts as $V(u)=V_{o}$ for
$a<u<b$ while $V(u)=0$ outside this interval. Then, $V(s)=V_{o}$ over the
whole region of $s$ in Eq. (37). Performing these operations and transforming
into the coordinates $(\eta,\zeta)$ will yield the same TKFs as Eq. (53).
Repeating the same procedure will yield the same TKFs for
$T^{\\{SS\\}}(q,q^{\prime})$. In the succeeding discussion, we shall only
refer to the modified Weyl-ordered operator, since the same results also
hold for the Born-Jordan and simple-symmetric cases.
## V Classical limit of the free and barrier TKFs
We now prove that the TKFs corresponding to the TOA-operator for the free and
barrier case are indeed the quantization of the CRTOA by taking their inverse
Weyl-Wigner transform
$\tilde{t}(q_{o},p_{o})=\dfrac{\mu_{o}}{i\hbar}\int_{-\infty}^{\infty}d\zeta
e^{-ip_{o}\zeta/\hbar}\tilde{T}(q_{o},\zeta)\text{sgn}(\zeta)$ (57)
where, $q_{o}$ and $p_{o}$ are the initial position and momentum,
respectively. For the free case, this is done by substituting Eq. (50) to Eq.
(57) which yields
$\displaystyle\tilde{t}_{F}=\dfrac{\mu_{o}}{i\hbar}\dfrac{q_{o}}{2}\int_{-\infty}^{\infty}d\zeta
e^{-ip_{o}\zeta/\hbar}\text{sgn}(\zeta)+\dfrac{\mu_{o}}{i\hbar}\dfrac{q_{o}}{2}\dfrac{2}{\pi}\int_{1}^{\infty}dz\dfrac{\sqrt{z^{2}-1}}{z}\int_{-\infty}^{\infty}d\zeta\exp[-\dfrac{\mu_{o}c}{\hbar}\absolutevalue{\zeta}z]e^{-ip_{o}\zeta/\hbar}\text{sgn}(\zeta).$
(58)
The first term of Eq. (58) is evaluated by taking the inverse of the
distributional Fourier transform [72]
$\int_{-\infty}^{\infty}dxx^{-m}e^{i\sigma
x}=i^{m}\dfrac{\pi}{(m-1)!}\sigma^{m-1}\text{sgn}\sigma.$ (59)
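For $m=1$, Eq. (59) reduces to the familiar principal-value result $\mathrm{PV}\int dx\,e^{i\sigma x}/x=i\pi\,\text{sgn}\,\sigma$, whose imaginary part is $\int dx\,\sin(\sigma x)/x=\pi\,\text{sgn}\,\sigma$. A quick numerical sanity check via the sine integral (illustration only):

```python
import numpy as np
from scipy.special import sici

# PV integral of exp(i*sigma*x)/x: the real part cancels by oddness; the
# imaginary part is 2*Si(|sigma|*X)*sgn(sigma) -> pi*sgn(sigma) as X -> inf.
X = 1.0e4
for sigma in (2.0, -3.0):
    si, _ = sici(abs(sigma) * X)
    pv_imag = np.sign(sigma) * 2.0 * si
    assert abs(pv_imag - np.pi * np.sign(sigma)) < 1e-3
```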
Meanwhile, the order of integration in the second term of Eq. (58) is
interchanged, and the inner integral is evaluated as a Laplace transform. The
resulting expression is further evaluated using the integral identity [68]
$\int_{1}^{\infty}dz\dfrac{\sqrt{z^{2}-1}}{z}\dfrac{a^{2}}{a^{2}+b^{2}z^{2}}=\dfrac{\pi}{2}\left(-1+\sqrt{1+\dfrac{a^{2}}{b^{2}}}\right).$
(60)
which holds for all real $(a,b)$ and can also be obtained using the calculus of
residues. Thus, the classical limit of the free TOA-operator corresponding to
$\tilde{T}_{F}(\eta,\zeta)$ is
$\displaystyle\tilde{t}_{F}=$
$\displaystyle-\dfrac{\mu_{o}q_{o}}{p_{o}}\sqrt{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}},$
(61)
which is the known free CRTOA obtained by directly integrating Eq. (18).
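Both ingredients of this limit can be checked numerically. The sketch below (sample values with $\mu_{o}=c=1$; not from the source) verifies the integral identity Eq. (60) by quadrature, and confirms that Eq. (61) is the elementary kinematic arrival time $-q_{o}/v$ with relativistic velocity $v=p_{o}c^{2}/E$:

```python
import numpy as np
from scipy.integrate import quad

# Check the integral identity, Eq. (60), for a few (a, b) pairs.
for a, b in [(1.0, 1.0), (2.0, 0.5), (0.3, 4.0)]:
    integrand = lambda z: np.sqrt(z**2 - 1.0) / z * a**2 / (a**2 + b**2 * z**2)
    lhs, _ = quad(integrand, 1.0, np.inf)
    rhs = 0.5 * np.pi * (-1.0 + np.sqrt(1.0 + a**2 / b**2))
    assert abs(lhs - rhs) < 1e-6

# Check that Eq. (61) is the kinematic arrival time -q0/v, v = p0*c^2/E.
mu, c, q0, p0 = 1.0, 1.0, -5.0, 0.7
E = np.sqrt(p0**2 * c**2 + mu**2 * c**4)
t_F = -(mu * q0 / p0) * np.sqrt(1.0 + p0**2 / (mu**2 * c**2))
assert abs(t_F - (-q0 * E / (p0 * c**2))) < 1e-12
```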
In the presence of the potential barrier, it easily follows from Eq. (61) that
the classical limit of the TKF for Region I $\tilde{T}_{B}^{(I)}(\eta,\zeta)$
is $\tilde{t}_{B}^{(I)}=\tilde{t}_{F}$. For Region II, the Weyl-Wigner
transform of the TKF $\tilde{T}_{B}^{(II)}(\eta,\zeta)$ is
$\displaystyle\tilde{t}_{B}^{(II)}=$
$\displaystyle-\dfrac{\mu_{o}(q_{o}+b)}{p_{o}}\sqrt{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}}-\dfrac{b}{2}\dfrac{\mu_{o}}{i\hbar}\int_{-\infty}^{\infty}d\zeta
e^{-ip_{o}\zeta/\hbar}\mathsf{T}_{B}(V_{o},\zeta)\text{sgn}(\zeta),$ (62)
wherein
$\displaystyle\int_{-\infty}^{\infty}$ $\displaystyle d\zeta
e^{-ip_{o}\zeta/\hbar}\mathsf{T}_{B}(V_{o},\zeta)\text{sgn}(\zeta)$
$\displaystyle=$ $\displaystyle\int_{-\infty}^{\infty}d\zeta
e^{-ip_{o}\zeta/\hbar}\mathsf{F}_{B}(V_{o},\zeta)\text{sgn}(\zeta)+\left(\dfrac{2\hbar}{ip_{o}}\right)\dfrac{2}{\pi}\int_{1}^{\infty}dz\mathsf{G}_{B}(V_{o},z)\dfrac{\sqrt{z^{2}-1}}{z}\dfrac{p_{o}^{2}}{p_{o}^{2}+\mu_{o}^{2}c^{2}z^{2}}.$
(63)
The first term of Eq. (63) is evaluated by expanding the hypergeometric
function in $\mathsf{F}_{B}(V_{o},\zeta)$ using its power series
representation and performing a term-by-term integration. The resulting series
converges as long as the initial energy of the particle is above the barrier
height, i.e.
$\displaystyle\int_{-\infty}^{\infty}$ $\displaystyle d\zeta
e^{-ip_{o}\zeta/\hbar}\mathsf{F}_{B}(V_{o},\zeta)\text{sgn}(\zeta)$
$\displaystyle=$
$\displaystyle\dfrac{2\hbar}{ip_{o}}\sum_{j=0}^{\infty}\dfrac{(2j)!}{j!j!}\left(\dfrac{-\mu_{o}V_{o}}{2\hbar^{2}}\right)^{j}\sum_{k=0}^{j}\binom{j}{k}\left(\dfrac{V_{o}}{2\mu_{o}c^{2}}\right)^{j-k}$
$\displaystyle\times\left\\{\sqrt{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}}^{k+1}-\left(\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}\right)^{j+1}\binom{\frac{k+1}{2}}{j+1}{{}_{2}}F_{1}\left[1,\frac{1}{2}+j-\frac{k}{2};j+2;-\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}\right]\right\\}$
$\displaystyle=$
$\displaystyle\left(\dfrac{2\hbar}{ip_{o}}\right)\sqrt{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}}\left[1+\dfrac{2\mu_{o}V_{o}}{p_{o}^{2}}\left(\sqrt{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}}+\dfrac{V_{o}}{2\mu_{o}c^{2}}\right)\right]^{-1/2}$
$\displaystyle-\left(\dfrac{2\hbar}{ip_{o}}\right)\dfrac{2}{\pi}\int_{1}^{\infty}dz\mathsf{G}_{B}(V_{o},z)\dfrac{\sqrt{z^{2}-1}}{z}\dfrac{p_{o}^{2}}{p_{o}^{2}+\mu_{o}^{2}c^{2}z^{2}}$
(64)
The second line follows from using the integral representation of the Gauss
hypergeometric function
${{}_{2}}F_{1}(\alpha,\beta;\gamma;z)=\dfrac{\Gamma(\gamma)}{\Gamma(\beta)\Gamma(\gamma-\beta)}\int_{0}^{\infty}dt\dfrac{t^{\gamma-\beta-1}(1+t)^{\alpha-\gamma}}{(t+1-z)^{\alpha}}$
(65)
valid for $\text{Re}[\gamma]>\text{Re}[\beta]>0$ and $\absolutevalue{\text{Arg}(1-z)}<\pi$.
Combining Eqs. (62)-(64) thus yields
$\displaystyle\tilde{t}_{B}^{(II)}=$
$\displaystyle-\dfrac{\mu_{o}(q_{o}+b)}{p_{o}}\sqrt{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}}+\dfrac{b}{c}\sqrt{\dfrac{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}}{\left(\sqrt{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}}+\dfrac{V_{o}}{\mu_{o}c^{2}}\right)^{2}-1}}.$
(66)
The first term of $\tilde{t}_{B}^{(II)}$ is the free CRTOA from the edge of
the barrier to the origin while the second term is the traversal time on top
of the barrier. Repeating the same steps, the Weyl-Wigner transform of
$\tilde{T}_{B}^{(III)}(\eta,\zeta)$ is
$\displaystyle\tilde{t}_{B}^{(III)}=$
$\displaystyle-\dfrac{\mu_{o}(q_{o}+L)}{p_{o}}\sqrt{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}}+\dfrac{L}{c}\sqrt{\dfrac{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}}{\left(\sqrt{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}}-\dfrac{V_{o}}{\mu_{o}c^{2}}\right)^{2}-1}}.$
(67)
The first term of $\tilde{t}_{B}^{(III)}$ is the traversal time across the
interaction free region while the second term is the traversal time across the
barrier region. The Weyl-Wigner transforms $\tilde{t}_{B}^{(II)}$ and
$\tilde{t}_{B}^{(III)}$ also coincide with CRTOA obtained from directly
integrating Eq. (18).
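The barrier term of Eq. (67) can equivalently be written in terms of the free energy $E_{p}=\sqrt{p_{o}^{2}c^{2}+\mu_{o}^{2}c^{4}}$ as $(L/c)\,E_{p}/\sqrt{(E_{p}-V_{o})^{2}-\mu_{o}^{2}c^{4}}$; this is pure algebra, and the following sketch ($\mu_{o}=c=1$, random above-barrier parameters; illustration only) verifies the equivalence numerically:

```python
import numpy as np

# Region III barrier term of Eq. (67) in two algebraically equivalent forms
mu, c, L = 1.0, 1.0, 2.0
rng = np.random.default_rng(0)
for _ in range(100):
    p = rng.uniform(0.1, 5.0)
    E = np.sqrt(p**2 * c**2 + mu**2 * c**4)
    # above-barrier condition: E - V0 > mu*c^2
    V0 = rng.uniform(0.0, E - mu * c**2)
    g = np.sqrt(1.0 + p**2 / (mu**2 * c**2))          # = E / (mu c^2)
    term = (L / c) * np.sqrt(g**2 / ((g - V0 / (mu * c**2))**2 - 1.0))
    target = (L / c) * E / np.sqrt((E - V0)**2 - mu**2 * c**4)
    assert np.isclose(term, target)
```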
In general, the classical limit of the TKF for a given quantization scheme is
obtained by
$\tilde{t}(q_{o},p_{o})=\lim_{\hbar\rightarrow
0}\dfrac{\mu_{o}}{i\hbar}\int_{-\infty}^{\infty}d\zeta
e^{-ip_{o}\zeta/\hbar}\tilde{T}^{\\{Q\\}}(q_{o},\zeta)\text{sgn}(\zeta),$ (68)
wherein the integral is understood in a distributional sense, provided that
the limit exists [40]. Notice that the Weyl-Wigner transform Eq. (57) does not
involve the vanishing of $\hbar$. Now, Eq. (68) implies that the classical
limit of the TKF for a given quantization scheme is, in general, dependent on
positive powers of $\hbar$. Such is the case for the Born-Jordan and simple-
symmetric orderings. Performing the limit $\hbar\rightarrow 0$ then reduces the
classical limits of the TKFs $T^{\\{BJ\\}}(q,q^{\prime})$ and
$T^{\\{SS\\}}(q,q^{\prime})$ to that of the Weyl-Wigner transform of
$T^{\\{W\\}}(q,q^{\prime})$.
## VI Expected Barrier Traversal Time
We now assume that the average value of the measured TOA $\bar{\tau}$ at the
detector $D_{T}$ (see Fig. 2) is equal to the expectation value of the
operator $\mathsf{\hat{T}}$, i.e.
$\displaystyle\bar{\tau}=$
$\displaystyle\matrixelement{\psi}{\mathsf{\hat{T}}}{\psi}=\int_{-\infty}^{\infty}dq\psi^{*}(q)\int_{-\infty}^{\infty}dq^{\prime}\dfrac{\mu_{o}}{i\hbar}T(q,q^{\prime})\text{sgn}(q-q^{\prime})\psi(q^{\prime}).$
(69)
The incident wavefunction is assumed to be prepared in a pure state
$\psi(q)=\varphi(q)e^{ik_{o}q}$ with momentum expectation value $p_{o}=\hbar
k_{o}$, where $\matrixelement{\varphi}{\mathsf{\hat{p}}}{\varphi}=0$. We
further assume that $\varphi(q)$ is infinitely differentiable and impose the
condition that the support of $\varphi(q)$ is in Region III such that the tail
of $\varphi(q)$ does not ’leak’ into the barrier. To evaluate Eq. (69), it
will be convenient to perform a change of variables from $(q,q^{\prime})$ to
$(\eta,\zeta)$ such that $\bar{\tau}=\imaginary(\bar{\tau}^{*})$ wherein
$\bar{\tau}^{*}$ is the complex-valued TOA given by
$\displaystyle\bar{\tau}^{*}=-\dfrac{2\mu_{o}}{\hbar}\int_{-\infty}^{\infty}$
$\displaystyle d\eta\int_{0}^{\infty}d\zeta
e^{ik_{o}\zeta}\tilde{T}(\eta,\zeta)\varphi^{*}\left(\eta-\dfrac{\zeta}{2}\right)\varphi\left(\eta+\dfrac{\zeta}{2}\right).$
(70)
In the succeeding expressions, we indicate complex-valued quantities with an
asterisk $*$, wherein the imaginary component corresponds to the physical
quantity.
In the absence of the barrier, it easily follows from Eqs. (50) and (70) that
the complex-valued free TOA is
$\displaystyle\bar{\tau}_{F}^{*}=-\dfrac{\mu_{o}}{\hbar}$
$\displaystyle\int_{0}^{\infty}d\zeta
e^{ik_{o}\zeta}\mathsf{T}_{F}(\zeta)\int_{-\infty}^{\infty}d\eta\eta\varphi^{*}\left(\eta-\dfrac{\zeta}{2}\right)\varphi\left(\eta+\dfrac{\zeta}{2}\right).$
(71)
Meanwhile, in the presence of the barrier, we have
$\displaystyle\bar{\tau}_{B}^{*}=$
$\displaystyle-\dfrac{\mu_{o}}{\hbar}\int_{0}^{\infty}d\zeta
e^{ik_{o}\zeta}\int_{-\infty}^{\infty}d\eta\tilde{T}_{B}^{(III)}(\eta,\zeta)\varphi^{*}\left(\eta-\dfrac{\zeta}{2}\right)\varphi\left(\eta+\dfrac{\zeta}{2}\right)$
$\displaystyle=$
$\displaystyle\bar{\tau}_{F}^{*}-\dfrac{\mu_{o}L}{\hbar}\int_{0}^{\infty}d\zeta
e^{ik_{o}\zeta}(\mathsf{T}_{F}(\zeta)-\mathsf{T}_{B}(-V_{0},\zeta))\int_{-\infty}^{\infty}d\eta\varphi^{*}\left(\eta-\dfrac{\zeta}{2}\right)\varphi\left(\eta+\dfrac{\zeta}{2}\right).$
(72)
The measurable quantity for deducing the barrier traversal time is the TOA
difference between the free and barrier case
$\Delta\bar{\tau}=\imaginary(\Delta\bar{\tau}^{*})=\imaginary(\bar{\tau}_{F}^{*}-\bar{\tau}_{B}^{*})$,
which is explicitly given as
$\displaystyle\Delta\bar{\tau}^{*}=$
$\displaystyle\dfrac{\mu_{o}L}{p_{o}}\left(Q_{c}^{*}-R_{c}^{*}\right)$ (73)
wherein
$\displaystyle Q_{c}^{*}=$ $\displaystyle k_{o}\int_{0}^{\infty}d\zeta
e^{ik_{o}\zeta}\mathsf{T}_{F}(\zeta)\Phi(\zeta)$ (74) $\displaystyle
R_{c}^{*}=$ $\displaystyle k_{o}\int_{0}^{\infty}d\zeta
e^{ik_{o}\zeta}\mathsf{T}_{B}(-V_{0},\zeta)\Phi(\zeta)$ (75)
$\displaystyle\Phi(\zeta)=$
$\displaystyle\int_{-\infty}^{\infty}d\eta\varphi^{*}\left(\eta-\dfrac{\zeta}{2}\right)\varphi\left(\eta+\dfrac{\zeta}{2}\right).$
(76)
The complex-valued dimensionless quantities $Q_{c}^{*}$ and $R_{c}^{*}$
account for the contribution of the barrier and of relativistic effects on the
non-relativistic free TOA $\mu_{o}L/p_{o}$. The physical content of the
quantities $Q_{c}$ and $R_{c}$ is investigated by taking the asymptotic
expansion in the high energy limit $k_{o}\rightarrow\infty$.
It is easy to see that if we substitute Eq. (51) into Eq. (74), then it follows
that the quantity $(\mu_{o}L/p_{o})Q_{c}$ is just the expectation value of the
free relativistic TOA-operator calculated by Flores and Galapon [73]. Thus,
$Q_{c}\sim\sqrt{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}},$ (77)
which is the relativistic correction to the non-relativistic free TOA
$\mu_{o}L/p_{o}$. Now, the quantity $R_{c}^{*}$ is a Fourier integral with
respect to the asymptotic parameter $k_{o}$. We use the same steps outlined in
Sec. V for the calculation of the Weyl-Wigner transform of the TKF
$\tilde{T}_{B}^{(III)}(\eta,\zeta)$, and perform repeated integration-by-parts
to collect powers of $\hbar$. Taking the imaginary part of $R_{c}^{*}$ thus
yields
$\displaystyle\imaginary[R_{c}^{*}]\sim$
$\displaystyle\sum_{m=0}^{\infty}\Phi^{(2m)}(0)\dfrac{(-1)^{m}\hbar^{2m}}{p_{o}^{2m}}\sum_{j=0}^{\infty}\dfrac{(2j)!}{(1)_{j}j!}\left(\dfrac{\mu_{o}V_{o}}{2p_{o}^{2}}\right)^{j}\sum_{k=0}^{j}\binom{j}{k}\left\\{\left(-\dfrac{V_{o}}{2\mu_{o}c^{2}}\right)^{j-k}\right.$
$\displaystyle\times\left.\sum_{l=0}^{j}\binom{\frac{k+1}{2}}{l}\binom{2m+2j-2l}{2j-2l}\left(\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}\right)^{l}\right\\}+\dfrac{2}{\pi}\int_{1}^{\infty}dz\dfrac{\left(\frac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}\right)}{z^{2}+{\left(\frac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}\right)}}\dfrac{\sqrt{z^{2}-1}}{z}\mathsf{G}(V_{o},z)$
(78) $\displaystyle\sim$
$\displaystyle\sum_{j=0}^{\infty}\dfrac{(2j)!}{(1)_{j}j!}\left(\dfrac{\mu_{o}V_{o}}{2p_{o}^{2}}\right)^{j}\sum_{k=0}^{j}\binom{j}{k}\left(-\dfrac{V_{o}}{2\mu_{o}c^{2}}\right)^{j-k}\left\\{\sqrt{1+\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}}^{k+1}\right.$
$\displaystyle-\left.\left(\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}\right)^{j+1}\binom{\frac{k+1}{2}}{j+1}{{}_{2}}F_{1}\left[1,\frac{1}{2}+j-\frac{k}{2};j+2;-\dfrac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}\right]\right\\}$
$\displaystyle+\dfrac{2}{\pi}\int_{1}^{\infty}dz\dfrac{\left(\frac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}\right)}{z^{2}+{\left(\frac{p_{o}^{2}}{\mu_{o}^{2}c^{2}}\right)}}\dfrac{\sqrt{z^{2}-1}}{z}\mathsf{G}(V_{o},z)$
(79)
The second line, Eq. (79), follows from the classical limit $\hbar\rightarrow 0$,
in which only the terms with $m=0$ do not vanish, and where we used the
normalization condition $\Phi(0)=1$. The integral representation of the Gauss
hypergeometric function, Eq. (65), is again used to perform the summation
which yields
$R_{c}\sim\dfrac{p_{o}}{\mu_{o}c}\sqrt{\dfrac{E_{p}^{2}}{(E_{p}-V_{o})^{2}-\mu_{o}^{2}c^{4}}}$
(80)
where $E_{p}=\sqrt{p_{o}^{2}c^{2}+\mu_{o}^{2}c^{4}}$. Thus, $R_{c}$ is just the
ratio of the energy of the incident particle and its energy above the barrier.
This leads us to the interpretation that $R_{c}$ is the effective index of
refraction (IOR) of the barrier with respect to the wavepacket. The same
interpretation was made in the non-relativistic case for the square potential
barrier and well [15, 74]. This implies that the traversal time across the
barrier is given by $\bar{\tau}_{\text{trav}}=(\mu_{o}L/p_{o})R_{c}$.
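The sign of the expected delay follows directly from comparing Eq. (77) and Eq. (80): since $R_{c}>Q_{c}$ for any $V_{o}>0$, the barrier acts as a medium with IOR greater than one and $\Delta\bar{\tau}=(\mu_{o}L/p_{o})(Q_{c}-R_{c})<0$. A minimal numerical illustration ($\mu_{o}=c=1$, assumed sample values; not from the source):

```python
import numpy as np

mu, c = 1.0, 1.0
p = 1.5
E = np.sqrt(p**2 * c**2 + mu**2 * c**4)
V0 = 0.4 * (E - mu * c**2)                      # above-barrier incidence
Qc = np.sqrt(1.0 + p**2 / (mu**2 * c**2))                            # Eq. (77)
Rc = (p / (mu * c)) * np.sqrt(E**2 / ((E - V0)**2 - mu**2 * c**4))   # Eq. (80)
# R_c > Q_c: effective index of refraction exceeds 1, so the barrier
# delays the above-barrier wavepacket (Delta tau < 0).
assert Rc > Qc > 1.0
```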
Figure 3: Contour of integration for Eq. (86) leading to the interchange of
the order of integration in Eq. (87). Figure 4: Contours of integration of Eq.
(90) for (a) the first integral when $V_{o}/\hbar c<\mu_{o}c/\hbar$, and
(b) the second integral.
We now establish the expected traversal time across the potential barrier and
use the same notation as that of Galapon [15] for consistency. To evaluate
the complex-valued IOR Eq. (75), we introduce the inverse Fourier transform of
the wavepacket
$\varphi(q)=(2\pi)^{-1}\int_{-\infty}^{\infty}d\tilde{k}e^{i\tilde{k}q}\phi(\tilde{k})$
such that
$\Phi(\zeta)=\int_{-\infty}^{\infty}d\tilde{k}|\phi(\tilde{k})|^{2}e^{i\tilde{k}\zeta}.$
(81)
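The autocorrelation structure of $\Phi(\zeta)$ is easy to verify for a concrete envelope. For a normalized Gaussian $\varphi(q)=(\pi\sigma^{2})^{-1/4}e^{-q^{2}/2\sigma^{2}}$ (an assumption for illustration, not the source's choice), Eq. (76) gives $\Phi(\zeta)=e^{-\zeta^{2}/4\sigma^{2}}$, with $\Phi(0)=1$ by normalization:

```python
import numpy as np
from scipy.integrate import quad

sigma = 1.3
phi = lambda q: (np.pi * sigma**2) ** (-0.25) * np.exp(-q**2 / (2 * sigma**2))

def Phi(zeta):
    """Overlap integral of Eq. (76) for the assumed Gaussian envelope."""
    val, _ = quad(lambda eta: phi(eta - zeta / 2) * phi(eta + zeta / 2),
                  -np.inf, np.inf)
    return val

assert abs(Phi(0.0) - 1.0) < 1e-8              # normalization Phi(0) = 1
for z in (0.5, 1.0, 2.0):
    assert abs(Phi(z) - np.exp(-z**2 / (4 * sigma**2))) < 1e-8
```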
Substituting Eq. (81) to Eq. (75), and performing a change of variable
$\tilde{k}=k-k_{o}$ yields
$\dfrac{R_{c}^{*}}{k_{o}}=\int_{0}^{\infty}d\zeta\mathsf{T}_{B}(-V_{0},\zeta)\int_{-\infty}^{\infty}dk|\phi(k-k_{o})|^{2}e^{ik\zeta}$
(82)
Notice that $\phi(k-k_{o})$ is the Fourier transform of the full incident
wavefunction $\psi(q)=e^{ik_{o}q}\varphi(q)$, i.e.
$\phi(k-k_{o})=\tilde{\psi}(k)=\dfrac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}dqe^{-ikq}\psi(q)$
(83)
Thus, we have
$\displaystyle\dfrac{R_{c}^{*}}{k_{o}}=\int_{0}^{\infty}$ $\displaystyle
d\zeta\mathsf{T}_{B}(-V_{0},\zeta)\int_{-\infty}^{\infty}dke^{ik\zeta}\absolutevalue{\tilde{\psi}(k)}^{2}$
(84) $\displaystyle=\int_{0}^{\infty}$ $\displaystyle
d\zeta\mathsf{F}_{B}(-V_{0},\zeta)\int_{-\infty}^{\infty}dke^{ik\zeta}\absolutevalue{\tilde{\psi}(k)}^{2}+\dfrac{2}{\pi}\int_{1}^{\infty}dy\dfrac{\sqrt{y^{2}-1}}{y}\mathsf{G}_{B}(V_{o},y)\int_{-\infty}^{\infty}dk\dfrac{\frac{\mu_{o}c}{\hbar}y}{k^{2}+\frac{\mu_{o}^{2}c^{2}}{\hbar^{2}}y^{2}}\absolutevalue{\tilde{\psi}(k)}^{2}.$
(85)
The last line follows from interchanging the order of integration in the
second term of Eq. (85), but the same cannot be done on the first term.
Specifically, if we use the same steps outlined in Sec. V to perform a term-
by-term integration on the first term of Eq. (85), then this will lead to an
infinite sum of divergent integrals whose values may be assigned using
analytic continuation, regularization, and other methods. However, it was
recently shown by one of us that this naive interchange in the ordering of
integrals leading to divergent integrals sometimes misses significant terms [75,
76]. This was shown to have physical significance in the traversal time of a
non-relativistic particle across a potential well [74].
Table 1: Numerical verification of $\tilde{R}_{c}$ for spatially narrow Gaussian wavepackets ($\sigma=0.5$) when there are both above- and below-barrier components.

Parameters | Integral: Eq. (75) | Summation: Eq. (100) | Evaluated: Eq. (94)
---|---|---|---
$k_{o}=2.00;V_{o}=0.2$ | 1.32442 | 1.32442 | 1.32442
$k_{o}=2.00;V_{o}=0.3$ | 1.38141 | 1.38141 | 1.38141
$k_{o}=2.00;V_{o}=0.5$ | — | 1.48255 | 1.48255
$k_{o}=2.00;V_{o}=0.6$ | — | 1.52350 | 1.52350
$k_{o}=0.90;V_{o}=0.3$ | 0.99882 | 0.99888 | 0.99888
$k_{o}=3.00;V_{o}=0.3$ | 1.24812 | 1.24812 | 1.24811
$k_{o}=5.00;V_{o}=0.3$ | 1.09394 | 1.09394 | 1.09393
$k_{o}=0.15;V_{o}=0.3$ | 0.18996 | 0.18996 | 0.18996
$k_{o}=0.20;V_{o}=0.3$ | 0.25253 | 0.25253 | 0.25253
$k_{o}=0.25;V_{o}=0.3$ | 0.31446 | 0.31446 | 0.31446
Table 2: Numerical verification of $\tilde{R}_{c}$ for spatially wide Gaussian wavepackets ($\sigma=9.0$, $V_{o}=0.3$) when there are only below-barrier components.

Parameters | Integral: Eq. (75) | Evaluated: Eq. (94)
---|---|---
$k_{o}=0.19$ | $2.23294\times 10^{-16}$ | $1.34410\times 10^{-29}$
$k_{o}=0.25$ | $1.77061\times 10^{-14}$ | $2.01917\times 10^{-24}$
$k_{o}=0.28$ | $1.84479\times 10^{-16}$ | $5.06286\times 10^{-22}$
To make the interchange in the order of integration on the first term
of Eq. (85) valid, we use the methods of Pablico and Galapon [74] and the
contour shown in Fig. 3. We let $p(z)=|\tilde{\psi}(z)|^{2}$ and assume that
$\tilde{\psi}(z)$ does not have any poles in the complex plane, i.e.,
$\int_{-\infty}^{\infty}dxe^{ix\zeta}p(x)=\int_{-\infty}^{\infty}dxe^{-(\epsilon-
ix)\zeta}p(x+i\epsilon).$ (86)
This now makes
$\displaystyle\int_{0}^{\infty}$ $\displaystyle
d\zeta\mathsf{F}_{B}(-V_{0},\zeta)\int_{-\infty}^{\infty}dke^{ik\zeta}\absolutevalue{\tilde{\psi}(k)}^{2}=\int_{-\infty}^{\infty}dkp(k+i\epsilon)\int_{0}^{\infty}d\zeta\mathsf{F}_{B}(-V_{o},\zeta)e^{-(\epsilon-
ik)\zeta}.$ (87)
The interchange is valid provided that $\epsilon>0$. We can now use the series
representation of the hypergeometric function in
$\mathsf{F}_{B}(-V_{o},\zeta)$ and use the same methods outlined in Sec. V.
This turns the first term of Eq. (85) into
$\displaystyle\int_{0}^{\infty}d\zeta$
$\displaystyle\mathsf{F}_{B}(-V_{0},\zeta)\int_{-\infty}^{\infty}dke^{ik\zeta}\absolutevalue{\tilde{\psi}(k)}^{2}$
$\displaystyle=i\sum_{n=0}^{\infty}$
$\displaystyle\dfrac{(2n)!}{(1)_{n}n!}\left(\dfrac{\mu_{o}V_{o}}{2\hbar^{2}}\right)^{n}\int_{-\infty}^{\infty}dkp(k+i\epsilon)\text{csgn}(k+i\epsilon)$
$\displaystyle\times\left(\sqrt{1+\dfrac{\hbar^{2}(k+i\epsilon)^{2}}{\mu_{o}^{2}c^{2}}}\right)^{n+1}\left((k+i\epsilon)^{2}+\dfrac{V_{o}^{2}}{\hbar^{2}c^{2}}\right)^{-n-\frac{1}{2}}$
$\displaystyle-\dfrac{2i}{\pi}$
$\displaystyle\int_{1}^{\infty}dy\dfrac{\sqrt{y^{2}-1}}{y}\mathsf{G}_{B}(V_{o},y)\int_{-\infty}^{\infty}dkp(k+i\epsilon)\dfrac{\frac{\hbar^{2}}{\mu^{2}c^{2}}(k+i\epsilon)}{y^{2}+\frac{\hbar^{2}}{\mu^{2}c^{2}}(k+i\epsilon)^{2}}$
(88)
where $\text{csgn}(z)$ is the complex signum function
$\displaystyle\text{csgn}(z)=\begin{cases}1,\quad&\text{Re}(z)>0\\
-1,\quad&\text{Re}(z)<0\\
\text{sgn}(\text{Im}(z)),\quad&\text{Re}(z)=0.\end{cases}$ (89)
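The piecewise definition of Eq. (89) translates directly into code; a small sketch (illustration only):

```python
import numpy as np

def csgn(z):
    """Complex signum, Eq. (89): the sign of Re(z), falling back to the
    sign of Im(z) on the imaginary axis."""
    z = complex(z)
    if z.real > 0:
        return 1
    if z.real < 0:
        return -1
    return int(np.sign(z.imag))

assert csgn(3 + 4j) == 1
assert csgn(-0.1 + 9j) == -1
assert csgn(2j) == 1 and csgn(-2j) == -1
```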
To understand the physical content of Eq. (88), we consider the following
integrals in the complex plane,
$\displaystyle\oint
dzp(z)\left(\sqrt{1+\dfrac{\hbar^{2}z^{2}}{\mu_{o}^{2}c^{2}}}\right)^{n+1}\left(z^{2}+\dfrac{V_{o}^{2}}{\hbar^{2}c^{2}}\right)^{-n-\frac{1}{2}}\quad\text{and}\quad\oint
dzp(z)\dfrac{\frac{\hbar^{2}}{\mu^{2}c^{2}}z}{y^{2}+\frac{\hbar^{2}}{\mu^{2}c^{2}}z^{2}},$
(90)
wherein the first integral has four branch points at $z=\\{\pm
i\frac{\mu_{o}c}{\hbar},\pm i\frac{V_{o}}{\hbar c}\\}$ while the second
integral has poles at $z=\pm i\frac{\mu c}{\hbar}$. We assume that the branch
points satisfy $V_{o}/\hbar c<\mu_{o}c/\hbar$, which is equivalent to the
condition $V_{o}<\mu_{o}c^{2}$. The integrals Eq. (90) are then evaluated
using the contours in Fig. 4 (see Appendix B for details), and the resulting expressions are substituted into Eq. (85), which yields
$\displaystyle R_{c}^{*}=$ $\displaystyle i\dfrac{\hbar k_{o}}{\mu
c}\int_{0}^{\infty}dk\left(\absolutevalue{\tilde{\psi}(k)}^{2}-\absolutevalue{\tilde{\psi}(-k)}^{2}\right)\sqrt{\dfrac{\tilde{E}_{k}^{2}}{(\tilde{E}_{k}-V_{o})^{2}-\mu_{o}^{2}c^{4}}}$
$\displaystyle+k_{o}\dfrac{2}{\pi}\int_{1}^{\infty}dy\dfrac{\sqrt{y^{2}-1}}{y}\mathsf{G_{B}}(V_{o},y)\int_{-\infty}^{\infty}dk\absolutevalue{\tilde{\psi}(k)}^{2}\dfrac{\frac{\mu
c}{\hbar}y}{k^{2}+\frac{\mu^{2}c^{2}}{\hbar^{2}}y^{2}}$ (91)
where $\tilde{E}_{k}=\sqrt{\hbar^{2}k^{2}c^{2}+\mu_{o}^{2}c^{4}}$. It is
easy to see that the first term of Eq. (91) is generally complex-valued while
the second term is always real-valued. Thus, taking the imaginary component of
the IOR yields
$\imaginary[R_{c}^{*}]=\dfrac{\hbar k_{o}}{\mu_{o}c}\tilde{R}_{c}=\dfrac{\hbar
k_{o}}{\mu_{o}c}\text{Re}\left\\{\int_{0}^{\infty}dk\left(\absolutevalue{\tilde{\psi}(k)}^{2}-\absolutevalue{\tilde{\psi}(-k)}^{2}\right)\sqrt{\dfrac{\tilde{E}_{k}^{2}}{(\tilde{E}_{k}-V_{o})^{2}-\mu_{o}^{2}c^{4}}}\right\\}$
(92)
The integrand on the right-hand side of Eq. (92) is real-valued only when $\absolutevalue{k}>\kappa_{c}$, where
$\kappa_{c}=\sqrt{\dfrac{2\mu_{o}V_{o}}{\hbar^{2}}\left(1+\frac{V_{o}}{2\mu_{o}c^{2}}\right)}$
(93)
provided that $V_{o}<\mu_{o}c^{2}$. Thus, Eq. (92) becomes
$\tilde{R}_{c}=\tilde{R}_{c}^{(+)}-\tilde{R}_{c}^{(-)}=\int_{\kappa_{c}}^{\infty}dk\absolutevalue{\tilde{\psi}(+k)}^{2}\sqrt{\dfrac{\tilde{E}_{k}^{2}}{(\tilde{E}_{k}-V_{o})^{2}-\mu_{o}^{2}c^{4}}}-\int_{\kappa_{c}}^{\infty}dk\absolutevalue{\tilde{\psi}(-k)}^{2}\sqrt{\dfrac{\tilde{E}_{k}^{2}}{(\tilde{E}_{k}-V_{o})^{2}-\mu_{o}^{2}c^{4}}}$
(94)
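The threshold $\kappa_{c}$ of Eq. (93) is exactly the momentum at which $\tilde{E}_{k}-V_{o}=\mu_{o}c^{2}$, so that the square root in Eq. (94) becomes real; a quick numerical check in Python (natural units $\mu_{o}=c=\hbar=1$ as in the figures):

```python
import numpy as np

hbar = c = mu = 1.0  # natural units used in the figures
V = 0.99             # barrier height, with V < mu*c**2

# critical momentum, Eq. (93)
kappa_c = np.sqrt(2*mu*V/hbar**2 * (1 + V/(2*mu*c**2)))

def E(k):
    # relativistic dispersion E_k = sqrt(hbar^2 k^2 c^2 + mu^2 c^4)
    return np.sqrt(hbar**2*k**2*c**2 + mu**2*c**4)

# at k = kappa_c the argument of the square root in Eq. (94) vanishes
print(E(kappa_c) - V - mu*c**2)  # ~ 0
```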
It easily follows that the barrier traversal time now has the form
$\bar{\tau}_{\text{trav}}=\dfrac{\mu_{o}L}{p_{o}}\imaginary[R_{c}^{*}]=t_{c}\tilde{R}_{c},$
(95)
where $t_{c}=L/c$ is the time it takes a photon to traverse the barrier
length. The term $\tilde{R}_{c}^{(+)}$ ($\tilde{R}_{c}^{(-)}$) characterizes
the contribution of the positive (negative) components of the energy
distribution of $\tilde{\psi}(k)$ with $\absolutevalue{k}>\kappa_{c}$ to the
effective IOR $\tilde{R}_{c}$. Clearly, the quantity
$\bar{\tau}_{\text{trav}}^{(\pm)}=t_{c}\tilde{R}_{c}^{(\pm)}=\int_{\kappa_{c}}^{\infty}dk\bar{\tau}_{\text{top}}(k)|\tilde{\psi}(\pm
k)|^{2}$ (96)
is the weighted average of the classical above barrier traversal time
$\bar{\tau}_{\text{top}}(k)=t_{c}\sqrt{\dfrac{\tilde{E}_{k}^{2}}{(\tilde{E}_{k}-V_{o})^{2}-\mu_{o}^{2}c^{4}}}$
(97)
with weights $|\tilde{\psi}(\pm k)|^{2}$. The effective IOR Eq. (94) shows
that the contribution of the below barrier energy components of
$\tilde{\psi}(k)$ with $\absolutevalue{k}<\kappa_{c}$ vanishes, which leads us
to the same conclusion as that of Galapon [15]. That is, the below barrier
energy components of $\tilde{\psi}(k)$ are transmitted instantaneously which
implies that tunneling, whenever it occurs, is instantaneous.
Thus, the instantaneous tunneling time predicted in Ref. [15] is not a mere
consequence of using a non-relativistic theory but is an inherent quantum
effect in the context of “arrival times” as it still manifests even with a
relativistic treatment. However, only a specific configuration of a tunneling experiment allows this instantaneous tunneling time to be observed. Specifically, Eq. (94) implies that the initial incident
wavepacket $\psi(q)$ must be sufficiently spatially wide so that the spread in
momentum is narrow. This will ensure that $\tilde{\psi}(k)$ only has below
barrier components. Additionally, Eq. (94) rests on the assumption that
$\psi(q)$ does not initially ‘leak’ inside the barrier region, as such, the
initial incident wavepacket must be placed very far from the barrier.
Figure 5: Momentum density distribution $|\tilde{\psi}(k)|^{2}$ of spatially
wide Gaussian wavepackets for the parameters $\mu_{o}=c=\hbar=1$ with
$k_{o}=1.3$. The red line represents $\kappa_{c}=1.7025$ with $V_{o}=0.99$.
## VII Barrier traversal time of Gaussian wavepackets
We consider an incident Gaussian wavepacket, i.e.,
$\varphi(q)=\dfrac{1}{\sqrt{\sigma\sqrt{2\pi}}}\exp[-\dfrac{(q-q_{o})^{2}}{4\sigma^{2}}],$
(98)
that is initially centered at $q=q_{o}$ with a position variance $\sigma^{2}$.
In momentum representation, this leads to
$\displaystyle\absolutevalue{\tilde{\psi}(\pm k)}^{2}=$
$\displaystyle\sqrt{\dfrac{2\sigma^{2}}{\pi}}\exp[-2\sigma^{2}(k\mp
k_{o})^{2}].$ (99)
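With the Gaussian weights of Eq. (99), the effective IOR of Eq. (94) can be evaluated directly; the Python sketch below is our own cross-check (the substitution $k=\kappa_{c}+u^{2}$ is introduced here, not in the text, to remove the integrable inverse-square-root singularity at the lower endpoint):

```python
import numpy as np
from scipy.integrate import quad

hbar = c = mu = 1.0  # natural units used in the figures
V = 0.99             # barrier height, V < mu*c**2
kappa_c = np.sqrt(2*mu*V/hbar**2 * (1 + V/(2*mu*c**2)))  # Eq. (93)

def E(k):
    # relativistic dispersion E_k, defined after Eq. (91)
    return np.sqrt(hbar**2*k**2*c**2 + mu**2*c**4)

def weight(k, k0, sigma):
    # Gaussian momentum density |psi(k)|^2, Eq. (99)
    return np.sqrt(2*sigma**2/np.pi) * np.exp(-2*sigma**2*(k - k0)**2)

def R_tilde(k0, sigma):
    # effective IOR, Eq. (94); k = kappa_c + u**2 is our own substitution
    # removing the integrable 1/sqrt singularity at k = kappa_c
    def integrand(u, s):
        k = kappa_c + u**2
        factor = np.sqrt(E(k)**2 / ((E(k) - V)**2 - (mu*c**2)**2))
        return 2*u * weight(k, s*k0, sigma) * factor
    plus, _ = quad(integrand, 0.0, np.inf, args=(+1,))
    minus, _ = quad(integrand, 0.0, np.inf, args=(-1,))
    return plus - minus

print(R_tilde(1.3, 6.0))  # spatially wide packet: numerically zero
print(R_tilde(3.0, 0.5))  # packet centered above kappa_c: positive
```

A spatially wide packet ($\sigma=6$, $k_{o}=1.3$) yields a numerically vanishing $\tilde{R}_{c}$, consistent with the instantaneous-tunneling conclusion of Sec. VI, while a packet centered well above $\kappa_{c}$ yields a positive value.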
For completeness, we first numerically verify the equivalence of Eqs. (94) and (75). However, Eq. (75) is numerically taxing and unstable to evaluate as the potential $V_{o}$ increases, so $\mathsf{T}_{B}(-V_{o},\zeta)$ must be recast in an equivalent form. This is done by using the power
series representation of the hypergeometric function in Eq. (55) to perform a
term-by-term integration which yields
$\displaystyle\mathsf{T}_{B}(-V_{o},\zeta)=$
$\displaystyle\sum_{l=0}^{\infty}\dfrac{(2l)!}{l!l!}\left(\dfrac{-\mu_{o}V_{o}}{2\hbar^{2}}\right)^{l}\sum_{m=0}^{l}\binom{l}{m}\left(\dfrac{-V_{o}}{2\mu_{o}c^{2}}\right)^{l-m}\sum_{n=0}^{l}\binom{\frac{m+1}{2}}{n}\left(\dfrac{-\hbar^{2}}{\mu_{o}^{2}c^{2}}\right)^{n}\dfrac{\zeta^{2l-2n}}{(2l-2n)!}$
$\displaystyle+\dfrac{2}{\pi}\int_{1}^{\infty}dz\exp[-\dfrac{\mu_{o}c}{\hbar}\absolutevalue{\zeta}z]\dfrac{\sqrt{z^{2}-1}}{z}\mathsf{G}_{B}(V_{o},z)$
(100)
Eq. (100) is then substituted into Eq. (75). This series converges as long as the initial energy of the particle is above the barrier height, for $V_{o}<\mu_{o}c^{2}$.
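The series part of Eq. (100) can be organized as nested partial sums; the Python sketch below is a direct transcription of the first term under our own truncation at `lmax` (`scipy.special.binom` evaluates the half-integer binomial $\binom{(m+1)/2}{n}$), and as a consistency check it reduces to $1$ in the free case $V_{o}=0$:

```python
import math
from scipy.special import binom  # generalized binomial coefficient

def TB_series_term(V, zeta, mu=1.0, c=1.0, hbar=1.0, lmax=20):
    """Partial sum of the series (first) term of Eq. (100),
    truncated at l = lmax (our own truncation parameter)."""
    total = 0.0
    for l in range(lmax + 1):
        pref = (math.factorial(2*l) / math.factorial(l)**2
                * (-mu*V/(2*hbar**2))**l)
        for m in range(l + 1):
            bm = binom(l, m) * (-V/(2*mu*c**2))**(l - m)
            for n in range(l + 1):
                total += (pref * bm * binom((m + 1)/2, n)
                          * (-hbar**2/(mu**2*c**2))**n
                          * zeta**(2*l - 2*n) / math.factorial(2*l - 2*n))
    return total

print(TB_series_term(0.0, 1.0))  # free case: only l = 0 survives, giving 1.0
```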
The equivalent expressions for the effective IOR given by Eqs. (75), (94), and
(100) were numerically evaluated using Wolfram Mathematica 12 - Student
edition. The computer used has the following specifications: an Intel Core
i5-9300H CPU @ 2.40 GHz, 8.0 GB RAM, and a 64-bit operating system with an x64-based processor. Table 1 compares the values of $\tilde{R}_{c}$ for
spatially narrow Gaussian wavepackets, i.e., wavepackets with a wide spread in momentum so that they can have both below and above barrier components. The
evaluation of Eq. (75) is numerically taxing for the computer as the potential increases, but the equivalent expression Eq. (100) converges to the same value as Eq. (94). Moreover, for the parameters at which Eq. (75) is evaluated, the equivalent expressions Eqs. (100) and (94) all converge to the same value. Table 2 compares the values of $\tilde{R}_{c}$ for
spatially wide Gaussian wavepackets, i.e., wavepackets with a narrow spread in momentum so that they only have below barrier components. Eq. (100) does not converge in this case, so we only compare Eqs. (75) and (94). It can be seen that the values become numerically zero, which supports our earlier conclusion. This
gives us confidence in the final expression of the barrier traversal time Eq.
(95).
Figure 6: The effective IOR $\tilde{R}_{c}$ of spatially wide Gaussian
wavepackets for the parameters $\mu_{o}=c=\hbar=1$ with $\sigma=6$. The red
line represents $\kappa_{c}=1.7025$ with $V_{o}=0.99$. The area below the blue
line represents the superluminal region for the traversal time when
$\tilde{R}_{c}<1$. Figure 7: Momentum density distribution
$|\tilde{\psi}(k)|^{2}$ for the parameters $\mu_{o}=c=\hbar=1$ with
$\sigma=6$. The red line represents $\kappa_{c}=1.7025$ with $V_{o}=0.99$.
To further appreciate the importance of distinguishing the below and above
barrier components, consider Fig. 5. The components on the right (left) of the
red line $\kappa_{c}$ are the above (below) barrier components. It can easily
be seen from Fig. 5 that all the components of $|\tilde{\psi}(k)|^{2}$ for the
cases $\sigma=4.0$ and $\sigma=6.0$ lie below $\kappa_{c}$ and will thus tunnel
instantaneously through the barrier $V_{o}=0.99$. This is easily verified by
evaluating Eq. (94) for these parameters, which will yield $\tilde{R}_{c}\sim
0$. Fig. 6 shows the effective IOR $\tilde{R}_{c}$ for spatially wide Gaussian
wavepackets as the initial momentum $k_{o}$ increases. It can be seen that
there is a region where the traversal time $\bar{\tau}_{\text{trav}}$ becomes
superluminal as $k_{o}$ increases such that the spread of
$|\tilde{\psi}(k)|^{2}$ starts to go beyond $\kappa_{c}$. This is shown in
Fig. 7. We can thus estimate that if the initial momentum $k_{o}<\kappa_{c}-\sigma_{k}$, where $\sigma_{k}$ is the momentum spread, then the traversal time becomes superluminal because $\int_{\kappa_{c}}^{\infty}dk|\tilde{\psi}(k)|^{2}$ is small, which effectively leads to $\tilde{R}_{c}<1$ or, equivalently, $\bar{\tau}_{\text{trav}}<t_{c}$. Moreover, the traversal time becomes subluminal when the initial momentum $k_{o}>\kappa_{c}-\sigma_{k}$, with the peak of $\tilde{R}_{c}$ roughly at $k_{o}=\kappa_{c}+\sigma_{k}$. The effective IOR $\tilde{R}_{c}$ then
eventually plateaus to some value as all the components of
$|\tilde{\psi}(k)|^{2}$ are above $\kappa_{c}$.
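For the Gaussian density of Eq. (99), the momentum spread is $\sigma_{k}=1/(2\sigma)$, so the superluminal criterion $k_{o}<\kappa_{c}-\sigma_{k}$ is immediate to test (a Python sketch with the figure parameters):

```python
import numpy as np

hbar = c = mu = 1.0
V, sigma, k0 = 0.99, 6.0, 1.3  # figure parameters

kappa_c = np.sqrt(2*mu*V/hbar**2 * (1 + V/(2*mu*c**2)))  # Eq. (93)
sigma_k = 1.0/(2.0*sigma)  # momentum spread of the Gaussian, Eq. (99)

# k0 < kappa_c - sigma_k  =>  superluminal traversal regime
print(k0 < kappa_c - sigma_k)  # True
```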
## VIII Conclusion
In this paper, we have given a full account of [EPL, 141 (2023) 10001]. The
general form of the quantized relativistic TOA-operators in the presence of an interaction potential was also obtained using modified Weyl, Born-Jordan, and simple symmetric ordering rules. These were then used to investigate the
traversal time of a relativistic quantum particle across a square barrier. We
have shown that tunneling is still instantaneous for the three ordering rules
despite a relativistic treatment of time as a dynamical observable, provided
that the barrier height is less than the rest mass energy. This result is
similar to the earlier work of Galapon [15] for a non-relativistic particle.
That is, tunneling is instantaneous, and only the above barrier energy
components of the initial wavepacket’s momentum distribution contribute to the
barrier traversal time.
The results of this paper imply that instantaneous tunneling times, or more generally superluminal tunneling times, across a square barrier are not a consequence of using a non-relativistic theory but are an inherent quantum effect in the context of arrival times. However, this instantaneous tunneling
can only be observed if the following conditions are satisfied: (i) the
initial incident wavepacket $\psi(q)$ must be spatially wide to ensure that
all the momentum components are below the barrier; and (ii) the initial
incident wavepacket must be placed very far from the barrier to prevent any
‘leaking’ into the barrier.
The case $V_{o}>\mu_{o}c^{2}$ remains to be explored, which can be done by modifying the contour in Fig. 4. By doing so, it is expected that one
should be able to extract a non-zero value for the below-barrier contributions
to the effective IOR of the barrier. The caveat is that the effects of
spontaneous pair creation and annihilation may be significant in this regime. That is, the particle that arrives may not be the same particle that tunneled through the barrier, so the concept of TOA becomes ill-defined.
It should then be enough to use a non-relativistic theory and investigate the effects of the shape of the barrier on the measured tunneling times. It is well-known that non-linear systems such as the square barrier suffer from obstructions to quantization [50]. In the non-relativistic case, the correction terms to the TKF for non-linear systems, such as the square barrier, have recently been obtained [51]. Applying these correction terms may lead to non-zero tunneling times.
We leave the problem for spin-$1/2$ particles open for future studies. Earlier
studies done by Bunao and Galapon [77, 78], where the TOA-operators were
obtained by solving the time-energy canonical commutation relation, have shown
that $\mathsf{\hat{T}_{S-1/2}}=\mathsf{\hat{T}_{S-0}}+\mathsf{\hat{T}_{E}}$, where $\mathsf{\hat{T}_{S-1/2}}$ and $\mathsf{\hat{T}_{S-0}}$ are the free-particle TOA-operators for spin-$1/2$ and spin-$0$ particles, respectively.
Meanwhile, $\mathsf{\hat{T}_{E}}$ is an extra term which is not invariant
under parity transforms, and commutes with the Dirac-Hamiltonian. The term
$\mathsf{\hat{T}_{E}}$ was then thrown away because it does not contribute
anything to the conjugacy relation, implying
$\mathsf{\hat{T}_{S-1/2}}=\mathsf{\hat{T}_{S-0}}$, but $\mathsf{\hat{T}_{E}}$
may provide for other characteristics, roles, and physical interpretations of
a time observable [79]. We expect the same behavior when extending the
formalism to spin-$1/2$ particles, i.e., there will be an extra term to the
barrier traversal time operator for spin-$1/2$ particles. However, the
quantization prescription does not impose conjugacy between the Hamiltonian
and TOA-operators, as such, it is possible that this extra term cannot be
simply thrown away and may lead to non-zero tunneling times.
###### Acknowledgements.
P.C.M. Flores would like to thank D.A.L. Pablico and C.D. Tica for fruitful
discussions regarding the evaluation of the divergent integrals in term-by-
term integration. P.C.M. Flores acknowledges the support of the Department of
Science and Technology – Science Education Institute through the ASTHRDP-NSC
graduate scholarship program.
## Data Availability Statement
Data sharing is not applicable to this article as no new data were created or
analyzed in this study.
## Appendix A Non-relativistic limit of the time kernel factors
For completeness, we show how the relativistic TKFs in Sec. III reduce to the
known TKFs of the non-relativistic TOA operator constructed by Galapon and
Magadan [40]. We first evaluate the modified Weyl-ordered relativistic TKF
operator as follows
$\displaystyle\lim_{c\rightarrow\infty}$ $\displaystyle
T^{\\{W\\}}(q,q^{\prime})$ $\displaystyle=$
$\displaystyle\dfrac{1}{2}\int_{0}^{\frac{q+q^{\prime}}{2}}ds\lim_{c\rightarrow\infty}\mathsf{W}_{s}(q,q^{\prime})$
$\displaystyle=$
$\displaystyle\dfrac{1}{2}\int_{0}^{\frac{q+q^{\prime}}{2}}ds\lim_{c\rightarrow\infty}\left\\{\mathsf{W}_{s}^{(1)}(q,q^{\prime})+\dfrac{2}{\pi}\int_{1}^{\infty}dz\exp[-\dfrac{\mu_{o}c}{\hbar}\absolutevalue{q-q^{\prime}}z]\dfrac{\sqrt{z^{2}-1}}{z}\mathsf{W}_{s,z}^{(2)}(q,q^{\prime})\right\\}$
(101)
It can easily be seen that the second term of Eq. (101) vanishes
exponentially. Meanwhile, the first term of Eq. (101) reduces into
$\displaystyle\lim_{c\rightarrow\infty}$
$\displaystyle\mathsf{W}_{s}^{(1)}(q,q^{\prime})$ $\displaystyle=$
$\displaystyle\int_{0}^{\infty}dye^{-y}\oint_{R}\dfrac{dz}{2\pi
i}\dfrac{1}{z}\lim_{c\rightarrow\infty}\sqrt{1+\dfrac{z^{2}}{\mu_{o}^{2}c^{2}}}{{}_{0}}F_{1}\left[;1;\dfrac{\mu_{o}V_{s}^{(W)}(q,q^{\prime})}{2\hbar^{2}}\left(\left(q-q^{\prime}\right)-i\hbar\dfrac{y}{z}\right)^{2}\mathsf{P_{W}}(s,z,q,q^{\prime})\right]$
$\displaystyle=$
$\displaystyle\int_{0}^{\infty}dye^{-y}\oint_{R}\dfrac{dz}{2\pi
i}\dfrac{1}{z}{{}_{0}}F_{1}\left[;1;\dfrac{\mu_{o}V_{s}^{(W)}(q,q^{\prime})}{2\hbar^{2}}\left(\left(q-q^{\prime}\right)-i\hbar\dfrac{y}{z}\right)^{2}\right].$
(102)
The right-hand side of Eq. (102) is further evaluated by taking the series
representation of the hypergeometric function to perform a term-by-term
integration, i.e.,
$\displaystyle\lim_{c\rightarrow\infty}$
$\displaystyle\mathsf{W}_{s}^{(1)}(q,q^{\prime})$ $\displaystyle=$
$\displaystyle\sum_{m=0}^{\infty}\dfrac{1}{(1)_{m}m!}\left(\dfrac{\mu
V_{s}^{(W)}(q,q^{\prime})}{2\hbar^{2}}\right)^{m}\sum_{n=0}^{2m}\binom{2m}{n}(q-q^{\prime})^{2m-n}\left(-i\hbar\right)^{n}\int_{0}^{\infty}dye^{-y}y^{n}\oint_{R}\dfrac{dz}{2\pi
i}\dfrac{1}{z^{n+1}}$ $\displaystyle=$
$\displaystyle\sum_{m=0}^{\infty}\dfrac{1}{(1)_{m}m!}\left(\dfrac{\mu
V_{s}^{(W)}(q,q^{\prime})}{2\hbar^{2}}(q-q^{\prime})^{2}\right)^{m}$
$\displaystyle=$ $\displaystyle{{}_{0}}F_{1}\left[;1;\dfrac{\mu
V_{s}^{(W)}(q,q^{\prime})}{2\hbar^{2}}(q-q^{\prime})^{2}\right]$ (103)
Thus, we now have the non-relativistic limit of the Weyl-ordered TKF given by
$\lim_{c\rightarrow\infty}T^{\\{W\\}}(q,q^{\prime})=\dfrac{1}{2}\int_{0}^{\frac{q+q^{\prime}}{2}}ds{{}_{0}}F_{1}\left[;1;\dfrac{\mu
V_{s}^{(W)}(q,q^{\prime})}{2\hbar^{2}}(q-q^{\prime})^{2}\right]$ (104)
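The resummation from Eq. (103) to Eq. (104) is just the defining series ${}_{0}F_{1}(;b;x)=\sum_{m}x^{m}/((b)_{m}m!)$; a quick numerical confirmation against SciPy (the sample argument is arbitrary):

```python
import math
from scipy.special import hyp0f1

def f01(b, x, terms=60):
    # 0F1(; b; x) as the series in Eq. (103): sum_m x**m / ((b)_m m!)
    total, poch = 0.0, 1.0  # poch accumulates the Pochhammer symbol (b)_m
    for m in range(terms):
        total += x**m / (poch * math.factorial(m))
        poch *= b + m
    return total

x = 0.7  # arbitrary sample argument
print(f01(1.0, x), hyp0f1(1.0, x))  # the two agree
```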
The same process is applied to obtain the non-relativistic limit of the Born-
Jordan and simple-symmetric ordered TKFs.
## Appendix B Further details on the evaluation of the complex-valued IOR
$R_{c}^{*}$
Here, we provide the details of the evaluation of the contour integrals in Eq. (90). Let us first consider the integral
$\oint
dzp(z)\left(\sqrt{1+\dfrac{\hbar^{2}z^{2}}{\mu_{o}^{2}c^{2}}}\right)^{n+1}\left(z^{2}+\dfrac{V_{o}^{2}}{\hbar^{2}c^{2}}\right)^{-n-\frac{1}{2}},$
(105)
which is separately evaluated using the left and right box contours in Fig.
4(a). It is straightforward to show that the integral Eq. (105) will vanish
along the paths $z=\pm r+iy$ since $p(z)=\absolutevalue{\tilde{\psi}(z)}^{2}$
vanishes as $r\rightarrow\infty$. Moreover, Eq. (105) also vanishes along the
semicircular paths around the branch points $z=\delta
e^{i\theta}+i(\mu_{o}c/\hbar)$ and $z=\delta e^{i\theta}+i(V_{o}/\hbar c)$ as
$\delta\rightarrow 0$. Taking the difference of the non-vanishing terms of the
right and left box contours will then yield
$\displaystyle\int_{-\infty}^{\infty}dx$ $\displaystyle
p(x+i\epsilon)\text{csgn}(x+i\epsilon)\left(\sqrt{1+\dfrac{\hbar^{2}(x+i\epsilon)^{2}}{\mu_{o}^{2}c^{2}}}\right)^{n+1}\left((x+i\epsilon)^{2}+\dfrac{V_{o}^{2}}{\hbar^{2}c^{2}}\right)^{-n-\frac{1}{2}}$
$\displaystyle=$
$\displaystyle\int_{0}^{\infty}dx\left(p(x)-p(-x)\right)\left(\sqrt{1+\dfrac{\hbar^{2}x^{2}}{\mu_{o}^{2}c^{2}}}\right)^{n+1}\left(x^{2}+\dfrac{V_{o}^{2}}{\hbar^{2}c^{2}}\right)^{-n-\frac{1}{2}}$
$\displaystyle-i\left(1-(-1)^{n+1}\right)\left(-i\dfrac{\hbar^{2}}{\mu^{2}c^{2}}\right)^{n}\int_{1}^{\infty}dyp\left(i\dfrac{\mu
c}{\hbar}y\right)\sqrt{\dfrac{y^{2}-1}{y^{2}-\frac{V_{o}^{2}}{\mu^{2}c^{4}}}}\left(\dfrac{\sqrt{y^{2}-1}}{y^{2}-\frac{V_{o}^{2}}{\mu^{2}c^{4}}}\right)^{n}$
(106)
We can similarly evaluate the integral
$\displaystyle\oint
dzp(z)\dfrac{\frac{\hbar^{2}}{\mu^{2}c^{2}}z}{y^{2}+\frac{\hbar^{2}}{\mu^{2}c^{2}}z^{2}}$
(107)
using the contour in Fig. 4(b). It is also straightforward to show that the
integral Eq. (107) will vanish along the paths $z=\pm r+iy$ since
$p(z)=\absolutevalue{\tilde{\psi}(z)}^{2}$ vanishes as $r\rightarrow\infty$.
Using the residue theorem, it is easy to show that
$\displaystyle\int_{-\infty}^{\infty}dx$ $\displaystyle
p(x+i\epsilon)\dfrac{\frac{\hbar^{2}}{\mu^{2}c^{2}}(x+i\epsilon)}{y^{2}+\frac{\hbar^{2}}{\mu^{2}c^{2}}(x+i\epsilon)^{2}}=\int_{-\infty}^{\infty}dxp(x)\dfrac{\frac{\hbar^{2}}{\mu^{2}c^{2}}x}{y^{2}+\frac{\hbar^{2}}{\mu^{2}c^{2}}x^{2}}-\pi
ip\left(i\dfrac{\mu c}{\hbar}y\right)$ (108)
We then substitute both Eqs. (106) and (108) into Eq. (88) which yields
$\displaystyle\int_{0}^{\infty}d\zeta$
$\displaystyle\mathsf{F}_{B}(-V_{0},\zeta)\int_{-\infty}^{\infty}dke^{ik\zeta}\absolutevalue{\tilde{\psi}(k)}^{2}$
$\displaystyle=$ $\displaystyle i\dfrac{\hbar}{\mu
c}\int_{0}^{\infty}dk\left(\absolutevalue{\tilde{\psi}(k)}^{2}-\absolutevalue{\tilde{\psi}(-k)}^{2}\right)\sqrt{\dfrac{\tilde{E}_{k}^{2}}{(\tilde{E}_{k}-V_{o})^{2}-\mu_{o}^{2}c^{4}}}$
(109)
Finally, we combine Eqs. (109) and (85) to obtain
$\displaystyle R_{c}^{*}=$ $\displaystyle i\dfrac{\hbar k_{o}}{\mu
c}\int_{0}^{\infty}dk\left(\absolutevalue{\tilde{\psi}(k)}^{2}-\absolutevalue{\tilde{\psi}(-k)}^{2}\right)\sqrt{\dfrac{\tilde{E}_{k}^{2}}{(\tilde{E}_{k}-V_{o})^{2}-\mu_{o}^{2}c^{4}}}$
$\displaystyle+k_{o}\dfrac{2}{\pi}\int_{1}^{\infty}dy\dfrac{\sqrt{y^{2}-1}}{y}\mathsf{G_{B}}(V_{o},y)\int_{-\infty}^{\infty}dk\absolutevalue{\tilde{\psi}(k)}^{2}\dfrac{\frac{\mu
c}{\hbar}y}{k^{2}+\frac{\mu^{2}c^{2}}{\hbar^{2}}y^{2}}.$ (110)
Notice that the first term of (110) is generally complex-valued while the
second term is always real-valued.
## References
* MacColl [1932] L. MacColl, “Note on the transmission and reflection of wave packets by potential barriers,” Physical Review 40, 621 (1932).
* Hartman [1962] T. E. Hartman, “Tunneling of a wave packet,” Journal of Applied Physics 33, 3427–3433 (1962).
* Hauge and Støvneng [1989] E. Hauge and J. Støvneng, “Tunneling times: a critical review,” Reviews of Modern Physics 61, 917 (1989).
* Landauer and Martin [1994] R. Landauer and T. Martin, “Barrier interaction time in tunneling,” Reviews of Modern Physics 66, 217 (1994).
* Pauli _et al._ [1933] W. Pauli _et al._ , “Handbuch der physik,” Geiger and scheel 2, 83–272 (1933).
* Wigner [1955] E. P. Wigner, “Lower limit for the energy derivative of the scattering phase shift,” Physical Review 98, 145 (1955).
* Büttiker and Landauer [1982] M. Büttiker and R. Landauer, “Traversal time for tunneling,” Physical Review Letters 49, 1739 (1982).
* Baz [1966] A. Baz, “Lifetime of intermediate states,” Yadern. Fiz. 4 (1966).
* Rybachenko [1967] V. Rybachenko, “Time of penetration of a particle through a potential barrier,” Sov. J. Nucl. Phys. 5, 635–639 (1967).
* Büttiker [1983] M. Büttiker, “Larmor precession and the traversal time for tunneling,” Physical Review B 27, 6178 (1983).
* Pollak and Miller [1984] E. Pollak and W. H. Miller, “New physical interpretation for time in scattering theory,” Physical review letters 53, 115 (1984).
* Smith [1960] F. T. Smith, “Lifetime matrix in collision theory,” Physical Review 118, 349 (1960).
* Sokolovski and Baskin [1987] D. Sokolovski and L. Baskin, “Traversal time in quantum scattering,” Physical Review A 36, 4604 (1987).
* Yamada [2004] N. Yamada, “Unified derivation of tunneling times from decoherence functionals,” Physical review letters 93, 170401 (2004).
* Galapon [2012] E. A. Galapon, “Only above barrier energy components contribute to barrier traversal time,” Physical review letters 108, 170402 (2012).
* de Carvalho and Nussenzveig [2002] C. A. de Carvalho and H. M. Nussenzveig, “Time delay,” Physics Reports 364, 83–174 (2002).
* Winful [2006] H. G. Winful, “Tunneling time, the hartman effect, and superluminality: A proposed resolution of an old paradox,” Physics Reports 436, 1–69 (2006).
* Imafuku, Ohba, and Yamanaka [1997] K. Imafuku, I. Ohba, and Y. Yamanaka, “Effects of inelastic scattering on tunneling time based on the generalized diffusion process approach,” Physical Review A 56, 1142 (1997).
* Brouard, Sala, and Muga [1994] S. Brouard, R. Sala, and J. Muga, “Systematic approach to define and classify quantum transmission and reflection times,” Physical Review A 49, 4312 (1994).
* Jaworski and Wardlaw [1988] W. Jaworski and D. M. Wardlaw, “Time delay in tunneling: Sojourn-time approach versus mean-position approach,” Physical Review A 38, 5404 (1988).
* Leavens and Aers [1989] C. Leavens and G. Aers, “Dwell time and phase times for transmission and reflection,” Physical Review B 39, 1202 (1989).
* Hauge, Falck, and Fjeldly [1987] E. Hauge, J. Falck, and T. Fjeldly, “Transmission and reflection times for scattering of wave packets off tunneling barriers,” Physical Review B 36, 4203 (1987).
* Galapon [2002] E. Galapon, “Pauli’s theorem and quantum canonical pairs: the consistency of a bounded, self–adjoint time operator canonically conjugate to a hamiltonian with non–empty point spectrum,” Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 458, 451–472 (2002).
* Winful [2003] H. G. Winful, “Nature of “superluminal" barrier tunneling,” Physical review letters 90, 023901 (2003).
* Eckle _et al._ [2008a] P. Eckle, M. Smolarski, P. Schlup, J. Biegert, A. Staudte, M. Schöffler, H. G. Muller, R. Dörner, and U. Keller, “Attosecond angular streaking,” Nature Physics 4, 565–570 (2008a).
* Eckle _et al._ [2008b] P. Eckle, A. N. Pfeiffer, C. Cirelli, A. Staudte, R. Dörner, H. G. Muller, M. Büttiker, and U. Keller, “Attosecond ionization and tunneling delay time measurements in helium,” Science 322, 1525–1529 (2008b).
* Pfeiffer _et al._ [2012] A. N. Pfeiffer, C. Cirelli, M. Smolarski, D. Dimitrovski, M. Abu-samha, L. B. Madsen, and U. Keller, “Attoclock reveals natural coordinates of the laser-induced tunnelling current flow in atoms,” Nature Physics 8, 76–80 (2012).
* Pfeiffer _et al._ [2013] A. N. Pfeiffer, C. Cirelli, M. Smolarski, and U. Keller, “Recent attoclock measurements of strong field ionization,” Chemical Physics 414, 84–91 (2013).
* Sainadh _et al._ [2019] U. S. Sainadh, H. Xu, X. Wang, A. Atia-Tul-Noor, W. C. Wallace, N. Douguet, A. Bray, I. Ivanov, K. Bartschat, A. Kheifets, _et al._ , “Attosecond angular streaking and tunnelling time in atomic hydrogen,” Nature 568, 75–77 (2019).
* Torlina _et al._ [2015] L. Torlina, F. Morales, J. Kaushal, I. Ivanov, A. Kheifets, A. Zielinski, A. Scrinzi, H. G. Muller, S. Sukiasyan, M. Ivanov, _et al._ , “Interpreting attoclock measurements of tunnelling times,” Nature Physics 11, 503–508 (2015).
* Landsman _et al._ [2014] A. S. Landsman, M. Weger, J. Maurer, R. Boge, A. Ludwig, S. Heuser, C. Cirelli, L. Gallmann, and U. Keller, “Ultrafast resolution of tunneling delay time,” Optica 1, 343–349 (2014).
* Camus _et al._ [2017] N. Camus, E. Yakaboylu, L. Fechner, M. Klaiber, M. Laux, Y. Mi, K. Z. Hatsagortsyan, T. Pfeifer, C. H. Keitel, and R. Moshammer, “Experimental evidence for quantum tunneling time,” Physical review letters 119, 023201 (2017).
* Ramos _et al._ [2020] R. Ramos, D. Spierings, I. Racicot, and A. M. Steinberg, “Measurement of the time spent by a tunnelling atom within the barrier region,” Nature 583, 529–532 (2020).
* Spierings and Steinberg [2021] D. C. Spierings and A. M. Steinberg, “Observation of the decrease of larmor tunneling times with lower incident energy,” Phys. Rev. Lett. 127, 133001 (2021).
* De Leo and Rotelli [2007] S. De Leo and P. P. Rotelli, “Dirac equation studies in the tunneling energy zone,” The European Physical Journal C 51, 241–247 (2007).
* De Leo [2013] S. De Leo, “A study of transit times in dirac tunneling,” Journal of Physics A: Mathematical and Theoretical 46, 155306 (2013).
* Petrillo and Janner [2003] V. Petrillo and D. Janner, “Relativistic analysis of a wave packet interacting with a quantum-mechanical barrier,” Phys. Rev. A 67, 012110 (2003).
* Krekora, Su, and Grobe [2001] P. Krekora, Q. Su, and R. Grobe, “Effects of relativity on the time-resolved tunneling of electron wave packets,” Physical Review A 63, 032107 (2001).
* Flores and Galapon [2023] P. C. Flores and E. A. Galapon, “Instantaneous tunneling of relativistic massive spin-0 particles,” Europhysics Letters 141, 10001 (2023).
* Galapon and Magadan [2018] E. A. Galapon and J. J. P. Magadan, “Quantizations of the classical time of arrival and their dynamics,” Annals of Physics 397, 278–302 (2018).
* Bohm [1974] A. Bohm, “Rigged hilbert space and quantum mechanics,” Tech. Rep. (1974).
* De la Madrid, Bohm, and Gadella [2002] R. De la Madrid, A. Bohm, and M. Gadella, “Rigged hilbert space treatment of continuous spectrum,” Fortschritte der Physik: Progress of Physics 50, 185–216 (2002).
* De la Madrid [2002] R. De la Madrid, “Rigged hilbert space approach to the schrödinger equation,” Journal of Physics A: Mathematical and General 35, 319 (2002).
* De la Madrid [2003] R. De la Madrid, “The rigged hilbert space of the free hamiltonian,” International Journal of Theoretical Physics 42, 2441–2460 (2003).
* León _et al._ [2000] J. León, J. Julve, P. Pitanga, and F. De Urríes, “Time of arrival in the presence of interactions,” Physical Review A 61, 062101 (2000).
* Peres [2006] A. Peres, _Quantum theory: concepts and methods_ , Vol. 57 (Springer Science & Business Media, 2006).
* Gotay, Grabowski, and Grundling [2000] M. Gotay, J. Grabowski, and H. Grundling, “An obstruction to quantizing compact symplectic manifolds,” Proceedings of the American Mathematical Society 128, 237–243 (2000).
* Groenewold [1946] H. J. Groenewold, _On the principles of elementary quantum mechanics_ (Springer, 1946).
* Galapon [2001] E. A. Galapon, “Quantum-classical correspondence of dynamical observables, quantization, and the time of arrival correspondence problem,” Optics and Spectroscopy 91, 399–405 (2001).
* Galapon [2004] E. A. Galapon, “Shouldn’t there be an antithesis to quantization?” Journal of mathematical physics 45, 3180–3215 (2004).
* Pablico and Galapon [2023] D. A. L. Pablico and E. A. Galapon, “Quantum corrections to the weyl quantization of the classical time of arrival,” The European Physical Journal Plus 138, 1–22 (2023).
* Bender and Dunne [1989a] C. M. Bender and G. V. Dunne, “Exact solutions to operator differential equations,” Physical Review D 40, 2739 (1989a).
* Bender and Dunne [1989b] C. M. Bender and G. V. Dunne, “Integration of operator differential equations,” Physical Review D 40, 3504 (1989b).
* Domingo and Galapon [2015] H. B. Domingo and E. A. Galapon, “Generalized weyl transform for operator ordering: polynomial functions in phase space,” Journal of Mathematical Physics 56, 022104 (2015).
* De Gosson [2016] M. A. De Gosson, _Born-Jordan quantization: theory and applications_ , Vol. 182 (Springer, 2016).
* Cohen [2012] L. Cohen, _The Weyl operator and its generalization_ (Springer Science & Business Media, 2012).
* de Gosson [2016a] M. A. de Gosson, “From Weyl to Born–Jordan quantization: The Schrödinger representation revisited,” Physics Reports 623, 1–58 (2016a).
* de Gosson [2016b] M. A. de Gosson, “Born–Jordan Quantization,” in _Born-Jordan Quantization_ , Fundamental Theories of Physics, Vol. 182 (Springer International Publishing, Cham, 2016) pp. 113–127.
* de Gosson and Luef [2011] M. de Gosson and F. Luef, “Preferred quantization rules: Born–Jordan versus Weyl. The pseudo-differential point of view,” Journal of Pseudo-Differential Operators and Applications 2, 115–139 (2011).
* De Gosson [2006] M. A. De Gosson, _Symplectic geometry and quantum mechanics_ , Vol. 166 (Springer Science & Business Media, 2006).
* De Gosson [2013] M. A. De Gosson, “Born–jordan quantization and the uncertainty principle,” Journal of Physics A: Mathematical and Theoretical 46, 445301 (2013).
* Cohen [1966] L. Cohen, “Generalized phase-space distribution functions,” Journal of Mathematical Physics 7, 781–786 (1966).
* Shewell [1959] J. R. Shewell, “On the formation of quantum-mechanical operators,” American Journal of Physics 27, 16–21 (1959).
* Greiner _et al._ [2000] W. Greiner _et al._ , _Relativistic quantum mechanics_ , Vol. 2 (Springer, 2000).
* León [1997] J. León, “Time-of-arrival formalism for the relativistic particle,” Journal of Physics A: Mathematical and General 30, 4791 (1997).
* Newton and Wigner [1949] T. D. Newton and E. P. Wigner, “Localized states for elementary systems,” Reviews of Modern Physics 21, 400 (1949).
* Razavy [1969] M. Razavy, “Quantum-mechanical conjugate of the hamiltonian operator,” Il Nuovo Cimento B (1965-1970) 63, 271–308 (1969).
* Flores and Galapon [2022a] P. C. Flores and E. A. Galapon, “Relativistic free-motion time-of-arrival operator for massive spin-0 particles with positive energy,” Physical Review A 105, 062208 (2022a).
* Galapon [2016] E. A. Galapon, “The Cauchy principal value and the Hadamard finite part integral as values of absolutely convergent integrals,” Journal of Mathematical Physics 57, 033502 (2016).
* Sombillo and Galapon [2014] D. L. Sombillo and E. A. Galapon, “Quantum traversal time through a double barrier,” Physical Review A 90, 032115 (2014).
* Sombillo and Galapon [2018] D. L. B. Sombillo and E. A. Galapon, “Barrier-traversal-time operator and the time-energy uncertainty relation,” Physical Review A 97, 062127 (2018).
* Gel’fand and Shilov [1964] I. Gel’fand and G. Shilov, _Generalized Functions. Vol. I: Properties and Operations_ (Academic Press, New York, 1964) p. 360.
* Flores and Galapon [2022b] P. C. Flores and E. A. Galapon, “Relativistic free-motion time-of-arrival operator for massive spin-0 particles with positive energy,” Phys. Rev. A 105, 062208 (2022b).
* Pablico and Galapon [2020] D. A. L. Pablico and E. A. Galapon, “Quantum traversal time across a potential well,” Physical Review A 101, 022103 (2020).
* Galapon [2017] E. A. Galapon, “The problem of missing terms in term by term integration involving divergent integrals,” Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 473, 20160567 (2017).
* Tica and Galapon [2019] C. D. Tica and E. A. Galapon, “Finite-part integration of the generalized stieltjes transform and its dominant asymptotic behavior for small values of the parameter. ii. non-integer orders,” Journal of Mathematical Physics 60, 013502 (2019).
* Bunao and Galapon [2015a] J. Bunao and E. A. Galapon, “A one-particle time of arrival operator for a free relativistic spin-0 charged particle in (1+ 1) dimensions,” Annals of Physics 353, 83–106 (2015a).
* Bunao and Galapon [2015b] J. Bunao and E. A. Galapon, “A relativistic one-particle time of arrival operator for a free spin-1/2 particle in (1+ 1) dimensions,” Annals of Physics 356, 369–382 (2015b).
* Farrales, Domingo, and Galapon [2022] R. A. E. Farrales, H. B. Domingo, and E. A. Galapon, “Conjugates to one particle hamiltonians in 1-dimension in differential form,” The European Physical Journal Plus 137, 1–24 (2022).
|
# Bayesian Heuristics for Robust Spatial Perception
Aamir Hussain Chughtai, Muhammad Tahir, and Momin Uppal
The authors are with the Department of Electrical Engineering, Lahore
University of Management Sciences, DHA Lahore Cantt., 54792, Lahore, Pakistan.
###### Abstract
Spatial perception is a key task in several machine intelligence applications
such as robotics and computer vision. In general, it involves the nonlinear
estimation of hidden variables that represent the system’s state. However, in
the presence of measurement outliers, the standard nonlinear least-squares
formulation results in poor estimates. Several methods have been considered in
the literature to improve the reliability of the estimation process. Most
methods are based on heuristics, since guaranteed globally robust estimation
is generally not practical due to its high computational cost. Recently,
general-purpose robust estimation heuristics have been proposed that leverage
existing non-minimal solvers available for the outlier-free formulations,
without the need for an initial guess. In this work, we propose three Bayesian
heuristics
that have similar structures. We evaluate these heuristics in practical
scenarios to demonstrate their merits in different applications including 3D
point cloud registration, mesh registration and pose graph optimization. The
general computational advantages our proposals offer make them attractive
candidates for spatial perception tasks.
###### Index Terms:
Spatial Perception, Measurement Outliers, Nonlinear Estimation, Variational
Bayes, Expectation-Maximization, Statistical Inference, Parameter and State
Estimation.
## I Introduction
Several machine intelligence tasks in robotics depend on reliable spatial
perception, which involves estimating an unknown latent variable representing
the state of the system. Examples of spatial perception include object
detection and localization, motion estimation, and simultaneous localization
and mapping (SLAM) [1, 2, 3, 4]. The information for inference is available in
the form of noisy observations, which can generally be represented as
transformations of the hidden variable given as
$\mathbf{y}_{i}=\mathbf{h}_{i}(\mathbf{x})+\mathbf{\epsilon}_{i}$ (1)
where the $i$th measurement $\mathbf{y}_{i}$ ($i=1,\ldots,m$), of a batch of
data $\mathbf{y}$, is expressed as a known nonlinear function
$\mathbf{h}_{i}(.)$ of the unknown variable of interest $\mathbf{x}$ corrupted
by random noise $\mathbf{\epsilon}_{i}$. Under the assumption that
$\mathbf{\epsilon}_{i}$, described by zero-mean Gaussian noise statistics
with the precision matrix $\bm{\Omega}_{i}$ (the inverse of the covariance
matrix), is uncorrelated across the measurement channels, the maximum a
posteriori (MAP) estimate has the following equivalent least-squares
formulation [3]
$\underset{\mathbf{x}\in\mathcal{X}}{\operatorname{argmin}}\sum_{i=0}^{m}\left\|\mathbf{y}_{i}-\mathbf{h}_{i}(\mathbf{x})\right\|_{\bm{\Omega}_{i}}^{2}=\underset{\mathbf{x}\in\mathcal{X}}{\operatorname{argmin}}\sum_{i=0}^{m}{\big{(}r_{i}\left(\mathbf{y}_{i},\mathbf{x}\right)\big{)}}^{2}$
(2)
where $\mathcal{X}$ denotes the domain of $\mathbf{x}$, the notation
$\|\mathbf{e}\|_{\bm{\Omega}}^{2}={\mathbf{e}}^{\top}\bm{\Omega}\mathbf{e}$
and $r_{i}(\mathbf{y}_{i},\mathbf{x})$ is the residual error for the $i$th
measurement. Note that $r_{0}(\mathbf{y}_{0},\mathbf{x})$ incorporates the
regularizing term arising from a Gaussian prior on $\mathbf{x}$. It is well
known that the cost function in (2) leads to brittle estimates in the face of
measurement outliers owing to unwanted over-fitting to the corrupted data [5].
In practice, the observations can easily be plagued with outliers due to
sensor failures, environmental factors, or erroneous data association by the
preprocessing front-end algorithms [6, 7]. This limits the reliability of
perception-based tasks, making outlier-robust estimation an active research
area in robotics.
Owing to the underlying functional nonlinearities and nonconvexity of the
domain, even solving (2) globally for common spatial perception applications
can be challenging. However, several estimators have been devised in this
regard for various applications including point cloud registration, mesh
registration, and pose graph optimization [8, 9, 10]. These are commonly
termed non-minimal solvers, as they utilize all the measurements for
estimation. Minimal solvers, in contrast, use the smallest number of
observations required for estimation [11].
The problem of estimation in the presence of outliers becomes more complicated
since the standard cost function in (2) is inadequate for this purpose.
Therefore, several approaches have been devised in this regard. Most of the
methods are heuristics that offer efficient solutions mostly without
guarantees [12, 13, 14, 15, 11]. Other specialized approaches also exist that
provide solutions with formal certificates. However, their practicality is
limited by poor scalability to large-scale problems, as the underlying
semidefinite programs (SDPs) can have a very high computational budget
[16, 17]. Naturally, efficient heuristics become the
default choice to enable such applications. In the literature, hybrid
approaches have also been advocated where the solutions obtained from
heuristics are subsequently evaluated for optimality [18]. Since these hybrid
approaches have been found to work effectively in practice, efficient
heuristics are highly desired.
Different general purpose heuristics for robust spatial perception have been
designed based on consensus maximization aiming to maximize observations
within a predefined inlier threshold during estimation [6]. The famous random
sample consensus (RANSAC) approach [12] is extensively employed relying on
minimal solvers for operation. Realizing the fragility of RANSAC under a high
outlier regime and its scalability issues, Adaptive Trimming (APAPT) has been
proposed to address these limitations [13]. ADAPT adopts non-minimal solvers
in its design.
Another popular approach is M-estimation [19] which relies on robust cost
functions applied to the residuals for resilience against data corruption.
Since these formulations are generally inefficient to solve globally [6], the
recently introduced graduated non-convexity (GNC) approach proposes an
efficient heuristic [11]. It replaces the underlying non-convex functions with
surrogate functions for two common cost functions, namely the Geman-McClure
(GM) and the truncated least squares (TLS) costs. Using the Black-Rangarajan
duality, it casts an equivalent formulation which is then solved by iterating
variable and weight update steps alternately, resulting in the GNC-GM and
GNC-TLS methods. These are general-purpose robust estimators that employ
non-minimal
solvers during the variable update step and require no initial guess for
operation.
The relevant literature also indicates the use of Bayesian methods for
outlier-robust estimation. In [20], a method to tune M-estimators is
suggested. It aims to estimate the tuning parameters of the M-estimators
within an Expectation Maximization (EM) framework. However, the overall scheme
is complicated due to its reliance on adaptive importance sampling. Another
Bayesian EM-based method is reported in [15] where the parameters of a general
robust function are estimated along with the primary variable of interest. The
method invokes iteratively reweighted least squares (IRLS) within its EM
framework which adds an additional layer of approximation during inference
since IRLS performance can be sensitive to initialization. Other Bayesian
methods, reported in the filtering context, do not involve M-estimation for
inference. For example, methods in [21, 22] propose handling outliers by
modifying the standard Gaussian likelihood distributions and subsequently
perform inference. However, these algorithms do not treat outliers
independently for each measurement channel due to under-parameterization [23].
Addressing this limitation, two similar Bayesian methods have been proposed:
the recursive outlier-robust (ROR) [24] and the selective observations
rejecting (SOR) [23] methods. We build on these two methods for the general
nonlinear robust estimation problem, with the following contributions.
* •
We study the limitations of the standard ROR and SOR methods, showing why
these approaches falter in spatial perception applications. The analysis
suggests the need to adapt the hyper-parameters governing the outlier
characteristics during inference.
* •
We propose three methodologies to overcome the shortcomings of the standard
approaches. Similar to the GNC methods, we are interested in invoking the
existing non-minimal estimators. To this end, we consider point estimates for
$\mathbf{x}$ and use the EM framework. Since EM can be viewed as a special
case of variational Bayes (VB), the adaptation is possible. The proposed
approaches are similar to the iterative GNC methods with alternating variable
and weight update steps.
* •
We also evaluate the proposals in several experimental scenarios and benchmark
them against the GNC methods which are the state-of-the-art general purpose
heuristics for robust spatial perception.
The remainder of the paper is structured as follows. In Section II, we motivate
the selection of the ROR and SOR methods which we build upon. Moreover, we
discuss the inferential tools employed in our proposals. In Section III, we
discuss the limitations of the standard approaches and present methodologies
to overcome their shortcomings. In Section IV, we discuss the experimental
results. Lastly, Section V provides concluding remarks along with some future
directions.
## II Relevant Bayesian methods and tools
In this section, we first briefly discuss the motivation for choosing the two
Bayesian methods: ROR and SOR. Moreover, we provide a short primer on the
Bayesian tools that we leverage in our proposals in the upcoming section.
### II-A Choice of the Bayesian methods for robust estimation
Recently, ROR and SOR methods have been successfully applied for devising
robust nonlinear filtering techniques with satisfactory performance results.
In these methods, estimation of $\mathbf{x}$ in (1) relies on modifying the
measurement noise statistics. The choice is motivated by the inability of the
nominal Gaussian noise to describe the data in the face of outliers. Subsequently,
the noise parameters are jointly estimated with $\mathbf{x}$. Bayesian theory
offers attractive inferential tools for estimating the state and parameters
jointly enabling iterative solutions [25, 26]. We adapt these methods to the
general nonlinear estimation context of (1) and (2) where non-minimal solvers
for estimation are available for the outlier-free cases. The motivation for
choosing these particular methods is twofold. First, the formulations of these
filters lend themselves conveniently to modification for the nonlinear problem
at hand. Second, owing to their modeling simplicity, the choice of the
hyperparameters for the noise statistics is intuitive to adapt to our case.
### II-B EM as a special case of VB
For solving the problem in (2), we aim to use non-minimal solvers, which have
been developed and tested for different applications. To that end, we need to
cast the ROR and SOR algorithms in a way that invokes existing nonlinear
least-squares solvers during inference. These Bayesian methods are devised
using VB, which leads to distributions for the state and parameters. However,
the available solvers generally return point estimates of the state.
Therefore, to enable the adoption of these Bayesian approaches for robust
spatial perception applications, we use the EM method which, as shown in the
Bayesian literature, can be viewed as a special case of the VB algorithm [27].
We first present the VB method and then interpret the EM method as its special
case.
#### II-B1 VB
Suppose that we are interested in estimating multivariate parameter
$\bm{\theta}$ from data $\mathbf{y}$. For tractability, we can resort to the
VB algorithm considering the mean-field approximation where the actual
posterior is approximated with a factored distribution as [28]
$p(\bm{\theta}|\mathbf{y})\approx\prod_{j=1}^{J}q(\bm{\theta}_{j})$ (3)
where $J$ partitions of $\bm{\theta}$ are assumed with the $j$th partition
given as $\bm{\theta}_{j}$. The VB marginals can be obtained by minimizing the
Kullback-Leibler (KL) divergence between the product approximation and the
true posterior resulting in
$\displaystyle q(\bm{\theta}_{j})$ $\displaystyle\propto
e^{\big{(}\big{\langle}\mathrm{ln}(p(\bm{\theta}|\mathbf{y}))\big{\rangle}_{q(\bm{\theta}_{-j})}\big{)}}\
\forall\ j$ (4)
where ${q(\bm{\theta}_{-j})}=\prod_{k\neq j}q(\bm{\theta}_{k})$ and
$\langle.\rangle_{q(\mathbf{{x}})}$ denotes the expectation of the argument
with respect to the distribution $q(\mathbf{{x}})$. The marginals are then
obtained by iteratively invoking (4) till convergence.
#### II-B2 EM
From the Bayesian literature, we know that the EM method can be viewed as a
special case of the VB algorithm considering point densities for some of the
factored distributions in (3). In particular, the factored distributions which
are assumed as point masses in (3) can be written as delta functions
$\displaystyle
q(\bm{\theta}_{n})=\delta(\bm{\theta}_{n}-\hat{\bm{\theta}}_{n})$ (5)
with $n$ denoting the indices where such an assumption is made. As a result,
we can update the parameter of $q(\bm{\theta}_{n})$ as
$\hat{\bm{\theta}}_{n}=\underset{\bm{\theta}_{n}}{\operatorname{argmax}}\big{\langle}\mathrm{ln}(p(\bm{\theta}|\mathbf{y}))\big{\rangle}_{q(\bm{\theta}_{-n})}\ \forall\
n$ (6)
The expression (6) is formally known as the M-Step of the EM method. The
remaining factored distributions, not considered as point masses, can be
determined using (4), where the expectation with respect to
$q(\bm{\theta}_{n})$ simply amounts to evaluating
$\mathrm{ln}(p(\bm{\theta}|\mathbf{y}))$ at $\hat{\bm{\theta}}_{n}$. This is
formally called the E-Step of the EM method.
Treating EM as a particular case of VB offers another advantage in addition
to leveraging the existing non-minimal point estimators for the system state:
it gives us the liberty to assign point masses to those parameters for which
evaluating the expectation would otherwise be unwieldy.
## III Proposed Algorithms
Having chosen the two particular methods for application in robust perception
tasks and having interpreted EM as a special case of VB, we are in a position
to present our proposals. In this section, we first present the standard ROR
method and discuss its limitations. Based on the analysis, we propose a
methodology to overcome the drawbacks. Then we shift our attention to the SOR
method. We present the standard SOR technique and explain its drawbacks. Based
on the insights drawn, we propose two frameworks to deal with the
shortcomings.
### III-A ROR Methods
#### III-A1 Standard ROR
We use the version of the ROR method as originally reported in Section 2.5 of
[24] where conditionally independent measurements are considered. The ROR
method, as originally reported, assumes $\mathbf{y}_{i}$ (measurement from
each channel) to be scalar but it can be a vector in general. Accordingly, the
likelihood to robustify (1) is the multivariate Student-t density. We denote
the distribution as
${\mathrm{St(\mathbf{z}_{s}|\bm{\phi}_{s},\mathbf{\Sigma}_{s},\nu)}}$, where
the random vector $\mathbf{z}_{s}$ obeys the Student-t density and the
parameters include $\bm{\phi}_{s}$ (mean), $\mathbf{\Sigma}_{s}$ (scale
matrix) and $\nu$ (degrees of freedom), which controls the kurtosis or
heavy-tailedness. Resultingly, we can write the likelihood density as [24]
$\displaystyle p(\mathbf{y}_{i}|\mathbf{x})$
$\displaystyle={\mathrm{St}}\big{(}\mathbf{y}_{i}|\mathbf{h}_{i}(\mathbf{x}),\bm{\Omega}_{i}^{-1},\nu_{i}\big{)}=\int
p(\mathbf{y}_{i}|\mathbf{x},\lambda_{i})p(\lambda_{i})d\lambda_{i}$ (7)
with the conditional likelihood following the multivariate Gaussian density
given as
$p(\mathbf{y}_{i}|\mathbf{x},\lambda_{i})=\mathcal{N}(\mathbf{y}_{i}|\mathbf{h}_{i}(\mathbf{x}),(\lambda_{i}\bm{\Omega}_{i})^{-1})$
(8)
where $\mathcal{N}(\mathbf{z}_{n}|\bm{\phi}_{n},\mathbf{\Sigma}_{n})$
symbolizes that the random vector $\mathbf{z}_{n}$ follows the Gaussian
distribution parameterized by $\bm{\phi}_{n}$ (mean) and $\mathbf{\Sigma}_{n}$
(covariance matrix). $\lambda_{i}$ in (7) obeys the univariate Gamma
distribution given as [24]
$p(\lambda_{i})=\mathcal{G}(\lambda_{i}|\frac{\nu_{i}}{2},\frac{\nu_{i}}{2})$
(9)
where $\mathcal{G}({z}_{g}|a_{g},b_{g})$ denotes that the random variable
$z_{g}$ follows the Gamma distribution with the shape parameter $a_{g}$ and
the rate parameter $b_{g}$ [29]. The normalizing constant of the distribution
is denoted as
$f(a,b)=\frac{b^{a}}{\Gamma(a)}$ (10)
where ${\Gamma(a)}$ denotes the Gamma function.
Denoting $\bm{\lambda}$ as the vector with ${\lambda_{i}}$ its $i$th element,
we can write the following using the Bayes theorem
$p(\mathbf{x},\bm{\lambda}|\mathbf{y})\propto{p(\mathbf{y}|\bm{{\lambda}},\mathbf{x})p(\mathbf{x})p(\bm{{\lambda}})}$
(11)
Resultingly, the log-posterior can be written as
$\displaystyle
\ln(p(\mathbf{x},\bm{\lambda}|\mathbf{y}))=\Big{\\{}\sum_{i=1}^{m}\Big{(}-0.5\lambda_{i}{\big{(}r_{i}\left(\mathbf{y}_{i},\mathbf{x}\right)\big{)}}^{2}-0.5\nu_{i}\lambda_{i}$
$\displaystyle+(0.5(\nu_{i}+d)-1)\ln({\lambda}_{i})\Big{)}-0.5{\big{(}r_{0}\left(\mathbf{y}_{0},\mathbf{x}\right)\big{)}}^{2}+constant\Big{\\}}$
(12)
To proceed further, we seek the following VB factorization of the posterior
distribution
$p(\mathbf{x},\bm{{\lambda}}|\mathbf{y})\approx
q(\mathbf{x})q(\bm{{{\lambda}}})$ (13)
Based on (6) and (12), the state variable $\hat{\mathbf{x}}$ is estimated
using the VB/EM theory as [24]
$\hat{\mathbf{x}}=\underset{\mathbf{x}\in\mathcal{X}}{\operatorname{argmin}}\sum_{i=0}^{m}w_{i}{\left(r_{i}(\mathbf{y}_{i},\mathbf{x})\right)}^{2}$
(14)
where $w_{i}=\langle{{\lambda}}_{i}\rangle_{q({{\lambda}}_{i})}\forall\ i>0$.
We have considered a point estimator for $\mathbf{x}$ i.e.
$q(\mathbf{x})=\delta(\mathbf{x}-\hat{\mathbf{x}})$ to utilize existing
solvers for spatial perception tasks mainly available in this form.
Similarly, the weights can be updated as [24]
$w_{i}=\left({1+{\frac{{\hat{r}_{i}^{2}}-d}{\nu+d}}}\right)^{-1}\ \forall\
i>0$ (15)
where $d$ denotes the dimension of $\mathbf{y}_{i}$ and
${\hat{r}_{i}^{2}}={\big{(}r_{i}\left(\mathbf{y}_{i},\hat{\mathbf{x}}\right)\big{)}}^{2}$.
We assume $\nu_{i}=\nu\ \forall\ i$ and use $w_{i}$ to weight the precision
matrix instead of $\bar{\lambda}$ as in the original work, to remain consistent
with comparative works. Since we assume that outliers occur only in the
measurements, the weight for the regularizing term remains fixed at 1, i.e.,
$w_{0}=1$ in (14).
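For intuition, the weight rule (15) can be evaluated directly; the values of $\nu$ and $d$ below are illustrative only:

```python
def ror_weight(r2, nu=4.0, d=1.0):
    """E-Step weight of (15): w = (1 + (r2 - d) / (nu + d))^{-1}.
    The weight equals 1 at r2 = d and decays toward 0 for large residuals.
    (nu and d are illustrative choices, not values from the paper.)"""
    return 1.0 / (1.0 + (r2 - d) / (nu + d))

for r2 in (1.0, 5.0, 25.0, 100.0):
    print(r2, round(ror_weight(r2), 3))  # weights ~ 1.0, ~0.56, ~0.17, ~0.05
```

Note the slow, rational decay: even a residual 25 times larger than the inlier level retains a weight near 0.17, which is one reason a fixed $\nu$ can under-prune outliers.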
### Limitations of standard ROR
Starting with $w_{i}=1\ \forall\ i$, the standard ROR invokes (14) and (15)
iteratively till convergence. The technique has shown good performance in
filtering context for different practical examples such as target tracking
[24] and indoor localization [23]. This can be attributed to the appearance of
well-regularized cost functions (2). However, for advanced problems in robust
spatial perception, the performance of standard ROR degrades at high outlier
ratios, since fixing $\nu$, which governs the heavy-tailedness of the noise
and hence the outlier characteristics, prevents it from capturing those
characteristics. Empirical evidence confirms this observation.
Figure 1: $w_{i}$ vs ${\hat{r}_{i}^{2}}/\mu$ in ROR.
To understand this limitation further, consider how $w_{i}$ changes with the
parameters of the Student-t distribution. The variation of $w_{i}$ against
${\hat{r}_{i}^{2}}/\mu$ is shown in Fig. 1, where $\mu=\nu+d$. A plot of (15)
would only be a shifted version of the plot in Fig. 1. Since
$\hat{r}_{i}^{2}\gg d$ when an outlier appears in the $i$th dimension, we can
simplify (15) as $w_{i}=({1+{\frac{{\hat{r}_{i}^{2}}}{\mu}}})^{-1}$, indicating
the importance of the kurtosis $\nu$ and resultingly $\mu$. It can be observed
that the residuals are gradually downweighted with increasing magnitude during
estimation with $w_{i}=0.5$ for ${\hat{r}_{i}^{2}}=\mu$. Since the residuals
are evaluated considering the state estimate using all the measurements
initially, it is possible that the squared residuals even for the uncorrupted
dimensions become greater than the prefixed $\mu$ downweighting them in the
process. This can lead to performance issues. Therefore, this calls for
adapting $\mu$ by considering the residuals evaluated with clean and corrupted
measurements. Given the limitations, we propose a variant of the standard ROR
method where $\mu$ is adapted during iterations considering the residuals at
each iteration. We call it Extended ROR or simply EROR.
#### III-A2 EROR
In EROR, we adapt $\mu$ during the iterations, based on the relationship
between $w_{i}$ and $\mu$ and on the updated squared residuals. $\mu$ is
chosen such that the weights assigned to the residuals span the largest
possible portion of the zero-to-one range. In other words, the largest
residuals should be pruned with the smallest weights in the weighted
least-squares cost function, and vice versa. In particular,
${{\hat{r}_{i}^{2}}}={\hat{r}_{\max}^{2}}$ with $w_{i}\rightarrow 0$ requires
$\mu\rightarrow 0$ (since $\mu=\frac{{\hat{r}_{i}^{2}}}{1/{w_{i}}-1}$);
practically, $\mu\ll{\hat{r}_{\max}^{2}}$. Similarly, the other extreme
${{\hat{r}_{i}^{2}}}={\hat{r}_{\min}^{2}}$ with $w_{i}\rightarrow 1$ requires
$\mu\rightarrow\infty$; practically, $\mu\gg{\hat{r}_{\min}^{2}}$. To cater
for both extremes we propose
$\mu=\mathrm{mean}({\hat{r}_{\max}^{2}},{\hat{r}_{\min}^{2}})$. Also, $\mu$ is
lower bounded by $\chi$ to ensure residuals within a minimum threshold are not
neglected during estimation. The notion of $\chi$ is similar to that of
$\bar{c}^{2}$ in [11], which is set to the maximum error expected for the
inliers. Note that the adaptation of $\mu$ is an intuitive, additional step
grafted onto the standard ROR method devised using VB. EROR is presented as
Algorithm 1.
Initialize $w_{i}=1\ \forall\ i$
while _the convergence criterion has not been met_ do
Variable update:
$\hat{\mathbf{x}}=\underset{\mathbf{x}\in\mathcal{X}}{\operatorname{argmin}}\sum_{i}w_{i}{(r_{i}\left(\mathbf{y}_{i},\mathbf{x}\right))}^{2}$
Residual update:
${\hat{r}_{i}^{2}}={\big{(}r_{i}\left(\mathbf{y}_{i},\hat{\mathbf{x}}\right)\big{)}}^{2}$
$\forall\ i$
Parametric update:
${\hat{r}_{\max}^{2}}=\max_{i}({\hat{r}_{i}^{2}});{\hat{r}_{\min}^{2}}=\min_{i}({\hat{r}_{i}^{2}})\
\mathrm{s.t.}\ i>0$
$\mu=\max(\mathrm{mean}({\hat{r}_{\max}^{2}},{\hat{r}_{\min}^{2}}),\chi)$
Weight update: $w_{i}=\frac{1}{1+({{\hat{r}_{i}^{2}}}/{\mu})}$ $\forall\ i>0$
end while
Algorithm 1 EROR
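Algorithm 1 can be sketched on a hypothetical scalar problem; a weighted mean stands in for the existing non-minimal solver, and the regularizing $i=0$ term is omitted for simplicity (all data values are illustrative):

```python
import numpy as np

def eror(y, residual, solver, chi=0.05, n_iter=30):
    """Illustrative sketch of Algorithm 1 (EROR); `solver(y, w)` plays the
    role of the existing non-minimal weighted solver."""
    w = np.ones(len(y))
    for _ in range(n_iter):
        x = solver(y, w)                              # variable update
        r2 = residual(y, x) ** 2                      # residual update
        mu = max(0.5 * (r2.max() + r2.min()), chi)    # parametric update
        w = 1.0 / (1.0 + r2 / mu)                     # weight update
    return x, w

# Toy data: truth x = 2 with two gross outliers (purely illustrative).
rng = np.random.default_rng(1)
y = 2.0 + 0.1 * rng.standard_normal(30)
y[:2] = 50.0
wmean = lambda y, w: np.sum(w * y) / np.sum(w)
x_hat, w_hat = eror(y, residual=lambda y, x: y - x, solver=wmean)
```

On this toy instance the outlier channels end up with visibly smaller weights than the inliers and the estimate moves toward the inlier consensus, although the rational weight form still leaves the largest residuals a non-negligible weight.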
### III-B SOR Methods
#### III-B1 Standard SOR
The SOR method, as originally reported [23], assumes $\mathbf{y}_{i}$ to be
scalar but it can be a vector in general. In the original work, an indicator
vector $\bm{\mathcal{I}}\in\mathbb{R}^{m}$ with Bernoulli elements is
introduced to describe outliers in the measurements. In particular,
${{\mathcal{I}}}_{i}=\epsilon$ indicates the occurrence of an outlier in the
ith dimension and ${{\mathcal{I}}}_{i}=1$ is reserved for the no outlier case.
Accordingly, the conditional likelihood to robustify (1) is a multivariate
Gaussian density function [23]
$\displaystyle
p(\mathbf{y}_{i}|\mathbf{x},{\mathcal{I}}_{i})={\mathcal{N}}\Big{(}\mathbf{y}_{i}|\mathbf{h}_{i}(\mathbf{x}),({{\mathcal{I}}}_{i}\bm{\Omega}_{i})^{-1}\Big{)}$
$\displaystyle=\frac{1}{\sqrt{{(2\pi)^{d}{|\bm{\Omega}_{i}^{-1}|}}}}e^{\left({-}0.5{{\mathcal{I}}}_{i}{\left(r_{i}\left(\mathbf{y}_{i},\mathbf{x}\right)\right)}^{2}\right)}{{\mathcal{I}_{i}}}^{0.5}$
(16)
${\mathcal{I}_{i}}\ \forall\ i>0$ is assumed to have the following prior
distribution
$p({{\mathcal{I}}}_{i})=(1-{\theta_{i}})\delta({{{\mathcal{I}}}_{i}}-\epsilon)+{\theta_{i}}\delta({{{\mathcal{I}}}_{i}}-1)$
(17)
where $\theta_{i}$ denotes the prior probability of having no outlier in the
$i$th measurement channel. $\epsilon$ serves to describe the anomalous data,
in effect controlling the covariance of the outliers. Using the Bayes theorem,
we can write
$p(\mathbf{x},\bm{\mathcal{I}}|\mathbf{y})\propto{p(\mathbf{y}|\bm{\mathcal{I}},\mathbf{x})p(\mathbf{x})p(\bm{\mathcal{I}})}$
(18)
where $\bm{\mathcal{I}}$ denotes the vector with ${\mathcal{I}_{i}}$ its $i$th
element.
As a result, the log-posterior is given as
$\displaystyle\ln(p(\mathbf{x},\bm{\mathcal{I}}|\mathbf{y}))=\Big{\\{}\sum_{i=1}^{m}\Big{(}-0.5{\mathcal{I}}_{i}{\big{(}r_{i}\left(\mathbf{y}_{i},\mathbf{x}\right)\big{)}}^{2}+0.5\ln({\mathcal{I}}_{i})$
$\displaystyle+\ln\left((1-{\theta_{i}})\delta({{{\mathcal{I}}}_{i}}-\epsilon)+{\theta_{i}}\delta({{{\mathcal{I}}}_{i}}-1)\right)\Big{)}-0.5{\big{(}r_{0}\left(\mathbf{y}_{0},\mathbf{x}\right)\big{)}}^{2}$
$\displaystyle+constant\Big{\\}}$ (19)
For tractable inference we resort to the following VB factorization of the
posterior distribution
$p(\mathbf{x},\bm{\mathcal{I}}|\mathbf{y})\approx
q(\mathbf{x})q(\bm{{\mathcal{I}}})$ (20)
Based on (6) and (19), the state estimate $\hat{\mathbf{x}}$ is updated using
the VB/EM theory as [23]
$\hat{\mathbf{x}}=\underset{\mathbf{x}\in\mathcal{X}}{\operatorname{argmin}}\sum_{i=0}^{m}w_{i}{(r_{i}\left(\mathbf{y}_{i},\mathbf{x}\right))}^{2}$
(21)
with $w_{0}=1$ and
$\displaystyle w_{i}$
$\displaystyle=\langle{\mathcal{I}}_{i}\rangle_{q({\mathcal{I}}_{i})}\
\forall\ i>0$ (22)
$\displaystyle=\Omega_{i}+(1-\Omega_{i})\epsilon\approx\Omega_{i}$ (23)
where $\Omega_{i}$ parameterizes the VB posterior marginal
${q({\mathcal{I}}_{i})}$ corresponding to $\theta_{i}$ in
$p({{\mathcal{I}}}_{i})$. $\Omega_{i}$ is updated using (4) and (19) as [23]
$\displaystyle\Omega_{i}$
$\displaystyle=\frac{1}{1+{\sqrt{\epsilon}}(\frac{1}{\theta_{i}}-1){e^{\left(0.5{\hat{r}_{i}^{2}}(1-\epsilon)\right)}}}$
(24)
Since the hyperparameter $\epsilon$ is assumed to be a small positive number,
we write it as $\epsilon=\exp({{-{\rho}}^{2}})$. Assuming a neutral prior for
the occurrence of an outlier in the $i$th dimension, i.e., ${\theta_{i}}=0.5$
(as reported originally [23]), and noting that
$\epsilon=\exp({{-{\rho}}^{2}})\approx 0$, we can write
$w_{i}\approx\Omega_{i}\approx\frac{1}{1+{e^{\left(0.5({\hat{r}_{i}^{2}}-\rho^{2})\right)}}}\forall\
i>0$ (25)
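The approximate weight (25) is a logistic gate on the squared residual; the value of $\rho^{2}$ below is illustrative only:

```python
import numpy as np

def sor_weight(r2, rho2=4.0):
    """Approximate SOR weight (25): w = 0.5 at r2 = rho2, with a much
    sharper (exponential) cutoff than the rational ROR weight (15).
    (rho2 here is an illustrative choice, not a value from the paper.)"""
    return 1.0 / (1.0 + np.exp(0.5 * (r2 - rho2)))

print(sor_weight(np.array([0.0, 4.0, 8.0, 25.0])))  # ~ [0.88, 0.50, 0.12, 3e-5]
```

The exponential gating means residuals only moderately above $\rho^{2}$ are already suppressed almost completely, so the choice of $\rho^{2}$ matters even more here than $\mu$ does for ROR.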
### Limitations of standard SOR
Starting with $w_{i}=1\ \forall\ i$, the standard SOR invokes (21) and (25)
iteratively till convergence. The technique has shown good performance in the
filtering context [23] but, like the standard ROR method, has limited ability
in robust spatial perception tasks. This drawback compromises the performance
especially at high outlier ratios, since the standard SOR fails to capture the
outlier characteristics by fixing $\rho^{2}$ (or $\epsilon$), which governs
the covariance of the outliers. Experimental evidence verifies this
limitation.
Figure 2: $w_{i}$ vs ${\hat{r}_{i}^{2}}-\rho^{2}$ in SOR.
To appreciate this limitation further, consider how $w_{i}$ changes with
${\hat{r}_{i}^{2}}-\rho^{2}$, as shown in Fig. 2. It can be observed that the
residuals are gradually downweighted with increasing magnitude during
estimation, with $w_{i}=0.5$ for ${\hat{r}_{i}^{2}}=\rho^{2}$. Since the
residuals are evaluated considering the state estimate using all the
measurements initially, it is possible that the squared residuals even for the
uncorrupted dimensions become greater than the prefixed $\rho^{2}$
downweighting them during the inference process. This can lead to performance
issues. Therefore, this calls for adapting $\rho^{2}$ by considering the
residuals evaluated with clean and corrupted measurements.
Given the limitations, we present two methods based on the standard SOR
technique. Firstly, we propose a modification in the SOR method by explicitly
adapting $\rho^{2}$ during iterations considering the residuals at each
iteration. We call it Extended SOR or simply ESOR. Secondly, by modifying the
basic SOR model with a notion of adaptive unknown covariance of outliers, we
propose Adaptive SOR or simply ASOR where the parameters controlling the
covariance of outliers are learnt within the inferential procedure.
#### III-B2 ESOR
In ESOR, we modify $\rho^{2}$ during iterations taking into account the
updated squared residuals. In particular, we propose to select
$\rho^{2}={\sum_{i=0}^{m}}w_{i}{\hat{r}_{i}^{2}}/{\sum_{i=0}^{m}}w_{i}$. In
other words, $\rho^{2}$ is selected as the effective centroid of the squared
residuals under the assigned weights. With such an intuitive choice, the
probability of declaring an outlier in the $i$th dimension is 0.5 when
${\hat{r}_{i}^{2}}$ equals the effective mean of the squared residuals.
Residuals greater than $\rho$ become candidates for downweighting, and vice
versa, during an iteration. Lastly, $\rho^{2}$ is lower
bounded by $\gamma$ to ensure residuals within a minimum threshold are not
neglected during estimation. ESOR is presented as Algorithm 2.
Initialize $w_{i}=1\ \forall\ i$
while _the convergence criterion has not been met_ do
Variable update:
$\hat{\mathbf{x}}=\underset{\mathbf{x}\in\mathcal{X}}{\operatorname{argmin}}\sum_{i}w_{i}{(r_{i}\left(\mathbf{y}_{i},\mathbf{x}\right))}^{2}$
Residual update:
${\hat{r}_{i}^{2}}={\big{(}r_{i}\left(\mathbf{y}_{i},\hat{\mathbf{x}}\right)\big{)}}^{2}$
$\forall\ i$
Parametric update:
$\rho^{2}=\max(\frac{{\sum_{i}}w_{i}{\hat{r}_{i}^{2}}}{{\sum_{i}}w_{i}},\gamma)$
Weight update: $w_{i}=\frac{1}{1+{e^{(0.5({\hat{r}_{i}^{2}}-\rho^{2}))}}}$
$\forall\ i>0$
end while
Algorithm 2 ESOR
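A sketch of Algorithm 2 on the same kind of hypothetical scalar problem used above; the weighted mean again stands in for the non-minimal solver, and the exponent is clipped for numerical safety (an implementation detail of this sketch, not part of the algorithm):

```python
import numpy as np

def esor(y, residual, solver, gamma=0.05, n_iter=30):
    """Illustrative sketch of Algorithm 2 (ESOR); `solver(y, w)` plays the
    role of the existing non-minimal weighted solver (i = 0 term omitted)."""
    w = np.ones(len(y))
    for _ in range(n_iter):
        x = solver(y, w)                               # variable update
        r2 = residual(y, x) ** 2                       # residual update
        rho2 = max(np.sum(w * r2) / np.sum(w), gamma)  # parametric update
        z = np.clip(0.5 * (r2 - rho2), -50.0, 50.0)    # avoid exp overflow
        w = 1.0 / (1.0 + np.exp(z))                    # weight update
    return x, w

# Toy data: truth x = 2 with two gross outliers (purely illustrative).
rng = np.random.default_rng(1)
y = 2.0 + 0.1 * rng.standard_normal(30)
y[:2] = 50.0
wmean = lambda y, w: np.sum(w * y) / np.sum(w)
x_hat, w_hat = esor(y, residual=lambda y, x: y - x, solver=wmean)
```

Compared with EROR's rational weights, the exponential gate suppresses the outliers essentially to zero once $\rho^{2}$ adapts below their squared residuals.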
#### III-B3 ASOR
For devising ESOR, we adapt $\rho^{2}$ (or $\epsilon$) during the iterations
to capture the characteristics of the outliers. Nevertheless, the rule for
selecting $\rho^{2}$, which controls the covariance of the outliers, is
entirely heuristic. It can be viewed as an additional step within the standard
SOR method, not falling under the standard VB approach. However, the insights
drawn from ESOR, together with its sound experimental performance, suggest the
merits of learning the characteristics of the outliers during inference. With
these observations, we now present ASOR, which is devised within the standard
VB
approach. In contrast to EROR and ESOR, we jointly estimate the covariance
controlling factor along with the state and the weights in ASOR.
The conditional likelihood for designing ASOR remains the same as for the
standard SOR, given in (16). Building on the insights from EROR and ESOR, we need to
adapt covariances to describe the outliers. In particular, we assume that when
an outlier occurs in the $i$th observation channel, i.e.
${{\mathcal{I}}}_{i}\neq 1$ in (16), ${{\mathcal{I}}}_{i}$ obeys a Gamma
probability density, which is supported on the set of positive real numbers.
Consequently, the hierarchical prior distribution of ${{\mathcal{I}}}_{i}$ is
given as
$\displaystyle p({{\mathcal{I}}}_{i}|b)$
$\displaystyle=(1-{\theta_{i}})\underbrace{f(a,b){{{\mathcal{I}}}_{i}}^{a-1}e^{-b{{{\mathcal{I}}}_{i}}}}_{\mathcal{G}({{\mathcal{I}}}_{i}|a,b)}+{\theta_{i}}\delta({{{\mathcal{I}}}_{i}}-1)$
(26)
where we treat the two components of $p({{\mathcal{I}}}_{i}|b)$ in (26) as
disjoint, with the Gamma density defined as zero for ${{{\mathcal{I}}}_{i}}=1$,
without loss of generality. This assumption simplifies the subsequent derivation. The
parameter $b$ is the factor that captures the common effect of outliers in
each observation channel. From the Bayesian theory we know that the conjugate
prior of $b$ is also a Gamma distribution given as [30]
$\displaystyle p(b)$ $\displaystyle=f(A,B){b}^{A-1}e^{-Bb}$ (27)
where $A$ and $B$ are the parameters of this Gamma distribution. We are now in
a position to invoke the Bayes theorem given as
$p(\mathbf{x},\bm{\mathcal{I}},b|\mathbf{y})\propto{p(\mathbf{y}|\bm{\mathcal{I}},\mathbf{x})p(\mathbf{x})p(\bm{\mathcal{I}}|b)p(b)}$
(28)
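Before deriving the posterior factors, note that the prior (26) is a spike-and-slab mixture: with probability $\theta_{i}$ the indicator equals 1 exactly (nominal channel), otherwise it is drawn from the Gamma slab. A hypothetical sampling sketch (note that NumPy parameterizes the Gamma by shape $a$ and scale $=1/b$, i.e. the inverse of the rate used here):

```python
import numpy as np

def sample_indicator(theta, a, b, rng):
    """Draw I_i from the spike-and-slab prior (26): a point mass at 1 with
    probability theta (inlier channel), else a Gamma(a, b) draw whose value
    inflates/deflates the effective noise covariance (outlier channel)."""
    if rng.random() < theta:
        return 1.0                            # spike: nominal covariance
    return rng.gamma(shape=a, scale=1.0 / b)  # slab: Gamma with rate b
```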
Consequently, the log-posterior, which is used in the subsequent derivation, is
given as
$\displaystyle\ln(p(\mathbf{x},\bm{\mathcal{I}},b|\mathbf{y}))=\Big\{\sum_{i=1}^{m}\Big(-0.5{\mathcal{I}}_{i}{\left(r_{i}\left(\mathbf{y}_{i},\mathbf{x}\right)\right)}^{2}+0.5\ln({\mathcal{I}}_{i})$
$\displaystyle+\ln\left((1-{\theta_{i}})f(a,b){{{\mathcal{I}}}_{i}}^{a-1}e^{-b{{{\mathcal{I}}}_{i}}}+{\theta_{i}}\delta({{{\mathcal{I}}}_{i}}-1)\right)\Big)$
$\displaystyle-0.5\big(r_{0}\left(\mathbf{y}_{0},\mathbf{x}\right)\big)^{2}+(A-1)\ln(b)-Bb+constant\Big\}$
(29)
To proceed further we resort to the VB factorization given as
$p(\mathbf{x},\bm{\mathcal{I}},b|\mathbf{y})\approx q(\mathbf{x})q(\bm{{\mathcal{I}}})q({b})$ (30)
Using the VB/EM theory and with the assumption that
$q(\mathbf{x})=\delta(\mathbf{x}-\hat{\mathbf{x}})$ we obtain the following
using (6) and (29)
$\hat{\mathbf{x}}=\underset{\mathbf{x}\in\mathcal{X}}{\operatorname{argmin}}\sum_{i=0}^{m}w_{i}{\left(r_{i}(\mathbf{y}_{i},\mathbf{x})\right)}^{2}$
(31)
where $w_{0}=1$ and
$\displaystyle w_{i}=\langle{\mathcal{I}}_{i}\rangle_{q({\mathcal{I}}_{i})}\ \forall\ i>0$ (32)
Thanks to the notion of VB-conjugacy [27], it turns out that
$q(\mathcal{I}_{i})$ has the same functional form as
$p({{\mathcal{I}}}_{i}|b)$ in (26). $q(\mathcal{I}_{i})$ is parameterized by
$\alpha$, $\beta_{i}$ and $\Omega_{i}$, corresponding to $a$, $b$ and
$\theta_{i}$ in $p({{\mathcal{I}}}_{i}|b)$ respectively. Consequently, we can
evaluate the expectation in (32) as
$\displaystyle w_{i}=\Omega_{i}+(1-\Omega_{i})\alpha/\beta_{i}\ \forall\ i>0$
(33)
The VB marginal $q(\bm{\mathcal{I}})$ can be obtained using (4) and (29) as
$\displaystyle q(\bm{\mathcal{I}})\propto$
$\displaystyle\prod_{i=1}^{m}\Big{\\{}\mathcal{I}^{0.5}_{i}e^{-0.5{\hat{r}_{i}^{2}}\mathcal{I}_{i}}((1-{\theta_{i}})f(a,b){{{\mathcal{I}}}_{i}}^{a-1}e^{-\hat{b}{{{\mathcal{I}}}_{i}}}$
$\displaystyle+{\theta_{i}}\delta({{{\mathcal{I}}}_{i}}-1))\Big{\\}}$ (34)
where we have assumed a point estimator for $b$, i.e. $q(b)=\delta(b-\hat{b})$,
to simplify the arising expectations.
We can further write
$\displaystyle q(\bm{\mathcal{I}})=$
$\displaystyle\prod_{i=1}^{m}\Big\{k_{i}(1-{\theta_{i}})f(a,\hat{b}){{{\mathcal{I}}}_{i}}^{\alpha-1}e^{-\beta_{i}\mathcal{I}_{i}}+k_{i}{\theta_{i}}e^{-0.5{\hat{r}_{i}^{2}}}\delta({{{\mathcal{I}}}_{i}}-1)\Big\}$
(35)
where $k_{i}$ is the proportionality constant for the $i$th dimension,
$\beta_{i}={0.5{\hat{r}_{i}^{2}}+\hat{b}}$ and $\alpha=a+0.5$.
Proceeding ahead we can write
$\displaystyle q(\bm{\mathcal{I}})$
$\displaystyle=\prod_{i=1}^{m}\overbrace{(1-{\Omega_{i}})\underbrace{f(\alpha,\beta_{i}){{{\mathcal{I}}}_{i}}^{\alpha-1}e^{-\beta_{i}\mathcal{I}_{i}}}_{q^{1}({\mathcal{I}}_{i})}+{{\Omega_{i}}\delta({{{\mathcal{I}}}_{i}}-1)}}^{q(\mathcal{I}_{i})}$
where
${\Omega_{i}}=k_{i}e^{-0.5{\hat{r}_{i}^{2}}}\theta_{i}$ (36)
To determine $k_{i}$, we note that the distribution in (35) should integrate
to $1$. Therefore, the following should hold
$k_{i}{\theta_{i}}e^{-0.5{\hat{r}_{i}^{2}}}+k_{i}(1-{\theta_{i}})\frac{f(a,\hat{b})}{f(\alpha,{\beta_{i}})}=1$
(37)
Leading to
$k_{i}=\frac{1}{{\theta_{i}}e^{-0.5{\hat{r}_{i}^{2}}}+(1-{\theta_{i}})\frac{f(a,\hat{b})}{f(\alpha,{\beta_{i}})}}$
(38)
Consequently, using (10), (36) and (38) we arrive at
$\Omega_{i}=\frac{1}{1+\zeta\frac{{\hat{b}}^{a}}{{\beta_{i}^{\alpha}}}e^{0.5{\hat{r}_{i}^{2}}}}\ \forall\ i>0$ (39)
where $\zeta=(\frac{1}{\theta_{i}}-1)\frac{\Gamma(\alpha)}{\Gamma(a)}$.
Lastly, in a similar manner using the VB/EM approach we can determine
$q(b)=\delta(b-\hat{b})$ where using (6) and (29)
$\hat{b}=\underset{b}{\operatorname{argmax}}\big{\langle}\mathrm{ln}(p(\hat{\mathbf{x}},\bm{\mathcal{I}},b|\mathbf{y}))\big{\rangle}_{q(\bm{\mathcal{I}})}$
(40)
The expected log-posterior in (40) can be written as
$\displaystyle\big\langle\mathrm{ln}(p(\hat{\mathbf{x}},\bm{\mathcal{I}},b|\mathbf{y}))\big\rangle_{q(\bm{\mathcal{I}})}=$
$\displaystyle\Big\{{\sum_{i=1}^{m}v_{i}(b)}+{(A-1)}\ln(b){-Bb}+constant\Big\}$ (41)
where
$\displaystyle v_{i}(b)$
$\displaystyle={\langle\ln((1-{\theta_{i}})f(a,b){{{\mathcal{I}}}_{i}}^{a-1}e^{-b\mathcal{I}_{i}}+{\theta_{i}}\delta({{{\mathcal{I}}}_{i}}-1))\rangle_{q({\mathcal{I}}_{i})}}$
(42)
which can further be written as follows considering only the terms dependent
on $b$
$\displaystyle v_{i}(b)=(1-\Omega_{i})(\ln(f(a,b))-b\langle{{{\mathcal{I}}}_{i}}\rangle_{q^{1}({\mathcal{I}}_{i})})+constant$
(43)
where we assume $q^{1}(\mathcal{I}_{i})$ is defined as zero for
${{{\mathcal{I}}}_{i}}=1$, as is the prior Gamma density in (26). Given
$\langle{{{\mathcal{I}}}_{i}}\rangle_{q^{1}({\mathcal{I}}_{i})}={\alpha}/{\beta_{i}}$
and using the expressions in (10) and (43), we can write (41) as
$\displaystyle\big{\langle}\mathrm{ln}(p(\hat{\mathbf{x}},\bm{\mathcal{I}},b|\mathbf{y}))\big{\rangle}_{q(\bm{\mathcal{I}})}=$
$\displaystyle{(\bar{A}-1)}\ln(b)-\bar{B}b+constant$ (44)
where
$\displaystyle\bar{A}$ $\displaystyle=A+\sum_{i=1}^{m}a(1-\Omega_{i})$ (45)
$\displaystyle\bar{B}$
$\displaystyle=B+\sum_{i=1}^{m}(1-\Omega_{i})\frac{\alpha}{\beta_{i}}$ (46)
Maximizing (44) using differentiation, we obtain $\hat{b}$ according to (40)
as
$\hat{b}=\frac{\bar{A}-1}{\bar{B}}\ \ \ \mathrm{s.t.}\ \ \ \bar{A}>1$ (47)
where $\bar{A}>1$ is required since the parameter $b$ of the Gamma
distribution in (26), being approximated in (47), must be positive. Moreover,
since the parameter $a>0$ for the Gamma distribution in (26) to be valid,
$A>1$ is a sufficient condition for (47) to hold considering (45). Lastly,
note that since the rate parameter of the distribution in (27) is positive,
i.e. $B>0$, numerical errors in (47) are avoided. The resulting method, namely
ASOR, is given as Algorithm 3.
Initialize $w_{i}=1\ \forall\ i$ and $A,B,a,\hat{b}$, $\theta_{i}\ \forall\ i$
Evaluate $\alpha=a+0.5$ and
$\zeta=(\frac{1}{\theta_{i}}-1)\frac{\Gamma(\alpha)}{\Gamma(a)}$
while _the convergence criterion has not been met_ do
Variable update:
$\hat{\mathbf{x}}=\underset{\mathbf{x}\in\mathcal{X}}{\operatorname{argmin}}\sum_{i}w_{i}{(r_{i}\left(\mathbf{y}_{i},\mathbf{x}\right))}^{2}$
Residual update:
${\hat{r}_{i}^{2}}={\big{(}r_{i}\left(\mathbf{y}_{i},\hat{\mathbf{x}}\right)\big{)}}^{2}$
$\forall\ i$
Parametric updates:
$\beta_{i}={0.5{\hat{r}_{i}^{2}}+\hat{b}}$
$\Omega_{i}=\frac{1}{1+\zeta\frac{{\hat{b}}^{a}}{{\beta_{i}^{\alpha}}}e^{0.5{\hat{r}_{i}^{2}}}}$
$\hat{b}=\frac{A-1+\sum_{i}a(1-\Omega_{i})}{B+\sum_{i}(1-\Omega_{i})\frac{\alpha}{\beta_{i}}}$
Weight update: $w_{i}=\Omega_{i}+(1-\Omega_{i})\alpha/\beta_{i}\ \forall\ i>0$
end while
Algorithm 3 ASOR
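The parametric and weight updates of Algorithm 3 vectorize over the $m$ observation channels. A minimal sketch, where `asor_updates` is a hypothetical helper operating on the vector of squared residuals (the variable and residual updates are the same as in ESOR):

```python
import numpy as np
from math import gamma as Gamma  # Euler Gamma function for the zeta constant

def asor_updates(r2, a, b_hat, A, B, theta):
    """One round of the ASOR parametric and weight updates (Algorithm 3),
    given the squared residuals r2 of the m observation channels."""
    alpha = a + 0.5
    zeta = (1.0 / theta - 1.0) * Gamma(alpha) / Gamma(a)
    beta = 0.5 * r2 + b_hat                                   # beta_i, per (35)
    Omega = 1.0 / (1.0 + zeta * b_hat**a / beta**alpha * np.exp(0.5 * r2))  # (39)
    b_hat = (A - 1.0 + a * np.sum(1.0 - Omega)) \
            / (B + np.sum((1.0 - Omega) * alpha / beta))      # (47)
    w = Omega + (1.0 - Omega) * alpha / beta                  # weight update (33)
    return w, b_hat
```

With the initialization reported in Section IV ($a=0.5$, $A=10000$, $B=1000$, $\hat{b}=10000$, $\theta_{i}=0.5$), a channel with a large squared residual receives a weight near zero while nominal channels keep weights near one.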
### Remarks
It is interesting to note that the variable update steps in EROR, ESOR and
ASOR are the same as in GNC, reflecting their ability to employ non-minimal
solvers during inference. We propose using the change in
${\sum_{i}}w_{i}{\hat{r}_{i}^{2}}$ during consecutive iterations as the
standard convergence criterion. We lastly remark that in EROR and ESOR, an
exception to break the iterations can be added for seamless operation when the
sum of the weights gets close to zero, which we experienced in certain
applications with very high ratios of outliers.
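The stopping logic described above can be sketched as one helper; `w_floor` is a hypothetical threshold for the weight-collapse exception:

```python
import numpy as np

def should_stop(w, r2, prev_cost, rel_tol=1e-5, w_floor=1e-8):
    """Convergence/exception check: stop when the normalized change in
    sum_i w_i * r_i^2 falls below rel_tol, or break out early if the weights
    have collapsed toward zero (very high outlier ratios)."""
    if np.sum(w) < w_floor:            # exception: nearly all weight removed
        return True, prev_cost
    cost = np.sum(w * r2)
    done = abs(cost - prev_cost) <= rel_tol * max(abs(prev_cost), 1e-12)
    return done, cost
```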
## IV Experiments
In this section, we discuss the performance results of the proposed methods
for different spatial perception applications including 3D point cloud
registration (code: MATLAB), mesh registration (code: MATLAB), and pose graph
optimization (PGO) (code: C++) on an Intel i7-8550U processor-based computer
and consider SI units. We consider GNC-GM and GNC-TLS as the benchmark methods
which have shown good results in these applications outperforming methods
including the classical RANSAC and recently introduced ADAPT. For EROR and
ESOR we consider $\chi=\gamma=\bar{c}^{2}$ where $\bar{c}^{2}$, dictating the
maximum error expected for the inliers, is set as specified in the original
work [11]. In ASOR we resort to the choice of initialization which performs
the best across different applications given as
$a=0.5,A=10000,B=1000,\hat{b}=10000,\theta_{i}=0.5\ \forall\ i$. We use the
normalized incremental change of $10^{-5}$ in
${\sum_{i}}w_{i}{\hat{r}_{i}^{2}}$ during consecutive iterations as
the convergence criterion for GNC-TLS, EROR, ESOR and ASOR. For GNC-GM the
convergence criterion remains the same as originally reported.
### IV-A 3D Point Cloud Registration
Figure 3: Point clouds with correspondences in 3D point cloud registration for
the Bunny dataset [31].
In 3D point cloud registration, we assume that a set of 3D points
$\mathbf{p}_{i}\in\mathbb{R}^{3},i=1,...,m$ undergo a transformation, with
rotation $\mathbf{R}\in\mathrm{SO(3)}$ and translation
$\mathbf{t}\in\mathbb{R}^{3}$, resulting in another set of 3D points
$\mathbf{q}_{i}\in\mathbb{R}^{3},i=1,...,m$. The putative correspondences
$(\mathbf{p}_{i},\mathbf{q}_{i})$ can be potentially infested with outliers.
Fig. 3 depicts how the Bunny point cloud from the Stanford repository [31]
undergoes a random transformation in a point cloud registration setup (blue
lines: inliers, red lines: outliers). The objective is to estimate
$\mathbf{R}$ and $\mathbf{t}$ that best aligns the two point clouds by
minimizing the effect of outliers. The problem can be cast in the form of (2),
where the $i$th residual is the Euclidean distance between $\mathbf{q}_{i}$
and $\mathbf{R}\mathbf{p}_{i}+\mathbf{t}$. We resort to the renowned Horn's
method as the non-minimal solver, which provides closed-form estimates
in the outlier-free case [8].
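As a sketch of such a closed-form weighted solver, the SVD-based (Kabsch-style) variant below minimizes the weighted cost in (31) for this residual; it is a stand-in for, not a reproduction of, Horn's quaternion formulation [8]:

```python
import numpy as np

def weighted_rigid_fit(P, Q, w):
    """Closed-form weighted least-squares fit of (R, t) minimizing
    sum_i w_i ||q_i - (R p_i + t)||^2 (SVD/Kabsch variant of Horn's method).
    P, Q are (m, 3) arrays of corresponding points, w an (m,) weight vector."""
    w = w / np.sum(w)
    p_bar, q_bar = w @ P, w @ Q                  # weighted centroids
    H = (P - p_bar).T @ np.diag(w) @ (Q - q_bar) # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

def residuals(P, Q, R, t):
    """Euclidean residuals r_i = ||q_i - (R p_i + t)|| as in Sec. IV-A."""
    return np.linalg.norm(Q - (P @ R.T + t), axis=1)
```

On noise-free correspondences the fit recovers the ground-truth transform exactly; inside EROR/ESOR/ASOR it would be called once per iteration with the current weights.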
Figure 4: Performance of robust estimators for 3D point cloud registration considering the Bunny dataset [31]. (a) Rotation error. (b) Translation error. (c) Computational time.
Figure 5: Performance of robust estimators for mesh registration considering the motorbike model [32]. (a) Rotation error. (b) Translation error. (c) Computational time.
Figure 6: Performance of robust estimators for pose graph optimization considering Intel and CSAIL datasets. (a) RMSE (INTEL). (b) RMSE (CSAIL). (c) Computational time (INTEL).
Using the proposed EROR, ESOR and ASOR we robustify the Horn’s method and
report the results for the Bunny dataset. We downsample the point cloud to
$m=100$ points and restrict it within a $[-0.5,0.5]^{3}$ box before applying a
random rotation $\mathbf{R}\in\mathrm{SO(3)}$ and a random translation
$\mathbf{t}\ (\|\mathbf{t}\|_{2}\leq 3)$. The inliers of the transformed
points are corrupted with independent noise samples drawn from
$\mathcal{N}(0,0.001^{2})$ whereas the outliers are randomly generated being
contained within a sphere of diameter $\sqrt{3}$. 20 Monte Carlo (MC) runs are
used to capture the error statistics for up to $90\%$ outlier contamination
ratio.
Fig. 4 showcases the performance results of our proposed methods in comparison
to the GNC methods. The rotation and translational error statistics as
depicted in Figs. 4(a)-4(b) are very similar in all the cases with errors
generally increasing for all the methods at large outlier ratios. In terms of
computational complexity, the gains of the Bayesian heuristics can be observed
for this case in Fig. 4(c) which is a key evaluation parameter for runtime
performance. ESOR is the fastest, followed by EROR and ASOR, over the entire
range of outlier ratios. We have observed similar error performance of the
methods for other values of $m$ (up to the maximum number of available
points). We have also noticed that with an increase in the inlier noise
magnitude and number of points, ASOR slows down, becoming computationally
comparable to the GNC methods. Moreover, we have noted that increasing the
parameter $a$ generally reduces the processing time at the cost of slightly
increased errors at higher outlier ratios.
Other specialized solvers like TEASER [33] also exist for point cloud
registration, which are certifiably robust and can better sustain high
outlier contamination. However, TEASER has scalability issues owing to its
slow semidefinite programming (SDP) based solution for rotation estimation. An
improved version of TEASER, namely TEASER++ [18], was recently introduced that
leverages the heuristic GNC-TLS in its inferential pipeline for rotation
estimation while still certifying the estimates. Owing to the well-devised
estimation pipeline, GNC-TLS has to deal with a smaller percentage of
outliers, making TEASER++ faster and improving its practicality.
Since the error performance of the Bayesian heuristics is similar to that of
the GNC methods, and they are generally found to be faster in various
scenarios of the point cloud registration problem, they have utility as
standalone estimators. Moreover, they can potentially be useful in other
inferential pipelines like TEASER++ (while enjoying certifiable performance),
but this needs a thorough evaluation.
### IV-B Mesh registration
In the mesh registration problem, the points $\mathbf{p}_{i}$ from a point
cloud are transformed to general 3D primitives $\mathbf{q}_{i}$ including
points, lines, and/or planes. Fig. 7 shows the result of a random
transformation of a point cloud to the motorbike mesh model from the PASCAL
dataset [32] (blue lines: inliers, red lines: outliers). The aim is to
estimate $\mathbf{R}$ and $\mathbf{t}$ by minimizing the sum of squared
residuals, which represent the corresponding distances. We resort to [9] as the
basic non-minimal solver which has been proposed to find the globally optimal
solution for the outlier-free case.
Figure 7: Mesh and point cloud with correspondences in mesh registration for
the motorbike mesh model [32]
Using the presented EROR, ESOR and ASOR heuristics we robustify the solver and
report the results for the motorbike mesh model. To create the point cloud, we
randomly sample points on the vertices, edges and faces of the mesh model, and
then apply a random transformation $(\mathbf{R}\in\mathrm{SO(3)}$,
$\mathbf{t}\ (\|\mathbf{t}{\|}_{2}\leq 5))$ and subsequently add independent
noise samples from $\mathcal{N}(0,0.01^{2})$. Considering $100$ point-to-
point, $80$ point-to-line and $80$ point-to-plane correspondences, we create
outliers by random erroneous correspondences. For this case also, 20 MC runs
are carried out to generate the error statistics for up to $90\%$ outlier
contamination ratio. We see a trend similar to the point cloud registration
case as depicted in Figs. 5(a)-5(b). In particular, the performance in terms
of errors is similar for all the algorithms. However, the Bayesian heuristics
exhibit a general computational advantage, except at very high outlier ratios
where the performance becomes comparable. The Bayesian heuristics generally
have similar runtimes with SOR modifications having a general advantage. We
observed similar performance of the methods for other combinations of
correspondences in this case.
During the experiments of point cloud and mesh registration, we have noticed
that ROR and SOR modifications generally get computationally more
advantageous, with estimation quality remaining similar, when the outliers
become larger in comparison to the nominal noise samples.
### IV-C Pose graph optimization
(a)
(b)
Figure 8: Ground truth paths of datasets considered in pose graph
optimization.
PGO is typically employed for several problems arising in robotics and computer
vision applications like SLAM and structure from motion (SfM) [34]. The
objective is to estimate a set of poses
$(\mathbf{t}_{i},\mathbf{R}_{i}),i=1,...,m$ using pairwise relative
measurements $(\tilde{\mathbf{t}}_{ij},\tilde{\mathbf{R}}_{ij})$. Relative
observations can result in consecutive pose constraints (e.g. from odometry
measurements) or non-successive pose constraints (e.g. from scan matching)
also known as loop closures. The residual error for this case is given as
$r\left(\mathbf{R}_{i},\mathbf{t}_{i}\right)=\sqrt{\kappa_{ij}\|\mathbf{R}_{j}-\mathbf{R}_{i}\tilde{\mathbf{R}}_{ij}\|_{F}^{2}+\tau_{ij}\|\mathbf{t}_{j}-\mathbf{t}_{i}-\mathbf{R}_{i}\tilde{\mathbf{t}}_{ij}\|_{2}^{2}}$
where $\kappa_{ij}$ and $\tau_{ij}$ encode the measurement noise statistics
and $\|.\|_{F}$ denotes the Frobenius norm. We resort to SE-Sync [10] as the
non-minimal solver for this case and use the Python binding of C++ open-
sourced by the authors. Adopting the same experimental setup as GNC, we
randomly corrupt loop closures and retain odometry observations. For
benchmarking we consider 2D datasets including INTEL and CSAIL which are
available openly (path ground truth plotted in Fig. 8). Since simulations take
much more time as compared to the previous case we carry out 10 MC runs to
obtain the error statistics. Fig. 6(a) showcases the root mean squared error (RMSE)
of the trajectory considering the INTEL dataset. The proposed Bayesian
heuristics in this case also exhibit comparative performance to the GNC
methods. At very high outlier rates, ESOR has slightly compromised
performance. In Fig. 6(b) the RMSE for the CSAIL dataset is depicted,
reflecting the same pattern. In the PGO examples, GNC-TLS and ASOR outperform
the other methods, with ASOR generally having the least error across outlier
ratios. As far as computational performance is concerned, ESOR has the
smallest runtime, followed by ASOR, while the ROR modification is the slowest in both cases.
Fig. 6(c) depicts the computational runtime statistics for the INTEL dataset.
CSAIL has similar computational time statistics with ESOR being relatively
much faster as compared to other methods.
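The pairwise PGO residual defined above can be evaluated directly; a minimal sketch, assuming the rotations and translations are available as NumPy arrays:

```python
import numpy as np

def pgo_residual(R_i, t_i, R_j, t_j, R_ij, t_ij, kappa, tau):
    """Pairwise PGO residual: rotational (Frobenius-norm) and translational
    terms weighted by the noise parameters kappa_ij and tau_ij."""
    rot = np.linalg.norm(R_j - R_i @ R_ij, ord='fro') ** 2
    trans = np.linalg.norm(t_j - t_i - R_i @ t_ij) ** 2
    return np.sqrt(kappa * rot + tau * trans)
```

A pose pair that exactly satisfies the relative measurement yields a zero residual; corrupting the loop closure makes it strictly positive, which is what the weight updates then act on.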
Lastly, we also evaluated the Bayesian heuristics using
$\max_{i}({w_{i}{\hat{r}_{i}^{2}}})<\bar{c}^{2}$ as the stopping criterion,
which resulted in much faster runtimes but slightly degraded error
performance in the considered scenarios of the spatial perception
applications.
## V Conclusion
We have proposed three Bayesian heuristics: EROR, ESOR and ASOR as nonlinear
estimators for spatial perception problems. Like the existing general-purpose
GNC heuristics, these have the ability to invoke existing non-minimal solvers.
Evaluations in several experiments demonstrate their merits as compared to the
GNC heuristics. In particular, in the 3D point cloud and mesh registration
problems EROR, ESOR and ASOR have similar estimation errors over a wide range
of outlier ratios. However, the Bayesian heuristics have a general advantage
in computational terms. For the PGO setups, we generally find the Bayesian
methods to compete with GNC in estimation quality. The devised ROR and SOR
modifications are found to be the least and most computationally efficient,
respectively, for this case. Using the other suggested stopping criterion can
lead to further improvement in computational terms at the expense of
estimation quality. In short, the proposed
methods provide general purpose options, in addition to the GNC heuristics, to
robustify the existing non-minimal solvers against outliers in different
spatial perception applications indicating their usefulness. Empirical
evidence suggests that the proposed approaches provide a general edge in
computational terms while remaining competitive in terms of error. The actual
possible gains depend on whether the solvers are used standalone or in an
inferential pipeline for the particular application scenarios and should be
evaluated for the case under consideration before deployment. We believe that
the work can be further extended in different directions by aiming to devise
the heuristics without knowledge of nominal noise statistics. Moreover, these
methods can also be tested within hybrid approaches where the estimates
subsequently get certified for optimality.
## References
* [1] B. Drost, M. Ulrich, N. Navab, and S. Ilic, “Model globally, match locally: Efficient and robust 3D object recognition,” in _2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition_ , 2010, pp. 998–1005.
* [2] D. Scaramuzza and F. Fraundorfer, “Visual odometry [tutorial],” _IEEE Robotics and Automation Magazine_ , vol. 18, no. 4, pp. 80–92, 2011.
* [3] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard, “Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age,” _IEEE Transactions on Robotics_ , vol. 32, no. 6, pp. 1309–1332, 2016.
* [4] Y.-Q. Liu, F. Jin, K.-F. Dong, J.-L. Song, W.-Q. Mo, and Y.-J. Hui, “Eccentric optimization of multisensor for SLAM-integrated navigation,” _IEEE Transactions on Instrumentation and Measurement_ , vol. 72, pp. 1–8, 2023.
* [5] A. H. Chughtai, M. Tahir, and M. Uppal, “A robust Bayesian approach for online filtering in the presence of contaminated observations,” _IEEE Transactions on Instrumentation and Measurement_ , vol. 70, pp. 1–15, 2021.
* [6] P. Antonante, V. Tzoumas, H. Yang, and L. Carlone, “Outlier-robust estimation: Hardness, minimally tuned algorithms, and applications,” _IEEE Transactions on Robotics_ , vol. 38, no. 1, pp. 281–301, 2022.
* [7] J. Wang, Z. Meng, and L. Wang, “Efficient probabilistic approach to range-only SLAM with a novel likelihood model,” _IEEE Transactions on Instrumentation and Measurement_ , vol. 70, pp. 1–12, 2021.
* [8] B. K. Horn, “Closed-form solution of absolute orientation using unit quaternions,” _Josa a_ , vol. 4, no. 4, pp. 629–642, 1987.
* [9] J. Briales and J. Gonzalez-Jimenez, “Convex global 3D registration with Lagrangian duality,” in _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2017, pp. 5612–5621.
* [10] D. M. Rosen, L. Carlone, A. S. Bandeira, and J. J. Leonard, “Se-sync: A certifiably correct algorithm for synchronization over the special euclidean group,” _The International Journal of Robotics Research_ , vol. 38, no. 2-3, pp. 95–125, 2019.
* [11] H. Yang, P. Antonante, V. Tzoumas, and L. Carlone, “Graduated non-convexity for robust spatial perception: From non-minimal solvers to global outlier rejection,” _IEEE Robotics and Automation Letters_ , vol. 5, no. 2, pp. 1127–1134, 2020.
* [12] M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” _Communications of the ACM_ , vol. 24, no. 6, pp. 381–395, 1981.
* [13] V. Tzoumas, P. Antonante, and L. Carlone, “Outlier-robust spatial perception: Hardness, general-purpose algorithms, and guarantees,” in _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2019, pp. 5383–5390.
* [14] J. L. Schonberger and J.-M. Frahm, “Structure-from-motion revisited,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2016.
* [15] N. Chebrolu, T. Läbe, O. Vysotska, J. Behley, and C. Stachniss, “Adaptive robust kernels for non-linear least squares problems,” _IEEE Robotics and Automation Letters_ , vol. 6, no. 2, pp. 2240–2247, 2021.
* [16] P.-Y. Lajoie, S. Hu, G. Beltrame, and L. Carlone, “Modeling perceptual aliasing in SLAM via discrete–continuous graphical models,” _IEEE Robotics and Automation Letters_ , vol. 4, no. 2, pp. 1232–1239, 2019.
* [17] L. Carlone and G. C. Calafiore, “Convex relaxations for pose graph optimization with outliers,” _IEEE Robotics and Automation Letters_ , vol. 3, no. 2, pp. 1160–1167, 2018.
* [18] H. Yang, J. Shi, and L. Carlone, “TEASER: Fast and certifiable point cloud registration,” _IEEE Transactions on Robotics_ , vol. 37, no. 2, pp. 314–333, 2021.
* [19] E. M. Ronchetti and P. J. Huber, _Robust statistics_. John Wiley & Sons, 2009.
* [20] G. Agamennoni, P. Furgale, and R. Siegwart, “Self-tuning m-estimators,” in _2015 IEEE International Conference on Robotics and Automation (ICRA)_ , 2015, pp. 4628–4635.
* [21] A. Nakabayashi and G. Ueno, “Nonlinear filtering method using a switching error model for outlier-contaminated observations,” _IEEE Transactions on Automatic Control_ , vol. 65, no. 7, pp. 3150–3156, 2020.
* [22] H. Wang, H. Li, J. Fang, and H. Wang, “Robust Gaussian Kalman filter with outlier detection,” _IEEE Signal Processing Letters_ , vol. 25, no. 8, pp. 1236–1240, 2018.
* [23] A. H. Chughtai, M. Tahir, and M. Uppal, “Outlier-robust filtering for nonlinear systems with selective observations rejection,” _IEEE Sensors Journal_ , vol. 22, no. 7, pp. 6887–6897, 2022.
* [24] R. Piché, S. Särkkä, and J. Hartikainen, “Recursive outlier-robust filtering and smoothing for nonlinear systems using the multivariate Student-t distribution,” in _2012 IEEE International Workshop on Machine Learning for Signal Processing_ , 2012, pp. 1–6.
* [25] M. A. Amaral Turkman, C. D. Paulino, and P. Müller, _Computational Bayesian Statistics: An Introduction_ , ser. Institute of Mathematical Statistics Textbooks. Cambridge University Press, 2019.
* [26] S. Särkkä and L. Svensson, _Bayesian filtering and smoothing_. Cambridge university press, 2023, vol. 17.
* [27] V. Šmídl and A. Quinn, _The variational Bayes method in signal processing_. Springer Science & Business Media, 2006.
* [28] K. P. Murphy, _Machine learning : a probabilistic perspective_. Cambridge, Mass. [u.a.]: MIT Press, 2013.
* [29] ——, “Conjugate Bayesian analysis of the Gaussian distribution,” _def_ , vol. 1, no. 2$\sigma$2, p. 16, 2007.
* [30] D. Fink, “A compendium of conjugate priors,” _See http://www. people. cornell. edu/pages/df36/CONJINTRnew% 20TEX. pdf_ , vol. 46, 1997.
* [31] B. Curless and M. Levoy, “A volumetric method for building complex models from range images,” in _Proceedings of the 23rd annual conference on Computer graphics and interactive techniques_ , 1996, pp. 303–312.
* [32] Y. Xiang, R. Mottaghi, and S. Savarese, “Beyond PASCAL: A benchmark for 3D object detection in the wild,” in _IEEE Winter Conference on Applications of Computer Vision_ , 2014, pp. 75–82.
* [33] H. Yang and L. Carlone, “A polynomial-time solution for robust registration with extreme outlier rates,” _Robotics: Science and Systems_ , 2019.
* [34] F. Bai, T. Vidal-Calleja, and G. Grisetti, “Sparse pose graph optimization in cycle space,” _IEEE Transactions on Robotics_ , vol. 37, no. 5, pp. 1381–1400, 2021.
# Anisotropic differential conductance of a mixed parity
superconductor/ferromagnet structure
Tim Kokkeler<EMAIL_ADDRESS>Donostia International Physics Center
(DIPC), 20018 Donostia–San Sebastián, Spain University of Twente, 7522 NB
Enschede, The Netherlands Alberto Hijano<EMAIL_ADDRESS>Centro de
Física de Materiales (CFM-MPC) Centro Mixto CSIC-UPV/EHU, E-20018 Donostia-San
Sebastián, Spain Department of Condensed Matter Physics, University of the
Basque Country UPV/EHU, 48080 Bilbao, Spain F. Sebastián Bergeret
<EMAIL_ADDRESS>Centro de Física de Materiales (CFM-MPC) Centro Mixto
CSIC-UPV/EHU, E-20018 Donostia-San Sebastián, Spain Donostia International
Physics Center (DIPC), 20018 Donostia–San Sebastián, Spain
###### Abstract
We study the electronic transport properties of a superconductor (S) with a
mixed s+p-wave pairing attached to a ferromagnetic metal (F) and a normal
electrode (N) in an SFN configuration. Using the quasiclassical Green’s
function method, we compute the differential conductance $\sigma$ of the
junction and demonstrate its dependence on the direction of the exchange field
relative to the direction of the d-vector of the pair potential. If the p-wave
triplet dominates the pairing, the zero bias conductance depends on the
relative direction between the triplet d-vector and the exchange field. In
contrast, if the s-wave singlet dominates the pairing, the zero bias
conductance is isotropic with respect to the field direction. Furthermore, at
zero temperature, the zero bias conductance height can only take two values as
a function of $r$, the parameter quantifying the relative amount of s- and
p-wave pairing, with an abrupt change at $r=1$ when the superconductor goes
from a singlet to triplet dominated ground state. Moreover, we show that the
relative amount of s- and p-wave pairing, can be estimated from the dependence
of the finite bias conductance on the exchange field direction. Our results
provide a way to characterize parity-mixed superconductors performing
electrical measurements.
## I Introduction
Among the various types of unconventional superconductors, much attention has
been paid to the study of superconductors with triplet correlations [1, 2, 3,
4, 5, 6, 7, 8]. These correlations can be induced either via the proximity
effect by combining superconductors with other materials [6, 7], or they may
exist in bulk superconductivity, for example in uranium based ferromagnetic
superconductors [9, 10, 11, 12, 13, 14].
Most works focus on superconductors which have inversion symmetry, that is, in
which the parity of the pair potential is either even or odd. However, in the
past few decades superconductors have been discovered whose underlying crystal
structure lacks inversion symmetry [15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
25, 26, 27]. In such superconductors parity-mixed superconductivity may arise
[15]. Non-centrosymmetric superconductors have interesting applications, for
example, they are very suitable for superconducting diodes due to the
inversion symmetry breaking [28, 29].
An important issue is the determination of the pair potential. There have been
many efforts to explore restrictions on the possible pair potentials and to
predict properties of inversion-symmetry broken superconductors [30, 31, 32,
33, 34, 35, 36, 37, 15, 38, 39, 40]. Still, in general it is difficult
to determine the type of unconventional pairing. Examples of efforts include
using NMR [41, 42, 43, 44, 45] or measuring the critical field for different
directions of an applied magnetic field [46, 47, 48], to identify spin-triplet
pairing. s+p-wave pairing is predicted to be, under certain conditions, the
most stable pairing, for example in $\text{CePt}_{3}\text{Si}$ [36]. There are
also theoretical suggestions to explore the proximity effect of unconventional
superconductors on normal materials [49, 50, 51, 52, 53, 54]. However, for
many materials, the results are not conclusive.
In this work, we explore non-equilibrium electronic transport through a
superconductor/ferromagnet/normal metal (SFN) junction, to reveal properties
of the parity-mixed pair potential. We focus on the simplest type of a parity-
mixed pair potential, the s+p-wave superconductor, with a helical p-wave
pairing. We calculate the differential conductance $\sigma$ of the junction
shown in Fig. 1 and investigate the dependence of $\sigma$ on both the
amplitude and direction of the intrinsic exchange field of the F metal. We
first focus on the zero bias conductance. It shows a peak when
$\Delta_{t}>\Delta_{s}$. We find that the height of the zero bias conductance
peak (ZBCP) remains unchanged for exchange fields that are perpendicular to
the direction of transport. In contrast, when the exchange field is parallel
to the d-vector, the differential conductance peak shifts to finite voltages
and the zero bias conductance is suppressed. Thus, for large exchange fields
only a broad dome-like shape remains. The zero bias conductance varies
monotonically as a function of the angle between d-vector and exchange field.
We also show that the angular dependence of the differential conductance for
nonzero voltages can be used to determine the mixing parameter, the relative
strength of the singlet and triplet components of the pair potential. If
$\Delta_{s}>\Delta_{t}$, a long junction with $E_{\text{Th}}<\Delta_{0}$ can be used for this purpose. Here $E_{\text{Th}}=D/L^{2}$ is the Thouless energy, $L$ and $D$ are the length and diffusion coefficient of the F link respectively, and $\Delta_{0}$ is the amplitude of the gap. If
$\Delta_{t}>\Delta_{s}$, a short junction with $E_{\text{Th}}\sim\Delta_{0}$
is more suitable for the determination of the mixing parameter. We also find
that the exchange field dependence of the zero bias conductance in both the
long and short junctions is independent of the exact ratio between
$\Delta_{s}$ and $\Delta_{t}$; it is fully determined by whether the singlet
component or the triplet component is dominant. Thus, with the proposed setup
the pair potential of an s+p-wave superconductor can be fully characterized by
electrical measurements.
The work is organized as follows. In section II we introduce the equations
used to describe the system and the boundary conditions at the interfaces
between different materials. In section III, we present our results for the
differential conductance. We also show how the differential conductance can be
used to reveal the mixing parameter between the singlet and triplet
amplitudes. Section IV is devoted to a discussion of the results and an
outlook. Throughout the paper we work in units with $\hbar=k_{B}=1$.
## II The Model
We consider a ferromagnetic metal of mesoscopic dimensions attached to an
s+p-wave superconductor on the left and a normal electrode on the right; see
Fig. 1.
Figure 1: A schematic of the SFN junction. The superconductor is an s+p mixed-parity superconductor. A voltage is applied to the normal metal electrode (N) to drive currents through the junction. The differential conductance is calculated as a function of the direction of the exchange field $\vec{h}$ in the ferromagnetic bar (F).
The S electrode induces superconducting correlations into the F layer via the
superconducting proximity effect. We assume that the pair potential has the
form
$\hat{\Delta}=\Delta_{s}+\Delta_{t}\vec{d}\cdot\vec{\sigma}\;,$ (1)
where $\Delta_{s}$ is the isotropic singlet component, independent of momentum
direction on the Fermi surface. $\Delta_{t}$ and the unit vector $\vec{d}$
describe the amplitude and direction of the p-wave triplet component [4, 5]
respectively. Here $\vec{\sigma}$ is the vector of Pauli matrices in spin
space.
Two important examples of p-wave pairing are chiral p-wave pairing, for
example $\vec{d}(\phi)=e^{i\phi}\vec{a}$, and helical p-wave pairing, with
$\vec{d}(\phi)=\cos{\phi}\vec{a}+\sin{\phi}\vec{b}$, where $\vec{a},\vec{b}$
are orthogonal unit vectors. Here $\phi$ is the angle with respect to a chosen
axis. We choose this axis to be along the interface normal. Both chiral and
helical superconductors are topological superconductors [55]. The former
breaks time reversal symmetry and has chiral edge states [2, 56], the latter
preserves time-reversal symmetry and has so-called helical edge states [57].
To describe spectral and transport properties of the junction we use the
quasiclassical Green’s function (GF) formalism extended to spin-dependent
fields [58, 7, 59]. In this case, the GF $\bar{G}(\boldsymbol{r},E)$ is an
$8\times 8$ matrix in Keldysh-Nambu-spin space,
$\bar{G}=\begin{bmatrix}\check{G}^{R}&\check{G}^{K}\\\
0&\check{G}^{A}\end{bmatrix}$. In this notation, we represent matrices in
Keldysh-Nambu-spin space with a bar ($\bar{\cdot}$), matrices in Nambu-spin
space with a check ($\check{\cdot}$) accent, and matrices in spin space with a
hat ($\hat{\cdot}$). In the dirty limit, the Green’s function $\bar{G}$ is
determined by a diffusion equation, known as the Usadel equation [60]:
$D\nabla\cdot(\bar{G}\nabla\bar{G})+i[(E+\vec{h}\cdot\vec{\sigma})\tau_{3},\bar{G}]=0\;,$
(2)
where $D$ is the diffusion constant, $E$ is the energy, $\vec{h}$ is the
exchange field, $\tau_{3}$ is the third Pauli matrix in particle-hole space
and $\vec{\sigma}$ is the vector of Pauli matrices in spin space. The Usadel
equation, Eq. (2), together with the normalization condition
$\bar{G}^{2}=\bar{\mathbf{1}}$ and the boundary conditions determine the
quasiclassical GF.
The current $I$, and the differential conductance of the system $\sigma$, can
be calculated from the quasiclassical GF using the following expressions:
$\displaystyle I$
$\displaystyle=\frac{\sigma_{N}}{16e}\int_{-\infty}^{\infty}\mathrm{d}E\text{Tr}\left\\{\tau_{3}(\bar{G}\nabla\bar{G})^{K}\right\\}\;,$
(3) $\displaystyle\sigma$ $\displaystyle=\frac{\partial I}{\partial V}\;,$ (4)
where $\sigma_{N}$ is the normal state conductance and $e$ is the electron
charge.
In order to solve the Usadel equation, Eq. (2), in the F region one needs
boundary conditions describing both interfaces. We assume that the S and N
electrodes are not affected by the F, and keep their bulk properties, that is,
they are treated as reservoirs. At the F/N interface we use the well known
Kupriyanov-Lukichev boundary condition [61], which is written as:
$\displaystyle\bar{G}\nabla\bar{G}(x=L)=\frac{1}{\gamma_{BN}L}[\bar{G}(x=L),\bar{G}_{N}]\;.$
(5)
Here $\bar{G}_{N}$ is the bulk normal metal GF, that is,
$\check{G}_{N}^{R}=\tau_{3}$ and its distribution function is the Fermi-Dirac
distribution function. The transparency of the junction is parameterized by
$\gamma_{BN}$, which is proportional to the interface resistance. In the case of a perfectly transparent interface, $\gamma_{BN}\rightarrow 0$ and Eq. (5) is
equivalent to the continuity of $\bar{G}$ at this interface, that is,
$\bar{G}(x=L)=\bar{G}_{N}$.
At the S/F interface we use the Tanaka-Nazarov boundary conditions [62, 63].
These boundary conditions are an extension of the Nazarov boundary conditions [64], which are themselves a generalisation of the Kupriyanov-Lukichev boundary conditions. Here we use a new form of the Tanaka-Nazarov boundary conditions [65], which is better suited to s+p-wave superconductors. Defining $\phi$
as the injection angle with respect to the interface normal vector, the
boundary condition reads:
$\bar{G}\nabla\bar{G}(x=0)=\frac{1}{\gamma_{BS}L}\langle\bar{S}(\phi)\rangle\;,$
(6)
where
$\displaystyle\bar{S}(\phi)$
$\displaystyle=\tilde{T}(1+T_{1}^{2}+T_{1}(\bar{C}\bar{G}+\bar{G}\bar{C}))^{-1}(\bar{C}\bar{G}-\bar{G}\bar{C})\;,$
(7) $\displaystyle\bar{C}$
$\displaystyle=\bar{H}_{+}^{-1}(\bar{\mathbf{1}}-\bar{H}_{-})\;,$ (8)
$\displaystyle\bar{H}_{+}$
$\displaystyle=\frac{1}{2}(\bar{G}_{S}(\phi)+\bar{G}_{S}(\pi-\phi))\;,$ (9)
$\displaystyle\bar{H}_{-}$
$\displaystyle=\frac{1}{2}(\bar{G}_{S}(\phi)-\bar{G}_{S}(\pi-\phi))\;.$ (10)
Here we use the notation $\langle\cdot\rangle$ to denote angular averaging
over all modes that pass through the interface, $\gamma_{BS}=R_{B}/R_{d}$ is
the ratio of the boundary resistance to the resistivity of the F bar in the
absence of a proximity effect,
$T_{1}=\tilde{T}/(2-\tilde{T}+2\sqrt{1-\tilde{T}})$, and $\tilde{T}$ is the
interface transparency given by
$\displaystyle\tilde{T}(\phi)=\frac{\cos^{2}\phi}{\cos^{2}{\phi}+z^{2}}\;,$
(11)
where $z$ is the BTK parameter [66], characterizing the strength of the
barrier. It is assumed that the Fermi surface mismatch is negligible, that is, that the Fermi momenta in the superconductor and ferromagnet are of similar magnitude. If $z=0$, there is no barrier. In that case
the junction is highly transparent, and there is no reflection for any mode.
On the other hand, if $z$ is large, the barrier is strong and the boundary has
a low transparency.
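As an illustrative sketch (our own, not part of the paper's numerics; function names are ours), the transparency of Eq. (11) and the derived parameter $T_{1}$ entering the Tanaka-Nazarov boundary condition can be evaluated directly:

```python
import numpy as np

def transparency(phi, z):
    """Interface transparency of Eq. (11) for injection angle phi and BTK parameter z."""
    c2 = np.cos(phi) ** 2
    return c2 / (c2 + z ** 2)

def t1(t_tilde):
    """T1 = T / (2 - T + 2*sqrt(1 - T)), as defined below Eq. (10)."""
    return t_tilde / (2.0 - t_tilde + 2.0 * np.sqrt(1.0 - t_tilde))

# A barrier-free interface (z = 0) at normal incidence is fully transparent:
print(transparency(0.0, 0.0))            # 1.0
print(t1(transparency(0.0, 0.0)))        # 1.0
# A finite barrier (z = 0.75, as used in the figures) reduces the transparency:
print(transparency(0.0, 0.75))
```

Note that $T_{1}\leqslant 1$ for $0\leqslant\tilde{T}\leqslant 1$, so the boundary term in Eq. (7) stays well defined for all modes.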
In Eqs. (9) and (10) $\bar{G}_{S}(\phi)$ is the Green’s function of a bulk BCS
superconductor with pair potential given by Eq. (1). We parameterize the pair potential as
$\displaystyle\hat{\Delta}(\phi)=\Delta_{0}\left(\frac{1}{\sqrt{r^{2}+1}}+\frac{r}{\sqrt{r^{2}+1}}\vec{d}(\phi)\cdot\vec{\sigma}\right)\;,$
(12)
where $\Delta_{0}$ is the energy scale of the superconducting potential,
$r=\frac{\Delta_{t}}{\Delta_{s}}$ the mixing parameter, and $\vec{d}(\phi)$ is
the orientation of the angular dependent d-vector. The matrix pair potential,
Eq. (12), has two eigenvalues, which are both independent of $\phi$, given by
$\displaystyle\Delta_{\pm}=\Delta_{0}\frac{1\pm r}{\sqrt{r^{2}+1}}\;.$ (13)
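As a quick consistency check (our own sketch, not the paper's code), one can verify numerically that the eigenvalues of the matrix pair potential, Eq. (12), with the helical d-vector $\vec{d}(\phi)=(\cos\phi,\sin\phi,0)$ are independent of $\phi$ and reproduce Eq. (13):

```python
import numpy as np

# Pauli matrices in spin space
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def pair_potential(phi, r, delta0=1.0):
    """Matrix pair potential of Eq. (12) with d(phi) = (cos phi, sin phi, 0)."""
    norm = np.sqrt(r ** 2 + 1.0)
    singlet = delta0 / norm * np.eye(2, dtype=complex)
    triplet = delta0 * r / norm * (np.cos(phi) * sx + np.sin(phi) * sy)
    return singlet + triplet

r, delta0 = 2.0, 1.0
expected = np.sort(delta0 * np.array([1.0 - r, 1.0 + r]) / np.sqrt(r ** 2 + 1.0))
for phi in [0.0, 0.4, 1.3]:
    evals = np.sort(np.linalg.eigvalsh(pair_potential(phi, r, delta0)))
    # The spectrum matches Delta_pm of Eq. (13) for every injection angle.
    print(evals, expected)
```

For $r>1$ the eigenvalue $\Delta_{-}$ is negative, which is why only its modulus $|\Delta_{-}|$ appears in the conductance features discussed below.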
In the dirty limit only triplet components with d-vector parallel to
$\langle\vec{d}\rangle$ are induced by the superconductor due to angular
averaging [54]. This can be understood as follows: because of the high rate of scattering, the contributions of all modes are mixed, and thus only the angular average remains.
Here we focus on a helical p-wave superconductor with
$\vec{d}(\phi)=(\cos{\phi},\sin{\phi},0)$. We have also checked that in the
chiral case similar results hold. For the helical pair potential,
$\langle\vec{d}\rangle$ points in the $x$-direction, that is, in the same
direction as the direction of the current. Since the Usadel equation is
unaltered by a change of spin basis, our results are equally valid for any
other pair potential with a d-vector of the form
$\vec{d}(\phi)=\cos{\phi}\vec{a}+\sin{\phi}\vec{b}$, where $\vec{a},\vec{b}$
are orthogonal unit vectors. Since there is no orbital effect, the results
only depend on the angle between $\langle\vec{d}\rangle$ and $\vec{h}$, and
not on the angle between $\vec{h}$ and the direction of current.
The solution of the retarded part of Eq. (2) provides information about the
spectral properties. For the computation of $\sigma$ one also needs to obtain
the Keldysh component of the GF. From the normalization condition the Keldysh
component can be written as
$\check{G}^{K}=\check{G}^{R}\check{f}-\check{f}\check{G}^{A}$, in which the
matrix structure of $\check{f}$ is given by
$\check{f}=f_{L}+f_{T}\tau_{3}+\sum_{i=1}^{3}(f_{Ti}+f_{Li}\tau_{3})\sigma_{i}\;,$
(14)
and $\check{f}$ satisfies the following equation:
$D\nabla\cdot(\nabla\check{f}-\check{G}^{R}\nabla\check{f}\check{G}^{A})=\check{G}^{R}[\tau_{3}\vec{h}\cdot\vec{\sigma},\check{f}]-[\tau_{3}\vec{h}\cdot\vec{\sigma},\check{f}]\check{G}^{A}.$
(15)
In the electrodes one assumes that the system is in equilibrium such that
$f_{L,T}(E)=\frac{1}{2}\Big{(}\tanh{\frac{E+eV}{2T}}\pm\tanh{\frac{E-eV}{2T}}\Big{)}$
[58], where $V$ is the voltage and $T$ the temperature of the corresponding electrode.
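The reservoir distribution functions above can be sketched directly (our own helper, assuming units with $k_{B}=1$ as in the paper):

```python
import numpy as np

def f_LT(E, eV, T):
    """Equilibrium distribution functions f_L (energy mode) and f_T (charge mode)
    in a reservoir at voltage V and temperature T:
    f_{L,T} = (tanh((E+eV)/2T) ± tanh((E-eV)/2T)) / 2."""
    fL = 0.5 * (np.tanh((E + eV) / (2.0 * T)) + np.tanh((E - eV) / (2.0 * T)))
    fT = 0.5 * (np.tanh((E + eV) / (2.0 * T)) - np.tanh((E - eV) / (2.0 * T)))
    return fL, fT

# At zero bias the charge mode vanishes and f_L reduces to the equilibrium tanh(E/2T).
fL, fT = f_LT(E=0.3, eV=0.0, T=0.1)
print(fT)                                   # 0.0
print(np.isclose(fL, np.tanh(0.3 / 0.2)))   # True
```

It is the charge mode $f_{T}$, driven by the applied voltage, that carries the quasiparticle current entering Eq. (3).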
In the following section we show the results obtained by numerically solving the Usadel equation, Eq. (2), together with the boundary conditions Eqs. (5)
and (6) in the SFN configuration. From the knowledge of the GF we calculate
the differential conductance given by Eqs. (3) and (4).
## III Differential conductance of the SFN junction
In this section, we study the differential conductance for different
magnitudes and directions of the exchange field, and for two superconducting
regimes: the s-wave dominated or p-wave dominated cases, corresponding to
$r<1$ and $r>1$ respectively.
We first focus on the spectral properties of the F layer. The superconducting
correlations in F, induced by the proximity effect, have the general matrix
form:
$\displaystyle\hat{F}$
$\displaystyle=F_{0}\hat{\mathbf{1}}+F_{h}\vec{m}\cdot\vec{\sigma}+F_{d}\vec{d}_{\perp}\cdot\vec{\sigma}\;,$
(16)
where $F_{0}$ is the singlet component whereas the other two are triplet
components, either induced by the exchange field in F or by the proximity
effect. In the equation above, $\vec{m}$ is a unit vector pointing in the
direction of the exchange field, and $\vec{d}_{\perp}$ is a unit vector in the
direction of
$\langle\vec{d}\rangle-(\langle\vec{d}\rangle\cdot\vec{m})\vec{m}$. If
$\vec{d}$ and $\vec{m}$ are parallel this term is absent.
It is instructive to linearize the Usadel equation assuming a weak proximity
effect. In this case the pair amplitudes obey the following linear
differential equations:
$\displaystyle D\nabla^{2}(F_{0}\pm F_{h})$ $\displaystyle=2i(E\pm h)(F_{0}\pm
F_{h}),$ (17) $\displaystyle D\nabla^{2}F_{d}$ $\displaystyle=2iEF_{d}\;.$
(18)
The first equation reflects the singlet to (short-range) triplet conversion via
the exchange field known in ferromagnets [67]. According to Eq. (17),
$F_{0}\pm F_{h}$ decay over the magnetic length $\xi_{F}=\sqrt{\frac{D}{2|E\pm
h|}}$. In contrast, according to Eq. (18) the triplet component orthogonal to
the local exchange field, $F_{d}$, decays over the thermal length
$\xi_{E}=\sqrt{\frac{D}{2E}}$. In other words, if the exchange field and the
d-vector are parallel, only $F_{0}$ and $F_{h}$ are non-zero; if $\vec{h}$ and $\langle\vec{d}\rangle$ are perpendicular, $F_{d}$ is non-zero,
and there are long-range triplet correlations. Thus, for large enough exchange
field or long enough junctions, specifically if $h$ is much larger than the
Thouless energy $E_{\text{Th}}=D/L^{2}$, $F_{d}$ dominates the proximity
effect and hence the subgap transport of the junction.
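The two decay lengths appearing in Eqs. (17) and (18) can be compared numerically (a minimal sketch with our own function names, in units where $D$ sets the length scale):

```python
import numpy as np

def xi_F(E, h, D=1.0):
    """Magnetic decay lengths sqrt(D / (2|E ± h|)) of the short-range components F0 ± Fh."""
    return np.sqrt(D / (2.0 * abs(E + h))), np.sqrt(D / (2.0 * abs(E - h)))

def xi_E(E, D=1.0):
    """Thermal decay length sqrt(D / (2E)) of the long-range component F_d."""
    return np.sqrt(D / (2.0 * E))

# Take h >> E_Th = D/L^2 with L = 1, at a subgap energy E << E_Th:
E, h, D, L = 0.01, 100.0, 1.0, 1.0
print(xi_F(E, h, D))   # both much smaller than L: F0 ± Fh decay near the S interface
print(xi_E(E, D))      # much larger than L: F_d survives across the junction
```

This is the quantitative content of the statement above: once $h\gg E_{\text{Th}}$, only $F_{d}$ reaches the N electrode and controls the subgap transport.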
We now go beyond the linearized case and compute numerically the differential
conductance of the SFN junction. We choose the following interface parameters [see Eqs. (6-11)]: $\gamma_{BS}=2$, $z=0.75$, and we assume a perfect contact at the FN interface at $x=L$, that is, $\gamma_{BN}=0$ in Eq. (5). First we assume a long junction with $(\frac{L}{\xi})^{2}=50$, where $\xi=\sqrt{\frac{D}{2\Delta_{0}}}$. The direction and amplitude of the exchange field are varied. It is convenient to use the so-called Riccati parameterization [68]. The Riccati parameterization and resulting equations are discussed in Appendix B.1. The solution method for the distribution functions is discussed in Appendix B.2.
The results for $\vec{h}\parallel\vec{d}$ and $\vec{h}\perp\vec{d}$ are shown
in Fig. 2 for different values of the mixing parameter $r$.
Figure 2: The differential conductance in the SFN junction for (a) singlet
dominant ($r=0.5$) pair potential and (b) triplet dominant ($r=2$) pair
potential and different orientations of the exchange field $h=10\Delta_{0}$.
For both panels $L/\xi=10$, $\gamma_{BS}=2$ and $z=0.75$ are used. If the
exchange field is parallel to the average d-vector the zero bias conductance
peak (ZBCP) for triplet dominant pair potentials ($r>1$) is highly suppressed,
whereas this is not the case if the exchange field is perpendicular to the
d-vector.
Figure 3: $\sigma(V)$ curves for different values of the exchange field, for
the triplet dominant case $r=2$ for (a) perpendicular and (b) parallel
exchange fields. In both panels we choose $L/\xi=10$, $\gamma_{BS}=2$ and
$z=0.75$.
There is a clear difference between the s-wave dominated and p-wave dominated
pair potential junction. In the s-wave dominated case, Fig. 2(a), there is no
ZBCP and the dependence of the differential conductance on the direction of
the exchange field is weak. In the p-wave dominated case, Fig. 2(b), there is a
ZBCP, with both a dome-like peak and a sharp peak. The dome-like peak has a
width of the order of $\Delta_{0}$ and is due to surface Andreev bound states
(SABS) [69, 70]. On the other hand, the sharp peak has a width of the order of the Thouless energy. Such sharp peaks can also appear in systems with
conventional superconductivity [71]. The sharp zero bias conductance peak is
significantly suppressed when $\vec{h}$ is parallel to the d-vector, but not
suppressed when $\vec{h}$ is perpendicular to the d-vector. The anisotropy in
the response to an exchange field implies that the setup can be used to detect
the presence of triplet pairing, and also to find the direction of the
d-vector of the triplet pairing.
Figure 4: $\sigma(V)$ curves for different orientations of the exchange field,
and $h=0.2\Delta_{0}$ (a) and $h=5\Delta_{0}$ (b). For both panels $r=2$,
$L/\xi=10$, $\gamma_{BS}=2$, and $z=0.75$ are used. Figure 5: The magnitude
of the zero bias conductance relative to the normal state conductance as a
function of the direction of the exchange field for a weak ($h=0.1\Delta_{0}$)
and a strong ($h=10\Delta_{0}$) exchange field for $r=2$. Other parameters
were set to $L/\xi=10$, $\gamma_{BS}=2$ and $z=0.75$.
To investigate the effect of the exchange field in more detail, the dependence
of $\sigma$ on the strength of the exchange field for a triplet dominated pair
potential ($r=2$) is shown in Fig. 3. If the exchange field is perpendicular
to the d-vector, Fig. 3(a), the differential conductance has only a very small
dependence on the strength of the exchange field. If however, the exchange
field is parallel to the d-vector, Fig. 3(b), for low exchange fields the
sharp peak in $\sigma$ is shifted towards $eV\sim h$ and lowered, whereas the
dome-like ZBCP is unaffected. As the exchange field is increased only a dome-
like peak remains, without the sharp contribution. The differential
conductance for $eV\in(|\Delta_{-}|,\Delta_{+})$ is also slightly affected by
the strength of the exchange field, but this effect is orders of magnitude
smaller than the angular dependence of the zero bias conductance.
As the exchange field is rotated, the differential conductance varies between
these two extremes in a continuous fashion. In Fig. 4 we show the $\sigma(eV)$
curves for different values of the angle $\alpha$ between the exchange field
direction and the d-vector. We focus on the triplet dominant case, $r=2$, for
weak, Fig. 4(a), and strong, Fig. 4(b), exchange field. For small exchange
fields, Fig. 4(a), a sharp peak at a nonzero voltage $eV\sim h$ develops as
the angle $\alpha$ between $\vec{h}$ and $\langle\vec{d}\rangle$ is decreased.
If $\alpha\approx\frac{\pi}{4}$ there is a double peak structure. As $\alpha$
decreases towards zero the ZBCP disappears. For large exchange fields, Fig. 4(b), no second peak appears; a decrease of $\alpha$ only leads to a suppression of the zero bias conductance.
Figure 6: The voltage dependence of $\sigma-\sigma(\alpha=0)$ for different
values of the angle $\alpha$ for a singlet dominant junction ($r=0.5$) and for
$h=0.2\Delta_{0}$ (a) and $h=10\Delta_{0}$ (b). The anisotropy is largest for
$\Delta_{-}<eV<\Delta_{+}$, as indicated with dashed lines. In both panels
$L/\xi=10$, $\gamma_{BS}=2$ and $z=0.75$.
The angular dependence of the zero bias conductance for $r=2$ is shown in Fig.
5. As the exchange field is rotated from a perpendicular orientation towards a
parallel orientation the zero bias conductance decreases monotonically. The
effect is stronger if the exchange field is increased.
For the singlet dominated case the exchange field dependence of the
differential conductance is significantly weaker, but present. To highlight
the angular dependence of $\sigma$, we show the change in $\sigma$ when
rotating the direction of the exchange field for $r=0.5$ (Fig. 6).
Specifically, we show $\sigma-\sigma(\alpha=0)$ as a function of voltage for
several different values of $\alpha$. Results for $h/\Delta_{0}=0.2$ are shown
in Fig. 6(a), and results for $h/\Delta_{0}=10$ are shown in Fig. 6(b).
Notably, we find a sizable angular dependence of the differential conductance
in the range $|\Delta_{-}|<eV<\Delta_{+}$. The presence of this regime is
indicative of the presence of a mixed potential, since it is absent for $r=0$
and $r=\infty$. Moreover, the anisotropy is very small for $eV<\Delta_{-}$ and
decays sharply if $eV$ is increased above $\Delta_{+}$. This means that the
results can be used to infer $\Delta_{\pm}$ as illustrated by the dashed lines
at $eV=|\Delta_{\pm}|$ in Fig. 6. From this the mixing parameter $r$ can be
calculated. Thus, measuring the differential conductance provides a way to
estimate both the direction of the d-vector and the value of the mixing
parameter $r$.
The zero bias conductance in SNN s+helical p-wave junctions has another
interesting feature [54]. For superconductors of this type, the zero bias
conductance is independent of the particular value of $r$; it only depends on
whether $r>1$ or $r<1$. We show in Fig. 7 that this property still holds in
the presence of an exchange field, that is, also the exchange field dependence
is independent of the particular value of $r$. This can be understood as
follows. The mixing parameter $r$ only enters through the Tanaka-Nazarov
boundary condition, Eq. (6). Since the exchange field does not enter this
boundary condition, its effect on the zero bias conductance is independent of
the particular value of $r$. The sharp distinction between the two regimes
suggests the presence of a quantum phase transition at $r=1$ between singlet
dominated and triplet dominated superconductivity. For $T\neq 0$ the
dependence of $\sigma(eV=0)$ on $r$ becomes smooth, as shown in Fig. 7.
Figure 7: The zero bias conductance peak as a function of the ratio $r$ of the
magnitude of the singlet and triplet components of the pair potential for zero
(red curve) and finite temperature (blue curve) for a perpendicular field with
$h/\Delta_{0}=0.2$. At zero temperature there is a discontinuity, hinting at a phase transition. Insets: The dependence of the zero temperature zero bias conductance on the angle between the exchange field and d-vector is independent of the particular value of the ratio $\Delta_{t}/\Delta_{s}$. Other
parameters are $L/\xi=10$, $\gamma_{BS}=2$ and $z=0.75$.
The results for junctions with an s-wave dominated pair potential (Fig. 6)
show that the anisotropy in the differential conductance for nonzero voltages
can be used to determine the mixing parameter. For the long junction with
p-wave dominated superconductors (Fig. 3) however, the anisotropy of the zero
bias conductance is much larger than the anisotropy in the range
$|\Delta_{-}|<eV<\Delta_{+}$ and thus the mixing parameter is hard to
determine. Therefore, a shorter SF junction ($L=\xi$) with a low-transparency barrier ($\gamma_{BN}=10$) is investigated. In that case the Thouless energy
is large and there is no sharp peak, as shown in Fig. 8 for $r=2$. Only the
dome-like peak remains, which has a much weaker dependence on the exchange
field. The angular dependence of the conductance is largest for
$|\Delta_{-}|<eV<\Delta_{+}$. The results for $h/\Delta_{0}=0.2$ are shown in
Fig. 8(a). In Fig 8(b) it is shown that this angular dependence is monotonic
and $\sigma$ is maximized if the d-vector and exchange field are parallel.
This is in contrast to the zero bias conductance, which is maximized if the
d-vector and exchange field are perpendicular. This difference in sign
compared to the anisotropy of the zero bias conductance peak can be used as a
verification. Therefore, $|\Delta_{-}|$ and $\Delta_{+}$, and thus the mixing
parameter $r$ can be determined accurately, as indicated by the dashed lines
in Fig. 8(a).
Figure 8: Differential conductance of a short SFN junction for the triplet
dominant case, $r=2$ and $h=10\Delta_{0}$. (a) $\sigma(V)$ curves for $h\perp
d$ and $h\parallel d$. (b) differential conductance as a function of the angle
$\alpha$ for different voltages in the three regimes defined by
$|\Delta_{-}|\approx 0.5$ and $\Delta_{+}\approx 1.5$. The regimes are
indicated by dashed lines at $eV=|\Delta_{\pm}|$. For both panels $L/\xi=1$,
$\gamma_{BS}=2$, $z=0.75$ and $\lambda/\xi=10$ are used.
## IV Discussion and Conclusions
We have shown that for an SFN junction an electrical measurement, namely the
differential conductance, can be used to identify s+p-wave pairing, and to
distinguish different types of s+p-wave superconductors. The Keldysh-Usadel
equation together with the Tanaka-Nazarov boundary conditions have been used
to calculate the differential conductance, $\sigma$, for a junction between an
s+p-wave superconductor and a ferromagnetic metal. We have found that the
$\sigma(eV)$ curves depend on both the relative strength of the singlet and
triplet components and the direction of the exchange field. If the exchange
field is parallel to the d-vector of the s+p-wave superconductor, the zero
bias conductance peak is suppressed and a finite bias peak appears. On the
other hand, if the exchange field is perpendicular to the injected spins the
zero bias conductance peak is independent of the exchange field strength.
Thus, the experiment that we propose based on our calculations provides a tool to characterize the pair potential of superconductors using only electrical measurements: the dependence of the zero bias conductance peak on the exchange field can be easily extracted from the results.
Our results can be used to determine not only the direction of the d-vector,
but also the mixing parameter $r$. Therefore, by doing the experiment modelled
in our paper, the pair potential of mixed potential superconductors can be
fully characterized.
We found that both the regimes $h\gg\Delta_{0}$ and $h<\Delta_{0}$ are of
interest. Ferromagnets like Fe, Co, or Ni have exchange fields typically much
larger than the critical temperatures of superconductors, and thus, they can
be used for the regime $h\gg\Delta_{0}$. To access the regime $h<\Delta_{0}$
as well, one can use a thin normal metal layer, proximitized by a ferromagnetic
insulator [72, 73].
Our method can be generalized to study more general types of mixed-parity
superconductors, including the possibility of d-wave or f-wave pair
potentials.
## Acknowledgements
We thank S. Ilić, A.A. Golubov and Y. Tanaka for fruitful discussions. We
acknowledge financial support from Spanish AEI through project
PID2020-114252GB-I00 (SPIRIT), the Basque Government through grant IT-1591-22,
and European Union’s Horizon 2020 Research and Innovation Framework Programme
under Grant No. 800923 (SUPERTED). A.H. acknowledges funding by the University
of the Basque Country (Project PIF20/05). F.S.B. thanks Prof. Björn Trauzettel
for his hospitality at Würzburg University, and the A. v. Humboldt Foundation
for financial support.
## Appendix A Weak proximity effect
In this appendix, we present analytic results obtained in the limit of a weak
proximity effect. We show that in the case of a field perpendicular to the
d-vector there are long range triplet correlations [7], whereas in the case of
a parallel field all triplet correlations are short-range. These results
indicate why the properties of the junction are different for different
orientations of the exchange field with respect to the direction of the
d-vector. The applied formalism can only be used for the retarded part, since
the effect of the superconductor on the differential conductance is of second
order in the pairing amplitudes and is thus ignored in this limit.
If the proximity effect in the junction is small, that is, the pair amplitudes
are small compared to the density of states, the following approximation can
be made:
$\check{G}^{R}\approx\begin{bmatrix}\hat{\mathbf{1}}&\hat{F}\\\
-\hat{\tilde{F}}&-\hat{\mathbf{1}}\end{bmatrix}\;,$ (A.1)
where $\hat{F},\hat{\tilde{F}}$ are the pair amplitudes. This parameterization
satisfies the normalisation condition up to first order. Introducing
$\vec{d}_{\perp}$ as a unit vector in the direction of
$\langle\vec{d}\rangle-(\vec{m}\cdot\langle\vec{d}\rangle)\vec{m}$, where the
notation $\vec{m}$ is used to denote a unit vector in the direction of
$\vec{h}$, $\hat{F}$ can be decomposed into
$\displaystyle\hat{F}$
$\displaystyle=F_{0}\hat{\mathbf{1}}+F_{h}\vec{m}\cdot\vec{\sigma}+F_{d}\vec{d}_{\perp}\cdot\vec{\sigma}+F_{dh}(\vec{d}_{\perp}\times\vec{m})\cdot\vec{\sigma}\;,$
(A.2) $\displaystyle F_{\pm}$ $\displaystyle=F_{0}\pm F_{h}\;,$ (A.3)
and we decompose the components of $\hat{\tilde{F}}$ analogously. Note that this decomposition does not require any additional assumption; it is general for matrices in $\mathbb{C}^{2\times 2}$. The following equations are satisfied:
$\displaystyle D\nabla^{2}F_{\pm}=2i(E\pm h)F_{\pm}\;,$ (A.4) $\displaystyle
D\nabla^{2}F_{d,dh}=2iEF_{d,dh}\;.$ (A.5)
Taking into account that $F(x=L)=0$ due to the good contact with the normal
metal reservoir at $x=L$, the solutions to Eqs. (A.4) and (A.5) read:
$\displaystyle F_{\pm}=C_{\pm}\sinh{\sqrt{\frac{2i(E\pm h)}{D}}(L-x)}\;,$
(A.6) $\displaystyle F_{d,dh}=C_{d,dh}\sinh{\sqrt{\frac{2iE}{D}}(L-x)}\;.$
(A.7)
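That the sinh profile of Eq. (A.7) solves the long-range equation (A.5) with the vanishing boundary value at $x=L$ can be checked numerically (our own sketch; the amplitude $C$ is set to 1, and the complex wave number is $k=\sqrt{2iE/D}$):

```python
import numpy as np

# Check that F(x) = sinh(k (L - x)) with k = sqrt(2iE/D)
# satisfies D F'' = 2iE F and F(L) = 0.
D, E, L = 1.0, 0.3, 1.0
k = np.sqrt(2j * E / D)

x = np.linspace(0.0, L, 2001)
F = np.sinh(k * (L - x))

d2F = np.gradient(np.gradient(F, x), x)   # numerical second derivative
residual = D * d2F - 2j * E * F

print(abs(F[-1]))                          # ~0: boundary condition at x = L
print(np.max(np.abs(residual[2:-2])))      # small away from the endpoints
```

The same check applies to the short-range solutions of Eq. (A.6) with $k=\sqrt{2i(E\pm h)/D}$; only the decay scale changes.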
Now, at $x=0$, the relation
$\check{G}^{R}\nabla\check{G}^{R}=\frac{1}{\gamma_{BS}L}[\check{C}^{R},\check{G}^{R}]$
should be satisfied, where $\check{C}^{R}$ is the retarded part of $\bar{C}$,
the boundary term presented in the main text.
The pair amplitudes $F_{\pm}$ have a decay length
$\xi_{F}=\sqrt{\frac{D}{|E\pm h|}}$, whereas the the pair amplitudes
$F_{d,dh}$ decay over a length $\xi_{E}=\sqrt{\frac{D}{E}}$, and are thus
unaffected by the exchange field. These are the so-called long-range triplet
correlations. Using the Tanaka-Nazarov boundary conditions, explicit
expressions for the coefficients can be found. For clarity of notation we only
show the case in which a single mode, the one at normal incidence, contributes.
If other modes are taken into account the notation becomes more cumbersome,
but the results are very similar.
First, consider the case in which $\vec{m}=\langle\vec{d}\rangle$. In that case
all spin dependent terms in the problem are proportional to
$\vec{m}\cdot\vec{\sigma}$. This implies that
$(\vec{m}\cdot\vec{\sigma})\check{G}(\vec{m}\cdot\vec{\sigma})=\check{G}$.
Therefore, $F_{\pm},\tilde{F}_{\pm}$ are the only nonzero components.
The boundary conditions imply
$\displaystyle C_{\pm}$ $\displaystyle=\frac{1}{\sinh{(\sqrt{2i\frac{E\pm
h}{D}}L)}}\left(2i(E\pm
h)+\frac{1}{\gamma_{BS}}\frac{2(g_{+}+g_{-})}{1+g_{+}g_{-}-f_{+}f_{-}}\right)^{-1}\frac{1}{1+g_{+}g_{-}-f_{+}f_{-}}(f_{+}+f_{-}\pm(g_{+}f_{-}-g_{-}f_{+}))\;,$
(A.8) $\displaystyle\tilde{C}_{\pm}$
$\displaystyle=\frac{1}{\sinh{(\sqrt{2i\frac{E\pm h}{D}}L)}}\left(2i(E\pm
h)+\frac{1}{\gamma_{BS}}\frac{2(g_{+}+g_{-})}{1+g_{+}g_{-}-f_{+}f_{-}}\right)^{-1}\frac{1}{1+g_{+}g_{-}-f_{+}f_{-}}(f_{+}+f_{-}\mp(g_{+}f_{-}-g_{-}f_{+}))\;.$
(A.9)
For $h=0$, the contributions proportional to $f_{+}+f_{-}$ have the same sign
in $C_{+}$ and $C_{-}$. Therefore, they only contribute to $F_{+}+F_{-}=F_{0}$
and are singlets induced by the singlet component of the pair potential. For
$h\neq 0$, the contributions induced by the singlet pair potential are
partially singlets and partially triplets. On the other hand, the terms
proportional to $(g_{+}f_{-}-g_{-}f_{+})$ are induced by the triplet component
of the pair potential. For $h=0$ they only contribute to $F_{+}-F_{-}=F_{h}$
and thus they are triplets, but for $h\neq 0$ they are partially singlets and
partially triplets.
On the other hand, if $\vec{h}$ and $\vec{d}$ are perpendicular, the terms induced by the triplet part of the s+p-wave pair potential drop out of the boundary condition for $F_{\pm},\tilde{F}_{\pm}$, and the expressions for $C_{\pm},\tilde{C}_{\pm}$ reduce to
$\displaystyle C_{\pm}$
$\displaystyle=\tilde{C}_{\pm}=\frac{1}{\sinh{(\sqrt{2i\frac{E\pm
h}{D}}L)}}\left(2i(E\pm
h)+\frac{1}{\gamma_{BS}}\frac{2(g_{+}+g_{-})}{1+g_{+}g_{-}-f_{+}f_{-}}\right)^{-1}\frac{1}{1+g_{+}g_{-}-f_{+}f_{-}}(f_{+}+f_{-})\;.$
(A.10)
Again, these terms are singlets for $h=0$ and become partially singlets,
partially triplets for $h\neq 0$. The component $F_{d}$ is also nonzero in
this case,
$\displaystyle C_{d}$
$\displaystyle=-\tilde{C}_{d}=\frac{1}{2iE\sinh{(\sqrt{2i\frac{E}{D}}L)}}\frac{(g_{+}f_{-}-g_{-}f_{+})}{1+g_{+}g_{-}-f_{+}f_{-}}\;.$
(A.11)
Since the equation for $F_{d}$ is not mixed with the equation for $F_{0}$,
these triplet correlations cannot be converted to singlet correlations. The
boundary condition still has no terms entering Eq. (A.5) for $F_{dh}$, and the
equation for $F_{dh}$ is uncoupled from the other equations. Therefore,
$F_{dh}=0$ for a perpendicular orientation as well.
In conclusion, in the case of a parallel field, the only nonzero components of
the anomalous Green's function are $F_{\pm}$, which decay on a length scale
$\xi_{F}=\sqrt{\frac{D}{2|E\pm h|}}$, whereas in the case of a perpendicular
field, there are long-range correlations decaying on a scale
$\xi_{E}=\sqrt{\frac{D}{2|E|}}$. Thus, in the case of a perpendicular field
the correlations extend over the full junction as $E\to 0$. This
explains the strong anisotropy of the junction with respect to the exchange
field.
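The contrast between the two length scales can be made concrete numerically. The sketch below uses illustrative values of $D$ and $h$ (hypothetical, not taken from the text) and evaluates $\xi_F$ and $\xi_E$ as $E\to 0$:

```python
import numpy as np

# Illustrative parameters (hypothetical, dimensionless units):
# diffusion constant D and exchange field h.
D, h = 1.0, 0.5
E = np.array([0.5, 0.1, 0.01, 1e-4])  # energies approaching zero

xi_F = np.sqrt(D / (2 * np.abs(E + h)))  # short-range scale, parallel field
xi_E = np.sqrt(D / (2 * np.abs(E)))      # long-range scale, perpendicular field

for e, f, l in zip(E, xi_F, xi_E):
    print(f"E = {e:8.1e}:  xi_F = {f:6.3f}   xi_E = {l:8.2f}")
```

As $E\to 0$, $\xi_F$ saturates at $\sqrt{D/2h}$ while $\xi_E$ diverges, reflecting the anisotropy discussed above.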
## Appendix B Solution procedure
In this appendix, we discuss the implementation of the Usadel equation using
the parameterization introduced in the main text.
### 1 Retarded equations
The equation for the retarded part $\check{G}^{R}$ reads
$\displaystyle
D\nabla\cdot(\check{G}^{R}\nabla\check{G}^{R})+i[(E+\vec{h}\cdot\vec{\sigma})\tau_{3},\check{G}^{R}]=0\;.$
(B.1)
The Riccati-parameterization [68] is as follows:
$\check{G}^{R}=\begin{bmatrix}(1+\hat{\gamma}\hat{\tilde{\gamma}})^{-1}(1-\hat{\gamma}\hat{\tilde{\gamma}})&2(1+\hat{\gamma}\hat{\tilde{\gamma}})^{-1}\hat{\gamma}\\\
2(1+\hat{\tilde{\gamma}}\hat{\gamma})^{-1}\hat{\tilde{\gamma}}&-(1+\hat{\tilde{\gamma}}\hat{\gamma})^{-1}(1-\hat{\tilde{\gamma}}\hat{\gamma})\end{bmatrix}\;.$
(B.2)
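A useful consistency check, sketched below with NumPy and randomly chosen $2\times 2$ Riccati matrices, is that the parameterization of Eq. (B.2) satisfies the quasiclassical normalization $(\check{G}^{R})^{2}=1$ identically:

```python
import numpy as np

rng = np.random.default_rng(0)
# Arbitrary 2x2 complex Riccati matrices gamma and gamma-tilde (spin space).
g  = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
gt = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

I2 = np.eye(2)
N  = np.linalg.inv(I2 + g @ gt)   # (1 + gamma gamma~)^{-1}
Nt = np.linalg.inv(I2 + gt @ g)   # (1 + gamma~ gamma)^{-1}

# Assemble the 4x4 retarded Green's function of Eq. (B.2).
GR = np.block([[N @ (I2 - g @ gt),  2 * N @ g],
               [2 * Nt @ gt,       -Nt @ (I2 - gt @ g)]])

# The normalization (G^R)^2 = 1 holds for any gamma, gamma-tilde.
print(np.allclose(GR @ GR, np.eye(4)))  # True
```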
Inserting the parameterization in Eq. (B.2) into the Usadel equation, Eq. (2),
we find, using a derivation similar to the one presented in [74], that the
Riccati matrices satisfy the following equations:
$\displaystyle\nabla^{2}\hat{\gamma}-2\nabla\hat{\gamma}\cdot\hat{\tilde{N}}\hat{\tilde{\gamma}}\nabla\hat{\gamma}$
$\displaystyle=\frac{2\omega}{D}\hat{\gamma}+\frac{i}{D}\\{\vec{h}\cdot\vec{\sigma},\hat{\gamma}\\}\;,$
(B.3)
$\displaystyle\nabla^{2}\hat{\tilde{\gamma}}-2\nabla\hat{\tilde{\gamma}}\cdot\hat{N}\hat{\gamma}\nabla\hat{\tilde{\gamma}}$
$\displaystyle=\frac{2\omega}{D}\hat{\tilde{\gamma}}+\frac{i}{D}\\{\vec{h}\cdot\vec{\sigma},\hat{\tilde{\gamma}}\\}\;.$
(B.4)
The boundary conditions at the SN interface are
$\displaystyle\nabla\hat{\gamma}$
$\displaystyle=\frac{1}{\gamma_{BS}L}\frac{1}{2}(\hat{\mathbf{1}}+\hat{\gamma}\hat{\tilde{\gamma}})(\hat{I}^{R}_{S12}-\hat{I}^{R}_{S11}\hat{\gamma})\;,$
(B.5) $\displaystyle\nabla\hat{\tilde{\gamma}}$
$\displaystyle=\frac{-1}{\gamma_{BS}L}\frac{1}{2}(\hat{\mathbf{1}}+\hat{\tilde{\gamma}}\hat{\gamma})(\hat{I}^{R}_{S21}+\hat{I}^{R}_{S22}\hat{\tilde{\gamma}})\;,$
(B.6)
where
$\hat{I}^{R}_{S11}=\text{Tr}_{\tau}\frac{1}{2}(1+\tau_{3})\check{I}^{R}_{S}$,
$\hat{I}^{R}_{S12}=\text{Tr}_{\tau}\frac{1}{2}(\tau_{1}+i\tau_{2})\check{I}^{R}_{S}$,
$\hat{I}^{R}_{S22}=\text{Tr}_{\tau}\frac{1}{2}(1-\tau_{3})\check{I}^{R}_{S}$,
$\hat{I}^{R}_{S21}=\text{Tr}_{\tau}\frac{1}{2}(\tau_{1}-i\tau_{2})\check{I}^{R}_{S}$,
with $\text{Tr}_{\tau}$ denoting the partial trace over Nambu space, and
$\displaystyle\check{I}^{R}_{S}=\langle\tilde{T}(1+T_{1}^{2}+T_{1}(\check{C}^{R}\check{G}^{R}(x=0)+\check{G}^{R}(x=0)\check{C}^{R}))^{-1}(\check{C}^{R}\check{G}^{R}(x=0)-\check{G}^{R}(x=0)\check{C}^{R})\rangle\;,$
(B.7)
where $\check{G}^{R}(x=0)$ is found by substituting $\hat{\gamma}(x=0)$ and
$\hat{\tilde{\gamma}}(x=0)$ into Eq. (B.2), and $T_{1}$ and $\check{C}^{R}$ are as defined in
the main text. Similarly, the boundary conditions at the interface with the
normal metal reservoir read
$\displaystyle\nabla\hat{\gamma}$
$\displaystyle=\frac{-1}{\gamma_{BN}L}\frac{1}{2}(\hat{\mathbf{1}}+\hat{\gamma}\hat{\tilde{\gamma}})(\hat{I}^{R}_{N12}-\hat{I}^{R}_{N11}\hat{\gamma})\;,$
(B.8) $\displaystyle\nabla\hat{\tilde{\gamma}}$
$\displaystyle=\frac{1}{\gamma_{BN}L}\frac{1}{2}(\hat{\mathbf{1}}+\hat{\tilde{\gamma}}\hat{\gamma})(\hat{I}^{R}_{N21}+\hat{I}^{R}_{N22}\hat{\tilde{\gamma}})\;,$
(B.9)
where
$\hat{I}^{R}_{N11}=\text{Tr}_{\tau}\frac{1}{2}(1+\tau_{3})\check{I}^{R}_{N}$,
$\hat{I}^{R}_{N12}=\text{Tr}_{\tau}\frac{1}{2}(\tau_{1}+i\tau_{2})\check{I}^{R}_{N}$,
$\hat{I}^{R}_{N22}=\text{Tr}_{\tau}\frac{1}{2}(1-\tau_{3})\check{I}^{R}_{N}$,
$\hat{I}^{R}_{N21}=\text{Tr}_{\tau}\frac{1}{2}(\tau_{1}-i\tau_{2})\check{I}^{R}_{N}$,
and
$\displaystyle\check{I}^{R}_{N}=\frac{1}{\gamma_{B}L}[\check{G}^{R}(x=L),\check{G}_{N}^{R}]\;,$
(B.10)
where $\check{G}^{R}(x=L)$ can be calculated using $\hat{\gamma}(x=L)$ and
$\hat{\tilde{\gamma}}(x=L)$, and $\check{G}_{N}^{R}$ is the bulk Green's function
of a normal metal as given in the main body of the article. Eqs. (B.3) to
(B.9) were solved numerically using the MATLAB built-in solver bvp5c.
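An open-source analogue of this bvp5c workflow is `scipy.integrate.solve_bvp`. The sketch below does not solve the full matrix system of Eqs. (B.3)-(B.6); it solves a linearized scalar stand-in $F''=\kappa^{2}F$ with hypothetical boundary data ($F(0)=1$, $F'(L)=0$), for which the exact solution $\cosh(\kappa(L-x))/\cosh(\kappa L)$ is available for comparison:

```python
import numpy as np
from scipy.integrate import solve_bvp

kappa, L = 2.0, 1.0  # illustrative inverse decay length and junction length

# First-order form y = (F, F'):  F'' = kappa^2 F.
def rhs(x, y):
    return np.vstack([y[1], kappa**2 * y[0]])

# Boundary conditions: F(0) = 1 (a hypothetical Dirichlet stand-in for the
# interface condition Eq. (B.5)), and F'(L) = 0.
def bc(ya, yb):
    return np.array([ya[0] - 1.0, yb[1]])

x = np.linspace(0.0, L, 50)
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)))

exact = np.cosh(kappa * (L - x)) / np.cosh(kappa * L)
print(np.max(np.abs(sol.sol(x)[0] - exact)))  # maximum deviation from exact
```

For the full problem, the solution vector would collect the entries of $\hat{\gamma}$ and $\hat{\tilde{\gamma}}$, and the boundary residuals would implement Eqs. (B.5)-(B.9).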
### 2 Keldysh equations
In the case without an exchange field, a relatively compact analytic
expression for the resistance can be found because the equations for the
different spin components separate [54]. If an exchange field is present,
this is no longer possible, and the Keldysh equations for the distribution
functions need to be solved. The Usadel equation for the distribution
function $\check{f}$ reads
$\displaystyle
D\nabla\cdot(\nabla\check{f}-\check{G}^{R}\nabla\check{f}\check{G}^{A})+D(\check{G}^{R}\nabla\check{G}^{R})\cdot\nabla\check{f}-D\nabla\check{f}\cdot(\check{G}^{A}\nabla\check{G}^{A})$
$\displaystyle=-iE(\check{G}^{R}[\check{f},\tau_{3}]-[\check{f},\tau_{3}]\check{G}^{A})-i(\check{G}^{R}[\check{f},\tau_{3}\vec{h}\cdot\vec{\sigma}]-[\check{f},\tau_{3}\vec{h}\cdot\vec{\sigma}]\check{G}^{A})\;.$
(B.11)
Since $\check{f}$ only has $\tau_{0}$ and $\tau_{3}$ components, the first
term on the right-hand side vanishes. The second term, however, does contribute, as
the spin dependence of the distribution functions is non-trivial. Using that
the retarded and advanced Green's functions must satisfy the retarded and
advanced components of the Usadel equation, the equation can be written as
$\displaystyle
D\nabla\cdot(\nabla\check{f}-\check{G}^{R}\nabla\check{f}\check{G}^{A})=i(\check{G}^{R}[\tau_{3}\vec{h}\cdot\vec{\sigma},\check{f}]-[\tau_{3}\vec{h}\cdot\vec{\sigma},\check{f}]\check{G}^{A})\;.$
(B.12)
Taking the trace of Eq. (B.12) results in
$\displaystyle D\nabla\cdot\Bigg{(}\nabla
f_{L0}(4-\text{Tr}(\check{G}^{R}\check{G}^{A}))-\sum_{i=1}^{3}\nabla
f_{Ti}\text{Tr}(\check{G}^{R}\sigma_{i}\check{G}^{A})\Bigg{)}+D\sum_{i=1}^{3}\nabla
f_{i}\cdot\text{Tr}(\check{G}^{R}\nabla\check{G}^{R}\sigma_{i}-\sigma_{i}\check{G}^{A}\nabla\check{G}^{A})$
$\displaystyle+D\nabla\cdot\Bigg{(}\nabla
f_{T0}(-\text{Tr}(\check{G}^{R}\tau_{3}\check{G}^{A}))-\sum_{i=1}^{3}\nabla
f_{Li}\text{Tr}(\check{G}^{R}\tau_{3}\sigma_{i}\check{G}^{A})\Bigg{)}$
$\displaystyle+(f_{L2}h_{3}-f_{L3}h_{2})\text{Tr}(\check{G}^{R}\tau_{3}\sigma_{x}-\tau_{3}\sigma_{x}\check{G}^{A})+(f_{T2}h_{3}-f_{T3}h_{2})\text{Tr}(\check{G}^{R}\sigma_{x}-\sigma_{x}\check{G}^{A})$
$\displaystyle+(f_{L3}h_{1}-f_{L1}h_{3})\text{Tr}(\check{G}^{R}\tau_{3}\sigma_{y}-\tau_{3}\sigma_{y}\check{G}^{A})+(f_{T3}h_{1}-f_{T1}h_{3})\text{Tr}(\check{G}^{R}\sigma_{y}-\sigma_{y}\check{G}^{A})$
$\displaystyle+(f_{L1}h_{2}-f_{L2}h_{1})\text{Tr}(\check{G}^{R}\tau_{3}\sigma_{z}-\tau_{3}\sigma_{z}\check{G}^{A})+(f_{T1}h_{2}-f_{T2}h_{1})\text{Tr}(\check{G}^{R}\sigma_{z}-\sigma_{z}\check{G}^{A})$
$\displaystyle=0\;.$ (B.13)
In a similar way, equations are obtained by taking the trace after
multiplication by $\tau_{3}$, $\sigma_{j}$, and $\tau_{3}\sigma_{j}$ for
$j=1,2,3$.
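These projections rely on the orthogonality $\text{Tr}[(\tau_{a}\otimes\sigma_{\nu})(\tau_{b}\otimes\sigma_{\mu})]=4\,\delta_{ab}\delta_{\nu\mu}$. The sketch below, assuming the decomposition $\check{f}=\sum_{\nu}(f_{L\nu}\tau_{0}+f_{T\nu}\tau_{3})\otimes\sigma_{\nu}$, verifies numerically that each component is recovered by the corresponding trace:

```python
import numpy as np

# Pauli matrices in spin space (sigma_0 ... sigma_3).
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s0, sx, sy, sz]
tau0, tau3 = s0, sz  # tau3 has the same matrix form as sigma_z

rng = np.random.default_rng(1)
fL = rng.normal(size=4)  # components f_{L0}, ..., f_{L3}
fT = rng.normal(size=4)  # components f_{T0}, ..., f_{T3}

# Assemble the 4x4 distribution matrix in Nambu (tau) x spin (sigma) space.
f = sum(fL[n] * np.kron(tau0, sigma[n]) + fT[n] * np.kron(tau3, sigma[n])
        for n in range(4))

# Recover each component by tracing against the corresponding basis matrix.
fL_rec = [np.trace(np.kron(tau0, sigma[n]) @ f).real / 4 for n in range(4)]
fT_rec = [np.trace(np.kron(tau3, sigma[n]) @ f).real / 4 for n in range(4)]

print(np.allclose(fL_rec, fL), np.allclose(fT_rec, fT))  # True True
```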
The boundary conditions can be found in a similar way, by taking the
corresponding traces over the equation
$\displaystyle(\nabla\check{f}-\check{G}^{R}\nabla\check{f}\check{G}^{A})+\check{G}^{R}\nabla\check{G}^{R}\check{f}-\check{f}\check{G}^{A}\nabla\check{G}^{A}=\check{I}^{K}_{S/N}\;.$
(B.14)
In this expression $\check{G}^{R,A}$ can be calculated directly from the
retarded equations, using
$\check{G}^{A}=-\tau_{3}(\check{G}^{R})^{\dagger}\tau_{3}$, while
$\check{I}^{K}_{S/N}$ depends on both the retarded Green's function
$\check{G}^{R}$ and the distribution function $\check{f}$, evaluated at $x=0$
(for $\check{I}^{K}_{S}$) or $x=L$ (for $\check{I}^{K}_{N}$), as well as on the
Green's function in the electrode. This yields a set of eight second-order
linear differential equations with non-constant coefficients. In the most
general case all coefficients can be nonzero, and the analytical expressions
become lengthy without offering much insight. The equations were therefore
solved numerically using the MATLAB solver bvp5c. The corresponding
expressions for the current can then be evaluated directly. By computing the
current as a function of the value of $f_{T0}$ attained at the normal metal
reservoir, the current and the differential conductance are obtained.
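The final differentiation step can be sketched with a finite-difference derivative. The current function below is a hypothetical placeholder; in the actual procedure each value of $I(V)$ would come from a full solution of Eqs. (B.11)-(B.14) with $f_{T0}$ fixed by the bias:

```python
import numpy as np

def current(V):
    # Hypothetical placeholder I(V); stands in for a solve of the Keldysh
    # equations with the reservoir value of f_T0 set by the bias V.
    return np.tanh(3.0 * V) + 0.1 * V

V = np.linspace(-1.0, 1.0, 201)
I = current(V)
dIdV = np.gradient(I, V)  # differential conductance dI/dV

# For this placeholder, dI/dV at V = 0 is approximately 3.1.
print(dIdV[V.size // 2])
```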
## References
* Mackenzie _et al._ [2017] A. P. Mackenzie, T. Scaffidi, C. W. Hicks, and Y. Maeno, Even odder after twenty-three years: the superconducting order parameter puzzle of Sr$_2$RuO$_4$, npj Quantum Materials 2, 1 (2017).
* Kallin and Berlinsky [2016] C. Kallin and J. Berlinsky, Chiral superconductors, Reports on Progress in Physics 79, 054502 (2016).
* Linder and Balatsky [2019] J. Linder and A. V. Balatsky, Odd-frequency superconductivity, Reviews of Modern Physics 91, 045005 (2019).
* Balian and Werthamer [1963] R. Balian and N. Werthamer, Superconductivity with pairs in a relative p wave, Physical review 131, 1553 (1963).
* Sigrist and Ueda [1991] M. Sigrist and K. Ueda, Phenomenological theory of unconventional superconductivity, Reviews of Modern physics 63, 239 (1991).
* Fu and Kane [2008] L. Fu and C. Kane, Superconducting proximity effect and Majorana fermions at the surface of a topological insulator, Physical Review Letters 100, 096407 (2008).
* Bergeret _et al._ [2005] F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Odd triplet superconductivity and related phenomena in superconductor-ferromagnet structures, Rev. Mod. Phys. 77, 1321 (2005).
* Chiu _et al._ [2021] S.-P. Chiu, C. Tsuei, S.-S. Yeh, F.-C. Zhang, S. Kirchner, and J.-J. Lin, Observation of triplet superconductivity in CoSi$_2$/TiSi$_2$ heterostructures, Science Advances 7, eabg6569 (2021).
* Aoki _et al._ [2019a] D. Aoki, K. Ishida, and J. Flouquet, Review of U-based ferromagnetic superconductors: Comparison between UGe$_2$, URhGe, and UCoGe, Journal of the Physical Society of Japan 88, 022001 (2019a).
* Saxena _et al._ [2000] S. Saxena, P. Agarwal, K. Ahilan, F. Grosche, R. Haselwimmer, M. Steiner, E. Pugh, I. Walker, S. Julian, P. Monthoux, _et al._ , Superconductivity on the border of itinerant-electron ferromagnetism in uge2, Nature 406, 587 (2000).
* Aoki _et al._ [2001] D. Aoki, A. Huxley, E. Ressouche, D. Braithwaite, J. Flouquet, J.-P. Brison, E. Lhotel, and C. Paulsen, Coexistence of superconductivity and ferromagnetism in urhge, Nature 413, 613 (2001).
* Hardy and Huxley [2005a] F. Hardy and A. Huxley, p-wave superconductivity in the ferromagnetic superconductor urhge, Physical review letters 94, 247006 (2005a).
* Huy _et al._ [2007] N. T. Huy, A. Gasparini, D. E. de Nijs, Y. Huang, J. C. P. Klaasse, T. Gortenmulder, A. de Visser, A. Hamann, T. Görlach, and H. v. Löhneysen, Superconductivity on the border of weak itinerant ferromagnetism in ucoge, Phys. Rev. Lett. 99, 067006 (2007).
* Ran _et al._ [2019] S. Ran, C. Eckberg, Q.-P. Ding, Y. Furukawa, T. Metz, S. R. Saha, I.-L. Liu, M. Zic, H. Kim, J. Paglione, _et al._ , Nearly ferromagnetic spin-triplet superconductivity, Science 365, 684 (2019).
* Bauer and Sigrist [2012] E. Bauer and M. Sigrist, _Non-centrosymmetric superconductors: introduction and overview_ , Vol. 847 (Springer Science & Business Media, 2012).
* Bauer _et al._ [2004] E. Bauer, G. Hilscher, H. Michor, C. Paul, E.-W. Scheidt, A. Gribanov, Y. Seropegin, H. Noël, M. Sigrist, and P. Rogl, Heavy fermion superconductivity and magnetic order in noncentrosymmetric CePt$_3$Si, Physical review letters 92, 027003 (2004).
* Amano _et al._ [2004] G. Amano, S. Akutagawa, T. Muranaka, Y. Zenitani, and J. Akimitsu, Superconductivity at 18 K in the yttrium sesquicarbide system Y$_2$C$_3$, Journal of the Physical Society of Japan 73, 530 (2004).
* Akazawa _et al._ [2004] T. Akazawa, H. Hidaka, H. Kotegawa, T. C. Kobayashi, T. Fujiwara, E. Yamamoto, Y. Haga, R. Settai, and Y. Ōnuki, Pressure-induced superconductivity in uir, Journal of the Physical Society of Japan 73, 3129 (2004).
* Togano _et al._ [2004] K. Togano, P. Badica, Y. Nakamori, S. Orimo, H. Takeya, and K. Hirata, Superconductivity in the metal rich li-pd-b ternary boride, Phys. Rev. Lett. 93, 247004 (2004).
* Tateiwa _et al._ [2005] N. Tateiwa, Y. Haga, T. D. Matsuda, S. Ikeda, T. Yasuda, T. Takeuchi, R. Settai, and Y. Ōnuki, Novel pressure phase diagram of heavy fermion superconductor CePt$_3$Si investigated by ac calorimetry, Journal of the Physical Society of Japan 74, 1903 (2005).
* Kimura _et al._ [2005] N. Kimura, K. Ito, K. Saitoh, Y. Umeda, H. Aoki, and T. Terashima, Pressure-induced superconductivity in noncentrosymmetric heavy-fermion ${\mathrm{cerhsi}}_{3}$, Phys. Rev. Lett. 95, 247004 (2005).
* Sugitani _et al._ [2006] I. Sugitani, Y. Okuda, H. Shishido, T. Yamada, A. Thamizhavel, E. Yamamoto, T. D. Matsuda, Y. Haga, T. Takeuchi, R. Settai, _et al._ , Pressure-induced heavy-fermion superconductivity in antiferromagnet CeIrSi$_3$ without inversion symmetry, Journal of the Physical Society of Japan 75, 043703 (2006).
* Honda _et al._ [2010] F. Honda, I. Bonalde, K. Shimizu, S. Yoshiuchi, Y. Hirose, T. Nakamura, R. Settai, and Y. Ōnuki, Pressure-induced superconductivity and large upper critical field in the noncentrosymmetric antiferromagnet ${\text{ceirge}}_{3}$, Phys. Rev. B 81, 140507 (2010).
* Settai _et al._ [2007] R. Settai, I. Sugitani, Y. Okuda, A. Thamizhavel, M. Nakashima, Y. Ōnuki, and H. Harima, Pressure-induced superconductivity in CeCoGe$_3$ without inversion symmetry, Journal of Magnetism and Magnetic Materials 310, 844 (2007), proceedings of the 17th International Conference on Magnetism.
* Bauer _et al._ [2010] E. Bauer, G. Rogl, X.-Q. Chen, R. T. Khan, H. Michor, G. Hilscher, E. Royanian, K. Kumagai, D. Z. Li, Y. Y. Li, R. Podloucky, and P. Rogl, Unconventional superconducting phase in the weakly correlated noncentrosymmetric ${\text{mo}}_{3}{\text{al}}_{2}\text{C}$ compound, Phys. Rev. B 82, 064511 (2010).
* Xie _et al._ [2020] W. Xie, P. Zhang, B. Shen, W. Jiang, G. Pang, T. Shang, C. Cao, M. Smidman, and H. Yuan, CaPtAs: A new noncentrosymmetric superconductor, Science China Physics, Mechanics & Astronomy 63, 1 (2020).
* Yang _et al._ [2021] J. Yang, J. Luo, C. Yi, Y. Shi, Y. Zhou, and G.-q. Zheng, Spin-triplet superconductivity in K$_2$Cr$_3$As$_3$, Science advances 7, eabl4432 (2021).
* Wakatsuki _et al._ [2017] R. Wakatsuki, Y. Saito, S. Hoshino, Y. M. Itahashi, T. Ideue, M. Ezawa, Y. Iwasa, and N. Nagaosa, Nonreciprocal charge transport in noncentrosymmetric superconductors, Science advances 3, e1602390 (2017).
* Narita _et al._ [2022] H. Narita, J. Ishizuka, R. Kawarazaki, D. Kan, Y. Shiota, T. Moriyama, Y. Shimakawa, A. V. Ognev, A. S. Samardak, Y. Yanase, _et al._ , Field-free superconducting diode effect in noncentrosymmetric superconductor/ferromagnet multilayers, Nature Nanotechnology 17, 823 (2022).
* Levitov _et al._ [1985] L. Levitov, Y. V. Nazarov, and G. Eliashberg, Magnetostatics of superconductors without an inversion center, JETP Lett 41, 365 (1985).
* Edel’shtein [1989] V. Edel’shtein, Characteristics of the cooper pairing in two-dimensional noncentrosymmetric electron systems, Soviet Physics-JETP (English Translation) 68, 1244 (1989).
* Gor’kov and Rashba [2001] L. P. Gor’kov and E. I. Rashba, Superconducting 2d system with lifted spin degeneracy: mixed singlet-triplet state, Physical Review Letters 87, 037004 (2001).
* Mineev [2004] V. Mineev, Superconductivity in ferromagnetic metals and in compounds without inversion centre, International Journal of Modern Physics B 18, 2963 (2004).
* Frigeri _et al._ [2004] P. Frigeri, D. Agterberg, A. Koga, and M. Sigrist, Superconductivity without inversion symmetry: MnSi versus CePt$_3$Si, Physical review letters 92, 097001 (2004).
* Frigeri _et al._ [2006] P. Frigeri, D. Agterberg, I. Milat, and M. Sigrist, Phenomenological theory of the s-wave state in superconductors without an inversion center, The European Physical Journal B-Condensed Matter and Complex Systems 54, 435 (2006).
* Yanase and Sigrist [2008] Y. Yanase and M. Sigrist, Superconductivity and magnetism in non-centrosymmetric system: application to CePt$_3$Si, Journal of the Physical Society of Japan 77, 124711 (2008).
* Gentile _et al._ [2011] P. Gentile, C. Noce, A. Romano, G. Annunziata, J. Linder, and M. Cuoco, Odd-frequency triplet pairing in mixed-parity superconductors, arXiv preprint arXiv:1109.4885 (2011).
* Mineev [2017] V. P. Mineev, Superconductivity in uranium ferromagnets, Physics-Uspekhi 60, 121 (2017).
* Børkje and Sudbø [2006] K. Børkje and A. Sudbø, Tunneling between noncentrosymmetric superconductors with significant spin-orbit splitting studied theoretically within a two-band treatment, Physical Review B 74, 054506 (2006).
* Mineev [2011] V. Mineev, Magnetoelectric effect and the upper critical field in superconductors without inversion center, Low Temperature Physics 37, 872 (2011).
* Samokhin [2005] K. V. Samokhin, Nmr relaxation rate in noncentrosymmetric superconductors, Physical Review B 72, 054514 (2005).
* Hayashi _et al._ [2006] N. Hayashi, K. Wakabayashi, P. A. Frigeri, and M. Sigrist, Nuclear magnetic relaxation rate in a noncentrosymmetric superconductor, Physical Review B 73, 092508 (2006).
* Aso _et al._ [2007] N. Aso, H. Miyano, H. Yoshizawa, N. Kimura, T. Komatsubara, and H. Aoki, Incommensurate magnetic order in the pressure-induced superconductor CeRhSi$_3$, Journal of magnetism and magnetic materials 310, 602 (2007).
* Pustogow _et al._ [2019] A. Pustogow, Y. Luo, A. Chronister, Y.-S. Su, D. A. Sokolov, F. Jerzembeck, A. P. Mackenzie, C. W. Hicks, N. Kikugawa, S. Raghu, E. D. Bauer, and S. E. Brown, Constraints on the superconducting order parameter in Sr$_2$RuO$_4$ from oxygen-17 nuclear magnetic resonance, Nature 574, 72 (2019).
* Aoki _et al._ [2019b] D. Aoki, K. Ishida, and J. Flouquet, Review of U-based ferromagnetic superconductors: Comparison between UGe$_2$, URhGe, and UCoGe, Journal of the Physical Society of Japan 88, 022001 (2019b).
* Hardy and Huxley [2005b] F. Hardy and A. D. Huxley, $p$-wave superconductivity in the ferromagnetic superconductor urhge, Phys. Rev. Lett. 94, 247006 (2005b).
* Samokhin [2008a] K. Samokhin, Effects of impurities on the upper critical field h c 2 in superconductors without inversion symmetry, Physical Review B 78, 144511 (2008a).
* Samokhin [2008b] K. Samokhin, Upper critical field in noncentrosymmetric superconductors, Physical Review B 78, 224520 (2008b).
* Iniotakis _et al._ [2007] C. Iniotakis, N. Hayashi, Y. Sawa, T. Yokoyama, U. May, Y. Tanaka, and M. Sigrist, Andreev bound states and tunneling characteristics of a noncentrosymmetric superconductor, Physical Review B 76, 012501 (2007).
* Eschrig _et al._ [2010] M. Eschrig, C. Iniotakis, and Y. Tanaka, Theoretical aspects of andreev spectroscopy and tunneling spectroscopy in non-centrosymmetric superconductors: a topical review, arXiv preprint arXiv:1001.2486 10.48550/arXiv.1001.2486 (2010).
* Annunziata _et al._ [2012] G. Annunziata, D. Manske, and J. Linder, Proximity effect with noncentrosymmetric superconductors, Physical Review B 86, 174514 (2012).
* Rahnavard _et al._ [2014] Y. Rahnavard, D. Manske, and G. Annunziata, Magnetic josephson junctions with noncentrosymmetric superconductors, Physical Review B 89, 214501 (2014).
* Mishra _et al._ [2021] V. Mishra, Y. Li, F.-C. Zhang, and S. Kirchner, Effects of spin-orbit coupling in superconducting proximity devices: Application to cosi 2/tisi 2 heterostructures, Physical Review B 103, 184505 (2021).
* Tanaka _et al._ [2022a] Y. Tanaka, T. Kokkeler, and A. Golubov, Spin conductance in s + helical p-wave junctions, ArXiv 2208.06657 (2022a).
* Schnyder _et al._ [2008] A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. Ludwig, Classification of topological insulators and superconductors in three spatial dimensions, Physical Review B 78, 195125 (2008).
* Hillier _et al._ [2009] A. D. Hillier, J. Quintanilla, and R. Cywinski, Evidence for time-reversal symmetry breaking in the noncentrosymmetric superconductor lanic 2, Physical review letters 102, 117007 (2009).
* Chiu _et al._ [2016] C.-K. Chiu, J. C. Teo, A. P. Schnyder, and S. Ryu, Classification of topological quantum matter with symmetries, Reviews of Modern Physics 88, 035005 (2016).
* Belzig _et al._ [1999] W. Belzig, F. K. Wilhelm, C. Bruder, G. Schön, and A. D. Zaikin, Quasiclassical green’s function approach to mesoscopic superconductivity, Superlattices and microstructures 25, 1251 (1999).
* Heikkilä _et al._ [2019] T. T. Heikkilä, M. Silaev, P. Virtanen, and F. S. Bergeret, Thermal, electric and spin transport in superconductor/ferromagnetic-insulator structures, Progress in Surface Science 94, 100540 (2019).
* Usadel [1970] K. D. Usadel, Generalized diffusion equation for superconducting alloys, Phys. Rev. Lett. 25, 507 (1970).
* Kuprianov and Lukichev [1988] M. Kuprianov and V. Lukichev, Influence of boundary transparency on the critical current of dirty SS’S structures, Zh. Eksp. Teor. Fiz 94, 149 (1988).
* Tanaka _et al._ [2003] Y. Tanaka, Y. Nazarov, and S. Kashiwaya, Circuit theory of unconventional superconductor junctions, Physical Review Letters 90, 167003 (2003).
* Tanaka _et al._ [2004] Y. Tanaka, Y. V. Nazarov, A. Golubov, and S. Kashiwaya, Theory of charge transport in diffusive normal metal/unconventional singlet superconductor contacts, Physical Review B 69, 144519 (2004).
* Nazarov [1999] Y. V. Nazarov, Novel circuit theory of andreev reflection, Superlattices and microstructures 25, 1221 (1999).
* Tanaka _et al._ [2022b] Y. Tanaka, T. Kokkeler, and A. Golubov, Theory of proximity effect in $s+p$-wave superconductor junctions, Phys. Rev. B 105, 214512 (2022b).
* Blonder _et al._ [1982] G. Blonder, M. Tinkham, and T. Klapwijk, Transition from metallic to tunneling regimes in superconducting microconstrictions: Excess current, charge imbalance, and supercurrent conversion, Physical Review B 25, 4515 (1982).
* Konschelle _et al._ [2015] F. Konschelle, I. V. Tokatly, and F. S. Bergeret, Theory of the spin-galvanic effect and the anomalous phase shift $\varphi$ 0 in superconductors and josephson junctions with intrinsic spin-orbit coupling, Physical Review B 92, 125443 (2015).
* Schopohl and Maki [1995] N. Schopohl and K. Maki, Quasiparticle spectrum around a vortex line in a d-wave superconductor, Physical Review B 52, 490 (1995).
* Tanaka and Kashiwaya [1995] Y. Tanaka and S. Kashiwaya, Theory of tunneling spectroscopy of d-wave superconductors, Physical review letters 74, 3451 (1995).
* Tanaka and Tamura [2018] Y. Tanaka and S. Tamura, Surface andreev bound states and odd-frequency pairing in topological superconductor junctions, Journal of Low Temperature Physics 191, 61 (2018).
* Volkov _et al._ [1993] A. F. Volkov, A. V. Zaitsev, and T. M. Klapwijk, Proximity effect under nonequilibrium conditions in double-barrier superconducting junctions, Physica C: Superconductivity 210, 21 (1993).
* Zhang _et al._ [2020] X. Zhang, V. Golovach, F. Giazotto, and F. Bergeret, Phase-controllable nonlocal spin polarization in proximitized nanowires, Physical Review B 101, 180502 (2020).
* Hijano _et al._ [2021] A. Hijano, S. Ilić, M. Rouco, C. González-Orellana, M. Ilyn, C. Rogero, P. Virtanen, T. T. Heikkilä, S. Khorshidian, M. Spies, N. Ligato, F. Giazotto, E. Strambini, and F. S. Bergeret, Coexistence of superconductivity and spin-splitting fields in superconductor/ferromagnetic insulator bilayers of arbitrary thickness, Phys. Rev. Research 3, 023131 (2021).
* Jacobsen _et al._ [2015] S. H. Jacobsen, J. A. Ouassou, and J. Linder, Critical temperature and tunneling spectroscopy of superconductor-ferromagnet hybrids with intrinsic rashba-dresselhaus spin-orbit coupling, Physical Review B 92, 024510 (2015).
* Matsushita _et al._ [2022] T. Matsushita, J. Ando, Y. Masaki, T. Mizushima, S. Fujimoto, and I. Vekhter, Spin-nernst effect in time-reversal-invariant topological superconductors, Physical Review Letters 128, 097001 (2022).
* Qi _et al._ [2009] X.-L. Qi, T. L. Hughes, S. Raghu, and S.-C. Zhang, Time-reversal-invariant topological superconductors and superfluids in two and three dimensions, Physical review letters 102, 187001 (2009).
* Mineev [1983] V. P. Mineev, Superfluid 3he: introduction to the subject, Soviet Physics Uspekhi 26, 160 (1983).